input (string, lengths 66–114k) | instruction (string, 1 class) | output (string, lengths 63–3.46k) |
---|---|---|
1. A neural network based system for spell correction and tokenization of natural language, said system comprising: An artificial neural network architecture, to generate variable length ‘character level output streams’ for system fed variable length ‘character level input streams’; An auto-encoder for injecting random character level modifications to the variable length ‘character level input streams’, wherein the characters include a space-between-token character; and An unsupervised training mechanism for adjusting said neural network to learn correct variable length ‘character level output streams’, wherein correct variable length ‘character level output streams’ needs to be similar to respective original variable length ‘character level input streams’ prior to their random character level modifications. 2. The system according to claim 1 wherein the random character level modifications are selected from the group consisting of adding random characters, deleting characters, transposing characters and replacing characters. 3. The system according to claim 2 wherein said neural network is implemented using a sequence to sequence artificial neural network architecture, sequences of the variable length ‘character level input streams’ are mapped to a hidden state, and sequences of the variable length ‘character level output streams’ are generated from the hidden state. 4. The system according to claim 3 wherein the sequence to sequence artificial neural network architecture is implemented using a bidirectional long short-term memory (LSTM) input layer. 5. The system according to claim 4 wherein the variable length ‘character level input streams’ are Unicode character streams, and further comprising a UTF-8 encoder for applying UTF-8 encoding to the Unicode character streams prior to their inputting to said neural network. 6. The system according to claim 5 wherein said unsupervised training mechanism is further adapted for adjusting said neural network to learn a per-character embedding representation of the variable length ‘character level input streams’, in parallel to the learning of correct variable length ‘character level output streams’ 7. The system according to claim 2 further comprising a random modification selector for randomly selecting the character level modifications from the group. 8. The system according to claim 7, wherein said auto-encoder is further adapted for incrementing the frequency of injecting the random character level modifications to the variable length ‘character level input streams’, responsive to an increase in the level of similarity of the variable length ‘character level output streams’ to the respective original variable length ‘character level input streams’ prior to their random character level modifications. 9. The system according to claim 1, wherein at least some of the variable length ‘character level input streams’, fed to the system represent dialogs, and dialog metadata is at least partially utilized by said artificial neural network to generate the variable length ‘character level output streams’. 10. The system according to claim 9, wherein dialog metadata at least partially includes dialog state data. 11. 
A neural network based system for semantic role assignment of dialog utterances, said system comprising: An artificial recurrent neural network architecture, implemented using long short-term memory (LSTM) cells, to generate variable length ‘tagged tokens output streams’ for system fed variable length ‘dialog utterance input streams’; and A weakly supervised training mechanism for feeding to said artificial recurrent neural network, one or more variable length ‘dialog utterance input streams’ with their respective correctly-tagged variable length ‘tagged tokens output streams’, as initial input training data, and for adjusting said recurrent neural network to learn correct variable length ‘tagged tokens output streams’, by generating, and suggesting for system curator tagging correctness feedback—additional variable length ‘dialog utterance input streams’ with their respective variable length ‘tagged tokens output streams’ as tagged by said recurrent neural network—wherein correct tagging of the suggested additional variable length ‘dialog utterance input streams’ improves the capability of said recurrent neural network to refine the decision boundaries between correctly and incorrectly tagged inputs and to more correctly tag following system fed variable length ‘dialog utterance input streams’. 12. The system according to claim 11, wherein at least some of the variable length ‘dialog utterance input streams’, fed to the system represent dialogs, and dialog metadata is at least partially utilized by said artificial recurrent neural network to generate the variable length ‘tagged tokens output streams’. 13. The system according to claim 12, wherein dialog metadata at least partially includes dialog state data. 14. The system according to claim 11, wherein said weakly supervised training mechanism is further adapted to modify the variable length ‘tagged tokens output stream’ of a specific given incorrectly labeled variable length ‘dialog utterance input stream’, without retraining of the entire said recurrent neural network, by reiterating the variable length ‘dialog utterance input stream’ and applying gradient learning with a low learning rate across multiple training epochs. 15. The system according to claim 11, wherein said weakly supervised training mechanism is further adapted to self-improve while actively handling real end-user variable length ‘dialog utterance input streams’ by utilizing under-utilized Central Processing Unit (CPU) cycles of its hosting computer to run additional epochs of training. 16. 
The system according to claim 1, further comprising: An artificial recurrent neural network architecture, implemented using long short-term memory (LSTM) cells, to generate variable length ‘tagged tokens output streams’ for system fed variable length ‘dialog utterance input streams’; and A weakly supervised training mechanism for feeding to said artificial recurrent neural network, one or more variable length ‘dialog utterance input streams’ with their respective correctly-tagged variable length ‘tagged tokens output streams’, as initial input training data, and for adjusting said recurrent neural network to learn correct variable length ‘tagged tokens output streams’, by generating, and suggesting for system curator tagging correctness feedback—additional variable length ‘dialog utterance input streams’ with their respective variable length ‘tagged tokens output streams’ as tagged by said recurrent neural network—wherein correct tagging of the suggested additional variable length ‘dialog utterance input streams’ improves the capability of said recurrent neural network to refine the decision boundaries between correctly and incorrectly tagged inputs and to more correctly tag following system fed variable length ‘dialog utterance input streams’; and wherein variable length ‘character level output streams’ generated by said artificial neural network for variable length ‘character level input streams’, are fed as variable length ‘dialog utterance input streams’ to said artificial recurrent neural network. 17. A method for spell correction and tokenization of natural language, said method comprising: feeding variable length ‘character level input streams’ to an artificial neural network architecture, to generate variable length ‘character level output streams’; injecting random character level modifications to the variable length ‘character level input streams’, wherein the characters include a space-between-token character; and adjusting the neural network to learn correct variable length ‘character level output streams’, wherein correct variable length ‘character level output streams’ needs to be similar to respective original variable length ‘character level input streams’ prior to their random character level modifications. 18. The method according to claim 17 wherein the random character level modifications are selected from the group consisting of adding random characters, deleting characters, transposing characters and replacing characters. 19. 
A method for semantic role assignment of dialog utterances, said method comprising: feeding variable length ‘dialog utterance input streams’ to an artificial recurrent neural network architecture, to generate variable length ‘dialog utterance input streams’; feeding to the artificial recurrent neural network, one or more variable length ‘dialog utterance input streams’ with their respective correctly-tagged variable length ‘tagged tokens output streams’, as initial input training data; and adjusting the recurrent neural network to learn correct variable length ‘tagged tokens output streams’, by generating, and suggesting for system curator tagging correctness feedback—additional variable length ‘dialog utterance input streams’ with their respective variable length ‘tagged tokens output streams’ as tagged by the recurrent neural network—wherein correct tagging of the suggested additional variable length ‘dialog utterance input streams’ improves the capability of the recurrent neural network to refine the decision boundaries between correctly and incorrectly tagged inputs and to more correctly tag following fed variable length ‘dialog utterance input streams’. 20. The method according to claim 19, wherein the weakly supervised training mechanism is further adapted to further comprising modifying the variable length ‘tagged tokens output stream’ of a specific given incorrectly labeled variable length ‘dialog utterance input stream’, without retraining of the entire recurrent neural network, by reiterating the variable length ‘dialog utterance input stream’ and applying gradient learning with a low learning rate across multiple training epochs. | Please help me write a proper abstract based on the patent claims. | Disclosed are systems, methods, circuits and associated computer executable code for deep learning based natural language understanding, wherein training of one or more neural networks, includes: producing character strings inputs ‘noise’ on a per-character basis, and introducing the produced ‘noise’ into machine training character strings inputs fed to a ‘word tokenization and spelling correction language-model’, to generate spell corrected word sets outputs; feeding machine training word sets inputs, including one or more ‘right’ examples of correctly semantically-tagged word sets, to a ‘word semantics derivation model’, to generate semantically tagged sentences outputs. Upon models reaching a training ‘steady state’, the ‘word tokenization and spelling correction language-model’ is fed with input character strings representing ‘real’ linguistic user inputs, generating word sets outputs that are fed as inputs to the word semantics derivation model for generating semantically tagged sentences outputs. |
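The claims in the record above describe a character-level denoising setup: random insertions, deletions, transpositions and replacements (including the space-between-token character) are injected into the input character stream, and the network is trained to reproduce the original stream. The Python sketch below illustrates only that corruption step; `noise_char_stream`, `ALPHABET` and the corruption rate `p` are illustrative assumptions, not the patented system.

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "  # includes the space-between-token character

def noise_char_stream(text, p=0.05, rng=random):
    """Inject random character-level modifications (insert, delete, transpose,
    replace) into a character stream, as a denoising corruption step might."""
    chars = list(text)
    out = []
    i = 0
    while i < len(chars):
        if rng.random() < p:
            op = rng.choice(["insert", "delete", "transpose", "replace"])
            if op == "insert":
                out.append(rng.choice(ALPHABET))
                out.append(chars[i])
            elif op == "delete":
                pass  # drop the character
            elif op == "transpose" and i + 1 < len(chars):
                out.append(chars[i + 1])
                out.append(chars[i])
                i += 1
            else:  # replace
                out.append(rng.choice(ALPHABET))
        else:
            out.append(chars[i])
        i += 1
    return "".join(out)

# Training pair: (corrupted input, original target) -- the network learns to
# reproduce the clean stream, which is the unsupervised objective in the claims.
clean = "book a table for two tomorrow"
pair = (noise_char_stream(clean), clean)
print(pair)
```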
1. A method comprising: applying a sequence of spin-control modulation pulses to electronic spin impurities in a solid-state spin system; and extracting a spectral content of a spin bath that surrounds the electronic spin impurities within the solid-state spin system, by measuring the coherent evolution and associated decoherence of the spin impurities as a function of the number of the applied modulation pulses, and the time-spacing between the pulses. 2. The method of claim 1, wherein the act of measuring the coherent evolution and associated decoherence of the spin impurities comprises: defining a time-dependent coherence function C(t)=e−χ(t) to represent the coherence of spin impurities within the solid-state spin system, where χ(t) is a decoherence functional that describes the decoherence of the spin impurities as a function of time; and measuring the time-dependent coherence function C(t)=e−χ(t) so as to extract a spectral component S(ω0) of the composite solid-state spin system at the frequency ω0. 3. The method of claim 2, wherein the modulation pulse sequence has a modulation waveform described in a frequency domain by a filter function Ft(ω) that is mathematically related to the decoherence functional by: χ(t) = (1/π) ∫0∞ dω S(ω) Ft(ω)/ω², where S(ω) is a spectral function describing coupling of the spin impurities to a spin bath environment of the composite solid-state spin system. 4. The method of claim 1, wherein the act of extracting the spectral content at a desired frequency ω0 comprises subjecting the spin impurities to a spectral δ-function modulation, with an ideal filter function Ft(ω) with a Dirac delta function localized at ω=ω0, so that the spectral content of the spin bath at the desired frequency ω0 is given by S(ω0)=πln(C(t))/t and the ideal filter function Ft(ω) is mathematically represented by: Ft(ω)/(ω²t)=δ(ω−ω0). 5. The method of claim 4, further comprising repeating, for a number of different frequencies ω=ωi, i=1 . . . n, the acts of subjecting the spin impurities to spectral δ-function modulations with the Dirac delta function localized at each frequency ωi, so as to extract the spectral content S(ω) at all of the different frequencies ω=ωi, i=1 . . . n to obtain a broad range of spectral decomposition for the spin bath. 6. The method of claim 3, further comprising: approximating the delta function in the filter function Ft(ω) at a frequency slightly different from ω0, then extracting a spectral component S(ω0) of the composite solid-state spin system at the slightly different frequency. 7. The method of claim 6, wherein the modulation pulse sequence is an n-pulse CPMG sequence; and wherein a mathematical formula for the filter function for the n-pulse CPMG sequence is: FnCPMG(ωt) = 8 sin²(ωt/2) sin⁴(ωt/4n) / cos²(ωt/2n). 8. The method of claim 7, wherein the modulation pulse sequence is an n-pulse XY sequence. 9. The method of claim 1, wherein the solid state system is a diamond crystal, and the spin impurities are NV centers in the diamond crystal. 10. The method of claim 9, wherein the spin bath environment in the diamond crystal is dominated by fluctuating N (nitrogen atom) electronic spin impurities so as to cause decoherence of the NV centers through magnetic dipolar interactions. 11. 
The method of claim 10, wherein the N spins of the spin bath are randomly oriented, and wherein the act of extracting the spectral content of the spin bath comprises extracting a Lorentzian spectrum of the N spin bath's coupling to the NV centers, given by: S(ω) = (Δ²τC/π) · 1/(1 + (ωτC)²), where Δ is the average coupling strength of the N bath to the NV spin impurities, and where τC is the correlation time of the N bath spins with each other. 12. The method of claim 11, further comprising the act of determining the values of Δ and τC from the extracted spectrum S(ω). 13. A system comprising: a microwave pulse generator configured to generate a sequence of spin-control modulation pulses and to apply the pulses to a sample containing electronic spin impurities in a solid-state spin system; and a processing system configured to measure the coherent evolution and associated decoherence of the electronic spin impurities as a function of the number of the applied pulses and the time-spacing between the pulses, so as to extract a spectral content of a spin bath that surrounds the electronic spin impurities within the solid-state spin system. 14. The system of claim 13, wherein the electronic spin impurities comprise NV (nitrogen-vacancy) centers, and wherein the solid-state spin system comprises a diamond crystal. 15. The system of claim 13, wherein the spin-bath environment comprises 13C nuclear spin impurities and N electronic spin impurities within the diamond crystal. 16. The system of claim 13, further comprising an optical system, including an optical source configured to generate excitation optical pulses that initialize and read out the spin states of the spin impurities, when applied to the sample. 17. The system of claim 16, wherein the optical source is a laser tunable to a frequency of about 532 nm. 18. The system of claim 16, wherein the processing system comprises a computer-controlled digital delay generator coupled to the optical source and the microwave source and configured to control the timing of the microwave pulses and the optical pulses. 19. The system of claim 16, further comprising a detector configured to detect output radiation from the NV centers after the microwave pulses and the optical pulses have been applied thereto. 20. The system of claim 16, wherein the optical system further comprises an acousto-optic modulator configured to time the optical pulses so as to prepare and read out the NV spin states. 21. The system of claim 19, wherein the optical system further includes at least one of: a dichroic filter configured to separate fluorescent radiation generated by the NV centers in response to the excitation optical pulses; and an objective configured to collect the fluorescent radiation generated by the NV centers in response to the excitation optical pulses and direct the collected fluorescence to the detector. 22. The system of claim 13, wherein the solid state system is a diamond crystal, the spin impurities are NV centers in the diamond crystal, and the spin bath environment in the diamond crystal is dominated by fluctuating N (nitrogen atom) electronic spin impurities, so that the spectrum of the N spin bath's coupling to the NV centers is a Lorentzian spectrum given by: S(ω) = (Δ²τC/π) · 1/(1 + (ωτC)²), where Δ is the average coupling strength of the N bath to the NV spin impurities, and where τC is the correlation time of the N bath spins with each other. 23. 
The system of claim 22, wherein the processing system is further configured to determine the values of Δ and τC from the extracted spectrum S(ω). 24. The system of claim 13, wherein the electronic spin impurities comprise phosphorus donors, and wherein the solid-state spin system comprises silicon. 25. The system of claim 13, wherein the modulation pulse sequence comprises at least one of: an n-pulse CPMG sequence; and an n-pulse XY sequence. | Please help me write a proper abstract based on the patent claims. | Methods and systems are described for spectral decomposition of composite solid-state spin environments through quantum control of electronic spin impurities. A sequence of spin-control modulation pulses is applied to the electronic spin impurities in the solid-state spin systems. The spectral content of the spin bath that surrounds the electronic spin impurities within the solid-state spin system is extracted, by measuring the coherent evolution and associated decoherence of the spin impurities as a function of the number of the applied modulation pulses, and the time-spacing between the pulses. Using these methods, fundamental properties of the spin environment, such as the correlation times and the coupling strengths for both electronic and nuclear spins in the spin bath, can be determined. |
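The filter-function relations quoted in claims 2 through 7 and the Lorentzian of claims 11 and 22 of the record above are easier to read in display form. The LaTeX below restates those formulas as they appear in the claims; the notation (Ft, S, χ, Δ, τC) follows the record.

```latex
% Coherence and decoherence functional (claims 2 and 3)
\[ C(t) = e^{-\chi(t)}, \qquad
   \chi(t) = \frac{1}{\pi}\int_{0}^{\infty} d\omega\, S(\omega)\,\frac{F_t(\omega)}{\omega^{2}} \]

% Ideal delta-function filter and the resulting spectral estimate (claim 4)
\[ \frac{F_t(\omega)}{\omega^{2} t} = \delta(\omega-\omega_0), \qquad
   S(\omega_0) = \pi\,\frac{\ln C(t)}{t} \]

% n-pulse CPMG filter function (claim 7)
\[ F_n^{\mathrm{CPMG}}(\omega t) =
   8\sin^{2}\!\left(\tfrac{\omega t}{2}\right)
   \frac{\sin^{4}\!\left(\tfrac{\omega t}{4n}\right)}
        {\cos^{2}\!\left(\tfrac{\omega t}{2n}\right)} \]

% Lorentzian spin-bath spectrum (claims 11 and 22)
\[ S(\omega) = \frac{\Delta^{2}\tau_C}{\pi}\,\frac{1}{1+(\omega\tau_C)^{2}} \]
```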
1. A method for monitoring construction of a building, the method comprising: receiving, by a computer system, training data that includes three dimensional training point data that corresponds to a plurality of objects associated with a training building and that includes image data that corresponds to a plurality of images of the objects associated with the training building, wherein each point of the three dimensional training point data represents a three dimensional coordinate that corresponds to a surface point of one of the objects associated with the training building, and wherein the training data includes data that identifies the objects associated with the training building; generating a convolution neural network, by the computer system; training the convolution neural network, by the computer system, based on the training data and the data that identifies the objects; receiving, by the computer system, building object data that includes three dimensional point data after a LIDAR system scans objects associated with a building under construction to determine the three dimensional point data; receiving, by the computer system, building object image data that corresponds to images of the objects associated with the building after an imaging device acquires the images of the objects associated with the building; analyzing, by the computer system, by use of the convolution neural network, the building object data and the building object image data to identify the objects associated with the building and to determine physical properties of the objects associated with the building; receiving, by the computer system, building design data that represents physical design plans associated with the building; determining a mapping, by the computer system, of the objects associated with the building to objects associated with the physical design plans of the building; comparing, by the computer system, physical properties of the objects associated with the building to physical properties of the objects associated with the physical design plans of the building; based on the comparison, detecting, by the computer system, a discrepancy beyond a predetermined threshold between a physical property of an object associated with the building and a corresponding physical property of a corresponding object associated with the physical design plans of the building; and sending a message, by the computer system, that indicates the discrepancy. 2. 
The method of claim 1, wherein the training data is data derived from design data output by a computer-aided design (CAD) application that was used to capture physical design data associated with the training building, wherein the data that identifies the objects associated with the training building are data that was input by use of the CAD application and that labels the objects associated with the training building, wherein physical properties of a first object of the objects associated with the building include any of a dimension of the first object, a shape of the first object, a color of the first object, a surface texture of the first object, or a location of the first object, wherein physical properties of a second object of the objects associated with the physical design plans of the building include any of a dimension of the second object, a shape of the second object, a color of the second object, a surface texture of the second object, or a location of the second object, wherein the first object or the second object are any of a pipe, a beam, a wall, a floor, a ceiling, a toilet, a roof, a door, a door frame, a metal stud, a wood stud, a light fixture, a piece of sheetrock, a water heater, an air conditioner unit, a water fountain, a cabinet, a table, a desk, a refrigerator, or a sink, wherein the imaging device is a camera, a video camera, or a mobile device, wherein the building design data are design data output by the CAD application, and wherein the physical design plans of the building were captured by use of the CAD application. 3. The method of claim 2, wherein the CAD application is AutoCAD from Autodesk, Inc. or MicroStation from Bentley Software, Inc., and wherein the mobile device is any one of a smart phone, a tablet computer, a portable media device, a wearable device, or a laptop computer. 4. The method of claim 1, wherein the discrepancy indicates that a pipe is located in an incorrect location, the method further comprising: receiving data that represents a schedule for construction of the building; determining, based on the received building object data, that causing the pipe to be located in a correct location will cause the schedule for the construction of the building to be delayed; and sending a message that indicates that the construction of the building will be delayed. 5. A method comprising: receiving, by a computer system, sensor data determined based on sensor readings of a structure that is under construction, wherein the sensor data indicates a physical property of an object associated with the structure; analyzing, by the computer system, the sensor data to determine a mapping between the object associated with the structure and a corresponding object of a three dimensional model of the structure; detecting, by the computer system, a discrepancy between the indicated physical property of the object associated with the structure and a physical property of the corresponding object of the three dimensional model; and sending a message, by the computer system, that indicates the discrepancy. 6. The method of claim 5, wherein the sensor data corresponds to data obtained by a LIDAR system based on a scan of the structure, and wherein the sensor readings of the structure are the data obtained by the LIDAR system based on the scan of the structure. 7. The method of claim 6, wherein the sensor data includes three dimensional point data that corresponds to objects associated with the structure. 8. 
The method of claim 5, wherein the sensor data corresponds to data obtained by an image capture device while capturing an image of the structure, and wherein the sensor readings of the structure are the data obtained by the image capture device while capturing the image of the structure. 9. The method of claim 5, wherein the sensor data corresponds to data obtained by a sonar device while capturing a sonar image of the structure, and wherein the sensor readings of the structure are the data obtained by the sonar device while capturing the sonar image of the structure. 10. The method of claim 5, wherein the sensor data corresponds to data obtained by a radar system while capturing a radar image of the structure, and wherein the sensor readings of the structure are the data obtained by the radar system while capturing the radar image of the structure. 11. The method of claim 5, wherein the analyzing of the sensor data to determine the mapping includes: identifying the object associated with the structure based on a convolution neural network, and determining the mapping based on the identification of the object. 12. The method of claim 5, wherein the structure is any of a building, an airplane, a ship, a submarine, a space launch vehicle, or a space vehicle. 13. The method of claim 5, further comprising: receiving data that correlates to a schedule for construction of the structure; determining, based on the received sensor data, that fixing the discrepancy will cause the schedule for the construction of the structure to be delayed; and sending a message that indicates that fixing the discrepancy will cause the schedule for the construction of the structure to be delayed. 14. The method of claim 5, further comprising: receiving structure design data that represents physical design plans of the structure, wherein the three dimensional model of the structure is based on the structure design data. 15. The method of claim 5, wherein the discrepancy is a difference above a predetermined threshold in a dimension of the object and a corresponding dimension of the corresponding object. 16. The method of claim 5, wherein the discrepancy is a difference in color of a portion of the object and a color of a corresponding portion of the corresponding object. 17. The method of claim 5, wherein the discrepancy is a difference above a predetermined threshold of a location of the object and a location of the corresponding object. 18. A computing system comprising: a processor; a networking interface coupled to the processor; and a memory coupled to the processor and storing instructions which, when executed by the processor, cause the computing system to perform operations including: receiving, via the networking interface, sensor data determined based on sensor readings of a structure, wherein the sensor data indicates a physical property of an object associated with the structure; analyzing the sensor data to determine a mapping between the object associated with the structure and a corresponding object of a three dimensional model of the structure; and detecting a discrepancy between the indicated physical property of the object associated with the structure and a physical property of the corresponding object of the three dimensional model. 19. The computing system of claim 18, further comprising: a LIDAR device, wherein the sensor data includes three dimensional data points determined by the LIDAR device based on a scan of the structure. 20. 
The computing system of claim 19, further comprising: an image capture device, wherein the sensor data includes data determined by the image capture device based on a captured image of the structure. 21. The computing system of claim 18, wherein the object associated with the building is a component associated with the building, and wherein the corresponding object of the three dimensional model of the structure is a corresponding component of the three dimensional model of the structure. | Please help me write a proper abstract based on the patent claims. | Methods, apparatuses, and embodiments related to a technique for monitoring construction of a structure. In an example, a robot with a sensor, such as a LIDAR device, enters a building and obtains sensor readings of the building. The sensor data is analyzed and components related to the building are identified. The components are mapped to corresponding components of an architect's three dimensional design of the building, and the installation of the components is checked for accuracy. When a discrepancy above a certain threshold is detected, an error is flagged and project managers are notified. Construction progress updates do not give credit for completed construction that includes an error, resulting in improved accuracy progress updates and corresponding improved accuracy for project schedule and cost estimates. |
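The comparison step in claims 1, 15 and 17 of the record above reduces to checking measured (as-built) object properties against the mapped design-model properties with tolerances. A minimal Python sketch of that check follows; the class, function names and tolerance values are illustrative assumptions, not part of the patent.

```python
from dataclasses import dataclass

@dataclass
class ObjectProperties:
    name: str
    location: tuple   # (x, y, z) in meters
    dimensions: tuple # (w, h, d) in meters

def detect_discrepancies(as_built: ObjectProperties,
                         as_designed: ObjectProperties,
                         dim_tol=0.02, loc_tol=0.05):
    """Compare an identified as-built object against its design-model counterpart
    and report property differences beyond the predetermined thresholds."""
    issues = []
    for axis, built, designed in zip("whd", as_built.dimensions, as_designed.dimensions):
        if abs(built - designed) > dim_tol:
            issues.append(f"dimension {axis}: {built:.3f} m vs {designed:.3f} m")
    dist = sum((b - d) ** 2 for b, d in zip(as_built.location, as_designed.location)) ** 0.5
    if dist > loc_tol:
        issues.append(f"location off by {dist:.3f} m")
    return issues

# Example: a pipe detected ~8 cm from its planned position triggers a discrepancy message.
built = ObjectProperties("pipe-07", (1.08, 2.00, 3.00), (0.10, 0.10, 4.00))
planned = ObjectProperties("pipe-07", (1.00, 2.00, 3.00), (0.10, 0.10, 4.00))
print(detect_discrepancies(built, planned))
```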
1. A method of actuating a technical system, the method comprising: determining, by a data processor, a relaxed abduction problem; solving, by the data processor, the relaxed abduction problem; and actuating the technical system according to the solution of the relaxed abduction problem. 2. The method as claimed in claim 1, further comprising determining, by the data processor, tuples by taking as a basis two orders of preference over a subset of observations and a subset of assumptions, so that a theory together with the subset of the assumptions explains the subset of the observations. 3. The method as claimed in claim 2, in which the relaxed abduction problem is determined to be RAP=(T, A, O, <A, <O), wherein the theory is T, a set of abducible axioms is A, a set of observations is O with T⊭O, and further comprising taking orders of preference <A⊂P(A)×P(A) and <O⊂P(O)×P(O) as a basis for determining <-minimal tuples (A, O)∈P(A)×P(O), so that T∪A is consistent and T∪A|=O holds. 4. The method as claimed in claim 2, in which the relaxed abduction problem is solved by transforming the relaxed abduction problem into a hypergraph, so that the tuples (A, O) are encoded by pareto-optimal paths in the hypergraph. 5. The method as claimed in claim 4, wherein the pareto-optimal paths are determined via a label approach. 6. The method as claimed in claim 1, further comprising inducing hyperedges of the hypergraph by transcriptions of prescribed rules. 7. The method as claimed in claim 6, wherein the prescribed rules are determined as follows: (CR1) from A⊑A1 infer A⊑B, if A1⊑B ∈ T∪A; (CR2) from A⊑A1 and A⊑A2 infer A⊑B, if A1⊓A2⊑B ∈ T∪A; (CR3) from A⊑A1 infer A⊑∃r.B, if A1⊑∃r.B ∈ T∪A; (CR4) from A⊑∃r.A1 and A1⊑A2 infer A⊑B, if ∃r.A2⊑B ∈ T∪A; (CR5) from A⊑∃r1.B infer A⊑∃s.B, if r1⊑s ∈ T∪A; (CR6) from A⊑∃r1.A1 and A1⊑∃r2.B infer A⊑∃s.B, if r1∘r2⊑s ∈ T∪A. 8. The method as claimed in claim 4, wherein a weighted hypergraph HRAP=(V, E), which is induced by the relaxed abduction problem, is determined by V={(A⊑B),(A⊑∃r.B) | A, B ∈NCT, r ∈NR}, wherein VT={(A⊑A),(A⊑⊤) | A∈NCT}⊂V denotes a set of final states and E denotes a set of the hyperedges e=(T(e), h(e), w(e)), so that the following holds: an axiom a∈T∪A exists that justifies derivation h(e)∈V from T(e)⊂V based on one of the prescribed rules, wherein the edge weight w(e)=(wA(e), wO(e)) is determined according to wA(e) = {a} if a ∈ A, ∅ otherwise, and wO(e) = {h(e)} if h(e) ∈ O, ∅ otherwise. 9. The method as claimed in claim 8, wherein pX,t=(VX,t,EX,t) is determined as a hyperpath in H=(V, E) from X to t if (1) t∈X and pX,t=({t}, ∅) or (2) there is an edge e∈E, so that h(e)=t, T(e)={y1, . . . , yk} holds. 10. The method as claimed in claim 9, wherein shortest hyperpaths are determined by taking account of two preferences. 11. The method as claimed in claim 10, wherein the shortest hyperpaths are determined by taking account of two preferences via a label correction algorithm. 12. The method as claimed in claim 11, wherein the labels encode pareto-optimal paths to the hitherto found nodes of the hypergraph. 13. The method as claimed in claim 12, wherein alterations along the hyperedges are propagated by a meet operator and/or by a join operator. 14. The method as claimed in claim 1, wherein the relaxed abduction problem is determined via a piece of description logic. 15. An apparatus for actuating a technical system by the data processor performing the method as claimed in claim 1. | Please help me write a proper abstract based on the patent claims. 
| To enable efficient abduction even for observations that are faulty or inadequately modeled, a relaxed abduction problem is proposed in order to explain the largest possible part of the observations with as few assumptions as possible. On the basis of two preference orders over a subset of observations and a subset of assumptions, tuples can therefore be determined such that the theory, together with the subset of assumptions, explains the subset of observations. The formulation as a multi-criteria optimization problem eliminates the need to offset assumptions made and explained observations against one another. Due to the technical soundness of the approach, specific properties of the set of results (such as correctness, completeness etc.), can be checked, which is particularly advantageous in safety-critical applications. The complexity of the problem-solving process can be influenced and therefore flexibly adapted in terms of domain requirements through the selection of the underlying representation language and preference relations. The invention can be applied to any technical system, e.g. plants or power stations. |
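Claims 10 through 13 of the record above describe keeping, per hypergraph node, a set of Pareto-optimal labels and propagating changes along hyperedges with meet/join operators. The Python sketch below shows one way such a Pareto front could be maintained; the dominance order used here (subset of assumptions, superset of explained observations) is an illustrative stand-in for the preference orders <A and <O, not the patent's definition.

```python
def dominates(l1, l2):
    """l1 = (assumptions, observations). l1 dominates l2 if it uses no more
    assumptions, explains no fewer observations, and differs in at least one."""
    a1, o1 = l1
    a2, o2 = l2
    return a1 <= a2 and o1 >= o2 and (a1 != a2 or o1 != o2)

def pareto_merge(labels, new_label):
    """Join step of a label-correcting search: keep only non-dominated labels."""
    if any(dominates(l, new_label) for l in labels):
        return labels, False                  # new label is dominated, discard it
    kept = [l for l in labels if not dominates(new_label, l)]
    kept.append(new_label)
    return kept, True                         # front changed, so repropagate downstream

# Labels as (frozenset of abducible axioms used, frozenset of observations entailed).
front = [(frozenset({"a1"}), frozenset({"o1"}))]
front, changed = pareto_merge(front, (frozenset(), frozenset({"o1"})))
print(front, changed)  # the assumption-free label replaces the dominated one
```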
1. A document structure extraction method comprising: accessing, by a document structure analytics server, an untagged document that comprises a plurality of document parts, wherein certain of the document parts have a visual appearance that is defined by formatting information included in the untagged document, and wherein at least two of the document parts are distinguishable from each other based on having distinctive visual appearances; extracting at least a portion of the formatting information from the untagged document; for a particular one of the plurality of document parts, generating one or more feature-value pairs using the extracted formatting information, wherein each of the generated feature-value pairs characterizes the visual appearance of the particular document part by associating a particular value with a particular formatting feature; using a predictive model to predict a categorization for the particular document part based on the one or more feature-value pairs, wherein the predictive model applies a machine learning algorithm to make predictions based on a collection of categorized feature-value pairs aggregated from a corpus of tagged training documents; and defining tag metadata that associates the particular document part with the predicted categorization generated by the predictive model. 2. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a font size formatting feature with a particular font size value. 3. The document structure extraction method of claim 1, further comprising: identifying a characteristic of the untagged document; and selecting the predictive model based on the corpus of tagged training documents also having the identified characteristic. 4. The document structure extraction method of claim 1, wherein accessing the untagged document further comprises receiving the untagged document from a client computing device; and the method further comprises applying the tag metadata to the untagged document to produce a tagged document, and sending the tagged document to the client computing device. 5. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a font size formatting feature with a particular value that is selected from a group consisting of a largest font in the untagged document, an intermediate-sized font in the untagged document, and a smallest font in the untagged document. 6. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a font size formatting feature with a particular value that is selected from a group consisting of a font size that is larger than a preceding paragraph, a font size that is smaller than the preceding paragraph, a font size that is larger than a following paragraph, and a font size that is smaller than the following paragraph. 7. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a formatting feature with a particular value that defines the formatting feature for a first document part in relation to the formatting feature for a second document part. 8. The document structure extraction method of claim 1, wherein one of the generated feature-value pairs associates a particular value selected from a group consisting of left justification, center justification, right justification, and full justification with a paragraph alignment formatting feature. 9. 
The document structure extraction method of claim 1, further comprising using the predictive model to determine a confidence level in the categorization for the particular document part. 10. The document structure extraction method of claim 1, wherein accessing the untagged document further comprises receiving, from a document viewer executing on a client computing device, the plurality of document parts and the formatting information. 11. The document structure extraction method of claim 1, wherein accessing the untagged document further comprises receiving, by the document structure analytics server, a plurality of untagged documents from a document management system. 12. The document structure extraction method of claim 1, further comprising sending the tag metadata from the document structure analytics server to a client computing device, wherein the untagged document is stored at the client computing device. 13. The document structure extraction method of claim 1, further comprising embedding the tag metadata into the untagged document to produce a tagged document, wherein sending the tag metadata to the client computing device comprises sending the tagged document to the client computing device. 14. The document structure extraction method of claim 1, further comprising modifying the untagged document such that the visual appearance of the particular document part is further defined by the predicted categorization generated by the predictive model. 15. A non-transitory computer readable medium encoded with instructions that, when executed by one or more processors, cause a document structure analysis process to be invoked, the process comprising: identifying a plurality of training documents; accessing a particular one of the training documents, the particular training document comprising a plurality of document parts, wherein a particular one of the document parts has (a) a visual appearance defined by formatting information included in the particular training document, and (b) a document part categorization; generating, for the particular document part, one or more feature-value pairs using the formatting information, wherein each of the generated one or more feature-value pairs characterizes the visual appearance of the particular document part by correlating a particular value with a particular formatting feature; defining a document part feature vector that links the generated one or more feature-value pairs with the document part categorization; storing the document part feature vector in a memory resource hosted by a document structure analytics server; and using the document part feature vector to train a predictive model in a supervised learning framework, wherein the predictive model is configured to establish a predicted document part categorization based on at least one feature-value pair received from a client computing device. 16. The non-transitory computer readable medium of claim 15, wherein: a particular one of the generated feature-value pairs defines a proportion of the particular training document; and the document part categorization is selected from a group consisting of a heading, a title, and a body paragraph. 17. The non-transitory computer readable medium of claim 15, wherein: the plurality of training documents are identified on the basis of a common characteristic that is selected from a group consisting of an author and a topic keyword; and the predictive model is associated with the common characteristic. 18. 
A document structure evaluation system that comprises a memory device and a processor that is operatively coupled to the memory device, wherein the processor is configured to execute instructions stored in the memory that, when executed, cause the processor to carry out a document structure evaluation process that comprises: displaying, in a document viewer, an untagged document that comprises a plurality of document parts, wherein certain of the document parts have a visual appearance that is defined by formatting information included in the untagged document, and wherein at least two of the document parts are distinguishable from each other based on having distinctive visual appearances; sending, to a document structure analytics server, a particular one of the document parts and formatting information that characterizes the visual appearance of the particular document part; receiving, from the document structure analytics server, a predicted categorization for the particular document part; and embedding into the untagged document metadata that correlates the particular document part with the predicted categorization received from the document structure analytics server. 19. The document structure evaluation system of claim 18, wherein the process further comprises: receiving, from the document structure analytics server, a confidence level associated with the predicted categorization; and displaying, in the document viewer, the predicted categorization and the confidence level. 20. The document structure evaluation system of claim 18, wherein the process further comprises: displaying, in the document viewer, the predicted categorization; and receiving, from a user of the document viewer, an acceptance of the predicted categorization, wherein the acceptance is received before the metadata is embedded into the untagged document. | Please help me write a proper abstract based on the patent claims. | The structure of an untagged document can be derived using a predictive model that is trained in a supervised learning framework based on a corpus of tagged training documents. Analyzing the training documents results in a plurality of document part feature vectors, each of which correlates a category defining a document part (for example, “title” or “body paragraph”) with one or more feature-value pairs (for example, “font=Arial” or “alignment=centered”). Any suitable machine learning algorithm can be used to train the predictive model based on the document part feature vectors extracted from the training documents. Once the predictive model has been trained, it can receive feature-value pairs corresponding to a portion of an untagged document and make predictions with respect to the how that document part should be categorized. The predictive model can therefore generate tag metadata that defines a structure of the untagged document in an automated fashion. |
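The training and prediction flow in the claims above (feature-value pairs derived from formatting information, document part feature vectors, a supervised predictive model, a predicted categorization with a confidence level) maps naturally onto a standard classification pipeline. The sketch below uses scikit-learn as an assumed stand-in for the unspecified machine learning algorithm; the feature names and labels are invented for illustration.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Each document part is described by feature-value pairs derived from its
# formatting information and labeled with its part category (training corpus).
train_parts = [
    ({"font_size": "largest", "bold": True, "alignment": "center"}, "title"),
    ({"font_size": "larger_than_following", "bold": True, "alignment": "left"}, "heading"),
    ({"font_size": "smallest", "bold": False, "alignment": "justify"}, "body_paragraph"),
    ({"font_size": "smallest", "bold": False, "alignment": "left"}, "body_paragraph"),
]

vec = DictVectorizer()  # turns feature-value pairs into feature vectors
X = vec.fit_transform([fv for fv, _ in train_parts])
y = [label for _, label in train_parts]

model = LogisticRegression(max_iter=1000).fit(X, y)

# Predict a categorization (and a confidence) for a part of an untagged document.
untagged_part = {"font_size": "larger_than_following", "bold": True, "alignment": "left"}
Xq = vec.transform([untagged_part])
print(model.predict(Xq)[0], model.predict_proba(Xq).max())
```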
1. A computer-implemented method of managing data for convolution processing, the method comprising: in a device having: a memory, an input channel for receiving a stack of input data, an output channel for receiving a stack of output data, and a convolution kernel containing a stack of weights for convolving the stack of input data into the stack of output data; packing the input stack into a continuous block of memory, packing the convolution kernel into a continuous block of memory, and unpacking the output stack based on the architecture of the device; and convolving the input stack into the output stack using the stack of weights in the convolution kernel. 2. A computer-implemented method as in claim 1, wherein packing the input stack into the continuous block of memory includes: reading all input blocks in the input stack corresponding to a portion of the input data; and arranging all of the input blocks into the continuous block of memory. 3. A computer-implemented method as in claim 2, wherein the portion of the input data to which the input blocks correspond is one or more input pixels and their neighboring pixels. 4. A computer-implemented method as in claim 1, wherein packing the output stack based on the architecture of the device is allocating a set of output blocks in the output stack to use a maximum number of registers, the set of output blocks in the output stack corresponding to the portion of input data being convolved. 5. A computer-implemented method as in claim 4, wherein the portion of input data being convolved is any one of a continuous row and a continuous column of input pixels. 6. A computer-implemented method as in claim 4, wherein convolving the input stack into the output stack includes: loading into memory the stack of weights corresponding to the portion of input data; arranging the loaded weights into a convolution weight matrix; calculating each value in the allocated set of output blocks in the output stack from the corresponding values in the input blocks and the convolution weight matrix. | Please help me write a proper abstract based on the patent claims. | Convolution processing performance in digital image processing is enhanced using a data packing process for convolutional layers in deep neural networks and corresponding computation kernel code. The data packing process includes an input and weight packing of the input channels of data into a contiguous block of memory in preparation for convolution. In addition, data packing process includes an output unpacking process for unpacking convolved data into output channel blocks of memory, where the input channel block and output channel block sizes are configured for efficient data transfer and data reuse during convolution. The input packing and output packing processes advantageously improve convolution performance and conserve power while satisfying the real-time demands of digital image processing. |
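The packing described in the claims above (input blocks packed into a continuous block of memory, the kernel packed into a matrix, the output unpacked into channel blocks) is close in spirit to an im2col layout followed by a single matrix multiplication. The NumPy sketch below shows that generic pattern; it ignores the patent's device-architecture-specific details such as register allocation for the output blocks, and all names are hypothetical.

```python
import numpy as np

def pack_input(x, kh, kw):
    """Pack each output pixel's kh x kw x C_in neighborhood into one contiguous
    matrix (an im2col-style layout of the input stack)."""
    c_in, h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((oh * ow, c_in * kh * kw), dtype=x.dtype)
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = x[:, i:i + kh, j:j + kw].ravel()
    return cols  # one contiguous block of memory, one row per output pixel

def conv2d_packed(x, weights):
    """Convolve by packing the kernel stack into a matrix, doing one matmul,
    then unpacking the result into the output channel stack."""
    c_out, c_in, kh, kw = weights.shape
    cols = pack_input(x, kh, kw)
    wmat = weights.reshape(c_out, c_in * kh * kw)    # packed convolution kernel
    oh, ow = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    return (cols @ wmat.T).T.reshape(c_out, oh, ow)  # unpack into output stack

x = np.random.rand(3, 8, 8).astype(np.float32)   # C_in = 3 input channels
w = np.random.rand(4, 3, 3, 3).astype(np.float32)  # C_out = 4, 3x3 kernels
print(conv2d_packed(x, w).shape)                 # (4, 6, 6)
```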
1. A system for fully integrated collection of business impacting data, analysis of that data and generation of both analysis driven business decisions and analysis driven simulations of alternate candidate business decision comprising: a business data retrieval engine stored in a memory of and operating on a processor of a computing device; a business data analysis engine stored in a memory of and operating on a processor of a computing device; and a business decision and business action path simulation engine stored in a memory of and operating on a processor of one of more computing devices; wherein, the business information retrieval engine: (a) retrieves a plurality of business related data from a plurality of sources; (b) accept a plurality of analysis parameters and control commands directly from human interface devices or from one or more command and control storage devices; (b) stores accumulated retrieved information for processing by data analysis engine or predetermined data timeout; wherein the business information analysis engine: (c) retrieves a plurality of data types from the business information retrieval engine; (d) performs a plurality of analytical functions and transformations on retrieved data based upon the specific goals and needs set forth in a current campaign by business process analysis authors; wherein the business decision and business action path simulation engine: e) employs results of data analyses and transformations performed by the business information analysis engine, together with available supplemental data from a plurality of sources as well as any current campaign specific machine learning, commands and parameters from business process analysis authors to formulate current business operations and risk status reports; and (f) employs results of data analyses and transformations performed by the business information analysis engine, together with available supplemental data from a plurality of sources, any current campaign specific commands and parameters from business process analysis authors, as well as input gleaned from machine learned algorithms to deliver business action pathway simulations and business decision support to a first end user. 2. The system of claim 1, wherein the business information retrieval engine a stored in the memory of and operating on a processor of a computing device, employs a portal for human interface device input at least a portion of which are business related data and at least another portion of which are commands and parameters related to the conduct of a current business analysis campaign. 3. The system of claim 2, wherein the business information retrieval engine employs a high volume deep web scraper stored in the memory of an operating on a processor of a computing device, which receives at least some scrape control and spider configuration parameters from the highly customizable cloud based interface, coordinates one or more world wide web searches (scrapes) using both general search control parameters and individual web search agent (spider) specific configuration data, receives scrape progress feedback information which may lead to issuance of further web search control parameters, controls and monitors the spiders on distributed scrape servers, receives the raw scrape campaign data from scrape servers, aggregates at least portions of scrape campaign data from each web site or web page traversed as per the parameters of the scrape campaign. 4. 
The system of claim 3, wherein the archetype spiders are provided by a program library and individual spiders are created using configuration files. 5. The system of claim 3, wherein scrape campaign requests are persistently stored and can be reused or used as the basis for similar scrape campaigns. 6. The system of claim 2, wherein the business information retrieval engine employs a multidimensional time series data store stored in a memory of and operating on a processor of a computing device to receive a plurality of data from a plurality of sensors of heterogeneous types, some of which may have heterogeneous reporting and data payload transmission profiles, aggregates the sensor data over a predetermined amount of time, a predetermined quantity of data or a predetermined number of events, retrieves a specific quantity of aggregated sensor data per each access connection predetermined to allow reliable receipt and inclusion of the data, transparently retrieves quantities of aggregated sensor data too large to be reliably transferred by one access connection using a further plurality access connections to allow capture of all aggregated sensor data under conditions of heavy sensor data influx and stores aggregated sensor data in a simple key-value pair with very little or no data transformation from how the aggregated sensor data is received. 7. The system of claim 1, wherein the business data analysis engine employs a directed computational graph stored in the memory of an operating on a processor of a computing device which, retrieves streams of input from one or more of a plurality of data sources, filters data to remove data records from the stream for a plurality of reasons drawn from, but not limited to a set comprising absence of all information, damage to data in the record, and presence of in-congruent information or missing information which invalidates the data record, splits filtered data stream into two or more identical parts, formats data within one data stream based upon a set of predetermined parameters so as to prepare for meaningful storage in a data store, sends identical data stream further analysis and either linear transformation or branching transformation using resources of the system. 8. A method for fully integrated collection of business impacting data, analysis of that data and generation of both analysis driven business decisions and analysis driven business decision simulations method comprising the steps of: (a) retrieving business related data and analysis campaign command and control information using a business information retrieval engine stored in the memory of an operating on a processor of a computing device; (b) analyzing and transforming retrieved business related data using a business information analysis engine stored in the memory of an operating on a processor of a computing device in conjunction with previously designed analysis campaign command and control information; and (c) presenting business decision critical information as well as business pathway simulation information using a business decision and business path simulation engine based upon the results of analysis of previously retrieved business related data and previously entered analysis campaign command and control information. 9. 
The method of claim 8, wherein the business information retrieval engine employs, a portal for human interface device input at least a portion of which are business related data and at least another portion of which are commands and parameters related to the conduct of a current business analysis campaign. 10. The method of claim 9, wherein the business information retrieval engine employs a high volume deep web scraper stored in the memory of an operating on a processor of a computing device, which receives at least some scrape control and spider configuration parameters from the highly customizable cloud based interface, coordinates one or more world wide web searches (scrapes) using both general search control parameters and individual web search agent (spider) specific configuration data, receives scrape progress feedback information which may lead to issuance of further web search control parameters, controls and monitors the spiders on distributed scrape servers, and receives the raw scrape campaign data from scrape servers, aggregates at least portions of scrape campaign data from each web site or web page traversed as per the parameters of the scrape campaign. 11. The method of claim 10, wherein the archetype spiders are provided by a program library and individual spiders are created using configuration files. 12. The method of claim 10, wherein scrape campaign requests are persistently stored and can be reused or used as the basis for similar scrape campaigns. 13. The method of claim 9, wherein the business information retrieval engine employs a multidimensional time series data store stored in a memory of and operating on a processor of a computing device to receive a plurality of data from a plurality of sensors of heterogeneous types, some of which may have heterogeneous reporting and data payload transmission profiles, aggregates the sensor data over a predetermined amount of time, a predetermined quantity of data or a predetermined number of events, retrieves a specific quantity of aggregated sensor data per each access connection predetermined to allow reliable receipt and inclusion of the data, transparently retrieves quantities of aggregated sensor data too large to be reliably transferred by one access connection using a further plurality access connections to allow capture of all aggregated sensor data under conditions of heavy sensor data influx and stores aggregated sensor data in a simple key-value pair with very little or no data transformation from how the aggregated sensor data is received. 14. The system of claim 8, wherein the business data analysis engine employs a directed computational graph, stored in the memory of an operating on a processor of a computing device which, retrieves streams of input from one or more of a plurality of data sources, filters data to remove data records from the stream for a plurality of reasons drawn from, but not limited to a set comprising absence of all information, damage to data in the record, and presence of in-congruent information or missing information which invalidates the data record, splits filtered data stream into two or more identical parts, formats data within one data stream based upon a set of predetermined parameters so as to prepare for meaningful storage in a data store, sends identical data stream further analysis and either linear transformation or branching transformation using resources of the system. | Please help me write a proper abstract based on the patent claims. 
| A system for fully integrated collection of business impacting data, analysis of that data and generation of both analysis driven business decisions and analysis driven simulations of alternate candidate business actions, comprising a business data retrieval engine stored in a memory of and operating on a processor of a computing device, a business data analysis engine stored in a memory of and operating on a processor of a computing device, and a business decision and business action path simulation engine stored in a memory of and operating on a processor of one or more computing devices, has been developed.
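The filter/split/format behavior recited for the directed computational graph (claims 7, 8 and 14 above) maps naturally onto a small streaming pipeline. The sketch below is only an illustration of that flow; the record fields ("source", "timestamp", "payload") and the function names are assumptions for the example, not anything specified in the claims.

```python
# Minimal sketch of the filter/split/format stages described in claims 7, 8 and 14.
# All names (filter_records, split_stream, format_for_storage) are illustrative
# assumptions, not identifiers from the claims.

def filter_records(stream):
    """Drop records that are empty, damaged, or missing required fields."""
    required = {"source", "timestamp", "payload"}
    for record in stream:
        if not record:                       # absence of all information
            continue
        if not required.issubset(record):    # missing information
            continue
        if record.get("payload") is None:    # damaged / invalid record
            continue
        yield record

def split_stream(stream, n=2):
    """Materialize the filtered stream into n identical parts."""
    records = list(stream)
    return [list(records) for _ in range(n)]

def format_for_storage(record):
    """Format one branch of the stream as simple key-value pairs for a data store."""
    return {"key": f"{record['source']}:{record['timestamp']}", "value": record["payload"]}

if __name__ == "__main__":
    raw = [
        {"source": "scraper-1", "timestamp": 1700000000, "payload": {"price": 9.5}},
        {},                                                   # filtered out: empty
        {"source": "sensor-7", "timestamp": 1700000060},      # filtered out: no payload
    ]
    store_branch, analysis_branch = split_stream(filter_records(raw))
    print([format_for_storage(r) for r in store_branch])
    print(len(analysis_branch), "records forwarded for further transformation")
```

Running the module drops the two malformed records, formats one branch as key-value pairs for storage, and leaves the identical second branch available for further linear or branching transformation.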
1. A method of avoiding harmful chemical emission concentration levels, comprising: implementing a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels; detecting an indicator; and implementing a corrective action by at least one of altering the operation of a chemical emissions source, modifying a time of a scheduled task, or changing prescribed personal protective equipment. 2. The method of claim 1, wherein the cognitive suite of workplace hygiene and injury predictors has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels by receiving signals from one or more sensors, correlating the signals with scheduled operations, and identifying indicators corresponding to one or more chemical emission sources operating at the time of the harmful chemical emission concentration levels based on the scheduled operation. 3. The method of claim 1, which further comprises predicting chemical emission exposure levels of a person, tracking cumulative actual chemical emission exposure levels for the person, and pre-emptively adjusting the time of a scheduled task in anticipation of predicted chemical emission exposure levels. 4. The method of claim 3, wherein predicting chemical emission exposure levels includes determining the location of the person for one or more assigned tasks, identifying a path used by the person to transit to the location(s), analyzing a chemical emissions map for the location(s), and calculating a predicted amount of cumulative chemical emissions exposure for the person. 5. The method of claim 3, which further comprises monitoring the actual chemical emission exposure levels experienced by the person in a chemical emissions zone. 6. The method of claim 5, which further comprises identifying the location of a person in the chemical emissions zone, and transmitting a control signal to the chemical emissions source to slow down or turn off for a predetermined period of time to reduce the actual chemical emission exposure levels in the chemical emission zone. 7. The method of claim 1, wherein the indicator is identification by facial recognition, activation of an interlock, detection of an RFID at a portal, or combinations thereof. 8. A chemical emissions protection system, comprising: a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emissions sources and indicators of harmful chemical emission concentration levels; a monitoring interface coupled to one or more sensor(s) for detecting an indicator; and a warning system configured to implement a corrective action by altering the operation of a chemical emission source, modifying a time of a scheduled task, and/or changing prescribed personal protective equipment. 9. The system of claim 8, wherein the cognitive suite of workplace hygiene and injury predictors is trained by receiving signals from one or more sensors, correlating the signals with scheduled operations, and identifying indicators corresponding to one or more chemical emission sources operating at the time of the harmful chemical emission concentration levels based on the scheduled operation. 10. 
The system of claim 8, which further comprises a scheduler configured to predict chemical emission exposure levels of a person, track cumulative actual chemical emission exposure levels for the person, and pre-emptively adjust the time of a scheduled task in anticipation of predicted chemical emission exposure levels. 11. The system of claim 10, wherein the monitoring interface is configured to determine the location of the person for one or more assigned tasks, identify a path used by the person to transit to the location(s), analyze a chemical emissions map for the location(s), and calculate a predicted amount of cumulative chemical emissions exposure for the person. 12. The system of claim 10, wherein the monitoring interface is configured to monitor the actual chemical emission exposure levels experienced by the person in a chemical emissions zone. 13. The system of claim 12, wherein the monitoring interface is configured to identify the location of a person in the chemical emissions zone, and transmit a control signal to the chemical emissions source to slow down or turn off for a predetermined period of time to reduce the actual chemical emission exposure levels in the chemical emissions zone. 14. The system of claim 8, wherein the indicator is identification by facial recognition, activation of an interlock, detection of an RFID at a portal, or combinations thereof. 15. A non-transitory computer readable storage medium comprising a computer readable program for predicting exposure to harmful chemical emission concentration levels, wherein the computer readable program when executed on a computer causes the computer to perform the steps of: implementing a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels; detecting an indicator; and implementing a corrective action by at least one of altering the operation of a chemical emissions source, modifying a time of a scheduled task, or changing the prescribed personal protective equipment. 16. The non-transitory computer readable storage medium of claim 15, wherein the computer readable program when executed on a computer causes the computer to: learn to identify chemical emission sources and indicators of harmful chemical emission concentration levels by receiving signals from one or more sensors, correlating the signals with scheduled operations, and identifying indicators corresponding to one or more chemical emission sources operating at the time of the harmful chemical emission concentration levels based on the scheduled operation. 17. The non-transitory computer readable storage medium of claim 15, wherein the computer readable program when executed on a computer causes the computer to: predict chemical emission exposure levels of a person, track cumulative actual chemical emission exposure levels for the person, and pre-emptively adjust the time of a scheduled task in anticipation of predicted chemical emission exposure levels. 18. 
The non-transitory computer readable storage medium of claim 17, wherein the computer readable program when executed on a computer causes the computer to: predict chemical emission exposure levels by determining the location of the person for one or more assigned tasks, identifying a path used by the person to transit to the location(s), analyzing a chemical emissions map for the location(s), and calculating a predicted amount of cumulative chemical emissions exposure for the person. 19. The non-transitory computer readable storage medium of claim 17, wherein the computer readable program when executed on a computer causes the computer to: monitor the actual chemical emission exposure levels experienced by the person in a chemical emissions zone. 20. The non-transitory computer readable storage medium of claim 19, wherein the computer readable program when executed on a computer causes the computer to: identify the location of a person in the chemical emissions zone, and transmit a control signal to the chemical emissions source to slow down or turn off for a predetermined period of time to reduce the actual chemical emission exposure levels in the chemical emissions zone. | Please help me write a proper abstract based on the patent claims. | A method of avoiding harmful chemical emission concentration levels, the method comprising implementing a cognitive suite of workplace hygiene and injury predictors (WHIP) that has learned to identify chemical emission sources and indicators of harmful chemical emission concentration levels, detecting an indicator, and implementing a corrective action by at least one of altering the operation of a chemical emissions source, modifying a time of a scheduled task, or changing prescribed personal protective equipment. |
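Claims 3-4 and their system/medium counterparts describe predicting cumulative exposure from a person's task locations, transit path and an emissions map, then pre-emptively adjusting the schedule. The sketch below shows one way that calculation could look; the emissions map values, dwell times and the 120 ppm-minute limit are invented for illustration and are not taken from the claims.

```python
# Minimal sketch of the cumulative-exposure prediction and schedule adjustment in
# claims 3-4 / 10-11 / 17-18. Map values, dwell times and the limit are assumptions.

EMISSIONS_MAP = {            # location -> concentration (ppm) for one zone snapshot
    "dock": 0.2, "corridor": 0.5, "paint_booth": 8.0, "assembly": 1.5,
}

def predicted_exposure(path, dwell_minutes):
    """Cumulative predicted exposure (ppm-minutes) along a transit path."""
    return sum(EMISSIONS_MAP.get(loc, 0.0) * dwell_minutes.get(loc, 1.0) for loc in path)

def schedule_task(task, path, dwell_minutes, cumulative_so_far, limit=120.0):
    """Pre-emptively defer a task if predicted cumulative exposure exceeds the limit."""
    predicted = cumulative_so_far + predicted_exposure(path, dwell_minutes)
    action = "defer" if predicted > limit else "proceed"
    return {"task": task, "action": action, "predicted_ppm_minutes": predicted}

if __name__ == "__main__":
    path = ["dock", "corridor", "paint_booth"]
    dwell = {"dock": 5, "corridor": 2, "paint_booth": 30}
    print(schedule_task("touch-up spray", path, dwell, cumulative_so_far=10.0))
```

Here the paint booth dominates the prediction, so the task is deferred; in the claimed system the same comparison could instead trigger slowing or shutting off the emissions source or changing the prescribed protective equipment.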
1. A method of formatting a constraint problem for input to a quantum processor and solving the constraint problem, the method comprising: representing, with a classical processor, a quantum processor, or a combination thereof, the constraint problem as a digital circuit comprising at least one gate and at least one constrained input, at least one constrained output, or a combination of at least one constrained input and at least one constrained output; generating, with the classical processor, the quantum processor, or the combination thereof, a matrix for each of the at least one gates; generating, with the classical processor, the quantum processor, or the combination thereof, a constraint matrix for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output; generating, with the classical processor, the quantum processor, or the combination thereof, a final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix; translating, with the classical processor, the quantum processor, or the combination thereof, the final matrix into an energy representation useable by the quantum processor; minimizing, with the quantum processor, an energy of the energy representation to generate a quantum bit (q-bit) output; and determining, with the classical processor, the quantum processor, or the combination thereof, a result of the constraint problem based on the q-bit output. 2. The method of claim 1, wherein the translating comprises interpreting the final matrix as a Hamiltonian energy matrix. 3. The method of claim 2, wherein the Hamiltonian energy matrix comprises a spin glass Hamiltonian energy matrix. 4. The method of claim 2, wherein the Hamiltonian energy matrix represents each of the at least one constrained inputs, each of the at least one constrained outputs, or each of the combination of at least one constrained input and at least one constrained output as a row and column entry in the Hamiltonian energy matrix. 5. The method of claim 2, further comprising converting, with the classical processor, the quantum processor, or the combination thereof, the Hamiltonian energy matrix into an appropriate form for the quantum computer used to minimize the energy of the Hamiltonian energy matrix. 6. The method of claim 1, wherein the representing further comprises assigning a label to each of a plurality of intermediate outputs within the digital circuit. 7. The method of claim 1, wherein the representing further comprises assigning a label to each of the at least one gates. 8. The method of claim 1, wherein the digital circuit comprises at least one two-input logic gate selected from a set of universal gates. 9. The method of claim 8, wherein the set of universal gates comprises eight two-input gates formed by all two-input combinations of AND and OR with optional NOT functionality on one or both of the inputs. 10. The method of claim 1, wherein the digital circuit comprises at least one sub-circuit that evaluates to true when constraints on an input are satisfied and an output of the sub-circuit is constrained to be true. 11. 
The method of claim 1, further comprising converting, with the classical processor, the quantum processor, or the combination thereof, the digital circuit into a table comprising data about the at least one gate and the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. 12. The method of claim 1, wherein generating the matrix for each of the at least one gates comprises: computing a permutation matrix for the gate; choosing a gate matrix based on a gate type of the gate; and multiplying a transpose of the permutation matrix, the gate matrix, and the permutation matrix to form the matrix for the gate. 13. The method of claim 1, wherein generating the final matrix comprises: adding each matrix for each of the at least one gates together to create a circuit matrix; and adding the constraint matrix to the circuit matrix. 14. The method of claim 1, wherein the quantum processor uses adiabatic quantum computing. 15. The method of claim 1, wherein the digital circuit represents a cryptographic function, a cryptographic algorithm, or a traveling salesman problem. 16. The method of claim 15, wherein the cryptographic function is a one-way function. 17. A system for formatting a constraint problem for input to a quantum computer and solving the constraint problem, the system comprising: a classical computer configured to: represent the constraint problem as a digital circuit comprising at least one gate and at least one constrained input, at least one constrained output, or a combination of at least one constrained input and at least one constrained output; generate a matrix for each of the at least one gates; generate a constraint matrix for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output; generate a final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix; and translate the final matrix into an energy representation useable by the quantum computer; and the quantum computer configured to: minimize an energy of the energy representation to generate a quantum bit (q-bit) output; wherein the classical computer is further configured to determine a result of the constraint problem based on the q-bit output. 18. The system of claim 17, wherein the translating comprises interpreting the final matrix as a Hamiltonian energy matrix. 19. The system of claim 18, wherein the Hamiltonian energy matrix comprises a spin glass Hamiltonian energy matrix. 20. The system of claim 18, wherein the Hamiltonian energy matrix represents each of the at least one constrained inputs, each of the at least one constrained outputs, or each of the combination of at least one constrained input and at least one constrained output as a row and column entry in the Hamiltonian energy matrix. 21. The system of claim 18, wherein the classical computer is further configured to convert the Hamiltonian energy matrix into an appropriate form for the quantum computer used to minimize the energy of the Hamiltonian energy matrix. 22. The system of claim 17, wherein the representing further comprises assigning a label to each of a plurality of intermediate outputs within the digital circuit. 23. The system of claim 17, wherein the representing further comprises assigning a label to each of the at least one gates. 24. 
The system of claim 17, wherein the digital circuit comprises at least one two-input logic gate selected from a set of universal gates. 25. The system of claim 24, wherein the set of universal gates comprises eight two-input gates formed by all two-input combinations of AND and OR with optional NOT functionality on one or both of the inputs. 26. The system of claim 17, wherein the digital circuit comprises at least one sub-circuit that evaluates to true when constraints on an input are satisfied and an output of the sub-circuit is constrained to be true. 27. The system of claim 17, wherein the classical computer is further configured to convert the digital circuit into a table comprising data about the at least one gate and the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. 28. The system of claim 17, wherein generating the matrix for each of the at least one gates comprises: computing a permutation matrix for the gate; choosing a gate matrix based on a gate type of the gate; and multiplying a transpose of the permutation matrix, the gate matrix, and the permutation matrix to form the matrix for the gate. 29. The system of claim 17, wherein generating the final matrix comprises: adding each matrix for each of the at least one gates together to create a circuit matrix; and adding the constraint matrix to the circuit matrix. 30. The system of claim 17, wherein the quantum computer uses adiabatic quantum computing. 31. The system of claim 17, wherein the digital circuit represents a cryptographic function, a cryptographic algorithm, or a traveling salesman problem. 32. The system of claim 31, wherein the cryptographic function is a one-way function. 33. A quantum computer configured to: represent a constraint problem as a digital circuit comprising at least one gate and at least one constrained input, at least one constrained output, or a combination of at least one constrained input and at least one constrained output; generate a matrix for each of the at least one gates; generate a constraint matrix for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output; generate a final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix; translate the final matrix into an energy representation useable by the quantum computer; minimize an energy of the energy representation to generate a quantum bit (q-bit) output; and determine a result of the constraint problem based on the q-bit output. 34. The quantum computer of claim 33, wherein the translating comprises interpreting the final matrix as a Hamiltonian energy matrix. 35. The quantum computer of claim 34, wherein the Hamiltonian energy matrix comprises a spin glass Hamiltonian energy matrix. 36. The quantum computer of claim 34, wherein the Hamiltonian energy matrix represents each of the at least one constrained inputs, each of the at least one constrained outputs, or each of the combination of at least one constrained input and at least one constrained output as a row and column entry in the Hamiltonian energy matrix. 37. The quantum computer of claim 34, wherein the quantum computer is further configured to convert the Hamiltonian energy matrix into an appropriate form for the quantum computer used to minimize the energy of the Hamiltonian energy matrix. 38. 
The quantum computer of claim 33, wherein the representing further comprises assigning a label to each of a plurality of intermediate outputs within the digital circuit. 39. The quantum computer of claim 33, wherein the representing further comprises assigning a label to each of the at least one gates. 40. The quantum computer of claim 33, wherein the digital circuit comprises at least one two-input logic gate selected from a set of universal gates. 41. The quantum computer of claim 40, wherein the set of universal gates comprises eight two-input gates formed by all two-input combinations of AND and OR with optional NOT functionality on one or both of the inputs. 42. The quantum computer of claim 33, wherein the digital circuit comprises at least one sub-circuit that evaluates to true when constraints on an input are satisfied and an output of the sub-circuit is constrained to be true. 43. The quantum computer of claim 33, wherein the quantum computer is further configured to convert the digital circuit into a table comprising data about the at least one gate and the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. 44. The quantum computer of claim 33, wherein generating the matrix for each of the at least one gates comprises: computing a permutation matrix for the gate; choosing a gate matrix based on a gate type of the gate; and multiplying a transpose of the permutation matrix, the gate matrix, and the permutation matrix to form the matrix for the gate. 45. The quantum computer of claim 33, wherein generating the final matrix comprises: adding each matrix for each of the at least one gates together to create a circuit matrix; and adding the constraint matrix to the circuit matrix. 46. The quantum computer of claim 33, wherein the quantum computer uses adiabatic quantum computing. 47. The quantum computer of claim 33, wherein the digital circuit represents a cryptographic function, a cryptographic algorithm, or a traveling salesman problem. 48. The quantum computer of claim 47, wherein the cryptographic function is a one-way function. | Please help me write a proper abstract based on the patent claims. | A constraint problem may be represented as a digital circuit comprising at least one gate and at least one constrained input or at least one constrained output, or a combination of at least one constrained input and at least one constrained output. A matrix may be generated for each of the at least one gates. A constraint matrix may be generated for the at least one constrained input, the at least one constrained output, or the combination of at least one constrained input and at least one constrained output. A final matrix comprising a combination of each matrix for each of the at least one gates and the constraint matrix may be generated. The final matrix may be translated into an energy representation useable by a quantum computer. The energy of the energy representation may be minimized to generate a q-bit output, and a result of the constraint problem may be determined based on the q-bit output. |
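Claims 12-13 (and 28-29, 44-45) spell out the matrix construction concretely: each gate's matrix is the product of the transpose of a permutation matrix, a type-specific gate matrix, and the permutation matrix; the gate matrices are summed; and the constraint matrix is added before the result is read as an energy function. The sketch below follows that recipe for a single AND gate whose output is constrained to be true, using a standard QUBO-style AND penalty as the gate matrix and a brute-force search standing in for the quantum annealer; the gate template and penalty weight are assumptions, not values from the patent.

```python
import itertools
import numpy as np

# AND-gate QUBO template over local variables (a, b, z): energy is 0 exactly when z = a AND b.
AND_GATE = np.array([[0.0, 0.5, -1.0],
                     [0.5, 0.0, -1.0],
                     [-1.0, -1.0, 3.0]])

def gate_matrix(gate_template, wires, n):
    """Embed a gate template into an n-variable problem via a permutation matrix (cf. claim 12)."""
    g_local = np.zeros((n, n))
    k = gate_template.shape[0]
    g_local[:k, :k] = gate_template
    # Permutation sending the global variable order into (gate wires, remaining variables).
    order = list(wires) + [i for i in range(n) if i not in wires]
    p = np.zeros((n, n))
    for row, col in enumerate(order):
        p[row, col] = 1.0
    return p.T @ g_local @ p

def constraint_matrix(n, fixed_true, penalty=10.0):
    """Diagonal penalty rewarding constrained outputs that are set to 1."""
    c = np.zeros((n, n))
    for i in fixed_true:
        c[i, i] = -penalty
    return c

def minimize_energy(q):
    """Brute-force stand-in for the quantum annealer: argmin over all bit assignments."""
    n = q.shape[0]
    best = min(itertools.product((0, 1), repeat=n),
               key=lambda x: np.array(x) @ q @ np.array(x))
    return best, float(np.array(best) @ q @ np.array(best))

if __name__ == "__main__":
    n = 4                                                  # global variables x0..x3
    final = gate_matrix(AND_GATE, wires=(1, 3, 2), n=n)    # a=x1, b=x3, z=x2 (cf. claim 13: sum of gate matrices)
    final = final + constraint_matrix(n, fixed_true=[2])   # constrain the output x2 to be true
    assignment, energy = minimize_energy(final)
    print("assignment:", assignment, "energy:", energy)    # minimum forces x1 = x3 = 1
```

The minimizer returns an assignment with x1 = x3 = 1, i.e. the constrained output propagates back to the gate inputs, which is the mechanism the claims rely on for inverting circuits such as one-way functions.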
1. A system, comprising: one or more computing devices configured to: generate, at a machine learning service of a provider network, one or more space-efficient representations of a first set of observation records associated with a machine learning model, wherein individual ones of the space-efficient representations utilize less storage than the first set of observation records, and wherein at least a subset of observation records of the first set include respective values of a first group of one or more variables; receive an indication that a second set of observation records is to be examined for the presence of duplicates of observation records of the first set in accordance with a probabilistic duplicate detection technique, wherein at least a subset of observation records of the second set include respective values of the first group of one or more variables; obtain, using at least one space-efficient representation of the one or more space-efficient representations, a duplication metric corresponding to at least a portion of the second set, indicative of a non-zero probability that one or more observation records of the second set are duplicates of one or more observation records of the first set with respect to at least the first group of one or more variables; and in response to a determination that the duplication metric meets a threshold criterion, implement one or more responsive actions including a notification of a detection of potential duplicate observation records to the client. 2. The system as recited in claim 1, wherein a particular space-efficient representation of the one or more space-efficient representations includes one or more of: (a) a Bloom filter, (b) a quotient filter, or (c) a skip list. 3. The system as recited in claim 1, wherein the first set of one or more observation records comprises a training data set of the machine learning model, and wherein the second set of one or more observation records comprises a test data set of the machine learning model. 4. The system as recited in claim 1, wherein a particular space-efficient representation of the one or more space-efficient representations includes a Bloom filter, wherein the one or more computing devices are further configured to: estimate, prior to generating the Bloom filter, (a) an approximate count of observation records included in the first set and (b) an approximate size of individual observation records of the first set; and determine, based at least in part on the approximate count or the approximate size, one or more parameters to be used to generate the Bloom filter, including one or more of: (a) a number of bits to be included in the Bloom filter (b) a number of hash functions to be used to generate the Bloom filter, or (c) a particular type of hash function to be used to generate the Bloom filter. 5. 
The system as recited in claim 1, wherein the one or more responsive actions include one or more of: (a) a transmission of an indication, to the client, of a particular observation record of the second set which has been identified as having a non-zero probability of being a duplicate, (b) a removal, from the second set, of a particular observation record which has been identified as having a non-zero probability of being a duplicate, prior to performing a particular machine learning task using the second set, (c) a transmission, to the client, of an indication of a potential prediction error associated with removing, from the second set, one or more observation records which have been identified as having non-zero probabilities of being duplicates, or (d) a cancellation of a machine learning job associated with the second set. 6. A method, comprising: performing, by one or more computing devices: generating, at a machine learning service, one or more alternate representations of a first set of observation records, wherein at least one alternate representation occupies a different amount of space than the first set of observation records; obtaining, using at least one alternate representation of the one or more alternate representations, a duplication metric corresponding to at least a portion of a second set of observation records, indicative of a non-zero probability that one or more observation records of the second set are duplicates of respective observation records of the first set, with respect to one or more variables for which respective values are included in at least some observation records of the first set; and in response to determining that the duplication metric meets a threshold criterion, implementing one or more responsive actions. 7. The method as recited in claim 6, wherein a particular alternate representation of the one or more alternate representations includes one or more of: (a) a Bloom filter, (b) a quotient filter, or (c) a skip list. 8. The method as recited in claim 6, wherein the first set of one or more observation records comprises a training data set of a particular machine learning model, and wherein the second set of one or more observation records comprises a test data set of the particular machine learning model. 9. The method as recited in claim 6, wherein a particular alternate representation of the one or more alternate representations includes a Bloom filter, further comprising performing, by the one or more computing devices: estimating, prior to generating the Bloom filter, (a) an approximate count of observation records included in the first set and (b) an approximate size of individual observation records of the first set; and determining, based at least in part on the approximate count or the approximate size, one or more parameters to be used to generate the Bloom filter, including one or more of: (a) a number of bits to be included in the Bloom filter (b) a number of hash functions to be used to generate the Bloom filter, or (c) a particular type of hash function to be used to generate the Bloom filter. 10. 
The method as recited in claim 6, wherein the one or more responsive actions include one or more of: (a) notifying a client of a detection of potential duplicate observation records, (b) providing an indication of a particular observation record of the second set which has been identified as having a non-zero probability of being a duplicate, (c) removing, from the second set, a particular observation record which has been identified as having a non-zero probability of being a duplicate, prior to performing a particular machine learning task using the second set, (d) providing, to a client, an indication of a potential prediction error associated with removing, from the second data set, one or more observation records which have been identified as having non-zero probabilities of being duplicates, or (e) abandoning a machine learning job associated with the second set. 11. The method as recited in claim 6, wherein a particular responsive action of the one or more responsive actions comprises providing an indication of a confidence level that a particular observation record of the second set is a duplicate. 12. The method as recited in claim 6, wherein the group of one or more variables excludes an output variable whose value is to be predicted by a machine learning model. 13. The method as recited in claim 6, wherein said determining that the duplication metric meets a threshold criterion comprises one or more of: (a) determining that the number of observation records of the second set which have been identified as having non-zero probabilities of being duplicates exceeds a first threshold, or (b) determining that the fraction of the observation records of the second set that have been identified as having non-zero probabilities of being duplicates exceeds a second threshold. 14. The method as recited in claim 6, wherein said generating the one or more alternate representations of the first set of observation records comprises: subdividing the first set of observation records into a plurality of partitions; generating, at respective servers of the machine learning service, a respective Bloom filter corresponding to individual ones of the plurality of partitions; and combining the Bloom filters generated at the respective servers into a consolidated Bloom filter. 15. The method as recited in claim 6, further comprising performing, by the one or more computing devices: receiving, via a programmatic interface, an indication from the client of one or more of (a) a parameter to be used by the machine learning service to determine whether the threshold criterion has been met, or (b) the one or more responsive actions. 16. The method as recited in claim 6, wherein the first set of observation records and the second set of observation records are respective subsets of one of: (a) a training data set of a particular machine learning model, (b) a test data set of a particular machine learning model, or (c) a source data set from which a training data set of a particular machine learning model and a test data set of the particular machine learning model are to be obtained. 17. 
A non-transitory computer-accessible storage medium storing program instructions that, when executed on one or more processors: determine, at a machine learning service, that an analysis to detect whether at least a portion of contents of one or more observation records of a first set of observation records are duplicated in a second set of observation records is to be performed; obtain a duplication metric corresponding to at least a portion of a second set of observation records, indicative of a non-zero probability that one or more observation records of the second set are duplicates of respective observation records of the first set, with respect to one or more variables for which respective values are included in at least some observation records of the first set; and in response to a determination that the duplication metric meets a threshold criterion, implement one or more responsive actions. 18. The non-transitory computer-accessible storage medium as recited in claim 17, wherein to obtain the duplication metric, the instructions when executed on the one or more processors generate an alternate representation of the first set of observation records, wherein the alternate representation includes one or more of: (a) a Bloom filter, (b) a quotient filter, or (c) a skip list. 19. The non-transitory computer-accessible storage medium as recited in claim 17, wherein the first set of one or more observation records comprises a training data set of a particular machine learning model, and wherein the second set of one or more observation records comprises a test data set of the particular machine learning model. 20. The non-transitory computer-accessible storage medium as recited in claim 17, wherein a particular responsive action of the one or more responsive actions comprises providing an indication of a confidence level that a particular observation record of the second set is a duplicate. | Please help me write a proper abstract based on the patent claims. | At a machine learning service, a determination is made that an analysis to detect whether at least a portion of contents of one or more observation records of a first data set are duplicated in a second set of observation records is to be performed. A duplication metric is obtained, indicative of a non-zero probability that one or more observation records of the second set are duplicates of respective observation records of the first set. In response to determining that the duplication metric meets a threshold criterion, one or more responsive actions are initiated, such as the transmission of a notification to a client of the service.
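The space-efficient duplicate check described in claims 1-9 (a Bloom filter built over the first set, probed with the second set, and a duplication metric compared to a threshold) can be prototyped in a few lines. The sketch below uses a toy Bloom filter sized from an approximate record count, compares records only on the non-target variables, and prints a notification when the flagged fraction crosses an assumed 25% threshold; the sizing formula, hash scheme and threshold are illustrative choices, not the service's actual parameters.

```python
import hashlib
import math

class BloomFilter:
    """Toy Bloom filter sized from an approximate record count (cf. claims 4 and 9)."""
    def __init__(self, approx_count, false_positive_rate=0.01):
        self.m = max(8, int(-approx_count * math.log(false_positive_rate) / math.log(2) ** 2))
        self.k = max(1, round(self.m / approx_count * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1       # double hashing
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] >> (pos % 8) & 1 for pos in self._positions(item))

def record_key(record, variables):
    """Concatenate only the compared variables; the target column is excluded (cf. claim 12)."""
    return "|".join(str(record[v]) for v in variables)

def duplication_metric(train, test, variables):
    bf = BloomFilter(approx_count=len(train))
    for rec in train:
        bf.add(record_key(rec, variables))
    flagged = sum(record_key(rec, variables) in bf for rec in test)
    return flagged, flagged / len(test)

if __name__ == "__main__":
    variables = ["age", "income"]
    train = [{"age": 30, "income": 50000, "label": 1}, {"age": 41, "income": 72000, "label": 0}]
    test = [{"age": 30, "income": 50000, "label": 1}, {"age": 25, "income": 39000, "label": 0}]
    count, fraction = duplication_metric(train, test, variables)
    if fraction > 0.25:   # threshold criterion -> responsive action (here, just a notification)
        print(f"potential duplicates: {count} records ({fraction:.0%} of the test set)")
```

Because a Bloom filter can return false positives but never false negatives, the metric is probabilistic in exactly the sense the claims describe: flagged records have a non-zero probability of being duplicates, and a confidence level can be attached to each one.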
1. A non-transitory computer-readable medium having a program executable in at least one computing device, the program comprising program code that, when executed by the at least one computing device, causes the at least one computing device to: access reference data from a data store accessible to the at least one computing device; parse the reference data using a natural language processor to identify relevant data; store the relevant data identified in at least one decision tree data structure; apply an ingestion process to receive input data from at least one client device, the input data being received by the at least one computing device from the client device remotely over a transmission network; query the at least one decision tree data structure to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process; and generate a first metric for a first child node and a second metric for a second child node using the input data, the first child node and the second child node being children of the node identified in the at least one decision tree data structure. 2. The non-transitory computer-readable medium of claim 1, wherein the program further comprises program code that, when executed, causes the at least one computing device to identify a subsequent state of the ingestion process based on the node identified in the at least one decision tree data structure. 3. The non-transitory computer-readable medium of claim 1, wherein the program further comprises program code that, when executed, causes the at least one computing device to: identify an unrecognizable term in the input data; and communicate with a service external to the at least one computing device to define the unrecognizable term through an application programming interface (API) of the service. 4. The non-transitory computer-readable medium of claim 1, wherein the at least one client device comprises a plurality of client devices; and wherein the program further comprises program code that, when executed, causes the at least one computing device to: access input data provided from a first one of the plurality of client devices obtained during the state of the ingestion process; access input data provided from a second one of the plurality of client devices obtained during the state of the ingestion process; and identify that a conflict exists between the input data provided from the first one of the plurality of client devices and the input data provided from the second one of the plurality of client devices. 5. The non-transitory computer-readable medium of claim 1, wherein the program further comprises program code that, when executed, causes the at least one computing device to determine a confidence metric from the input data, wherein the first metric and the second metric are generated using the confidence metric. 6. 
The non-transitory computer-readable medium of claim 5, wherein the program further comprises program code that, when executed, causes the at least one computing device to determine a credibility metric associated with the client device from which the input data is received, wherein the confidence metric is determined using the credibility metric. 7. The non-transitory computer-readable medium of claim 1, wherein the program further comprises program code that, when executed, causes the at least one computing device to: identify a binary response from the input data; and apply a receiver operating curve (ROC), wherein the first metric for the first child node and the second metric for the second child node are generated based on the receiver operating curve (ROC). 8. A system, comprising: a server computing device in data communication with at least one client device over a network; program instructions executable by the at least one server computing device that, when executed by the at least one server computing device, cause the at least one server computing device to: access reference data from a data store accessible to the at least one computing device; parse the reference data using a natural language processor to identify relevant data; store the relevant data identified in at least one decision tree data structure; apply an ingestion process to receive input data from the at least one client device, the input data being received by the at least one computing device from the client device remotely over a transmission network; query the at least one decision tree data structure to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process; and generate a first metric for a first child node and a second metric for a second child node using the input data, the first child node and the second child node being children nodes of the node identified in the at least one decision tree data structure. 9. The system of claim 8, further comprising program instructions that, when executed, cause the at least one computing device to identify a subsequent state of the ingestion process based on the node identified in the at least one decision tree data structure. 10. The system of claim 8, further comprising program instructions that, when executed, cause the at least one computing device to: identify an unrecognizable term in the input data; and communicate with a service external to the at least one computing device to define the unrecognizable term through an application programming interface (API) of the service. 11. The system of claim 8, wherein the at least one client device comprises a plurality of client devices; and wherein the system further comprises program instructions that, when executed, cause the at least one computing device to: access input data provided from a first one of the plurality of client devices obtained during the state of the ingestion process; access input data provided from a second one of the plurality of client devices obtained during the state of the ingestion process; and identify that a conflict exists between the input data provided from the first one of the plurality of client devices and the input data provided from the second one of the plurality of client devices. 12. The system of claim 8, further comprising program instructions that, when executed, cause the at least one computing device to determine a confidence metric from the input data, wherein the first metric and the second metric are generated using the confidence metric. 13. The system of claim 12, further comprising program instructions that, when executed, cause the at least one computing device to determine a credibility metric associated with the client device from which the input data is received, wherein the confidence metric is determined using the credibility metric. 14. 
The system of claim 8, further comprising program instructions that, when executed, cause the at least one computing device to: identify a binary response from the input data; and apply a receiver operating curve (ROC), wherein the first metric for the first child node and the second metric for the second child node are generated based on the receiver operating curve (ROC). 15. A computer-implemented method, comprising: accessing, by at least one computing device comprising at least one hardware processor, reference data from a data store accessible to the at least one computing device; parsing, by the at least one computing device, the reference data using a natural language processor to identify relevant data; storing, by the at least one computing device, the relevant data identified in at least one decision tree data structure; applying, by the at least one computing device, an ingestion process to receive input data from at least one client device, the input data being received by the at least one computing device from the client device remotely over a transmission network; querying, by the at least one computing device, the at least one decision tree data structure to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process; and generating, by the at least one computing device, a first metric for a first child node and a second metric for a second child node using the input data, the first child node and the second child node being children nodes of the node identified in the at least one decision tree data structure. 16. The computer-implemented method of claim 15, further comprising identifying, by the at least one computing device, a subsequent state of the ingestion process based on the node identified in the at least one decision tree data structure. 17. The computer-implemented method of claim 15, further comprising: identifying, by the at least one computing device, an unrecognizable term in the input data; and communicating, by the at least one computing device, with a service external to the at least one computing device to define the unrecognizable term through an application programming interface (API) of the service. 18. The computer-implemented method of claim 15, wherein the at least one client device comprises a plurality of client devices; and wherein the computer-implemented method further comprises: accessing, by the at least one computing device, input data provided from a first one of the plurality of client devices obtained during the state of the ingestion process; accessing, by the at least one computing device, input data provided from a second one of the plurality of client devices obtained during the state of the ingestion process; and identifying, by the at least one computing device, that a conflict exists between the input data provided from the first one of the plurality of client devices and the input data provided from the second one of the plurality of client devices. 19. The computer-implemented method of claim 15, further comprising determining, by the at least one computing device, a confidence metric from the input data, wherein the first metric and the second metric are generated using the confidence metric. 20. 
The computer-implemented method of claim 19, further comprising determining, by the at least one computing device, a credibility metric associated with the client device from which the input data is received, wherein the confidence metric is determined using the credibility metric. | Please help me write a proper abstract based on the patent claims. | Disclosed are various embodiments for data processing using decision tree data structures to implement artificial intelligence in an ingestion process. At least one computing device may be employed to access reference data from a data store accessible to the at least one computing device and parse the reference data using a natural language processor to identify relevant data for storage in at least one decision tree data structure. An ingestion process is applied to receive input data from at least one client device remotely over a transmission network. The at least one decision tree data structure is queried to identify a node in the at least one decision tree data structure that corresponds to a state of the ingestion process. A first metric for a first child node and a second metric for a second child node are generated using the input data. |
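To make the claimed flow concrete — query the decision tree node for the current ingestion state, then score its two children with a confidence metric derived from the client's credibility (claims 5-7, 12-14, 19-20) — here is a minimal sketch. The tree contents, credibility table and keyword-overlap scoring rule are stand-ins invented for the example, not the actual ingestion logic.

```python
# Minimal sketch of querying a decision tree node for the current ingestion state and
# scoring its two children with a confidence metric derived from client credibility.
# The tree content and scoring rule are illustrative assumptions.

DECISION_TREE = {
    "symptom_intake": {
        "keywords": {"symptom", "pain", "fever"},
        "children": {
            "ask_duration": {"keywords": {"days", "weeks", "since"}},
            "ask_severity": {"keywords": {"severe", "mild", "scale"}},
        },
    },
}

CLIENT_CREDIBILITY = {"clinic-7": 0.9, "anonymous": 0.4}

def confidence_metric(input_tokens, client_id):
    credibility = CLIENT_CREDIBILITY.get(client_id, 0.5)
    completeness = min(1.0, len(input_tokens) / 10)      # longer answers carry more signal
    return credibility * completeness

def child_metrics(state, input_text, client_id):
    node = DECISION_TREE[state]                          # node matching the ingestion state
    tokens = set(input_text.lower().split())
    confidence = confidence_metric(tokens, client_id)
    scores = {}
    for name, child in node["children"].items():
        overlap = len(tokens & child["keywords"])
        scores[name] = confidence * overlap              # metric per child node
    return scores

if __name__ == "__main__":
    scores = child_metrics("symptom_intake", "severe pain since three days", "clinic-7")
    print(scores)
    print("next state:", max(scores, key=scores.get))    # subsequent state of the ingestion process
```

The highest-scoring child becomes the subsequent ingestion state, which is the behavior the dependent claims describe for advancing the dialog-like ingestion process.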
1. An artificial intelligence (“AI”) engine hosted on one or more remote servers configured to cooperate with one or more databases, comprising: one or more AI-engine modules including an architect module, an instructor module, and a learner module, wherein the architect module is configured to propose an AI model from an assembly code, and wherein the instructor module and the learner module are configured to train the AI model in one or more training cycles with training data from one or more training data sources, wherein the assembly code is generated from a source code written in a pedagogical programming language, wherein the source code includes a mental model of one or more concept modules to be learned by the AI model using the training data and curricula of one or more lessons for training the AI model on the one or more concept modules, and wherein the AI engine is configured to instantiate a trained AI model based on the one or more concept modules learned by the AI model in the one or more training cycles; and one or more server-side client-server interfaces configured to enable client interactions with the AI engine in one or both client interactions selected from submitting the source code for training the AI model and using the trained AI model for one or more predictions based upon the training data wherein the learner module and the instructor module are configured to pick out the curricula of the one or more lessons, thereby significantly cutting down on training time, memory, and computing cycles used by the AI engine for training the AI model. 2. The AI engine of claim 1, wherein the one or more server-side client-server interfaces are configured to cooperate with one or more client-side client-server interfaces selected from a command-line interface, a graphical interface, a web-based interface, or a combination thereof. 3. The AI engine of claim 1, further comprising a compiler configured to generate the assembly code from the source code; and a training data manager configured to push or pull the training data from one or more training data sources selected from a simulator, a training data generator, a training data database, or a combination thereof. 4. The AI engine of claim 1, wherein the AI engine is configured to operate in a training mode or a predicting mode during the one or more training cycles, wherein, in the training mode, the instructor module and the learner module are configured to i) instantiate the AI model conforming to the AI model proposed by the architect module and ii) train the AI model with the curricula of the one or more lessons, and wherein, in the predicting mode, a predictor AI-engine module is configured to i) instantiate and execute the trained AI model on the training data for the one or more predictions in the predicting mode. 5. The AI engine of claim 1, wherein the AI engine is configured to heuristically pick an appropriate learning algorithm from a plurality of machine learning algorithms in the one or more databases for training the AI model proposed by the architect module. 6. 
The AI engine of claim 5, wherein the architect module is configured to propose one or more additional AI models, wherein the AI engine is configured to heuristically pick an appropriate learning algorithm from the plurality of machine learning algorithms in the one or more databases for each of the one or more additional AI models, wherein the instructor module and the learner module are configured to train the AI models in parallel, wherein the one or more additional AI models are also trained in one or more training cycles with the training data from one or more training data sources, wherein the AI engine is configured to instantiate one or more additional trained AI models based on the concept modules learned by the one or more AI models in the one or more training cycles, and wherein the AI engine is configured to identify a best trained AI model among the trained AI models. 7. The AI engine of claim 6, further comprising: a trained AI-engine AI model, wherein the trained AI-engine AI model provides enabling AI for proposing the AI models from the assembly code and picking the appropriate learning algorithms from the plurality of machine learning algorithms in the one or more databases for training the AI models, and wherein the AI engine is configured to continuously train the trained AI-engine AI model in providing the enabling AI for proposing the AI models and picking the appropriate learning algorithms. 8. The AI engine of claim 6, further comprising: a meta-learning module configured to keep a record in the one or more databases for i) the source code processed by the AI engine, ii) mental models of the source code, iii) the training data used for training the AI models, iv) the trained AI models, v) how quickly the trained AI models were trained to a sufficient level of accuracy, and vi) how accurate the trained AI models became in making predictions on the training data. 9. The AI engine of claim 1, wherein the AI engine is configured to make determinations regarding i) when to train the AI model on each of the one or more concept modules and ii) how extensively to train the AI model on each of the one or more concept modules, and wherein the determinations are based on the relevance of each of the one or more concept modules in one or more predictions of the trained AI model based upon the training data. 10. The AI engine of claim 1, wherein the AI engine is configured to provide one or more training status updates on training the AI model selected from i) an estimation of a proportion of a training plan completed for the AI model, ii) an estimation of a completion time for completing the training plan, iii) the one or more concept modules upon which the AI model is actively training, iv) mastery of the AI model on learning the one or more concept modules, v) fine-grained accuracy and performance of the AI model on learning the one or more concept modules, and vi) overall accuracy and performance of the AI model on learning one or more mental models. 11. 
An artificial intelligence (“AI”) system, comprising: one or more remote servers including an AI engine including one or more AI-engine modules including an architect module, an instructor module, and a learner module, wherein the architect module is configured to propose an AI model from an assembly code, and wherein the instructor module and the learner module are configured to train the AI model in one or more training cycles with training data; a compiler configured to generate the assembly code from a source code written in a pedagogical programming language, wherein the source code includes a mental model of one or more concept modules to be learned by the AI model using the training data and curricula of one or more lessons for training the AI model on the one or more concept modules, and wherein the AI engine is configured to instantiate a trained AI model based on the concept modules learned by the AI model in the one or more training cycles; one or more databases; and one or more server-side client-server interfaces configured to enable client interactions with the AI engine; and one or more local clients including a coder for generating the source code written in the pedagogical programming language; and one or more client-side client-server interfaces configured to enable client interactions with the AI engine in one or both client interactions selected from submitting the source code for training the AI model and using the trained AI model for one or more predictions based upon the training data, wherein the one or more client-side client-server interfaces are selected from a command-line interface, a graphical interface, a web-based interface, or a combination thereof, and wherein the AI system includes at least one server-side training data source or at least one client-side training data source. 12. A method for an artificial intelligence (“AI”) engine hosted on one or more remote servers configured to cooperate with one or more databases, comprising: proposing an AI model, wherein the AI engine includes an architect AI-engine module for proposing the AI model from an assembly code; training the AI model, wherein the AI engine includes an instructor AI-engine module and a learner AI-engine module for training the AI model in one or more training cycles with training data from one or more training data sources; compiling the assembly code from a source code, wherein a compiler is configured to generate the assembly code from the source code written in a pedagogical programming language, wherein the source code includes a mental model of one or more concept modules to be learned by the AI model using the training data and curricula of one or more lessons for training the AI model on the one or more concept modules; instantiating a trained AI model, wherein the AI engine is configured for instantiating the trained AI model based on the concept modules learned by the AI model in the one or more training cycles; and enabling client interactions, wherein one or more server-side client-server interfaces are configured for enabling client interactions with the AI engine in one or both client interactions selected from submitting the source code for training the AI model and using the trained AI model for one or more predictions based upon the training data. 13. 
The method of claim 12, further comprising: pushing or pulling the training data, wherein a training data manager is configured for pushing or pulling the training data from one or more training sources selected from a simulator, a training data generator, a training data database, or a combination thereof. 14. The method of claim 12, further comprising: operating the AI engine in a training mode or a predicting mode during the one or more training cycles, wherein, in the training mode, the instructor module and the learner module are configured to i) instantiate the AI model conforming to the AI model proposed by the architect module and ii) train the AI model, and wherein, in the predicting mode, a predictor AI module is configured to i) instantiate and execute the trained AI model on the training data for the one or more predictions in the predicting mode. 15. The method of claim 12, further comprising: heuristically picking an appropriate learning algorithm, wherein the AI engine is configured for picking the appropriate learning algorithm from a plurality of machine learning algorithms in the one or more databases for training the AI model proposed by the architect module. 16. The method of claim 15, further comprising: proposing one or more additional AI models, wherein the architect module is configured for proposing the one or more additional AI models; heuristically picking an appropriate learning algorithm from the plurality of machine learning algorithms in the one or more databases with the AI engine for each of the one or more additional AI models; training the AI models in parallel with the instructor module and learner module, wherein the one or more additional AI models are also trained in one or more training cycles with the training data from one or more training data sources; instantiating one or more additional trained AI models with the AI engine based on the concept modules learned by the one or more AI models in the one or more training cycles; and identifying a best trained AI model among the trained AI models with the AI engine. 17. The method of claim 16, further comprising: providing enabling AI for proposing the AI models from the assembly code and picking the appropriate learning algorithms from the plurality of machine learning algorithms in the one or more databases for training the AI models; and continuously training a trained AI-engine AI model with the AI engine to provide the enabling AI for proposing the AI models and picking the appropriate learning algorithms. 18. The method of claim 16, further comprising: keeping a record in the one or more databases with a meta-learning module, wherein the record includes i) the source code processed by the AI engine, ii) mental models of the source code, iii) the training data used for training the AI models, iv) the trained AI models, v) how quickly the trained AI models were trained to a sufficient level of accuracy, and vi) how accurate the trained AI models became in making predictions on the training data. 19. The method of claim 12, further comprising: making determinations with the AI engine regarding i) when to train the AI model on each of the one or more concept modules and ii) how extensively to train the AI model on each of the one or more concept modules, wherein the determinations are based on the relevance of each of the one or more concept modules in one or more predictions of the trained AI model based upon the training data. 20. 
The method of claim 12, further comprising: providing one or more training status updates with the AI engine on training the AI model, wherein the one or more training status updates are selected from i) an estimation of a proportion of a training plan completed for the AI model, ii) an estimation of a completion time for completing the training plan, iii) the one or more concept modules upon which the AI model is actively training, iv) mastery of the AI model on learning the one or more concept modules, v) fine-grained accuracy and performance of the AI model on learning the one or more concept modules, and vi) overall accuracy and performance of the AI model on learning one or more mental models. | Please help me write a proper abstract based on the patent claims. | Provided herein in some embodiments is an artificial intelligence (“AI”) engine hosted on one or more remote servers configured to cooperate with one or more databases including one or more AI-engine modules and one or more server-side client-server interfaces. The one or more AI-engine modules include an instructor module and a learner module configured to train an AI model. An assembly code can be generated from a source code written in a pedagogical programming language describing a mental model of one or more concept modules to be learned by the AI model and curricula of one or more lessons for training the AI model. The one or more server-side client-server interfaces can be configured to enable client interactions from a local client such as submitting the source code for training the AI model and using the trained AI model for one or more predictions. |
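The architect/instructor/learner split in the preceding claims can be pictured with a toy orchestration loop. A minimal Python sketch, assuming invented class names, a made-up linear "concept" to learn, and a trivial curriculum (none of these details come from the patent itself):

```python
# Illustrative sketch only: an Architect proposes a model spec, a Learner holds
# parameters, and an Instructor runs the lessons. All names and the toy
# regression task are assumptions, not the patented implementation.
import random


class Architect:
    """Proposes a candidate model topology (here: just a polynomial degree)."""
    def propose(self):
        return {"degree": random.choice([1, 2, 3])}


class Learner:
    """Holds model parameters and applies gradient updates."""
    def __init__(self, spec):
        self.weights = [0.0] * (spec["degree"] + 1)

    def predict(self, x):
        return sum(w * x ** i for i, w in enumerate(self.weights))

    def update(self, x, target, lr=0.05):
        err = self.predict(x) - target
        self.weights = [w - lr * err * x ** i for i, w in enumerate(self.weights)]


class Instructor:
    """Runs the curriculum: feeds lessons (training data) to the learner."""
    def train(self, learner, lessons, cycles=2000):
        for _ in range(cycles):
            x, y = random.choice(lessons)
            learner.update(x, y)
        return learner


if __name__ == "__main__":
    lessons = [(x / 10.0, 2.0 * (x / 10.0) + 1.0) for x in range(-10, 11)]
    model = Instructor().train(Learner(Architect().propose()), lessons)
    print("prediction at x=0.5:", model.predict(0.5))
```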
1. A neuromorphic computing system comprising: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synaptic core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, and generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request. 2. The neuromorphic computing system of claim 1, wherein the synapse core is to: transmit (i) a first weighted spike to the first address of the first post-synaptic neuron and (ii) a second weighted spike to the second address of the second post-synaptic neuron. 3. The neuromorphic computing system of claim 1, wherein the synapse core is to generate the first address and the second address by: a finite field mathematical function which is to apply to a first seed number to generate the first address; and the finite field mathematical function which is to apply to a second seed number to generate the second address. 4. The neuromorphic computing system of claim 1, wherein the synapse core is to generate the first address and the second address by: a finite field mathematical function which is to apply to a first seed number to generate the first address and the second address. 5. The neuromorphic computing system of claim 1, wherein the synapse core is to generate the first address by: a finite field mathematical function which is to apply to a seed number to generate least significant bits (LSBs) of the first address; a storage which is to be accessed to retrieve most significant bits (MSBs) of the first address; and the first address which is to generate based on the LSBs of the first address and the MSBs of the first address. 6. The neuromorphic computing system of claim 1, wherein the first post-synaptic neuron is included in a first core of the neuromorphic computing system, and wherein the synapse core is to generate the first address by: a Galois field function which is to apply to a seed number to generate an identification of the first post-synaptic neuron within the first core; a storage which is to be accessed to retrieve an identification of the first core; and the first address which is to be generated based on the identification of the first post-synaptic neuron and the identification of the first core. 7. The neuromorphic computing system of claim 1, wherein the synapse core is to: associate a first weight with a first spike to generate the first weighted spike; and associate a second weight with a second spike to generate the second weighted spike. 8. The neuromorphic computing system of claim 7, further comprising: a memory to store the first weight and the second weight; and one or more registers to store a plurality of seed numbers, wherein the first address and the second address are to be generated based on one or more seed numbers of the plurality of seed numbers. 9. The neuromorphic computing system of claim 8, further comprising: circuitry to update the first weight and the second weight in the memory. 10. 
A synapse core of a neuromorphic computing system, the synapse core comprising: mapping logic to (i) receive a request, the request comprising an identification of a pre-synaptic neuron that generated the request, (ii) access a seed number based on the identification of the pre-synaptic neuron, and (iii) map the seed number to an identification of a post-synaptic neuron that is included in a first core of the neuromorphic computing system; and a first storage to provide an identification of the first core, wherein the synapse core is to (i) generate an address of the post-synaptic neuron, based at least in part on the identification of the post-synaptic neuron and the identification of the first core, and (ii) transmit a spike to the address of the post-synaptic neuron. 11. The synapse core of claim 10, wherein the request is a first request, wherein the seed number is a first seed number, and wherein the mapping logic is to: receive a second request, the second request comprising the identification of the post-synaptic neuron that generated the second request; access a second seed number based on the identification of the post-synaptic neuron; and map the second seed number to the identification of the pre-synaptic neuron. 12. The synapse core of claim 11, wherein: the mapping logic is to (i) map the seed number to the identification of the post-synaptic neuron using at least in part a first mathematical function, and (ii) map the second seed number to the identification of the pre-synaptic neuron using at least in part a second mathematical function, wherein the second mathematical function is an inverse of the first mathematical function. 13. The synapse core of claim 10, wherein the synapse core is to: associate a weight to the spike, prior to the transmission of the spike to the address of the post-synaptic neuron. 14. The synapse core of claim 11, further comprising: a memory to store the weight. 15. The synapse core of claim 14, further comprising: circuitry to update the weight in the memory, wherein the circuitry comprises: a first circuitry to generate a change in weight; a second circuitry to read an original weight from the memory; a third circuitry to generate an updated weight based on the change in weight and the original weight; and a fourth circuitry to write the updated weight to the memory. 16. The synapse core of claim 10, wherein: the request includes a sparsity number; and the mapping logic is to map the seed number to identifications of a first number of post-synaptic neurons, the first number based on the sparsity number. 17. One or more non-transitory computer-readable storage media to store instructions that, when executed by a processor, cause the processor to: receive a request from a pre-synaptic neuron; generate, in response to the request, an address of a post-synaptic neuron, wherein the address is not stored in an apparatus, which comprises the processor, prior to receiving the request; and transmit a weighted spike to the address of the post-synaptic neuron. 18. The one or more non-transitory computer-readable storage media of claim 17, wherein the instructions, when executed, cause the processor to: apply a finite field mathematical function to a seed number to generate a first section of the address. 19. 
The one or more non-transitory computer-readable storage media of claim 18, wherein the instructions, when executed, cause the processor to: access a storage to retrieve a second section of the address; and generate the address based on the first section and the second section. 20. The one or more non-transitory computer-readable storage media of claim 17, wherein the instructions, when executed, cause the processor to: apply a Galois field function to a seed number to generate an identification of the post-synaptic neuron, wherein the post-synaptic neuron is included in a core of a neuromorphic computing system; access a storage to retrieve an identification of the core; and generate the address based on the identification of the post-synaptic neuron and the identification of the core. 21. The one or more non-transitory computer-readable storage media of claim 17, wherein the instructions, when executed, cause the processor to: associate a synaptic weight with a spike to generate the weighted spike. | Please help me write a proper abstract based on the patent claims. | A neuromorphic computing system is provided which comprises: a synapse core; and a pre-synaptic neuron, a first post-synaptic neuron, and a second post-synaptic neuron coupled to the synaptic core, wherein the synapse core is to: receive a request from the pre-synaptic neuron, generate, in response to the request, a first address of the first post-synaptic neuron and a second address of the second post-synaptic neuron, wherein the first address and the second address are not stored in the synapse core prior to receiving the request. |
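The address-generation scheme in the preceding claims (a finite-field function applied to a seed supplies the low-order bits, stored core identifiers supply the high-order bits) can be illustrated with a small sketch. Here a Galois LFSR step stands in for the unspecified finite-field function; the tap polynomial, bit widths, and table contents are assumptions for demonstration only:

```python
# Sketch: derive post-synaptic addresses from a seed instead of storing them.
# LSBs come from a Galois-LFSR transformation of the seed; MSBs come from a
# small core-identifier table. All constants below are invented.

NEURON_BITS = 8
TAPS = 0b10111000                  # assumed taps for an 8-bit Galois LFSR
CORE_ID_TABLE = {0: 0x2, 1: 0x5}   # assumed per-seed-slot core identifiers


def lfsr_step(state: int) -> int:
    """One Galois-LFSR step: shift right, XOR the taps when the dropped bit is 1."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= TAPS
    return state & ((1 << NEURON_BITS) - 1)


def generate_addresses(seed: int, slot: int, fanout: int = 3) -> list[int]:
    """Derive `fanout` target addresses from one seed, with no per-target storage:
    the neuron id (LSBs) comes from the LFSR, the core id (MSBs) from the table."""
    addresses = []
    state = seed & ((1 << NEURON_BITS) - 1)
    msbs = CORE_ID_TABLE[slot] << NEURON_BITS
    for _ in range(fanout):
        state = lfsr_step(state)
        addresses.append(msbs | state)
    return addresses


if __name__ == "__main__":
    print([hex(a) for a in generate_addresses(seed=0x5A, slot=0)])
```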
1. A computer-implemented method that improves computer-user interaction by generating an answer to a question input related to an image input, the method comprising: receiving a question input in a natural language form; receiving an image input related to the question input; and inputting the question input and the image input into a multimodal question answering (mQA) model to generate an answer comprising multiple words generated sequentially, the mQA model comprising: a first component that encodes the question input into a dense vector representation; a second component to extract a visual representation of the image input; a third component to extract representation of a current word in the answer and its linguistic context; and a fourth component to generate a next word after the current word in the answer using a fusion comprising the dense vector representation, the visual representation, and the representation of the current word. 2. The computer-implemented method of claim 1 wherein the first component is a first long short term memory (LSTM) network comprising a first word embedding layer and a first LSTM layer. 3. The computer-implemented method of claim 2 wherein the third component is a second LSTM network comprising a second word embedding layer and a second LSTM layer. 4. The computer-implemented method of claim 3 wherein the first word-embedding layer shares a weight matrix with the second word-embedding layer. 5. The computer-implemented method of claim 3 wherein the first LSTM layer does not share a weight matrix with the second LSTM layer. 6. The computer-implemented method of claim 1 wherein the second component is a deep Convolutional Neural network (CNN). 7. The computer-implemented method of claim 1 wherein the CNN is pre-trained and is fixed during question answering training. 8. The computer-implemented method of claim 1 wherein the first, the third, and the fourth components are jointly trained together. 9. The computer-implemented method of claim 3 wherein the fourth component is a fusing component comprising: a fusing layer that fuses information from the first LSTM layer, the second LSTM layer, and the second component to generate a dense multimodal representation for the current word in the answer; an intermediate layer that maps the dense multimodal representation in the fusing layer to a dense word representation; and a softmax layer that predicts a probability distribution of the next word in the answer. 10. A computer-implemented method that improves computer-user interaction by generating an answer to a question input related to an image input, the method comprising: extracting a semantic meaning of a question input using a first long short term memory (LSTM) component comprising a first word-embedding layer and a first LSTM layer; generating a representation of an image input related to the question input using a deep Convolutional Neural network (CNN) component; extracting a representation of a current word of the answer using a second LSTM component comprising a second word-embedding layer and a second LSTM layer; and fusing the semantic meaning, the representation of the image input, and a representation of the current word of the answer to predict a next word of the answer. 11. The computer-implemented method of claim 10 wherein the first word-embedding layer shares a weight matrix with the second word-embedding layer. 12. 
The computer-implemented method of claim 10 wherein the first LSTM layer does not share a weight matrix with the second LSTM layer. 13. The computer-implemented method of claim 10 wherein the deep CNN is pre-trained and is fixed during question answering training. 14. The computer-implemented method of claim 11 wherein predicting the next word in the answer further comprises: fusing information from the first LSTM layer, the second LSTM layer, and the CNN in a fusion layer to generate a dense multimodal representation for the current answer word; mapping in an intermediate layer the dense multimodal representation to a dense word representation; and predicting in a softmax layer a probability distribution of the next word in the answer. 15. The computer-implemented method of claim 14 wherein the dense multimodal representation in the fusion layer is a non-linear activation function. 16. The computer-implemented method of claim 15 wherein the non-linear activation function is a scaled hyperbolic tangent function. 17. The computer-implemented method of claim 15 wherein the first word-embedding layer, the second word-embedding layer, and the softmax layer share a weight matrix. 18. A non-transitory computer-readable medium or media comprising one or more sequences of instructions which, when executed by one or more processors, causes the steps to be performed comprising: responsive to receiving from a user a question input, extracting a semantic meaning of the question input; responsive to receiving an image input related to the question input, generating a representation of the image input; starting with a start sign as a current word in an answer to the question input based upon the image input, generating a next answer word based on a fusion of the semantic meaning, the representation of the image input, and a semantic current answer word and adding it to the answer; repeating the next answer word generating step until an end sign of the answer is generated; and responsive to obtaining an end sign, outputting the answer. 19. The non-transitory computer-readable medium or media of claim 18 wherein generating the next answer word comprises: fusing information from the first LSTM layer, the second LSTM layer, and the CNN in a fusion layer for a dense multimodal representation for the current answer word; mapping in an intermediate layer the dense multimodal representation to a dense word representation; and predicting in a softmax layer a probability distribution of the next word in the answer. 20. The non-transitory computer-readable medium or media of claim 19 wherein the softmax layer has a weight matrix to decode the dense word representation into a pseudo one-word representation. | Please help me write a proper abstract based on the patent claims. | Embodiments of a multimodal question answering (mQA) system are presented to answer a question about the content of an image. In embodiments, the model comprises four components: a Long Short-Term Memory (LSTM) component to extract the question representation; a Convolutional Neural Network (CNN) component to extract the visual representation; an LSTM component for storing the linguistic context in an answer, and a fusing component to combine the information from the first three components and generate the answer. A Freestyle Multilingual Image Question Answering (FM-IQA) dataset was constructed to train and evaluate embodiments of the mQA model. 
The quality of the generated answers of the mQA model on this dataset is evaluated by human judges through a Turing Test. |
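The fusion step in the mQA claims above (question, image, and current-answer-word representations combined through a scaled-tanh fusing layer, mapped to a dense word representation, and decoded through a shared word embedding) can be sketched numerically. A toy NumPy version with placeholder dimensions, random weights, and a five-word vocabulary, not the trained model from the patent:

```python
# Toy fusion + decoding step. Dimensions, weights, and vocabulary are assumptions.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["<start>", "a", "red", "ball", "<end>"]
d_q, d_img, d_ans, d_fuse, d_emb = 16, 32, 16, 24, 8

E = rng.normal(size=(len(vocab), d_emb))   # word embedding, also reused to decode
W_q = rng.normal(size=(d_fuse, d_q))
W_img = rng.normal(size=(d_fuse, d_img))
W_ans = rng.normal(size=(d_fuse, d_ans))
W_dec = rng.normal(size=(d_emb, d_fuse))   # intermediate layer: fused -> word space


def next_word_distribution(q_vec, img_vec, ans_vec):
    # Fusing layer: non-linear mix of the three modalities (scaled tanh).
    pre = W_q @ q_vec + W_img @ img_vec + W_ans @ ans_vec
    fused = 1.7159 * np.tanh(2.0 * pre / 3.0)
    # Intermediate layer maps into the embedding space; the transposed shared
    # embedding then scores every vocabulary word, followed by a softmax.
    logits = E @ (W_dec @ fused)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()


probs = next_word_distribution(rng.normal(size=d_q),
                               rng.normal(size=d_img),
                               rng.normal(size=d_ans))
print(dict(zip(vocab, probs.round(3))))
```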
1. A method of detecting anomaly in a target system, comprising: receiving input data associated with the target system; generating a prediction by executing one or more predictive algorithms based on the received input data; generating a current accuracy score representing accuracy of the prediction made by the predictive algorithm; and determining an anomaly score representing likelihood that the target system is in an anomalous state based on the current or one or more recent accuracy scores by referencing an anomaly model representing an anticipated range, or distribution, of accuracy scores made by the predictive model. 2. The method of claim 1, further comprising comparing the prediction with an actual value corresponding to the prediction to generate the current accuracy score. 3. The method of claim 1, further comprising generating the anomaly model by analyzing a plurality of prior accuracy scores generated prior to generating of the current accuracy score, the prior accuracy scores generated by executing the predictive algorithm based on training data or prior input data and comparing the plurality of predictions against a plurality of corresponding actual values. 4. The method of claim 1, wherein the accuracy score takes one of a plurality of discrete values, and the likelihood is determined by computing a difference in cumulative distribution function (CDF) values at an upper end and a lower end of one of the plurality of discrete values. 5. The method of claim 1, wherein determining the likelihood comprises: computing a running average of the current accuracy score and prior accuracy scores preceding the current accuracy score; and determining the anomaly score by identifying an output value of the anomaly model corresponding to the running average. 6. The method of claim 5, wherein a number of the prior accuracy scores for computing the running average is dynamically changed based on predictability of the input data. 7. The method of claim 1, further comprising aggregating the accuracy score with one or more prior accuracy scores generated using the input data at time steps prior to a current time step for computing the current accuracy score. 8. The method of claim 7, further comprising receiving a user input indicating a time period represented by the aggregated accuracy score. 9. The method of claim 8, further comprising increasing or decreasing a time period represented by the aggregated accuracy score responsive to receiving another user input. 10. The method of claim 1, wherein the predictive algorithm generates the prediction using a hierarchical temporal memory (HTM) or a cortical learning algorithm. 11. The method of claim 1, further comprising generating a plurality of predictions including the prediction and a corresponding plurality of current accuracy scores based on the same input data, each of the plurality of predictions associated with a different parameter of the target system, the likelihood that the target system is in the anomalous state is determined based on a combined accuracy score that combines the plurality of current accuracy scores. 12. 
The method of claim 1, further comprising generating a plurality of predictions including the prediction and a corresponding plurality of current accuracy scores based on the same input data and associated with different parameters of the target system, the likelihood that the target system is in the anomalous state is determined based on a change in correlation of at least two of the plurality of current accuracy scores. 13. An anomaly detector for detecting an anomalous state in a target system, comprising: a processor; a data interface configured to receive input data associated with the target system; a predictive algorithm module configured to: generate a prediction by executing one or more predictive algorithms based on the received input data, and generate a current accuracy score representing accuracy of the prediction; and an anomaly processor configured to determine an anomaly score representing likelihood that the target system is in an anomalous state based on the current accuracy score by referencing an anomaly model representing an anticipated range, or distribution, of accuracy of predictions made by the predictive model. 14. The anomaly detector of claim 13, wherein the predictive algorithm module is further configured to compare the prediction with an actual value corresponding to the prediction to generate the current accuracy score. 15. The anomaly detector of claim 13, wherein the anomaly processor is configured to generate the anomaly model by analyzing a plurality of prior accuracy scores generated prior to generating the current accuracy score, the prior accuracy scores generated by executing the predictive algorithm based on training data or prior input data provided to the predictive algorithm and comparing the plurality of predictions against a plurality of corresponding actual values. 16. The anomaly detector of claim 13, wherein the accuracy score takes one of a plurality of discrete values, and the anomaly processor is configured to determine the likelihood by computing a difference in cumulative distribution function (CDF) values at an upper end and a lower end of one of the plurality of discrete values. 17. The anomaly detector of claim 13, wherein the anomaly processor is further configured to: compute a running average of the current accuracy score and prior accuracy scores preceding the current accuracy score; and determine the anomaly score by identifying an output value of the anomaly model corresponding to the running average. 18. The anomaly detector of claim 17, wherein a number of the prior accuracy scores for computing the running average is dynamically changed based on predictability of the input data. 19. The anomaly detector of claim 13, further comprising a statistics module configured to aggregate the accuracy score with one or more prior accuracy scores generated using the input data at time steps prior to a current time step for computing the current accuracy score. 20. 
A non-transitory computer readable storage medium storing instructions thereon, the instructions when executed by a processor causing the processor to: receive input data associated with the target system; generate a prediction by executing one or more predictive algorithms based on the received input data; generate a current accuracy score representing accuracy of the prediction made by the predictive algorithm; and determine an anomaly score representing likelihood that the target system is in an anomalous state based on the current accuracy score by referencing an anomaly model representing an anticipated range, or distribution in accuracy of predictions made by the predictive model. | Please help me write a proper abstract based on the patent claims. | Embodiments relate to determining likelihood of presence of anomaly in a target system based on the accuracy of the predictions. A predictive model makes predictions based at least on the input data from the target system that change over time. The accuracy of the predictions over time is determined by comparing actual values against predictions for these actual values. The accuracy of the predictions is analyzed to generate an anomaly model indicating anticipated changes in the accuracy of predictions made by the predictive model. When the accuracy of subsequent predictions does not match the range or distribution as anticipated by the anomaly model, a determination can be made that the target system is likely in an anomalous state. |
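One plausible reading of the anomaly scoring described above: fit an "anomaly model" to historical prediction-accuracy scores, smooth recent scores with a running average, and report how unlikely an accuracy that low would be under the model. A small sketch assuming a Gaussian anomaly model, a fixed window, and made-up scores:

```python
# Sketch only: the Gaussian choice, window size, and sample data are assumptions.
from statistics import NormalDist, mean, stdev


def anomaly_score(history, recent, window=5):
    """Return a value in [0, 1]. Values near 1 mean the recent accuracy is far
    below the anticipated range; values near 0.5 mean it is typical."""
    model = NormalDist(mean(history), stdev(history))   # anomaly model from history
    running_avg = mean(recent[-window:])                # smooth recent accuracy
    # model.cdf(x) = P(accuracy <= x): small when accuracy is abnormally low.
    return 1.0 - model.cdf(running_avg)


historical_accuracy = [0.90, 0.92, 0.88, 0.91, 0.89, 0.93, 0.90, 0.87, 0.91, 0.92]
recent_accuracy = [0.65, 0.60, 0.58, 0.62, 0.59]   # prediction quality collapsed
print(round(anomaly_score(historical_accuracy, recent_accuracy), 4))
```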
1. A non-transitory computer-accessible medium having stored thereon computer-executable instructions for generating at least one model, wherein, when a computer arrangement executes the instructions, the computer arrangement is configured to perform procedures, comprising: receiving first information related to raw data; generating second information by formatting the first information; generating third information related to at least one feature set of the second information; and generating the at least one model based on the second and third information. 2. The computer-accessible medium of claim 1, wherein the computer arrangement is further configured to (i) receive fourth information related to a user-defined regularization of the second information, and (ii) generate fifth information based on a reformatting of the second information using the fourth information. 3. The computer-accessible medium of claim 1, wherein the computer arrangement is further configured to generate at least one prediction based on the at least one model. 4. The computer-accessible medium of claim 3, wherein the computer arrangement is further configured to generate the at least one prediction based on at least one time horizon. 5. The computer-accessible medium of claim 3, wherein the computer arrangement is further configured to determine fourth information related to a potential information value for the second information based on the third information. 6. The computer-accessible medium of claim 5, wherein the computer arrangement is further configured to determine the fourth information using at least one of a simple regression procedure or a correlation analysis. 7. The computer-accessible medium of claim 3, wherein the second information includes a plurality of discrete data columns, and wherein computer arrangement is further configured to generate a plurality of equations based on a plurality of combinations of a set of data columns of the data columns. 8. The computer-accessible medium of claim 5, wherein the second information includes a plurality of discrete data columns, and wherein the computer arrangement is further configured to determine fifth information related to how a first data column of the data columns is linked with at least one further data column of the data columns. 9. The computer-accessible medium of claim 8, wherein the computer arrangement is further configured to generate the third information based on the second information. 10. The computer-accessible medium of claim 9, wherein the computer arrangement is further configured to assign a score to each set of the feature sets based on a correlation of each respective one of the feature sets to the at least one prediction. 11. The computer-accessible medium of claim 10, wherein the computer arrangement is further configured to select a particular feature set based on the score. 12. The computer-accessible medium of claim 1, wherein the computer arrangement is configured to generate the at least one model using an islanding procedure based on the first information and the second information. 13. The computer-accessible medium of claim 12, wherein the islanding procedure includes generating, using the computer arrangement, a plurality of subsets of the second information. 14. The computer-accessible medium of claim 13, wherein the islanding procedure further includes assigning one or more species to each subset of the subsets using the computer arrangement. 15. 
The computer-accessible medium of claim 14, wherein the computer arrangement is configured to assign the one or more species based on a performance of each subset. 16. The computer-accessible medium of claim 15, wherein the performance includes a comparison of each subset relative to its historical performance. 17. The computer-accessible medium of claim 1, wherein the computer arrangement is configured to generate the at least one model using at least one neural network. 18. The computer-accessible medium of claim 17, wherein the at least one neural network is at least one evolutionary neural network. 19. The computer-accessible medium of claim 18, wherein the at least one model is at least one genomic model, and the at least one evolutionary neural network is at least one evolutionary neural network with at least one of at least one mutation or at least one recombination. 20. The computer-accessible medium of claim 19, wherein the at least one of the at least one mutation or the at least one recombination includes at least one rate that is tunable using at least one hyperparameter. 21. A method for generating at least one model, comprising: receiving first information related to raw data; generating second information by formatting the first information; generating third information related to at least one feature set of the second information; and using a computer hardware arrangement, generating the at least one model based on the second and third information. 22-40. (canceled) 41. A system for generating at least one model, comprising: at least one computer hardware arrangement configured to: receive first information related to raw data; generate second information by formatting the first information; generate third information related to at least one feature set of the second information; and generate the at least one model based on the second and third information. 42-60. (canceled) | Please help me write a proper abstract based on the patent claims. | An exemplary system, method and computer-accessible medium for generating a model(s), can include, for example, receiving first information related to raw data, generating second information by formatting the first information, generating third information related to a feature set(s) of the second information, generating the model(s) based on the second and third information. Fourth information related to a user-defined regularization of the second information can be received, fifth information can be generated based on a reformatting of the second information using the fourth information. A prediction(s) can be generated based on the model(s). The prediction(s) can be generated based on a time horizon(s). |
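The evolutionary-neural-network angle in the preceding claims (mutation and recombination with tunable rates) can be illustrated with a bare-bones evolutionary search over a weight vector. Everything here, including the fitness function, target values, and rates, is an assumption for demonstration:

```python
# Minimal evolutionary search: mutate/recombine candidate genomes, keep the best.
import random

random.seed(1)
TARGET = [0.5, -1.2, 3.0]                     # assumed "true" coefficients


def fitness(genome):
    # Negative squared error against the assumed target; higher is better.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))


def mutate(genome, rate=0.3, scale=0.2):      # mutation rate is a hyperparameter
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]


def recombine(a, b, rate=0.5):                # recombination rate is a hyperparameter
    return [x if random.random() < rate else y for x, y in zip(a, b)]


def evolve(pop_size=20, generations=200):
    population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # survivors
        children = [mutate(recombine(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)


print([round(g, 2) for g in evolve()])
```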
1. A method for recommending a window treatment fabric, the method comprising: determining at least one position of window treatments that are controlled by an automated window treatment control system, wherein the at least one position of the window treatments causes at least a portion of at least one window of an interior space to be covered by the window treatments within at least one calendar day, wherein the at least one position is determined during at least two different time frames within the at least one calendar day; and presenting a recommendation to a user for at least one fabric of the window treatments to be used for at least one window, wherein the recommendation is based on the determined at least one position of the window treatments that are controlled by the automated window treatment control system. 2. The method of claim 1, wherein the determined at least one position of the window treatments is determined based on automated window treatment control information. 3. The method of claim 2, wherein the automated window treatment control information includes an angle of the sun, sensor information, an amount of cloud cover, or weather data. 4. The method of claim 2, wherein the automated window control system is configured to adjust the positions of the window treatments in response to at least one light intensity measured by a sensor. 5. The method of claim 2, wherein the automated window control system is configured to adjust the positions of the window treatments at intervals to minimize occupant distractions. 6. The method of claim 2, wherein the determined at least one position of the window treatments is determined based on a calculated angle of the sun to limit a sunlight penetration distance in a space of a building. 7. The method of claim 1, further comprising: computing at least one associated score for the at least one fabric of the window treatment based on at least one predicted performance metric, wherein the recommendation is based on the at least one associated score for the at least one fabric of the window treatment. 8. The method of claim 7, wherein the at least one associated score comprises at least one of a glare score that indicates a predicted amount of glare resulting in a building from use of the at least one fabric in the window treatment, a view score that indicates an occupant's predicted amount of view out of the at least one window when the window treatment is installed, or a daylight score that indicates a predicted amount of daylight resulting in the interior space from use of the fabric in the window treatment. 9. The method of claim 1, wherein the interior space is within a building, and wherein the automated window treatment control system determines the at least one position of the at least one window of the interior space to be covered by the window treatments based at least on astronomical information about the building. 10. 
The method of claim 1, wherein the automated window treatment control system determines the at least one position of the at least one window treatment to affect at least one predicted performance metric associated with daylight entering the interior space through the at least one window, wherein the at least one predicted performance metric comprises at least one of a daylight glare probability, a spatial daylight autonomy, or a view preservation, and wherein the daylight glare probability indicates a maximum daylight glare intensity over a period of time, the spatial daylight autonomy indicates an amount of floor space in a building where daylight alone may provide light over a period of time, and view preservation indicates an amount of the at least one window that may be unobstructed by the window treatment. 11. The method of claim 1, wherein at least one predicted performance metric is received from a fabric performance engine that calculates the at least one predicted performance metric based on environmental characteristics associated with a building in which the at least one fabric of the window treatments will be installed and fabric data associated with the at least one fabric. 12. The method of claim 1, wherein the window treatment comprises at least one of a manual window treatment or an automated window treatment, and wherein at least one predicted performance metric is based on a predicted performance when installed in the at least one of the manual window treatment or the automated window treatment. 13. The method of claim 12, wherein ranking the plurality of fabrics further comprises ranking the plurality of fabrics based on the predicted performance metrics when the fabric is installed in the at least one of the manual window treatment or the automated window treatment. 14. The method of claim 12, further comprising: comparing the at least one predicted performance metric of each fabric of the plurality of fabrics when installed in the automated window treatment to the at least one predicted performance metric of the fabric when installed in the manual window treatment; and displaying the at least one performance metric of the recommended fabric when installed in the automated window treatment and the at least one performance metric of the recommended fabric when installed in the manual window treatment on a visual display. 15. A motorized window treatment configured to be mounted adjacent to a window of an interior space, the motorized window treatment comprising: a motor drive unit responsive to an automated control system; and a window treatment configured to be installed on or around the window in such a way that the motor drive unit is configured to adjust a position of the window treatment in response to the automated control system, the window treatment selected by determining at least one position of the window treatment as controlled by the automated control system to cause at least a portion of the window to be covered by the window treatment during at least two different time frames within the at least one calendar day, and presenting a recommendation to a user for at least one fabric of the window treatment to be used for the window, where the recommendation is based on the determined at least one position of the window treatment as controlled by the automated control system. 16. 
The motorized window treatment of claim 15, wherein the motor drive unit is configured to receive a digital message and to adjust the position of the window treatment in response to the received digital message. 17. The motorized window treatment of claim 16, wherein the motor drive unit is configured to adjust the position of the window treatment in response to at least one light intensity measured by a sensor. 18. The motorized window treatment of claim 16, wherein the motor drive unit is configured to adjust the position of the window treatment in response to the digital message at intervals to minimize occupant distractions. 19. The motorized window treatment of claim 16, wherein the motor drive unit is configured to adjust the position of the window treatment in response to the digital message to limit a sunlight penetration distance in a space in which the window is located. | Please help me write a proper abstract based on the patent claims. | A fabric selection tool provides an automated procedure for recommending and/or selecting a fabric for a window treatment to be installed in a building. The recommendation may be made to optimize the performance of the window treatment in which the fabric may be installed. The recommended fabric may be selected based on performance metrics associated with each fabric in an environment. The fabrics may be ranked based upon the performance metrics of one or more of the fabrics. One or more of the fabrics, and/or their corresponding ranks, may be displayed to a user for selection. The recommended fabrics may be determined based on combinations of fabrics that provide performance metrics for various façades of the building. Using the ranking system provided by the fabric selection tool, the user may obtain a fabric sample and/or order one or more of the recommended fabrics. |
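The scoring-and-ranking idea in the fabric-recommendation claims above can be sketched as a weighted combination of per-fabric glare, view, and daylight scores. The fabric names, score scale (0-10, higher is better), and weights below are invented:

```python
# Sketch: combine per-fabric performance scores and recommend in rank order.

FABRICS = {
    "sheer-3pct": {"glare": 4, "view": 9, "daylight": 9},
    "sheer-1pct": {"glare": 7, "view": 7, "daylight": 7},
    "blackout":   {"glare": 10, "view": 1, "daylight": 1},
}

WEIGHTS = {"glare": 0.5, "view": 0.3, "daylight": 0.2}   # assumed user priorities


def combined_score(metrics):
    return sum(WEIGHTS[k] * v for k, v in metrics.items())


def recommend(fabrics):
    ranked = sorted(fabrics.items(), key=lambda kv: combined_score(kv[1]), reverse=True)
    return [(name, round(combined_score(m), 2)) for name, m in ranked]


print(recommend(FABRICS))
```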
1. (canceled) 2. A method to predict storage use in a computer network, the method comprising: directing with a first storage component comprising computer hardware, a second storage component to copy at least a portion of primary data stored on at least a first storage resource to secondary data stored on at least a second storage resource, wherein the first and second storage components are arranged in a hierarchy, and wherein the second storage component identifies the second storage resource with a first identifier; sending from the second storage component to the first storage component at least the first identifier associated with the second storage resource; directing with the first storage component comprising computer hardware, a third storage component to monitor usage associated with the second storage resource, wherein the first and third storage components are arranged in a hierarchy, and wherein the third storage component identifies the second storage resource with a second identifier; sending from the third storage component to the first storage component, the second identifier associated with the second storage resource, and usage data derived at least in part from the monitoring of the usage associated with the second storage resource; determining with the first storage component that the first identifier and the second identifier are associated with the second storage resource; and predicting, based at least in part on the usage data and with a computing device comprising computer hardware, one or more of future storage media use, future media growth, future network bandwidth use, and future media agent use. 3. The method of claim 2 further comprising calculating a storage cost for the second storage resource based on the usage data. 4. The method of claim 2 further comprising apportioning a storage cost of the second storage resource among a plurality of departments based on the usage data. 5. The method of claim 2 wherein said predicting comprises calculating a moving average of a certain network operation during the time period. 6. The method of claim 2 wherein said predicting comprises calculating a moving average. 7. The method of claim 2 wherein said predicting comprises calculating a seasonal index. 8. The method of claim 2 wherein said predicting comprises calculating an average index for each day in a monitored time period. 9. The method of claim 2 wherein said predicting comprises performing a linear interpolation on a moving average. 10. The method of claim 2 wherein the second storage resource is identified with a first name by the second storage component, and identified by the third storage component with a second name that is different than the first name. 11. The method of claim 2 wherein said determining that the first identifier and the second identifier are associated with the second storage resource is based on network identifiers. 12. 
A system configured to predict storage use in a computer network, the system comprising: a first storage component comprising computer hardware, the first storage component configured to direct a second storage component to copy at least a portion of primary data stored on at least a first storage resource to secondary data stored on at least a second storage resource, wherein the first and second storage components are arranged in a hierarchy, and wherein the second storage component identifies the second storage resource with a first identifier; wherein the second storage component is configured to send to the first storage component, at least the first identifier associated with the second storage resource; wherein the first storage component is configured to direct a third storage component to monitor usage associated with the second storage resource, wherein the first and third storage components are arranged in a hierarchy, and wherein the third storage component identifies the second storage resource with a second identifier; wherein the third storage component is configured to send to the first storage component, the second identifier associated with the second storage resource, and usage data derived at least in part from the monitoring of the usage associated with the second storage resource; wherein the first storage component is further configured to determine that the first identifier and the second identifier are associated with the second storage resource; and wherein the first storage component is further configured to, based at least in part on the usage data, predict one or more of future storage media use, future media growth, future network bandwidth use, and future media agent use. 13. The system of claim 12 wherein the first storage component is further configured to calculate a storage cost for the second storage resource based on the usage data. 14. The system of claim 12 wherein the first storage component is further configured to apportion a storage cost of the second storage resource among a plurality of departments based on the usage data. 15. The system of claim 12 wherein the first storage component is configured to perform the prediction at least in part by calculating a moving average of a certain network operation during the time period. 16. The system of claim 12 wherein the first storage component is configured to perform the prediction based at least in part on the calculation of a moving average. 17. The system of claim 12 wherein the first storage component is configured to perform the prediction based at least in part on the calculation of a seasonal index. 18. The system of claim 12 wherein the first storage component is further configured to perform the prediction based at least in part on the calculation of an average index for each day in the time period. 19. The system of claim 12 wherein the second storage component identifies the second storage resource with a first name, and the third storage component identifies the second storage resource with a second name that is different than the first name. 20. The system of claim 12 wherein the first storage component is further configured to determine that the first identifier and the second identifier are associated with the second storage resource based on network identifiers. | Please help me write a proper abstract based on the patent claims. | The present invention provides systems and methods for data storage. A hierarchical storage management architecture is presented to facilitate data management. 
The disclosed system provides methods for evaluating the state of stored data relative to enterprise needs by using weighted parameters that may be user defined. Also disclosed are systems and methods evaluating costing and risk management associated with stored data. |
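The prediction techniques named in the storage claims above (moving average, seasonal index, and interpolation on the moving average) can be sketched on a toy daily-usage series. The 14-day data, window length, and weekly period are assumptions:

```python
# Sketch: moving average + per-weekday seasonal index + naive trend extrapolation.
from statistics import mean

daily_gb = [100, 102, 98, 105, 110, 90, 85,     # week 1 (Mon..Sun), assumed data
            104, 106, 101, 109, 114, 93, 88]    # week 2


def moving_average(series, window=7):
    return [mean(series[i - window + 1:i + 1]) for i in range(window - 1, len(series))]


def seasonal_indices(series, period=7):
    # Ratio of each weekday's mean usage to the overall mean.
    overall = mean(series)
    return [mean(series[i::period]) / overall for i in range(period)]


def forecast_next(series, window=7, period=7):
    ma = moving_average(series, window)
    trend = ma[-1] - ma[-2]                     # linear step between the last two averages
    base = ma[-1] + trend                       # interpolated/extrapolated moving average
    return base * seasonal_indices(series, period)[len(series) % period]


print(round(forecast_next(daily_gb), 1), "GB expected tomorrow")
```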
1. A system for contextual text adaptation, comprising: one or more hardware processors; a topic model algorithm executable on one or more of the hardware processors, the topic model algorithm generated by machine learning based on a corpus of documents at least related to context of a target user, the topic model comprising a first function that predicts probability distribution of a plurality of topics in a given document, and a second function that predicts probability of a given word occurring in a document associated with a given topic, one or more of the hardware processors operable to receive an input document, one or more of the hardware processors further operable to determine input document topics associated with the input document and a normalized weight associated with each of the input document topics by executing the first function, one or more of the hardware processors further operable to determine an aggregate probability indicating relevance of an input document word to the input document topics based on executing the second function, one or more of the hardware processors further operable to determine a synonym of the input document word based on a dictionary of synonyms, one or more of the hardware processors further operable to determine an aggregate probability for the synonym based on executing the second function, one or more of the hardware processors further operable to compare the aggregate probability for the synonym and the aggregate probability for the input document word, and responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, one or more of the hardware processors further operable to replace the input document word with the synonym, one or more of the hardware processors further operable to generate an output document comprising content of the input document with replaced word. 2. The system of claim 1, wherein one or more of the hardware processors communicate with a social media server to retrieve the corpus of documents. 3. The system of claim 1, wherein the corpus of documents comprises web postings the target user accesses on the social media server. 4. The system of claim 1, wherein the social media server presents the output document on a web page associated with the social media server. 5. The system of claim 1, wherein one or more of the processors determines the aggregate probability indicating relevance of an input document word to the input document topics, determines the aggregate probability for the synonym, compares the aggregate probability for the synonym and the aggregate probability for the input document word, and replaces the input document word with the synonym responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, for each of a plurality of input document words in the input document. 6. The system of claim 1, wherein the aggregate probability for the input document word is determined as a sum of products of the probability that the input document word is associated with an input document topic and the normalized weight of the input document topic. 7. 
The system of claim 1, wherein multiple synonyms are determined for the input document word and the aggregate probability is determined for each of the multiple synonyms, wherein the synonym with maximum aggregate probability among the multiple synonyms is selected for the comparing with the aggregate probability for the input document word. 8. A computer-implemented method of contextual text adaptation, the method performed by one or more hardware processors, comprising: receiving a corpus of documents in context of a target user; receiving a dictionary of synonyms; generating a topic model algorithm based on at least the corpus of documents by machine learning, the topic model algorithm comprising a first function that predicts probability distribution of a plurality of topics in a given document, and a second function that predicts probability of a given word occurring in a document associated with a given topic; receiving an input document; determining input document topics associated with the input document and a normalized weight associated with each of the input document topics by executing the first function; determining an aggregate probability indicating relevance of an input document word to the input document topics based on executing the second function; determining a synonym of the input document word based on the dictionary of synonyms; determining an aggregate probability for the synonym based on executing the second function; comparing the aggregate probability for the synonym and the aggregate probability for the input document word; and responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, replacing the input document word with the synonym; and generating an output document comprising content of the input document with replaced word. 9. The method of claim 8, wherein the determining of an aggregate probability indicating relevance of an input document word to the input document, the determining of an aggregate probability for the synonym, the comparing of the aggregate probability for the synonym and the aggregate probability for the input document word, and the replacing of the input document word with the synonym responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, is performed for each of a plurality of input document words in the input document. 10. The method of claim 8, wherein the aggregate probability for the input document word is determined as a sum of products of the probability that the input document word is associated with an input document topic and the normalized weight of the input document topic. 11. The method of claim 8, wherein multiple synonyms are determined for the input document word and the aggregate probability is determined for each of the multiple synonyms, wherein the synonym with maximum aggregate probability among the multiple synonyms is selected for the comparing with the aggregate probability for the input document word. 12. The method of claim 8, wherein the corpus of documents are received over a communication network from a social media server. 13. The method of claim 8, wherein the corpus of documents comprises web postings the target user accesses. 14. 
A computer readable storage medium storing a program of instructions executable by a machine to perform a method of contextual text adaptation, the method comprising: identifying a target user; receiving a corpus of documents in context of the target user; receiving a dictionary of synonyms; generating a topic model algorithm based on at least the corpus of documents by machine learning, the topic model algorithm comprising a first function that predicts probability distribution of a plurality of topics in a given document, and a second function that predicts probability of a given word occurring in a document associated with a given topic; and receiving an input document; determining input document topics associated with the input document and a normalized weight associated with each of the input document topics by executing the first function; determining a probability that an input document word is associated with an input document topic for each of the input document topics by executing the second function; determining an aggregate probability for the input document word as a sum of products of the probability that an input document word is associated with an input document topic and the normalized weight of the input document topic; determining a synonym of the input document word based on the dictionary of synonyms; determining an aggregate probability for the synonym; comparing the aggregate probability for the synonym and the aggregate probability for the input document word; responsive to determining that the aggregate probability for the synonym is greater than the aggregate probability for the input document word, replacing the input document word with the synonym; and generating an output document comprising content of the input document with replaced word. 15. The computer readable storage medium of claim 14, wherein the aggregate probability for the input document word is determined as a sum of products of the probability that the input document word is associated with an input document topic and the normalized weight of the input document topic. 16. The computer readable storage medium of claim 14, wherein multiple synonyms are determined for the input document word and the aggregate probability is determined for each of the multiple synonyms, wherein the synonym with maximum aggregate probability among the multiple synonyms is selected for the comparing with the aggregate probability for the input document word. 17. The computer readable storage medium of claim 14, wherein the corpus of documents are received over a communication network from a social media server. 18. The computer readable storage medium of claim 14, wherein the corpus of documents comprises web postings the target user accesses. | Please help me write a proper abstract based on the patent claims. | Contextual adaptation of documents automatically replaces words for synonyms that appear within context or topic whey they are being used. A machine learned topic modeling, trained by a set of documents representative of a target user is executed to determine topics of an input document, and to determine words in the document to replace based on determining the relevance of the words to the topics in the documents. An output document is generated based on the input document with the replaced words. |
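The replacement rule in the contextual-adaptation claims above reduces to comparing topic-weighted word probabilities: a word's aggregate probability is the sum over the document's topics of P(word | topic) times the topic's normalized weight, and a synonym replaces the word only if its aggregate probability is higher. A minimal sketch with hand-made topic weights, word-topic probabilities, and a tiny synonym dictionary (no trained topic model):

```python
# Sketch of topic-aware synonym substitution. All numbers below are invented.

doc_topic_weights = {"finance": 0.7, "sports": 0.3}        # stand-in for the first function

word_given_topic = {                                        # stand-in for the second function
    "finance": {"purchase": 0.020, "buy": 0.012, "acquire": 0.018},
    "sports":  {"purchase": 0.001, "buy": 0.004, "acquire": 0.001},
}

synonyms = {"buy": ["purchase", "acquire"]}


def aggregate_probability(word):
    # Sum over topics of P(word | topic) * normalized topic weight.
    return sum(weight * word_given_topic[topic].get(word, 0.0)
               for topic, weight in doc_topic_weights.items())


def adapt(tokens):
    out = []
    for token in tokens:
        best, best_p = token, aggregate_probability(token)
        for candidate in synonyms.get(token, []):
            p = aggregate_probability(candidate)
            if p > best_p:                      # replace only when the synonym scores higher
                best, best_p = candidate, p
        out.append(best)
    return out


print(adapt(["we", "plan", "to", "buy", "the", "asset"]))
```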
1. A method for communicating postsynaptic neuron states to a neuromorphic core, the method comprising: storing, in a first transmit buffer, indications of a postsynaptic neuron circuit fire from postsynaptic neuron circuits in a first neuromorphic core, each of the postsynaptic neuron circuits being coupled to a plurality of synaptic memory cells, the indications of postsynaptic neuron circuit fire identifying which of the postsynaptic neuron circuits fired; serially shifting the indications of postsynaptic neuron circuit fire to a neuron bus; receiving the indications of postsynaptic neuron circuit fire from the neuron bus at a plurality of presynaptic neuron circuits in a second neuromorphic core. 2. The method of claim 1, further comprising: updating the indications of postsynaptic neuron circuit fire at the first transmitter buffer when one or more of the postsynaptic neuron circuits at the first neuromorphic core fires; determining if a second transmit buffer at the first neuromorphic core is serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus; transmitting the indications of the postsynaptic neuron circuit fire from the first transmit buffer to the second transmit buffer when the second transmit buffer is not serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first transmit buffer; clearing the indications of the postsynaptic neuron circuit fire at the first transmit buffer after transmitting the indications of the postsynaptic neuron circuit fire to the second transmit buffer. 3. The method of claim 1, further comprising serially shifting the indications of the postsynaptic neuron circuit fire from the neuron bus to a first receive buffer at the second neuromorphic core. 4. The method of claim 3, further comprising: determining if the first receive buffer is serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus; transmitting the indications of the postsynaptic neuron circuit fire from the first receive buffer to a second receive buffer at the second neuromorphic core when the first receive buffer is not serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first receive buffer; and clearing the indications of the postsynaptic neuron circuit fire at the first receive buffer after transmitting the indications of the postsynaptic neuron circuit fire to the second receive buffer. 5. The method of claim 1, further comprising firing one or more presynaptic neuron circuits at the second neuromorphic core after receiving the indications of postsynaptic neuron circuit fire by the first receive buffer from the neuron bus. 6. 
A system for communicating postsynaptic neuron states, the system comprising: a first neuromorphic core including a first array of synaptic memory cells and postsynaptic neuron circuits, each of the postsynaptic neuron circuits is coupled to a row of synaptic memory cells in the first array of synaptic memory cells, each of the postsynaptic neuron circuits is configured to fire when voltage sensed from the row of synaptic memory cells exceeds a threshold; a second neuromorphic core including a second array of synaptic memory cells; and a neuron bus configured to serially transmit indications of a postsynaptic neuron circuit fire from the first neuromorphic core to the second neuromorphic core. 7. The system of claim 6, further comprising: a first transmit buffer coupled to postsynaptic neuron circuits, the first transmit buffer configured to store indications of a postsynaptic neuron circuit fire from each of the postsynaptic neuron circuits, the indications of postsynaptic neuron circuit fire identifying which of the postsynaptic neuron circuits fired; 8. The system of claim 7, further comprising: a second transmit buffer coupled to the first transmit buffer and the neuron bus, the second transmit buffer configured to serially shift the indications of postsynaptic neuron circuit fire to the neuron bus. 9. The system of claim 8, wherein the first transmit buffer is configured to: update the indications of postsynaptic neuron circuit fire at the first transmit buffer when one or more of the postsynaptic neuron circuits at the first neuromorphic core fires; determine if a second transmit buffer at the first neuromorphic core is serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus; transmit the indications of the postsynaptic neuron circuit fire from the first transmit buffer to the second transmit buffer when the second transmit buffer is not serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first transmit buffer; and clear the indications of the postsynaptic neuron circuit fire at the first transmit buffer after transmitting the indications of the postsynaptic neuron circuit fire to the second transmit buffer. 10. The system of claim 6, further comprising: a first receive buffer coupled to the neuron bus, the first receive buffer configured to serially shift the indications of postsynaptic neuron circuit fire from the neuron bus to the second neuromorphic core. 11. The system of claim 10, further comprising a second receive buffer coupled to the first receive buffer, the second receive buffer configured to: determine if the first receive buffer is serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus; receive the indications of the postsynaptic neuron circuit fire from the first receive buffer to a second receive buffer at the second neuromorphic core when the first receive buffer is not serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first receive buffer; and clear the indications of the postsynaptic neuron circuit fire at the first receive buffer after receiving the indications of the postsynaptic neuron circuit fire to the second receive buffer. 12. 
The system of claim 6, wherein the second neuromorphic core includes a plurality of presynaptic neuron circuits configured to receive the indications of postsynaptic neuron circuit fire from the second receive buffer. 13. The system of claim 6, wherein the second neuromorphic core includes a plurality of presynaptic neuron circuits configured to receive the indications of postsynaptic neuron circuit fire from the neuron bus. 14. A system for communicating postsynaptic neuron states, the system comprising: a first neuromorphic core including a first array of synaptic memory cells and postsynaptic neuron circuits, each of the postsynaptic neuron circuits is coupled to a row of synaptic memory cells in the first array of synaptic memory cells, each of the postsynaptic neuron circuits is configured to fire when voltage sensed from the row of synaptic memory cells exceeds a threshold; a plurality of second neuromorphic cores, each of the second neuromorphic cores including a second array of synaptic memory cells; and a neuron bus configured to serially transmit indications of a postsynaptic neuron circuit fire from the first neuromorphic core to the second neuromorphic cores. 15. The system of claim 14, further comprising: a first transmit buffer coupled to postsynaptic neuron circuits, the first transmit buffer configured to store indications of a postsynaptic neuron circuit fire from each of the postsynaptic neuron circuits, the indications of postsynaptic neuron circuit fire identifying which of the postsynaptic neuron circuits fired; 16. The system of claim 15, further comprising: a second transmit buffer coupled to the first transmit buffer and the neuron bus, the second transmit buffer configured to serially shift the indications of postsynaptic neuron circuit fire to the neuron bus. 17. The system of claim 16, wherein the first transmit buffer is configured to: update the indications of postsynaptic neuron circuit fire at the first transmit buffer when one or more of the postsynaptic neuron circuits at the first neuromorphic core fires; determine if a second transmit buffer at the first neuromorphic core is serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus; transmit the indications of the postsynaptic neuron circuit fire from the first transmit buffer to the second transmit buffer when the second transmit buffer is not serially shifting the indications of postsynaptic neuron circuit fire to the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first transmit buffer; and clear the indications of the postsynaptic neuron circuit fire at the first transmit buffer after transmitting the indications of the postsynaptic neuron circuit fire to the second transmit buffer. 18. The system of claim 14, wherein each of the second neuromorphic cores includes a first receive buffer coupled to the neuron bus, the first receive buffer configured to serially shift the indications of postsynaptic neuron circuit fire from the neuron bus to a respective one of the second neuromorphic cores. 19. 
The system of claim 18, wherein each of the second neuromorphic cores includes a second receive buffer coupled to the first receive buffer, the second receive buffer configured to: determine if the first receive buffer is serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus; receive the indications of the postsynaptic neuron circuit fire from the first receive buffer to a second receive buffer at the second neuromorphic core when the first receive buffer is not serially shifting the indications of postsynaptic neuron circuit fire from the neuron bus and there are one or more indications of postsynaptic neuron circuit fire at the first receive buffer; and clear the indications of the postsynaptic neuron circuit fire at the first receive buffer after receiving the indications of the postsynaptic neuron circuit fire to the second receive buffer. 20. The system of claim 19, wherein the each of the second neuromorphic cores includes a plurality of presynaptic neuron circuits configured to receive the indications of postsynaptic neuron circuit fire from the second receive buffer. | Please help me write a proper abstract based on the patent claims. | A system for communicating postsynaptic neuron states. The system includes a first neuromorphic core and a second neuromorphic core. The first neuromorphic core includes a first array of synaptic memory cells and postsynaptic neuron circuits. Each of the postsynaptic neuron circuits is coupled to a row of synaptic memory cells in the first array of synaptic memory cells. Each of the postsynaptic neuron circuits is configured to fire when voltage sensed from the row of synaptic memory cells exceeds a threshold. The second neuromorphic core includes a second array of synaptic memory cells. A neuron bus is configured to serially transmit indications of a postsynaptic neuron circuit fire from the first neuromorphic core to the second neuromorphic core. |
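A rough software analogue of the double-buffered transmit path in claims 2 and 7-9 above, where a first buffer accumulates fire indications while a second serially shifts them onto the neuron bus. This is only an illustrative sketch of the buffering discipline, not the claimed hardware; all class and variable names are invented.

```python
# One buffer records which postsynaptic neurons fired; when the shift buffer is idle,
# the indications are handed over, the first buffer is cleared, and the bits are
# serially shifted onto the neuron bus one per step.

class TransmitPath:
    def __init__(self, n_neurons):
        self.first = [0] * n_neurons    # accumulates which postsynaptic neurons fired
        self.second = []                # bits currently being shifted onto the bus

    def record_fire(self, neuron_index):
        self.first[neuron_index] = 1    # update indications when a neuron fires

    def step(self, bus):
        # Hand off only when the second buffer is not shifting and something is pending.
        if not self.second and any(self.first):
            self.second = list(self.first)
            self.first = [0] * len(self.first)   # clear after transferring
        # Serially shift one indication per step onto the bus.
        if self.second:
            bus.append(self.second.pop(0))

bus = []
tx = TransmitPath(4)
tx.record_fire(2)
for _ in range(6):
    tx.step(bus)
print(bus)   # [0, 0, 1, 0] -- serial indications identifying that neuron 2 fired
```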
1. A method comprising: receiving, by a computer processor of a computing device from RFID tags embedded in sensors, first event data associated with a first plurality of events detected by said sensors, said computer processor controlling a cloud hosted mediation system comprising an inference engine software application, a truth maintenance system database, and non monotonic logic, wherein said non monotonic logic comprises code for enabling a Dempster Shafer theory; deriving, by said computer processor executing said inference engine software application, first assumption data associated with each portion of first portions of said first event data associated with associated RFID tags of said RFID tags, wherein said first assumption data comprises multiple sets of assumptions associated said plurality of events, wherein each set of said multiple sets comprises assumed event conditions and an associated plausibility percentage value, and wherein at least two sets of said multiple sets is associated with each event of said plurality of events; generating, by said computer processor based on results of said deriving and executing the Dempster Shafer theory with respect to a first pair of sets of said multiple sets with respect to a first event of said plurality of events, an initial recommendation for said event, said initial recommendation associated with a first selected set of said first pair of sets, said first selected set comprises a first plausibility percentage value; retrieving, by said computer processor from said truth maintenance system database, previous assumption data derived from and associated with previous portions of previous event data retrieved from said RFID tags embedded in said sensors, said previous assumption data derived at a time differing from a time of said deriving, said previous event data associated with previous events occurring at a different time from said first plurality of events; executing, by said computer processor, said non monotonic logic with respect to said first assumption data and said previous assumption data; additionally executing, by said computer processor executing said non monotonic logic, the Dempster Shafer theory with respect to said first pair of sets and said previous assumption data; generating, by said computer processor based on results of said additionally executing and modifying said first plausibility percentage value of said first selected set, an updated recommendation for said first event, said updated recommendation associated with a second selected set of said first pair of sets, said second selected set differing from said first selected set; and generating, by said computer processor executing said non monotonic logic and said inference engine software application, first updated assumption data associated with said first assumption data and said previous assumption data, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with conditions of vehicles detected by said sensors. 2. The method of claim 1, further comprising: executing, by said computer processor based on said first updated assumption data, an action associated with objects detected by said sensors. 3. The method of claim 2, wherein said action comprises implementing a pay by usage cloud metering model associated with said objects. 4. 
The method of claim 1, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with objects detected by said sensors. 5. The method of claim 1, further comprising: receiving, by said computer processor from said RFID tags embedded in said sensors, second event data associated with a second plurality of events detected by said sensors, said second plurality of events occurring at a time differing from said first plurality of events; associating, by said computer processor, first portions of said second event data with associated RFID tags of said RFID tags; deriving, by said computer processor executing said inference engine software application, second assumption data associated with each portion of said first portions of said second event data; retrieving, by said computer processor from said truth maintenance system database, said previous assumption data, said first assumption data, and said first updated assumption data; executing, by said computer processor, said non monotonic logic with respect to said first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; generating, by said computer processor executing said non monotonic logic and said inference engine software application, second updated assumption data associated with first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; and storing, by said computer processor in said truth maintenance system database, said second assumption data. 6. The method of claim 1, wherein said generating first updated assumption data comprises retracting portions of said first assumption data and said previous assumption data. 7. The method of claim 1, further comprising: providing at least one support service for at least one of creating, integrating, hosting, maintaining, and deploying computer-readable code in said computing system, wherein the code in combination with the computing system is capable of performing: said receiving, said associating, said deriving, said retrieving, said executing, said generating, and said storing. 8. 
A computer program product, comprising a computer readable memory device storing a computer readable program code, said computer readable program code comprising an algorithm adapted to implement a method within a computing device, said method comprising: receiving, by a computer processor of said computing device from RFID tags embedded in sensors, first event data associated with a first plurality of events detected by said sensors, said computer processor controlling a cloud hosted mediation system comprising an inference engine software application, a truth maintenance system database, and non monotonic logic, wherein said non monotonic logic comprises code for enabling a Dempster Shafer theory; deriving, by said computer processor executing said inference engine software application, first assumption data associated with each portion of first portions of said first event data associated with associated RFID tags of said RFID tags, wherein said first assumption data comprises multiple sets of assumptions associated said plurality of events, wherein each set of said multiple sets comprises assumed event conditions and an associated plausibility percentage value, and wherein at least two sets of said multiple sets is associated with each event of said plurality of events; generating, by said computer processor based on results of said deriving and executing the Dempster Shafer theory with respect to a first pair of sets of said multiple sets with respect to a first event of said plurality of events, an initial recommendation for said event, said initial recommendation associated with a first selected set of said first pair of sets, said first selected set comprises a first plausibility percentage value; retrieving, by said computer processor from said truth maintenance system database, previous assumption data derived from and associated with previous portions of previous event data retrieved from said RFID tags embedded in said sensors, said previous assumption data derived at a time differing from a time of said deriving, said previous event data associated with previous events occurring at a different time from said first plurality of events; executing, by said computer processor, said non monotonic logic with respect to said first assumption data and said previous assumption data; additionally executing, by said computer processor executing said non monotonic logic, the Dempster Shafer theory with respect to said first pair of sets and said previous assumption data; generating, by said computer processor based on results of said additionally executing and modifying said first plausibility percentage value of said first selected set, an updated recommendation for said first event, said updated recommendation associated with a second selected set of said first pair of sets, said second selected set differing from said first selected set; and generating, by said computer processor executing said non monotonic logic and said inference engine software application, first updated assumption data associated with said first assumption data and said previous assumption data, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with conditions of vehicles detected by said sensors. 9. The computer program product of claim 8, wherein said method further comprises: executing, by said computer processor based on said first updated assumption data, an action associated with objects detected by said sensors. 10. 
The computer program product of claim 9, wherein said action comprises implementing a pay by usage cloud metering model associated with said objects. 11. The computer program product of claim 8, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with objects detected by said sensors. 12. The computer program product of claim 8, wherein said method further comprises: receiving, by said computer processor from said RFID tags embedded in said sensors, second event data associated with a second plurality of events detected by said sensors, said second plurality of events occurring at a time differing from said first plurality of events; associating, by said computer processor, first portions of said second event data with associated RFID tags of said RFID tags; deriving, by said computer processor executing said inference engine software application, second assumption data associated with each portion of said first portions of said second event data; retrieving, by said computer processor from said truth maintenance system database, said previous assumption data, said first assumption data, and said first updated assumption data; executing, by said computer processor, said non monotonic logic with respect to said first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; generating, by said computer processor executing said non monotonic logic and said inference engine software application, second updated assumption data associated with first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; and storing, by said computer processor in said truth maintenance system database, said second assumption data. 13. The computer program product of claim 8, wherein said generating first updated assumption data comprises retracting portions of said first assumption data and said previous assumption data. 14. 
A computing system comprising a computer processor coupled to a computer-readable memory unit, said memory unit comprising instructions that when enabled by the computer processor implements a method comprising: receiving, by said computer processor from RFID tags embedded in sensors, first event data associated with a first plurality of events detected by said sensors, said computer processor controlling a cloud hosted mediation system comprising an inference engine software application, a truth maintenance system database, and non monotonic logic, wherein said non monotonic logic comprises code for enabling a Dempster Shafer theory; deriving, by said computer processor executing said inference engine software application, first assumption data associated with each portion of first portions of said first event data associated with associated RFID tags of said RFID tags, wherein said first assumption data comprises multiple sets of assumptions associated said plurality of events, wherein each set of said multiple sets comprises assumed event conditions and an associated plausibility percentage value, and wherein at least two sets of said multiple sets is associated with each event of said plurality of events; generating, by said computer processor based on results of said deriving and executing the Dempster Shafer theory with respect to a first pair of sets of said multiple sets with respect to a first event of said plurality of events, an initial recommendation for said event, said initial recommendation associated with a first selected set of said first pair of sets, said first selected set comprises a first plausibility percentage value; retrieving, by said computer processor from said truth maintenance system database, previous assumption data derived from and associated with previous portions of previous event data retrieved from said RFID tags embedded in said sensors, said previous assumption data derived at a time differing from a time of said deriving, said previous event data associated with previous events occurring at a different time from said first plurality of events; executing, by said computer processor, said non monotonic logic with respect to said first assumption data and said previous assumption data; additionally executing, by said computer processor executing said non monotonic logic, the Dempster Shafer theory with respect to said first pair of sets and said previous assumption data; generating, by said computer processor based on results of said additionally executing and modifying said first plausibility percentage value of said first selected set, an updated recommendation for said first event, said updated recommendation associated with a second selected set of said first pair of sets, said second selected set differing from said first selected set; and generating, by said computer processor executing said non monotonic logic and said inference engine software application, first updated assumption data associated with said first assumption data and said previous assumption data, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with conditions of vehicles detected by said sensors. 15. The computing system of claim 14, wherein said method further comprises: executing, by said computer processor based on said first updated assumption data, an action associated with objects detected by said sensors. 16. 
The computing system of claim 15, wherein said action comprises implementing a pay by usage cloud metering model associated with said objects. 17. The computing system of claim 14, wherein said previous assumption data, said first assumption data, and said first updated assumption data each comprise assumptions associated with objects detected by said sensors. 18. The computing system of claim 14, wherein said method further comprises: receiving, by said computer processor from said RFID tags embedded in said sensors, second event data associated with a second plurality of events detected by said sensors, said second plurality of events occurring at a time differing from said first plurality of events; associating, by said computer processor, first portions of said second event data with associated RFID tags of said RFID tags; deriving, by said computer processor executing said inference engine software application, second assumption data associated with each portion of said first portions of said second event data; retrieving, by said computer processor from said truth maintenance system database, said previous assumption data, said first assumption data, and said first updated assumption data; executing, by said computer processor, said non monotonic logic with respect to said first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; generating, by said computer processor executing said non monotonic logic and said inference engine software application, second updated assumption data associated with first updated assumption data, said first assumption data, said second assumption data, and said previous assumption data; and storing, by said computer processor in said truth maintenance system database, said second assumption data. | Please help me write a proper abstract based on the patent claims. | A truth maintenance method and system. The method includes receiving by a computer processor from RFID tags embedded in sensors, event data associated with events detected by said sensors. The computer processor associates portions of the event data with associated RFID tags and derives assumption data associated with each portion of the portions. The computer processor retrieves previous assumption data derived from and associated with previous portions of previous event data retrieved from the RFID tags and executes non monotonic logic with respect to the assumption data and the previous assumption data. In response, the computer processor generates and stores updated assumption data associated with the assumption data and the previous assumption data. |
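The claims repeatedly invoke "executing the Dempster Shafer theory" to fuse a pair of assumption sets with previously stored assumption data. The sketch below shows the standard Dempster rule of combination and a plausibility function over mass assignments, which is the textbook machinery behind that step; the frame of discernment and the mass values are made up, and nothing here reflects the claimed cloud-hosted mediation system itself.

```python
# Dempster's rule of combination over mass functions mapping frozensets of
# hypotheses to belief mass, plus the plausibility measure used to rank outcomes.

from itertools import product

def combine(m1, m2):
    """Combine two mass functions defined over the same frame of discernment."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

def plausibility(m, hypothesis):
    """Plausibility = sum of mass over all focal sets intersecting the hypothesis."""
    return sum(v for s, v in m.items() if s & hypothesis)

# Toy vehicle-condition example with hypothetical masses.
frame = {"ok", "fault"}
sensor_now = {frozenset({"ok"}): 0.6, frozenset(frame): 0.4}
previous = {frozenset({"fault"}): 0.3, frozenset(frame): 0.7}
fused = combine(sensor_now, previous)
print(plausibility(fused, frozenset({"ok"})))
```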
1. A method comprising: accessing a document with attributes and tags that sequentially order elements of the document; extracting a selection of document text belonging to a specific set of attributes; creating a narrative model that represents evolution of semantics with respect to the sequentially ordered elements; accessing a set of target values and training documents, wherein the target value quantifies an outcome associated with one or more of the training documents in the set; and predicting an outcome associated with the accessed document. 2. The method of claim 1, wherein the semantics includes: statistical methods including one or a combination of sentiment analysis, semantic analysis, pragmatics analysis, latent class analysis, support vector machines, semantic orientation, pointwise mutual information and any document-type specific analysis. 3. The method of claim 1, further comprising: associating the training documents with communities; training a classifier between the communities and the target values; detecting relationships between the elements of the accessed document and the communities; calculating weighting based on the detected relationships, and wherein predicting the outcome is based on the classifiers and the calculated weightings. 4. The method of claim 3, wherein: obtaining a collection of topics over a corpus of documents using latent models based on the words in those documents, and using significant words in the significant topics representing a document as tags in associating documents with communities. 5. The method of claim 3, further comprising: training multiple predictors between the communities and the target values, and wherein predicting the outcome is further based on these predictors. 6. The method recited in claim 1, wherein the accessed document includes: a collection of elements arranged in its temporal succession in which elements of the documents can be accessed according to a specific set of attributes. 7. The method recited in claim 1, wherein creating the narrative model includes: creating a branching narrative that represents multiple path possibilities when applicable to the document. 8. The method of claim 1, wherein generating a prediction includes: creating narrative models for the training documents, wherein generating the prediction is further based on the narrative models for the training documents. 9. The method recited in claim 1, wherein creating the narrative model includes: generating a sequence of semantics descriptor vectors that are indexed to the sequentially ordered elements; analysing the change and association of semantics from element to element within documents with one or more additional features, tags, attributes; and representing as a collection of vectors. 10. The method recited in claim 1, wherein creating the narrative model includes: generating a contingency matrix; and using the contingency matrix in semantically analyzing the document, wherein semantically analyzing the document yields data that is inputted to the narrative model. 11. The method recited in claim 10 further comprising: training a lexicon of distributed word vectors on individual words with generative models that represent topics as frequencies of words and tracing rates of word usage with respect to the elements in the document, and wherein: generating the contingency matrix includes modifying word frequency data using the lexicon. 12. 
The method of claim 1, wherein predicting the outcome includes: transforming the narrative model through alignment transformation to match the number of coefficients between models with different numbers of elements. 13. The method of claim 1, wherein predicting the outcome further includes: training a classifier using the set of target values and training documents; and inputting the narrative model into the classifier wherein the classifier is used to generate the prediction. 14. The method of claim 1, wherein predicting the outcome includes: training an ensemble model that includes classifiers and/or regression models using the set of target values and training documents; and using the narrative model from the accessed document with the ensemble model to predict the associated outcome. 15. A system comprising a processor having instructions operable to cause the processor to: access a document with attributes and tags that sequentially order elements of the document; extract a selection of document text belonging to a specific set of attributes; create a narrative model that represents evolution of semantics with respect to the sequentially ordered elements; access a set of target values and training documents, wherein the target value quantifies an outcome associated with one or more of the training documents in the set; and generate a prediction of an outcome associated with the accessed document. 16. The system of claim 15, wherein the semantics includes: applying statistical methods including one or a combination of sentiment analysis, semantic analysis, pragmatics analysis, latent class analysis, support vector machines, semantic orientation, pointwise mutual information and any document-type specific analysis. 17. The system of claim 15, further comprises: associate the training documents with communities; train a classifier between the communities and the target values; detect relationships between the elements of the accessed document and the communities; calculate weighting based on the detected relationships, and wherein the prediction of the outcome is based on the classifiers and the calculated weightings. 18. The system of claim 15 being embedded in a word processing system. 19. The system of claim 15, wherein the prediction includes: metadata and factors having predictive value with respect to the outcome associated with the accessed document. 20. The system of claim 15, wherein the prediction: finds documents in a database that are closest in terms of outcome associated with the accessed document as found by the prediction method, and reports these documents as a content discovery output. | Please help me write a proper abstract based on the patent claims. | A method and system is described for modeling the content evolution of an accessed document and predicting an associated outcome for said document. The system accesses a document but can further receive additional tags, metadata, or related information that characterizes the nature of such text collection. The invention applies various processing to separate the document into elements and performs semantic modeling to create a narrative model that describes the evolution of the contents of the elements in terms of their respective sequencing. This system then uses a set of training documents with target values assigned to them to predict an associated outcome for the accessed document. 
The most relevant subset of a training set can be selected by matching metadata information that characterizes the accessed document and a collection of metadata that characterizes other broad document sets. Such characterization is done using graph partitioning or other community detection methods from metadata information that characterizes the document sets and relations between multiple sets of such documents. The outcome of the method may apply to prediction of economic value of events described by the accessed document, success measures of the document quality, or discovery of related content with a similar associated outcome to the accessed document.
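One way to picture the "narrative model" and the alignment transformation mentioned in claims 1, 9 and 12 is as a per-element semantics score resampled to a fixed number of coefficients so that documents of different lengths can feed one classifier. The sketch below uses a crude word-score lexicon purely for illustration; the statistical methods of claim 2 (sentiment analysis, latent class analysis, and so on) would take its place, and every name here is an assumption.

```python
import numpy as np

def element_scores(elements, lexicon):
    """One crude semantics descriptor per element: mean lexicon score of its words."""
    return np.array([
        np.mean([lexicon.get(w.lower(), 0.0) for w in e.split()]) if e.split() else 0.0
        for e in elements
    ])

def align(scores, n_coeffs=8):
    """Alignment transformation: resample an any-length narrative to n_coeffs points."""
    x_old = np.linspace(0.0, 1.0, num=len(scores))
    x_new = np.linspace(0.0, 1.0, num=n_coeffs)
    return np.interp(x_new, x_old, scores)

lexicon = {"rises": 1.0, "falls": -1.0, "stable": 0.1}
doc = ["Revenue rises sharply", "Costs are stable", "Profit falls later"]
narrative = align(element_scores(doc, lexicon))
print(narrative.shape, narrative[:3])   # fixed-length coefficients for a classifier
```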
1-20. (canceled) 21. A method, in a data processing system comprising a processor and a memory, for clarifying an input question, the method comprising: generating, in the data processing system, a set of candidate answers for an input question, wherein each candidate answer in the set of candidate answers corresponds to an evidence passage supporting the candidate answer as answering the input question; determining, in the data processing system, based on the set of candidate answers, whether clarification of the input question is required; and in response to a determination that clarification of the input question is required: identifying, by the data processing system, a differentiating factor in evidence passage of at least two candidate answers in the set of candidate answers; outputting, by the data processing system, a request for user input to clarify the input question, wherein the request for user input is generated based on the identified differentiating factor; and selecting, by the data processing system, at least one candidate answer in the set of candidate answers as an answer for the input question based on a user input in response to the request. 22. The method of claim 21, wherein the request for user input comprises a clarification question directed to the differentiating factor and a plurality of user selectable potential answers to the clarification question, each answer corresponding to a portion of a corresponding one of the evidence passages, of the at least two candidate answers, directed to the differentiating factor. 23. The method of claim 21, wherein the request for user input comprises a clarification question that comprises a potential answer corresponding to the differentiating factor in the content of the clarification question and user selectable potential answers in the affirmative and negative for answering the clarification question. 24. The method of claim 21, wherein the request for user input comprises a clarification question that is directed to the differentiating factor and a free-form text entry field into which a user may input a textual answer to the clarification question. 25. The method of claim 21, wherein determining, based on the set of candidate answers, whether clarification of the input question is required comprises determining that clarification of the input question is required in response to the set of candidate answers comprising a plurality of candidate answers with corresponding confidence scores equal to or higher than a predetermined threshold confidence score. 26. The method of claim 21, wherein selecting at least one candidate answer in the set of candidate answers as an answer for the input question comprises: updating the set of candidate answers based on the user input; and selecting the at least one candidate answer from the updated set of candidate answers. 27. The method of claim 26, wherein updating the set of candidate answers comprises modifying confidence scores associated with one or more of the candidate answers in the set of candidate answers based on the user input, wherein confidence scores for candidate answers having evidence passages corresponding to the user input are increased and candidate answers having evidence passages not corresponding to the user input are decreased. 28. The method of claim 26, wherein updating the set of candidate answers comprises removing candidate answers, from the set of candidate answers, that have evidence passages that do not correspond to the user input. 29. 
The method of claim 26, wherein selecting the at least one candidate answer from the updated set of candidate answers comprises: performing synthesis stage, merging and ranking stage, and final answer selecting stage operations of a question and answer (QA) system pipeline on the updated set of candidate answers. 30. The method of claim 21, wherein the request comprises a clarifying question posed to a user, wherein the clarifying question is generated based on the differentiating factor and is constructed such that an answer to the clarifying question indicates a correctness of one of the at least two candidate answers based on their associated evidence passages. 31. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: generate a set of candidate answers for an input question, wherein each candidate answer in the set of candidate answers corresponds to an evidence passage supporting the candidate answer as answering the input question; determine, based on the set of candidate answers, whether clarification of the input question is required; and in response to a determination that clarification of the input question is required: identify a differentiation factor in evidence passages of at least two candidate answers in the set of candidate answers; output a request for user input to clarify the input question, wherein the request for user input is generated based on the identified differentiating factor; and select at least one candidate answer in the set of candidate answers as an answer for the input question based on a user response to the request. 32. The computer program product of claim 31, wherein the request for user input comprises a clarification question directed to the differentiating factor and a plurality of user selectable potential answers to the clarification question, each answer corresponding to a portion of a corresponding one of the evidence passages, of the at least two candidate answers, directed to the differentiating factor. 33. The computer program product of claim 31, wherein the request for user input comprises a clarification question that comprises a potential answer corresponding to the differentiating factor in the content of the clarification question and user selectable potential answers in the affirmative and negative for answering the clarification question. 34. The computer program product of claim 31, wherein the request for user input comprises a clarification question that is directed to the differentiating factor and a free-form text entry field into which a user may input a textual answer to the clarification question. 35. The computer program product of claim 31, wherein determining, based on the set of candidate answers, whether clarification of the input question is required comprises determining that clarification of the input question is required in response to the set of candidate answers comprising a plurality of candidate answers with corresponding confidence scores equal to or higher than a predetermined threshold confidence score. 36. 
The computer program product of claim 31, wherein the computer readable program further causes the computing device to select at least one candidate answer in the set of candidate answers as an answer for the input question at least by: updating the set of candidate answers based on the user input; and selecting the at least one candidate answer from the updated set of candidate answers. 37. The computer program product of claim 36, wherein the computer readable program further causes the computing device to update the set of candidate answers at least by modifying confidence scores associated with one or more of the candidate answers in the set of candidate answers based on the user input, wherein confidence scores for candidate answers having evidence passages corresponding to the user input are increased and candidate answers having evidence passages not corresponding to the user input are decreased. 38. The computer program product of claim 36, wherein the computer readable program further causes the computing device to update the set of candidate answers at least by removing candidate answers, from the set of candidate answers, that have evidence passages that do not correspond to the user input. 39. The computer program product of claim 31, wherein the request comprises a clarifying question posed to a user, wherein the clarifying question is generated based on the differentiating factor and is constructed such that an answer to the clarifying question indicates a correctness of one of the at least two candidate answers based on their associated evidence passages. 40. An apparatus comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: generate a set of candidate answers for an input question, wherein each candidate answer in the set of candidate answers corresponds to an evidence passage supporting the candidate answer as answering the input question; determine, based on the set of candidate answers, whether clarification of the input question is required; and in response to a determination that clarification of the input question is required: identify a differentiation factor in evidence passages of at least two candidate answers in the set of candidate answers; output a request for user input to clarify the input question, wherein the request for user input is generated based on the identified differentiating factor; and select at least one candidate answer in the set of candidate answers as an answer for the input question based on a user response to the request. | Please help me write a proper abstract based on the patent claims. | Mechanisms for clarifying an input question are provided. A question is received for generation of an answer. A set of candidate answers is generated based on an analysis of a corpus of information. Each candidate answer has an evidence passage supporting the candidate answer. Based on the set of candidate answers, a determination is made as to whether clarification of the question is required. In response to a determination that clarification of the question is required, a request is sent for user input to clarify the question. User input is received from the computing device in response to the request and at least one candidate answer in the set of candidate answers is selected as an answer for the question based on the user input. |
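A hedged sketch of the clarification flow described in claims 21 and 25-27: clarification is triggered when several candidates clear a confidence threshold, a crude differentiating factor is taken as the terms that distinguish their evidence passages, and the user's reply then raises or lowers candidate confidences. The threshold, the boost value, and the toy candidates are assumptions, not the QA pipeline of the claims.

```python
def needs_clarification(candidates, threshold=0.8):
    """Clarify when more than one candidate answer exceeds the confidence threshold."""
    return sum(1 for c in candidates if c["confidence"] >= threshold) > 1

def differentiating_terms(passage_a, passage_b):
    """Crude differentiating factor: terms present in one evidence passage but not the other."""
    a, b = set(passage_a.lower().split()), set(passage_b.lower().split())
    return (a - b) | (b - a)

def apply_user_input(candidates, user_terms, boost=0.1):
    """Boost candidates whose evidence matches the user's reply, penalize the rest."""
    for c in candidates:
        if user_terms & set(c["evidence"].lower().split()):
            c["confidence"] = min(1.0, c["confidence"] + boost)
        else:
            c["confidence"] = max(0.0, c["confidence"] - boost)
    return sorted(candidates, key=lambda c: c["confidence"], reverse=True)

candidates = [
    {"answer": "1969", "evidence": "Apollo 11 landed on the Moon in 1969", "confidence": 0.85},
    {"answer": "1972", "evidence": "Apollo 17 landed on the Moon in 1972", "confidence": 0.82},
]
if needs_clarification(candidates):
    factor = differentiating_terms(candidates[0]["evidence"], candidates[1]["evidence"])
    # A clarifying question built from `factor` would be posed; suppose the user answers "17".
    best = apply_user_input(candidates, {"17"})
    print(best[0]["answer"])
```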
1. A system for providing semantic reasoning comprising: an extended semantic model comprising existing concepts for a specific knowledge domain, existing relationships among the existing concepts, and logic comprising conditions and processes that cause a new concept or a new relationship to be inferred from an existing concept or an existing relationship, respectively; a semantic knowledge database comprising existing nodes and existing links and wherein the existing nodes represent instances of the existing concepts, and the existing links represent instances of the existing relationships; an inference engine configured to add new nodes and new links to the semantic knowledge database by following the logic of the extended semantic model; and a computing system comprising at least a processor configured to execute the logic of the extended semantic model. 2. The system of claim 1, further comprising a data normalizer configured to receive data and normalize them to the existing concepts and existing relationships of the extended semantic model. 3. The system of claim 1, further comprising a repertoire library and wherein the repertoire library comprises a set of functions that are identified in the extended semantic model and are invoked by the inference engine. 4. The system of claim 3, wherein the set of functions comprise one of resolver, context-gate, collector, qualifier or operators, and wherein each type of function fulfills a specific role in the inference process. 5. The system of claim 3, wherein each function is assigned a unique identification number. 6. The system of claim 1, further comprising a user interface configured to query the semantic knowledge database and to provide decision support to a user. 7. The system of claim 1, wherein the extended semantic model further comprises upstream concepts and downstream concepts and wherein the upstream concepts are connected to the downstream concepts via relationships that are inferred from the upstream. 8. The system of claim 7, wherein the extended semantic model further comprises notes and wherein each note comprises data associated with an existing concept. 9. The system of claim 8, wherein said data comprise values associated with an existing concept or a chain of links connecting data within an upstream concept with a downstream concept. 10. The system of claim 1, wherein the extended semantic model further comprises properties and wherein said properties are used by the inference engine for implementing a specific logic for instancing a link. 11. The system of claim 1, wherein said existing concepts comprise fact concepts and wherein the fact concepts are directly observed or known. 12. The system of claim 11, wherein said existing concepts comprise insight concepts, and wherein the insight concepts are inferred by the inference engine from fact concepts. 13. The system of claim 12, wherein relationships between fact concepts and insight concepts are automatically inferred by the inference engine. 14. The system of claim 13 wherein the semantic knowledge database further comprises fact nodes corresponding to the fact concepts and insight nodes corresponding to the insight concepts. 15. The system of claim 8, wherein the semantic knowledge database further comprises attributes of nodes and links and wherein said attributes represent instances of said notes. 16. 
The system of claim 1, wherein the inference engine adds new concepts and relationships to the semantic knowledge database by first creating instances of fact nodes and associated links, next recursively updating downstream nodes, and then instancing downstream insight nodes and associated links. 17. The system of claim 1, further comprising an extensible meta-language and wherein the extensible meta-language comprises a vocabulary of words and a syntax, and wherein new words are configured to be added to the vocabulary by building upon existing words. 18. The system of claim 17, further comprising an interpreter of the extensible meta-language and wherein the interpreter is configured to provide an interface between the inference engine and the extended semantic model and the semantic knowledge database. 19. The system of claim 18 wherein words in the extensible meta-language are used to query nodes, links and attributes in the semantic knowledge database. 20. The system of claim 18, wherein the extensible meta-language comprises an extension of a FORTH language and wherein the interpreter comprises a FORTH language interpreter. 21. The system of claim 1, wherein the computing system comprises a distributed computing system. 22. The system of claim 1, wherein the extended semantic model is extended by adding new concepts, new fact concepts, new insight concepts, or another extended semantic model. 23. A method for providing semantic reasoning comprising: providing an extended semantic model comprising existing concepts for a specific knowledge domain, existing relationships among the existing concepts, and logic comprising conditions and processes that cause a new concept or a new relationship to be inferred from an existing concept or an existing relationship, respectively; providing a semantic knowledge database comprising existing nodes, and existing links and wherein the existing nodes represent instances of the existing concepts, and the existing links represent instances of the existing relationships; providing an inference engine configured to add new concepts and relationships to the semantic knowledge database by following the logic of the extended semantic model; and providing a computing system comprising at least a processor configured to execute the logic of the extended semantic model. | Please help me write a proper abstract based on the patent claims. | A system for providing semantic reasoning includes an extended semantic model, a semantic knowledge database, and an inference engine. The extended semantic model includes existing concepts for a specific knowledge domain, existing relationships among the existing concepts, and logic including conditions and processes that cause a new concept or a new relationship to be inferred from an existing concept or an existing relationship, respectively. The semantic knowledge database includes existing nodes and existing links and the existing nodes represent instances of the existing concepts, and the existing links represent instances of the existing relationships. The inference engine is configured to add new nodes and new links to the semantic knowledge database by following the logic of the extended semantic model. |
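The inference engine in the claims adds inferred ("insight") nodes and links by following the logic of the extended semantic model. A minimal forward-chaining sketch of that pattern is shown below; the triple store, the single rule, and all node names are illustrative assumptions rather than the claimed architecture.

```python
def infer(facts, rules):
    """Apply rule functions (knowledge base -> new links) until nothing new is added."""
    knowledge = set(facts)
    while True:
        new = set()
        for rule in rules:
            new |= rule(knowledge) - knowledge
        if not new:
            return knowledge
        knowledge |= new

# Fact links stored as (subject, relation, object) triples.
facts = {("sensor_7", "reports", "high_temperature"),
         ("high_temperature", "indicates", "overheating_risk")}

# Logic of the model: if X reports Y and Y indicates Z, infer the insight link (X, warns_of, Z).
def warns_of(kb):
    return {(x, "warns_of", z)
            for (x, r1, y) in kb if r1 == "reports"
            for (y2, r2, z) in kb if r2 == "indicates" and y2 == y}

print(infer(facts, [warns_of]))   # the inferred link is added alongside the fact links
```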
1. An apparatus for establishing operation rules in a home automation system, comprising: a processor; a memory in electronic communication with the processor; and instructions stored in the memory, the instructions being executable by the processor to: receive a spoken command having a plurality of rule setting terms; establish at least one operation rule for the home automation system based on the spoken command; and store the at least one operation rule for later use by the home automation system. 2. The apparatus of claim 1, wherein the instructions are executable by the processor to: initiate a rules mode for the home automation system prior to receiving the spoken command. 3. The apparatus of claim 2, wherein the instructions are executable by the processor to: terminate the rules mode after establishing the at least one operation rule. 4. The apparatus of claim 2, wherein the initiating the rules mode includes receiving a touch input at a control panel of the home automation system. 5. The apparatus of claim 2, wherein the initiating the rules mode includes receiving a spoken trigger word or an audible sound. 6. The apparatus of claim 1, wherein the instructions are executable by the processor to: deliver a request for clarification of the spoken command. 7. The apparatus of claim 6, wherein the request for clarification includes an audible message. 8. The apparatus of claim 6, wherein the request for clarification includes a displayed message visible on a control panel of the home automation system. 9. The apparatus of claim 1, wherein the instructions are executable by the processor to: generate a message that includes a restatement of the spoken command and a request for confirmation of the correctness of the restatement of the spoken command. 10. The apparatus of claim 1, wherein the instructions are executable by the processor to: generate a message to a user of the home automation system that includes a restatement of the at least one operation rule and requests confirmation of the accuracy of the at least one operation rule. 11. A computer-program product for establishing operation rules in a home automation system, the computer-program product comprising a non-transitory computer-readable medium storing instructions executable by a processor to: initiate a rule setting mode for the home automation system; receive a spoken command having a plurality of rule setting terms; generate at least one operation rule for the home automation system based on the spoken command; and generate a message that includes a restatement of the at least one operation rule. 12. The computer-program product of claim 11, wherein at least some of the plurality of rule setting terms are defined during installation of the home automation system at a property. 13. The computer-program product of claim 11, wherein at least some of the plurality of rule setting terms are generic to a plurality of different home automation systems associated with a plurality of different properties. 14. The computer-program product of claim 11, wherein the plurality of rule setting terms include at least one static term that is generic to a plurality of different home automation systems, and at least one dynamic term that is uniquely defined for the home automation system. 15. 
The computer-program product of claim 11, wherein the instructions are executable by the processor to: store the at least one operation rule for later use by the home automation system, wherein storing the at least one operation rule includes storing in a local database of the home automation system. 16. The computer-program product of claim 11, wherein the instructions are executable by the processor to: generate a message requesting confirmation of the spoken command. 17. The computer-program product of claim 16, wherein the instructions are executable by the processor to: receive confirmation of the spoken command. 18. A computer-implemented method for establishing operation rules in a home automation system, comprising: initiating a rules mode for the home automation system; receiving a spoken command having a plurality of rule setting terms; establishing at least one operation rule for the home automation system using the spoken command; storing the at least one operation rule; and operating at least one function of the home automation system based on the at least one stored operation rule. 19. The method of claim 18, wherein the plurality of rule setting terms include at least one static term that is pre-programmed into the home automation system prior to installation. 20. The method of claim 18, wherein the plurality of rule setting terms include at least one dynamic term that is uniquely programmed into the home automation system no sooner than installation at a property. | Please help me write a proper abstract based on the patent claims. | Methods and systems are described for setting operation rules for use in controlling aspects of a home automation system. According to at least one embodiment, an apparatus for establishing operation rules in a home automation system includes a processor, a memory in electronic communication with the processor, and instructions stored in the memory which are executable by the processor to receive a spoken command having a plurality of rule setting terms, establish at least one operation rule for the home automation system based on the spoken command, and store the at least one operation rule for later use by the home automation system. |
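As a loose illustration of turning a transcribed spoken command into a stored operation rule (claims 14 and 19-20 distinguish static terms shared across installations from dynamic terms defined per property), the sketch below applies a toy pattern to a transcript. The vocabulary, the pattern, and the rule shape are all invented; a real system would sit behind a speech recognizer and a richer grammar.

```python
import re

STATIC_TERMS = {"when", "turn on", "turn off", "lock", "unlock"}   # generic across systems
DYNAMIC_TERMS = {"front porch light", "garage door", "kitchen"}    # defined at installation

def parse_rule(transcript):
    """Very small pattern: 'when <trigger>, <action> the <device>'."""
    m = re.match(r"when (.+?), (turn on|turn off|lock|unlock) the (.+)", transcript.lower())
    if not m:
        return None                       # would trigger a clarification request instead
    trigger, action, device = m.groups()
    if device not in DYNAMIC_TERMS:
        return None
    return {"trigger": trigger, "action": action, "device": device}

rule = parse_rule("When the sun sets, turn on the front porch light")
print(rule)   # stored for later use by the automation controller
```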
1. A system for guiding the operation of an application on a first device, the system comprising a first device and a second device, wherein: the application on the first device is configured to: present a character on a display of the first device, wherein the character is associated with a text-to-speech voice, obtain an audio signal from a microphone of the first device, wherein the audio signal comprises speech of a user of the first device, and transmit audio data to the second device in real time, wherein the audio data is generated from the audio signal; the second device is configured to: receive the audio data from the first device, cause audio to be played using the audio data, present a plurality of phrases as suggestions of phrases to be spoken by the character, receive an input from a user of the second device that specifies a selected phrase to be spoken by the character, and cause phrase data to be transmitted to the first device corresponding to the selected phrase; and the application on the first device is configured to: receive the phrase data corresponding to the selected phrase, and cause audio to be played from the first device corresponding to the selected phrase, wherein the audio is generated using the text-to-speech voice of the character. | Please help me write a proper abstract based on the patent claims. | The operation of an application on a first device may be guided by a user operating a second device. The application on the first device may present a character on a display of the first device and obtain an audio signal of speech of a user of the first device. Audio data may be transmitted to the second device and corresponding audio may be played from speakers of the second device. The second device may present suggestions of phrases to be spoken by the character displayed on the first device. A user of the second device may select a phrase to be spoken by the character. Phrase data may be transmitted to the first device, and the first device may generate audio of the character speaking the phrase using a text-to-speech voice associated with the character. |
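The round trip in claim 1 above (audio streamed from the first device, a phrase chosen on the second, and the character speaking it back with its text-to-speech voice) can be summarized as two message types and two handlers. The transport and the TTS call are placeholders; the dataclasses and suggestion list below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class AudioChunk:
    samples: bytes          # microphone audio forwarded in real time to the guide

@dataclass
class PhraseSelection:
    phrase: str             # the guide's chosen phrase for the character to speak

SUGGESTIONS = ["Hi there! What's your name?", "Want to hear a joke?", "Great job!"]

def guide_device(received_audio, selected_index):
    # The received audio would be played for the guide here; then a suggestion is chosen.
    return PhraseSelection(SUGGESTIONS[selected_index])

def character_device(selection, speak):
    # `speak` stands in for synthesis with the character's text-to-speech voice.
    speak(selection.phrase)

character_device(guide_device(AudioChunk(b""), 0), speak=print)
```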
1. A scalable system for recalculating, in an event-driven manner, property parameters including connectivity parameters of a neural network, the system comprises: an input component that receives a time varying input signal; a storage component for storing the property parameters of the neural network; a state machine capable of recalculating property parameters of the neural network, wherein the property parameters include connectivity among neurons of the neural network; and an output component that generates output signals reflective of the calculated property parameters of the neural network and the input signal. 2. The system of claim 1, wherein the state machine is capable of generating a unique identifying number for each neuron in the neural network. 3. The system of claim 1, wherein the state machine comprises a Linear Feedback Shift Register (LFSR). 4. The system of claim 3, wherein the LFSR is configured to generate certain property parameters including connectivity. 5. The system of claim 2, wherein the state machine comprises a neuron identification counter (Neuron_Index) with a first predefined initial value and a neuron connectivity counter (Conn_Counter) with a second predefined initial value. 6. The system of claim 5, wherein the state machine is initialized with a third predefined initial value and configured to perform the following: comparing Conn_Counter value to a predefined final value (MAX_Conn); if Conn_Counter value is not equal to MAX_Conn, causing the state machine to update, changing the Conn_Counter to a next value, and updating property parameters of the neuron identified by Neuron_Index in response to the input signal; if Conn_Counter value is equal to MAX_Conn, changing Neuron_Index to a next value; comparing Neuron_Index to a predefined total number of neurons in the neural network (MAX_Neuron); and if Neuron_Index value is not equal to MAX_Neuron, resetting the Conn_Counter to the second predefined initial value and repeating the above steps for next neuron as identified by the Neuron_Index. 7. The system of claim 1, wherein the input component converts the received time varying input signal into a sequence of spikes. 8. The system of claim 1, wherein the state machine recalculates connectivity of a neuron currently being evaluated using a predefined initial value corresponding to the neuron currently being evaluated only when the neuron currently being evaluated fires in response to the input signal. 9. 
The system of claim 5, wherein the storage component is configured to store predefined initial values corresponding to the neurons in the neural network for the state machine and the state machine is configured to perform the following: determining whether a neuron identified by the Neuron_Index fires in response to the input signal; if the neuron identified by the Neuron_Index fires, retrieving from the storage component a predefined initial value corresponding to the neuron identified by the Neuron_Index, initializing Conn_Counter to the second predefined initial value; comparing Conn_Counter value to a predefined final value (MAX_Conn); if Conn_Counter value is not equal to MAX_Conn, causing the state machine to update to a next state, changing Conn_Counter to a next value, and repeating this step; if Conn_Counter value is equal to MAX_Conn, changing Neuron_Index to a next value; comparing Neuron_Index to a predefined maximum number of neurons in the neural network (MAX_Neuron); and if Neuron_Index value is not equal to MAX_Neuron, repeating the above steps for a next neuron as identified by Neuron_Index. 10. The system of claim 1, wherein the storage component includes a cache for storing predefined initial values used by the state machine for recalculating certain property parameters. 11. The system of claim 10, wherein the state machine is configured to further perform the following: calculating an initial value necessary for recalculating certain connectivity parameters upon determining that the initial value necessary for recalculating certain connectivity parameters are not stored in the cache; and updating the cache to include the calculated initial value according to a predetermined cache rule. 12. The system of claim 8, wherein the state machine is configured to further perform the following: maintaining a list of future firing neurons, wherein the recalculating at each time step is conducted only on neurons identified on the list; for each target neuron of a neuron that fires at a current time step, comparing current membrane potential of that target neuron to a corresponding predefined firing threshold of that target neuron; adding identity of a target neuron to the list of future firing neurons if the current membrane potential of that target neuron exceeds the corresponding predefined firing threshold; and removing an identity of a target neuron from the list of future firing neurons if current membrane potential of that target neuron is below the corresponding predefined firing threshold. 13. The system of claim 1, wherein the state machine comprises: a state skip unit with a number (N) of feedback networks, wherein each feedback network generates, in a number (M) of clock cycles, a state of the state machine after sequentially advancing a programmable number (P) of states from the state machine's current state, and M<P; and a multiplexing circuit for updating the state machine by selecting one of the N feedback networks. 14. The system of claim 2, wherein the identifying number of a target neuron in the network is the sum of a predefined offset value and multiple random numbers generated by the state machine so as to center the normalized distribution of the target neurons. 15. The system of claim 1, wherein the state machine is also used to generate a connection type for each neuron of the network. 16. The system of claim 1, wherein the property parameters further include neural delays of each neuron in the network. 17. 
The system of claim 1, further comprises: a plurality of processing elements, wherein each processing element has a state machine and is capable of calculating property parameters of a subset of neurons of the neural network. 18. The system of claim 2, wherein the state machine is configured to cache predefined initial values of the state machine corresponding to N neurons last fired. 19. A computer-implemented method for recalculating network property parameters of a neural network including connectivity parameters in an event-driven manner, the method comprises: initializing property parameters of the neural network; receiving, at an evaluating neuron of the neural network, a neural input corresponding to a time varying input signal to the neural network; recalculating by a state machine of the neural network at least some of the property parameters of the evaluating neuron, wherein the property parameters are random but determined after initialization; determining whether the evaluating neuron is to generate a neural output to its target neurons in the neural network; and if the evaluating neuron is determined to generate a neural output to its target neurons in the neural network, propagating the output of the evaluating neuron to its target neurons. 20. The method of claim 19, wherein the property parameters of the neural network include one or more parameters of the group consisting of maximum number of neurons in the neural network, one or more random number generation seed values, neural axonal and dendritic delay values, positive connectivity strength values, negative connectivity strength values, neuron refractory period, decay rate of neural membrane potential, neuron membrane potential, and neuron membrane leakage parameter. 21. The method of claim 19, wherein the determining comprises: calculating current membrane potential of the evaluating neuron based on the neural input and the connectivity parameters of the evaluating neuron; comparing the calculated membrane potential to a firing threshold value of the evaluating neuron; and reporting that the evaluating neuron is to generate an output if the calculated membrane potential exceeds the firing threshold value. 22. The method of claim 19, wherein the recalculating comprises: using a pseudo-random number generator with a pre-defined start value to calculate the property parameters. 23. The method of claim 22, wherein the pseudo-random number generator comprises a Linear Feedback Shift Register (LFSR). 24. The method of claim 19, wherein the recalculating comprises: retrieving a stored pre-defined initial value corresponding to the evaluating neuron; and calculating connectivity parameters of the evaluating neuron using the retrieved seed value. 25. The method of claim 19, wherein the recalculating comprises calculating connectivity of a neuron currently being evaluated only when the neuron currently being evaluated fires in response to the input signal. 26. The method of claim 19, further comprises retrieving an initial value of the state machine for recalculating the connectivity parameters of the evaluating neuron from a cache coupled to the state machine. 27. The method of claim 26, wherein the retrieving further comprises: calculating the initial value of the state machine for recalculating the connectivity parameters of the evaluating neuron upon determining that the cache does not store the initial value; and updating the cache to include the calculated initial value according to a predetermined cache rule. 28. 
The method of claim 19, further comprises: maintaining a list of future firing neurons, wherein the updating at each time step is conducted only on neurons identified on the list; for each target neuron of a neuron that fires at a current time step, comparing the current membrane potential of that target neuron to a corresponding predefined firing threshold of that target neuron; adding an identity of a target neuron to the list of future firing neurons if the current membrane potential of that target neuron exceeds the corresponding predefined firing threshold; and removing an identity of a target neuron from the list of future firing neurons if the current membrane potential of that target neuron is below the corresponding predefined firing threshold. 29. The method of claim 19, wherein the recalculating further comprises: sequentially advancing a programmable number (P) of states beyond the current state of the state machine in a number (M) of clock cycles, wherein M<P. 30. The method of claim 19, further comprises: taking sum of the state machine results to form a uniform distribution; and adding an offset to center the normalized distribution in calculating addresses of the target neurons. 31. The method of claim 19, wherein the recalculating further comprises using the state machine to generate a distinct connection type for each neuron of the network. 32. The method of claim 19, wherein the property parameters include neural delays of the neurons. 33. The method of claim 19, wherein the recalculating is carried out by a plurality of processing elements, wherein each processing element has a state machine and is capable of calculating property parameters of a subset of neurons of the neural network. | Please help me write a proper abstract based on the patent claims. | Systems and methods achieving scalable and efficient connectivity in neural algorithms by re-calculating network connectivity in an event-driven way are disclosed. The disclosed solution eliminates the storing of a massive amount of data relating to connectivity used in traditional methods. In one embodiment, a deterministic LFSR is used to quickly, efficiently, and cheaply re-calculate these connections on the fly. An alternative embodiment caches some or all of the LFSR seed values in memory to avoid sequencing the LFSR through all states needed to compute targets for a particular active neuron. Additionally, connections may be calculated in a way that generates neural networks with connections that are uniformly or normally (Gaussian) distributed. |
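The central idea in the claims above is that per-neuron connectivity is never stored: a deterministic state machine (an LFSR) is re-seeded for a firing neuron and regenerates the same target list on demand, and summing several draws plus an offset re-centres the targets around the source neuron. A minimal Python sketch of that idea follows; the 16-bit tap positions, the seeding formula, the three-draw sum, and all constants are illustrative assumptions rather than the patent's actual parameters.

```python
def lfsr16(state: int) -> int:
    """Advance a 16-bit Fibonacci LFSR one step (taps 16, 14, 13, 11; state must be non-zero)."""
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    return (state >> 1) | (bit << 15)

def targets_for_neuron(neuron_index: int, n_neurons: int, max_conn: int) -> list[int]:
    """Re-derive the target list of a firing neuron from a per-neuron seed.

    Nothing is stored per connection: summing three LFSR draws and re-centring with an
    offset yields target addresses clustered around the source index (modulo network size).
    """
    state = ((neuron_index * 2654435761 + 1) & 0xFFFF) or 1   # deterministic non-zero seed
    targets = []
    for _ in range(max_conn):                                 # Conn_Counter loop, 0..MAX_Conn-1
        draws = []
        for _ in range(3):
            state = lfsr16(state)
            draws.append(state % 101)                         # three draws in [0, 100]
        offset = neuron_index - 150                           # re-centre the ~[0, 300] sum on the source
        targets.append((sum(draws) + offset) % n_neurons)
    return targets

if __name__ == "__main__":
    # The same neuron always regenerates the same connectivity, so none of it is stored.
    assert targets_for_neuron(42, 1000, 8) == targets_for_neuron(42, 1000, 8)
    print(targets_for_neuron(42, 1000, 8))
```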
1. A method comprising: selecting a set of seed content providers from a set of content providers; for each seed content provider, training a model that predicts a likelihood that a user will perform an interaction with a content item provided by the seed content provider; clustering the seeds into a smaller number of clusters, where the seeds are clustered based on a performance of each model for a corresponding seed on data of the other seeds; and for each of the clusters, training a shared model for all of the seeds of the cluster. 2. The method of claim 1, further comprising: receiving a request for predicting user responses to a content item associated with a content provider; and querying a database of the shared models to identify a shared model for the content provider. 3. The method of claim 1, further comprising: assigning a content provider that is not a seed content provider to a cluster. 4. The method of claim 3, further comprising re-training the shared model for the seeds and the content provider assigned to the cluster. 5. The method of claim 1, wherein the clusters are determined based on a distance metric, where the distance metric between a first seed and a second seed indicates similarity between performance of the model for the first seed on data of the other seeds and performance of the model for the second seed on data of the other seeds. 6. The method of claim 1, wherein the clusters are determined based on a distance metric, where the distance metric between a first seed and a second seed indicates similarity between performance of the models on data of the first seed and performance of the models on data of the second seed. 7. The method of claim 1, wherein the number of the clusters is determined to minimize a loss indicating predictive error of the shared models. 8. The method of claim 7, wherein the loss further indicates computational complexity of the shared models. 9. The method of claim 1, wherein for each of the clusters, the shared model for the cluster is trained based on aggregated data of the seeds of the cluster. 10. The method of claim 1, further comprising training a general model for all seeds. 11. A method comprising: receiving a request for predicting user responses to a content item associated with a content provider; querying a database of a plurality of shared models to identify a shared model for the content provider, where the plurality of shared models are generated by: selecting a set of seed content providers from a set of content providers, for each seed content provider, training a model that predicts a likelihood that a user will perform an interaction with a content item provided by the seed content provider, clustering the seeds into a smaller number of clusters, where the seeds are clustered based on performance of each model for a corresponding seed on data of the other seeds, and for each of the clusters, training a shared model for all of the seeds of the cluster to generate the plurality of shared models; and predicting the user responses for the content item associated with the content provider by using the identified shared model. | Please help me write a proper abstract based on the patent claims. | An online system, such as a social networking system, generates shared models for one or more clusters of categories. A shared model for a cluster is common to the categories assigned to the cluster. 
In this manner, the shared models are specific to the group of categories (e.g., selected content providers) in each cluster while requiring a reasonable computational complexity for the online system. The categories are clustered based on the performance of a model specific to a category on data for other categories. |
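The claims above describe a pipeline: train one response model per seed content provider, compare how each seed's model performs on the other seeds' data, cluster seeds whose models behave alike, and train one shared model per cluster on the pooled data. The sketch below shows that pipeline with scikit-learn on synthetic data; logistic regression, k-means, accuracy as the cross-performance metric, and two clusters are stand-in choices, not the method fixed by the claims.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy stand-in data: one (features, click-label) set per seed content provider.
seeds = {f"seed_{i}": (rng.normal(i % 3, 1.0, size=(200, 5)),
                       rng.integers(0, 2, size=200)) for i in range(6)}

# 1. One response-prediction model per seed.
models = {s: LogisticRegression(max_iter=1000).fit(X, y) for s, (X, y) in seeds.items()}

# 2. Cross-performance matrix: accuracy of seed i's model on seed j's data.
names = list(seeds)
perf = np.array([[models[i].score(*seeds[j]) for j in names] for i in names])

# 3. Cluster seeds whose models behave similarly across the other seeds' data.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(perf)

# 4. One shared model per cluster, trained on the aggregated data of its seeds.
shared = {}
for c in set(labels):
    members = [n for n, l in zip(names, labels) if l == c]
    X = np.vstack([seeds[n][0] for n in members])
    y = np.concatenate([seeds[n][1] for n in members])
    shared[c] = LogisticRegression(max_iter=1000).fit(X, y)
    print(f"cluster {c}: {members}")
```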
1. A control method for determining a quality indicator of medical technology recording results data from a contrast-agent-assisted tomography scan of an examination structure via a tomography system, the method comprising: automatically deriving, at least one of during and directly after the tomography scan, at least one control parameter value from the recording results data in respect of a contrast agent image region, the at least one control parameter value representing a quality of the recording results data in the contrast agent image region. 2. The control method of claim 1, wherein the at least one control parameter value is derived by at least one of the tomography system and a contrast agent administration system. 3. The control method of claim 1, wherein the at least one control parameter value is derived on the basis of thresholds. 4. The control method of claim 3, wherein a threshold value, on which the derivation is based, comprises at least one of a minimum radiation value of a contrast agent and a minimum absorption value in the region of a significant structure of an examination object. 5. The control method of claim 1, wherein the at least one control parameter value represents a result of an object identification of a significant structure of an examination object. 6. The control method of claim 5, wherein the object identification comprises a segmentation of the significant structure of surrounding structures of the examination object. 7. The control method of claim 1, wherein on the basis of the quality indicator a signal is emitted to a user if the quality of the recording results data is at least one of sufficient, unsatisfactory and questionable. 8. A method for control adjustment of a contrast-agent-assisted tomography scan sequence of a medical technology tomography system, the method comprising: adjusting a number of control values for the tomography scan sequence as a function of a quality indicator at least one of determined in the control method of claim 1, a control parameter value derived in the context thereof, and examination data used to derive the control parameter values. 9. The method of claim 8, wherein the number of control values for the tomography scan sequence is adjusted such that a parameter value, to be expected according to at least one of a simulation and preliminary estimation, is altered in a follow-up scan scenario, essentially designed similarly to a scan scenario, as could be established in the context of the control method of claim 1, such that it represents an improved quality of recording results data. 10. The method of claim 9, wherein the number of adjusted control values comprises at least one contrast agent administration control parameter value, used for control of automatic contrast agent administration in the context of the tomography scan. 11. The method of claim 10, wherein an injection protocol for the automatic contrast agent administration is modified by adjusting the contrast agent administration control parameter value in the injection protocol. 12. 
A control system for determining a quality indicator of medical technology recording results data from a contrast-agent-assisted tomography scan of an examination structure using a tomography system, the control system comprising: an input interface for the recording results data; and a derivation unit to, at least one of during and directly after the tomography scan, automatically derive at least one control parameter value from the recording results data in respect of a contrast agent image region, the at least one control parameter value representing a quality of the recording results data in the contrast agent image region. 13. A tomography system, comprising: a recording unit; and the control system of claim 12. 14. A contrast agent administration system, comprising: a contrast agent administration control; and the control system of claim 12. 15. A computer program product, loadable directly into a processor of a programmable control system, including program code segments to execute the control method of claim 1 when the program product is executed on the control system. 16. The control method of claim 7, wherein on the basis of the quality indicator a signal is emitted to a user if the quality of the recording results data is at least one of sufficient, unsatisfactory and questionable, related to a previously defined purpose of the tomography scan. 17. A computer program product, loadable directly into a processor of a programmable control system, including program code segments to execute the control method of claim 8 when the program product is executed on the control system. 18. A computer readable medium including program code segments for, when executed on a control system, causing the control system to implement the method of claim 1. 19. A computer readable medium including program code segments for, when executed on a control device of a radar system, causing the control device of the radar system to implement the method of claim 8. | Please help me write a proper abstract based on the patent claims. | A control method is disclosed for determining a quality indicator of medical technology recording results data from a tomography scan of an examination structure, which scan is supported by a contrast agent, by way of a tomography system. According to an embodiment of the invention, at least one control parameter value is automatically derived from the recording results data in respect of a contrast agent image region during and/or directly after the tomography scan, which value represents a quality of the recording results data in the contrast agent image region. A control system for such a determination is also disclosed. |
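At its core the claimed control method derives a threshold-based quality indicator from the contrast-agent image region and, when the scan falls short, adjusts control values such as the injection protocol for a follow-up scan. The sketch below illustrates that loop on a random toy volume; the enhancement threshold, the flow-rate scaling rule, and the protocol fields are invented for illustration only.

```python
import numpy as np

MIN_MEAN_ENHANCEMENT = 150.0   # assumed minimum mean absorption value in the contrast region

def quality_indicator(volume: np.ndarray, region_mask: np.ndarray) -> dict:
    """Derive a simple control parameter value from the contrast-agent image region."""
    mean_value = float(volume[region_mask].mean())
    return {"mean_enhancement": mean_value, "sufficient": mean_value >= MIN_MEAN_ENHANCEMENT}

def adjust_injection_protocol(protocol: dict, indicator: dict) -> dict:
    """Raise the flow rate for a follow-up scan when the last scan's enhancement was too low."""
    if not indicator["sufficient"]:
        shortfall = MIN_MEAN_ENHANCEMENT / max(indicator["mean_enhancement"], 1.0)
        new_flow = round(protocol["flow_ml_per_s"] * min(shortfall, 1.5), 2)
        protocol = {**protocol, "flow_ml_per_s": new_flow}
    return protocol

if __name__ == "__main__":
    vol = np.random.default_rng(1).normal(100, 20, size=(64, 64, 64))
    mask = np.zeros_like(vol, dtype=bool)
    mask[20:40, 20:40, 20:40] = True            # stands in for the segmented contrast region
    ind = quality_indicator(vol, mask)
    print(ind, adjust_injection_protocol({"flow_ml_per_s": 4.0, "volume_ml": 80}, ind))
```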
1. A data processing device using a neural network comprising: an input layer; a hidden layer; and an output layer, wherein the hidden layer comprises: a digital-to-analog converter; a first neuron circuit; a second neuron circuit; and a comparator, wherein each of the first neuron circuit and the second neuron circuit comprises a first potential holding circuit and a second potential holding circuit, wherein the first potential holding circuit and the second potential holding circuit are electrically connected to a bit line, wherein the first potential holding circuit is configured to hold a potential of a first analog signal, wherein the second potential holding circuit is configured to hold a potential of a second analog signal, wherein the first potential holding circuit comprises a first transistor, a second transistor, and a third transistor, wherein a gate of the second transistor is electrically connected to one of a source and a drain of the first transistor, wherein a gate of the third transistor is electrically connected to a first wiring to which a first digital signal is supplied, wherein the second potential holding circuit comprises a fourth transistor, a fifth transistor, and a sixth transistor, wherein a gate of the fifth transistor is electrically connected to one of a source and a drain of the fourth transistor, wherein a gate of the sixth transistor is electrically connected to a second wiring to which a second digital signal is supplied, wherein a third analog signal is output from the first neuron circuit to the second neuron circuit, is input to the comparator to which a reference voltage is applied, and is converted into a third digital signal, and wherein the third digital signal is output to the gate of the third transistor included in the second neuron circuit or the gate of the sixth transistor included in the second neuron circuit. 2. The data processing device according to claim 1, wherein the third analog signal is a signal obtained by adding a product of the first analog signal and the first digital signal to a product of the second analog signal and the second digital signal. 3. The data processing device according to claim 1, wherein each of the first transistor and the fourth transistor comprises an oxide semiconductor. 4. The data processing device according to claim 1, wherein each of the second transistor, the third transistor, the fifth transistor, and the sixth transistor comprises silicon. 5. The data processing device according to claim 1, further comprising a third neuron circuit in the hidden layer. 6. An electronic component comprising: the data processing device according to claim 1, and a lead electrically connected to the data processing device. 7. An electronic device comprising: the electronic component according to claim 6, a printed circuit board where the electronic component is mounted, and a housing incorporating the printed circuit board. 8. 
A data processing device using a neural network comprising: an input layer; a hidden layer; and an output layer, wherein the hidden layer comprises: a digital-to-analog converter; a first neuron circuit; and a second neuron circuit, wherein each of the first neuron circuit and the second neuron circuit comprises a first potential holding circuit and a second potential holding circuit, wherein the first potential holding circuit and the second potential holding circuit are electrically connected to a bit line, wherein the first potential holding circuit is configured to hold a potential of a first analog signal, wherein the second potential holding circuit is configured to hold a potential of a second analog signal, wherein the first potential holding circuit comprises a first transistor, a second transistor, and a third transistor, wherein a gate of the second transistor is electrically connected to one of a source and a drain of the first transistor, wherein a gate of the third transistor is electrically connected to a first wiring to which a first digital signal is supplied, wherein the second potential holding circuit comprises a fourth transistor, a fifth transistor, and a sixth transistor, wherein a gate of the fifth transistor is electrically connected to one of a source and a drain of the fourth transistor, wherein a gate of the sixth transistor is electrically connected to a second wiring to which a second digital signal is supplied, and wherein a third analog signal is output from the first neuron circuit to the second neuron circuit. 9. The data processing device according to claim 8, wherein the third analog signal is a signal obtained by adding a product of the first analog signal and the first digital signal to a product of the second analog signal and the second digital signal. 10. The data processing device according to claim 8, wherein each of the first transistor and the fourth transistor comprises an oxide semiconductor. 11. The data processing device according to claim 8, wherein each of the second transistor, the third transistor, the fifth transistor, and the sixth transistor comprises silicon. 12. An electronic component comprising: the data processing device according to claim 8, and a lead electrically connected to the data processing device. 13. An electronic device comprising: the electronic component according to claim 12, a printed circuit board where the electronic component is mounted, and a housing incorporating the printed circuit board. | Please help me write a proper abstract based on the patent claims. | To provide a data processing device using a neural network that can suppress increase in the occupied area of a chip. A product-sum operation circuit is formed using a transistor including an oxide semiconductor having an extremely small off-state current. Signals are input to and output from the product-sum operation circuits included in a plurality of hidden layers through comparators. The outputs of the comparators are used as digital signals to be input signals for the next-stage hidden layer. The combination of a digital circuit and an analog circuit can eliminate the need for an analog-to-digital converter or a digital-to-analog converter which occupies a large area of a chip. |
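Functionally, each neuron circuit in the claims holds two analog potentials (acting as weights), gates them with two digital signals, sums the products into a third analog signal, and a comparator against a reference voltage turns that sum into the 1-bit digital input of the next neuron circuit. The arithmetic is easy to state in a few lines of Python; the specific values and the two-input width below are just examples.

```python
def product_sum(analog_weights, digital_inputs):
    """Analog-weight x digital-input product-sum, as held by the two potential holding circuits."""
    assert len(analog_weights) == len(digital_inputs)
    return sum(w * d for w, d in zip(analog_weights, digital_inputs))

def comparator(analog_value, reference_voltage):
    """Convert the summed analog signal into a 1-bit digital signal for the next neuron circuit."""
    return 1 if analog_value > reference_voltage else 0

if __name__ == "__main__":
    # First neuron circuit: two held analog potentials gated by two digital signals.
    third_analog = product_sum([0.8, 0.3], [1, 0])     # = 0.8*1 + 0.3*0
    third_digital = comparator(third_analog, reference_voltage=0.5)
    # The 1-bit result would drive the gate of the corresponding transistor in the second neuron.
    print(third_analog, third_digital)
```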
1. A machine learning device which performs a task using a plurality of industrial machines and learns task sharing for the plurality of industrial machines, the device comprising: a state variable observation unit which observes state variables of the plurality of industrial machines; and a learning unit which learns task sharing for the plurality of industrial machines, on the basis of the state variables observed by the state variable observation unit. 2. The machine learning device according to claim 1, further comprising a decision unit which decides and issues, as a command, a sharing detail of the task for the plurality of industrial machines by referring to the task sharing learned by the learning unit. 3. The machine learning device according to claim 2, wherein the machine learning device is connected to each of the plurality of industrial machines via a network, the state variable observation unit obtains the state variables of the plurality of industrial machines via the network, and the decision unit sends the sharing detail of the task to the plurality of industrial machines via the network. 4. The machine learning device according to claim 1, wherein the state variable observation unit observes at least one of a task time from start to end of a series of tasks repeatedly performed by the plurality of industrial machines, and a task load on each of the plurality of industrial machines in an interval from the start to the end of the tasks, or observes at least one of an achievement level of the tasks performed by the plurality of industrial machines and a difference in task volume in each of the plurality of industrial machines. 5. The machine learning device according to claim 4, wherein the state variable observation unit further obtains at least one of a change in production volume in an upstream process, and a change in production volume upon stop of the industrial machine for maintenance performed periodically. 6. The machine learning device according to claim 1, wherein the learning unit learns task sharing for maintaining a volume of production by the plurality of industrial machines, averaging a load on each of the plurality of industrial machines, and maximizing a volume of the task performed by the plurality of industrial machines. 7. The machine learning device according to claim 1, wherein each of the plurality of industrial machines comprises a robot, and the plurality of robots perform the task on the basis of the learned task sharing. 8. The machine learning device according to claim 1, wherein the learning unit comprises: a reward computation unit which computes a reward on the basis of output from the state variable observation unit; and a value function update unit which updates a value function for determining a value of task sharing for the plurality of industrial machines, in accordance with the reward on the basis of output from the state variable observation unit and output from the reward computation unit. 9. The machine learning device according to claim 1, wherein the learning unit comprises: an error computation unit which computes an error on the basis of input teacher data and output from the state variable observation unit; and a learning model update unit which updates a learning model for determining an error of task sharing for the plurality of industrial machines, on the basis of output from the state variable observation unit and output from the error computation unit. 10. 
The machine learning device according to claim 1, wherein the machine learning device further comprises a neural network. 11. An industrial machine cell comprising the plurality of industrial machines; and the machine learning device according to claim 1. 12. A manufacturing system comprising a plurality of industrial machine cells according to claim 11, wherein the machine learning devices are provided in correspondence with the industrial machine cells, and the machine learning devices provided in correspondence with the industrial machine cells are configured to share or exchange data with each other via a communication medium. 13. The manufacturing system according to claim 12, wherein the machine learning device is located on a cloud server. 14. A machine learning method for performing a task using a plurality of industrial machines and learning task sharing for the plurality of industrial machines, the method comprising: observing state variables of the plurality of industrial machines; and learning task sharing for the plurality of industrial machines, on the basis of the observed state variables. 15. The machine learning method according to claim 14, wherein observing the state variables comprises one of: observing at least one of a task time from start to end of a series of tasks repeatedly performed by the plurality of industrial machines, and a task load on each of the plurality of industrial machines in an interval from the start to the end of the tasks, and observing at least one of an achievement level of the tasks performed by the plurality of industrial machines and a difference in task volume in each of the plurality of industrial machines. | Please help me write a proper abstract based on the patent claims. | A machine learning device, which performs a task using a plurality of industrial machines and learns task sharing for the plurality of industrial machines, includes a state variable observation unit which observes state variables of the plurality of industrial machines; and a learning unit which learns task sharing for the plurality of industrial machines, on the basis of the state variables observed by the state variable observation unit. |
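The learning unit in the claims combines a reward computed from observed state variables (task time, load, achievement level) with a value-function update over task-sharing decisions. A bandit-style simplification of that loop is sketched below; the action set of sharing ratios, the toy reward combining throughput and load imbalance, and the epsilon-greedy/incremental-update constants are all assumptions, not the patent's algorithm.

```python
import random

random.seed(0)
ACTIONS = [0.3, 0.4, 0.5, 0.6, 0.7]       # share of the task volume assigned to machine A
q_values = {a: 0.0 for a in ACTIONS}       # value function over task-sharing decisions
ALPHA, EPSILON = 0.1, 0.2

def observe(share_a: float) -> float:
    """Toy state observation: reward = throughput minus a penalty for unbalanced load."""
    throughput = 100 * (1 - abs(share_a - 0.55))          # machine A is assumed a bit faster
    imbalance = abs(share_a - (1 - share_a))
    return throughput - 50 * imbalance + random.gauss(0, 1)

for episode in range(500):
    if random.random() < EPSILON:
        a = random.choice(ACTIONS)                        # explore another sharing split
    else:
        a = max(q_values, key=q_values.get)               # exploit the best-valued split so far
    reward = observe(a)
    q_values[a] += ALPHA * (reward - q_values[a])         # incremental value-function update

print({a: round(v, 1) for a, v in q_values.items()})      # the 0.5 split typically ends up highest
```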
1. A system for generating fabricated pattern data records (XDRs) based on data from accessible data sources, comprising: a) an XDR core module containing: one or more modeling and pattern creation modules for modeling original data received from said data sources; one or more synthetic data generation modules for generating fabricated data, based on the patterns created by said modeling and pattern creation modules; a data splitting module for splitting the data into training and testing sets according to a predetermined policy; an XDR storage database for storing created patterns and fabricated data; a configuration manager for controlling the operation of said modeling and pattern creation modules and of said synthetic data generation modules; b) a plurality of XDR agents being software components for communicating with said data sources and accessing relevant data, using a unique API of each data source, each of said XDR agents is capable of: identifying the data-structures of its corresponding data source; transforming said data structures into a unified input structure being used by said XDR core module; c) a data-store communication module for mediating between said XDR agents and said XDR core modules by using data transformation. 2. A system according to claim 1, in which the modeling and pattern creation modules use Model and Patterns Creation algorithms (MPCs) being capable of discovering patterns that reflect the relationships, conditions and constants of the available data. 3. A system according to claim 2, in which the modeling tasks include: state-transitions learning of a system or an individual; learning probabilistic cause-effect conditions among a given set of random variables; context-aware learning. 4. A system according to claim 2, in which the synthetic data generation modules use Syntactic Data Production (SDP) algorithms to generate new and fabricated data samples utilizing the models learned by the MPCs. 5. A system according to claim 1, further comprising a Query API and a Query Processor to receive and process data-generation queries. 6. A system according to claim 5, further comprising a query cache for caching queries and query results. 7. A system according to claim 1, further comprising a User Interface for allowing interaction with the XDR core module and server-side components. 8. A system according to claim 1, in which the data sources are located locally on the computerized device that runs the data fabrication system, or on an external computerized device. 9. A system according to claim 1, in which the data splitting module splits the data into training and testing sets by using random based or time based splitting. 10. A system according to claim 1, in which the data is aggregated and prepared for further usage. | Please help me write a proper abstract based on the patent claims. 
| A system for generating fabricated pattern data records (XDRs) based on data from accessible data sources, which comprises an XDR core module containing one or more modeling and pattern creation modules for modeling original data received from the data sources; one or more synthetic data generation modules for generating fabricated data, based on the patterns created by the modeling and pattern creation modules; a data splitting module for splitting the data into training and testing sets according to a predetermined policy; an XDR storage database for storing created patterns and fabricated data; a configuration manager for controlling the operation of the modeling and pattern creation modules and of the synthetic data generation modules; a plurality of XDR agents being software components for communicating with the data sources and accessing relevant data, using a unique API of each data source. Each of the XDR agents is capable of identifying the data-structures of its corresponding data source and transforming the data structures into a unified input structure used by the XDR core module. The system further comprises a data-store communication module for mediating between the XDR agents and the XDR core modules by using data transformation. |
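Stripped to its essentials, the claimed system splits the original data into training and testing sets, learns a pattern from the training portion, and then produces fabricated records that follow that pattern. The sketch below uses a mean-and-covariance "pattern" with a multivariate-normal generator as a deliberately simple stand-in for the modeling and synthetic data generation modules named in the claims; the field names and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def split(data: np.ndarray, train_fraction: float = 0.8):
    """Random-based split of the original records into training and testing sets."""
    idx = rng.permutation(len(data))
    cut = int(train_fraction * len(data))
    return data[idx[:cut]], data[idx[cut:]]

def create_pattern(train: np.ndarray) -> dict:
    """A very simple 'model and pattern creation' step: mean vector plus covariance."""
    return {"mean": train.mean(axis=0), "cov": np.cov(train, rowvar=False)}

def fabricate(pattern: dict, n_records: int) -> np.ndarray:
    """Synthetic data production: draw new records that follow the learned pattern."""
    return rng.multivariate_normal(pattern["mean"], pattern["cov"], size=n_records)

if __name__ == "__main__":
    original = rng.normal([10, 50, 0.2], [2, 5, 0.05], size=(1000, 3))   # toy numeric record fields
    train, test = split(original)
    xdrs = fabricate(create_pattern(train), n_records=500)
    print(xdrs.mean(axis=0).round(2), test.mean(axis=0).round(2))        # similar statistics
```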
1. A method for building flexible persistence models for education institutions, the method comprising: translating units of academic progress of a non-traditional learning program of an education institution into states of a Markov model; instantiating the Markov model to quantify transitions of students between the states as parameters of state transitions; defining flexible persistence in terms of state-transitional characteristics of the students using the Markov model with the parameters of state transitions, wherein the flexible persistence indicates student enrollment from one collection of academic progress units to another collection of academic progress units; extracting features from the Markov model with the parameters of state transitions that are related to the non-traditional learning program of the education institution using the defined flexible persistence; and building at least one flexible persistence model using the extracted features for different segments of the students. 2. The method of claim 1, wherein translating the units of academic progress of the non-traditional learning program of the education institution into the states of the Markov model includes translating flexible sessions of the non-traditional learning program into states of the Markov model so that each flexible session corresponds to at least one of the states of the Markov model, wherein the flexible sessions are not predefined with respect to sequence. 3. The method of claim 2, wherein extracting the features from the Markov model with the parameters of state transitions includes extracting features related to the flexible sessions of the non-traditional learning program from the Markov model with the parameters of state transitions. 4. The method of claim 1, wherein translating the units of academic progress of the non-traditional learning program of the education institution into the states of the Markov model includes translating competency units of the non-traditional learning program into states of a modified Markov model augmented by a hierarchical time-series tree structure so that each competency unit corresponds to at least one of the states of the modified Markov model. 5. The method of claim 4, wherein the hierarchical time-series tree structure includes parent states representing the students advancing at different speeds for a first competency unit of a course and child states representing the students advancing at different speeds for a second competency unit of the course from the parent states. 6. The method of claim 4, wherein extracting the features from the Markov model with the parameters of state transitions includes extracting features related to the competency units of the non-traditional learning program from the Markov model with the parameters of state transitions. 7. The method of claim 6, wherein the features related to the competency units of the non-traditional learning program are based on a sliding window of time so that the features are derived from the sliding windows of time at different times. 8. The method of claim 7, wherein the features related to the competency units of the non-traditional learning program are based on a data-adaptive comparison basis so that the features are derived from comparison of the students who progress at a similar rate of competency unit mastery based on the sliding window of time. 9. 
The method of claim 8, wherein extracting the features from the Markov model with the parameters of state transitions includes using dynamic time warping for overlapping sessions with respect to time or using anchoring of the overlapping sessions for session comparisons. 10. A computer-readable storage medium containing program instructions for a method for building flexible persistence models for education institutions, wherein execution of the program instructions by one or more processors of a computer system causes the one or more processors to perform steps comprising: translating units of academic progress of a non-traditional learning program of an education institution into states of a Markov model; instantiating the Markov model to quantify transitions of students between the states as parameters of state transitions; defining flexible persistence in terms of state-transitional characteristics of the students using the Markov model with the parameters of state transitions, wherein the flexible persistence indicates student enrollment from one collection of academic progress units to another collection of academic progress units; extracting features from the Markov model with the parameters of state transitions that are related to the non-traditional learning program of the education institution using the defined flexible persistence; and building at least one flexible persistence model using the extracted features for different segments of the students. 11. The computer-readable storage medium of claim 10, wherein translating the units of academic progress of the non-traditional learning program of the education institution into the states of the Markov model includes translating flexible sessions of the non-traditional learning program into states of the Markov model so that each flexible session corresponds to at least one of the states of the Markov model, wherein the flexible sessions are not predefined with respect to sequence. 12. The computer-readable storage medium of claim 11, wherein extracting the features from the Markov model with the parameters of state transitions includes extracting features related to the flexible sessions of the non-traditional learning program from the Markov model with the parameters of state transitions. 13. The computer-readable storage medium of claim 10, wherein translating the units of academic progress of the non-traditional learning program of the education institution into the states of the Markov model includes translating competency units of the non-traditional learning program into states of a modified Markov model augmented by a hierarchical time-series tree structure so that each competency unit corresponds to at least one of the states of the modified Markov model. 14. The computer-readable storage medium of claim 13, wherein the hierarchical time-series tree structure includes parent states representing the students advancing at different speeds for a first competency unit of a course and child states representing the students advancing at different speeds for a second competency unit of the course from the parent states. 15. The computer-readable storage medium of claim 13, wherein extracting the features from the Markov model with the parameters of state transitions includes extracting features related to the competency units of the non-traditional learning program from the Markov model with the parameters of state transitions. 16. 
The computer-readable storage medium of claim 15, wherein the features related to the competency units of the non-traditional learning program are based on a sliding window of time so that the features are derived from the sliding windows of time at different times. 17. The computer-readable storage medium of claim 16, wherein the features related to the competency units of the non-traditional learning program are based on a data-adaptive comparison basis so that the features are derived from comparison of the students who progress at a similar rate of competency unit mastery based on the sliding window of time. 18. A flexible persistence modeling system comprising: memory; and a processor configured to: translate units of academic progress of a non-traditional learning program of an education institution into states of a Markov model; instantiate the Markov model to quantify transitions of students between the states as parameters of state transitions; define flexible persistence in terms of state-transitional characteristics of the students using the Markov model with the parameters of state transitions, wherein the flexible persistence indicates student enrollment from one collection of academic progress units to another collection of academic progress units; extract features from the Markov model with the parameters of state transitions that are related to the non-traditional learning program of the education institution using the defined flexible persistence; and build at least one flexible persistence model using the extracted features for different segments of the students. 19. The flexible persistence modeling system of claim 18, wherein the processor is configured to translate flexible sessions of the non-traditional learning program into states of the Markov model so that each flexible session corresponds to at least one of the states of the Markov model, wherein the flexible sessions are not predefined with respect to sequence. 20. The flexible persistence modeling system of claim 19, wherein the processor is configured to extract features related to the flexible sessions of the non-traditional learning program from the Markov model with the parameters of state transitions. 21. The flexible persistence modeling system of claim 18, wherein the processor is configured to translate competency units of the non-traditional learning program into states of a modified Markov model augmented by a hierarchical time-series tree structure so that each competency unit corresponds to at least one of the states of the modified Markov model. 22. The flexible persistence modeling system of claim 21, wherein the processor is configured to extract features related to the competency units of the non-traditional learning program from the Markov model with the parameters of state transitions. 23. The flexible persistence modeling system of claim 22, wherein the features related to the competency units of the non-traditional learning program are based on a sliding window of time so that the features are derived from the sliding windows of time at different times. 24. The computer-readable storage medium of claim 16, wherein the features related to the competency units of the non-traditional learning program are based on a data-adaptive comparison basis so that the features are derived from comparison of the students who progress at a similar rate of competency unit mastery based on the sliding window of time. | Please help me write a proper abstract based on the patent claims. 
| A flexible persistence modeling system and method for building flexible persistence models for education institutions using a Markov model based on units of academic progress of a non-traditional learning program of an education institution. The Markov model is used to quantify transitions of students between the states as parameters of state transitions so that features from the Markov model with the parameters of state transitions can be extracted that are related to the non-traditional learning program of the education institution using defined flexible persistence. The extracted features can then be used to build at least one flexible persistence model for different segments of the students. |
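The modeling step in the claims maps enrollment units to Markov states and instantiates the model by quantifying state-to-state transitions, from which persistence features are extracted. The sketch below estimates a transition matrix from toy per-student session sequences and derives two example features; the three state names, the sample sequences, and the chosen features are illustrative assumptions, not the patent's definitions.

```python
from collections import Counter

STATES = ["enrolled", "stopped_out", "withdrawn"]      # assumed flexible-session states

# Toy per-student state sequences, one entry per flexible session window.
students = {
    "s1": ["enrolled", "enrolled", "enrolled", "enrolled"],
    "s2": ["enrolled", "stopped_out", "enrolled", "enrolled"],
    "s3": ["enrolled", "stopped_out", "stopped_out", "withdrawn"],
}

def transition_probabilities(sequences):
    """Instantiate the Markov model: relative frequency of each state-to-state transition."""
    counts = Counter((a, b) for seq in sequences for a, b in zip(seq, seq[1:]))
    probs = {}
    for a in STATES:
        total = sum(counts[(a, b)] for b in STATES)
        probs[a] = {b: (counts[(a, b)] / total if total else 0.0) for b in STATES}
    return probs

def persistence_features(seq, probs):
    """Two example features for a persistence model: re-enrollment rate and sequence likelihood."""
    steps = list(zip(seq, seq[1:]))
    reenroll = sum(b == "enrolled" for _, b in steps) / len(steps)
    likelihood = 1.0
    for a, b in steps:
        likelihood *= probs[a][b]
    return {"reenrollment_rate": round(reenroll, 2), "sequence_likelihood": round(likelihood, 4)}

probs = transition_probabilities(students.values())
for sid, seq in students.items():
    print(sid, persistence_features(seq, probs))
```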
1. A method of classifying data, comprising: selecting a hypothesis class among entire classes; generating output data with regard to the entire classes by applying a classification algorithm to input data; modifying the input data to increase a value of the hypothesis class among the output data in response to a re-classification condition being met; and setting the modified input data to be new input data. 2. The method of claim 1, wherein the re-classification condition comprises at least one of a value of the hypothesis class being lower than a preset threshold and a number of a re-classification iteration being lower than a preset number of the re-classifications. 3. The method of claim 1, further comprising: outputting the input data and the output data in response to the determination that the re-classification condition is not met. 4. The method of claim 1, wherein the modifying of the input data comprises: defining a loss function of the classification algorithm using the hypothesis class, calculating a gradient vector of the defined loss function, and modifying the input data on a basis of the gradient vector. 5. The method of claim 4, wherein the modifying of the input data comprises reducing each value of the input data by a preset positive value in a direction of a gradient. 6. The method of claim 4, wherein the modifying of the input data on a basis of the gradient vector comprises: for each value of the input data, modifying to 0 a value of the input data in which a result from multiplying a sign and a gradient of each value is greater than or equal to a reference value, or a value of the input data, in which an absolute value of the gradient is greater than or equal to the reference value. 7. The method of claim 4, wherein the modifying of the input data on a basis of the gradient vector comprises: reducing, in the direction in which a gradient descends, each value of the input data by a positive value. 8. The method of claim 1, further comprising: generating initial output data of the entire classes by applying the classification algorithm to the received input data, and the selecting of the hypothesis class is performed on a basis of a size of each value of the initial output data. 9. The method of claim 1, wherein the classification algorithm is one of a neural network, a convolutional neural network (CNN), and a recurrent neural network (RNN). 10. An apparatus to classify data, comprising: a hypothesis class selector configured to select one hypothesis class among entire classes; a data classifier configured to generate output data with regard to the entire classes by applying a classification algorithm to input data; and a data setter configured to modify input data to increase a value of the hypothesis class among the output data and set the modified input data to new input data in response to a determination that a re-classification condition is met. 11. The apparatus of claim 10, wherein the re-classification condition comprises at least one of a value of the hypothesis class being lower than a preset threshold and a value of the hypothesis class being lower than a preset number of the re-classifications. 12. The apparatus of claim 10, in response to a determination that the re-classification condition is not met, further comprising: a result output configured to output the input data and the output data. 13. 
The apparatus of claim 10, wherein the data setter comprises: a loss function definer configured to define a loss function of the classification algorithm by using the hypothesis class, a gradient calculator configured to calculate a gradient vector with respect to the defined loss function, and a data modifier configured to modify the input data on a basis of the gradient vector. 14. The apparatus of claim 13, wherein the data modifier is configured to reduce each value of the input data by a preset positive value in a direction of a gradient. 15. The apparatus of claim 13, wherein the data modifier is configured to, for each value of the input data, modify to 0 a value of the input data in which a result from multiplying a sign and a gradient of each value is greater than or equal to a reference value, or a value of the input data in which an absolute value of the gradient is greater than or equal to the reference value. 16. The apparatus of claim 13, wherein the data modifier modifies the input data on a basis of the gradient vector by reducing, in the direction in which a gradient descends, each value of the input data by a positive value. 17. The apparatus of claim 10, wherein the hypothesis class selector is configured to generate initial output data with respect to the entire classes by applying the classification algorithm to the received input data and select the hypothesis class on a basis of a size of each value of the initial output data. 18. The apparatus of claim 10, wherein the classification algorithm is one of a neural network, a convolutional neural network (CNN), and a recurrent neural network (RNN). 19. A method of segmenting a region of interest (ROI), comprising: selecting one hypothesis class among entire classes; generating output data with regard to the entire classes by applying a classification algorithm to input data; modifying the input data to increase a value of the hypothesis class among the output data; and segmenting, as ROIs, an area from the modified input data based on the modifying. 20. The method of claim 19, further comprising: in response to a determination that the ROI is to be re-segmented, generating new input data that comprises the segmented area that is continuous; and repeatedly performing operations subsequent to the generating of the output data. 21. The method of claim 19, wherein the segmenting of the area comprises segmenting a continuous area, of which a value is increased as ROIs from the modified input data by using a segmentation algorithm. 22. The method of claim 21, wherein the segmentation algorithm comprises at least one of a graph cut algorithm and a conditional random field (CRF) algorithm. 23. The method of claim 19, wherein the modifying of the input data comprises: defining a loss function of the classification algorithm using the hypothesis class; calculating a gradient vector of the defined loss function; and modifying the input data based on the gradient vector. 24. 
An apparatus to segment a region of interest (ROI), comprising: a hypothesis class selector configured to select a hypothesis class among entire classes; a data classifier configured to generate output data about the entire classes by applying a classification algorithm to input data; a data setter configured to modify the input data to increase a value of the hypothesis class among the output data and outputting a modification result indicative thereof; and an ROI segmentor configured to segment, as ROIs, an area from the modified input data based on the modification result. 25. The apparatus of claim 24, wherein, in response to a determination that the ROI is to be re-segmented, the data setter is configured to generate new input data that comprises the one or more segmented areas. 26. The apparatus of claim 24, wherein the ROI segmentor is configured to segment a continuous area, of which values are increased, as ROIs from the modified input data using a segmentation algorithm. 27. The apparatus of claim 26, wherein the segmentation algorithm comprises at least one of a graph cut algorithm and a conditional random field (CRF) algorithm. 28. The apparatus of claim 24, wherein the data setter comprises: a loss function definer configured to define a loss function of the classification algorithm using the hypothesis class; a gradient calculator configured to calculate a gradient vector of the defined loss function; and a data modifier configured to modify the input data on a basis of the gradient vector. | Please help me write a proper abstract based on the patent claims. | A method and an apparatus are described to classify data. The method and apparatus includes selecting a hypothesis class among entire classes. The method and corresponding apparatus generate output data with regard to the entire classes by applying a classification algorithm to input data, and modify the input data to increase a value of the hypothesis class among the output data in response to a re-classification condition being met. The modified input data is set to be new input data. |
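The mechanism shared by the method and apparatus claims above is gradient-based input modification: define a loss from the hypothesis class, take its gradient with respect to the input, nudge the input so the hypothesis-class value rises, and (for the ROI variant) segment the parts of the input that changed the most. The sketch below does this with a fixed toy softmax classifier in NumPy; the mean-change ROI rule stands in for the graph-cut or CRF segmentation mentioned in the claims, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
W, b = rng.normal(size=(10, 3)), np.zeros(3)      # fixed toy linear classifier: 10 features -> 3 classes

def classify(x):
    z = x @ W + b
    e = np.exp(z - z.max())
    return e / e.sum()                            # output data: softmax class values

x0 = rng.normal(size=10)                          # original input data
hypothesis = int(np.argmax(classify(x0)))         # hypothesis class chosen from the initial output

x, step, target = x0.copy(), 0.05, 0.95
for _ in range(200):                              # bounded number of re-classification iterations
    p = classify(x)
    if p[hypothesis] >= target:                   # re-classification condition no longer met
        break
    grad = W @ (p - np.eye(3)[hypothesis])        # gradient of -log p[hypothesis] w.r.t. the input
    x = x - step * grad                           # move the input in the direction the loss descends

change = np.abs(x - x0)
roi = np.flatnonzero(change > change.mean())      # most-modified features stand in for the ROI
print(f"class {hypothesis}: {classify(x0)[hypothesis]:.2f} -> {classify(x)[hypothesis]:.2f}, "
      f"ROI features {roi}")
```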
1. An apparatus for a predictive analytics factory, the apparatus comprising: a function generator module configured to generate a plurality of learned functions based on training data without prior knowledge regarding suitability of the generated learned functions for the training data; a function evaluator module configured to perform an evaluation of the plurality of learned functions using test data and to maintain evaluation metadata for the plurality of learned functions; and a predictive compiler module configured to form a predictive ensemble based on the evaluation metadata, the predictive ensemble comprising a subset of multiple learned functions from the plurality of learned functions. 2. The apparatus of claim 1, further comprising a feature selector module configured to, in response to the function generator module generating the plurality of learned functions, determine a subset of features from the training data for use in the predictive ensemble based on the evaluation metadata, the predictive compiler module configured to form the predictive ensemble using the selected subset of features. 3. The apparatus of claim 2, wherein one or more of the features of the training data are selected by a user as required and the feature selector module is configured to select one or more optional features to include in the subset of features with the required one or more features. 4. The apparatus of claim 1, wherein the predictive compiler module is configured to combine learned functions from the plurality of learned functions to form combined learned functions, the predictive ensemble comprising at least one combined learned function. 5. The apparatus of claim 1, wherein the predictive compiler module is configured to add one or more layers to at least a portion of the plurality of learned functions to form one or more extended learned functions, at least one of the one or more layers comprising a probabilistic model, the predictive ensemble comprising at least one extended learned function. 6. The apparatus of claim 1, wherein the predictive compiler module is configured to form the predictive ensemble by organizing the subset of multiple learned functions into the predictive ensemble, the predictive ensemble comprising the subset of multiple learned functions and a rule set synthesized from the evaluation metadata for the subset of learned functions to direct data through the multiple learned functions such that different learned functions of the ensemble process different subsets of the data based on the evaluation metadata. 7. The apparatus of claim 1, further comprising an orchestration module configured to direct workload data through the predictive ensemble based on the evaluation metadata data to produce a classification for the workload data and a confidence metric for the classification. 8. The apparatus of claim 1, further comprising an interface module configured to receive an analytics request from a client and to provide an analytics result to the client, the analytics request comprising workload data with similar features to the training data, the analytics result produced by the predictive ensemble. 9. 
A system for a predictive analytics factory, the system comprising: a host computing device in communication with at least one client; a predictive analytics module executing on the host computing device, the predictive analytics module determining a plurality of learned functions using training data received from the at least one client without prior knowledge regarding suitability of the determined learned functions for the training data, selecting a subset of the learned functions based on evaluation metadata generated for the plurality of learned functions, and forming a predictive ensemble comprising the selected subset of the learned functions from the plurality of learned functions. 10. The system of claim 9, wherein the predictive analytics module comprises a predictive compiler and the plurality of learned function comprise computer readable code configured by the predictive compiler to accept an input comprising instances of one or more features of the training data and to provide a result. 11. The system of claim 10, wherein the result comprises one or more of a classification, a confidence metric, an inferred function, a regression function, an answer, a prediction, a recognized pattern, a rule, a recommendation, a subset of the instances, and a subset of the one or more features. 12. The system of claim 9, wherein the predictive analytics module generates and evaluates the plurality of learned functions using parallel computing on multiple processors of the host computing device. 13. The system of claim 9, wherein the predictive analytics module determines the plurality of learned functions in response to a request from the at least one client, the request comprising a query, the ensemble formed to predict a result for the query. 14. The system of claim 13, wherein the predictive analytics module returns the ensemble to the at least one client to satisfy the request. 15. The system of claim 13, wherein the predictive analytics module returns the result for the query to the at least one client to satisfy the request, the predictive analytics module maintaining the ensemble in a library of a plurality of generated ensembles from which the predictive analytics module satisfies subsequent requests from the at least one client. 16. The system of claim 13, wherein the predictive analytics module receives the request from the at least one client using one or more of an application programming interface, a shared library, a hardware command interface, and a data network. 17. The system of claim 9, wherein the predictive ensemble comprises a rule set synthesized from the evaluation metadata to direct data through the subset of the learned functions. 18. A predictive analytics ensemble comprising: multiple learned functions synthesized from a larger plurality of learned functions, the larger plurality of learned functions generated from training data without prior knowledge of a suitability of the larger plurality of learned functions for the training data; a metadata rule set synthesized from the evaluation metadata for the plurality of learned functions for directing data through different learned functions of the multiple learned functions to produce a result; and an orchestration module configured to direct the data through the different learned functions of the multiple learned functions based on the synthesized metadata rule set to produce the result. 19. 
The predictive analytics ensemble of claim 18, further comprising a predictive correlation module configured to correlate one or more features of the multiple learned functions with a confidence metric associated with the result. 20. The predictive analytics ensemble of claim 19, wherein the predictive correlation module is configured to provide a listing of the one or more features correlated with the result to a client. | Please help me write a proper abstract based on the patent claims. | Apparatuses, systems, methods, and computer program products are disclosed for a predictive analytics factory. A function generator module is configured to determine a plurality of learned functions based on training data without prior knowledge regarding suitability of the generated learned functions for the training data. A function evaluator module is configured to perform an evaluation of the plurality of learned functions using test data and to maintain evaluation metadata for the plurality of learned functions. A predictive compiler module is configured to form a predictive ensemble comprising a subset of multiple learned functions from the plurality of learned functions. |
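To make the factory workflow in the preceding claims concrete, the sketch below generates a pool of learned functions from training data without prior knowledge of their suitability, records evaluation metadata on held-out test data, and compiles the best-scoring subset into a simple voting ensemble. The scikit-learn model zoo, the top-3 selection rule, and the majority vote are my own assumptions; they stand in for the claimed function generator, function evaluator, and predictive compiler rather than reproducing them.

```python
# Hedged sketch of a generate / evaluate / compile pipeline (assumed details).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "function generator": learned functions built blindly from the training data
candidates = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_train, y_train)
              for d in (1, 2, 3, 5, None)]
candidates += [KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train) for k in (3, 7)]
candidates.append(LogisticRegression(max_iter=1000).fit(X_train, y_train))

# "function evaluator": keep evaluation metadata for every learned function
metadata = [{"model": m, "test_accuracy": accuracy_score(y_test, m.predict(X_test))}
            for m in candidates]

# "predictive compiler": form the ensemble from the best-scoring subset
top = sorted(metadata, key=lambda r: r["test_accuracy"], reverse=True)[:3]

def ensemble_predict(X_new):
    votes = np.stack([r["model"].predict(X_new) for r in top])
    return np.round(votes.mean(axis=0)).astype(int)   # majority vote over binary labels

print("kept accuracies:", [round(r["test_accuracy"], 3) for r in top])
print("ensemble accuracy:", round(accuracy_score(y_test, ensemble_predict(X_test)), 3))
```

The claims go further than this sketch — synthesizing a rule set from the evaluation metadata to route different subsets of the data to different learned functions — but the metadata-driven selection step is the same idea.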
1. A method, comprising: determining, via a processor, a data element of a first geographic area; creating in a database, via the processor, a first dataset representative of the first geographic area based on the data element, the first dataset having less than a threshold amount of information; identifying, via the processor, a second geographic area having the data element, the database including a second dataset for the second geographic area, the second dataset being a model trained with information known to be representative of the second geographic area; identifying, via the processor, a third geographic area having the data element, the database including a third dataset for the third geographic area, the third dataset being a model trained with information known to be representative of the third geographic area; combining, via the processor, the second and third dataset to generate a composite model; and profiling an unknown geographic area by populating, via the processor, the first dataset for the first geographic area with data from the composite model such that the amount of information in the first dataset satisfies the threshold amount of information. 2. A method as defined in claim 1, wherein at least one of the second dataset or the third dataset is trained using machine learning algorithms prior to the combining of the second and third dataset. 3. A method as defined in claim 1, wherein determining the data element of the first geographic area is based on at least one of satellite imagery of the first geographic area, a survey or census related to the first geographic area, applications monitoring the first geographic area, or Internet entries related to the first geographic area. 4. A method as defined in claim 1, wherein using the composite model to populate the first dataset for the first geographic area includes adding an estimation of an aspect of the first geographic area to the first dataset, the estimation based on the composite model. 5. A method as defined in claim 1, wherein the first data element is at least one of a type, a geography, a demographic, an inhabitant lifestyle, a wealth distribution, a size, or a shape. 6. A method as defined in claim 1, wherein the second and third dataset are combined by weighted averaging the second and third dataset, the second and third dataset weighted based on similarities between the first geographic area and the second and third geographic areas. 7. A method as defined in claim 1, wherein the threshold amount of information is an amount of information required for a learning algorithm to perform accurately on unseen tasks. 8. A tangible computer readable storage medium, comprising instructions that, when executed, cause a machine to at least: identify a first geographic area for which a database does not include a model; determine a first data element of the first geographic area; identify a first trained model corresponding to a second geographic area with the first data element; identify a second trained model corresponding to a third geographic area with the first data element; mix the first trained model and the second trained model to generate a composite model; and use the composite model to represent the first geographic area in the database. 9. 
A storage medium as defined in claim 8, wherein the determining of the first data element of the first geographic area is based on at least one of satellite imagery of the first geographic area, a survey or census related to the first geographic area, applications monitoring the first geographic area, or Internet entries related to the first geographic area. 10. A storage medium as defined in claim 8, wherein the first data element is at least one of a type, a geography, a demographic, an inhabitant lifestyle, a wealth distribution, a size, or a shape. 11. A storage medium as defined in claim 8, wherein the instructions further cause a machine to: determine a second data element of the first geographic area based on at least one of an image representative of the first geographic area, a survey from the first geographic area, applications monitoring the first geographic area, and the internet; and verify that the first trained model corresponding to the second geographic area has the second data element. 12. A storage medium as defined in claim 11, wherein the second trained model corresponding to the third geographic area does not have the second data element, the instructions further cause a machine to mix the first and second trained models by weighted averaging, the respective weights of the first and second trained models based on the number of matching data elements. 13. A storage medium as defined in claim 8, wherein the first and second trained models are mixed by aggregating the first and second trained models together. 14. A storage medium as defined in claim 8, wherein the composite model satisfies a threshold amount of information required for a learning algorithm to perform accurately on unseen tasks. 15. An apparatus, comprising: a geographic area manager to determine a first data element of a first geographic area; a model searcher to search a database for at least one of a model of the first geographic area, a first trained model corresponding to a second geographic area, and a second trained model corresponding to a third geographic area; a geographic area matcher to determine whether the second geographic area and the third geographic area match the first data element of the first geographic area; a trained model extractor to extract a first trained model corresponding to the second geographic area and a second trained model corresponding to the third geographic area; and a model mixer to blend the first trained model and the second trained model together to represent the model of the first geographic area. 16. An apparatus as defined in claim 15, wherein the first data element is at least one of a type, a geography, a demographic, an inhabitant lifestyle, a wealth distribution, a size, or a shape. 17. An apparatus as defined in claim 15, wherein the model mixer is to blend the first trained model and the second trained model using data fusion algorithms. 18. An apparatus as defined in claim 15, wherein the first trained model is a bootstrap aggregate decision tree learning algorithm and the second trained model is an artificial neural network learning algorithm. 19. 
An apparatus as defined in claim 15, wherein the geographic area manager is to determine a second data element of the first geographic area based on at least one of the image representative of the first geographic area, a survey from the first geographic area, applications monitoring the first geographic area, and the Internet, and further wherein the geographic area matcher is to verify that at least one of the second geographic area and the third geographic area have the second data element of the first geographic area. 20. An apparatus as defined in claim 15, wherein when the second geographic area has the second data element of the first geographic area and the third geographic area does not have the second data element of the first geographic area, the geographic area matcher determines the second geographic area is a better match to the first geographic area than the third geographic area. | Please help me write a proper abstract based on the patent claims. | Methods and apparatus to generate data for geographic areas are disclosed. An example method includes identifying a first geographic area for which a database does not include a model, determining a first data element of the first geographic area, identifying a first trained model corresponding to a second geographic area with the first data element, identifying a second trained model corresponding to a third geographic area with the first data element, mixing the first trained model and the second trained model to generate a composite model, and using the composite model to represent the first geographic area in the database. |
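The model-mixing idea in the claims just above can be sketched as a weighted blend of two trained area models, with each weight proportional to how many of the new area's data elements that model matches. The AreaModel class, the element sets, and the linear prediction are assumptions made for illustration only.

```python
# Minimal sketch of blending trained models into a composite for an unmodeled area.
import numpy as np

class AreaModel:
    """Stand-in for a model trained on data known to represent one geographic area."""
    def __init__(self, name, elements, coef):
        self.name, self.elements, self.coef = name, set(elements), np.asarray(coef, dtype=float)
    def predict(self, features):
        return float(self.coef @ features)

area_b = AreaModel("coastal_urban", {"coastal", "urban", "high_density"}, [0.6, 0.3, 0.1])
area_c = AreaModel("coastal_rural", {"coastal", "rural"},                 [0.2, 0.5, 0.3])
target_elements = {"coastal", "urban"}      # data elements observed for the area with no model

def composite_predict(models, target_elements, features):
    # weight each trained model by its number of matching data elements
    weights = np.array([len(m.elements & target_elements) for m in models], dtype=float)
    weights /= weights.sum()
    estimate = sum(w * m.predict(features) for w, m in zip(weights, models))
    return estimate, weights

features = np.array([1.0, 0.4, 0.2])
estimate, w = composite_predict([area_b, area_c], target_elements, features)
print("weights:", w, "composite estimate:", round(estimate, 3))
```

In the claims the blended objects are full trained models (for example bagged decision trees or a neural network), so a production version would fuse predictions or parameters from those models rather than toy linear coefficients.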
1. A method for training a learning machine, comprising: augmenting data from fine-grained image recognition with labeled data annotated by one or more hyper-classes, performing a multi-task deep learning on the labeled data; allowing fine-grained classification and hyper-class classification to share and learn the same feature layers; and applying regularization in the multi-task deep learning to exploit one or more relationships between the fine-grained classes and the hyper-classes. 2. The method of claim 1, comprising two common hyper-classes, with one being a super-class that subsumes a set of fine-grained classes and another being factor-classes on different viewpoints of a car that explain the large intra-class variance. 3. The method of claim 1, comprising identifying annotated hyper-classes in the fine-grained data and acquiring a large number of hyper-class labeled images from external sources. 4. The method of claim 3, wherein the external sources include image search engines. 5. The method of claim 1, comprising applying a learning model engine derived from a regularization between the fine-grained recognition and the hyper-class recognition. 6. The method of claim 1, comprising performing data augmentation to utilize auxiliary images so as to improve a generalization performance of learned features. 7. The method of claim 1, comprising applying a hyper-class to capture a ‘has a’ relationship. 8. The method of claim 7, comprising applying the hyper-class to explain intra-class variances or pose variance. 9. The method of claim 1, comprising solving: $\min_{\{w_{v,c}\},\{u_v\},\{w_l\}} L(\{w_{v,c}\},\{u_v\}) + R(\{w_{v,c}\},\{u_v\}) + \sum_{v=1}^{K} r(u_v) + \sum_{l=1}^{H} r(w_l)$, where $w_l$, $l = 1, \ldots, H$, denotes all the weights of the CNN in determining the high level features $h(x)$, $H$ denotes the number of layers before the classifier layers, and $r(w)$ denotes the standard squared Euclidean norm regularizer with an implicit regularization parameter (or a weight decay parameter). 10. The method of claim 1, comprising training the deep CNN by backpropagation using a mini-batch stochastic gradient descent with two sources of data and two loss functions corresponding to the tasks, further comprising sampling images in a mini-batch to determine stochastic gradients. 11. A learning system, comprising: low level feature extractors; high level feature extractors coupled to the low level feature extractors; and a plurality of classifiers receiving high and low level features, with a softmax loss on auxiliary data and a softmax loss on fine-grained data, the classifiers forming a hyper-class augmented and regularized deep Convolutional Neural Network (CNN). 12. The system of claim 11, comprising two common hyper-classes, with one being a super-class that subsumes a set of fine-grained classes and another being factor-classes on different viewpoints of a car that explain the large intra-class variance. 13. The system of claim 11, comprising identifying annotated hyper-classes in fine-grained data and acquiring hyper-class labeled images from external sources. 14. The system of claim 13, wherein the external sources include image search engines. 15. The system of claim 11, comprising a learning model engine derived from a regularization between a fine-grained recognition and a hyper-class recognition. 16. The system of claim 11, wherein data augmentation is used to utilize auxiliary images so as to improve a generalization performance of learned features. 17. 
The system of claim 11, comprising applying a hyper-class to capture a ‘has a’ relationship. 18. The system of claim 17, wherein the hyper-class is used to explain intra-class variances or pose variance. 19. The system of claim 11, comprising code to determine: $\min_{\{w_{v,c}\},\{u_v\},\{w_l\}} L(\{w_{v,c}\},\{u_v\}) + R(\{w_{v,c}\},\{u_v\}) + \sum_{v=1}^{K} r(u_v) + \sum_{l=1}^{H} r(w_l)$, where $w_l$, $l = 1, \ldots, H$, denotes all the weights of the CNN in determining the high level features $h(x)$, $H$ denotes the number of layers before the classifier layers, and $r(w)$ denotes the standard squared Euclidean norm regularizer with an implicit regularization parameter (or a weight decay parameter). 20. The system of claim 11, wherein the deep CNN is trained by backpropagation using a mini-batch stochastic gradient descent with two sources of data and two loss functions corresponding to the tasks, further comprising code for sampling images in a mini-batch to determine stochastic gradients. | Please help me write a proper abstract based on the patent claims. | Systems and methods are disclosed for training a learning machine by augmenting data from fine-grained image recognition with labeled data annotated by one or more hyper-classes, performing multi-task deep learning; allowing fine-grained classification and hyper-class classification to share and learn the same feature layers; and applying regularization in the multi-task deep learning to exploit one or more relationships between the fine-grained classes and the hyper-classes.
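The shared-feature multi-task objective in claims 9 and 19 above — a fine-grained loss plus a hyper-class loss over the same feature layers, with per-parameter regularizers $r(\cdot)$ — can be sketched with a small PyTorch model. The tiny MLP trunk, the loss weighting, and the random mini-batch below are illustrative assumptions; the claims describe a deep CNN trained on real fine-grained and auxiliary images.

```python
# Sketch of joint fine-grained / hyper-class training over shared features (assumed sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=64, feat_dim=32, n_fine=10, n_hyper=3):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())  # shared h(x)
        self.fine_head = nn.Linear(feat_dim, n_fine)     # fine-grained classifier
        self.hyper_head = nn.Linear(feat_dim, n_hyper)   # hyper-class classifier

    def forward(self, x):
        h = self.features(x)
        return self.fine_head(h), self.hyper_head(h)

net = MultiTaskNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1, weight_decay=1e-4)  # plays the role of the r(w) terms

# one mini-batch mixing fine-grained data with auxiliary hyper-class-labeled data
x_fine, y_fine  = torch.randn(16, 64), torch.randint(0, 10, (16,))
x_aux,  y_hyper = torch.randn(16, 64), torch.randint(0, 3, (16,))

fine_logits, _ = net(x_fine)
_, hyper_logits = net(x_aux)
loss = F.cross_entropy(fine_logits, y_fine) + 0.5 * F.cross_entropy(hyper_logits, y_hyper)

opt.zero_grad()
loss.backward()
opt.step()
print("joint loss:", float(loss))
```

The coupling regularizer $R(\{w_{v,c}\},\{u_v\})$ that relates fine-grained and hyper-class weights is omitted here; adding it would mean an extra penalty term computed from the two heads' weight matrices.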
1. A method for anatomical object detection in a medical image comprising: training a deep neural network to detect the anatomical object in medical images; calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network; and detecting the anatomical object in a received medical image of a patient using the approximation of the trained deep neural network. 2. The method of claim 1, wherein training a deep neural network to detect the anatomical object in medical images comprises training a respective filter for each of a plurality of nodes in each of a plurality of layers of the deep neural network, wherein each respective filter is a weight matrix comprising a plurality of weights that weight node outputs of the nodes of a previous one of the plurality of layers, and calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network. 3. The method of claim 2, wherein sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network comprises: reducing a number of non-zero weights in each filter for each of the plurality of layers in the trained deep neural network by setting a predetermined percentage of non-zero weights with lowest magnitudes in each filter equal to zero; and refining the remaining non-zero weights in each filter for each of the plurality of layers to alleviate an effect of reducing the number of non-zero weights in each filter. 4. The method of claim 3, wherein refining the remaining non-zero weights in each filter for each of the plurality of layers to alleviate an effect of reducing the number of non-zero weights in each filter comprises: performing one or more iterations of back-propagation on the approximation of the trained deep neural network resulting from reducing the number of non-zero weights in each filter to refine the remaining non-zero weights in each filter to reduce a cost function that measures an error between predicted anatomical object locations using the approximation of the trained deep neural network and ground truth anatomical object locations in a set of training data. 5. The method of claim 2, wherein sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network comprises: performing re-weighted L1-norm regularization on the weights of the filters for each of the plurality layers of the trained deep neural network, wherein the re-weighted L1-norm regularization drives a plurality of non-zero weights of the filters to zero; and refining the remaining non-zero weights in the filters for each of the plurality of layers to alleviate an effect of driving the plurality of non-zero weights to zero. 6. 
The method of claim 5, wherein performing re-weighted L1-norm regularization on the weights of the filters for each of the plurality layers of the trained deep neural network, wherein the re-weighted L1-norm regularization drives a plurality of non-zero weights of the filters to zero comprises: adding a term that re-weights the L1-norm to a cost function that measures an error between predicted anatomical object locations and ground truth anatomical object locations in a set of training data; and performing back-propagation on the trained deep neural network to refine the weights in the filters for each of the plurality of layers of the trained deep neural network to reduce the cost function with the added term that re-weights the L1-norm. 7. The method of claim 6, wherein refining the remaining non-zero weights in the filters for each of the plurality of layers to alleviate an effect of driving the plurality of non-zero weights to zero comprises: performing one or more iterations of back-propagation on the approximation of the trained deep neural network resulting from driving the plurality of non-zero weights to zero to refine the remaining non-zero weights in the filters to reduce the cost function that measures an error between predicted anatomical object locations and ground truth anatomical object locations in the set of training data, without the added term that re-weights the L1-norm 8. The method of claim 1, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: determining a subset of nodes of a plurality nodes in a current layer of the trained deep neural network that linearly approximate the plurality of nodes in the current layer of the trained deep neural network and removing the plurality of nodes in the current layer that are not in subset of nodes from the trained deep neural network; and updating weights for a next layer of the trained deep neural network based on the subset of nodes remaining in the current layer of the trained deep neural network. 9. The method of claim 8, wherein determining a subset of nodes of a plurality nodes in a current layer of the trained deep neural network that linearly approximate the plurality of nodes in the current layer of the trained deep neural network and removing the plurality of nodes in the current layer that are not in subset of nodes from the trained deep neural network comprises: determining the subset of nodes in the current layer and a mixing matrix that best minimizes an error between each of the plurality of nodes in the current layer and a respective approximation for each of the plurality of nodes in the currently layer calculated by linearly combining the subset of nodes using the mixing matrix, subject to a constraint on a size of the subset of nodes. 10. The method of claim 9, wherein updating weights for a next layer of the trained deep neural network based on the subset of nodes remaining in the current layer of the trained deep neural network comprises: removing filters for the next layer of the trained deep neural network whose indices are not in the subset of nodes in the current layer; and updating the remaining filters for the next layer of the trained deep neural network with weights generated by linearly combining weights of the subset of nodes in the current layer using the mixing matrix to approximate weighted inputs to the next layer from the removed ones of the plurality of nodes in the current layer. 
11. The method of claim 8, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network further comprises: repeating the steps of determining a subset of nodes of a plurality nodes in a current layer of the trained deep neural network that linearly approximate the plurality of nodes in the current layer of the trained deep neural network and removing the plurality of nodes in the current layer that are not in subset of nodes from the trained deep neural network and updating weights for a next layer of the trained deep neural network based on the subset of nodes remaining in the current layer of the trained deep neural network, for each of a plurality of layers in the trained deep neural network, resulting in an initial approximation of the trained deep neural network; and refining the initial approximation of the trained deep neural network by performing one or more iterations of back-propagation on the initial approximation of the trained deep neural network to reduce a cost function that measures an error between predicted anatomical object locations and ground truth anatomical object locations in a set of training data. 12. The method of claim 1, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: reducing a number of nodes in each of a plurality of layers of the trained deep neural network by determining a subset of nodes in each layer that linearly approximates all of the nodes in that layers, resulting in a first approximation of the trained deep neural network; and reducing a number of non-zero weights in a respective filter for each of the nodes in each of the plurality of layers of the first approximation of the trained deep neural network, resulting in a second approximation of the trained deep neural network. 13. The method of claim 1, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: for each of a plurality of nodes in each of a plurality of layers of the trained deep neural network, reconstructing a trained weight matrix for the node using 1-D Haar wavelet bases and wavelet coefficients. 14. The method of claim 13, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network further comprises: for each of the plurality of nodes in each of the plurality of layers of the trained deep neural network, reducing a number of wavelet coefficients used to reconstruct the trained weight matrix for the node. 15. 
The method of claim 14, wherein detecting the anatomical object in a received medical image of a patient using the approximation of the trained deep neural network comprises: storing an integral image of the received medical image in a look-up table; calculating, for each of a plurality of image patches in the received medical image, a respective multiplication result of multiplying the image patch by the 1-D Haar wavelet bases and the transposed Haar 1-D wavelets using look-up operations from the integral image stored in the look-up table; and for each node of a first hidden layer in the approximation of the trained deep neural network, calculating a Frobenius inner product of the wavelet coefficients for that node and the respective multiplication result calculated for each of the plurality of image patches. 16. The method of claim 14, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network further comprises: for each of the plurality of layers of the trained deep neural network, applying principal component analysis (PCA) to the space of the wavelet coefficients over all of the plurality of nodes for the layer. 17. The method of claim 1, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: reconstructing a respective trained weight matrix for each of a plurality of nodes in a current layer of the trained deep neural network using 1-D Haar wavelet bases and respective wavelet coefficients and reducing a number of wavelet coefficients used to reconstruct each respective trained weight matrix; and re-training the approximation of the trained deep neural network resulting from reconstructing the respective trained weight matrix for each of the plurality of nodes in the current layer of the trained deep neural network and reducing the number of the wavelet coefficients used to reconstruct each respective trained weight matrix. 18. The method of claim 1, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: for each of a plurality of layers in the trained deep neural network, applying principal component analysis (PCA) to a space of trained weight matrices over all of a plurality of nodes in that layer. 19. An apparatus for anatomical object detection in a medical image comprising: means for training a deep neural network to detect the anatomical object in medical images; means for calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network; and means for detecting the anatomical object in a received medical image of a patient using the approximation of the trained deep neural network. 20. 
The apparatus of claim 19, wherein the means for training a deep neural network to detect the anatomical object in medical images comprises means for training a respective filter for each of a plurality of nodes in each of a plurality of layers of the deep neural network, wherein each respective filter is a weight matrix comprising a plurality of weights that weight node outputs of the nodes of a previous one of the plurality of layers, and the means for calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: means for sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network. 21. The apparatus of claim 20, wherein the means for sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network comprises: means for reducing a number of non-zero weights in each filter for each of the plurality of layers in the trained deep neural network by setting a predetermined percentage of non-zero weights with lowest magnitudes in each filter equal to zero; and means for refining the remaining non-zero weights in each filter for each of the plurality of layers to alleviate an effect of reducing the number of non-zero weights in each filter. 22. The apparatus of claim 20, wherein the means for sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network comprises: means for performing re-weighted L1-norm regularization on the weights of the filters for each of the plurality layers of the trained deep neural network, wherein the re-weighted L1-norm regularization drives a plurality of non-zero weights of the filters to zero; and means for refining the remaining non-zero weights in the filters for each of the plurality of layers to alleviate an effect of driving the plurality of non-zero weights to zero. 23. The apparatus of claim 19, wherein the means for calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: means for determining a subset of nodes of a plurality nodes in each layer of the trained deep neural network that linearly approximate the plurality of nodes in that layer of the trained deep neural network and removing the plurality of nodes in each layer that are not in subset of nodes from the trained deep neural network; and means for updating weights for each layer of the trained deep neural network based on the subset of nodes remaining in a preceding layer of the trained deep neural network. 24. The apparatus of claim 23, wherein the means for calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network further comprises: means for refining the approximation of the trained deep neural network by performing one or more iterations of back-propagation on the approximation of the trained deep neural network to reduce a cost function that measures an error between predicted anatomical object locations and ground truth anatomical object locations in a set of training data. 25. 
The apparatus of claim 19, wherein the means for calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: means for reducing a number of nodes in each of a plurality of layers of the trained deep neural network by determining a subset of nodes in each layer that linearly approximates all of the nodes in that layers, resulting in a first approximation of the trained deep neural network; and means for reducing a number of non-zero weights in a respective filter for each of the nodes in each of the plurality of layers of the first approximation of the trained deep neural network, resulting in a second approximation of the trained deep neural network. 26. The apparatus of claim 19, wherein the means for calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: means for reconstructing a respective trained weight matrix for each of a plurality of nodes in each of a plurality of layers of the trained deep neural network using 1-D Haar wavelet bases and wavelet coefficients. 27. The apparatus of claim 26, wherein the means for calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network further comprises: means for reducing a number of wavelet coefficients used to reconstruct the respective trained weight matrix for each of the plurality of nodes in each of the plurality of layers of the trained deep neural network. 28. A non-transitory computer readable medium storing computer program instructions for anatomical object detection in a medical image, the computer program instructions defining operations comprising: training a deep neural network to detect the anatomical object in medical images; calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network; and detecting the anatomical object in a received medical image of a patient using the approximation of the trained deep neural network. 29. The non-transitory computer readable medium of claim 28, wherein training a deep neural network to detect the anatomical object in medical images comprises training a respective filter for each of a plurality of nodes in each of a plurality of layers of the deep neural network, wherein each respective filter is a weight matrix comprising a plurality of weights that weight node outputs of the nodes of a previous one of the plurality of layers, and calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network. 30. The non-transitory computer readable medium of claim 29, wherein sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network comprises: reducing a number of non-zero weights in each filter for each of the plurality of layers in the trained deep neural network by setting a predetermined percentage of non-zero weights with lowest magnitudes in each filter equal to zero; and refining the remaining non-zero weights in each filter for each of the plurality of layers to alleviate an effect of reducing the number of non-zero weights in each filter. 31. 
The non-transitory computer readable medium of claim 29, wherein sparsifying the weights of the filters for each of the plurality of layers of the trained deep neural network comprises: performing re-weighted L1-norm regularization on the weights of the filters for each of the plurality layers of the trained deep neural network, wherein the re-weighted L1-norm regularization drives a plurality of non-zero weights of the filters to zero; and refining the remaining non-zero weights in the filters for each of the plurality of layers to alleviate an effect of driving the plurality of non-zero weights to zero. 32. The non-transitory computer readable medium of claim 28, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: determining a subset of nodes of a plurality nodes in a current layer of the trained deep neural network that linearly approximate the plurality of nodes in the current layer of the trained deep neural network and removing the plurality of nodes in the current layer that are not in subset of nodes from the trained deep neural network; and updating weights for a next layer of the trained deep neural network based on the subset of nodes remaining in the current layer of the trained deep neural network. 33. The non-transitory computer readable medium of claim 32, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network further comprises: repeating the steps of determining a subset of nodes of a plurality nodes in a current layer of the trained deep neural network that linearly approximate the plurality of nodes in the current layer of the trained deep neural network and removing the plurality of nodes in the current layer that are not in subset of nodes from the trained deep neural network and updating weights for a next layer of the trained deep neural network based on the subset of nodes remaining in the current layer of the trained deep neural network, for each of a plurality of layers in the trained deep neural network, resulting in an initial approximation of the trained deep neural network; and refining the initial approximation of the trained deep neural network by performing one or more iterations of back-propagation on the initial approximation of the trained deep neural network to reduce a cost function that measures an error between predicted anatomical object locations and ground truth anatomical object locations in a set of training data. 34. The non-transitory computer readable medium of claim 28, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: reducing a number of nodes in each of a plurality of layers of the trained deep neural network by determining a subset of nodes in each layer that linearly approximates all of the nodes in that layers, resulting in a first approximation of the trained deep neural network; and reducing a number of non-zero weights in a respective filter for each of the nodes in each of the plurality of layers of the first approximation of the trained deep neural network, resulting in a second approximation of the trained deep neural network. 35. 
The non-transitory computer readable medium of claim 28, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: for each of a plurality of nodes in each of a plurality of layers of the trained deep neural network, reconstructing a trained weight matrix for the node using 1-D Haar wavelet bases and wavelet coefficients. 36. The non-transitory computer readable medium of claim 35, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network further comprises: for each of the plurality of nodes in each of the plurality of layers of the trained deep neural network, reducing a number of wavelet coefficients used to reconstruct the trained weight matrix for the node. 37. The non-transitory computer readable medium of claim 36, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network further comprises: for each of the plurality of layers of the trained deep neural network, applying principal component analysis (PCA) to the space of the wavelet coefficients over all of the plurality of nodes for the layer. 38. The non-transitory computer readable medium of claim 28, wherein calculating an approximation of the trained deep neural network that reduces the computational complexity of the trained deep neural network comprises: for each of a plurality of layers in the trained deep neural network, applying principal component analysis (PCA) to a space of trained weight matrices over all of a plurality of nodes in that layer. | Please help me write a proper abstract based on the patent claims. | A method and system for approximating a deep neural network for anatomical object detection is discloses. A deep neural network is trained to detect an anatomical object in medical images. An approximation of the trained deep neural network is calculated that reduces the computational complexity of the trained deep neural network. The anatomical object is detected in an input medical image of a patient using the approximation of the trained deep neural network. |
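One approximation route that recurs through these claims — reduce the number of non-zero weights in each layer's filter by zeroing the smallest magnitudes, then refine the surviving weights with a few back-propagation iterations — can be sketched as magnitude pruning with a fixed mask. The tiny fully connected network, the keep ratio, and the random training data are assumptions for illustration; the claims target trained deep networks for anatomical object detection in medical images.

```python
# Sketch of sparsify-then-refine on a toy network (assumed sizes and data).
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 4))
x, y = torch.randn(256, 32), torch.randint(0, 4, (256,))

def sparsify(layer, keep=0.3):
    """Zero all but the top `keep` fraction of weights by magnitude; return the mask."""
    w = layer.weight.data
    k = int(w.numel() * keep)
    threshold = w.abs().flatten().kthvalue(w.numel() - k).values
    mask = (w.abs() > threshold).float()
    w.mul_(mask)
    return mask

masks = {m: sparsify(m) for m in net if isinstance(m, nn.Linear)}

# refinement: a few gradient steps, re-applying the masks so pruned weights stay zero
opt = torch.optim.SGD(net.parameters(), lr=0.05)
for _ in range(20):
    opt.zero_grad()
    F.cross_entropy(net(x), y).backward()
    opt.step()
    for layer, mask in masks.items():
        layer.weight.data.mul_(mask)

sparsity = sum((m == 0).float().mean().item() for m in masks.values()) / len(masks)
print(f"average layer weight sparsity after pruning: {sparsity:.2f}")
```

The other routes in the claims — re-weighted L1 regularization, replacing a layer's nodes with a linear combination of a subset, or reconstructing weight matrices from truncated Haar wavelet coefficients — follow the same approximate-then-refine pattern but need more machinery than fits in a short sketch.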
1. A non-transitory computer readable medium comprising program code executable by a processor for causing the processor to: receive a time series comprising a plurality of data points arranged in a sequential order over a period of time; determine a repetitive characteristic of the time series by analyzing the time series for a pattern that repeats over a predetermined time period; generate an adjusted time series by removing the repetitive characteristic from the time series; determine, using the adjusted time series, an effect of one or more moving events that occur on different dates for two or more consecutive years on the adjusted time series; generate a residual time series by removing the effect of the one or more moving events from the adjusted time series; generate, using the residual time series, a base forecast that is independent of the repetitive characteristic and the effect of the one or more moving events; and generate a predictive forecast by including the repetitive characteristic and the effect of the one or more moving events into the base forecast. 2. The non-transitory computer readable medium of claim 1, further comprising program code executable by the processor for causing the processor to pre-process the time series prior to determining the repetitive characteristic for the time series by: determining that the time series is to be combined with other time series data based on a length of the time series being below a threshold; and generating an aggregate time-series by combining the time series with the other time series data. 3. The non-transitory computer readable medium of claim 2, further comprising program code executable by the processor for causing the processor to pre-process the time series prior to determining the repetitive characteristic for the time series by: smoothing the aggregate time-series to generate a smoothed aggregate time-series; and using the smoothed aggregate time-series as the time series. 4. The non-transitory computer readable medium of claim 3, further comprising program code executable by the processor for causing the processor to smooth the aggregate time-series by: removing data points associated with one or more predetermined events from the time series; determining replacement data points to be included in the time series in locations corresponding to the removed data points; and including the replacement data points in the time series in the locations corresponding to the removed data points. 5. The non-transitory computer readable medium of claim 1, further comprising program code executable by the processor for causing the processor to determine the effect of the one or more moving events by: generating a pooled time series from the time series and other time-series data using a clustering method or a hierarchical method; and determining the effect of the one or more moving events based on the pooled time series. 6. 
The non-transitory computer readable medium of claim 1, further comprising program code executable by the processor for causing the processor to determine that the time series is usable to generate the predictive forecast prior to determining the repetitive characteristic for the time series by: receiving a plurality of time series comprising the time series; and filtering the plurality of time series using a threshold time duration to identify a subset of time series from the plurality of time series in which the time durations exceed the threshold time duration, the subset of time series comprising the time series. 7. The non-transitory computer readable medium of claim 6, further comprising program code executable by the processor for causing the processor to: determine that the time series does not include a time period with inactivity; and determine that the time series exhibits the repetitive characteristic by analyzing the time series to detect a presence of the repetitive characteristic. 8. The non-transitory computer readable medium of claim 7, further comprising program code executable by the processor for causing the processor to determine that the time series comprises a magnitude spike with a value above a magnitude threshold. 9. The non-transitory computer readable medium of claim 8, further comprising program code executable by the processor for causing the processor to flag the time series as usable to generate the predictive forecast in response to determining that the time series (i) lacks the time period with inactivity, (ii) exhibits the repetitive characteristic, and (iii) comprises the magnitude spike with the value above the magnitude threshold. 10. The non-transitory computer readable medium of claim 1, wherein the non-transitory computer readable medium comprises two or more computer readable media distributed among two or more worker nodes in a communications grid computing system, the two or more worker nodes being separate computing devices that are remote from one another. 11. A method comprising: receiving a time series comprising a plurality of data points arranged in a sequential order over a period of time; determining a repetitive characteristic of the time series by analyzing the time series for a pattern that repeats over a predetermined time period; generating an adjusted time series by removing the repetitive characteristic from the time series; determining, using the adjusted time series, an effect of one or more moving events that occur on different dates for two or more consecutive years on the adjusted time series; generating a residual time series by removing the effect of the one or more moving events from the adjusted time series; generating, using the residual time series, a base forecast that is independent of the repetitive characteristic and the effect of the one or more moving events; and generating a predictive forecast by including the repetitive characteristic and the effect of the one or more moving events into the base forecast. 12. The method of claim 11, further comprising pre-processing the time series prior to determining the repetitive characteristic for the time series by: determining that the time series is to be combined with other time series data based on a length of the time series being below a threshold; and generating an aggregate time-series by combining the time series with the other time series data. 13. 
The method of claim 12, further comprising pre-processing the time series prior to determining the repetitive characteristic for the time series by: smoothing the aggregate time-series to generate a smoothed aggregate time-series; and using the smoothed aggregate time-series as the time series. 14. The method of claim 13, further comprising smoothing the aggregate time-series by: removing data points associated with one or more predetermined events from the time series; determining replacement data points to be included in the time series in locations corresponding to the removed data points; and including the replacement data points in the time series in the locations corresponding to the removed data points. 15. The method of claim 11, further comprising determining the effect of the one or more moving events by: generating a pooled time series from the time series and other time-series data using a clustering method or a hierarchical method; and determining the effect of the one or more moving events based on the pooled time series. 16. The method of claim 11, further comprising determining that the time series is usable to generate the predictive forecast prior to determining the repetitive characteristic for the time series by: receiving a plurality of time series comprising the time series; and filter the plurality of time series using a threshold time duration to identify a subset of time series from the plurality of time series in which the time durations exceed the threshold time duration, the subset of time series comprising the time series. 17. The method of claim 16, further comprising: determining that the time series does not include a time period with inactivity; and determine that the time series exhibits the repetitive characteristic by analyzing the time series to detect a presence of the repetitive characteristic. 18. The method of claim 17, further comprising determining that the time series comprises a magnitude spike with a value above a magnitude threshold. 19. The method of claim 18, further comprising flagging the time series as usable to generate the predictive forecast in response to determining that the time series (i) lacks the time period with inactivity, (ii) exhibits the repetitive characteristic, and (iii) comprises the magnitude spike with the value above the magnitude threshold. 20. The method of claim 11, wherein: determining the base forecast comprises a first worker node of a communications grid computing system receiving data from a second worker node of the communications grid computing system, generating the base forecast based on the data, and transmitting information associated with the base forecast to a third worker node of the communications grid computing system; and determining the predictive forecast comprises the third worker node of the communications grid computing system receiving the information associated with the base forecast and generating the predictive forecast based on the information. 21. 
A system comprising: a processing device; and a memory device in which instructions executable by the processing device are stored for causing the processing device to: receive a time series comprising a plurality of data points arranged in a sequential order over a period of time; determine a repetitive characteristic of the time series by analyzing the time series for a pattern that repeats over a predetermined time period; generate an adjusted time series by removing the repetitive characteristic from the time series; determine, using the adjusted time series, an effect of one or more moving events that occur on different dates for two or more consecutive years on the adjusted time series; generate a residual time series by removing the effect of the one or more moving events from the adjusted time series; generate, using the residual time series, a base forecast that is independent of the repetitive characteristic and the effect of the one or more moving events; and generate a predictive forecast by including the repetitive characteristic and the effect of the one or more moving events into the base forecast. 22. The system of claim 21, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to pre-process the time series prior to determining the repetitive characteristic for the time series by: determining that the time series is to be combined with other time series data based on a length of the time series being below a threshold; and generating an aggregate time-series by combining the time series with the other time series data. 23. The system of claim 22, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to pre-process the time series prior to determining the repetitive characteristic for the time series by: smoothing the aggregate time-series to generate a smoothed aggregate time-series; and using the smoothed aggregate time-series as the time series. 24. The system of claim 23, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to smooth the aggregate time-series by: removing data points associated with one or more predetermined events from the time series; determining replacement data points to be included in the time series in locations corresponding to the removed data points; and including the replacement data points in the time series in the locations corresponding to the removed data points. 25. The system of claim 21, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to determine the effect of the one or more moving events by: generating a pooled time series from the time series and other time-series data using a clustering method or a hierarchical method; and determining the effect of the one or more moving events based on the pooled time series. 26. 
The system of claim 21, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to determine that the time series is usable to generate the predictive forecast prior to determining the repetitive characteristic for the time series by: receiving a plurality of time series comprising the time series; and filtering the plurality of time series using a threshold time duration to identify a subset of time series from the plurality of time series in which the time durations exceed the threshold time duration, the subset of time series comprising the time series. 27. The system of claim 26, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to: determine that the time series does not include a time period with inactivity; and determine that the time series exhibits the repetitive characteristic by analyzing the time series to detect a presence of the repetitive characteristic. 28. The system of claim 27, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to: determine that the time series comprises a magnitude spike with a value above a magnitude threshold. 29. The system of claim 28, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to flag the time series as usable to generate the predictive forecast in response to determining that the time series (i) lacks the time period with inactivity, (ii) exhibits the repetitive characteristic, and (iii) comprises the magnitude spike with the value above the magnitude threshold. 30. The system of claim 21, further comprising a plurality of worker nodes in a communications grid computing system, wherein: a first worker node of the plurality of worker nodes is configured to generate the base forecast and transmit information associated with the base forecast to a second worker node of the plurality of worker nodes; and the second worker node of the plurality of worker nodes is configured to receive the information associated with the base forecast and generate the predictive forecast based on the information. | Please help me write a proper abstract based on the patent claims. | Information related to a time series can be predicted. For example, a repetitive characteristic of the time series can be determined by analyzing the time series for a pattern that repeats over a predetermined time period. An adjusted time series can be generated by removing the repetitive characteristic from the time series. An effect of a moving event on the adjusted time series can be determined. The moving event can occur on different dates for two or more consecutive years. A residual time series can be generated by removing the effect of the moving event from the adjusted time series. A base forecast that is independent of the repetitive characteristic and the effect of the moving event can be generated using the residual time series. A predictive forecast can be generated by including the repetitive characteristic and the effect of the moving event into the base forecast.
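The decomposition-style pipeline in the claims above (extract a repeating pattern, estimate the effect of moving events, forecast the residual, then add both effects back) can be sketched in a few lines. The code below is not from the patent; it is a minimal NumPy illustration assuming an additive model, a known seasonal period, known historical and future event positions, and a naive mean forecast standing in for whatever base forecasting model is actually used.

```python
import numpy as np

def seasonal_profile(y, period):
    """Average value at each position of the repeating cycle (the 'repetitive characteristic')."""
    return np.array([y[i::period].mean() for i in range(period)])

def decompose_and_forecast(y, period, event_idx, horizon, future_event_idx):
    # 1. Determine the repetitive characteristic and remove it (adjusted time series).
    profile = seasonal_profile(y, period)
    seasonal = profile[np.arange(len(y)) % period]
    adjusted = y - seasonal

    # 2. Estimate the effect of the moving events as the mean extra deviation on event dates.
    event_effect = adjusted[event_idx].mean() - adjusted.mean() if len(event_idx) else 0.0

    # 3. Remove the event effect to obtain the residual time series.
    residual = adjusted.copy()
    residual[event_idx] -= event_effect

    # 4. Base forecast from the residual series (naive mean here; any forecaster could be used).
    base = np.full(horizon, residual.mean())

    # 5. Predictive forecast: add the repetitive characteristic and the event effect back in.
    forecast = base + profile[(len(y) + np.arange(horizon)) % period]
    forecast[np.asarray(future_event_idx, dtype=int)] += event_effect
    return forecast

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(104)                              # e.g. two years of weekly observations
    y = 10 + 3 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 0.5, len(t))
    y[[20, 71]] += 4                                # a moving event on different dates each year
    print(decompose_and_forecast(y, period=52, event_idx=np.array([20, 71]),
                                 horizon=8, future_event_idx=[3]))
```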
1. A decision support system assessing an intervention action in response to a hazard situation developing in a physical domain, the decision support system comprising: a hazard simulator configured to generate and output information representing the hazard situation by combining data from a plurality of data sources describing one of the hazard situation and the physical domain including at least one source of real-time data from the physical domain; an agent based modeler configured to run a simulation using rules for predicting the behavior of agents affected by the hazard situation to predict the behavior of the individual agents in response to the hazard situation; a decision selector configured to select one intervention action from a set of candidate intervention actions in dependence upon the behavior predicted by the agent based modeler; an iterative decision tester configured to test the selected intervention action by, for each of a predetermined set of possible outcomes of the selected intervention action, initiating the agent-based modeler to reiterate the simulation with the respective possible outcome as a factor influencing agent behavior; and an assessor configured to assess the behavior of the agents predicted by each iteration of the simulation to determine whether to initiate the selected intervention action. 2. A decision support system according to claim 1, comprising: a rule generator configured to use the information output by the hazard simulator to generate rules for predicting the behavior of agents affected by the hazard situation. 3. A decision support system according to claim 1, wherein the hazard simulator comprises: a federated database server configured to retrieve data from the plurality of data sources, and a data exploration engine configured to analyze the retrieved data in order to generate the information representing the hazard situation. 4. A decision support system according to claim 3, wherein the plurality of data sources includes one or more of the following: a data source storing data representing the socio-economic properties of occupants of the physical domain; a data source storing data representing environmental conditions in the physical domain; a data source storing candidate intervention actions; a data source storing a representation of geographical information in the physical domain including one or more of land surface information, vegetation surface information, river surface information, ocean information including information of at least one of a tide, a wave height, and air information including pollution and visibility; a data source storing a representation of man-made structures in the physical domain; a data source storing implementation options for each candidate intervention action; and a data source storing data representing hazard prediction information. 5. A decision support system according to claim 1, wherein the decision selector is configured to select more than one from the set of candidate intervention actions in dependence upon the behavior predicted by the agent based modeler; the iterative decision tester is configured to test each of the more than one selected intervention actions; and the assessor is configured to determine which one of the selected intervention actions to initiate based on the assessment of the behavior of the agents predicted by each iteration of the simulation. 6. 
A decision support system according to claim 5, wherein the iterative decision tester is configured to test the more than one of the selected intervention actions in combination with another one of the selected intervention actions in iterations of the simulation. 7. A decision support system according to claim 1, wherein the iterative decision tester is configured to access a plurality of implementation options associated with the selected intervention action, and for each implementation option, to reiterate the simulation with each possible outcome, the respective implementation option and the respective possible outcome each being factors influencing agent behavior; and the assessor is configured to select an implementation option for the selected intervention action based on the assessment of the behavior of the agents predicted by each iteration of the simulation. 8. A decision support system according to claim 1, further comprising: a communicator configured to output a representation of the intervention action determined to be initiated by the assessor, and the selected implementation option, to one of a user interface and one or more recipients. 9. A decision support system according to claim 1, wherein the plurality of data sources includes an intervention action, and a previously determined implementation option to be initiated by the assessor, and the information output by the hazard simulator includes an indication of the previously determined intervention action, and a corresponding implementation option. 10. A decision support system according to claim 1, wherein the assessor is configured to assess the behavior of agents predicted by each iteration of the simulation by comparing the predicted behavior with one or more behavior goals and quantifying whether each of the behavior goals is satisfied by the predicted behavior. 11. A decision support system according to claim 1, wherein the assessor is configured to obtain a representation of cost associated with the selected intervention action, and each implementation option, where cost information is included as a factor in determining whether to initiate the selected intervention action and/or which implementation option to select. 12. A decision support system according to claim 1, comprising: a rule generator configured to combine rule definitions from an external agent library with information output by the hazard simulator to generate an agent library specific to the hazard situation and the physical domain including the rules for predicting the behavior of agents affected by the hazard situation. 13. 
A decision support method for assessing an intervention action in response to a hazard situation developing in a physical domain, the decision support method comprising: generating and outputting information representing the hazard situation by combining data from a plurality of data sources describing the hazard situation and/or the physical domain including at least one source of real-time data from the physical domain; generating rules, using the output information, for predicting the behavior of agents affected by the hazard situation; executing a simulation using an agent based modeling simulator and generated rules to predict the behavior of the agents in response to the hazard situation; selecting one intervention action from a set of candidate intervention actions in dependence upon the predicted behavior; testing the selected intervention action, for each of a predetermined set of possible outcomes of the selected intervention action, by initiating further simulations by the agent based modeling simulator with a respective possible outcome as a factor influencing agent behavior; and assessing the behavior of the agents predicted by each iteration of the simulation and determining whether to initiate the selected intervention action. 14. A non-transitory computer readable storage medium storing a computer program which, when executed by a computing apparatus, causes the computing apparatus to function as the decision support system of claim 1. 15. A non-transitory computer readable storage medium storing a computer program which, when executed by a computing apparatus, causes the computing apparatus to perform the method of claim 13. | Please help me write a proper abstract based on the patent claims. | A decision support system and method of assessing an intervention action in response to a hazard situation. The decision support system outputs information representing the hazard situation by combining data from data sources describing one of the hazard situation and a physical domain including at least one source of real-time data from the physical domain. A simulation is run using rules to predict the behavior of individual agents in response to the hazard situation, where the system selects one action from a set of candidate intervention actions based on the behavior predicted by the agent and tests the selected intervention action by initiating, for each of a predetermined set of possible outcomes of the selected intervention action, an operation to reiterate the simulation with a respective possible outcome as a factor influencing agent behavior. |
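The agent-based loop in these claims (simulate agent behavior, pick a candidate intervention, reiterate the simulation for each possible outcome of that intervention, and assess the predicted behavior against goals) lends itself to a small sketch. Everything below, from the evacuation scenario to the function names and thresholds, is invented for illustration and is not taken from the patent; a real deployment would plug in an actual hazard simulator and agent library.

```python
import random
from statistics import mean

def simulate_agents(n_agents, hazard_severity, outcome_factor=1.0, seed=0):
    """Toy agent-based simulation: return the fraction of agents predicted to evacuate."""
    rng = random.Random(seed)
    evacuated = 0
    for _ in range(n_agents):
        # Toy behavior rule: an agent evacuates if perceived risk exceeds a fixed threshold,
        # modulated by the outcome of the intervention currently being tested.
        perceived_risk = hazard_severity * outcome_factor * rng.uniform(0.5, 1.5)
        if perceived_risk > 0.6:
            evacuated += 1
    return evacuated / n_agents

def assess_interventions(candidates, possible_outcomes, hazard_severity, goal=0.8):
    """For each candidate action, reiterate the simulation per possible outcome and assess it."""
    results = {}
    for action, base_factor in candidates.items():
        scores = [simulate_agents(1000, hazard_severity, base_factor * outcome, seed=i)
                  for i, outcome in enumerate(possible_outcomes)]
        results[action] = {"mean_evacuation": mean(scores),
                           "meets_goal": mean(scores) >= goal}
    return results

if __name__ == "__main__":
    candidates = {"broadcast_warning": 1.3, "open_shelters": 1.1, "do_nothing": 1.0}
    print(assess_interventions(candidates, possible_outcomes=[0.8, 1.0, 1.2],
                               hazard_severity=0.7))
```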
1. A method comprising: generating, by a speech recognition system, a matrix from a predetermined quantity of vectors that each represent input for a layer of a neural network; generating a plurality of sub-matrices from the matrix; using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network to determine whether an utterance encoded in an audio signal comprises a keyword for which the neural network is trained. 2. The method of claim 1, wherein generating the plurality of sub-matrices from the matrix comprises generating a plurality of non-overlapping sub-matrices from the matrix. 3. The method of claim 1, wherein: the layer comprises an input layer; and generating the matrix from the predetermined quantity of vectors that each represent input for the layer of the neural network comprises generating the matrix from a predetermined quantity of feature vectors that each model a portion of the audio signal encoding the utterance. 4. The method of claim 1, wherein a size of each of the sub-matrices is the same. 5. The method of claim 1, wherein generating the matrix from the predetermined quantity of vectors comprises generating the matrix from a predetermined quantity of sequential vectors. 6. The method of claim 1, wherein using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network comprises using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network to determine whether the utterance was spoken by a predetermined speaker. 7. The method of claim 1, wherein using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network comprises using each of the sub-matrices as input to a predetermined quantity of nodes in the layer of the neural network. 8. The method of claim 1, wherein using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network comprises using the respective sub-matrix as input to a plurality of adjacent nodes in the layer of the neural network. 9. The method of claim 1, comprising: generating, for each node in the layer of the neural network, output from the node using the respective sub-matrix; determining whether the utterance comprises a keyword using the output from the nodes in the layer; and performing, by a device, an action in response to determining that the utterance comprises a keyword. 10. The method of claim 9, wherein the device includes the speech recognition system. 11. The method of claim 9, wherein performing the action comprises exiting, by the device, a standby state. 12. The method of claim 9, wherein performing the action comprises presenting, by the device, content to a user of the device. 13. The method of claim 9, wherein performing the action comprises performing, by the device, an action for a particular application. 14. The method of claim 13, wherein performing the action for the particular application comprises launching, by the device, the particular application. 15. 
A computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to perform operations comprising: generating, by a speech recognition system, a matrix from a predetermined quantity of vectors that each represent input for a layer of a neural network; generating a plurality of sub-matrices from the matrix; using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network to determine whether an utterance encoded in an audio signal comprises a keyword for which the neural network is trained. 16. The computer-readable medium of claim 15, wherein a size of each of the sub-matrices is the same. 17. The computer-readable medium of claim 15, wherein generating the matrix from the predetermined quantity of vectors comprises generating the matrix from a predetermined quantity of sequential vectors. 18. The computer-readable medium of claim 15, wherein using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network comprises using each of the sub-matrices as input to a predetermined quantity of nodes in the layer of the neural network. 19. The computer-readable medium of claim 15, the operations comprising: generating, for each node in the layer of the neural network, output from the node using the respective sub-matrix; determining whether the utterance comprises a keyword using the output from the nodes in the layer; and performing, by a device, an action in response to determining that the utterance comprises a keyword. 20. A system comprising: one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: generating, by a speech recognition system, a matrix from a predetermined quantity of vectors that each represent input for a layer of a neural network; generating a plurality of sub-matrices from the matrix; using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network to determine whether an utterance encoded in an audio signal comprises a keyword for which the neural network is trained. | Please help me write a proper abstract based on the patent claims. | Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a neural network. One of the methods includes generating, by a speech recognition system, a matrix from a predetermined quantity of vectors that each represent input for a layer of a neural network, generating a plurality of sub-matrices from the matrix, using, for each of the sub-matrices, the respective sub-matrix as input to a node in the layer of the neural network to determine whether an utterance encoded in an audio signal comprises a keyword for which the neural network is trained. |
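To make the sub-matrix idea concrete, here is a small NumPy sketch: a fixed number of acoustic feature vectors are stacked into a matrix, the matrix is split into non-overlapping sub-matrices, and each sub-matrix is fed to one node of a layer (a dot product plus bias and a ReLU), much like a locally connected layer. The frame and patch sizes, the ReLU nonlinearity, and all variable names are assumptions for illustration rather than details from the claims.

```python
import numpy as np

def frames_to_matrix(feature_vectors):
    """Stack a fixed number of feature vectors (e.g. log-mel frames) into a matrix."""
    return np.stack(feature_vectors)                 # shape: (num_frames, num_features)

def non_overlapping_submatrices(matrix, rows, cols):
    """Split the matrix into non-overlapping sub-matrices (patches) of equal size."""
    n_r, n_c = matrix.shape[0] // rows, matrix.shape[1] // cols
    return [matrix[r * rows:(r + 1) * rows, c * cols:(c + 1) * cols]
            for r in range(n_r) for c in range(n_c)]

def layer_outputs(patches, weights, biases):
    """Feed each sub-matrix to one node: flatten, apply the node's weights and bias, then a ReLU."""
    return np.array([max(0.0, float(w @ patch.ravel()) + float(b))
                     for patch, w, b in zip(patches, weights, biases)])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    frames = [rng.normal(size=40) for _ in range(24)]          # 24 frames x 40 features
    m = frames_to_matrix(frames)
    patches = non_overlapping_submatrices(m, rows=8, cols=10)  # 3 x 4 = 12 patches
    weights = [rng.normal(size=8 * 10) for _ in patches]
    biases = rng.normal(size=len(patches))
    activations = layer_outputs(patches, weights, biases)
    # A later layer (not shown) would combine these activations into a keyword score.
    print(activations.shape)
```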
1-10. (canceled) 11. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device implementing a natural language processing system, causes the computing device to: generate, by the natural language processing system, a result of processing a natural language query; determine that at least one of the natural language query or the result comprises a temporal characteristic; generate, in response to determining that at least one of the natural language query or the result comprises a temporal characteristic, a reminder notification data structure having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query; store the reminder notification data structure in a data storage device; and output, at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, a reminder notification to a client device associated with a user, wherein the reminder notification specifies the result generated for the natural language query. 12. The computer program product of claim 11, wherein the reminder notification further specifies a history of changes to the result occurring from a time that the result was generated for the natural language query and the scheduled reminder notification time. 13. The computer program product of claim 11, wherein the natural language processing system is a Question and Answer (QA) system, the natural language query is a natural language question input to the QA system, and the result is an answer generated by the QA system for the natural language question. 14. The computer program product of claim 11, wherein the natural language processing system is a search engine, the natural language query is a search query input to the search engine, and the result comprises at least one search result generated by the search engine. 15. The computer program product of claim 11, wherein the computer readable program causes the computing device to generate a reminder notification data structure at least by: in response to determining that at least one of the natural language query or the result comprises a temporal characteristic, outputting an option to the client device of the user to create the reminder notification data structure, wherein the reminder notification data structure is created in response to the user selecting the option. 16. The computer program product of claim 11, wherein the computer readable program further causes the computing device to: identify temporal characteristics of the natural language query; identify temporal characteristics of the result; and calculate the scheduled reminder notification time based on the temporal characteristics of the natural language query and the temporal characteristics of the result. 17. 
The computer program product of claim 16, wherein at least one of the temporal characteristics of the natural language query or temporal characteristics of the result are identified by identifying at least one of time-based keywords or key phrases, concept relationships associated with time in language of the natural language query or result, a lexical answer type or focus of the natural language query or result that is associated with time, or implicit timing aspects within the natural language query or result. 18. The computer program product of claim 16, wherein the computer readable program causes the computing device to identify the temporal characteristics of the natural language query at least by identifying a temporal characteristic of a domain associated with the natural language query. 19. The computer program product of claim 11, wherein the computer readable program further causes the computing device to: automatically check for a change in the result at a time between the current time and the scheduled reminder notification time; determine, in response to a change in the result being identified, whether the change in the result is significant enough to send a change notification to the user; and output, in response to the change in the result being significant enough to send a change notification to the user, a notification of the change in the result to the client device associated with the user. 20. The computer program product of claim 11, wherein the scheduled reminder notification time is a time calculated based on at least one of an arbitrarily selected default timeframe, a default timeframe associated with a domain of the natural language query, or a user specified default timeframe, prior to a temporal characteristic of the result. 21. An apparatus comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: generate, by a natural language processing system implemented by the processor, a result of processing a natural language query; determine that at least one of the natural language query or the result comprises a temporal characteristic; generate, in response to determining that at least one of the natural language query or the result comprises a temporal characteristic, a reminder notification data structure having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query; store the reminder notification data structure in a data storage device; and output, at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, a reminder notification to a client device associated with a user, wherein the reminder notification specifies the result generated for the natural language query. | Please help me write a proper abstract based on the patent claims. | A data processing system generates a result of processing a natural language query. A determination is made as to whether the natural language query or the result has a temporal characteristic. In response, a reminder notification data structure is generated having an associated scheduled reminder notification time for outputting a reminder notification of the result generated for the natural language query. 
The reminder notification data structure is stored in a data storage device and, at a later time from a time that the reminder notification data structure was stored in the data storage device, in response to the later time being equal to or later than the scheduled reminder notification time, a reminder notification is output to a client device associated with a user. The reminder notification specifies the result generated for the natural language query. |
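A minimal sketch of the scheduling side of these claims is shown below: detect a temporal characteristic in the query or the result, create a reminder record with a scheduled time, store it, and later emit the reminders that have come due. The regex of time phrases and the seven-day default delay are placeholders; the claims also contemplate richer cues (lexical answer types, domain defaults, change tracking) that are not modeled here.

```python
import re
from dataclasses import dataclass
from datetime import datetime, timedelta

TIME_KEYWORDS = re.compile(r"\b(today|tomorrow|next week|in \d+ days?|by \w+day)\b", re.I)

@dataclass
class Reminder:
    query: str
    result: str
    remind_at: datetime

def has_temporal_characteristic(text: str) -> bool:
    """Very rough stand-in for the claimed detection of time-based keywords and phrases."""
    return bool(TIME_KEYWORDS.search(text))

def maybe_create_reminder(query, result, default_delay=timedelta(days=7)):
    """Create a reminder record if the query or the result looks time-sensitive."""
    if has_temporal_characteristic(query) or has_temporal_characteristic(result):
        return Reminder(query, result, datetime.now() + default_delay)
    return None

def due_notifications(store, now):
    """Return reminders whose scheduled reminder notification time has been reached or passed."""
    return [r for r in store if now >= r.remind_at]

if __name__ == "__main__":
    store = []
    reminder = maybe_create_reminder("When is the next product launch?",
                                     "The launch is scheduled for next week.")
    if reminder is not None:
        store.append(reminder)
    print(due_notifications(store, datetime.now() + timedelta(days=8)))
```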
1. A method to determine identity-centric risks in an enterprise, comprising: receiving a training set of data representing classifications of one or more accounts; based on the classifications, computing a machine learning classifier using software executing in a hardware element; applying the machine learning classifier to additional account data to identify, in an automated manner, one or more attributes that provide a given classification result; determining whether the classification result represents an acceptable level of account classification; and when the classification result represents an acceptable level of account classification, applying the one or more attributes to new data to attempt to detect an identity-centric risk. 2. The method as described in claim 1 further including re-computing the machine learning classifier when the classification result does not represent an acceptable level of account classification. 3. The method as described in claim 1 wherein applying the one or more attributes to new data includes executing an identity and access management (IAM) detection process to generate a particular classification result with respect to a particular account. 4. The method as described in claim 3 further including issuing a notification when the particular classification result generated by the IAM detection process is not definitive and requires further evaluation. 5. The method as described in claim 1 wherein the one or more accounts include privileged accounts. 6. The method as described in claim 1 wherein determining whether the classification result represents an acceptable level of account classification receives a user-supplied input. 7. Apparatus, comprising: a processor; computer memory holding computer program instructions executed by the processor to determine identity-centric risks in an enterprise, the computer program instructions comprising: program code operative to receive a training set of data representing classifications of one or more accounts; program code operative based on the classifications to compute a machine learning classifier; program code operative to apply the machine learning classifier to additional account data to identify one or more attributes that provide a given classification result; program code operative to determine whether the classification result represents an acceptable level of account classification; and program code operative when the classification result represents an acceptable level of account classification to apply the one or more attributes to new data to attempt to detect an identity-centric risk. 8. The apparatus as described in claim 7 wherein the computer program instructions further include program code operative to re-compute the machine learning classifier when the classification result does not represent an acceptable level of account classification. 9. The apparatus as described in claim 7 wherein the computer program instructions further include an identity and access management (IAM) detection process operative to apply the one or more attributes to generate a particular classification result with respect to a particular account. 10. The apparatus as described in claim 9 wherein the computer program instructions further include program code operative to issue a notification when the particular classification result generated by the IAM detection process is not definitive and requires further evaluation. 11. 
The apparatus as described in claim 7 wherein the one or more accounts include privileged accounts. 12. The apparatus as described in claim 7 wherein the program code operative to determine whether the classification result represents an acceptable level of account classification receives a user-supplied input. 13. A computer program product in a non-transitory computer readable medium for use in a data processing system, the computer program product holding computer program instructions executed by the data processing system to determine identity-centric risks in an enterprise, the computer program instructions operative to: receive a training set of data representing classifications of one or more accounts; based on the classifications, to compute a machine learning classifier; apply the machine learning classifier to additional account data to identify one or more attributes that provide a given classification result; determine whether the classification result represents an acceptable level of account classification; and when the classification result represents an acceptable level of account classification, to apply the one or more attributes to new data to attempt to detect an identity-centric risk. 14. The computer program product as described in claim 13 wherein the computer program instructions are further operative to re-compute the machine learning classifier when the classification result does not represent an acceptable level of account classification. 15. The computer program product as described in claim 13 wherein the computer program instructions comprise an identity and access management (IAM) detection process further operative to apply the one or more attributes to generate a particular classification result with respect to a particular account. 16. The computer program product as described in claim 15 wherein the computer program instructions are further operative to issue a notification when the particular classification result generated by the IAM detection process is not definitive and requires further evaluation. 17. The computer program product as described in claim 13 wherein the one or more accounts include privileged accounts. 18. The computer program product as described in claim 13 wherein the program code operative to determine whether the classification result represents an acceptable level of account classification receives a user-supplied input. 19. An apparatus for identity and access management, comprising: a hardware processor; computer memory holding computer program instructions executed by the hardware processor to detect identity-centric risks, the computer program instructions comprising program code operative to compute and apply a machine learning classifier to identify one or more attributes that provide a given data classification result, and to apply the one or more attributes to enforce a data classification decision. 20. The apparatus as described in claim 19 wherein the data classification decision is that a particular account has an appropriate or inappropriate classification. | Please help me write a proper abstract based on the patent claims. | An identity and access management IAM system is augmented to provide for supervised, iterative machine learning (ML), preferably with a user-generated training set for classification. The training set may include various types of data, including characteristics or attributes of the account types, the users, or the like. 
A goal of the initial ML training, which may include one or multiple passes, is to enable the machine to identify specific characteristics or attributes that provide a good classification result, with the resulting classifications then applied within the IAM system. In particular, the output of the ML system may be used by the IAM system for enforcing rights associated with the identified accounts, managing accounts, and so forth. |
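The train-then-apply loop in these claims can be sketched with an off-the-shelf classifier. The snippet below uses scikit-learn's RandomForestClassifier purely as a stand-in for whatever classifier the IAM system would actually compute, and the account features, the 0.9 acceptability threshold, and the 0.8 risk cut-off are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_account_classifier(X, y, acceptable=0.9):
    """Train a classifier on labeled account data and check whether its quality is acceptable."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    accuracy = accuracy_score(y_te, clf.predict(X_te))
    if accuracy < acceptable:
        return None, accuracy        # caller would re-compute with more data or other settings
    return clf, accuracy

def flag_risky_accounts(clf, X_new, threshold=0.8):
    """Apply the learned attributes to new account data to surface identity-centric risks."""
    risk = clf.predict_proba(X_new)[:, 1]
    return np.where(risk >= threshold)[0]     # indices of accounts needing further evaluation

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Invented features: [privilege_level, dormant_days, failed_logins, shared_account]
    X = rng.random((500, 4))
    y = (X[:, 0] > 0.7).astype(int)           # toy label: highly privileged accounts are risky
    clf, accuracy = train_account_classifier(X, y)
    if clf is not None:
        print("accuracy:", round(accuracy, 3),
              "flagged:", flag_risky_accounts(clf, rng.random((20, 4))))
```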
1. An information processing apparatus comprising: a phenomenon pattern extraction unit that extracts a phenomenon pattern of a past sensor signal of a facility; a correlation unit that correlates the sensor signal based on maintenance history information; a classification reference creation unit that creates a classification reference for classifying the phenomenon pattern based on a work keyword included in the maintenance history information correlated with the sensor signal as a source of the phenomenon pattern and the phenomenon pattern extracted by the phenomenon pattern extraction unit; a phenomenon pattern classification unit that classifies the phenomenon pattern based on the classification reference created by the classification reference creation unit; and a diagnosis model creation unit that creates a diagnosis model for estimating a work keyword suggested to a maintenance worker based on the work keyword and the phenomenon pattern classified by the phenomenon pattern classification unit. 2. The information processing apparatus according to claim 1, wherein the phenomenon pattern extraction unit extracts a diagnosis target phenomenon pattern of a diagnosis target sensor signal as the diagnosis target of the facility, the phenomenon pattern classification unit classifies the diagnosis target phenomenon pattern extracted by the phenomenon pattern extraction unit based on the classification reference, and the information processing apparatus further comprises a maintenance work suggestion unit that extracts a work keyword suggested to a maintenance worker by referring to the diagnosis model based on the diagnosis target phenomenon pattern classified by the phenomenon pattern classification unit. 3. The information processing apparatus according to claim 1, wherein the work keyword is extracted based on a correlated value included in the maintenance history information. 4. The information processing apparatus according to claim 3, wherein the maintenance history information includes a maintenance cost or a downtime of the facility, and the correlated value is the maintenance cost or the downtime of the facility. 5. The information processing apparatus according to claim 3, wherein the correlated value is an abnormality measure calculated based on the sensor signal. 6. The information processing apparatus according to claim 3, wherein the correlated value is a numerical value that represents the existence of a specific keyword in the maintenance history information. 7. The information processing apparatus according to claim 1, wherein the classification reference is determined based on a learning method with a teacher by using the work keyword or the combination thereof as a teacher label. 8. The information processing apparatus according to claim 1, wherein the phenomenon pattern is expressed by an accumulation value of a residual vector calculated as a difference between an observation vector and a reference vector from the sensor signal. 9. The information processing apparatus according to claim 1, wherein the phenomenon pattern is expressed by an accumulation value of an isolation vector calculated based on the two-dimensional distribution density of learned data from the sensor signal. 10. The information processing apparatus according to claim 1, wherein the phenomenon pattern is expressed by a histogram characteristic calculated by a bag-of-features method from the sensor signal. 11. 
The information processing apparatus according to claim 1, wherein the diagnosis model is expressed by the possibility calculated based on the phenomenon pattern classification result and the two-dimensional frequency distribution of the work keyword or the combination thereof. 12. The information processing apparatus according to claim 8, wherein the reference vector is calculated by using a local sub-space classifier. 13. The information processing apparatus according to claim 2, further comprising: an alarm classification reference creation unit that creates an alarm classification reference for classifying the phenomenon pattern by using an alarm included in a past event signal of the facility as a teacher; an alarm classification unit that classifies the phenomenon pattern based on the alarm classification reference created by the alarm classification reference creation unit; an alarm diagnosis model creation unit that creates an alarm diagnosis model for estimating the alarm of the diagnosis target phenomenon pattern based on the alarm and the phenomenon pattern classified by the alarm classification unit; and an alarm estimation unit that estimates the alarm of the diagnosis target phenomenon pattern based on the alarm diagnosis model created by the alarm diagnosis model creation unit, wherein the classification reference creation unit creates the classification reference for each alarm, the diagnosis model creation unit creates the diagnosis model for each alarm, the phenomenon pattern classification unit classifies the diagnosis target phenomenon pattern, in which the alarm is estimated by the alarm estimation unit, based on the classification reference for each alarm, and the maintenance work suggestion unit extracts a work keyword suggested to a maintenance worker by referring to the diagnosis model for each alarm based on the diagnosis target phenomenon pattern classified by the phenomenon pattern classification unit. 14. A method of diagnosing a facility by an information processing apparatus, comprising: extracting a phenomenon pattern of a past sensor signal of the facility; correlating the sensor signal based on maintenance history information; creating a classification reference for classifying the phenomenon pattern based on a work keyword included in the maintenance history information correlated with the sensor signal as a source of the phenomenon pattern and the phenomenon pattern extracted by the extracting of the phenomenon pattern; classifying the phenomenon pattern based on the classification reference created by the creating of the classification reference; and creating a diagnosis model for estimating a work keyword suggested to a maintenance worker based on the work keyword and the phenomenon pattern classified by the classifying of the phenomenon pattern. 15. 
A non-transitory computer-readable storage medium storing a program that causes an information processing apparatus to function as: a phenomenon pattern extraction unit that extracts a phenomenon pattern of a past sensor signal of a facility; a correlation unit that correlates the sensor signal based on maintenance history information; a classification reference creation unit that creates a classification reference for classifying the phenomenon pattern based on a work keyword included in the maintenance history information correlated with the sensor signal as a source of the phenomenon pattern and the phenomenon pattern extracted by the phenomenon pattern extraction unit; a phenomenon pattern classification unit that classifies the phenomenon pattern based on the classification reference created by the classification reference creation unit; and a diagnosis model creation unit that creates a diagnosis model for estimating a work keyword suggested to a maintenance worker based on the work keyword and the phenomenon pattern classified by the phenomenon pattern classification unit. | Please help me write a proper abstract based on the patent claims. | A phenomenon pattern extraction unit extracts a phenomenon pattern of a past sensor signal of a facility. A related information correlation unit correlates the sensor signal based on maintenance history information. A phenomenon pattern classification reference creation unit creates a classification reference for classifying a phenomenon pattern based on the extracted phenomenon pattern and a work keyword included in the maintenance history information correlated with the sensor signal as the source of the phenomenon pattern. A phenomenon pattern classification unit classifies the phenomenon pattern based on the classification reference. A diagnosis model creation unit creates a diagnosis model for estimating a work keyword suggested to a maintenance worker based on the classified phenomenon pattern and the work keyword. |
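One way to picture the pipeline in these claims is the sketch below: a phenomenon pattern is an accumulated residual between a sensor window and a reference, past patterns are clustered to form the classification reference, keyword counts per cluster act as the diagnosis model, and a new pattern is classified to suggest the most frequent maintenance keyword. The use of k-means, the residual definition, and the example keywords are simplifying assumptions, not details taken from the patent.

```python
import numpy as np
from collections import Counter, defaultdict
from sklearn.cluster import KMeans

def phenomenon_pattern(window, reference):
    """Accumulated per-sensor residual between an observed sensor window and a reference window."""
    return np.abs(window - reference).sum(axis=0)

def build_diagnosis_model(patterns, work_keywords, n_clusters=3):
    """Cluster past patterns (classification reference) and count which maintenance
    keywords co-occur with each cluster (diagnosis model)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(patterns)
    model = defaultdict(Counter)
    for cluster, keyword in zip(km.labels_, work_keywords):
        model[cluster][keyword] += 1
    return km, model

def suggest_work(km, model, new_pattern):
    """Classify a new phenomenon pattern and suggest the most frequent keyword of its cluster."""
    cluster = int(km.predict(new_pattern.reshape(1, -1))[0])
    return model[cluster].most_common(1)[0][0] if model[cluster] else None

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(50, 4))            # reference window: 50 samples x 4 sensors
    scales = rng.choice([0.1, 0.5, 1.0], size=30)   # how far each past window drifted
    patterns = np.stack([phenomenon_pattern(reference + rng.normal(scale=s, size=(50, 4)),
                                            reference) for s in scales])
    keywords = ["clean filter" if s < 0.3 else "recalibrate" if s < 0.8 else "replace bearing"
                for s in scales]
    km, model = build_diagnosis_model(patterns, keywords)
    new_window = reference + rng.normal(scale=1.0, size=(50, 4))
    print(suggest_work(km, model, phenomenon_pattern(new_window, reference)))
```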
1. A computer program product for determining a topic change of a communication, the computer program product comprising: one or more computer readable storage media and program instructions stored on the one or more computer readable storage media, the program instructions comprising: program instructions to monitor a communication including a first span; program instructions to determine the communication containing a set of dialog statements, wherein the first span of the communication includes one or more dialog statements of the set of dialog statements; program instructions to determine if the one or more dialog statements of the first span include one or more indicators of a topic change, wherein the one or more indicators are identified by at least one detector of a learning model, wherein each of the one or more indicators of the topic change within the first span includes at least one of: a particular key phrase, a pause of particular duration, a particular activity on a participant's communication device, and a particular duration of the first span; responsive to determining the first span includes the one or more indicators of the topic change, program instructions to generate a score for the one or more indicators, based on the learning model; responsive to the score for the one or more indicators triggering a threshold condition, program instructions to determine a topic change within the first span, wherein the threshold condition is based on a determination of the topic change within the first span of the communication during training of the learning model, and wherein the threshold condition determined during training of the learning model includes: program instructions to determine a weighted value for the at least one detector, based on heuristics, program instructions to receive input of labelled communication dialog statements, wherein the labelled communication dialog statements include one or more topic change indicators that are known, the one or more topic change indicators corresponding to the at least one detector, program instructions to adjust the weighted value of the at least one detector in response to a delta between an output of scores of the at least one detector of the learning model and scores of the one or more topic change indicators that are known, and program instructions to determine the threshold condition in response to achieving an acceptable minimum for the delta between the output of the scores which are determined by the at least one detector of the learning model and the scores of the one or more topic change indicators that are known; program instructions to generate a second span based on adjusting boundaries of the first span by performing at least one of, adding to the first span one or more dialog statements of the set of dialog statements not included in the first span, and removing one or more dialog statements from the first span; program instructions to determine a score for the first span and a score for the second span, wherein the score for the first span and the score for the second span is based on a topic of the first span and a topic of the second span, respectively; responsive to the score of the second span being more favorable than the score of the first span, program instructions to extract one or more features from the one or more dialog statements of the second span not included in the first span, wherein extracting the one or more features from the one or more dialog statements of the second span, includes classifying 
the one or more features to correspond with the at least one detector of the learning model; and program instructions to train the learning model to determine a topic change, based, at least in part, on including the one or more features from the one or more dialog statements of the second span, in at least one detector of the learning model. | Please help me write a proper abstract based on the patent claims. | A computer processor determines a first span of a communication, wherein a span includes content associated with one or more dialog statements. If the content of the first span contains one or more topic change indicators which are identified by at least one detector of a learning model, the computer processor, in response, generates scores for each of the one or more indicators. The computer processor aggregates scores of the one or more indicators of the first span, which may be weighted, to produce an aggregate score. The computer processor compares the aggregate score to a threshold value, wherein the threshold value is determined during training of the learning model, and the computer processor, in response to the aggregate score crossing the threshold value, determines a topic change has occurred within the first span. |
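The scoring scheme in this claim (per-indicator detectors, learned weights, and a threshold set during training) can be illustrated with a toy version. The specific detectors, the key-phrase list, the weights, and the 0.45 threshold below are all invented; in the claimed system the weights and threshold come from training against labelled dialog statements.

```python
# Hypothetical detectors, each scoring one topic-change indicator in a span of dialog
# statements: a key phrase, a long pause, and an unusually long span. The weights and
# the threshold stand in for values learned from labelled dialogs.

KEY_PHRASES = ("by the way", "one more thing", "changing the subject", "anyway")

def key_phrase_score(span):
    text = " ".join(statement["text"].lower() for statement in span)
    return 1.0 if any(phrase in text for phrase in KEY_PHRASES) else 0.0

def pause_score(span, long_pause=5.0):
    return 1.0 if any(s.get("pause_before", 0.0) >= long_pause for s in span) else 0.0

def span_length_score(span, long_span=12):
    return min(1.0, len(span) / long_span)

DETECTORS = [(key_phrase_score, 0.5), (pause_score, 0.3), (span_length_score, 0.2)]

def topic_change(span, threshold=0.45):
    """Weighted aggregate of detector scores compared against a trained threshold."""
    score = sum(weight * detector(span) for detector, weight in DETECTORS)
    return score >= threshold, score

if __name__ == "__main__":
    span = [{"text": "Thanks, that fixed my billing issue."},
            {"text": "By the way, can you also reset my password?", "pause_before": 6.2}]
    print(topic_change(span))
```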
1. A simulation apparatus comprising: an event receiver receiving an event showing a situation of a first monitor target simulated by a first simulator simulating at least one of motion and state change of the first monitor target from a monitor apparatus monitoring a simulation of the first simulator; a decision maker determining an action based on the event, the action to be carried out with respect to a second monitor target undergoing a simulation of at least one of motion and state change by a second simulator synchronized with the first simulator by time progress; and an execution command transmitter transmitting an execution command of the action determined by the decision maker to a control apparatus controlling the second simulator. 2. The simulation apparatus according to claim 1, wherein the decision maker determines, based on at least one of a type of the event and a value of the event, the action to change an acquisition frequency of the event about the first monitor target; and the execution command transmitter transmits an execution command of the action determined by the decision maker to a control apparatus controlling the first simulator. 3. The simulation apparatus according to claim 2, wherein based on the value of the event and a value of an event received before the event by the event receiver, the decision maker determines the action to change the acquisition frequency of the event about the first monitor target. 4. The simulation apparatus according to claim 1, wherein the first simulator simulates at least one of the motion and state change of the plurality of first monitor targets; an agent including the event receiver, the decision maker, and the execution command transmitter is provided for each of the first monitor targets; and the agent receives the event showing the situation of the first monitor target corresponding to the agent from the monitor apparatus. 5. The simulation apparatus according to claim 4, wherein the decision maker has a plurality of control rules determining types of events, conditions based on values of the events, and actions to be executed, selects the control rule matching a type of the received event and satisfying the condition from among the plurality of control rules, and determines the action shown in the selected control rule. 6. The simulation apparatus according to claim 4, wherein the decision maker of at least one of the agents reports the event to a user device via wired or wireless communication and determines the action to be executed in accordance with an input of an instruction signal from the user. 7. The simulation apparatus according to claim 6, wherein the execution command transmitter transmits a control command to the user device based on the execution command. 8. The simulation apparatus according to claim 1, further comprising: a simulation controller synchronizing the time progress of the first and second simulators by transmitting an instruction signal to advance the simulations of the first simulator and the second simulator by certain time to the control apparatuses controlling the first and second simulators, respectively, and, when both of the first and second simulators are advanced by the certain time, repeatedly transmitting the next instruction signal to advance the certain time. 9. 
The simulation apparatus according to claim 2, further comprising: the monitor apparatus, a first control apparatus controlling the first simulator, and a second control apparatus controlling the second simulator; wherein the first control apparatus transmits an instruction signal to the monitor apparatus, the instruction signal to acquire an event about the first monitor target at an acquisition frequency in accordance with the execution command of the action; the monitor apparatus acquires the event in accordance with the acquisition frequency shown by the instruction signal and transmits the event to the event receiver; and the second control apparatus controls at least one of motion and state of the second monitor target in accordance with the execution command of the action. 10. The simulation apparatus according to claim 1, further comprising a trigger event reporter periodically generating a trigger event about the second simulator; wherein the event receiver receives the trigger event generated by the trigger event reporter; the decision maker determines execution of the action about the second monitor target in accordance with the number of times of reception of the trigger event by the event receiver; and the execution command transmitter transmits the execution command of the action to the control apparatus controlling the second simulator. 11. The simulation apparatus according to claim 1, further comprising: a trigger event reporter periodically generating a trigger event about the first simulator; wherein the event receiver receives the trigger event generated by the trigger event reporter; the decision maker makes a decision to acquire an event about the first monitor target in accordance with the number of times of reception of the trigger event by the event receiver; and the execution command transmitter transmits an execution command of an action to acquire the event to the control apparatus controlling the first simulator. 12. The simulation apparatus according to claim 1, wherein the event receiver receives an event from a monitor apparatus monitoring a simulation of the second simulator, the event showing a situation of the second monitor target simulated by the second simulator; based on the event received by the event receiver, the decision maker determines an action to be carried out with respect to the first monitor target simulated by the first simulator; and the execution command transmitter transmits an execution command of an action determined by the decision maker to a control apparatus controlling the first simulator. 13. The simulation apparatus according to claim 1, wherein the first monitor target is a vehicle; and a value of the event expresses at least one of a velocity of the vehicle, a position of the vehicle, and the number of other vehicles present around the vehicle. 14. The simulation apparatus according to claim 1, wherein the first monitor target is an apparatus consuming electric power; and a value of the event expresses at least one of a voltage, a current, electric power, and an electric power usage amount of the apparatus consuming the electric power. 15. The simulation apparatus according to claim 1, wherein the first monitor target is a vehicle moving by using energy; and a value of the event expresses an energy remaining amount of the vehicle. 16. 
The simulation apparatus according to claim 1, wherein the first monitor target is a provider of a product or a service; and a value of the event expresses a price of the product or the service provided by the provider. 17. The simulation apparatus according to claim 1, further comprising: a communicator connected to the first monitor target and communicatable with a wired or wireless network; wherein the monitor apparatus monitors at least one of the motion and state change of the first monitor target of the network by using the communicator instead of the first monitor target simulated by the first simulator; the event receiver receives an event showing a situation of the first monitor target of the network from the monitor apparatus; based on the event received by the event receiver, the decision maker determines the action with respect to the second monitor target simulated by the second simulator undergoing synchronization of time progress with the network; and the execution command transmitter transmits an execution command of the action determined by the decision maker to the control apparatus controlling the second simulator. 18. The simulation apparatus according to claim 1, further comprising: a communicator connected to the second monitor target and communicatable with a wired or wireless network; wherein the first simulator is undergoing synchronization of time progress with the network; the decision maker determines an action to be carried out with respect to the second monitor target connected to the network instead of the second monitor target simulated by the second simulator; and the execution command transmitter transmits an execution command of the action determined by the decision maker to the second monitor target connected to the network via the communicator. 19. A simulation method performed by a computer, comprising: receiving an event showing a situation of a first monitor target simulated by a first simulator simulating at least one of motion and state change of the first monitor target from a monitor apparatus monitoring a simulation of the first simulator; determining an action based on the event, the action to be carried out with respect to a second monitor target undergoing a simulation of at least one of motion and state change by a second simulator synchronized with the first simulator by time progress; and transmitting an execution command of the action to a control apparatus controlling the second simulator. 20. A non-transitory computer readable medium having instructions stored therein which, when executed by a computer, cause a computer to perform operations comprising: receiving an event showing a situation of a first monitor target simulated by a first simulator simulating at least one of motion and state change of the first monitor target from a monitor apparatus monitoring a simulation of the first simulator; determining an action based on the event, the action to be carried out with respect to a second monitor target undergoing a simulation of at least one of motion and state change by a second simulator synchronized with the first simulator by time progress; and transmitting an execution command of the action to a control apparatus controlling the second simulator. | Please help me write a proper abstract based on the patent claims. | A simulation apparatus according to one embodiment includes an event receiver, a decision maker, and an execution command transmitter. 
The event receiver receives an event showing a situation of a first monitor target simulated by a first simulator simulating at least one of motion and state change of the first monitor target from a monitor apparatus monitoring a simulation of the first simulator. The decision maker determines an action based on the event, the action to be carried out with respect to a second monitor target undergoing a simulation of at least one of motion and state change by a second simulator synchronized with the first simulator by time progress. The execution command transmitter transmits an execution command of the action determined by the decision maker to a control apparatus controlling the second simulator. |
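A toy sketch of the arrangement described above follows: two simulators advanced in lockstep by a controller loop, a monitor that turns the first simulator's state into events, a decision maker that matches events against control rules, and an execution command delivered to the second simulator. The vehicle/charger scenario and every class, rule, and threshold below are invented for illustration only.

```python
class VehicleSim:
    """Stand-in for the first simulator (a vehicle whose battery drains over time)."""
    def __init__(self):
        self.battery = 100.0

    def step(self, dt):
        self.battery = max(0.0, self.battery - 3.0 * dt)


class ChargerSim:
    """Stand-in for the second simulator, driven by execution commands."""
    def __init__(self):
        self.reserved = False

    def step(self, dt):
        pass                                  # no internal dynamics in this toy model

    def execute(self, command):
        if command == "reserve_charging_slot":
            self.reserved = True


def monitor(vehicle):
    """Monitor apparatus: emit an event describing the first monitor target's situation."""
    return {"type": "battery_level", "value": vehicle.battery}


# Control rules: (event type, condition on the event value, action to execute).
CONTROL_RULES = [("battery_level", lambda v: v < 30.0, "reserve_charging_slot")]


def decide(event):
    """Decision maker: select the first rule whose type matches and whose condition holds."""
    for event_type, condition, action in CONTROL_RULES:
        if event["type"] == event_type and condition(event["value"]):
            return action
    return None


if __name__ == "__main__":
    vehicle, charger, dt = VehicleSim(), ChargerSim(), 5
    for _ in range(10):                       # simulation controller: advance both simulators by dt
        vehicle.step(dt)
        charger.step(dt)
        action = decide(monitor(vehicle))     # event receiver + decision maker
        if action:
            charger.execute(action)           # execution command to the second simulator
    print(round(vehicle.battery, 1), charger.reserved)
```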
1. A computer-implemented method of automating inductive bias selection, the method comprising: receiving, by a computer, a plurality of examples, each example providing a plurality of feature-value pairs; constructing, by the computer, an inductive bias dataset which correlates each respective example in the plurality of examples with numerical indications of training quality, wherein the numerical indications of training quality for each respective example are generated by: creating a plurality of models, each model corresponding to a distinct set of inductive biases, and evaluating training quality for each respective model when applied to the respective example; using, by the computer, the inductive bias dataset to select a plurality of inductive biases for application to one or more new datasets. 2. The method of claim 1, wherein the distinct set of inductive biases comprise a first bias corresponding to a model type. 3. The method of claim 1, wherein the distinct set of inductive biases comprise one or more second biases corresponding to model parameters corresponding to a respective model type. 4. The method of claim 1, wherein the distinct set of inductive biases comprise a first bias corresponding to an ensemble modeling technique. 5. The method of claim 1, wherein the distinct set of inductive biases comprise a first bias corresponding a measure of conservativeness with respective to model applicability and model accuracy. 6. The method of claim 1, wherein the distinct set of inductive biases comprise a first bias corresponding to an indication of sampling method. 7. The method of claim 1, wherein the distinct set of inductive biases comprise a first bias corresponding to an indication of technique for imputing missing data for particular data types. 8. The method of claim 1, wherein the plurality of examples comprise time-series data and the distinct set of inductive biases comprise one or more wavelet transformations. 9. The method of claim 1, wherein the distinct set of inductive biases comprise a one or more nominal biases corresponding to an external data source. 10. The method of claim 9, further comprising: using the inductive bias dataset to score a plurality of available external data sources; and generating a model which applies each of the plurality of available external data sources in proportion with its respective score. 11. 
A computer-implemented method of performing recursive learning based on a plurality of feature values, the method comprising: generating, by a computer, a plurality of models for a plurality of input features; receiving, by the computer, a plurality of examples, each example providing a plurality of input feature values corresponding to the plurality of input features; applying, by the computer, a recursive learning process to each respective example included in the plurality of examples, the recursive learning process comprising: selecting a preferred model from the plurality of models, the preferred model providing a lowest predicted error value when applied to the respective example, and associating the respective example with the preferred model; receiving a new example comprising a plurality of new input feature values; using, by the computer, the plurality of new input feature values to select a similar example from the plurality of examples; identifying, by the computer, one or more corresponding preferred models associated with the similar example; and applying, by the computer, the one or more corresponding preferred models to the new example. 12. The method of claim 11, further comprising: associating each of the plurality of models with a score indicative of model quality; identifying the one or more corresponding preferred models associated with the similar example based on the score of each of the one or more corresponding preferred models; averaging results generated by applying the one or more corresponding preferred models to the new example to yield a final prediction. 13. The method of claim 11, further comprising: creating a plurality of recursive learners, each respective recursive learner corresponding to a distinct set of parameters; creating a high-order recursive learner configured to select from the plurality of recursive learners based on feature-value pairs associated with a particular example; and using the high-order recursive learner to select a distinct recursive learner from the plurality of recursive learners based on the plurality of new input feature values, wherein the distinct recursive learner uses the plurality of new input feature values to select the similar example from the plurality of examples and identifies the one or more corresponding preferred models associated with the similar example. 14. The method of claim 13, wherein each of the plurality of recursive learners corresponds to a distinct learning method. 15. The method of claim 14, wherein at least one of the plurality of recursive learners corresponds to a decision tree learning method and at least one of the plurality of recursive learners corresponds to a neural network learning method. 16. 
The method of claim 11, further comprising: partitioning the plurality of new input feature values into a first set of input feature values and a second set of input feature values; using, by the computer, the first set of input feature values to select a first similar example from the plurality of examples; identifying, by the computer, one or more first preferred models associated with the first similar example; applying, by the computer, the one or more first preferred models to the first set of input feature values; using, by the computer, the second set of input feature values to select a second similar example from the plurality of examples; identifying, by the computer, one or more second preferred models associated with the second similar example; and applying, by the computer, the one or more second preferred models to the second set of input feature values. 17. The method of claim 16, further comprising generating a final prediction by averaging (i) first results generated by applying the one or more first preferred models to the first set of input feature values and (ii) second results generated by applying the one or more second preferred models to the second set of input feature values. 18. A modeling computing system comprising: a processor configured to retrieve a plurality of examples from an example database and execute a plurality of modeling components comprising: a model generation component configured to generate a plurality of models, each model corresponding to a specified set of inductive biases; an inductive bias dataset component configured to construct an inductive bias dataset which correlates each respective example in the plurality of examples with numerical indications of training quality, wherein the numerical indications of training quality for each respective example are generated by: creating a plurality of models, each model corresponding to a distinct set of inductive biases, and evaluating training quality for each respective model when applied to the respective example. 19. The modeling computing system of claim 18, wherein the inductive bias dataset component is further configured to use the inductive bias dataset to select a plurality of inductive biases for application to one or more new datasets. 20. The modeling computing system of claim 18, wherein the plurality of modeling components further comprise a recursive learning component configured to apply a recursive learning process to each respective example included in the plurality of examples, the recursive learning process comprising: selecting a preferred model from the plurality of models, the preferred model providing a lowest predicted error value when applied to the respective example, and associating the respective example with the preferred model. | Please help me write a proper abstract based on the patent claims. | A computer-implemented method of automating inductive bias selection includes a computer receiving a plurality of examples, each example providing a plurality of feature-value pairs. The computer constructs an inductive bias dataset which correlates each respective example in the plurality of examples with numerical indications of training quality. The numerical indications of training quality for each respective example are generated by creating a plurality of models, with each model corresponding to a distinct set of inductive biases. The training quality for each respective model is evaluated when applied to the respective example. 
The computer uses the inductive bias dataset to select a plurality of inductive biases for application to one or more new datasets. |
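As an editorial illustration of the row above, a hedged sketch of building an "inductive bias dataset": each candidate bias (here just a model family plus a hyperparameter) is scored per example, and the aggregate quality drives the selection of biases for new data. The scikit-learn models, the negative squared error used as "training quality", and all names are assumptions, not the patent's method.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=200)

candidate_biases = {
    "linear_ridge": Ridge(alpha=1.0),
    "shallow_tree": DecisionTreeRegressor(max_depth=3),
    "deep_tree": DecisionTreeRegressor(max_depth=12),
}

# inductive_bias_dataset[i, j] = training quality of bias j on example i
inductive_bias_dataset = np.zeros((len(X), len(candidate_biases)))
for j, (name, model) in enumerate(candidate_biases.items()):
    model.fit(X, y)
    per_example_error = (model.predict(X) - y) ** 2
    inductive_bias_dataset[:, j] = -per_example_error  # higher = better quality

# Select the biases with the best average quality for application to new datasets.
mean_quality = inductive_bias_dataset.mean(axis=0)
ranking = sorted(zip(candidate_biases, mean_quality), key=lambda t: -t[1])
print("bias ranking:", ranking)
```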
1. A method of using behavioral biometric algorithms, the method comprising: gathering data from a first user; filtering said gathered data; and conducting analysis and distinguishing a human swipe input sequence applied to a touchpad or touchscreen or shape input sequence and behavioral traits from other human behavior and/or machine behavior where: a. said input sequence data is determined based on at least one of: i. raw data in the highest resolution available for the particular system, ii. raw data in the resolution determined by the current application, iii. filtered data fitting different behavior traits; b. said behavioral traits are determined based on at least three of the following: i. an angle of the swipe when entering or leaving one or more measuring points, ii. a velocity between one or more measuring points, iii. an acceleration between one or more measuring points, iv. a quotient between one or more measuring points, v. a sequence between multiple measuring points, vi. a start sequence to a first measuring point, vii. an end sequence from the last measuring point, viii. a time of flight between one or more measuring points, ix. the dominant side between one or more measuring points, x. an area enclosed between a curve and a line connecting one or more measuring points on said curve, xi. a curve fitting between one or more measuring points, xii. a heat map between one or more measuring points, xiii. the average time of the sample, xiv. keypress timings; and c. where said conducting of analysis comprises at least one of the following: i. determining if said input sequence is from said first user, ii. determining if said input sequence is from another user, different from said first user, iii. determining if said input sequence is not human. 2. The method of claim 1, wherein said shape input sequence is inputted using a mouse. 3. The method of claim 1, wherein a raw score of a determination of human characteristics of said swipe input sequence is exhibited, lacking a determination as to whether said swipe input sequence was carried out by a human. 4. The method of claim 1, wherein collision detection on shapes is used to conduct analysis and distinguish human swipe shape input sequence and behavioral traits from other human behavior and/or machine behavior. 5. (canceled) 6. The method of claim 1, wherein said shape input sequence comprises a determination of a shape of inputted data based on a shape of at least two letters or numbers. 7. The method of claim 1, wherein said method is carried out as part of post-processing, after a transaction by a user conducting said swipe input sequence is carried out. 8. The method of claim 1, wherein a filter is used to omit a part of said input sequence data to anonymize user information. | Please help me write a proper abstract based on the patent claims. | Recording, analyzing and categorizing of user interface input via touchpad, touch screens or any device that can synthesize gestures from touch and pressure into input events, such as, but not limited to, smart phones, touch pads and tablets. Humans may generate the input. The analysis of data may include statistical profiling of individual users as well as groups of users; the profiles can be stored in, but not limited to, data containers such as files, secure storage, smart cards, databases, off device, in the cloud, etc. A profile may be built from user/users behavior categorized into quantified types of behavior and/or gestures. The profile might be stored anonymized. 
The analysis may take place in real time or as post processing. Profiles can be compared against each other by all the types of quantified behaviors or by a select few. |
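A small sketch, under assumed inputs, of a few of the behavioral traits listed in claim 1 of the row above (swipe angle, velocity, acceleration, time of flight) and a crude profile comparison. The (x, y, t) sample format and the z-score check are illustrative only, not the patent's scoring method.

```python
import numpy as np

def swipe_features(points):
    """points: sequence of (x, y, t) samples; t in seconds."""
    p = np.asarray(points, dtype=float)
    dxy = np.diff(p[:, :2], axis=0)
    dt = np.diff(p[:, 2])
    speed = np.linalg.norm(dxy, axis=1) / dt           # velocity between points
    accel = np.diff(speed) / dt[1:]                    # acceleration between points
    angles = np.arctan2(dxy[:, 1], dxy[:, 0])          # entry / exit angles
    return {
        "mean_speed": float(speed.mean()),
        "mean_accel": float(accel.mean()) if accel.size else 0.0,
        "start_angle": float(angles[0]),
        "end_angle": float(angles[-1]),
        "time_of_flight": float(p[-1, 2] - p[0, 2]),
    }

def anomaly_score(features, profile_mean, profile_std):
    """Crude per-trait z-score distance against a stored user profile."""
    z = [(features[k] - profile_mean[k]) / (profile_std[k] + 1e-9)
         for k in profile_mean]
    return float(np.mean(np.abs(z)))

swipe = [(0, 0, 0.00), (20, 5, 0.05), (60, 18, 0.12), (120, 40, 0.21)]
print(swipe_features(swipe))
```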
1. A method of knowledge discovery and publication or broadcasting comprising: identifying a subject matter, designating at least one online shop, for publishing or broadcasting electronic contents, corresponding to the subject matter, accessing or building a first collection of content corresponding to the subject matter, building, using one or more data processing or computing devices, said one or more data processing or computing devices having compound processing speeds of one thousand million or larger than one thousand million instructions per second, a first set of association value spectrums for one or more ontological subjects of the first collection of content by analyzing the first collection of content, receiving an electronic content from a generator, related to the subject matter, through a data communication network, said data communication network carries, transmits, or transports data at the rate of 10 million bits per second or larger than 10 million bits per second, building, using one or more data processing or computing devices, a second set of association value spectrums for one or more ontological subjects of the received electronic content by analyzing the received electronic content, independently from the first set of association value spectrums, assigning, using one or more data processing or computing devices, a merit value to the received electronic content, by processing data of the second set of association value spectrums with data of the first set of association value spectrums, and publishing or broadcasting the received electronic content in at least one of the at least one online shop or passing it for further review, based on the merit value of the received electronic content. 2. The method of claim 1 further comprising building a first one or more data structures corresponding to said first set of association value spectrums and storing said first one or more data structures in one or more non-transitory computer-readable storage media. 3. The method of claim 1 further comprising building a second one or more data structures corresponding to said second set of association value spectrums and storing said second one or more data structures in one or more non-transitory computer-readable storage media. 4. The method of claim 1 wherein the first set of association value spectrums are functions of the number of co-occurrences of ontological subjects within a predefined proximity or the importance of the ontological subjects. 5. The method of claim 1, wherein the second set of association value spectrums is calculated by partitioning the received electronic content and calculating an association value spectrum for an ontological subject of the received electronic content as a function of co-occurrences of the ontological subjects in the partitions or a weighting coefficient of one or more ontological subjects of the received electronic content. 6. The method of claim 1, further comprising a searching software agent for identifying names and contact information of experts and authorities having expertise and credentials related to the subject matter, thereby identifying, for each subject matter, a number of experts and authorities, for acting as one or more roles of a reviewer, editor, administrator, and shop owner of one or more publication shops having the subject matter for exploration related to their expertise. 7. 
A computer implemented method of indexing ontological subjects comprising: providing a plurality of ontological subjects; providing access to a collection of electronic content by way of electronic communication through a data communication network, said data communication network carries, transmits, or transports data at the rate of 10 million bits per second or larger than 10 million bits per second, evaluating numerically, using one or more data processing or computing devices, said one or more data processing or computing devices having compound processing speeds of one thousand million or larger than one thousand million instructions per second, association values between a number of pairs of ontological subjects, using data information of co-occurrence of each of said pairs, within predefined proximities in the collection of electronic content; selecting a first set of ontological subjects, and building an association set for a member of said first set, wherein the association set corresponds to ontological subjects having association values higher than a predefined value with said member of the first set, indexing the members of the first set and members of said associated set in a multilayer index, and storing the index of the ontological subject in one or more non-transitory computer readable storage devices. 8. The method of claim 7 wherein the associated set is further decomposed into a growing association set and a dormant association set, wherein an ontological subject cannot be a member of more than one growing association set. 9. The method of claim 7 wherein the association value between a pair of ontological subjects is further a function of importance of at least one of the ontological subjects of the pair. 10. The method of claim 7, wherein the at least one collection of electronic content includes content retrieved from a search engine database. 11. The method of claim 7, wherein said association value of the pair of ontological subjects is evaluated using the count information by querying at least one Internet search engine and getting the co-occurrence count of the pair from the search engine. 12. The method of claim 8, wherein the ontological subject index is represented with a corresponding ontological subject map, wherein each indexed ontological subject is represented by a node in the map, wherein the map is configured to show an ontological subject of an association set as growing or dormant, thereby providing a tool for visual navigation and focusing on a desired particular place of the map. 13. The method of claim 7, wherein the associated set of the ontological subject is represented by at least one form of spectral graph having ontological subjects in one axis and showing the association value of the members of the set in another axis. 14. The method of claim 7, wherein the ontological subject index is used for identifying at least one of a related subject, most important subjects related to another subject, and an indirect relation of two or more subjects. 15. The method of claim 7, wherein the ontological subject index is used to guide and show to a user routes for exploration in search of new knowledge, thereby assisting the user in knowledge discovery. 16. The method of claim 7, further comprising: a. finding explicit forms of relations of association between each of selected pairs of ontological subjects by searching through the at least one collection of contents; and b. recording and storing said explicit forms of relations in a database configured for easy retrieval. 17. 
The method of claim 7, wherein the ontological subject index is used as a reference for scoring the merit of an electronic content in terms of validity, novelty and importance. 18. A system for knowledge discovery and publication or broadcasting content comprising: a database or list of a plurality of subject matters for exploration; a plurality of online publishing/broadcasting shops each having a title related to at least one of said subject matters; at least one reference database representing association of a plurality of ontological subjects with each other or with the plurality of subject matters, wherein entries of the database indicate association of an ontological subject to a plurality of other ontological subjects or subject matters based on their association values to each other, wherein said association values are calculated by analyzing a collection of content, having at least one content, using one or more data processing or computing devices, said one or more data processing or computing devices having compound processing speeds of one thousand million or larger than one thousand million instructions per second, devices for receiving an electronic content from one or more creators, through a data communication network and/or one or more data acquisition sensors, said data communication network carries, transmits, or transports data at the rate of 10 million bits per second or larger than 10 million bits per second, and assigning at least one publishing shop for said electronic content, and building at least one ontological subject association value spectrum using said electronic content; at least one software program configured to automatically measure a merit for received electronic content, by evaluating, using one or more data processing or computing devices, said one or more data processing or computing devices having compound processing speeds of one thousand million or larger than one thousand million instructions per second, association values of ontological subjects of the received electronic content and processing them with the association values stored in the reference database of association values of ontological subjects derived from the collection of content, comparing predefined parameters and functions of said at least one ontological subject map of said content with said at least one reference ontological subject map, and passing the content, if the at least one merit meets a predetermined criterion, for publication to said at least one assigned shop; and one or more data processing and computing devices, said one or more data processing or computing devices having compound processing speeds of one thousand million or larger than one thousand million instructions per second, for making said received electronic content available for public access through a communication network. 19. The system of claim 18, wherein the system is distributed and at least one part of the system is physically located in, or performs from, a different location from the rest of the system. 20. The system of claim 18, wherein the association values between ontological subjects are calculated based on the number of co-occurrences, within predefined proximities in the collection of content or partitions of the received content, of pairs of ontological subjects with each other, or a weighting coefficient for one or more of the ontological subjects of the pairs. | Please help me write a proper abstract based on the patent claims. 
| A system and method is presented for knowledge discovery that incorporates both humans and computers to index, process, communicate, and share knowledge and electronic contents. It also provides a platform for launching an unlimited number of qualified, content-reviewed publishing/broadcasting ventures or artificial beings. The system assists individuals in faster and more efficient discovery/creation of new and useful knowledge and valuable artistic content. It also provides incentives to the owners of the ventures and a method for rewarding or compensating all contributors. |
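To make the association-value idea in the row above concrete, a toy sketch that counts co-occurrences of ontological-subject pairs within a proximity window and builds an association set per subject above a threshold. The tokenisation, window size, normalisation, and threshold are assumptions, not the patent's formulas.

```python
from collections import Counter

docs = ["gene expression regulates protein synthesis",
        "protein folding affects protein function",
        "gene mutation alters protein folding"]
subjects = {"gene", "protein", "expression", "folding", "mutation"}
window = 4   # proximity window, measured in retained tokens

pair_counts, unit_counts = Counter(), Counter()
for doc in docs:
    toks = [t for t in doc.split() if t in subjects]
    unit_counts.update(toks)
    for i, a in enumerate(toks):
        for b in toks[i + 1:i + window]:
            pair_counts[tuple(sorted((a, b)))] += 1

def association(a, b):
    # co-occurrence count scaled by frequency (a stand-in for subject importance)
    return pair_counts[tuple(sorted((a, b)))] / (unit_counts[a] * unit_counts[b]) ** 0.5

threshold = 0.3
association_sets = {
    s: {t for t in subjects if t != s and association(s, t) >= threshold}
    for s in subjects
}
print(association_sets)
```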
1. A non-transitory computer readable storage medium comprising instructions for developing, testing and analyzing a product that, when executed on a computing device, cause the computing device to at least: receive data regarding the product in an application; send a message containing the data and a set of rules that apply to the data together to a rule engine to initiate a process for analyzing the data, the data being true for the set of rules for a current process state; analyze the data based on the set of rules operated within the rule engine to produce a rule-dependent response based on the data without requesting additional rules or additional data for the first set of rules from any source during the current process state; and based on the rule-dependent response, perform one or more work flows within the application related to the development, testing or analysis of the product. 2. The non-transitory computer-readable storage medium of claim 1, wherein the instruction to analyze the data includes instructions to combine a forward-chaining rule with a backward-chaining rule by creating a condition within the forward-chaining rule that contains a backward-chaining query. 3. The non-transitory computer-readable storage medium of claim 1, wherein the instruction to analyze the data includes instructions to combine a backward-chaining rule with a forward-chaining rule by utilizing a fact inferred from the forward-chaining rule as a goal for the backward-chaining rule, unless the forward-chaining rule contains a condition that depends on negation of another forward-chaining inference, in which case execution of the forward-chaining rule is suspended, the dependency of the rule-predicate for the problematic fact is recorded in a table, and execution of the forward-chaining rule skips to the next untried fact to select a new rule to execute. 4. The non-transitory computer-readable storage medium of claim 1, wherein the data is simulation data associated with an aspect of the product being developed, wherein the rule-dependent response indicates a problem with the simulation data, and wherein the one or more work flows include alerting one or more persons regarding the problem. 5. The non-transitory computer-readable storage medium of claim 1, wherein the data is testing data associated with a prototype of the product being developed, wherein the rule-dependent response indicates a problem with the testing data, and wherein the one or more work flows include alerting one or more persons regarding the problem. 6. The non-transitory computer-readable storage medium of claim 1, wherein the data is analysis data associated with the product that has been developed, wherein the rule-dependent response indicates the product passes a certification, fails a certification, or is missing a part necessary to certifying the product in accordance with a standard or a regulation, and wherein the one or more work flows include alerting one or more persons regarding the product passing the certification, failing the certification or missing the part. 7. The non-transitory computer-readable storage medium of claim 6, wherein one or more work flows include generating a report suitable for submission to a standard body or regulatory authority. 8. The non-transitory computer-readable storage medium of claim 6, wherein one or more work flows include generating a report indicating why the product failed the certification. 9. 
The non-transitory computer-readable storage medium of claim 6, wherein one or more work flows include generating a report indicating at least one part the product was missing and an indication of where the part could be located within the product. | Please help me write a proper abstract based on the patent claims. | Systems, methods and mediums are described for processing rules and associated bags of facts generated by an application in communication with a processing engine, database and rule engine that process the bags of facts in view of the rules and generate one or more rule-dependent responses to the application which performs one or more work flows based on the responses. The rule engine may apply forward-chaining, backward-chaining or a combination of forward-chaining and backward-chaining to process the rules and facts. Numerous novel applications that work in conjunction with the processing engine, database and rule engine are also described. |
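A toy illustration of the forward/backward-chaining combination described in claim 2 of the row above: a forward rule whose condition embeds a backward-chaining query over the same fact base. The fact encoding and rules are hypothetical; a production rule engine (e.g. a Rete-based one) is far more involved.

```python
facts = {("sensor", "temp_high"), ("part", "fan", "installed")}
backward_rules = {
    # goal -> list of alternative bodies; every subgoal in a body must hold
    ("cooling", "ok"): [[("part", "fan", "installed")]],
}

def prove(goal):
    """Backward chaining: a goal holds if it is a fact or some rule body holds."""
    if goal in facts:
        return True
    return any(all(prove(g) for g in body) for body in backward_rules.get(goal, []))

def forward_rule(fact_base):
    """Forward rule whose condition contains a backward-chaining query (claim 2 flavour)."""
    if ("sensor", "temp_high") in fact_base and prove(("cooling", "ok")):
        return ("test", "thermal", "pass")
    return None

inferred = forward_rule(facts)
if inferred:
    facts.add(inferred)
print(facts)
```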
1. An information processing apparatus, comprising: a calculation section which calculates a proficiency level of a user for operations performed by the user for achieving a prescribed objective based on history information related to the operations and attribute information related to physical features of the user; and a generation section which generates advice for achieving the objective based on the proficiency level calculated by the calculation section. 2. The information processing apparatus according to claim 1, wherein the generation section generates advice for a first user based on the advice generated for a second user who has the attribute information similar to the attribute information of the first user. 3. The information processing apparatus according to claim 1, further comprising: an estimation section which estimates other objectives capable of being easily achieved by the user based on the history information and the proficiency level, wherein the generation section generates the advice which recommends the other objectives estimated by the estimation section. 4. The information processing apparatus according to claim 1, wherein the calculation section calculates the proficiency level for a plurality of partial operations by breaking down one of the operations, and wherein the generation section generates the advice based on the proficiency level for each of the partial operations. 5. The information processing apparatus according to claim 1, wherein the generation section generates the advice more abstractly when the proficiency level is high, and generates the advice more specifically when the proficiency level is low. 6. The information processing apparatus according to claim 1, wherein the generation section preferentially generates the advice related to the operations in which the proficiency level is low. 7. The information processing apparatus according to claim 1, further comprising: a detection section which detects operation tendencies of the user in the operations from the history information, wherein the generation section generates the advice additionally based on the operation tendencies detected by the detection section. 8. The information processing apparatus according to claim 1, wherein the calculation section calculates a first proficiency level for the operations performed for achieving a first objective, and wherein the generation section generates the advice for achieving a second objective based on the first proficiency level. 9. The information processing apparatus according to claim 1, wherein the generation section generates the advice related to tools used by the user for the operations. 10. The information processing apparatus according to claim 1, further comprising: an operation acquisition section which acquires in real time operations performed by the user for achieving the objective, wherein the generation section generates the advice additionally based on the operations acquired by the operation acquisition section. 11. The information processing apparatus according to claim 1, further comprising: a state acquisition section which acquires a state of the user, wherein the generation section generates the advice additionally based on the state acquired by the state acquisition section. 12. 
The information processing apparatus according to claim 1, further comprising: an environmental information acquisition section which acquires environmental information of surroundings of the user, wherein the generation section generates the advice additionally based on the environmental information acquired by the environmental information acquisition section. 13. The information processing apparatus according to claim 1, wherein the attribute information is at least one of an age, gender, body shape, muscular strength, eyesight, hearing, dominant arm, dominant foot, dominant eye, and dominant ear of the user. 14. The information processing apparatus according to claim 1, wherein the history information includes a history of the operations performed by the user and an achievement level of the objective. 15. The information processing apparatus according to claim 1, further comprising: a presentation control section which presents the user with the advice generated by the generation section. 16. A non-transitory computer-readable storage medium having a program stored therein, the program causing a computer to perform: calculating a proficiency level of a user for operations performed by the user for achieving a prescribed objective based on history information related to the operations and attribute information related to physical features of the user; and generating advice for achieving the objective based on the calculated proficiency level. | Please help me write a proper abstract based on the patent claims. | Provided is an information processing apparatus, including a calculation section which calculates a proficiency level of a user for operations performed by the user for achieving a prescribed objective based on history information related to the operations and attribute information related to physical features of the user, and a generation section which generates advice for achieving the objective based on the proficiency level calculated by the calculation section. |
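A hedged sketch of claims 4-6 of the row above: a proficiency level computed per partial operation from operation history, with advice generated for the lowest-proficiency operation and phrased more specifically when proficiency is low. The success-rate scoring and the advice strings are assumptions for illustration only.

```python
history = {
    # partial operation -> list of (succeeded, duration_seconds)
    "grip": [(True, 2.1), (True, 1.9), (False, 3.5)],
    "swing": [(False, 4.0), (False, 3.8), (True, 3.9)],
}

def proficiency(records):
    """Success rate over the recorded attempts for one partial operation."""
    return sum(1 for ok, _ in records if ok) / len(records)

levels = {op: proficiency(r) for op, r in history.items()}
weakest = min(levels, key=levels.get)

# Low proficiency -> specific advice, higher proficiency -> more abstract advice.
if levels[weakest] < 0.5:
    advice = f"Practice the '{weakest}' step slowly and check your hand position each time."
else:
    advice = f"Refine your '{weakest}' step; focus on consistency."
print(levels, advice)
```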
1. A recommendation method, comprising: determining, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each branch node of a content tree, a probability that the user selects each branch node on an nth level of the content tree, wherein one branch node of the content tree corresponds to one content category, and n is a natural number greater than 1; and recommending, to the user according to the probability that the user selects each branch node on the nth level of the content tree, a content category corresponding to at least one branch node on the nth level of the content tree. 2. The method of claim 1, wherein the determining, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each branch node of a content tree, a probability that the user selects each branch node on an nth level of the content tree comprises: calculating an affinity between the user and each branch node on the nth level of the content tree according to the hidden variable characteristic parameter of the user and a hidden variable characteristic parameter of each branch node on the nth level of the content tree; and determining, according to the affinity between the user and each branch node on the nth level of the content tree, the probability that the user selects each branch node on the nth level of the content tree. 3. The method of claim 2, wherein the affinity between the user and each branch node on the nth level of the content tree is a dot product of the hidden variable characteristic parameter of the user and the hidden variable characteristic parameter of each branch node on the nth level of the content tree. 4. The method of claim 1, wherein the method further comprises: pre-determining the hidden variable characteristic parameter of the user and the hidden variable characteristic parameter of each branch node of the content tree. 5. The method of claim 1, wherein the content tree is an application tree, and the content category is an APP category; or the content tree is a commodity tree, and the content category is a commodity category; or the content tree is a search result tree, and the content category is a search result category. 6. A recommendation method, comprising: determining, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each leaf node of a content tree, a probability that the user selects each leaf node of the content tree, wherein one leaf node of the content tree corresponds to one content; and recommending, to the user according to the probability that the user selects each leaf node of the content tree, a content corresponding to at least one leaf node of the content tree. 7. The method of claim 6, wherein the determining, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each leaf node of a content tree, a probability that the user selects each leaf node of the content tree comprises: calculating an affinity between the user and each leaf node of the content tree according to the hidden variable characteristic parameter of the user and the hidden variable characteristic parameter of each leaf node of the content tree; and determining, according to the affinity between the user and each leaf node of the content tree, the probability that the user selects each leaf node of the content tree. 8. 
The method of claim 7, wherein the affinity between the user and each leaf node of the content tree is a dot product of the hidden variable characteristic parameter of the user and the hidden variable characteristic parameter of each leaf node of the content tree. 9. The method of claim 6, wherein the method further comprises: pre-determining the hidden variable characteristic parameter of the user and the hidden variable characteristic parameter of each leaf node of the content tree. 10. The method of claim 6, wherein the content tree is an APP tree, and the content is an APP; or the content tree is a commodity tree, and the content is a commodity; or the content tree is a search result tree, and the content is a search result. 11. A recommendation apparatus, comprising: a probability determining module, configured to determine, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each branch node of a content tree, a probability that the user selects each branch node on an nth level of the content tree, wherein one branch node of the content tree corresponds to one content category, and n is a natural number greater than 1; and a recommendation module, configured to recommend, to the user according to the probability that the user selects each branch node on the nth level of the content tree, a content category corresponding to at least one branch node on the nth level of the content tree. 12. The apparatus of claim 11, wherein the probability determining module comprises: an affinity determining unit, configured to calculate an affinity between the user and each branch node on the nth level of the content tree according to the hidden variable characteristic parameter of the user and a hidden variable characteristic parameter of each branch node on the nth level of the content tree; and a probability determining unit, configured to determine, according to the affinity between the user and each branch node on the nth level of the content tree, the probability that the user selects each branch node on the nth level of the content tree. 13. The apparatus of claim 11, wherein the apparatus further comprises: a parameter determining module, configured to pre-determine the hidden variable characteristic parameter of the user and the hidden variable characteristic parameter of each branch node of the content tree. 14. A recommendation apparatus, comprising: a probability determining module, configured to determine, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each leaf node of a content tree, a probability that the user selects each leaf node of the content tree, wherein one leaf node of the content tree corresponds to one content; and a recommendation module, configured to recommend, to the user according to the probability that the user selects each leaf node of the content tree, a content corresponding to at least one leaf node of the content tree. 15. 
The apparatus of claim 14, wherein the probability determining module comprises: an affinity determining unit, configured to calculate an affinity between the user and each leaf node of the content tree according to the hidden variable characteristic parameter of the user and the hidden variable characteristic parameter of each leaf node of the content tree; and a probability determining unit, configured to determine, according to the affinity between the user and each leaf node of the content tree, the probability that the user selects each leaf node of the content tree. 16. The apparatus of claim 14, wherein the apparatus further comprises: a parameter determining module, configured to pre-determine the hidden variable characteristic parameter of the user and the hidden variable characteristic parameter of each leaf node of the content tree. 17. A recommendation apparatus, comprising: a memory and a processor, wherein the memory is configured to store an instruction; and the processor is configured to execute the instruction, to perform the following steps: determining, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each branch node of a content tree, a probability that the user selects each branch node on an nth level of the content tree, wherein one branch node of the content tree corresponds to one content category, and n is a natural number greater than 1; and recommending, to the user according to the probability that the user selects each branch node on the nth level of the content tree, a content category corresponding to at least one branch node on the nth level of the content tree. 18. A recommendation apparatus, comprising: a memory and a processor, wherein the memory is configured to store an instruction; and the processor is configured to execute the instruction, to perform the following steps: determining, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each leaf node of a content tree, a probability that the user selects each leaf node of the content tree, wherein one leaf node of the content tree corresponds to one content; and recommending, to the user according to the probability that the user selects each leaf node of the content tree, a content corresponding to at least one leaf node of the content tree. | Please help me write a proper abstract based on the patent claims. | Embodiments of this application disclose a recommendation method, comprising: determining, according to a hidden variable characteristic parameter of a user and a hidden variable characteristic parameter of each branch node of a content tree, a probability that the user selects each branch node on an nth level of the content tree; and recommending, to the user according to the probability that the user selects each branch node on the nth level of the content tree, a content category corresponding to at least one branch node on the nth level of the content tree. This application further discloses another recommendation method and recommendation apparatus. 
According to the recommendation method and apparatus in the embodiments of this application, a probability that a user selects each node on a specific level in a tree structure of to-be-recommended contents can be determined according to a hidden variable characteristic parameter of the user, and a recommendation is given based on the probability. This overcomes a problem in the prior art that a user's need for customization is overlooked, so that a customized recommendation can be given to the user more accurately. |
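The claims above state the core computation directly: the affinity between a user and a node is the dot product of their hidden-variable vectors, and selection probabilities come from normalising affinities over the nodes of one level. A brief sketch follows, where the softmax normalisation and the top-k cut-off are assumptions rather than the patent's stated method.

```python
import numpy as np

user_vec = np.array([0.8, -0.1, 0.3])          # hidden variable parameter of the user
level_n_nodes = {                              # branch nodes on the nth level
    "games":     np.array([0.9, 0.0, 0.2]),
    "education": np.array([-0.2, 0.5, 0.1]),
    "finance":   np.array([0.1, -0.4, 0.7]),
}

names = list(level_n_nodes)
affinity = np.array([user_vec @ level_n_nodes[n] for n in names])   # dot products
prob = np.exp(affinity) / np.exp(affinity).sum()                    # assumed softmax

top_k = 2
recommended = [names[i] for i in np.argsort(-prob)[:top_k]]
print(dict(zip(names, prob.round(3))), recommended)
```

Any monotone normalisation would preserve the ranking, which is all a top-k recommendation needs; the softmax is just a convenient way to turn affinities into probabilities.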
1. A method, in an information handling system comprising a processor and a memory, for generating concept vectors, the method comprising: generating, by the system, concept sequences from one or more content sources by extracting a plurality of concepts to form each concept sequence; and generating, by the system, at least a first concept vector for a first concept by supplying the concept sequences as input to a vector learning component, such that the first concept vector comprises information interrelating the first concept to other concepts in the concept sequences which is inferred from the concept sequences. 2. The method of claim 1, wherein generating concept sequences comprises performing, by the system, a natural language processing (NLP) analysis of an annotated text source specializing in concepts to filter out words corresponding to non-concepts, leaving a plurality of concepts forming a concept sequence. 3. The method of claim 1, wherein generating concept sequences comprises performing, by the system, a natural language processing (NLP) analysis of an annotated text source specializing in concepts to replace one or more words forming each concept by a marked string representing the concept, leaving other words unchanged. 4. The method of claim 1, wherein generating concept sequences comprises performing, by the system, a random walk across concept nodes on a graph having edge weights, where a probability to proceed to a concept node depends on a connecting edge weight relative to edge weights of edges connecting to other concept nodes, thereby traversing a plurality of concepts forming a concept sequence. 5. The method of claim 1, wherein generating concept sequences comprises performing, by the system, a random walk across weighted concept nodes on a graph having edge weights, where a probability to proceed to a concept node depends on a connecting edge weight and on an adjoining concept node's weight relative to weights of other connecting edges and adjoining concept nodes, thereby traversing a plurality of concepts forming a concept sequence. 6. The method of claim 1, wherein generating concept sequences comprises tracking, by the system, at least a first user's navigation behavior over a plurality of concepts to identify a plurality of concepts forming a concept sequence. 7. The method of claim 1, wherein generating at least the first concept vector comprises performing, by the system, machine learning processing to compute the first concept vector based on statistics of associations with the first concept in one or more of the concept sequences. 8. The method of claim 1, wherein generating at least the first concept vector comprises performing, by the system, a machine learning process to repeatedly scan the concept sequences to generate a vector representation for each concept in the sequence. 9. The method of claim 1, wherein generating concept sequences comprises filtering, by the system, the one or more content sources to retain text related to a specified theme in a filtered content source before extracting the plurality of concepts from the filtered content source to form each concept sequence. 10. 
The method of claim 1, wherein generating concept sequences comprises: extracting a first plurality of concepts from a first set of concept annotations for the one or more content sources, and extracting a second plurality of concepts from a second set of concept annotations for the one or more content sources; and wherein generating at least the first concept vector comprises supplying the first plurality of concepts and second plurality of concepts as input to the vector learning component to generate the first concept vector and a second concept vector. 11. An information handling system comprising: one or more processors; a memory coupled to at least one of the processors; a set of instructions stored in the memory and executed by at least one of the processors to generate concept vectors, wherein the set of instructions are executable to perform actions of; generating, by the system, concept sequences from one or more content sources by extracting a plurality of concepts to form each concept sequence; and generating, by the system, a first concept vector for a first concept by supplying the concept sequences as input to a vector learning component, such that the first concept vector comprises information interrelating the first concept to other concepts in the concept sequences which is inferred from the concept sequences. 12. The information handling system of claim 11, wherein the set of instructions are executable to generate concept sequences by performing a natural language processing (NLP) analysis of an annotated text source specializing in concepts to filter out words corresponding to non-concepts, leaving a plurality of concepts forming a concept sequence. 13. The information handling system of claim 11, wherein the set of instructions are executable to generate concept sequences by performing a natural language processing (NLP) analysis of an annotated text source specializing in concepts to replace one or more words forming each concept by a marked string representing the concept, leaving other words unchanged. 14. The information handling system of claim 11, wherein the set of instructions are executable to generate concept sequences by performing a random walk across weighted concept nodes on a graph having edge weights, where a probability to proceed to a concept node depends on a connecting edge weight and an adjoining concept node's weight relative to other edges and concept nodes, thereby traversing a plurality of concepts forming a concept sequence. 15. The information handling system of claim 11, wherein the set of instructions are executable to generate concept sequences by tracking at least a first user's navigation behavior over a plurality of concepts to identify a plurality of concepts forming a concept sequence. 16. The information handling system of claim 11, wherein the set of instructions are executable to generate the first vector by performing machine learning processing to compute the first concept vector based on statistics of associations with the first concept in one or more of the concept sequences. 17. The information handling system of claim 11, wherein the set of instructions are executable to generate the first vector by performing a machine learning process to repeatedly scan the concept sequences to generate a vector representation for each concept in the sequence. 18. 
The information handling system of claim 11, wherein the set of instructions are executable to generate the first vector by performing a machine learning process to repeatedly scan the concept sequences to generate a vector representation for each concept in the sequence. 19. A computer program product stored in a computer readable storage medium, comprising computer instructions that, when executed by an information handling system, causes the system to generate concept vectors by performing actions comprising: generating, by the system, concept sequences from one or more content sources by extracting a plurality of concepts to form each concept sequence; and generating, by the system, a first concept vector for a first concept by supplying the concept sequences as input to a vector learning component, such that the first concept vector comprises information interrelating the first concept to other concepts in the concept sequences which is inferred from the concept sequences. 20. The computer program product of claim 19, wherein generating concept sequences comprises performing a natural language processing (NLP) analysis of an annotated text source specializing in concepts to filter out words corresponding to non-concepts, leaving a plurality of concepts forming a concept sequence. 21. The computer program product of claim 19, wherein generating concept sequences comprises performing a natural language processing (NLP) analysis of an annotated text source specializing in concepts to replace one or more words forming each concept by a marked string representing the concept, leaving other words unchanged. 22. The computer program product of claim 19, wherein generating concept sequences comprises performing a random walk across weighted concept nodes on a graph having edge weights, where a probability to proceed to a concept node depends on a connecting edge weight and an adjoining concept node's weight relative to other edges and concept nodes, thereby traversing a plurality of concepts forming a concept sequence. 23. The computer program product of claim 19, wherein generating concept sequences comprises tracking at least a first user's navigation behavior over a plurality of concepts to identify a plurality of concepts forming a concept sequence. 24. The computer program product of claim 19, wherein generating the first concept vector comprises performing machine learning processing to compute the first concept vector based on statistics of associations with the first concept in one or more of the concept sequences. 25. The computer program product of claim 19, wherein generating the first concept vector comprises performing a machine learning process to repeatedly scan the concept sequences to generate a vector representation for each concept in the sequence. | Please help me write a proper abstract based on the patent claims. | An approach is provided for automatically generating and processing concept vectors by extracting concept sequences from one or more content sources and generating a first concept vector for a first concept by supplying the concept sequences as inputs to a vector learning component, such that the first concept vector comprises information interrelating the first concept to other concepts in the concept sequences which is inferred from the concept sequences. |
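As an illustration of claims 4 and 7 of the row above, a sketch that generates concept sequences by edge-weight-proportional random walks over a small concept graph and then forms per-concept vectors from windowed co-occurrence statistics of those sequences. The example graph, window size, and the use of raw counts instead of a learned embedding model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
concepts = ["cell", "dna", "protein", "enzyme"]
idx = {c: i for i, c in enumerate(concepts)}
# weighted adjacency matrix: edge weights between concept nodes
W = np.array([[0, 3, 2, 0],
              [3, 0, 4, 1],
              [2, 4, 0, 3],
              [0, 1, 3, 0]], dtype=float)

def random_walk(start, length=10):
    walk, node = [start], idx[start]
    for _ in range(length - 1):
        p = W[node] / W[node].sum()            # probability proportional to edge weight
        node = rng.choice(len(concepts), p=p)
        walk.append(concepts[node])
    return walk

sequences = [random_walk(c) for c in concepts for _ in range(20)]

# Concept "vectors" as windowed co-occurrence statistics over the sequences.
window = 2
vectors = np.zeros((len(concepts), len(concepts)))
for seq in sequences:
    for i, a in enumerate(seq):
        for b in seq[max(0, i - window):i]:
            vectors[idx[a], idx[b]] += 1
            vectors[idx[b], idx[a]] += 1
print(dict(zip(concepts, vectors)))
```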
1. An apparatus, comprising: a feature value calculating unit configured to calculate feature values corresponding to image and sound features in a video content; and a fatigue degree calculating unit configured to calculate a degree of user fatigue to the video content by applying the feature values to a fatigue degree estimation model. 2. The apparatus of claim 1, wherein the feature value calculating unit further comprises: an image feature value calculating unit configured to calculate an image feature value corresponding to an image feature in the video content; a sound feature value calculating unit configured to calculate a sound feature value corresponding to a sound feature in the video content; and an emotion feature value calculating unit configured to calculate an emotion feature value by applying at least one of the image feature value and the sound feature value to an emotion recognition model. 3. The apparatus of claim 2, wherein the emotion feature value comprises an arousal level and a valence level. 4. The apparatus of claim 1, wherein the fatigue degree estimation model comprises: a first parameter that represents a correlation between the image feature value and a bio-signal change of a user; and a second parameter that represents a correlation between the bio-signal change and a fatigue degree. 5. The apparatus of claim 2, wherein the fatigue degree calculating unit is further configured to calculate an auditory fatigue degree based on the sound feature value, a visual fatigue degree based on the image feature value, and a mental fatigue degree based on the emotion feature value. 6. The apparatus of claim 5, wherein the fatigue degree calculating unit is further configured to calculate an overall fatigue degree based on the auditory fatigue degree, the visual fatigue degree, and the mental fatigue degree. 7. The apparatus of claim 1, wherein the fatigue degree calculating unit is further configured to calculate an accumulative fatigue degree according to a lapse of viewing time, while a user is watching the video content. 8. The apparatus of claim 1, further comprising: a fatigue degree information generating unit configured to generate fatigue degree information to display the fatigue degree to a user. 9. A method, comprising: calculating feature values corresponding to image and sound features in a video content; and calculating a degree of user fatigue to the video content by applying the feature values to a fatigue degree estimation model. 10. The method of claim 9, wherein the calculating of the feature values comprises: calculating an image feature value corresponding to an image feature in the video content; calculating a sound feature value corresponding to a sound feature in the video content; and calculating an emotion feature value by applying at least one of the image feature value and the sound feature value to an emotion recognition model. 11. The method of claim 10, wherein the emotion feature value comprises an arousal level and a valence level. 12. The method of claim 9, wherein the fatigue degree estimation model comprises: a first parameter that represents a correlation between the image feature value and a bio-signal change of a user; and a second parameter that represents a correlation between the bio-signal change and a fatigue degree. 13. 
The method of claim 10, wherein the calculating of the fatigue degree comprises calculating an auditory fatigue degree based on the sound feature value, a visual fatigue degree based on the image feature value, and a mental fatigue degree based on the emotion feature value. 14. The method of claim 13, wherein the calculating of the fatigue degree comprises calculating an overall fatigue degree based on the auditory fatigue degree, the visual fatigue degree, and the mental fatigue degree. 15. The method of claim 9, wherein the calculating of the fatigue degree comprises calculating an accumulative fatigue degree according to a lapse of viewing time, while a user is watching the video content. 16. The method of claim 9, further comprising: generating fatigue degree information to display the fatigue degree to a user. 17. An apparatus, comprising: a feature value calculating unit configured to calculate image feature value, sound feature value, and emotion feature value associated with a video content; and a fatigue degree calculating unit configured to calculate a degree of user fatigue to video content by correlating bio-signal changes of a user to the image feature value, sound feature value, and emotion feature value. 18. The apparatus of claim 17, further comprising: a fatigue degree information generating unit configured to generate fatigue degree information to display the calculated degree of user fatigue. 19. The apparatus of claim 17, wherein the fatigue degree calculating unit is further configured to calculate the degree of user fatigue to video content from a mean value based on a degree of visual fatigue, a degree of auditory fatigue, and a degree of mental fatigue, each degree based on the image feature value, the sound feature value, and the emotion feature value, respectively. 20. The apparatus of claim 17, wherein the fatigue degree calculating unit is further configured to calculate the degree of user fatigue to video content by multiplying each of a degree of visual fatigue, a degree of auditory fatigue, and a degree of mental fatigue by a weighted value, and summing up each of the multiplied values, each degree based on the image feature value, the sound feature value, and the emotion feature value, respectively. | Please help me write a proper abstract based on the patent claims. | An apparatus and a method to estimate a degree of user fatigue to video content are disclosed including a feature value calculating unit and a fatigue degree calculating unit. The feature value calculating unit is configured to calculate feature values corresponding to image and sound features in a video content. The fatigue degree calculating unit is configured to calculate a degree of user fatigue to the video content by applying the feature values to a fatigue degree estimation model. |
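A minimal sketch of claims 19-20 of the row above: visual, auditory, and mental fatigue degrees derived from image, sound, and emotion (arousal/valence) feature values, combined by a weighted sum (equal weights reduce to the mean of claim 19). The feature-to-fatigue mappings and the weights are invented for illustration and are not the patent's trained estimation model.

```python
def fatigue_degrees(image_feat, sound_feat, arousal, valence):
    """Map assumed feature values to visual, auditory and mental fatigue in [0, 1]."""
    visual = min(1.0, image_feat["luminance_change_rate"] * 2.0)
    auditory = min(1.0, sound_feat["loudness_db"] / 100.0)
    mental = min(1.0, 0.5 * arousal + 0.5 * max(0.0, -valence))
    return visual, auditory, mental

def overall_fatigue(visual, auditory, mental, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of the three per-modality fatigue degrees."""
    return sum(w * d for w, d in zip(weights, (visual, auditory, mental)))

v, a, m = fatigue_degrees({"luminance_change_rate": 0.35},
                          {"loudness_db": 72.0},
                          arousal=0.6, valence=-0.2)
print(v, a, m, overall_fatigue(v, a, m))
```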
1. A method comprising: defining, by a computer system, a three dimensional (3D) model; simulating, by the computer system, two or more sensor outputs from sound from a lane-splitting vehicle incident on two or more sensor locations of a subject vehicle in the 3D model; and training, by the computer system, a machine-learning model using a location of the lane-splitting vehicle in the 3D model over time and the two or more sensor outputs. 2. The method of claim 1, further comprising: defining on the subject vehicle one or more camera locations; simulating detection of images at the one or more camera locations; and training the machine-learning model using both the images and the two or more sensor outputs. 3. The method of claim 2 further comprising: defining on the subject vehicle a RADAR sensor location; simulating a RADAR sensor output according to the 3D model; and training the machine learning model using all of the images, the RADAR sensor output, and the two or more sensor outputs. 4. The method of claim 3, further comprising: defining on the subject vehicle a LIDAR sensor location; simulating a sequence of point clouds detected from the LIDAR sensor location of the 3D model; and training the machine learning model using all of the images, the RADAR sensor output, the sequence of point clouds, and the two or more sensor outputs. 5. The method of claim 1, wherein defining the 3D model further includes defining velocities for the plurality of vehicles, the lane-splitting vehicle and the subject vehicle. 6. The method of claim 1, wherein the machine-learning model is a deep neural network. 7. The method of claim 1, wherein simulating the two or more sensor outputs from the sound of the plurality of vehicles incident on the two or more sensor locations comprises simulating Doppler effects and propagation distances of the sound. 8. The method of claim 1, where the lane-splitting vehicle is modeled as a lane-splitting motorcycle emitting engine noises characteristic of a motorcycle. 9. The method of claim 1, further comprising: providing a vehicle including a vehicle controller and two or more microphones; programming the vehicle controller with the machine-learning model; receiving, by the vehicle controller, two or more audio streams from the two or more microphones; inputting, by the vehicle controller, the two or more audio streams to the machine-learning model; (a) determining, by the vehicle controller, that the machine-learning model indicates that the two or more audio streams currently indicate a lane-splitting vehicle is present; in response to (a) at least one of outputting an alert and refraining from causing the vehicle to enter an inter-lane region. 10. The method of claim 9, further comprising: (b) determining, by the vehicle controller, subsequent to (a) that the machine-learning model indicates that the two or more audio streams do not currently indicate a lane-splitting vehicle is present; in response to (b), discontinuing outputting the alert by the vehicle controller and permitting, by the vehicle controller, movement into the inter-lane region. 11. 
A system comprising one or more processors and one or more memory devices coupled to the one or more processors, the one or more memory devices storing executable code effective to: define a three dimensional model including a road and a plurality of vehicles, a lane-splitting vehicle, and a subject vehicle including two or more sensor locations positioned on the road; simulate two or more sensor outputs from sound from the plurality of vehicles incident on the two or more sensor locations; and train a machine-learning model using a location of the lane-splitting vehicle over time and the two or more sensor outputs. 12. The system of claim 11, wherein the executable code is further effective to cause the one or more processors to: define on the subject vehicle one or more camera locations; simulate detection of images at the one or more camera locations; and train the machine-learning model using both the images and the two or more sensor outputs. 13. The system of claim 12, wherein the executable code is further effective to cause the one or more processors to: define on the subject vehicle a RADAR sensor location; simulate a RADAR sensor output according to the 3D model; and train the machine learning model using all of the images, the RADAR sensor output, and the two or more sensor outputs. 14. The system of claim 13, wherein the executable code is further effective to cause the one or more processors to: define on the subject vehicle a LIDAR sensor location; simulate a sequence of point clouds detected from the LIDAR sensor location of the 3D model; and train the machine learning model using all of the images, the RADAR sensor output, the sequence of point clouds, and the two or more sensor outputs. 15. The system of claim 11, wherein the executable code is further effective to cause the one or more processors to define the 3D model by defining velocities for the plurality of vehicles, the lane-splitting vehicle and the subject vehicle. 16. The system of claim 11, wherein the machine-learning model is a deep neural network. 17. The system of claim 11, wherein the executable code is further effective to cause the one or more processors to simulate the two or more sensor outputs from the sound of the plurality of vehicles incident on the two or more sensor locations by simulating Doppler effects and propagation distances of the sound. 18. The system of claim 11, wherein the executable code is further effective to cause the one or more processors to model the lane-splitting vehicle as a lane-splitting motorcycle emitting engine noises characteristic of a motorcycle. 19. The system of claim 11, further comprising a vehicle including a vehicle controller and two or more microphones; wherein the vehicle controller is programmed with the machine-learning model; wherein the vehicle controller is further programmed to: receive two or more audio streams from the two or more microphones; input the two or more audio streams to the machine-learning model; if the machine-learning model indicates that the two or more audio streams currently indicate a lane-splitting vehicle is present, at least one of output an alert by the vehicle controller and refrain from moving, by the vehicle controller, the vehicle into an inter-lane region. 20. 
The method of claim 9, further comprising: if the machine-learning model indicates that the two or more audio streams currently indicate a lane-splitting vehicle is present, at least one of refrain from outputting the alert by the vehicle controller and permit, by the vehicle controller, movement into the inter-lane region. | Please help me write a proper abstract based on the patent claims. | A machine learning model is trained by defining a scenario including models of vehicles and a typical driving environment. A model of a subject vehicle is added to the scenario and sensor locations are defined on the subject vehicle. A perception of the scenario by sensors at the sensor locations is simulated. The scenario further includes a model of a lane-splitting vehicle. The location of the lane-splitting vehicle and the simulated outputs of the sensors perceiving the scenario are input to a machine learning algorithm that trains a model to detect the location of a lane-splitting vehicle based on the sensor outputs. A vehicle controller then incorporates the machine learning model and estimates the presence and/or location of a lane-splitting vehicle based on actual sensor outputs input to the machine learning model. |
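As a rough illustration of the pipeline in the claims above (simulate microphone outputs for scenarios with and without a lane-splitting vehicle, train a model on them, then let a vehicle controller decide whether to alert or to refrain from entering the inter-lane region), here is a hedged Python sketch. The simulation is a toy stand-in, the classifier is a tiny logistic regression rather than the deep neural network of claim 6, and every name, value, and threshold is invented.

    import numpy as np

    rng = np.random.default_rng(0)

    def simulate_mic_pair(lane_splitter_present):
        """Return toy features for two microphones (e.g. per-mic sound energy)."""
        base = rng.normal(0.2, 0.05, size=2)          # ambient traffic noise level
        if lane_splitter_present:
            base += np.array([0.5, 0.4])              # louder, asymmetric motorcycle noise
        return base

    # Build a labeled training set from the simulated scenario.
    X = np.array([simulate_mic_pair(i % 2 == 1) for i in range(200)])
    y = np.array([i % 2 for i in range(200)])

    # Minimal logistic-regression "machine-learning model" trained by gradient descent.
    w, b = np.zeros(2), 0.0
    for _ in range(500):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        grad_w, grad_b = X.T @ (p - y) / len(y), np.mean(p - y)
        w -= 0.5 * grad_w
        b -= 0.5 * grad_b

    def controller_step(mic_features):
        """Vehicle-controller side: alert / refrain from lane change when detection fires."""
        prob = 1 / (1 + np.exp(-(mic_features @ w + b)))
        if prob > 0.5:
            return "alert: lane-splitting vehicle detected; stay out of inter-lane region"
        return "clear: inter-lane movement permitted"

    print(controller_step(simulate_mic_pair(True)))
    print(controller_step(simulate_mic_pair(False)))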
1. A computer-implemented method, executed on a computing device, comprising: defining a model that includes a plurality of variables; conditioning at least one variable of the plurality of variables based, at least in part, upon a conditioning command received from a user, thus defining a conditioned variable; and inferencing the model based, at least in part, upon the conditioned variable. 2. The computer-implemented method of claim 1 wherein the conditioning command defines a selected value for the at least one variable. 3. The computer-implemented method of claim 1 wherein the conditioning command defines an excluded value for the at least one variable. 4. The computer-implemented method of claim 1 wherein the conditioning command releases control of the at least one variable. 5. The computer-implemented method of claim 1 further comprising: identifying one or more candidate variables, chosen from the plurality of variables, to the user for potential conditioning selection. 6. The computer-implemented method of claim 5 wherein conditioning at least one variable of the plurality of variables includes: allowing the user to select the at least one variable from the one or more candidate variables. 7. A computer program product residing on a computer readable medium having a plurality of instructions stored thereon which, when executed by a processor, cause the processor to perform operations comprising: defining a model that includes a plurality of variables; and conditioning at least one variable of the plurality of variables based, at least in part, upon a conditioning command received from a user, thus defining a conditioned variable; and inferencing the model based, at least in part, upon the conditioned variable. 8. The computer program product of claim 7 wherein the conditioning command defines a selected value for the at least one variable. 9. The computer program product of claim 7 wherein the conditioning command defines an excluded value for the at least one variable. 10. The computer program product of claim 7 wherein the conditioning command releases control of the at least one variable. 11. The computer program product of claim 7 further comprising: identifying one or more candidate variables, chosen from the plurality of variables, to the user for potential conditioning selection. 12. The computer program product of claim 11 wherein conditioning at least one variable of the plurality of variables includes: allowing the user to select the at least one variable from the one or more candidate variables. 13. A computing system including a processor and memory configured to perform operations comprising: defining a model that includes a plurality of variables; and conditioning at least one variable of the plurality of variables based, at least in part, upon a conditioning command received from a user, thus defining a conditioned variable; and inferencing the model based, at least in part, upon the conditioned variable. 14. The computing system of claim 13 wherein the conditioning command defines a selected value for the at least one variable. 15. The computing system of claim 13 wherein the conditioning command defines an excluded value for the at least one variable. 16. The computing system of claim 13 wherein the conditioning command releases control of the at least one variable. 17. 
The computing system of claim 13 further configured to perform operations comprising: identifying one or more candidate variables, chosen from the plurality of variables, to the user for potential conditioning selection. 18. The computing system of claim 17 wherein conditioning at least one variable of the plurality of variables includes: allowing the user to select the at least one variable from the one or more candidate variables. | Please help me write a proper abstract based on the patent claims. | A computer-implemented method, computer program product and computing system for defining a model that includes a plurality of variables. At least one variable of the plurality of variables is conditioned based, at least in part, upon a conditioning command received from a user, thus defining a conditioned variable. The model is inferenced based, at least in part, upon the conditioned variable. |
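The claims above describe conditioning a variable (selecting a value, excluding a value, or releasing control of it) and then inferencing the model subject to those conditions. The short Python sketch below shows one way such conditioning commands could constrain a simple discrete model; the variables, values, and enumeration-based inference are illustrative assumptions only.

    from itertools import product

    model = {                      # each variable with its candidate values
        "color": ["red", "blue", "green"],
        "size": ["small", "large"],
    }
    conditions = {}                # per-variable constraints set by the user

    def condition(var, selected=None, excluded=None, release=False):
        if release:
            conditions.pop(var, None)            # release control of the variable
        else:
            conditions[var] = {"selected": selected, "excluded": excluded}

    def allowed(var, value):
        c = conditions.get(var)
        if not c:
            return True
        if c["selected"] is not None:
            return value == c["selected"]
        return value != c["excluded"]

    def inference():
        """Enumerate all assignments consistent with the conditioned variables."""
        names = list(model)
        return [dict(zip(names, combo))
                for combo in product(*(model[n] for n in names))
                if all(allowed(n, v) for n, v in zip(names, combo))]

    condition("color", selected="blue")          # conditioning command: selected value
    condition("size", excluded="small")          # conditioning command: excluded value
    print(inference())                           # -> [{'color': 'blue', 'size': 'large'}]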
1. A computer-implemented method of classifying a data set into two or more classes executed by a processor, comprising: (a) receiving a training data set and a training class set, the training class set including a set of known labels, each known label identifying a class associated with each element in the training data set; (b) receiving a test data set; (c) generating a first classifier for the training data set by applying a first machine learning technique to the training data set and the training class set, (d) generating a first test class set by classifying the elements in the test data set according to the first classifier, (e) for each of a plurality of iterations: (i) transforming the training data set by shifting the elements in the training data set by an amount corresponding to a center of a set of training class centroids, wherein each training class centroid is representative of a center of a subset of elements in the training data set, (ii) transforming the test data set by shifting the elements in the test data set by an amount corresponding to a center of a set of test class centroids, wherein each test class centroid is representative of a center of a subset of elements in the test data set, (iii) generating a second test class set by classifying the elements in the transformed test data set according to a second classifier, wherein the second classifier is generated by applying a second machine learning technique to the transformed training data set and the training class set; (iv) when the first test class set and the second test class set differ, storing the second class set as the first class set and storing the transformed test data set as the test data set and return to step (i). 2. The method of claim 1, further comprising when the first test class set and the second test class set do not differ, outputting the second class set. 3. The method of claim 1, wherein the elements of the training data set represent gene expression data for a patient with a disease, for a patient resistant to the disease, or for a patient without the disease. 4. The method of claim 1, wherein the training data set is formed from a random subset of samples in an aggregate data set, and the test data set is formed from a remaining subset of samples in the aggregate data set. 5. The method of claim 1, wherein: the test data set includes a test set of known labels, each known label identifying a class associated with each element in the test data set; the first test class set includes a set of predicted labels for the test data set; and the second test class set includes a set of predicted labels for the transformed test data set. 6. The method of claim 1, wherein the shifting at step (i) includes applying a rotation, a shear, a linear transformation, or a non-linear transformation to the training data set to obtain the transformed training data set. 7. The method of claim 1, wherein the shifting at step (ii) includes applying a rotation, a shear, a linear transformation, or a non-linear transformation to the test data set to obtain the transformed test data set. 8. The method of claim 1, further comprising comparing the first test class set to the second test class set for each of the plurality of iterations. 9. The method of claim 1, further comprising generating the second classifier for the transformed training data set by applying a machine learning technique to the transformed training data set and the training class set for each of the plurality of iterations. 10. 
The method of claim 1, wherein the transforming at step (ii) is performed by applying the same transformation of step (i). 11. The method of claim 1, further comprising providing the second test class set to a display device, a printing device, or a storing device. 12. The method of claim 1, wherein the first test class set and the second test class set differ if any element of the first test class set differs from a corresponding element of the second test class set. 13. The method of claim 1, wherein the second test class set includes a set of predicted labels for the transformed test data set, the method further comprising evaluating the second classifier by computing a performance metric representative of a number of correct predicted labels in the second test class set divided by a total number of predicted labels. 14. A computer program product comprising computer-readable instructions that, when executed in a computerized system comprising at least one processor, cause said at least one processor to carry out a method comprising: (a) receiving a training data set and a training class set, the training class set including a set of known labels, each known label identifying a class associated with each element in the training data set; (b) receiving a test data set; (c) generating a first classifier for the training data set by applying a first machine learning technique to the training data set and the training class set, (d) generating a first test class set by classifying the elements in the test data set according to the first classifier, (e) for each of a plurality of iterations: (i) transforming the training data set by shifting the elements in the training data set by an amount corresponding to a center of a set of training class centroids, wherein each training class centroid is representative of a center of a subset of elements in the training data set, (ii) transforming the test data set by shifting the elements in the test data set by an amount corresponding to a center of a set of test class centroids, wherein each test class centroid is representative of a center of a subset of elements in the test data set, (iii) generating a second test class set by classifying the elements in the transformed test data set according to a second classifier, wherein the second classifier is generated by applying a second machine learning technique to the transformed training data set and the training class set; (iv) when the first test class set and the second test class set differ, storing the second class set as the first class set and storing the transformed test data set as the test data set and return to step (i). 15. 
A computerized system comprising at least one processor configured with non-transitory computer-readable instructions that, when executed, cause the processor to carry out a method comprising: (a) receiving a training data set and a training class set, the training class set including a set of known labels, each known label identifying a class associated with each element in the training data set; (b) receiving a test data set; (c) generating a first classifier for the training data set by applying a first machine learning technique to the training data set and the training class set, (d) generating a first test class set by classifying the elements in the test data set according to the first classifier, (e) for each of a plurality of iterations: (i) transforming the training data set by shifting the elements in the training data set by an amount corresponding to a center of a set of training class centroids, wherein each training class centroid is representative of a center of a subset of elements in the training data set, (ii) transforming the test data set by shifting the elements in the test data set by an amount corresponding to a center of a set of test class centroids, wherein each test class centroid is representative of a center of a subset of elements in the test data set, (iii) generating a second test class set by classifying the elements in the transformed test data set according to a second classifier, wherein the second classifier is generated by applying a second machine learning technique to the transformed training data set and the training class set; (iv) when the first test class set and the second test class set differ, storing the second class set as the first class set and storing the transformed test data set as the test data set and return to step (i). | Please help me write a proper abstract based on the patent claims. | Described herein are systems and methods for correcting a data set and classifying the data set in an integrated manner. A training data set, a training class set, and a test data set are received. A first classifier is generated for the training data set by applying a machine learning technique to the training data set and the training class set, and a first test class set is generated by classifying the elements in the test data set according to the first classifier. For each of multiple iterations, the training data set is transformed, the test data set is transformed, and a second classifier is generated by applying a machine learning technique to the transformed training data set. A second test class set is generated according to the second classifier, and the first test class set is compared to the second test class set.
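To make the iterative procedure of steps (e)(i) through (e)(iv) concrete, the hedged Python sketch below re-centers the training and test sets on the center of their class centroids, retrains, reclassifies, and stops once the predicted test labels stop changing. A nearest-centroid classifier stands in for the unspecified machine learning technique, and the synthetic data and iteration cap are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)
    X_train = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
    y_train = np.array([0] * 30 + [1] * 30)
    X_test = np.vstack([rng.normal(0.5, 1, (20, 2)), rng.normal(4.5, 1, (20, 2))])  # shifted batch

    def fit_centroids(X, y):
        return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

    def classify(X, centroids):
        labels = np.array(sorted(centroids))
        d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels], axis=1)
        return labels[d.argmin(axis=1)]

    def center_of_class_centroids(X, y):
        return np.mean([X[y == c].mean(axis=0) for c in np.unique(y)], axis=0)

    labels = classify(X_test, fit_centroids(X_train, y_train))   # first classifier / first test class set
    for _ in range(10):
        X_train = X_train - center_of_class_centroids(X_train, y_train)   # transform training data
        X_test = X_test - center_of_class_centroids(X_test, labels)       # transform test data
        new_labels = classify(X_test, fit_centroids(X_train, y_train))    # second classifier
        if np.array_equal(new_labels, labels):                            # class sets no longer differ
            break
        labels = new_labels

    print(labels)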
1. A computing system comprising: a control unit configured to: operate a knowledge discovery component to extract knowledge from data, operate a knowledge engineering component to perform a knowledge extension or a knowledge evolution on the data or the knowledge; and a user interface, coupled to the communication unit, configured to operate an interface component to interact with the knowledge discovery component and the knowledge engineering component. 2. The system as claimed in claim 1 wherein the control unit is configured to evolve or extend the knowledge not restricted to a schema. 3. The system as claimed in claim 1 wherein the control unit is configured to generate a knowledge candidate for the knowledge extension. 4. The system as claimed in claim 1 wherein the control unit is configured to generate a model candidate for the knowledge evolution. 5. The system as claimed in claim 1 wherein the control unit is configured to format the data or the knowledge for analysis. 6. The system as claimed in claim 1 wherein the control unit is configured to iteratively evolve a knowledge model. 7. The system as claimed in claim 1 wherein the control unit is configured to iteratively extend the knowledge. 8. A method of operation of a computing system comprising: extracting, with a control unit, knowledge from data; performing a knowledge extension or a knowledge evolution on the data or the knowledge; and interacting with a knowledge discovery component and a knowledge engineering component. 9. The method as claimed in claim 8 wherein performing the knowledge extension or the knowledge evolution on the data or the knowledge includes evolving or extending the knowledge not restricted to a schema. 10. The method as claimed in claim 8 further comprising generating a knowledge candidate for the knowledge extension. 11. The method as claimed in claim 8 further comprising generating a model candidate for the knowledge evolution. 12. The method as claimed in claim 8 further comprising formatting the data or the knowledge for analysis. 13. The method as claimed in claim 8 further comprising iteratively evolving a knowledge model. 14. The method as claimed in claim 8 further comprising iteratively extending the knowledge. 15. A non-transitory computer readable medium including instructions for execution, the instructions comprising: extracting knowledge from data; performing a knowledge extension or a knowledge evolution on the data or the knowledge; and interacting with a knowledge discovery component and a knowledge engineering component. 16. The medium as claimed in claim 15 wherein performing the knowledge extension or the knowledge evolution on the data or the knowledge includes evolving or extending the knowledge not restricted to a schema. 17. The medium as claimed in claim 15 further comprising generating a knowledge candidate for the knowledge extension. 18. The medium as claimed in claim 15 further comprising generating a model candidate for the knowledge evolution. 19. The medium as claimed in claim 15 further comprising formatting the data or the knowledge for analysis. 20. The medium as claimed in claim 15 further comprising iteratively evolving a knowledge model. 21. The medium as claimed in claim 15 further comprising iteratively extending the knowledge. | Please help me write a proper abstract based on the patent claims. 
| A computing system includes: a control unit configured to operate a knowledge discovery component to extract knowledge from data, operate a knowledge engineering component to perform a knowledge extension or a knowledge evolution on the data or the knowledge; and a user interface, coupled to the communication unit, configured to operate an interface component to interact with the knowledge discovery component and the knowledge engineering component. |
1. A system comprising: a computer-readable media storing at least two modules; a processing unit operably coupled to the computer-readable media, the processing unit adapted to execute the at least two modules, the at least two modules comprising: a model module configured to store a portion of a model; and a deep learning training module configured to communicate with the model module and asynchronously sending updates to parameters shared by the model. 2. A system as claim 1 recites, further comprising one or more data servers configured to pre-process data items and store the pre-processed data items, wherein pre-processing the data items comprises creating variants of the data items. 3. A system as claim 2 recites, wherein the deep learning training module is further configured to: asynchronously receive batches of the pre-processed data items from the one or more data servers; and provide the batches of the pre-processed data items as input to the model module. 4. A system as claim 1 recites, wherein asynchronously sending the updates comprises sending associative and commutative weight updates to the parameters shared by the model. 5. A system as claim 1 recites, wherein asynchronously sending the updates comprises sending updates including activation terms and error terms to the parameters shared by the model, the activation terms representing an output of individual neurons in a layer of the model resulting from feed-forward evaluation and the error terms representing computations associated with the individual neurons resulting from back-propagation of the activation terms. 6. A system as claim 5 recites, further comprising one or more parameter servers configured to: store the parameters shared by the model; receive the activation terms and the error terms for updating the parameters; collect the activation terms and the error terms; calculate updated weight values associated with the parameters based at least partly on the collected activation terms and error terms; and send the updated weight values to the deep learning training module. 7. A system as claim 1 recites, wherein the deep learning training module is further configured to: asynchronously receive updated weight values based on the updates sent to the parameters shared by the model; and provide the updated weight values to the model module to update the portion of the model. 8. A system as claim 1 recites, wherein the portion of the model includes individual neurons arranged in layers, individual neurons in a first layer having vertical proximities within a predetermined threshold to individual neurons in neighboring layers. 9. One or more computer-readable storage media encoded with instructions that, when executed by a processor, configure a computer to perform acts comprising: receiving a batch of data items; processing individual data items of the batch of data items, the processing comprising applying a model to the batch of data items to calculate updates; asynchronously sending the updates to shared parameters associated with the model; asynchronously receiving updated weight values based on the updates to the shared parameters; and modifying the model to reflect the updated weight values. 10. 
One or more computer-readable storage media as claim 9 recites, wherein the processing the individual data items further comprises: assigning the individual data items to individual threads of a plurality of threads based at least in part on the individual threads sharing a same model weight; allocating a training context for feed-forward evaluation and back-propagation; calculating weight updates associated with the convolutional layers of the model; and calculating activation terms and error terms associated with neurons in fully connected layers of the model, the activation terms and error terms based at least in part on the feed-forward evaluation and back-propagation. 11. One or more computer-readable storage media as claim 9 recites, wherein asynchronously sending the updates to the shared parameters comprises sending the updates responsive to processing a predetermined number of the individual data items. 12. One or more computer-readable storage media as claim 9 recites, wherein asynchronously sending the updates to the shared parameters comprises sending the updates in predetermined time intervals. 13. One or more computer-readable storage media as claim 9 recites, wherein the updates are associative and commutative and are aggregated before being applied to update the shared parameters. 14. One or more computer-readable storage media as claim 9 recites, wherein the batch of data items comprises a first batch of data items and the method further comprises: receiving a second batch of data items; processing individual data items of the second batch of data items, the processing comprising applying the model to the second batch of data items to calculate new updates; asynchronously sending the new updates to the shared parameters; asynchronously receiving new updated weight values based on the new updates to the shared parameters; and modifying the model to reflect the new updated weight values. 15. One or more computer-readable storage media as claim 14 recites, further comprising calculating a model prediction error based at least in part on the updated individual weight values and the new updated weight values. 16. One or more computer-readable storage media as claim 15 recites, further comprising processing subsequent batches of data items until the model prediction error converges to a value below a predetermined threshold. 17. A method comprising: arranging computing devices into groups of computing devices, individual groups associated with a model; and partitioning the model across the computing devices in each individual group, the partitioning comprising vertically partitioning the model such that neurons in a layer of the model have vertical proximities within a predetermined threshold to neurons in neighboring layers of the model. 18. A method as claim 17 recites, wherein partitioning the model across the computing devices further comprises partitioning the model to fit in an L3 cache of the computing devices. 19. A method as claim 17 recites, wherein arranging the groups comprises arranging the groups such that a first group sends updates to shared parameters associated with the model at a first rate and a second group sends additional updates to the shared parameters at a second rate. 20. A method as claim 19 recites, wherein arranging the groups further comprises arranging the groups such that the first group sends the updates without knowledge of the second group sending the additional updates. | Please help me write a proper abstract based on the patent claims. 
| Training large neural network models by providing training input to model training machines organized as multiple replicas that asynchronously update a shared model via a global parameter server is described herein. In at least one embodiment, a system including a model module storing a portion of a model and a deep learning training module that communicates with the model module are configured for asynchronously sending updates to shared parameters associated with the model. The techniques herein describe receiving and processing a batch of data items to calculate updates. Replicas of training machines communicate asynchronously with a global parameter server to provide updates to a shared model and return updated weight values. The model may be modified to reflect the updated weight values. The techniques described herein include computation and communication optimizations that improve system efficiency and scaling of large neural networks. |
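The claims and abstract above center on replicas that asynchronously send associative, commutative weight updates to shared parameters and receive updated weights back. The sketch below mimics that flow with a toy linear model, Python threads acting as replicas, and a lock-protected in-process parameter server; the ParameterServer API, the learning rate, and the data are all invented for illustration.

    import threading
    import numpy as np

    class ParameterServer:
        """Holds shared weights; applies associative/commutative (additive) updates."""
        def __init__(self, dim):
            self.w = np.zeros(dim)
            self._lock = threading.Lock()
        def apply_update(self, delta):
            with self._lock:
                self.w += delta          # updates can arrive in any order
                return self.w.copy()     # return updated weight values to the replica

    def replica(server, data, targets, steps=50, lr=0.05):
        w = server.w.copy()              # local copy of the shared model portion
        for _ in range(steps):
            grad = data.T @ (data @ w - targets) / len(targets)
            w = server.apply_update(-lr * grad)   # async push; refresh local model from returned weights

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -1.0])
    X = rng.normal(size=(200, 2))
    y = X @ true_w

    server = ParameterServer(dim=2)
    threads = [threading.Thread(target=replica, args=(server, X[i::4], y[i::4]))
               for i in range(4)]        # four replicas training on different batches
    for t in threads: t.start()
    for t in threads: t.join()
    print(server.w)                      # should approach [2, -1]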
1. A computer-implemented method, comprising: accessing a plurality of computer-implemented directionally distinct relationships among a plurality of computer-implemented objects; inferring automatically a first expertise level of a first person who is a first user of a computer-implemented system from one or more usage behaviors; selecting one or more computer-implemented objects of the plurality of computer-implemented objects, wherein the selecting is performed by a computer-implemented function executed on a processor-based device in accordance with the first expertise level and at least one of the plurality of the computer-implemented directionally distinct relationships; and delivering the one or more computer-implemented objects to the first person. 2. The method of claim 1, further comprising: accessing the plurality of computer-implemented directionally distinct relationships among the plurality of computer-implemented objects in accordance with a first computer-implemented object of the plurality of computer-implemented objects being identified as a context for the accessing. 3. The method of claim 1, further comprising: inferring automatically the first expertise level of the first person who is the first user of the computer-implemented system from the one or more usage behaviors, wherein at least one of the one or more usage behaviors is contributing text-based content. 4. The method of claim 3, further comprising: inferring automatically the first expertise level of the first person who is the first user of the computer-implemented system from the one or more usage behaviors, wherein at least one of the one or more usage behaviors is contributing text-based content, wherein the inferring is based, at least in part, on an automatic analysis of the text-based content. 5. The method of claim 1, further comprising: selecting the one or more computer-implemented objects of the plurality of computer-implemented objects, wherein the selecting is further performed in accordance with expertise level calibration information. 6. The method of claim 5, further comprising: selecting the one or more computer-implemented objects of the plurality of computer-implemented objects, wherein the selecting is further performed in accordance with the expertise level calibration information, wherein the expertise calibration information comprises one or more test results. 7. The method of claim 1, further comprising: delivering an explanation for the delivery of the one or more computer-implemented objects to the first person, wherein the explanation references the first expertise level. 8. A computer-implemented system comprising one or more processor-based devices configured to: access a plurality of computer-implemented directionally distinct relationships among a plurality of computer-implemented objects; infer automatically a first expertise level of a first person who is a first user of a computer-implemented system from one or more usage behaviors; select one or more computer-implemented objects of the plurality of computer-implemented objects, wherein the selecting is performed in accordance with the first expertise level and at least one of the plurality of the computer-implemented directionally distinct relationships; and deliver the one or more computer-implemented objects to the first person. 9. 
The system of claim 8, further comprising the one or more processor-based devices configured to: access the plurality of computer-implemented directionally distinct relationships among the plurality of computer-implemented objects in accordance with a first computer-implemented object of the plurality of computer-implemented objects being identified as a context for the access. 10. The system of claim 9 comprising the one or more processor-based devices configured to: access the plurality of computer-implemented directionally distinct relationships among the plurality of computer-implemented objects in accordance with the first computer-implemented object of the plurality of computer-implemented objects being identified as the context for the access, wherein the identification of the first computer-implemented object as the context is explicitly performed by the first person. 11. The system of claim 8 comprising the one or more processor-based devices configured to: infer automatically the first expertise level of the first person who is the first user of the computer-implemented system from the one or more usage behaviors, wherein at least one of the one or more usage behaviors is contributing text-based content by the first person. 12. The system of claim 11 comprising the one or more processor-based devices configured to: infer automatically the first expertise level of the first person who is the first user of the computer-implemented system from the one or more usage behaviors, wherein at least one of the one or more usage behaviors is contributing text-based content by the first person, wherein the inferring is based, at least in part, on an automatic analysis of the text-based content. 13. The system of claim 8 comprising the one or more processor-based devices configured to: select the one or more computer-implemented objects of the plurality of computer-implemented objects, wherein the selecting is performed in accordance with the first expertise level and at least one of the plurality of the computer-implemented directionally distinct relationships, and wherein the selecting is further performed in accordance with an automatically inferred interest of the first person. 14. The system of claim 8 comprising the one or more processor-based devices configured to: generate an explanation for the delivery of the one or more computer-implemented objects to the first person, wherein the explanation comprises a reference to at least one of the one or more usage behaviors. 15. The system of claim 8 comprising the one or more processor-based devices configured to: generate an explanation for the delivery of the one or more computer-implemented objects to the first person, wherein the explanation comprises a reference to the first expertise level. 16. The system of claim 8 comprising the one or more processor-based devices configured to: generate an explanation for the delivery of the one or more computer-implemented objects to the first person, wherein the explanation comprises a reference to one of the plurality of computer-implemented directionally distinct relationships. 17. 
A portable processor-based device comprising: geographic location identification hardware; and one or more processors configured to: capture one or more usage behaviors; deliver the one or more usage behaviors to a computer-implemented function that infers a first expertise level of a first person who is a first user of the portable processor-based device from the one or more usage behaviors; receive one or more computer-implemented objects that are selected from a plurality of computer-implemented objects that have associated computer-implemented directionally distinct relationships between pairs of the plurality of computer-implemented objects, wherein the selecting is performed in accordance with the first expertise level, at least one of the computer-implemented directionally distinct relationships, and a first automatically determined geographic location; and deliver the one or more computer-implemented objects to the first person. 18. The device of claim 17 comprising the one or more processors configured to: deliver the one or more usage behaviors, wherein at least one of the one or more usage behaviors is an automatically determined geographic location of the portable processor-based device. 18. The device of claim 17 comprising the one or more processors configured to: receive the one or more computer-implemented objects that are selected from the plurality of computer-implemented objects, wherein at least one of the selected one or more computer-implemented objects comprises a reference to a geographic location. 19. The device of claim 17 comprising the one or more processors configured to: receive the one or more computer-implemented objects that are selected from the plurality of computer-implemented objects wherein at least one of the selected one or more computer-implemented objects comprises a reference to a physical object. 20. The device of claim 17 comprising the one or more processors configured to: deliver an explanation for the delivery of the one or more computer-implemented objects to the first person, wherein the explanation comprises a reference to a second automatically determined geographic location. | Please help me write a proper abstract based on the patent claims. | A directed expertise level-based discovery system, method, and device infers an expertise level associated with a system user from a plurality of usage behaviors and delivers one or more computer-implemented objects to the user that are selected in accordance with the inferred expertise level and a directionally distinct relationship between computer-implemented objects. The directed expertise level-based discovery system, method, and device may infer the expertise level from textual information and/or in accordance with calibration information such as test results. Geographic location awareness associated with a portable device may inform the selection of the computer-implemented objects to be delivered to the user. An explanation may be delivered to the user that references information that is used to inform the selection of the computer-implemented objects to be delivered to the user. |
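As a loose illustration of selecting objects by following directionally distinct relationships from a context object, gated by an expertise level inferred from usage behaviors such as contributed text, here is a small Python sketch. The relationship names, the expertise heuristic, and the selection rule are assumptions rather than the claimed method.

    RELATIONSHIPS = [            # (from_object, relationship, to_object)
        ("intro-article", "prerequisite-of", "advanced-article"),
        ("advanced-article", "elaborated-by", "research-paper"),
    ]

    def infer_expertise(usage_behaviors):
        """Toy heuristic: longer contributed text implies higher expertise."""
        text = " ".join(usage_behaviors)
        return "expert" if len(text.split()) > 20 else "novice"

    def select_objects(context_object, expertise):
        # Novices get objects the context depends on; experts get objects that go deeper.
        wanted = {"novice": "prerequisite-of", "expert": "elaborated-by"}[expertise]
        if expertise == "novice":
            return [src for src, rel, dst in RELATIONSHIPS if rel == wanted and dst == context_object]
        return [dst for src, rel, dst in RELATIONSHIPS if rel == wanted and src == context_object]

    behaviors = ["short comment"]
    level = infer_expertise(behaviors)
    print(level, select_objects("advanced-article", level))
    # An explanation delivered with the objects could reference `level` and `behaviors`.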
1-20. (canceled) 21. One or more non-transitory computer-readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: causing a virtual assistant to be presented to enable a conversation between a user and the virtual assistant; receiving user input during the conversation; processing the user input to identify one or more concepts of the user input, each of the one or more concepts comprising a pattern of components; identifying one or more contextual values related to at least one of the conversation or a system that at least partly implements the virtual assistant; determining a response for the user input based at least in part on the one or more concepts of the user input and the one or more contextual values; and causing the response to be presented to the user in real-time via the virtual assistant. 22. The one or more non-transitory computer-readable storage media of claim 21, wherein the one or more contextual values include information related to activity of the user on a site. 23. The one or more non-transitory computer-readable storage media of claim 21, wherein the one or more contextual values include previous user input from the user during the conversation. 24. The one or more non-transitory computer-readable storage media of claim 21, wherein the one or more contextual values include at least one of: a name of the user; an email address; an IP address; a credit card number; or a gender of the user. 25. The one or more non-transitory computer-readable storage media of claim 21, wherein the components of the pattern each include at least one of: one or more vocab terms comprising a grouping of at least one of one or more unambiguous synonyms of a word in the user input or one or more spelling variations of the word in the user input; one or more helper terms comprising one or more words in the user input that have no unambiguous synonyms; or one or more wildcards that each function as a placeholder for a word or words. 26. A method comprising: causing a virtual assistant to carry out a conversation with a user; determining, by a system that is at least partly implemented by a human-trained algorithm, an intent of the user; determining, by the system that is at least partly implemented by the human-trained algorithm, a task based at least in part on the intent of the user; and causing the task to be performed at least in part by the virtual assistant. 27. The method of claim 26, wherein the determining the intent of the user comprises determining one or more concepts that are associated with user input that is received during the conversation. 28. The method of claim 27, wherein the user input comprises natural language input. 29. The method of claim 26, further comprising: capturing information from a previous conversation between the user and the virtual assistant; wherein the intent of the user is determined based at least in part on the information from the previous conversation between the user and the virtual assistant. 30. The method of claim 26, further comprising: identifying one or more values related to at least one of the conversation or the system that is at least partly implemented by the human-trained algorithm; wherein the intent of the user is determined based at least in part on the one or more values. 31. 
The method of claim 30, wherein the one or more values include at least one of: a name of the user; an email address; an IP address; a credit card number; or a gender of the user. 32. The method of claim 26, wherein the task comprises at least one of: outputting content to the user; causing an action be performed by an application that is external to the system; or providing access to a web page. 33. One or more non-transitory computer-readable storage media storing instructions that, when executed by one or more processors, cause the one or more processors to perform the method of claim 26. 34. A system comprising: one or more processors; and memory communicatively coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising: causing a virtual assistant to carry out a conversation with a user; receiving user input during the conversation; determining one or more concepts for the user input; determining a task based at least in part on the one or more concepts and one or more values, the one or more values being related to at least one of the conversation or the system; and causing the task to be performed at least in part by the virtual assistant. 35. The system of claim 34, wherein the task comprises providing content to the user. 36. The system of claim 34, wherein the task comprises calling an external application and passing one or more parameters to the external application. 37. The system of claim 34, wherein the task comprises accessing a web page. 38. The system of claim 34, wherein the one or more values include at least one of: information related to activity of the user on a site; or previous user input from the user during the conversation. 39. The system of claim 34, wherein the one or more values include at least one of: a name of the user; an email address; an IP address; a credit card number; or a gender of the user. 40. The system of claim 34, wherein the one or more concepts for the user input are determined at least in part by a human-trained algorithm. 41. A device comprising: one or more processors; a display device communicatively coupled to one or more processors and configured to display information representing a virtual assistant; a communication component communicatively coupled to the one or more processors and configured to communicate with a remote service to facilitate a conversation with a user, the communication component being configured to: send user input to the remote service for processing; and receive a response for the user input; and a module executable by the one or more processors to cause the virtual assistant to output the response. 42. The device of claim 41, wherein the response is based at least in part on one or more values that are related to the conversation. 43. The device of claim 42, wherein the one or more values include at least one of: a name of the user; an email address; an IP address; a credit card number; or a gender of the user. | Please help me write a proper abstract based on the patent claims. | A virtual assistant may communicate with a user in a conversational manner based on context. For instances, a virtual assistant may be presented to a user to enable a conversation between the virtual assistant and the user. A response to user input that is received during the conversation may be determined based on contextual values related to the conversation or system that implements the virtual assistant. |
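Claim 25 above describes a concept as a pattern of components: vocab-term groups, helper terms, and wildcards. The hedged Python sketch below shows one plausible way to match user input against such a pattern; the concept name, the synonym lists, and the in-order matching rule are illustrative assumptions, not the system's actual algorithm.

    import re

    CONCEPT_PATTERNS = {
        "check_order_status": [
            {"vocab": ["where", "wheres"]},              # unambiguous synonyms / spelling variants
            {"helper": "my"},                            # helper term with no synonyms
            {"vocab": ["order", "package", "shipment"]},
            {"wildcard": True},                          # placeholder for any trailing word(s)
        ],
    }

    def matches(tokens, pattern):
        """Check that the non-wildcard components appear, in order, among the tokens."""
        i = 0
        for component in pattern:
            if "wildcard" in component:
                continue
            allowed = component.get("vocab", [component.get("helper")])
            while i < len(tokens) and tokens[i] not in allowed:
                i += 1
            if i == len(tokens):
                return False
            i += 1
        return True

    def identify_concepts(user_input):
        tokens = re.findall(r"[a-z]+", user_input.lower())
        return [name for name, pattern in CONCEPT_PATTERNS.items() if matches(tokens, pattern)]

    print(identify_concepts("Where is my order today?"))   # -> ['check_order_status']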
1. A method comprising: retrieving user information in response to a user request; selecting predefined content identification strategies for determining learning content based on the user information; executing each selected predefined content identification strategy; merging the executed selected predefined content identification strategies; and generating recommendations for learning content for the user based on the merged executed selected predefined content identification strategies. 2. The method of claim 1 wherein retrieving user information in response to a user request comprises: retrieving the user information from a learning graph, the learning graph comprising nodes and edges, wherein a plurality of content nodes represent learning content and a plurality of person nodes represent individuals. 3. The method of claim 2 wherein retrieving the user information from a learning graph includes retrieving the user information from nodes within a predetermined distance of a person node associated with the user. 4. The method of claim 1 wherein selecting predefined content identification strategies for determining learning content based on the user information comprises applying a rule set. 5. The method of claim 1 further comprising removing recommendations for learning content based on characteristics of the user. 6. The method of claim 1 further comprising removing recommendations for learning content based on characteristics of the content and the user. 7. The method of claim 1 further comprising storing the recommendations for learning content in corresponding recommendation nodes in a learning graph, the learning graph comprising nodes and edges, wherein a plurality of content nodes represent learning content and a plurality of person nodes represent individuals. 8. A computer system comprising: a processor; and a non-transitory computer readable medium having stored thereon one or more programs, which when executed by the processor, causes the processor to: retrieve user information in response to a user request; select predefined content identification strategies for determining learning content based on the user information; execute each selected predefined content identification strategy; merge the executed selected predefined content identification strategies; and generate recommendations for learning content for the user based on the merged executed selected predefined content identification strategies. 9. The computer system of claim 8 wherein retrieve user information in response to a user request comprises: retrieve the user information from a learning graph, the learning graph comprising nodes and edges, wherein a plurality of content nodes represent learning content and a plurality of person nodes represent individuals. 10. The computer system of claim 9 wherein retrieve the user information from a learning graph includes retrieve the user information from nodes within a predetermined distance of a person node associated with the user. 11. The computer system of claim 8 wherein select predefined content identification strategies for determining learning content based on the user information comprises apply a rule set. 12. The computer system of claim 8 wherein the one or more programs, which when executed by the processor, causes the processor to remove recommendations for learning content based on characteristics of the user. 13. 
The computer system of claim 8 wherein the one or more programs, which when executed by the processor, causes the processor to remove recommendations for learning content based on characteristics of the content and the user. 14. The computer system of claim 8 wherein the one or more programs, which when executed by the processor, causes the processor to store the recommendations for learning content in corresponding recommendation nodes in a learning graph, the learning graph comprising nodes and edges, wherein a plurality of content nodes represent learning content and a plurality of person nodes represent individuals. 15. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions for: retrieving user information in response to a user request; selecting predefined content identification strategies for determining learning content based on the user information; executing each selected predefined content identification strategy; merging the executed selected predefined content identification strategies; and generating recommendations for learning content for the user based on the merged executed selected predefined content identification strategies. 16. The non-transitory computer readable storage medium of claim 15 wherein retrieving user information in response to a user request comprises: retrieving the user information from a learning graph, the learning graph comprising nodes and edges, wherein a plurality of content nodes represent learning content and a plurality of person nodes represent individuals. 17. The non-transitory computer readable storage medium of claim 16 wherein retrieving the user information from a learning graph includes retrieving the user information from nodes within a predetermined distance of a person node associated with the user. 18. The non-transitory computer readable storage medium of claim 15 wherein selecting predefined content identification strategies for determining learning content based on the user information comprises applying a rule set. 19. The non-transitory computer readable storage medium of claim 15 further comprising instructions for removing recommendations for learning content based on characteristics of the user. 20. The non-transitory computer readable storage medium of claim 15 further comprising instructions for removing recommendations for learning content based on characteristics of the content and the user. | Please help me write a proper abstract based on the patent claims. | The present disclosure includes techniques pertaining to computer automated learning management systems and methods. In one embodiment, a system is disclosed where information is represented in a learning graph. In one embodiment, a framework may be used to access different algorithms for identifying customized learning content for a user. In another embodiment, the present disclosure includes techniques for analyzing content and incorporating content into an organizational glossary. |
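The recommendation flow in the claims above (retrieve user information, select predefined content-identification strategies via a rule set, execute and merge them, then remove items based on user characteristics) is sketched below in a few lines of Python. The strategies, rules, and content names are invented placeholders; a real system would presumably draw them from the learning graph described in claims 2 and 3.

    def strategy_same_team(user):
        return {"Intro to Analytics", "Team Onboarding"}

    def strategy_skill_gap(user):
        return {"SQL Basics", "Intro to Analytics"}

    RULES = [  # simple rule set mapping user information to strategies
        (lambda u: u.get("new_hire"), strategy_same_team),
        (lambda u: "analytics" in u.get("skill_gaps", []), strategy_skill_gap),
    ]

    def recommend(user):
        selected = [s for predicate, s in RULES if predicate(user)]        # select strategies
        merged = set().union(*(s(user) for s in selected))                 # execute and merge
        completed = set(user.get("completed", []))
        return sorted(merged - completed)     # remove content based on user characteristics

    user = {"new_hire": True, "skill_gaps": ["analytics"], "completed": ["Team Onboarding"]}
    print(recommend(user))   # -> ['Intro to Analytics', 'SQL Basics']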
1. A method to synthesize a business opportunity identified from a set of business opportunities corresponding to an organization for a cognitive decision-making process, the method comprising: receiving, by a processor, an opportunity instance knowledge object associated with a business opportunity identified from a set of business opportunities corresponding to an organization, wherein the opportunity instance knowledge object comprises one or more symptoms indicative of the business opportunity, a financial amount associated with the business opportunity, a type of the business opportunity, and a root cause of the one or more symptoms; generating, by the processor, an opportunity instance knowledge specification, wherein the opportunity instance knowledge specification is appended to the opportunity instance knowledge object; generating, by the processor, a first specification based on the opportunity instance knowledge object, wherein the first specification comprises one or more of a narrative description corresponding to the business opportunity, a visual description corresponding to the business opportunity, evidence for identifying the business opportunity, and a confidence score; generating, by the processor, a second specification based on the opportunity instance knowledge object, wherein the second specification comprises one or more of an impact of inaction, an urgency indicative of the importance of the business opportunity, and an act-by-date associated with the business opportunity; and appending, by the processor, the first specification and the second specification to the opportunity instance knowledge specification, thereby synthesizing the business opportunity identified from the set of business opportunities corresponding to the organization for the cognitive decision-making process. 2. The method of claim 1 further comprises displaying, by the processor, the first specification and the second specification to a user for a strategy to be implemented for the business opportunity. 3. The method of claim 1, wherein the impact of inaction is one of a loss in profit or a loss in brand value, and wherein the urgency is one of high, medium, or low. 4. The method of claim 1, wherein the visual description comprises one or more of a photo, an image, a graph, and a video. 5. The method of claim 1, further comprises determining, by the processor, the impact of inaction based on a set of Key Performance Indicators (KPIs) associated with the business opportunity, and wherein the set of KPIs comprises business policy, brand, and profit. 6. The method of claim 1, wherein the narrative description corresponding to the business opportunity is generated using a natural language generation (NLG) methodology. 7. The method of claim 6, wherein the NLG methodology is one of a Content determination methodology, a Document structuring methodology, an Aggregation methodology, a Choice of words methodology, and a Realization methodology. 8. The method of claim 1 further comprises generating, by the processor, an impact due to inaction for one or more time periods following synthesis of the business opportunity. 9. 
An opportunity synthesis system to synthesize a business opportunity identified from a set of business opportunities corresponding to an organization for a cognitive decision-making process, the opportunity synthesis system comprising: a processor; and a memory coupled to the processor, wherein the processor is capable of executing a plurality of modules stored in the memory, and wherein the plurality of modules comprise: receiving an opportunity instance knowledge object associated with a business opportunity identified from a set of business opportunities corresponding to an organization, wherein the opportunity instance knowledge object comprises one or more symptoms indicative of the business opportunity, a financial amount associated with the business opportunity, a type of the business opportunity, and a root cause of the one or more symptoms; generating an opportunity instance knowledge specification, wherein the opportunity instance knowledge specification is appended to the opportunity instance knowledge object; generating a first specification based on the opportunity instance knowledge object, wherein the first specification comprises one or more of a narrative description corresponding to the business opportunity, a visual description corresponding to the business opportunity, evidence for identifying the business opportunity, and a confidence score; generating a second specification based on the opportunity instance knowledge object, wherein the second specification comprises one or more of an impact of inaction, an urgency indicative of the importance of the business opportunity, and an act-by-date associated with the business opportunity; and appending the first specification and the second specification to the opportunity instance knowledge specification, thereby synthesizing the business opportunity identified from the set of business opportunities corresponding to the organization for the cognitive decision-making process. 10. The opportunity synthesis system of claim 9 further comprises displaying the first specification and the second specification to a user for a strategy to be implemented for the business opportunity. 11. The opportunity synthesis system of claim 9, wherein the impact of inaction is one of a loss in profit or a loss in brand value, and wherein the urgency is one of high, medium, or low. 12. The opportunity synthesis system of claim 9, wherein the visual description comprises one or more of a photo, an image, a graph, and a video. 13. The opportunity synthesis system of claim 9, wherein the impact of inaction is further determined based on a set of Key Performance Indicators (KPIs) associated with the business opportunity, and wherein the set of KPIs comprises business policy, brand, and profit. 14. The opportunity synthesis system of claim 9, wherein the narrative description corresponding to the business opportunity is generated using a natural language generation (NLG) methodology. 15. The opportunity synthesis system of claim 14, wherein the NLG methodology is one of a Content determination methodology, a Document structuring methodology, an Aggregation methodology, a Choice of words methodology, and a Realization methodology. 16. The opportunity synthesis system of claim 9, further comprises generating an impact due to inaction for one or more time periods post-synthesis of the business opportunity. 17. 
A non-transitory computer readable medium embodying a program executable in a computing device to synthesize a business opportunity identified from a set of business opportunities corresponding to an organization for a cognitive decision-making process, the program comprising a program code for: receiving an opportunity instance knowledge object associated with a business opportunity identified from a set of business opportunities corresponding to an organization, wherein the opportunity instance knowledge object comprises one or more symptoms indicative of the business opportunity, a financial amount associated with the business opportunity, a type of the business opportunity, and a root cause of the one or more symptoms; generating an opportunity instance knowledge specification, wherein the opportunity instance knowledge specification is appended to the opportunity instance knowledge object; generating a first specification based on the opportunity instance knowledge object, wherein the first specification comprises one or more of a narrative description corresponding to the business opportunity, a visual description corresponding to the business opportunity, evidence for identifying the business opportunity, and a confidence score; generating a second specification based on the opportunity instance knowledge object, wherein the second specification comprises one or more of an impact of inaction, an urgency indicative of the importance of the business opportunity, and an act-by-date associated with the business opportunity; and appending the first specification and the second specification to the opportunity instance knowledge specification, thereby synthesizing the business opportunity identified from the set of business opportunities corresponding to the organization for the cognitive decision-making process. 18. The non-transitory computer readable medium of claim 17 further comprises determining the impact of inaction based on a set of Key Performance Indicators (KPIs) associated with the business opportunity, and wherein the set of KPIs comprises business policy, brand, and profit; generating an impact due to inaction for one or more time periods post-synthesis of the business opportunity; and displaying the first specification and the second specification to a user for a strategy to be implemented for the business opportunity. 19. The non-transitory computer readable medium of claim 17, wherein the impact of inaction is one of a loss in profit or a loss in brand value, wherein the urgency is one of high, medium, or low, and wherein the visual description comprises one or more of a photo, an image, a graph, and a video. 20. The non-transitory computer readable medium of claim 17, wherein the narrative description corresponding to the business opportunity is generated using a natural language generation (NLG) methodology, and wherein the NLG methodology is one of a Content determination methodology, a Document structuring methodology, an Aggregation methodology, a Choice of words methodology, and a Realization methodology. | Please help me write a proper abstract based on the patent claims. | The present disclosure relates to system(s) and method(s) to synthesize a business opportunity identified from a set of business opportunities corresponding to an organization for a cognitive decision-making process. 
In one embodiment, the method comprises receiving an opportunity instance knowledge object associated with a business opportunity identified from a set of business opportunities corresponding to an organization and generating an opportunity instance knowledge specification. The method further comprises generating a first specification based on the opportunity instance knowledge object and generating a second specification based on the opportunity instance knowledge object. The method furthermore comprises appending the first specification and the second specification to the opportunity instance knowledge specification, thereby synthesizing the business opportunity identified from the set of business opportunities corresponding to the organization for the cognitive decision-making process. |
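For readers who find the claim language dense, the following sketch restates the synthesis flow in code: a knowledge object carrying symptoms, an amount, a type, and a root cause is turned into a first specification (narrative, evidence, confidence) and a second specification (impact of inaction, urgency, act-by date), both appended to the instance's specification. The field names, the urgency rule, and the impact estimate are assumptions made for illustration only.

```python
# Hedged sketch of the claimed synthesis flow; the confidence heuristic,
# the 10% impact estimate, and the 30-day act-by date are assumptions.
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import List

@dataclass
class OpportunityInstance:
    symptoms: List[str]
    amount: float
    opp_type: str
    root_cause: str
    specification: dict = field(default_factory=dict)

def synthesize(opp: OpportunityInstance) -> OpportunityInstance:
    first_spec = {
        "narrative": f"{opp.opp_type} opportunity worth {opp.amount:,.0f}, "
                     f"driven by {opp.root_cause}.",
        "evidence": opp.symptoms,
        "confidence": min(1.0, 0.5 + 0.1 * len(opp.symptoms)),
    }
    impact = 0.1 * opp.amount  # assumed KPI-based estimate of the loss from inaction
    second_spec = {
        "impact_of_inaction": impact,
        "urgency": "high" if impact > 10_000 else "medium",
        "act_by_date": (date.today() + timedelta(days=30)).isoformat(),
    }
    # Append both specifications to the opportunity instance knowledge specification.
    opp.specification.update(first=first_spec, second=second_spec)
    return opp

example = OpportunityInstance(["declining repeat purchases"], 250_000.0,
                              "revenue", "frequent stock-outs")
print(synthesize(example).specification)
```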
1. A method for translating a boosting algorithm, comprising: receiving, at a hardware interface, a trained boosting model; identifying, using a processor communicatively coupled to the interface, a plurality of one-level, binary split-node variables associated with the trained boosting model, wherein each of the plurality of one-level, binary split-node variables comprises a variable name, a cutoff point, and a weight; identifying, using the processor, a group of one-level, binary split-node variables that have the same variable name and cutoff point within the plurality of one-level, binary split-node variables; combining, using the processor, the weights of each of the one-level, binary split-node variables in the group of one-level, binary split-node variables to calculate a combined weight for the one-level, binary split-node variables in the group of one-level, binary split-node variables, wherein combining the weights comprises summing the weights of the one-level, binary split-node variables in the group of one-level, binary split-node variables; creating, using the processor, a linear model based at least in part upon the variable name, the cutoff point, and the combined weight; creating a performance scorecard based on the combined weight for the group of one-level, binary split-node variables; and creating a model evaluation based on an error rate of the trained boosting model. 2. The method of claim 1, wherein the trained boosting model is one selected from the group comprising: Discrete Adaboost, Real Adaboost, Gentle Adaboost, and Logitboost. 3. The method of claim 1, wherein the linear model comprises conditional logic. 4. The method of claim 1, wherein the linear model is in an if-then-else format. 5. The method of claim 1, further comprising creating a model evaluation based on a Kolmogorov-Smirnov test of the trained boosting model. 6. A non-transitory computer readable storage medium comprising logic, the logic operable, when executed by a processor, to: receive a trained boosting model; identify a plurality of one-level, binary split-node variables associated with the trained boosting model, wherein each of the plurality of one-level, binary split-node variables comprises a variable name, a cutoff point, and a weight; identify a group of one-level, binary split-node variables that have the same variable name and cutoff point within the plurality of one-level, binary split-node variables; combine the weights of each of the one-level, binary split-node variables in the group of one-level, binary split-node variables to calculate a combined weight for the one-level, binary split-node variables in the group of one-level, binary split-node variables, wherein combining the weights comprises summing the weights of the one-level, binary split-node variables in the group of one-level, binary split-node variables; create a linear model based on the variable name, the cutoff point, and the combined weight; create a performance scorecard based on the combined weight for the one-level, binary split-node variables; and create a model evaluation based on the error rate of the trained boosting model. 7. The non-transitory media of claim 6, wherein the trained boosting model is one selected from the group comprising: Discrete Adaboost, Real Adaboost, Gentle Adaboost, and Logitboost. 8. The non-transitory media of claim 6, wherein the linear model comprises conditional logic. 9. The non-transitory media of claim 6, wherein the linear model is in an if-then-else format. 10. 
The non-transitory media of claim 6, further comprising creating a model evaluation based on a Kolmogorov-Smirnov test of the trained boosting model. 11. A system for translating a boosting algorithm, comprising: a hardware interface operable to: receive a trained boosting model; and a hardware processor operable to: identify a plurality of one-level, binary split-node variables associated with the trained boosting model, wherein each of the plurality of one-level, binary split-node variables comprises a variable name, a cutoff point, and a weight; identify a group of one-level, binary split-node variables that have the same variable name and cutoff point within the plurality of one-level, binary split-node variables; combine the weights of each of the one-level, binary split-node variables in the group of one-level, binary split-node variables to calculate a combined weight for the one-level, binary split-node variables in the group of one-level, binary split-node variables, wherein combining the weights comprises summing the weights of the one-level, binary split-node variables in the group of one-level, binary split-node variables; create a linear model based on the variable name, the cutoff point, and the combined weight; create a performance scorecard based on the combined weight for the one-level, binary split-node variables; and create a model evaluation based on the error rate of the trained boosting model. 12. The system of claim 11, wherein the trained boosting model is one selected from the group comprising: Discrete Adaboost, Real Adaboost, Gentle Adaboost, and Logitboost. 13. The system of claim 11, wherein the processor is further operable to create a model evaluation based on a Kolmogorov-Smirnov test of the trained boosting model. 14. The system of claim 11, wherein the linear model comprises conditional logic. | Please help me write a proper abstract based on the patent claims. | A system for translating a boosting algorithm includes an interface communicatively coupled to a processor. The interface is operable to receive a trained boosting model. The processor is operable to identify a plurality of split-node variables associated with the trained boosting model. Each of the plurality of split-node variables comprises a variable name, a cutoff point, and a weight. The processor may aggregate the split-node variables by variable name and cutoff point and then combine the weights of each of the plurality of split-node variables having the same variable name and cutoff point. The processor may then create a linear model based on the combined variables. The processor may further create a performance scorecard based on the combined weight, and create a model evaluation based on the error rate of the trained boosting model. |
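The core of these claims is a simple transformation: every one-level, binary split node (a decision stump) contributes a weight, stumps sharing the same variable name and cutoff point are grouped, their weights are summed, and the result can be emitted as an if-then-else style linear model. A hedged Python sketch of that grouping follows; the stump representation and the scoring convention (weight added when the variable exceeds the cutoff) are assumptions, not details taken from the claims.

```python
# Minimal sketch: group one-level, binary split-node variables by
# (variable name, cutoff point), sum their weights, and score records
# with the resulting linear model. Representation details are assumed.
from collections import defaultdict
from typing import List, Tuple

Stump = Tuple[str, float, float]  # (variable name, cutoff point, weight)

def to_linear_model(stumps: List[Stump]) -> List[Tuple[str, float, float]]:
    combined = defaultdict(float)
    for name, cutoff, weight in stumps:
        combined[(name, cutoff)] += weight          # sum weights per (name, cutoff)
    return [(n, c, w) for (n, c), w in sorted(combined.items())]

def score(model, record: dict) -> float:
    # Each term contributes its combined weight when the variable exceeds the cutoff,
    # i.e. the if-then-else form of the linear model.
    return sum(w for name, cutoff, w in model if record.get(name, 0.0) > cutoff)

model = to_linear_model([("income", 50_000, 0.4), ("income", 50_000, 0.3),
                         ("age", 30, 0.2)])
print(model)                                         # [('age', 30, 0.2), ('income', 50000, 0.7)]
print(score(model, {"income": 60_000, "age": 25}))   # 0.7
```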
1. A method to provide medical cannabis diagnosis, treatment, information, and data analysis, comprising: displaying questions in natural language and receiving at least one answer from a user at a user interface; converting the at least one answer to computer readable data by a processor; processing the computer readable data by artificial intelligence reasoning using data from at least one knowledge source containing medical cannabis knowledge to produce at least one output; converting the at least one output into natural language; and displaying the at least one output in natural language at a user interface. 2. The method of claim 1, wherein the at least one answer is in natural language and the processor is a natural language processor. 3. The method of claim 1, wherein the knowledge source comprises data from the Metathesaurus, the Semantic Network, and the Medical Marijuana Ontology. 4. The method of claim 3, wherein the Medical Marijuana Ontology database comprises medical marijuana data provided in computer readable files. 5. The method of claim 1, wherein the user interface is a computing article. 6. The method of claim 5, wherein the computing article is a desktop computer, a laptop computer, a smart phone, a tablet, or a kiosk computer. 7. The method of claim 1, wherein displayed questions are adapted to the user type. 8. The method of claim 7, wherein the user is selected from the group consisting of medical cannabis users, medical professionals, cannabis growers, cannabis producers, cannabis manufacturers, and cannabis salespersons. 9. The method of claim 1, further comprising the step of storing information received from the user. 10. The method of claim 9, further comprising the step of uploading information to the knowledge source. 11. The method of claim 10, further comprising the step of receiving data from external sources to process received computer readable data. 12. A system for medical cannabis diagnosis, treatment, information provision, and data analysis, the system comprising: at least one user interface module; a first software architect to serve as the basis for the user interface module; a second software architect to serve as the single point of data analysis; a third software architect to serve as the knowledge source; and at least one natural language processor to process user input into computer readable data; wherein the natural language processor processes or receives information from the at least one user interface module; wherein the second software architect receives information from the natural language processor and the third software architect; and wherein the third software architect processes information received from the second software architect and at least one outside source. 13. The system of claim 12, wherein the at least one outside source comprises the Metathesaurus, the Semantic Network, and the Medical Marijuana Ontology. 14. The system of claim 12, wherein the at least one outside source comprises a Dynamic Unified Resource Identifier. 15. 
The system of claim 14, wherein the Dynamic Unified Resource Identifier comprises: at least one existing knowledge database; at least one of an SQL server or a NoSQL server to receive and store data from the at least one existing knowledge database; at least one business intelligence tool; at least one medical cannabis business intelligence database; and a unified resource identifier; wherein the SQL and NoSQL servers supply data to the business intelligence tool; wherein the business intelligence tool supplies data to the medical cannabis business intelligence database; and wherein the medical cannabis business intelligence database and business intelligence tool analyze and supply data to the unified resource identifier. 16. The system of claim 12, further comprising at least one reusable module. 17. The system of claim 16, wherein the at least one reusable module is selected from the group consisting of Point-of-Sale module, Reservation module, and History module. 18. The system of claim 16, further comprising at least one additional database to supply data to the third software architect. 19. The system of claim 18, wherein the at least one additional database is selected from the group consisting of User Data database, Reporting database, and Analytics database. 20. The system of claim 18, further comprising at least one of an external service or an application programming interface. | Please help me write a proper abstract based on the patent claims. | This invention provides a computer method and system for interactive administration of medical cannabinoid treatments. The invention also provides information on cannabinoid product availability and geographical information. Medical professionals, cannabis growers, cannabis manufacturers, and other stakeholders can use this computer method and system to study trends, efficacy, and other information pertinent to the medical cannabis market. |
1-10. (canceled) 11. A data processor system for monitoring a complex system, the processor system configured to receive a plurality of pieces of state information and to merge at least the pieces of state information into a piece of failure information, at least one of the pieces of state information being associated with a confidence flag, and the piece of failure information also being associated with a confidence flag, wherein the merging is performed by implementing a fuzzy logic technique to produce the piece of failure information while taking account of the respective confidence flags of the pieces of state information and to produce the confidence flag associated with the failure information. 12. A data processor system according to claim 11, wherein exact rules are used for combining the pieces of state information, or fuzzy rules are used for combining the pieces of state information. 13. A data processor system according to claim 11, wherein the state information is subjected to fuzzification with an exact belonging function, a fuzzy belonging function, a belonging function in which one class is strengthened relative to the others, or a belonging function in which a magnitude of the state information is cross-tabulated with a confidence level. 14. A processor system according to claim 11, wherein inference is performed using Mamdani's method or Larsen's method. 15. A processor system according to claim 11, wherein the rules are aggregated with a maximum operator or a minimum operator. 16. A data processor system according to claim 11, wherein defuzzification is performed using a method of averaging maximums or a center of gravity method. 17. A data processor system according to claim 11, wherein the pieces of state information are received in state messages including an identity of a subsystem or of a component. 18. A data processor system according to claim 11, wherein the pieces of state information are received in state messages including a time stamp. 19. A data processor system according to claim 11, wherein each piece of state information is associated with a magnitude. 20. A data processing method for monitoring a complex system, the method comprising: receiving pieces of state information and merging at least one of the pieces of state information into a piece of failure information, at least one of the pieces of state information being associated with a confidence flag, and the piece of failure information also being associated with a confidence flag; wherein the merging is performed by implementing a fuzzy logic technique to produce a piece of failure information while taking account of the respective confidence flags of the pieces of state information and to produce the confidence flag associated with the piece of failure information. | Please help me write a proper abstract based on the patent claims. | A data processor system for monitoring a complex system, the processor system configured to receive a plurality of pieces of state information and to merge at least the pieces of state information into a piece of failure information, at least one of the pieces of state information being associated with a confidence flag, and the piece of failure information also being associated with a confidence flag. 
The system performs the merging by implementing a fuzzy logic technique to produce the piece of failure information while taking account of the respective confidence flag of the pieces of state information and to produce the confidence flag associated with the failure information. |
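To make the merging step more concrete, the sketch below combines pieces of state information, each carrying a magnitude and a confidence flag, into a failure degree and an associated confidence using a Mamdani-style min/max combination. The membership function, the thresholds, and the way the output confidence is chosen are assumptions for illustration; the claims do not fix these details.

```python
# Hedged sketch of confidence-aware fuzzy merging: clip each input's failure
# membership by its confidence flag (min), aggregate with max, and report the
# confidence of the dominant input as the confidence of the merged result.
def fuzzify_failure(magnitude: float) -> float:
    """Degree to which a state magnitude indicates failure (0..1); assumed ramp."""
    return max(0.0, min(1.0, (magnitude - 0.3) / 0.4))

def merge(states):
    """states: list of (magnitude, confidence) pairs -> (failure degree, confidence)."""
    clipped = [min(fuzzify_failure(m), c) for m, c in states]   # Mamdani-style min
    failure = max(clipped) if clipped else 0.0                  # aggregation with max
    confidence = max((c for m, c in states
                      if min(fuzzify_failure(m), c) == failure), default=0.0)
    return failure, confidence

print(merge([(0.8, 0.9), (0.5, 0.4)]))   # (0.9, 0.9)
```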
1. A method programmed in a non-transitory memory of a device comprising: a. automatically analyzing target information; b. automatically fact checking, with the device, the target information by comparing the target information with source information to generate a result, wherein comparing includes at least one of: i. searching for an exact match of the target information in the source information and returning the exact match search result of the exact match search if the exact match is found; ii. utilizing pattern matching for fact checking and returning the result of the pattern matching fact check if a pattern matching result confidence score is above a pattern matching result confidence threshold; and iii. utilizing a natural language search for fact checking and returning the result of the natural language fact check if a natural language result confidence score is above a natural language result confidence threshold, wherein fact checking includes: utilizing a plurality of fact checking implementations initially, wherein each fact checking implementation utilizes a different set of source information for fact checking, and comparing separate results of each fact checking implementation, and iteratively eliminating a fact checking implementation of the plurality of fact checking implementations with a lowest confidence score until a single fact checking implementation remains; and c. automatically presenting a status of the target information in real-time based on the result of the comparison of the target information with the source information. 2. The method of claim 1 wherein searching for the exact match begins searching the source information located on a fastest access time hardware device, then using the source information located on a second fastest access time hardware device, and then using the source information located on slower access time hardware devices until a device list has been exhausted; wherein utilizing pattern matching begins utilizing the source information located on the fastest access time hardware device, then using the source information located on the second fastest access time hardware device, and then using the source information located on the slower access time hardware devices until the device list has been exhausted; and wherein the natural language search begins searching the source information located on the fastest access time hardware device, then using the source information located on the second fastest access time hardware device, and then using the source information located on the slower access time hardware devices until the device list has been exhausted. 3. The method of claim 2 wherein most popular, most recent and most common information is stored in the fastest access time hardware device, and less popular, less recent and less common information is stored in the slower access hardware devices. 4. 
The method of claim 1 wherein searching for the exact match begins searching the source information located in a designated fact checking database, then goes to a broader set of source information, and repeatedly goes to broader sets of source information until a broadest source information set has been exhausted; wherein utilizing pattern matching begins utilizing the source information located in the designated fact checking database, then goes to the broader set of source information, and repeatedly goes to broader sets of source information until the broadest source information set has been exhausted; and wherein the natural language search begins searching the source information located in the designated fact checking database, then goes to the broader set of source information, and repeatedly goes to broader sets of source information until the broadest source information set has been exhausted. 5. The method of claim 1 wherein searching for the exact match begins searching the source information classified by a plurality of keywords found in the target information, then using the source information classified by a single keyword found in the target information, and then using the source information classified by keywords related to the keywords found in the target information; wherein utilizing pattern matching begins utilizing the source information classified by the plurality of keywords found in the target information, then using the source information classified by the single keyword found in the target information, and then using the source information classified by the keywords related to the keywords found in the target information; and wherein the natural language search begins searching the source information classified by the plurality of keywords found in the target information, then using the source information classified by the single keyword found in the target information, and then using the source information classified by the keywords related to the keywords found in the target information. 6. The method of claim 1 further comprising parsing the target information into segments and prioritizing the segments, so that a highest priority segment is fact checked first, wherein priority is based on the relatedness of the segment to a current topic being discussed and when the segment was presented, wherein if the segment is not fact checked before a timeout threshold, then the segment is removed from a fact check queue. 7. The method of claim 6 wherein a plurality of fact check queues are implemented, wherein a first fact check queue contains the segments to be fact checked in real-time, and the second fact check queue contains the segments to be fact checked in non-real-time. 8. 
The method of claim 1 further comprising parsing the target information into segments and prioritizing the segments, so that a highest priority segment is fact checked first, wherein priority is based on a presenter of the target information, wherein if the presenter of the target information has a validity rating below a first validity threshold, then the segments of the target information from the presenter are prioritized in a highest priority group, and if the presenter of the target information has the validity rating above the first threshold and below a second threshold, then the segments of the target information from the presenter are prioritized in a second highest priority group which has a lower priority than the highest priority group, and if the presenter of the target information has the validity rating above the second threshold, then the segments of the target information from the presenter are prioritized in a third highest priority group which has a lower priority than the second highest priority group. 9. The method of claim 1 further comprising parsing the target information into phrases, parsing the phrases into words, counting the number of words in each phrase, and comparing each phrase with the source information containing the same number of words as the number of words in the phrase being compared. 10. The method of claim 9 wherein the source information includes only source information in a classification matching a keyword detected in the target information. 11. The method of claim 1 wherein utilizing the plurality of fact checking implementations initially occurs in the first minute of a television program. 12. The method of claim 1 wherein the single fact checking implementation determined by eliminating other fact checking implementations is utilized for a specific type of content and is reused for future content that is the same type of content as the specific type of content. 13. The method of claim 1 wherein utilizing the plurality of fact checking implementations initially occurs in parallel while a previously determined best fact checking implementation is utilized for fact checking the target information. 14. The method of claim 13 wherein the previously determined best fact checking implementation is determined in a prior program. 15. The method of claim 1 wherein the single fact checking implementation is stored for each channel and is used depending on the currently selected channel. 16. The method of claim 1 wherein the single fact checking implementation is determined periodically. 17. The method of claim 1 wherein each fact checking implementation of the plurality of fact checking implementations utilizes a different source determination method, a different source weighting scheme, a different monitoring implementation and a different processing implementation. 18. 
The method of claim 1 wherein searching for the exact match begins searching the source information controlled by a media company, then using crowdsourced data as the source information, and then using world wide web data for fact checking; wherein utilizing pattern matching begins utilizing the source information controlled by the media company, then using the crowdsourced data as the source information, and then using the world wide web data for fact checking; and wherein the natural language search begins searching the source information controlled by the media company, then using the crowdsourced data as the source information, and then using the world wide web data for fact checking. 19. A method programmed in a non-transitory memory of a device comprising: a. automatically analyzing target information; b. automatically fact checking, with the device, the target information by comparing the target information with source information to generate a result, wherein comparing includes at least one of: i. implementing a first fact check implementation for fact checking the target information using the source information and returning a first fact check result of the first fact check implementation if the first fact check result of the first fact check is above a first confidence threshold; ii. implementing a second fact check implementation for fact checking the target information using the source information and returning a second fact check result of the second fact check implementation if the second fact check result of the second fact check is above a second confidence threshold; iii. implementing a third fact check implementation for fact checking the target information using the source information and returning a third fact check result of the third fact check implementation if the third fact check result of the third fact check is above a third confidence threshold, wherein fact checking includes: utilizing a plurality of fact checking implementations initially, wherein each fact checking implementation utilizes a different set of source information for fact checking, and comparing separate results of each fact checking implementation, and iteratively eliminating a fact checking implementation of the plurality of fact checking implementations with a lowest confidence score until a single fact checking implementation remains; and c. automatically presenting a status of the target information in real-time based on the comparison of the target information with the source information. 20. A device comprising: a. a non-transitory memory for storing an application for automatically performing the following steps: i. analyzing target information; ii. 
fact checking the target information by comparing the target information with source information to generate a result, wherein comparing includes: (1) searching for an exact match of the target information in the source information and returning the exact match search result of the exact match search if the exact match is found; (2) otherwise, if the exact match is not found, then utilizing pattern matching for fact checking and returning the result of the pattern matching fact check if a pattern matching result confidence score is above a pattern matching result confidence threshold; (3) otherwise, if the pattern matching result confidence score is not above the pattern matching result confidence threshold, then utilizing a natural language search for fact checking and returning the result of the natural language fact check if a natural language result confidence score is above a natural language result confidence threshold; and (4) otherwise, returning a negative result value as the result, wherein fact checking includes: utilizing a plurality of fact checking implementations initially, wherein each fact checking implementation utilizes a different set of source information for fact checking, and comparing separate results of each fact checking implementation, and iteratively eliminating a fact checking implementation of the plurality of fact checking implementations with a lowest confidence score until a single fact checking implementation remains; and iii. presenting a status of the target information in real-time based on the result of the comparison of the target information with the source information; and b. a processor for processing the application. | Please help me write a proper abstract based on the patent claims. | An efficient fact checking system analyzes and determines the factual accuracy of information and/or characterizes the information by comparing the information with source information. The efficient fact checking system automatically monitors information, processes the information, fact checks the information efficiently and/or provides a status of the information. |
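The tiered comparison in these claims (exact match first, then pattern matching, then a natural-language search, each gated by a confidence threshold, with a negative result as the fallback) can be sketched compactly. In the illustration below the pattern tier uses a word-sequence regular expression and the natural-language tier uses a string-similarity ratio as a stand-in confidence score; both heuristics and the threshold are assumptions, not the patented logic.

```python
# Minimal sketch of tiered fact checking: exact match -> pattern match ->
# similarity-based fallback -> negative result. Heuristics are assumptions.
import difflib
import re

NL_THRESHOLD = 0.6  # assumed confidence threshold for the natural-language tier

def fact_check(target: str, sources: list) -> str:
    t = target.lower().strip()
    if any(t == s.lower().strip() for s in sources):            # 1. exact match
        return "confirmed"
    pattern = re.compile(r"\b" + r"\W+".join(map(re.escape, t.split())) + r"\b",
                         re.IGNORECASE)
    if any(pattern.search(s) for s in sources):                 # 2. pattern matching
        return "confirmed (pattern)"
    best = max((difflib.SequenceMatcher(None, t, s.lower()).ratio() for s in sources),
               default=0.0)                                     # 3. similarity stand-in
    if best >= NL_THRESHOLD:
        return f"likely (similarity {best:.2f})"
    return "unverified"                                         # 4. negative result

sources = ["The Eiffel Tower is 330 metres tall."]
print(fact_check("the Eiffel Tower is 330 metres tall", sources))  # confirmed (pattern)
print(fact_check("The moon is made of cheese", sources))           # unverified
```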
1. A computer-implemented method of identifying a case with a missing decision from a set of decision rules in violation of a decision requirement, wherein the decision rules determine whether a decision is made for a case, and wherein the decision requirement determines the decisions required for a case, the method comprising: receiving the set of decision rules; receiving the decision requirement; obtaining a set of decisions made by the decision rules; building a decision detection constraint graph that represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not for the case by a decision rule in the set of decision rules; building a decision requirement constraint graph from the decision requirement, that represents for each case used by the set of decision rules the decisions required for that case; and for each case used by the set of decision rules, using the decision requirement constraint graph and the decision detection constraint graph for the case to identify if the case is a case with a missing decision. 2. A computer-implemented method as claimed in claim 1, wherein the set of decisions is generated from the decision rules. 3. A computer-implemented method as claimed in claim 1, wherein a set of cases is generated from the decision rules. 4. A computer-implemented method as claimed in claim 3, further comprising generating a new set of cases used by the set of decision rules, and using the new set of cases to determine further cases with missing decisions. 5. A computer-implemented method as claimed in claim 4, wherein the method is repeated for each possible set of cases used by the set of decision rules. 6. A computer-implemented method as claimed in claim 1, further comprising generating a ghost decision rule that determines that a decision is made for the identified case with a missing decision. 7. A computer-implemented method as claimed in claim 6, wherein the method is repeated with the ghost decision rule added to the set of decision rules. 8. A computer-implemented method as claimed in claim 7, wherein the method is repeated until no further cases with missing decisions are identified. 9. A computer system for identifying a case with a missing decision from a set of decision rules in violation of a decision requirement, wherein the decision rules determine whether a decision is made for a case, wherein the computer system comprises memory and a processor system and is arranged to: receive the set of decision rules and store it in the memory; receive the decision requirement and store it in the memory; obtain a set of decisions made by the decision rules and store it in the memory; use the processor system to build a decision detection constraint graph that represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not by a decision rule in the set of decision rules; use the processor system to build a decision requirement constraint graph from the decision requirement that represents, for each case used by the set of decision rules, the decisions required; and for each case used by the set of decision rules, using the decision requirement constraint graph and the decision detection constraint graph for the case to identify if the case is a case with a missing decision. 10. A computer system as claimed in claim 9, arranged to generate the set of decisions from the decision rules. 11. 
A computer system as claimed in claim 9, arranged to generate a set of cases from the decision rules. 12. A computer system as claimed in claim 11, further arranged to generate a new set of cases used by the set of decision rules, and use the new set of cases to determine further cases with missing decisions. 13. A computer system as claimed in claim 12, arranged to determine further cases with missing decisions for each possible set of cases used by the set of decision rules. 14. A computer system as claimed in claim 9, further arranged to generate a ghost decision rule that determines that a decision is made for the identified case with a missing decision. 15. A computer system as claimed in claim 14, further arranged to add the ghost decision rule to the set of decision rules, and use the new set of decision rules to determine further cases with missing decisions. 16. A computer system as claimed in claim 15, arranged to generate new ghost decision rules until no further cases with missing decisions are identified. 17. A computer program product for identifying a case with a missing decision from a set of decision rules in violation of a decision requirement, wherein the decision rules determine whether a decision is made for a case, and wherein the decision requirement determines the decisions required for a case, the computer program product comprising a computer-readable storage medium having computer-readable program code embodied therewith, the computer-readable program code configured to perform the steps of: receiving the set of decision rules; receiving the decision requirement; obtaining a set of decisions made by the decision rules; building a decision detection constraint graph that represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not by a decision rule in the set of decision rules; building a decision requirement constraint graph from the decision requirement, that represents, for each case used by the set of decision rules, the decisions required; and for each case used by the set of decision rules, using the decision requirement constraint graph and the decision detection constraint graph for the case to identify if the case is a case with a missing decision. | Please help me write a proper abstract based on the patent claims. | Methods for identifying a case with a missing decision from a set of decision rules in violation of a decision requirement are provided. The set of decision rules and decision requirement are received, and a set of decisions made by the decision rules is obtained. A decision detection constraint graph is built, which represents, for each case used by the set of decision rules, whether each decision in the set of decisions is made or not by a decision rule in the set of decision rules. A decision requirement constraint graph is built from the decision requirement, which represents, for each case used by the set of decision rules, the decisions required. For each case used by the set of decision rules, the decision requirement constraint graph and the decision detection constraint graph for the case are used to identify if the case is a case with a missing decision. |
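A rough intuition for these claims: for every case, compare the set of decisions the rules actually make with the set the requirement demands; any shortfall marks a case with a missing decision, and a generated "ghost" rule can close the gap before the check is repeated. The sketch below replaces the constraint graphs with plain set comparisons, so it only illustrates the idea; the rule and requirement encodings are assumptions.

```python
# Hedged sketch: detect cases with missing decisions and add ghost rules
# until the requirement is satisfied. Constraint graphs are simplified to sets.
from typing import Callable, Dict, List, Set

Rule = Callable[[dict], Set[str]]   # case -> decisions the rule makes

def missing_decisions(case: dict, rules: List[Rule],
                      requirement: Callable[[dict], Set[str]]) -> Set[str]:
    made = set().union(*(r(case) for r in rules)) if rules else set()
    return requirement(case) - made

def complete(cases: List[dict], rules: List[Rule], requirement) -> List[Rule]:
    rules = list(rules)
    for case in cases:
        gap = missing_decisions(case, rules, requirement)
        if gap:
            snapshot, decided = dict(case), set(gap)
            # Ghost rule: make exactly the missing decisions for this case.
            rules.append(lambda c, s=snapshot, d=decided: d if c == s else set())
    return rules

rules = [lambda c: {"approve"} if c["score"] > 700 else set()]
requirement = lambda c: {"approve", "notify"} if c["score"] > 700 else {"reject"}
cases = [{"score": 750}, {"score": 600}]

print([missing_decisions(c, rules, requirement) for c in cases])    # [{'notify'}, {'reject'}]
patched = complete(cases, rules, requirement)
print([missing_decisions(c, patched, requirement) for c in cases])  # [set(), set()]
```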
1. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit. 2. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally. 3. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally, and grouping of the connection array is performed. 4. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally, and grouping of the connection array is performed and the semiconductor device has a function of reconfiguring a configuration including the number of groupings and the magnitude. 5. A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally, and the semiconductor device has wired connections of short-distance, intermediate-distance, or long-distance connections between a plurality of connection arrays. 6. 
A semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other, the semiconductor device comprising: the synapse bonds that perform non-contact communication using magnetic coupling; and the neuron sections that include a wired connection and a logical circuit, wherein the semiconductor device has a connection array where the synapse bonds and the neuron sections are spread three-dimensionally, and the semiconductor device has a function of reconfiguring at least some of the wired connections of short-distance, intermediate-distance, or long-distance connections between a plurality of connection arrays. | Please help me write a proper abstract based on the patent claims. | In order to provide a 1H-magnitude neuro-semiconductor device, a semiconductor device that constitutes a neural network in which a plurality of sets each including a plurality of synapse bonds and a neuron section are connected with each other. The semiconductor device includes the synapse bonds that perform non-contact communications using magnetic coupling, and the neuron sections including a wired connection and a logical circuit. The semiconductor device has a connection array in which the synapse bonds and the neuron sections are arranged three-dimensionally. The semiconductor device has a function for enabling reconfiguration of at least some of groupings of the connection array or wired short-distance, intermediate-distance, or long-distance connections. |
1. A method comprising: receiving one or more inputs for training a neural network; selecting a parallelizing technique from a plurality of parallelizing techniques; selecting a forward-propagation computation technique from a plurality of computation techniques; directing the neural network to process the one or more inputs using the selected parallelizing technique and the selected computation technique; and receiving from the neural network, one or more outputs resulting from the neural network processing the one or more inputs. 2. A method as recited in claim 1, wherein the plurality of parallelizing techniques include: parallel processing; and processing in parallel. 3. A method as recited in claim 1, wherein the plurality of computation techniques include: matrix multiplication; and stencil-based computation. 4. A method as recited in claim 1, wherein selecting a parallelizing technique from the plurality of parallelizing techniques is based, at least in part, on properties associated with the neural network. 5. A method as recited in claim 4, wherein the properties associated with the neural network comprise one or more of: a number of layers within the neural network; a number of feature maps associated with individual layers of the neural network; a data sparsity associated with individual layers of the neural network; a size associated with a convolution filter used to process the inputs; or a stride size. 6. A method as recited in claim 1, wherein selecting a computation technique from the plurality of computation techniques is based, at least in part, on properties associated with the neural network. 7. A method as recited in claim 6, wherein the properties associated with the neural network comprise one or more of: a size of the inputs; a number of inputs; a number of feature maps of the inputs; a stride size; or a size associated with a convolution filter that is used to process the inputs. 8. A method as recited in claim 1, wherein: the neural network includes at least a first layer and a second layer; selecting the parallelizing technique comprises: selecting a first parallelizing technique from the plurality of parallelizing techniques to use for the first layer; and selecting a second parallelizing technique from the plurality of parallelizing techniques to use for the second layer; and selecting the computation technique comprises: selecting a first computation technique from the plurality of computation techniques to use for the first layer; and selecting a second computation technique from the plurality of computation techniques to use for the second layer. 9. A method as recited in claim 1, further comprising: determining, based at least in part on the one or more inputs and the one or more outputs, one or more output activation errors; selecting a backward-propagation computation technique from a plurality of backward-propagation computation techniques; and processing the neural network based, at least in part, on the one or more output activation errors, using the selected backward-propagation technique. 10. A method as recited in claim 9, wherein the plurality of backward-propagation computation techniques include: matrix multiplication; and sparse-dense matrix computation. 11. A method as recited in claim 9, wherein processing the neural network based, at least in part, on the one or more output activation errors, includes updating weights associated with one or more layers of the neural network. 12. 
A method as recited in claim 9, further comprising: selecting a backward-propagation parallelization technique from a plurality of backward-propagation parallelization techniques, wherein processing the neural network based, at least in part, on the one or more output activation errors, using the selected backward-propagation technique, further includes processing the neural network based on the selected backward-propagation parallelization technique. 13. A device comprising: a processor; and a computer-readable medium communicatively coupled to the processor; a parallelizing decision module stored on the computer-readable medium and executable by the processor to select, based at least in part on properties of a neural network, a parallelizing technique from a plurality of parallelizing techniques; a forward propagation decision module stored on the computer-readable medium and executable by the processor to select, based at least in part on properties of the neural network, a computation technique from a plurality of computation techniques; and a forward-propagation processing module configured to: receive one or more inputs for training the neural network; cause the neural network to process, based at least in part on the selected parallelizing technique and the selected computation technique, the one or more inputs; and receive, from the neural network, one or more outputs resulting from the neural network processing the one or more inputs. 14. A device as recited in claim 13, wherein: the plurality of parallelizing techniques include: parallel processing; and processing in parallel; and the plurality of computation techniques include: matrix multiplication; and stencil-based computation. 15. A device as recited in claim 13, further comprising a backward-propagation decision module stored on the computer-readable media and executable by the processor to: determine, based at least in part on the one or more inputs and the one or more outputs, one or more output activation errors for the neural network; select, based at least in part on properties of the neural network, a backward-propagation technique from a plurality of backward-propagation techniques and a parallelizing technique from a plurality of parallelizing techniques; and process the neural network using the selected backward-propagation technique and the selected parallelizing technique to update weights associated with one or more layers of the neural network. 16. One or more computer-readable media storing computer-executable instructions that, when executed on one or more processors, configure a computer to train a neural network by performing acts comprising: causing the neural network to process one or more inputs; receiving from the neural network, one or more outputs resulting from the neural network processing the one or more inputs; determining, based at least in part on the one or more inputs and the one or more outputs, one or more output activation errors for the neural network; selecting, based at least in part on one or more properties associated with the neural network, a backward-propagation technique from a plurality of backward-propagation techniques; using the selected backward-propagation technique and the one or more output activation errors to calculate error gradients and weight deltas for the neural network; and updating weights associated with one or more layers of the neural network based, at least in part, on the error gradients or the weight deltas. 17. 
One or more computer-readable media as recited in claim 16, wherein: the selected backward-propagation technique is a sparse-dense matrix multiplication technique; and using the selected backward-propagation technique and the one or more output activation errors to generate input activation errors and weight deltas for the neural network includes: generating one or more sparse matrices using the one or more output activation errors; representing an individual sparse matrix of the one or more sparse matrices using a row index array, a column index array, and a value array; and calculating the error gradients and the weight deltas based, at least in part, on the one or more sparse matrices. 18. One or more computer-readable media as recited in claim 16, wherein the one or more properties associated with the neural network comprise at least one of: a number of layers within the neural network; a number of feature maps associated with individual layers of the neural network; a data sparsity associated with individual layers of the neural network; a size associated with a kernel; and a stride size. 19. One or more computer-readable media as recited in claim 18, wherein the data sparsity is represented as a percentage of values within the individual layers of the neural network that include a zero value. 20. One or more computer-readable media as recited in claim 19, wherein selecting the backward-propagation technique includes selecting a sparse-dense matrix multiplication technique based, at least in part, on the data sparsity being greater than a threshold percentage of values that include a zero value. | Please help me write a proper abstract based on the patent claims. | A neural network training tool selects from a plurality of parallelizing techniques and selects from a plurality of forward-propagation computation techniques. The neural network training tool performs a forward-propagation phase to train a neural network using the selected parallelizing technique and the selected forward-propagation computation technique based on one or more inputs. Additionally, the neural network training tool selects from a plurality of computation techniques and from a plurality of parallelizing techniques for a backward-propagation phase. The neural network training tool performs a backward-propagation phase of training the neural network using the selected backward-propagation parallelizing technique and the selected backward-propagation computation technique to generate error gradients and weight deltas and to update weights associated with one or more layers of the neural network. |
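The selection step that runs through these claims (choose a forward-propagation computation technique and a backward-propagation technique from properties such as kernel size, stride, and data sparsity) is easy to sketch. The thresholds and property names below are assumptions; the claims only say that the choice is based on such properties, for example picking sparse-dense matrix multiplication when the sparsity exceeds a threshold.

```python
# Illustrative technique selection from layer properties; thresholds are assumed.
from dataclasses import dataclass

@dataclass
class LayerProps:
    num_feature_maps: int
    kernel_size: int
    stride: int
    sparsity: float   # fraction of zero values in the layer's activations/errors

def pick_forward(p: LayerProps) -> str:
    # In this sketch, small kernels with larger strides favour stencil-style
    # computation; otherwise fall back to matrix multiplication.
    return "stencil" if p.kernel_size <= 3 and p.stride > 1 else "matrix_multiplication"

def pick_backward(p: LayerProps, sparsity_threshold: float = 0.7) -> str:
    # Sparse-dense matrix multiplication when enough gradient values are zero.
    return ("sparse_dense_matmul" if p.sparsity > sparsity_threshold
            else "matrix_multiplication")

layer = LayerProps(num_feature_maps=64, kernel_size=3, stride=2, sparsity=0.85)
print(pick_forward(layer), pick_backward(layer))   # stencil sparse_dense_matmul
```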
1. A system for modeling user satisfaction, the system comprising: at least one processor; and memory coupled to the at least one processor, the memory comprising computer executable instructions that, when executed by the at least one processor, perform a method comprising: receiving a first viewing session data; determining at least a first content item in the first viewing session data, wherein the at least a first content item has a first content type; determining a first aggregated display time for the first content type; determining a first content density for the first content type; generating a first viewing time based on the first aggregated display time for the first content type and the first content density for the first content type; determining a satisfaction value for the first content type; and updating a satisfaction model based on the satisfaction value. 2. The system of claim 1, wherein the first viewing session data comprises one or more viewports, the one or more viewports comprising at least a portion of one or more content items. 3. The system of claim 1, wherein determining the first aggregated display time comprises aggregating one or more content items in the viewing session data and attributing a duration to each of the aggregated one or more content items. 4. The system of claim 3, wherein an attributed duration of the one or more content items determines a display time for one or more content items, wherein the display time is based on the visible area of the one or more content items within the one or more viewports. 5. The system of claim 4, wherein the visible area excludes occluded areas within the viewing session data. 6. The system of claim 1, wherein determining a first content density comprises determining at least one of: the number of characters within the first content item and the size in pixels of the first content item. 7. The system of claim 1, wherein the first viewing time is used to update an attention value. 8. The system of claim 1, wherein the satisfaction model is one of: a rule-based model, a machine-learned regressor, and a machine-learned classifier. 9. The system of claim 1, further comprising: receiving a second viewing session data; determining at least a second content item in the second viewing session data, wherein the at least a second content item has the first content type; determining a second aggregated display time for the first content type; determining a second content density for the first content type; generating a second viewing time based on the second aggregated display time for the first content type and the second content density for the first content type; comparing the first viewing time to the second viewing time; and determining a fatigue value based at least on the comparison. 10. The system of claim 9, wherein the fatigue value is further based at least on determining whether the at least a first content item is different from the at least a second content item. 11. The system of claim 10, wherein the fatigue model is updated based on the fatigue value. 12. The system of claim 10, further comprising: optimizing a presentation of the first content type based upon at least one of: the satisfaction value and the fatigue value. 13. The system of claim 10, wherein optimizing a presentation of the first content type comprises prioritizing the first content type by at least one of: content type selection, content type triggering, and content type ranking. 14. 
A system for providing recommendations using viewable content, the system comprising: a processor; a recommendation component; and a memory coupled to the processor, the memory comprising computer executable instructions that, when executed by the processor, performs a method comprising: receiving viewing session data; creating an user attention model from the received viewing session data; using the attention model, creating a satisfaction model for the received viewing session data; selecting a content selection related to the received viewing session data; using the satisfaction model, prioritizing as prioritized content a portion of content from at least one of the viewing session data and the content selection related to the viewing session data; and integrating the prioritized content with the recommendation component. 15. The system of claim 14, further comprising: using the satisfaction model, creating a fatigue model for the received viewing session data. 16. The system of claim 14, wherein selecting a content selection comprises: determining a criteria in the received viewing session data, wherein the criteria is at least one of: a content type, a time, a location, a user, and a user group; and selecting content with the criteria. 17. The system of claim 14, wherein the prioritized content is prioritized based on at least one of: a content of the content selection and a ranking of the content selection. 18. The system of claim 14, wherein the recommendation component provides recommendations based at least upon the prioritized content. 19. The system of claim 14, wherein the recommendation component updates a profile based upon at least one of the attention model, the satisfaction model, and the prioritized content. 20. A method for providing recommendations using viewable content, the method comprising: receiving a first viewing session data; determining at least a first content in the first viewing session data, wherein the first content has a first content type; determining a first aggregated display time for the first content type; generating a first viewing time based on the first aggregated display time for the first content type; determining a satisfaction value for the first content type; receiving a second viewing session data; determining at least a second content in the second viewing session data, wherein the second content has the first content type; determining a second aggregated display time for the first content type; generating a second viewing time based on the second aggregated display time for the first content type; comparing the first viewing time and the second viewing time; determining a fatigue value based at least on the comparison; and providing a recommendation based at least in part on at least one of the satisfaction value and the fatigue value. | Please help me write a proper abstract based on the patent claims. | Examples of the present disclosure describe systems and methods for improving the recommendations provided to a user by a recommendation system using viewed content as implicit feedback. In some aspects, attention models are created/updated to infer the user attention of a user that has viewed or is viewing content on a computing device. The attention model may be used to convert inferences of user attention into inferences of user satisfaction with the viewed content. The inferences of user satisfaction may be used to generate inferences of fatigue with the viewed content. 
The inferences of user satisfaction and inferences of user fatigue may then be used as implicit feedback to improve the content selection, content triggering and/or content presentation by the recommendation system. Other examples are also described. |
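As a rough illustration of the viewing-session bookkeeping described in this record—aggregating display time per content type, normalising it by a content-density proxy, and deriving a fatigue signal from a drop between sessions—here is a minimal Python sketch. The `ViewportEvent` fields, the seconds-per-100-characters normalisation, and the subtraction-based fatigue value are assumptions chosen for brevity, not the claimed models.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class ViewportEvent:
    """One content item as it appeared in a viewport (all fields are illustrative)."""
    content_type: str
    visible_seconds: float   # duration attributed to the visible (non-occluded) area
    char_count: int          # simple content-density proxy (character count)


def viewing_time_by_type(events):
    """Aggregate display time per content type and normalise it by content density."""
    time_by_type = defaultdict(float)
    chars_by_type = defaultdict(int)
    for e in events:
        time_by_type[e.content_type] += e.visible_seconds
        chars_by_type[e.content_type] += e.char_count
    # Viewing time here = aggregated display time scaled by density (seconds per 100 characters).
    return {t: time_by_type[t] / max(chars_by_type[t], 1) * 100 for t in time_by_type}


def fatigue_value(first_session, second_session, content_type):
    """A drop in density-normalised viewing time for the same content type hints at fatigue."""
    v1 = viewing_time_by_type(first_session).get(content_type, 0.0)
    v2 = viewing_time_by_type(second_session).get(content_type, 0.0)
    return max(v1 - v2, 0.0)


session_1 = [ViewportEvent("weather_card", 12.0, 300), ViewportEvent("news_card", 8.0, 500)]
session_2 = [ViewportEvent("weather_card", 4.0, 300)]
print(fatigue_value(session_1, session_2, "weather_card"))
```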
1. A method of operating a spiking neural network having a plurality of neurons coupled together with at least one synapse, comprising: monitoring a timing of a presynaptic spike; monitoring a timing of a postsynaptic spike; determining a time difference between the postsynaptic spike and the presynaptic spike; and calculating a stochastic update of a delay for the at least one synapse based at least in part on the time difference. 2. The method of claim 1, in which the stochastic update is based at least in part on an evaluation of a probability function. 3. The method of claim 2, in which the probability function is based at least in part on an increase in the delay. 4. The method of claim 2, in which the probability function is based at least in part on a decrease in the delay. 5. The method of claim 2, in which at least one region of a probability distribution is parameterized. 6. The method of claim 2, in which the probability function is piecewise linear. 7. The method of claim 1, in which the update is based at least in part on a look up table. 8. The method of claim 1, in which the update is based at least in part on a calculation. 9. An apparatus for operating a spiking neural network having a plurality of neurons coupled together with at least one synapse, comprising: a memory; and at least one processor coupled to the memory, the at least one processor being configured: to monitor a timing of a presynaptic spike; to monitor a timing of a postsynaptic spike; to determine a time difference between the postsynaptic spike and the presynaptic spike; and to calculate a stochastic update of a delay for the at least one synapse based at least in part on the time difference. 10. The apparatus of claim 9, in which the at least one processor is configured to calculate the stochastic update based at least in part on an evaluation of a probability function. 11. The apparatus of claim 10, in which the probability function is based at least in part on an increase in the delay. 12. The apparatus of claim 10, in which the probability function is based at least in part on a decrease in the delay. 13. The apparatus of claim 10, in which at least one region of a probability distribution is parameterized. 14. The apparatus of claim 10, in which the probability function is piecewise linear. 15. The apparatus of claim 9, in which the at least one processor is configured to calculate the stochastic update based at least in part on a look up table. 16. The apparatus of claim 9, in which the at least one processor is configured to calculate the stochastic update based at least in part on a calculation. 17. An apparatus for operating a spiking neural network having a plurality of neurons coupled together with at least one synapse, comprising: means for monitoring a timing of a presynaptic spike; means for monitoring a timing of a postsynaptic spike; means for determining a time difference between the postsynaptic spike and the presynaptic spike; and means for calculating a stochastic update of a delay for the at least one synapse based at least in part on the time difference. 18. The apparatus of claim 17, in which the means for calculating the stochastic update calculates the stochastic update based at least in part on an evaluation of a probability function. 19. The apparatus of claim 18, in which the probability function is based at least in part on an increase in the delay. 20. The apparatus of claim 18, in which the probability function is based at least in part on a decrease in the delay. 21. 
The apparatus of claim 18, in which at least one region of a probability distribution is parameterized. 22. The apparatus of claim 18, in which the probability function is piecewise linear. 23. The apparatus of claim 17, in which the means for calculating the stochastic update calculates the stochastic update based at least in part on a look up table. 24. The apparatus of claim 17, in which the means for calculating the stochastic update calculates the stochastic update based at least in part on a calculation. 25. A computer program product for operating a spiking neural network having a plurality of neurons coupled together with at least one synapse, comprising: a non-transitory computer readable medium having encoded thereon program code, the program code comprising: program code to monitor a timing of a presynaptic spike; program code to monitor a timing of a postsynaptic spike; program code to determine a time difference between the postsynaptic spike and the presynaptic spike; and program code to calculate a stochastic update of a delay for the at least one synapse based at least in part on the time difference. | Please help me write a proper abstract based on the patent claims. | A method of operating a spiking neural network having neurons coupled together with a synapse includes monitoring a timing of a presynaptic spike and monitoring a timing of a postsynaptic spike. The method also includes determining a time difference between the postsynaptic spike and the presynaptic spike. The method further includes calculating a stochastic update of a delay for the synapse based on the time difference between the postsynaptic spike and the presynaptic spike.
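The delay-plasticity claims above reduce to: measure dt = t_post − t_pre, evaluate a (piecewise-linear) probability function of dt, and apply a delay increase or decrease only when a random draw passes. A minimal Python sketch of that loop follows; the window width, maximum probability, step size, and the sign convention (positive dt nudges the delay up) are assumptions, since the claims leave those choices open.

```python
import random

random.seed(0)  # reproducible draws for the demo below


def piecewise_linear_probability(dt, window=20.0, p_max=0.5):
    """Probability of applying a delay update as a piecewise-linear function of dt = t_post - t_pre.

    The exact shape (linear decay inside a +/- window, zero outside) is assumed for illustration;
    the claims only require that the probability function be piecewise linear.
    """
    if abs(dt) >= window:
        return 0.0
    return p_max * (1.0 - abs(dt) / window)


def stochastic_delay_update(delay, t_pre, t_post, step=1.0, rng=random.random):
    """Stochastically increase or decrease a synaptic delay based on spike timing.

    If the postsynaptic spike follows the presynaptic one (dt > 0) the delay is nudged up,
    otherwise it is nudged down; the update itself happens only with probability p(dt).
    """
    dt = t_post - t_pre
    if rng() < piecewise_linear_probability(dt):
        delay += step if dt > 0 else -step
    return max(delay, 0.0)


delay_ms = 3.0
for t_pre, t_post in [(10.0, 14.0), (30.0, 28.0), (50.0, 90.0)]:
    delay_ms = stochastic_delay_update(delay_ms, t_pre, t_post)
    print(round(delay_ms, 2))
```

A lookup table over discretised dt values (claims 7 and 15) could replace the function call without changing the surrounding loop.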
1. A computer-implemented method for parameter data sharing, the computer-implemented method comprising: receiving, by a first machine, a first set of global parameters from a global parameter server, wherein the first set of global parameters comprises data that weights one or more operands used in an algorithm that models an entity type; executing, by multiple learner processors in the first machine, the algorithm using the first set of global parameters and a first mini-batch of data known to describe the entity type; generating, by the first machine, a first consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the first mini-batch of data known to describe the entity type; transmitting, from the first machine, the first consolidated set of gradients to the global parameter server; and receiving, by the first machine, a second set of global parameters from the global parameter server, wherein the second set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients. 2. The computer-implemented method of claim 1, further comprising: receiving, by a second machine, the first set of global parameters from the global parameter server; executing, by multiple learner processors in the second machine, the algorithm using the first set of global parameters and a second mini-batch of data known to describe the entity type; generating, by the second machine, a second consolidated set of gradients that describes a direction for the first set of global parameters in order to improve the accuracy of the algorithm in modeling the entity type when using the first set of global parameters; transmitting, from the second machine, the second consolidated set of gradients to the global parameter server; and receiving, by the first machine and the second machine, a third set of global parameters from the global parameter server, wherein the third set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients and the second consolidated set of gradients. 3. The computer-implemented method of claim 2, further comprising: testing, by a third machine, a set of unknown data using the third set of global parameters in order to determine whether the set of unknown data matches the entity type. 4. The computer-implemented method of claim 1, further comprising: generating each of the first consolidated set of gradients by a different learner processor in the first machine; writing, by each of the multiple learner processors in the first machine, each gradient generated by each of the multiple learner processors in the first machine; and consolidating, by the first machine, gradients generated by all of the multiple learner processors in the first machine in order to create the first consolidated set of gradients. 5. The computer-implemented method of claim 1, further comprising: reading, by all of the multiple learner processors in the first machine, the first set of global parameters and the second set of global parameters from a shared memory in the first machine. 6. 
The computer-implemented method of claim 1, further comprising: storing, by one or more processors, global parameters currently in use by the first machine in a first memory in the first machine; and storing, by one or more processors, global parameters being downloaded from the global parameter server for future use, in a second memory in the first machine. 7. The computer-implemented method of claim 1, wherein the first set of global parameters further weight results from one or more particular operators used in the algorithm that models the entity type. 8. A computer program product for parameter data sharing, the computer program product comprising a computer readable storage device having program instructions embodied therewith, the program instructions readable and executable by a computer to perform a method comprising: receiving, by a first machine, a first set of global parameters from a global parameter server, wherein the first set of global parameters comprises data that weights one or more operands used in an algorithm that models an entity type; executing, by multiple learner processors in the first machine, the algorithm using the first set of global parameters and a first mini-batch of data known to describe the entity type; generating, by the first machine, a first consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the first mini-batch of data known to describe the entity type; transmitting, from the first machine, the first consolidated set of gradients to the global parameter server; and receiving, by the first machine, a second set of global parameters from the global parameter server, wherein the second set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients. 9. The computer program product of claim 8, wherein the method further comprises: receiving, by a second machine, the first set of global parameters from the global parameter server; executing, by multiple learner processors in the second machine, the algorithm using the first set of global parameters and a second mini-batch of data known to describe the entity type; generating, by the second machine, a second consolidated set of gradients that describe a direction for the first set of global parameters in order to improve the accuracy of the algorithm in modeling the entity type when using the first set of global parameters; transmitting, from the second machine, the second consolidated set of gradients to the global parameter server; and receiving, by the first machine and the second machine, a third set of global parameters from the global parameter server, wherein the third set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients and the second consolidated set of gradients. 10. The computer program product of claim 9, wherein the method further comprises: testing, by a third machine, a set of unknown data using the third set of global parameters in order to determine whether the set of unknown data matches the entity type. 11. 
The computer program product of claim 8, wherein the method further comprises: generating each of the first consolidated set of gradients by a different learner processor in the first machine; writing, by each of the multiple learner processors in the first machine, each gradient generated by each of the multiple learner processors in the first machine; and consolidating, by the first machine, gradients generated by all of the multiple learner processors in the first machine in order to create the first consolidated set of gradients. 12. The computer program product of claim 8, wherein the method further comprises: reading, by all of the multiple learner processors in the first machine, the first set of global parameters and the second set of global parameters from a shared memory in the first machine. 13. The computer program product of claim 8, wherein the method further comprises: storing global parameters currently in use by the first machine in a first memory in the first machine; and storing global parameters being downloaded from the global parameter server for future use, in a second memory in the first machine. 14. The computer program product of claim 8, wherein the first set of global parameters further weight results from one or more particular operators used in the algorithm that models the entity type. 15. The computer program product of claim 8, wherein the program instructions are provided as a service in a cloud environment. 16. A computer system comprising one or more processors, one or more computer readable memories, and one or more computer readable storage mediums, and program instructions stored on at least one of the one or more storage mediums for execution by at least one of the one or more processors via at least one of the one or more memories, the stored program instructions comprising: program instructions to receive, by a first machine, a first set of global parameters from a global parameter server, wherein the first set of global parameters comprises data that weights one or more operands used in an algorithm that models an entity type; program instructions to execute, by multiple learner processors in the first machine, the algorithm using the first set of global parameters and a first mini-batch of data known to describe the entity type; program instructions to generate, by the first machine, a first consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the first mini-batch of data known to describe the entity type; program instructions to transmit, from the first machine, the first consolidated set of gradients to the global parameter server; and program instructions to receive, by the first machine, a second set of global parameters from the global parameter server, wherein the second set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients. 17. 
The computer system of claim 16, further comprising: program instructions to receive, by a second machine, the first set of global parameters from the global parameter server; program instructions to execute, by multiple learner processors in the second machine, the algorithm using the first set of global parameters and a second mini-batch of data known to describe the entity type; program instructions to generate, by the second machine, a second consolidated set of gradients that describes a direction for the first set of global parameters in order to improve the accuracy of the algorithm in modeling the entity type when using the first set of global parameters; program instructions to transmit, from the second machine, the second consolidated set of gradients to the global parameter server; and program instructions to receive, by the first machine and the second machine, a third set of global parameters from the global parameter server, wherein the third set of global parameters is a modification of the first set of global parameters based on the first consolidated set of gradients and the second consolidated set of gradients. 18. The computer system of claim 17, further comprising: program instructions to test, by a third machine, a set of unknown data using the third set of global parameters in order to determine whether the set of unknown data matches the entity type. 19. The computer system of claim 16, further comprising: program instructions to generate each of the first consolidated set of gradients by a different learner processor in the first machine; program instructions to write, by each of the multiple learner processors in the first machine, each gradient generated by each of the multiple learner processors in the first machine; and program instructions to consolidate, by the first machine, gradients generated by all of the multiple learner processors in the first machine in order to create the first consolidated set of gradients. 20. The computer system of claim 16, further comprising: program instructions to read, by all of the multiple learner processors in the first machine, the first set of global parameters and the second set of global parameters from a shared memory in the first machine. | Please help me write a proper abstract based on the patent claims. | A machine receives a first set of global parameters from a global parameter server. The first set of global parameters includes data that weights one or more operands used in an algorithm that models an entity type. Multiple learner processors in the machine execute the algorithm using the first set of global parameters and a mini-batch of data known to describe the entity type. The machine generates a consolidated set of gradients that describes a direction for the first set of global parameters in order to improve an accuracy of the algorithm in modeling the entity type when using the first set of global parameters and the mini-batch of data. The machine transmits the consolidated set of gradients to the global parameter server. The machine then receives a second set of global parameters from the global parameter server, where the second set of global parameters is a modification of the first set of global parameters based on the consolidated set of gradients. |
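This record describes a synchronous parameter-server pattern: learner processors on one machine each compute a gradient on a slice of a mini-batch, the machine consolidates them, the server folds the consolidated gradient into the global parameters, and the machine keeps two parameter buffers (one in use, one being downloaded). The Python sketch below mirrors that flow with a toy objective; the class names, the averaging consolidation, and the SGD-style server update are illustrative assumptions, not the claimed implementation.

```python
import numpy as np


class GlobalParameterServer:
    """Holds the global parameters and folds in consolidated gradients from machines."""

    def __init__(self, n_params, lr=0.01, seed=0):
        self.params = np.random.default_rng(seed).normal(size=n_params)
        self.lr = lr

    def push_consolidated_gradients(self, grads):
        self.params -= self.lr * grads   # modify the global parameters
        return self.params.copy()        # updated set sent back to the machines


class LearnerMachine:
    """One machine whose learner processors share a parameter buffer."""

    def __init__(self, n_learners):
        self.n_learners = n_learners
        self.current_params = None   # first memory: parameters currently in use
        self.next_params = None      # second memory: parameters being downloaded

    def download(self, params):
        # Stage freshly downloaded parameters in the second buffer, then swap them in.
        self.next_params = params
        self.current_params, self.next_params = self.next_params, self.current_params

    def train_minibatch(self, minibatch):
        # Each learner processor computes a gradient on its slice of the mini-batch,
        # then the machine consolidates (here: averages) them before uploading.
        slices = np.array_split(minibatch, self.n_learners)
        grads = [self._gradient(self.current_params, s) for s in slices]
        return np.mean(grads, axis=0)

    @staticmethod
    def _gradient(params, data):
        # Placeholder gradient of a squared-error toy objective; a real model goes here.
        return params - data.mean(axis=0)


server = GlobalParameterServer(n_params=4)
machine = LearnerMachine(n_learners=3)
machine.download(server.params.copy())
batch = np.random.default_rng(1).normal(loc=2.0, size=(12, 4))
consolidated = machine.train_minibatch(batch)
machine.download(server.push_consolidated_gradients(consolidated))
print(machine.current_params)
```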
1. A computer-implemented method for global object recognition comprising: receiving, by the one or more hardware processors, object metadata including a plurality of characteristics that define an object to be detected; receiving, by one or more hardware processors, search metadata including a plurality of context parameters that define a search for the object; retrieving, based on the object and search metadata, a plurality of source data of a given data type; selecting, from a plurality of algorithms, a subset of algorithms to be used in processing the retrieved source data based on a cumulative trained probability of correctness (Pc) that each of the algorithms, which are processed in a chain and conditioned upon the result of the preceding algorithms, produce a correct result; ordering the algorithms in the subset based on algorithm metadata including a plurality of algorithm characteristics to reduce an expected processing load of the retrieved source data; and processing the retrieved source data in order according to the chain of the selected subset of algorithms to obtain a plurality of results and to reduce the number of source data that is processed by the next algorithm in the chain, at least one result indicating whether the object was detected in corresponding source data output from the last algorithm in the chain. 2. The method of claim 1, further comprising: computing a cumulative object Pc and confidence interval representing whether the object was detected in the plurality of source data. 3. The method of claim 2, further comprising: computing a cumulative algorithm Pc and confidence interval representing whether the result of each algorithm in the chain was correct in the plurality of source data. 4. The method of claim 1, further comprising configuring the algorithms in the subset to reduce the number of retrieved source data by a nominal culling percentage. 5. The method of claim 4, further comprising: receiving, by one or more hardware processors, a mission time critical targeting (TCT) requirement; and reconfiguring one or more of the image processing algorithms to adjust the nominal culling percentage based on the mission TCT requirement. 6. The method of claim 1, wherein the steps of selecting and ordering the subset of algorithms further comprises: for a plurality of objects, non-real-time evaluation of a plurality of candidate subsets of different chained algorithms configured based on object and algorithm metadata, selection of a subset based on its cumulative trained Pc and expected processing load for each said object and storing of selected subsets in a repository; and real-time selection of the subset from the repository based on the object metadata. 7. The method of claim 6, wherein a candidate subset's cumulative trained Pc is evaluated against a threshold after each image processing algorithm in the chain and disqualified if the trained Pc does not exceed the threshold. 8. The method of claim 7, wherein the plurality of candidate subsets' cumulative trained Pc are evaluated against a different threshold at each level of a multi-path tree to disqualify algorithms within the subset chain at that level and to identify one or more candidate algorithms to replace the disqualified algorithm at that level to continue the chain, complete the subset, and ensure that at least one subset's cumulative Pc exceeds a final threshold. 9. 
The method of claim 6, further comprising using a stochastic math model (SMM) to compute the cumulative trained Pc and using a discrete event simulator (DES) to implement the SMM and perform a Monte Carlo simulation on multiple instances of the object in the source data to generate the confidence interval corresponding to the Pc at each level. 10. The method of claim 6, wherein the expected processing load is based on both the processing resources required to run each algorithm and a nominal culling percentage for that algorithm. 11. The method of claim 6, further comprising: selecting multiple subsets each configured to detect the object, each said subset's algorithms configured to process a different data type of source data; retrieving multiple source data for each data type; processing the source data according to the selected subset for the data type to obtain one or more results for each subset indicating whether the object was detected; and fusing the one or more results for each subset to obtain one or more results indicating whether the object was detected. 12. The method of claim 11, further comprising: computing and displaying a cumulative fused Pc and confidence interval for the detected object. 13. The method of claim 6, wherein the algorithms in the candidate subsets are configured to reduce the number of retrieved source data by a nominal culling percentage in total. 14. The method of claim 13, further comprising: receiving a mission time critical targeting (TCT) requirement; and reconfiguring one or more of the algorithms to adjust the nominal culling percentage based on the mission TCT requirement. 15. The method of claim 6, further comprising: computing and displaying a cumulative object Pc and confidence interval for the detected object. 16. The method of claim 15, further comprising: for each algorithm in the chain, computing and displaying a cumulative algorithm Pc and confidence interval. 17. The method of claim 6, further comprising: if the object metadata is not a match for a subset, selecting in real-time a subset of algorithms based on the object and algorithm metadata and ordering the algorithms to reduce an expected processing load. 18. The method of claim 6, further comprising: receiving operator feedback as to whether the detected object was correct or incorrect. 19. The method of claim 6, further comprising: receiving operator feedback as to whether the result for each said algorithm was correct or incorrect. 20. The method of claim 6, further comprising using a non-real-time evaluation global object recognition server and cluster of processing nodes to perform the non-real-time evaluation and using a client device to perform the real-time processing. 21. 
A computer-implemented method for global object recognition comprising: receiving, by the one or more hardware processors, object metadata including a plurality of characteristics that define the object to be located; receiving, by one or more hardware processors, for each of a plurality of algorithms configured to process source data of a given data type, algorithm metadata including a plurality of algorithm characteristics that describe the algorithm; retrieving a plurality of training source data of the given data type; selecting, from the plurality of algorithms, based on the object and algorithm metadata a plurality of candidate subsets of algorithms to be used in processing the retrieved source data; for each candidate subset, ordering the algorithms in the chain based on algorithm metadata to reduce an expected processing load; for each candidate subset, processing the retrieved source data in order according to the chain of algorithms to obtain a plurality of results and to reduce the number of training source data that is processed by the next algorithm in the chain, at least one result indicating whether the object was identified in corresponding source data output from the last algorithm in the chain; for each candidate subset, computing a cumulative trained probability of correctness (Pc) and corresponding confidence interval that each of the algorithms, which are processed in the chain and conditioned upon the result of the preceding algorithms, produce a correct result; selecting a candidate subset based on its trained Pc and corresponding confidence interval and expected processing load; pairing the selected candidate subset of algorithms with the object to be detected; and repeating the steps for a plurality of different objects. 22. 
A computer-implemented method for global object recognition comprising: receiving, by the one or more hardware processors, object metadata including a plurality of characteristics that define the object to be detected; receiving, by one or more hardware processors, search metadata including a plurality of context parameters that define a search for the object; receiving, by one or more hardware processors, a plurality of algorithms, algorithm metadata including a plurality of algorithm characteristics that describe each algorithm, and a plurality of defined subsets of chained algorithms configured to detect different objects, each said defined subset selected based on a cumulative trained probability of correctness (Pc) and corresponding confidence interval that each one of the algorithms, which are processed in the chain and conditioned upon the result of the preceding algorithms, produce a correct result and an expected processing load of the chain; selecting, from the plurality of defined subsets, based on the object metadata one of the defined subsets, said algorithms in the selected subsets configured to process source data of a given data type; if none of the defined subsets match the object to be located, based on the object and algorithm metadata selecting and ordering a plurality of algorithms, configured to process source data of a given data type, to define a selected subset; retrieving, based on the plurality of context parameters, a plurality of source data of the given data type; processing the retrieved one or more source data in order according to the chain of the selected subset of algorithms to obtain a plurality of results and to reduce the number of source data that is processed by the next algorithm in the chain, at least one result indicating whether the object was detected in corresponding source data output from the last algorithm in the chain; and determining a cumulative object Pc and confidence interval representing whether the object was detected in one or more of the retrieved source data output from the last algorithm based on the at least one result. | Please help me write a proper abstract based on the patent claims. | A system and method improves the probability of correctly detecting an object from a collection of source data and reduces the processing load. A plurality of algorithms for a given data type are selected and ordered based on a cumulative trained probability of correctness (Pc) that each of the algorithms, which are processed in a chain and conditioned upon the result of the preceding algorithms, produce a correct result, and on an expected processing load. The algorithms cull the source data to pass forward a reduced subset of source data in which the conditional probability of detecting the object is higher than the a priori probability of the algorithm detecting that same object. The Pc and its confidence interval are suitably computed and displayed for each algorithm and the chain and the final object detection.
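The core quantities in this record are a chain's cumulative probability of correctness and its expected processing load when each stage culls some fraction of the source data before the next, costlier stage runs. The sketch below computes both for a hypothetical chain; it simplifies the claims by treating the per-stage Pc values as already-conditioned multipliers (the patent attaches Monte Carlo confidence intervals via an SMM/DES), and all names and numbers are made up for illustration.

```python
from dataclasses import dataclass


@dataclass
class Algorithm:
    """Metadata for one detector in a chain (all values are illustrative)."""
    name: str
    trained_pc: float     # probability of a correct result, conditioned on the preceding stages
    cull_fraction: float  # fraction of source data this stage discards
    cost: float           # relative processing cost per source item


def cumulative_pc(chain):
    """Cumulative probability that every stage in the chain produces a correct result."""
    pc = 1.0
    for algo in chain:
        pc *= algo.trained_pc
    return pc


def expected_load(chain, n_sources):
    """Expected processing load: each stage only sees what earlier stages did not cull."""
    load, remaining = 0.0, float(n_sources)
    for algo in chain:
        load += algo.cost * remaining
        remaining *= (1.0 - algo.cull_fraction)
    return load


candidates = [
    Algorithm("coarse_filter", trained_pc=0.98, cull_fraction=0.80, cost=1.0),
    Algorithm("shape_matcher", trained_pc=0.93, cull_fraction=0.50, cost=5.0),
    Algorithm("fine_classifier", trained_pc=0.90, cull_fraction=0.10, cost=20.0),
]

# Order cheap, aggressive cullers first so the expensive stages see fewer source items.
chain = sorted(candidates, key=lambda a: a.cost / max(a.cull_fraction, 1e-6))
print([a.name for a in chain], cumulative_pc(chain), expected_load(chain, n_sources=10_000))
```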
1. An improved computer-implemented method of efficiently determining actions to perform based on data from streaming continuous queries in a distributed computer system, the method comprising: at a central control computer: receiving a streaming continuous query and a rule-set; wherein the rule-set comprises decision data representing a plurality of decisions based on a plurality of attributes produced by the streaming continuous query, and action data representing end actions based on the plurality of decisions, wherein the plurality of attributes comprise data processed by one or more networked computers; separating the streaming continuous query into a sub-query executable at one or more edge computers; categorizing each rule from the rule-set into a set of one or more edge expressions that are configured to be evaluated at an edge computer to cause an action; providing the set of one or more edge expressions and the sub-query to at least one edge computer with instructions to process visible attributes on the edge computer and to evaluate the set of one or more edge expressions independently from the central control computer; wherein the method is performed by one or more computing devices. 2. The method of claim 1, wherein separating the streaming continuous query further comprises: retrieving a first set of attributes available at a particular edge computer of the one or more edge computers; comparing the first set of attributes available at the particular edge computer with a second set of attributes requested in the streaming continuous query; creating, from the streaming continuous query, the sub-query to request a third set of attributes, wherein the third set of attributes comprises an intersection of attributes from the first set of attributes available at the particular edge computer and the second set of attributes requested in the streaming continuous query. 3. The method of claim 2, wherein retrieving the first set of attributes available at the particular edge computer includes scanning the particular edge computer for metadata of visible attributes at the particular edge computer. 4. The method of claim 1, wherein separating the streaming continuous query comprises separating the streaming continuous query into the sub-query executable at the one or more edge computers, and a super-query comprising at least some attributes and syntax from the streaming continuous query and not in the sub-query; wherein the super-query aggregates attributes provided by a propagation action performed at a plurality of edge computers including the one or more edge computers. 5. The method of claim 1, wherein the rule-set is expressed as a decision tree, wherein each branch in the tree represents a true outcome of a decision applied to a set of one or more attributes and each leaf in the tree represents an end action taken on a networked computer, wherein each rule in the rule-set is derived by combining each branch in a path to an end action with an AND operator, and combining multiple paths to a single end action with an OR operator. 6. The method of claim 1, wherein the categorizing step includes applying a set of one or more computer specific constraints to the plurality of attributes within each expression to determine whether evaluation of any expression results permanently in a false decision such that the central control computer determines not to provide that particular expression to the edge computer. 7. 
The method of claim 1, wherein categorizing each rule from the rule-set includes parsing each rule into a first set of expressions based on decisions requiring attributes available at the edge computer and a second set of expressions based on decisions requiring attributes from a plurality of edge computers; wherein the first set of expressions are categorized into the set of one or more edge expressions that are configured to be evaluated at the edge computer to cause the action. 8. The method of claim 7, further comprising creating a separate rule for a propagation action when parsing a given rule from the rule-set results in a first decision from the given rule in the first set of expressions and a second decision from the given rule in the second set of expressions. 9. The method of claim 8, further comprising combining the separate rule for the propagation action with another rule for the propagation action when both rules result in the propagation action of a same attribute. 10. The method of claim 1, wherein the networked computers comprise a multi-tiered hierarchy, wherein edge specific attributes with respect to an intermediate computer represent attributes from more than one computer with respect to a lower tiered computer, and the intermediate computer represents the central control computer with respect to the lower tiered computer; wherein the intermediate computer represents the edge computer to a higher tiered computer; wherein the steps are applied recursively to available networked computers except for any computer on a lowest tier. 11. A system comprising: a controller computer, coupled to one or more edge computers; receiving logic, in the controller computer, that is configured to receive a streaming continuous query and a rule-set; wherein the rule-set comprises decisions based on attributes produced by the query, and end actions based on the decisions, wherein the attributes comprise data processed by one or more computers on a network; separating logic, in the controller computer, that is configured to separate the streaming continuous query into a sub-query executable at one or more edge computers; categorizing logic, in the controller computer, that is configured to categorize each rule from the rule-set based on decisions requiring attributes available from the sub-query into a set of one or more edge expressions evaluable at an edge computer to cause an action; distributing logic, in the controller computer, that is configured to provide the set of one or more edge expressions and the sub-query to at least one edge computer to enable processing of visible attributes on the edge computer and evaluation of an action independently from the controller computer. 12. The system of claim 11, wherein the receiving logic is configured to receive the rule-set expressed as a decision tree, wherein each branch in the tree represents a true outcome of a decision applied to a set of one or more attributes and each leaf in the tree represents an end action taken on the network, wherein the rule-set is derived by combining each branch in a path to an end action with an AND operator, and combining multiple paths to a single end action with an OR operator. 13. 
The system of claim 11, wherein the separating logic, in the controller computer, is configured to separate the streaming continuous query by looping through each edge computer from the one or more edge computers on the network to determine attributes that are visible at a particular edge computer, and creating the sub-query for the particular edge computer by removing, from the streaming continuous query, a statement requiring an attribute unavailable at the particular edge computer. 14. The system of claim 13, wherein the separating logic, in the controller computer, is configured to separate the streaming continuous query by separating the streaming continuous query into the sub-query executable at one or more edge computers, and a super-query comprising at least some attributes and syntax from the streaming continuous query but not in the sub-query. 15. The system of claim 11, wherein the categorizing logic, in the controller computer, is configured to review metadata on each computer to determine actions requiring expressions based on edge specific attributes and expressions based on attributes from more than one computer, wherein an action requiring both expressions is separated into two actions, wherein a propagation action is created for expressions based on edge specific attributes. 16. A system comprising: a controller computer, coupled to one or more intermediate computers each of which is coupled to one or more lower tiered computers; receiving logic, in the controller computer, that is configured to receive a streaming continuous query and a rule-set; wherein the rule-set comprises decisions based on attributes produced by the query, and end actions based on the decisions, wherein the attributes comprise data processed by one or more computers on a network; separating logic, in the controller computer, that is configured to separate the streaming continuous query into a sub-query executable at the intermediate computer; categorizing logic, in the controller computer, that is configured to categorize each rule from the rule-set based on decisions requiring attributes available from the sub-query into a set of one or more edge expressions evaluable at the intermediate computer to cause an action; distributing logic, in the controller computer, that is configured to provide the set of one or more edge expressions and the sub-query to at least one intermediate computer to enable processing of visible attributes on the intermediate computer and evaluation of an action independently from the controller computer. 17. The system of claim 16, wherein the receiving logic is configured to receive the rule-set expressed as a decision tree, wherein each branch in the tree represents a true outcome of a decision applied to a set of one or more attributes and each leaf in the tree represents an end action taken on the network, wherein the rule-set is derived by combining each branch in a path to an end action with an AND operator, and combining multiple paths to a single end action with an OR operator. 18. The system of claim 16, wherein the separating logic, in the controller computer, is configured to separate the streaming continuous query by looping through each edge computer from the one or more edge computers on the network to determine attributes that are visible at a particular edge computer, and creating the sub-query for the particular edge computer by removing, from the streaming continuous query, a statement requiring an attribute unavailable at the particular edge computer. 19. 
The system of claim 18, wherein the separating logic, in the controller computer, is configured to separate the streaming continuous query by separating the streaming continuous query into the sub-query executable at one or more edge computers, and a super-query comprising at least some attributes and syntax from the streaming continuous query but not in the sub-query. 20. The system of claim 16, wherein the categorizing logic, in the controller computer, is configured to review metadata on each computer to determine actions requiring expressions based on edge specific attributes and expressions based on attributes from more than one computer, wherein an action requiring both expressions is separated into two actions, wherein a propagation action is created for expressions based on edge specific attributes. | Please help me write a proper abstract based on the patent claims. | In an embodiment, an improved computer-implemented method of efficiently determining actions to perform based on data from streaming continuous queries in a distributed computer system comprises, at a central control computer, receiving a streaming continuous query and a rule-set; wherein the rule-set comprises decision data representing decisions based on attributes produced by the query, and action data representing end actions based on the decisions, wherein the attributes comprise data processed by one or more networked computers; separating the streaming continuous query into a sub-query executable at one or more edge computers; categorizing end actions from the set based on decisions requiring attributes available from the sub-query into a set of one or more edge expressions that are configured to be evaluated at an edge agent to cause an action; providing the set of edge expressions and the sub-query to at least one edge computer with instructions to process visible attributes on the edge computer and to evaluate the set of one or more edge expressions independently from the central control computer; wherein the method is performed by one or more computing devices.
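The splitting and categorising steps in this record come down to set operations over attribute names: the edge sub-query keeps the intersection of the query's attributes with what the edge computer can see, and a rule becomes an edge expression only if every attribute it needs is edge-visible. The Python sketch below shows that logic on toy data; the tuple-based rule representation and the attribute names are assumptions, not the claimed data model.

```python
def split_query(query_attributes, edge_attributes):
    """Split a continuous query's attribute set into an edge sub-query and a central super-query.

    The sub-query keeps only attributes the edge computer can see (the claimed intersection);
    everything else has to be aggregated centrally by the super-query.
    """
    sub = [a for a in query_attributes if a in edge_attributes]
    super_ = [a for a in query_attributes if a not in edge_attributes]
    return sub, super_


def categorize_rules(rules, edge_attributes):
    """Categorize each rule into edge-evaluable expressions and centrally evaluated ones.

    A rule is a (required_attributes, action) pair; rules whose attributes are all visible
    at the edge become edge expressions, the rest stay with the central controller.
    """
    edge_rules, central_rules = [], []
    for required, action in rules:
        target = edge_rules if set(required) <= set(edge_attributes) else central_rules
        target.append((required, action))
    return edge_rules, central_rules


query = ["cpu_load", "packet_rate", "fleet_avg_latency"]
edge_visible = {"cpu_load", "packet_rate"}
rules = [
    (["cpu_load"], "throttle_locally"),
    (["cpu_load", "fleet_avg_latency"], "rebalance_fleet"),
]
print(split_query(query, edge_visible))
print(categorize_rules(rules, edge_visible))
```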
1. A robot controlling apparatus which controls a robot by detecting time-series states of a worker and the robot, comprising: a detecting unit configured to detect a state of the worker; a learning information holding unit configured to hold learning information obtained by learning the time-series states of the robot and the worker; and a controlling unit configured to control an operation of the robot based on the state of the worker output from the detecting unit and the learning information output from the learning information holding unit. 2. The robot controlling apparatus according to claim 1 wherein the controlling unit further comprises a deciding unit configured to obtain the time-series state of the worker from the detected state of the worker, and decide whether or not the obtained time-series state of the worker is similar to the time-series state of the worker included in the learning information, and the controlling unit controls the operation of the robot based on a decision result of the deciding unit. 3. The robot controlling apparatus according to claim 2 wherein, in a case where it is decided by the deciding unit that the obtained time-series state of the worker is not similar to the time-series state of the worker included in the learning information, the controlling unit stops or decelerates the operation of the robot. 4. The robot controlling apparatus according to claim 3, further comprising a notifying unit configured to, in the case where it is decided by the deciding unit that the obtained time-series state of the worker is not similar to the time-series state of the worker included in the learning information, notify the worker of information for urging to restart work after the operation of the robot is stopped or decelerated. 5. The robot controlling apparatus according to claim 3, wherein in the case where it is decided by the deciding unit that the obtained time-series state of the worker is not similar to the time-series state of the worker included in the learning information, the controlling unit decides whether or not the worker pays attention to the robot, in a case where it is decided that the worker pays attention to the robot, the controlling unit continues the current operation of the robot, and in a case where it is decided that the worker does not pay attention to the robot, the controlling unit stops or decelerates the operation of the robot. 6. The robot controlling apparatus according to claim 2 wherein, in a case where it is decided by the deciding unit that the obtained time-series state of the worker is similar to the time-series state of the worker included in the learning information, the controlling unit continues the operation of the robot. 7. The robot controlling apparatus according to claim 1 wherein the state of the worker includes at least either a position and orientation of a predetermined part of the worker and a position and orientation of an object grasped by the worker. 8. The robot controlling apparatus according to claim 2 wherein the deciding unit further decides the operation of the robot based on a state of the robot. 9. The robot controlling apparatus according to claim 8, wherein the state of the robot corresponds to position information of a hand or a joint of the robot. 10. 
The robot controlling apparatus according to claim 1, further comprising a learning information updating unit configured to update the learning information based on the state of the worker output from the detecting unit and the learning information output from the learning information holding unit, and output the updated learning information to the learning information holding unit. 11. A robot controlling method which controls a robot by detecting time-series states of a worker and the robot, comprising: detecting a state of the worker; holding learning information obtained by learning the time-series states of the robot and the worker; and controlling an operation of the robot based on the detected state of the worker and the held learning information. 12. A non-transitory computer-readable storage medium which stores a program for causing a computer to perform a robot controlling method of controlling a robot by detecting time-series states of a worker and the robot, the controlling method comprising: detecting a state of the worker; holding learning information obtained by learning the time-series states of the robot and the worker; and controlling an operation of the robot based on the detected state of the worker and the held learning information. | Please help me write a proper abstract based on the patent claims. | To enable work safely in a space where a robot and a worker coexist without defining an area in a work space using a monitoring boundary or the like and thus improve productivity, there is provided a robot controlling apparatus which controls the robot by detecting time-series states of the worker and the robot, and comprises: a detecting unit configured to detect a state of the worker; a learning information holding unit configured to hold learning information obtained by learning the time-series states of the robot and the worker; and a controlling unit configured to control an operation of the robot based on the state of the worker output from the detecting unit and the learning information output from the learning information holding unit.
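The safety logic in this record compares an observed time-series of worker states against learned reference sequences and stops or decelerates the robot when nothing is similar enough, unless the worker is judged to be attending to the robot. The sketch below uses a plain Euclidean distance and a fixed threshold as stand-ins for the learned similarity model; both, along with the `worker_attentive` flag, are assumptions for illustration rather than the claimed learning mechanism.

```python
import numpy as np


def sequence_distance(observed, reference):
    """Mean Euclidean distance between two equally long time series of worker states."""
    observed, reference = np.asarray(observed, float), np.asarray(reference, float)
    return float(np.linalg.norm(observed - reference, axis=1).mean())


def decide_robot_action(observed, learned_sequences, threshold=0.5, worker_attentive=False):
    """Continue the robot if the observed worker sequence is similar to any learned sequence.

    If nothing matches, the apparatus stops or decelerates unless the worker is judged to be
    paying attention to the robot (the exception described in the dependent claims). The
    threshold and the attention flag stand in for the learned similarity criterion and the
    attention estimation of the claims.
    """
    best = min(sequence_distance(observed, ref) for ref in learned_sequences)
    if best <= threshold:
        return "continue"
    return "continue" if worker_attentive else "decelerate_or_stop"


learned = [np.linspace([0, 0], [1, 1], num=10), np.linspace([0, 1], [1, 0], num=10)]
observed_ok = np.linspace([0, 0], [1.05, 0.95], num=10)
observed_odd = np.linspace([2, 2], [3, 3], num=10)
print(decide_robot_action(observed_ok, learned))
print(decide_robot_action(observed_odd, learned, worker_attentive=False))
```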
1. An interface apparatus for providing interaction over a communication network between a user and a plurality of network entities cooperating with said interface apparatus under a predetermined service agreement stored in the interface apparatus, the interface apparatus comprising: a front-end communication system including: at least one front-end communication input device configured for interaction with the user for receiving user input information and generating user information input signals, and at least one front-end communication output device configured for interaction with the user for outputting user information output signals obtained as a reaction to the user input information; a communication processing system coupled to the front-end communication system and configured for (i) receiving said user information input signals for coding thereof to a format suitable for data transferring the coded information input signals to at least one network entity selected from said plurality of network entities over said communication network to handle said coded information input signals at the end of said at least one network entity, thereby to generate information coded output signals responsive to said user information input signals; and (ii) receiving said information coded output signals generated by said at least one network entity; and decoding these signals to a format suitable for outputting thereof by said at least one front-end output device; and a configuration and control system configured for (i) automatic reconfiguration and control of a functionality of the interface apparatus, including: selecting desired functional characteristics of the interface apparatus; and adjusting said interface apparatus to operating conditions of the communication network, including network availability; and (ii) automatic reconfiguration and control of functionality of interaction of said at least one network entity with the interface apparatus, including adjusting said interaction to predetermined requirements imposed on said at least one network entity for desired cooperation with said interface apparatus in accordance with said predetermined service agreement; and a wireless network connector electrically coupled to said communication processing system, and to said configuration and control system; said wireless network connector configured for providing a wireless signal linkage between the interface apparatus and said plurality of network entities over the communication network. 2. The interface apparatus of claim 1, further comprising: a front-end monitoring system including at least one front-end monitoring device configured for interacting with the user, collecting user state information related to a state of the user and generating user state patterns indicative of the state of the user; a decision-making system coupled to said front-end monitoring system and to wireless network connector, and configured for receiving the user state patterns collected by said at least one front-end monitoring device, and processing thereof for taking a decision as to how to respond to the received user state patterns. 3. The interface apparatus of claim 2, further comprising an interface for remote monitoring coupled to said wireless network connector, said communication processing system and to said decision-making system, and configured for interaction of the interface apparatus with said plurality of network entities via said wireless network connector. 4. 
The interface apparatus of claim 1, wherein said at least one front-end communication input device of the front-end communication system is selected from a microphone configured for receiving said user input information provided verbally and converting said user information into the user information input signals corresponding to the user verbal input information; and a video camera configured for receiving said user information provided visually and converting said user information into the user information input signals corresponding to the visual user information. 5. The interface apparatus of claim 4, wherein said at least one front-end communication output device of the front-end communication system is selected from a speaker configured for audio outputting said user information output signals, and a display configured for video outputting said user information output signals, wherein said user information output signals are indicative of a reaction of said at least one network entity to said user information input signals. 6. The interface apparatus of claim 5, wherein said communication processing system comprises: an encoding and decoding module coupled to said at least one front-end communication input device and to said at least one front-end communication output device of the front-end communication system, said encoding and decoding module configured (i) for receiving the user information input signals including audio and video signals from said at least one front-end communication input device, coding thereof to obtain coded information input signals and forwarding said coded information input signals to the wireless network connector for relaying the coded information input signals to said at least one network entity; and (ii) for receiving coded information output signals and decoding these signals to obtain said user information output signals; a speech synthesizer coupled to the speaker and to the encoding and decoding module for encoding and decoding audio signals, and configured to receive decoded information output signals and to generate electrical signals in a format suitable for audio outputting thereof by the speaker; and a view synthesizer coupled to the display and to the encoding and decoding module for encoding and decoding video signals, and configured to receive decoded information output signals and to generate electrical signals in a format suitable for video outputting thereof by the display. 7. The interface apparatus of claim 3, further comprising a local dialogue organization device coupled to the speech synthesizer and to said interface for remote monitoring and configured for organization of local dialogues between the user and the interface apparatus. 8. The interface apparatus of claim 3, wherein said at least one front-end monitoring device of the front-end monitoring system is selected from the list including: a tactile sensor configured to provide user state information indicative of a force applied by the user to the interface apparatus; at least one user physiological parameter sensor configured for measuring at least one vital sign of the user; a user location sensor configured for determination of a location of the interface apparatus; an accelerometer configured for detecting motion of the interface apparatus; and a gyroscope configured for measuring orientation of the interface apparatus in space. 9. 
The interface apparatus of claim 8, wherein said at least one user physiological parameter sensor is selected from the list including: a temperature sensor, a pulse rate sensor, a blood pressure sensor, a pulse oximetry sensor, and a plethysmography sensor. 10. The interface apparatus of claim 3, wherein said decision-making system comprises: a sensor data collection device configured for receiving the user state patterns measured by the front-end monitoring system and formatting thereof; a pattern recognition device coupled to the sensor data collection device and configured for comparing the user state patterns with reference state patterns stored in the interface apparatus, and generating an identification signal indicative of whether at least one of the user state patterns matches or does not match at least one reference state pattern, said reference state patterns being indicative of various predetermined states of the user and being used as a reference for determining a monitored state of the user; a pattern storage device coupled to the pattern recognition device and configured for storing said reference state patterns; a decision maker device coupled to said pattern recognition device, and configured for receiving said identification signal from the pattern recognition device, and in response to said identification signal, generating said coded information output signals indicative of at least one policy for taking said decision; and a policy storage device coupled to the decision maker device and configured for storing policies for the taking of the decision. 11. The interface apparatus of claim 10, wherein the policy for the taking of the decision includes: (i) if at least one of the user state patterns matches at least one reference state pattern, to generate said coded information output signals including advice of the decision-making system as a reaction to the monitored state of the user, and provide said coded information output signals to at least one receiver selected from a corresponding at least one network entity selected from said plurality of network entities configured for handling the advice, and said communication processing module of the interface apparatus further configured for decoding said coded information output signals for extracting the advice, and outputting the advice to the user; and (ii) if none of the user state patterns matches at least one reference state pattern, to forward the monitored user state patterns to at least one network entity being configured for handling the user patterns. 12. 
The interface apparatus of claim 3, wherein said configuration and control system includes: a cyber certificate database comprising at least one record selected from: a record with a description of functional characteristics of the interface apparatus, a record with a description of functional characteristics of the network entities selected to cooperate with the interface apparatus for a predetermined purpose; a record with a description of functional characteristics of said plurality of network entities providing services to which the interface apparatus has a right to access; an archive record for interaction of the user with the interface apparatus; and a cyber portrait of the user including at least one kind of characteristics selected from: cognitive characteristics of the user, behavioral characteristics of the user, physiological characteristics of the user, and mental characteristics of the user; a cyber certificate database controller coupled to the cyber certificate database, and configured for controlling an access to said at least one record stored in the cyber certificate database to read and update said at least one record; and a reconfiguration device coupled to said cyber certificate database controller, and configured for dynamic reconfiguration of functionality of the interface apparatus, and interaction of said at least one network entity with the interface apparatus in accordance with said predetermined service agreement. 13. The interface apparatus of claim 12, wherein said dynamic reconfiguration of the functionality includes at least one of the following actions: receiving external signals for (i) adjusting said interface apparatus to the operating conditions of the communication network, and (ii) adjusting operation of said at least one external entity to said predetermined requirements imposed on said at least one external entity for cooperation with said interface apparatus in accordance with said predetermined service agreement; and providing instruction signals to said cyber certificate database controller to read and update said at least one record. 14. The interface apparatus of claim 13, wherein said at least one entity includes an entities control system configured for receiving from said configuration and control system of the interface apparatus a request for finding at least one network entity providing services desired to the interface apparatus in accordance with the conditions of the predetermined service agreement, conducting a semantic search of said at least one network entity, adjusting the interaction between the interface apparatus and said at least one network entity to the conditions of the predetermined service agreement, and providing addresses and access conditions of said at least one network entity to said configuration and control system. 15. 
The interface apparatus of claim 14, wherein said at least one entity providing services desired to the interface apparatus in accordance with the conditions of the predetermined service agreement includes at least one system selected from: (a) an external dialogue system configured for organization and conduction of natural language dialogues with the user, configured for receiving at least one type of input signals selected from said coded information input signals originating from the front-end communication system, and said user state patterns provided from the decision-making system; and analyzing said at least one type of the input signals and generating said coded information output signals indicative of reaction on said coded information input signals; (b) a supervisor communication support system configured for finding a supervisor communication device used by a supervisor of the user and supporting communication of said at least one user interface apparatus with the supervisor communication device; and (c) a peer communication support system configured for finding at least one other interface apparatus used by a peer to the user, and for supporting communication between the interface apparatus of the user and said at least one other interface apparatus. 16. The interface apparatus of claim 14, wherein said at least one entity providing cloud services desired to the interface apparatus in accordance with the conditions of the predetermined service agreement includes a situation identification system configured for receiving said coded information input signals from the front-end communication system and said user state patterns forwarded by the decision-making system, and providing analysis thereof for identifying various situations occurring with the user and notifying said supervisor communication support system of the situations as they are discovered. 17. The interface apparatus of claim 15, wherein said external dialogue system comprises a speech recognition system configured for receiving said coded information input signals originating from the front-end communication system and transforming these signals into data suitable for computer processing, and a dialogue manager coupled to the speech recognition system, and configured to process said data and to generate said coded information output signals produced as a reaction to said coded information input signals. 18. The interface apparatus of claim 17, wherein said coded information input signals include a query signal; and wherein the external dialogue system further comprises a search engine associated with the dialogue manager and configured for receiving a processed query signal from the dialogue manager, conducting a search based on a query related to said query signal and providing search results to the dialogue manager for targeting thereof to the user; wherein said search results are included in said coded information output signals. 19. The interface apparatus of claim 17, wherein said coded information input signals include said user state patterns forwarded by the decision-making system, and wherein the external dialogue system is further configured to analyze said user state patterns forwarded by the decision-making system, and generate advice of the entity as a reaction to the monitored state of the user, wherein the entity advice is included in said coded information output signals. 20. 
The interface apparatus of claim 15, wherein the user is a child, and the supervisor is a parent of the child, and said supervisor communication device is a communication device of the parent. 21. The interface apparatus of claim 16, wherein the situation identification system is configured to communicate with at least one system providing a medical diagnostics service. 22. The interface apparatus of claim 15, wherein the user is a child, and the peer is another child. 23. A method for providing interaction of users with a plurality of network entities over a communication network by the interface apparatus configured to provide interaction between a user and a plurality of network entities cooperating with said interface apparatus under a predetermined service agreement stored in the interface apparatus, the method comprising at the interface apparatus end: automatically reconfiguring and controlling functionality of the interface apparatus, including automatic selecting of desired functional characteristics of the interface apparatus; and adjusting said interface apparatus to operating conditions in the communication network, including network availability; automatically reconfiguring and controlling functionality of interaction of said at least one network entity with the interface apparatus, including adjusting said interaction to predetermined requirements imposed on said at least one network entity for desired cooperation with said interface apparatus in accordance with said predetermined agreement; receiving user input information from the user; processing said user input information and forwarding the corresponding processed signal to at least one entity selected from said plurality of entities configured for handling a communication with the user; and receiving coded information output signals from said at least one entity, processing thereof to obtain user information output signals in a format suitable for outputting to the user. 24. The method of claim 23, comprising: collecting user state information related to a state of the user and generating user state patterns indicative of the state of the user; receiving the user state patterns and processing thereof; and taking a decision as to how to respond to the received user state patterns; wherein said taking of the decision as to how to respond to the received user state patterns comprises: (i) if at least one of the user state patterns matches at least one reference state pattern, taking a decision to generate said coded information output signals including advice indicative of reaction on the monitored state of the user, and processing said coded information output signals for extracting the advice and outputting it to the user; and (ii) if none of the user state patterns matches at least one reference state pattern, forwarding the monitored user state patterns to a corresponding at least one entity configured for handling the user patterns. 25. The method of claim 24, wherein the processing of the user state patterns includes comparing the user state patterns with reference state patterns stored in the interface apparatus, said reference state patterns being indicative of various predetermined states of the user and being used as a reference for determining a monitored state of the user; and taking a decision as to how to respond to the received user state patterns. 26. 
The method of claim 23, further comprising at the end of at least one entity: receiving said coded information input signals from the interface apparatus; analyzing said coded information input signals and generating said coded information output signals indicative of reaction on said coded information input signals; and relaying said coded information output signals to the interface apparatus. 27. The method of claim 24, further comprising at the end of at least one entity: receiving said user state patterns from the interface apparatus; analyzing said user state patterns and generating said coded information output signals indicative of reaction on said coded information input signals; and relaying said coded information output signals to the interface apparatus. 28. The method of claim 23, comprising at the end of at least one entity: receiving said coded information input signals from the interface apparatus; providing analysis thereof for identifying various situations occurring with the user; finding a supervisor communication device used by a supervisor of the user; and providing communication of the supervisor communication device with the interface apparatus of the user. 29. The method of claim 24, comprising at the end of at least one entity: receiving said user state patterns from the interface apparatus; providing analysis thereof for identifying various situations occurring with the user; finding a supervisor communication device used by a supervisor of the user; and providing communication of the supervisor communication device with the interface apparatus of the user. 30. The method of claim 23, comprising at the end of at least one entity: receiving said coded information input signals from the interface apparatus; finding at least one another interface apparatus used by a peer to the user; and providing communication between the interface apparatus of the user and said at least one another interface apparatus. | Please help me write a proper abstract based on the patent claims. | An interface apparatus, system and method for providing interaction over a communication network between a user and network entities are described. The interface apparatus includes a front-end communication system configured for receiving user input information and for outputting output signals in response to the input information. The interface apparatus also includes a communication processing system for coding the input information and forwarding it to the network entity. The interface apparatus can also include a front-end monitoring system for generating user state patterns indicative of the state of the user, a decision-making system for processing the patterns and taking a decision as to how to respond thereto. The interface apparatus includes a configuration and control system configured for reconfiguration and control of functionality of the interface apparatus and for reconfiguration and control of functionality of the network entities. |
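Editorial note on the interface-apparatus row above: claims 10-11 and 24 describe a decision-making policy that compares monitored user state patterns with stored reference patterns and either outputs locally stored advice or forwards the patterns to a network entity. The short Python sketch below only illustrates that dispatch logic; it is not part of the claims or of the recorded abstract, and the names (`ReferencePattern`, `match`, `forward_to_entity`) and the tolerance-based matching rule are assumptions, since the claims do not say how a match is computed.

```python
from dataclasses import dataclass
from typing import Callable, Optional, Sequence

@dataclass
class ReferencePattern:
    """A stored reference state pattern and the advice associated with it."""
    name: str
    values: Sequence[float]
    advice: str

def match(pattern: Sequence[float], reference: ReferencePattern, tol: float = 0.1) -> bool:
    # Assumed matching rule: every component lies within `tol` of the reference value.
    return all(abs(p - r) <= tol for p, r in zip(pattern, reference.values))

def decide(user_pattern: Sequence[float],
           references: Sequence[ReferencePattern],
           forward_to_entity: Callable[[Sequence[float]], None]) -> Optional[str]:
    """Claim 11 policy: return advice on a match, otherwise forward the pattern."""
    for ref in references:
        if match(user_pattern, ref):
            return ref.advice            # (i) matched -> output advice to the user
    forward_to_entity(user_pattern)      # (ii) no match -> hand off to a network entity
    return None
```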
1. A computer system for knowledge based ontology editing, comprising: one or more computer devices each having one or more processors and one or more tangible storage devices; and a program embodied on at least one of the one or more storage devices, the program having a plurality of program instructions for execution by the one or more processors, the program instructions comprising instructions for: receiving, from a remote server through a network, by spoken words of a user into a microphone of the computer, a language instance to update a knowledge base, using a computer, the network being an Internet connection; semantically parsing the language instance to detect an ontology for editing using predetermined grammatical values comprising identifying a subject for the received language instance as the ontology to be edited and POS tags, wherein POS tags comprise word-category disambiguation and the marking up of a word in a text corpus as corresponding to a particular part of speech, based on the word's definition and context; mapping one or more nodes for the ontology for editing based on an ontology database and the knowledge base comprising a Linking Open data (LOD) web data form; determining whether the mapped nodes are defined or undefined within the Linking Open data (LOD) web data form; inquiring the user as to information regarding the undefined nodes by displaying a question regarding the undefined nodes on the user interface on the computer; receiving an answer to the question regarding the undefined nodes on the user interface from the user; calculating a first confidence score based on a number of the defined and undefined mapped nodes; and updating the Linking Open data (LOD) web data form when the first confidence score meets a pre-defined threshold.

| Please help me write a proper abstract based on the patent claims. | A computer-implemented method for knowledge based ontology editing is provided. The method receives a language instance to update a knowledge base, using a computer. The method semantically parses the language instance to detect an ontology for editing. The method maps one or more nodes for the ontology for editing based on an ontology database and the knowledge base. The method determines whether the mapped nodes are defined or undefined within the knowledge base. The method calculates a first confidence score based on a number of the defined and undefined mapped nodes. Furthermore, the method updates the knowledge base when the first confidence score meets a pre-defined threshold. |
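Editorial aside on claim 1 above: the claim derives a confidence score from the counts of defined and undefined mapped nodes and updates the Linking Open data (LOD) web data form only when the score meets a threshold. The claim does not give the scoring formula, so the fraction-of-defined-nodes rule, the 0.8 threshold, and the function names in the sketch below are assumptions; this is illustrative only and not part of the dataset row.

```python
def confidence_score(num_defined: int, num_undefined: int) -> float:
    """Assumed scoring rule: share of mapped nodes that are already defined."""
    total = num_defined + num_undefined
    return num_defined / total if total else 0.0

def maybe_update_lod(defined_nodes, undefined_nodes, threshold: float = 0.8) -> bool:
    """Update the LOD web data form only when the confidence score meets the threshold."""
    score = confidence_score(len(defined_nodes), len(undefined_nodes))
    if score >= threshold:
        # a real system would call its update routine for the LOD form here
        return True
    return False

# Example: 4 defined and 1 undefined mapped node gives a score of 0.8, which meets the threshold.
assert maybe_update_lod(["a", "b", "c", "d"], ["e"]) is True
```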
1. A method of presenting a plurality of alerts based on a predictive classification, comprising: receiving the plurality of alerts, wherein each alert of the plurality of alerts indicates an alarm condition occurring at a monitored network system, and wherein said each alert is a data record comprising one or more data fields describing the alarm condition; classifying said each alert using a predictive machine learning model, wherein the predictive machine learning model is trained to classify said each alert as actionable or non-actionable using the one or more data fields of said each alert as one or more respective classification features, and to calculate a respective probability that said each alert is actionable or non-actionable; and initiating a presentation of the plurality of alerts in a network monitoring user interface based on the respective probability of said each alert. 2. The method of claim 1, further comprising: appending said each alert with an additional data field storing the respective probability. 3. The method of claim 2, further comprising: transmitting said each appended alert to a network monitoring service, wherein the network monitoring user interface is presented by the network monitoring service. 4. The method of claim 1, further comprising: receiving a set of historical alerts that are labeled as actionable or non-actionable, wherein the predictive machine learning model is trained using the set of historical alerts. 5. The method of claim 4, further comprising: transforming the one or more data fields of the set of historical alerts into a numeric matrix based on a variable type of the one or more data fields, wherein the predictive machine learning model is trained using the numeric matrix. 6. The method of claim 5, further comprising: binarizing the one or more data fields into a categorical vector based on one or more categorical labels when the variable type is a categorical variable type, wherein the categorical vector is included in the numeric matrix for training of the predictive machine learning model. 7. The method of claim 5, further comprising: extracting one or more keywords from the one or more data fields when the variable type is a text variable type; and generating a hashed vector of the one or more keywords, wherein the hashed vector is included in the numeric matrix for training of the predictive machine learning model. 8. The method of claim 5, wherein the variable type is a decision tree variable type, the method further comprising: transforming the one or more data fields using a decision tree to correlate the one or more data fields to a likelihood of being associated with an actionable alert, wherein the decision tree includes one or more decision rules indicating a non-linear relationship between the one or more data fields and the likelihood of being associated with the actionable alert. 9. The method of claim 8, wherein the number variable type is a temporal variable type including a time-of-day variable. 10. The method of claim 1, further comprising: receiving a feedback input that specifies a labeled classification for at least one of the plurality of alerts; and updating a training of the predictive machine learning model based on the feedback input. 11. 
A non-transitory computer-readable storage medium to present a plurality of alerts based on a predictive classification, carrying one or more sequences of one or more instructions which, when executed by one or more processors, cause an apparatus to perform: receiving a plurality of alerts, wherein each alert in the plurality of alerts indicates an alarm condition occurring in a monitored network system, and wherein said each alert is labeled as either an actionable alert or a non-actionable alert; and training a predictive machine learning model to classify a subsequent alert as actionable or non-actionable using the one or more data fields of said each alert as one or more respective classification features, wherein the predictive machine learning model is configured to calculate a probability that the subsequent alert is actionable or non-actionable; and wherein the subsequent alert is presented in a network monitoring user interface based on the probability. 12. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: pre-processing said each alert to add one or more additional data fields to record contextual information about the alarm condition, the monitored system, or a combination thereof, wherein the predictive machine learning model is further trained using the one or more additional data fields. 13. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: segmenting the plurality of alerts based on a training window and a validation window, wherein the plurality of alerts falling within the training window is used to train the predictive machine learning model; and wherein the plurality of alerts falling within the validation window are used to validate the predictive machine learning model after training. 14. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: determining that said each alert is labeled as an actionable alert when said each alert is associated with an incident number. 15. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: training the predictive machine learning model by applying a regression analysis on the one or more respective classification features. 16. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: selecting the one or more data fields to designate as the one or more respective classification features based on a variance threshold value. 17. The non-transitory computer-readable storage medium of claim 11, wherein the apparatus is further caused to perform: initiating a retraining of the predictive machine learning model based on a change in the monitored network system, an addition of a new monitored network system, or a combination thereof. 18. 
An apparatus to present a plurality of network alerts based on a predictive classification, comprising: at least one processor; and at least one memory including computer program code for one or more programs, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following, receive the plurality of alerts, wherein each alert of the plurality of alerts indicates an alarm condition occurring in a monitored network system, and wherein said each alert is a data record comprising one or more data fields describing the alarm condition; designate the one or more data fields as one or more classification features of a predictive machine learning model configured to classify said each alert as an actionable alert or a non-actionable alert; calculate a respective probability that said each alert is actionable or non-actionable using the predictive machine learning model; and present the plurality of alerts in a network monitoring user interface based on the calculated respective probability of said each alert. 19. The apparatus of claim 18, wherein the apparatus is further caused to: append said each alert with an additional data field storing the respective probability. 20. The apparatus of claim 18, wherein the apparatus is further caused to: determine whether to present said each alert in the network monitoring user interface, a sort order for presenting said each alert, or a combination thereof based on the respective probability.

| Please help me write a proper abstract based on the patent claims. | An approach is provided for providing predictive classification of actionable network alerts. The approach includes receiving the plurality of alerts. Each alert of the plurality of alerts indicates an alarm condition occurring at a monitored network system, and is a data record comprising one or more data fields describing the alarm condition. The approach also includes classifying said each alert using a predictive machine learning model. The predictive machine learning model is trained to classify said each alert as actionable or non-actionable using the one or more data fields of said each alert as one or more respective classification features, and to calculate a respective probability that said each alert is actionable or non-actionable. The approach further includes presenting the plurality of alerts in a network monitoring user interface based on the respective probability of said each alert. |
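Editorial note on the alert-classification row above: claims 5-8 and 15 outline a concrete feature pipeline (binarizing categorical fields, hashing extracted text keywords, a regression-style classifier that outputs a probability), and claims 1-2 and 18-20 present the alerts by that probability. The scikit-learn sketch below is one plausible reading of that pipeline, not the patented implementation; the field names (`source`, `severity`, `message`), the choice of logistic regression as the "regression analysis", and all hyperparameters are assumptions.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Historical alerts: label=1 means the alert was associated with an incident (actionable).
history = pd.DataFrame({
    "source":   ["router", "db", "router", "app"],
    "severity": ["high", "low", "high", "medium"],
    "message":  ["link down", "slow query", "link flapping", "heartbeat missed"],
    "label":    [1, 0, 1, 0],
})

features = ColumnTransformer([
    ("categorical", OneHotEncoder(handle_unknown="ignore"), ["source", "severity"]),  # binarized fields
    ("text", HashingVectorizer(n_features=2 ** 8), "message"),                        # hashed keyword vector
])
model = Pipeline([("features", features),
                  ("clf", LogisticRegression(max_iter=1000))])
model.fit(history[["source", "severity", "message"]], history["label"])

# New alerts: append the actionable probability as an extra field and sort for presentation.
alerts = pd.DataFrame({
    "source":   ["router", "db"],
    "severity": ["high", "low"],
    "message":  ["link down on port 7", "nightly backup finished"],
})
alerts["p_actionable"] = model.predict_proba(alerts)[:, 1]
print(alerts.sort_values("p_actionable", ascending=False))
```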
1. A method for calculating a relation indicator for a relation between entities, the method comprising: providing a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of a number, N, of entities; providing a rules tensor M of rules tensor components, Mijn, describing a prediction of an n-th rule; calculating a weighting tensor of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; calculating a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; calculating a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent properties, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: minA,R,W ‖Xijk - Xijk′‖F2 + λA‖A‖F2 + λR‖R‖F2 + λW‖W‖F2, with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by: Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; and calculating a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 2. The method of claim 1, further comprising: generating at least one control signal, based on the predicted value of the relation indicator, for controlling one or more of an actuator, a sensor, a controller, a field device, or a display. 3. The method of claim 2, wherein a visual signal, an acoustic signal, or the visual and the acoustic signal are created based on the control signal. 4. The method of claim 1, further comprising: expanding the measurement tensor with additional measurement tensor components Xi(N+1)k for i=1 . . . N, X(N+1)jk for j=1 . . . N, and X(N+1)(N+1)k, comprising measurement data as relation indicators between the (N+1)-th additional entity and the entities; and expanding the rules tensor with additional rules tensor components, Mi(N+1)n for i=1 . . . N, M(N+1)jn for j=1 . . . N and M(N+1)(N+1)n, wherein a value of a relation indicator to be predicted is set to a predetermined value. 5. The method of claim 1, further comprising: monitoring a relation between at least two of the entities; and setting a value of at least one relation indicator based on the monitored relation between the at least two of the entities. 6. The method of claim 1, wherein at least some of the measurement data are provided by at least one sensor, are read out from at least one database, or are both provided by the at least one sensor and read out from the at least one database. 7. The method of claim 1, wherein the calculating of the result tensor comprises using an alternating least-squares method, wherein the transformation tensor, the relationship tensor, and the weighting tensor are updated alternatingly until convergence. 8. A computer program for calculating a relation indicator for a relation between entities, comprising program instructions configured to, when executed: provide a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . 
N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of a number, N, of entities; provide a rules tensor M of rules tensor components, Mijn, describing a prediction of an n-th rule; calculate a weighting tensor W of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; calculate a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; calculate a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent properties, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: minA,R,W ‖Xijk - Xijk′‖F2 + λA‖A‖F2 + λR‖R‖F2 + λW‖W‖F2, with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; and calculate a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 9. A computer-readable, non-transitory storage medium comprising stored program instructions configured to, when executed: provide a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of a number, N, of entities; provide a rules tensor M of rules tensor components, Mijn, describing a prediction of an n-th rule; calculate a weighting tensor W of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; calculate a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; calculate a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent properties, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: minA,R,W ‖Xijk - Xijk′‖F2 + λA‖A‖F2 + λR‖R‖F2 + λW‖W‖F2, with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by: Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; and calculate a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 10. An apparatus for calculating a relation indicator for a relation between entities, comprising: a measurement tensor module configured to provide a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . 
N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of a number, N, of entities; a rules tensor module M configured to provide a rules tensor of rules tensor components, Mijn, describing a prediction of an n-th rule; a weighting tensor module configured to calculate a weighting tensor W of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; a relationship tensor module configured to calculate a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; a transformation tensor module configured to calculate a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent variables, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: minA,R,W ‖Xijk - Xijk′‖F2 + λA‖A‖F2 + λR‖R‖F2 + λW‖W‖F2, with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by: Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; a result tensor calculation module configured to calculate a result tensor X′ of result tensor components, Xijk′; and a relation indicator calculation module configured to calculate a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 11. The apparatus of claim 10, further comprising: a control signal generation module configured to generate at least one control signal, based on the predicted value of the relation indicator, for controlling one or more of an actuator, a sensor, a controller, a field device, or a display. 12. The apparatus of claim 11, further comprising: an output module configured to create a visual signal, an acoustic signal, or the visual signal and the acoustic signal based on the control signal. 13. The apparatus of claim 10, further comprising: a measurement tensor expansion module configured to: (1) expand the measurement tensor with additional measurement tensor components Xi(N+1)k for i=1 . . . N, X(N+1)jk for j=1 . . . N and X(N+1)(N+1)k, comprising measurement data as relation indicators between the (N+1)-th additional entity and the entities, and (2) set a value of a relation indicator to be predicted to a predetermined value; and a rules tensor expansion module configured to expand the rules tensor with additional rules tensor components, Mi(N+1)n for i=1 . . . N, M(N+1)jn for j=1 . . . N and M(N+1)(N+1)n. 14. The apparatus of claim 10, further comprising: a monitoring module configured to monitor a relation between at least two of the entities; and a setting module configured to set a value of at least one relation indicator based on the monitored relation between the at least two of the entities. 15. The apparatus of claim 10, further comprising: a measurement module configured to provide at least some of the measurement data to the measurement tensor module. 16. The apparatus of claim 10, further comprising: at least one database; and a readout module configured to read out at least some of the measurement data from the at least one database. 17. 
The method of claim 10, wherein the result tensor calculation module is configured to use an alternating least-squares method, wherein the transformation tensor, the relationship tensor, and the weighting tensor are updated alternatingly until convergence. 18. A system for calculating a relation indicator for a relation between entities, comprising: a number, N, of entities; a measurement tensor module configured to provide a measurement tensor X of measurement tensor components, Xijk, with i, j=1 . . . N, comprising measurement data as relation indicators, wherein the relation indicator Xijk indicates a k-th relation between an i-th and a j-th of the number of entities; a rules tensor module configured to provide a rules tensor M of rules tensor components, Mijn, describing a prediction of an n-th rule; a weighting tensor module configured to calculate a weighting tensor W of weighting tensor components, Wnk, indicating relative weights of the rules for the k-th relation between the entities; a relationship tensor module configured to calculate a relationship tensor R of relationship tensor components, Rabk, with a, b=1 . . . r, indicating relations between a set of a number, r, of properties of the entities; a transformation tensor module configured to calculate a transformation tensor A of transformation tensor components, Aia, describing the i-th entity via r latent properties, wherein the transformation tensor A, the weighting tensor W, and the relationship tensor R are calculated as minimum solutions to the following equation: minA,R,W ‖Xijk - Xijk′‖F2 + λA‖A‖F2 + λR‖R‖F2 + λW‖W‖F2, with λA, λR, and λW as Lagrange parameters and with result tensor components Xijk′ of a result tensor X′ given by: Xijk′=Σa,b,n(AiaRabkAbjT+MijnWnk), where AT is the transposed tensor corresponding to the transformation tensor A; a result tensor calculation module configured to calculate a result tensor X′ of result tensor components, Xijk′, and a relation indicator calculation module configured to calculate a value of the relation indicator for the k-th relation between the i-th and the j-th entity based on the result tensor component Xijk′. 19. The system of claim 18, wherein at least one of the entities is a sensor, an actuator, a field device, a controller, a display, or a section of a conveyer belt assembly. 20. The system of claim 18, further comprising: a control signal generation module configured to generate at least one control signal, based on the predicted value of the relation indicator, for controlling at least one of the entities of the system.

| Please help me write a proper abstract based on the patent claims. | A method is provided for calculating a relation indicator for a relation between entities based on an optimization procedure. The method combines the strong relational learning ability and the good scalability of the RESCAL model with the linear regression model, which may deal with observed patterns to model known relations. The method may be used to determine relations between objects, for instance entries in a database, such as a shopping platform, medical treatments, production processes, or in the context of the Internet of Things, in a fast and precise manner. |
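Editorial note on the optimization recited in claims 1, 8-10, and 18 above: the result tensor is the sum of a RESCAL-style bilinear term AiaRabkAbjT and a rule term MijnWnk, and A, R, W minimize the Frobenius-norm objective with the λ regularizers; claims 7 and 17 state that the minimization is carried out by alternating least squares. The NumPy sketch below only evaluates the reconstruction and the objective for given factors (the alternating updates themselves are not spelled out in the claims and are omitted); array shapes and variable names are assumptions, and the code is not part of the dataset row.

```python
import numpy as np

def reconstruct(A, R, W, M):
    """X'_ijk = sum_{a,b} A_ia R_abk A_jb + sum_n M_ijn W_nk."""
    bilinear = np.einsum("ia,abk,jb->ijk", A, R, A)   # RESCAL-style bilinear term
    rules = np.einsum("ijn,nk->ijk", M, W)            # weighted rule predictions
    return bilinear + rules

def objective(X, A, R, W, M, lam_A=0.1, lam_R=0.1, lam_W=0.1):
    """Frobenius reconstruction error plus the three regularization terms."""
    err = X - reconstruct(A, R, W, M)
    return (np.sum(err ** 2)
            + lam_A * np.sum(A ** 2)
            + lam_R * np.sum(R ** 2)
            + lam_W * np.sum(W ** 2))

# Toy shapes: N entities, K relations, r latent properties, n_rules rules.
N, K, r, n_rules = 5, 2, 3, 4
rng = np.random.default_rng(0)
X = rng.random((N, N, K))          # measurement tensor (observed relation indicators)
M = rng.random((N, N, n_rules))    # rule predictions
A = rng.random((N, r))             # entity-to-latent-property transformation
R = rng.random((r, r, K))          # latent relationship tensor
W = rng.random((n_rules, K))       # rule weights per relation
print(objective(X, A, R, W, M))    # value an alternating-least-squares loop would drive down
```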
1. A user borne system comprising: a brain activity sensing subsystem configured to collect data corresponding to brain activity of a user; a measurement computer subsystem configured to quantify perceptions of the user; a user sensing subsystem configured to collect data corresponding to user events; a surrounding environment sensing subsystem configured to collect data corresponding to the user's surrounding environment; a recording subsystem configured to record said data; a user mobile electronic device in communication with said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, and recording subsystem, said user mobile electronic device including an interactive graphic user interface and being configured to: operate as a host computer processing subsystem for command, control, and processing of signals to and from said brain activity sensing subsystem, user sensing subsystem, surrounding environment sensing subsystem, and correlation subsystem; command said brain activity sensing subsystem to transmit brain activity and pattern data to said correlation subsystem; and command said user sensing subsystem and surrounding environment sensing subsystem to transmit processed sensor data to said correlation subsystem; a correlation subsystem configured to: create relationships between said data corresponding to said brain activity of said user and said data corresponding to said user events and surrounding environment; and receive and perform correlation processing operations to determine an extent of neural relationships between data received from said user mobile electronic device and said brain activity sensing subsystem, user sensing subsystem, and surrounding environment sensing subsystem to derive neural correlates of consciousness of conscious precepts of the user; a non-transitory computer readable medium configured to store data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem for performing queries on real-time and near real-time data received from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem for determining whether to keep or disregard said data based on pre-established rule-sets and user interactive command and control from said user mobile electronic device; and a computer processing device configured to process and communicate at least a portion of said data logged and derived by said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem into at least one of a recipient biological, mechanical, or bio-mechanical system. 2. The user borne system according to claim 1, wherein said user borne system and recipient system include at least one of intrusion detection software, hardware, or firmware application and information security software, hardware, or firmware application that provides at least one of firewall protection, virus protection, privacy protection, or user authentication capabilities. 3. 
The user borne system according to claim 1, wherein said user borne system and recipient biological, mechanical, or bio-mechanical system further comprise at least one remote sensing technology that measures distance by illuminating a target with a laser and analyzing the reflected light for at least one of remote sensing, contour mapping, or aiding in determining distance to the target in a surrounding environment. 4. The user borne system according to claim 1, wherein said brain activity sensing subsystem is configured to record signatures and derive measurements on at least one of a whole, region, neural, or electro-chemical interaction in the synaptic cleft of the brain at the molecular level in order to identify neural correlates of consciousness such that various components of perception across the brain that form the totality of a conscious precept identify the minimal set of components of neural material, chemical, electrical, and associated activity that define a thought or memory of the conscious precept in the users mind or the surrounding environment, said correlation subsystem being configured to correlate and filter said components of perception to create a relational database for input into an artificial neural network that at least one mimics, supplements, or enhances the brain of said user or said recipient biological, mechanical, or bio-mechanical system. 5. The user borne system according to claim 1, wherein said brain activity sensing subsystem includes at least one of a neuro-stimulation fiber optic light emitter or neuro-stimulation micro-electrode for at least one of diagnostic purposes, performance enhancement, or detecting performance degradation of said user's brain functions. 6. The user borne system according to claim 1, wherein said user borne system is designed to blend into the user's natural appearance by incorporating at least one of a prosthetic device with human skin color and shape, grafted skin, synthetic skin, a display, a fashion accessory, body art, hair piece, skull cap, jewelry, cap, hat, material covering, or clothing. 7. The user borne system according to claim 1, wherein said recipient biological, mechanical, or bio-mechanical system looks substantially like a human. 8. The user borne system according to claim 1, wherein said recipient biological, mechanical, or bio-mechanical is configured to act substantially like a human. 9. The user borne system according to claim 1, wherein said surrounding environment sensing subsystem data comprises video recording play back capability for playing back video derived from brain activity signatures for comparison of real world recorded imagery versus brain signal motion imagery of the user. 10. The user borne system according to claim 1, wherein at least a portion of said user borne system is configured to be supported by an exoskeleton worn by said user. 11. The user borne system according to claim 1, wherein said user borne system includes an e-commerce payment system. 12. The user borne system according to claim 1, wherein said user borne system further comprises a spherical field-of-view camera sensor and camera sensor supporting structure configured to automatically extend in front of the user when a phone call is initiated for face-to-face video teleconferencing and retract when the phone call is completed. 13. 
The user borne system according to claim 1, further comprising a video teleconferencing system configured to overlay video representations of teleconference users over geographical information or imagery at each of said teleconference user's geographical or spatial location and allow said teleconference users to interact with the geographical information or imagery. 14. The user borne system according to claim 1, wherein said user borne system is configured to operate on real-time and near real-time data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem to determine a threat to said user or said surrounding environment. 15. The user borne system according to claim 1, wherein said user borne system includes a cognitive memory system comprising a neuromorphic computing system including at least one of an analog and/or digital circuit, Application Specific Integrated Circuit (ASIC), microprocessor, or other logic in hardware, software, or firmware with auto associative artificial neural networks that at least receive and process some portion of said data logged and derived by said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem to organize, remember, and update said data communicated to at least one of said user borne system and recipient biological, mechanical, or bio-mechanical systems such that said user borne system and/or recipient biological, mechanical, or bio-mechanical system learn through experience. 16. The user borne system according to claim 1, wherein said user mobile electronic device is a head mounted device housing said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem. 17. The user borne system according to claim 1, wherein said user borne system includes a cognitive model realized as a modern dynamic system with behavioral dynamics coded into a neural network system that achieves embodied cognition in at least one of said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem, information from said at least one of brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and/or correlation subsystem informing the user's behavior and the user's actions through back propagation of said neural network in an iterative repetitive manner such that the user affects the environment and the environment affects the user in perception-action cycles. 18. The user borne system according to claim 1, wherein said user borne system includes an artificial neural network configured to process said logged or derived data. 19. The user borne system according to claim 1, wherein said user borne system includes an auto-associative neural network. 20. The user borne system according to claim 1, wherein said user borne system includes a neuromorphic system configured to process said logged or derived data. 21. The user borne system according to claim 1, wherein said user borne system includes a self-learning neural network. 22. 
The user borne system according to claim 1, wherein said user borne system further comprises a portable magnetoencephalography (MEG) brain activity system configured to derive some of said brain activity data. 23. The user borne system according to claim 1, wherein said user borne system is configured to perform artificial neural network backward propagation algorithms and processing for achieving iterative learning as said user borne system receives new data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, correlation subsystem, or recipient biological, mechanical, or bio-mechanical system. 24. The user borne system according to claim 1, wherein said user borne system includes an auto-association neural network configured to perform repetitive, iterative, supervised, and unsupervised machine learning to yield an output action potential into said user borne system and/or recipient biological, mechanical, or bio-mechanical system. 25. The user borne system according to claim 1, wherein said user borne system includes a cognitive model realized as a dynamic system with behavioral dynamics coded into a neural network system that achieves embodied cognition in at least one of said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, correlation subsystem, wherein said information embodies the user's behavior and actions through back propagation of the neural network in an iterative and repetitive manner such that the user affects the environment and the environment affects the user in perception action cycles. 26. The user borne system according to claim 1, wherein said brain activity data is operated upon when a thought generated by said user is translated by said system into at least one of text, audio, imagery, or machine language that is communicated wirelessly to said user or recipient biological, mechanical, or bio-mechanical system. 27. The user borne system according to claim 1, wherein said user borne system or said recipient is configured to operate on data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, correlation subsystem, or user mobile electronic device to operate at least one actuator to assist said user or recipient system in responding to an event. 28. A system according to claim 1, wherein at least some portion of said data logged and derived by said system is communicated to and operated upon by a biological, mechanical, or bio-mechanical system that performs diagnostic medicine or life support. 29. A system according to claim 1, wherein said user portable system or said recipient system includes an integrated wireless communication system for command and control of said user or said recipient system from a remote location. 30. The system according to claim 1, wherein said user portable system includes a graphical user interface software or firmware application that depicts a user's body from which the user may interactively select which of said user brain activity sensing subsystem, user periphery sensing subsystem, and surrounding environment sensing subsystems and associated sensors said user wants to control or turn on and off. 31. 
A user borne system comprising: a robotic system; a computer system for operating said robotic system, said computer system including a neural network, said robotic system being configured to train at least a portion of said neural network and use output from said neural network to learn, negotiate, and survive in an environment; and a biological or bio-mechanical life-logging database installed on said computer system and operated upon on a non-transitory computer readable medium, said database being logged by sensors borne by a user to record perceptions of said user and surrounding environment perceptions. 32. A biological or bio-mechanical system user borne system comprising: a brain activity sensing subsystem configured to collect data corresponding to brain activity of a biological or bio-mechanical system user; a measurement computer subsystem configured to quantify perceptions of the biological or bio-mechanical system user; a user sensing subsystem configured to collect data corresponding to biological or bio-mechanical system user events; a surrounding environment sensing subsystem configured to collect data corresponding to the biological or bio-mechanical system user's surrounding environment; a recording subsystem configured to record said data; a user mobile electronic device in communication with said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, and recording subsystem, said user mobile electronic device including an interactive graphic user interface and being configured to: operate as a host computer processing subsystem for command, control, and processing of signals to and from said brain activity sensing subsystem, user sensing subsystem, surrounding environment sensing subsystem, and correlation subsystem; command said brain activity sensing subsystem to transmit brain activity and pattern data to said correlation subsystem; and command said user sensing subsystem and surrounding environment sensing subsystem to transmit processed sensor data to said correlation subsystem; a correlation subsystem configured to: create relationships between said data corresponding to said brain activity of said biological or bio-mechanical system user and said data corresponding to said biological or bio-mechanical system user events and surrounding environment; and receive and perform correlation processing operations to determine an extent of neural relationships between data received from said user mobile electronic device and said brain activity sensing subsystem, user sensing subsystem, and surrounding environment sensing subsystem to derive neural correlates of consciousness of conscious precepts of the biological or bio-mechanical system user; a non-transitory computer readable medium configured to store data from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem for performing queries on real-time and near real-time data received from said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem for determining whether to keep or disregard said data based on pre-established rule-sets and biological or bio-mechanical system user interactive command and control from said user mobile electronic device; and a computer processing device 
configured to process and communicate at least a portion of said data logged and derived by said brain activity sensing subsystem, measurement computer subsystem, user sensing subsystem, surrounding environment sensing subsystem, recording subsystem, and correlation subsystem into said biological or bio-mechanical system user. | Please help me write a proper abstract based on the patent claims. | A mobile user borne brain activity data and surrounding environment data correlation system comprising a brain activity sensing subsystem, a recording subsystem, a measurement computer subsystem, a user sensing subsystem, a surrounding environment sensing subsystem, a correlation subsystem, a user portable electronic device, a non-transitory computer readable medium, and a computer processing device. The mobile user borne system collects and records brain activity data and surrounding environment data and statistically correlates and processes the data for communicating the data into a recipient biological, mechanical, or bio-mechanical system. |
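The claims in the row above reference an auto-association neural network trained through backward propagation on logged sensor data. Purely as a hedged illustration of that general idea (not the claimed user-borne system), the sketch below trains a tiny NumPy autoencoder to reconstruct synthetic feature vectors; the data, dimensions, learning rate, and epoch count are all invented for the example.

```python
# Minimal auto-associative (autoencoder) network trained with backpropagation.
# Purely illustrative; data, dimensions, and hyperparameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank "sensor feature vectors" standing in for logged perceptions.
Z = rng.normal(size=(256, 4))
X = Z @ rng.normal(size=(4, 16))

n_in, n_hidden = X.shape[1], 4
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))

def forward(X):
    H = np.tanh(X @ W1)          # encode
    X_hat = H @ W2               # decode (reconstruct the input)
    return H, X_hat

lr = 0.01
for epoch in range(500):
    H, X_hat = forward(X)
    err = X_hat - X                        # reconstruction error
    # Backward propagation of the mean-squared reconstruction error.
    dW2 = H.T @ err / len(X)
    dH = err @ W2.T * (1.0 - H ** 2)       # tanh derivative
    dW1 = X.T @ dH / len(X)
    W1 -= lr * dW1
    W2 -= lr * dW2

_, X_hat = forward(X)
print("reconstruction MSE:", float(np.mean((X_hat - X) ** 2)))
```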
1. A method for managing and automating customization of a device based on learning about a user's behavior, the method comprising: collecting data on the user's activities; learning about the user's behavior by analyzing the data on the user's activities; generating an automation setting of the device based on the user's behavior; and presenting the automation setting of the device to the user for customizing the device. 2. The method of claim 1, wherein collecting data on the user's activities comprises: collecting data of the user's activities based on the user's usual behavior of using options on the device or turning on/off options on the device. 3. The method of claim 2, wherein collecting data on the user's activities further comprises: collecting data of the user's activities that includes one or more of the following variables: time, location, and a device state. 4. The method of claim 3, wherein collecting data on the user's activities comprises: collecting data of the user's activities for a period of time. 5. The method of claim 4, wherein the period of time is associated with a given number of repetitive operations. 6. The method of claim 4, wherein the period of time is for a given number of days. 7. The method of claim 2, wherein learning about the user's behavior by analyzing the data on the user's activities comprises: learning about the user's behavior by analyzing the data on the user's activities that are routine. 8. The method of claim 7, wherein the automation setting of the device comprises: a workflow of operations that is automated based on the user's activities. 9. The method of claim 1 further comprising: implementing the automation setting of the device after the user accepts the automation setting. 10. The method of claim 1 further comprising: implementing the automation setting of the device after the user fine tunes the automation setting. 11. A device that manages and automates customization based on learning about a user's behavior, the device comprising: a processor; and a memory storing computer executable instructions that when executed by the processor causes the processor to: collect data on the user's activities; learn about the user's behavior by analyzing the data on the user's activities; generate an automation setting of the device based on the user's behavior; and present the automation setting of the device to the user for customizing the device. 12. The device of claim 11, wherein the data on the user's activities is collected for a period of time. 13. The device of claim 11, wherein the automation setting of the device comprises: a workflow of operations that is automated based on the user's activities. 14. The device of claim 11, wherein the memory further stores computer executable instructions that when executed by the processor causes the processor to: implement the automation setting of the device after the user accepts the automation setting. 15. The device of claim 11, wherein the memory further stores computer executable instructions that when executed by the processor causes the processor to: implement the automation setting of the device after the user fine tunes the automation setting. 16. 
A computer program product encoded in a non-transitory computer readable medium for managing and automating customization of a device based on learning about a user's behavior, the computer program product comprising: computer code for collecting data on the user's activities; computer code for learning about the user's behavior by analyzing the data on the user's activities; computer code for generating an automation setting of the device based on the user's behavior; and computer code for presenting the automation setting of the device to the user for customizing the device. 17. The computer program product of claim 16, wherein the data on the user's activities is collected for a period of time. 18. The computer program product of claim 16, wherein the automation setting of the device comprises: a workflow of operations that is automated based on the user's activities. 19. The computer program product of claim 16, wherein the computer program product further comprises: computer code for implementing the automation setting of the device after the user accepts the automation setting. 20. The computer program product of claim 16, wherein the computer program product further comprises: computer code for implementing the automation setting of the device after the user fine tunes the automation setting. | Please help me write a proper abstract based on the patent claims. | A method for managing and automating user customization of a device based on observed user behavior is disclosed. First, the method collects data on the user's activities on a device for a period of time. Second, the method learns about the user's behavior for routine repetitive operations by analyzing the user's activities data. Third, the method generates automation settings of the device based on the user's behavior for routine repetitive and predictive operations, and then presents the automation settings of the device to the user for customization of the device. These automation settings help to make the device operate more efficiently and more conveniently for the user, because they help perform the user's own routine repetitive operations. |
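The claims above describe learning routine behavior from activity logs (time, location, device state) and proposing an automation setting that the user may accept or fine-tune. A minimal sketch of that flow, assuming a made-up event format and repetition threshold rather than anything specified in the claims:

```python
# Illustrative sketch of learning a routine from activity logs and proposing
# an automation setting for user approval. Event fields and the threshold
# are hypothetical, not taken from the claims.
from collections import Counter

events = [
    {"action": "wifi_off", "hour": 23, "location": "home"},
    {"action": "wifi_off", "hour": 23, "location": "home"},
    {"action": "wifi_off", "hour": 23, "location": "home"},
    {"action": "bluetooth_on", "hour": 8, "location": "car"},
]

# Count identical (action, time, location) combinations over the logging period.
counts = Counter((e["action"], e["hour"], e["location"]) for e in events)

REPETITION_THRESHOLD = 3   # stand-in for "a given number of repetitive operations"
proposals = [
    {"action": a, "hour": h, "location": loc}
    for (a, h, loc), n in counts.items() if n >= REPETITION_THRESHOLD
]

# In the claimed flow each proposal would be presented to the user, who can
# accept or fine-tune it before the automation setting is implemented.
for p in proposals:
    print("proposed automation:", p)
```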
1. A prediction device comprising: a data acquisition unit configured to acquire present time data; a data generation unit configured to generate time-series data from the data acquired by the data acquisition unit at plurality of times; a determination unit configured to determine whether the generated time-series data satisfies a predetermined condition; and a final prediction unit configured to predict future data, based on past time data without using the present time data, when the predetermined condition is determined to be satisfied, and to predict the future data, based on the present time data and the past time data, when the predetermined condition is determined not to be satisfied, wherein past time data is previously acquired present time data which has been acquired prior to the present time data. 2. The prediction device according to claim 1, wherein, when the predetermined condition is determined not to be satisfied by the determination unit, the final prediction unit uses, as past time data, previously acquired present time data that falls within a predetermined range. 3. The prediction device according to claim 1, wherein the determination unit includes: a present time prediction unit configured to calculate a prediction result in a present time from the present time data and the past time data; and a comparison unit configured to determine whether the predetermined condition is satisfied, based on a first difference between the calculated prediction result in the present time and the prediction result in a past time calculated by the present time prediction unit in a past, and a second difference between the prediction results in past times calculated by the present time prediction unit in the past. 4. The prediction device according to claim 3, wherein the comparison unit determines that the predetermined condition is satisfied when the first difference is larger than the second difference. 5. The prediction device according to claim 3, wherein the comparison unit determines whether the predetermined condition is satisfied based on the first difference, and a statistic of a plurality of the second differences. 6. The prediction device according to claim 5, wherein the statistic is any of a maximum value, an average value, a median, and a most frequent value. 7. The prediction device according to claim 3, wherein the comparison unit determines whether the time-series data satisfies the predetermined condition, based on the first difference, and a transition of a plurality of the second differences. 8. The prediction device according to claim 1, wherein the determination unit includes: a detection unit configured to detect a transition of data similar to a transition of a part of the time-series data from past time-series data; and a transition determination unit configured to determine whether the predetermined condition is satisfied based on the detected similar transition of data. 9. The prediction device according to claim 1, wherein the determination unit includes: a detection unit configured to detect a transition of data similar to a transition of a part of the time-series data from time-series data of another machine number; and a transition determination unit configured to determine whether the predetermined condition is satisfied based on the detected similar transition of data. 10. 
The prediction device according to claim 8, wherein the detection unit detects a plurality of the similar transitions of data, and wherein the transition determination unit determines whether the predetermined condition is satisfied based on the plurality of similar transitions of data. 11. The prediction device according to claim 8, wherein the transition determination unit determines whether the predetermined condition is satisfied based on the plurality of weighted similar transitions of data. 12. The prediction device according to claim 1, wherein the future data is data about a degree of consumption, a degree of deterioration, or a possibility of occurrence of failure, of a component that configures a product. 13. A prediction method comprising: acquiring present time data; generating time-series data from the data acquired at a plurality of times; determining whether the generated time-series data satisfies a predetermined condition; and predicting future data, based on past time data without using the present time data, when the predetermined condition is determined to be satisfied, and predicting the future data, based on the present time data and the past time data, when the predetermined condition is determined not to be satisfied, wherein past time data is previously acquired present time data which has been acquired prior to the present time data. 14. A non-transitory computer-readable recording medium that stores a program for causing a computer to function as the units of a prediction device comprising: a data acquisition unit configured to acquire present time data; a data generation unit configured to generate time-series data from the data acquired by the data acquisition unit at plurality of times; a determination unit configured to determine whether the generated time-series data satisfies a predetermined condition; and a final prediction unit configured to predict future data, based on past time data without using the present time data, when the predetermined condition is determined to be satisfied, and to predict the future data, based on the present time data and the past time data, when the predetermined condition is determined not to be satisfied, wherein past time data is previously acquired present time data which has been acquired prior to the present time data. | Please help me write a proper abstract based on the patent claims. | To precisely predict future data even when the number of pieces of time-series data is small, in predicting the future data, using the time-series data. When the future data is predicted using the time-series data, whether present time data is used is determined based on prediction variation or a data transition, and then the prediction of the future data is performed. |
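Claims 3 through 7 above decide whether to trust the newest observation by comparing a first difference (how much the present-time prediction moved relative to the previous one) against second differences (how much past predictions moved among themselves). A rough sketch of that decision rule, assuming a naive linear-extrapolation forecaster and the maximum as the statistic, neither of which is mandated by the claims:

```python
# Sketch of the claimed decision rule: if adding the present observation
# shifts the prediction more than past predictions ever shifted, forecast
# from past data only. Forecast method and window size are assumptions.
import numpy as np

def forecast(values):
    """Naive one-step-ahead linear extrapolation."""
    values = np.asarray(values, dtype=float)
    if len(values) < 2:
        return float(values[-1])
    return float(values[-1] + (values[-1] - values[-2]))

def predict_future(history, present):
    # Predictions the device would have made at each past time step.
    past_preds = [forecast(history[: i + 2]) for i in range(len(history) - 1)]
    pred_with_present = forecast(history + [present])

    first_diff = abs(pred_with_present - past_preds[-1])
    second_diffs = [abs(b - a) for a, b in zip(past_preds, past_preds[1:])]

    if second_diffs and first_diff > max(second_diffs):
        # Present value looks inconsistent: predict from past data only.
        return forecast(history)
    return pred_with_present

history = [1.0, 1.1, 1.2, 1.3, 1.4]
print(predict_future(history, present=9.0))   # spike -> present data ignored
print(predict_future(history, present=1.5))   # consistent -> present data used
```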
1. A non-transitory computer-readable recording medium having stored therein a detection program that causes a computer to execute a process including: performing a first conversion processing to convert a value indicating each event that is included in history log into an identification value corresponding to the value, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group, values that belong to a group indicated in the conversion information into an identical identification value that corresponds to the group; constructing information with occurrence probabilities by connecting identification values that are obtained by conversion by the first conversion processing in order of occurrence of the event sequentially from a root, and by assigning an occurrence probability of an event that corresponds to the identification value per identification value; performing second conversion processing to convert a value indicating each event included in event data that is input according to an event has occurred into an identification value corresponding to the value, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information; and detecting an anomaly based on a result of comparison between the constructed information with occurrence probabilities and the identification value that is obtained by conversion by the second conversion processing. 2. The non-transitory computer-readable recording medium according to claim 1, wherein the conversion information indicates a range of values as the group, and the first and the second conversion processing converts values within the range indicated in the conversion information into an identical identification value that corresponds to the range. 3. The non-transitory computer-readable recording medium according to claim 2, wherein the process further including: calculating a statistical distribution of values indicating respective events that are included in the history log, and of creating conversion information in which a range of the values and an identification value corresponding to the range is defined, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 4. The non-transitory computer-readable recording medium according to claim 1, wherein the conversion information indicates order of array of values as the group, and the first and the second conversion processing converts values arranged in the order of array indicated in the conversion information into an identical identification value corresponding to the order of array. 5. The non-transitory computer-readable recording medium according to claim 4, wherein the process further including: calculating an appearance frequency according to order of array of values that indicate respective events included in the history log, and of creating conversion information in which the order of array having the appearance frequency equal to or higher than a predetermined value and an identification value that corresponds to the order of array are defined, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 6. 
A detection method comprising: performing a first conversion processing to convert a value indicating each event that is included in history log into an identification value corresponding to the value, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group, values that belong to a group indicated in the conversion information into an identical identification value that corresponds to the group by a processor; constructing information with occurrence probabilities by connecting identification values that are obtained by conversion by the first conversion processing in order of occurrence of the event sequentially from a root, and by assigning an occurrence probability of an event that corresponds to the identification value per identification value by the processor; performing second conversion processing to convert a value indicating each event included in event data that is input according to an event has occurred into an identification value corresponding to the value, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information by the processor; and detecting an anomaly based on a result of comparison between the constructed information with occurrence probabilities and the identification value that is obtained by conversion by the second conversion processing by the processor. 7. The detection method according to claim 6, wherein the conversion information indicates a range of values as the group, and the first and the second conversion processing converts values within the range indicated in the conversion information into an identical identification value that corresponds to the range. 8. The detection method according to claim 7, further comprising: calculating a statistical distribution of values indicating respective events that are included in the history log, and of creating conversion information in which a range of the values and an identification value corresponding to the range is defined, by the processor, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 9. The detection method according to claim 6, wherein the conversion information indicates order of array of values as the group, and the first and the second conversion processing converts values arranged in the order of array indicated in the conversion information into an identical identification value corresponding to the order of array. 10. The detection method according to claim 9, further comprising: calculating an appearance frequency according to order of array of values that indicate respective events included in the history log, and of creating conversion information in which the order of array having the appearance frequency equal to or higher than a predetermined value and an identification value that corresponds to the order of array are defined, by the processor, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 11. 
A detection apparatus comprising a processor that executes a process comprising: performing a first conversion processing to convert a value indicating each event that is included in history log into an identification value corresponding to the value, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group, values that belong to a group indicated in the conversion information into an identical identification value that corresponds to the group; constructing information with occurrence probabilities by connecting identification values that are obtained by conversion by the first conversion processing in order of occurrence of the event sequentially from a root, and by assigning an occurrence probability of an event that corresponds to the identification value per identification value; performing second conversion processing to convert a value indicating each event included in event data that is input according to an event has occurred into an identification value corresponding to the value, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information; and detecting an anomaly based on a result of comparison between the constructed information with occurrence probabilities and the identification value that is obtained by conversion by the second conversion processing. 12. The detection apparatus according to claim 11, wherein the conversion information indicates a range of values as the group, and the first and the second conversion processing converts values within the range indicated in the conversion information into an identical identification value that corresponds to the range. 13. The detection apparatus according to claim 12, wherein the process further comprising: calculating a statistical distribution of values indicating respective events that are included in the history log, and of creating conversion information in which a range of the values and an identification value corresponding to the range is defined, by the processor, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. 14. The detection apparatus according to claim 11, wherein the conversion information indicates order of array of values as the group, and the first and the second conversion processing converts values arranged in the order of array indicated in the conversion information into an identical identification value corresponding to the order of array. 15. The detection apparatus according to claim 14, wherein the process further comprising: calculating an appearance frequency according to order of array of values that indicate respective events included in the history log, and of creating conversion information in which the order of array having the appearance frequency equal to or higher than a predetermined value and an identification value that corresponds to the order of array are defined, by the processor, wherein the first and the second conversion processing performs conversion processing based on the created conversion information. | Please help me write a proper abstract based on the patent claims. 
| A non-transitory computer-readable recording medium stores a program that causes a computer to execute a process including: performing a first conversion processing to convert a value indicating each event, and to convert, based on conversion information that indicates a group of the value and an identification value that corresponds to values belonging to the group; constructing information with occurrence probabilities by connecting identification values; performing second conversion processing to convert a value indicating each event included in event data, and to convert values that belong to a group indicated in the conversion information into an identical identification value corresponding to the group based on the conversion information; and detecting an anomaly based on a result of comparison between the constructed information and the identification value. |
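The claims above convert event values into group identification values, build a tree of occurrence probabilities from a history log, and flag anomalies when newly arriving event sequences diverge from it. A loose sketch of that pipeline, with invented value ranges, log data, and probability threshold standing in for the conversion information and rule-sets in the claims:

```python
# Illustrative sketch (not the patented method): bucket event values into
# group IDs, build a tree of ID sequences with occurrence probabilities from
# a history log, then flag new sequences whose path probability is too low.
from collections import defaultdict

# Conversion information: ranges of values mapped to one identification value.
RANGES = [(0, 100, "small"), (100, 1000, "medium"), (1000, float("inf"), "large")]

def to_id(value):
    for lo, hi, gid in RANGES:
        if lo <= value < hi:
            return gid
    return "unknown"

def build_tree(history_sequences):
    # counts[prefix][next_id] = how often next_id followed that prefix.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in history_sequences:
        prefix = ()
        for v in seq:
            gid = to_id(v)
            counts[prefix][gid] += 1
            prefix += (gid,)
    return counts

def sequence_probability(tree, seq):
    prob, prefix = 1.0, ()
    for v in seq:
        gid = to_id(v)
        children = tree.get(prefix, {})
        total = sum(children.values())
        if total == 0 or gid not in children:
            return 0.0                       # never observed in the history log
        prob *= children[gid] / total
        prefix += (gid,)
    return prob

history = [[10, 200, 5000], [20, 300, 4000], [15, 150, 6000]]
tree = build_tree(history)

THRESHOLD = 0.05   # hypothetical anomaly threshold
for event_seq in ([12, 250, 4500], [2000, 5, 5]):
    p = sequence_probability(tree, event_seq)
    print(event_seq, "anomaly" if p < THRESHOLD else "normal", p)
```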
1. A computer-implemented method for assisting a plurality of users in navigating one or more life events, the method comprising: providing interactive media to a plurality of computing devices associated with the plurality of users, wherein the interactive media is provided via an event navigation portal that is designed to aid the plurality of users in navigating the one or more life events, wherein the interactive media comprises a set of visual objects associated with the one or more life events, and wherein the set of visual objects are displayed to the users on graphical displays of the computing devices; receiving input data from the computing devices when the users interact with the set of visual objects in the event navigation portal; and analyzing the input data derived from the users' interaction with the set of visual objects to: (1) determine the life event(s) that each user is currently experiencing, has experienced, or is likely to experience, (2) predict one or more steps relating to the life event(s) for each user, wherein each step further comprises information to aid the user in navigating said step, and (3) map the step(s) relating to the life event(s) for each user on a timeline, wherein the timeline and the step(s) are included in the set of visual objects and displayed to the users on the graphical displays of the computing devices. 2. The method of claim 1, wherein the input data comprises questions, answers, comments, and/or insights in the form of text, audio, video, and/or photographs that are (1) provided by the plurality of users and (2) associated with the one or more life events. 3. The method of claim 2, wherein the input data is obtained from a social media or a social networking website visited by the plurality of users. 4. The method of claim 1, wherein the users interact with the set of visual objects on the graphical displays using at least one of the following input devices: a mouse, a keyboard, a touchscreen monitor, a voice recognition software, or a virtual reality and/or augmented reality headset. 5. The method of claim 1, wherein the timeline and the step(s) are configured to be manipulated on the graphical displays by the users, and wherein said manipulation by the users comprises at least one of the following: (1) modifying the timeline to display a desired time period, (2) increasing or decreasing a duration of the timeline, (3) modifying the location of each step along the timeline, (4) displaying the information included within each step, (5) modifying the information displayed within each step, (6) overlaying a plurality of timelines for the plurality of users onto a common timeline, or (7) linking different timelines for different life events. 6. The method of claim 1, wherein the one or more life events include at least one of the following: diagnosis with a terminal illness, death, marriage, divorce, or retirement. 7. The method of claim 1, wherein the input data is analyzed using a natural language processing (NLP) algorithm. 8. The method of claim 1, wherein the input data further comprises information indicative of the physical locations of the plurality of users, and wherein the physical locations of the users are extracted from the input data. 9. The method of claim 8, wherein the information indicative of the physical locations of the users is dynamically updated in real-time as the users move between different places. 10. 
The method of claim 1, further comprising: generating a predictive model by applying machine learning to the input data received from the plurality of users, wherein the predictive model is used to predict the one or more steps relating to the life event(s) for each user. 11. The method of claim 10, wherein the predictive model is configured to predict each user's needs at each step along the timeline, and wherein the information in each step is customized for each user depending on the user's predicted needs. 12. The method of claim 10, wherein the predictive model is configured to extract temporal status of the users based on the users' interactions within the event navigation portal, wherein the temporal status of the users corresponds to mental states or physical states of the users at a given moment in time, and wherein the temporal status of the users is extracted based on thoughts, feelings, opinions, statements, and/or comments made by the users in the interactions within the event navigation portal. 13. The method of claim 1, wherein two or more of the steps relating to the life event(s) are mapped in chronological order on the timeline for each user. 14. The method of claim 1, wherein two or more of the steps occur at a same point in time or at different points in time along the timeline for each user. 15. The method of claim 1, wherein the step(s) and the information in each step are updated dynamically on the timeline as the users experience the life event(s). 16. The method of claim 1, wherein the information in each step comprises insights or comments that are provided by one or more users about the corresponding step. 17. The method of claim 1, further comprising: filtering one or more user insights from the input data, and matching the one or more user insights to the one or more steps. 18. The method of claim 17, wherein said matching is based on at least one of the following: (1) a crowd-sourced rating of each user insight, (2) a credentials rating of a user associated with a user insight, (3) a popularity rating of each user insight, or (4) a popularity rating of a user associated with the corresponding user insight. 19. The method of claim 17, wherein said matching is based on a plurality of predefined topics associated with the life event(s). 20. The method of claim 17, further comprising: determining a frequency at which each user insight is matched to the corresponding step, and ranking the matched user insights based on their frequencies. 21. The method of claim 1, further comprising: displaying a plurality of different possible journeys to the users on the graphical displays of the computing devices, wherein the plurality of different possible journeys are generated through different combinations of the steps on the timeline. 22. The method of claim 21, wherein the plurality of different journeys and steps are selectable by the users, to allow the users to observe the effects of selecting different journeys and/or steps for the life event(s). 23. The method of claim 21, wherein the plurality of different journeys and steps are included in the set of visual objects that are displayed on the graphical displays of the computing devices. 24. The method of claim 21, wherein the plurality of different journeys and steps are configured to be spatially manipulated by the users on the graphical displays of the computing devices using drag-and-drop functions. 25. 
The method of claim 24, wherein at least some of the journeys and/or steps are configured to be (1) expanded into a plurality of sub-journeys and/or sub-steps, or (2) collapsed into a main journey and/or main step. 26. The method of claim 1, wherein the timeline further comprises a graphical plot indicative of a significance level of each step on the timeline to each user. 27. The method of claim 1, wherein the users' interactions with the set of visual objects comprise the users entering alphanumeric text, image data, and/or audio data via one or more of the visual objects on the graphical displays. 28. The method of claim 1, wherein the set of visual objects are provided on the graphical displays in a plurality of different colors, shapes, dimensions, and/or sizes, and wherein the timeline and step(s) for different users are displayed in different visual coding schemes. 29. A system for implementing an event navigation portal that is designed to aid a plurality of users in navigating one or more life events, the system comprising: a server in communication with a plurality of computing devices associated with a plurality of users, wherein the server comprises a memory for storing interactive media and a first set of software instructions, and one or more processors configured to execute the first set of software instructions to: provide the interactive media via the event navigation portal to the plurality of computing devices associated with the plurality of users, wherein the interactive media comprises a set of visual objects associated with the one or more life events, and wherein the set of visual objects are displayed to the users on graphical displays of the computing devices; receive input data from the computing devices when the users interact with the set of visual objects in the event navigation portal; and analyze the input data derived from the users' interaction with the set of visual objects to: (1) determine the life event(s) that each user is currently experiencing, has experienced, or is likely to experience, (2) predict one or more steps relating to the life event(s) for each user, wherein each step further comprises information to aid the user in navigating said step, and (3) map the step(s) relating to the life event(s) for each user on a timeline, wherein the timeline and the step(s) are included in the set of visual objects; and wherein the plurality of computing devices comprise a memory for storing a second set of software instructions, and one or more processors configured to execute the second set of software instructions to: receive the interactive media from the server; display the set of visual objects visually on the graphical displays of the computing devices to the users; generate the input data when the users interact with the set of visual objects in the event navigation portal; transmit the input data to the server for analysis of the input data; receive the analyzed input data comprising the timeline and the step(s); and display the timeline and the step(s) on the graphical displays of the computing devices to the users. 30. 
A tangible computer readable medium storing instructions that, when executed by one or more servers, causes the one or more servers to perform a computer-implemented method for assisting a plurality of users in navigating one or more life events, the method comprising: providing interactive media to a plurality of computing devices associated with the plurality of users, wherein the interactive media is provided via an event navigation portal that is designed to aid the plurality of users in navigating the one or more life events, wherein the interactive media comprises a set of visual objects associated with the one or more life events, and wherein the set of visual objects are displayed to the users on graphical displays of the computing devices; receiving input data from the computing devices when the users interact with the set of visual objects in the event navigation portal; and analyzing the input data derived from the users' interaction with the set of visual objects to: (1) determine the life event(s) that each user is currently experiencing, has experienced, or is likely to experience, (2) predict one or more steps relating to the life event(s) for each user, wherein each step further comprises information to aid the user in navigating said step, and (3) map the step(s) relating to the life event(s) for each user on a timeline, wherein the timeline and the step(s) are included in the set of visual objects and displayed to the users on the graphical displays of the computing devices. | Please help me write a proper abstract based on the patent claims. | Systems and methods are provided herein for generating personalized timeline-based feeds to a user. A computer-implemented method for generating feeds to a user may be provided. The method may include generating a timeline comprising a plurality of milestones and needs associated with an event, and providing the feeds based on community wisdom. The feeds may be provided for each milestone on the time-line specific to the user, and may be configured to address the user's needs at each milestone. |
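Claims 17 through 20 above filter user insights from the input data and match them to timeline steps, ranked with crowd-sourced ratings. As a hedged illustration only, the sketch below matches insights to steps by word overlap weighted by a rating; the steps, insights, stopword list, and weighting are all fabricated for the example and are not drawn from the claims.

```python
# Loose illustration of matching user insights to timeline steps by word
# overlap, ranked with a crowd-sourced rating. All content is made up.
STOPWORDS = {"the", "a", "an", "to", "of", "and", "for", "my", "i"}

def tokens(text):
    return {w for w in text.lower().split() if w not in STOPWORDS}

steps = {
    "choose an executor": tokens("choose an executor for the estate"),
    "notify beneficiaries": tokens("notify the beneficiaries"),
}

insights = [
    {"text": "Pick an executor you trust with the estate paperwork", "rating": 4.5},
    {"text": "Beneficiaries should be notified early", "rating": 3.8},
]

for step_name, step_words in steps.items():
    scored = []
    for ins in insights:
        overlap = len(step_words & tokens(ins["text"]))
        if overlap:
            # Weight the match by the insight's crowd-sourced rating.
            scored.append((overlap * ins["rating"], ins["text"]))
    scored.sort(reverse=True)
    print(step_name, "->", [t for _, t in scored])
```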
1. An analysis device for analyzing a system that inputs input data including a plurality of input parameters and outputs output data, the device comprising: an acquisition unit that acquires learning data including a plurality of sets of the input data and the output data; and a learning processing unit that learns, based on the acquired learning data, an amount of difference of output data corresponding to a difference between input parameters of two pieces of input data. 2. The analysis device according to claim 1, wherein the learning processing unit generates an estimation model for learning a distance between two pieces of output data with respect to the difference between the input parameters of the two pieces of input data and estimating a change of the output data with respect to a change of the input parameters. 3. The analysis device according to claim 1, wherein the learning processing unit performs pair-wise regression to perform regression analysis of the relationship between the difference between the input parameters and the amount of difference of the output data for each pair. 4. The analysis device according to claim 2, wherein the learning processing unit generates an estimation model for estimating, by using a degree of change for every range of a value between the two input parameters, the amount of difference of the output data for the every range of the value between the input parameters. 5. The analysis device according to claim 2, further comprising: an estimation unit that estimates, based on the estimation model, an amount of change of output data with respect to an amount of change of the input data. 6. The analysis device according to claim 1, further comprising: a normalization unit that performs, for the plurality of pieces of input data, normalization of the input parameters so that an average of the input parameters is 0 and a variance of the input parameters is 1. 7. The analysis device according to claim 2, further comprising: a display unit that displays, in accordance with an amount of change of the input parameters, an estimated amount of change of the output data. 8. The analysis device according to claim 1, wherein the input parameters include an initial condition in a collision simulation, and wherein the output data includes shape data of an object in the collision simulation. 9. An analysis method for analyzing a system that inputs input data including a plurality of input parameters and outputs output data, the method comprising: an acquisition step of acquiring learning data including a plurality of sets of the input data and the output data; and a learning processing step of learning, based on the acquired learning data, an amount of difference of output data corresponding to a difference between input parameters of two pieces of input data. 10. The analysis method according to claim 9, wherein the learning processing step includes generating an estimation model for learning a distance between two pieces of output data with respect to the difference between the input parameters of the two pieces of input data and estimating a change of the output data with respect to a change of the input parameters is generated. 11. The analysis method according to claim 10, wherein the learning processing step includes performing pair-wise regression to perform regression analysis of the relationship between the difference between the input parameters and the amount of difference of the output data for each pair. 12. 
The analysis method according to claim 10, wherein the learning processing step includes generating an estimation model for estimating, by using a degree of change for every range of a value between the two input parameters, the amount of difference of the output data for the every range of the value between the input parameters is generated. 13. The analysis method according to claim 10, further comprising: an estimation step of estimating, based on the estimation model, an amount of change of output data with respect to an amount of change of the input data. 14. The analysis method according to claim 9, further comprising: a normalization step of performing, for the plurality of pieces of input data, normalization of the input parameters so that an average of the input parameters is 0 and a variance of the input parameters is 1. 15. The analysis method according to claim 10, further comprising: a display step of displaying, in accordance with an amount of change of the input parameters, an estimated amount of change of the output data. 16. The analysis method according to claim 9, wherein the input parameters include an initial condition in a collision simulation, and wherein the output data includes shape data of an object in the collision simulation. 17. A computer program product for analyzing a system that inputs input data including a plurality of input parameters and outputs output data, the computer program product comprising at least one computer readable non-transitory storage medium having computer readable program instructions thereon for execution by a processor, the computer readable program instructions comprising program instructions for: acquiring learning data including a plurality of sets of the input data and the output data; and learning, based on the acquired learning data, an amount of difference of output data corresponding to a difference between input parameters of two pieces of input data. 18. The computer program product according to claim 17, wherein the learning includes generating an estimation model for learning a distance between two pieces of output data with respect to the difference between the input parameters of the two pieces of input data and estimating a change of the output data with respect to a change of the input parameters is generated. 19. The computer program product according to claim 18, wherein the learning includes performing pair-wise regression to perform regression analysis of the relationship between the difference between the input parameters and the amount of difference of the output data for each pair. 20. The computer program product according to claim 18, wherein the learning includes generating an estimation model for estimating, by using a degree of change for every range of a value between the two input parameters, the amount of difference of the output data for the every range of the value between the input parameters is generated. | Please help me write a proper abstract based on the patent claims. 
| An analysis device which analyzes a system that inputs input data including a plurality of input parameters and outputs output data, including an acquisition unit that acquires learning data including a plurality of sets of the input data and the output data, and a learning processing unit that learns, based on the acquired learning data, the amount of difference of output data corresponding to a difference between input parameters of two pieces of input data, an analysis method using the analysis device, and a program used in the analysis device are provided. |
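The claims above learn the amount of difference between two outputs as a function of the difference between their input parameters (pair-wise regression). A small sketch under assumed synthetic data and an assumed linear model, which is only one of many forms the claimed estimation model could take:

```python
# Pair-wise regression sketch: learn how the distance between two outputs
# depends on the difference between their input parameters, then estimate
# the output change for a new parameter change. Data and the linear model
# are assumptions, not the patented estimation model.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

X = rng.normal(size=(30, 3))                                        # input parameters (normalized)
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.05, size=30)  # scalar outputs

# One training row per pair: |x_i - x_j| -> |y_i - y_j|
pairs = list(combinations(range(len(X)), 2))
D = np.array([np.abs(X[i] - X[j]) for i, j in pairs])
d = np.array([abs(y[i] - y[j]) for i, j in pairs])

w, *_ = np.linalg.lstsq(D, d, rcond=None)     # least-squares pair-wise fit

delta = np.array([0.1, 0.0, 0.0])             # proposed change of the parameters
print("estimated output change:", float(np.abs(delta) @ w))
```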
1. A system comprising: one or more processing modules; and one or more non-transitory memory storage modules storing computing instructions configured to run on the one or more processing modules and perform acts of: for each record in a set of distinct records in a database system: inputting a training feature vector associated with the record into a machine learning algorithm, the training feature vector associated with the record comprising a list of characteristics of the record; and inputting a cost vector associated with the record into the machine learning algorithm, the cost vector associated with the record configured to train the machine learning algorithm to reduce a probability of a false negative prediction for the record; and iteratively operating the machine learning algorithm on each record in the set of distinct records to train the machine learning algorithm to create a predictive model. 2. The system of claim 1, wherein: the database system comprises a first database cluster H and a second database cluster L; and the one or more non-transitory memory storage modules storing the computing instructions are further configured to run on the one or more processing modules and perform acts of: for each record of the set of distinct records: using the predictive model to calculate a probability of the record being accessed; if the probability of the record being accessed as calculated is greater than a threshold value, then placing the record in the first database cluster H; and if the probability of the record being accessed as calculated is not greater than the threshold value, then placing the record in the second database cluster L; receiving a request from a requester for at least one record of the set of distinct records; and presenting the at least one record from the set of distinct records to the requester in response to the request. 3. The system of claim 2, wherein: the threshold value is determined such that at least approximately 99 percent of predicted accesses will access records of the set of distinct records placed in the first database cluster H; and using the predictive model to calculate the probability of the record being accessed comprises: for each record in the set of distinct records: inputting a list of prediction feature vectors, each prediction feature vector of the list of prediction feature vectors comprising the list of characteristics of the record; and using the predictive model to analyze the list of prediction feature vectors to calculate the probability of the record being accessed. 4. The system of claim 1, wherein the machine learning algorithm comprises at least one of: a decision tree, a bagging technique, a logistic regression, a perceptron, a support vector machine, or a relevance vector machine. 5. The system of claim 1, wherein iteratively operating the machine learning algorithm to train the machine learning algorithm to create the predictive model comprises: operating the machine learning algorithm on a periodic basis; and for each record of the set of distinct records: reviewing historical access data associated with the record; and comparing a probability of the record being accessed as calculated with the historical access data associated with the record. 6. 
The system of claim 1, wherein: the training feature vector further comprises a label configured to indicate, for each record in the set of distinct records, if the record has been accessed within a pre-defined time period; and the cost vector associated with the record represents an estimate of cost incurred when the record is placed in an incorrect database cluster of the first database cluster H or the second database cluster L. 7. A method implemented via execution of computer instructions configured to run on one or more processing modules and configured to be stored on one or more non-transitory memory storage modules, the method comprising: for each record in a set of distinct records in a database system: inputting a training feature vector associated with the record into a machine learning algorithm, the training feature vector associated with the record comprising a list of characteristics of the record; and inputting a cost vector associated with the record into the machine learning algorithm, the cost vector associated with the record configured to train the machine learning algorithm to reduce a probability of a false negative prediction for the record; and iteratively operating the machine learning algorithm on each record in the set of distinct records to train the machine learning algorithm to create a predictive model. 8. The method of claim 7, wherein: the database system comprises a first database cluster H and a second database cluster L; the method further comprises: for each record of the set of distinct records: using the predictive model to calculate a probability of the record being accessed; if the probability of the record being accessed as calculated is greater than a threshold value, then placing the record in the first database cluster H; and if the probability of the record being accessed as calculated is not greater than the threshold value, then placing the record in the second database cluster L; receiving a request from a requester for at least one record of the set of distinct records; and presenting the at least one record from the set of distinct records to the requester in response to the request. 9. The method of claim 8, wherein: the threshold value is determined such that at least approximately 99 percent of predicted accesses will access records of the set of distinct records placed in the first database cluster H; and using the predictive model to calculate the probability of the record being accessed comprises: for each record in the set of distinct records: inputting a list of prediction feature vectors, each prediction feature vector of the list of prediction feature vectors comprising the list of characteristics of the record; and using the predictive model to analyze the list of prediction feature vectors to calculate the probability of the record being accessed. 10. The method of claim 7, wherein the machine learning algorithm comprises at least one of: a decision tree, a bagging technique, a logistic regression, a perceptron, a support vector machine, or a relevance vector machine. 11. 
The method of claim 7, wherein iteratively operating the machine learning algorithm to train the machine learning algorithm to create the predictive model comprises: operating the machine learning algorithm on a periodic basis; and for each record of the set of distinct records: reviewing historical access data associated with the record; and comparing a probability of the record being accessed as calculated with the historical access data associated with the record. 12. The method of claim 7, wherein: the training feature vector further comprises a label configured to indicate, for each record in the set of distinct records, if the record has been accessed within a pre-defined time period; and the cost vector associated with the record represents an estimate of cost incurred when the record is placed in an incorrect database cluster of the first database cluster H or the second database cluster L. 13. A system comprising: one or more processing modules; and one or more non-transitory memory storage modules storing computing instructions configured to run on the one or more processing modules and perform acts of training a machine learning algorithm to create a predictive model; receiving, from a requesting party, a request to analyze a probability that a record of a database will be requested within a predetermined time period; retrieving a feature vector corresponding to the record; calculating a prediction of the probability that the record will be requested within the predetermined time period, the prediction being based on the predictive model used in conjunction with the feature vector; and presenting, to the requesting user, the prediction of the probability, as calculated. 14. The system of claim 13, wherein training the machine learning algorithm to create the predictive model comprises: for each record in a set of distinct records in the database: inputting a training feature vector associated with the record into the machine learning algorithm, the training feature vector comprising a list of characteristics of the record; and inputting a cost vector associated with the record into the machine learning algorithm, the cost vector configured to train the machine learning algorithm to reduce a probability of a false negative prediction for the record; and iteratively operating the machine learning algorithm on each record in the set of records to create the predictive model. 15. The system of claim 14, wherein the machine learning algorithm uses a MetaCost algorithm and a cost-insensitive machine learning algorithm used in conjunction with the MetaCost algorithm. 16. The system of claim 13, wherein the feature vector corresponding to the record comprises a list of characteristics of the record. 17. A method implemented via execution of computer instructions configured to run on one or more processing modules and configured to be stored on one or more non-transitory memory storage modules, the method comprising: training a machine learning algorithm to create a predictive model; receiving, from a requesting party, a request to analyze a probability that a record of a database will be requested within a predetermined time period; retrieving a feature vector corresponding to the record; calculating a prediction of the probability that the record will be requested within the predetermined time period, the prediction being based on the predictive model used in conjunction with the feature vector; and presenting, to the requesting user, the prediction of the probability, as calculated. 
18. The method of claim 17, wherein training the machine learning algorithm to create the predictive model comprises: for each record in a set of distinct records in the database: inputting a training feature vector associated with the record into the machine learning algorithm, the training feature vector comprising a list of characteristics of the record; and inputting a cost vector associated with the record into the machine learning algorithm, the cost vector configured to train the machine learning algorithm to reduce a probability of a false negative prediction for the record; and iteratively operating the machine learning algorithm on each record in the set of records to create the predictive model. 19. The method of claim 18, wherein the machine learning algorithm uses a MetaCost algorithm and a cost-insensitive machine learning algorithm used in conjunction with the MetaCost algorithm. 20. The method of claim 17, wherein the feature vector corresponding to the record comprises a list of characteristics of the record. | Please help me write a proper abstract based on the patent claims. | A system and method for predicting search term popularity is disclosed herein. A database system may comprise a first database cluster H and a second database cluster L. A machine learning algorithm is trained to create a predictive model. Thereafter, for each record in a database system, the predictive model is used to calculate a probability of the record being accessed. If the calculated probability of the record being accessed is greater than a threshold value, then the record is placed in the first database cluster H; otherwise, the record is placed in the second database cluster L. Training the machine learning algorithm comprises inputting a training feature vector associated with the record into the machine learning algorithm, inputting a cost vector into the machine learning algorithm, and iteratively operating the machine learning algorithm on each record in the set of records to create a predictive model. Other embodiments are also disclosed herein.
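The claims above train a cost-sensitive model (a MetaCost wrapper around a cost-insensitive learner) so that false negatives are penalized, then route each record into cluster H or L by comparing its predicted access probability to a threshold. The sketch below is a stand-in, not an implementation of MetaCost: scikit-learn class weights approximate the cost vector, and the features, labels, and threshold are synthetic.

```python
# Hedged sketch: cost-sensitive training (class weights stand in for the
# per-record cost vector) followed by threshold routing into clusters H / L.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

X = rng.normal(size=(500, 5))                       # per-record feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)       # 1 = accessed recently (synthetic label)

# Heavier weight on the "accessed" class discourages false negatives,
# mimicking a cost vector that penalizes placing hot records in cluster L.
model = LogisticRegression(class_weight={0: 1.0, 1: 10.0})
model.fit(X, y)

THRESHOLD = 0.2   # chosen so nearly all predicted accesses land in cluster H
p_access = model.predict_proba(X)[:, 1]
cluster_H = np.flatnonzero(p_access > THRESHOLD)
cluster_L = np.flatnonzero(p_access <= THRESHOLD)
print(len(cluster_H), "records in H,", len(cluster_L), "records in L")
```

A real MetaCost pass would instead relabel the training records with the class that minimizes expected cost, estimated from bagged probability predictions, before retraining the base learner; class weighting is only a lightweight approximation of that behavior.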
1. A method for measuring technology trends comprising: analyzing for each inventor of a plurality of inventors in a technology field, a time series of publication dates for technical documents in the technology field to provide a baseline of technical documents published in a time period; detecting with a hardware processor based counter a number of technical document publications having at least one inventor in the plurality of inventors in the technology field; comparing the number of technical document publications to said baseline of technical documents filed in the time period, wherein if the technical document publications exceed the baseline of technical documents, the number of technical document publications are trending; performing a comparative analysis of the content for the technical document publications that are trending to determine a measurement of similarity in technical field subgroups described in the technical document publications that are trending; and extracting trending technical subgroups from the technical document publications that are trending with a degree of similarity above a threshold as a technical subgroup that is a trend. 2. The method of claim 1, wherein the technical documents are patent applications published by a patent office. 3. The method of claim 1, wherein the technology field is selected from the classification system of a patent office. 4. The method of claim 1, wherein said detecting with a hardware processor based counter an increase in a number of publications of trending technical documents includes building a time-series model for the number of technical documents of each inventor and comparing an expected value with an actual value. 5. The method of claim 4, wherein the time-series model includes a Poisson process. 6. The method of claim 5, wherein said comparing the number of technical document publications to said baseline of technical documents filed in the time period calculating a score s from: S=−log(Px0) wherein P(x) is a distribution of a Poisson process modeled using an average number of patent applications published per month for said each inventor of said plurality of inventors in said technology field, and x0 is an observed number of patent applications published per month for said each inventor of said plurality of inventors in said technology field. 7. The method of claim 1, wherein said comparative analysis of the content for the trending technical documents to determine a measurement of similarity in technical field subgroups described in the trending technical documents further comprises identifying a keyword as a keyword indicating a trending technology from an extracted subgroup of technical documents by using an index. 8. The method of claim 5, wherein the comparative analysis comprises Kullback-Leibler divergence, Pointwise Mutual Information (PMI) or a combination thereof. 9. The method of claim 1 further comprising providing an index of terms for said technology subgroups that are trending. 10. The method of claim 9, wherein providing the index comprises Term Frequency-Inverse Document Frequency (TFIDF), Pointwise Mutual Information (PMI) or a combination thereof. 11. 
A system for detecting technology trends comprising: a database of inventors in a technology field; a baseline generator for providing a baseline frequency of technical publications published by each inventor of said database for a specified time period; a counter for determining from technical publications whether there is an increase in technical publications for at least one of the inventors in the database of inventors in the technological field; and a comparison module for determining whether the technical publications providing the increase in the technical publications have technology subgroups with a frequency that is greater than a target trend frequency that indicates a technical subgroup as a trend. 12. The system of claim 11, wherein the technical documents are patent applications published by a patent office. 13. The system of claim 11, wherein the technology field is selected from the classification system of a patent office. 14. The system of claim 11, wherein the counter detects an increase in a number of publications by creating a time-series model for the number of technical documents of each inventor and comparing an expected value of technical publication with an actual value of technical publications. 15. The system of claim 14, wherein said creating the time-series model includes a Poisson process. 16. The system of claim 15, wherein said comparing the number of expected technical document publications to the actual technical documents published in the time period comprises calculating a score S from: S = −log(P(x0)), wherein P(x) is a distribution of a Poisson process modeled using an average number of patent applications published per month for said each inventor of said plurality of inventors in said technology field, and x0 is an observed number of patent applications published per month for said each inventor of said plurality of inventors in said technology field. 17. The system of claim 15, wherein said comparison module performs a comparative analysis of the content for the technical publications to determine a measurement of similarity in technical field subgroups described that comprises identifying a keyword as a keyword indicating an extracted subgroup of technical documents by using an index. 18. The system of claim 15, wherein the comparison module performs a comparative analysis comprising Kullback-Leibler divergence, Pointwise Mutual Information (PMI) or a combination thereof. 19. The system of claim 15 further comprising a term extractor that provides an index of terms for technology subgroups that are trending. 20. 
A non-transitory computer readable storage medium comprising a computer readable program for determining technology trends, wherein the computer readable program when executed on a computer causes the computer to perform the steps of: analyzing for each inventor of a plurality of inventors in a technology field, a time series of publication dates for technical documents in the technology field to provide a baseline of technical documents published in a time period; detecting with a counter a number of technical document publications having at least one inventor in the plurality of inventors in the technology field; comparing the number of technical document publications to said baseline of technical documents published in the time period, wherein if the technical document publications exceed the baseline of technical documents, the number of technical document publications are trending; performing a comparative analysis of the content for the technical document publications that are trending to determine a measurement of similarity in technical field subgroups described in the technical document publications that are trending; and extracting trending technical subgroups from the technical document publications that are trending with a degree of similarity above a threshold as a technical subgroup that is a trend. | Please help me write a proper abstract based on the patent claims. | A method for measuring technology trends that includes providing from a plurality of inventors in a technology field a baseline of technical documents published in a time period, and detecting a number of technical document publications having at least one inventor in the plurality of inventors in the technology field. The method further includes comparing the number of technical document publications to the baseline of technical documents published in the time period. If the technical document publications exceed the baseline, the number of technical document publications are trending. Comparative analysis of the content for the technical document publications that are trending determines a measurement of similarity in technical field subgroups. Trending technical subgroups are extracted from the technical document publications that are trending with a degree of similarity above a threshold as a target technical group that is a trend. |
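Claims 5-6 and 15-16 above model each inventor's monthly publication count as a Poisson process and score an observed month with S = −log(P(x0)). A minimal sketch of that scoring step follows, assuming SciPy's Poisson distribution; the history values and the trending threshold are made up for the example and are not taken from the claims.

```python
# Minimal sketch of the scoring step in claims 5-6 / 15-16: each inventor's
# monthly publication count is modelled as Poisson with rate equal to the
# historical monthly average, and S = -log P(x0) measures how surprising the
# observed count x0 is under that baseline.
from scipy.stats import poisson

def trend_score(monthly_history, observed_count):
    rate = sum(monthly_history) / len(monthly_history)   # baseline rate (average per month)
    return -poisson.logpmf(observed_count, mu=rate)      # S = -log(P(x0))

# An inventor who averaged roughly one filing per month suddenly publishes six:
score = trend_score([1, 0, 2, 1, 1, 0], 6)
is_trending = score > 5.0     # illustrative threshold, not taken from the claims
```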
1. A system comprising: a processor; and a memory accessible to the processor and storing instructions that, when executed, cause the processor to: provide, via a graphical user interface, a selected one of a plurality of decision options; and obscure others of the plurality of decision options. 2. The system of claim 1, wherein the memory further comprises instructions that, when executed, cause the processor to selectively reveal others of the plurality of decision options. 3. A system comprising: a processor; and a memory accessible to the processor and storing instructions that, when executed, cause the processor to: provide, via a graphical user interface (GUI), a plurality of decision options as a set of cards, each card representing a decision option of the plurality of decision options; and selectively alter an appearance of the card within the graphical user interface in response to an input. 4. The system of claim 3, wherein an appearance of the card is selectively altered, within the GUI, by providing a view that represents a back side of the card. 5. The system of claim 3, wherein the memory further includes instructions that, when executed, cause the processor to: move the image of the card within the GUI in response to the input; store a decision option associated with the card when the card is moved in a first direction; and discard a decision option associated with the card when the card is moved in a second direction. 6. A system comprising: a processor; and a memory accessible to the processor and storing instructions that, when executed, cause the processor to: receive decision options corresponding to a plurality of possible options; provide, via a graphical user interface, a selected one of the plurality of possible options; and obscure others of the plurality of possible options. 7. The system of claim 6, wherein the memory further includes instructions that, when executed, cause the processor to: include one or more user-selectable elements within the graphical user interface; receive input corresponding to one of the user-selectable elements; and provide one or more options to configure a continuous decision making process related to a selected decision option. | Please help me write a proper abstract based on the patent claims. | Systems and methods are disclosed that implement persona-based decision assistants and graphical user interfaces. The graphical user interfaces may present a view of one or more decision options and may include one or more user-selectable elements through which a selected decision option may be accessed or modified. In certain embodiments, user selections and similar traveler “look-alikes'” purchase behaviors may be processed to refine a persona corresponding to the search, in parallel to a search occurring and after an initial search result has been presented. In certain embodiments, the graphical user interface may show a subset of possible decision options. In certain embodiments, the graphical user interface may provide a selectable element to modify search, persona, and other preferences.
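Claims 3-5 above describe presenting decision options as cards that can be flipped, stored, or discarded depending on the direction the card is moved. The sketch below shows only the back-end bookkeeping such an interaction might use; it is hypothetical, uses no real GUI toolkit, and the direction names and option names are illustrative.

```python
# Hypothetical bookkeeping for the card interaction of claims 3-5: moving a
# card one way stores its decision option, the other way discards it.
from dataclasses import dataclass, field

@dataclass
class CardDeck:
    options: list                          # decision options still shown as cards
    stored: list = field(default_factory=list)
    discarded: list = field(default_factory=list)

    def swipe(self, card, direction):
        self.options.remove(card)
        if direction == "right":           # first direction: store the option
            self.stored.append(card)
        else:                              # second direction: discard the option
            self.discarded.append(card)

deck = CardDeck(options=["option A", "option B", "option C"])
deck.swipe("option A", "right")
deck.swipe("option B", "left")
```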
1. A system for diagnosing at least one component requiring maintenance in an appliance and/or installation, having a) a device, designed for data and/or message interchange with regard to states of one or more components with an analysis unit that is designed to monitor the states of the one or more components and/or events arising thereon and to output them to the device in a systematized form, b) a device, designed to receive historic data from the one or more components with regard to their life in collective form, c) a device, designed for data and/or message interchange with a learning machine unit that is designed to deliver a predictive model for identifying at least one component requiring maintenance to the device, d) an evaluation device that is designed to use the data and/or messages coming from the analysis unit in systematic form, the historic data in collective form and to use the predictive model to identify the one or more components requiring maintenance, e) a device, designed for data and/or message interchange with a monitoring device that is designed to take the one or more identified components requiring maintenance as a basis for outputting an error message to the monitoring device, which can prompt a visual and/or audible display. 2. The system as claimed in claim 1, wherein the learning machine unit is designed to identify, within a determined time window, one or more components requiring maintenance on the basis of a target value, specified by the respective affected component, for a training on the basis of classifications that are derivable from the historic data of the appliance and/or installation. 3. The system as claimed in claim 1, wherein events and/or states are provided in a systematized form according to their frequency, if need be in a manner provided with a weighting that corresponds to their relevance, within a time window. 4. The system as claimed in claim 1, wherein said collective form reproduces a correlation between the one or more components and other components of the appliance and/or installation. 5. The system as claimed in claim 1, wherein said life represents an expected life cycle, the average life cycle having been related to the ongoing life cycle. 6. The system as claimed in claim 1, wherein the predictive model is representable by a decision tree in which the leaves represent class tags and branches represent relationships to functions and/or rules that lead to these class tags. 7. The system as claimed in claim 1, wherein the evaluation device is integrated in said monitoring device remotely from the system. 8. A method for diagnosing at least one component requiring maintenance in an appliance and/or installation, having the following steps: a) accepting states from one or more components provided in a systematized form, wherein the states of the one or more components and/or events arising thereon are monitored by an analysis device, b) receiving historic data from the one or more components with regard to their life in collective form, c) accepting a predictive model from a learning machine unit that delivers the predictive model for identifying at least one component requiring maintenance, d) using the states coming from the analysis unit in systematic form, the historic data in collective form and using the predictive model to identify the one or more components requiring maintenance, e) outputting an error message on the basis of the identification of the one or more components requiring maintenance. 9. 
The method as claimed in claim 8, wherein one or more components requiring maintenance are identified, within a determined time window, on the basis of a target value, specified by the respective affected component, for a training on the basis of classifications that are derived from the historic data of the appliance and/or installation. 10. The method as claimed in claim 8, wherein events and/or states are provided in a systematized form according to their frequency, if need be in a manner provided with a weighting that corresponds to their relevance, within a time window. 11. The method as claimed in claim 8, wherein said collective form reproduces a correlation between the one or more components and other components of the appliance and/or installation. 12. The method as claimed in claim 8, wherein said life represents an expected life cycle, the average life cycle being related to the ongoing life cycle. 13. The method as claimed in claim 8, wherein the predictive model is represented by a decision tree in which the leaves represent class tags and branches represent relationships to functions and/or rules that lead to these class tags. 14. A computer program having means for performing the method as claimed in claim 8 when the computer program is executed on a system or on the devices of the system as claimed in one of the aforementioned system claims. | Please help me write a proper abstract based on the patent claims. | A system for diagnosing at least one component requiring maintenance in an appliance and/or installation, having a) a device, designed for data and/or message interchange with regard to states of one or more components with an analysis unit, b) a device, designed to receive historic data from the one or more components with regard to their life in collective form, c) a device, designed for data and/or message interchange with a learning machine unit that is designed to deliver a predictive model for identifying at least one component requiring maintenance to the device, d) an evaluation device that is designed to use the data and/or messages coming from the analysis unit, e) a device, designed for data and/or message interchange with a monitoring device that is designed to take the one or more identified components requiring maintenance as a basis for outputting an error message, which can prompt a visual and/or audible display, is provided.
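Claims 6 and 13 state that the predictive model is representable as a decision tree whose leaves are class tags, trained on systematized event frequencies (claims 3 and 10) and life-cycle data (claims 5 and 12). The following sketch trains such a tree on made-up historic features; the feature columns, values, and the use of scikit-learn are assumptions for illustration only.

```python
# Minimal sketch with assumed features: the predictive model of claims 6/13 is
# a decision tree whose leaves are class tags (component requires maintenance
# or not), trained on weighted event frequencies within a time window and the
# ratio of ongoing to expected life. All values below are invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["error_event_freq", "warning_event_freq", "life_ratio"]
X_hist = np.array([[0.9, 3.0, 0.95],
                   [0.1, 0.5, 0.20],
                   [1.5, 4.0, 0.80],
                   [0.0, 0.2, 0.10]])
y_hist = np.array([1, 0, 1, 0])           # 1 = component later required maintenance

model = DecisionTreeClassifier(max_depth=3).fit(X_hist, y_hist)
print(export_text(model, feature_names=feature_names))   # branches -> rules, leaves -> class tags

if model.predict([[1.2, 2.5, 0.90]])[0] == 1:
    print("error message: component likely requires maintenance")   # step e) of claims 1/8
```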
1. A computer-implemented method for managing a patient's medical adherence, the method comprising: receiving, using a processor, data related to the patient, the data including information related to a prescribed medication regimen having one or more medications, patient behavior data, a respective literacy level associated with each of the one or more medications; calculating a compliance to dosage and a compliance to time for each of the one or more medications based on the received data; calculating a drug adherence count associated with each of the one or more medications by summing at least two of the compliance to dosage, the compliance to time, and the respective literacy level associated with each of the one or more medications; determining a daily medication adherence for each of the one or more medications; calculating a daily regimen adherence value by summing the daily medication adherence of all of the one or more medications in the medication regimen; calculating a daily regimen baseline value by re-calculating the daily regimen adherence value by utilizing a maximum potential value for the drug adherence count for each of the respective medications in the regimen associated with the patient; determining a medical adherence value based on the daily regimen adherence value and the daily regimen baseline value; and comparing the medical adherence value with a threshold value. 2. The computer-implemented method of claim 1, further comprising: generating a notification when the medical adherence value is less than the threshold value. 3. The computer-implemented method of claim 2, further comprising: transmitting the generated notification to one or more user devices; 4. The computer-implemented method of claim 3, wherein the generated notification contains one or more intervention options. 5. The computer-implemented method of claim 1, wherein: receiving data related to the patient further comprises receiving a respective drug importance factor associated with each of the one or more medications; and determining a daily medication adherence for each of the one or more medications comprises determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications. 6. The computer-implemented method of claim 5, wherein determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications comprises multiplying the drug adherence count and the drug importance factor. 7. The computer-implemented method of claim 1, wherein calculating the compliance to dosage comprises: determining a prescribed dosage for each of the one or more medications; assigning dosage boolean values for actual dosage consumed based on the prescribed dosage and the received data; and calculating the compliance to dosage based on the assigned dosage boolean values. 8. The computer-implemented method of claim 7, wherein calculating the compliance to time comprises: determining a prescribed dosage time for each of the one or more medications; assigning time boolean values for actual consumption time based on the prescribed dosage time and the received data; and calculating the compliance to time based on the assigned time boolean values. 9. 
The computer-implemented method of claim 1, further comprising: assigning a unique regimen identity to the regimen; and assigning respective unique medication identities to each of the one or more medications based on the unique regimen identity. 10. A system for managing medical adherence, the system comprising: a memory having processor-readable instructions stored therein; and a processor configured to access the memory and execute the processor-readable instructions, which when executed by the processor configure the processor to perform a method, the method comprising: receiving data related to the patient, the data including information related to a prescribed medication regimen having one or more medications, patient behavior data, a respective literacy level associated with each of the one or more medications; calculating a compliance to dosage and a compliance to time for each of the one or more medications based on the received data; calculating a drug adherence count associated with each of the one or more medications by summing at least two of the compliance to dosage, the compliance to time, and the respective literacy level associated with each of the one or more medications; determining a daily medication adherence for each of the one or more medications; calculating a daily regimen adherence value by summing the daily medication adherence of all of the one or more medications in the medication regimen; calculating a daily regimen baseline value by re-calculating the daily regimen adherence value by utilizing a maximum potential value for the drug adherence count for each of the respective medications in the regimen associated with the patient; determining a medical adherence value based on the daily regimen adherence value and the daily regimen baseline value; and comparing the medical adherence value with a threshold value. 11. The system of claim 10, wherein the method further comprises: generating a notification when the medical adherence value is less than the threshold value. 12. The system of claim 11, wherein the method further comprises: transmitting the generated notification to one or more user devices; 13. The system of claim 12, wherein the generated notification contains one or more intervention options. 14. The system of claim 13, wherein: receiving data related to the patient further comprises receiving a respective drug importance factor associated with each of the one or more medications; and determining a daily medication adherence for each of the one or more medications comprises determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications. 15. The system of claim 14, wherein determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications comprises multiplying the drug adherence count and the drug importance factor. 16. The system of claim 10, wherein calculating the compliance to dosage comprises: determining a prescribed dosage for each of the one or more medications; assigning dosage boolean values for actual dosage consumed based on the prescribed dosage and the received data; and calculating the compliance to dosage based on the assigned dosage boolean values. 17. 
The system of claim 16, wherein calculating the compliance to time comprises: determining a prescribed dosage time for each of the one or more medications; assigning time boolean values for actual consumption time based on the prescribed dosage time and the received data; and calculating the compliance to time based on the assigned time boolean values. 18. The system of claim 10, wherein the method further comprises: assigning a unique regimen identity to the regimen; and assigning respective unique medication identities to each of the one or more medications based on the unique regimen identity. 19. A non-transitory computer-readable medium storing instructions, the instructions, when executed by a computer system, cause the computer system to perform a method, the method comprising: receiving, using a processor, data related to the patient, the data including information related to a prescribed medication regimen having one or more medications, patient behavior data, a respective literacy level associated with each of the one or more medications; calculating a compliance to dosage and a compliance to time for each of the one or more medications based on the received data; calculating a drug adherence count associated with each of the one or more medications by summing at least two of the compliance to dosage, the compliance to time, and the respective literacy level associated with each of the one or more medications; determining a daily medication adherence for each of the one or more medications; calculating a daily regimen adherence value by summing the daily medication adherence of all of the one or more medications in the medication regimen; calculating a daily regimen baseline value by re-calculating the daily regimen adherence value by utilizing a maximum potential value for the drug adherence count for each of the respective medications in the regimen associated with the patient; determining a medical adherence value based on the daily regimen adherence value and the daily regimen baseline value; and comparing the medical adherence value with a threshold value. 20. The non-transitory computer-readable medium of claim 19, wherein the method further comprises: generating a notification when the medical adherence value is less than the threshold value. 21. The non-transitory computer-readable medium of claim 20, wherein the method further comprises: transmitting the generated notification to one or more user devices; 22. The non-transitory computer-readable medium of claim 21, wherein the generated notification contains one or more intervention options. 23. The non-transitory computer-readable medium of claim 19, wherein: receiving data related to the patient further comprises receiving a respective drug importance factor associated with each of the one or more medications; and determining a daily medication adherence for each of the one or more medications comprises determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications. 24. The non-transitory computer-readable medium of claim 23, wherein determining the daily medication adherence for each of the one or more medications based on the drug adherence count and the drug importance factor associated with each of the one or more medications comprises multiplying the drug adherence count and the drug importance factor. 25. 
The non-transitory computer-readable medium of claim 19, wherein calculating the compliance to dosage comprises: determining a prescribed dosage for each of the one or more medications; assigning dosage boolean values for actual dosage consumed based on the prescribed dosage and the received data; and calculating the compliance to dosage based on the assigned dosage boolean values. 26. The non-transitory computer-readable medium of claim 25, wherein calculating the compliance to time comprises: determining a prescribed dosage time for each of the one or more medications; assigning time boolean values for actual consumption time based on the prescribed dosage time and the received data; and calculating the compliance to time based on the assigned time boolean values. 27. The non-transitory computer-readable medium of claim 19, wherein the method further comprises: assigning a unique regimen identity to the regimen; and assigning respective unique medication identities to each of the one or more medications based on the unique regimen identity. | Please help me write a proper abstract based on the patent claims. | Systems and methods are provided for managing medical adherence. An exemplary method may include managing medical adherence utilizing data aggregating and processing to determine impact on a user's health based on their behavior related to prescribed medication. The method may entail utilizing data related to a medication regimen and patient behavior to determine a patient's compliance to the regimen in terms of dosage and time. These values may be utilized to calculate a medical adherence value representing a patient's adherence to a prescribed regimen. Responsive to determining low medical adherence, a notification may be generated which may result in an intervention with the patient. |
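The adherence arithmetic in claims 1 and 5-8 (and their system and medium counterparts) can be followed with a small worked example: the per-drug adherence count sums compliance to dosage, compliance to time, and literacy level; the daily medication adherence multiplies that count by the drug importance factor; the regimen value sums over drugs; and the baseline recomputes the same sum with each drug's maximum potential count. The sketch below assumes a 0/1 Boolean scheme and an illustrative threshold, neither of which is specified in the claims.

```python
# Worked sketch of the adherence arithmetic in claims 1 and 5-8. The 0/1
# Boolean scheme, the literacy scale, and the 0.8 threshold are assumptions;
# only the structure of the calculation comes from the claims.
def drug_adherence_count(dose_ok, time_ok, literacy):
    # claim 1: sum of compliance to dosage, compliance to time, and literacy level
    return dose_ok + time_ok + literacy

def regimen_adherence(meds, threshold=0.8):
    daily = sum(drug_adherence_count(m["dose_ok"], m["time_ok"], m["literacy"])
                * m["importance"]                      # claims 5-6: count x importance
                for m in meds)
    baseline = sum(drug_adherence_count(1, 1, m["literacy"])   # maximum potential count
                   * m["importance"]
                   for m in meds)
    adherence = daily / baseline                       # medical adherence value
    return adherence, adherence < threshold            # True -> generate a notification (claim 2)

meds = [
    {"dose_ok": 1, "time_ok": 0, "literacy": 1.0, "importance": 2.0},
    {"dose_ok": 1, "time_ok": 1, "literacy": 0.5, "importance": 1.0},
]
value, needs_notification = regimen_adherence(meds)
```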
1. A method for processing media consumption information across multiple data spaces over a common media asset space, the method comprising: receiving, by a consumption model, first preference information of a first plurality of users, wherein the first preference information is associated with a first data space and describes monitored user interactions of the first plurality of users with respect to a first plurality of media assets, and wherein the first plurality of media assets corresponds to the first data space; receiving, by the consumption model, second preference information of a second plurality of users, wherein the second preference information is associated with a second data space and comprises levels of enjoyment that are expressly input by the second plurality of users with respect to a second plurality of media assets, and wherein the second plurality of media assets corresponds to the second data space; transforming the first preference information to first consumption layer preference information, wherein the first consumption layer preference information comprises specific attributes that are indicative of users' preferences; transforming the second preference information to second consumption layer preference information, wherein the second consumption layer preference information comprises specific attributes that are indicative of users' preferences; determining, using a preference model, first user preference details corresponding to a first media asset and a second media asset based on the first consumption layer preference information; determining, using the preference model, second user preference details corresponding to the first media asset and the second media asset based on the second consumption layer preference information; determining, using a similarity model, a first sentimental similarity between the first media asset and the second media asset, wherein the first sentimental similarity corresponds to a degree of similarity between the first media asset and the second media asset based on the first user preference details; determining, using the similarity model, a second sentimental similarity between the first media asset and the second media asset, wherein the second sentimental similarity corresponds to a degree of similarity between the first media asset and the second media asset based on the second user preference details; and determining, using an error model, a difference between the first sentimental similarity and the second sentimental similarity. 2. The method of claim 1, wherein the difference is a pair-wise difference, and wherein the method further comprises: adjusting, based on the pair-wise difference between the first sentimental similarity and the second sentimental similarity, the first user preference details and the second user preference details determined from the first and second consumption layer preference information in order to minimize the error value. 3. The method of claim 2, wherein adjusting, based on the difference between the first sentimental similarity and the second sentimental similarity, the user preference details comprises applying a chain rule in order to determine weights associated with trainable parameters of the preference model. 4. 
The method of claim 1, wherein determining, using the preference model, the user preference details corresponding to the first media asset and the second media asset based on the first consumption layer preference information and the second consumption layer preference information respectively, comprises applying at least one of a linear transformation function, a neural network, and a restricted Boltzmann machine. 5. The method of claim 1, wherein determining, using the similarity model, the first sentimental similarity between the first media asset and the second media asset based on the received user preference details associated with the first data space comprises applying at least one of a Pearson's coefficient and cosine similarity. 6. The method of claim 1, wherein determining, using the error model, the difference between the first sentimental similarity and the second sentimental similarity comprises: calculating a first quality value, wherein the first quality value is associated with the first sentimental similarity; calculating a second quality value, wherein the second quality value is associated with the second sentimental similarity; and determining the difference between the first sentimental similarity and the second sentimental similarity based on the first quality value and the second quality value. 7. The method of claim 6, wherein the first quality value is based on a number of users from the first data space who consumed the first media asset and the second media asset. 8. The method of claim 6, wherein the second quality value is based on a number of users from the second data space who expressly input their level of enjoyment with respect to the first media asset and the second media asset. 9. The method of claim 6, wherein determining, using the error model, the difference between the first sentimental similarity and the second sentimental similarity comprises: determining a first particularity value of the first preference information; determining a second particularity value of the second preference information; and determining the difference between the first sentimental similarity and the second sentimental similarity based on the first particularity value and the second particularity value. 10. The method of claim 1, wherein transforming the first preference information and the second preference information to the first consumption layer preference information and the second consumption layer preference information comprises: determining, for the first media asset of the first plurality of media assets whether the first media asset is also within the second plurality of media assets; and in response to determining that the first media asset is also within the second plurality of media assets, generating a record for the first media asset, wherein the record comprises preference information that is retrieved from both the first data space and the second data space. 11. 
A system for processing media consumption information across multiple data spaces over a common media asset space, the system comprising: control circuitry configured to: receive first preference information of a first plurality of users, wherein the first preference information is associated with a first data space and describes monitored user interactions of the first plurality of users with respect to a first plurality of media assets, and wherein the first plurality of media assets corresponds to the first data space; receive second preference information of a second plurality of users, wherein the second preference information is associated with a second data space and comprises levels of enjoyment that are expressly input by the second plurality of users with respect to a second plurality of media assets, and wherein the second plurality of media assets corresponds to the second data space; transform the first preference information to first consumption layer preference information, wherein the first consumption layer preference information comprises specific attributes that are indicative of users' preferences; transform the second preference information to second consumption layer preference information, wherein the second consumption layer preference information comprises specific attributes that are indicative of users' preferences; determine first user preference details corresponding to a first media asset and a second media asset based on the first consumption layer preference information; determine second user preference details corresponding to the first media asset and the second media asset based on the second consumption layer preference information; determine a first sentimental similarity between the first media asset and the second media asset, wherein the first sentimental similarity corresponds to a degree of similarity between the first media asset and the second media asset based on the first user preference details; determine a second sentimental similarity between the first media asset and the second media asset, wherein the second sentimental similarity corresponds to a degree of similarity between the first media asset and the second media asset based on the second user preference details; and determine a difference between the first sentimental similarity and the second sentimental similarity. 12. The system of claim 11, wherein the difference is a pair-wise difference, and wherein the control circuitry is further configured to: adjust, based on the pair-wise difference between the first sentimental similarity and the second sentimental similarity, the first user preference details and the second user preference details determined from the first and second consumption layer preference information in order to minimize the error value. 13. The system of claim 12, wherein the control circuitry configured to adjust, based on the difference between the first sentimental similarity and the second sentimental similarity, the user preference details is further configured to apply a chain rule in order to determine weights associated with trainable parameters of the preference model. 14. 
The system of claim 11, wherein the control circuitry configured to determine, using the preference model, the user preference details corresponding to the first media asset and the second media asset based on the first consumption layer preference information and the second consumption layer preference information respectively is further configured to apply at least one of a linear transformation function, a neural network, and a restricted Boltzmann machine. 15. The system of claim 11, wherein the control circuitry configured to determine, using the similarity model, the first sentimental similarity between the first media asset and the second media asset based on the received user preference details associated with the first data space is further configured to apply at least one of a Pearson's coefficient and cosine similarity. 16. The system of claim 11, wherein the control circuitry configured to determine, using the error model, the difference between the first sentimental similarity and the second sentimental similarity is further configured to: calculate a first quality value, wherein the first quality value is associated with the first sentimental similarity; calculate a second quality value, wherein the second quality value is associated with the second sentimental similarity; and determine the difference between the first sentimental similarity and the second sentimental similarity based on the first quality value and the second quality value. 17. The system of claim 16, wherein the first quality value is based on a number of users from the first data space who consumed the first media asset and the second media asset. 18. The system of claim 16, wherein the second quality value is based on a number of users from the second data space who expressly input their level of enjoyment with respect to the first media asset and the second media asset. 19. The system of claim 16, wherein the control circuitry configured to determine, using the error model, the difference between the first sentimental similarity and the second sentimental similarity is further configured to: determine a first particularity value of the first preference information; determine a second particularity value of the second preference information; and determine the difference between the first sentimental similarity and the second sentimental similarity based on the first particularity value and the second particularity value. 20. The system of claim 11, wherein the control circuitry configured to transform the first preference information and the second preference information to the first consumption layer preference information and the second consumption layer preference information is further configured to: determine, for the first media asset of the first plurality of media assets whether the first media asset is also within the second plurality of media assets; and in response to determining that the first media asset is also within the second plurality of media assets, generate a record for the first media asset, wherein the record comprises preference information that is retrieved from both the first data space and the second data space. 21-50. (canceled) | Please help me write a proper abstract based on the patent claims. | Methods and systems are described for processing media consumption information across multiple data spaces over a common media asset space. User preference information is received from two data spaces. 
User preference information from the first data space includes monitored user interactions of a first plurality of users with respect to a first plurality of media assets, and user preference information from the second data space includes levels of enjoyment that a second plurality of users expressly input with respect to a second plurality of media assets. Both sets of preference information are transformed to respective consumption layer preference information, and respective attributes indicative of users' preferences are determined. First and second sentimental similarity values are determined for the first and second preference information, respectively. The two sentimental similarity values are compared, and an error value is calculated based on the comparison.
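Claims 5 and 15 allow the sentimental similarity to be computed with a Pearson coefficient or cosine similarity, and claims 6-9 and 16-19 weight the comparison by quality values based on how many users interacted with both assets. The numpy sketch below illustrates that comparison on made-up preference matrices; it skips the preference-model transformation of claims 4 and 14, and the particular quality weighting shown is an assumption, one option among those the claims permit.

```python
# Numpy sketch of the similarity/error comparison in claims 5-9 and 15-19.
# Rows are users of a data space, columns are the two media assets being
# compared; all numbers are invented.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

space1 = np.array([[5, 4], [3, 3], [4, 5], [2, 1]], dtype=float)   # monitored interactions
space2 = np.array([[1, 1], [5, 4], [4, 4]], dtype=float)           # explicit enjoyment ratings

sim1 = cosine(space1[:, 0], space1[:, 1])     # first sentimental similarity
sim2 = cosine(space2[:, 0], space2[:, 1])     # second sentimental similarity

q1 = int(np.count_nonzero(space1.all(axis=1)))   # users who consumed both assets (claim 7)
q2 = int(np.count_nonzero(space2.all(axis=1)))   # users who rated both assets (claim 8)
quality_weight = min(q1, q2) / max(q1, q2)       # assumed weighting scheme

error = quality_weight * abs(sim1 - sim2)     # pair-wise difference to be minimised (claim 2)
```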
1. A method for graph syntax validation comprising: receiving, by a processor, (i) an input graph that contains one or more input graph nodes, (ii) transformation rules, and (iii) a minimal valid graph; and transforming, by the processor in response to receiving the input graph, the input graph into the minimal valid graph using the transformation rules, including a source pattern and a target pattern, further comprising, in the transforming step, source pattern-matching by comparing the input graph with the source pattern of the transformation rules and determining whether the input graph matches the source pattern of one or more transformation rules; and rule-executing, by replacing the input graph nodes that are determined to match the source pattern of one or more transformation rules with the target patterns for the one or more transformation rules, wherein the transforming recurs until either the input graph has been determined to be reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that none of the transformation rules match the input graph indicating that the input graph uses an invalid syntax. 2. The method of claim 1, further comprising result-visualizing, by the processor, the transformation result. 3. The method of claim 1, further comprising design-facilitating, by the processor, the minimal valid graph and the transformation rules. 4. The method of claim 3, wherein the design-facilitating includes the use of non-terminal symbol design tools to generate intermediate transformation results. 5. The method of claim 3, wherein the design-facilitating includes placeholder design tools for use in the source pattern and the target patterns. 6. A graph syntax validation system, comprising: a processor configured to receive (i) an input graph that contains one or more input graph nodes, (ii) transformation rules, and (iii) a minimal valid graph; transform, in response to receiving the input graph, the input graph into the minimal valid graph using the transformation rules, including a source pattern and a target pattern; source pattern-match by comparing the input graph with the source patterns of the transformation rules and determining whether the input graph matches the source pattern of one or more transformation rules; and rule-execute, by replacing the input graph nodes that are determined to match with the source pattern of one or more transformation rules with the target patterns for the one or more transformation rules, wherein the processor recurrently transforms until either the input graph has been determined to be reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that none of the transformation rules match the input graph indicating that the input graph uses an invalid syntax. 7. The system of claim 6, wherein the processor is further configured to result-visualize the transformation result. 8. The system of claim 6, wherein the processor is further configured to design-facilitate the minimal valid graph and the transformation rules. 9. The system of claim 8, wherein the processor is further configured to design-facilitate using non-terminal symbol design tools to generate intermediate transformation results. 10. The system of claim 8, wherein the processor is further configured to design-facilitate using placeholder design tools in the source pattern and the target patterns. 11. 
A non-transitory computer readable medium configured to provide a method for graph syntax validation when executable instructions are executed, comprising instructions for: receiving (i) an input graph that contains one or more input graph nodes, (ii) transformation rules, and (iii) a minimal valid graph; and transforming, in response to receiving the input graph, the input graph into the minimal valid graph using the transformation rules, including a source pattern and a target pattern, further comprising, in the transforming, source pattern-matching by comparing the input graph with the source pattern of the transformation rules and determining whether the input graph matches the source pattern of one or more transformation rules; and rule-executing, by replacing the input graph nodes that are determined to match the source pattern of one or more transformation rules with the target patterns for the one or more transformation rules, wherein the transforming recurs until either the input graph has been determined to be reduced to the minimal valid graph indicating that the input graph uses a valid syntax, or until it is determined that none of the transformation rules match the input graph indicating that the input graph uses an invalid syntax. 12. The non-transitory computer-readable medium of claim 11, further comprising result-visualizing, by the processor, the transformation result. 13. The non-transitory computer-readable medium of claim 11, further comprising design-facilitating, by the processor, the minimal valid graph and the transformation rules. 14. The non-transitory computer-readable medium of claim 13, wherein the design-facilitating includes the use of non-terminal symbol design tools to generate intermediate transformation results. 15. The non-transitory computer-readable medium of claim 13, wherein the design-facilitating includes placeholder design tools for use in the source pattern and the target patterns. | Please help me write a proper abstract based on the patent claims. | A graph syntax validation system, method, or computer-readable medium that receives: (i) an input graph, (ii) transformation rules, and (iii) a minimal valid graph. The system/method/computer-readable medium transforms the input graph into the minimal valid graph using the transformation rules that are comprised of source patterns and target patterns. The system/method/computer-readable medium recurrently transforms the input graph until either the input graph has been reduced to the minimal valid graph, indicating that the input graph uses a valid syntax, or until it is determined that none of the transformation rules match the input graph, indicating that the input graph uses an invalid syntax.
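The validation loop in claims 1, 6, and 11 repeatedly matches transformation-rule source patterns against the input graph and replaces them with target patterns until the graph reduces to the minimal valid graph or no rule matches. The sketch below is a deliberately simplified illustration that models a graph as a set of labelled edges and "matching" as subset containment; real graph pattern matching is considerably more involved, and the toy rule set is invented.

```python
# Deliberately simplified sketch of the reduce-and-validate loop in claims 1,
# 6, and 11: a graph is a set of labelled edges, a rule "matches" when its
# source-pattern edges are a subset of the graph, and executing the rule swaps
# the source edges for the target edges. Only the control flow is illustrated.
def validate(graph, rules, minimal_valid, max_steps=10_000):
    graph, minimal = frozenset(graph), frozenset(minimal_valid)
    for _ in range(max_steps):                 # guard against non-terminating rule sets
        if graph == minimal:
            return True                        # reduced to the minimal valid graph: valid syntax
        for source, target in rules:
            if frozenset(source) <= graph:     # source pattern matches
                graph = (graph - frozenset(source)) | frozenset(target)
                break                          # rule executed; rescan the rules
        else:
            return False                       # no rule matches: invalid syntax
    raise RuntimeError("rewriting did not terminate")

# Toy grammar: a start -> task -> end chain reduces to the minimal graph.
rules = [({("start", "task"), ("task", "end")}, {("start", "end")})]
minimal_valid = {("start", "end")}
print(validate({("start", "task"), ("task", "end")}, rules, minimal_valid))   # True
print(validate({("start", "task")}, rules, minimal_valid))                    # False
```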
1. A method of configuring a superconducting router, the method comprising: operating the superconducting router in a first mode, wherein ports are configured to be in reflection in the first mode in order to reflect a signal; and operating the superconducting router in a second mode, wherein a given pair of the ports is connected together and in transmission in the second mode, such that the signal is permitted to pass between the given pair of the ports. 2. The method of claim 1, wherein the ports are in reflection in the first mode for a predefined frequency of the signal. 3. The method of claim 1, wherein the given pair of the ports are in transmission in the second mode for a predefined frequency. 4. The method of claim 1, wherein the ports are configured to be isolated from one another, such that any port is controllable to be separated from another port. 5. The method of claim 1, wherein the superconducting router comprises superconducting materials. 6. The method of claim 1, wherein the superconducting router comprises tunable filters associated with the ports, such that the tunable filters are controllable to operate in the first mode and the second mode. 7. The method of claim 6, wherein the tunable filters are configured to be operated as an open switch and a closed switch according to the first and second modes. 8. The method of claim 1, wherein in the second mode, the given pair of the ports is in transmission while other ports of the ports are in reflection for a predefined frequency of the signal. 9. The method of claim 1, wherein the signal is in a microwave domain. 10. The method of claim 1, wherein the given pair of the ports are time dependent, such that a selection of the given pair of the ports are configured to change according to a defined time scheme. 11. The method of claim 1, wherein the given pair of the ports are configured to be arbitrarily selected from the ports. 12. The method of claim 1, wherein the given pair of the ports are configured to be selected from the ports at a defined time. 13. The method of claim 1, wherein the superconducting router is a lossless microwave switch having superconducting materials. 14. A method of configuring a superconducting circulator, the method comprising: operating the superconducting circulator to receive a readout signal at an input port while an output port is in reflection, wherein the readout signal is to be transmitted through a common port to a quantum system, wherein the readout signal is configured to cause a reflected readout signal to resonate back from the quantum system; and operating the superconducting circulator to output the reflected readout signal at the output port while the input port is in reflection. 15. The method of claim 14, wherein a delay line delays transmission of the readout signal and the reflected readout signal. 16. The method of claim 15, further comprising providing a transition time to switch between operating the input port in transmission to operating the input port in reflection, wherein the delay line causes the transition time. 17. The method of claim 14, wherein the quantum system includes a readout resonator operatively connected to a qubit. 18. The method of claim 17, wherein the reflected readout signal includes quantum information of the qubit. 19. 
The method of claim 14, wherein a first tunable filter is connected to the input port, such that the first tunable filter permits the input port to operate in transmission or reflection; and wherein a second tunable filter is connected to the output port, such that the second tunable filter permits the output port to operate in transmission or reflection. 20. The method of claim 14, wherein the circulator is a lossless microwave switch having superconducting materials. 21. A superconducting router comprising: ports configured to operate in a first mode and a second mode, wherein in the first mode the ports are configured to be in reflection in order to reflect a signal; and a given pair of the ports configured to operate in the second mode, wherein in the second mode the given pair of the ports is connected together and in transmission, such that the signal is permitted to pass between the given pair of the ports. 22. The superconducting router of claim 21, wherein the ports are in reflection in the first mode for a predefined frequency of the signal; and wherein the given pair of the ports is in transmission in the second mode for the predefined frequency. 23. The superconducting router of claim 21, wherein tunable filters are associated with the ports, such that the tunable filters are controllable to operate in the first mode and the second mode. 24. A superconducting circulator comprising: an input port connected to a first tunable filter such that the input port is configured to operate in a first mode and a second mode; and an output port connected to a second tunable filter such that the output port is configured to operate in the first mode and the second mode, wherein in the first mode the input port is configured to receive a readout signal while the output port is in reflection, wherein the readout signal is to be transmitted through a common port to a quantum system, wherein the readout signal is configured to cause a reflected readout signal to resonate back from the quantum system, wherein in the second mode the output port is configured to output the reflected readout signal while the input port is in reflection. 25. A system comprising: a quantum system; and a superconducting microwave switch connected to the quantum system, wherein the superconducting microwave switch is configured to receive a readout signal at an input port, wherein the readout signal is to be transmitted through a common port to the quantum system, wherein the superconducting microwave switch is configured to output a reflected readout signal at an output port, the reflected readout signal being from the quantum system. | Please help me write a proper abstract based on the patent claims. | A technique relates to configuring a superconducting router. The superconducting router is operated in a first mode. Ports are configured to be in reflection in the first mode in order to reflect a signal. The superconducting router is operated in a second mode. A given pair of the ports is connected together and in transmission in the second mode, such that the signal is permitted to pass between the given pair of the ports. |
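The mode logic in claims 1-2, 14, and 21-25 is hardware behavior (tunable superconducting filters switching ports between reflection and transmission), but the bookkeeping itself is simple: all ports reflect in the first mode, and a selected pair transmits in the second. The tiny state-machine sketch below is purely hypothetical control logic with made-up names; it does not model any microwave physics.

```python
# Hypothetical control-logic sketch only: all ports reflect in the first mode;
# in the second mode a selected pair of ports is placed in transmission.
class RouterControl:
    def __init__(self, ports):
        self.state = {p: "reflect" for p in ports}     # first mode: every port reflects

    def connect(self, a, b):
        """Second mode: only the selected pair of ports is in transmission."""
        for p in self.state:
            self.state[p] = "reflect"
        self.state[a] = self.state[b] = "transmit"

router = RouterControl(["input", "output", "common"])
router.connect("input", "common")    # readout signal heads toward the quantum system
# ...the delay line of claims 15-16 provides the transition time...
router.connect("output", "common")   # reflected readout signal is routed to the output port
```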
1-5. (canceled) 6. A computer implemented method for recurrent data processing, comprising the steps of: computing activity of multiple layers of hidden layer nodes in a feed forward neural network, given an input data instance, forming memories of hidden layer activities, utilizing clustering and filtering methods, as a training phase in a recurrent processing, finding memories that are closest to the presented test data instance according to the class decision of the feedforward network, and imputing the test data hidden layer activity with computed closest memories in an iterative fashion, wherein the step of forming memories of hidden layer activities, utilizing clustering and filtering methods, as a training phase in a recurrent processing further comprises the substeps of: computing hidden layer activities of every training data instance, then low-pass filtering and stacking the hidden layer activities in a data structure, keeping a first and second hidden layer activity memory, indexed with the class label, forming both class specific and class independent cluster centers as quantized memories of training data's second hidden layer activity, via k-means clustering, using each class data separately; or using all the data together depending on a choice of class specificity, keeping quantized second hidden layer memories, indexed with class labels or non-indexed, depending on the class specificity choice, training a cascade of classifiers for enabling multiple hypotheses generation of a network, via utilizing a subset of the data as the training data, keeping a classifier memory, indexed with the set of data used during training, wherein the step of finding memories that are closest to the presented test data instance according to the class decision of the feedforward network, and imputing the test data hidden layer activity with computed closest memories in an iterative fashion further comprises the substeps of determining the first, second and third class label choices of the neural network as multiple hypotheses, via a cascaded procedure utilizing a sequence of classifier decisions, computing a set of candidate samples for the second layer, that are closest (Euclidean) hidden layer memories to the test data's second hidden layer activity, using the multiple hypotheses class decisions of the network and a corresponding memory database; then assigning the second hidden layer sample as one of the candidate hidden layer memories, via max or averaging operations depending on the choice of multi-hypotheses competition, merging the second hidden layer sample with the test data hidden layer activity via weighted averaging operation, creating an updated second hidden layer activity, using the updated second hidden layer activity to compute the closest (Euclidean) first hidden layer memory, and assigning as the first hidden layer sample, merging the first hidden layer sample with the test data first hidden layer activity via weighted averaging operation, creating an updated first hidden layer activity, computing the feedforward second hidden layer activity from updated first hidden layer activity, and merging this feedforward second hidden layer activity with updated second hidden layer activity, via weighted averaging operation, repeating these steps for multiple iterations starting from the step of determining the first, second and third class label choices of the neural network as multiple hypotheses, via a cascaded procedure utilizing a sequence of classifier decisions, and using the
output of the step of computing the feedforward second hidden layer activity from updated first hidden layer activity, and merging this feedforward second hidden layer activity with updated second hidden layer activity, via weighted averaging operation in the beginning of the next iteration. 7. A computer implemented method according to claim 6 for enabling a feedforward network to mimic a recurrent neural network via making a class decision at the output layer of the feedforward neural network, and selecting an appropriate memory to estimate the selected model's (class decision) hidden layer activities, then inserting the selected memory into the hidden layer activity as if it were feedback from a higher layer network in classical recurrent networks. | Please help me write a proper abstract based on the patent claims. | Recurrent neural networks are powerful tools for handling incomplete data problems in machine learning thanks to their significant generative capabilities. However, the computational demand for algorithms to work in real time applications requires specialized hardware and software solutions. We disclose a method for adding recurrent processing capabilities into a feedforward network without sacrificing much from computational efficiency. We assume a mixture model and generate samples of the last hidden layer according to the class decisions of the output layer, modify the hidden layer activity using the samples, and propagate to lower layers. For an incomplete data problem, the iterative procedure emulates a feedforward-feedback loop, filling in the missing hidden layer activity with meaningful representations.
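Claim 6 describes a training phase that stores (filtered) hidden-layer activities and class-indexed k-means centroids of the second hidden layer, and a test phase that repeatedly blends the closest memories into the test instance's hidden activities via weighted averaging. The numpy sketch below follows that loop with a toy two-layer network; the layer sizes, blending weight, and iteration count are assumptions, and the low-pass filtering, cascaded multi-hypothesis classifiers, and max/averaging competition from the claim are omitted for brevity.

```python
# Numpy sketch of the training-and-imputation loop of claim 6, on a toy
# two-layer network with random weights (an illustration, not the patented
# system).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
W1, W2, W_out = rng.normal(size=(10, 8)), rng.normal(size=(8, 6)), rng.normal(size=(6, 3))
relu = lambda a: np.maximum(a, 0.0)

def forward(x):
    h1 = relu(x @ W1)
    h2 = relu(h1 @ W2)
    return h1, h2, int(np.argmax(h2 @ W_out))

# Training phase: store hidden-layer activities and class-indexed k-means
# centroids of the second hidden layer as quantized memories.
X_train = rng.normal(size=(300, 10))
h1_mem, h2_mem, labels = map(np.array, zip(*(forward(x) for x in X_train)))
centroids = {c: KMeans(n_clusters=min(4, int((labels == c).sum())), n_init=10)
                .fit(h2_mem[labels == c]).cluster_centers_
             for c in np.unique(labels)}

def impute(x, n_iter=3, alpha=0.5):
    """Blend the hidden activities of a (possibly corrupted) input with memories."""
    h1, h2, _ = forward(x)
    for _ in range(n_iter):
        c = int(np.argmax(h2 @ W_out))                                  # class hypothesis
        cand = centroids.get(c, h2_mem)                                 # fall back to all memories
        sample2 = cand[np.argmin(np.linalg.norm(cand - h2, axis=1))]    # closest h2 memory
        h2 = alpha * h2 + (1 - alpha) * sample2                         # merge (weighted average)
        nearest = int(np.argmin(np.linalg.norm(h2_mem - h2, axis=1)))   # closest stored pair
        h1 = alpha * h1 + (1 - alpha) * h1_mem[nearest]                 # merge h1 with its memory
        h2 = alpha * h2 + (1 - alpha) * relu(h1 @ W2)                   # blend in feedforward h2
    return int(np.argmax(h2 @ W_out))

prediction = impute(rng.normal(size=10))
```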
1. A method that improves the training of predictive models, comprising: converting and transforming a variety of inconsistent and incoherent supervised and unsupervised training data for predictive models received by a network server as electronic data files, and storing that in a computer data storage mechanism, and then into another single, error-free, uniformly formatted record file stored in the computer data storage mechanism with an apparatus for executing a data integrity analysis algorithm that harmonizes a range of supervised and unsupervised training data into flat-data records in which every field of every record file is modified to be coherent and well-populated with information; comparing and correcting any data values in each data field in the inconsistent and incoherent supervised and unsupervised training data according to a user-service consumer preference and a predefined data dictionary of valid data values with an apparatus for executing an algorithm that substitutes data values in the data fields of incoming supervised and unsupervised training data with at least one value representing a minimum, a maximum, a null, an average, and a default; discerning the context of any text included in the inconsistent and incoherent supervised and unsupervised training data with an apparatus for executing a contextual dictionary algorithm that employs a thesaurus of alternative contexts of ambiguous words for find a common context denominator, and to then record the context determined into the computer data storage mechanism for later access by a predictive model; cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for executing an algorithm for cleaning up raw data in stored data records, field-by-field, record-by-record in which some types of fields are restricted in what is legal or allowed, and includes fetching raw data from the computer data storage mechanism and testing each field if a data value reported is numeric or symbolic, and if numeric, a data dictionary is used to see if such data value is previously listed as valid, and if symbolic, using another data dictionary to see if such data value is listed there as valid; cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for executing a Smith-Waterman algorithm for a local-sequence alignment and to determine if there are any similar regions between two strings or sequences, and in which a consistent, coherent terminology is then enforceable in each data field without data loss, and in which the Smith-Waterman algorithm compares segments of all possible lengths and optimizes a similarity measure without looking at any total sequence; cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for replacing a numeric value, wherein a numeric value to use as a replacement depends on any flags or preferences that were set to use a default, the average, a minimum, a maximum, or a null; sampling cleaned, raw-data from the flat-data records in the computer data storage mechanism with an apparatus for executing an algorithm that tests if data are supervised, and if so, that creates a plurality of individual data sets for each class with a stratified selection as needed, and then testing if a selected class is abnormal or uncharacteristic, and if not, down-sampling and producing sampled records of the classes and splitting any remaining data 
into separate training sets, separate test sets, and separate blind sets all then stored in the computer data storage mechanism for later use in subsequent steps to train a predictive model; if the test for each record of each class in supervised data is abnormal or uncharacteristic, then skipping a down-sampling for that instance; and if in a previous step the cleaned, raw-data from the flat-data records in the computer data storage mechanism was determined by the apparatus for executing an algorithm that tests if data are supervised are, in fact, unsupervised, then down-sampling all records and splitting a remaining a sampled record data into a separate a training set, a separate test set, and a separate blind set for later use in subsequent steps to train a predictive model. 2. A method that improves the training of predictive models, comprising: converting and transforming a variety of inconsistent and incoherent supervised and unsupervised training data for predictive models received by a network server as electronic data files, and storing that in a computer data storage mechanism, and then into another single, error-free, uniformly formatted record file stored in the computer data storage mechanism with an apparatus for executing a data integrity analysis algorithm that harmonizes a range of supervised and unsupervised training data into flat-data records in which every field of every record file is modified to be coherent and well-populated with information. 3. The method of claim 2, further comprising: comparing and correcting any data values in each data field in the inconsistent and incoherent supervised and unsupervised training data according to a user-service consumer preference and a predefined data dictionary of valid data values with an apparatus for executing an algorithm that substitutes data values in the data fields of incoming supervised and unsupervised training data with at least one value representing a minimum, a maximum, a null, an average, and a default. 4. The method of claim 2, further comprising: discerning the context of any text included in the inconsistent and incoherent supervised and unsupervised training data with an apparatus for executing a contextual dictionary algorithm that employs a thesaurus of alternative contexts of ambiguous words for find a common context denominator, and to then record the context determined into the computer data storage mechanism for later access by a predictive model. 5. The method of claim 2, further comprising: cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for executing an algorithm for cleaning up raw data in stored data records, field-by-field, record-by-record in which some types of fields are restricted in what is legal or allowed, and includes fetching raw data from the computer data storage mechanism and testing each field if a data value reported is numeric or symbolic, and if numeric, a data dictionary is used to see if such data value is previously listed as valid, and if symbolic, using another data dictionary to see if such data value is listed there as valid. 6. 
The method of claim 2, further comprising: cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for executing a Smith-Waterman algorithm for a local-sequence alignment and to determine if there are any similar regions between two strings or sequences, and in which a consistent, coherent terminology is then enforceable in each data field without data loss, and in which the Smith-Waterman algorithm compares segments of all possible lengths and optimizes a similarity measure without looking at any total sequence. 7. The method of claim 2, further comprising: cleaning up inconsistent, missing, and illegal data in each data field by removal or reconstitution with an apparatus for replacing a numeric value, wherein a numeric value to use as a replacement depends on any flags or preferences that were set to use a default, the average, a minimum, a maximum, or a null. 8. The method of claim 2, further comprising: sampling cleaned, raw-data from the flat-data records in the computer data storage mechanism with an apparatus for executing an algorithm that tests if data are supervised, and if so, that creates a plurality of individual data sets for each class with a stratified selection as needed, and then testing if a selected class is abnormal or uncharacteristic, and if not, down-sampling and producing sampled records of the classes and splitting any remaining data into separate training sets, separate test sets, and separate blind sets all then stored in the computer data storage mechanism for later use in subsequent steps to train a predictive model; and if the test for each record of each class in supervised data is abnormal or uncharacteristic, then skipping a down-sampling for that instance. 9. The method of claim 8, further comprising: if in a previous step the cleaned, raw-data from the flat-data records in the computer data storage mechanism was determined by the apparatus for executing an algorithm that tests if data are supervised are, in fact, unsupervised, then down-sampling all records and splitting a remaining a sampled record data into a separate a training set, a separate test set, and a separate blind set for later use in subsequent steps to train a predictive model. | Please help me write a proper abstract based on the patent claims. | A method that improves the training of predictive models. Better trained predictive models make better predictions, and can classify transactions with reduced levels of false positives and false negative. Included is an apparatus for executing a data clean-up algorithm that harmonizes a wide range of real world supervised and unsupervised training data into a single, error-free, uniformly formatted record file that has every field coherent and well populated with information. |
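A small Python sketch of two of the clean-up substeps in the claims above: substituting out-of-dictionary field values according to a configured preference (minimum, maximum, null, average, or default), and splitting sampled records into training, test, and blind sets. It is a simplified stand-in, not the claimed apparatus; the dictionary layout, function names, and split fractions are assumptions, and the stratified down-sampling and Smith-Waterman steps are omitted for brevity.

```python
import random

# assumed toy data dictionary: numeric fields carry min/max, symbolic fields a valid set
DATA_DICTIONARY = {"age": {"min": 0, "max": 120}, "state": {"valid": {"CA", "NY", "TX"}}}

def clean_field(name, value, policy="default", default=None, average=None):
    spec = DATA_DICTIONARY.get(name, {})
    if isinstance(value, (int, float)):                      # numeric field
        if "min" in spec and spec["min"] <= value <= spec["max"]:
            return value
    elif value in spec.get("valid", set()):                  # symbolic field
        return value
    # illegal or missing value: replace according to the configured preference
    return {"default": default, "average": average,
            "min": spec.get("min"), "max": spec.get("max"), "null": None}[policy]

def split_records(records, frac_train=0.6, frac_test=0.2, seed=0):
    """Shuffle and split cleaned records into train / test / blind sets."""
    rng = random.Random(seed)
    recs = records[:]
    rng.shuffle(recs)
    n = len(recs)
    a, b = int(n * frac_train), int(n * (frac_train + frac_test))
    return recs[:a], recs[a:b], recs[b:]                     # blind set is the remainder

print(clean_field("age", 999, policy="max"))                 # -> 120
train, test, blind = split_records(list(range(10)))
print(len(train), len(test), len(blind))
```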
1. A method comprising: maintaining neural data for an electronic neuron; receiving an incoming neuronal firing event targeting the neuron; in response to the incoming neuronal firing event received, updating a membrane potential variable of the neuron based on the neural data; and generating an outgoing neuronal firing event in response to the neuron spiking, wherein the neuron spikes at an average firing rate proportional to a relative magnitude of the membrane potential variable. 2. The method of claim 1, further comprising: in response to the neuron spiking, re-setting the membrane potential variable to one of the following values: a zero value, a stored reset value, a computed reset value based on a threshold value, or a non-reset value. 3. The method of claim 1, wherein the neural data comprises at least one of the following: a leak weight for the neuron, a synaptic weight, or a threshold value for the membrane potential variable. 4. The method of claim 1, wherein the neuron spikes in response to the membrane potential variable meeting or exceeding a threshold value for the membrane potential variable. 5. A system comprising a computer processor, a computer-readable hardware storage medium, and program code embodied with the computer-readable hardware storage medium for execution by the computer processor to implement a method comprising: maintaining neural data for an electronic neuron; receiving an incoming neuronal firing event targeting the neuron; in response to the incoming neuronal firing event received, updating a membrane potential variable of the neuron based on the neural data; and generating an outgoing neuronal firing event in response to the neuron spiking, wherein the neuron spikes at an average firing rate proportional to a relative magnitude of the membrane potential variable. 6. The system of claim 5, further comprising: in response to the neuron spiking, re-setting the membrane potential variable to one of the following values: a zero value, a stored reset value, a computed reset value based on a threshold value, or a non-reset value. 7. The system of claim 5, wherein the neural data comprises at least one of the following: a leak weight for the neuron, a synaptic weight, or a threshold value for the membrane potential variable. 8. The system of claim 5, wherein the neuron spikes in response to the membrane potential variable meeting or exceeding a threshold value for the membrane potential variable. 9. A computer program product comprising a computer-readable hardware storage device having program code embodied therewith, the program code being executable by a computer to implement a method comprising: maintaining neural data for an electronic neuron; receiving an incoming neuronal firing event targeting the neuron; in response to the incoming neuronal firing event received, updating a membrane potential variable of the neuron based on the neural data; and generating an outgoing neuronal firing event in response to the neuron spiking, wherein the neuron spikes at an average firing rate proportional to a relative magnitude of the membrane potential variable. 10. The computer program product of claim 9, further comprising: in response to the neuron spiking, re-setting the membrane potential variable to one of the following values: a zero value, a stored reset value, a computed reset value based on a threshold value, or a non-reset value. 11. 
The computer program product of claim 9, wherein the neural data comprises at least one of the following: a leak weight for the neuron, a synaptic weight, or a threshold value for the membrane potential variable. 12. The computer program product of claim 9, wherein the neuron spikes in response to the membrane potential variable meeting or exceeding a threshold value for the membrane potential variable. | Please help me write a proper abstract based on the patent claims. | One embodiment provides a system comprising a memory device for maintaining deterministic neural data relating to a digital neuron and a logic circuit for deterministic neural computation and stochastic neural computation. Deterministic neural computation comprises processing a neuronal state of the neuron based on the deterministic neural data maintained. Stochastic neural computation comprises generating stochastic neural data relating to the neuron and processing the neuronal state of the neuron based on the stochastic neural data generated. |
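A toy software model of the neuron behaviour in claims 1–4 above: on each incoming firing event the membrane potential is updated from the stored neural data (leak and synaptic weight), and a spike is drawn stochastically with probability proportional to the potential relative to its threshold. This is only an illustrative sketch, not IBM's implementation; the class and parameter names are assumptions.

```python
import random

class StochasticNeuron:
    def __init__(self, threshold=10.0, leak=0.1, reset="zero", seed=0):
        self.v = 0.0                      # membrane potential variable
        self.threshold = threshold
        self.leak = leak                  # leak weight applied on every event
        self.reset = reset                # "zero", a stored reset value, or "none"
        self._rng = random.Random(seed)

    def receive(self, synaptic_weight):
        """Handle one incoming firing event; return True if an outgoing event fires."""
        self.v = max(0.0, self.v - self.leak) + synaptic_weight
        p_fire = min(1.0, max(0.0, self.v / self.threshold))   # rate proportional to relative V
        if self._rng.random() < p_fire:
            if self.reset == "zero":
                self.v = 0.0
            elif self.reset != "none":    # stored / computed reset value
                self.v = float(self.reset)
            return True
        return False

n = StochasticNeuron()
spikes = sum(n.receive(synaptic_weight=3.0) for _ in range(100))
print("spikes out of 100 events:", spikes)
```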
1. A two-terminal resistive processing unit (RPU) comprising: a first terminal; a second terminal; and an active region having a conduction state; wherein the conduction state identifies a weight of a training methodology applied to the RPU; wherein the active region is configured to locally perform a data storage operation of the training methodology; and wherein the active region is further configured to locally perform a data processing operation of the training methodology. 2. The two-terminal RPU of claim 1, wherein the data storage operation comprises a change in the conduction state that is based at least in part on a result of the data processing operation. 3. The two-terminal RPU of claim 2, wherein the change in the conduction state comprises a non-linear change based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal. 4. The two-terminal RPU of claim 3, wherein: the active region is further configured to locally perform the data storage operation of the training methodology based at least in part on the non-linear change in the conduction state; and the active region is further configured to locally perform the data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. 5. The two-terminal RPU of claim 1, wherein the training methodology comprises at least one of: an online neural network training; a matrix inversion; and a matrix decomposition. 6. A two-terminal resistive processing unit (RPU) comprising: a first terminal; a second terminal; and an active region having a conduction state; wherein the active region is configured to effect a non-linear change in the conduction state based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal; wherein the active region is further configured to locally perform a data storage operation of a training methodology based at least in part on the non-linear change in the conduction state; and wherein the active region is further configured to locally perform a data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. 7. The two-terminal RPU of claim 6, wherein the encoding of the at least one first encoded signal and the at least one second encoded signal comprises a stochastic sequence of pulses. 8. The two-terminal RPU of claim 6, wherein the encoding of the at least one first encoded signal and the at least one second encoded signal comprises a magnitude modulation. 9. The two-terminal RPU of claim 6, wherein the non-linear change comprises a rectifying non-linear change or a saturating non-linear change. 10. The two-terminal RPU of claim 6, wherein the non-linear change comprises an exponential non-linear change. 11. A trainable crossbar array comprising: a set of conductive row wires; a set of conductive column wires configured to form a plurality of crosspoints at intersections between the set of conductive row wires and the set of conductive column wires; and a two-terminal resistive processing unit (RPU) at each of the plurality of crosspoints; wherein the RPU is configured to locally perform a data storage operation of a training methodology applied to the trainable crossbar array; wherein the RPU is further configured to locally perform a data processing operation of the training methodology. 12. 
The array of claim 11, wherein: the two-terminal RPU comprises a first terminal, a second terminal and an active region having a conduction state; and the conduction state identifies a weight of the training methodology applied to the RPU. 13. The array of claim 12, wherein: the data storage operation comprises a change in the conduction state that is based at least in part on a result of the data processing operation; and the change in the conduction state comprises a non-linear change based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal. 14. The array of claim 13, wherein: the active region is further configured to locally perform the data storage operation of the training methodology based at least in part on the non-linear change in the conduction state; and the active region is further configured to locally perform the data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. 15. The array of claim 11, wherein the training methodology comprises at least one of: an online neural network training; a matrix inversion; and a matrix decomposition. 16-25. (canceled) | Please help me write a proper abstract based on the patent claims. | Embodiments are directed to a two-terminal resistive processing unit (RPU) having a first terminal, a second terminal and an active region. The active region effects a non-linear change in a conduction state of the active region based on at least one first encoded signal applied to the first terminal and at least one second encoded signal applied to the second terminal. The active region is configured to locally perform a data storage operation of a training methodology based at least in part on the non-linear change in the conduction state. The active region is further configured to locally perform a data processing operation of the training methodology based at least in part on the non-linear change in the conduction state. |
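An illustrative sketch of the stochastic-pulse update idea behind the RPU claims above: row and column values are encoded as random pulse trains, a crosspoint weight changes only where pulses coincide, so the expected update approximates the outer product needed for training, and clipping stands in for the device's saturating non-linearity. This is a software toy, not the claimed hardware; all names and constants are assumptions.

```python
import numpy as np

def stochastic_update(weights, x, delta, n_pulses=100, step=0.01, w_max=1.0, rng=None):
    """Approximate the outer-product update W += x (outer) delta via pulse coincidence."""
    rng = rng or np.random.default_rng(0)
    for _ in range(n_pulses):
        row_pulses = rng.random(x.shape) < np.abs(x)          # stochastic row encoding
        col_pulses = rng.random(delta.shape) < np.abs(delta)  # stochastic column encoding
        coincidence = np.outer(row_pulses, col_pulses)        # update only where both fire
        sign = np.outer(np.sign(x), np.sign(delta))
        weights += step * sign * coincidence
        np.clip(weights, -w_max, w_max, out=weights)          # saturating non-linearity
    return weights

W = np.zeros((3, 2))
x = np.array([0.8, 0.2, 0.5])        # values in [0, 1] for the pulse encoding
d = np.array([0.6, 0.3])
W = stochastic_update(W, x, d)
print(W)                              # approximately equal to np.outer(x, d)
```

With 100 pulses and a step of 0.01 the expected accumulated change per crosspoint is x_i times delta_j, which is why the printed matrix is close to the exact outer product.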
1. A decision support system configured to provide single source and centralized decision making, the decision support system comprising: one or more processors configured to execute computer program modules, the computer program modules comprising: a content harvesting module configured to receive persistent content; a plausibility scoring module configured to perform hypothesis validation and refutation functions and generate a plausibility scoring value; a potentiality scoring module configured to set confidence thresholds for decision making and generate a potentiality scoring value; and a decision determination module configured to adjudicate the potentiality scoring value and the plausibility scoring value as against threshold values and render a decision based thereon. 2. The decision support system of claim 1 wherein said persistent content comprises nouns. 3. The decision support system of claim 1 wherein said persistent content comprises noun based phrases. 4. The decision support system of claim 1 wherein said plausibility scoring value is determined based upon a confidence level related to whether or not said content includes sufficient information to identify said content as well as said content's association with a specified ontology. 5. The decision support system of claim 1 wherein said content is represented as feature vector elements. 6. The decision support system of claim 1 further comprising a reference data storage module, said reference data storage module storing reference data which is matched as against said persistent content. 7. The decision support system of claim 6 wherein said reference data is stored in the form of feature vector elements. 8. The decision support system of claim 1 wherein said plausibility scoring module generates said plausibility scoring value by employing at least one copula function to identify and model applicable dependence structures. 9. The decision support system of claim 1 wherein said potentiality scoring module generates said potentiality scoring value by employing at least one copula function to identify and model applicable dependence structures. 10. The decision support system of claim 1 wherein said decision represents a patient eligibility determination. 11. A computer-implemented method of providing decision support, the method being implemented in a computer system comprising one or more processors configured to execute computer program modules, the method comprising: receiving persistent content; performing hypothesis validation and refutation functions and generating a plausibility scoring value; setting confidence thresholds for decision making and generating a potentiality scoring value; and adjudicating the potentiality scoring value and the plausibility scoring value as against threshold values and rendering a decision based thereon. 12. The method of claim 11 wherein said persistent content comprises nouns. 13. The method of claim 11 wherein said persistent content comprises noun based phrases. 14. The method of claim 11 further comprising the step of determining said plausibility scoring value based upon a confidence level related to whether or not said content includes sufficient information to identify said content as well as said content's association with a specified ontology. 15. The method of claim 11 wherein said content is represented as feature vector elements. 16. The method of claim 11 further comprising the step of storing reference data which is matched as against said persistent content. 17. 
The method of claim 16 wherein said reference data is stored in the form of feature vector elements. 18. The method of claim 11 wherein said plausibility scoring module generates said plausibility scoring value by employing at least one copula function to identify and model applicable dependence structures. 19. The method of claim 11 wherein said potentiality scoring module generates said potentiality scoring value by employing at least one copula function to identify and model applicable dependence structures. 20. The method of claim 11 wherein said decision represents a patient eligibility determination. | Please help me write a proper abstract based on the patent claims. | A system and method configured to provide persistent evidence based multi-ontology context dependent decision support, eligibility assessment and feature scoring. Decisions are achieved via a probabilistic functional extension of both potentiality and plausibility towards nouns in all data forms. Plausibility refers to the full set of values garnered by the evidence accumulation process while potentiality is a mechanism to set the various match threshold values. The thresholds define acceptable confidence levels for decision-making and wherein both plausibility and potentiality are implemented through statistical applications which model and estimate the distribution of random vectors by estimating margins and copula separately from all data types. Evidence is filtered by margins and copula on a persistent basis from the scoring of newly harvested content and refined results are computed on the basis of partial matching of feature vector elements for separate and distinct feature weightings associated with the given entity and each of the reference entities within the compressed copula. |
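A heavily simplified Python sketch of the adjudication step described in the claims above: a plausibility score from partial matching of feature-vector elements against reference data, a potentiality value, and a decision that requires both to clear their thresholds. The copula-based dependence modelling of the claims is replaced here by a plain weighted match for brevity; every name, weighting choice, and threshold is an assumption.

```python
import numpy as np

def plausibility(feature_vec, reference_vec, weights=None):
    """Weighted fraction of matching feature-vector elements."""
    matches = np.asarray(feature_vec) == np.asarray(reference_vec)
    weights = np.ones(len(matches)) if weights is None else np.asarray(weights)
    return float(np.dot(matches, weights) / weights.sum())

def decide(feature_vec, reference_vec, potentiality=0.8,
           plausibility_threshold=0.6, potentiality_threshold=0.7):
    p_score = plausibility(feature_vec, reference_vec)
    eligible = (p_score >= plausibility_threshold
                and potentiality >= potentiality_threshold)
    return {"plausibility": p_score, "potentiality": potentiality, "eligible": eligible}

candidate = ["diabetes", "age_40_60", "female", "insulin"]
reference = ["diabetes", "age_40_60", "male", "insulin"]
print(decide(candidate, reference))   # 3 of 4 elements match -> plausibility 0.75 -> eligible
```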
1. A computer-implemented method of determining an approximated value of a parameter in a first domain, the parameter being dependent on one or more variables which vary in a second domain, and the parameter being determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain, the method being implemented on a computer system including a processor, and the method comprising: determining a plurality of anchor points in the second domain, wherein each anchor point comprises a set of values of the one or more variables in the second domain; evaluating, at each anchor point, the function to generate corresponding values of the parameter in the first domain; generating an approximation function to the function by fitting a series of orthogonal functions or an approximation to a series of orthogonal functions to the corresponding values of the parameter in the first domain; and using the approximation function to generate the approximated value of the parameter in the first domain. 2. The computer-implemented method of claim 1, further comprising: transforming the function into an intermediary function before the generating step, wherein the intermediary function is a linear transformation of the function, and wherein the generating step comprises approximating the intermediary function. 3. The computer-implemented method of claim 2, wherein the transforming step comprises linearly transforming the function into a predetermined range. 4. The computer-implemented method of claim 1, wherein the series of orthogonal functions is one of the group comprising: sines, cosines, sines and cosines, Bessel functions, Gegenbauer polynomials, Hermite polynomials, Laguerre polynomials, Chebyshev polynomials, Jacobi polynomials, Spherical harmonics, Walsh functions, Legendre polynomials, Zernike polynomials, Wilson polynomials, Meixner-Pollaczek polynomials, continuous Hahn polynomials, continuous dual Hahn polynomials, a classical polynomials described by the Askey scheme, Askey-Wilson polynomials, Racah polynomials, dual Hahn polynomials, Meixner polynomials, piecewise constant interpolants, linear interpolants, polynomial interpolants, gaussian process based interpolants, spline interpolants, barycentric interpolants, Krawtchouk polynomials, Charlier polynomials, sieved ultraspherical polynomials, sieved Jacobi polynomials, sieved Pollaczek polynomials, Rational interpolants, Trigonometric interpolants, Hermite interpolants, Cubic interpolants, and Rogers-Szegö polynomials. 5. The computer-implemented method of claim 1, wherein the series of orthogonal functions is a series of orthogonal polynomials. 6. The computer-implemented method of claim 1, wherein the approximation to a series of orthogonal functions is an approximation to a series of orthogonal polynomials. 7. The computer-implemented method of claim 1, further comprising: selecting a series of orthogonal functions or an approximation to a series of orthogonal functions before the generating step, and wherein the generating step comprises using the selected series of orthogonal functions or the approximation to the series of orthogonal functions. 8. The computer-implemented method of claim 1, wherein the number of anchor points, NE, is so that: NS·T / (NE·T + NS·t) > 1, wherein NS is the number of scenarios, t is the time taken to run the approximation function, and T is the time taken to run the function. 9. 
The computer-implemented method of claim 1, further comprising: holding the values of all but one variable in the set to be constant in the evaluating step. 10. The computer-implemented method of claim 1, further comprising using the approximated values of the parameter to generate standard metrics. 11. The computer-implemented method of claim 1, wherein the values of the variables vary stochastically. 12. The computer-implemented method of claim 1, wherein the using step comprises using the approximation function a plurality of times to generate at least one scenario or one time step, each scenario or each time step comprising a plurality of approximated values. 13. The computer-implemented method of claim 12, wherein the number of scenarios or time steps is significantly greater than the number of anchor points. 14. The computer-implemented method of claim 1, wherein an output of the function is a parameter of a financial product. 15. The computer-implemented method of claim 14, wherein the financial product is a financial derivative including one or more of a group comprising: an option pricing function, a swap pricing function and a combination thereof. 16. The computer-implemented method of claim 1, wherein the function is one of the group comprising: a Black-Scholes model, a Longstaff-Schwartz model, a binomial options pricing model, a Black model, a Garman-Kohlhagen model, a Vanna-Volga method, a Chen model, a Merton's model, a Vasicek model, a Rendleman-Bartter model, a Cox-Ingersoll-Ross model, a Ho-Lee model, a Hull-White model, a Black-Derman-Toy model, a Black-Karasinski model, a Heston model, a Monte Carlo based pricing model, a binomial pricing model, a trinomial pricing model, a tree based pricing model, a finite-difference based pricing model, a Heath-Jarrow-Morton model, a variance gamma model, a Fuzzy pay-off method, a Single-index model, a Chepakovich valuation model, a Markov switching multifractal, a Datar Mathews method, and a Kalotay-Williams-Fabozzi model. 17. The computer-implemented method of claim 1, wherein the anchor points are the zeros of each orthogonal function, or a subset of the zeros. 18. The computer-implemented method of claim 17, wherein the approximating function is generated using an interpolation scheme. 19. The computer-implemented method of claim 18, wherein the interpolation scheme is one from the group comprising: piecewise constant interpolants, linear interpolants, polynomial interpolants, gaussian process based interpolants, spline interpolants, barycentric interpolants, Rational interpolants, Trigonometric interpolants, Hermite interpolants and Cubic interpolants. 20. The computer-implemented method of claim 1, wherein the anchor points are integration points of a numerical integration scheme. 21. The computer-implemented method of claim 20, wherein the numerical integration scheme is one from the group comprising: Newton-Cotes methods, a trapezoidal method, a Simpson's method, a Boole's method, a Romberg integration method, Gaussian quadrature methods, Clenshaw-Curtis methods, a Fejer method, a Gauss-Kronrod method, Fourier Transform methods, an Adaptive quadrature method, a Richardson extrapolation, a Monte Carlo and Quasi Monte Carlo method, a Markov chain Monte Carlo, a Metropolis Hastings algorithm, a Gibbs Sampling, and Fast Fourier Transform methods. 22. 
The computer-implemented method of claim 1, wherein the speed of the calculation is increased with no loss of accuracy in the standard metrics, and/or the accuracy of the standard metrics increase with no decrease in the speed of the calculation when using the approximation function. 23. The computer-implemented method of claim 1, wherein there is a difference between the approximated value of the parameter and the parameter in the 2nd significant figure when compared at the same point in the first domain. 24. The computer-implemented method of claim 1, wherein there is a difference between the approximated value of the parameter and the parameter in between the 4th and the 6th significant figure when compared at the same point in the first domain. 25. The computer-implemented method of claim 1, wherein there is a difference between the approximated value of the parameter and the parameter in the 15th significant figure when compared at the same point in the first domain. 26. A financial derivative comprising a parameter, wherein a value of the parameter is determined using the computer-implemented method of claim 1. 27. A computer system comprising a processor configured to determine an approximated value of a parameter in a first domain, the parameter being dependent on one or more variables which vary in a second domain, and the parameter being determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain, the computer system comprising: a determination module arranged to determine a plurality of anchor points in the second domain, wherein each anchor point comprises a set of values of the one or more variables in the second domain; an evaluation module arranged to evaluate, at each anchor point, the function to generate corresponding values of the parameter in the first domain; a generation module arranged to generate an approximation function to the function by fitting a series of orthogonal functions or an approximation to a series of orthogonal functions to the corresponding values of the parameter in the first domain; and an approximation module arranged to use the approximation function to generate the approximated value of the parameter in the first domain. 28. A computer-implemented method of determining an approximated value of a parameter in a first domain, the parameter being dependent on one or more variables which vary in a second domain, and the parameter being determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain, the method using a series of orthogonal functions or an approximation to a series of orthogonal functions to approximate the function. | Please help me write a proper abstract based on the patent claims. | A computer-implemented method of determining an approximated value of a parameter in a first domain is described. The parameter is dependent on one or more variables which vary in a second domain, and the parameter is determined by a function which relates sets of values of the one or more variables in the second domain to corresponding values in the first domain. 
The method is implemented on a computer system including a processor, and the method comprises: determining a plurality of anchor points in the second domain, wherein each anchor point comprises a set of values of the one or more variables in the second domain; evaluating, at each anchor point, the function to generate corresponding values of the parameter in the first domain; generating an approximation function to the function by fitting a series of orthogonal functions or an approximation to a series of orthogonal functions to the corresponding values of the parameter in the first domain; and using the approximation function to generate the approximated value of the parameter in the first domain. |
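A short Python sketch of one way to realise the claims above: evaluate an "expensive" pricing function at a handful of anchor points placed at Chebyshev nodes in the second domain (spot price), fit a Chebyshev series, and reuse the cheap approximation for many scenarios in the first domain (option value). Black-Scholes is used as the example function consistent with claim 16, but the strike, volatility, and rate figures, the domain, and the number of anchor points are arbitrary assumptions, and this is not the patented implementation.

```python
import math
import numpy as np
from numpy.polynomial import chebyshev as C

def bs_call(spot, strike=100.0, vol=0.2, rate=0.01, tau=1.0):
    """Plain Black-Scholes call price (the 'slow' function being approximated)."""
    d1 = (math.log(spot / strike) + (rate + 0.5 * vol**2) * tau) / (vol * math.sqrt(tau))
    d2 = d1 - vol * math.sqrt(tau)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return spot * N(d1) - strike * math.exp(-rate * tau) * N(d2)

lo, hi, n_anchor = 50.0, 150.0, 16
anchors = 0.5 * (hi + lo) + 0.5 * (hi - lo) * C.chebpts1(n_anchor)   # Chebyshev nodes mapped to [lo, hi]
values = [bs_call(s) for s in anchors]                               # evaluate the slow function once per anchor
approx = C.Chebyshev.fit(anchors, values, deg=n_anchor - 1, domain=[lo, hi])

scenarios = np.random.default_rng(0).uniform(lo, hi, 100_000)        # many cheap evaluations
fast = approx(scenarios)
exact = np.array([bs_call(s) for s in scenarios[:5]])
print(np.max(np.abs(fast[:5] - exact)))                              # small error for this smooth function
```

With 16 anchor evaluations of the slow function and 100,000 cheap evaluations of the fitted series, the claim-8 condition NS·T / (NE·T + NS·t) > 1 is comfortably satisfied whenever the approximation runs meaningfully faster than the original pricing function.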