1. A method for deep learning training, comprising: receiving candidate units for classification; and classifying the candidate units by soft labelling, wherein soft labelling provides at least one label comprising a plurality of possible values in a range between 0 and 1. 2. The method of claim 1, wherein the candidate units are detection bounding boxes within an image or phones of an input audio feature. 3. The method of claim 1, wherein the candidate units are detection bounding boxes, and wherein soft labelling comprises: providing a label of a class for a detection bounding box based at least partially on an overlap of the detection bounding box with a ground-truth bounding box for the class. 4. The method of claim 3, wherein providing a label of a class comprises: assigning a class label whose value is derived using the area of the overlap of the detection bounding box with the ground-truth bounding box for the class. 5. The method of claim 3, wherein providing a label of a class comprises: assigning a class label whose value is derived from a ratio involving the area of the overlap of the detection bounding box with the ground-truth bounding box for the class. 6. The method of claim 5, wherein assigning a class label comprises: calculating the ratio of the area of the overlap of the detection bounding box with the ground-truth bounding box for the class to the entire area of the detection bounding box. 7. The method of claim 3, wherein providing a label of a class is also based on one or more threshold values. 8. 
The method of claim 7, wherein providing a class label comprises: assigning a class label of 0 if a value based on the area of the overlap of the detection bounding box with the ground-truth bounding box for the class is below a first threshold value; assigning a class label of 1 if the value based on the area of the overlap of the detection bounding box with the ground-truth bounding box for the class is above a second threshold value and if the value based on the area of the overlap of the detection bounding box with the ground-truth bounding box for the class is the first threshold value, the second threshold value, or between the first and second threshold value, assigning a class label of the value based on the area of the overlap of the detection bounding box with the ground-truth bounding box for the class. 9. The method of claim 8, wherein the value based on the area of the overlap of the detection bounding box with a ground-truth bounding box for the class is a ratio of the area of the overlap of the detection bounding box with the ground-truth bounding box for the class to the entire area of the detection bounding box. 10. The method of claim 3, wherein providing a label of a class for a detection bounding box is also based on one or more confidence levels provided by a detection stage which also provided the detection bounding box. 11. The method of claim 3, wherein providing a label of a class for a detection bounding box comprises: providing a label of a first class for the detection bounding box based at least partially on the overlap of the detection bounding box with a ground-truth bounding box for the first class; and providing a label of a second class for the detection bounding box based at least partially on the overlap of the detection bounding box with a ground-truth bounding box for the second class. 12. 
The method of claim 11, wherein there is an overlap of the detection bounding box, the ground-truth bounding box for the first class, and the ground-truth bounding box for the second class, and wherein the first class label and the second class label are also based on the overlap of the detection bounding box, the ground-truth bounding box for the first class, and the ground-truth bounding box for the second class. 13. The method of claim 1, wherein the candidate units are phones of an input audio feature, and wherein soft labelling comprises: generating soft labels directly from classification scores from a probability model or neural network. 14. The method of claim 1, wherein the candidate units are phones of an input audio feature, and wherein soft labelling comprises: generating soft labels directly from classification scores from a hidden Markov Model (HMM), a Gaussian mixture model (GMM), or a pretrained neural network. 15. The method of claim 1, wherein the candidate units are phones of an input audio feature, and wherein soft labelling comprises: generating soft labels using maximum likelihood decoding, a distance metric, a soft output decoding algorithm, or a list decoding scheme. 16. An apparatus for deep learning training, comprising: one or more non-transitory computer-readable media; and at least one processor which, when executing instructions stored on one or more non-transitory computer readable media, performs the steps of: receiving candidate units for classification; and classifying the candidate units by soft labelling, wherein soft labelling provides at least one label comprising a plurality of possible values in a range between 0 and 1. 17. The apparatus of claim 16, wherein the candidate units are detection bounding boxes, and wherein soft labelling comprises: providing a label of a class for a detection bounding box based at least partially on an overlap of the detection bounding box with a ground-truth bounding box for the class. 18. 
The apparatus of claim 16, wherein the candidate units are phones of an input audio feature, and wherein soft labelling comprises: generating soft labels directly from classification scores from a probability model or neural network. 19. A method, comprising: manufacturing a chipset capable of deep learning training comprising: at least one processor which, when executing instructions stored on one or more non-transitory computer readable media, performs the steps of: receiving candidate units for classification; and classifying the candidate units by soft labelling, wherein soft labelling provides at least one label comprising a plurality of possible values in a range between 0 and 1; and the one or more non-transitory computer-readable media which store the instructions. 20. A method of testing an apparatus, comprising: testing whether the apparatus has at least one processor which, when executing instructions stored on one or more non-transitory computer readable media, performs deep learning training comprising the steps of: receiving candidate units for classification; and classifying the candidate units by soft labelling, wherein soft labelling provides at least one label comprising a plurality of possible values in a range between 0 and 1; and testing whether the apparatus has the one or more non-transitory computer-readable media which store the instructions.
Please help me write a proper abstract based on the patent claims.
Apparatuses and methods of manufacturing same, systems, and methods for training deep learning machines are described. In one aspect, candidate units, such as detection bounding boxes in images or phones of an input audio feature, are classified using soft labelling, where at least one label has a range of possible values between 0 and 1 based, in the case of images, on the overlap of a detection bounding box and one or more ground-truth bounding boxes for one or more classes.
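The thresholded overlap-ratio labelling of claims 6 through 9 reduces to a few lines of code. Below is a minimal sketch; the box representation, the function name, and the two threshold values are illustrative assumptions, not taken from the claims:

```python
def soft_label(det_box, gt_box, t_low=0.3, t_high=0.7):
    """Assign a soft class label in [0, 1] from the overlap of a detection
    bounding box with a ground-truth bounding box.

    Boxes are (x1, y1, x2, y2) tuples; t_low and t_high are the first and
    second threshold values of claim 8 (values chosen for illustration).
    """
    # Area of the overlap of the two boxes.
    ix = max(0.0, min(det_box[2], gt_box[2]) - max(det_box[0], gt_box[0]))
    iy = max(0.0, min(det_box[3], gt_box[3]) - max(det_box[1], gt_box[1]))
    inter = ix * iy
    # Ratio of the overlap area to the entire detection-box area (claim 6).
    det_area = (det_box[2] - det_box[0]) * (det_box[3] - det_box[1])
    ratio = inter / det_area
    # Claim 8: hard 0 below the first threshold, hard 1 above the second,
    # and the soft ratio itself anywhere in between (inclusive).
    if ratio < t_low:
        return 0.0
    if ratio > t_high:
        return 1.0
    return ratio
```

A box half-covered by ground truth would receive the label 0.5 under these thresholds, rather than the hard 0 or 1 of conventional labelling.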
1. A method comprising: determining the firing state of a plurality of neurons of a first neurosynaptic core substantially in parallel; delivering to at least one additional neurosynaptic core the firing state of the plurality of neurons substantially in parallel. 2. The method of claim 1, wherein the first neurosynaptic core and the at least one additional neurosynaptic core are located on a first chip. 3. The method of claim 2, wherein the substantially parallel delivery is via an inter-core network. 4. The method of claim 3, wherein the substantially parallel delivery is performed by a permutation network, a Clos network, or a butterfly network. 5. The method of claim 1, further comprising: pipelining the firing state of the plurality of neurons. 6. The method of claim 1, further comprising: constructing a binary vector corresponding to the firing state of the plurality of neurons; transmitting the binary vector to the at least one additional neurosynaptic core. 7. The method of claim 1, wherein the first neurosynaptic core is located on a first chip, and the at least one additional neurosynaptic core is located on a second chip. 8. The method of claim 7, further comprising: transmitting the firing state of the plurality of neurons via an inter-chip network connecting the first chip and the second chip. 9. The method of claim 8, wherein the inter-chip network comprises an outgoing port of the first chip and an incoming port of the second chip. 10. The method of claim 8, wherein the inter-chip network comprises an outgoing port of the first chip connected to an incoming port of the first chip. 11. The method of claim 7, wherein the first and second chip are located on a first board. 12. The method of claim 7, wherein the first chip is located on a first board and the second chip is located on a second board, the first and second boards being connected. 13. 
The method of claim 12, wherein a plurality of boards comprising the first board and the second board is hierarchically arranged, and wherein the first board and the second board are connected via a hierarchy of routers. 14. A system comprising: a plurality of neurosynaptic cores, the neurosynaptic cores comprising a plurality of axons, a plurality of synapses, and a plurality of neurons; a first inter-core network connecting the plurality of neurosynaptic cores, wherein the first inter-core network is adapted to deliver from a first neurosynaptic core of the plurality of neurosynaptic cores to at least one additional neurosynaptic core the firing state of the plurality of neurons of the first neurosynaptic core substantially in parallel. 15. The system of claim 14, wherein the inter-core network comprises a permutation network, a Clos network, or a butterfly network. 16. The system of claim 14, wherein the first inter-core network is located on a first chip and a second inter-core network is located on a second chip, the at least one additional neurosynaptic core being connected to the second inter-core network. 17. The system of claim 16, wherein the first chip and the second chip are adjacent. 18. The system of claim 16, further comprising: a port connecting the first inter-core network to the second inter-core network. 19. The system of claim 14, further comprising: a port connecting the first inter-core network to itself. 20. The system of claim 16, wherein the first and second chip are located on a first board. 21. The system of claim 16, wherein the first chip is located on a first board and the second chip is located on a second board, the first and second boards being connected. 22. The system of claim 21, wherein a plurality of boards comprising the first board and the second board is hierarchically arranged, and wherein the first board and the second board are connected via a hierarchy of routers. 23. 
A method comprising: simulating a plurality of neurosynaptic cores, the simulated neurosynaptic cores comprising a plurality of simulated axons, a plurality of simulated synapses, and a plurality of simulated neurons; simulating a network connecting the plurality of simulated neurosynaptic cores; simulating the determination of the firing state of the plurality of simulated neurons of a first of the simulated neurosynaptic cores; simulating the delivery to an at least one additional of the simulated neurosynaptic cores the firing state of the plurality of simulated neurons. 24. The method of claim 23, wherein the simulated network comprises an inter-core network. 25. The method of claim 23, wherein the simulated network comprises an inter-chip network. 26. The method of claim 23, wherein the simulated network comprises an inter-board network. 27. A computer program product for operating a neurosynaptic network, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: determining the firing state of a plurality of neurons of a first neurosynaptic core substantially in parallel; delivering to at least one additional neurosynaptic core the firing state of the plurality of neurons substantially in parallel. 28. 
A computer program product for simulating a neurosynaptic network, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: simulating a plurality of neurosynaptic cores, the simulated neurosynaptic cores comprising a plurality of simulated axons, a plurality of simulated synapses, and a plurality of simulated neurons; simulating a network connecting the plurality of simulated neurosynaptic cores; simulating the determination of the firing state of the plurality of simulated neurons of a first of the simulated neurosynaptic cores; simulating the delivery to an at least one additional second of the simulated neurosynaptic cores the firing state of the plurality of simulated neurons.
Please help me write a proper abstract based on the patent claims.
A scalable stream synaptic supercomputer for extreme throughput neural networks is provided. The firing state of a plurality of neurons of a first neurosynaptic core is determined substantially in parallel. The firing state of the plurality of neurons is delivered to at least one additional neurosynaptic core substantially in parallel.
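Claims 1 and 6 — compute every neuron's firing state at once and deliver it to another core as one binary vector — can be illustrated with a toy two-core model. All sizes, the random binary synapse matrix, and the threshold value are illustrative assumptions, not details from the claims:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy core: 256 axons by 256 neurons of binary synapses, one firing
# threshold per neuron (all values chosen arbitrarily for illustration).
synapses = rng.integers(0, 2, size=(256, 256))
thresholds = np.full(256, 64)

def core_step(axon_spikes):
    """Determine the firing state of every neuron at once (claim 1) and
    return it as a single binary vector (claim 6)."""
    # Integrate the synaptic inputs for all neurons in one vectorised step.
    potentials = axon_spikes @ synapses
    return (potentials >= thresholds).astype(np.uint8)

# One vector-wide transfer stands in for the "substantially in parallel"
# delivery to an additional core, rather than sending spikes one at a time.
firing = core_step(rng.integers(0, 2, size=256))
next_firing = core_step(firing)
```

The same vector could equally be handed to an inter-chip port or an inter-board router; only the transport changes, not the binary-vector representation.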
1. A computer implemented method comprising: inputting steps of at least two processes into the computer, wherein each step corresponds to a set of input products, a task to be completed, a set of output products, and a set of target output products, and wherein each input product, each output product, each target output product, and each task, comprises a set of property/value pairs that includes product type and product target values, where the properties have unique URL-based names and the property relationships are supported by an underlying ontology including a hierarchical taxonomy of product types, and where the target products specify a desired range for each corresponding property value; storing each input product and each output product in a shared, non-transitory database as they are generated, wherein each input product and each output product is stored with a unique uniform resource locator (URL); establishing an input product filter for each step, wherein the input product filter comprises performance-triggering criteria defined property/value filters, where one step's output product passes the input product filter of another step; periodically searching the shared database with URL queries for input products; determining with the computer whether each step in any of the processes has been executed, is in the process of being executed, or is awaiting execution; dynamically and automatically assembling or altering a set of steps composing any given process depending on the input products available in the shared database; and comparing with a performance status monitoring engine the input/output products property values to the target property values to automatically assess the status, effectiveness and efficiency of the individual steps and of each process as a whole. 2. 
The method of claim 1, wherein a given step is deemed executed if all of the given step's input products are found in the shared database and all the given step's output products have been generated and stored in the shared database, wherein the given step is awaiting execution if all of the given step's input products are found in the shared database but not all of the given step's output products have been generated, and wherein the given step is awaiting execution if all of the given step's input products are not found in the shared database. 3. The method of claim 2, where the URLs are defined according to Representational State Transfer (REST) design principles. 4. A method for providing situational awareness to a computer comprising: identifying a first process and a second process that have an aspect in common, wherein each of the first and second processes comprises a series of steps and an end goal; identifying an input product comprising performance-triggering criteria for each step in the first and second processes; generating a unique output product at the completion of each step in the first and second processes; storing the output products in a cloud-based, shared database, wherein each output product and each input product has a unique uniform resource locator (URL) according to a taxonomy; filtering with the computer the output products in the database for every un-executed step by performing URL queries for the input product of each un-executed step; determining that a given step may be executed based on the presence of the given step's input product in the database; and determining with the computer a situational assessment based on the interaction of the steps of the first and second processes and evaluating how interaction between the two processes will affect the attainment of the goals. 5. 
The method of claim 4, wherein each product is represented as a set of property/value pairs where the properties have unique URL-based names and the property relationships are supported by an underlying ontology including a hierarchical taxonomy of product types. 6. The method of claim 4, wherein output products are defined in a progressive, generic, hierarchical, machine-understandable, and addressable format. 7. The method of claim 6, wherein the progressive, generic, hierarchical, machine-understandable, and addressable format is JavaScript Object Notation for Linked Data (JSON-LD). 8. The method of claim 7, wherein the output products include property values. 9. The method of claim 8, wherein the output products link to parent/child product types and linked data. 10. The method of claim 4, wherein the common aspect is an aspect or a combination of aspects selected from the group consisting of: a location, a time-range, a person, an organization, a field of study, a given topic, a user-entered property value, and a user-entered property value range. 11. The method of claim 4, wherein the situational assessment further includes an identification of remaining steps, identification of obstacles to executing the remaining steps, a time estimate for completion of the remaining steps, and a probability that the goals will be reached by a certain time. 12. The method of claim 4, further comprising the step of reporting the situational assessment to a user. 13. The method of claim 4, further comprising the step of eliminating a step from the first process based on the situational assessment. 14. The method of claim 4, wherein the first and second processes are tasks performed by the computer. 15. 
A method for providing situational awareness to a computer-controlled avatar in a simulated environment, comprising: identifying at least two processes related to the avatar, wherein each process comprises a series of steps and an end goal; identifying an input product comprising performance-triggering criteria for each step in the identified processes; generating a unique output product at the completion of each step in the identified processes; storing the output products in a cloud-based, shared database, wherein each output product and each input product has a unique uniform resource locator (URL) according to a taxonomy; filtering the output products in the database for every un-executed step by performing URL queries for the input product of each un-executed step; determining with the avatar that a given step may be executed based on the presence of the given step's input product in the database; and developing with the avatar a situational assessment based on the interaction of the steps of the identified processes and evaluating how interaction between the processes will affect the attainment of the respective goals. 16. The method of claim 15 further comprising the step of reporting the situational assessment to a human. 17. The method of claim 15, further comprising the step of having the avatar periodically update the situational assessment as further information is available. 18. The method of claim 15, further comprising the step of having the avatar alter its behavior in view of the situational assessment.
Please help me write a proper abstract based on the patent claims.
A computer implemented method comprising: inputting steps of at least two processes into the computer; storing process step input products and process step output products in a shared, non-transitory database as they are generated; establishing an input product filter for each process step with performance-triggering-criteria-defined property/value filters, where one step's output product passes the input product filter of another step; periodically searching the shared database for input products; determining whether each step in any of the processes has been executed, is in the process of being executed, or is awaiting execution; dynamically and automatically assembling or altering a set of steps composing any given process depending on the input products available in the shared database; and comparing the input/output products property values to target property values to automatically assess the status, effectiveness and efficiency of the individual steps and each process as a whole.
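The executed / awaiting-execution test of claim 2 can be sketched as a small classifier over a set of stored product URLs. The dict layout, the status strings, and the function name are hypothetical conveniences, not terminology from the claims:

```python
def step_status(step, database):
    """Classify a process step per claim 2.

    `step` is a dict with 'inputs' and 'outputs' listing product URLs;
    `database` is the set of URLs of products already stored in the
    shared database (both layouts are illustrative assumptions).
    """
    inputs_present = all(url in database for url in step["inputs"])
    outputs_present = all(url in database for url in step["outputs"])
    # Executed: all input products found AND all output products generated.
    if inputs_present and outputs_present:
        return "executed"
    # Inputs present but outputs not yet generated: ready to run.
    if inputs_present:
        return "awaiting execution (ready)"
    # Missing input products: blocked until another step produces them.
    return "awaiting execution (blocked)"

database = {"http://example.org/products/a", "http://example.org/products/b"}
step = {"inputs": ["http://example.org/products/a"],
        "outputs": ["http://example.org/products/c"]}
print(step_status(step, database))
```

In the claimed method this check runs periodically against URL queries on the shared database, so one step's newly stored output product can flip another step from blocked to ready without any direct coupling between the steps.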
1. A method for training a neuron network using a processor in communication with a memory, comprising: determining features of a signal using the neuron network; determining an uncertainty measure of the features for classifying the signal; reconstructing the signal from the features using a decoder neuron network to produce a reconstructed signal; comparing the reconstructed signal with the signal to produce a reconstruction error; combining the uncertainty measure with the reconstruction error to produce a rank of the signal for a necessity of a manual labeling; labeling the signal according to the rank to produce the labeled signal; and training the neuron network and the decoder neuron network using the labeled signal. 2. The method of claim 1, wherein the labeling comprises: transmitting a labeling request to an annotation device if the rank indicates the necessity of the manual labeling process. 3. The method of claim 1, wherein the determining features are performed by using an encoder neural network. 4. The method of claim 1, wherein the signal is an electroencephalogram (EEG) or an electrocardiogram (ECG). 5. The method of claim 1, wherein the reconstruction error is defined based on a Euclidean distance between the signal and the reconstructed signal. 6. The method of claim 1, wherein the rank is defined based on an addition of an entropy function and the reconstruction error. 7. 
An active learning system comprising: a human machine interface; a storage device including neural networks; a memory; a network interface controller connectable with a network being outside the system; an imaging interface connectable with an imaging device; and a processor configured to connect to the human machine interface, the storage device, the memory, the network interface controller and the imaging interface, wherein the processor executes instructions for classifying a signal using the neural networks stored in the storage device, wherein the neural networks perform steps of: determining features of the signal using the neuron network; determining an uncertainty measure of the features for classifying the signal; reconstructing the signal from the features using a decoder neuron network to produce a reconstructed signal; comparing the reconstructed signal with the signal to produce a reconstruction error; combining the uncertainty measure with the reconstruction error to produce a rank of the signal for a necessity of a manual labeling; labeling the signal according to the rank to produce the labeled signal; and training the neuron network and the decoder neuron network using the labeled signal. 8. The system of claim 7, wherein the labeling comprises: transmitting a labeling request to an annotation device if the rank indicates the necessity of the manual labeling process. 9. The system of claim 7, wherein the determining features are performed by using an encoder neural network. 10. The system of claim 7, wherein the signal is an electroencephalogram (EEG) or an electrocardiogram (ECG). 11. The system of claim 7, wherein the reconstruction error is defined based on a Euclidean distance between the signal and the reconstructed signal. 12. The system of claim 7, wherein the rank is defined based on an addition of an entropy function and the reconstruction error. 13. 
A non-transitory computer-readable medium storing software comprising instructions executable by one or more computers which, upon such execution, cause the one or more computers to perform operations comprising: determining features of a signal using the neuron network; determining an uncertainty measure of the features for classifying the signal; reconstructing the signal from the features using a decoder neuron network to produce a reconstructed signal; comparing the reconstructed signal with the signal to produce a reconstruction error; combining the uncertainty measure with the reconstruction error to produce a rank of the signal for a necessity of a manual labeling; labeling the signal according to the rank to produce the labeled signal; and training the neuron network and the decoder neuron network using the labeled signal. 14. The medium of claim 13, wherein the labeling comprises: transmitting a labeling request to an annotation device if the rank indicates the necessity of the manual labeling process. 15. The medium of claim 13, wherein the determining features are performed by using an encoder neural network. 16. The medium of claim 13, wherein the signal is an electroencephalogram (EEG) or an electrocardiogram (ECG). 17. The medium of claim 13, wherein the reconstruction error is defined based on a Euclidean distance between the signal and the reconstructed signal. 18. The medium of claim 13, wherein the rank is defined based on an addition of an entropy function and the reconstruction error.
Please help me write a proper abstract based on the patent claims.
A method for training a neuron network using a processor in communication with a memory includes determining features of a signal using the neuron network, determining an uncertainty measure of the features for classifying the signal, reconstructing the signal from the features using a decoder neuron network to produce a reconstructed signal, comparing the reconstructed signal with the signal to produce a reconstruction error, combining the uncertainty measure with the reconstruction error to produce a rank of the signal for a necessity of a manual labeling, labeling the signal according to the rank to produce the labeled signal, and training the neuron network and the decoder neuron network using the labeled signal.
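The ranking of claims 5 and 6 — the entropy of the classifier's output added to the Euclidean reconstruction error — might be sketched as follows. The function name and the use of Shannon entropy over a probability vector are assumptions; the claims only say "an entropy function":

```python
import numpy as np

def labeling_rank(class_probs, signal, reconstruction):
    """Rank a signal for manual labelling: entropy of the classifier's
    output probabilities (the uncertainty measure, claim 6) plus the
    Euclidean distance between the signal and its reconstruction
    (the reconstruction error, claim 5)."""
    # Clip to avoid log(0) for confident (near one-hot) predictions.
    p = np.clip(class_probs, 1e-12, 1.0)
    entropy = -np.sum(p * np.log(p))
    recon_error = np.linalg.norm(signal - reconstruction)
    return entropy + recon_error
```

A signal with an uncertain classification and a poor reconstruction gets the highest rank, so a labeling request would be sent to the annotation device for it first.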
1-12. (canceled) 13. A system comprising: an artificial neural network having an at least one input neuron and an at least one output neuron; and one or more computer processor circuits that are configured to host a bundling application that is configured to: identify a software component having a first value for a first identification attribute and a second value for a second identification attribute; generate an input vector derived from the first value and the second value; load the input vector into the at least one input neuron of the artificial neural network; and obtain a yielded output vector from the at least one output neuron of the artificial neural network. 14. The system of claim 13, wherein the yielded output vector corresponds to a software bundle of a plurality of software bundles, and wherein the bundling application is further configured to: determine, based on the yielded output vector, that the software component is associated with the software bundle. 15. The system of claim 13, wherein the software component is associated with a software bundle of a plurality of software bundles, and wherein the bundling application is further configured to: generate a test output vector derived from the software bundle; compare the yielded output vector with the test output vector; and adjust parameters of the artificial neural network based on the comparison of the yielded output vector with the test output vector. 16. 
The system of claim 15, wherein the bundling application is further configured to: identify a second software component having a third value for the first identification attribute and a fourth value for the second identification attribute; generate a second input vector derived from the third value and the fourth value; load the second input vector into the at least one input neuron of the artificial neural network; obtain a second yielded output vector from the at least one output neuron of the artificial neural network, the second yielded output vector corresponding to a second software bundle of the plurality of software bundles; and determine, based on the second yielded output vector, that the second software component is associated with the second software bundle. 17. A computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to: identify a software component having a first value for a first identification attribute and a second value for a second identification attribute; generate an input vector derived from the first value and the second value; load the input vector into an at least one input neuron of an artificial neural network; and obtain a yielded output vector from an at least one output neuron of the artificial neural network. 18. The computer program product of claim 17, wherein the yielded output vector corresponds to a software bundle of a plurality of software bundles, and wherein the program instructions are executable by the computer to further cause the computer to: determine, based on the yielded output vector, that the software component is associated with the software bundle. 19. 
The computer program product of claim 17, wherein the software component is associated with a software bundle of a plurality of software bundles, and wherein the program instructions are executable by the computer to further cause the computer to: generate a test output vector derived from the software bundle; compare the yielded output vector with the test output vector; and adjust parameters of the artificial neural network based on the comparison of the yielded output vector with the test output vector. 20. The computer program product of claim 19, wherein the program instructions are executable by the computer to further cause the computer to: identify a second software component having a third value for the first identification attribute and a fourth value for the second identification attribute; generate a second input vector derived from the third value and the fourth value; load the second input vector into the at least one input neuron of the artificial neural network; obtain a second yielded output vector from the at least one output neuron of the artificial neural network, the second yielded output vector corresponding to a second software bundle of the plurality of software bundles; and determine, based on the second yielded output vector, that the second software component is associated with the second software bundle.
Please help me write a proper abstract based on the patent claims.
An artificial neural network is used to manage software bundling. During a training phase, the artificial neural network is trained using previously bundled software components having known values for identification attributes and known software bundle associations. Once trained, the artificial neural network can be used to identify the proper software bundles for newly discovered software components. In this process, a newly discovered software component having known values for the identification attributes is identified. An input vector is derived from the known values. The input vector is loaded into input neurons of the artificial neural network. A yielded output vector is then obtained from an output neuron of the artificial neural network. Based on the composition of the output vector, the software bundle associated with this newly discovered software component is determined.
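The flow the abstract describes (attribute values → input vector → output vector → bundle) can be sketched as follows. This is only an illustration: the attribute vocabularies, one-hot encoding, and weight matrix below are hypothetical stand-ins, not the patented implementation.

```python
# Minimal sketch (not the patented implementation): derive an input
# vector from two identification attributes and map it through a tiny
# one-layer network to a software-bundle output vector.

ATTR1_VALUES = ["vendor_a", "vendor_b"]   # first identification attribute
ATTR2_VALUES = ["db", "web"]              # second identification attribute
BUNDLES = ["bundle_x", "bundle_y"]

def input_vector(v1, v2):
    """One-hot encode the two attribute values and concatenate."""
    vec = [0.0] * (len(ATTR1_VALUES) + len(ATTR2_VALUES))
    vec[ATTR1_VALUES.index(v1)] = 1.0
    vec[len(ATTR1_VALUES) + ATTR2_VALUES.index(v2)] = 1.0
    return vec

# Hypothetical trained weights: rows = input neurons, cols = output neurons.
WEIGHTS = [
    [1.0, 0.0],   # vendor_a -> bundle_x
    [0.0, 1.0],   # vendor_b -> bundle_y
    [0.5, 0.0],   # db       -> bundle_x
    [0.0, 0.5],   # web      -> bundle_y
]

def yielded_output_vector(vec):
    """Load the input vector into the input neurons and read the outputs."""
    return [sum(vec[i] * WEIGHTS[i][j] for i in range(len(vec)))
            for j in range(len(BUNDLES))]

def bundle_for(v1, v2):
    """Determine the bundle whose output neuron fires strongest."""
    out = yielded_output_vector(input_vector(v1, v2))
    return BUNDLES[out.index(max(out))]
```

During the claimed training phase, the weights would instead be adjusted by comparing each yielded output vector against a test output vector derived from the known bundle association.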
1-9. (canceled) 10. A computer program product for generating a first answer relationship in a first answer sequence, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a computer to cause the computer to perform a method comprising: identifying the first answer sequence, the first answer sequence including a first answer and a second answer; analyzing, using the first answer and the second answer, a corpus to identify a set of influence factors corresponding to both the first answer and the second answer; and generating, based on the set of influence factors, the first answer relationship between the first answer and the second answer. 11. The computer program product of claim 10, wherein the first answer sequence further includes a third answer, and wherein the method further comprises: analyzing, using the third answer, the corpus to identify a second set of influence factors corresponding to both the first answer and the third answer and further to identify a third set of influence factors corresponding to both the second answer and the third answer; generating, based on the second set of influence factors, a second answer relationship between the first answer and the third answer; and generating, based on the third set of influence factors, a third answer relationship between the second answer and the third answer. 12. The computer program product of claim 11, wherein the method further comprises: evaluating, based on the first answer relationship, further based on the second answer relationship, and further based on the third answer relationship, the first answer sequence. 13. 
The computer program product of claim 10, wherein the method further comprises: assigning a relationship score to the first answer relationship, the relationship score calculated based on the set of influence factors; and evaluating, based on the relationship score, the first answer relationship. 14. The computer program product of claim 13, wherein the evaluating the first answer relationship includes determining that the relationship score is below a relationship contraindication threshold, and wherein the method further comprises: identifying, in response to the determining, the first answer sequence as contraindicated. 15. The computer program product of claim 10, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: identifying a first characteristic relationship between the first answer and a characteristic; identifying a second characteristic relationship between the second answer and the characteristic; and identifying, based on comparing the first characteristic relationship and the second characteristic relationship, a first influence factor of the set of influence factors. 16. A system for generating a first answer relationship in a first answer sequence, the system comprising: a memory; and at least one processor in communication with the memory, wherein the at least one processor is configured to perform a method comprising: identifying the first answer sequence, the first answer sequence including a first answer and a second answer; analyzing, using the first answer and the second answer, a corpus to identify a set of influence factors corresponding to both the first answer and the second answer; and generating, based on the set of influence factors, the first answer relationship between the first answer and the second answer. 17. 
The system of claim 16, wherein the first answer sequence further includes a third answer, and wherein the method further comprises: analyzing, using the third answer, the corpus to identify a second set of influence factors corresponding to both the first answer and the third answer and further to identify a third set of influence factors corresponding to both the second answer and the third answer; generating, based on the second set of influence factors, a second answer relationship between the first answer and the third answer; and generating, based on the third set of influence factors, a third answer relationship between the second answer and the third answer. 18. The system of claim 17, wherein the method further comprises: evaluating, based on the first answer relationship, further based on the second answer relationship, and further based on the third answer relationship, the first answer sequence. 19. The system of claim 16, wherein the method further comprises: assigning a relationship score to the first answer relationship, the relationship score calculated based on the set of influence factors; and evaluating, based on the relationship score, the first answer relationship. 20. The system of claim 19, wherein the evaluating the first answer relationship includes determining that the relationship score is below a relationship contraindication threshold, and wherein the method further comprises: identifying, in response to the determining, the first answer sequence as contraindicated. 21. The computer program product of claim 10, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: identifying a direct influence relationship between the first answer and the second answer; and identifying, based on the direct influence relationship, a first influence factor of the set of influence factors. 22. 
The computer program product of claim 10, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: parsing, by a natural language processing technique configured to analyze syntactic and semantic content, the corpus. 23. The computer program product of claim 10, wherein the method further comprises: receiving, from a user, a question; parsing, by a natural language processing technique configured to analyze syntactic and semantic content, the question, wherein the identifying the first answer sequence is in response to the parsing; assigning a first relationship score to the first answer relationship, the first relationship score calculated based on the set of influence factors; assigning a first confidence score to the first answer sequence, the first confidence score based in part on the first relationship score; and presenting the first answer sequence to the user as a response to the question. 24. The system of claim 16, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: identifying a first characteristic relationship between the first answer and a characteristic; identifying a second characteristic relationship between the second answer and the characteristic; and identifying, based on comparing the first characteristic relationship and the second characteristic relationship, a first influence factor of the set of influence factors. 25. 
The system of claim 16, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: identifying a direct influence relationship between the first answer and the second answer; and identifying, based on the direct influence relationship, a first influence factor of the set of influence factors. 26. The system of claim 16, wherein the analyzing, using the first answer and the second answer, the corpus to identify the set of influence factors corresponding to both the first answer and the second answer includes: parsing, by a natural language processing technique configured to analyze syntactic and semantic content, the corpus. 27. The system of claim 16, wherein the method further comprises: receiving, from a user, a question; parsing, by a natural language processing technique configured to analyze syntactic and semantic content, the question, wherein the identifying the first answer sequence is in response to the parsing; assigning a first relationship score to the first answer relationship, the first relationship score calculated based on the set of influence factors; assigning a first confidence score to the first answer sequence, the first confidence score based in part on the first relationship score; and presenting the first answer sequence to the user as a response to the question.
Please help me write a proper abstract based on the patent claims.
In a question-answering (QA) environment, a first answer sequence is identified. As identified, the first answer sequence includes a first answer and a second answer. A corpus is analyzed using the first answer and the second answer. Based on the analysis, a set of influence factors corresponding to both the first answer and the second answer are identified. A first answer relationship between the first answer and the second answer is then generated based on the set of influence factors.
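The analysis step in the abstract (find influence factors a corpus links to both answers, then generate a scored relationship) can be sketched roughly as below. The corpus lines, factor terms, and one-point-per-factor scoring are invented for illustration; the actual system parses with NLP techniques rather than substring matching.

```python
# Illustrative sketch only: identify influence factors that a corpus
# associates with both answers, then generate a scored answer
# relationship from them.

CORPUS = [
    "drug_a raises blood pressure",
    "drug_b lowers blood pressure",
    "drug_b causes drowsiness",
]

FACTOR_TERMS = ["blood pressure", "drowsiness"]  # hypothetical factors

def influence_factors(answer1, answer2, corpus=CORPUS):
    """Return factor terms the corpus links to both answers."""
    def factors_for(answer):
        return {t for line in corpus if answer in line
                for t in FACTOR_TERMS if t in line}
    return factors_for(answer1) & factors_for(answer2)

def answer_relationship(answer1, answer2):
    """Generate a relationship record with a simple score: one point
    per shared influence factor (a stand-in for the claimed scoring)."""
    factors = influence_factors(answer1, answer2)
    return {"answers": (answer1, answer2),
            "factors": sorted(factors),
            "score": len(factors)}
```

In the claims, such a relationship score would then be compared against a contraindication threshold to evaluate the answer sequence.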
1-18. (canceled) 19. A method for knowledge discovery, the method comprising: plotting, by a processing device, a set of concepts derived from a selected set of fingerprints on a terminology system, wherein the plotting generates a map; selecting, by the processing device, a single concept from the set of concepts; displaying, by the processing device, the map with the set of concepts and the selected single concept; and indicating, by the processing device, a relative importance of the selected single concept with respect to the set of concepts on the map. 20. The method of claim 19, wherein each fingerprint represents a document. 21. The method of claim 19, wherein each fingerprint represents a person. 22. The method of claim 19, wherein each fingerprint represents an organization. 23. The method of claim 19, wherein indicating the relative importance of the selected single concept comprises displaying the selected single concept on the map in a first color and a remaining one or more concepts from the set of concepts in a second color that is different from the first color. 24. The method of claim 19, wherein indicating the relative importance of the selected single concept comprises displaying the selected single concept on the map with a first object and a remaining one or more concepts from the set of concepts with a second object, wherein the first object is larger than the second object. 25. 
A system for knowledge discovery, the system comprising: a processing device; and a non-transitory, processor-readable storage medium, the non-transitory, processor-readable storage medium comprising one or more programming instructions that, when executed, cause the processing device to: plot a set of concepts derived from a selected set of fingerprints on a terminology system, wherein the plotting generates a map; select a single concept from the set of concepts; display the map with the set of concepts and the selected single concept; and indicate a relative importance of the selected single concept with respect to the set of concepts on the map. 26. The system of claim 25, wherein each fingerprint represents a document. 27. The system of claim 25, wherein each fingerprint represents a person. 28. The system of claim 25, wherein each fingerprint represents an organization. 29. The system of claim 25, wherein the one or more programming instructions that, when executed, cause the processing device to indicate the relative importance of the selected single concept further cause the processing device to display the selected single concept on the map in a first color and a remaining one or more concepts from the set of concepts in a second color that is different from the first color. 30. The system of claim 25, wherein the one or more programming instructions that, when executed, cause the processing device to indicate the relative importance of the selected single concept further cause the processing device to display the selected single concept on the map with a first object and a remaining one or more concepts from the set of concepts with a second object, wherein the first object is larger than the second object. 31. 
A non-transitory, processor-readable storage medium with computer executable instructions embodied thereon for knowledge discovery, the computer executable instructions directing a processing device to: plot a set of concepts derived from a selected set of fingerprints on a terminology system, wherein the plotting generates a map; select a single concept from the set of concepts; display the map with the set of concepts and the selected single concept; and indicate a relative importance of the selected single concept with respect to the set of concepts on the map. 32. The computer readable medium of claim 31, wherein each fingerprint represents a document. 33. The computer readable medium of claim 31, wherein each fingerprint represents a person. 34. The computer readable medium of claim 31, wherein each fingerprint represents an organization. 35. The computer readable medium of claim 31, wherein the computer executable instructions directing the processing device to indicate the relative importance of the selected single concept further direct the processing device to display the selected single concept on the map in a first color and a remaining one or more concepts from the set of concepts in a second color that is different from the first color. 36. The computer readable medium of claim 31, wherein the computer executable instructions directing the processing device to indicate the relative importance of the selected single concept further direct the processing device to display the selected single concept on the map with a first object and a remaining one or more concepts from the set of concepts with a second object, wherein the first object is larger than the second object.
Please help me write a proper abstract based on the patent claims.
Provided are methods and systems for knowledge discovery utilizing knowledge profiles. A set of concepts derived from a selected set of fingerprints, each fingerprint representing a document, a person, or an organization, is plotted on a terminology system to generate a map. A single concept is selected from the set of concepts, and the map is displayed with the set of concepts and the selected concept. The relative importance of the selected concept with respect to the set of concepts is indicated on the map, for example by displaying the selected concept in a different color or with a larger object than the remaining concepts.
1. A computer-implemented method comprising: obtaining frame data representative of a plurality of frames captured by a touch-sensitive device; analyzing the frame data to define a respective blob in each frame of the plurality of frames, the blobs being indicative of a touch event; computing a plurality of feature sets for the touch event, each feature set specifying properties of the respective blob in each frame of the plurality of frames; and determining a type of the touch event via machine learning classification configured to provide multiple non-bimodal classification scores based on the plurality of feature sets for the plurality of frames, each non-bimodal classification score being indicative of an ambiguity level in the machine learning classification. 2. The computer-implemented method of claim 1, wherein the machine learning classification is configured to generate the non-bimodal classification scores such that each non-bimodal classification score is representative of a probability that the touch event is of a respective type. 3. The computer-implemented method of claim 2, wherein each one of the non-bimodal classification scores is generated by a machine learning classifier configured to accept the plurality of feature sets as inputs. 4. The computer-implemented method of claim 3, wherein the machine learning classifier comprises a random decision forest classifier. 5. The computer-implemented method of claim 1, further comprising: defining a track of the blobs across the plurality of frames for the touch event; and computing a track feature set for the track, wherein determining the type comprises applying the track feature set to a machine learning classifier. 6. 
The computer-implemented method of claim 1, wherein computing the plurality of feature sets comprises aggregating data indicative of the plurality of feature sets before application of the plurality of feature sets to a machine learning classifier in determining the type of the touch event. 7. The computer-implemented method of claim 1, wherein each feature set comprises data indicative of an appearance of an image patch disposed at the respective blob in each frame. 8. The computer-implemented method of claim 1, wherein each feature set comprises data indicative of an intensity gradient in the frame data for the respective blob in each frame. 9. The computer-implemented method of claim 1, wherein each feature set comprises data indicative of an isoperimetric quotient or other metric of a roundness of the respective blob in each frame. 10. The computer-implemented method of claim 1, wherein the machine learning classification comprises a lookup table-based classification. 11. The computer-implemented method of claim 1, wherein determining the type comprises applying the feature set for a respective frame of the plurality of frames to multiple look-up tables, each look-up table providing a respective individual non-bimodal classification score of the multiple non-bimodal classification scores. 12. The computer-implemented method of claim 11, wherein determining the type comprises combining each of the individual non-bimodal classification scores for the respective frame to generate a blob classification rating score for the respective frame. 13. 
The computer-implemented method of claim 12, wherein: the multiple look-up tables comprise a first look-up table configured to provide a first rating that the touch event is an intended touch and further comprise a second look-up table to determine a second rating that the touch event is an unintended touch; and determining the type comprises subtracting the second rating from the first rating to determine the blob classification rating score for the respective frame. 14. The computer-implemented method of claim 12, wherein determining the type comprises aggregating the blob classification rating scores across the plurality of frames to determine a cumulative, multi-frame classification score for the touch event. 15. The computer-implemented method of claim 14, wherein determining the type comprises: determining whether the cumulative, multi-frame classification score passes one of multiple classification thresholds; and if not, then iterating the feature set applying, the classification score combining, and the rating score aggregating acts in connection with a further feature set of the plurality of feature sets. 16. The computer-implemented method of claim 14, wherein determining the type further comprises, once the cumulative, multi-frame classification score passes a palm classification threshold for the touch event, classifying a further blob in a subsequent frame of the plurality of frames that overlaps the touch event as a palm touch event. 17. The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises adjusting the blob classification rating score by subtracting a value from the blob classification rating score if the respective blob overlaps an anti-blob. 18. 
The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises, when the blob has an area greater than a threshold area, and when the blob is within a threshold distance of a further blob having bimodal classification scores indicative of a palm, adjusting the blob classification rating score by subtracting a quotient calculated by dividing a blob area of the blob by the threshold area. 19. The computer-implemented method of claim 12, wherein combining each of the individual non-bimodal classification scores comprises: determining if a number of edge pixels in the respective blob exceeds a threshold; and if the threshold is exceeded, adjusting the blob classification rating score by subtracting a difference between the number of edge pixels and the threshold from the blob classification rating score. 20. A touch-sensitive device comprising: a touch-sensitive surface; a memory in which blob definition instructions, feature computation instructions, and machine learning classification instructions are stored; and a processor coupled to the memory, configured to obtain frame data representative of a plurality of frames captured via the touch-sensitive surface and configured to execute the blob definition instructions to analyze the frame data to define a respective blob in each frame of the plurality of frames, the blobs being indicative of a touch event; wherein the processor is further configured to execute the feature computation instructions to compute a plurality of feature sets for the touch event, each feature set specifying properties of the respective blob in each frame of the plurality of frames; and wherein the processor is further configured to execute the machine learning classification instructions to determine a type of the touch event via machine learning classification configured to provide multiple non-bimodal classification scores based on the plurality of feature sets for the 
plurality of frames, each non-bimodal classification score being indicative of an ambiguity level in the machine learning classification. 21. The touch-sensitive device of claim 20, wherein each non-bimodal classification score is representative of a probability that the touch event is of a respective type. 22. The touch-sensitive device of claim 20, wherein: each non-bimodal classification score is a blob classification score rating for a respective frame of the plurality of frames; and the processor is further configured to execute the machine learning classification instructions to sum the blob classification score ratings over the plurality of frames. 23. The touch-sensitive device of claim 22, wherein the processor is further configured to execute the machine learning classification instructions to combine lookup table ratings from multiple lookup tables to compute each blob classification score rating. 24. The touch-sensitive device of claim 20, wherein the processor is further configured to execute the blob definition instructions to split a connected component into multiple blobs for separate analysis. 25. The touch-sensitive device of claim 20, wherein the processor is further configured to execute the blob definition instructions to define a track for each blob of the touch event across the plurality of frames. 26. The touch-sensitive device of claim 20, wherein the processor is further configured to execute the blob definition instructions to merge multiple connected components for analysis as a single blob. 27. 
A touch-sensitive device comprising: a touch-sensitive surface; a memory in which a plurality of instruction sets are stored; and a processor coupled to the memory and configured to execute the plurality of instruction sets, wherein the plurality of instruction sets comprise: first instructions to cause the processor to obtain frame data representative of a plurality of sensor images captured by the touch-sensitive device; second instructions to cause the processor to analyze the frame data to define a respective connected component in each sensor image of the plurality of sensor images, the connected components being indicative of a touch event; third instructions to cause the processor to compute a plurality of feature sets for the touch event, each feature set specifying properties of the respective connected component in each sensor image of the plurality of sensor images; fourth instructions to cause the processor to determine a type of the touch event via machine learning classification configured to provide multiple non-bimodal classification scores based on the plurality of feature sets for the plurality of frames, each non-bimodal classification score being indicative of an ambiguity level in the machine learning classification; and fifth instructions to cause the processor to provide an output to a computing system, the output being indicative of the type of the touch event; wherein the fourth instructions comprise aggregation instructions to cause the processor to aggregate information representative of the touch event over the plurality of sensor images. 28. The touch-sensitive device of claim 27, wherein: the fourth instructions are configured to cause the processor to apply the plurality of feature sets to a machine learning classifier; and the aggregation instructions are configured to cause the processor to aggregate the plurality of feature sets for the plurality of sensor images before applying the plurality of feature sets. 29. 
The touch-sensitive device of claim 27, wherein the aggregation instructions are configured to cause the processor to aggregate the multiple non-bimodal classification scores.
Please help me write a proper abstract based on the patent claims.
A method for touch classification includes obtaining frame data representative of a plurality of frames captured by a touch-sensitive device, analyzing the frame data to define a respective blob in each frame of the plurality of frames, the blobs being indicative of a touch event, computing a plurality of feature sets for the touch event, each feature set specifying properties of the respective blob in each frame of the plurality of frames, and determining a type of the touch event via machine learning classification configured to provide multiple non-bimodal classification scores based on the plurality of feature sets for the plurality of frames, each non-bimodal classification score being indicative of an ambiguity level in the machine learning classification.
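The multi-frame scoring described in claims 12-15 (per-frame look-up table ratings, intended minus unintended, accumulated until a threshold is passed) can be sketched as below. The table entries, feature keys, and threshold values are invented for illustration and are much simpler than the claimed feature sets.

```python
# Hedged sketch of the claims' multi-frame scoring: two look-up tables
# rate each frame's blob features, their difference is the frame's blob
# classification rating score, and scores accumulate across frames
# until a classification threshold is passed.

INTENDED_LUT = {"small_round": 2.0, "large_flat": 0.5}    # hypothetical
UNINTENDED_LUT = {"small_round": 0.5, "large_flat": 2.0}  # hypothetical

FINGER_THRESHOLD = 3.0   # intended-touch classification threshold
PALM_THRESHOLD = -3.0    # palm classification threshold

def frame_score(feature_key):
    """Blob classification rating score for one frame:
    intended rating minus unintended rating (claim 13)."""
    return INTENDED_LUT[feature_key] - UNINTENDED_LUT[feature_key]

def classify_touch(feature_keys):
    """Aggregate per-frame scores (claim 14); stop once a threshold is
    passed, otherwise iterate with further feature sets (claim 15)."""
    cumulative = 0.0
    for key in feature_keys:
        cumulative += frame_score(key)
        if cumulative >= FINGER_THRESHOLD:
            return "intended"
        if cumulative <= PALM_THRESHOLD:
            return "palm"
    return "ambiguous"
```

Accumulating over frames is what lets the scores stay non-bimodal: a single frame rarely decides the type, and the running total expresses the remaining ambiguity.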
1. A system for managing a rules engine, the system comprising: a computing device implementing a rules engine; a router to interact with the rules engine and rule engine results, the router configured to feed inputs to and receive outputs from the rules engine; and the router further configured to: programmatically route results of rule execution by the rules engine to a hierarchical structure stored in computer storage for access by subscriber devices. 2. The system of claim 1 wherein the rules engine operates with a defined rules format that supports routing information within the rule itself. 3. The system of claim 1 wherein the rules engine operates with a defined rules format that supports routing information that is associated with the rule. 4. The system of claim 1 wherein the router supports a publish/subscribe connectivity protocol. 5. The system of claim 1 wherein the router supports a MQTT connectivity protocol. 6. The system of claim 1 wherein the rules engine, when a rule executes, automatically publishes the output of the rule to a specific publish topic in the hierarchical structure. 7. The system of claim 1 wherein other processes subscribe to access the hierarchical structure to receive and process outputs of rules that fired. 8. The system of claim 1 wherein the hierarchical structure is arranged as sets of topics with a superset of topics being topics that are higher in the hierarchy than the topic subscribed to. 9. The system of claim 1 wherein the processes/devices are sensor processes and sensor devices. 10. A method for managing a network of sensor devices, the method comprising: firing a rule by a rules engine in a device, the firing of the rule providing rule results; and programmatically, routing the rule results from the rules engine to a hierarchical structure stored in computer storage for access to the results by subscriber devices that subscribe to the rule. 11. 
The method of claim 10 wherein the rules engine operates with a defined rules format that supports routing information within the rule itself. 12. The method of claim 10 wherein programmatically, routing further comprises: automatically publishing by the rules engine, an output of the rule to a specific publish topic in the hierarchical structure. 13. The method of claim 10 further comprising: subscribing to the hierarchical structure by other processes to receive and process outputs of rules that fired. 14. The method of claim 10 wherein the devices are sensor devices.
Please help me write a proper abstract based on the patent claims.
A networked system for managing a physical intrusion detection/alarm system includes tiers of devices, a rules engine, and a router to interact with the rules engine and rule engine results, where the router is configured to feed inputs to and receive outputs from the rules engine, and the router is further configured to programmatically route results of rule execution by the rules engine to a hierarchical structure stored in computer storage for access by subscriber devices.
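The routing behavior in the claims (a rule carries its own routing information; firing it publishes the result to a hierarchical topic, and subscribers to a superset topic higher in the hierarchy also receive it) can be sketched as an MQTT-style in-memory router. The topic names, rule format, and matching rule below are assumptions for illustration.

```python
# Illustrative sketch (not the patented router): firing a rule publishes
# its result to a hierarchical topic defined within the rule itself;
# subscribers to a topic also receive results published to topics
# beneath it, mimicking MQTT-style topic hierarchies.

class Router:
    def __init__(self):
        self.subscribers = {}   # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, result):
        # Deliver to exact-topic subscribers and to supersets higher in
        # the hierarchy (e.g. "building/floor1/motion" also matches
        # subscribers of "building/floor1" and "building").
        parts = topic.split("/")
        for i in range(len(parts), 0, -1):
            for cb in self.subscribers.get("/".join(parts[:i]), []):
                cb(topic, result)

def fire_rule(router, rule, sensor_value):
    """Evaluate a rule and programmatically route its result using the
    routing information carried within the rule itself."""
    if rule["condition"](sensor_value):
        router.publish(rule["publish_topic"], rule["output"])

received = []
router = Router()
router.subscribe("building", lambda topic, result: received.append((topic, result)))
fire_rule(router,
          {"condition": lambda v: v > 30,
           "publish_topic": "building/floor1/motion",
           "output": "alarm"},
          sensor_value=42)
```

A real deployment would use an actual MQTT broker rather than this in-process dictionary, but the superset-matching walk up the topic hierarchy is the point of claim 8.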
1. A method to identify anomalous behavior of a monitored entity, the method comprising, by a processing system: extracting features from data related to the operation of an entity; mapping the extracted features to states to generate a state sequence; determining an expected value of a metric based on the state sequence; and comparing the determined expected value of the metric to an observed value of the metric. 2. The method of claim 1, further comprising: presenting, via a user interface, a notification of anomalous behavior of the entity if the observed value of the metric differs from the expected value of the metric by a threshold amount. 3. The method of claim 1, wherein the metric is a performance metric or a sustainability metric. 4. The method of claim 1, wherein the data is reported by sensors monitoring various performance parameters of the entity. 5. The method of claim 4, wherein the data is recorded over the course of at least 24 hours of operation of the entity and the state sequence includes a plurality of distinct states. 6. The method of claim 1, wherein the expected value of the metric is determined using a state machine model previously trained on data related to the operation of one or more other entities of the same type as the entity. 7. The method of claim 1, wherein the expected value of the metric is determined using a mean value comparison technique, a distribution comparison technique, or a likelihood comparison technique. 8. A system to identify anomalous behavior of a monitored entity, the system comprising: sensors to report data regarding at least two parameters of an entity during operation; a feature extraction module to extract features from the reported data; a state sequence module to generate a state sequence by mapping the extracted features to a plurality of states; and an anomaly detection module to compare an expected value of a metric based on the state sequence to an observed value of the metric. 9. 
The system of claim 8, further comprising: a user interface to alert a user of anomalous behavior of the entity if the expected value of the metric differs from the observed value of the metric by a threshold amount. 10. The system of claim 9, wherein the user interface is configured to present a list of detected anomalies ordered by level of importance. 11. The system of claim 8, further comprising: a training module to build a state machine model based on observed operating parameters of one or more other entities of the same type as the entity. 12. The system of claim 8, further comprising: a memory storing a state machine model corresponding to the entity, wherein the anomaly detection module is configured to determine the expected value of the metric using information from the state machine model. 13. The system of claim 12, wherein the plurality of states into which the extracted features are mapped are predetermined based on state patterns in the state machine model. 14. The system of claim 13, wherein the state sequence module comprises a new-state detection module configured to detect a potential new state exhibited by a portion of the extracted features, wherein the potential new state corresponds to a pattern that does not exist in the state machine model. 15. The system of claim 8, wherein the system is configured to identify anomalous behavior in a plurality of monitored entities. 16. The system of claim 15, wherein the data reported by the sensors comprises measured parameters from each of the monitored entities, the state sequence module is configured to generate a state sequence for each of the monitored entities, and the anomaly detection module is configured to detect anomalous behavior in any one of or combination of the monitored entities. 17. The system of claim 15, wherein the plurality of monitored entities is an HVAC system. 18. 
A non-transitory computer-readable storage medium storing instructions for execution by a computer to identify anomalous behavior of a monitored entity, the instructions when executed causing the computer to: extract features from data characterizing operation of an entity during a time period; map the extracted features to states to generate a state sequence; determine an expected value of a metric based on the state sequence and a state machine model for the entity; compare the determined expected value of the metric to an observed value of the metric; and identify anomalous behavior if the expected value of the metric differs from the observed value of the metric. 19. The computer-readable storage medium of claim 18, the instructions when executed causing the computer to receive the data from a plurality of sensors monitoring performance parameters of the entity.
Please help me write a proper abstract based on the patent claims.
Described herein are techniques for identifying anomalous behavior of a monitored entity. Features can be extracted from data related to operation of an entity. The features can be mapped to a plurality of states to generate a state sequence. An observed value of a metric can be compared to an expected value of the metric based on the state sequence.
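The claimed pipeline (extract features, map them to states, compare an expected metric against an observed one) can be sketched in a few lines. This is a hedged illustration only: the windowed-mean features, nearest-centroid state mapping, per-state metric model, and 20% deviation threshold are all assumptions, not the patent's disclosed implementation.

```python
# Hypothetical sketch of the claimed anomaly-detection pipeline. All function
# names, thresholds, and the toy "state machine model" are illustrative.
import numpy as np

def extract_features(readings, window=4):
    """Mean of each non-overlapping window of sensor readings."""
    n = len(readings) // window
    return np.asarray(readings[: n * window]).reshape(n, window).mean(axis=1)

def map_to_states(features, centroids):
    """Map each feature to the index of its nearest predefined state."""
    return np.argmin(np.abs(features[:, None] - centroids[None, :]), axis=1)

def expected_metric(state_seq, per_state_metric):
    """Expected metric value: sum of the model's per-state contributions."""
    return sum(per_state_metric[s] for s in state_seq)

def is_anomalous(expected, observed, threshold=0.2):
    """Flag an anomaly when the relative deviation exceeds the threshold."""
    return abs(observed - expected) > threshold * abs(expected)

# Toy model: two states (low-load vs high-load) with assumed per-state
# consumption, e.g. for an HVAC-like entity.
centroids = np.array([1.0, 5.0])
per_state_metric = {0: 2.0, 1: 10.0}

readings = [1, 1, 1, 1, 5, 5, 5, 5]   # one low-load then one high-load window
states = map_to_states(extract_features(readings, window=4), centroids)
exp = expected_metric(states, per_state_metric)   # 2.0 + 10.0 = 12.0
print(is_anomalous(exp, observed=20.0))           # True: 20 deviates >20% from 12
```

An observed value of 20.0 against an expected 12.0 exceeds the assumed 20% band, so the sketch would raise the claim-2 style notification.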
1. A method of manifold-aware ranking kernel learning programmed in a memory of a device comprising: a. performing combined supervised kernel learning and unsupervised manifold kernel learning; and b. generating a non-linear kernel model. 2. The method of claim 1 wherein Bregman projection is utilized when performing the supervised kernel learning. 3. The method of claim 1 wherein unlabeled data is utilized in the unsupervised manifold kernel learning. 4. The method of claim 1 wherein the result comprises a non-linear metric defined by a kernel model. 5. The method of claim 1 wherein the supervised kernel learning employs a relative comparison constraint. 6. The method of claim 1 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart phone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a portable music player, a tablet computer, a video player, a DVD writer/player, a high definition video writer/player, a television and a home entertainment system. 7. A method of information retrieval programmed in a memory of a device comprising: a. receiving a search query input; b. performing a search based on the search query input and using a metric kernel learned by manifold-aware ranking kernel learning; and c. presenting a search result of the search. 8. The method of claim 7 wherein manifold-aware ranking kernel learning comprises: i. performing combined supervised kernel learning and unsupervised manifold kernel learning; and ii. generating a non-linear kernel model. 9. The method of claim 8 wherein Bregman projection is utilized when performing the supervised kernel learning. 10. The method of claim 8 wherein unlabeled data is utilized in the unsupervised manifold kernel learning. 11. 
The method of claim 8 wherein the result comprises a non-linear metric defined by a kernel model. 12. The method of claim 8 wherein the supervised kernel learning employs a relative comparison constraint. 13. The method of claim 7 wherein the search result comprises a set of entities from a database that are similar to the search query input. 14. The method of claim 7 wherein the device is selected from the group consisting of a personal computer, a laptop computer, a computer workstation, a server, a mainframe computer, a handheld computer, a personal digital assistant, a cellular/mobile telephone, a smart phone, a smart appliance, a gaming console, a digital camera, a digital camcorder, a camera phone, a portable music player, a tablet computer, a video player, a DVD writer/player, a high definition video writer/player, a television and a home entertainment system. 15. An apparatus comprising: a. a non-transitory memory for storing an application, the application for: i. performing combined supervised kernel learning and unsupervised manifold kernel learning; and ii. generating a non-linear kernel model; and b. a processing component coupled to the memory, the processing component configured for processing the application. 16. The apparatus of claim 15 wherein Bregman projection is utilized when performing the supervised kernel learning. 17. The apparatus of claim 15 wherein unlabeled data is utilized in the unsupervised manifold kernel learning. 18. The apparatus of claim 15 wherein the result comprises a non-linear metric defined by a kernel model. 19. The apparatus of claim 15 wherein the supervised kernel learning employs a relative comparison constraint. 20. An apparatus comprising: a. a non-transitory memory for storing an application, the application for: i. receiving a search query input; ii. performing a search based on the search query input and using a metric kernel learned by manifold-aware ranking kernel learning; and iii. 
presenting a search result of the search; and b. a processing component coupled to the memory, the processing component configured for processing the application. 21. The apparatus of claim 20 wherein manifold-aware ranking kernel learning comprises: i. performing combined supervised kernel learning and unsupervised manifold kernel learning; and ii. generating a non-linear kernel model. 22. The apparatus of claim 21 wherein Bregman projection is utilized when performing the supervised kernel learning. 23. The apparatus of claim 21 wherein unlabeled data is utilized in the unsupervised manifold kernel learning. 24. The apparatus of claim 21 wherein the result comprises a non-linear metric defined by a kernel model. 25. The apparatus of claim 21 wherein the supervised kernel learning employs a relative comparison constraint. 26. The apparatus of claim 20 wherein the search result comprises a set of entities from a database that are similar to the search query input.
Please help me write a proper abstract based on the patent claims.
A manifold-aware ranking kernel (MARK) for information retrieval is described herein. The MARK is implemented using combined supervised and unsupervised learning. MARK is ranking-oriented: the relative comparison formulation directly targets the ranking problem, making the approach well suited to information retrieval. MARK is also manifold-aware: the algorithm is able to exploit information from ample unlabeled data, which helps to improve generalization performance, particularly when there is a limited number of labeled constraints. MARK is non-linear: as a kernel-based approach, the algorithm leads to a highly non-linear metric capable of modeling complicated data distributions.
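The claims do not disclose the learning procedure, so the sketch below illustrates only the manifold-aware ingredient, not the patent's Bregman-projection method: a base RBF kernel over mostly unlabeled points is deformed by a graph Laplacian via the standard manifold-regularization construction K~ = (K^-1 + gamma*L)^-1, and a relative-comparison (ranking) constraint is then checked under the deformed kernel. The toy data, gamma, and neighborhood size are all illustrative assumptions.

```python
# Hedged sketch: manifold-deformed kernel plus a relative-comparison check.
import numpy as np

def rbf_kernel(X, sigma=0.5):
    """Base RBF (Gaussian) kernel matrix over the rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def knn_laplacian(X, k=2):
    """Unnormalized graph Laplacian of a symmetric kNN graph on X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = len(X)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1 : k + 1]:   # skip self at position 0
            W[i, j] = W[j, i] = 1.0
    return np.diag(W.sum(1)) - W

def manifold_deformed_kernel(K, L, gamma=0.5):
    """K~ = (K^-1 + gamma*L)^-1: base kernel warped along the data manifold."""
    return np.linalg.inv(np.linalg.inv(K) + gamma * L)

# Two tight clusters of (mostly unlabeled) points on a line.
X = np.array([[0.0, 0], [0.2, 0], [0.4, 0],    # cluster A
              [3.0, 0], [3.2, 0], [3.4, 0]])   # cluster B
K = rbf_kernel(X)
Kt = manifold_deformed_kernel(K, knn_laplacian(X))

# Relative comparison (ranking) constraint: point 0 should rank its
# cluster-mate 2 above the cross-cluster point 3 under the learned kernel.
print(Kt[0, 2] > Kt[0, 3])
```

Under these assumptions the deformed kernel keeps within-cluster similarity well above cross-cluster similarity, which is exactly the kind of relative comparison the supervised stage would enforce as a constraint.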
1. A method for aggregation of traffic impact metrics, the method comprising: associating, by one or more processors, each of a plurality of holidays with a holiday category of a plurality of holiday categories, wherein the plurality of holiday categories includes a first holiday category and a second holiday category; identifying, by one or more processors, a plurality of points of interest along a link of a transportation network; associating, by one or more processors, at least one of the plurality of points of interest with the first holiday category and at least one of the plurality of points of interest with the second holiday category; determining, by one or more processors, a mean category impact for each of the plurality of holiday categories; and determining, by one or more processors, an aggregated traffic impact metric based, at least in part, on the mean category impact of each of the plurality of holiday categories. 2. The method of claim 1, wherein the aggregated traffic impact metric corresponds to a date on which a holiday of the first holiday category occurs and on which a holiday of the second holiday category occurs. 3. The method of claim 2, further comprising: determining, by one or more processors, a mean weekday volume for each day of a week, wherein the mean weekday volume for each day of the week is based, at least in part, on historic traffic data for one or more previous days corresponding to the day of the week. 4. The method of claim 3, wherein the aggregated traffic impact metric is a function of: (i) the mean category impact for each of the plurality of holiday categories; (ii) a number of points of interest located along the link and associated with each of the plurality of holiday categories; and (iii) a total number of points of interest along the link. 5. 
The method of claim 4, wherein the mean category impact for each of the plurality of holiday categories is weighted based on a ratio of the number of points of interest along a link that are associated with the holiday category to the total number of points of interest along the link. 6. The method of claim 2, wherein the aggregated traffic impact metric is a sum of: (i) a mean weekday volume of the date; and (ii) the mean category impact for each of the plurality of holiday categories, wherein each mean category impact is weighted based, at least in part, on the mean category impact of each other holiday category. 7. The method of claim 3, wherein the mean category impact for each of the plurality of holiday categories is based on a traffic impact for each holiday of the holiday category. 8. The method of claim 7, wherein the traffic impact for each holiday is based on historical traffic data of one or more previous occurrences of the holiday and a mean weekday volume for a day of a week of each of the one or more previous occurrences. 9. 
A computer program product for aggregation of traffic impact metrics, the computer program product comprising: a computer readable storage medium and program instructions stored on the computer readable storage medium, the program instructions comprising: program instructions to associate each of a plurality of holidays with a holiday category of a plurality of holiday categories, wherein the plurality of holiday categories includes a first holiday category and a second holiday category; program instructions to identify a plurality of points of interest along a link of a transportation network; program instructions to associate at least one of the plurality of points of interest with the first holiday category and at least one of the plurality of points of interest with the second holiday category; program instructions to determine a mean category impact for each of the plurality of holiday categories; and program instructions to determine an aggregated traffic impact metric based, at least in part, on the mean category impact of each of the plurality of holiday categories. 10. The computer program product of claim 9, wherein the aggregated traffic impact metric corresponds to a date on which a holiday of the first holiday category occurs and on which a holiday of the second holiday category occurs. 11. The computer program product of claim 10, wherein the program instructions further comprise: program instructions to determine a mean weekday volume for each day of a week, wherein the mean weekday volume for each day of the week is based, at least in part, on historic traffic data for one or more previous days corresponding to the day of the week. 12. 
The computer program product of claim 11, wherein the aggregated traffic impact metric is a function of: (i) the mean category impact for each of the plurality of holiday categories; (ii) a number of points of interest located along the link and associated with each of the plurality of holiday categories; and (iii) a total number of points of interest along the link. 13. The computer program product of claim 12, wherein the mean category impact for each of the plurality of holiday categories is weighted based on a ratio of the number of points of interest along a link that are associated with the holiday category to the total number of points of interest along the link. 14. The computer program product of claim 10, wherein the aggregated traffic impact metric is a sum of: (i) a mean weekday volume of the date; and (ii) the mean category impact for each of the plurality of holiday categories, wherein each mean category impact is weighted based, at least in part, on the mean category impact of each other holiday category. 15. 
A computer system for aggregation of traffic impact metrics, the computer system comprising: one or more computer processors; one or more computer readable storage media; program instructions stored on the computer readable storage media for execution by at least one of the one or more processors, the program instructions comprising: program instructions to associate each of a plurality of holidays with a holiday category of a plurality of holiday categories, wherein the plurality of holiday categories includes a first holiday category and a second holiday category; program instructions to identify a plurality of points of interest along a link of a transportation network; program instructions to associate at least one of the plurality of points of interest with the first holiday category and at least one of the plurality of points of interest with the second holiday category; program instructions to determine a mean category impact for each of the plurality of holiday categories; and program instructions to determine an aggregated traffic impact metric based, at least in part, on the mean category impact of each of the plurality of holiday categories. 16. The computer system of claim 15, wherein the aggregated traffic impact metric corresponds to a date on which a holiday of the first holiday category occurs and on which a holiday of the second holiday category occurs. 17. The computer system of claim 16, wherein the program instructions further comprise: program instructions to determine a mean weekday volume for each day of a week, wherein the mean weekday volume for each day of the week is based, at least in part, on historic traffic data for one or more previous days corresponding to the day of the week. 18. 
The computer system of claim 17, wherein the aggregated traffic impact metric is a function of: (i) the mean category impact for each of the plurality of holiday categories; (ii) a number of points of interest located along the link and associated with each of the plurality of holiday categories; and (iii) a total number of points of interest along the link. 19. The computer system of claim 18, wherein the mean category impact for each of the plurality of holiday categories is weighted based on a ratio of the number of points of interest along a link that are associated with the holiday category to the total number of points of interest along the link. 20. The computer system of claim 16, wherein the aggregated traffic impact metric is a sum of: (i) a mean weekday volume of the date; and (ii) the mean category impact for each of the plurality of holiday categories, wherein each mean category impact is weighted based, at least in part, on the mean category impact of each other holiday category.
Please help me write a proper abstract based on the patent claims.
Aggregation of traffic impact metrics is provided. Each of a plurality of holidays is associated with a holiday category of a plurality of holiday categories. The plurality of holiday categories includes a first holiday category and a second holiday category. A plurality of points of interest along a link of a transportation network is identified. At least one of the plurality of points of interest is associated with the first holiday category and at least one of the plurality of points of interest with the second holiday category. A mean category impact for each of the plurality of holiday categories is determined. An aggregated traffic impact metric is determined based, at least in part, on the mean category impact of each of the plurality of holiday categories.
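Claims 4–6 describe the aggregated metric concretely enough to sketch: each category's mean impact is weighted by the share of the link's points of interest (POIs) associated with that category, and the weighted impacts are added to the link's mean weekday volume. The weighting-by-POI-ratio reading follows claim 5; the traffic numbers below are illustrative assumptions.

```python
# Hedged sketch of the claimed aggregation. All volumes and impacts are
# made-up example numbers for one link on a date where two holidays coincide.

def aggregated_traffic_impact(mean_weekday_volume, category_impacts, category_poi_counts):
    """Baseline volume plus each category's mean impact weighted by its POI share."""
    total_pois = sum(category_poi_counts.values())
    impact = mean_weekday_volume
    for cat, mean_impact in category_impacts.items():
        weight = category_poi_counts.get(cat, 0) / total_pois
        impact += weight * mean_impact
    return impact

# A link with 10 POIs: 6 associated with a "retail" holiday category and
# 4 with a "religious" one, with assumed mean category impacts in vehicles/day.
category_impacts = {"retail": +400.0, "religious": -150.0}
category_poi_counts = {"retail": 6, "religious": 4}
print(aggregated_traffic_impact(2000.0, category_impacts, category_poi_counts))
# 2000 + 0.6*400 + 0.4*(-150) = 2180 (up to float rounding)
```

The same function covers the claim-2 case of coinciding holidays: each coinciding category simply contributes its own weighted term.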
1. A method for annotating natural language text based on an emotive content of the natural language text, the method comprising the steps of: receiving, by one or more computer processors, a natural language text from a user, wherein the natural language text includes typing characteristics metadata, and wherein the typing characteristics metadata includes all of the following: a key press duration; a duration between key presses in the natural language text; a capitalization of the natural language text; a frequency of the capitalization of the natural language text; a frequency of spelling errors in the natural language text; an average word length in the natural language text; and previously deleted natural language text; determining, by one or more computer processors, an emotive content of the natural language text using a machine learning model; and determining, by one or more computer processors, an annotation to the natural language text based on the emotive content, wherein the annotation includes all of the following: an emoticon; a picture; an audio; a video; and a text that describes the emotive content. 2. The method of claim 1, further comprising: receiving, by one or more computer processors, an indication from the user, wherein the indication is to an accuracy of the modification to the natural language text; and updating, by one or more computer processors, the machine learning model based on the indication. 3. (canceled) 4. The method of claim 1, wherein the step of determining, by one or more computer processors, an emotive content of the natural language text using a machine learning model comprises: determining, by one or more computer processors, an emotive content of the natural language text using a machine learning model and the typing characteristics metadata. 5. (canceled) 6. (canceled) 7. The method of claim 1, wherein the annotation is modifying a font of the natural language text. 8. 
A computer program product for annotating natural language text based on an emotive content of the natural language text, the computer program product comprising: one or more computer readable storage media; and program instructions stored on the one or more computer readable storage media, the program instructions comprising: program instructions to receive a natural language text from a user, wherein the natural language text includes typing characteristics metadata, and wherein the typing characteristics metadata includes all of the following: a key press duration; a duration between key presses in the natural language text; a capitalization of the natural language text; a frequency of the capitalization of the natural language text; a frequency of spelling errors in the natural language text; an average word length in the natural language text; and previously deleted natural language text; program instructions to determine an emotive content of the natural language text using a machine learning model; and program instructions to determine an annotation to the natural language text based on the emotive content, wherein the annotation includes all of the following: an emoticon; a picture; an audio; a video; and a text that describes the emotive content. 9. The computer program product of claim 8, further comprising program instructions, stored on the one or more computer readable storage media, to: receive an indication from the user, wherein the indication is to an accuracy of the modification to the natural language text; and update the machine learning model based on the indication. 10. (canceled) 11. The computer program product of claim 8, wherein the program instructions to determine an emotive content of the natural language text using a machine learning model comprise: program instructions to determine an emotive content of the natural language text using a machine learning model and the typing characteristics metadata. 12. (canceled) 13. (canceled) 14. 
The computer program product of claim 8, wherein the annotation is modifying a font of the natural language text. 15. A computer system for annotating natural language text based on an emotive content of the natural language text, the computer system comprising: one or more computer processors; one or more computer readable storage media; and program instructions, stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, the program instructions comprising: program instructions to receive a natural language text from a user, wherein the natural language text includes typing characteristics metadata, and wherein the typing characteristics metadata includes all of the following: a key press duration; a duration between key presses in the natural language text; a capitalization of the natural language text; a frequency of the capitalization of the natural language text; a frequency of spelling errors in the natural language text; an average word length in the natural language text; and previously deleted natural language text; program instructions to determine an emotive content of the natural language text using a machine learning model; and program instructions to determine an annotation to the natural language text based on the emotive content, wherein the annotation includes all of the following: an emoticon; a picture; an audio; a video; and a text that describes the emotive content. 16. The computer system of claim 15, further comprising program instructions, stored on the one or more computer readable storage media for execution by at least one of the one or more computer processors, to: receive an indication from the user, wherein the indication is to an accuracy of the modification to the natural language text; and update the machine learning model based on the indication. 17. (canceled) 18. 
The computer system of claim 15, wherein the program instructions to determine an emotive content of the natural language text using a machine learning model comprise: program instructions to determine an emotive content of the natural language text using a machine learning model and the typing characteristics metadata. 19. (canceled) 20. (canceled)
Please help me write a proper abstract based on the patent claims.
A natural language text is received from a user. The natural language text includes typing characteristics metadata. An emotive content of the natural language text is determined using a machine learning model. The natural language text is modified based on the emotive content.
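The claims call for a machine learning model over the typing-characteristics metadata; as a hedged stand-in, the sketch below scores a few of the claimed signals (key press duration, capitalization frequency, spelling errors, deleted text) with hand-picked thresholds and maps the score to an emoticon-plus-text annotation. Every threshold, weight, and emoticon choice is an illustrative assumption.

```python
# Hedged rule-based stand-in for the claimed ML model over typing metadata.

def emotive_score(meta):
    """Crude arousal score from typing characteristics (higher = more agitated)."""
    score = 0
    if meta["key_press_duration_ms"] > 150:   # heavy, deliberate key presses
        score += 1
    if meta["caps_frequency"] > 0.3:          # unusually frequent capitalization
        score += 2
    if meta["spelling_error_rate"] > 0.1:     # rushed, error-prone typing
        score += 1
    if meta["deleted_text_chars"] > 50:       # heavy rewriting before sending
        score += 1
    return score

def annotate(text, meta):
    """Append an emoticon and a text label describing the inferred emotive content."""
    label = "agitated" if emotive_score(meta) >= 3 else "calm"
    emoticon = ">:(" if label == "agitated" else ":)"
    return f"{text} [{emoticon} {label}]"

meta = {"key_press_duration_ms": 180, "caps_frequency": 0.5,
        "spelling_error_rate": 0.02, "deleted_text_chars": 120}
print(annotate("FINE, do it your way", meta))
# FINE, do it your way [>:( agitated]
```

The claim-2 feedback loop would correspond to adjusting these thresholds (or retraining the real model) when the user flags the annotation as inaccurate.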
1. A cognitive state prediction system comprising: a receiving circuit configured to receive an electronic message sent by a first user; a labeling circuit configured to query a second user to associate a label with the electronic message based on a cognitive state of the first user; and a correlating circuit configured to correlate the label with user data at a time of sending the electronic message, the user data corresponding to data output by at least one of a wearable and an external sensor in a database. 2. The system of claim 1, further comprising an analyzing and label predicting circuit configured to predict a label of a current cognitive state of the first user based on current user data and a plurality of labels stored with the user data in the database. 3. The system of claim 1, further comprising an analyzing and label predicting circuit configured to: analyze the user data stored in the database; analyze a current state of the first user based on current user data being detected by at least one of the wearable and the external sensor; and predict a predicted label of a current cognitive state of the first user to send to associate with a current electronic message being sent by the first user. 4. The system of claim 3, wherein the predicted label includes a plurality of cognitive states of the first user, and wherein each of the plurality of cognitive states of the first user associated with the predicted label includes a confidence level for each of the plurality of cognitive states. 5. The system of claim 1, wherein the labeling circuit further queries the first user to confirm that the label input by the second user is correct for the electronic message. 6. The system of claim 1, wherein the database comprises an external database. 7. The system of claim 1, wherein the database includes pre-configured labels associated with user data of a cohort. 8. 
The system of claim 1, wherein the receiving circuit further receives electronic calendar data for the first user, and wherein the labeling circuit sends the electronic calendar data with the query to the second user. 9. The system of claim 1, wherein the labeling circuit sends electronic calendar data with the query to the second user such that the second user determines the label based on the cognitive state of the first user and the calendar data. 10. The system of claim 1, wherein the user data detected by at least one of the external sensor and the wearable includes at least one of: a glucose level; blood pressure; electrocardiogram (ECG); a breathing status; a heart rate; a stress level; a perspiration level; a facial expression; a measurement of a body movement; an eye movement; and a voice characteristic. 11. The system of claim 1, wherein the cognitive state of the first user that the second user associates the label with the electronic message comprises an interpreted cognitive state of the first user by the second user. 12. The system of claim 1, wherein the interpreted cognitive state of the first user by the second user is based on prior knowledge of the first user by the second user. 13. The system of claim 1, wherein the user data further includes data corresponding to a mode of the sending of the electronic message by the first user. 14. A cognitive state prediction method comprising: receiving an electronic message sent by a first user; querying a second user to associate a label with the electronic message based on a cognitive state of the first user; and correlating the label with user data at a time of sending the electronic message, the user data corresponding to data output by at least one of a wearable and an external sensor in a database. 15. 
The method of claim 14, further comprising predicting a label of a current cognitive state of the first user based on current user data and a plurality of labels stored with the user data in the database. 16. The method of claim 14, further comprising: analyzing the user data stored in the database; analyzing a current state of the first user based on current user data being detected by at least one of the wearable and the external sensor; and predicting a label of a current cognitive state of the first user to send to associate with a current electronic message being sent by the first user. 17. The method of claim 16, wherein the predicted label includes a plurality of cognitive states of the first user, and wherein each of the plurality of cognitive states of the first user associated with the predicted label includes a confidence level. 18. The method of claim 14, wherein the querying further queries the first user to confirm that the label input by the second user is correct for the electronic message. 19. The method of claim 14, wherein the database includes pre-configured labels associated with user data of a cohort. 20. A non-transitory computer-readable recording medium recording a cognitive state prediction program, the program causing a computer to perform: receiving an electronic message sent by a first user; querying a second user to associate a label with the electronic message based on a cognitive state of the first user; and correlating the label with user data at a time of sending the electronic message, the user data corresponding to data output by at least one of a wearable and an external sensor in a database.
Please help me write a proper abstract based on the patent claims.
A cognitive state prediction method, system, and non-transitory computer readable medium, include a receiving circuit configured to receive an electronic message sent by a first user, a labeling circuit configured to query a second user to associate a label with the electronic message based on a cognitive state of the first user, and a correlating circuit configured to correlate the label with user data at a time of sending the electronic message, the user data corresponding to data output by at least one of a wearable and an external sensor in a database.
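The label-prediction step (claims 2–4) can be sketched as follows: previously correlated (sensor reading, second-user label) pairs are stored, and the predicted label for a new message is a distribution over cognitive states with a confidence level per state. k-nearest-neighbor voting over a single heart-rate signal is an assumed stand-in for whatever predictor the patent envisions; the readings and labels are illustrative.

```python
# Hedged sketch of predicting a labeled cognitive state with confidences.
from collections import Counter

def predict_label(history, current_reading, k=3):
    """Return {cognitive_state: confidence} from the k nearest stored readings."""
    nearest = sorted(history, key=lambda r: abs(r[0] - current_reading))[:k]
    votes = Counter(label for _, label in nearest)
    return {state: count / k for state, count in votes.items()}

# Stored correlations: (wearable heart rate when sending, label supplied by
# the second user for that message). All values are made up.
history = [(62, "calm"), (65, "calm"), (68, "calm"),
           (95, "stressed"), (102, "stressed"), (110, "stressed")]
print(predict_label(history, current_reading=99))
# {'stressed': 1.0}: all 3 nearest stored readings carry the "stressed" label
```

With mixed neighbors the returned dictionary would carry fractional confidences for several states, matching the claim-4 requirement that the predicted label include a confidence level per cognitive state.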
1. A system for indicating a probability of errors in multimedia content, the system comprising: a memory; a processor communicatively coupled to the memory, where the processor is configured to perform: monitoring work being performed on multimedia content; identifying distractions during the monitoring of the work being performed; calculating a probability of errors in at least one location of the multimedia content based on the distractions that have been identified; and annotating the location with an indication of the probability of errors. 2. The system of claim 1, wherein the multimedia content is text, sound, a 2-D picture, a 3-D picture, a 2-D video, a 3-D video, or a combination thereof. 3. The system of claim 1, wherein the monitoring work being performed includes contemporaneous monitoring of pop-ups on a graphical user interface, instant messaging, e-mail, operation of telephone, detection of other people within an area, switching of windows on a graphical user interface, amount of elapsed time on a given task, ambient noise, user eye-tracking, user typing speed, user heart rate, user breathing rate, user blink frequency, user skin conductance, or a combination thereof. 4. The system of claim 1, wherein the calculating the probability of errors includes using a function F(U,S,P) based on a determination of user state (U); a determination of sensitivity (S) of user input; and a determination of user characteristics stored in a profile (P). 5. The system of claim 4, wherein the user state (U) includes an output of the work being performed on the multimedia content, a day of week, a time of day, a location, or a combination thereof. 6. The system of claim 4, wherein the sensitivity (S) includes a location in the multimedia content, a category of the multimedia content, a complexity of the multimedia content, regulatory requirements, legal requirements, or a combination thereof. 7. 
The system of claim 4, wherein the profile (P) includes sensitivity according to times of day, history of creating errors, crowd-sourcing of a team, presence of specific individuals within a given area, vocational profile of user, or a combination thereof. 8. The system of claim 4, wherein the function F(U,S,P) uses machine learning. 9. The system of claim 1, wherein the annotating the location with an indication of the probability of errors includes annotating with color of text, color of background area, blinking font, font size, textures, insertion of tracking bubbles, or a combination thereof. 10. The system of claim 1, further comprising displaying the distractions that have been identified in conjunction with the annotating. 11. The system of claim 1, in which the multimedia content is an audio signal and the annotating includes graphical markers in a graphic representation of the audio signal, additional audio, or a combination thereof. 12. The system of claim 1, in which the multimedia content is a video signal and the annotating is graphical markers applied to sections of the video signal. 13. The system of claim 1, in which the multimedia content is a video game or virtual universe and the annotating indicates areas of user game play with probabilities of distractions. 14. A non-transitory computer program product for indicating a probability of errors in multimedia content, the computer program product comprising a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code configured to perform: monitoring work being performed on multimedia content; identifying distractions during the monitoring of the work being performed; calculating a probability of errors in at least one location of the multimedia content based on the distractions that have been identified; and annotating the location with an indication of the probability of errors. 15. 
The non-transitory computer program product of claim 14, wherein the multimedia content is text, sound, a 2-D picture, a 3-D picture, a 2-D video, a 3-D video, or a combination thereof. 16. The non-transitory computer program product of claim 14, wherein the monitoring work being performed includes contemporaneous monitoring of pop-ups on a graphical user interface, instant messaging, e-mail, operation of telephone, detection of other people within an area, switching of windows on a graphical user interface, amount of elapsed time on a given task, ambient noise, user eye-tracking, user typing speed, user heart rate, user breathing rate, user blink frequency, user skin conductance, or a combination thereof. 17. The non-transitory computer program product of claim 14, wherein the calculating the probability of errors includes using a function F(U,S,P) based on a determination of user state (U); a determination of sensitivity (S) of user input; and a determination of user characteristics stored in a profile (P). 18. The non-transitory computer program product of claim 17, wherein the user state (U) includes an output of the work being performed on the multimedia content, a day of week, a time of day, a location, or a combination thereof. 19. The non-transitory computer program product of claim 17, wherein the sensitivity (S) includes a location in the multimedia content, a category of the multimedia content, a complexity of the multimedia content, regulatory requirements, legal requirements, or a combination thereof. 20. The non-transitory computer program product of claim 17, wherein the profile (P) includes sensitivity according to times of day, history of creating errors, crowd-sourcing of a team, presence of specific individuals within a given area, vocational profile of user, or a combination thereof.
Please help me write a proper abstract based on the patent claims.
Disclosed is a novel system and method for indicating a probability of errors in multimedia content. The system determines a user state or possible user distraction level. The user distraction level is indicated in the multimedia content. In one example, work being performed on the multimedia content is monitored. Distractions are identified while the work is being monitored. A probability of errors is calculated in at least one location of the multimedia content based on the distractions that have been identified. Annotations are used to indicate the probability of errors. In another example, the calculating of probability includes using a function F(U,S,P) based on a combination of: i) a determination of user state (U), ii) a determination of sensitivity (S) of user input, and iii) a determination of user characteristics stored in a profile (P).
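The claims specify only that F(U,S,P) combines user state, sensitivity, and profile into an error probability (possibly via machine learning, per claim 8). A minimal sketch of such a function follows; the weighted-linear form, the specific features, and all names are illustrative assumptions, not the patent's method:

```python
# Hypothetical sketch of the claimed F(U,S,P) error-probability function.
# The linear weighting and the feature names are assumptions for illustration.

def error_probability(user_state, sensitivity, profile):
    """Combine distraction signals (U), content sensitivity (S), and
    user profile (P) into an error probability in [0, 1]."""
    # User state: count of active distractions (pop-ups, IMs, ambient noise...)
    u = min(1.0, 0.2 * user_state.get("distractions", 0))
    # Sensitivity: complexity of the content location being worked on
    s = sensitivity.get("complexity", 0.5)
    # Profile: historical error rate for this user at this time of day
    p = profile.get("error_history", 0.1)
    # Simple weighted combination, clipped to a valid probability
    return min(1.0, 0.5 * u + 0.3 * s + 0.2 * p)

print(error_probability({"distractions": 3}, {"complexity": 0.8}, {"error_history": 0.4}))
```

The resulting score could then drive the claimed annotation (text color, blinking font, tracking bubbles) at the affected location.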
1. One or more non-transitory computer-readable storage media storing computer-executable instructions for causing a computing system to perform processing to execute a process model, the processing comprising: receiving a specification of a process model, the process model comprising a plurality of process components; determining a relationship between a first process component and another process component of the plurality of process components using a predictive model, the predictive model associated with at least a portion of the process model; defining a process rule for the first process component, the process rule specifying a second process component to be executed, the process rule comprising the relationship or a heuristic rule; and executing the second process component according to the process rule. 2. The one or more non-transitory computer-readable storage media of claim 1, wherein determining a relationship between a first process component and another process component of the plurality of process components using a predictive model comprises: issuing a request for execution of a stored procedure in a database system. 3. The one or more non-transitory computer-readable storage media of claim 2, wherein the request for execution of a stored procedure in a database system accesses a predictive model of at least a portion of the process model components. 4. The one or more non-transitory computer-readable storage media of claim 1, wherein determining a relationship between a first process component and another process component of the plurality of process components comprises requesting the analysis of the predictive model. 5. The one or more non-transitory computer-readable storage media of claim 1, wherein defining a process rule for the first process component comprises receiving user input selecting the relationship or the heuristic rule. 6. 
The one or more non-transitory computer-readable storage media of claim 1, wherein defining a process rule for the first process component comprises evaluating a confidence level associated with a factor of the predictive model. 7. The one or more non-transitory computer-readable storage media of claim 1, wherein defining a process rule for the first process component comprises determining whether a current date is later than a threshold date and, if the current date is later than the threshold date, selecting the relationship as the process rule. 8. The one or more non-transitory computer-readable storage media of claim 1, wherein defining a process rule for the first process component comprises determining whether a number of data points analyzed by the predictive model exceeds a threshold, and, if the threshold is exceeded, selecting the relationship as the process rule. 9. The one or more non-transitory computer-readable storage media of claim 1, further comprising generating a data aggregation comprising data associated with the process model and data associated with the predictive model. 10. The one or more non-transitory computer-readable storage media of claim 1, wherein the defining selects the heuristic rule, the processing further comprising: selecting the relationship at a later time. 11. The one or more non-transitory computer-readable storage media of claim 1, wherein the defining selects the heuristic rule, and wherein the process model is received with heuristic rules forming process rules specifying an order in which the process components should be executed. 12. The one or more non-transitory computer-readable storage media of claim 1, wherein the predictive model is not directly associated with the process model. 13. 
The one or more non-transitory computer-readable storage media of claim 1, wherein determining a relationship between a first process component and another process component of the plurality of process components comprises determining a relationship between the first process component and each of a plurality of other process components of the plurality of process components. 14. The one or more non-transitory computer-readable storage media of claim 13, the processing further comprising: displaying a plurality of the relationships to a user. 15. The one or more non-transitory computer-readable storage media of claim 14, wherein defining a process rule for the first process component comprises receiving user input selecting one of the plurality of displayed relationships. 16. The one or more non-transitory computer-readable storage media of claim 1, wherein the determining and the defining are carried out on a component-by-component basis for the process model specification as the process model specification is executed. 17. 
A computing system that implements a process control engine, the computing system comprising: one or more memories; one or more processing units coupled to the one or more memories; and one or more non-transitory computer readable storage media storing instructions that, when loaded into the memories, cause the one or more processing units to perform operations for: implementing a computing platform comprising: a process control engine, the process control engine executing a process specification comprising a plurality of process components; and a rules framework in communication with the process control engine; implementing a database comprising: a data store; and a predictive modeling engine; generating a predictive model of at least a portion of the process components using the predictive modeling engine; determining a rule of the rules framework using the predictive model; and executing with the process control engine at least one of the plurality of process components according to the rule. 18. The computing system of claim 17, the operations further comprising: receiving user input selecting the rule for execution by the process control engine. 19. In a computing system comprising a memory and one or more processors, a method of executing a process specification according to a ruleset, the method comprising: defining a ruleset for the process using a plurality of heuristic rules, the process comprising a plurality of process components; defining a predictive model for at least a portion of the plurality of process components; executing the process according to the ruleset; training the predictive model using data obtained during the executing; determining a rule for the process specification using the predictive model; revising the ruleset to include the determined rule; and executing the process according to the revised ruleset. 20. The method of claim 19, further comprising: receiving user input revising the ruleset to include the rule.
Please help me write a proper abstract based on the patent claims.
A specification of a process model is received. The process model includes a plurality of process components. A relationship between a first process component and another process component of the plurality of process components is determined using a predictive model. A process rule for the first process component is determined. The process rule specifies a second process component to be executed. The process rule includes the relationship determined using the predictive model or a heuristic rule. The second process component is executed according to the process rule.
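The rule-definition step in claims 7-8 (switch from a heuristic rule to the mined relationship once a threshold is crossed) can be sketched in a few lines; the function name, parameters, and the data-point criterion below are illustrative assumptions rather than the patent's API:

```python
# Minimal sketch of the claimed rule definition: keep the hand-written
# heuristic ordering until the predictive model has analyzed enough data
# points (claim 8), then switch to the relationship it mined. All names
# here are assumptions for illustration.

def define_process_rule(heuristic_next, predicted_next, data_points, threshold=100):
    """Return the next process component to execute after the first one."""
    if data_points > threshold:
        # Threshold exceeded: trust the relationship from the predictive model
        return predicted_next
    # Otherwise keep the heuristic rule from the process specification
    return heuristic_next

# With few observations the heuristic rule wins; with many, the mined one does.
print(define_process_rule("review_step", "auto_approve_step", data_points=40))
print(define_process_rule("review_step", "auto_approve_step", data_points=500))
```

Claim 7's date-based criterion would slot in the same way, comparing the current date against a threshold date instead of counting data points.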
1. A method of operating a quantum computing device, comprising: causing a quantum computing device to evolve from a first state to a second state according to a schedule, the first state corresponding to a first Hamiltonian, the second state corresponding to a second Hamiltonian, wherein the schedule includes an X schedule for Hamiltonian terms in the X basis, and a Z schedule for Hamiltonian terms in the Z basis, and wherein the schedule is nonlinear or piecewise linear in the X schedule, the Z schedule, or both the X schedule and the Z schedule. 2. The method of claim 1, wherein the schedule includes one or more sequences where the X schedule and the Z schedule converge toward one another and one or more sequences where the X schedule and the Z schedule diverge from one another. 3. The method of claim 1, wherein the X schedule and the Z schedule intersect only in a latter half of the respective schedules. 4. The method of claim 1, wherein one or both of the X schedule or the Z schedule has terms that vary and wherein the variation in terms is greater in a latter half of the respective schedule than in a front half of the respective schedule. 5. 
The method of claim 1, further comprising generating the schedule by performing a schedule-training process beginning from an initial schedule, wherein the initial schedule includes one or more of: (a) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein the initial X schedule and the initial Z schedule are both constant; (b) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one of the initial X schedule or the initial Z schedule is constant, and the other one of the initial X schedule or the initial Z schedule is nonconstant; (c) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one of the initial X schedule or the initial Z schedule is linear, and the other one of the initial X schedule or the initial Z schedule is nonlinear and nonconstant; (d) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one or both of the initial X schedule or the initial Z schedule have terms that vary with greater degree in a latter half of the respective schedule; (e) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one or both of the initial X schedule or the initial Z schedule have terms that vary with greater degree in a latter half of the respective schedule; or (f) an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis, and wherein one or both of the initial X schedule or the initial Z schedule have terms that are constant in a first half of the respective schedule and that vary in a second half of the respective schedule. 6. 
The method of claim 1, wherein the second Hamiltonian is a solution to an optimization problem, and wherein the schedule-training process uses one or more training problems having a size that is smaller than a size of the optimization problem. 7. The method of claim 1, further comprising generating the schedule by: modifying an initial schedule from its initial state to create a plurality of modified schedules; testing the modified schedules relative to one or more problem instances; and selecting one of the modified schedules based on an observed improvement in solving one or more of the problem instances. 8. The method of claim 7, wherein the generating further comprises iterating the acts of modifying, testing, and selecting until no further improvement is observed in the selected modified schedule. 9. The method of claim 1, wherein, for at least one step of the Z schedule or X schedule, the sign of the Z schedule or X schedule step is opposite of the sign of the respective final step of the Z schedule or X schedule. 10. The method of claim 1, wherein, for at least one step of the Z schedule or X schedule, the sign of the Z schedule or X schedule step switches from positive to negative or vice versa. 11. The method of claim 1, wherein one or more terms of the first Hamiltonian are noncommuting with corresponding terms of the second Hamiltonian. 12. A method, comprising: generating a learned schedule for controlling a quantum computing device by performing a schedule-training process beginning from an initial schedule, wherein the initial schedule includes an initial X schedule for Hamiltonian terms in the X basis and an initial Z schedule for Hamiltonian terms in the Z basis. 13. The method of claim 12, wherein at least one of the initial X schedule or the initial Z schedule is nonlinear. 14. The method of claim 12, wherein the initial X schedule and the initial Z schedule are both constant. 15. 
The method of claim 12, wherein one of the initial X schedule or the initial Z schedule is constant, and the other one of the initial X schedule or the initial Z schedule is nonconstant. 16. The method of claim 12, wherein one of the initial X schedule or the initial Z schedule is linear, and the other one of the initial X schedule or the initial Z schedule is nonlinear and nonconstant. 17. The method of claim 12, wherein one or both of the initial X schedule or the initial Z schedule have terms that vary with greater degree in a latter half of the respective schedule. 18. The method of claim 12, wherein one or both of the initial X schedule or the initial Z schedule have terms that are constant in a first half of the respective schedule and that vary in a second half of the respective schedule. 19. The method of claim 12, wherein the learned schedule includes one of: (a) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, wherein the learned schedule includes one or more sequences where the learned X schedule and the learned Z schedule converge toward one another and one or more sequences where the learned X schedule and the learned Z schedule diverge from one another; (b) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, wherein the learned X schedule and the learned Z schedule intersect only in a latter half of the respective schedules; 
(c) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, and wherein one or both of the learned X schedule or the learned Z schedule have terms that vary and wherein the variation in terms is greater in a latter half of the respective schedule than in a front half of the respective schedule; (d) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, and wherein, for at least one step of the learned Z schedule or learned X schedule, the sign of the learned Z schedule or learned X schedule step is opposite of the sign of the respective final step of the learned Z schedule or learned X schedule; or (e) a learned X schedule for Hamiltonian terms in the X basis and a learned Z schedule for Hamiltonian terms in the Z basis, and wherein, for at least one step of the learned Z schedule or learned X schedule, the sign of the learned Z schedule or learned X schedule step switches from positive to negative or vice versa. 20. A system, comprising: a processor; and at least one memory coupled to the processor and having stored thereon processor-executable instructions for: generating a schedule for controlling a quantum computing device by performing a schedule-training process beginning from an initial schedule; and causing a quantum computing device to evolve from a first state to a second state according to the schedule, the first state corresponding to a first Hamiltonian, the second state corresponding to a second Hamiltonian, wherein the schedule includes an X schedule for Hamiltonian terms in the X basis, and a Z schedule for Hamiltonian terms in the Z basis, and wherein the schedule is nonlinear or piecewise linear in one or both of the X schedule or the Z schedule.
Please help me write a proper abstract based on the patent claims.
Among the embodiments disclosed herein are variants of the quantum approximate optimization algorithm with different parametrization. In particular embodiments, a different objective is used: rather than looking for a state which approximately solves an optimization problem, embodiments of the disclosed technology find a quantum algorithm that will produce a state with high overlap with the optimal state (given an instance, for example, of MAX-2-SAT). In certain embodiments, a machine learning approach is used in which a “training set” of problems is selected and the parameters optimized to produce large overlap for this training set. The problem was then tested on a larger problem set. When tested on the full set, the parameters that were found produced significantly larger overlap than optimized annealing times. Testing on other random instances (e.g., from 20 to 28 bits) continued to show improvement over annealing, with the improvement being most notable on the hardest problems. Embodiments of the disclosed technology can be used, for example, for near-term quantum computers with limited coherence times.
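The train-by-local-modification loop of claims 7-8 (modify a schedule, test it on training instances, keep only improvements, iterate) can be sketched with a toy objective standing in for the quantum-simulator overlap measurement. Everything below — the function names, the hill-climbing details, and especially the stand-in scoring function — is an illustrative assumption, not the disclosed algorithm:

```python
import random

# Toy sketch of the schedule-training loop: start from constant X and Z
# schedules (claim 14), perturb one step at a time, and keep a candidate
# only when it scores better. A real trainer would score a schedule by the
# overlap with the optimal state on small training instances (e.g. MAX-2-SAT).

def train_schedule(score, steps=8, iterations=200, seed=0):
    rng = random.Random(seed)
    x = [1.0] * steps            # initial X schedule: constant
    z = [0.5] * steps            # initial Z schedule: constant
    best = score(x, z)
    for _ in range(iterations):
        i = rng.randrange(steps)
        cand_x, cand_z = x[:], z[:]
        target = cand_x if rng.random() < 0.5 else cand_z
        target[i] += rng.uniform(-0.1, 0.1)   # local modification of one step
        s = score(cand_x, cand_z)
        if s > best:                          # keep only observed improvements
            x, z, best = cand_x, cand_z, s
    return x, z, best

# Stand-in objective rewarding an X schedule that anneals down as Z ramps up.
def toy(xs, zs):
    n = len(xs) - 1
    return -sum((x - (1 - t / n)) ** 2 + (z - t / n) ** 2
                for t, (x, z) in enumerate(zip(xs, zs)))

x, z, best = train_schedule(toy)
```

The result is a piecewise schedule of the kind claim 1 covers; iterating until no candidate improves matches claim 8's stopping criterion.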
1. A computer implemented method for developing a system to fabricate test data into a database, the method comprising: receiving, using a processor system, a file format layout of the database, wherein the database includes variables; defining rules independently of the file format layout of the database; receiving, using the processor system, the rules that are defined independently of the file format layout of the database; wherein the rules impose constraints on the variables; wherein the rules being defined independently of the file format layout prevents the rules from imposing any limit on a first manner in which the rules are defined; wherein the rules being defined independently of the file format layout prevents the rules from imposing any limit on a second manner in which relationships between and among the variables are defined; defining a constraint problem based on the variables and the constraints; and solving the constraint problem. 2. The computer implemented method of claim 1, wherein solving the constraint problem generates an assignment of fabricated test data to each one of the variables. 3. The computer implemented method of claim 2 further comprising generating an output comprising the file format layout having the fabricated test data, wherein the fabricated test data conforms to the rules. 4. The computer implemented method of claim 3, wherein the file format layout comprises a template. 5. The computer implemented method of claim 1, wherein the constraint problem is solved using a constraint satisfaction problem (CSP) solver. 6. The computer implemented method of claim 1, wherein the rules include an individual rule that imposes a constraint on more than one of the variables. 7. The computer implemented method of claim 1, wherein the file format layout is selected from the group consisting of: a database; a flat file; a message; a data stream; and a web service call. 8-20. (canceled)
Please help me write a proper abstract based on the patent claims.
Embodiments are directed to a computer implemented method for fabricating test data. The method includes receiving, using a processor system, a file format layout having variables. The method further includes receiving, using the processor system, rules that are defined independently of the file format layout, wherein the rules impose constraints on the variables. The method further includes defining a constraint problem based on the variables and the constraints, and solving the constraint problem.
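The claimed flow — variables from a layout, rules defined independently as constraints, a CSP solve producing fabricated data — can be sketched with a brute-force search standing in for the CSP solver of claim 5. The field names, rule bodies, and exhaustive search are illustrative assumptions:

```python
from itertools import product

# Minimal sketch: the layout supplies variables and their domains, the rules
# are plain predicates defined independently of the layout (one of them spans
# two variables, as in claim 6), and a brute-force CSP search finds an
# assignment of fabricated test data satisfying every rule.

layout = {"age": range(18, 22), "status": ["single", "married"]}
rules = [
    lambda r: r["age"] >= 19,                                    # single-variable rule
    lambda r: not (r["status"] == "married" and r["age"] < 20),  # cross-variable rule
]

def fabricate(layout, rules):
    names = list(layout)
    for values in product(*layout.values()):
        record = dict(zip(names, values))
        if all(rule(record) for rule in rules):
            return record            # first assignment satisfying all constraints
    return None                      # the constraint problem is unsatisfiable

print(fabricate(layout, rules))
```

A production system would hand the same variables and constraints to a real CSP solver rather than enumerate the cross product.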
1. A non-transitory machine-readable medium storing instructions for relationship extraction executable by a machine to cause the machine to: apply unsupervised relationship learning to a logic knowledge base and a plurality of entity groups recognized from a document to provide a probabilistic model; perform joint inference on the probabilistic model to make simultaneous statistical judgments about a respective relationship between at least two entities in one of the plurality of entity groups; and extract a relationship between at least two entities in one of the plurality of entity groups based on the joint inference. 2. The medium of claim 1, wherein the respective relationship is an unidentified relationship between at least two entities in one of the plurality of entity groups and wherein the relationship between at least two entities is a most likely relationship between at least two entities in one of the plurality of entity groups. 3. The medium of claim 1, wherein the logic knowledge base includes a plurality of first-order logic formulas. 4. The medium of claim 3, wherein the probabilistic model includes the plurality of first-order logic formulas and a plurality of weights. 5. The medium of claim 4, wherein the instructions to apply unsupervised relationship learning includes instructions to associate the plurality of weights with the plurality of first-order logic formulas and wherein each of the plurality of weights is associated with one of the plurality of first-order logic formulas. 6. The medium of claim 5, wherein the plurality of associated weights collectively provide a plurality of probabilities that are associated with the plurality of first-order logic formulas. 7. The medium of claim 6, wherein the probabilities of the plurality of first-order logic formulas are provided via a log-linear model. 8. 
The medium of claim 3, including instructions to extract the relationship between at least two entities in one of the plurality of entity groups using the plurality of first-order logic formulas. 9. A system for relationship extraction comprising a processing resource in communication with a non-transitory machine readable medium having instructions executed by the processing resource to implement: an unsupervised relationship learning engine to apply unsupervised relationship learning to a first-order logic knowledge base and a plurality of entity pairs recognized from a textual document to provide a probabilistic graphical model; a joint inference engine to perform joint inference on the probabilistic graphical model to make simultaneous statistical judgments about a respective relationship between each of the plurality of recognized entity pairs; and an extraction engine to extract a relationship between an entity pair based on the joint inference. 10. The system of claim 9, including instructions to extract a relationship between a recognized entity pair. 11. The system of claim 10, including instructions to make a plurality of probabilistic determinations in parallel for a plurality of recognized entity pairs to make a statistical judgment about the respective relationships between the plurality of recognized entity pairs. 12. The system of claim 9, wherein the instructions executable to extract the implicit relationship between the entity pair includes instructions to relationally auto-correlate a variable pertaining to a first recognized entity pair with a variable pertaining to a second recognized entity pair to extract an implicit relationship between the entity pair based on the joint inference. 13. 
A method for relationship extraction comprising: applying unsupervised relationship learning to a first-order logic knowledge base and a plurality of entity pairs recognized from a textual document to provide a probabilistic graphical model, wherein a plurality of relationships between the plurality of recognized entity pairs are not labeled; performing joint inference on the probabilistic graphical model to make simultaneous statistical judgments about a respective relationship between each of the plurality of recognized entity pairs; and extracting an implicit relationship between an entity pair based on the joint inference. 14. The method of claim 13, wherein the textual document does not provide explicit support for the implicit relationship. 15. The method of claim 13, wherein a portion of the plurality of first-order logic formulas represent implicit relationships.
Please help me write a proper abstract based on the patent claims.
Relationship extraction can include applying unsupervised relationship learning to a logic knowledge base and a plurality of entity groups recognized from a document to provide a probabilistic model. Relationship extraction can include performing joint inference on the probabilistic model to make simultaneous statistical judgments about a respective relationship between at least two entities in one of the plurality of entity groups. Relationship extraction can include extracting a relationship between at least two entities in one of the plurality of entity groups based on the joint inference.
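The log-linear model of claims 4-7 (weights attached to first-order logic formulas, probabilities proportional to the exponentiated sum of satisfied-formula weights) and the joint inference of claim 1 can be sketched with exhaustive MAP inference over a toy problem. The entities, formulas, and weights are invented for illustration; a real system would use a Markov-logic-style engine rather than enumeration:

```python
import math
from itertools import product

# Toy log-linear model: each formula carries a weight, and a joint assignment
# of relations to entity pairs is scored by the sum of satisfied-formula
# weights (probability proportional to exp(score)). The third formula couples
# the two pairs, which is what makes the inference joint.

pairs = [("Alice", "Acme"), ("Bob", "Acme")]
relations = ["works_for", "founded"]

formulas = [
    (1.5, lambda a: a[("Alice", "Acme")] == "founded"),
    (0.5, lambda a: a[("Bob", "Acme")] == "works_for"),
    (2.0, lambda a: not (a[("Alice", "Acme")] == "founded"
                         and a[("Bob", "Acme")] == "founded")),  # at most one founder
]

def joint_map(pairs, relations, formulas):
    """Exhaustive MAP inference: the most probable joint assignment."""
    best, best_score = None, -math.inf
    for combo in product(relations, repeat=len(pairs)):
        assignment = dict(zip(pairs, combo))
        score = sum(w for w, f in formulas if f(assignment))
        if score > best_score:
            best, best_score = assignment, score
    return best

print(joint_map(pairs, relations, formulas))
```

Because the coupling formula penalizes two founders, judging Bob's relation depends on the judgment for Alice — the simultaneous statistical judgment the claims describe.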
1. A semantic entity relation detection classifier training system, comprising: one or more computing devices, said computing devices being in communication with each other via a computer network whenever there is a plurality of computing devices; and a computer program having program modules executable by the one or more computing devices, the one or more computing devices being directed by the program modules of the computer program to, receive a query click log and a knowledge graph, find queries included in the query click log that are associated with entities found in the knowledge graph, said entities being associated with a knowledge graph domain of interest, infer explicit relations from the found queries and generate an explicit relations data set comprising queries associated with the inferred explicit relations, infer implicit relations from the found queries and generate an implicit relations data set comprising queries associated with the inferred implicit relations, and train a semantic entity relation detection classifier using the explicit and implicit data sets to find an explicit or implicit relation, or both, in a query. 2. 
The system of claim 1, wherein the program module for finding queries included in the query click log that are associated with entities found in the knowledge graph, comprises sub-modules for: identifying one or more central entity types in the knowledge graph which correspond to a domain of interest; for each identified central entity type, finding central type entities in the knowledge graph that correspond to the central entity type under consideration, establishing a central entity type property list for each of the found central type entities that comprises the found central type entity and other entities in the knowledge graph having a prescribed degree of relation to the central type entity under consideration, as well as the type of relation existing between the found central type entity and each of the other entities in the knowledge graph having a prescribed degree of relation to the central type entity under consideration, combining the central entity type property lists established for the identified central entity types to produce a combined entity property list, and finding queries associated with entities listed in the combined entity property list in the query click log. 3. The system of claim 2, wherein the sub-module for finding queries associated with entities listed in the combined entity property list in the query click log, comprises sub-modules for: creating a seed query from an entity in the combined entity property list; finding query click log queries that include the seed query; identifying uniform resource locators (URLs) from the query click log that are associated with at least one of the found queries; and finding other queries in the query click log that are associated with at least one of the identified URLs. 4. 
The system of claim 3, wherein the sub-module for finding queries associated with entities listed in the combined entity property list in the query click log, further comprises a sub-module for eliminating from consideration, prior to identifying URLs, those query click log queries found to include the seed query that do not meet a prescribed length criteria, or quantity criteria, or both. 5. The system of claim 2, wherein the sub-module for finding queries associated with entities listed in the combined entity property list in the query click log, comprises sub-modules for: identifying one or more relations of an entity in the combined entity property list each of which points to at least one URL in the knowledge graph; generating a list of the pointed to URLs; and finding queries in the query click log that are associated with at least one of the listed URLs. 6. The system of claim 2, further comprising a sub-module for, after queries associated with entities listed in the combined entity property list in the query click log are found, eliminating from consideration those found queries that are non-natural spoken language queries. 7. The system of claim 1, wherein an entity in the knowledge graph has said prescribed degree of relation to a central type entity whenever the entity is associated with an incoming relation from the central type entity, or is reachable in the knowledge graph from the central type entity within a prescribed number of relations. 8. 
The system of claim 1, wherein the program module for inferring explicit relations from the found queries and generating an explicit relations data set comprising queries associated with the inferred explicit relations, comprises sub-modules for: scanning the found queries to find those queries exhibiting an inferred explicit relation between entities wherein an inferred explicit relation between entities is defined as the presence of an entity and a closely related entity in the same query, and wherein an entity is closely related to another entity whenever the entity is connected to the another entity in the knowledge graph by no more than a prescribed number of intermediate entities; determining the types of relation exhibited by a pair of entities in each query exhibiting an inferred explicit relation; and generating an explicit relations data set comprising the text of queries associated with the inferred explicit relations as well as the type of relation assigned to each of the entities in the pair. 9. The system of claim 8, wherein said prescribed number of intermediate entities is one, such that entities that were directly connected to each other are considered closely related, as well as entities that are connected to another entity by no more than one intermediate entity. 10. 
The system of claim 8, wherein the sub-module for scanning the found queries to find those queries exhibiting an inferred explicit relation between entities, comprises sub-modules for: determining if an entity associated with a found query is connected in the knowledge graph to another entity by a directed connector or path of connectors originating at the entity associated with a found query by no more than the prescribed number of intermediate entities; whenever the entity associated with the found query is connected in the knowledge graph to another entity by a directed connector or path of connectors originating at the entity associated with a found query by no more than the prescribed number of intermediate entities, determining if said other entity is also contained in the found query; and whenever said other entity is also contained in the found query, designating the found query as exhibiting an inferred explicit relation between the entities. 11. The system of claim 10, wherein the sub-module for scanning the found queries to find those queries exhibiting an inferred explicit relation between entities, further comprises sub-modules for: for a query designated as exhibiting an inferred explicit relation between a pair of entities contained therein, identifying the relation label assigned to each connector connecting the pair of entities in the knowledge graph; determining the relation of said other entity of the entity pair based on the identified relation label or labels and assigning the determined relation to said other entity of the entity pair; and assigning the relation of the entity associated with a found query, if known, to that entity of the entity pair. 12. 
The system of claim 8, wherein the sub-module for scanning the found queries to find those queries exhibiting an inferred explicit relation between entities, comprises sub-modules for: identifying an entity pair in the knowledge graph having a first entity of the pair that is connected to another entity of the pair by a directed connector or path of connectors originating at the first entity by no more than the prescribed number of intermediate entities, and whose connector or connectors connecting the pair of entities have relation label or labels that correspond to a semantic entity relation type associated with a domain of interest; determining if a found query contains the identified entity pair; and whenever the found query contains the identified entity pair, designating the found query as exhibiting an inferred explicit relation between the entities, assigning the semantic entity relation type associated with the domain of interest to said other entity of the entity pair, and assigning the relation of the first entity of the pair, if known, to that entity. 13. 
A semantic entity relation detection classifier training system, comprising: one or more computing devices, said computing devices being in communication with each other via a computer network whenever there is a plurality of computing devices; and a computer program having program modules executable by the one or more computing devices, the one or more computing devices being directed by the program modules of the computer program to, receive a query click log and a knowledge graph, find queries included in the query click log that are associated with entities found in the knowledge graph, said entities being associated with a knowledge graph domain of interest, infer implicit relations from the found queries and generate an implicit relations data set comprising queries associated with the inferred implicit relations, and train a semantic entity relation detection classifier using at least the implicit data set to find a relation in a query. 14. The system of claim 13, wherein the program module for finding queries included in the query click log that are associated with entities found in the knowledge graph, comprises sub-modules for: identifying one or more central entity types in the knowledge graph which correspond to a domain of interest; for each identified central entity type, finding central type entities in the knowledge graph that correspond to the central entity type under consideration, establishing a central entity type property list for each of the found central type entities that comprises the found central type entity and other entities in the knowledge graph having a prescribed degree of relation to the central type entity under consideration, as well as the type of relation existing between the found central type entity and each of the other entities in the knowledge graph having a prescribed degree of relation to the central type entity under consideration, combining the central entity type property list established for the identified central 
entity types to produce a combined entity property list, and finding queries associated with entities listed in the combined entity property list in the query click log. 15. The system of claim 14, wherein the program module for inferring implicit relations from the found queries and generating an implicit relations data set comprising queries associated with the inferred implicit relations, comprises sub-modules for: for each of one or more of the found queries, using the query click log to identify from a found query the URL associated with a result presented from a search of the query that was selected by a user, determining if an entity associated with the identified URL is found in the query, wherein an entity is associated with a URL if the entity points to that URL in the knowledge graph, whenever the entity associated with the identified URL is not found in the query, using said combined entity property list to identify a central entity type related to the entity associated with the identified URL and what type of relation exists between that central entity type and the entity associated with the identified URL, and inferring the existence of an implicit relation from the found query and assigning the identified relation type to the entity associated with the identified URL; and generating an implicit relations data set having entries each of which comprises the text of a query associated with an inferred implicit relation as well as the type of relation assigned to the entity associated with the URL identified from that query. 16. 
The system of claim 14, wherein the program module for inferring implicit relations from the found queries and generating an implicit relations data set comprising queries associated with the inferred implicit relations, comprises sub-modules for: for each of one or more of the found queries, using the query click log to identify from a found query the URL associated with a result presented from a search of the query that was selected by a user, determining if an entity associated with the identified URL is found in the query, wherein an entity is associated with a URL if the entity points to that URL in the knowledge graph, whenever the entity associated with the identified URL is not found in the query, using said combined entity property list to identify a central entity type related to the entity associated with the identified URL and determining if the identified central entity type is found in the query, whenever the identified central entity type is found in the query, identifying what type of relation exists between that central entity type and the entity associated with the identified URL, and inferring the existence of an implicit relation from the found query and assigning the identified relation type to the entity associated with the identified URL; and generating an implicit relations data set having entries each of which comprises the text of a query associated with an inferred implicit relation as well as the type of relation assigned to the entity associated with the URL identified from that query. 17. 
The system of claim 13, wherein the program module for inferring implicit relations from the found queries and generating an implicit relations data set comprising queries associated with the inferred implicit relations, comprises sub-modules for: identifying, for one or more semantic entity relation types associated with a domain of interest, at least one entity pair in the knowledge graph having a first entity of the pair that is connected to another entity of the pair by a directed connector or path of connectors originating at the first entity by no more than the prescribed number of intermediate entities, and whose connector or connectors connecting the pair of entities have a relation label or labels that correspond to the semantic entity relation type associated with a domain of interest; determining, for each entity pair identified, if a found query contains the first entity of the pair, but not the other entity of the pair, whenever the found query contains the first entity of the pair, but not the other entity of the pair, using the query click log to identify from the found query the URL associated with a result presented from a search based on the query that was selected by a user, and determining if the other entity of the pair is associated with the identified URL, wherein an entity is associated with a URL if the entity points to that URL in the knowledge graph, whenever the other entity of the pair is associated with the identified URL, designating the found query as inferring an implicit relation, and assigning the semantic entity relation type associated with the domain of interest to said other entity of the entity pair, and assigning the relation of the first entity of the pair, if known, to that entity; and generating an implicit relations data set having entries each of which comprises the text of a query associated with an inferred implicit relation as well as the type of relation assigned to the first entity of an entity pair associated with the 
query and the type of relation assigned to said other entity of the entity pair. 18. The system of claim 13, wherein the program module for inferring implicit relations from the found queries and generating an implicit relations data set comprising queries associated with the inferred implicit relations, comprises sub-modules for: identifying, for one or more semantic entity relation types associated with a domain of interest, those found queries having the name of the relation type or a variation thereof contained therein, and at least one entity pair in the knowledge graph having a first entity of the pair that is connected to another entity of the pair by a directed connector or path of connectors originating at the first entity by no more than the prescribed number of intermediate entities, and whose connector or connectors connecting the pair of entities have a relation label or labels that correspond to the semantic entity relation type; determining, for each entity pair identified and each found query identified, if the query contains the first entity of the pair, but not the other entity of the pair, whenever the found query contains the first entity of the pair, but not the other entity of the pair, designating the found query as inferring an implicit relation, and assigning the semantic entity relation type associated with the domain of interest to said other entity of the entity pair, and assigning the relation of the first entity of the pair, if known, to that entity; and generating an implicit relations data set having entries each of which comprises the text of a query associated with an inferred implicit relation as well as the type of relation assigned to the first entity of an entity pair associated with the query and the type of relation assigned to said other entity of the entity pair. 19. 
The system of claim 13, further comprising a program module for inferring explicit relations from the found queries and generating an explicit relations data set comprising queries associated with the inferred explicit relations, and wherein the program module for training the semantic entity relation detection classifier comprises training the semantic entity relation detection classifier using the explicit and implicit data sets to find an explicit or implicit relation, or both, in a query. 20. A semantic entity relation detection classifier training system, comprising: one or more computing devices, said computing devices being in communication with each other via a computer network whenever there is a plurality of computing devices; and a computer program having program modules executable by the one or more computing devices, the one or more computing devices being directed by the program modules of the computer program to, receive a query click log and a knowledge graph, find queries included in the query click log that are associated with entities found in the knowledge graph, said entities being associated with a knowledge graph domain of interest, infer explicit relations from the found queries and generate an explicit relations data set comprising queries associated with the inferred explicit relations, infer implicit relations from the found queries and generate an implicit relations data set comprising queries associated with the inferred implicit relations, train a first classifier using the implicit relations data set to produce an implicit relations classifier that can find implicit relations in a query, apply the implicit relations classifier to each of the queries in the explicit relations data set to find queries predicted to have an implicit relation or implicit relations, augment the explicit relations data set, said augmenting comprising, for each query in the explicit relations data set predicted to have an implicit relation or implicit 
relations, adding the implicit relation or implicit relations predicted for that query to the explicit relations data set entry associated with the query to produce an augmented explicit relations data set, and train a second classifier using the augmented explicit relations data set to produce a combined relations classifier that can find explicit, or implicit relations, or both, in a query.
Please help me write a proper abstract based on the patent claims.
Semantic entity relation detection classifier training implementations are presented that are generally used to train a semantic entity relation detection classifier to identify relations expressed in a natural language query. In one general implementation, queries are found in a search query click log that exhibit relations and entity types found in a semantic knowledge graph. Explicit relations are inferred from the found queries and an explicit relations data set is generated that includes queries associated with the inferred explicit relations. In addition, implicit relations are inferred from the found queries and an implicit relations data set is generated that includes queries associated with the inferred implicit relations. A semantic entity relation detection classifier is then trained using the explicit and implicit data sets.
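The explicit-relation inference step described in the claims above can be sketched in a few lines: a query is designated as exhibiting an inferred explicit relation when it contains both an entity and a directly connected entity from the knowledge graph. This is a minimal illustrative sketch, assuming a toy graph representation (a dict mapping each entity to its outgoing `(relation, entity)` edges) and simple substring matching; the names and matching logic are assumptions, not the patented implementation.

```python
# Illustrative sketch: infer explicit relations from queries using a toy
# knowledge graph. graph maps an entity to a list of (relation, entity)
# edges; a query exhibits an explicit relation when it contains both
# entities of a connected pair.

def infer_explicit_relations(queries, graph):
    data_set = []
    for query in queries:
        text = query.lower()
        for entity, edges in graph.items():
            if entity.lower() not in text:
                continue
            for relation, other in edges:
                if other.lower() in text:
                    # Record query text plus the relation type of the pair,
                    # mirroring the explicit relations data set entries.
                    data_set.append((query, entity, relation, other))
    return data_set

graph = {"Avatar": [("directed_by", "James Cameron")]}
queries = ["who directed avatar", "james cameron movies avatar"]
explicit = infer_explicit_relations(queries, graph)
print(explicit)
```

Only the second query contains both "avatar" and "james cameron", so only it is added to the data set; the first query mentions a single entity and would instead be a candidate for implicit-relation inference via the click log.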
1. A computerized system for evaluating the likelihood of technology change incidents, comprising: a computer apparatus including a processor, a memory, and a network communication device; and a technology change evaluation module stored in the memory, executable by the processor, and configured for: retrieving a plurality of encoded records regarding a plurality of historic information technology operational activities from an activity record database; decoding each of the plurality of encoded records into a plurality of decoded records, each of the decoded records comprising a binary value in each of a plurality of data fields, the plurality of data fields including a first data field defining whether one of the historic information technology operational activities is associated with a prior technology incident; processing the decoded records using a technology incident predictive model to produce an incident predictive algorithm for predicting whether a technology change event will cause a technology incident, the incident predictive algorithm defining a subset of the data fields and a weight factor for each data field in the subset of the data fields; retrieving a change record related to a future technology change event, the change record comprising change information related to one or more of the plurality of data fields; and evaluating the change information in the change record using the incident predictive algorithm to determine a likelihood that the future technology change event will cause a future technology incident. 2. 
The computerized system according to claim 1, wherein: the plurality of decoded records are associated with a first time period; the technology change evaluation module is configured for incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm. 3. The computerized system according to claim 1, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field. 4. The computerized system according to claim 1, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field. 5. The computerized system according to claim 1, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 6. 
The computerized system according to claim 1, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; the plurality of decoded records are associated with a first time period; the technology change evaluation module is configured for incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises: processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm; performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field; performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field; and performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 7. The computerized system according to claim 1, wherein the technology change evaluation module is configured for periodically updating the incident predictive algorithm. 8. 
A computer program product for evaluating the likelihood of technology change incidents, comprising a non-transitory computer-readable storage medium having computer-executable instructions for: retrieving a plurality of encoded records regarding a plurality of historic information technology operational activities from an activity record database; decoding each of the plurality of encoded records into a plurality of decoded records, each of the decoded records comprising a binary value in each of a plurality of data fields, the plurality of data fields including a first data field defining whether one of the historic information technology operational activities is associated with a prior technology incident; processing the decoded records using a technology incident predictive model to produce an incident predictive algorithm for predicting whether a technology change event will cause a technology incident, the incident predictive algorithm defining a subset of the data fields and a weight factor for each data field in the subset of the data fields; retrieving a change record related to a future technology change event, the change record comprising change information related to one or more of the plurality of data fields; and evaluating the change information in the change record using the incident predictive algorithm to determine a likelihood that the future technology change event will cause a future technology incident. 9. 
The computer program product according to claim 8, wherein: the plurality of decoded records are associated with a first time period; the non-transitory computer-readable storage medium has computer-executable instructions for incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm. 10. The computer program product according to claim 8, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field. 11. The computer program product according to claim 8, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field. 12. The computer program product according to claim 8, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 13. 
The computer program product according to claim 8, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; the plurality of decoded records are associated with a first time period; the non-transitory computer-readable storage medium has computer-executable instructions for incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises: processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm; performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field; performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field; and performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 14. The computer program product according to claim 8, wherein the non-transitory computer-readable storage medium has computer-executable instructions for periodically updating the incident predictive algorithm. 15. 
A computerized method for evaluating the likelihood of technology change incidents, comprising: retrieving, via a computer processor, a plurality of encoded records regarding a plurality of historic information technology operational activities from an activity record database; decoding, via a computer processor, each of the plurality of encoded records into a plurality of decoded records, each of the decoded records comprising a binary value in each of a plurality of data fields, the plurality of data fields including a first data field defining whether one of the historic information technology operational activities is associated with a prior technology incident; processing, via a computer processor, the decoded records using a technology incident predictive model to produce an incident predictive algorithm for predicting whether a technology change event will cause a technology incident, the incident predictive algorithm defining a subset of the data fields and a weight factor for each data field in the subset of the data fields; retrieving, via a computer processor, a change record related to a future technology change event, the change record comprising change information related to one or more of the plurality of data fields; and evaluating, via a computer processor, the change information in the change record using the incident predictive algorithm to determine a likelihood that the future technology change event will cause a future technology incident. 16. 
The computerized method according to claim 15, wherein: the plurality of decoded records are associated with a first time period; the method comprises incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm. 17. The computerized method according to claim 15, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field. 18. The computerized method according to claim 15, wherein processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field. 19. The computerized method according to claim 15, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field. 20. 
The computerized method according to claim 15, wherein: the incident predictive algorithm defines an aggregate weight factor for the subset of the data fields; the plurality of decoded records are associated with a first time period; the computerized method comprises incrementally altering the first time period to identify a second time period that correlates with the first data field, the second time period being associated with a subset of the plurality of decoded records; processing the decoded records using the technology incident predictive model to produce the incident predictive algorithm comprises: processing the subset of the plurality of decoded records using the technology incident predictive model to produce the incident predictive algorithm; performing a field selection test to identify the subset of the data fields, the subset of the data fields correlating with the first data field; performing a field weight test to identify the weight factor for each data field in the subset of data fields, each weight factor correlating with the first data field; and performing an aggregate weight test to identify the aggregate weight factor for the subset of the data fields, the aggregate weight factor correlating with the first data field.
Please help me write a proper abstract based on the patent claims.
Embodiments of the present invention relate to apparatuses, systems, methods and computer program products for a technology configuration system. Specifically, the system typically provides operational data processing of a plurality of records associated with information technology operational activities, for dynamic transformation of data and evaluation of interdependencies of technology resources. In other aspects, the system typically provides technical language processing of the plurality of records for transforming technical and descriptive data, and constructing categorical activity records. The system may be configured to achieve significant reduction in memory storage and processing requirements by performing categorical data encoding of the plurality of records. The system may employ a dynamic categorical data decoding process, which delivers a reduction in processing time when the encoded records are decoded for evaluating the exposure of technology change events to technology incidents and modifying such technology change events.
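The field selection and field weight tests recited in the claims can be illustrated with a minimal sketch. The field names, weights, and scoring rule below are hypothetical stand-ins, not taken from the patent:

```python
# Illustrative sketch (hypothetical names and numbers): scoring a decoded
# record by combining a selected subset of data fields with per-field
# weights, in the spirit of the field selection and field weight tests.

def incident_score(record, selected_fields, weights):
    """Aggregate weighted field values into a single incident score."""
    return sum(weights[f] * record.get(f, 0.0) for f in selected_fields)

record = {"change_size": 0.8, "resource_overlap": 0.5, "off_hours": 1.0}
selected = ["change_size", "off_hours"]           # from a field selection test
weights = {"change_size": 0.6, "off_hours": 0.4}  # from a field weight test

score = incident_score(record, selected, weights)
print(round(score, 2))  # 0.88
```

An aggregate weight test, as in claim 19, could then compare `score` against a threshold fitted to historical incident data.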
1. A non-transitory recording medium having recorded thereon a machine learning result editing program that is a processing program configured to generate a group of relevant words on a basis of expressions of words learned by a machine learning processing program that learns the expressions of the words on a basis of input data, the machine learning result editing program that causes a computer to execute a process comprising: causing a display unit to display the generated group of relevant words; and exercising control so that, after a designation of a word to be eliminated from the displayed group of relevant words is received, when a process is performed by using the group of relevant words generated on the basis of the expressions of the words learned by the machine learning processing program, the process is performed by using the group from which the designated word has been eliminated. 2. The non-transitory recording medium according to claim 1 having recorded thereon the machine learning result editing program, wherein the machine learning result editing program causes the computer to execute the process further comprising: when learning a new piece of input data in a machine learning process, learning the new piece of input data in the machine learning process while using, as an initial value, a parameter used for expressions of words included in the group other than the word for which the elimination designation has been received. 3. The non-transitory recording medium according to claim 1 having recorded thereon the machine learning result editing program, wherein the group of relevant words is a group containing a relatively large number of words that are, as individual words, used in predetermined expressions close to each other in a result of learning the expressions of the words. 4. 
A method for editing a machine learning result that is a processing method by which a group of relevant words is generated on a basis of expressions of words learned by a machine learning processing program that learns the expressions of the words on a basis of input data, wherein a computer is caused to execute a process comprising: causing a display unit to display the generated group of relevant words, using a processor; and exercising control so that, after a designation of a word to be eliminated from the displayed group of relevant words is received, when a process is performed by using the group of relevant words generated on the basis of the expressions of the words learned by the machine learning processing program, the process is performed by using the group from which the designated word has been eliminated, using the processor. 5. The method for editing the machine learning result according to claim 4, wherein the computer is caused to execute the process further comprising: when learning a new piece of input data in a machine learning process, learning the new piece of input data in the machine learning process while using, as an initial value, a parameter used for expressions of words included in the group other than the word for which the elimination designation has been received, using the processor. 6. The method for editing the machine learning result according to claim 4, wherein the group of relevant words is a group containing a relatively large number of words that are, as individual words, used in predetermined expressions close to each other in a result of learning the expressions of the words. 7. 
An information processing apparatus that generates a group of relevant words on a basis of expressions of words learned by a machine learning processing program that learns the expressions of the words on a basis of input data, the information processing apparatus comprising: a memory; and a processor coupled to the memory, wherein the processor executes a process comprising: causing a display unit to display the generated group of relevant words; and exercising control so that, after a designation of a word to be eliminated from the displayed group of relevant words is received, when a process is performed by using the group of relevant words generated on the basis of the expressions of the words learned by the machine learning processing program, the process is performed by using the group from which the designated word has been eliminated. 8. The information processing apparatus according to claim 7, wherein, when learning a new piece of input data in a machine learning process, the exercising includes learning the new piece of input data in the machine learning process while using, as an initial value, a parameter used for expressions of words included in the group other than the word for which the elimination designation has been received. 9. The information processing apparatus according to claim 7, wherein the group of relevant words is a group containing a relatively large number of words that are, as individual words, used in predetermined expressions close to each other in a result of learning the expressions of the words.
Please help me write a proper abstract based on the patent claims.
A machine learning result editing program recorded on a recording medium causes a computer to execute a process of generating a group of relevant words on the basis of expressions of words learned by a machine learning processing program that learns the expressions of the words on the basis of input data. The machine learning result editing program causes the computer to execute: a process of causing a display unit to display the generated group of relevant words; and a process of exercising control so that, after a designation of a word to be eliminated from the displayed group of relevant words is received, when a process is performed by using the group of relevant words generated on the basis of the expressions of the words learned by the machine learning processing program, the process is performed by using the group from which the designated word has been eliminated.
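The editing flow above — generate a group of relevant words from learned word expressions, then drop a user-designated word before further use — can be sketched as follows. The cosine-similarity grouping, the threshold, and all vectors are illustrative assumptions, not the patent's method:

```python
import math

# Hedged sketch (hypothetical data): form a group of relevant words from
# learned word vectors, then eliminate a user-designated word so that later
# processing uses the edited group, mirroring the claimed editing flow.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def relevant_group(target, vectors, threshold=0.8):
    """Words whose learned vectors lie close to the target word's vector."""
    tv = vectors[target]
    return {w for w, v in vectors.items()
            if w != target and cosine(tv, v) >= threshold}

vectors = {
    "mouse":    [0.90, 0.10],
    "keyboard": [0.85, 0.20],
    "rodent":   [0.88, 0.15],
    "cloud":    [0.10, 0.90],
}
group = relevant_group("mouse", vectors)
# The user designates "rodent" for elimination; subsequent processing
# uses the group from which the designated word has been eliminated.
edited = group - {"rodent"}
print(sorted(edited))  # ['keyboard']
```

Claim 2's refinement would then reuse the vectors of the surviving group members as initial values when learning new input data.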
1. A method comprising: receiving or accessing data encapsulating a sample of at least a portion of one or more files; feeding at least a portion of the received or accessed data as a time-based sequence into a recurrent neural network (RNN) trained using historical data; extracting, by the RNN, a final hidden state hi in a hidden layer of the RNN in which i is a number of elements of the sample; and determining, using the RNN and the final hidden state, whether at least a portion of the sample is likely to comprise malicious code. 2. The method of claim 1, wherein the received or accessed data forms at least part of a data stream. 3. The method of claim 1, wherein the at least a portion of the received or accessed data comprises a series of fixed-length encoded words. 4. The method of claim 1, wherein the elements comprise a series of instructions. 5. The method of claim 1, wherein the hidden state is defined by: ht=f(x, ht-1), wherein hidden state ht is a time-dependent function of input x as well as a previous hidden state ht-1. 6. The method of claim 1, wherein the RNN is an Elman network. 7. The method of claim 6, wherein the Elman network has deep transition or decoding functions. 8. The method of claim 6, wherein the Elman network parameterizes f(x, ht-1) as ht=g(W1x+Rht-1); where hidden state ht is a time-dependent function of input x as well as previous hidden state ht-1, W1 is a matrix defining input-to-hidden connections, R is a matrix defining the recurrent connections, and g(·) is a differentiable nonlinearity. 9. The method of claim 8 further comprising: adding an output layer on top of the hidden layer, such that ot=σ(W2ht) where ot is output, W2 defines a linear transformation of hidden activations, and σ(·) is a logistic function. 10. 
The method of claim 9 further comprising: applying backpropagation through time by which parameters of network W2, W1, and R are iteratively refined to drive the output ot to a desired value as portions of the received or accessed data are passed through the RNN. 11. The method of claim 1, wherein the RNN is a long short term memory network. 12. The method of claim 1, wherein the RNN is a clockwork RNN. 13. The method of claim 1, wherein the RNN has a deep transition function. 14. The method of claim 1, wherein the RNN is an echo-state network. 15. The method of claim 1 further comprising: providing data characterizing the determination. 16. The method of claim 15, wherein providing data comprises at least one of: transmitting the data to a remote computing system, loading the data into memory, or storing the data. 17. The method of claim 1, wherein the files are binary files. 18. The method of claim 1, wherein the files are executable files. 19. A system comprising: at least one programmable data processor; and memory storing instructions which, when executed by the at least one programmable data processor, result in operations comprising: receiving or accessing data encapsulating a sample of at least a portion of one or more files; feeding at least a portion of the received or accessed data as a time-based sequence into a recurrent neural network (RNN) trained using historical data; extracting, by the RNN, a final hidden state hi in a hidden layer of the RNN in which i is a number of elements of the sample; and determining, using the RNN and the final hidden state, whether at least a portion of the sample is likely to comprise malicious code. 20. 
A non-transitory computer program product storing instructions which, when executed by at least one programmable data processor forming part of at least one computing device, result in operations comprising: receiving or accessing data encapsulating a sample of at least a portion of one or more files; feeding at least a portion of the received or accessed data as a time-based sequence into a recurrent neural network (RNN) trained using historical data; extracting, by the RNN, a final hidden state hi in a hidden layer of the RNN in which i is a number of elements of the sample; and determining, using the RNN and the final hidden state, whether at least a portion of the sample is likely to comprise malicious code.
Please help me write a proper abstract based on the patent claims.
Using a recurrent neural network (RNN) that has been trained to a satisfactory level of performance, highly discriminative features can be extracted by running a sample through the RNN, and then extracting a final hidden state hi, where i is the number of instructions of the sample. This resulting feature vector may then be concatenated with the other hand-engineered features, and a larger classifier may then be trained on hand-engineered as well as automatically determined features. Related apparatus, systems, techniques and articles are also described.
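The recurrence and output layer spelled out in claims 5, 8, and 9 can be sketched with a few lines of NumPy. Shapes, the tanh nonlinearity, and the random sample are illustrative choices; a real detector would use trained parameters:

```python
import numpy as np

# Minimal Elman-style RNN sketch of the claimed recurrence
# h_t = g(W1 x + R h_{t-1}) with output o_t = sigmoid(W2 h_t).
# The final hidden state h_i (i = number of sample elements) is the
# feature vector used for the maliciousness determination.

rng = np.random.default_rng(0)
d_in, d_hid, i = 4, 8, 6                     # input size, hidden size, sample length
W1 = rng.normal(size=(d_hid, d_in)) * 0.1    # input-to-hidden connections
R = rng.normal(size=(d_hid, d_hid)) * 0.1    # recurrent connections
W2 = rng.normal(size=(1, d_hid)) * 0.1       # linear transform of hidden activations

x_seq = rng.normal(size=(i, d_in))           # encoded sample elements, fed as a
                                             # time-based sequence
h = np.zeros(d_hid)
for x in x_seq:
    h = np.tanh(W1 @ x + R @ h)              # h_t = g(W1 x + R h_{t-1})

o = 1.0 / (1.0 + np.exp(-(W2 @ h)))          # o_t = sigma(W2 h_t)
print(h.shape, o.item())                     # h is the extracted feature vector
```

Backpropagation through time (claim 10) would iteratively refine `W1`, `R`, and `W2` to drive `o` toward the label of each training sample.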
1. A digital conversation generating processor-implemented method, comprising: instantiating a conversational artificial-intelligence agent; identifying an individual target for conversation; initiating a conversation with the individual target by the artificial-intelligence agent by providing a first portion of a conversational dialogue to the individual target; recording a response from the individual target to the first portion of the conversational dialogue; and responding to the response from the individual target with a next contextual portion of the conversational dialogue. 2-20. (canceled)
Please help me write a proper abstract based on the patent claims.
The APPARATUSES, METHODS AND SYSTEMS FOR A DIGITAL CONVERSATION MANAGEMENT PLATFORM (“DCM-Platform”) transforms digital dialogue from consumers, client demands and Internet search inputs via DCM-Platform components into tradable digital assets, and client needs based artificial intelligence campaign plan outputs. In one implementation, the DCM-Platform may capture and examine conversations between individuals and artificial intelligence conversation agents. These agents may be viewed as assets. One can measure the value and performance of these agents by assessing their performance and ability to generate revenue from prolonging conversations and/or ability to effect sales through conversations with individuals.
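The claimed conversation loop — initiate a dialogue, record the target's response, reply with the next contextual portion — can be sketched with a canned script standing in for an AI dialogue model. All names and dialogue content are hypothetical:

```python
# Hypothetical sketch of the claimed conversation flow: an agent opens a
# dialogue, records the individual target's response, and replies with the
# next contextual portion of the conversational dialogue.

class ConversationAgent:
    def __init__(self, script):
        self.script = script     # ordered portions of the conversational dialogue
        self.turn = 0
        self.transcript = []     # recorded responses from the individual target

    def next_portion(self, response=None):
        if response is not None:
            self.transcript.append(response)   # record the response
        portion = self.script[min(self.turn, len(self.script) - 1)]
        self.turn += 1
        return portion

agent = ConversationAgent(["Hi! Looking for anything today?",
                           "Great - can I suggest an option?"])
opening = agent.next_portion()                  # initiate the conversation
reply = agent.next_portion("Yes, a new phone")  # respond contextually
print(opening, "|", reply)
```

The recorded transcript is what the DCM-Platform would later assess when valuing the agent as a tradable digital asset.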
1.-20. (canceled) 21. A system to associate computing devices with each other based on computer network activity, comprising: a data processing system having a matching engine and a connector executed by one or more processors, the data processing system to: identify a first linking factor based on a connection between a first computing device and the computer network at a first location during a first time period, and based on a connection between a second computing device and the computer network at the first location during the first time period; monitor for a second linking factor based on input activity at the first computing device in a second time period, and based on input activity at the second computing device in the second time period; monitor for a third linking factor based on activity at the first computing device at the first location during a third time period, and based on activity at the second computing device at a second location during the third time period; determine a negative match probability based on the second linking factor and based on the third linking factor; and indicate, in a data structure, a non-link between the first computing device and the second computing device based on the negative match probability. 22. The system of claim 21, comprising the data processing system to: remove, from the data structure, a previous link to indicate the non-link between the first computing device and the second computing device. 23. The system of claim 21, comprising the data processing system to: determine a number of computing devices other than the first computing device that connect with the computer network at the first location during the first time period; generate a positive match probability based on the first linking factor and based on the number of computing devices; and determine to generate the non-link based on the positive match probability, the negative match probability, and a threshold. 24. 
The system of claim 21, comprising the data processing system to: determine a link between the first computing device and the second computing device in a time period prior to the second time period and the third time period; and determine to remove the link based on the negative match probability. 25. The system of claim 21, wherein the data processing system comprises a geographic location module, the data processing system to: receive geo-location data points from the first computing device to determine the first computing device is at the first location; and receive geo-location data points from the second computing device to determine the second computing device is at the first location. 26. The system of claim 21, wherein the data processing system comprises a geographic location module, the data processing system to: receive geo-location data points from the first computing device, the geo-location data points comprising at least one of Global Positioning System information, Wi-Fi information, an IP address, Bluetooth information or cell tower triangulation information. 27. The system of claim 21, comprising the data processing system to: determine the first location based on a first IP address; determine the second location based on a second IP address. 28. 
The system of claim 21, comprising the data processing system to: determine a number of computing devices other than the first computing device that connect with the computer network at the first location during the first time period; generate a positive match probability based on the first linking factor and based on the number of computing devices; increase the positive match probability based on an identification, by the data processing system, of activity from the first computing device corresponding to a cessation of activity at the second computing device at a fourth time period; determine to link the first computing device with the second computing device based on the positive match probability and the negative match probability at or subsequent to the fourth time period; and generate a link between the first computing device and the second computing device at or subsequent to the fourth time period. 29. The system of claim 28, comprising the data processing system to: select, at or subsequent to the fourth time period, a content item for placement with an online document on the second computing device based on the link and based on computer network activity of the first computing device. 30. The system of claim 28, comprising the data processing system to: link the first computing device with the second computing device based on an overall match probability based on a weight for positive match probability, a weight for negative match probability, the positive match probability, and the negative match probability. 31. The system of claim 28, comprising the data processing system to: calibrate the weight for positive match probability and the weight for negative match probability based on known links and known non-links. 32. 
The system of claim 21, comprising the data processing system to: monitor for a fourth linking factor of a third computing device and a fourth computing device based on input activity at the third computing device during a fourth time period, and based on input activity at the fourth computing device during the fourth time period; monitor for a fifth linking factor of the third computing device and the fourth computing device based on activity at the third computing device at a third location during a fifth time period, and based on activity at the fourth computing device at a fourth location during the fifth time period; generate a positive match probability based on the fourth linking factor of the third computing device and the fourth computing device and based on the fifth linking factor of the third computing device and the fourth computing device; determine a link between the third computing device and the fourth computing device based on the positive match probability; and indicate, in a data structure, a link between the third computing device and the fourth computing device. 33. 
A method of associating computing devices with each other based on computer network activity, comprising: identifying, by a data processing system comprising a matching engine and a connector executed by at least one processor, a first linking factor based on a connection between a first computing device and the computer network at a first location during a first time period, and based on a connection between a second computing device and the computer network at the first location during the first time period; monitoring, by the data processing system, for a second linking factor based on input activity at the first computing device in a second time period, and based on input activity at the second computing device in the second time period; monitoring, by the data processing system, for a third linking factor based on activity at the first computing device at the first location during a third time period, and based on activity at the second computing device at a second location during the third time period; determining, by the data processing system, a negative match probability based on the second linking factor and based on the third linking factor; and generating, by the data processing system, a non-link between the first computing device and the second computing device based on the negative match probability. 34. The method of claim 33, comprising: creating, by the data processing system, a data structure to indicate the non-link between the first computing device and the second computing device. 35. The method of claim 33, comprising: determining a number of computing devices other than the first computing device that connect with the computer network at the first location during the first time period; generating a positive match probability based on the first linking factor and based on the number of computing devices; and determining to generate the non-link based on the positive match probability, the negative match probability, and a threshold. 36. 
The method of claim 33, comprising: determining a link between the first computing device and the second computing device in a time period prior to the second time period and the third time period; and determining to remove the link based on the negative match probability. 37. The method of claim 33, comprising: receiving geo-location data points from the first computing device, the geo-location data points comprising at least one of Global Positioning System information, Wi-Fi information, an IP address, Bluetooth information or cell tower triangulation information. 38. The method of claim 33, comprising: determining a number of computing devices other than the first computing device that connect with the computer network at the first location during the first time period; generating a positive match probability based on the first linking factor and based on the number of computing devices; increasing the positive match probability based on an identification, by the data processing system, of activity from the first computing device corresponding to a cessation of activity at the second computing device at a fourth time period; linking the first computing device with the second computing device based on the positive match probability and the negative match probability at or subsequent to the fourth time period; and generating a link between the first computing device and the second computing device at or subsequent to the fourth time period. 39. The method of claim 38, comprising: selecting, at or subsequent to the fourth time period, a content item for placement with an online document on the second computing device based on the link and based on computer network activity of the first computing device. 40. 
The method of claim 33, comprising: monitoring for a fourth linking factor of a third computing device and a fourth computing device based on input activity at the third computing device during a fourth time period, and based on input activity at the fourth computing device during the fourth time period; monitoring for a fifth linking factor of the third computing device and the fourth computing device based on activity at the third computing device at a third location during a fifth time period, and based on activity at the fourth computing device at a fourth location during the fifth time period; generating a positive match probability based on the fourth linking factor of the third computing device and the fourth computing device and based on the fifth linking factor of the third computing device and the fourth computing device; determining a link between the third computing device and the fourth computing device based on the positive match probability; and indicating, in a data structure, a link between the third computing device and the fourth computing device.
Please help me write a proper abstract based on the patent claims.
The present disclosure is directed to associating computing devices with each other based on computer network activity for selection of content items as part of an online content item placement campaign. A first linking factor is identified based on a connection between a first device and the computer network via a first IP address during a first time period, and based on a connection between a second device and the computer network via the first IP address during the first time period. A number of devices that connect with the computer network via the first IP address is determined. A positive match probability is generated. Second and third linking factors are monitored. A negative match probability is determined based on the second and third linking factors. The first device is linked with the second device based on the positive and negative match probabilities.
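Claims 30-31 describe combining the positive and negative match probabilities under calibrated weights. A minimal sketch of such a decision rule follows; the weights, threshold, and linear form are illustrative assumptions, not the patent's formula:

```python
# Sketch of a weighted match decision in the spirit of claims 30-31.
# The weights, threshold, and linear combination are illustrative only.

def overall_match(p_pos, p_neg, w_pos=0.7, w_neg=0.3):
    """Combine positive and negative match probabilities into one score."""
    return w_pos * p_pos - w_neg * p_neg

def decide_link(p_pos, p_neg, threshold=0.4):
    return "link" if overall_match(p_pos, p_neg) >= threshold else "non-link"

print(decide_link(0.9, 0.2))  # strong positive evidence -> link
print(decide_link(0.3, 0.8))  # strong negative evidence -> non-link
```

Calibration against known links and non-links (claim 31) would amount to fitting `w_pos`, `w_neg`, and `threshold` on labeled device pairs.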
1. A quantum processor comprising: N successive groups of a plurality of qubits (1, 2, . . . , N), wherein N is greater than or equal to three; wherein each group of qubits of the N successive groups of a plurality of qubits comprises a plurality of substantially parallel qubits; wherein each qubit of a first group of the N successive groups of a plurality of qubits is sized and shaped so that it crosses substantially perpendicularly a portion of each qubit of only a second group of qubits of the N successive groups of a plurality of qubits; wherein each qubit of a last group of the N successive groups of a plurality of qubits is sized and shaped so that it crosses substantially perpendicularly a portion of each qubit of only a second to last group of the N successive groups of a plurality of qubits; wherein each qubit of any given group of the N-2 successive groups of a plurality of qubits, not including the first group and the last group, is sized and shaped so that it crosses substantially perpendicularly a portion of each qubit of only a corresponding successive group and a corresponding preceding group of the N successive groups of a plurality of qubits; and a plurality of couplers, each coupler for providing a communicative coupling at a crossing of two qubits. 2. The quantum processor as claimed in claim 1, wherein the quantum processor is used for implementing a neural network comprising a plurality of neurons and a plurality of synapses; wherein each neuron of the plurality of neurons is associated to a qubit and each synapse of the plurality of synapses is associated to a coupler of the quantum processor. 3. 
A method for training the neural network implemented in the quantum processor claimed in claim 2, the method comprising: providing initialization data for initializing the plurality of couplers and the plurality of qubits of the quantum processor; until a criterion is met: performing a quantum sampling of the quantum processor to provide first empirical means; obtaining at least one training data instance for training the neural network; performing a quantum sampling of the quantum processor; wherein no bias is assigned to the qubits of the first group of the N successive groups of a plurality of qubits of the quantum processor; wherein the couplings of the first group of qubits of the N successive groups of a plurality of qubits and the second group of the N successive groups of a plurality of qubits are switched off; further wherein the biases of the second group of qubits of the N successive groups of a plurality of qubits are altered using the biases on a first group of neurons associated with the first group of qubits, the weights of the switched off couplings, and the at least one training data instance, to determine second empirical means; updating corresponding weights and biases of the couplers and the qubits of the quantum processor using the first and second empirical means; and providing final weights and biases of the couplers and the qubits of the quantum processor indicative of data representative of a trained neural network. 4. The method as claimed in claim 3, wherein the initialization data comprise a plurality of biases, each for a qubit of the plurality of qubits; a plurality of weights, each weight for a coupler of the plurality of couplers and a learning rate schedule. 5. The method as claimed in claim 4, wherein the providing of the initialization data is performed using an analog computer comprising the quantum processor and a digital computer operatively connected to the analog computer. 6. 
The method as claimed in claim 3, wherein the at least one training data instance is obtained from a previously generated data set. 7. The method as claimed in claim 3, wherein the at least one training data instance is obtained from a real-time source. 8. The method as claimed in claim 6, wherein the generated data set is stored in a digital computer operatively connected to an analog computer comprising the quantum processor. 9. The method as claimed in claim 7, wherein the real-time source originates from a digital computer operatively connected to an analog computer comprising the quantum processor. 10. The method as claimed in claim 3, wherein the criterion comprises a stopping condition; wherein the stopping condition comprises determining if there is no further training data instance available. 11. The method as claimed in claim 5, wherein the digital computer comprises a memory; further wherein the providing of the final weights and biases of the couplers and the qubits of the quantum processor comprises storing the final weights and biases of the couplers and the qubits of the quantum processor in the memory of the digital computer. 12. The method as claimed in claim 5, wherein the providing of the final weights and biases of the couplers and the qubits of the quantum processor comprises providing the final weights and biases of the couplers and the qubits of the quantum processor to another processing unit operatively connected to the digital computer. 13. 
A digital computer comprising: a central processing unit; a display device; a communication port for operatively connecting the digital computer to an analog computer comprising a quantum processor used for implementing a neural network as claimed in claim 2; a memory unit comprising an application for training the neural network, the application comprising: instructions for providing initialization data for initializing the plurality of couplers and the plurality of biases of the qubits of the quantum processor; instructions for, until a criterion is met: performing a quantum sampling of the quantum processor to provide first empirical means; obtaining at least one training data instance for training the neural network; performing a quantum sampling of the quantum processor; wherein no bias is assigned to the qubits of the first group of the N successive groups of a plurality of qubits of the quantum processor; wherein the couplings of the first group of qubits of the N successive groups of a plurality of qubits and the second group of qubits of the N successive groups of a plurality of qubits are switched off; further wherein the biases of the second group of qubits of the N successive groups of a plurality of qubits are altered using the biases on a first group of neurons associated with the first group of qubits, the weights of the switched off couplings, and the at least one training data instance, to determine second empirical means; and updating corresponding weights and biases of the couplers and the qubits of the quantum processor using the first and the second empirical means; and instructions for providing final weights and biases of the couplers and the qubits of the quantum processor as data representative of a trained neural network. 14. 
A non-transitory computer readable storage medium for storing computer-executable instructions which, when executed, cause a digital computer to perform a method for training the neural network implemented in the quantum processor claimed in claim 2, the method comprising: providing initialization data for initializing the plurality of couplers and the plurality of qubits of the quantum processor; until a criterion is met: performing a quantum sampling of the quantum processor to provide first empirical means; obtaining at least one training data instance for training the neural network; performing a quantum sampling of the quantum processor; wherein no bias is assigned to the qubits of the first group of the N successive groups of a plurality of qubits of the quantum processor; wherein the couplings of the first group of qubits of the N successive groups of a plurality of qubits and the second group of qubits of the N successive groups of a plurality of qubits are switched off; further wherein the biases of the second group of qubits of the N successive groups of a plurality of qubits are altered using the biases on a first group of neurons associated with the first group of qubits, the weights of the switched off couplings, and the at least one training data instance, to determine second empirical means; and updating corresponding weights and biases of the couplers and the qubits of the quantum processor using the first and second empirical means; and providing final weights and biases of the couplers and the qubits of the quantum processor as data representative of a trained neural network. 15. The method as claimed in claim 2, wherein the neural network is a Deep Boltzmann Machine.
Please help me write a proper abstract based on the patent claims.
A quantum processor comprises a first set of qubits comprising a first plurality of substantially parallel qubits; a second set of qubits comprising N successive groups of a plurality of qubits (1, 2, . . . , N), wherein N is greater than or equal to two; wherein each group of qubits comprises a plurality of substantially parallel qubits; wherein each qubit of the first plurality of substantially parallel qubits of the first set of qubits crosses substantially perpendicularly a portion of the plurality of substantially parallel qubits of a first group of the second set of qubits; wherein each qubit of any given group of the second set of qubits crosses substantially perpendicularly a portion of the plurality of substantially parallel qubits of a successive group of the second set of qubits and a plurality of couplers, each coupler for providing a communicative coupling at a crossing of two qubits.
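The training loop recited in claims 13 and 14 above alternates a free-running quantum sampling (first empirical means) with a sampling driven by a training instance (second empirical means), then updates couplings and biases from the two. This mirrors classical Boltzmann-machine learning; the following is a rough classical sketch, with seeded Gibbs sampling on a toy restricted Boltzmann machine standing in for the quantum sampler. All sizes, rates, and step counts are illustrative assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy RBM standing in for the claimed quantum hardware.
n_vis, n_hid = 6, 4
W = rng.normal(0.0, 0.1, (n_vis, n_hid))   # couplings
b_v = np.zeros(n_vis)                      # visible biases
b_h = np.zeros(n_hid)                      # hidden biases
data = rng.integers(0, 2, (32, n_vis)).astype(float)
lr = 0.05

for step in range(100):
    # "First empirical means": free-running model statistics,
    # here approximated by a few rounds of block Gibbs sampling.
    v = rng.integers(0, 2, (32, n_vis)).astype(float)
    for _ in range(5):
        h = (sigmoid(v @ W + b_h) > rng.random((32, n_hid))).astype(float)
        v = (sigmoid(h @ W.T + b_v) > rng.random((32, n_vis))).astype(float)
    neg = v.T @ sigmoid(v @ W + b_h) / len(v)
    # "Second empirical means": statistics with visibles clamped
    # to the training data instances.
    pos = data.T @ sigmoid(data @ W + b_h) / len(data)
    # Update the couplings from the two empirical means.
    W += lr * (pos - neg)
```

The `pos - neg` update is the standard contrastive rule; on the claimed hardware, both expectation terms would instead come from quantum sampling of the processor.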
1. A method for controlling exchange oscillations between a pair of electron spin states in a quantum computation device comprising a pair of donor atoms incorporated in crystalline silicon, wherein each donor atom has a nucleus and at least one bound electron; wherein, quantum information is encoded in a spin state of the bound electron of each donor atom of the pair of donor atoms, and the spin state of the nucleus of each donor atom is coupled to the spin state of its respective bound electron via the hyperfine interaction (A); and wherein an exchange interaction (J) between the spin state of each of the two electrons results in exchange oscillations between them; the method comprising the steps of: preparing the nuclear spins of the two donors in opposite states; tuning the exchange interaction (J) by the application of a switchable voltage to selectively change the relative strength of the exchange interaction with respect to the hyperfine interaction; and thereby selectively controlling the exchange oscillations between the two bound electrons. 2. The method according to claim 1, comprising the step of tuning the exchange interaction (J) between the donor electron spins, by modifying the relative potential between the two donor atoms. 3. The method according to claim 1, comprising the step of controlling the exchange interaction (J) between the donor electron spins, by modifying the potential barrier between the two donor atoms. 4. The method according to claim 1, wherein the exchange oscillations between the two bound electrons are turned on or off, depending on the tuning of the exchange interaction (J). 5. The method according to claim 1, wherein the amplitude of the exchange oscillations between the two bound electrons is made higher or lower, depending on the tuning of the exchange interaction (J). 6. The method according to claim 1, wherein the method is performed in the context of two qubit exchange gate operations. 7. 
The method according to claim 6, for performing SWAP operations between the electron spins of two donors in silicon. 8. The method according to claim 6, for performing √SWAP operations between the electron spins of two donors in silicon. 9. The method according to claim 1, wherein an electron reservoir is provided to facilitate initialization. 10. The method according to claim 1, wherein an electrometer is provided to determine a charge state of a donor atom. 11. The method according to claim 1, wherein a Single Electron Transistor (SET) is provided for readout. 12. The method according to claim 11, wherein the Single Electron Transistor (SET) is tunnel-coupled to the donor atoms. 13. The method according to claim 1, wherein preparing the nuclear spins of the two donors in opposite states comprises performing read-out and control of the state of the nuclear spins of the donor atoms. 14. The method according to claim 1, for performing a quantum logic operation between two electron spins by tuning the exchange interaction (J) relative to the hyperfine interaction (A), and preparing the nuclear spins in opposite states, such that quantum logic operations take place while J>>A, whereas the quantum logic operations are stopped to allow the readout of the results while J<<A. 15. The method according to claim 13, wherein read-out involves spin dependent quantum mechanical tunnelling of a donor electron in the ‘spin-up’ state to a charge reservoir upon application of a magnetic field. 16. The method for operating a quantum device according to claim 1, comprising the steps of: initialization of the nuclear spins in opposite states; initialization of the electron spins; exchange operation between the electrons; and, readout of the final electron spin states; wherein, the exchange operation is tuned by the application of a switchable electric field between the pair of ‘J’ gates to selectively modify the relative energy of the two bound electrons. 17. 
The method for operating a quantum device according to claim 1, comprising the step of detuning to protect against unwanted exchange oscillations during two-qubit exchange operations. 18. A quantum computing device for controlling exchange oscillations between a pair of electron spin states in a quantum computation device comprising: a pair of donor atoms incorporated in crystalline silicon, wherein each donor atom has a nucleus and at least one bound electron; wherein, quantum information is encoded in a spin state of the nucleus or the bound electron, or both, of the donor atoms, and the spin state of the nucleus of each donor atom is coupled to the spin state of its respective bound electron via the hyperfine interaction (A); and wherein an exchange interaction (J) between the spin state of each of the two electrons results in exchange oscillations between them; and wherein the nuclear spins of the two donors are prepared in opposite states; and further wherein tuning the exchange interaction (J) by the application of a switchable voltage selectively changes the relative strength of the exchange interaction with respect to the hyperfine interaction; and thereby selectively controls the exchange oscillations between the two bound electrons. 19. A large scale quantum device comprising plural devices according to claim 18 fabricated on a common silicon wafer.
Please help me write a proper abstract based on the patent claims.
This invention concerns a method to switch on and off the exchange interaction J between electron spins bound to donor atoms. The electron spins have the role of ‘qubits’ to carry quantum information, and the exchange interaction J has the role of mediator for two-qubit quantum logic operations. The invention aims at exploiting the existence of a further magnetic interaction, the hyperfine interaction A, between each electron spin and the nuclear spin of the donor atom (301, 302) that binds the electron. The hyperfine interaction A, together with the ability to read out (504) and control the state of the nuclear spins, is used to suppress the effect of the exchange interaction J at all times, except while a quantum logic operation is being performed. In this way, the result of the quantum logic operation is not distorted after the operation has taken place. In a further aspect, the invention concerns an electronic device where the method can be practically implemented, and a large scale device made up of many such devices.
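As a rough illustration of the on/off mechanism the abstract describes, a detuned two-level model predicts a SWAP oscillation amplitude of J²/(J² + δ²), where the detuning δ is set by the hyperfine interaction A when the nuclear spins are prepared in opposite states. This simplified Rabi-type formula is an assumption for illustration, not the patent's full Hamiltonian, but it already shows the claimed behavior of suppressing oscillations unless J dominates A:

```python
def swap_amplitude(J, A):
    """Maximum exchange (SWAP) oscillation amplitude for two spins
    detuned by the hyperfine interaction A: J**2 / (J**2 + A**2).
    Simplified two-level model, illustrative only."""
    return J**2 / (J**2 + A**2)

print(swap_amplitude(J=100.0, A=1.0))  # J >> A: near 1, oscillations on
print(swap_amplitude(J=0.01, A=1.0))   # J << A: near 0, oscillations off
```

Tuning J with the claimed switchable voltage thus moves the device between the regime where the two-qubit operation proceeds (J>>A) and the regime where readout is protected (J<<A).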
1. A method of detecting unknown classes, comprising: generating a first classifier for a first plurality of classes, an output of the first classifier having a dimension of at least two; and designing a second classifier to receive the output of the first classifier to decide whether input data belongs to the first plurality of classes or at least one second class. 2. The method of claim 1, further comprising classifying the input data into at least one unknown class when the input data does not belong to one of the first plurality of classes. 3. The method of claim 1, in which designing the second classifier comprises training the second classifier with examples of data belonging to the first plurality of classes and data not belonging to the first plurality of classes. 4. The method of claim 3, in which the data not belonging to the first plurality of classes comprises synthetically generated negative data. 5. The method of claim 4, in which the synthetically generated negative data is a function of known data from the first plurality of classes. 6. The method of claim 3, further comprising modifying a boundary of at least one of the first plurality of classes, one of the at least one second class, or a combination thereof based at least in part on the data not belonging to the first plurality of classes. 7. The method of claim 1, in which the first plurality of classes are a plurality of known classes. 8. The method of claim 1, in which the at least one second class comprises an unknown class or a plurality of classes that are different from the first plurality of classes. 9. The method of claim 1, in which the second classifier is linear or non-linear. 10. A method of generating synthetic negative data, comprising: obtaining known data from a plurality of classes; and synthetically generating negative data as a function of the known data. 11. 
The method of claim 10, in which synthetically generating the negative data comprises: computing a first vector between each known data point in a cluster of known data and a centroid of the cluster; and computing a second vector between a centroid of class specific clusters and a centroid of all known data points independent of class. 12. The method of claim 11, further comprising generating the negative data from the second vector or a negative vector of the first vector. 13. The method of claim 10, further comprising training a classifier on the negative data. 14. The method of claim 10, further comprising modifying a boundary of at least an existing known class, an existing unknown class, or a combination thereof based at least in part on the negative data. 15. An apparatus for detecting unknown classes, comprising: at least one memory unit; and at least one processor coupled to the memory unit, the at least one processor configured: to generate a first classifier for a first plurality of classes, an output of the first classifier having a dimension of at least two; and to design a second classifier to receive the output of the first classifier to decide whether input data belongs to the first plurality of classes or at least one second class. 16. The apparatus of claim 15, in which the at least one processor is further configured to classify the input data into at least one unknown class when the input data does not belong to one of the first plurality of classes. 17. The apparatus of claim 15, in which the at least one processor is further configured to train the second classifier with examples of data belonging to the first plurality of classes and data not belonging to the first plurality of classes. 18. The apparatus of claim 17, in which the data not belonging to the first plurality of classes comprises synthetically generated negative data. 19. 
The apparatus of claim 18, in which the synthetically generated negative data is a function of known data from the first plurality of classes. 20. The apparatus of claim 17, in which the at least one processor is further configured to modify a boundary of at least one of the first plurality of classes, one of the at least one second class, or a combination thereof based at least in part on the data not belonging to the first plurality of classes. 21. The apparatus of claim 15, in which the first plurality of classes are a plurality of known classes. 22. The apparatus of claim 15, in which the at least one second class comprises an unknown class or a plurality of classes that are different from the first plurality of classes. 23. The apparatus of claim 15, in which the second classifier is linear or non-linear. 24. An apparatus for generating synthetic negative data, comprising: at least one memory unit; and at least one processor coupled to the memory unit, the at least one processor configured: to obtain known data from a plurality of classes; and to synthetically generate negative data as a function of the known data. 25. The apparatus of claim 24, in which the at least one processor is further configured: to compute a first vector between each known data point in a cluster of known data and a centroid of the cluster; and to compute a second vector between a centroid of class specific clusters and a centroid of all known data points independent of class. 26. The apparatus of claim 25, in which the at least one processor is further configured to generate the negative data from the second vector or a negative vector of the first vector. 27. The apparatus of claim 24, in which the at least one processor is further configured to train a classifier on the negative data. 28. 
The apparatus of claim 24, in which the at least one processor is further configured to modify a boundary of at least an existing known class, an existing unknown class, or a combination thereof based at least in part on the negative data. 29. A non-transitory computer-readable medium having program code recorded thereon, the program code being executed by a processor and comprising: program code to generate a first classifier for a first plurality of classes, an output of the first classifier having a dimension of at least two; and program code to design a second classifier to receive the output of the first classifier to decide whether input data belongs to the first plurality of classes or at least one second class. 30. A non-transitory computer-readable medium having program code recorded thereon, the program code being executed by a processor and comprising: program code to obtain known data from a plurality of classes; and program code to synthetically generate negative data as a function of the known data. 31. An apparatus for detecting unknown classes, comprising: means for generating a first classifier for a first plurality of classes, an output of the first classifier having a dimension of at least two; and means for designing a second classifier to receive the output of the first classifier to decide whether input data belongs to the first plurality of classes or at least one second class. 32. An apparatus for generating synthetic negative data, comprising: means for obtaining known data from a plurality of classes; and means for synthetically generating negative data as a function of the known data.
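Claims 10 through 12 above spell out a concrete recipe for synthetic negatives: use the negative of the first vector (each point toward its cluster centroid) to push samples outward past the class boundary, or place samples along the second vector (class centroid toward the global centroid). A minimal numpy sketch under one plausible reading of those claims, with toy two-class data and an arbitrary 1.5 scale factor as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy known data: two class clusters (hypothetical example data).
class_a = rng.normal([0.0, 0.0], 0.3, (50, 2))
class_b = rng.normal([4.0, 4.0], 0.3, (50, 2))
clusters = [class_a, class_b]
global_centroid = np.vstack(clusters).mean(axis=0)

negatives = []
for cluster in clusters:
    centroid = cluster.mean(axis=0)
    v1 = centroid - cluster            # first vector: point -> cluster centroid
    negatives.append(cluster - v1)     # negative of v1 reflects points outward
    v2 = global_centroid - centroid    # second vector: class -> global centroid
    negatives.append((centroid + 1.5 * v2)[None, :])  # off-cluster sample
negatives = np.vstack(negatives)
```

The resulting `negatives` can then train the second classifier or tighten existing class boundaries, as in claims 13 and 14.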
Please help me write a proper abstract based on the patent claims.
A method of detecting unknown classes is presented and includes generating a first classifier for multiple first classes. In one configuration, an output of the first classifier has a dimension of at least two. The method also includes designing a second classifier to receive the output of the first classifier to decide whether input data belongs to the multiple first classes or at least one second class.
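A minimal sketch of the two-stage arrangement in the abstract: a first classifier produces a multi-dimensional output over the known classes, and a second stage decides known vs. unknown from that output. Here a random linear softmax model and a max-probability threshold stand in for the trained classifiers of the claims; both stand-ins are illustrative assumptions, not the patent's method:

```python
import numpy as np

rng = np.random.default_rng(2)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Stage 1: output over 3 known classes (dimension at least two).
W1 = rng.normal(size=(2, 3))

def first_classifier(x):
    return softmax(x @ W1)

# Stage 2: decide known class vs. unknown (-1) from stage 1's output.
def second_classifier(probs, tau=0.6):
    return np.where(probs.max(axis=1) >= tau, probs.argmax(axis=1), -1)

x = rng.normal(size=(5, 2))
labels = second_classifier(first_classifier(x))
```

In the claimed design, the second stage would instead be trained on known-class data plus synthetic negatives rather than thresholded by hand.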
1. A system for use in connection with performing optimization using a plurality of objective functions associated with a respective plurality of tasks, the system comprising: at least one computer hardware processor; and at least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform: identifying, based at least in part on a joint probabilistic model of the plurality of objective functions, a first point at which to evaluate an objective function in the plurality of objective functions; selecting, based at least in part on the joint probabilistic model, a first objective function in the plurality of objective functions to evaluate at the identified first point; evaluating the first objective function at the identified first point; and updating the joint probabilistic model based on results of the evaluation to obtain an updated joint probabilistic model. 2. The system of claim 1, wherein the first objective function relates values of hyper-parameters of a machine learning system to values providing a measure of performance of the machine learning system. 3. The system of claim 1, wherein the first objective function relates values of a plurality of hyper-parameters of a neural network for identifying objects in images to respective values providing a measure of performance of the neural network in identifying the objects in the images. 4. 
The system of claim 1, wherein the processor-executable instructions further cause the at least one computer hardware processor to perform: identifying, based at least in part on the updated joint probabilistic model of the plurality of objective functions, a second point at which to evaluate an objective function in the plurality of objective functions; selecting, based at least in part on the joint probabilistic model, a second objective function in the plurality of objective functions to evaluate at the identified first point; and evaluating the second objective function at the identified first point. 5. The system of claim 4, wherein the first objective function is different from the second objective function. 6. The system of claim 1, wherein the joint probabilistic model of the plurality of objective functions models correlation among tasks in the plurality of tasks. 7. The system of claim 1, wherein the joint probabilistic model of the plurality of objective functions comprises a vector-valued Gaussian process. 8. The system of claim 1, wherein the joint probabilistic model comprises a covariance kernel obtained based, at least in part, on a first covariance kernel modeling correlation among tasks in the plurality of tasks and a second covariance kernel modeling correlation among points at which objective functions in the plurality of objective functions may be evaluated. 9. The system of claim 1, wherein the identifying is performed further based on a cost-weighted entropy-search utility function. 10. 
A method for use in connection with performing optimization using a plurality of objective functions associated with a respective plurality of tasks, the method comprising: using at least one computer hardware processor to perform: identifying, based at least in part on a joint probabilistic model of the plurality of objective functions, a first point at which to evaluate an objective function in the plurality of objective functions; selecting, based at least in part on the joint probabilistic model, a first objective function in the plurality of objective functions to evaluate at the identified first point; evaluating the first objective function at the identified first point; and updating the joint probabilistic model based on results of the evaluation to obtain an updated joint probabilistic model. 11. The method of claim 10, wherein the first objective function relates values of hyper-parameters of a machine learning system to values providing a measure of performance of the machine learning system. 12. The method of claim 10, further comprising: identifying, based at least in part on the updated joint probabilistic model of the plurality of objective functions, a second point at which to evaluate an objective function in the plurality of objective functions; selecting, based at least in part on the joint probabilistic model, a second objective function in the plurality of objective functions to evaluate at the identified first point; and evaluating the second objective function at the identified first point. 13. The method of claim 10, wherein the joint probabilistic model of the plurality of objective functions models correlation among tasks in the plurality of tasks. 14. The method of claim 10, wherein the joint probabilistic model of the plurality of objective functions comprises a vector-valued Gaussian process. 15. 
The method of claim 10, wherein the joint probabilistic model comprises a covariance kernel obtained based, at least in part, on a first covariance kernel modeling correlation among tasks in the plurality of tasks and a second covariance kernel modeling correlation among points at which objective functions in the plurality of objective functions may be evaluated. 16. At least one non-transitory computer-readable storage medium storing processor-executable instructions that, when executed by the at least one computer hardware processor, cause the at least one computer hardware processor to perform a method for use in connection with performing optimization using a plurality of objective functions associated with a respective plurality of tasks, the method comprising: identifying, based at least in part on a joint probabilistic model of the plurality of objective functions, a first point at which to evaluate an objective function in the plurality of objective functions; selecting, based at least in part on the joint probabilistic model, a first objective function in the plurality of objective functions to evaluate at the identified first point; evaluating the first objective function at the identified first point; and updating the joint probabilistic model based on results of the evaluation to obtain an updated joint probabilistic model. 17. The at least one non-transitory computer-readable storage medium of claim 16, wherein the first objective function relates values of hyper-parameters of a machine learning system to values providing a measure of performance of the machine learning system. 18. 
The at least one non-transitory computer-readable storage medium of claim 16, wherein the processor-executable instructions further cause the at least one computer hardware processor to perform: identifying, based at least in part on the updated joint probabilistic model of the plurality of objective functions, a second point at which to evaluate an objective function in the plurality of objective functions; selecting, based at least in part on the joint probabilistic model, a second objective function in the plurality of objective functions to evaluate at the identified first point; and evaluating the second objective function at the identified first point. 19. The at least one non-transitory computer-readable storage medium of claim 16, wherein the joint probabilistic model of the plurality of objective functions comprises a vector-valued Gaussian process. 20. The at least one non-transitory computer-readable storage medium of claim 16, wherein the joint probabilistic model comprises a covariance kernel obtained based, at least in part, on a first covariance kernel modeling correlation among tasks in the plurality of tasks and a second covariance kernel modeling correlation among points at which objective functions in the plurality of objective functions may be evaluated.
Please help me write a proper abstract based on the patent claims.
Techniques for use in connection with performing optimization using a plurality of objective functions associated with a respective plurality of tasks. The techniques include using at least one computer hardware processor to perform: identifying, based at least in part on a joint probabilistic model of the plurality of objective functions, a first point at which to evaluate an objective function in the plurality of objective functions; selecting, based at least in part on the joint probabilistic model, a first objective function in the plurality of objective functions to evaluate at the identified first point; evaluating the first objective function at the identified first point; and updating the joint probabilistic model based on results of the evaluation to obtain an updated joint probabilistic model.
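Claim 8 builds the joint model's covariance from a task kernel and an input kernel. One standard construction consistent with that claim (the intrinsic coregionalization form used for vector-valued Gaussian processes) takes the Kronecker product of the two; a small numpy sketch with hypothetical sizes:

```python
import numpy as np

def rbf_kernel(X, lengthscale=1.0):
    """Squared-exponential kernel over candidate evaluation points."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Hypothetical setup: 2 tasks, 4 candidate points.
X = np.linspace(0.0, 1.0, 4)[:, None]
K_input = rbf_kernel(X)                    # correlation among input points
L = np.array([[1.0, 0.0], [0.8, 0.6]])     # Cholesky factor keeps K_task PSD
K_task = L @ L.T                           # correlation among tasks
K_joint = np.kron(K_task, K_input)         # joint (task, point) covariance
```

The off-diagonal blocks of `K_joint` are what let an evaluation of one task's objective function inform the model's posterior over the other tasks.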
1. A computer-implemented method comprising: a rule engine locally storing a first event dispatched from a first remote source system via a message broker; the rule engine triggering a first rule relevant to the first event; the rule engine issuing an update operation via the message broker to a first remote provider storing first context information relevant to the first event; and the rule engine updating the first event stored locally with the first context information. 2. A computer-implemented method as in claim 1 further comprising: the rule engine locally storing a second event dispatched from a second source system via the message broker; the rule engine triggering a second rule relevant to the second event; and the rule engine issuing a second update operation via the message broker to a second provider storing second context information relevant to the second event. 3. A method as in claim 1 wherein the first remote source system comprises a business system, and the first remote provider comprises a business entity provider. 4. A method as in claim 1 wherein the first remote provider and rule engine together comprise a context-aware platform. 5. A method as in claim 4 wherein the context-aware platform comprises a gamification platform. 6. A method as in claim 1 wherein during initialization, the method further comprises: the rule engine sending a request event to the first remote provider via the message broker; the rule engine receiving data sent by the first remote provider in response to the request event; and the rule engine locally storing the data. 7. A method as in claim 1 further comprising the rule engine resolving a conflict arising between the first event and a changed content of the first remote provider. 8. 
A non-transitory computer readable storage medium embodying a computer program for performing a method, said method comprising: a rule engine locally storing a first event dispatched from a first remote source system via a message broker; the rule engine triggering a first rule relevant to the first event; the rule engine issuing an update operation via the message broker to a first remote provider storing first context information relevant to the first event; and the rule engine updating the first event stored locally with the first context information. 9. A non-transitory computer readable storage medium as in claim 8 wherein the method further comprises: the rule engine locally storing a second event dispatched from a second remote source system via the message broker; the rule engine triggering a second rule relevant to the second event; and the rule engine issuing a second update operation via the message broker to a second remote provider storing second context information relevant to the second event. 10. A non-transitory computer readable storage medium as in claim 8 wherein the first remote source system comprises a business system, and the first remote provider comprises a business entity provider. 11. A non-transitory computer readable storage medium as in claim 8 wherein the first remote provider and rule engine together comprise a context-aware platform. 12. A non-transitory computer readable storage medium as in claim 11 wherein the context-aware platform comprises a gamification platform. 13. A non-transitory computer readable storage medium as in claim 8 wherein during initialization, the method further comprises: the rule engine sending a request event to the first remote provider via the message broker; the rule engine receiving data sent by the first remote provider in response to the request event; and the rule engine locally storing the data. 14. 
A non-transitory computer readable storage medium as in claim 8 wherein the method further comprises the rule engine resolving a conflict arising between the first event and a changed content of the first remote provider. 15. A computer system comprising: one or more processors; a first rule engine; a software program, executable on said computer system such that: the first rule engine locally stores a first event dispatched from a first remote source system via a message broker; the rule engine triggers a first rule relevant to the first event; the rule engine issues an update operation via the message broker to a first remote provider storing first context information relevant to the first event; and the first event stored locally is updated with the first context information. 16. A computer system as in claim 15 wherein the software program is further configured to operate such that: the rule engine locally stores a second event dispatched from a second remote source system via the message broker; the rule engine triggers a second rule appropriate to the second event; and the rule engine issues a second update operation to a second remote provider storing second context information relevant to the second event. 17. A computer system as in claim 15 wherein the first remote source system comprises a business system, and the first remote provider comprises a business entity provider. 18. A computer system as in claim 15 wherein the first remote provider and rule engine together comprise a context-aware platform. 19. A computer system as in claim 18 wherein the context-aware platform comprises a gamification platform. 20. 
A computer system as in claim 15 wherein during initialization, the software program is further configured to operate such that: the rule engine sends a request event to the first remote provider via the message broker; the rule engine receives data sent by the first remote provider in response to the request event; and the rule engine locally stores the data.
Please help me write a proper abstract based on the patent claims.
A complex event processing system comprises one or more rule engines configured to receive information from a source system via a message broker. Multiple rule engines may be used in parallel, with the same/different rules deployed. According to an embodiment, a rule engine may include a manager component, a proxy component, a reasoner component, and a working memory. The manager and proxy serve as interfaces with the message broker to allow asynchronous communication with a provider storing state information. The reasoner is configured to execute rules based upon occurrence of events in the source system. Embodiments may be particularly suited to implementing a gamification platform including a business entity provider, with an existing business source system (e.g. CRM, ERP).
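A minimal Python sketch of the claimed flow: locally store a dispatched event, trigger a matching rule, issue an update through a broker to a provider, and enrich the stored event with the returned context. All class and field names here are illustrative stand-ins, not the patent's API, and the broker is a synchronous fake rather than a real message broker:

```python
from collections import defaultdict

class RuleEngine:
    """Sketch of the claimed event flow; illustrative names only."""

    def __init__(self, broker):
        self.broker = broker
        self.events = []
        self.rules = defaultdict(list)      # event type -> rule handlers

    def on(self, event_type, rule):
        self.rules[event_type].append(rule)

    def dispatch(self, event):
        self.events.append(dict(event))     # locally store the event
        for rule in self.rules[event["type"]]:
            request = rule(event)           # triggered rule builds an update
            context = self.broker.update(request)   # via the message broker
            self.events[-1].update(context)         # enrich stored event

class FakeBroker:
    """Stand-in for a remote provider answering an update operation."""
    def update(self, request):
        return {"points": request["amount"] * 10}

engine = RuleEngine(FakeBroker())
engine.on("sale", lambda e: {"provider": "player", "amount": e["amount"]})
engine.dispatch({"type": "sale", "amount": 3})
```

Running it, the stored `sale` event ends up enriched with the provider's `points` context, mirroring the gamification use case in the claims.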
1. A method for privatizing an iteratively reweighted least squares (IRLS) solution, comprising: perturbing a first moment of a dataset by adding noise; perturbing a second moment of the dataset by adding noise; obtaining the IRLS solution based on the perturbed first moment and the perturbed second moment; and generating a differentially private output based on the IRLS solution. 2. The method of claim 1, in which the first moment is a mean of the dataset and the second moment is a covariance of the dataset. 3. The method of claim 1, further comprising: perturbing the first moment via a Laplace mechanism; and perturbing the second moment by adding Wishart noise. 4. The method of claim 1, further comprising determining an amount of noise to add to the first moment via a Laplace mechanism based on an amount of change that can occur to a data point of the dataset. 5. The method of claim 1, in which the IRLS solution is a differential private (DP) iteratively reweighted least squares function. 6. The method of claim 1, in which the IRLS solution is a concentrated differential private (CDP) iteratively reweighted least squares function. 7. An apparatus for privatizing an iteratively reweighted least squares (IRLS) solution, the apparatus comprising: a memory; and at least one processor coupled to the memory, the at least one processor configured: to perturb a first moment of a dataset by adding noise; to perturb a second moment of the dataset by adding noise; to obtain the IRLS solution based on the perturbed first moment and the perturbed second moment; and to generate a differentially private output based on the IRLS solution. 8. The apparatus of claim 7, in which the first moment is a mean of the dataset and the second moment is a covariance of the dataset. 9. The apparatus of claim 7, in which the at least one processor is further configured: to perturb the first moment via a Laplace mechanism; and to perturb the second moment by adding Wishart noise. 10. 
The apparatus of claim 7, in which the at least one processor is further configured to determine an amount of noise to add to the first moment via a Laplace mechanism based on an amount of change that can occur to a data point of the dataset. 11. The apparatus of claim 7, in which the IRLS solution is a differential private (DP) iteratively reweighted least squares function. 12. The apparatus of claim 7, in which the IRLS solution is a concentrated differential private (CDP) iteratively reweighted least squares function. 13. An apparatus for privatizing an iteratively reweighted least squares (IRLS) solution, comprising: means for perturbing a first moment of a dataset by adding noise; means for perturbing a second moment of the dataset by adding noise; and means for obtaining the IRLS solution based on the perturbed first moment and the perturbed second moment; and means for generating a differentially private output based on the IRLS solution. 14. The apparatus of claim 13, in which the first moment is a mean of the dataset and the second moment is a covariance of the dataset. 15. The apparatus of claim 13, further comprising: means for perturbing the first moment via a Laplace mechanism; and means for perturbing the second moment by adding Wishart noise. 16. The apparatus of claim 13, further comprising means for determining an amount of noise to add to the first moment via a Laplace mechanism based on an amount of change that can occur to a data point of the dataset. 17. The apparatus of claim 13, in which the IRLS solution is a differential private (DP) iteratively reweighted least squares function. 18. The apparatus of claim 13, in which the IRLS solution is a concentrated differential private (CDP) iteratively reweighted least squares function. 19. 
A non-transitory computer-readable medium having program code recorded thereon for privatizing an iteratively reweighted least squares (IRLS) solution, the program code executed by a processor and comprising: program code to perturb a first moment of a dataset by adding noise; program code to perturb a second moment of the dataset by adding noise; program code to obtain the IRLS solution based on the perturbed first moment and the perturbed second moment; and program code to generate a differentially private output based on the IRLS solution. 20. The non-transitory computer-readable medium of claim 19, in which the first moment is a mean of the dataset and the second moment is a covariance of the dataset. 21. The non-transitory computer-readable medium of claim 19, further comprising: program code to perturb the first moment via a Laplace mechanism; and program code to perturb the second moment by adding Wishart noise. 22. The non-transitory computer-readable medium of claim 19, in which the program code to perturb the first moment further comprises program code to determine an amount of noise to add to the first moment via a Laplace mechanism based on an amount of change that can occur to a data point of the dataset. 23. The non-transitory computer-readable medium of claim 19, in which the IRLS solution is a differential private (DP) iteratively reweighted least squares function. 24. The non-transitory computer-readable medium of claim 19, in which the IRLS solution is a concentrated differential private (CDP) iteratively reweighted least squares function.
Please help me write a proper abstract based on the patent claims.
A method for privatizing an iteratively reweighted least squares (IRLS) solution includes perturbing a first moment of a dataset by adding noise and perturbing a second moment of the dataset by adding noise. The method also includes obtaining the IRLS solution based on the perturbed first moment and the perturbed second moment. The method further includes generating a differentially private output based on the IRLS solution.
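The perturbation step in claims 1–6 can be sketched numerically. The sketch below perturbs the dataset's sufficient statistics — XᵀX standing in for the second moment and Xᵀy for the first — with Laplace and Wishart noise respectively, then performs a single solve; the claims' full IRLS would iterate a reweighted version of this solve, and the noise scales here are illustrative placeholders, not calibrated to any formal privacy budget:

```python
import numpy as np

def wishart_noise(dim, df, scale, rng):
    """Sample a Wishart(df, scale * I) matrix as W = A @ A.T with
    A ~ N(0, scale), A of shape (dim, df). Being a Gram matrix, W is
    positive semi-definite, so adding it keeps the second moment PSD."""
    A = rng.normal(0.0, np.sqrt(scale), size=(dim, df))
    return A @ A.T

def private_least_squares(X, y, laplace_scale, wishart_scale, seed=0):
    """Perturb the first-moment statistic (Laplace mechanism) and the
    second-moment statistic (Wishart noise), then solve the resulting
    normal equations. Scales are illustrative, not privacy-calibrated."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    first = X.T @ y + rng.laplace(0.0, laplace_scale, size=d)
    second = X.T @ X + wishart_noise(d, d + 1, wishart_scale, rng)
    return np.linalg.solve(second, first)
```

Because every downstream computation uses only the perturbed moments, the solution inherits the privacy of the perturbation step, which is the structure the claims rely on.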
1. A method for deriving optimal discriminating features indicative of a subject state when the subject listens to one of a set of musical pieces, the method comprising: extracting frequency features from the subject's EEG signal when the subject is in a first subject state and a second subject state, the frequency features being extracted from more than one frequency band in one set of time segments; and identifying optimal discriminating features from the extracted frequency features, the optimal discriminating features indicative of characteristics of the EEG signal when the subject is in the first subject state and the second subject state, wherein one of the first subject state and the second subject state indicates that the subject likes a musical piece while the other state indicates that the subject does not like the musical piece. 2. The method according to claim 1, wherein the step of identifying optimal discriminating features includes tabulating a matrix including the extracted frequency features in each corresponding frequency band. 3. The method according to claim 1, wherein the step of identifying optimal discriminating features includes deriving optimal spectral filters, each of the optimal spectral filters being associated with a frequency band of the optimal discriminating features. 4. The method according to claim 3, wherein the step of identifying optimal discriminating features includes obtaining a signal after passing each EEG signal through the optimal spectral filters, each EEG signal being obtained when the subject listens to a corresponding one of the set of musical pieces. 5. The method according to claim 4, wherein the signal is used to calculate a form feature vector for each time segment. 6. The method according to claim 5, wherein the form feature vector is used to generate a musical preference score for each of the set of the musical pieces, the musical preference score indicating the subject's preference of the musical piece. 7. 
The method according to claim 6, wherein the musical preference score is used to control the set of the musical pieces, wherein the musical piece is controlled to stop playing when the musical preference score indicates that the subject does not like the musical piece. 8. The method according to claim 1, further comprising the step of extracting musical features associated with each of the set of the musical pieces; and comparing the extracted musical features with the optimal discriminating features to determine which of the extracted musical features are indicative of characteristics of musical pieces that the subject prefers. 9. The method according to claim 8, further comprising developing a model in response to the optimal discriminating features of the subject when listening to one of the set of the musical pieces and the musical features for the one of the set of the musical pieces. 10. The method according to claim 9, wherein the model is used for enhancing musical parameters of an acoustic signal of one or more of the set of musical pieces. 11. The method according to claim 9, wherein the model is used to organize the order of the set of musical pieces. 12. The method according to claim 9, wherein the set of musical pieces are stored in an external database. 13. The method according to claim 9, wherein the step of developing the model further comprises developing the model in response to the optimal discriminating features of one or more other subjects when listening to the one of the set of the musical pieces and the musical features for the one of the set of the musical pieces. 14. 
A method for developing a model indicative of a subject state when the subject listens to one of a set of musical pieces, the method comprising: extracting frequency features from the subject's EEG signal when the subject is in a first subject state and a second subject state, the frequency features being extracted from more than one frequency band in one set of time frames; identifying optimal discriminating features from the extracted frequency features, the optimal discriminating features indicative of similar characteristics of the EEG signal when the subject is in the first subject state and the second subject state; extracting musical features associated with each of a set of the musical pieces; comparing the extracted musical features with the optimal discriminating features to determine which of the extracted musical features are indicative of characteristics of musical pieces that the subject prefers; and developing a model in response to the optimal discriminating features of the subject when listening to one of the set of the musical pieces and the musical features for the one of the set of the musical pieces, wherein one of the first subject state and the second subject state indicates that the subject likes a musical piece while the other state indicates that the subject does not like the musical piece. 15. The method according to claim 14, wherein the step of developing the model further comprises developing the model in response to the optimal discriminating features of one or more other subjects when listening to the one of the set of the musical pieces and the musical features for the one of the set of the musical pieces. 16. The method according to claim 14, wherein the model is used for enhancing musical parameters of an acoustic signal of one or more of the set of musical pieces. 17. The method according to claim 14, wherein the model is used to organize the order of the set of musical pieces. 18. 
The method according to claim 14, wherein the EEG signal is obtained by at least one electrode, the at least one electrode being placed on the forehead of the subject. 19. A system for developing a model indicative of a subject state when the subject listens to one of a set of musical pieces, comprising: an input device for the system operable to receive an electroencephalography (EEG) signal; an EEG discriminative feature generator operable to extract frequency features from the received EEG signal when the subject is in a first subject state and a second subject state and identify optimal discriminating features from the extracted frequency features, the optimal discriminating features indicative of similar characteristics of the EEG signal when the subject is in the first subject state and the second subject state; a model builder operable to develop a model in response to the optimal discriminating features of the subject when listening to one of the set of the musical pieces and the musical features for the one of the set of the musical pieces; a music scorer operable to generate a musical preference score for each of the set of the musical pieces, the musical preference score indicating the subject's preference of the musical piece; and a music controller operable to control the set of the musical pieces, wherein the musical piece is controlled to stop playing when the musical preference score indicates that the subject does not like the musical piece, wherein one of the first subject state and the second subject state indicates that the subject likes a musical piece while the other state indicates that the subject does not like the musical piece.
Please help me write a proper abstract based on the patent claims.
A method for deriving optimal discriminating features indicative of a subject state when the subject listens to one of a set of musical pieces, comprising a step of extracting frequency features from the subject's EEG signal when the subject is in a first subject state and a second subject state, the frequency features being extracted from more than one frequency band in one set of time segments; and identifying optimal discriminating features from the extracted frequency features, the optimal discriminating features indicative of characteristics of the EEG signal when the subject is in the first subject state and the second subject state, wherein one of the first subject state and the second subject state indicates that the subject likes a musical piece while the other state indicates that the subject does not like the musical piece.
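A minimal illustration of the claimed feature pipeline — per-segment, per-band frequency features from an EEG signal, followed by a ranking that identifies the most discriminating features between the two subject states — might look as follows. The band edges, segment handling, and Fisher-style score are assumptions for illustration; the patent's optimal spectral filters are not reproduced here:

```python
import numpy as np

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # Hz, illustrative

def band_powers(eeg, fs, segment_len):
    """Split a 1-D EEG signal into segments and compute per-band power
    from the FFT magnitude spectrum of each segment."""
    n_seg = len(eeg) // segment_len
    feats = []
    for i in range(n_seg):
        seg = eeg[i * segment_len:(i + 1) * segment_len]
        freqs = np.fft.rfftfreq(segment_len, 1.0 / fs)
        power = np.abs(np.fft.rfft(seg)) ** 2
        feats.append([power[(freqs >= lo) & (freqs < hi)].sum()
                      for lo, hi in BANDS.values()])
    return np.asarray(feats)   # shape: (n_segments, n_bands)

def fisher_scores(feats_like, feats_dislike):
    """Rank each band feature by a Fisher-style discriminability ratio
    between the 'likes' and 'does not like' subject states."""
    m1, m2 = feats_like.mean(0), feats_dislike.mean(0)
    v1, v2 = feats_like.var(0), feats_dislike.var(0)
    return (m1 - m2) ** 2 / (v1 + v2 + 1e-12)
```

The highest-scoring bands play the role of the optimal discriminating features; a preference score for a new musical piece could then be computed from just those features.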
1. A computer-implemented method of identifying information of interest within an organization, the method comprising: determining use data that characterizes relationships among a plurality of information items; for each of a plurality of users associated with the organization, training at least one classifier for the user associated with the organization, wherein the training comprises: selecting a proper subset of the plurality of information items; for each of the selected information items, receiving, from the user associated with the organization, feedback related to the selected information item, and for each of a plurality of tokens associated with the selected information item, attributing a score to the token based at least in part on the received feedback, and storing the attributed score; receiving an indication of a first group of two or more information items within the organization, wherein the first group of two or more information items within the organization includes at least two information items not included in the selected proper subset of the plurality of information items; and for each of a plurality of information items of the first group of two or more information items within the organization, for each of a plurality of users associated with the organization, determining whether the user is likely to find the information item interesting based at least in part on at least one trained classifier, and providing, for display, an indication of at least one information item of the first group of two or more information items within the organization that the user is likely to find interesting. 2. The method of claim 1, wherein the information items include user data and collections of information items. 3. The method of claim 2, wherein the organization includes a structured body of users with associated roles within the organization and who have access to the information items within the organization. 4. 
The method of claim 1, wherein determining whether a first user is likely to find a first information item interesting comprises: identifying a plurality of tokens associated with the first information item, for each of the identified plurality of tokens associated with the first information item, attributing a score to the identified token based on at least one trained classifier; and determining a score for the first information item based at least in part on the scores attributed to the identified plurality of tokens. 5. The method of claim 4, further comprising: determining whether the determined score for the first information item exceeds a first predetermined threshold. 6. The method of claim 4, wherein the plurality of identified tokens include at least one keyword associated with the first information item, at least one category associated with the first information item, at least one title word associated with the first information item, and at least one word in the body of the first information item. 7. The method of claim 1, further comprising: collaboratively training at least one classifier for a first plurality of users associated with the organization, wherein the training comprises: selecting a proper subset of the plurality of information items; for each of the selected information items, receiving, from each of the first plurality of users associated with the organization, feedback related to the selected information item, for each of a plurality of tokens of the selected information item, attributing a score to the token based at least in part on: the received feedback from each of the first plurality of users associated with the organization, and the reputation and influence of each of the first plurality of users associated with the organization. 8. The method of claim 1, wherein the received feedback is implicit feedback. 9. 
The method of claim 8, wherein the received implicit feedback includes at least one document view associated with an information item. 10. The method of claim 9, wherein the received implicit feedback includes at least one information item being forwarded to other users and at least one comment associated with an information item. 11. The method of claim 8, wherein the received implicit feedback includes at least one form of negative feedback. 12. The method of claim 11, wherein the negative feedback includes deleting an information item. 13. The method of claim 1, wherein the received feedback is explicit feedback. 14. The method of claim 1, wherein a provided indication of a first information item of the first group of two or more information items within the organization that the user is likely to find interesting includes a first image selected from among a plurality of images associated with the first information item. 15. The method of claim 14, wherein selecting the first image comprises: for each of the plurality of images associated with the first information item, determining a resolution of the image; and selecting the image with the highest resolution from among the plurality of images associated with the first information item. 16. The method of claim 14, wherein selecting the first image comprises: for each of the plurality of images associated with the first information item, determining a size of the image; and selecting the image with the highest determined size from among the plurality of images associated with the first information item. 17. The method of claim 14, wherein selecting the first image comprises: for each of the plurality of images associated with the first information item, determining a date and time of the image; and selecting the image with the earliest determined date and time from among the plurality of images associated with the first information item. 18. 
A computer-readable medium storing instructions that, if executed by a computing system having a processor, cause the computing system to perform a method for identifying information of interest within an organization, the method comprising: determining use data that characterizes relationships among a plurality of information items; for each of a plurality of users associated with the organization, training at least one classifier for the user associated with the organization, wherein the training comprises: selecting a proper subset of the plurality of information items; for each of the selected information items, receiving, from the user associated with the organization, feedback related to the selected information item, and storing an indication of the received feedback; receiving an indication of a first group of two or more information items within the organization, wherein the first group of two or more information items within the organization includes at least two information items not included in the selected proper subset of the plurality of information items; and for each of a plurality of information items of the first group of two or more information items within the organization, for each of a plurality of users associated with the organization, determining whether the user is likely to find the information item interesting based at least in part on at least one trained classifier, and providing, for display, an indication of at least one information item of the first group of two or more information items within the organization that the user is likely to find interesting. 19. The computer-readable medium of claim 18, the training further comprising: for each of the selected information items, for each of a plurality of features of the selected information item, attributing a score to the feature based on feedback from at least one user. 20. 
The computer-readable medium of claim 19, wherein determining whether the user is likely to find the information item interesting based at least in part on at least one trained classifier comprises: applying at least one trained classifier to features of the first information item. 21. A computing system, having one or more processors, comprising: at least one of the one or more processors configured to determine use data that characterizes relationships among a plurality of information items; at least one of the one or more processors configured to, for each of a plurality of users associated with the organization, train at least one classifier for the user associated with the organization; at least one of the one or more processors configured to receive an indication of a first group of two or more information items within the organization, wherein the first group of two or more information items within the organization includes at least two information items not included in the selected proper subset of the plurality of information items; and at least one of the one or more processors configured to, for each of a plurality of information items of the first group of two or more information items within the organization, for each of a plurality of users associated with the organization, determine whether the user is likely to find the information item interesting based at least in part on at least one trained classifier, and provide, for display, an indication of at least one information item of the first group of two or more information items within the organization that the user is likely to find interesting. 22. 
The computing system of claim 21, further comprising: at least one of the one or more processors configured to periodically identify newly-available information items; and at least one of the one or more processors configured to, for each of the identified newly-available information items, for each of a plurality of users, apply at least one trained classifier associated with the user to attributes of the newly-available information item to determine a relevance value, determine that the determined relevance value exceeds a predetermined threshold, and in response to determining that the determined relevance value exceeds the predetermined threshold, add the newly-available information item to a collection of items associated with the user. 23. The computing system of claim 21, further comprising: at least one of the one or more processors configured to, for a first information item, determine a number of views of the first information item, determine a duration of views of the first information item, determine a number of likes of the first information item, and determine a number of comments on the first information item.
Please help me write a proper abstract based on the patent claims.
Systems and methods for selecting items of interest for an organization from a set of feeds, based on the interests that users have demonstrated through their interactions with existing content, are described herein. In some embodiments, the system is part of a content management service that allows users to add and organize files, media, links, and other information. The content can be uploaded from a computer, imported from cloud file systems, added via links, or pulled from various kinds of feeds.
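The per-user training loop in the claims — feedback on a subset of items attributing scores to the items' tokens, then scoring unseen items with the trained classifier — can be sketched as a simple additive token model. The class name, feedback encoding, and threshold below are illustrative assumptions, not the patent's implementation:

```python
from collections import defaultdict

class TokenClassifier:
    """Per-user interest classifier: feedback on training items attributes a
    score to each token (keywords, categories, title/body words), and an
    unseen item is scored by summing its tokens' stored scores."""
    def __init__(self, threshold=1.0):
        self.token_scores = defaultdict(float)
        self.threshold = threshold

    def train(self, item_tokens, feedback):
        """feedback: +1 for positive signals (views, likes, forwards),
        -1 for negative signals (e.g. deleting an item)."""
        for token in item_tokens:
            self.token_scores[token] += feedback

    def score(self, item_tokens):
        return sum(self.token_scores[t] for t in item_tokens)

    def is_interesting(self, item_tokens):
        return self.score(item_tokens) > self.threshold
```

Positive feedback raises an item's token scores and negative feedback lowers them, so `is_interesting` implements the claimed per-user decision for items outside the training subset.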
1. An apparatus for generating a weight estimation model, the apparatus comprising: a training data collection unit configured to collect training data, the training data comprising skin spectrum information and weight information of a plurality of objects; and a model generation unit configured to generate the weight estimation model, used for spectrum-based weight estimation, through machine learning based on the collected training data. 2. The apparatus of claim 1, wherein the model generation unit generates the weight estimation model through the machine learning, in which the skin spectrum information is used as an input and the weight information is used as an output. 3. The apparatus of claim 1, wherein the training data further comprises at least one of age information, height information, gender information, and bio impedance information of an object. 4. The apparatus of claim 3, wherein the model generation unit generates the weight estimation model through the machine learning in which the skin spectrum information and the at least one of the age information, the height information, the gender information, and the bio impedance information are used as inputs and the weight information is used as an output. 5. The apparatus of claim 1, wherein an algorithm used to perform the machine learning comprises at least one of partial least squares regression, linear regression, neural network, decision tree, genetic algorithm, genetic programming, K-nearest neighbor, radial basis function network, random forest, support vector machine, and deep-learning. 6. A method for generating a weight estimation model, the method comprising: collecting training data, the training data comprising skin spectrum information and weight information of a plurality of objects; and generating the weight estimation model, used for spectrum-based weight estimation, through machine learning based on the collected training data. 7. 
The method of claim 6, wherein the generating of the weight estimation model comprises generating the weight estimation model through the machine learning, in which the skin spectrum information is used as an input and the weight information is used as an output. 8. The method for generating the weight estimation model of claim 6, wherein the training data further comprises at least one of age information, height information, gender information, and bio impedance information of an object. 9. The method of claim 8, wherein the generating of the weight estimation model comprises generating the weight estimation model through the machine learning in which the skin spectrum information and the at least one of the age information, the height information, the gender information, and the bio impedance information are used as inputs and the weight information is used as an output. 10. The method of claim 6, wherein an algorithm used to perform the machine learning comprises at least one of partial least squares regression, linear regression, neural network, decision tree, genetic algorithm, genetic programming, K-nearest neighbor, radial basis function network, random forest, support vector machine, and deep-learning. 11. An apparatus for estimating a weight, the apparatus comprising: a spectrum measurement unit configured to measure a skin spectrum of a user; and a weight estimation unit configured to estimate a weight of the user using a weight estimation model based on the measured skin spectrum. 12. The apparatus of claim 11, wherein the weight estimation model is generated through machine learning based on training data, the training data comprising skin spectrum information and weight information of a plurality of objects. 13. The apparatus of claim 12, wherein the training data further comprises at least one of age information, height information, gender information, and bio impedance information of the plurality of objects. 14. 
The apparatus of claim 13, wherein the weight estimation unit estimates the weight of the user further based on at least one of age information, height information, gender information, and bio impedance information of the user. 15. The apparatus of claim 11, wherein an algorithm used to perform the machine learning comprises at least one of partial least squares regression, linear regression, neural network, decision tree, genetic algorithm, genetic programming, K-nearest neighbor, radial basis function network, random forest, support vector machine, and deep-learning.
Please help me write a proper abstract based on the patent claims.
An apparatus for generating the weight estimation model includes a training data collection unit that collects training data, the training data including skin spectrum information and weight information of a plurality of objects, and a model generation unit that generates the weight estimation model, used for a spectrum-based weight estimation, through machine learning based on the collected training data.
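Among the algorithms enumerated in claim 5, plain linear regression is the simplest to sketch: the skin spectrum (optionally concatenated with age/height/gender features, per claims 3–4) is regressed onto weight. The function names and feature layout below are assumptions for illustration, using synthetic data in place of measured spectra:

```python
import numpy as np

def train_weight_model(spectra, extras, weights):
    """Fit a linear model weight ~ [spectrum | extras | 1] @ coef.
    spectra: (n, n_wavelengths) skin spectra; extras: (n, k) additional
    features such as age/height/gender; weights: (n,) target weights."""
    X = np.hstack([spectra, extras, np.ones((len(weights), 1))])
    coef, *_ = np.linalg.lstsq(X, weights, rcond=None)
    return coef

def estimate_weight(coef, spectrum, extra):
    """Apply the trained model to one user's measured spectrum and extras."""
    x = np.concatenate([spectrum, extra, [1.0]])
    return float(x @ coef)
```

The same train/estimate split mirrors the claimed apparatus pair: the model generation unit corresponds to `train_weight_model`, and the weight estimation unit to `estimate_weight`.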
1. A method of generating a larger neural network from a smaller neural network, the method comprising: obtaining data specifying an original neural network, the original neural network being configured to generate neural network outputs from neural network inputs, the original neural network having an original neural network structure comprising a plurality of original neural network units, each original neural network unit having respective parameters, and each of the parameters of each of the original neural network units having a respective original value; generating a larger neural network from the original neural network, the larger neural network having a larger neural network structure comprising: (i) the plurality of original neural network units, and (ii) a plurality of additional neural network units not in the original neural network structure, each additional neural network unit having respective parameters; initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network; and training the larger neural network to determine trained values of the parameters of the original neural network units and the additional neural network units from the initialized values. 2. The method of claim 1, further comprising: training the original neural network to determine the original values of the parameters of the original neural network. 3. The method of claim 2, wherein the original neural network structure comprises a first original neural network layer having a first number of original units, and wherein generating the larger neural network comprises: adding a plurality of additional neural network units to the first original neural network layer to generate a larger neural network layer. 4. 
The method of claim 3, wherein initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network comprises: initializing the values of the parameters of the original neural network units in the larger neural network layer to the respective original values for the parameters; and for each additional neural network unit in the larger neural network layer: selecting an original neural network unit in the original neural network layer, and initializing the values of the parameters of the additional neural network unit to be the same as the respective original values for the selected original neural network unit. 5. The method of claim 4, wherein selecting an original neural network unit in the original neural network layer comprises: randomly selecting an original neural network unit from the original neural network units in the original neural network layer. 6. 
The method of claim 4, wherein: in the original neural network structure, a second original neural network layer is configured to receive as input outputs generated by the first original neural network layer; in the larger neural network structure, the second original neural network layer is configured to receive as input outputs generated by the larger neural network layer; and initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network comprises: initializing the values of the parameters of the original neural network units in the second original neural network layer so that, for a given neural network input, the second neural network layer generates the same output in both the original neural network structure and the larger neural network structure. 7. The method of claim 6, wherein the original neural network structure comprises a third original neural network layer configured to receive a third original layer input and generate a third original layer output from the third layer input, and wherein generating the larger neural network comprises: replacing the third original neural network layer with a first additional neural network layer having additional neural network units and a second additional neural network layer having additional neural network units, wherein: the first additional neural network layer is configured to receive the third original layer input and generate a first additional layer output from the third original layer input, and the second additional neural network layer is configured to receive the first additional layer output and generate a second additional layer output from the first additional layer output. 8. 
The method of claim 7, wherein initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network comprises: initializing the values of the parameters of the additional neural network units in the first additional neural network layer and in the second additional neural network layer so that, for the same neural network input, the second additional layer output is the same as the third original layer output. 9. The method of claim 7, wherein initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network comprises: initializing the values of the parameters of the additional neural network units in the first additional neural network layer using the respective original values for the parameters of the original neural network units in the third original neural network layer. 10. 
A system comprising one or more computers and one or more storage devices storing instructions that are operable, when executed by the one or more computers, to cause the one or more computers to perform operations comprising: obtaining data specifying an original neural network, the original neural network being configured to generate neural network outputs from neural network inputs, the original neural network having an original neural network structure comprising a plurality of original neural network units, each original neural network unit having respective parameters, and each of the parameters of each of the original neural network units having a respective original value; generating a larger neural network from the original neural network, the larger neural network having a larger neural network structure comprising: (i) the plurality of original neural network units, and (ii) a plurality of additional neural network units not in the original neural network structure, each additional neural network unit having respective parameters; initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network; and training the larger neural network to determine trained values of the parameters of the original neural network units and the additional neural network units from the initialized values. 11. The system of claim 10, further comprising: training the original neural network to determine the original values of the parameters of the original neural network. 12. 
The system of claim 11, wherein the original neural network structure comprises a first original neural network layer having a first number of original units, and wherein generating the larger neural network comprises: adding a plurality of additional neural network units to the first original neural network layer to generate a larger neural network layer. 13. The system of claim 12, wherein initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network comprises: initializing the values of the parameters of the original neural network units in the larger neural network layer to the respective original values for the parameters; and for each additional neural network unit in the larger neural network layer: selecting an original neural network unit in the original neural network layer, and initializing the values of the parameters of the additional neural network unit to be the same as the respective original values for the selected original neural network unit. 14. The system of claim 13, wherein selecting an original neural network unit in the larger neural network layer comprises: randomly selecting an original neural network unit from the original neural network units in the original neural network layer. 15. 
The system of claim 13, wherein: in the original neural network structure, a second original neural network layer is configured to receive as input outputs generated by the first original neural network layer; in the larger neural network structure, the second original neural network layer is configured to receive as input outputs generated by the larger neural network layer; and initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network comprises: initializing the values of the parameters of the original neural network units in the second original neural network layer so that, for a given neural network input, the second neural network layer generates the same output in both the original neural network structure and the larger neural network structure. 16. The system of claim 15, wherein the original neural network structure comprises a third original neural network layer configured to receive a third original layer input and generate a third original layer output from the third layer input, and wherein generating the larger neural network comprises: replacing the third original neural network layer with a first additional neural network layer having additional neural network units and a second additional neural network layer having additional neural network units, wherein: the first additional neural network layer is configured to receive the third original layer input and generate a first additional layer output from the third original layer input, and the second additional neural network layer is configured to receive the first additional layer output and generate a second additional layer output from the first additional layer output. 17. 
A computer storage medium encoded with instructions that, when executed by one or more computers, cause the one or more computers to perform operations comprising: obtaining data specifying an original neural network, the original neural network being configured to generate neural network outputs from neural network inputs, the original neural network having an original neural network structure comprising a plurality of original neural network units, each original neural network unit having respective parameters, and each of the parameters of each of the original neural network units having a respective original value; generating a larger neural network from the original neural network, the larger neural network having a larger neural network structure comprising: (i) the plurality of original neural network units, and (ii) a plurality of additional neural network units not in the original neural network structure, each additional neural network unit having respective parameters; initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network; and training the larger neural network to determine trained values of the parameters of the original neural network units and the additional neural network units from the initialized values. 18. The computer storage medium of claim 17, further comprising: training the original neural network to determine the original values of the parameters of the original neural network. 19. 
The computer storage medium of claim 18, wherein the original neural network structure comprises a first original neural network layer having a first number of original units, and wherein generating the larger neural network comprises: adding a plurality of additional neural network units to the first original neural network layer to generate a larger neural network layer. 20. The computer storage medium of claim 19, wherein initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same neural network outputs from the same neural network inputs as the original neural network comprises: initializing the values of the parameters of the original neural network units in the larger neural network layer to the respective original values for the parameters; and for each additional neural network unit in the larger neural network layer: selecting an original neural network unit in the original neural network layer, and initializing the values of the parameters of the additional neural network unit to be the same as the respective original values for the selected original neural network unit.
Please help me write a proper abstract based on the patent claims.
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating a larger neural network from a smaller neural network. In one aspect, a method includes obtaining data specifying an original neural network; generating a larger neural network from the original neural network, wherein the larger neural network has a larger neural network structure including the plurality of original neural network units and a plurality of additional neural network units not in the original neural network structure; initializing values of the parameters of the original neural network units and the additional neural network units so that the larger neural network generates the same outputs from the same inputs as the original neural network; and training the larger neural network to determine trained values of the parameters of the original neural network units and the additional neural network units from the initialized values.
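The function-preserving initialization described in claims 3-9 can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the patented implementation: it assumes a small ReLU MLP, and the function names (`widen`, `identity_layer`) are invented for the example. Each new unit copies a randomly selected original unit and the next layer's incoming weights are divided by each unit's replication count, so the widened network computes exactly the same outputs (claims 3-6); a new layer initialized to the identity likewise leaves every output unchanged (claims 7-9).

```python
import numpy as np

def forward(x, layers):
    """Simple MLP: each layer is (W, b); hidden layers use ReLU, the last is linear."""
    h = x
    for W, b in layers[:-1]:
        h = np.maximum(h @ W + b, 0.0)
    W, b = layers[-1]
    return h @ W + b

def widen(W1, b1, W2, n_new, seed=0):
    """Add n_new units to a hidden layer without changing the network's outputs.
    New units copy randomly selected original units; the next layer's weights
    are divided by each unit's replication count so summed contributions match."""
    rng = np.random.default_rng(seed)
    d_h = W1.shape[1]
    picks = rng.integers(0, d_h, size=n_new)          # random unit to copy per new unit
    counts = np.bincount(picks, minlength=d_h) + 1.0  # copies of each original unit
    W1_big = np.concatenate([W1, W1[:, picks]], axis=1)
    b1_big = np.concatenate([b1, b1[picks]])
    W2_scaled = W2 / counts[:, None]
    W2_big = np.concatenate([W2_scaled, W2_scaled[picks]], axis=0)
    return W1_big, b1_big, W2_big

def identity_layer(d):
    """A new ReLU layer initialized to the identity: with non-negative inputs,
    ReLU(h @ I + 0) == h, so inserting it after an existing ReLU layer is
    function-preserving."""
    return np.eye(d), np.zeros(d)
```

After either transformation the larger network produces the same outputs from the same inputs as the original, and ordinary training then proceeds from the initialized values.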
1. A computer-implemented system for real time topic detection in a social media message, wherein information structure of the social media message includes a topic and a comment, and wherein the topic is an R-expression (Referential expression) that restricts the information structure of an event described by the social media message, comprising: a knowledge base of keywords used to ingest the social media message; a partial parser deriving a syntax-semantic parse tree; a topic calculator compositionally deriving the topic of the social media message by computing a topic value for given entities in the event described by the social media message, wherein the topic value is derived from a first set of rules assigning a Restrictor R-value to prominent R-expressions compositionally in the syntax-semantic parse tree and a second set of rules assigning a numeric Strength S-value to the R-expressions according to whether or not they are part of anaphoric chains in the social media message, and whether or not the R-expressions include name entities that are part of the knowledge base of keywords. 2. The computer-implemented system for real time topic detection in the social media message according to claim 1, further including an inference engine reducing uncertainty in results of the topic calculator. 3. The computer-implemented system for real time topic detection in the social media message according to claim 2, wherein the inference engine includes a data structure and a set of inference rules. 4. The computer-implemented system for real time topic detection in the social media message according to claim 1, wherein the topic value of the social media message is associated with a strength value 1 to 3, where 1 is the lowest strength and 3 is the highest strength. 5. 
The computer-implemented system for real time topic detection in the social media message according to claim 4, wherein the strength value is the sum of the numeric Strength S-values of the second set of rules.
Please help me write a proper abstract based on the patent claims.
A computer-implemented system for real time topic detection in a social media message includes a knowledge base of keywords used to ingest the social media message and a partial parser deriving a syntax-semantic parse tree. The system also includes a topic calculator compositionally deriving the topic of the social media message by computing a topic value for given entities in the event described by the social media message. The topic value is derived from a first set of rules assigning a Restrictor R-value to prominent R-expressions compositionally in the syntax-semantic parse tree and a second set of rules assigning a numeric Strength S-value to the R-expressions according to whether or not they are part of anaphoric chains in the social media message, and whether or not the R-expressions include named entities that are part of the knowledge base of keywords.
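The S-value assignment in the second set of rules and the 1-to-3 strength scale of claims 4-5 might look roughly like the following sketch. The rule weights, the toy knowledge base, and all names here are assumptions made for illustration; the claims do not disclose the actual rule values.

```python
# Toy knowledge base of keywords (illustrative stand-in for the system's).
KNOWLEDGE_BASE = {"acme corp", "paris"}

def s_value(r_expression, in_anaphoric_chain):
    """Assign a numeric Strength S-value to one R-expression.
    The weights (1 for anaphoric-chain membership, 2 for a named entity
    found in the knowledge base) are invented for this sketch."""
    s = 0
    if in_anaphoric_chain:
        s += 1
    if r_expression.lower() in KNOWLEDGE_BASE:
        s += 2
    return s

def topic_strength(r_expressions):
    """Strength value = sum of the S-values (claim 5), clamped to the
    1..3 range of claim 4 (1 lowest, 3 highest)."""
    total = sum(s_value(expr, chain) for expr, chain in r_expressions)
    return max(1, min(3, total))
```

For example, an R-expression that is both part of an anaphoric chain and a known named entity reaches the maximum strength of 3, while an isolated unknown expression bottoms out at 1.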
1. A method for training an encoder and a decoder, the method comprising: obtaining a set of training content; and for a compression model including an encoding portion, a decoding portion, and a discriminator portion: repeatedly backpropagating one or more error terms obtained from a loss function to update a set of parameters of the encoding portion and the decoding portion, wherein the loss function includes: a reconstruction loss indicating a dissimilarity between the training content and reconstructed content, wherein the reconstructed content is generated by applying the encoding portion to the training content to generate tensors for the training content, and applying the decoding portion to the tensors to generate the reconstructed content, and a discriminator loss indicating a cost of generating incorrect discrimination predictions generated by applying the discriminator portion to input content, wherein the input content includes the training content and the reconstructed content, and wherein the discrimination predictions indicate likelihoods of whether the input content is a reconstructed version of the training content; and stopping the backpropagation after the loss function satisfies a predetermined criteria. 2. The method of claim 1, wherein the discriminator portion is coupled to receive each of the training content and the reconstructed content individually as the input content, and wherein the discrimination predictions indicate likelihoods that the input content is the reconstructed version of the training content. 3. The method of claim 1, wherein the discriminator portion is coupled to receive ordered pairs of the training content and the corresponding reconstructed content as the input content, and wherein the discrimination predictions indicate which content in the ordered pairs is the reconstructed version of the training content. 4. 
The method of claim 3, wherein the ordered pairs of input content include a first pair having first training content as a first element and first reconstructed content as a second element, and a second pair having second reconstructed content as the first element and second training content as the second element. 5. The method of claim 1, wherein the discriminator portion includes a neural network model, and the discrimination predictions are generated by combining outputs from one or more intermediate layers of the neural network model. 6. The method of claim 1, wherein the loss function further includes: a codelength regularization loss indicating a cost of code lengths for compressed codes generated by applying an entropy coding technique to the tensors, wherein the codelength regularization loss is determined based on magnitudes of elements of the tensors for the training content. 7. The method of claim 1, further comprising: repeatedly backpropagating one or more error terms obtained from the discriminator loss to update a set of parameters of the discriminator portion while fixing the set of parameters of the encoding portion and the decoding portion; and stopping the backpropagation after the discriminator loss satisfies a predetermined criteria. 8. 
The method of claim 7, further comprising: responsive to determining that an accuracy of the discrimination predictions are above a first threshold, repeatedly backpropagating the error terms obtained from the loss function to update the set of parameters of the encoding portion and the decoding portion for one or more iterations, responsive to determining that the accuracy of the discrimination predictions are below a second threshold, repeatedly backpropagating the error terms obtained from the discriminator loss to update the set of parameters of the discriminator portion for one or more iterations, and responsive to determining that the accuracy of the discrimination predictions are between the first threshold and the second threshold, alternating between backpropagating the error terms obtained from the loss function to update the set of parameters of the encoding portion and the decoding portion and backpropagating the error terms obtained from the discriminator loss to update the set of parameters of the discriminator portion. 9. 
A decoder stored on a computer readable storage medium, wherein the decoder is manufactured by a process comprising: obtaining a set of training content; for a compression model including an encoding portion, a decoding portion, and a discriminator portion, repeatedly backpropagating one or more error terms obtained from a loss function to update a set of parameters of the encoding portion and the decoding portion, wherein the loss function includes: a reconstruction loss indicating a dissimilarity between the training content and reconstructed content, wherein the reconstructed content is generated by applying the encoding portion to the training content to generate tensors for the training content, and applying the decoding portion to the tensors to generate the reconstructed content, and a discriminator loss indicating a cost of generating incorrect discrimination predictions generated by applying the discriminator portion to input content, wherein the input content includes the training content and the reconstructed content, and wherein the discrimination predictions indicate likelihoods of whether the input content is a reconstructed version of the training content; and stopping the backpropagation after the loss function satisfies a predetermined criteria; and storing the set of parameters of the decoding portion on the computer readable storage medium as the parameters of the decoder, wherein the decoder is coupled to receive a compressed code for content and output a reconstructed version of the content using the stored parameters. 10. The decoder of claim 9, wherein the discriminator portion is coupled to receive each of the training content and the reconstructed content individually as the input content, and wherein the discrimination predictions indicate likelihoods that the input content is the reconstructed version of the training content. 11. 
The decoder of claim 9, wherein the discriminator portion is coupled to receive ordered pairs of the training content and the corresponding reconstructed content as the input content, and wherein the discrimination predictions indicate which content in the ordered pairs is the reconstructed version of the training content. 12. The decoder of claim 11, wherein the ordered pairs of input content include a first pair having first training content as a first element and first reconstructed content as a second element, and a second pair having second reconstructed content as the first element and second training content as the second element. 13. The decoder of claim 9, wherein the discriminator portion includes a neural network model, and the discrimination predictions are generated by combining outputs from one or more intermediate layers of the neural network model. 14. The decoder of claim 9, wherein the loss function further includes: a codelength regularization loss indicating a cost of code lengths for compressed codes generated by applying an entropy coding technique to the tensors, wherein the codelength regularization loss is determined based on magnitudes of elements of the tensors for the training content. 15. The decoder of claim 9, further comprising: repeatedly backpropagating one or more error terms obtained from the discriminator loss to update a set of parameters of the discriminator portion while fixing the set of parameters of the encoding portion and the decoding portion; and stopping the backpropagation after the discriminator loss satisfies a predetermined criteria. 16. 
The decoder of claim 15, further comprising: responsive to determining that an accuracy of the discrimination predictions are above a first threshold, repeatedly backpropagating the error terms obtained from the loss function to update the set of parameters of the encoding portion and the decoding portion for one or more iterations, responsive to determining that the accuracy of the discrimination predictions are below a second threshold, repeatedly backpropagating the error terms obtained from the discriminator loss to update the set of parameters of the discriminator portion for one or more iterations, and responsive to determining that the accuracy of the discrimination predictions are between the first threshold and the second threshold, alternating between backpropagating the error terms obtained from the loss function to update the set of parameters of the encoding portion and the decoding portion and backpropagating the error terms obtained from the discriminator loss to update the set of parameters of the discriminator portion. 17. 
An encoder stored on a computer readable storage medium, wherein the encoder is manufactured by a process comprising: obtaining a set of training content; for a compression model including an encoding portion, a decoding portion, and a discriminator portion, repeatedly backpropagating one or more error terms obtained from a loss function to update a set of parameters of the encoding portion and the decoding portion, wherein the loss function includes: a reconstruction loss indicating a dissimilarity between the training content and reconstructed content, wherein the reconstructed content is generated by applying the encoding portion to the training content to generate tensors for the training content, and applying the decoding portion to the tensors to generate the reconstructed content, and a discriminator loss indicating a cost of generating incorrect discrimination predictions generated by applying the discriminator portion to input content, wherein the input content includes the training content and the reconstructed content, and wherein the discrimination predictions indicate likelihoods of whether the input content is a reconstructed version of the training content; and stopping the backpropagation after the loss function satisfies a predetermined criteria; and storing the set of parameters of the encoding portion on the computer readable storage medium as the parameters of the encoder, wherein the encoder is coupled to receive content and output a compressed code for the content using the stored parameters. 18. The encoder of claim 17, wherein the discriminator portion is coupled to receive each of the training content and the reconstructed content individually as the input content, and wherein the discrimination predictions indicate likelihoods that the input content is the reconstructed version of the training content. 19. 
The encoder of claim 17, wherein the discriminator portion is coupled to receive ordered pairs of the training content and the corresponding reconstructed content as the input content, and wherein the discrimination predictions indicate which content in the ordered pairs is the reconstructed version of the training content. 20. The encoder of claim 19, wherein the ordered pairs of input content include a first pair having first training content as a first element and first reconstructed content as a second element, and a second pair having second reconstructed content as the first element and second training content as the second element. 21. The encoder of claim 17, wherein the discriminator portion includes a neural network model, and the discrimination predictions are generated by combining outputs from one or more intermediate layers of the neural network model. 22. The encoder of claim 17, wherein the loss function further includes: a codelength regularization loss indicating a cost of code lengths for compressed codes generated by applying an entropy coding technique to the tensors, wherein the codelength regularization loss is determined based on magnitudes of elements of the tensors for the training content. 23. The encoder of claim 17, further comprising: repeatedly backpropagating one or more error terms obtained from the discriminator loss to update a set of parameters of the discriminator portion while fixing the set of parameters of the encoding portion and the decoding portion; and stopping the backpropagation after the discriminator loss satisfies a predetermined criteria. 24. 
The encoder of claim 23, further comprising: responsive to determining that an accuracy of the discrimination predictions are above a first threshold, repeatedly backpropagating the error terms obtained from the loss function to update the set of parameters of the encoding portion and the decoding portion for one or more iterations, responsive to determining that the accuracy of the discrimination predictions are below a second threshold, repeatedly backpropagating the error terms obtained from the discriminator loss to update the set of parameters of the discriminator portion for one or more iterations, and responsive to determining that the accuracy of the discrimination predictions are between the first threshold and the second threshold, alternating between backpropagating the error terms obtained from the loss function to update the set of parameters of the encoding portion and the decoding portion and backpropagating the error terms obtained from the discriminator loss to update the set of parameters of the discriminator portion. 25. 
A method for training an encoder and a decoder, comprising: obtaining training content and downsampling the training content to generate a set of training content each associated with a corresponding scale in a set of scales; and for a compression model including a set of autoencoder and discriminator pairs: for each autoencoder and discriminator pair associated with a corresponding scale, repeatedly backpropagating one or more error terms obtained from a loss function to update a set of parameters of the autoencoder in the pair, wherein the loss function includes: a reconstruction loss indicating a dissimilarity between the training content and reconstructed content, wherein the reconstructed content is generated by combining a set of content generated by applying the autoencoders of the set of pairs to the corresponding set of training content, and a discriminator loss indicating a cost of incorrect discrimination predictions generated by applying the discriminator of the pair to input content, wherein the input content includes the training content associated with the scale and content generated by applying the autoencoder of the pair to the training content associated with the scale, and wherein the discrimination predictions indicate likelihoods of whether the input content is a reconstructed version of the training content; and stopping the backpropagation after the loss function satisfies a predetermined criteria.
Please help me write a proper abstract based on the patent claims.
The compression system trains a machine-learned encoder and decoder through an autoencoder architecture. The encoder can be deployed by a sender system to encode content for transmission to a receiver system, and the decoder can be deployed by the receiver system to decode the encoded content and reconstruct the original content. The encoder is coupled to receive content and output a tensor as a compact representation of the content. The content may be, for example, images, videos, or text. The decoder is coupled to receive a tensor representing content and output a reconstructed version of the content. The compression system trains the autoencoder with a discriminator to reduce compression artifacts in the reconstructed content. The discriminator is coupled to receive one or more input content, and output a discrimination prediction that discriminates whether the input content is the original or reconstructed version of the content.
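The training schedule in claim 8 (mirrored in claims 16 and 24) is essentially a three-way branch on discriminator accuracy. A minimal sketch of that control flow, with threshold values chosen arbitrarily since the claims leave them unspecified:

```python
def select_update(disc_accuracy, iteration,
                  first_threshold=0.8, second_threshold=0.55):
    """Decide which parameter set to update this iteration (per claim 8).

    - Accuracy above the first threshold: the discriminator is winning,
      so update the encoder/decoder against the full loss.
    - Accuracy below the second threshold: the discriminator is losing,
      so update the discriminator alone (per claim 7, with the
      encoder/decoder parameters fixed).
    - In between: alternate between the two on successive iterations.
    Threshold values are illustrative assumptions.
    """
    if disc_accuracy > first_threshold:
        return "autoencoder"
    if disc_accuracy < second_threshold:
        return "discriminator"
    return "autoencoder" if iteration % 2 == 0 else "discriminator"
```

Keeping the two players balanced this way is a standard stabilization tactic for adversarial training: neither the autoencoder nor the discriminator is allowed to run far ahead of the other.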
1. A method comprising: receiving a request from a content-providing user of an online system to generate a content item to be presented to one or more viewing users of the online system, the request specifying one or more modifications to an appearance of content received from the content-providing user; generating a plurality of content item instances of the content item, each of the plurality of content item instances including a different set of the one or more modifications to the appearance of the content specified in the request; generating an identifier for each of the plurality of content item instances, each identifier associated with the set of the one or more modifications to the appearance of the content included in the content item instance; presenting the plurality of content item instances to a subset of the one or more viewing users of the online system; tracking a performance metric associated with impressions of each of the plurality of content item instances using the identifier associated with the set of the one or more modifications to the appearance of the content included in each of the plurality of content item instances; identifying one or more pairs of the plurality of content item instances; and for each of the one or more pairs of the plurality of content item instances: comparing a first value of the performance metric associated with a first content item instance of the pair to a second value of the performance metric associated with a second content item instance of the pair; determining a difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance based at least in part on the comparing; identifying a subset of the one or more modifications to the appearance of the content to which the difference between the first value of the performance metric associated with the first content item 
instance and the second value of the performance metric associated with the second content item instance is attributable; and predicting an improvement in a value of the performance metric associated with content item instances including the subset of the one or more modifications, based at least in part on the difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance. 2. The method of claim 1, wherein the identifier associated with the set of the one or more modifications to the appearance of the content included in each of the plurality of content item instances comprises a digital watermark, an image fingerprint, or an image hash. 3. The method of claim 1, wherein the one or more modifications to the appearance of the content are selected from a group consisting of: modifying one or more colors of the content, modifying a placement of an element of the content, modifying a size of the content, modifying a size of an element of the content, modifying a color of an element of the content, and any combination thereof. 4. The method of claim 1, wherein the improvement in the value of the performance metric associated with content item instances including the subset of the one or more modifications is predicted by a machine-learned model. 5. The method of claim 1, wherein the request is received from the content-providing user via a tool provided by the online system. 6. The method of claim 5, wherein the one or more modifications are specified using one or more features of the tool. 7. 
The method of claim 6, further comprising: identifying a feature of the tool used to specify the subset of the one or more modifications to the appearance of the content to which the difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance is attributable; and predicting the improvement in the value of the performance metric associated with content item instances including the subset of the one or more modifications based at least in part on the difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance. 8. The method of claim 1, wherein the content comprises one or more selected from a group consisting of: an image, a photograph, text, and any combination thereof. 9. The method of claim 1, wherein the performance metric describes a number of times a content item instance is accessed, a number of times a preference for the content item instance is indicated, a number of installations of an application associated with the content item instance, a number of times an application associated with the content item instance is accessed, a number of purchases of a product associated with the content item instance, a number of purchases of a service associated with the content item instance, a number of views of data associated with the content item instance, a number of conversions associated with the content item instance, a number of subscriptions associated with the content item instance, or a number of interactions with the content item instance. 10. 
The method of claim 1, further comprising: ranking the plurality of content item instances based at least in part on the value of the performance metric associated with each content item instance of the plurality of content item instances; determining an amount of variation in values of the performance metric associated with the plurality of content item instances; responsive to determining the amount of variation in values of the performance metric associated with the plurality of content item instances is at least a threshold amount, identifying an additional subset of the one or more modifications to the appearance of the content to which the amount of variation in values of the performance metric associated with the plurality of content item instances is attributable; and predicting the improvement in the value of the performance metric associated with content item instances including the additional subset of the one or more modifications based at least in part on the ranking and the amount of variation in values of the performance metric associated with the plurality of content item instances. 11. The method of claim 1, further comprising: storing the identifier associated with the set of the one or more modifications to the appearance of the content included in each of the plurality of content item instances in association with each of the plurality of content item instances including the set of the one or more modifications to the appearance of the content. 12. The method of claim 1, further comprising: communicating the predicted improvement in the value of the performance metric associated with content item instances including the subset of the one or more modifications to the content-providing user of the online system. 13. The method of claim 1, further comprising: receiving the content from the content-providing user of the online system. 14. 
A computer program product comprising a computer readable storage medium having instructions encoded thereon that, when executed by a processor, cause the processor to: receive a request from a content-providing user of an online system to generate a content item to be presented to one or more viewing users of the online system, the request specifying one or more modifications to an appearance of content received from the content-providing user; generate a plurality of content item instances of the content item, each of the plurality of content item instances including a different set of the one or more modifications to the appearance of the content specified in the request; generate an identifier for each of the plurality of content item instances, each identifier associated with the set of the one or more modifications to the appearance of the content included in the content item instance; present the plurality of content item instances to a subset of the one or more viewing users of the online system; track a performance metric associated with impressions of each of the plurality of content item instances using the identifier associated with the set of the one or more modifications to the appearance of the content included in each of the plurality of content item instances; identify one or more pairs of the plurality of content item instances; and for each of the one or more pairs of the plurality of content item instances: compare a first value of the performance metric associated with a first content item instance of the pair to a second value of the performance metric associated with a second content item instance of the pair; determine a difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance based at least in part on the comparing; identify a subset of the one or more modifications to the appearance of the 
content to which the difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance is attributable; and predict an improvement in a value of the performance metric associated with content item instances including the subset of the one or more modifications, based at least in part on the difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance. 15. The computer program product of claim 14, wherein the identifier associated with the set of the one or more modifications to the appearance of the content included in each of the plurality of content item instances comprises a digital watermark, an image fingerprint, or an image hash. 16. The computer program product of claim 14, wherein the one or more modifications to the appearance of the content are selected from a group consisting of: modifying one or more colors of the content, modifying a placement of an element of the content, modifying a size of the content, modifying a size of an element of the content, modifying a color of an element of the content, and any combination thereof. 17. The computer program product of claim 14, wherein the improvement in the value of the performance metric associated with content item instances including the subset of the one or more modifications is predicted by a machine-learned model. 18. The computer program product of claim 14, wherein the request is received from the content-providing user via a tool provided by the online system. 19. The computer program product of claim 18, wherein the one or more modifications are specified using one or more features of the tool. 20. 
The computer program product of claim 18, wherein the computer readable storage medium further has instructions encoded thereon that, when executed by the processor, cause the processor to: identify a feature of the tool used to specify the subset of the one or more modifications to the appearance of the content to which the difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance is attributable; and predict the improvement in the value of the performance metric associated with content item instances including the subset of the one or more modifications based at least in part on the difference between the first value of the performance metric associated with the first content item instance and the second value of the performance metric associated with the second content item instance. 21. The computer program product of claim 14, wherein the content comprises one or more selected from a group consisting of: an image, a photograph, text, and any combination thereof. 22. The computer program product of claim 14, wherein the performance metric describes a number of times a content item instance is accessed, a number of times a preference for the content item instance is indicated, a number of installations of an application associated with the content item instance, a number of times an application associated with the content item instance is accessed, a number of purchases of a product associated with the content item instance, a number of purchases of a service associated with the content item instance, a number of views of data associated with the content item instance, a number of conversions associated with the content item instance, a number of subscriptions associated with the content item instance, or a number of interactions with the content item instance. 23. 
The computer program product of claim 14, wherein the computer readable storage medium further has instructions encoded thereon that, when executed by the processor, cause the processor to: rank the plurality of content item instances based at least in part on the value of the performance metric associated with each content item instance of the plurality of content item instances; determine an amount of variation in values of the performance metric associated with the plurality of content item instances; responsive to determining the amount of variation in values of the performance metric associated with the plurality of content item instances is at least a threshold amount, identify an additional subset of the one or more modifications to the appearance of the content to which the amount of variation in values of the performance metric associated with the plurality of content item instances is attributable; and predict the improvement in the value of the performance metric associated with content item instances including the additional subset of the one or more modifications based at least in part on the ranking and the amount of variation in values of the performance metric associated with the plurality of content item instances. 24. The computer program product of claim 14, wherein the computer readable storage medium further has instructions encoded thereon that, when executed by the processor, cause the processor to: store the identifier associated with the set of the one or more modifications to the appearance of the content included in each of the plurality of content item instances in association with each of the plurality of content item instances including the set of the one or more modifications to the appearance of the content. 25. 
The computer program product of claim 14, wherein the computer readable storage medium further has instructions encoded thereon that, when executed by the processor, cause the processor to: communicate the predicted improvement in the value of the performance metric associated with content item instances including the subset of the one or more modifications to the content-providing user of the online system. 26. The computer program product of claim 14, wherein the computer readable storage medium further has instructions encoded thereon that, when executed by the processor, cause the processor to: receive the content from the content-providing user of the online system. 27. A method comprising: receiving a request from a content-providing user of an online system to generate a content item to be presented to one or more viewing users of the online system, the request specifying one or more modifications to an appearance of content received from the content-providing user; generating a plurality of content item instances of the content item, each of the plurality of content item instances including a different set of the one or more modifications to the appearance of the content specified in the request; generating an identifier for each of the plurality of content item instances, each identifier associated with the set of the one or more modifications to the appearance of the content included in the content item instance; presenting the plurality of content item instances to a subset of the one or more viewing users of the online system; tracking a performance metric associated with impressions of each of the plurality of content item instances using the identifier associated with the set of the one or more modifications to the appearance of the content included in each of the plurality of content item instances; ranking the plurality of content item instances based at least in part on a value of the performance metric associated with each content item instance of 
the plurality of content item instances; determining an amount of variation in values of the performance metric associated with the plurality of content item instances; responsive to determining the amount of variation in values of the performance metric associated with the plurality of content item instances is at least a threshold amount, identifying a subset of the one or more modifications to the content to which the amount of variation in values of the performance metric associated with the plurality of content item instances is attributable; and predicting an effect on the value of the performance metric associated with content item instances as a result of including the subset of the one or more modifications in the content item instances, the effect predicted based at least in part on the ranking and the amount of variation in values of the performance metric associated with the plurality of content item instances.
Please help me write a proper abstract based on the patent claims.
An online system receives a request from a user of the online system to generate a content item specifying content (e.g., an image) received from the user and one or more modifications to the appearance of the content to be included in the content item. The online system generates multiple instances of the content item based on the request, in which each instance includes a different set of the specified modifications. Using an identifier that identifies each instance based on the set of modifications to the appearance of the included content (e.g., using an image fingerprint), the online system tracks a performance metric associated with each instance. By comparing the performance metrics associated with the instances, the online system identifies one or more modifications responsible for one or more differences between the performance metrics and predicts an effect on the performance metrics associated with content item instances including the identified modifications.
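The pairwise comparison at the heart of these claims can be sketched in a few lines of Python. The instance names, modification labels, and metric values below are made-up placeholders; the attribution rule (charge the difference to the modifications present in exactly one instance of the pair) is one plausible reading of the claims:

```python
from itertools import combinations

def attribute_differences(instances):
    """For each pair of content item instances, compute the difference in
    the tracked performance metric and attribute it to the modifications
    that differ between the two instances.

    `instances` maps an instance identifier to a dict with `mods` (the
    set of appearance modifications included) and `metric` (e.g. a
    click-through rate tracked per impression)."""
    attributions = []
    for a, b in combinations(instances, 2):
        diff = instances[a]["metric"] - instances[b]["metric"]
        # Modifications present in exactly one instance of the pair are
        # the candidates to which the metric difference is attributable.
        differing_mods = instances[a]["mods"] ^ instances[b]["mods"]
        attributions.append((a, b, diff, differing_mods))
    return attributions

instances = {
    "v1": {"mods": {"red_banner", "large_logo"}, "metric": 0.042},
    "v2": {"mods": {"red_banner"}, "metric": 0.031},
}
results = attribute_differences(instances)
```

Here the 0.011 difference between `v1` and `v2` is attributed to `large_logo`, the one modification the pair does not share.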
1. A method, comprising: obtaining validated training data comprising a first set of content items and a first set of classification tags for the first set of content items; using the validated training data to produce, by one or more computer systems, a statistical model for classifying content using a set of dimensions represented by the first set of classification tags; using the statistical model to generate, by the one or more computer systems, a second set of classification tags for a second set of content items; and outputting, by the one or more computer systems, one or more groupings of the second set of content items by the second set of classification tags to improve understanding of content related to the set of dimensions without requiring a user to manually analyze the second set of content items. 2. The method of claim 1, further comprising: obtaining a validated subset of the second set of classification tags for the second set of content items. 3. The method of claim 2, further comprising: providing the validated subset as additional training data to the statistical model to produce an update to the statistical model; and using the update to generate a third set of classification tags for a third set of content items. 4. The method of claim 2, wherein obtaining the validated subset of the second set of classification tags comprises: displaying the second set of content items and the second set of classification tags in a user interface; and obtaining one or more corrections to the second set of classification tags through the user interface. 5. The method of claim 1, wherein using the validated training data to produce the statistical model comprises: generating a set of features from a content item in the first set of content items; and providing the set of features as input to the statistical model. 6. 
The method of claim 5, wherein the set of features comprises at least one of: one or more n-grams from the content item; a number of characters; a number of units of speech; an average number of units of speech; and a percentage of a character type. 7. The method of claim 5, wherein the set of features comprises profile data for a creator of the content item. 8. The method of claim 1, wherein the set of dimensions comprises a sentiment. 9. The method of claim 1, wherein the set of dimensions comprises a product associated with an online professional network. 10. The method of claim 1, wherein the set of dimensions comprises a value proposition. 11. An apparatus, comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, cause the apparatus to: obtain validated training data comprising a first set of content items and a first set of classification tags for the first set of content items; use the validated training data to produce a statistical model for classifying content using a set of dimensions represented by the first set of classification tags; use the statistical model to generate a second set of classification tags for a second set of content items; and output one or more groupings of the second set of content items by the second set of classification tags to improve understanding of content related to the set of dimensions without requiring a user to manually analyze the second set of content items. 12. The apparatus of claim 11, wherein the memory further stores instructions that, when executed by the one or more processors, cause the apparatus to: obtain a validated subset of the second set of classification tags for the second set of content items; provide the validated subset as additional training data to the statistical model to produce an update to the statistical model; and use the update to generate a third set of classification tags for a third set of content items. 13. 
The apparatus of claim 12, wherein obtaining the validated subset of the second set of classification tags comprises: displaying the second set of content items and the second set of classification tags in a user interface; and obtaining one or more corrections to the second set of classification tags through the user interface. 14. The apparatus of claim 11, wherein using the validated training data to produce the statistical model comprises: generating a set of features from a content item in the first set of content items; and providing the set of features as input to the statistical model. 15. The apparatus of claim 14, wherein the set of features comprises at least one of: one or more n-grams from the content item; a number of characters; a number of units of speech; an average number of units of speech; a percentage of a character type; and profile data for a creator of the content item. 16. The apparatus of claim 11, wherein the set of dimensions comprises a sentiment. 17. The apparatus of claim 11, wherein the set of dimensions comprises a product associated with an online professional network. 18. The apparatus of claim 11, wherein the set of dimensions comprises a value proposition. 19. 
A system, comprising: an analysis non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the system to: obtain validated training data comprising a first set of content items and a first set of classification tags for the first set of content items; use the validated training data to produce a statistical model for classifying content using a set of dimensions represented by the first set of classification tags; and use the statistical model to generate a second set of classification tags for a second set of content items; and a management non-transitory computer-readable medium comprising instructions that, when executed by one or more processors, cause the system to output one or more groupings of the second set of content items by the second set of classification tags to improve understanding of content related to the set of dimensions without requiring a user to manually analyze the second set of content items. 20. The system of claim 19, wherein the analysis non-transitory computer-readable medium further comprises instructions that, when executed by the one or more processors, cause the system to: obtain a validated subset of the second set of classification tags for the second set of content items; provide the validated subset as additional training data to the statistical model to produce an update to the statistical model; and use the update to generate a third set of classification tags for a third set of content items.
Please help me write a proper abstract based on the patent claims.
The disclosed embodiments provide a system for processing data. During operation, the system obtains validated training data containing a first set of content items and a first set of classification tags for the first set of content items. Next, the system uses the validated training data to produce a statistical model for classifying content using a set of dimensions represented by the first set of classification tags. The system then uses the statistical model to generate a second set of classification tags for a second set of content items. Finally, the system outputs one or more groupings of the second set of content items by the second set of classification tags to improve understanding of content related to the set of dimensions without requiring a user to manually analyze the second set of content items.
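A minimal end-to-end sketch of this pipeline — train on validated tags, generate tags for a second set of items, then group the items by tag — might look as follows. The keyword-lookup "model" is a deliberately naive stand-in for a real statistical classifier, and all texts and tag names are invented examples:

```python
from collections import defaultdict

def train_model(validated):
    """Toy 'statistical model': remember, per keyword, the classification
    tag it appeared with in the validated training data."""
    keyword_tags = {}
    for text, tag in validated:
        for word in text.lower().split():
            keyword_tags[word] = tag
    return keyword_tags

def generate_tags(model, items):
    """Predict a tag for each item from its first known keyword,
    falling back to 'other' when nothing matches."""
    return [
        next((model[w] for w in text.lower().split() if w in model), "other")
        for text in items
    ]

def group_by_tags(items, tags):
    """Output groupings of items by their generated tags."""
    groups = defaultdict(list)
    for item, tag in zip(items, tags):
        groups[tag].append(item)
    return dict(groups)

validated = [("great product launch", "positive"),
             ("terrible outage today", "negative")]
model = train_model(validated)
items = ["another outage report", "product update shipped"]
groups = group_by_tags(items, generate_tags(model, items))
```

Corrections to the generated tags (claims 2–4) would simply be appended to `validated` and the model retrained, closing the feedback loop.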
1. A method of computer-assisted pathology report preparation, wherein a computer displays a cursor at a current cursor location in an active window, the method comprising: in response to determining a voice input regarding a report document comprises a command for the computer: determining a current context of the computer, wherein the current context is based at least in part on situational knowledge, the active window and the current cursor location, wherein the situational knowledge includes information regarding at least one of a current user, a current patient, a current case, a current specimen and a current slide; determining at least one instruction based on information stored in long-term knowledge data files, the command and the current context, wherein the information stored in the long-term knowledge data files includes program defined instructions and user taught instructions; and executing the at least one instruction on the computer. 2. The method of claim 1, further comprising in response to determining the voice input comprises dictation, inputting text into the active window at the current cursor location based on the dictation. 3. The method of claim 1, wherein the voice input comprises a combination of dictation and at least one command. 4. The method of claim 1, wherein the current specimen is a first specimen, the method further comprising: receiving a scanned barcode associated with a second specimen; and in response to determining that the second specimen is a different slide of the first specimen, ensuring a text editor is the active window. 5. The method of claim 4, the method comprising, in response to determining the second specimen is different from the first specimen: accessing case information for the second specimen; parsing gross description text; and placing the cursor at a new cursor location in the report document based at least in part on gross description text. 6. 
The method of claim 1, further comprising: accessing case information for a specimen; and generating the report document by loading specimen information and the case information into a report template. 7. The method of claim 6, wherein generating the report document comprises: accessing a specimen list; and for each specimen in the specimen list, adding a specimen header and a diagnosis placeholder into the report document. 8. The method of claim 7, wherein generating the report document comprises automatically typing a diagnosis to replace a diagnosis placeholder based at least in part on a label for a specimen in the specimen list and a gross description. 9. The method of claim 1, further comprising: accessing case information for the report document; and automatically verifying consistency of case information and gross description. 10. The method of claim 9, wherein automatically verifying consistency comprises determining whether a potential gender error exists. 11. The method of claim 9, further comprising, in response to detecting a potential inconsistency, notifying a user of the potential inconsistency. 12. The method of claim 1, wherein the command is a request to finalize and release the report document. 13. The method of claim 12, wherein the at least one instruction comprises instructions to determine that all slides described in a gross description have been scanned. 14. The method of claim 1, wherein the current specimen is a first specimen, the method further comprising: receiving a scanned barcode associated with a second specimen; and in response to receiving the scanned barcode, updating the situational knowledge regarding the second specimen. 15. The method of claim 14, wherein updating the situational knowledge comprises updating at least one of: the current patient, the current case, the current specimen and the current slide. 16. 
The method of claim 1, wherein determining at least one instruction includes accessing at least one dictionary file, wherein entries in the at least one dictionary file describe the program defined instructions and the user taught instructions. 17. The method of claim 16, wherein the user taught instructions include instructions to automatically replace entered text with alternative language. 18. The method of claim 17, wherein automatically replacing the entered text includes: automatically expanding abbreviations; automatically rearranging an order of words in the entered text; automatically adding a tissue source in front of a label; and automatically putting in correct procedures for each header in the report document. 19. A computer readable medium tangibly encoded with a computer program executable by a processor to perform actions comprising: in response to determining a voice input regarding a report document comprises a command for a computer: determining a current context of the computer, wherein the current context is based at least in part on situational knowledge, an active window and a current cursor location, wherein the situational knowledge includes information regarding at least one of a current user, a current patient, a current case, a current specimen and a current slide; determining at least one instruction based on information stored in long-term knowledge data files, the command and the current context, wherein the information stored in the long-term knowledge data files includes program defined instructions and user taught instructions; and executing the at least one instruction on the computer. 20. The computer readable medium of claim 19, wherein the actions further comprise, in response to determining the voice input comprises dictation, inputting text into the active window at the current cursor location based on the dictation.
Please help me write a proper abstract based on the patent claims.
A method of using artificial intelligence (e.g., SMILE) to assist users, such as pathologists and pathologist assistants, in pathology report preparation is described. The method includes the steps of (1) specimen gross examination, submission and dictation, (2) final diagnosis dictation, and (3) Cancer Protocol Templates construction. SMILE “listens” to voice commands, “reads” case/slide information, runs through its algorithms, and engages in report preparation. SMILE performs secretarial tasks, such as typing, error checking, announcing important information, and inputting commands by simulating keystrokes and mouse clicks, enabling the user to focus on the professional tasks at hand. This increases efficiency for the user and decreases reporting errors. Human-SMILE interaction closely resembles human-to-human interaction, mediated by voice recognition technology and text-to-speech. Keyboard and mouse usage is significantly reduced in comparison to either human transcription or voice recognition without SMILE.
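The core dispatch step — deciding whether a voice input is dictation or a command, then resolving the command against the current context and long-term knowledge — can be sketched as below. The `command:` prefix, the window name, and the instruction strings are illustrative assumptions, since the claims leave the command-detection mechanism itself open:

```python
def handle_voice_input(voice_input, context, long_term_knowledge):
    """Dispatch a voice input either as dictation (text inserted at the
    current cursor location in the active window) or as a command
    resolved against the current context and a dictionary of
    program-defined and user-taught instructions."""
    if voice_input.startswith("command:"):
        command = voice_input[len("command:"):].strip()
        # Instructions are keyed by (command, context) so the same words
        # can mean different things in different windows.
        key = (command, context["active_window"])
        instruction = long_term_knowledge.get(key, "unknown-instruction")
        return ("execute", instruction)
    # Anything else is dictation: insert the text at the cursor.
    return ("insert_text", context["active_window"],
            context["cursor"], voice_input)

# Hypothetical long-term knowledge entry and current context.
knowledge = {("sign out case", "text_editor"): "finalize_and_release_report"}
context = {"active_window": "text_editor", "cursor": 42}
```

In the actual system the context would also carry the situational knowledge (current user, patient, case, specimen, slide) updated by barcode scans.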
1. A machine learning apparatus which can communicate with a winding machine and which learns an operation for forming a coil by the winding machine, comprising: a state observing unit for observing a state variable comprised of at least one of an actual dimension value, a resistance actual value, and a wire rod used amount of a coil formed by the winding machine, and a program execution time actual value, and at least one of a dimension command value, a resistance command value, a turn number command value, a winding speed command value, and a tension command value for the coil, which are instructed by a program for the winding machine, and an execution time command value for the program; and a learning unit for learning by linking at least one of an actual dimension value, a resistance actual value, and a wire rod used amount of the coil observed by the state observing unit, and a program execution time actual value to at least one of a dimension command value, a resistance command value, a turn number command value, a winding speed command value, and a tension command value for the coil observed by the state observing unit, and an execution time command value for the program. 2. The machine learning apparatus according to claim 1, wherein the learning unit comprises: a reward computing unit for computing a reward based on at least one of an actual dimension value, a resistance actual value, and a wire rod used amount of the coil observed by the state observing unit, and a program execution time actual value; and a function updating unit for updating a function for deciding, from the state variable at present, based on the reward computed by the reward computing unit, at least one of a dimension command value, a resistance command value, a turn number command value, a winding speed command value, and a tension command value for the coil, and an execution time command value for the program. 3. 
The machine learning apparatus according to claim 1, comprising a decision-making unit for deciding, from the state variable at present, based on the result of learning of the learning unit, an optimal value of at least one of a dimension command value, a resistance command value, a turn number command value, a winding speed command value, and a tension command value for the coil, and an execution time command value for the program. 4. The machine learning apparatus according to claim 1, wherein the reward computing unit increases a reward when an actual dimension value, a resistance actual value, and a wire rod used amount of a coil, and a program execution time actual value remain within their respective allowable ranges, and decreases a reward when the same are outside of the allowable ranges. 5. The machine learning apparatus according to claim 1, wherein the learning unit computes a state variable observed by the state observing unit in a multilayer structure, to update the function on a real-time basis. 6. The machine learning apparatus according to claim 1, wherein the function of the function updating unit is updated using a function updated by a function updating unit of another machine learning apparatus. 7. A coil producing apparatus comprising the machine learning apparatus according to claim 1.
Please help me write a proper abstract based on the patent claims.
A machine learning apparatus includes a state observing unit for observing a state variable comprised of at least one of an actual dimension value, a resistance actual value, etc., and at least one of a dimension command value, a resistance command value, etc., and an execution time command value for a program, and a learning unit for performing a learning operation by linking at least one of an actual dimension value, a resistance actual value, etc., to at least one of a dimension command value, a resistance command value, etc., observed by the state observing unit, and an execution time command value for the program.
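The reward logic recited in claims 2 and 4 above can be sketched in a few lines: the reward increases when the observed actual values remain within their allowable ranges and decreases otherwise, and a value function over command settings is nudged by that reward. All names, ranges, and the learning rate below are illustrative assumptions, not taken from the claims.

```python
# Hypothetical sketch of the claimed reward scheme (claims 2 and 4):
# +1 per observed actual value inside its allowable range, -1 otherwise,
# then a simple tabular value update toward the observed reward.

ALLOWABLE = {
    "dimension_mm": (9.9, 10.1),    # coil actual dimension value
    "resistance_ohm": (4.8, 5.2),   # resistance actual value
    "exec_time_s": (0.0, 30.0),     # program execution time actual value
}

def compute_reward(actuals):
    """Increase the reward for in-range actuals, decrease it otherwise."""
    reward = 0
    for name, value in actuals.items():
        lo, hi = ALLOWABLE[name]
        reward += 1 if lo <= value <= hi else -1
    return reward

def update_value(values, command, reward, alpha=0.1):
    """Nudge the stored value of a command setting toward the reward."""
    old = values.get(command, 0.0)
    values[command] = old + alpha * (reward - old)
    return values
```

A decision-making unit as in claim 3 would then pick the command settings with the highest stored value.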
1. A semi- or fully automated, integrated learning and labeling and classification learning system with closed, self-sustaining pattern recognition, labeling and classification operation, comprising: circuitry configured to implement a machine learning classifier; select unclassified data sets and convert the unclassified data sets into an assembly of graphic and text data forming compound data sets to be classified, wherein, by generated feature vectors of training data sets, the machine learning classifier is trained for improving the classification operation of the automated system generically during training as a measure of the classification performance, if the automated labeling and classification system is applied to unlabeled and unclassified data sets, and wherein unclassified data sets are classified by applying the machine learning classifier of the system to the compound data set of the unclassified data sets; generate training data sets, wherein for each data set of selected test data sets, a feature vector is generated comprising a plurality of labeled features associated with the different selected test data sets; generate a two-dimensional confusion matrix based on the feature vector of the test data sets, wherein a first dimension of the two-dimensional confusion matrix comprises pre-processed labeled features of the feature vectors of the test data sets and a second dimension of the two-dimensional confusion matrix comprises classified and verified features of the feature vectors of the test data sets by applying the machine learning classifier to the test data sets; and in case an inconsistently or wrongly classified test data set and/or feature of a test data set is detected, assign the inconsistently or wrongly classified test data set and/or feature of the test data set to the training data sets, and generate additional training data sets based on the confusion matrix, which are added to the training data sets for filling in the gaps in the 
training data sets and improving the measurable performance of the system. 2. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured such that the machine learning classifier comprises at least a scalable Naive Bayes classifier based on a linear number of parameters in the number of features and predictors, respectively. 3. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured such that the machine learning classifier comprises a non-probabilistic, binary, linear, support vector machines classifier and/or a non-parametric k-Nearest Neighbors classifier, and/or an exponential, probabilistic, max entropy classifier, and/or decision tree classifier based on a finite set of values, and/or Balanced Winnow classifier, and/or deep learning classifiers using multiple processing layers composed of multiple linear and non-linear transformations. 4. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured such that the machine learning classifier applies unigrams and bigrams, and/or a combination of unigrams and bigrams or n-grams to the machine learning classifier. 5. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured to apply distribution scaling to the data sets scaling word counts so that pages with a small number of words are not underrepresented. 6. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured to boost a probability of words that are unique for a certain class as compared to other words that occur relatively frequently in other classes. 7. 
The automated learning and labeling and classification system according to claim 1, wherein the circuitry is configured to ignore a given page of a data set if the given page comprises little or no relevant text compared to average pages, and the label of the previous page is assigned during inference. 8. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured to filter out data sets with spikes as representing unlikely scenarios. 9. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured to accept a selection of defined features to be ignored by the machine learning classifier. 10. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured such that the machine learning classifier comprises at least a population of separate rule sets, such that the learning operation recombines and reproduces the best of these rule sets, and/or the machine learning classifier comprises a single set of rules in a defined population, and such that the generic learning operation selects the best classifiers within that set. 11. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured to have a predefined threshold value for a performance strength-based and/or accuracy-based classification of the operation performance. 12. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured to convert the selected unclassified data sets to an assembly of graphic and text data forming a compound data set to be classified, and to pre-process the unclassified data sets by optical character recognition converting images of typed, handwritten or printed text into machine-encoded text. 13. 
The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured to convert the selected unclassified data sets to an assembly of graphic and text data forming a compound data set to be classified, to pre-process and store the graphic data as raster graphics images in tagged image file format, and to store the text data in plain text format or rich text format. 14. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured such that each feature vector comprises a plurality of invariant features associated with a specific data set or an area of interest of a data set. 15. The automated learning, labeling and classification system according to claim 14, wherein the circuitry is configured such that the invariant features of the graphic data of the compound data set of the specific data set comprise scale invariant, rotation invariant, and position invariant features. 16. The automated learning, labeling and classification system according to claim 14, wherein the circuitry is configured such that the area of interest comprises a representation of at least a portion of a subject object within the image or graphic data of the compound data set of the specific data set, the representation comprising at least one of an object axis, an object base point, or an object tip point, and wherein the invariant features comprise at least one of a normalized object length, a normalized object width, a normalized distance from an object base point to a center of a portion of the image or graphic data, an object or portion radius, a number of detected distinguishable parts of the portion or the object, a number of detected features pointing in the same direction, a number of features pointing in the opposite direction of a specified feature, or a number of detected features perpendicular to a specified feature. 17. 
The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured such that the pre-processed labeled features of the feature vectors of the test data sets comprise manually labeled pre-processed features of the feature vectors of the test data sets as a verified gold standard. 18. The automated learning, labeling and classification system according to claim 1, wherein the circuitry is configured, in case that an inconsistently or wrongly classified test data set and/or feature of a test data set is detected, to assign the inconsistently or wrongly classified test data set and/or feature of the test data set to the training data sets if comparable training data sets are triggered within the training data sets based on the confusion matrix, and to create a new labeling feature of the recognizable feature vector if no comparable training data sets are triggered within the training data sets. 19. A fully or partially automated, integrated learning, labeling and classification learning method for closed, self-sustaining pattern recognition, labeling and classification operation, comprising: circuitry configured to implement a machine learning classifier; select unclassified data sets and convert the unclassified data sets into an assembly of graphic and text data forming a compound data set to be classified, wherein, by feature vectors of training data sets, the machine learning classifier is trained for generically improving the classification operation of the automated system during training as a measure of the classification performance if the automated labeling and classification system is applied to unclassified data sets, and to classify unclassified data sets by applying the machine learning classifier of the system to the compound data set; generate training data sets, wherein for each data set of selected test data sets, a feature vector is generated comprising a plurality of labeled features associated with 
the different selected test data sets; generate a two-dimensional confusion matrix based on the feature vector of the test data sets, wherein a first dimension of the two-dimensional confusion matrix comprises pre-processed labeled features of the feature vectors of the test data sets, and a second dimension of the two-dimensional confusion matrix comprises classified and verified features of the feature vectors of the test data sets by applying the machine learning classifier to the test data sets; and in case that an inconsistently or wrongly classified test data set and/or feature of a test data set is detected, assign the inconsistently or wrongly classified test data set and/or feature of the test data set to the training data sets, and generate additional training data sets, based on the confusion matrix by means of the system, which are added to the training data sets, thereby filling in the gaps in the training data sets and improving the measurable performance of the system. 20. The fully or partially automated, integrated learning, labeling and classification method for closed, self-sustaining pattern recognition, labeling and classification operation according to claim 19, wherein the circuitry is configured, in case that an inconsistently or wrongly classified test data set and/or feature of a test data set is detected, to assign, based on the confusion matrix, the inconsistently or wrongly classified test data set and/or feature of the test data set to the training data sets if comparable training data sets are triggered within the training data sets, and to create a new labeling feature of the recognizable feature vector if no comparable training data sets are triggered within the training data sets. 21. 
The fully or partially automated, integrated learning, labeling and classification method for closed, self-sustaining pattern recognition, labeling and classification operation according to claim 19, wherein the circuitry is configured to extend the confusion matrix and/or the recognizable feature vector correspondingly by the triggered new labeling feature if no comparable training data sets are triggered within the training data sets.
Please help me write a proper abstract based on the patent claims.
A fully or semi-automated, integrated learning, labeling and classification system and method with closed, self-sustaining pattern recognition, labeling and classification operation, wherein unclassified data sets are selected and converted to an assembly of graphic and text data forming compound data sets that are to be classified. By means of feature vectors, which can be automatically generated, a machine learning classifier is trained for improving the classification operation of the automated system during training as a measure of the classification performance if the automated labeling and classification system is applied to unlabeled and unclassified data sets, and wherein unclassified data sets are classified automatically by applying the machine learning classifier of the system to the compound data set of the unclassified data sets.
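The confusion-matrix-driven loop of claim 1 above can be illustrated with a minimal sketch: misclassified test items (off-diagonal cells of the matrix) are harvested back into the training data to fill the gaps the matrix reveals. The classifier, labels, and data structures below are stand-ins, not the system's actual components.

```python
from collections import Counter

def confusion_matrix(pairs, labels):
    """Rows: gold (pre-processed labeled) features; columns: classified features."""
    counts = Counter(pairs)  # pairs: (gold_label, predicted_label)
    return [[counts[(g, p)] for p in labels] for g in labels]

def harvest_misclassified(test_set, predict):
    """Return (gold, predicted) pairs plus the wrongly classified test
    data sets, which are assigned to the training data sets."""
    pairs, extra_training = [], []
    for features, gold in test_set:
        pred = predict(features)
        pairs.append((gold, pred))
        if pred != gold:  # inconsistently or wrongly classified
            extra_training.append((features, gold))
    return pairs, extra_training
```

Retraining on the augmented set and rebuilding the matrix closes the self-sustaining loop.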
1. A method, in an information handling system comprising one or more processors and a memory, of anonymous crowd sourced software tuning, the method comprising: anonymously receiving usage data from a plurality of customer systems, wherein the usage data pertains to a software product and includes at least one unique identifier generated by a selected one of the plurality of customer systems; analyzing the received usage data, wherein the analysis identifies one or more healthy system patterns; comparing the usage data received from each of the plurality of customer systems to at least one of the healthy system patterns; generating a plurality of sets of one or more recommendations based on the comparison, wherein each set of recommendations corresponds to one of the plurality of customer systems; assigning the unique identifier to a selected one set of the one or more recommendations that correspond to the selected customer system; and providing the selected set of the one or more recommendations to the selected customer system, wherein the selected customer system is adapted to identify the selected set of the one or more recommendations based upon the unique identifier. 2. The method of claim 1 further comprising: identifying a healthy system configuration associated with each of the one or more healthy system patterns; comparing a system configuration associated with each of the plurality of customer systems with the identified healthy system configurations, the comparing resulting in a selected one of the healthy system configurations and a corresponding selected healthy system pattern; and wherein the comparing of the usage data received from each of the plurality of customer systems is compared to the selected healthy system pattern of the healthy system configuration found to be similar to a customer system configuration corresponding to one of the plurality of customer systems. 3. 
The method of claim 2 further comprising: comparing one or more configuration settings in the customer system configuration to corresponding configuration settings in the selected healthy system configuration, wherein the comparing of configuration settings results in one or more configuration setting changes included in the generated recommendations. 4. The method of claim 3 further comprising: comparing customer system health data included in the usage data received from each of the plurality of customer systems to one or more thresholds; and identifying a selected set of one or more healthy systems in response to the comparison of the customer system health data to the thresholds revealing that at least one of the plurality of customer systems are healthy, wherein the healthy configurations and healthy system patterns correspond to the identified healthy systems. 5. The method of claim 4 further comprising: identifying a selected set of one or more unhealthy systems in response to the comparison of the customer system health data to the thresholds revealing that at least one of the plurality of customer systems are unhealthy, wherein the customer configurations correspond to the unhealthy systems. 6. (canceled) 7. The method of claim 1 wherein the generation of the unique identifier is performed by the selected customer system by executing a hash function against the usage data, wherein the unique identifier is retained by the customer system before reception of the usage data. 8. 
An information handling system comprising: one or more processors; a memory coupled to at least one of the processors; a network adapter that connects the information handling system to a computer network; and a set of instructions stored in the memory and executed by at least one of the processors to provide anonymous crowd sourced software tuning, wherein the set of instructions perform actions of: anonymously receiving usage data from a plurality of customer systems, wherein the usage data pertains to a software product and includes at least one unique identifier generated by a selected one of the plurality of customer systems; analyzing the received usage data, wherein the analysis identifies one or more healthy system patterns; comparing the usage data received from each of the plurality of customer systems to at least one of the healthy system patterns; generating a plurality of sets of one or more recommendations based on the comparison, wherein each set of recommendations corresponds to one of the plurality of customer systems; assigning the unique identifier to a selected one set of the one or more recommendations that correspond to the selected customer system; and providing the selected set of the one or more recommendations to the selected customer system, wherein the selected customer system is adapted to identify the selected set of the one or more recommendations based upon the unique identifier. 9. 
The information handling system of claim 8 wherein the actions further comprise: identifying a healthy system configuration associated with each of the one or more healthy system patterns; comparing a system configuration associated with each of the plurality of customer systems with the identified healthy system configurations, the comparing resulting in a selected one of the healthy system configurations and a corresponding selected healthy system pattern; and wherein the comparing of the usage data received from each of the plurality of customer systems is compared to the selected healthy system pattern of the healthy system configuration found to be similar to a customer system configuration corresponding to one of the plurality of customer systems. 10. The information handling system of claim 9 wherein the actions further comprise: comparing one or more configuration settings in the customer system configuration to corresponding configuration settings in the selected healthy system configuration, wherein the comparing of configuration settings results in one or more configuration setting changes included in the generated recommendations. 11. The information handling system of claim 10 wherein the actions further comprise: comparing customer system health data included in the usage data received from each of the plurality of customer systems to one or more thresholds; and identifying a selected set of one or more healthy systems in response to the comparison of the customer system health data to the thresholds revealing that at least one of the plurality of customer systems are healthy, wherein the healthy configurations and healthy system patterns correspond to the identified healthy systems. 12. 
The information handling system of claim 11 wherein the actions further comprise: identifying a selected set of one or more unhealthy systems in response to the comparison of the customer system health data to the thresholds revealing that at least one of the plurality of customer systems are unhealthy, wherein the customer configurations correspond to the unhealthy systems. 13. (canceled) 14. The information handling system of claim 8 wherein the generation of the unique identifier is performed by the selected customer system by executing a hash function against the usage data, wherein the unique identifier is retained by the customer system before reception of the usage data. 15. A computer program product stored in a computer readable storage medium, comprising computer instructions that, when executed by an information handling system, cause the information handling system to provide anonymous crowd sourced software tuning by performing actions comprising: anonymously receiving usage data from a plurality of customer systems, wherein the usage data pertains to a software product and includes at least one unique identifier generated by a selected one of the plurality of customer systems; analyzing the received usage data, wherein the analysis identifies one or more healthy system patterns; comparing the usage data received from each of the plurality of customer systems to at least one of the healthy system patterns; generating a plurality of sets of one or more recommendations based on the comparison, wherein each set of recommendations corresponds to one of the plurality of customer systems; assigning the unique identifier to a selected one set of the one or more recommendations that correspond to the selected customer system; and providing the selected set of the one or more recommendations to the selected customer system, wherein the selected customer system is adapted to identify the selected set of the one or more recommendations based upon the unique 
identifier. 16. The computer program product of claim 15 wherein the actions further comprise: identifying a healthy system configuration associated with each of the one or more healthy system patterns; comparing a system configuration associated with each of the plurality of customer systems with the identified healthy system configurations, the comparing resulting in a selected one of the healthy system configurations and a corresponding selected healthy system pattern; and wherein the comparing of the usage data received from each of the plurality of customer systems is compared to the selected healthy system pattern of the healthy system configuration found to be similar to a customer system configuration corresponding to one of the plurality of customer systems. 17. The computer program product of claim 16 wherein the actions further comprise: comparing one or more configuration settings in the customer system configuration to corresponding configuration settings in the selected healthy system configuration, wherein the comparing of configuration settings results in one or more configuration setting changes included in the generated recommendations. 18. 
The computer program product of claim 17 wherein the actions further comprise: comparing customer system health data included in the usage data received from each of the plurality of customer systems to one or more thresholds; and identifying a selected set of one or more healthy systems in response to the comparison of the customer system health data to the thresholds revealing that at least one of the plurality of customer systems are healthy, wherein the healthy configurations and healthy system patterns correspond to the identified healthy systems; and identifying a selected set of one or more unhealthy systems in response to the comparison of the customer system health data to the thresholds revealing that at least one of the plurality of customer systems are unhealthy, wherein the customer configurations correspond to the unhealthy systems. 19. (canceled) 20. The computer program product of claim 15 wherein the generation of the unique identifier is performed by the selected customer system by executing a hash function against the usage data, wherein the unique identifier is retained by the customer system before reception of the usage data.
Please help me write a proper abstract based on the patent claims.
An approach is provided for providing anonymous crowd sourced software tuning. The approach operates by anonymously receiving usage data from a number of software customer systems. The usage data that is received pertains to a software product. The received usage data is analyzed to identify healthy system patterns. The usage data received from each customer system is compared to at least one of the healthy system patterns. In one embodiment, the usage data from a customer system is compared to healthy system patterns from systems with similar configurations as the customer system. Sets of recommendations are generated based on the comparison with each set of recommendations corresponds to one of the software customers. The generated recommendations are provided to the respective software customers.
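The anonymity scheme of claims 1 and 7 above can be sketched briefly: the customer derives a unique identifier by hashing its own usage data and retains it, the server tags each recommendation set with the identifier it received, and the customer later claims its set by matching the retained identifier, without revealing its identity. The hash choice and field names are illustrative assumptions.

```python
import hashlib
import json

def usage_id(usage_data: dict) -> str:
    """Hash function executed against the usage data (claim 7)."""
    blob = json.dumps(usage_data, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

def tag_recommendations(usage_data, recommendations):
    """Server side: assign the received identifier to a recommendation set."""
    return {"id": usage_id(usage_data), "recommendations": recommendations}

def claim_recommendations(retained_id, tagged_sets):
    """Customer side: identify its own set by the retained identifier."""
    return [t["recommendations"] for t in tagged_sets if t["id"] == retained_id]
```

Because the identifier is derived only from the submitted usage data, the server never learns which customer it belongs to.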
1-11. (canceled) 12. A machine learning system having a plurality of agents, the system comprising: a model manager for providing a model to the plurality of agents, the model specifying attributes and attribute value data types for an event in which the plurality of agents act; an agent input sub-system for receiving agent-provided inputs from the plurality of agents during an instance of the event, the agent-provided inputs include estimated attribute values that are consistent with the attribute value data types; a ground-truth-based expertise weighting determiner having a processor for determining expertise weights for at least some of the plurality of agents in response to at least one ground-truth which is learned from the estimated attribute values; and an adaptive attribute value mixer for determining an estimate value for one or more of the attributes using respective adaptive mixtures of the estimated attribute values. 13. The system of claim 12, wherein the model further specifies a respective range of admissible values for at least some of the attributes. 14. The system of claim 12, wherein the agent-provided inputs further include confidence values for the estimated attribute values. 15. The system of claim 12, wherein the respective adaptive mixtures are determined on an attribute-by-attribute basis responsive to the expertise weights and the confidence values. 16. The system of claim 12, wherein the ground-truth-based expertise weighting determiner learns the at least one ground-truth responsive to labeled examples of ground truths, each of the labeled examples corresponding to a particular attribute value for a particular one of the attributes. 17. The system of claim 16, wherein the labeled examples comprise at least one of actual subject labeled examples and surrogate subject labeled examples. 18. 
The system of claim 16, wherein the ground-truth learning sub-system learns the at least one ground-truth using an N-1 substitution method, wherein a ground truth label for an estimated attribute value for a respective one of the attributes provided by a respective one of the plurality of agents is compared to ground truth labels for the estimated attribute values for the respective one of the attributes provided by remaining ones of the plurality of agents to assess an ability of the respective one of the plurality of agents at estimating a value for the respective one of the attributes. 19. The system of claim 12, wherein a shared model manager agent from among the plurality of agents provides surrogate estimated attribute values for respective ones of the plurality of agents that fail to provide a respective estimated attribute value for one or more of the attributes using correlation coefficients for pairs of the attributes and confidence levels for the correlation coefficients. 20. The system of claim 12, wherein a shared model manager agent from among the plurality of agents learns prior probabilities for pairs of the attributes and generates, responsive to the prior probabilities, surrogate assessments and surrogate confidence levels for respective ones of the plurality of agents that fail to produce certain attribute values but produce other attribute values for which there are well established prior probabilities.
Please help me write a proper abstract based on the patent claims.
A system and method are provided for shared machine learning. The method includes providing a model to a plurality of agents included in a machine learning system. The model specifies attributes and attribute value data types for an event in which the agents act. The method further includes receiving agent-provided inputs during an instance of the event. The agent-provided inputs include estimated attribute values that are consistent with the attribute value data types. The method also includes determining expertise weights for at least some agents in response to at least one ground-truth which is learned from the estimated attribute values. The method additionally includes determining an estimate value for one or more of the attributes using respective adaptive mixtures of the estimated attribute values.
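The adaptive attribute value mixer of claims 12 and 15 above can be sketched as a per-attribute weighted average, with each agent's estimate weighted by the product of its expertise weight and its reported confidence. The weighting rule is one plausible reading of the claims, not the system's actual formula, and the numbers are illustrative.

```python
def mix_estimates(estimates, expertise, confidence):
    """Combine agent estimates for one attribute.

    estimates, expertise, confidence: dicts keyed by agent id.
    Each agent's weight is expertise * confidence, normalized to sum to 1.
    """
    weights = {a: expertise[a] * confidence[a] for a in estimates}
    total = sum(weights.values())
    return sum(weights[a] * estimates[a] for a in estimates) / total
```

An agent with high expertise but low confidence on a given attribute thus contributes little, which matches the attribute-by-attribute behavior recited in claim 15.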
1. A method, in a data processing system, for optimization of mixed-criticality systems, the method comprising: receiving, by a processor in the data processing system, a plurality of strategies, wherein the plurality of strategies are in a fixed order of criticality; for each strategy in the plurality of strategies, obtaining, by the processor, a multivariate objective function and a multivariate constraint in a multivariate decision variable; maximizing, by the processor, a number of strategies of the plurality of strategies that are feasible in combination; and generating, by the processor, a solution that is feasible for the number of strategies that are feasible in combination, such that the objective of a least-critical strategy that is feasible in combination with the other strategies in the number of strategies is optimized. 2. The method of claim 1, wherein maximizing the number of strategies of the plurality of strategies that are feasible in combination solves: P(ξ):maxsminxƒ(s)([x(1) . . . x(s)],ξ,U(s))s.t.x(1)∈X(ξ,U(1)),[x(1)x(2)]∈X(ξ,U(2)); [x(1) . . . x(s)]∈X(ξ,U(s)); 1≦s≦S where P(ξ) denotes the problem solved, ξ is the multivariate random variable, x=[x(1) . . . x(s)] is the decision variable, ƒ(s) is the objective function of the least-critical strategy supported, and X(s) is the feasible region for the strategy s. 3. The method of claim 1, wherein the plurality of strategies are analyzed with respect to a decreasing level of criticality. 4. The method of claim 1, wherein the multivariate constraint is at least one of a multivariate equality constraint or a multivariate inequality constraint. 5. The method of claim 4, wherein the feasible region for the strategy s (X(s)) is defined by the inequalities F(s)(x, ξ, U(s))≦0 and equalities G(s)(x, ξ, U(s))=0, which make use of the uncertainty parameters U(s) for the strategy s. 6. 
The method of claim 1, wherein for each strategy in the plurality of strategies, the multivariate objective and the multivariate constraint allow for uncertainty therein. 7. The method of claim 1, wherein the uncertainty sets Z(s) given by ξ, U(s) are at least one of interval-based, polyhedral, ellipsoidal, spectrahedral, or a combination thereof and wherein the selected uncertainty sets solve: P′(ξ):maxsminxmax(ζ(s)∈Z(s)(ξ,U(s)))ƒ(s)([x(1) . . . x(s)],ξ,ζ(s))s.t.x(1)∈X(ξ,ζ(1)),[x(1)x(2)]∈X(ξ,ζ(2)); [x(1) . . . x(s)]∈X(ξ,ζ(s)); 1≦s≦S. 8. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive a plurality of strategies, wherein the plurality of strategies are in a fixed order of criticality; for each strategy in the plurality of strategies, obtain a multivariate objective function and a multivariate constraint in a multivariate decision variable; maximize a number of strategies of the plurality of strategies that are feasible in combination; and generate a solution that is feasible for the number of strategies that are feasible in combination, such that the objective of a least-critical strategy that is feasible in combination with the other strategies in the number of strategies is optimized. 9. The computer program product of claim 8, wherein maximizing the number of strategies of the plurality of strategies that are feasible in combination solves: P(ξ):maxsminxƒ(s)([x(1) . . . x(s)],ξ,U(s))s.t.x(1)∈X(ξ,U(1)),[x(1)x(2)]∈X(ξ,U(2)); [x(1) . . . x(s)]∈X(ξ,U(s)); 1≦s≦S where P(ξ) denotes the problem solved, ξ is the multivariate random variable, x=[x(1) . . . x(s)] is the decision variable, ƒ(s) is the objective function of the least-critical strategy supported, and X(s) is the feasible region for the strategy s. 10. 
The computer program product of claim 8, wherein the plurality of strategies are analyzed with respect to a decreasing level of criticality. 11. The computer program product of claim 8, wherein the multivariate constraint is at least one of a multivariate equality constraint or a multivariate inequality constraint. 12. The computer program product of claim 11, wherein the feasible region for the strategy s (X(s)) is defined by the inequalities F(s)(x, ξ, U(s))≦0 and equalities G(s)(x, ξ, U(s))=0, which make use of the uncertainty parameters U(s) for the strategy s. 13. The computer program product of claim 8, wherein for each strategy in the plurality of strategies, the multivariate objective and the multivariate constraint allow for uncertainty therein. 14. The computer program product of claim 8, wherein the uncertainty sets Z(s) given by ξ, U(s) are at least one of interval-based, polyhedral, ellipsoidal, spectrahedral, or a combination thereof and wherein the selected uncertainty sets solve: P′(ξ):maxsminxmax(ζ(s)∈Z(s)(ξ,U(s)))ƒ(s)([x(1) . . . x(s)],ξ,ζ(s))s.t.x(1)∈X(ξ,ζ(1)),[x(1)x(2)]∈X(ξ,ζ(2)); [x(1) . . . x(s)]∈X(ξ,ζ(s)); 1≦s≦S. 15. An apparatus comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: receive a plurality of strategies, wherein the plurality of strategies are in a fixed order of criticality; for each strategy in the plurality of strategies, obtain a multivariate objective function and a multivariate constraint in a multivariate decision variable; maximize a number of strategies of the plurality of strategies that are feasible in combination; and generate a solution that is feasible for the number of strategies that are feasible in combination, such that the objective of a least-critical strategy that is feasible in combination with the other strategies in the number of strategies is optimized. 16. 
The apparatus of claim 15, wherein maximizing the number of strategies of the plurality of strategies that are feasible in combination solves: P(ξ):maxsminxƒ(s)([x(1) . . . x(s)],ξ,U(s))s.t.x(1)∈X(ξ,U(1)),[x(1)x(2)]∈X(ξ,U(2)); [x(1) . . . x(s)]∈X(ξ,U(s)); 1≦s≦S where P(ξ) denotes the problem solved, ξ is the multivariate random variable, x=[x(1) . . . x(s)] is the decision variable, ƒ(s) is the objective function of the least-critical strategy supported, and X(s) is the feasible region for the strategy s. 17. The apparatus of claim 15, wherein the plurality of strategies are analyzed with respect to a decreasing level of criticality. 18. The apparatus of claim 15, wherein the multivariate constraint is at least one of a multivariate equality constraint or a multivariate inequality constraint. 19. The apparatus of claim 18, wherein the feasible region for the strategy s (X(s)) is defined by the inequalities F(s)(x, ξ, U(s))≦0 and equalities G(s)(x, ξ,U(s))=0, which make use of the uncertainty parameters U(s) for the strategy s. 20. The apparatus of claim 15, wherein for each strategy in the plurality of strategies, the multivariate objective and the multivariate constraint allow for uncertainty therein.
Please help me write a proper abstract based on the patent claims.
A mechanism is provided for optimization of mixed-criticality systems. A plurality of strategies is received that are in a fixed order of criticality. For each strategy in the plurality of strategies, a multivariate objective function and a multivariate constraint in a multivariate decision variable is obtained. A number of strategies of the plurality of strategies that are feasible in combination are maximized. A solution that is feasible for the number of strategies that are feasible in combination is generated such that the objective of a least-critical strategy that is feasible in combination with the other strategies in the number of strategies is optimized.
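The claimed procedure can be sketched as a brute-force search: for each candidate decision vector, find the largest prefix of strategies (in criticality order) that is jointly feasible, then prefer candidates that first maximize that prefix length and then optimize the least-critical feasible objective. The candidate grid, the toy feasibility sets, and all names below are illustrative assumptions, not the claimed solver.

```python
# Hedged sketch of the max-feasibility / least-critical-objective step.

def max_feasible_prefix(candidates, feasible, objective, S):
    """candidates: iterable of decision vectors x = [x(1)..x(S)]."""
    best_s, best_x, best_obj = 0, None, None
    for x in candidates:
        # largest s such that every prefix [x(1)..x(k)], k <= s, is feasible
        s = 0
        while s < S and feasible(s + 1, x[:s + 1]):
            s += 1
        obj = objective(s, x[:s]) if s else None
        # maximize s first; break ties by the least-critical objective
        if s > best_s or (s == best_s and s and obj > best_obj):
            best_s, best_x, best_obj = s, x[:s], obj
    return best_s, best_x

# Toy instance: every strategy shares a budget of 1 across its variables.
S = 3
feas = lambda k, xs: sum(xs) <= 1
obj = lambda s, xs: sum(xs)      # least-critical objective to maximize
grid = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
s_star, x_star = max_feasible_prefix(grid, feas, obj, S)
```

On this toy instance all three strategies can be supported together, and among such candidates the search keeps one that spends the full budget.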
1. A system comprising: one or more computer processors; one or more computer memories; one or more modules incorporated into the one or more computer memories, the one or more modules configuring the one or more computer processors to perform operations, the operations comprising: extracting a training time series corresponding to a process from an initial time series corresponding to the process; modifying outlier data points in the training time series based on predetermined acceptability criteria; training a plurality of prediction methods using the training time series; receiving an actual data point corresponding to the initial time series; using the plurality of prediction methods to determine a set of predicted data points corresponding to the actual data point of the initial time series; determining whether the actual data point is anomalous based on a calculation of whether each of the set of predicted data points is statistically different from the actual data point; and receiving an additional actual data point corresponding to the initial time series and extracting an additional training time series from the initial time series based on the additional actual data point. 2. The system of claim 1, wherein the calculation of whether each of the set of predicted data points is statistically different from the actual data point includes a determination that the Mahalanobis distance between the prediction error and the fitted multivariate normal joint probability distribution of each of the set of predicted data points is within a specified range. 3. The system of claim 1, wherein the additional actual data point corresponds to the initial time series and the operations further comprise extracting an additional training time series having the length offset by an additional index prior to a last data point of the initial time series the additional index reflecting a relative position of the actual data point to the additional actual data point. 4. 
The system of claim 1, further comprising selecting the combination of each of the plurality of prediction methods to minimize a number of false anomaly detections. 5. The system of claim 1, further comprising representing the determination of whether the actual data point is anomalous in a graphical user interface, the representing including providing a strength of the determination. 6. The system of claim 5, wherein the strength of the determination is based on a number of the plurality of prediction methods that indicate an anomaly with respect to the data point. 7. The system of claim 1, wherein the training time series represents a window of the initial time series that is recent in relation to the actual data point. 8. A method comprising: extracting a training time series corresponding to a process from an initial time series corresponding to the process; modifying outlier data points in the training time series based on predetermined acceptability criteria; training a plurality of prediction methods using the training time series; receiving an actual data point corresponding to the initial time series; using the plurality of prediction methods to determine a set of predicted data points corresponding to the actual data point of the initial time series; determining whether the actual data point is anomalous based on a calculation of whether each of the set of predicted data points is statistically different from the actual data point; and receiving an additional actual data point corresponding to the initial time series and extracting an additional training time series from the initial time series based on the additional actual data point. 9. 
The method of claim 8, wherein the calculation of whether each of the set of predicted data points is statistically different from the actual data point includes a determination that the Mahalanobis distance between the prediction error and the fitted multivariate normal joint probability distribution of each of the set of predicted data points is within a specified range. 10. The method of claim 8, wherein the additional actual data point corresponds to the initial time series and the method further comprises extracting an additional training time series having the length offset by an additional index prior to a last data point of the initial time series, the additional index reflecting a relative position of the actual data point to the additional actual data point. 11. The method of claim 8, further comprising selecting the combination of each of the plurality of prediction methods to minimize a number of false anomaly detections. 12. The method of claim 8, further comprising representing the determination of whether the actual data point is anomalous in a graphical user interface, the representing including providing a strength of the determination. 13. The method of claim 12, wherein the strength of the determination is based on a number of the plurality of prediction methods that indicate an anomaly with respect to the data point. 14. The method of claim 8, wherein the training time series represents a window of the initial time series that is recent in relation to the actual data point. 15. 
A non-transitory machine readable medium comprising a set of instructions that, when executed by a processor, causes the processor to perform operations, the operations comprising: extracting a training time series corresponding to a process from an initial time series corresponding to the process; modifying outlier data points in the training time series based on predetermined acceptability criteria; training a plurality of prediction methods using the training time series; receiving an actual data point corresponding to the initial time series; using the plurality of prediction methods to determine a set of predicted data points corresponding to the actual data point of the initial time series; determining whether the actual data point is anomalous based on a calculation of whether each of the set of predicted data points is statistically different from the actual data point; and receiving an additional actual data point corresponding to the initial time series and extracting an additional training time series from the initial time series based on the additional actual data point. 16. The non-transitory machine readable medium of claim 15, wherein the calculation of whether each of the set of predicted data points is statistically different from the actual data point includes a determination that the Mahalanobis distance between the prediction error and the fitted multivariate normal joint probability distribution of each of the set of predicted data points is within a specified range. 17. The non-transitory machine readable medium of claim 15, wherein the additional actual data point corresponds to the initial time series and the operations further comprise extracting an additional training time series having the length offset by an additional index prior to a last data point of the initial time series the additional index reflecting a relative position of the actual data point to the additional actual data point. 18. 
The non-transitory machine readable medium of claim 15, the operations further comprising selecting the combination of each of the plurality of prediction methods to minimize a number of false anomaly detections. 19. The non-transitory machine readable medium of claim 15, the operations further comprising representing the determination of whether the actual data point is anomalous in a graphical user interface, the representing including providing a strength of the determination. 20. The non-transitory machine readable medium of claim 19, wherein the strength of the determination is based on a number of the plurality of prediction methods that indicate an anomaly with respect to the data point.
Please help me write a proper abstract based on the patent claims.
A method of detecting anomalies in a time series is disclosed. A training time series corresponding to a process is extracted from an initial time series corresponding to the process, the training time series including a subset of the initial time series. Outlier data points in the training time series are modified based on predetermined acceptability criteria. A plurality of prediction methods are trained using the training time series. An actual data point corresponding to the initial time series is received. The plurality of prediction methods are used to determine a set of predicted data points corresponding to the actual data point. It is determined whether the actual data point is anomalous based on a calculation of whether each of the set of predicted data points is statistically different from the actual data point.
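A minimal sketch of the claimed anomaly test, simplified to one dimension (where the Mahalanobis distance reduces to the absolute z-score of the prediction error): fit a normal distribution to each predictor's historical errors and flag the actual point when every predictor's error is statistically different. The predictors, the error histories, and the all-must-agree decision rule are illustrative assumptions.

```python
import statistics

def is_anomalous(actual, predictions, error_history, threshold=3.0):
    """Flag when every prediction's error is statistically different."""
    flags = []
    for name, pred in predictions.items():
        errs = error_history[name]
        mu, sigma = statistics.mean(errs), statistics.pstdev(errs)
        z = abs((actual - pred) - mu) / sigma  # 1-D Mahalanobis distance
        flags.append(z > threshold)
    # decision plus a strength score: how many predictors flag the point
    return all(flags), sum(flags)

history = {"mean": [0.1, -0.2, 0.0, 0.2, -0.1],
           "last": [0.5, -0.4, 0.3, -0.3, -0.1]}
preds = {"mean": 10.0, "last": 10.2}
anomalous, strength = is_anomalous(25.0, preds, history)
```

The returned count mirrors the claimed strength-of-determination: the number of prediction methods that indicate an anomaly.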
1. A method, comprising: modifying, via a processor, a first number of values in a sequence of a data set to generate a modified sequence such that each difference between each pair of successive values is within a threshold; and determining, via the processor, a satisfiability metric for the modified sequence based on a relationship between a number of modifications to the values in the sequence and a size of the sequence. 2. The method of claim 1, wherein the sequence represents investment data. 3. The method of claim 1, wherein the sequence represents traffic data. 4. The method of claim 1, wherein the sequence represents weather data. 5. The method of claim 1, wherein the satisfiability metric represents a ratio between the number of modifications and the size of the sequence. 6. The method of claim 1, wherein the sequence is a first sequence, the modified sequence is a first modified sequence, the satisfiability metric is a first satisfiability metric, and further comprising: modifying a second number of values in a second sequence of the data set to generate a second modified sequence; determining a second satisfiability metric for the second modified sequence based on a relationship between a number of modifications to values in the second sequence and a size of the second sequence; and selecting one of the first or second sequences based on a comparison of the first satisfiability metric and the second satisfiability metric. 7. The method of claim 6, wherein the first sequence and the second sequence are subsets of the data set. 8. The method of claim 6, further comprising summarizing the selection of the first sequence or second sequence in a table. 9. 
The method of claim 6, wherein selecting one of the first sequence or the second sequence comprises determining which of the first satisfiability metric or the second satisfiability metric corresponds to a lesser number of modifications in proportion to the respective size of the first sequence and the second sequence. 10. A machine readable memory comprising instructions which, when executed, cause a machine to perform operations comprising: modifying a first number of values in a sequence of a data set to generate a modified sequence such that each difference between each successive pair of values satisfies a threshold; and determining a satisfiability metric for the modified sequence based on a relationship between a number of modifications to the values in the sequence and a size of the sequence. 11. The memory of claim 10, wherein determining the satisfiability metric comprises determining a ratio between the number of modifications and the size of the sequence. 12. The memory of claim 10, wherein the sequence is a first sequence, the modified sequence is a first modified sequence, the satisfiability metric is a first satisfiability metric, and further comprising instructions which, when executed, cause the machine to perform operations comprising: modifying a second number of values in a second sequence of the data set to generate a second modified sequence; determining a second satisfiability metric for the second modified sequence based on a relationship between a number of modifications to values in the second sequence and a size of the second sequence; and selecting one of the first or second sequences based on a comparison of the first satisfiability metric and the second satisfiability metric. 13. The memory of claim 12, wherein the first sequence and the second sequence are subsets of the data set. 14. The memory of claim 12, further comprising summarizing the selection of the first sequence or second sequence in a table. 15. 
The memory of claim 12, wherein selecting one of the first sequence or the second sequence comprises determining which of the first satisfiability metric or the second satisfiability metric corresponds to a lesser number of modifications in proportion to the respective size of the first sequence and the second sequence. 16. An apparatus comprising: a memory comprising machine readable instructions; and a processor which, when executing the instructions, performs operations comprising: modifying a first number of values in a sequence of a data set to generate a modified sequence such that each difference between each successive pair of values meets a threshold; and determining a satisfiability metric for the modified sequence based on a relationship between a number of modifications to the values in the sequence and a size of the sequence. 17. The apparatus of claim 16, wherein determining the satisfiability metric comprises determining a ratio between the number of modifications and the size of the sequence. 18. The apparatus of claim 16, wherein the sequence is a first sequence, the modified sequence is a first modified sequence, the satisfiability metric is a first satisfiability metric, and the operations further comprise: modifying a second number of values in a second sequence of the data set to generate a second modified sequence; determining a second satisfiability metric for the second modified sequence based on a relationship between a number of modifications to values in the second sequence and a size of the second sequence; and selecting one of the first or second sequences based on a comparison of the first satisfiability metric and the second satisfiability metric. 19. The apparatus of claim 18, wherein the first sequence and the second sequence are subsets of the data set. 20. 
The apparatus of claim 18, wherein the operations further comprise determining which of the first satisfiability metric or the second satisfiability metric corresponds to a lesser number of modifications in proportion to the respective size of the first sequence and the second sequence.
Please help me write a proper abstract based on the patent claims.
Methods and apparatus for processing data using sequential dependencies are disclosed herein. An example method includes modifying a first number of values in a sequence of a data set to generate a modified sequence such that each difference between each successive pair of values is within a threshold. A satisfiability metric is determined for the modified sequence based on a relationship between a number of modifications to the values in the sequence and a size of the sequence.
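The claimed metric can be sketched as follows: modify values so each successive difference stays within a threshold, then report the fraction of values changed. The greedy clamping repair below is one illustrative modification strategy, not the one mandated by the claims; the data is a made-up example.

```python
# Hedged sketch: repair a sequence to satisfy a sequential dependency,
# then compute the satisfiability metric (modifications / size).

def repair_and_score(seq, threshold):
    out, mods = [seq[0]], 0
    for v in seq[1:]:
        prev = out[-1]
        if abs(v - prev) > threshold:
            # clamp the value to the nearest admissible endpoint
            v = prev + threshold if v > prev else prev - threshold
            mods += 1
        out.append(v)
    return out, mods / len(seq)  # modified sequence, satisfiability metric

seq = [1, 2, 10, 3, 4]           # e.g. successive traffic counts
fixed, metric = repair_and_score(seq, threshold=2)
```

Comparing the metric across two candidate sequences, as in the dependent claims, then amounts to picking the one with the smaller ratio.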
1. A method for expanding an answer key to verify a question and answer (Q and A) system, the method comprising: constructing a definition of an extended answer type, wherein the extended answer type represents an answer type of an unrepresented answer, wherein the unrepresented answer is unrepresented in the answer key as a valid response to a question in a set of valid responses to the question in the answer key; creating, using a processor and a memory, the extended answer type in the answer key according to the definition; populating the extended answer type such that the unrepresented answer becomes an additional valid response to the question, the creating and the populating extending the answer key to form an extended answer key; and using the populated extended answer type in the extended answer key to verify that a generated answer from the Q and A system is correct. 2. The method of claim 1, wherein the generated answer is incorrect according to an existing answer type in the answer key. 3. The method of claim 1, further comprising: forming the definition by modifying an abstract definition; constructing a second definition of a second extended answer type, wherein the second extended answer type represents an answer type of a second unrepresented answer. 4. The method of claim 3, wherein the abstract definition is used as a placeholder in the extended answer key for a third extended answer type. 5. The method of claim 1, wherein the unrepresented answer becomes the additional valid answer because a computation using the populated extended answer type results in the additional valid response. 6. The method of claim 1, wherein the unrepresented answer becomes the additional valid answer because a logic described in a metadata of the extended answer key computes to make the unrepresented answer the additional valid response. 7. 
The method of claim 1, wherein the unrepresented answer becomes the additional valid answer by being a member of a range of values specified in the populated extended answer type. 8. The method of claim 1, wherein the unrepresented answer becomes the additional valid answer by being a member of a range of values, wherein the range of values is computed using logic in a metadata of the extended answer key. 9. The method of claim 1, further comprising: configuring the extended answer type with a changeable condition, wherein a value of the additional valid answer changes when the condition changes. 10. The method of claim 1, wherein the extended answer key is an Extensible Markup Language (XML) document, wherein the definition is an XML structure. 11. The method of claim 1, wherein the question comprises a sentence in natural language, wherein the Q and A system is configured to respond to the natural language question. 12. The method of claim 1, wherein the question comprises a sentence in natural language, wherein the Q and A system is configured to respond to the natural language question using another extended answer type to represent answers one of (i) based on a cultural reference, (ii) in a particular language, and (iii) from a set of synonyms.
Please help me write a proper abstract based on the patent claims.
A method for expanding an answer key to verify a question and answer system is provided in the illustrative embodiments. A definition is constructed of an extended answer type. The extended answer type represents an answer type of an unrepresented answer. The unrepresented answer is unrepresented in the answer key as a valid response to a question in a set of valid responses to the question in the answer key. The extended answer type is created in the answer key according to the definition. The extended answer type is populated such that the unrepresented answer becomes an additional valid response to the question, the creating and the populating extending the answer key to form an extended answer key. The populated extended answer type in the extended answer key is used to verify that a generated answer from the Q and A system is correct.
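One extension mechanism in the dependent claims, a range-valued extended answer type whose membership is computed by logic carried in the answer-key metadata, can be sketched with a plain dict standing in for the claimed XML structure. The layout, the callable `logic` field, and the leap-year example are illustrative assumptions.

```python
# Hedged sketch: an answer key extended with a computed range of values,
# so a previously unrepresented answer becomes an additional valid response.

answer_key = {
    "question": "How many days are in February?",
    "valid": ["28"],                          # existing literal answers
    "extended_types": [
        {"kind": "range",
         "logic": lambda: range(28, 30)}      # metadata-computed range
    ],
}

def is_correct(answer, key):
    """Check literal answers first, then range-valued extended types."""
    if answer in key["valid"]:
        return True
    return any(int(answer) in t["logic"]()
               for t in key["extended_types"] if t["kind"] == "range")

ok = is_correct("29", answer_key)   # leap-year answer, unrepresented before
```

The generated answer "29" would have been marked incorrect by the literal answer key alone, but the populated extended type verifies it.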
1. A method of classifying elements in a ground truth training set, the method comprising: performing, by the information handling system, comprising a processor and a memory, annotation operations on a ground truth training set using an annotator to generate a machine-annotated training set; assigning, by the information handling system, elements from the machine-annotated training set to one or more clusters; analyzing, by the information handling system, the one or more clusters to identify at least a first prioritized cluster containing one or more elements which are frequently misclassified; and displaying, by the information handling system, machine-annotated training set elements associated with the first prioritized cluster along with a warning that the first prioritized cluster contains one or more elements which are frequently misclassified to solicit verification or correction feedback from a human subject matter expert (SME) for inclusion in an accepted training set. 2. The method of claim 1, where the annotator comprises a dictionary annotator, rule-based annotator, or a machine learning annotator. 3. The method of claim 1, where assigning elements from the machine-annotated training set to one or more clusters comprises: generating a vector representation for each element from the machine-annotated training set; and grouping the vector representations for the elements from the machine-annotated training set elements into one or more clusters. 4. The method of claim 1, where analyzing the one or more clusters comprises identifying a group of elements from a confusion matrix that are commonly confused with one another. 5. 
The method of claim 4, where analyzing the one or more clusters comprises: applying one or more feature selection algorithms to the group of elements from the confusion matrix that are commonly confused with one another to identify error characteristics of each misclassified element; and generating a vector representation for each misclassified element from the error characteristics of each misclassified element. 6. The method of claim 5, where analyzing the one or more clusters comprises detecting an alignment between a vector representation for each misclassified element and a vector representation of the one or more clusters. 7. The method of claim 1, further comprising displaying a reclassification recommendation for a correct classification for at least one of the one or more elements which are frequently misclassified. 8. The method of claim 7, where each reclassification recommendation is paired with a corresponding element which is frequently misclassified based on information derived from a confusion matrix. 9. The method of claim 1, further comprising verifying or correcting classifications for all machine-annotated training set elements in a cluster as a single group based on verification or correction feedback from the human subject matter expert. 10. The method of claim 1, where each element is an entity/relationship element. 11. 
A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on an information handling system, causes the system to classify elements in a ground truth training set by: performing annotation operations on a ground truth training set using an annotator to generate a machine-annotated training set; assigning elements from the machine-annotated training set to one or more clusters; analyzing the one or more clusters to identify at least a first prioritized cluster containing one or more elements which are frequently misclassified; and displaying machine-annotated training set elements associated with the first prioritized cluster along with a warning that the first prioritized cluster contains one or more elements which are frequently misclassified to solicit verification or correction feedback from a human subject matter expert (SME) for inclusion in an accepted training set. 12. The computer program product of claim 10, wherein the computer readable program, when executed on the system, causes the system to assign elements from the machine-annotated training set to one or more clusters by: generating a vector representation for each element from the machine-annotated training set; and grouping the vector representations for the elements from the machine-annotated training set elements into one or more clusters. 13. The computer program product of claim 10, wherein the computer readable program, when executed on the system, causes the system to analyze the one or more clusters by identifying a group of elements from a confusion matrix that are commonly confused with one another. 14. 
The computer program product of claim 13, wherein the computer readable program, when executed on the system, causes the system to analyze the one or more clusters by: applying one or more feature selection algorithms to the group of elements from the confusion matrix that are commonly confused with one another to identify error characteristics of each misclassified element; and generating a vector representation for each misclassified element from the error characteristics of each misclassified element. 15. The computer program product of claim 14, wherein the computer readable program, when executed on the system, causes the system to analyze the one or more clusters by detecting an alignment between a vector representation for each misclassified element and a vector representation of the one or more clusters. 16. The computer program product of claim 14, wherein the computer readable program, when executed on the system, causes the system to display a reclassification recommendation for a correct classification for at least one of the one or more elements which are frequently misclassified, where each reclassification recommendation is paired with a corresponding element which is frequently misclassified based on information derived from a confusion matrix. 17. The computer program product of claim 10, wherein the computer readable program, when executed on the system, further causes the system to verify or correct classifications for all machine-annotated training set elements in a cluster as a single group based on verification or correction feedback from the human subject matter expert. 18. 
An information handling system comprising: one or more processors; a memory coupled to at least one of the processors; and a set of instructions stored in the memory and executed by at least one of the processors to classify elements in a ground truth training set, wherein the set of instructions are executable to perform actions of: performing, by the system, annotation operations on a ground truth training set using an annotator to generate a machine-annotated training set; assigning, by the system, elements from the machine-annotated training set to one or more clusters; analyzing, by the system, the one or more clusters to identify at least a first prioritized cluster containing one or more elements which are frequently misclassified; and displaying, by the system, machine-annotated training set elements associated with the first prioritized cluster along with a warning that the first prioritized cluster contains one or more elements which are frequently misclassified to solicit verification or correction feedback from a human subject matter expert (SME) for inclusion in an accepted training set. 19. The information handling system of claim 18, where analyzing the one or more clusters comprises identifying a group of elements from a confusion matrix that are commonly confused with one another. 20. The information handling system of claim 19, where analyzing the one or more clusters comprises: applying one or more feature selection algorithms to the group of elements from the confusion matrix that are commonly confused with one another to identify error characteristics of each misclassified element; and generating a vector representation for each misclassified element from the error characteristics of each misclassified element. 21. 
The information handling system of claim 20, where analyzing the one or more clusters comprises detecting an alignment between a vector representation for each misclassified element and a vector representation of the one or more clusters. 22. The information handling system of claim 18, further comprising displaying a reclassification recommendation for a correct classification for at least one of the one or more elements which are frequently misclassified, where each reclassification recommendation is paired with a corresponding element which is frequently misclassified based on information derived from a confusion matrix. 23. The information handling system of claim 18, further comprising verifying or correcting all classifications for all machine-annotated training set elements in a cluster as a single group based on verification or correction feedback from the human subject matter expert. 24. The information handling system of claim 18, further comprising verifying or correcting classifications for all machine-annotated training set elements in a cluster one at a time based on verification or correction feedback from the human subject matter expert.
Please help me write a proper abstract based on the patent claims.
A method, system and computer program product are provided for classifying elements in a ground truth training set. Machine-annotated training set elements are iteratively assigned to clusters, which are analyzed to identify a prioritized cluster containing one or more elements that are frequently misclassified. The machine-annotated training set elements associated with the prioritized cluster are displayed along with a warning that the cluster contains one or more frequently misclassified elements, to solicit verification or correction feedback from a human subject matter expert (SME) for inclusion in an accepted training set.
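The cluster-based prioritization the claims describe can be illustrated with a minimal sketch. Everything concrete here is an assumption, not part of the claims: the dict-of-dicts confusion matrix, the 20% confusion threshold, and grouping elements by their confused (true, predicted) label pair stand in for the unspecified matrix representation and clustering algorithm.

```python
from collections import defaultdict

def confused_groups(confusion, threshold=0.2):
    # Identify (true, predicted) label pairs that are commonly confused:
    # off-diagonal counts at or above `threshold` of the row total.
    groups = []
    for true_label, row in confusion.items():
        total = sum(row.values())
        for predicted, count in row.items():
            if predicted != true_label and total and count / total >= threshold:
                groups.append((true_label, predicted))
    return groups

def prioritize_clusters(elements, groups):
    # Assign machine-annotated elements to clusters keyed by a confused
    # label pair; the largest cluster sorts first, so it would be shown
    # to the SME (with a warning) before the others.
    clusters = defaultdict(list)
    confused = set(groups)
    for elem, true_label, predicted in elements:
        if (true_label, predicted) in confused:
            clusters[(true_label, predicted)].append(elem)
    return sorted(clusters.items(), key=lambda kv: -len(kv[1]))
```

Verifying a whole cluster "as a single group" (as in the dependent claims) then amounts to applying the SME's single accept/correct decision to every element of the top entry at once.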
1. A method for classifying documents, the method comprising: classifying a document from among a plurality of documents in a first class, in response to applying statistical analysis to data associated with the document; and classifying the document in a second class, in response to determining that a rule from among a plurality of rules applies to the document, wherein a proposed rule is added to the plurality of rules, in response to determining that application of the proposed rule to one or more of the plurality of documents to which the rule is applicable does not diminish accuracy of overall classification for the plurality of documents. 2. The method of claim 1, wherein the proposed rule is not added to the plurality of rules, in response to determining that application of the proposed rule to one or more of the plurality of documents to which the rule is applicable diminishes accuracy of overall classification for the plurality of documents. 3. The method of claim 1, wherein a scoring mechanism is utilized when a proposed rule is added to the plurality of rules such that a favorable score is assigned to the proposed rule, in response to determining that application of the proposed rule to one or more of the plurality of documents to which the rule is applicable enhances accuracy of overall classification for the plurality of documents. 4. The method of claim 3, wherein a first score is assigned to a first proposed rule, wherein the first score is more favorable than a second score assigned to a second proposed rule, in response to determining that the first rule enhances the accuracy of the overall classification for the plurality of documents more than the second rule. 5. 
The method of claim 1, wherein a scoring mechanism is utilized when a proposed rule is added to the plurality of rules such that an unfavorable score is assigned to the proposed rule, in response to determining that application of the proposed rule to one or more of the plurality of documents to which the rule is applicable diminishes accuracy of overall classification for the plurality of documents. 6. The method of claim 5, wherein a first score is assigned to a first proposed rule, wherein the first score is less favorable than a second score assigned to a second proposed rule, in response to determining that the first rule diminishes the accuracy of the overall classification for the plurality of documents more than the second rule. 7. The method of claim 1, wherein a scoring mechanism is utilized when a proposed rule is added to the plurality of rules such that a neutral score is assigned to the proposed rule, in response to determining that application of the proposed rule to one or more of the plurality of documents to which the rule is applicable neither enhances nor diminishes accuracy of overall classification for the plurality of documents. 8. 
The method of claim 1, wherein a scoring mechanism is utilized when a proposed rule is added to the plurality of rules such that: a favorable score is assigned to the proposed rule, in response to determining that application of the proposed rule to one or more of the plurality of documents to which the rule is applicable enhances accuracy of overall classification for the plurality of documents, and an unfavorable score is assigned to the proposed rule, in response to determining that application of the proposed rule to one or more of the plurality of documents to which the rule is applicable diminishes accuracy of overall classification for the plurality of documents, wherein the score assigned to the proposed rule after the proposed rule is added to the plurality of rules is re-evaluated periodically to determine whether a more favorable or less favorable score is to be assigned to the rule based on analysis of data obtained from applying the rule to one or more of the plurality of documents after a period of time has elapsed since the rule was added. 9. The method of claim 1, wherein a data structure is implemented to include N indicators associated with a rule that is applicable to N corresponding documents, wherein a respective indicator provides information about whether application of the rule to a corresponding document has enhanced or diminished the classification for the corresponding document. 10. The method of claim 9, wherein the score associated with the rule is improved, in response to determining that the application of the rule to a portion of the N documents has enhanced the classification of said portion of the N documents.
Please help me write a proper abstract based on the patent claims.
Machines, systems and methods for classifying documents, the method comprising: classifying a document from among a plurality of documents in a first class, in response to applying statistical analysis to data associated with the document; and classifying the document in a second class, in response to determining that a rule from among a plurality of rules applies to the document, wherein a proposed rule is added to the plurality of rules, in response to determining that application of the proposed rule to one or more of the plurality of documents to which the rule is applicable does not diminish accuracy of overall classification for the plurality of documents.
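The gating condition in claim 1 — a proposed rule is accepted only if it does not diminish overall accuracy — together with the scoring mechanism of the dependent claims can be sketched as follows. The keyword/class rule format and the `classify` stub standing in for the statistical stage are illustrative assumptions only.

```python
def try_add_rule(rules, proposed, documents, labels, classify):
    # Accept `proposed` only if overall accuracy does not decrease;
    # score it by the accuracy change (favorable > 0, neutral == 0,
    # unfavorable < 0, per the scoring-mechanism claims).
    def accuracy(rule_set):
        preds = [classify(doc, rule_set) for doc in documents]
        return sum(p == y for p, y in zip(preds, labels)) / len(labels)

    before = accuracy(rules)
    after = accuracy(rules + [proposed])
    score = after - before
    if after >= before:          # accuracy not diminished: add the rule
        rules.append(proposed)
        return True, score
    return False, score          # rule rejected with unfavorable score
```

A usage sketch: with a statistical stage that defaults every document to "general", a rule mapping "invoice" to "finance" raises accuracy and is added, while a rule that misroutes "notes" is rejected with a negative score.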
1-25. (canceled) 26. A computer-implemented method, comprising: collecting input data associated with a plurality of instances; generating, based at least in part on the input data, a predictive model usable to resume an instance, the predictive model including a classifier; generating a serialization schedule for the instance based at least in part on the predictive model; and causing a set of operations to be performed to launch the instance based at least in part on the serialization schedule. 27. The computer-implemented method of claim 26, wherein generating the predictive model is based at least in part on a role of the particular virtual machine instance. 28. The computer-implemented method of claim 27, wherein the role corresponds to a service executed by a customer of a computing resource service provider; and wherein the input data indicates a first set of intervals of time during which the service is active and a second set of intervals of time during which the service is idle. 29. The computer-implemented method of claim 26, wherein generating the predictive model is based at least in part on a plurality of predictive models. 30. The computer-implemented method of claim 26, wherein the computer-implemented method further comprises seeding a second predictive model based at least in part on the predictive model, the second predictive model associated with a second customer distinct from a customer associated with the predictive model. 31. The computer-implemented method of claim 26, wherein the serialization schedule further comprises an indication of a start time for at least one operation of the set of operations such that the instance is available to a customer prior to a predicted start time, the predicted start time determined based at least in part on the predictive model. 32. 
A system, comprising: one or more processors; and memory that stores computer-executable instructions that, if executed, cause the one or more processors to: generate a predictive model associated with a first instance, the predictive model generated based at least in part on the input data associated with a plurality of other instances, the predictive model usable to determine a start time of an event for making available the first instance; and cause the first instance to be instantiated by at least initiating one or more operations to make the first instance available prior to the start time. 33. The system of claim 32, wherein the input data further comprises information indicating operations of the plurality of other instances initiated at least in part by requests from users. 34. The system of claim 32, wherein the one or more operations includes an operation of loading a portion of a virtual machine image associated with the first instance into memory of a server computer system. 35. The system of claim 34, wherein the start time is determined such that the operation of loading a portion of a virtual machine image is completed prior to the event for making available the first instance. 36. The system of claim 32, wherein the input data further comprises information indicating price information associated with a market of instances. 37. The system of claim 32, wherein the input data further comprises information indicating a first interval of time during which operations were executed by the plurality of other instances and a second interval of time during which the plurality of other instances were idle. 38. 
The system of claim 32, wherein the memory further includes computer-executable instructions that, if executed, cause the one or more processors to: generate a schedule based at least in part on the predictive model, the schedule including the start time; and cause at least one instance of the plurality of other instances to be instantiated based at least in part on the schedule. 39. A non-transitory computer-readable storage medium having stored thereon executable instructions that, as a result of being executed by one or more processors of a computer system, cause the computer system to at least: obtain input data associated with execution of a plurality of virtual machine instances; generate a predictive model that indicates a probability of receiving a request to instantiate a particular virtual machine instance of the plurality of virtual machine instances by at least: analyzing the input data to generate one or more classifiers of the input data; and generating the predictive model based at least in part on the one or more classifiers; and cause one or more operations involved in instantiating the particular virtual machine instance to occur in accordance with the predictive model. 40. The non-transitory computer-readable storage medium of claim 39, wherein the instructions that cause the computer system to obtain the input data further include instructions that cause the computer system to obtain usage data for the plurality of virtual machine instances. 41. The non-transitory computer-readable storage medium of claim 39, wherein the instructions that cause the computer system to obtain the input data further include instructions that cause the computer system to obtain information indicating a plurality of commands transmitted to a computing resource service provider to perform operations associated with the plurality of virtual machine instances. 42. 
The non-transitory computer-readable storage medium of claim 39, wherein the instructions further comprise instructions that, as a result of being executed by the one or more processors, cause the computer system to generate one or more additional predictive models based at least in part on obtaining additional input data associated with the plurality of virtual machine instances. 43. The non-transitory computer-readable storage medium of claim 42, wherein the instructions further comprise instructions that, as a result of being executed by the one or more processors, cause the computer system to generate a set of schedules for instantiating virtual machine instances based at least in part on the predictive model and the one or more additional predictive models. 44. The non-transitory computer-readable storage medium of claim 43, wherein the instructions further comprise instructions that, as a result of being executed by the one or more processors, cause the computer system to correlate the set of schedules to determine a start time for causing the one or more operations to occur. 45. The non-transitory computer-readable storage medium of claim 39, wherein the instructions that cause the computer system to obtain the input data further include instructions that cause the computer system to obtain information indicating idle intervals of the plurality of virtual machine instances.
Please help me write a proper abstract based on the patent claims.
Remote computing resource service providers allow customers to execute virtual computer systems in a virtual environment on hardware provided by the computing resource service provider. The hardware may be distributed between various geographic locations connected by a network. The distributed environment may increase latency of various operations of the virtual computer systems executed by the customer. To reduce latency of various operations predictive modeling is used to predict the occurrence of various operations and initiate the operations before they may occur, thereby reducing the amount of latency perceived by the customer.
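As a rough illustration of the predictive model and serialization schedule in the claims, the toy code below predicts an instance's start hour from historical activity and schedules image loading to finish beforehand. The most-common-hour "classifier" and the fixed load time are placeholder assumptions; a real system would draw on the richer input data (usage data, commands, idle intervals, price information) the claims describe.

```python
from collections import Counter

def predict_start_hour(active_hours):
    # Naive classifier: the hour at which the instance most often
    # became active across past observations.
    return Counter(active_hours).most_common(1)[0][0]

def serialization_schedule(active_hours, load_time_hours=1):
    # Begin loading the machine image early enough that the instance
    # is available before the predicted start time.
    start = predict_start_hour(active_hours)
    return {"predicted_start": start,
            "begin_load": (start - load_time_hours) % 24}
```

The `begin_load` entry corresponds to the claimed "start time for at least one operation" chosen so the instance is ready before the predicted request arrives.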
1. A method to receive assistance in making a decision, comprising: operating a personal computing device by a decision maker; accessing a social network via the personal computing device, said accessing including passing personal information through an interactive interface to a computing server communicatively coupled to the personal computing device; receiving displayable web page information from the computing server, the displayable information forming at least a portion of the interactive interface; passing first input information to the computing server via the interactive interface, the first input information identifying a group of one or more advisers; passing second input information to the computing server via the interactive interface, the second input information describing a decision to be made by the decision maker; passing third input information to the computing server via the interactive interface, the third input information soliciting advice from the group of one or more advisers, the advice including decision assistance regarding the decision to be made; and receiving first output information from the computing server via the interactive interface, the first output information including the advice. 2. The method of claim 1 wherein the interactive interface includes one or more of communications through an Internet web site, communications through electronic mail, and communications associated with a short message service (SMS). 3. The method of claim 1, comprising: assigning at least one grade to the advice. 4. The method of claim 3 wherein the at least one grade is derived according to a selected norm-referenced system, a selected criterion-referenced system, or a selected peer-evaluation referenced system. 5. 
The method of claim 1, comprising: receiving advice from a plurality of advisers; and assigning a grade to the advice from the plurality of advisers, the assigning including assigning one grade to each separate instance of advice or the assigning including assigning a single grade to all of the advice received from the plurality of advisers. 6. The method of claim 1 wherein the second input information describing the decision to be made includes a plurality of choices for the group of one or more advisers to consider. 7. A method to receive decision-assistance information, comprising: operating a personal computing device by a representative of a business entity; accessing a social network via the personal computing device, said accessing including passing system-wide unique account information through an interactive interface to a computing server communicatively coupled to the personal computing device; receiving displayable web-page information from the computing server, the displayable information forming at least a portion of the interactive interface; passing first input information to the computing server via the interactive interface, the first input information identifying at least one decision theme, the decision theme including a set of words; and receiving first output information from the computing server via the interactive interface, the first output information including an alert corresponding to the decision theme, the alert indicating a decision maker has solicited advice regarding a decision to be made and the decision to be made is associated with the decision theme. 8. The method of claim 7 wherein the set of words includes one or more of stems of words, synonyms, antonyms, and related words as defined in a dictionary. 9. The method of claim 8 wherein the set of words is weighted by a probability indicating how likely a decision within the decision theme is to include a particular word. 10. 
The method of claim 7 wherein the at least one decision theme corresponds to an entry in a classification database of categories of decisions to be made. 11. The method of claim 7 wherein the first output information is a stream of alerts and the representative of the business entity has exchanged one or more credits for access to the stream of alerts, the access associated with at least one of a number of alerts, a geographic region, a group sharing a common demographic parameter, and a time frame. 12. The method of claim 11 wherein the one or more credits are received based on at least one of money paid by the representative of the business entity, a quantity of advice communicated into the computing server and attributed to the business entity, and a quality of advice communicated into the computing server and attributed to the business entity. 13. A decision-assistance server, comprising: a processor module; one or more memory storage devices; a storage interface module coupled to the one or more memory storage devices; an input/output interface module to pass information to and from the decision-assistance server, the information passed to and from the decision-assistance server including: first input information from a decision maker identifying a group of one or more advisers, second input information describing in human language a decision to be made by the decision maker, and first output advice information provided by the group of one or more advisers; a natural-language-detection module to detect analyzable word objects within the second input information and to generate decision-to-be-made information; at least one theme-operations module to determine at least one theme present amongst the second input information; and a decision-processing module to coordinate communication of the decision-to-be-made information to the group of one or more advisers and to coordinate communication of the first output advice to the decision maker. 14. 
The decision-assistance server of claim 13, comprising: an account-processing module, the account-processing module arranged to service a plurality of registered user accounts, the account-processing module arranged to associate at least some of the plurality of registered user accounts with individuals, respectively, and others of the plurality of registered-user accounts with businesses, respectively. 15. The decision-assistance server of claim 14 wherein the account processing module is arranged to manage more than 100,000 registered-user accounts. 16. The decision-assistance server of claim 15 wherein the decision-processing module is arranged to manage decision-to-be-made information associated with more than 100,000 active decisions to be made. 17. The decision-assistance server of claim 16 wherein the at least one theme-operations module is arranged to automatically determine theme information associated with each managed decision to be made. 18. The decision-assistance server of claim 17 wherein the decision-processing module is arranged to: manage a plurality of subscriptions, each of the plurality of subscriptions identifying at least one theme; determine when a theme is identified in an active decision to be made; and direct communication of at least some of the second input information and at least some of the first output information to an account of a business subscribed to at least one theme. 19. The decision-assistance server of claim 16, comprising: timing logic arranged to monitor how long each active decision to be made has been active. 20. The decision-assistance server of claim 16, comprising: a database arranged to store decision-to-be-made information associated with the more than 100,000 active decisions to be made.
Please help me write a proper abstract based on the patent claims.
A decision-assistance system provides a tool for decision makers to receive assistance in making a decision. Through a web page, a decision maker selects a group of one or more advisers and inputs information describing a decision to be made. The decision maker solicits advice from the group of advisers, and the advisers provide advice to assist the decision maker. Other entities can be granted access to streams of information corresponding to particular themes of the decisions to be made. Business entities, for example, can use the streams of information to connect with decision makers facing decisions relevant to the goods and services provided by the business entity.
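The claims describe matching a decision against themes whose word sets are weighted by probabilities, and alerting subscribed businesses on a match. A minimal sketch of that matching follows; the theme dictionaries, the whitespace tokenizer, and the 0.5 alert threshold are all made-up assumptions, since the claims leave them unspecified.

```python
def theme_score(decision_text, theme_words):
    # Sum the probability weights of theme words that appear in the
    # decision text (a crude bag-of-words match).
    words = set(decision_text.lower().split())
    return sum(p for w, p in theme_words.items() if w in words)

def alerts_for_decision(decision_text, subscriptions, threshold=0.5):
    # Emit an alert for each business subscribed to a theme whose
    # score clears the threshold.
    return [biz for biz, theme in subscriptions
            if theme_score(decision_text, theme) >= threshold]
```

A stream of such alerts, filtered per subscription, is what the claims describe a business exchanging credits to access.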
1. A computer-implemented method comprising: receiving a request to perform a predictive analysis in association with multiple time series data sets, the multiple time series data sets determined from raw machine data; parsing the request to identify each of the time series data sets to use in the predictive analysis; for each time series data set, initiating an object to perform the predictive analysis for the corresponding time series data set, the predictive analysis predicting expected outcomes based on the corresponding time series data set; concurrently executing each object to generate one or more expected outcomes associated with the corresponding time series data set; and providing the one or more expected outcomes associated with each of the corresponding time series data sets for display. 2. The computer-implemented method of claim 1 further comprising converting a set of raw data to the set of time series data. 3. The computer-implemented method of claim 1, wherein the request to perform a predictive analysis comprises a predict command that includes an indication of each of the time series data sets. 4. The computer-implemented method of claim 1, wherein the request to perform the predictive analysis comprises an indication of a first time series data set and a first forecasting algorithm to utilize to perform the predictive analysis based on the first time series data set, and an indication of a second time series data set and a second forecasting algorithm to utilize to perform the predictive analysis based on the second time series data set. 5. The computer-implemented method of claim 1, wherein the request to perform the predictive analysis comprises an indication of a first time series data set, an indication of a second time series data set, and an indication of a forecasting algorithm to utilize to perform the predictive analysis in association with the first time series data set and the second time series data set. 6. 
The computer-implemented method of claim 1, wherein the request to perform the predictive analysis comprises an indication of a first time series data set and first corresponding parameters to utilize to perform the predictive analysis in association with the first time series data set, and an indication of a second time series data set and second corresponding parameters to utilize to perform the predictive analysis in association with the second time series data set. 7. The computer-implemented method of claim 1, wherein parsing the request further identifies a forecasting algorithm to utilize for each of the time series data sets. 8. The computer-implemented method of claim 1, wherein parsing the request identifies a first forecasting algorithm associated with a first time series data set and a second forecasting algorithm associated with a second time series data set. 9. The computer-implemented method of claim 1, wherein initiating the object for each time series data set comprises initiating a first object for executing the predictive analysis for a first time series data set and initiating a second object for executing the predictive analysis for a second time series data set. 10. The computer-implemented method of claim 1, wherein the generated one or more expected outcomes associated with the corresponding time series data sets are aggregated and provided for concurrent display. 11. The computer-implemented method of claim 1 further comprising determining that the request to perform a predictive analysis comprises a request to perform concurrent predictive analysis for the multiple time series data sets. 12. The computer-implemented method of claim 1, wherein each object performs the predictive analysis for the corresponding time series data set based on a designated forecasting algorithm specified in the received request to perform the predictive analysis. 13. 
The computer-implemented method of claim 1, wherein each object performs the predictive analysis for the corresponding time series data set by: accessing the corresponding time series data set; and applying a forecasting algorithm designated for the corresponding time series data set to generate the one or more expected outcomes. 14. The computer-implemented method of claim 1, wherein each object performs the predictive analysis for the corresponding time series data set by: accessing the corresponding time series data set; determining that the corresponding time series data set has at least one missing data value; generating a predicted missing value for each of the at least one missing data values; and using the time series data set and the predicted missing values for each of the at least one missing data values to determine periodicity associated with the corresponding time series data set. 15. The computer-implemented method of claim 1, wherein each object performs the predictive analysis for the corresponding time series data set by: determining that the corresponding time series data set has at least one missing data value; generating a predicted missing value for each of the at least one missing data values; using the corresponding time series data set and the predicted missing values for each of the at least one missing data values to determine periodicity associated with the corresponding time series data set; and using the periodicity to generate a forecasting model used to generate the one or more expected outcomes associated with the corresponding time series data set. 16. The computer-implemented method of claim 1, wherein the one or more expected outcomes associated with each of the corresponding time series data sets are concurrently presented to a user. 17. 
The computer-implemented method of claim 1, wherein the one or more expected outcomes associated with each of the corresponding time series data sets are concurrently presented as a graphical visualization in connection with the corresponding time series data sets. 18. The computer-implemented method of claim 1, wherein the one or more expected outcomes associated with each of the corresponding time series data sets are concurrently presented in a tabular format. 19. One or more computer-readable storage media having instructions stored thereon, wherein the instructions, when executed by a computing device, cause the computing device to: receive a request to perform a predictive analysis in association with multiple time series data sets, the multiple time series data sets determined from raw machine data; parse the request to identify each of the time series data sets to use in the predictive analysis; for each time series data set, initiate an object to perform the predictive analysis for the corresponding time series data set, the predictive analysis predicting expected outcomes based on the corresponding time series data set; concurrently execute each object to generate one or more expected outcomes associated with the corresponding time series data set; and provide the one or more expected outcomes associated with each of the corresponding time series data sets for display. 20. 
A computing device comprising: one or more processors; and a memory coupled with the one or more processors, the memory having instructions stored thereon, wherein the instructions, when executed by the one or more processors, cause the computing device to: receive a request to perform a predictive analysis in association with multiple time series data sets, the multiple time series data sets determined from raw machine data; parse the request to identify each of the time series data sets to use in the predictive analysis; for each time series data set, initiate an object to perform the predictive analysis for the corresponding time series data set, the predictive analysis predicting expected outcomes based on the corresponding time series data set; concurrently execute each object to generate one or more expected outcomes associated with the corresponding time series data set; and provide the one or more expected outcomes associated with each of the corresponding time series data sets for display.
Please help me write a proper abstract based on the patent claims.
Embodiments of the present invention are directed to facilitating concurrent forecasting associated with multiple time series data sets. In accordance with aspects of the present disclosure, a request to perform a predictive analysis in association with multiple time series data sets is received. Thereafter, the request is parsed to identify each of the time series data sets to use in predictive analysis. For each time series data set, an object is initiated to perform the predictive analysis for the corresponding time series data set. Generally, the predictive analysis predicts expected outcomes based on the corresponding time series data set. Each object is concurrently executed to generate expected outcomes associated with the corresponding time series data set, and the expected outcomes associated with each of the corresponding time series data sets are provided for display.
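The parse/initiate/concurrently-execute pipeline can be sketched with one worker per time series. The mean-fill for missing values and the linear-step forecaster below are stand-ins for the claimed missing-value prediction and forecasting algorithms, which the claims deliberately leave open.

```python
from concurrent.futures import ThreadPoolExecutor

def fill_missing(series):
    # Replace None gaps with the mean of observed values, a stand-in
    # for the claimed missing-value prediction step.
    observed = [v for v in series if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in series]

def forecast(series, horizon=2):
    # One "object" per series: fill gaps, then extrapolate the average
    # step between the first and last points.
    s = fill_missing(series)
    step = (s[-1] - s[0]) / (len(s) - 1)
    return [s[-1] + step * (i + 1) for i in range(horizon)]

def predict_command(named_series, horizon=2):
    # Parse the request (here just a dict of name -> series) and run
    # one forecasting object per series concurrently.
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(forecast, s, horizon)
                   for name, s in named_series.items()}
        return {name: f.result() for name, f in futures.items()}
```

The aggregated result dict corresponds to the claimed step of providing the expected outcomes for each data set for concurrent display.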
1. A method for configuring a Quantum Annealing (QA) device, the QA device having a plurality of qubits and a plurality of couplers at overlapping intersections of the qubits, the method comprising: mapping a node of a neural network that has a plurality of nodes and connections between the nodes to a qubit in the QA device; and mapping a connection of the neural network to a coupler at an intersection in the QA device where two qubits corresponding to two nodes connected by the connection intersect. 2. The method of claim 1, further comprising: mapping a node of the neural network to a chain of qubits. 3. The method of claim 2, wherein mapping the node of the neural network to the chain of qubits includes: configuring a coupling between qubits in the chain to be a ferromagnetic coupling. 4. The method of claim 1, wherein the neural network is a deep learning neural network. 5. The method of claim 1, further comprising: configuring a coupler associated with a faulty qubit in the QA device with a zero weight; and setting a connection associated with a node in the neural network that is mapped to the faulty qubit with a zero weight. 6. The method of claim 2, further comprising: discarding quantum samples that include states of qubits in a chain of qubits that disagree with each other when a sample average is computed. 7. The method of claim 2, further comprising: using a state value of majority qubits that agree with each other in a chain of qubits including a faulty qubit as a state value of the chain of qubits in a quantum sample when a percentage of qubits in each chain of qubits that agree is greater than a voting threshold parameter in the quantum sample. 8. The method of claim 1, further comprising: applying a gauge transformation to qubits of the QA device. 9. The method of claim 8, wherein the gauge transformation is a basket weave gauge transformation. 10. 
The method of claim 8, wherein applying a gauge transformation to qubits of the QA device includes: generating quantum samples from qubits in the QA device with multiple different gauge transformation arrangements; and averaging the quantum samples to calculate a model expectation. 11. The method of claim 10, wherein the multiple different gauge transformation arrangements include one of: an identity transformation where no qubits are inverted; a basket weave gauge transformation where a first half of qubits in the QA device are inverted and a second half of qubits are not inverted; a complement of the above basket weave gauge transformation where the second half of the qubits in the QA device are inverted and the first half of the qubits are not inverted; and a negative of the identity transformation where all qubits are inverted. 12. The method of claim 1, further including: calibrating a scale factor β_eff for generating quantum samples from a quantum annealing process. 13. The method of claim 12, wherein calibrating the scale factor β_eff includes: constructing a restricted Boltzmann machine (RBM) of a particular size; choosing a particular value for the scale factor β_eff; performing the quantum annealing process to generate the quantum samples using a quotient of an energy functional of the RBM being divided by the scale factor β_eff as a final Hamiltonian; repeating the choosing and performing steps multiple times; and determining a value of the scale factor β_eff that leads to the smallest difference between model expectations of the RBM based on the quantum samples and model expectations of the RBM based on the energy functional of the RBM. 14. 
The method of claim 13, wherein calibrating the scale factor β_eff further includes: calculating model expectations of the RBM based on the quantum samples; calculating model expectations of the RBM based on the energy functional of the RBM; and comparing model expectations of the RBM based on the quantum samples with model expectations of the RBM based on the energy functional of the RBM. 15. A method for training a neural network using a quantum annealing (QA) device including qubits configured with biases and couplers configured with weights, where an original restricted Boltzmann machine (RBM) of one layer of the neural network is mapped onto the QA device that is configured to act as a quantum RBM, the method comprising: generating quantum samples at the QA device; calculating an update to biases and weights for the original RBM and the quantum RBM with a classical computer based on the quantum samples; and using the update to biases and weights to perform a next iteration of training the neural network. 16. The method of claim 15, further comprising: initializing the biases and the weights of the original RBM and the quantum RBM to random values. 17. The method of claim 15, wherein generating quantum samples at the QA device includes: using a quotient of an energy functional of the RBM being divided by the scale factor β_eff as a final Hamiltonian for a quantum annealing process at the QA device; and running the quantum annealing process multiple times to generate multiple quantum samples. 18. The method of claim 15, wherein calculating the update to biases and weights for the original RBM and the quantum RBM includes: averaging multiple quantum samples to calculate a model expectation that is consequently used for calculating updates to the biases and weights. 19. 
The method of claim 15, wherein using the update to biases and weights to perform the next iteration of training the neural network includes: configuring the biases and the weights of the original RBM and the quantum RBM with values of the update to biases and weights for the next iteration of training the neural network; and repeating the steps of generating quantum samples, calculating an update to biases and weights, and using the update to biases and weights to perform the next iteration.
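Claims 15 through 19 above describe a hybrid loop: the QA device supplies model samples, and a classical computer uses them to update the RBM's biases and weights. A minimal sketch follows, with a random stand-in for the quantum sampler; all sizes, data, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 4

# Claim 16: initialize biases and weights to random values.
W = rng.normal(scale=0.1, size=(n_visible, n_hidden))
b_v = np.zeros(n_visible)
b_h = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Stand-in for the QA device: draws (v, h) samples used for the model
# expectation. A real implementation would anneal with E(v, h) / beta_eff
# as the final Hamiltonian and read spins off the hardware (claim 17).
def quantum_samples(n_samples=50):
    v = (rng.random((n_samples, n_visible)) < 0.5).astype(float)
    h = (rng.random((n_samples, n_hidden)) < sigmoid(v @ W + b_h)).astype(float)
    return v, h

data = (rng.random((20, n_visible)) < 0.5).astype(float)
lr = 0.05
for _ in range(3):  # a few training iterations (claim 19)
    h_data = sigmoid(data @ W + b_h)     # data expectation (classical)
    v_m, h_m = quantum_samples()         # model expectation from samples (claim 18)
    W += lr * (data.T @ h_data / len(data) - v_m.T @ h_m / len(v_m))
    b_v += lr * (data.mean(0) - v_m.mean(0))
    b_h += lr * (h_data.mean(0) - h_m.mean(0))
```

Each pass through the loop corresponds to one iteration of claim 15: sample, compute the bias/weight update classically, and carry the updated parameters into the next iteration.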
Please help me write a proper abstract based on the patent claims.
Aspects of the disclosure provide a method for configuring a Quantum Annealing (QA) device. The QA device has a plurality of qubits and a plurality of couplers at overlapping intersections of the qubits. The method includes mapping a node of a neural network that has a plurality of nodes and connections between the nodes to a qubit in the QA device, and mapping a connection of the neural network to a coupler at an intersection in the QA device where two qubits corresponding to two nodes connected by the connection intersect. The method further includes mapping a node of the neural network to a chain of qubits. In an embodiment, a coupling between qubits in the chain is configured to be a ferromagnetic coupling in order to map the node of the neural network to the chain of qubits.
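Claims 10 and 11 above describe generating quantum samples under several gauge transformation arrangements and averaging them to estimate a model expectation. A minimal sketch, with a random stand-in for the device and an alternating pattern standing in for the basket weave arrangement (the real pattern follows the device's qubit layout):

```python
import numpy as np

rng = np.random.default_rng(0)
n_qubits = 8

# Hypothetical sampler: stands in for reading spins (+/-1) off a QA device.
# A gauge transformation inverts the chosen qubits; multiplying by the gauge
# vector when reading samples back recovers the original spin variables.
def sample_qubits(gauge, n_samples=100):
    raw = rng.choice([-1, 1], size=(n_samples, n_qubits))
    return raw * gauge

identity = np.ones(n_qubits)                    # no qubits inverted
basket = np.array([1, -1] * (n_qubits // 2))    # half inverted (sketch pattern)
gauges = [identity, basket, -basket, -identity] # the four arrangements of claim 11

# Average samples over all gauge arrangements to estimate a model expectation.
samples = np.concatenate([sample_qubits(g) for g in gauges])
model_expectation = samples.mean(axis=0)
```

Averaging over complementary gauges cancels systematic per-qubit biases in the hardware, which is the motivation for the four arrangements listed in claim 11.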
1. A mobile compute device for distributed machine learning, the mobile compute device comprising: a data management module to (i) identify an input dataset including a plurality of dataset elements for machine learning and (ii) select a subset of the dataset elements; and a communication module to (i) transmit the subset to a cloud server for machine learning and (ii) receive, from the cloud server, a set of learned parameters for local data classification in response to transmittal of the subset to the cloud server, wherein the learned parameters are based on an expansion of features extracted by the cloud server from the subset of the dataset elements. 2. The mobile compute device of claim 1, wherein to identify the input dataset comprises to identify a set of images for classification. 3. The mobile compute device of claim 1, wherein the learned parameters include one or more transformations of the features extracted by the cloud server. 4. The mobile compute device of claim 1, further comprising a classification module to perform local classification of dataset elements based on the learned parameters. 5. The mobile compute device of claim 4, wherein each of the dataset elements comprises an image; and wherein to perform the local classification comprises to recognize a particular object in one or more images based on the learned parameters. 6. The mobile compute device of claim 1, wherein to receive the set of learned parameters comprises to receive a set of learned parameters for local data classification in response to transmittal of the subset to the cloud server in real-time. 7. The mobile compute device of claim 1, wherein the communication module is to periodically update the set of learned parameters based on a selection of a new subset of the dataset elements, transmittal of the new subset to the cloud server, and receipt of an updated set of learned parameters from the cloud server. 8. 
The mobile compute device of claim 1, wherein to select the subset of the dataset elements comprises to select a random sample of the dataset elements. 9. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to execution by a mobile compute device, cause the mobile compute device to: identify an input dataset including a plurality of dataset elements for machine learning; select a subset of the dataset elements; transmit the subset to a cloud server for machine learning; and receive, from the cloud server, a set of learned parameters for local data classification in response to transmittal of the subset to the cloud server, wherein the learned parameters are based on an expansion of features extracted by the cloud server from the subset of the dataset elements. 10. The one or more machine-readable storage media of claim 9, wherein to identify the input dataset comprises to identify a set of images for classification. 11. The one or more machine-readable storage media of claim 9, wherein the learned parameters include one or more transformations of the features extracted by the cloud server. 12. The one or more machine-readable storage media of claim 9, wherein the plurality of instructions further cause the mobile compute device to perform local classification of dataset elements based on the learned parameters. 13. The one or more machine-readable storage media of claim 12, wherein each of the dataset elements comprises an image; and wherein to perform the local classification comprises to recognize a particular object in one or more images based on the learned parameters. 14. 
The one or more machine-readable storage media of claim 9, wherein the plurality of instructions further cause the mobile compute device to periodically update the set of learned parameters based on a selection of a new subset of the dataset elements, transmittal of the new subset to the cloud server, and receipt of an updated set of learned parameters from the cloud server. 15. The one or more machine-readable storage media of claim 9, wherein to select the subset of the dataset elements comprises to select a random sample of the dataset elements. 16. A cloud server for distributed machine learning, the cloud server comprising: a communication module to receive a dataset from a mobile compute device; a feature determination module to extract one or more features from the received dataset; and a feature expansion module to generate an expanded feature set based on the one or more extracted features; wherein the communication module is further to transmit the expanded feature set to the mobile compute device as learned parameters for data classification. 17. The cloud server of claim 16, wherein to generate the expanded feature set comprises to: identify one or more transformations to apply to the extracted features; and apply the one or more identified transformations to each of the extracted features to generate one or more additional features for each of the extracted features. 18. The cloud server of claim 17, wherein the dataset comprises a set of images; and wherein the one or more transformations comprise at least one of a rotational transformation or a perspective transformation. 19. The cloud server of claim 17, wherein the dataset comprises a set of images; and wherein the one or more transformations comprise a transformation associated with an illumination of a corresponding image. 20. 
The cloud server of claim 17, wherein to identify one or more transformations comprises to: identify a type of transformation to apply to the extracted features; and discretize a space of the type of transformations to identify a finite number of transformations of the type of transformations to apply. 21. One or more machine-readable storage media comprising a plurality of instructions stored thereon that, in response to execution by a cloud server, cause the cloud server to: receive a dataset from the mobile compute device; extract one or more features from the received dataset; generate an expanded feature set based on the one or more extracted features; and transmit the expanded feature set to the mobile compute device as learned parameters for data classification. 22. The one or more machine-readable storage media of claim 21, wherein to generate the expanded feature set comprises to: identify one or more transformations to apply to the extracted features; and apply the one or more identified transformations to each of the extracted features to generate one or more additional features for each of the extracted features. 23. The one or more machine-readable storage media of claim 22, wherein the dataset comprises a set of images; and wherein the one or more transformations comprise at least one of a rotational transformation or a perspective transformation. 24. The one or more machine-readable storage media of claim 22, wherein to identify one or more transformations comprises to: identify a type of transformation to apply to the extracted features; and discretize a space of the type of transformations to identify a finite number of transformations of the type of transformations to apply. 25. The one or more machine-readable storage media of claim 21, wherein the dataset received from the mobile compute device consists of a random subset of data elements extracted by the mobile compute device from a data superset.
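The feature expansion of claims 17 and 20 above — pick a transformation type, discretize its continuous space into a finite number of transformations, and apply each to every extracted feature — can be sketched as follows. Treating each "feature" as a small 2-D patch and using rotations as the transformation type are illustrative assumptions.

```python
import numpy as np

# Discretize the continuous rotation space into a finite number of angles
# (claim 20); 90-degree steps keep the sketch exact with np.rot90.
def discretize_rotations(n_angles=4):
    return [k * 360.0 / n_angles for k in range(n_angles)]

def rotate_patch(patch, angle):
    return np.rot90(patch, k=int(angle // 90) % 4)

# Claim 17: apply every identified transformation to every extracted feature,
# generating additional features for each.
def expand_features(features, n_angles=4):
    angles = discretize_rotations(n_angles)
    return [rotate_patch(f, a) for f in features for a in angles]

patches = [np.arange(4).reshape(2, 2)]   # one extracted "feature"
expanded = expand_features(patches)      # one feature -> n_angles features
```

The expanded set is what the cloud server would transmit back to the mobile device as learned parameters for local classification.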
Please help me write a proper abstract based on the patent claims.
Technologies for distributed machine learning include a mobile compute device to identify an input dataset including a plurality of dataset elements for machine learning and select a subset of the dataset elements. The mobile compute device transmits the subset to a cloud server for machine learning and receives, from the cloud server, a set of learned parameters for local data classification in response to transmitting the subset to the cloud server. The learned parameters are based on an expansion of features extracted by the cloud server from the subset of the dataset elements.
1. A computer-implemented method for generating and applying at least one model component, comprising: in a training system that includes one or more computing devices: providing at least one seed item; identifying, for each seed item, a set of candidate items; using a computer-implemented label-generating component to generate a label for each pairing of a particular seed item and a particular candidate item, to collectively provide label information, the label being generated, using the label-generating component, by: identifying a set of documents that have established respective evaluation measures, each evaluation measure reflecting an assessed relevance between a particular document in the set of documents and the particular seed item; determining whether the particular candidate item is found in each document in the set of documents, to provide retrieval information; and generating the label for the particular candidate item based on the evaluation measures associated with the documents in the set of documents and the retrieval information; using a computer-implemented feature-generating component to generate a set of feature values for each said pairing of a particular seed item and a particular candidate item, to collectively provide feature information; using a computer-implemented model-generating component to generate and store a model component based on the label information and the feature information; and in a model-application system that includes one or more computing devices: receiving an input item; applying the model component to generate a set of zero, one, or more related items that are determined, by the model component, to be related to the input item; generating an output result based at least on the set of related items; and providing the output result to an end user, the model-application system leveraging use of the model component to facilitate efficient generation of the output result. 2. 
The method of claim 1, wherein said identifying of the set of candidate items, as applied with respect to the particular seed item, comprises identifying one or more items that have a nexus to the particular seed item, as assessed based on one or more data sources. 3. The method of claim 1, wherein each document, in the set of documents, is associated with a collection of text items, and wherein the collection of text items encompasses text items within the document as well as text items that are determined to relate to the document. 4. The method of claim 1, wherein said generating of the label for the particular candidate item comprises: generating a retrieved gain measure, corresponding to an aggregation of evaluation measures associated with a subset of documents, among the set of documents, that match the particular candidate item; generating a total gain available measure, corresponding to an aggregation of evaluation measures associated with all of the documents in the set of documents; generating a documents-retrieved measure, which corresponds to a number of documents, among the set of documents, that match the particular candidate item; and generating the label based on the retrieved gain measure, the total gain available measure, and the documents-retrieved measure. 5. The method of claim 4, wherein the label is generated by multiplying the total gain available measure by the documents-retrieved measure, to form a product, and dividing the retrieved gain measure by the product. 6. The method of claim 4, wherein at least one of the retrieved gain measure, the total gain available measure, and/or the documents-retrieved measure is modified by an exponential balancing parameter. 7. 
The method of claim 1, wherein said generating of the set of feature values, for the pairing of the particular seed item and the particular candidate item, comprises determining at least one feature value that assesses a text-based similarity between the particular seed item and the particular candidate item. 8. The method of claim 1, wherein said generating of the set of feature values, for the pairing of the particular seed item and the particular candidate item, comprises determining at least one feature value by applying a language model component to determine a probability of an occurrence of the particular candidate item within a language. 9. The method of claim 1, wherein said generating of the particular set of feature values, for the pairing of the particular seed item and the particular candidate item, comprises determining at least one feature value by applying a translation model component to determine a probability that the particular seed item is transformable into the particular candidate item, or vice versa. 10. The method of claim 1, wherein said generating of the particular set of feature values, for the pairing of the particular seed item and the particular candidate item, comprises determining at least one feature value by determining characteristics of prior user behavior pertaining to the particular seed item and/or the particular candidate item. 11. The method of claim 1, wherein the model component that is generated corresponds to a first model component, and wherein the method further comprises: using the training system to generate a second model component; using the model-application system to apply the first model component to generate an initial set of related items that are related to the input item; and using the model-application system to apply the second model component to select a subset of related items from among the initial set of related items. 12. 
The method of claim 11, wherein said training system generates the second model component by: using the first model component to generate a plurality of new individual candidate items; generating a plurality of group candidate items, each of which reflects a particular combination of one or more new individual candidate items; using another computer-implemented label-generating component to generate new label information for the group candidate items; using another computer-implemented feature-generating component to generate new feature information for the group candidate items; and using another computer-implemented model-generating component to generate the second model component based on the new label information and the new feature information. 13. The method of claim 1, wherein each of the set of candidate items corresponds to a group candidate item that includes a combination of individual candidate items, selected from among a set of possible combinations, the individual candidate items being generated using any type of candidate-generating component. 14. The method of claim 13, wherein said using of the feature-generating component to generate feature information comprises, for each particular group candidate item: determining a set of feature values for each individual candidate item that is associated with the particular group candidate item, to overall provide a collection of feature sets that is associated with the particular group candidate item; and determining at least one feature value that provides group-based information that summarizes the collection of feature sets. 15. The method of claim 1, wherein: the model-application system implements a search service, the input item corresponds to an input query, and the set of related items corresponds to a set of linguistic items that are determined to be related to the input query. 16. 
A computer readable storage medium for storing computer readable instructions, the computer readable instructions implementing a training system when executed by one or more processing devices, the computer readable instructions comprising: logic configured to identify, for each of a set of seed items, a set of candidate items; logic configured to generate a label, for each pairing of a particular seed item and a particular candidate item, based on: evaluation measures which measure an extent to which documents in a set of documents have been assessed as being relevant to the particular seed item; and retrieval information which reflects an extent to which the particular candidate item is found in the set of documents; logic configured to generate a set of feature values for each said pairing of a particular seed item and a particular candidate item, said logic configured to generate a label collectively providing label information, when applied to all pairings of seed items and candidate items, said logic configured to generate a set of feature values collectively providing feature information, when applied to all pairings of seed items and candidate items; and logic configured to generate a model component based on the label information and the feature information, the model component, when applied by a model-application system, identifying zero, one, or more related items with respect to an input item, each particular candidate item corresponding to a particular individual candidate item that includes a single linguistic item, or a particular group candidate item that includes a combination of individual candidate items. 17. 
The computer readable storage medium of claim 16, wherein said logic configured to generate the label for the particular candidate item comprises: logic configured to generate a retrieved gain measure, corresponding to an aggregation of evaluation measures associated with a subset of documents, among the set of documents, that match the particular candidate item; logic configured to generate a total gain available measure, corresponding to an aggregation of evaluation measures associated with all of the documents in the set of documents; logic configured to generate a documents-retrieved measure, which corresponds to a number of documents, among the set of documents, that match the particular candidate item; and logic configured to generate the label based at least on the retrieved gain measure, the total gain available measure, and the documents-retrieved measure. 18. One or more computing devices for implementing at least a training system, comprising: a candidate-generating component configured to generate a set of candidate items for each seed item, for a plurality of seed items; a label-generating component configured to generate a label for each pairing of a particular seed item and a particular candidate item, to collectively provide label information, said label being generated, using the label-generating component, by: identifying a set of documents that have established respective evaluation measures, each evaluation measure reflecting an assessed relevance between a particular document in the set of documents and the particular seed item; determining whether the particular candidate item is found in each document in the set of documents, to provide retrieval information; and generating the label for the particular candidate item based on the evaluation measures associated with the documents in the set of documents and the retrieval information; a feature-generating component configured to generate a set of feature values for each said pairing of a 
particular seed item and a particular candidate item, to collectively provide feature information; and a model-training component configured to generate and store a model component based on the label information and the feature information. 19. The one or more computing devices of claim 18, further comprising a model-application system, implemented by the one or more computing devices, and comprising: a user interface component configured to receive an input item from an end user; an item-expansion component configured to apply the model component to generate a set of zero, one, or more related items that are determined, by the model component, to be related to the input item; and a processing component configured to generate an output result based on the set of related items, the user interface component further being configured to provide the output result to the end user. 20. The one or more computing devices of claim 19, wherein: the model component that is generated by the training system corresponds to a first model component, the training system is further configured to generate a second model component, the item-expansion component, of the model-application system, is further configured to: apply the first model component to generate an initial set of related items that are related to the input item; and apply the second model component to select a subset of related items from among the initial set of related items.
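The label computation of claims 4 and 5 above is concrete enough to sketch directly: aggregate the evaluation measures of the documents that match the candidate item (the retrieved gain), then divide by the product of the total available gain and the number of matching documents. The zero-guard behavior for candidates matching no documents is an assumption, not stated in the claims.

```python
# Sketch of the label in claims 4 and 5.
def compute_label(evaluation_measures, matches):
    # evaluation_measures[i]: assessed relevance of document i to the seed item
    # matches[i]: True if the candidate item is found in document i
    retrieved_gain = sum(g for g, m in zip(evaluation_measures, matches) if m)
    total_gain = sum(evaluation_measures)
    docs_retrieved = sum(matches)
    if docs_retrieved == 0 or total_gain == 0:
        return 0.0  # assumed convention for unmatched candidates
    # Claim 5: divide the retrieved gain by (total gain * documents retrieved).
    return retrieved_gain / (total_gain * docs_retrieved)

label = compute_label([3.0, 1.0, 0.0, 2.0], [True, False, False, True])
# retrieved_gain = 5.0, total_gain = 6.0, docs_retrieved = 2
```

Claim 6 notes that any of the three measures may additionally be raised to an exponential balancing parameter before combining; the sketch omits that refinement.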
Please help me write a proper abstract based on the patent claims.
A computer-implemented training system is described herein for generating at least one model component based on labeled training data. The training system produces the labels in the training data by leveraging textual information expressed in already-evaluated documents. In another implementation, the training system generates a first model component and a second model component. In one implementation, in an application phase, a computer-implemented model-application system applies the first model component to identify an initial set of related items that are related to an input item (such as a query). The model-application system then applies the second model component to select a subset of related items from among the initial set of related items.
1. An information processing device configured to process trade data for asset management and trading, said information processing device comprising: one or more processors configured to determine price of a commodity indicated by said trade data, said price being valid in an event the one or more processors receive a user input indicating authorization of a user to revoke a deal of said commodity within a predetermined time duration, wherein said one or more processors are configured to determine said price by using a neural network, said neural network being trained by said one or more processors based on one or more data sets corresponding to said commodity. 2. The information processing device as claimed in claim 1, wherein said one or more processors are configured to train said neural network by performing one or more steps of data filtering, data validation, data sampling, and/or data sorting on said one or more data sets, wherein said one or more data sets correspond to spot prices of said commodity in an event said neural network is trained by said one or more processors. 3. The information processing device as claimed in claim 2, wherein said one or more data sets correspond to one or more of historical actual prices, historical commercial prices, periodic prices, and/or aperiodic prices, of said commodity. 4. The information processing device as claimed in claim 1, wherein said one or more processors are further configured to predict volatility related to said price of said commodity by using said neural network. 5. The information processing device as claimed in claim 4, wherein said one or more processors are further configured to retrain said neural network to adjust said determined price of said commodity to keep said asset management and said trading profitable. 6. 
The information processing device as claimed in claim 4, wherein said one or more processors are configured to retrain said neural network based on a feedback mechanism which comprises adjusting said price depending on performance estimation in real time. 7. The information processing device as claimed in claim 6, wherein said performance estimation in real time comprises comparison between stored historical volatility data and said predicted volatility, wherein said stored historical volatility and said predicted volatility correspond to the same time intervals. 8. The information processing device as claimed in claim 6, wherein retraining of said neural network further comprises validating said price by estimating volume pricing for different strikes for said commodity. 9. The information processing device as claimed in claim 1, wherein said one or more processors are further configured to estimate at the money (ATM) volatility for a plurality of time intervals of the same duration. 10. The information processing device as claimed in claim 9, wherein said one or more processors are further configured to generate a message indicating avoidance of trading of said commodity in an event a difference between each of said plurality of time intervals exceeds a predetermined threshold value. 11. A method for processing trade data for asset management and trading, said method comprising: in an information processing device: determining price of a commodity indicated by said trade data, wherein said price is valid in an event the information processing device receives a user input indicating authorization of a user to revoke a deal of said commodity within a predetermined time duration, and wherein said price is determined by using a neural network, said neural network being trained by said information processing device based on one or more data sets corresponding to said commodity. 12. 
A system for asset management and trading, said system comprising: a client computer comprising one or more processors configured to: receive trade data for said asset management and said trading; and determine price of a commodity indicated by said trade data, wherein said price is valid in an event the one or more processors receive a user input indicating authorization of a user to revoke a deal of said commodity within a predetermined time duration, and wherein said one or more processors are configured to determine said price by using a neural network, said neural network being trained by said one or more processors based on one or more data sets corresponding to said commodity, and a server configured to transmit said trade data to said client computer.
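A minimal sketch of the core training step in the claims above, assuming (per claims 2 and 3) that the "one or more data sets" are historical spot prices: a tiny two-layer network trained to predict the next price from a sliding window of previous prices. The window size, network shape, synthetic data, and training scheme are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

window, n_hidden = 5, 8
prices = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 200))  # synthetic spot prices

# Data preparation (cf. claim 2: filtering/validation/sampling/sorting).
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
mu, sd = prices.mean(), prices.std()
X, y = (X - mu) / sd, (y - mu) / sd                    # standardize

W1 = rng.normal(scale=0.5, size=(window, n_hidden))
W2 = rng.normal(scale=0.5, size=n_hidden)
lr = 0.05
for _ in range(500):                                   # batch gradient descent
    h = np.tanh(X @ W1)
    err = h @ W2 - y                                   # prediction error
    W2 -= lr * h.T @ err / len(y)
    W1 -= lr * X.T @ (err[:, None] * W2 * (1 - h ** 2)) / len(y)

mse = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
```

Retraining per claims 5 and 6 would amount to rerunning this loop as new price data arrives, adjusting the determined price from the updated predictions.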
Please help me write a proper abstract based on the patent claims.
An information processing device for processing trade data for asset management and trading is disclosed. The information processing device includes one or more processors configured to determine the price of a commodity indicated by the trade data. The price is valid in an event the one or more processors receive a user input indicating authorization of a user to revoke a deal of the commodity within a predetermined time duration. The one or more processors are configured to determine the price by using a neural network, the neural network being trained by the one or more processors based on one or more data sets corresponding to the commodity.
1. A system for decoding output from spiking reservoirs, the system comprising: one or more processors and a non-transitory memory having instructions encoded thereon such that when the instructions are executed, the one or more processors perform operations of: training a neural network having a spiking reservoir comprised of spiking neurons by using a set of training patterns; presenting each test pattern in a set of test patterns to the spiking reservoir; generating output spikes from the spiking reservoir via a set of readout neurons; measuring the output spikes, resulting in a plurality of measurements, and using the plurality of measurements to compute firing rate codes, each firing rate code corresponding to a test pattern in the set of test patterns P; and decoding performance of the neural network, using the firing rate codes, by computing a discriminability index (DI) to discriminate between test patterns in the set of test patterns P. 2. The system as set forth in claim 1, wherein the neural network exhibits continuous plasticity. 3. The system as set forth in claim 1, wherein the one or more processors further perform operations of: computing, for each test pattern p, firing rates f_i^p of a sink neuron i in the neural network as the total number of output spikes during a duration d; estimating a maximum firing rate f_max^p from the firing rates f_i^p of all sink neurons in the neural network for the test pattern p; and computing a firing rate code for each test pattern p using f_max^p and f_i^p. 4. The system as set forth in claim 3, wherein the DI is a product of a separability measure, ε, and a uniqueness measure, γ, wherein the separability measure is defined as a measure of a degree of separation of firing rate codes for the set of test patterns P, and wherein the uniqueness measure is defined as a number of unique firing rate codes produced by the neural network relative to a maximum possible number of unique firing rate codes. 5. 
The system as set forth in claim 4, wherein the separability measure is computed according to the following: ε = 1 - D_intra / D_inter, where D_intra is defined as an average pair-wise distance between firing rate codes computed from all possible unique pairs of firing rate codes generated by the neural network for the same test pattern, and D_inter is defined as an average pair-wise distance between firing rate codes computed from all possible unique pairs of firing rate codes generated by the neural network for the set of test patterns P. 6. The system as set forth in claim 5, wherein the uniqueness measure is computed according to the following: γ = #S / P, where #S represents the total number of unique firing rate codes for the set of test patterns P. 7. A computer-implemented method for decoding output from spiking reservoirs, comprising: an act of causing one or more processors to execute instructions stored on a non-transitory memory such that upon execution, the one or more processors perform operations of: training a neural network having a spiking reservoir comprised of spiking neurons by using a set of training patterns; presenting each test pattern in a set of test patterns to the spiking reservoir; generating output spikes from the spiking reservoir via a set of readout neurons; measuring the output spikes, resulting in a plurality of measurements, and using the plurality of measurements to compute firing rate codes, each firing rate code corresponding to a test pattern in the set of test patterns P; and decoding performance of the neural network, using the firing rate codes, by computing a discriminability index (DI) to discriminate between test patterns in the set of test patterns P. 8. The method as set forth in claim 7, wherein the neural network exhibits continuous plasticity. 9. 
The method as set forth in claim 7, wherein the one or more processors further perform operations of: computing, for each test pattern p, firing rates f_i^p of a sink neuron i in the neural network as the total number of output spikes during a duration d; estimating a maximum firing rate f_max^p from the firing rates f_i^p of all sink neurons in the neural network for the test pattern p; and computing a firing rate code for each test pattern p using f_max^p and f_i^p. 10. The method as set forth in claim 9, wherein the DI is a product of a separability measure, ε, and a uniqueness measure, γ, wherein the separability measure is defined as a measure of a degree of separation of firing rate codes for the set of test patterns P, and wherein the uniqueness measure is defined as a number of unique firing rate codes produced by the neural network relative to a maximum possible number of unique firing rate codes. 11. The method as set forth in claim 10, wherein the separability measure is computed according to the following: ε = 1 - D_intra / D_inter, where D_intra is defined as an average pair-wise distance between firing rate codes computed from all possible unique pairs of firing rate codes generated by the neural network for the same test pattern, and D_inter is defined as an average pair-wise distance between firing rate codes computed from all possible unique pairs of firing rate codes generated by the neural network for the set of test patterns P. 12. The method as set forth in claim 11, wherein the uniqueness measure is computed according to the following: γ = #S / P, where #S represents the total number of unique firing rate codes for the set of test patterns P. 13. 
A computer program product for decoding output from spiking reservoirs, the computer program product comprising: computer-readable instructions stored on a non-transitory computer-readable medium that are executable by a computer having one or more processors for causing the processor to perform operations of: training a neural network having a spiking reservoir comprised of spiking neurons by using a set of training patterns; presenting each test pattern in a set of test patterns to the spiking reservoir; generating output spikes from the spiking reservoir via a set of readout neurons; measuring the output spikes, resulting in a plurality of measurements, and using the plurality of measurements to compute firing rate codes, each firing rate code corresponding to a test pattern in the set of test patterns P; and decoding performance of the neural network, using the firing rate codes, by computing a discriminability index (DI) to discriminate between test patterns in the set of test patterns P. 14. The computer program product as set forth in claim 13, wherein the neural network exhibits continuous plasticity. 15. The computer program product as set forth in claim 13, further comprising instructions for causing the one or more processors to perform operations of: computing, for each test pattern p, firing rates f_i^p of a sink neuron i in the neural network as the total number of output spikes during a duration d; estimating a maximum firing rate f_max^p from the firing rates f_i^p of all sink neurons in the neural network for the test pattern p; and computing a firing rate code for each test pattern p using f_max^p and f_i^p. 16. 
The computer program product as set forth in claim 15, wherein the DI is a product of a separability measure, ε, and a uniqueness measure, γ, wherein the separability measure is defined as a measure of a degree of separation of firing rate codes for the set of test patterns P, and wherein the uniqueness measure is defined as a number of unique firing rate codes produced by the neural network relative to a maximum possible number of unique firing rate codes. 17. The computer program product as set forth in claim 16, wherein the separability measure is computed according to the following: ε = 1 - D_intra / D_inter, where D_intra is defined as an average pair-wise distance between firing rate codes computed from all possible unique pairs of firing rate codes generated by the neural network for the same test pattern, and D_inter is defined as an average pair-wise distance between firing rate codes computed from all possible unique pairs of firing rate codes generated by the neural network for the set of test patterns P. 18. The computer program product as set forth in claim 17, wherein the uniqueness measure is computed according to the following: γ = #S / P, where #S represents the total number of unique firing rate codes for the set of test patterns P. 19. The system as set forth in claim 1, wherein the set of test patterns P are input patterns from images obtained around a vehicle, and wherein the set of test patterns P are used to assist the vehicle in autonomous driving. 20. 
A system for decoding output from spiking reservoirs, the system comprising: one or more processors and a non-transitory memory having instructions encoded thereon such that when the instructions are executed, the one or more processors perform operations of: providing an input signal to a neural network, the neural network having a spiking reservoir comprised of spiking neurons trained by: presenting each test pattern in a set of test patterns to the spiking reservoir; generating output spikes from the spiking reservoir via a set of readout neurons; measuring the output spikes, resulting in a plurality of measurements, and using the plurality of measurements to compute firing rate codes, each firing rate code corresponding to a test pattern in a set of test patterns P; and determining performance of the neural network, using the firing rate codes, by computing a discriminability index (DI) to discriminate between test patterns in the set of test patterns P; obtaining a readout code from the neural network produced in response to the input signal; and identifying a component of the input signal based on the readout code.
Please help me write a proper abstract based on the patent claims.
Described is a system for decoding spiking reservoirs even when the spiking reservoir has continuous synaptic plasticity. The system uses a set of training patterns to train a neural network having a spiking reservoir comprised of spiking neurons. A test pattern duration d is estimated for a set of test patterns P, and each test pattern is presented to the spiking reservoir for a duration of d/P seconds. Output spikes from the spiking reservoir are generated via readout neurons. The output spikes are measured and the measurements are used to compute firing rate codes, each firing rate code corresponding to a test pattern in the set of test patterns P. The firing rate codes are used to decode performance of the neural network by computing a discriminability index (DI) to discriminate between test patterns in the set of test patterns P.
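The separability and uniqueness formulas in the claims above (ε = 1 - D_intra / D_inter, γ = #S / P, DI = ε · γ) can be sketched in Python. This is an illustrative reading, not the patented implementation: the `codes_by_pattern` layout, Euclidean distance between codes, and repeated presentations per test pattern are all assumptions for the example.

```python
from itertools import combinations

def discriminability_index(codes_by_pattern):
    """DI = separability * uniqueness for firing rate codes.

    codes_by_pattern: dict mapping each test pattern id to a list of
    firing rate codes (each code a tuple of per-neuron rates) produced
    over repeated presentations of that pattern.
    """
    def distance(a, b):
        # Euclidean distance between two firing rate codes
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    # D_intra: average pair-wise distance between codes for the SAME pattern
    intra = [distance(a, b)
             for codes in codes_by_pattern.values()
             for a, b in combinations(codes, 2)]
    # D_inter: average pair-wise distance over all codes for the set P
    all_codes = [c for codes in codes_by_pattern.values() for c in codes]
    inter = [distance(a, b) for a, b in combinations(all_codes, 2)]

    d_intra = sum(intra) / len(intra) if intra else 0.0
    d_inter = sum(inter) / len(inter)  # assumed nonzero for distinct patterns

    separability = 1.0 - d_intra / d_inter            # epsilon = 1 - D_intra / D_inter
    uniqueness = len(set(all_codes)) / len(codes_by_pattern)  # gamma = #S / P
    return separability * uniqueness
```

With two perfectly separable, perfectly repeatable patterns, D_intra is 0 and every pattern yields one unique code, so the DI reaches its ideal value of 1.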
1. A time-series prediction apparatus that predicts transition of time-series data on a matter, comprising: a relevance level calculation part that calculates a relevance level which is an index of strength of a causal relation between a plurality of matters including a prediction target matter, based on time-series data relevant to each of the matters and on time-series data relevant to the causal relation between the matters; and a transition prediction part that predicts transition of the time-series data relevant to the matter based on the relevance level. 2. The time-series prediction apparatus according to claim 1, wherein the relevance level calculation part calculates the relevance level based on collocation frequency of terms relevant to the respective matters in the time-series data relevant to the causal relation between the matters. 3. The time-series prediction apparatus according to claim 1, wherein based on time-series data relevant to a matter which is in a causal relation with the prediction target matter, the transition prediction part builds a plurality of prediction models for predicting the transition of the time-series data relevant to the prediction target matter, and the transition prediction part integrates prediction results of the respective prediction models while weighing each of the prediction models according to the relevance level. 4. The time-series prediction apparatus according to claim 1, wherein the time-series prediction apparatus generates a graph representing temporal transition of the time-series data. 5. The time-series prediction apparatus according to claim 4, wherein the time-series prediction apparatus generates a graph representing temporal transition of the relevance level. 6. 
The time-series prediction apparatus according to claim 1, wherein the time-series prediction apparatus extracts, from time-series data relevant to the causal relation between the matters, time-series data containing both of terms relevant to the respective matters, and generates information indicating appearance frequency of the terms included in the time-series data extracted. 7. The time-series prediction apparatus according to claim 1, further comprising a time-series data collection part that acquires, over the Internet, the time-series data relevant to each of the plurality of matters including the prediction target matter and the time-series data relevant to the causal relation between the matters. 8. A time-series prediction method executed using an information processing apparatus that predicts transition of time-series data on a matter, the method comprising the steps, performed by the information processing apparatus, of: calculating a relevance level which is an index of strength of a causal relation between a plurality of matters including a prediction target matter, based on time-series data relevant to each of the matters and on time-series data relevant to the causal relation between the matters; and predicting transition of the time-series data relevant to the matter based on the relevance level. 9. The time-series prediction method according to claim 8, further comprising the step, performed by the time-series prediction apparatus, of: calculating the relevance level based on collocation frequency of terms relevant to the respective matters in the time-series data relevant to the causal relation between the matters. 10. 
The time-series prediction method according to claim 8, further comprising the steps, performed by the time-series prediction apparatus, of: based on time-series data relevant to a matter which is in a causal relation with the prediction target matter, building a plurality of prediction models for predicting the transition of the time-series data relevant to the prediction target matter; and integrating prediction results of the respective prediction models while weighing each of the prediction models according to the relevance level. 11. The time-series prediction method according to claim 8, further comprising the step, performed by the time-series prediction apparatus, of: generating a graph representing temporal transition of the time-series data. 12. The time-series prediction method according to claim 11, further comprising the step, performed by the time-series prediction apparatus, of: generating a graph representing temporal transition of the relevance level. 13. The time-series prediction method according to claim 8, further comprising the step, performed by the time-series prediction apparatus, of: extracting, from time-series data relevant to the causal relation between the matters, time-series data containing both of terms relevant to the respective matters, and generating information indicating a frequency of appearance of the terms included in the time-series data extracted. 14. The time-series prediction method according to claim 8, further comprising the step, performed by the time-series prediction apparatus, of: acquiring, over the Internet, the time-series data relevant to each of the plurality of matters including the prediction target matter and the time-series data relevant to the causal relation between the matters.
Please help me write a proper abstract based on the patent claims.
A time-series prediction apparatus 10, which is an information processing apparatus that predicts transition of time-series data on a matter, calculates a relevance level which is an index of strength of a causal relation between a plurality of matters including a prediction target matter, based on time-series data relevant to each of the matters and on time-series data relevant to the causal relation between the matters, and predicts transition of the time-series data relevant to the matter based on the calculated relevance level. The time-series prediction apparatus 10 calculates the relevance level based on collocation frequency of terms relevant to the respective matters in the time-series data relevant to the causal relation between the matters. The time-series prediction apparatus 10 builds multiple prediction models for predicting the transition of the time-series data relevant to the prediction target matter based on time-series data relevant to a matter which is in a causal relation with the prediction target matter, and integrates prediction results of the respective prediction models while weighing each of the prediction models according to the relevance level.
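Claims 2 and 3 above describe two concrete steps: a relevance level computed from the collocation frequency of terms, and relevance-weighted integration of prediction models. A minimal Python sketch under stated assumptions — documents are plain strings, substring matching stands in for term extraction, and a weighted average stands in for model integration:

```python
def relevance_level(docs, term_a, term_b):
    """Relevance as collocation frequency: the fraction of documents
    about the causal relation that mention both matters' terms."""
    both = sum(1 for d in docs if term_a in d and term_b in d)
    return both / len(docs) if docs else 0.0

def integrate_predictions(predictions, relevances):
    """Integrate per-model predictions, weighing each model by its
    relevance level (assumed positive)."""
    total = sum(relevances)
    return sum(p * r for p, r in zip(predictions, relevances)) / total
```

For example, if two of three collected documents mention both "oil" and "demand", the relevance level is 2/3, and a model backed by that relation is weighted accordingly when prediction results are combined.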
1. An information processing device comprising: a learning unit that learns a relational expression between vibration strengths at different frequencies based on a time series of frequency characteristics of a vibration strength detected during a learning period by a vibration detector placed on a monitoring target; and an anomaly detection unit that learns a relational expression between vibration strengths at different frequencies based on a time series of frequency characteristics of a vibration strength detected during a new period by the vibration detector, and determines whether or not there is an anomaly in the monitoring target based on a relational expression related to a new frequency, which is different from the relational expression learned during the learning period. 2. The information processing device according to claim 1, wherein each of the learning unit and the anomaly detection unit learns a relational expression between vibration strengths at different resonance frequencies. 3. The information processing device according to claim 1, wherein the anomaly detection unit extracts the relational expression related to a new frequency which is higher than frequencies related to the relational expressions learned during the learning period, from the relational expressions learned during the new period, as the relational expression related to the new frequency. 4. The information processing device according to claim 1, wherein the anomaly detection unit determines whether or not there is an anomaly in the monitoring target based on the new relational expression in a case that the time series of the frequency characteristics of the vibration strength detected during the new period does not satisfy a relation represented by the relational expression learned during the learning period. 5. 
The information processing device according to claim 1, wherein the anomaly detection unit determines that there is an anomaly in the monitoring target in a case that the number of the new relational expressions is equal to or larger than a predetermined threshold value. 6. The information processing device according to claim 1, wherein the vibration detector detects vibration generated by raindrops falling on the monitoring target. 7. An anomaly detection method comprising: learning a relational expression between vibration strengths at different frequencies based on a time series of frequency characteristics of a vibration strength detected during a learning period by a vibration detector placed on a monitoring target; and learning a relational expression between vibration strengths at different frequencies based on a time series of frequency characteristics of a vibration strength detected during a new period by the vibration detector, and determining whether or not there is an anomaly in the monitoring target based on a relational expression related to a new frequency, which is different from the relational expression learned during the learning period. 8. The anomaly detection method according to claim 7, wherein, in the learning a relational expression, a relational expression between vibration strengths at different resonance frequencies is learned. 9. 
A non-transitory computer readable storage medium recording thereon a program, causing a computer to perform a method comprising: learning a relational expression between vibration strengths at different frequencies based on a time series of frequency characteristics of a vibration strength detected during a learning period by a vibration detector placed on a monitoring target; and learning a relational expression between vibration strengths at different frequencies based on a time series of frequency characteristics of a vibration strength detected during a new period by the vibration detector, and determining whether or not there is an anomaly in the monitoring target based on a relational expression related to a new frequency, which is different from the relational expression learned during the learning period. 10. The non-transitory computer readable storage medium recording thereon the program according to claim 9, wherein, in the learning a relational expression, a relational expression between vibration strengths at different resonance frequencies is learned.
Please help me write a proper abstract based on the patent claims.
A sign of a landslide disaster can be detected easily. A model learning unit (120) of an anomaly detection device (100) learns a relational expression between vibration strengths at frequencies based on a time series of frequency characteristics of a vibration strength detected during a learning period by a vibration sensor placed on a monitoring target. The anomaly detection unit (140) learns a relational expression between vibration strengths at frequencies based on a time series of frequency characteristics of a vibration strength detected during a new period by the vibration sensor. Then, the anomaly detection unit (140) determines whether or not there is an anomaly in the monitoring target based on a relational expression related to a new frequency, which is different from the relational expression learned during the learning period.
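A "relational expression between vibration strengths at different frequencies" can be read as a regression fitted during the learning period and checked against new-period data. A hedged sketch under that assumption — ordinary least squares relating two frequency bands, with a fixed residual tolerance standing in for the claimed anomaly criterion:

```python
def fit_relation(xs, ys):
    """Least-squares slope and intercept relating vibration strengths at
    two frequencies over the learning period (xs must not be constant)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def violates(relation, xs, ys, tol=1.0):
    """True if the new-period series no longer satisfies the learned
    relational expression within a tolerance `tol` (an assumed parameter)."""
    slope, intercept = relation
    return any(abs(y - (slope * x + intercept)) > tol for x, y in zip(xs, ys))
```

A monitoring loop would fit relations over the learning period, refit over each new period, and (per claim 5) flag an anomaly when the count of new relational expressions reaches a threshold.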
1. A non-transitory computer readable medium comprising program code executable by a processor for causing the processor to: receive a plurality of time series, each time series of the plurality of time series comprising a plurality of data points arranged in a sequential order over a period of time; filter the plurality of time series using a preset time duration to identify a subset of time series that have time durations that exceed the preset time duration, the preset time duration being a minimum time duration usable with a preselected forecasting process; and in response to identifying the subset of time series that exceeds the preset time duration: determine that a time series of the subset of time series does not include a time period with inactivity; determine that the time series exhibits a repetitive characteristic based on the time series comprising a pattern that repeats over a predetermined time period; determine that the time series comprises a magnitude spike with a value above a preset magnitude threshold; and in response to determining that the time series (i) lacks the time period with inactivity, (ii) exhibits the repetitive characteristic, and (iii) comprises the magnitude spike with the value above the preset magnitude threshold: generate a data set that includes the time series; and generate a predictive forecast from the data set using the preselected forecasting process, the predictive forecast indicating a progression of the time series over a future period of time. 2. 
The non-transitory computer readable medium of claim 1, wherein the preselected forecasting process comprises: determining the repetitive characteristic exhibited by the time series; generating an adjusted time series by removing the repetitive characteristic from the time series; determining, using the adjusted time series, an effect of one or more moving events that occur on different dates for two or more consecutive years on the adjusted time series; generating a residual time series by removing the effect of the one or more moving events from the adjusted time series; generating, using the residual time series, a base forecast that is independent of the repetitive characteristic and the effect of the one or more moving events; and generating the predictive forecast by including the repetitive characteristic and the effect of the one or more moving events into the base forecast. 3. The non-transitory computer readable medium of claim 1, further comprising program code executable by the processor for causing the processor to determine that the time series comprises the magnitude spike with the value above the preset magnitude threshold by: removing the repetitive characteristic from the time series to generate a base time series; determining one or more magnitude differences between the time series and the base time series; determining that the one or more magnitude differences exceed the preset magnitude threshold; and in response to determining that the one or more magnitude differences exceed the preset magnitude threshold, determining that the time series comprises the magnitude spike with the value above the preset magnitude threshold. 4. 
The non-transitory computer readable medium of claim 1, further comprising program code executable by the processor for causing the processor to generate the data set that includes the time series by: determining a time-series group for the time series from a plurality of time-series groups using a clustering method; and including the time series in the time-series group, the time-series group being the data set. 5. The non-transitory computer readable medium of claim 4, further comprising program code executable by the processor for causing the processor to: determine the time-series group for the time series from the plurality of time-series groups using the clustering method by: determining an attribute of the time series comprising a frequency of events in the time series, a timing of events in the time series, an average percentage of lift with respect to a base time series, or a maximum percentage of lift with respect to the base time series; using the attribute of the time series as input for the clustering method; and receiving the time-series group as output from the clustering method. 6. 
The non-transitory computer readable medium of claim 1, wherein the time series is a first time series, and further comprising program code executable by the processor for causing the processor to: determine that a time duration of a second time series of the plurality of time series is below the preset time duration usable with the preselected forecasting process; or determine that the second time series comprises the time period with the inactivity; or determine that the second time series does not exhibit the repetitive characteristic based on an absence of the event; or determine that the second time series does not comprise the magnitude spike with the value above the preset magnitude threshold; and in response to determining that (i) the time duration of the second time series is below the preset time duration, (ii) the second time series comprises the time period with the inactivity, (iii) the second time series does not exhibit the repetitive characteristic, or (iv) the second time series does not comprise the magnitude spike with the value above the preset magnitude threshold, flag the second time series as incompatible with the preselected forecasting process. 7. The non-transitory computer readable medium of claim 6, further comprising program code executable by the processor for causing the processor to: select another forecasting process for use with the second time series; and use the other forecasting process to generate another forecast from the second time series. 8. 
The non-transitory computer readable medium of claim 1, wherein the preset time duration usable with the preselected forecasting process is a first preset time duration, and further comprising program code executable by the processor for causing the processor to: prior to determining the time series exhibits the repetitive characteristic, determine that a time duration of the time series is above the first preset time duration and below a second preset time duration and, in response: aggregate the time series with another time series to generate an aggregate time series; and use the aggregate time series as the time series. 9. The non-transitory computer readable medium of claim 8, wherein the first preset time duration is one year and the second preset time duration is two years. 10. The non-transitory computer readable medium of claim 1, wherein the non-transitory computer readable medium comprises two or more computer readable media distributed among two or more worker nodes in a communications grid computing system, the two or more worker nodes being separate computing devices that are remote from one another. 11. 
A method comprising: receiving a plurality of time series, each time series of the plurality of time series comprising a plurality of data points arranged in a sequential order over a period of time; filtering the plurality of time series using a preset time duration to identify a subset of time series that have time durations that exceed the preset time duration, the preset time duration being a minimum time duration usable with a preselected forecasting process; and in response to identifying the subset of time series that exceeds the preset time duration: determining that a time series of the subset of time series does not include a time period with inactivity; determining that the time series exhibits a repetitive characteristic based on the time series comprising a pattern that repeats over a predetermined time period; determining that the time series comprises a magnitude spike with a value above a preset magnitude threshold; and in response to determining that the time series (i) lacks the time period with inactivity, (ii) exhibits the repetitive characteristic, and (iii) comprises the magnitude spike with the value above the preset magnitude threshold: generating a data set that includes the time series; and generating a predictive forecast from the data set using the preselected forecasting process, the predictive forecast indicating a progression of the time series over a future period of time. 12. 
The method of claim 11, wherein the preselected forecasting process comprises: determining the repetitive characteristic exhibited by the time series; generating an adjusted time series by removing the repetitive characteristic from the time series; determining, using the adjusted time series, an effect of one or more moving events that occur on different dates for two or more consecutive years on the adjusted time series; generating a residual time series by removing the effect of the one or more moving events from the adjusted time series; generating, using the residual time series, a base forecast that is independent of the repetitive characteristic and the effect of the one or more moving events; and generating the predictive forecast by including the repetitive characteristic and the effect of the one or more moving events into the base forecast. 13. The method of claim 11, further comprising determining that the time series comprises the magnitude spike with the value above the preset magnitude threshold by: removing the repetitive characteristic from the time series to generate a base time series; determining one or more magnitude differences between the time series and the base time series; determining that the one or more magnitude differences exceed the preset magnitude threshold; and in response to determining that the one or more magnitude differences exceed the preset magnitude threshold, determining that the time series comprises the magnitude spike with the value above the preset magnitude threshold. 14. The method of claim 11, further comprising generating the data set that includes the time series by: determining a time-series group for the time series from a plurality of time-series groups using a clustering method; and including the time series in the time-series group, the time-series group being the data set. 15. 
The method of claim 14, further comprising determining the time-series group for the time series from the plurality of time-series groups using the clustering method by: determining an attribute of the time series comprising a frequency of events in the time series, a timing of events in the time series, an average percentage of lift with respect to a base time series, or a maximum percentage of lift with respect to the base time series; using the attribute of the time series as input for the clustering method; and receiving the time-series group as output from the clustering method. 16. The method of claim 11, wherein the time series is a first time series, and further comprising: determining that a time duration of a second time series of the plurality of time series is below the preset time duration usable with the preselected forecasting process; or determining that the second time series comprises the time period with the inactivity; or determining that the second time series does not exhibit the repetitive characteristic based on an absence of the event; or determining that the second time series does not comprise the magnitude spike with the value above the preset magnitude threshold; and in response to determining that (i) the time duration of the second time series is below the preset time duration, (ii) the second time series comprises the time period with the inactivity, (iii) the second time series does not exhibit the repetitive characteristic, or (iv) the second time series does not comprise the magnitude spike with the value above the preset magnitude threshold, flagging the second time series as incompatible with the preselected forecasting process. 17. The method of claim 16, further comprising: selecting another forecasting process for use with the second time series; and using the other forecasting process to generate another forecast from the second time series. 18. 
The method of claim 11, wherein the preset time duration usable with the preselected forecasting process is a first preset time duration, and further comprising prior to determining the time series exhibits the repetitive characteristic, determining that a time duration of the time series is above the first preset time duration and below a second preset time duration and, in response: aggregating the time series with another time series to generate an aggregate time series; and using the aggregate time series as the time series. 19. The method of claim 18, wherein the first preset time duration is one year and the second preset time duration is two years. 20. The method of claim 11, wherein: generating the data set comprises a first worker node of a communications grid computing system receiving information from a second worker node of the communications grid computing system, generating the data set based on the information, and transmitting the data set to a third worker node of the communications grid computing system; and generating the predictive forecast comprises the third worker node of the communications grid computing system receiving the data set and generating the predictive forecast based on the data set. 21. 
A system comprising: a processing device; and a memory device in which instructions executable by the processing device are stored for causing the processing device to: receive a plurality of time series, each time series of the plurality of time series comprising a plurality of data points arranged in a sequential order over a period of time; filter the plurality of time series using a preset time duration to identify a subset of time series that have time durations that exceed the preset time duration, the preset time duration being a minimum length usable with a preselected forecasting process; and in response to identifying the subset of time series that exceeds the preset time duration: determine that a time series of the subset of time series does not include a time period with inactivity; determine that the time series exhibits a repetitive characteristic based on the time series comprising a pattern that repeats over a predetermined time period; determine that the time series comprises a magnitude spike with a value above a preset magnitude threshold; and in response to determining that the time series (i) lacks the time period with inactivity, (ii) exhibits the repetitive characteristic, and (iii) comprises the magnitude spike with the value above the preset magnitude threshold: generate a data set that includes the time series; and generate a predictive forecast from the data set using the preselected forecasting process, the predictive forecast indicating a progression of the time series over a future period of time. 22. 
The system of claim 21, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to generate the predictive forecast by: determining the repetitive characteristic exhibited by the time series; generating an adjusted time series by removing the repetitive characteristic from the time series; determining, using the adjusted time series, an effect of one or more moving events that occur on different dates for two or more consecutive years on the adjusted time series; generating a residual time series by removing the effect of the one or more moving events from the adjusted time series; generating, using the residual time series, a base forecast that is independent of the repetitive characteristic and the effect of the one or more moving events; and generating the predictive forecast by including the repetitive characteristic and the effect of the one or more moving events into the base forecast. 23. The system of claim 21, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to: determine that the time series comprises the magnitude spike with the value above the preset magnitude threshold by: removing the repetitive characteristic from the time series to generate a base time series; determining one or more magnitude differences between the time series and the base time series; determining that the one or more magnitude differences exceed the preset magnitude threshold; and in response to determining that the one or more magnitude differences exceed the preset magnitude threshold, determining that the time series comprises the magnitude spike with the value above the preset magnitude threshold. 24. 
The system of claim 21, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to generate the data set that includes the time series by: determining a time-series group for the time series from a plurality of time-series groups using a clustering method; and including the time series in the time-series group, the time-series group being the data set. 25. The system of claim 24, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to: determine the time-series group for the time series from the plurality of time-series groups using the clustering method by: determining an attribute of the time series comprising a frequency of events in the time series, a timing of events in the time series, an average percentage of lift with respect to a base time series, or a maximum percentage of lift with respect to the base time series; using the attribute of the time series as input for the clustering method; and receiving the time-series group as output from the clustering method. 26. 
The system of claim 21, wherein the time series is a first time series, and wherein the memory device further comprises instructions executable by the processing device for causing the processing device to: determine that a time duration of a second time series of the plurality of time series is below the preset time duration usable with the preselected forecasting process; or determine that the second time series comprises the time period with the inactivity; or determine that the second time series does not exhibit the repetitive characteristic based on an absence of the event; or determine that the second time series does not comprise the magnitude spike with the value above the preset magnitude threshold; and in response to determining that (i) the time duration of the second time series is below the preset time duration, (ii) the second time series comprises the time period with the inactivity, (iii) the second time series does not exhibit the repetitive characteristic, or (iv) the second time series does not comprise the magnitude spike with the value above the preset magnitude threshold, flag the second time series as incompatible with the preselected forecasting process. 27. The system of claim 26, wherein the memory device further comprises instructions executable by the processing device for causing the processing device to: select another forecasting process for use with the second time series; and use the other forecasting process to generate another forecast from the second time series. 28. 
The system of claim 21, wherein the preset time duration usable with the preselected forecasting process is a first preset time duration, and wherein the memory device further comprises instructions executable by the processing device for causing the processing device to: prior to determining the time series exhibits the repetitive characteristic, determine that a time duration of the time series is above the first preset time duration and below a second preset time duration and, in response: aggregate the time series with another time series to generate an aggregate time series; and use the aggregate time series as the time series. 29. The system of claim 28, wherein the first preset time duration is one year and the second preset time duration is two years. 30. The system of claim 21, further comprising a plurality of worker nodes in a communications grid computing system, wherein: a first worker node of the plurality of worker nodes is configured to generate the data set and transmit the data set to a second worker node of the plurality of worker nodes; and the second worker node of the plurality of worker nodes is configured to receive the data set and generate the predictive forecast based on the data set.
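The spike test recited in claims 13 and 23 (remove the repetitive characteristic to get a base series, difference the two, compare against a preset threshold) can be sketched as follows. The claims do not fix how the repetitive characteristic is removed; the per-phase seasonal mean used here is one simple assumed choice, and `period` stands in for the predetermined repetition period.

```python
def has_magnitude_spike(series, period, threshold):
    """Spike test per claims 13/23: remove the repetitive
    (seasonal) characteristic to form a base series, then compare
    the magnitude differences against a preset threshold."""
    n = len(series)
    # Base series: per-phase seasonal means (an assumed, simple way
    # to "remove the repetitive characteristic").
    seasonal = [0.0] * period
    counts = [0] * period
    for i, v in enumerate(series):
        seasonal[i % period] += v
        counts[i % period] += 1
    seasonal = [s / c for s, c in zip(seasonal, counts)]
    base = [seasonal[i % period] for i in range(n)]
    # Magnitude differences between the time series and the base
    # time series.
    diffs = [abs(v - b) for v, b in zip(series, base)]
    # The series "comprises the magnitude spike" if any difference
    # exceeds the preset magnitude threshold.
    return max(diffs) > threshold
```

A series with one outsized value relative to its repeating pattern passes the test; a purely periodic series does not.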
Please help me write a proper abstract based on the patent claims.
Data sets for a three-stage predictor can be automatically determined. For example, multiple time series can be filtered to identify a subset of time series that have time durations that exceed a preset time duration. Whether a time series of the subset of time series includes a time period with inactivity can be determined. Whether the time series exhibits a repetitive characteristic can be determined based on whether the time series has a pattern that repeats over a predetermined time period. Whether the time series includes a magnitude spike with a value above a preset magnitude threshold can be determined. If the time series (i) lacks the time period with inactivity, (ii) exhibits the repetitive characteristic, and (iii) has the magnitude spike with the value above the preset magnitude threshold, the time series can be included in a data set for use with the three-stage predictor.
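The preselected forecasting process of claims 12 and 22 decomposes the series in three stages: remove the repetitive characteristic, remove the moving-event effect, forecast the residual, then add both effects back. A minimal sketch follows; the flat-mean residual forecast and the caller-supplied `event_effect` function are assumptions standing in for whatever residual model and moving-event estimator an implementation would use.

```python
def three_stage_forecast(series, period, horizon, event_effect):
    """Sketch of the three-stage process in claims 12/22."""
    n = len(series)
    # Stage 1: estimate and remove the repetitive characteristic
    # (per-phase means, centered on the overall mean).
    seasonal = [sum(series[i::period]) / len(series[i::period])
                for i in range(period)]
    overall = sum(series) / n
    seasonal = [s - overall for s in seasonal]
    adjusted = [series[i] - seasonal[i % period] for i in range(n)]
    # Stage 2: remove the moving-event effect (supplied per index by
    # the caller, since event dates shift between years).
    residual = [adjusted[i] - event_effect(i) for i in range(n)]
    # Stage 3: base forecast from the residual series (a flat mean
    # forecast, standing in for any residual model).
    level = sum(residual) / n
    # Recompose: include the repetitive characteristic and the
    # moving-event effect in the base forecast.
    return [level + seasonal[(n + h) % period] + event_effect(n + h)
            for h in range(horizon)]
```

With no moving events, a perfectly periodic series is reproduced one period ahead.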
1. An electronic device, comprising: a sensing unit comprising at least one sensor and being configured to acquire sensing data; and a computing device configured to: extract sensor specific features from the sensing data; generate a motion activity vector, a voice activity vector, and a spatial environment vector as a function of the sensor specific features; process the motion activity vector, voice activity vector, and spatial environment vector so as to determine a base level context of the electronic device relative to its surroundings, the base level context having a plurality of aspects each based on at least one of the motion activity vector, voice activity vector, and spatial environment vector; wherein one aspect of the plurality of aspects of the base level context is a mode of locomotion of a user carrying the electronic device, and another aspect of the plurality of aspects of the base level context is a nature of biologically generated sounds within audible distance of the user or a nature of physical space around the user; and determine meta level context of the electronic device relative to its surroundings as a function of the base level context, wherein the meta level context comprises at least one inference made from at least two aspects of the plurality of aspects of the base level context. 2. The electronic device of claim 1, wherein each aspect of the base level context based on the motion activity vector is mutually exclusive of one another; wherein each aspect of the base level context based on the voice activity vector is mutually exclusive of one another; and wherein each aspect of the base level context based on the spatial environment vector is mutually exclusive of one another. 3. 
The electronic device of claim 1, wherein the mode of locomotion of the user carrying the electronic device is based upon the motion activity vector, the nature of biologically generated sounds within audible distance of the user is based on the voice activity vector, and the nature of physical space around the user is based upon the spatial environment vector. 4. The electronic device of claim 1, wherein the computing device is further configured to facilitate performance of at least one contextual function of the electronic device as a function of the meta level context of the electronic device. 5. The electronic device of claim 1, wherein the determined mode of locomotion of the user comprises one of the user being stationary, walking, going up stairs, going down stairs, jogging, cycling, climbing, using a wheelchair, and riding in or on a vehicle; wherein the determined nature of the biologically generated sounds comprises one of a telephone conversation engaged in by the user, a multiple party conversation engaged in by the user, the user speaking, another party speaking, background conversation occurring around the user, and an animal making sounds; and wherein the determined nature of the physical space around the user comprises an office environment, a home environment, a shopping mall environment, a street environment, a stadium environment, a restaurant environment, a bar environment, a beach environment, a nature environment, a temperature of the physical space, a barometric pressure of the physical space, and a humidity of the physical space. 6. 
The electronic device of claim 1, wherein the computing device is configured to process the motion activity vector, voice activity vector, and spatial environment vector by: generating a motion activity posteriorgram as a function of the motion activity vector, the motion activity posteriorgram representing a probability of each element of the motion activity vector as a function of time; generating a voice activity posteriorgram as a function of the voice activity vector, the voice activity posteriorgram representing a probability of each element of the voice activity vector as a function of time; and generating a spatial environment posteriorgram as a function of the spatial environment vector, the spatial environment posteriorgram representing a probability of each element of the spatial environment vector as a function of time. 7. The electronic device of claim 6, wherein a sum of each probability of the motion activity posteriorgram at any given time equals one; wherein a sum of each probability of the voice activity posteriorgram at any given time equals one; and wherein a sum of each probability of the spatial environment posteriorgram at any given time equals one. 8. The electronic device of claim 1, wherein the sensing unit consists essentially of one sensor. 9. The electronic device of claim 1, wherein the sensing unit comprises a plurality of sensors; and wherein the motion activity vector, voice activity vector, and spatial environment vector are generated as a function of a fusion of the sensor specific features. 10. The electronic device of claim 9, wherein the plurality of sensors comprise at least two of an accelerometer, pressure sensor, microphone, gyroscope, magnetometer, GPS unit, and barometer. 11. 
The electronic device of claim 1, further comprising a printed circuit board (PCB) having at least one conductive trace thereon; further comprising a system on chip (SoC) mounted on the PCB and electrically coupled to the at least one conductive trace; and wherein the computing device comprises a sensor chip mounted on the PCB in a spaced apart relation with the SoC and electrically coupled to the at least one conductive trace such that the sensor chip and SoC are electrically coupled; and wherein the sensor chip comprises a micro-electromechanical system (MEMS) sensing unit, and a control circuit configured to perform the extracting, generating, processing, and determining. 12. An electronic device, comprising: a computing device configured to: extract sensor specific features from sensing data; generate a motion activity vector, a voice activity vector, and a spatial environment vector as a function of the sensor specific features; process the motion activity vector, voice activity vector, and spatial environment vector so as to determine a base level context of the electronic device relative to its surroundings, the base level context having a plurality of aspects each based on at least one of the motion activity vector, voice activity vector, and spatial environment vector; wherein at least one aspect of the plurality of aspects of the base level context is one of: a mode of locomotion of the user carrying the electronic device, a nature of biologically generated sounds within audible distance of the user, or a nature of physical space around the user; and determine meta level context of the electronic device relative to its surroundings as a function of the base level context, wherein the meta level context comprises at least one inference made from at least two aspects of the plurality of aspects of the base level context. 13. 
The electronic device of claim 12, wherein each aspect of the base level context based on the motion activity vector is mutually exclusive of one another; wherein each aspect of the base level context based on the voice activity vector is mutually exclusive of one another; and wherein each aspect of the base level context based on the spatial environment vector is mutually exclusive of one another. 14. The electronic device of claim 12, wherein the computing device is configured to process the motion activity vector, voice activity vector, and spatial environment vector by: generating a motion activity posteriorgram as a function of the motion activity vector, the motion activity posteriorgram representing a probability of each element of the motion activity vector as a function of time; generating a voice activity posteriorgram as a function of the voice activity vector, the voice activity posteriorgram representing a probability of each element of the voice activity vector as a function of time; and generating a spatial environment posteriorgram as a function of the spatial environment vector, the spatial environment posteriorgram representing a probability of each element of the spatial environment vector as a function of time. 15. The electronic device of claim 14, wherein a sum of each probability of the motion activity posteriorgram at any given time equals one; wherein a sum of each probability of the voice activity posteriorgram at any given time equals one; and wherein a sum of each probability of the spatial environment posteriorgram at any given time equals one. 16. 
An electronic device, comprising: a printed circuit board (PCB) having at least one conductive trace thereon; a system on chip (SoC) mounted on the PCB and electrically coupled to the at least one conductive trace; and a sensor chip mounted on the PCB in a spaced apart relation with the SoC and electrically coupled to the at least one conductive trace such that the sensor chip and SoC are electrically coupled, and configured to acquire sensing data; wherein the sensor chip comprises: a micro-electromechanical system (MEMS) sensing unit; an embedded processing node configured to: preprocess the sensing data, extract sensor specific features from the sensing data, generate a motion activity posteriorgram, a voice activity posteriorgram, and a spatial environment posteriorgram as a function of the sensor specific features, process the motion activity posteriorgram, voice activity posteriorgram, and spatial environment posteriorgram so as to determine a base level context of the electronic device relative to its surroundings, the base level context having a plurality of aspects, wherein a first aspect of the plurality of aspects of the base level context is determined based upon the motion activity posteriorgram, a second aspect of the plurality of aspects of the base level context is determined based upon the voice activity posteriorgram, and a third aspect of the plurality of aspects of the base level context is determined based upon the spatial environment posteriorgram, and determine meta level context of the electronic device relative to its surroundings as a function of the base level context and at least one known pattern, wherein the meta level context comprises at least one inference made from at least two aspects of the plurality of aspects of the base level context. 17. 
The electronic device of claim 16, further comprising at least one additional sensor external to the MEMS sensing unit; wherein the SoC is configured to acquire additional data from the at least one additional sensor; wherein the embedded processing node is further configured to receive the additional data from the SoC and to also extract the sensor specific features from the additional data. 18. The electronic device of claim 16, wherein the embedded processing node is configured to generate the motion activity posteriorgram, voice activity posteriorgram, and spatial environment posteriorgram to represent a probability of each element of a motion activity vector, a voice activity vector, and a spatial environment vector as a function of time, respectively. 19. The electronic device of claim 16, wherein a sum of each probability of the motion activity posteriorgram at any given time equals one; wherein a sum of each probability of the voice activity posteriorgram at any given time equals one; and wherein a sum of each probability of the spatial environment posteriorgram at any given time equals one. 20. The electronic device of claim 16, wherein the sensor chip consists essentially of one MEMS sensing unit. 21. The electronic device of claim 16, wherein the sensor chip comprises a plurality of MEMS sensing units; and wherein the motion activity posteriorgram, voice activity posteriorgram, and spatial environment posteriorgram are generated as a function of a fusion of the sensor specific features. 22. 
A method of operating an electronic device, the method comprising: acquiring sensing data from a sensing unit; extracting sensor specific features from the sensing data, using a computing device; generating a motion activity vector, a voice activity vector, and a spatial environment vector as a function of the sensor specific features, using the computing device; processing the motion activity vector, voice activity vector, and spatial environment vector so as to determine a base level context of the electronic device relative to its surroundings, the base level context having a plurality of aspects based on the motion activity vector, voice activity vector, and spatial environment vector, using the computing device; wherein one aspect of the plurality of aspects of the base level context is a mode of locomotion of a user carrying the electronic device, and another aspect of the plurality of aspects of the base level context is a nature of biologically generated sounds within audible distance of the user or a nature of physical space around the user; and determining meta level context of the electronic device relative to its surroundings as a function of the base level context, wherein the meta level context comprises at least one inference made from at least two aspects of the plurality of aspects of the base level context, using the computing device. 23. 
The method of claim 22, wherein processing the motion activity vector, voice activity vector, and spatial environment vector comprises: generating a motion activity posteriorgram as a function of the motion activity vector, the motion activity posteriorgram representing a probability of each element of the motion activity vector as a function of time; generating a voice activity posteriorgram as a function of the voice activity vector, the voice activity posteriorgram representing a probability of each element of the voice activity vector as a function of time; and generating a spatial environment posteriorgram as a function of the spatial environment vector, the spatial environment posteriorgram representing a probability of each element of the spatial environment vector as a function of time.
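Claims 7, 15, and 19 require that each posteriorgram's probabilities sum to one at any given time. A softmax over per-frame class scores is one common way to satisfy that property (an assumption; the claims only require the sum-to-one constraint, not any particular normalization):

```python
import math

def posteriorgram(score_frames):
    """Convert per-frame class scores into a posteriorgram: for
    each time step, one probability per element, summing to one."""
    frames = []
    for scores in score_frames:
        m = max(scores)                      # for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        frames.append([e / total for e in exps])
    return frames
```

Each row of the result is a probability distribution over the vector's elements at that time step, as required of the motion activity, voice activity, and spatial environment posteriorgrams.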
Please help me write a proper abstract based on the patent claims.
An electronic device described herein includes a sensing unit having at least one sensor to acquire sensing data. An associated computing device extracts sensor specific features from the sensing data, and generates a motion activity vector, a voice activity vector, and a spatial environment vector as a function of the sensor specific features. The motion activity vector, voice activity vector, and spatial environment vector are processed to determine a base level context of the electronic device relative to its surroundings, with the base level context having aspects each based on the motion activity vector, voice activity vector, and spatial environment vector. Meta level context of the electronic device relative to its surroundings is determined as a function of the base level context, with the meta level context being at least one inference made from at least two aspects of the plurality of aspects of the base level context.
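The base-level/meta-level split in the claims can be sketched as: pick the most probable element of each posteriorgram to form the base level context's aspects, then infer a meta level context from two or more of those aspects. The specific labels and inference rules below are illustrative assumptions; the claims only require an inference over at least two aspects.

```python
def base_level_context(motion_pg, voice_pg, spatial_pg, t,
                       motion_labels, voice_labels, spatial_labels):
    """Most probable element of each posteriorgram at time t forms
    the base level context's three aspects."""
    def pick(pg, labels):
        return labels[max(range(len(labels)), key=lambda i: pg[t][i])]
    return (pick(motion_pg, motion_labels),
            pick(voice_pg, voice_labels),
            pick(spatial_pg, spatial_labels))

def meta_level_context(motion, voice, spatial):
    """Inference from at least two base-level aspects (rules are
    hypothetical examples, not taken from the claims)."""
    if motion == "stationary" and spatial == "office":
        return "working at a desk"
    if motion == "walking" and voice == "multi-party conversation":
        return "walking meeting"
    return "unknown"
```

For example, a "stationary" motion aspect combined with an "office" spatial aspect yields the meta-level inference "working at a desk".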
1. A method of training a neural network with back propagation, comprising: generating error events representing a gradient of a cost function for the neural network based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal; and updating the weights of the neural network based on the error events. 2. The method of claim 1, in which the weights of the neural network are updated based on a single error event. 3. The method of claim 1, in which the input events comprise signed spikes. 4. The method of claim 1, in which the input events include only positive spikes. 5. The method of claim 1, further comprising: receiving an input vector; and generating the input events corresponding to the input vector. 6. The method of claim 1, further comprising generating output events via the forward pass through the neural network, the output events generated at timings based on an occurrence of a predefined event. 7. The method of claim 1, in which the error events are generated based on a computed error and a mean squared error cost. 8. An apparatus for training a neural network with back propagation, comprising: a memory; and at least one processor coupled to the memory, the at least one processor configured: to generate error events representing a gradient of a cost function for the neural network based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal; and to update the weights of the neural network based on the error events. 9. The apparatus of claim 8, in which the at least one processor is further configured to update the weights of the neural network based on a single error event. 10. The apparatus of claim 8, in which the input events comprise signed spikes. 11. The apparatus of claim 8, in which the input events include only positive spikes. 12. 
The apparatus of claim 8, in which the at least one processor is further configured: to receive an input vector; and to generate the input events corresponding to the input vector. 13. The apparatus of claim 8, in which the at least one processor is further configured to process the input events via the forward pass through the neural network to generate output events at timings based on an occurrence of a predefined event. 14. The apparatus of claim 8, in which the at least one processor is further configured to generate the error events based on a computed error and a mean squared error cost. 15. An apparatus for training a neural network with back propagation, comprising: means for generating error events representing a gradient of a cost function for the neural network based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal; and means for updating the weights of the neural network based on the error events. 16. The apparatus of claim 15, in which the weights of the neural network are updated based on a single error event. 17. The apparatus of claim 15, in which the input events comprise signed spikes. 18. The apparatus of claim 15, in which the input events include only positive spikes. 19. The apparatus of claim 15, further comprising: means for receiving an input vector; and means for generating the input events corresponding to the input vector. 20. The apparatus of claim 15, further comprising means for generating output events via the forward pass through the neural network at timings based on an occurrence of a predefined event. 21. The apparatus of claim 15, in which the error events are generated based on a computed error and a mean squared error cost. 22. 
A non-transitory computer-readable medium having encoded thereon program code for training a neural network with back propagation, the program code being executed by a processor and comprising: program code to generate error events representing a gradient of a cost function for the neural network based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal; and program code to update the weights of the neural network based on the error events. 23. The non-transitory computer-readable medium of claim 22, further comprising program code to update the weights of the neural network based on a single error event. 24. The non-transitory computer-readable medium of claim 22, in which the input events comprise signed spikes. 25. The non-transitory computer-readable medium of claim 22, in which the input events include only positive spikes. 26. The non-transitory computer-readable medium of claim 22, further comprising: program code to receive an input vector; and program code to generate the input events corresponding to the input vector. 27. The non-transitory computer-readable medium of claim 22, in which the forward pass through the neural network generates an output event at timings based on an occurrence of a predefined event. 28. The non-transitory computer-readable medium of claim 22, in which the error events are generated based on a computed error and a mean squared error cost.
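The training loop of claims 1 and 7 (forward pass over input events, error event from the gradient of a mean-squared-error cost against the target signal, weight update per error event) can be sketched with a single linear unit standing in for the network. The one-unit network and the learning rate `lr` are assumptions; the claims cover networks generally.

```python
def train_step(weights, input_events, target, lr=0.1):
    """One event-driven step: forward pass, a single error event
    from the MSE-cost gradient, then a weight update based on that
    error event (claims 1, 2, and 7)."""
    # Forward pass resulting from the input events and the weights.
    output = sum(w * x for w, x in zip(weights, input_events))
    # Error event: gradient of the cost 0.5 * (output - target)^2
    # with respect to the output.
    error_event = output - target
    # Update the weights based on the (single) error event.
    return [w - lr * error_event * x
            for w, x in zip(weights, input_events)]
```

Repeated steps move the output toward the target signal, which is the behavior the claimed update is meant to achieve.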
Please help me write a proper abstract based on the patent claims.
A method of training a neural network with back propagation includes generating error events representing a gradient of a cost function for the neural network. The error events may be generated based on a forward pass through the neural network resulting from input events, weights of the neural network and events from a target signal. The method further includes updating the weights of the neural network based on the error events.
1. A method performed by a computing system, comprising: receiving, at a server, a request for a recommendation for a service from a first user, the request received as a first message from a mobile computing device associated with the first user via a messaging service provided by a wireless telecommunications network of the mobile computing device; analyzing, at the server, the first message to extract a first set of parameters to be used for performing a search for the service, the analyzing including: determining, by the server, whether the first set of parameters satisfy a criterion for performing the search, and responsive to a determination that the first set of parameters do not satisfy the criterion, exchanging a set of messages between the server and the mobile computing device to obtain the first set of parameters from the first user until the analyzing determines that the first set of parameters satisfy the criterion, the exchanging including: receiving an input from a second user associated with the server regarding at least one of the first set of parameters to be obtained from the first user, and sending, by the server, at least one of the set of messages to the mobile computing device eliciting a response from the first user for the at least one of the first set of parameters; performing, at the server, the search using the set of parameters to retrieve a plurality of recommendations for the service; generating, by the server, a second message including the plurality of recommendations; and sending, by the server, the second message to the mobile computing device via the messaging service. 2. 
The method of claim 1 further comprising: receiving, at the server, a response from the first user regarding a selection of a first recommendation from the plurality of the recommendations via a third message from the mobile computing device; and storing, by the server, at a storage system associated with the server, an indication that the first user accepted the first recommendation as part of user habit data. 3. The method of claim 2 further comprising: analyzing, by the server, the third message from the first user to determine if the third message includes a reason for the selection of the first recommendation; and responsive to a determination that the third message includes the reason for the selection of the first recommendation, storing the reason as part of the user habit data. 4. The method of claim 2 further comprising: analyzing, by the server, the third message from the first user to determine if the third message includes a reason for rejecting any of the plurality of recommendations; and responsive to a determination that the third message includes the reason for rejecting, storing the reason as part of the user habit data. 5. The method of claim 2, wherein performing the search using the set of parameters further includes: analyzing, by the server, the user habit data to determine one or more of the set of parameters to generate the plurality of recommendations that is customized to the first user. 6. The method of claim 2 further comprising: receiving, by the server, a request from the first user to perform a task associated with a service associated with the first recommendation, the request received via a fourth message from the mobile computing device; and performing, by the server, the task using an application programming interface (API) associated with the service associated with the first recommendation. 7. 
The method of claim 1 further comprising: receiving, at the server, a response from the first user regarding a rejection of a first recommendation from the plurality of the recommendations via a third message from the mobile computing device; and storing, by the server, at a storage system associated with the server, an indication that the first user rejected the first recommendation as part of user habit data. 8. The method of claim 7 further comprising: analyzing, by the server, the third message from the first user to determine a reason for the rejection; and performing, by the server, a revised search using one or more parameters determined based on the reason for rejection to retrieve a second plurality of recommendations, the second plurality of recommendations excluding a first set of recommendations that match the reason for rejection and including a second set of recommendations that overcome the reason for rejection. 9. The method of claim 1, wherein performing the search to retrieve the plurality of recommendations for the service further includes: determining, at the server, a second set of parameters to customize the plurality of recommendations to the first user, the second set of parameters determined based on at least one of a context of the request, user habits data, or user profile data, and performing, by the server, a second search based on the second set of parameters to generate a second plurality of recommendations that is customized to the first user. 10. The method of claim 9, wherein determining the second set of parameters includes receiving, at the server, one or more of the second set of parameters as input from the second user. 11. The method of claim 10, wherein the one or more of the second set of parameters received from the second user are determined by the second user based on user habits data generated by the server. 12. 
The method of claim 10, wherein the one or more of the second set of parameters received from the second user are determined by the second user based on the context of the request for the recommendation from the first user, the one or more parameters relating to the first user and/or an aspect of the service in the plurality of recommendations. 13. The method of claim 9, wherein the user habits data is generated by the server based on information regarding one or more recommendations accepted and/or rejected in the past by the first user and reasons for accepting and/or rejecting the one or more recommendations. 14. The method of claim 1, wherein the analyzing the first message to extract the first set of parameters includes analyzing the first message using an artificial intelligence technique. 15. The method of claim 1, wherein the messaging service provided by the wireless telecommunications network of the mobile computing device is a short messaging service, and wherein the first message is a text message. 16. The method of claim 1, wherein exchanging the set of messages between the server and the mobile computing device includes: sending, by the server, a first message of the set of messages to the first user requesting the first user to provide one or more parameters of the first set of parameters; receiving a response from the first user for the first message as a second message of the set of messages, the response including the one or more parameters; and sending, by the server, a third message of the set of messages forming a sequence, wherein a next message of the sequence sent to the first user is based on a response received from the first user for a previous message of the sequence. 17. The method of claim 1, wherein performing the search includes: performing, by the server, the search at a plurality of computing devices accessible via a communication network. 18. 
A computer-readable storage medium storing computer-readable instructions, comprising: instructions for receiving, at a server, a first set of messages from a mobile computing device associated with a first user, wherein at least one of the first set of messages contains a request for a recommendation for a service; instructions for sending, by the server, a second set of messages to the mobile computing device to elicit information from the first user that is to be used by the server in generating the recommendation, wherein at least some of the second set of messages are sent in response to at least some of the first set of messages received from the first user, wherein at least some of the first set of messages are responses to the at least some of the second set of messages, and include information requested by the at least some of the second set of messages; instructions for analyzing, at the server, the first set of messages using an artificial intelligence technique to determine a first set of parameters for generating the recommendation, the analyzing including: extracting at least some of the first set of parameters provided by the first user in the first set of messages, deriving at least some of the first set of parameters based on information provided in the first set of messages, and deriving at least some of the first set of parameters based on user habits data of the first user; instructions for performing, at the server, a search using the first set of parameters to retrieve a first recommendation for the service; and instructions for sending, by the server, the first recommendation to the first user as a first message to the mobile computing device. 19. 
The computer-readable storage medium of claim 18 further comprising: instructions for receiving a second message from the mobile computing device including a response from the first user regarding a rejection of the first recommendation; instructions for analyzing the second message to determine a reason for rejection; instructions for generating a “not-preferred” parameter based on the reason for rejection that is used to identify a set of recommendations that match the reason for rejection; instructions for performing a revised search using the “not-preferred” parameter to retrieve a plurality of recommendations, the plurality of recommendations excluding the set of recommendations that match the “not-preferred” parameter; and instructions for sending one of the plurality of recommendations to the first user as a third message to the mobile computing device. 20. The computer-readable storage medium of claim 19 further comprising: instructions for storing the rejection of the first recommendation and the reason for rejection as part of user habits data of the first user in a storage system associated with the server. 21. The computer-readable storage medium of claim 18, wherein the instructions for receiving the first set of messages from the mobile computing device includes instructions for receiving the first set of messages from the mobile computing device via a messaging service provided by a wireless telecommunications network of the mobile computing device. 22. The computer-readable storage medium of claim 21, wherein the instructions for receiving the first set of messages via the messaging service includes receiving the first set of messages via a short messaging service, and wherein the first set of messages are text messages. 23. 
The computer-readable storage medium of claim 18, wherein the instructions for receiving the first set of messages from the mobile computing device includes instructions for receiving the first set of messages from the mobile computing device via a social networking application executing at the mobile computing device. 24. The computer-readable storage medium of claim 18, wherein the instructions for deriving at least some of the first set of parameters based on information provided in the first set of messages includes: instructions for receiving an input from a second user associated with the server regarding a context of the request, and instructions for deriving the at least some of the first set of parameters based on the context. 25. A system, comprising: a first module to receive, at a server, a first set of messages from a mobile computing device associated with a first user, wherein at least one of the first set of messages contains a request for a recommendation for a service; a second module to send, by the server, a second set of messages to the mobile computing device to elicit information from the first user that is to be used by the server in generating the recommendation, wherein at least some of the first set of messages are responses to the at least some of the second set of messages, and include information requested by the at least some of the second set of messages; a third module to determine, at the server, a context of the request using an artificial intelligence technique, the context including explicit parameters and implicit parameters, which are used for performing a search to generate the recommendation, the third module further configured to determine the context by: extracting the explicit parameters from the first set of messages, the explicit parameters provided by the first user in the first set of messages, deriving at least some of the implicit parameters based on the information provided in the first set of messages and/or 
information received from a second user associated with the server, and deriving at least some of the implicit parameters based on user habits data of the first user; and a fourth module to perform, at the server, a search based on the context to retrieve a first recommendation for the service, wherein the second module is further configured to send the first recommendation to the first user as a first message to the mobile computing device. 26. The system of claim 25, wherein the second module is further configured to receive a second message from the mobile computing device including a rejection of the first recommendation, and wherein the fourth module is further configured to perform a revised search for retrieving a plurality of recommendations, the plurality of recommendations excluding a set of recommendations that match a reason for the rejection. 27. The system of claim 26 further comprising: a fifth module to store the rejection of the first recommendation and the reason for rejection as part of user habits data of the first user in a storage system associated with the server. 28. The system of claim 25, wherein the first module is configured to receive the first set of messages from the mobile computing device via a messaging service provided by a wireless telecommunications network of the mobile computing device. 29. The system of claim 28, wherein the first module is configured to receive the first set of messages via a short messaging service, and wherein the first set of messages are text messages.
Please help me write a proper abstract based on the patent claims.
Technology is directed to text message based concierge services (“the technology”). A user interacts with a concierge service (CS) via text messages to obtain a specific concierge service. For example, the user can send a text message to the CS, e.g., to a contact number provided by the CS, requesting for a recommendation of a restaurant, and the CS can respond by sending the recommendation as a text message. The CS determines a context of the request and generates recommendations that are personalized to the user and is relevant to the context. The CS can use various techniques, e.g., artificial intelligence, machine learning, natural language processing, to determine a context of the request and generate the recommendations accordingly. The CS can also receive additional information from a person associated with the CS, such as a concierge, to further customize or personalize the recommendations to the user.
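The core of the method claims is the elicitation loop: the server extracts parameters from the incoming message, and if they do not yet satisfy the search criterion, it keeps messaging the user for the missing ones. A minimal sketch of that loop follows; all names here (`REQUIRED`, `extract_parameters`, the `key: value` message format) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch of the claimed parameter-elicitation loop.
# Assumption: messages carry "key: value" pairs and the criterion is
# simply that all REQUIRED parameters are present.

REQUIRED = {"cuisine", "location", "time"}  # assumed search criterion

def extract_parameters(message: str) -> dict:
    """Naive stand-in for the AI-based message analysis in the claims."""
    params = {}
    for part in message.split(","):
        if ":" in part:
            key, value = part.split(":", 1)
            params[key.strip()] = value.strip()
    return params

def elicit_parameters(first_message: str, send, receive) -> dict:
    """Exchange messages until the REQUIRED parameters are all known."""
    params = extract_parameters(first_message)
    while not REQUIRED <= params.keys():          # criterion not yet satisfied
        missing = sorted(REQUIRED - params.keys())
        send(f"Please provide: {', '.join(missing)}")   # elicit a response
        params.update(extract_parameters(receive()))
    return params
```

In a real deployment `send`/`receive` would be backed by the SMS gateway of the wireless network; here they can be any callables, which also makes the loop easy to test.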
1. A computer system comprising a processor and a non-transitory computer-readable memory, the memory encoded with computer-executable instructions for: an intention inference engine which: receives, as an input, a description of a context; generates, as output, a set of weighted expressions, each weighted expression comprising a restriction over the description of the context and a confidence factor resulting from the combination of the user model and the query input; an intelligent ranking engine which: receives, as input, the weighted list of expressions generated by the intention inference engine; and generates, as output, a sorted list of resources matching the weighted list of expressions. 2. The computer system of claim 1 in which the intention inference engine generates the set of weighted expressions by: post-processing one or more historical signals; extracting at least one similarity pattern from the historical signals by solving sub-classification problems in which only a subset of the problem dimensions is used for the solution; defining a resulting set of expressions for each pattern. 3. The computer system of claim 1 further comprising a smart resource browser in which the smart resource browser displays the sorted list of resources matching the weighted list of expressions and receives a navigation action from the user, the navigation action comprising a mute action or an explore action. 4. The computer system of claim 3 in which, in response to receiving a mute action, the smart resource browser removes a resource indicated in the mute action from the sorted list of resources matching the weighted list of expressions and downscores components of the resource indicated in the mute action within the description of the context. 5. 
The computer system of claim 3 in which, in response to receiving an explore action, the smart resource browser starts a new search for the resource indicated in the explore action and upscores components of the resource indicated in the explore action within the description of the context. 6. The computer system of claim 1 further comprising a preemptive file retriever in which the preemptive file retriever, automatically or by manual user triggering, collects and processes contextual information, provides a description of a context using the contextual information to the intention inference engine, and displays a sorted list of resources from the intelligent ranking engine based on the description of the context provided by the preemptive file retriever. 7. The computer system of claim 1 in which the intelligent ranking engine generates a predictive daily digest, the predictive daily digest including resources relevant to a particular day. 8. The computer system of claim 1 further comprising an evolving intention-driven resource view in which resources are organized by inferring the intention of recurring actions that the user is doing and by automatically grouping resources that are relevant or that fulfill the intention. 9. A non-transitory computer-readable medium encoded with computer-executable instructions for: an intention inference engine which: receives, as an input, a description of a context; generates, as output, a set of weighted expressions, each weighted expression comprising a restriction over the description of the context and a confidence factor resulting from the combination of the user model and the query input; an intelligent ranking engine which: receives, as input, the weighted list of expressions generated by the intention inference engine; and generates, as output, a sorted list of resources matching the weighted list of expressions. 10. 
The non-transitory computer-readable medium of claim 9 in which the intention inference engine generates the set of weighted expressions by: post-processing one or more historical signals; extracting at least one similarity pattern from the historical signals by solving sub-classification problems in which only a subset of the problem dimensions is used for the solution; defining a resulting set of expressions for each pattern. 11. The non-transitory computer-readable medium of claim 9 further comprising a smart resource browser in which the smart resource browser displays the sorted list of resources matching the weighted list of expressions and receives a navigation action from the user, the navigation action comprising a mute action or an explore action. 12. The non-transitory computer-readable medium of claim 11 in which, in response to receiving a mute action, the smart resource browser removes a resource indicated in the mute action from the sorted list of resources matching the weighted list of expressions and downscores components of the resource indicated in the mute action within the description of the context. 13. The non-transitory computer-readable medium of claim 11 in which, in response to receiving an explore action, the smart resource browser starts a new search for the resource indicated in the explore action and upscores components of the resource indicated in the explore action within the description of the context. 14. The non-transitory computer-readable medium of claim 9 further comprising a preemptive file retriever in which the preemptive file retriever, automatically or by manual user triggering, collects and processes contextual information, provides a description of a context using the contextual information to the intention inference engine, and displays a sorted list of resources from the intelligent ranking engine based on the description of the context provided by the preemptive file retriever. 15. 
The non-transitory computer-readable medium of claim 9 in which the intelligent ranking engine generates a predictive daily digest, the predictive daily digest including resources relevant to a particular day. 16. The non-transitory computer-readable medium of claim 9 further comprising an evolving intention-driven resource view in which resources are organized by inferring the intention of recurring actions that the user is doing and by automatically grouping resources that are relevant or that fulfill the intention. 17. A computer-implemented machine-learning method comprising: receiving a description of a context; generating a set of weighted expressions, each weighted expression comprising a restriction over the description of the context and a confidence factor resulting from the combination of the user model and the query input; generating a sorted list of resources matching the weighted list of expressions. 18. The method of claim 17 in which the set of weighted expressions is generated by: post-processing one or more historical signals; extracting at least one similarity pattern from the historical signals by solving sub-classification problems in which only a subset of the problem dimensions is used for the solution; defining a resulting set of expressions for each pattern. 19. The method of claim 17 further comprising displaying the sorted list of resources matching the weighted list of expressions and receiving a navigation action from the user, the navigation action comprising a mute action or an explore action. 20. The method of claim 19 in which, in response to receiving a mute action, removing a resource indicated in the mute action from the sorted list of resources matching the weighted list of expressions and downscoring components of the resource indicated in the mute action within the description of the context. 21. 
The method of claim 19 in which, in response to receiving an explore action, starting a new search for the resource indicated in the explore action and upscoring components of the resource indicated in the explore action within the description of the context. 22. The method of claim 17 further comprising, automatically or by manual user triggering, collecting and processing contextual information and displaying a sorted list of resources based on the description of the context. 23. The method of claim 17 further comprising generating a predictive daily digest, the predictive daily digest including resources relevant to a particular day. 24. The method of claim 17 further comprising inferring the intention of recurring actions that the user is doing and automatically grouping resources that are relevant or that fulfill the intention. 25. (canceled) 26. (canceled) 27. (canceled)
Please help me write a proper abstract based on the patent claims.
A computer-implemented machine-learning method and system for searching for resources by predicting an intention and pushing resources directly to users based on the predicted intention. The method includes receiving a description of a context; generating a set of weighted expressions, each weighted expression comprising a restriction over the description of the context and a confidence factor resulting from the combination of the user model and the query input; and generating a sorted list of resources matching the weighted list of expressions. The system includes computer instructions for an intention inference engine and an intelligent ranking engine.
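The intelligent ranking engine of the claims sorts resources by how well they match the weighted expressions. One plausible reading, sketched below under the assumption that a weighted expression is a (predicate, confidence) pair over resource attributes, is to score each resource by the summed confidence of the expressions it satisfies; the predicate shape and scoring rule are assumptions, not taken from the patent.

```python
# Illustrative sketch of the claimed ranking step. A "weighted expression"
# is assumed to be a (predicate, confidence) pair, where the predicate is
# the restriction over the context and confidence is its weight.

def rank_resources(resources, weighted_expressions):
    """Sort resources by the summed confidence of expressions they satisfy."""
    def score(resource):
        return sum(conf for predicate, conf in weighted_expressions
                   if predicate(resource))
    return sorted(resources, key=score, reverse=True)
```

Because `sorted` is stable, resources with equal scores keep their original order, which gives the browser a deterministic list to display.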
1. A system comprising: a computing device configured to: receive health-related information from an individual; receive taste-related preference data from the individual; based on the health-related information, determine a suggested formulation for a supplement for the individual; based on the taste-related preference data, determine a suggested filler medium; receive individual alterations to each of the suggested formulation and suggested filler medium.
Please help me write a proper abstract based on the patent claims.
The one or more embodiments disclosed herein provide a method for automatically assembling multiple compounds into a single edible custom composition, in which each compound is individually customized to proportions formulated from a profile of an individual or group. The single custom mixture can contain a plurality of compounds including foods or flavors, nutritional additives, herbals, biologics, or pharmacologically active substances. Using the method and a related algorithm, the formulation of a custom mixture is suggested.
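The claim maps health information to a suggested formulation and taste preferences to a filler medium, then accepts user alterations. A toy sketch of that flow follows; the lookup tables, condition names, and compound amounts are entirely invented for illustration and carry no nutritional meaning.

```python
# Hypothetical sketch of the claimed suggestion-and-alteration flow.
# HEALTH_RULES and FILLERS are invented example tables, not from the patent.

HEALTH_RULES = {
    "low_iron": {"iron_mg": 18},
    "low_vitamin_d": {"vitamin_d_iu": 1000},
}
FILLERS = {"sweet": "fruit gummy", "neutral": "rice-flour capsule"}

def suggest(health_info, taste_pref):
    """Derive a formulation from health conditions and a filler from taste."""
    formulation = {}
    for condition in health_info:
        for compound, amount in HEALTH_RULES.get(condition, {}).items():
            formulation[compound] = formulation.get(compound, 0) + amount
    return formulation, FILLERS.get(taste_pref, "rice-flour capsule")

def apply_alterations(formulation, alterations):
    """Final claimed step: the individual may override the suggestion."""
    return {**formulation, **alterations}
```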
1. A method, comprising: receiving, for a plurality of computer systems, performance information and configuration information; grouping the plurality of computer systems into a plurality of clusters based at least in part on the performance information, wherein the plurality of clusters includes a first cluster and a second cluster; automatically identifying a system configuration associated with the first cluster from the configuration information; and automatically sending the system configuration associated with the first cluster to the second cluster. 2. The method as recited in claim 1, wherein the system configuration includes network topology. 3. The method as recited in claim 1, wherein the performance information includes one or more of the following: an amount of processing capacity utilized or an amount of storage capacity utilized. 4. The method as recited in claim 3, wherein: a lower amount of processing capacity utilized is associated with a healthier cluster; a lower amount of storage capacity utilized is associated with a healthier cluster; and the first cluster is a healthiest cluster in the plurality of clusters. 5. The method as recited in claim 4 further comprising generating a priority list associated with an order in which defect resolution is performed on one or more clusters, other than the first cluster, in the plurality of clusters. 6. The method as recited in claim 5, wherein generating the priority list is based at least in part on health, such that an unhealthier cluster has a higher priority in the priority list than a healthier cluster. 7. The method as recited in claim 5, wherein generating the priority list is based at least in part on a number of computer systems, such that a cluster with more computer systems has a higher priority in the priority list than a cluster with fewer computer systems. 8. 
A computer program product, the computer program product being embodied in a non-transitory computer readable storage medium and comprising computer instructions for: receiving, for a plurality of computer systems, performance information and configuration information; grouping the plurality of computer systems into a plurality of clusters based at least in part on the performance information, wherein the plurality of clusters includes a first cluster and a second cluster; automatically identifying a system configuration associated with the first cluster from the configuration information; and automatically sending the system configuration associated with the first cluster to the second cluster. 9. The computer program product as recited in claim 8, wherein the system configuration includes network topology. 10. The computer program product as recited in claim 8, wherein the performance information includes one or more of the following: an amount of processing capacity utilized or an amount of storage capacity utilized. 11. The computer program product as recited in claim 10, wherein: a lower amount of processing capacity utilized is associated with a healthier cluster; a lower amount of storage capacity utilized is associated with a healthier cluster; and the first cluster is a healthiest cluster in the plurality of clusters. 12. The computer program product as recited in claim 11 further comprising computer instructions for generating a priority list associated with an order in which defect resolution is performed on one or more clusters, other than the first cluster, in the plurality of clusters. 13. The computer program product as recited in claim 12, wherein generating the priority list is based at least in part on health, such that an unhealthier cluster has a higher priority in the priority list than a healthier cluster. 14. 
The computer program product as recited in claim 12, wherein generating the priority list is based at least in part on a number of computer systems, such that a cluster with more computer systems has a higher priority in the priority list than a cluster with fewer computer systems. 15. A system, comprising: a plurality of computer systems; and a central repository configured to: receive, for the plurality of computer systems, performance information and configuration information; group the plurality of computer systems into a plurality of clusters based at least in part on the performance information, wherein the plurality of clusters includes a first cluster and a second cluster; automatically identify a system configuration associated with the first cluster from the configuration information; and automatically send the system configuration associated with the first cluster to the second cluster. 16. The system as recited in claim 15, wherein the system configuration includes network topology. 17. The system as recited in claim 15, wherein the performance information includes one or more of the following: an amount of processing capacity utilized or an amount of storage capacity utilized. 18. The system as recited in claim 17, wherein: a lower amount of processing capacity utilized is associated with a healthier cluster; a lower amount of storage capacity utilized is associated with a healthier cluster; and the first cluster is a healthiest cluster in the plurality of clusters. 19. The system as recited in claim 18, wherein the central repository is further configured to generate a priority list associated with an order in which defect resolution is performed on one or more clusters, other than the first cluster, in the plurality of clusters. 20. The system as recited in claim 19, wherein generating the priority list is based at least in part on health, such that an unhealthier cluster has a higher priority in the priority list than a healthier cluster. 21. 
The system as recited in claim 19, wherein generating the priority list is based at least in part on a number of computer systems, such that a cluster with more computer systems has a higher priority in the priority list than a cluster with fewer computer systems.
Please help me write a proper abstract based on the patent claims.
Performance information and configuration information are received for a plurality of computer systems. The computer systems are grouped into a plurality of clusters based at least in part on the performance information, where the plurality of clusters includes a first cluster and a second cluster. A system configuration associated with the first cluster is automatically identified from the configuration information and is automatically sent to the second cluster.
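The claims combine three steps: group systems into clusters by performance, copy the healthiest cluster's configuration to the others, and order the remaining clusters for defect resolution (unhealthier and larger clusters first, per claims 6-7). A hedged sketch follows; the 25% utilization bucketing and the max-based health score are illustrative assumptions, not specified in the patent.

```python
# Hedged sketch of the claimed method. Each system is a dict with cpu_pct,
# storage_pct, and config fields (assumed shape, not from the patent).

from collections import defaultdict

def group_into_clusters(systems):
    """Bucket systems by coarse CPU/storage utilization (assumed 25% steps)."""
    clusters = defaultdict(list)
    for s in systems:
        key = (s["cpu_pct"] // 25, s["storage_pct"] // 25)
        clusters[key].append(s)
    return list(clusters.values())

def health(cluster):
    """Lower utilization means healthier (claims 4, 11, 18)."""
    return -max(s["cpu_pct"] + s["storage_pct"] for s in cluster)

def plan(systems):
    clusters = group_into_clusters(systems)
    healthiest = max(clusters, key=health)
    best_config = healthiest[0]["config"]      # identify its system configuration
    others = [c for c in clusters if c is not healthiest]
    for cluster in others:                     # send it to the other clusters
        for s in cluster:
            s["pending_config"] = best_config
    # priority list: unhealthiest first; larger clusters break ties
    priority = sorted(others, key=lambda c: (health(c), -len(c)))
    return best_config, priority
```

The tie-breaking key `(health(c), -len(c))` encodes both claimed priority rules in one sort: ascending health puts unhealthier clusters first, and the negated size puts bigger clusters ahead when health is equal.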
1. (canceled) 2. A computer-implemented method comprising: obtaining a plurality of training cases; and training a neural network having a plurality of layers on the plurality of training cases, each of the layers including one or more feature detectors, each of the feature detectors having a corresponding set of weights, wherein training the neural network on the plurality of training cases comprises, for a first training case of the plurality of training cases: determining a first set of one or more feature detectors to disable during processing of the first training case, disabling the first set of one or more feature detectors in accordance with the determining, processing the first training case using the neural network with the first set of one or more feature detectors disabled to generate a predicted output for the first training case, and enabling the first set of one or more feature detectors after processing the first training case using the neural network and prior to processing a second training case of the plurality of training cases using the neural network. 3. The method of claim 2, wherein training the neural network on the plurality of training cases further comprises: for the second training case: determining a second, different set of one or more feature detectors to disable during processing of the second training case, disabling the second set of one or more feature detectors in accordance with the determining, and processing the second training case using the neural network with the second set of one or more feature detectors disabled to generate a predicted output for the second training case. 4. 
The method of claim 2, wherein a subset of the feature detectors are associated with respective probabilities of being disabled during processing of each of the training cases, and wherein determining the first set of one or more feature detectors to disable during processing of the first training case comprises: determining whether to disable each of the feature detectors in the subset based on the respective probability associated with the feature detector. 5. The method of claim 4, wherein training the neural network further comprises: adjusting the weights of each of the feature detectors in the neural network to generate trained values for each weight in the set of weights corresponding to the feature detector. 6. The method of claim 5, further comprising: normalizing the trained weights for each of the feature detectors in the subset, wherein normalizing the trained weights comprises multiplying the trained values of the weights for each of the one or more feature detectors in the subset by a respective probability of the feature detector not being disabled during processing of each of the training cases. 7. The method of claim 4, wherein the subset includes feature detectors in a first layer of the plurality of layers and feature detectors in one or more second layers of the plurality of layers, wherein the feature detectors in the first layer are associated with a first probability and wherein the feature detectors in the one or more second layers are associated with a second, different probability. 8. The method of claim 7, wherein the first layer is an input layer of the neural network and the one or more second layers are hidden layers of the neural network. 9. The method of claim 7, wherein the first layer and the one or more second layers are hidden layers of the neural network. 10. 
The method of claim 2, wherein determining the first set of one or more feature detectors to disable during processing of the first training case comprises: determining to disable the same feature detectors that were disabled during processing of a preceding training case. 11. A system comprising one or more computers and one or more storage devices storing instructions that, when executed by the one or more computers, cause the one or more computers to perform operations comprising: obtaining a plurality of training cases; and training a neural network having a plurality of layers on the plurality of training cases, each of the layers including one or more feature detectors, each of the feature detectors having a corresponding set of weights, wherein training the neural network on the plurality of training cases comprises, for a first training case of the plurality of training cases: determining a first set of one or more feature detectors to disable during processing of the first training case, disabling the first set of one or more feature detectors in accordance with the determining, processing the first training case using the neural network with the first set of one or more feature detectors disabled to generate a predicted output for the first training case, and enabling the first set of one or more feature detectors after processing the first training case using the neural network and prior to processing a second training case of the plurality of training cases using the neural network. 12. 
The system of claim 11, wherein training the neural network on the plurality of training cases further comprises: for the second training case: determining a second, different set of one or more feature detectors to disable during processing of the second training case, disabling the second set of one or more feature detectors in accordance with the determining, and processing the second training case using the neural network with the second set of one or more feature detectors disabled to generate a predicted output for the second training case. 13. The system of claim 11, wherein a subset of the feature detectors are associated with respective probabilities of being disabled during processing of each of the training cases, and wherein determining the first set of one or more feature detectors to disable during processing of the first training case comprises: determining whether to disable each of the feature detectors in the subset based on the respective probability associated with the feature detector. 14. The system of claim 13, wherein training the neural network further comprises: adjusting the weights of each of the feature detectors in the neural network to generate trained values for each weight in the set of weights corresponding to the feature detector. 15. The system of claim 14, the operations further comprising: normalizing the trained weights for each of the feature detectors in the subset, wherein normalizing the trained weights comprises multiplying the trained values of the weights for each of the one or more feature detectors in the subset by a respective probability of the feature detector not being disabled during processing of each of the training cases. 16. 
The system of claim 13, wherein the subset includes feature detectors in a first layer of the plurality of layers and feature detectors in one or more second layers of the plurality of layers, wherein the feature detectors in the first layer are associated with a first probability and wherein the feature detectors in the one or more second layers are associated with a second, different probability. 17. The system of claim 16, wherein the first layer is an input layer of the neural network and the one or more second layers are hidden layers of the neural network. 18. The system of claim 16, wherein the first layer and the one or more second layers are hidden layers of the neural network. 19. The system of claim 11, wherein determining the first set of one or more feature detectors to disable during processing of the first training case comprises: determining to disable the same feature detectors that were disabled during processing of a preceding training case. 20. A non-transitory computer storage medium encoded with a computer program, the program comprising instructions that when executed by one or more computers cause the one or more computers to perform operations comprising: obtaining a plurality of training cases; and training a neural network having a plurality of layers on the plurality of training cases, each of the layers including one or more feature detectors, each of the feature detectors having a corresponding set of weights, wherein training the neural network on the plurality of training cases comprises, for a first training case of the plurality of training cases: determining a first set of one or more feature detectors to disable during processing of the first training case, disabling the first set of one or more feature detectors in accordance with the determining, processing the first training case using the neural network with the first set of one or more feature detectors disabled to generate a predicted output for the first training case, and 
enabling the first set of one or more feature detectors after processing the first training case using the neural network and prior to processing a second training case of the plurality of training cases using the neural network. 21. The computer storage medium of claim 20, wherein training the neural network on the plurality of training cases further comprises: for the second training case: determining a second, different set of one or more feature detectors to disable during processing of the second training case, disabling the second set of one or more feature detectors in accordance with the determining, and processing the second training case using the neural network with the second set of one or more feature detectors disabled to generate a predicted output for the second training case.
Please help me write a proper abstract based on the patent claims.
A system for training a neural network. A switch is linked to feature detectors in at least some of the layers of the neural network. For each training case, the switch randomly selectively disables each of the feature detectors in accordance with a preconfigured probability. The weights from each training case are then normalized for applying the neural network to test data.
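The per-training-case disable/enable cycle and the weight normalization described in the claims can be sketched in Python. This is a hypothetical illustration, not the patented implementation: `process` is a stub standing in for the actual forward/backward pass, and all names are invented for this sketch.

```python
import random

def train_with_random_disabling(training_cases, feature_detectors,
                                drop_prob=0.5, seed=0):
    """Sketch of the claimed loop: before each training case a fresh set of
    feature detectors is disabled, the case is processed with the remaining
    detectors, and all detectors are re-enabled before the next case."""
    rng = random.Random(seed)
    disabled_history = []
    for case in training_cases:
        # Determine, per detector, whether to disable it for this case only.
        disabled = {d for d in feature_detectors if rng.random() < drop_prob}
        disabled_history.append(disabled)
        active = [d for d in feature_detectors if d not in disabled]
        process(case, active)  # stub for the forward/backward pass
        # Detectors are implicitly re-enabled on the next iteration.
    return disabled_history

def process(case, active_detectors):
    pass  # placeholder; a real system would update weights here

def normalize_trained_weights(weights, drop_prob):
    """Per the claims: multiply each trained weight by the probability of the
    detector NOT being disabled, so test-time activations match training."""
    keep_prob = 1.0 - drop_prob
    return {d: w * keep_prob for d, w in weights.items()}
```

A different random subset is drawn for each training case, which is what distinguishes this scheme from permanently pruning detectors.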
1. A method for allocating memory in an artificial nervous system simulator implemented in hardware, comprising: determining memory resource requirements for one or more components of an artificial nervous system being simulated; and allocating portions of a shared memory pool to the components based on the determination. 2. The method of claim 1, wherein the allocating is performed when compiling the artificial nervous system being simulated. 3. The method of claim 1, wherein the allocating is performed dynamically as memory resource requirements change. 4. The method of claim 1, wherein at least a portion of the shared memory pool comprises a memory located on a separate chip than a processor of the artificial nervous system simulator. 5. The method of claim 1, wherein: the components comprise artificial neurons; and determining memory resource requirements comprises determining resources based on at least one of a state or type of the artificial neurons. 6. The method of claim 1, wherein the shared memory pool is implemented as a distributed architecture comprising memory banks, write clients, read clients and a router interfacing the memory banks with the write clients and the read clients. 7. The method of claim 1, wherein the allocating comprises varying an amount of the shared memory pool allocated to the components based on the determination. 8. An apparatus for allocating memory in an artificial nervous system simulator implemented in hardware, comprising a processing system configured to: determine memory resource requirements for one or more components of an artificial nervous system being simulated; and allocate portions of a shared memory pool to the components based on the determination. 9. The apparatus of claim 8, wherein the processing system is configured to perform the allocation when compiling the artificial nervous system being simulated. 10. 
The apparatus of claim 8, wherein the processing system is configured to perform the allocation dynamically as memory resource requirements change. 11. The apparatus of claim 8, wherein at least a portion of the shared memory pool comprises a memory located on a separate chip than a processor of the artificial nervous system simulator. 12. The apparatus of claim 8, wherein: the components comprise artificial neurons; and the processing system is configured to determine resources based on at least one of a state or type of the artificial neurons. 13. The apparatus of claim 8, wherein the shared memory pool is implemented as a distributed architecture comprising memory banks, write clients, read clients and a router interfacing the memory banks with the write clients and the read clients. 14. The apparatus of claim 8, wherein the processing system is also configured to vary an amount of the shared memory pool allocated to the components based on the determination. 15. An apparatus for allocating memory in an artificial nervous system simulator implemented in hardware, comprising: means for determining memory resource requirements for one or more components of an artificial nervous system being simulated; and means for allocating portions of a shared memory pool to the components based on the determination. 16. The apparatus of claim 15, wherein the allocating is performed when compiling the artificial nervous system being simulated. 17. The apparatus of claim 15, wherein the allocating is performed dynamically as memory resource requirements change. 18. The apparatus of claim 15, wherein at least a portion of the shared memory pool comprises a memory located on a separate chip than a processor of the artificial nervous system simulator. 19. 
The apparatus of claim 15, wherein: the components comprise artificial neurons; and the means for determining memory resource requirements comprises means for determining resources based on at least one of a state or type of the artificial neurons. 20. The apparatus of claim 15, wherein the shared memory pool is implemented as a distributed architecture comprising memory banks, write clients, read clients and a router interfacing the memory banks with the write clients and the read clients. 21. The apparatus of claim 15, wherein the allocating comprises varying an amount of the shared memory pool allocated to the components based on the determination. 22. A computer-readable medium having instructions executable by a computer stored thereon for allocating memory in an artificial nervous system simulator implemented in hardware, comprising: instructions for determining memory resource requirements for one or more components of an artificial nervous system being simulated; and instructions for allocating portions of a shared memory pool to the components based on the determination.
Please help me write a proper abstract based on the patent claims.
Aspects of the present disclosure provide methods and apparatus for allocating memory in an artificial nervous system simulator implemented in hardware. According to certain aspects, memory resource requirements for one or more components of an artificial nervous system being simulated may be determined and portions of a shared memory pool (which may include on-chip and/or off-chip RAM) may be allocated to the components based on the determination.
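A minimal sketch of the claimed allocation step, assuming a flat pool addressed by offset. The sizing rule in `neuron_requirement` (bytes per state variable keyed by neuron type) is an invented example of determining requirements "based on at least one of a state or type of the artificial neurons"; the claims do not specify it.

```python
def neuron_requirement(neuron_type, state_vars):
    """Hypothetical sizing rule: bytes per state variable by neuron type."""
    bytes_per_var = {"LIF": 4, "Izhikevich": 8}.get(neuron_type, 4)
    return bytes_per_var * state_vars

def allocate_shared_pool(components, pool_size):
    """Allocate contiguous portions of a shared memory pool to components of
    a simulated artificial nervous system. `components` maps a component
    name to its determined requirement; returns name -> (offset, length)."""
    allocations, offset = {}, 0
    for name, required in components.items():
        if offset + required > pool_size:
            raise MemoryError(f"pool exhausted allocating {name}")
        allocations[name] = (offset, required)
        offset += required
    return allocations
```

Dynamic reallocation (claim 3) would amount to re-running this with updated requirements as they change.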
1-20. (canceled) 21. A system comprising: one or more processors; and memory storing instructions that, when executed by the one or more processors, configure the one or more processors to perform operations comprising: receiving an input value; iteratively performing the following steps until a predetermined threshold is exceeded: applying a level of complexity of a machine learning model to the input value; determining whether a current level of complexity of the level of complexity is able to classify the input value; determining whether the predetermined threshold has been exceeded when the current level of complexity is not able to classify the input value; and increasing the level of complexity of the machine learning model when the current level of complexity is able to classify the input value. 22. The system of claim 21, wherein an amount of computing time used to classify the input value depends, at least in part, on a first level of complexity of the current level of complexity of the machine learning model. 23. The system of claim 21, the operations further comprising: if a first level of complexity of the level of complexity of the machine learning model is able to classify the input value, classifying the input value into one of two or more categories. 24. The system of claim 21, wherein applying the level of complexity of the machine learning model to the input value includes: applying at least one biased first level of complexity to the input value to generate at least one class label. 25. The system of claim 24, wherein applying at least one biased first level of complexity to the input value to generate at least one class label includes: applying a negatively biased first level of complexity to the input value to generate a first class label; and applying a positively biased first level of complexity to the input value to generate a second class label. 26. 
The system of claim 25, wherein determining whether the current level of complexity is able to classify the input value comprises: comparing the first class label to the second class label; and determining whether a consensus exists between the negatively biased first level of complexity and the positively biased first level of complexity based, at least in part, on the comparing. 27. The system of claim 25, the operations further comprising: adjusting one or both of i) the negatively biased first level of complexity and ii) the positively biased first level of complexity to modify a likelihood that the current level of complexity is able to classify the input value with a predetermined confidence level. 28. The system of claim 21, wherein the input value is based, at least in part, on collected information from one or more of the following: capturing an image, capturing an audio sample, or receiving a search query. 29. A computing device comprising: an input port to receive an input value having an initial level of complexity; a memory device storing a plurality of machine learning models, wherein abilities of the machine learning models to classify the input value are different from one another; and a processor to apply one or more of the plurality of the machine learning models based, at least in part, on the initial level of complexity of the input value, wherein the processor is configured to iteratively apply an increasing level of complexity of the machine learning models to the input value. 30. The computing device of claim 29, wherein the abilities of the machine learning models to classify the input value comprise: the abilities of the machine learning models to classify the input value into one of two or more categories. 31. 
The computing device of claim 29, wherein the configuration of the processor to iteratively apply the increasing level of complexity of the machine learning models to the input value includes the processor being configured to: iteratively perform the following steps until a predetermined threshold is exceeded: apply a level of complexity of a machine learning model to the input value; determine whether a current level of complexity of the level of complexity is able to classify the input value; determine whether the predetermined threshold has been exceeded when the current level of complexity is not able to classify the input value; and increase the level of complexity of the machine learning model when the current level of complexity is able to classify the input value. 32. The computing device of claim 31, wherein the configuration of the processor to apply the level of complexity of the machine learning model to the input value includes the processor being configured to: apply at least one biased first level of complexity to the input value to generate at least one class label. 33. The computing device of claim 32, wherein the configuration of the processor to apply at least one biased first level of complexity to the input value to generate at least one class label includes the processor being configured to: apply a negatively biased level of complexity to the input value to generate a first class label; and apply a positively biased level of complexity to the input value to generate a second class label. 34. The computing device of claim 33, wherein the processor is configured to: compare the first class label to the second class label; and determine whether a consensus exists between the negatively biased level of complexity and the positively biased level of complexity based, at least in part, on the comparing. 35. 
The computing device of claim 33, wherein the processor is configured to: adjust one or both of i) the negatively biased level of complexity and ii) the positively biased level of complexity to modify a likelihood that the level of complexity is able to classify the input value. 36. The computing device of claim 29, wherein the processor is configured to apply the plurality of the machine learning models on the input value sequentially in order of increasing ability of the machine learning models to classify the input value. 37. The computing device of claim 29, wherein a computing cost of classifying the input value is proportional to the initial level of complexity of the input value. 38. One or more computer-readable storage media of a client device storing computer-executable instructions that, when executed by one or more processors of the client device, configure the one or more processors to perform operations comprising: receiving an input value; iteratively performing the following steps until a predetermined threshold is exceeded: applying a level of complexity of a machine learning model to the input value; determining whether a current level of complexity of the level of complexity is able to classify the input value; determining whether the predetermined threshold has been exceeded when the current level of complexity is not able to classify the input value; and increasing the level of complexity of the machine learning model when the current level of complexity is able to classify the input value. 39. The one or more computer-readable storage media of claim 38, wherein applying the level of complexity of the machine learning model to the input value includes: applying at least one biased first level of complexity to the input value to generate at least one class label. 40. 
The one or more computer-readable storage media of claim 39, wherein applying at least one biased first level of complexity to the input value to generate at least one class label includes: applying a negatively biased first level of complexity to the input value to generate a first class label; and applying a positively biased first level of complexity to the input value to generate a second class label.
Please help me write a proper abstract based on the patent claims.
Scalable-effort machine learning may automatically and dynamically adjust the amount of computational effort applied to input data based on the complexity of the input data. This is in contrast to fixed-effort machine learning, which uses a one-size-fits-all approach to applying a single classifier algorithm to both simple data and complex data. Scalable-effort machine learning involves, among other things, classifiers that may be arranged as a series of multiple classifier stages having increasing complexity (and accuracy). A first classifier stage may involve relatively simple machine learning models able to classify data that is relatively simple. Subsequent classifier stages have increasingly complex machine learning models and are able to classify more complex data. Scalable-effort machine learning includes algorithms that can differentiate among data based on complexity of the data.
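The staged escalation with biased-pair consensus can be sketched as follows. This is a hypothetical reading of the claims, not the patented code: each stage is modeled as a (negatively biased, positively biased) pair of callables, and a stage "is able to classify" the input when the two variants agree.

```python
def classify_scalable_effort(x, stages, max_stages):
    """Apply classifier stages of increasing complexity until one stage's
    negatively and positively biased variants agree on a class label
    (consensus), or a predetermined threshold on the number of stages is
    exceeded. Returns (label, level); label is None if no stage sufficed."""
    for level, (neg_biased, pos_biased) in enumerate(stages):
        if level >= max_stages:  # predetermined threshold exceeded
            return None, level
        first_label = neg_biased(x)
        second_label = pos_biased(x)
        if first_label == second_label:  # consensus: this level classifies x
            return first_label, level
        # No consensus: escalate to the next, more complex level.
    return None, len(stages)
```

Simple inputs are resolved by the cheap early stages, so the computing time spent on an input scales with its difficulty, as the abstract describes.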
1. A method of scheduling events for industrial equipment using an autoregressive integrated moving average (ARIMA) model for predicting future operating hours based on a time series of past operating hours of the industrial equipment, comprising: defining a maximum possible value for each parameter p, d, q of an ARIMA(p,d,q) model, p defining a number of autoregressive terms to include in the ARIMA model, d defining a number of differencing operations to perform in the ARIMA model, and q defining a number of moving average terms to include in the ARIMA model, the maximum possible values identified as pr, dr, and qr; for all possible combinations of p from 0 to pr, d from 0 to dr, and q from 0 to qr, performing the following steps: determining a set of ARIMA coefficients associated with a training interval of the time series; predicting a set of N future operating hours based on the determined coefficients; computing at least one performance measure of the predicted set of N future operating hours with respect to actual time series data; and ranking all possible combinations of ARIMA(p,d,q) based on the computed performance measures; selecting a preferred set of (p, d, q) parameters based on the ranking; generating a predicted future time series of industrial operating hours using the selected ARIMA(p,d,q) model; and scheduling events based on the predicted future operating hours. 2. The method of claim 1, wherein the at least one performance measure is selected from the group consisting of: MAPE, SMAPE, MAE, and SSE. 3. The method of claim 1, wherein at least two different performance measures are computed for each possible ARIMA model. 4. The method of claim 1, wherein the ranking step creates a listing of all possible ARIMA models from the smallest value to the largest value of performance measure, for each computed performance measure. 5. The method of claim 1, wherein the ARIMA(p,d,q) model used in the analysis is the wavelet-ARIMA(p,d,q) model. 6. 
The method of claim 5, wherein the wavelet transform used in the wavelet-ARIMA(p,d,q) analysis is the Daubechies wavelet transform. 7. A computer program product comprising a non-transitory computer readable recording medium having recorded thereon a computer program comprising instructions for, when executed on a computer, instructing said computer to perform a method for scheduling events associated with industrial equipment, using an autoregressive integrated moving average (ARIMA) model for predicting future operating hours based on past operating hours of the industrial equipment, comprising: defining a maximum possible value for each parameter p, d, q of an ARIMA(p,d,q) model, p defining a number of autoregressive terms to include in the ARIMA model, d defining a number of differencing operations to perform in the ARIMA model, and q defining a number of moving average terms to include in the ARIMA model, the maximum possible values identified as pr, dr, and qr; for all possible combinations of p from 0 to pr, d from 0 to dr, and q from 0 to qr, performing the following steps: determining a set of ARIMA coefficients associated with a training interval of the time series; predicting a set of N future operating hours based on the determined coefficients; computing at least one performance measure of the predicted set of N future operating hours with respect to actual time series data; and ranking all possible combinations of ARIMA(p,d,q) based on the computed performance measures; selecting a preferred set of (p, d, q) parameters based on the ranking; generating a predicted future time series of industrial operating hours using the selected ARIMA(p,d,q) model; and scheduling events based on the predicted future operating hours. 8. The computer program product of claim 7, wherein the at least one performance measure is selected from the group consisting of: MAPE, SMAPE, MAE, and SSE. 9. 
The computer program product of claim 7, wherein at least two different performance measures are computed for each possible ARIMA model. 10. The computer program product of claim 7, wherein the ranking step creates a listing of all possible ARIMA models from the smallest value to the largest value of performance measure, for each computed performance measure. 11. The computer program product of claim 7, wherein the ARIMA(p,d,q) model used in the analysis is the wavelet-ARIMA(p,d,q) model. 12. The computer program product of claim 11, wherein the wavelet transform used in the wavelet-ARIMA(p,d,q) analysis is the Daubechies wavelet transform. 13. A method of scheduling gas turbine maintenance events using an autoregressive integrated moving average (ARIMA) model for predicting future gas turbine operating hours based on a time series of past operating hours of the gas turbine, comprising: defining a maximum possible value for each parameter p, d, q of an ARIMA(p,d,q) model, p defining a number of autoregressive terms to include in the ARIMA model, d defining a number of differencing operations to perform in the ARIMA model, and q defining a number of moving average terms to include in the ARIMA model, the maximum possible values identified as pr, dr, and qr; for all possible combinations of p from 0 to pr, d from 0 to dr, and q from 0 to qr, performing the following steps: determining a set of ARIMA coefficients associated with a training interval of the time series; predicting a set of N future operating hours based on the determined coefficients; computing at least one performance measure of the predicted set of N future operating hours with respect to actual time series data; and ranking all possible combinations of ARIMA(p,d,q) based on the computed performance measures; selecting a preferred set of (p, d, q) parameters based on the ranking; generating a predicted future time series of gas turbine operating hours using the selected ARIMA(p,d,q) model; and scheduling gas 
turbine maintenance events based on the predicted future time series of gas turbine operating hours. 14. The method as defined in claim 13 wherein the scheduled maintenance events comprise a set of disassembly maintenance events, each disassembly maintenance event to be scheduled after a predetermined number of operating hours. 15. The method as defined in claim 14 wherein the set of disassembly maintenance events includes a combustion inspection, a hot gas path inspection, and a major inspection. 16. The method as defined in claim 15 wherein the combustion inspection is scheduled more frequently than the hot gas path inspection, which is scheduled more frequently than the major inspection. 17. The method as defined in claim 16 wherein the combustion inspection is scheduled about every 8,000 operating hours of the gas turbine, the hot gas path inspection is scheduled every 16,000 operating hours, and the major inspection is scheduled every 32,000 operating hours. 18. The method of claim 13, wherein the at least one performance measure is selected from the group consisting of: MAPE, SMAPE, MAE, and SSE. 19. The method of claim 13, wherein at least two different performance measures are computed for each possible ARIMA model. 20. The method of claim 13, wherein the ARIMA(p,d,q) model used in the analysis is the wavelet-ARIMA(p,d,q) model. 21. The method of claim 20, wherein the wavelet transform used in the wavelet-ARIMA(p,d,q) analysis is the Daubechies wavelet transform.
Please help me write a proper abstract based on the patent claims.
A generalized autoregressive integrated moving average (ARIMA) model for use in predictive analytics of time series is based upon creating all possible ARIMA models (by knowing a priori the largest possible values of the p, d and q parameters forming the model), and utilizing the results of at least two different performance measures to ultimately choose the ARIMA(p,d,q) model that is most appropriate for the time series under study. The method of the present invention allows each parameter to range over all possible values, and then evaluates the complete universe of all possible ARIMA models based on these combinations of p, d and q to find the specific p, d and q parameters that yield the “best” (i.e., lowest value) performance measure results. This generalized ARIMA model is particularly useful in predicting future operating hours of power plants and scheduling maintenance events on the gas turbines at these plants.
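The enumerate-fit-score-rank loop over all (p, d, q) combinations can be sketched as below. This is an illustrative skeleton, not the patented code: `fit_and_forecast` is an assumed user-supplied hook (it could wrap, e.g., a statsmodels ARIMA fit) and only MAPE is shown of the several claimed performance measures.

```python
from itertools import product

def mape(actual, predicted):
    """Mean absolute percentage error between two equal-length series."""
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / len(actual)

def select_arima_order(series, pr, dr, qr, n_ahead, fit_and_forecast):
    """Enumerate every (p, d, q) with 0 <= p <= pr, 0 <= d <= dr,
    0 <= q <= qr, forecast n_ahead points from the training interval,
    score each candidate against the held-out actual data, and return
    all candidates ranked from lowest (best) to highest error."""
    train, holdout = series[:-n_ahead], series[-n_ahead:]
    scores = []
    for order in product(range(pr + 1), range(dr + 1), range(qr + 1)):
        forecast = fit_and_forecast(train, order, n_ahead)
        scores.append((mape(holdout, forecast), order))
    scores.sort()  # ranking step: smallest performance measure first
    return scores
```

The preferred (p, d, q) is then `select_arima_order(...)[0][1]`, and the full model is refit with that order to generate the future operating-hours series used for scheduling.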
1. A system for modeling incremental effect, the system comprising: a memory device; and a processing device operatively coupled to the memory device, wherein the processing device is configured to execute computer-readable program code to: split data for observations into development data and validation data; create a test group model from the development data based on test group observations that are subject to a treatment; create a control group model from the development data based on control group observations that are not subject to the treatment; create a shadow dependent variable for the development data, wherein the shadow dependent variable is dependent on the test group observations, the control group observations, and a measurement performance variable; score the development data by applying the test group model and the control group model to the development data; create cubic spline basis functions for the test group model and the control group model; standardize the shadow dependent variable and the cubic spline basis functions using the development data; create a design matrix of the standardized shadow dependent variable and the cubic spline basis functions; conduct a singular value decomposition on the design matrix; utilize a binary search algorithm to determine tuning parameters for a set of degrees of freedom from the singular value decomposition; calculate a parameter vector for each of the tuning parameters; create a scoring formula based on the standardized cubic spline basis functions and the parameter vector for each of the tuning parameters; calculate scores for each of the tuning parameters using the scoring formula and the validation data; calculate an incremental effect area index of the scores for the tuning parameter values using the validation data; identify a tuning parameter from the tuning parameters corresponding to a score from the scores that has the highest incremental effect area index; and wherein the tuning parameter with 
the score having the highest incremental effect area index is used to rank order an incremental effect of the treatment. 2. The system of claim 1, wherein the observations are further split into holding data that is used to determine the accuracy of the incremental effect model score. 3. The system of claim 1, wherein the shadow dependent variable is defined by the following equation: Z = (n/nt)·Y if the individual is in test, and Z = −(n/nc)·Y if the individual is in control; and wherein nt is a number of test group observations, nc is a number of control group observations, n is a total number of observations, and Y is the measurement performance variable. 4. The system of claim 1, wherein the cubic spline basis functions of the test group are U1 = P1, U2 = P1^2, U3 = P1^3, U4 = (P1 − a1)^3·1(P1 ≤ a1), U5 = (P1 − a2)^3·1(P1 ≤ a2), . . . , U(k+3) = (P1 − ak)^3·1(P1 ≤ ak); and the cubic spline basis functions of the control group are V1 = P2, V2 = P2^2, V3 = P2^3, V4 = (P2 − b1)^3·1(P2 ≤ b1), V5 = (P2 − b2)^3·1(P2 ≤ b2), . . . , V(k+3) = (P2 − bk)^3·1(P2 ≤ bk). 5. The system of claim 1, wherein standardizing the shadow dependent variable and the cubic spline basis functions using the development data comprises subtracting the variable's mean and dividing the difference by the variable's standard deviation, wherein the mean and the standard deviation are calculated from the development data. 6. The system of claim 1, wherein conducting the singular value decomposition for the design matrix (X) comprises using the formula X = Q1·D·Q2^T; and wherein Q1 and Q2 are n×(2k+6) and (2k+6)×(2k+6) orthogonal matrices, and D is a (2k+6)×(2k+6) diagonal matrix with diagonal entries d1 ≥ d2 ≥ . . . ≥ d(2k+6) ≥ 0 called the singular values of the matrix X. 7. 
The system of claim 1, wherein utilizing the binary search algorithm to determine the tuning parameters for the set of degrees of freedom from the singular value decomposition comprises: setting $\delta$ as an allowed estimation error; identifying the tuning parameter for each $df_j$; initializing the end points of the search interval by letting $x_1 = 0$ and $x_2 = u$; calculating $x = \frac{x_1 + x_2}{2}$ and $df = \sum_{i=1}^{2k+6} \frac{d_i^2}{d_i^2 + x}$; when $|df - df_j| \le \delta$, then $x$ is the value of the tuning parameter corresponding to $df_j$; when $|df - df_j| > \delta$, then updating the end points such that if $df < df_j$ then let $x_2 = x$, otherwise let $x_1 = x$, recalculating $x = \frac{x_1 + x_2}{2}$ and $df = \sum_{i=1}^{2k+6} \frac{d_i^2}{d_i^2 + x}$, and iterating until $|df - df_j| \le \delta$ is met. 8. The system of claim 1, wherein the parameter vector is calculated for each of the tuning parameters $\lambda_j$ using the following formula: $\hat{\beta}_{ridge}(\lambda_j) = Q_2 \, \mathrm{Diag}\!\left(\frac{d_1}{d_1^2 + \lambda_j}, \frac{d_2}{d_2^2 + \lambda_j}, \ldots, \frac{d_{2k+6}}{d_{2k+6}^2 + \lambda_j}\right) Q_1^T z^*$. 9. The system of claim 1, wherein the scoring formula is $S(\lambda_j) = (U_1^*, U_2^*, \ldots, U_{k+3}^*, V_1^*, V_2^*, \ldots, V_{k+3}^*)\, \hat{\beta}_{ridge}(\lambda_j)$. 10.
The system of claim 1, wherein calculating the incremental effect area index of the scores for the tuning parameter values using the validation data comprises: ranking the observations in the validation data based on the scores from low to high; determining an average response ($Y$) value for the test group and an average response ($Y$) value for the control group for increasing percentages of observations of the scores from lowest to highest; determining a cumulative incremental effect value that is equal to the difference between the average response ($Y$) value for the test group and the average response ($Y$) value for the control group for the increasing percentages of observations of the scores from lowest to highest; assuming the cumulative incremental effect value is $C(p)$ when the percentage of observations is $p$; and calculating the incremental effect area index using the formula: $1 - \frac{1}{C(1)}\left\{ \frac{p_1 + p_2}{2} C(p_1) + \sum_{i=2}^{s} \frac{p_{i+1} - p_{i-1}}{2} C(p_i) + \frac{p_s - p_{s-1}}{2} C(p_s) \right\}$. 11.
A computer program product for modeling incremental effect, the computer program product comprising at least one non-transitory computer-readable medium having computer-readable program code portions embodied therein, the computer-readable program code portions comprising: an executable portion configured to split data for observations into development data and validation data; an executable portion configured to create a test group model from the development data based on test group observations that are subject to a treatment; an executable portion configured to create a control group model from the development data based on control group observations that are not subject to the treatment; an executable portion configured to create a shadow dependent variable for the development data, wherein the shadow dependent variable is dependent on the test group observations, the control group observations, and a measurement performance variable; an executable portion configured to score the development data by applying the test group model and the control group model to the development data; an executable portion configured to create cubic spline basis functions for the test group model and the control group model; an executable portion configured to standardize the shadow dependent variable and the cubic spline basis functions using the development data; an executable portion configured to create a design matrix of the standardized shadow dependent variable and the cubic spline basis functions; an executable portion configured to conduct a singular value decomposition on the design matrix; an executable portion configured to utilize a binary search algorithm to determine tuning parameters for a set of degrees of freedom from the singular value decomposition; an executable portion configured to calculate a parameter vector for each of the tuning parameters; an executable portion configured to create a scoring formula based on the standardized cubic spline basis functions
and the parameter vector for each of the tuning parameters; an executable portion configured to calculate scores for each of the tuning parameters using the scoring formula and the validation data; an executable portion configured to calculate an incremental effect area index of the scores for the tuning parameter values using the validation data; an executable portion configured to identify a tuning parameter from the tuning parameters that has a highest incremental effect area index; and wherein the tuning parameter with the score having the highest incremental effect area index is used to rank order an incremental effect of the treatment. 12. The computer program product of claim 11, wherein the observations are further split into holding data that is used to determine the accuracy of the incremental effect model score. 13. The computer program product of claim 11, wherein the shadow dependent variable is defined by the following equation: $Z = \begin{cases} \frac{n}{n_t} Y & \text{if the individual is in test} \\ -\frac{n}{n_c} Y & \text{if the individual is in control} \end{cases}$; and wherein $n_t$ is a number of test group observations, $n_c$ is a number of control group observations, $n$ is a total number of observations, and $Y$ is the measurement performance variable. 14. The computer program product of claim 11, wherein the cubic spline basis functions of the test group are $U_1 = P_1$, $U_2 = P_1^2$, $U_3 = P_1^3$, $U_4 = (P_1 - a_1)^3 \cdot 1(P_1 \le a_1)$, $U_5 = (P_1 - a_2)^3 \cdot 1(P_1 \le a_2)$, \ldots, $U_{k+3} = (P_1 - a_k)^3 \cdot 1(P_1 \le a_k)$; and the cubic spline basis functions of the control group are $V_1 = P_2$, $V_2 = P_2^2$, $V_3 = P_2^3$, $V_4 = (P_2 - b_1)^3 \cdot 1(P_2 \le b_1)$, $V_5 = (P_2 - b_2)^3 \cdot 1(P_2 \le b_2)$, \ldots, $V_{k+3} = (P_2 - b_k)^3 \cdot 1(P_2 \le b_k)$. 15.
The computer program product of claim 11, wherein standardizing the shadow dependent variable and the cubic spline basis functions using the development data comprises subtracting the variable's mean and dividing the difference by the variable's standard deviation, wherein the mean and the standard deviation are calculated from the development data. 16. The computer program product of claim 11, wherein conducting the singular value decomposition for the design matrix ($X$) comprises using the formula $X = Q_1 D Q_2^T$; and wherein $Q_1$ and $Q_2$ are $n \times (2k+6)$ and $(2k+6) \times (2k+6)$ orthogonal matrices, and $D$ is a $(2k+6) \times (2k+6)$ diagonal matrix with diagonal entries $d_1 \ge d_2 \ge \cdots \ge d_{2k+6} \ge 0$, called the singular values of matrix $X$. 17. The computer program product of claim 11, wherein utilizing the binary search algorithm to determine the tuning parameters for the set of degrees of freedom from the singular value decomposition comprises: setting $\delta$ as an allowed estimation error; identifying the tuning parameter for each $df_j$; initializing the end points of the search interval by letting $x_1 = 0$ and $x_2 = u$; calculating $x = \frac{x_1 + x_2}{2}$ and $df = \sum_{i=1}^{2k+6} \frac{d_i^2}{d_i^2 + x}$; when $|df - df_j| \le \delta$, then $x$ is the value of the tuning parameter corresponding to $df_j$; when $|df - df_j| > \delta$, then updating the end points such that if $df < df_j$ then let $x_2 = x$, otherwise let $x_1 = x$, recalculating $x = \frac{x_1 + x_2}{2}$ and $df = \sum_{i=1}^{2k+6} \frac{d_i^2}{d_i^2 + x}$, and iterating until $|df - df_j| \le \delta$ is met. 18. The computer program product of claim 11, wherein the parameter vector is calculated for each of the tuning parameters $\lambda_j$ using the following formula: $\hat{\beta}_{ridge}(\lambda_j) = Q_2 \, \mathrm{Diag}\!\left(\frac{d_1}{d_1^2 + \lambda_j}, \frac{d_2}{d_2^2 + \lambda_j}, \ldots, \frac{d_{2k+6}}{d_{2k+6}^2 + \lambda_j}\right) Q_1^T z^*$. 19. The computer program product of claim 11, wherein the scoring formula is $S(\lambda_j) = (U_1^*, U_2^*, \ldots, U_{k+3}^*, V_1^*, V_2^*, \ldots, V_{k+3}^*)\, \hat{\beta}_{ridge}(\lambda_j)$. 20.
The computer program product of claim 11, wherein calculating the incremental effect area index of the scores for the tuning parameter values using the validation data comprises: ranking the observations in the validation data based on the scores from low to high; determining an average response ($Y$) value for the test group and an average response ($Y$) value for the control group for increasing percentages of observations of the scores from lowest to highest; determining a cumulative incremental effect value that is equal to the difference between the average response ($Y$) value for the test group and the average response ($Y$) value for the control group for the increasing percentages of observations of the scores from lowest to highest; assuming the cumulative incremental effect value is $C(p)$ when the percentage of observations is $p$; and calculating the incremental effect area index using the formula: $1 - \frac{1}{C(1)}\left\{ \frac{p_1 + p_2}{2} C(p_1) + \sum_{i=2}^{s} \frac{p_{i+1} - p_{i-1}}{2} C(p_i) + \frac{p_s - p_{s-1}}{2} C(p_s) \right\}$. 21.
A method for modeling incremental effect, the method comprising: splitting, by a processor, data for observations into development data and validation data; creating, by a processor, a test group model from the development data based on test group observations that are subject to a treatment; creating, by a processor, a control group model from the development data based on control group observations that are not subject to the treatment; creating, by a processor, a shadow dependent variable for the development data, wherein the shadow dependent variable is dependent on the test group observations, the control group observations, and a measurement performance variable; scoring, by a processor, the development data by applying the test group model and the control group model to the development data; creating, by a processor, cubic spline basis functions for the test group model and the control group model; standardizing, by a processor, the shadow dependent variable and the cubic spline basis functions using the development data; creating, by a processor, a design matrix of the standardized shadow dependent variable and the cubic spline basis functions; conducting, by a processor, a singular value decomposition on the design matrix; utilizing, by a processor, a binary search algorithm to determine tuning parameters for a set of degrees of freedom from the singular value decomposition; calculating, by a processor, a parameter vector for each of the tuning parameters; creating, by a processor, a scoring formula based on the standardized cubic spline basis functions and the parameter vector for each of the tuning parameters; calculating, by a processor, scores for each of the tuning parameters using the scoring formula and the validation data; calculating, by a processor, an incremental effect area index of the scores for the tuning parameter values using the validation data; identifying, by a processor, a tuning parameter from the tuning parameters that has a highest
incremental effect area index; and wherein the tuning parameter with the score having the highest incremental effect area index is used to rank order an incremental effect of the treatment.
Please help me write a proper abstract based on the patent claims.
Embodiments of the invention are directed to systems, methods and computer program products for utilizing a shadow ridge rescaling technique to model incremental treatment effect at the individual level, based on randomized test and control data. A shadow dependent variable is introduced with its mathematical expectation being exactly the incremental effect. Ridge regression is utilized to regress the shadow dependent variable on a set of variables generated from the test model score and the control model score. A tuning parameter in the ridge regression is selected so that the score can best rank order the incremental effect of the treatment. The final score is a nonlinear function of the test model score plus a nonlinear function of the control model score, and outperforms the traditional differencing score method.
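The binary search in the claims exploits the fact that the effective degrees of freedom of a ridge fit, $df(\lambda) = \sum_i d_i^2/(d_i^2 + \lambda)$, is monotone decreasing in the penalty, so the penalty matching a target $df$ can be found by bisection. Below is a minimal Python sketch of that search; the singular values, the upper interval bound `upper`, and the tolerance `delta` are illustrative assumptions, not values from the patent.

```python
def ridge_df(x, singular_values):
    # Effective degrees of freedom of a ridge fit with penalty x:
    # df(x) = sum_i d_i^2 / (d_i^2 + x), monotone decreasing in x.
    return sum(d * d / (d * d + x) for d in singular_values)

def tune_lambda(df_target, singular_values, upper=1e6, delta=1e-6):
    # Binary search on [0, upper] for the penalty whose df matches df_target.
    x1, x2 = 0.0, upper
    while True:
        x = (x1 + x2) / 2.0
        df = ridge_df(x, singular_values)
        if abs(df - df_target) <= delta:
            return x
        if df < df_target:   # penalty too large -> shrink the upper end point
            x2 = x
        else:                # penalty too small -> raise the lower end point
            x1 = x

# Illustrative singular values (invented, not from the patent).
d = [5.0, 3.0, 1.0]
lam = tune_lambda(2.0, d)
```

Because $df(0)$ equals the number of singular values and $df(\lambda) \to 0$ as $\lambda$ grows, any target strictly between those extremes is bracketed by the initial interval and the bisection converges.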
1. A computer-implemented method, comprising: training a machine learning model using data identifying a modality for operating a computing device and data identifying first brain activity of a user of the computing device while the computing device is operating in the modality; receiving data identifying second brain activity of the user while operating the computing device; utilizing the machine learning model and the data identifying the second brain activity of the user to select one of a plurality of modalities for operating the computing device; and causing the computing device to operate in accordance with the selected modality. 2. The computer-implemented method of claim 1, further comprising exposing data identifying the selected one of the plurality of modalities by way of an application programming interface (API). 3. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a first virtual machine is executed on the computing device; and a second modality in which a second virtual machine is executed on the computing device. 4. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a first virtual desktop is displayed by the computing device; and a second modality in which a second virtual desktop is displayed by the computing device. 5. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which messages directed to the user received at the computing device are suppressed; and a second modality in which messages directed to the user received at the computing device are not suppressed. 6. 
The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a first plurality of user interface windows are presented by the computing device; and a second modality in which a second plurality of user interface windows are presented by the computing device. 7. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a user interface element corresponding to a first application that can be selected to execute the first application on the computing device is emphasized; and a second modality in which a user interface element corresponding to a second application that can be selected to execute the second application on the computing device is emphasized. 8. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which a hardware component of the computing device is enabled; and a second modality in which the hardware component of the computing device is not enabled. 9. The computer-implemented method of claim 1, wherein the plurality of modalities comprise: a first modality in which an application executing on the computing device is presented in a full screen mode of operation; and a second modality in which the application executing on the computing device is not presented in the full screen mode of operation. 10. 
An apparatus, comprising: one or more processors; and at least one computer storage medium having computer executable instructions stored thereon which, when executed by the one or more processors, cause the apparatus to expose an application programming interface (API) for providing data identifying a modality for operating the apparatus, receive a request at the API, utilize a machine learning model to select one of a plurality of modalities for operating the apparatus, the one of the plurality of modalities for operating the apparatus being selected based, at least in part, upon data identifying brain activity of a user of the apparatus, and provide data identifying the selected one of the plurality of modalities for operating the apparatus responsive to the request. 11. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which a first virtual machine is executed by the one or more processors; and a second modality in which a second virtual machine is executed by the one or more processors. 12. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which a first virtual desktop is presented by the apparatus on a display device; and a second modality in which a second virtual desktop is presented by the apparatus on a display device. 13. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which messages directed to the user received at the apparatus are suppressed; and a second modality in which messages directed to the user received at the apparatus are not suppressed. 14. The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which a first plurality of user interface windows are presented by the apparatus on a display device; and a second modality in which a second plurality of user interface windows are presented by the apparatus on a display device. 15.
The apparatus of claim 10, wherein the plurality of modalities comprise: a first modality in which a user interface element corresponding to a first application that can be selected to execute the first application on the one or more processors is emphasized; and a second modality in which a user interface element corresponding to a second application that can be selected to execute the second application on the one or more processors is emphasized. 16. A computer storage medium having computer executable instructions stored thereon which, when executed by one or more processors, cause the processors to: expose an application programming interface (API) for providing data identifying a modality for operating a computing device; receive a request at the API; utilize a machine learning model to select one of a plurality of modalities for operating the computing device, the one of the plurality of modalities for operating the computing device being selected based, at least in part, upon data identifying brain activity of a user of the computing device; and provide data identifying the selected one of the plurality of modalities for operating the computing device responsive to the request. 17. The computer storage medium of claim 16, wherein the plurality of modalities comprise: a first modality in which an application executing on the computing device is presented in a full screen mode of operation; and a second modality in which the application executing on the computing device is not presented in the full screen mode of operation. 18. The computer storage medium of claim 16, wherein the plurality of modalities comprise: a first modality in which a hardware component of the computing device is enabled; and a second modality in which the hardware component of the computing device is not enabled. 19. 
The computer storage medium of claim 16, wherein the plurality of modalities comprise: a first modality in which a user interface element corresponding to a first application that can be selected to execute the first application on the computing device is emphasized; and a second modality in which a user interface element corresponding to a second application that can be selected to execute the second application on the computing device is emphasized. 20. The computer storage medium of claim 16, wherein the plurality of modalities comprise: a first modality in which a first plurality of user interface windows are presented by the computing device; and a second modality in which a second plurality of user interface windows are presented by the computing device.
Please help me write a proper abstract based on the patent claims.
Technologies are described herein for modifying the modality of a computing device based upon a user's brain activity. A machine learning classifier is trained using data that identifies a modality for operating a computing device and data identifying brain activity of a user of the computing device. Once trained, the machine learning classifier can select a mode of operation for the computing device based upon a user's current brain activity and, potentially, other biological data. The computing device can then be operated in accordance with the selected modality. An application programming interface can also expose an interface through which an operating system and application programs executing on the computing device can obtain data identifying the modality selected by the machine learning classifier. Through the use of this data, the operating system and application programs can modify their mode of operation to be most suitable for the user's current mental state.
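As a rough illustration of the pipeline in the abstract above (train a model on labeled brain-activity data, then select a modality for newly observed activity), here is a minimal nearest-centroid sketch in Python. The feature vectors, the modality names, and the helper functions are invented for illustration and are not part of the patent.

```python
# Training data: (brain-activity feature vector, modality in use at the time).
# All values and labels are invented for illustration.
training = [
    ((0.9, 0.1), "focus"),      # high attention signal -> suppress messages
    ((0.8, 0.2), "focus"),
    ((0.2, 0.9), "relaxed"),    # low attention signal -> normal notifications
    ((0.1, 0.8), "relaxed"),
]

def train_centroids(samples):
    # A deliberately simple "machine learning model": per-modality mean vectors.
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: tuple(v / counts[label] for v in vec)
            for label, vec in sums.items()}

def select_modality(model, features):
    # Pick the modality whose centroid is closest to the new brain activity.
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist2(model[label]))

model = train_centroids(training)
chosen = select_modality(model, (0.85, 0.15))
```

In the claimed system the `select_modality` result would be the data exposed through the API so that the operating system and applications can adapt their mode of operation.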
1. A multimodal data analyzer comprising instructions embodied in one or more non-transitory machine accessible storage media, the multimodal data analyzer configured to cause a computing system comprising one or more computing devices to: access a set of time-varying instances of multimodal data having at least two different modalities, each instance of the multimodal data having a temporal component; and algorithmically learn a feature representation of the temporal component of the multimodal data using a deep learning architecture. 2. The multimodal data analyzer of claim 1, configured to classify the set of multimodal data by applying a temporal discriminative model to the feature representation of the temporal component of the multimodal data. 3. The multimodal data analyzer of claim 1, configured to, using the deep learning architecture, identify short-term temporal features in the multimodal data. 4. The multimodal data analyzer of claim 1, wherein the multimodal data comprises recorded speech and the multimodal data analyzer is configured to identify an intra-utterance dynamic feature of the recorded speech. 5. The multimodal data analyzer of claim 1, configured to, using the deep learning architecture, identify a long-term temporal feature in the multimodal data. 6. The multimodal data analyzer of claim 1, wherein the multimodal data comprises recorded speech and the multimodal data analyzer is configured to identify an inter-utterance dynamic feature in the recorded speech. 7. The multimodal data analyzer of claim 1, wherein the multimodal data comprises audio and video, and the multimodal data analyzer is configured to (i) identify short-term dynamic features in the audio and video data and (ii) infer a long-term dynamic feature based on a combination of temporally-spaced audio and video short-term dynamic features. 8. 
The multimodal data analyzer of claim 1, wherein the deep learning architecture comprises a hybrid model having a generative component and a discriminative component, and wherein the multimodal data analyzer uses output of the generative component as input to the discriminative component. 9. The multimodal data analyzer of claim 1, wherein the multimodal data analyzer is configured to identify at least two different temporally-spaced events in the multimodal data and infer a correlation between the at least two different temporally-spaced multimodal events. 10. The multimodal data analyzer of claim 1, configured to algorithmically learn the feature representation of the temporal component of the multimodal data using an unsupervised machine learning technique. 11. The multimodal data analyzer of claim 1, configured to algorithmically infer missing data both within a modality and across modalities. 12. A method for classifying multimodal data, the multimodal data comprising data having at least two different modalities, the method comprising, with a computing system comprising one or more computing devices: accessing a set of time-varying instances of multimodal data, each instance of the multimodal data having a temporal component; and algorithmically classifying the set of time-varying instances of multimodal data using a discriminative temporal model, the discriminative temporal model trained using a feature representation generated by a deep temporal generative model based on the temporal component of the multimodal data. 13. The method of claim 12, comprising identifying, within each modality of the multimodal data, a plurality of short-term features having different time scales. 14. The method of claim 13, comprising, for each modality within the multimodal data, inferring a long-term dynamic feature based on the short-term dynamic features identified within the modality. 15.
The method of claim 13, comprising fusing short-term features across the different modalities of the multimodal data, and inferring a long-term dynamic feature based on the short-term features fused across the different modalities of the multimodal data. 16. A system for algorithmically recognizing a multimodal event in data, the system comprising: a data access module to access a set of time-varying instances of multimodal data, each instance of the multimodal data having a temporal component; a classifier module to classify different instances in the set of time-varying instances of multimodal data as indicative of different short-term events; and an event recognizer module to (i) recognize a longer-term multimodal event based on a plurality of multimodal short-term events identified by the classifier module and (ii) generate a semantic label for the recognized multimodal event. 17. The system of claim 16, wherein the classifier module is to apply a deep temporal generative model to the temporal component of the audio-visual data. 18. The system of claim 17, wherein the event recognizer module is to use a discriminative temporal model to recognize the longer-term multimodal event. 19. The system of claim 18, wherein the system is to train the discriminative temporal model using a feature representation generated by the deep temporal generative model. 20. The system of claim 16, wherein the event recognizer module is to recognize the longer-term multimodal event by correlating a plurality of different short-term multimodal events having different time scales.
Please help me write a proper abstract based on the patent claims.
Technologies for analyzing temporal components of multimodal data to detect short-term multimodal events, determine relationships between short-term multimodal events, and recognize long-term multimodal events, using a deep learning architecture, are disclosed.
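A toy, framework-free sketch of the fusion idea in the claims (detect short-term events per modality, fuse them across modalities, and label a longer-term multimodal event): the window size, threshold, activation traces, and event names below are invented for illustration, and a real system would use the deep temporal models the claims describe rather than a threshold.

```python
def short_term_events(stream, window=3, threshold=2.0):
    # Slide a window over one modality's time series and flag window start
    # times whose mean activation exceeds a threshold as short-term events.
    events = []
    for t in range(len(stream) - window + 1):
        if sum(stream[t:t + window]) / window > threshold:
            events.append(t)
    return events

def recognize_long_term_event(audio, video):
    # Fuse short-term events across modalities: if both modalities are
    # active at overlapping times, infer a longer-term multimodal event
    # and emit a semantic label for it.
    a = set(short_term_events(audio))
    v = set(short_term_events(video))
    return "multimodal_event" if a & v else "no_event"

# Illustrative audio/video activation traces (invented values).
audio = [0.0, 0.5, 3.0, 3.5, 3.0, 0.2, 0.1]
video = [0.1, 0.2, 2.8, 3.1, 2.9, 0.3, 0.0]
label = recognize_long_term_event(audio, video)
```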
1. A method of training a neural network model, comprising: determining a specificity of a plurality of filters after a predetermined number of training iterations; and training each filter of the plurality of filters based at least in part on the specificity. 2. The method of claim 1, further comprising determining whether to continue the training of each filter based at least in part on the specificity. 3. The method of claim 2, further comprising stopping training for a specific filter of the plurality of filters when the specificity of the specific filter is greater than a threshold. 4. The method of claim 2, further comprising stopping training of a specific filter when a change in the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 5. The method of claim 2, further comprising eliminating a specific filter from the neural network model when the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 6. The method of claim 5, further comprising continuing training of the neural network model after eliminating the specific filter. 7. The method of claim 1, in which the specificity is based at least in part on entropy, change from original values, variance weight values, difference from other filters, cross correlation with other filters, or a combination thereof. 8. The method of claim 1, in which the neural network model is trained while an error function is augmented with a pooled measure of the specificity of the plurality of filters. 9. The method of claim 1, further comprising determining a target complexity of the neural network model, based at least in part on memory specification, power specifications, or a combination thereof. 10. 
The method of claim 9, in which filters are selectively trained based at least in part on the determined target complexity, prioritizing filters to train based at least in part on the determined target complexity, or a combination thereof. 11. The method of claim 1, further comprising: prioritizing filters to apply to an input based at least in part on the specificity of each of the plurality of filters; and selecting a number of prioritized filters based at least in part on a target complexity of the neural network model. 12. The method of claim 11, in which the target complexity is based at least in part on memory specification, power specifications, or a combination thereof. 13. An apparatus for training a neural network model, comprising: a memory unit; and at least one processor coupled to the memory unit, the at least one processor configured: to determine a specificity of a plurality of filters after a predetermined number of training iterations; and to train each filter of the plurality of filters based at least in part on the specificity. 14. The apparatus of claim 13, in which the at least one processor is further configured to determine whether to continue the training of each filter based at least in part on the specificity. 15. The apparatus of claim 14, in which the at least one processor is further configured to stop training for a specific filter of the plurality of filters when the specificity of the specific filter is greater than a threshold. 16. The apparatus of claim 14, in which the at least one processor is further configured to stop training of a specific filter when a change in the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 17. 
The apparatus of claim 14, in which the at least one processor is further configured to eliminate a specific filter from the neural network model when the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 18. The apparatus of claim 17, in which the at least one processor is further configured to continue training of the neural network model after eliminating the specific filter. 19. The apparatus of claim 13, in which the specificity is based at least in part on entropy, change from original values, variance weight values, difference from other filters, cross correlation with other filters, or a combination thereof. 20. The apparatus of claim 13, in which the at least one processor is further configured to train the neural network model while augmenting an error function with a pooled measure of the specificity of the plurality of filters. 21. The apparatus of claim 13, in which the at least one processor is further configured to determine a target complexity of the neural network model, based at least in part on memory specification, power specifications, or a combination thereof. 22. The apparatus of claim 21, in which the at least one processor is further configured to selectively train filters based at least in part on the determined target complexity, prioritizing filters to train based at least in part on the determined target complexity, or a combination thereof. 23. The apparatus of claim 13, in which the at least one processor is further configured: to prioritize filters to apply to an input based at least in part on the specificity of each of the plurality of filters; and to select a number of prioritized filters based at least in part on a target complexity of the neural network model. 24. The apparatus of claim 23, in which the target complexity is based at least in part on memory specification, power specifications, or a combination thereof. 25. 
An apparatus for training a neural network model, comprising: means for determining a specificity of a plurality of filters after a predetermined number of training iterations; and means for training each filter of the plurality of filters based at least in part on the specificity. 26. The apparatus of claim 25, further comprising means for determining whether to continue the training of each filter based at least in part on the specificity. 27. The apparatus of claim 26, further comprising means for stopping training for a specific filter of the plurality of filters when the specificity of the specific filter is greater than a threshold. 28. The apparatus of claim 26, further comprising means for stopping training of a specific filter when a change in the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 29. The apparatus of claim 26, further comprising means for eliminating a specific filter from the neural network model when the specificity of the specific filter is less than a threshold after the predetermined number of training iterations. 30. The apparatus of claim 29, further comprising means for continuing training of the neural network model after eliminating the specific filter. 31. The apparatus of claim 25, in which the specificity is based at least in part on entropy, change from original values, variance weight values, difference from other filters, cross correlation with other filters, or a combination thereof. 32. A non-transitory computer-readable medium for training a neural network model, the computer-readable medium having program code recorded thereon, the program code being executed by a processor and comprising: program code to determine a specificity of a plurality of filters after a predetermined number of training iterations; and program code to train each filter of the plurality of filters based at least in part on the specificity.
Please help me write a proper abstract based on the patent claims.
A method of training a neural network model includes determining a specificity of multiple filters after a predetermined number of training iterations. The method also includes training each of the filters based on the specificity.
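The specificity-driven training loop described by these claims can be illustrated with a small sketch. Entropy is one of the specificity measures the claims recite; the exact entropy-based score and the two threshold values below are illustrative assumptions, not the patented method itself.

```python
import math

def filter_specificity(weights):
    # Specificity here is one minus the normalized entropy of the filter's
    # absolute weight distribution: a peaked filter scores near 1, a flat
    # (unspecific) filter scores near 0. This formula is an assumption;
    # the claims only name entropy as one possible basis.
    total = sum(abs(w) for w in weights)
    probs = [abs(w) / total for w in weights]
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return 1.0 - entropy / math.log(len(weights))

def training_decisions(filters, prune_below=0.05, freeze_above=0.9):
    # After a predetermined number of iterations, decide per filter whether
    # to keep training it, stop training it, or eliminate it from the model.
    # Both threshold values are hypothetical.
    decisions = {}
    for name, weights in filters.items():
        s = filter_specificity(weights)
        if s < prune_below:
            decisions[name] = "eliminate"
        elif s > freeze_above:
            decisions[name] = "stop"
        else:
            decisions[name] = "train"
    return decisions
```

Training would then continue only on the filters marked `"train"`, with eliminated filters removed from the model before the next block of iterations.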
1. A method of feature extraction, comprising: determining a reference model for feature extraction; fine-tuning the reference model for a plurality of different tasks; and storing a set of weight differences calculated during the fine-tuning, each set corresponding to a different task. 2. The method of claim 1, in which the reference model comprises a localization model. 3. The method of claim 1, in which the reference model comprises a feature learning model. 4. The method of claim 1, in which the storing comprises storing only non-zero weight differences. 5. The method of claim 1, in which the fine-tuning comprises applying a task specific classifier. 6. An apparatus for feature extraction, comprising: a memory; and at least one processor coupled to the memory, the at least one processor configured: to determine a reference model for feature extraction; to fine-tune the reference model for a plurality of different tasks; and to store a set of weight differences calculated during fine-tuning, each set corresponding to a different task. 7. The apparatus of claim 6, in which the reference model comprises a localization model. 8. The apparatus of claim 6, in which the reference model comprises a feature learning model. 9. The apparatus of claim 6, in which the at least one processor is further configured to store only non-zero weight differences. 10. The apparatus of claim 6, in which the at least one processor is further configured to apply a task specific classifier. 11. An apparatus for feature extraction, comprising: means for determining a reference model for feature extraction; means for fine-tuning the reference model for a plurality of different tasks; and means for storing a set of weight differences calculated during fine-tuning, each set corresponding to a different task. 12. The apparatus of claim 11, in which the reference model comprises a localization model. 13. 
The apparatus of claim 11, in which the reference model comprises a feature learning model. 14. The apparatus of claim 11, in which the means for storing stores only non-zero weight differences. 15. The apparatus of claim 11, further including means for applying a task specific classifier. 16. A non-transitory computer-readable medium having encoded thereon program code to be executed by a processor, the program code comprising: program code to determine a reference model for feature extraction; program code to fine-tune the reference model for a plurality of different tasks; and program code to store a set of weight differences calculated during fine-tuning, each set corresponding to a different task. 17. The computer-readable medium of claim 16, in which the reference model comprises a localization model. 18. The computer-readable medium of claim 16, in which the reference model comprises a feature learning model. 19. The computer-readable medium of claim 16, further comprising program code to store only non-zero weight differences. 20. The computer-readable medium of claim 16, further comprising program code to apply a task specific classifier.
Please help me write a proper abstract based on the patent claims.
Feature extraction includes determining a reference model for feature extraction and fine-tuning the reference model for different tasks. The method also includes storing a set of weight differences calculated during the fine-tuning. Each set may correspond to a different task.
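Storing only non-zero weight differences (claim 4) amounts to keeping a sparse per-task delta over a shared reference model. A minimal sketch, with flat weight lists standing in for real model parameters:

```python
def store_weight_deltas(reference, fine_tuned, atol=1e-8):
    # Keep only the entries that actually changed during fine-tuning,
    # keyed by parameter index: a sparse per-task delta.
    return {i: ft - ref
            for i, (ref, ft) in enumerate(zip(reference, fine_tuned))
            if abs(ft - ref) > atol}

def restore_task_model(reference, deltas):
    # Rebuild a task-specific model from the shared reference weights
    # plus the stored differences for that task.
    return [ref + deltas.get(i, 0.0) for i, ref in enumerate(reference)]
```

One reference model plus one such delta dictionary per task reproduces every fine-tuned model while storing far fewer values when fine-tuning touches only part of the network.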
1. A multi-objective semiconductor product capacity planning system, comprising: a data input module, accepting inputs of machine information, product information and order information, the machine information defining a plurality of production stations and a capacity limit of the plurality of production stations, the product information defining a plurality of product categories and a production cost of the plurality of product categories, the order information defining a demand quantity of order for a plurality of customer orders and a product price; a capacity planning module, receiving input data from the data input module, coordinating the demand quantity of order with the machine information and the product information to plan a satisfied number of order to satisfy the capacity limit, determining the satisfied number of order by a capacity allocation proportion of order to decide a capacity utilization rate of each of the orders and a satisfied priority of order to arrange a production sequence of each of the orders, combining the capacity allocation proportion of order and the satisfied priority of order as a resource allocation and transforming the resource allocation into a gene combination by a chromosome encoding method; and a computing module, receiving the gene combination from the capacity planning module, calculating the gene combination several times to generate a plurality of new candidate solutions by using a multi-objective genetic algorithm, sorting the plurality of new candidate solutions by using a plurality of planning objectives as evaluation criteria to generate a new gene combination, and repeating the calculation to form a candidate solution set until a stop condition is satisfied, transforming the candidate solution set into a plurality of suggestive plans and selecting one of the plurality of suggestive plans to arrange the production stations for manufacturing the product categories. 2. 
The multi-objective semiconductor product capacity planning system of claim 1, wherein, the plurality of planning objectives of the computing module comprise a financial index related to a revenue, a profit or a gross margin, or a production index related to a production quantity or a capacity utilization. 3. The multi-objective semiconductor product capacity planning system of claim 1, wherein, the plurality of planning objectives are a revenue maximization, a profit maximization and a gross margin maximization. 4. The multi-objective semiconductor product capacity planning system of claim 1, wherein, the computing module sorts out and generates the new gene combination by a Pareto front method. 5. The multi-objective semiconductor product capacity planning system of claim 1, further comprising a report module for presenting the plurality of suggestive plans. 6. A multi-objective semiconductor product capacity planning method, applicable to a multi-objective semiconductor product capacity planning system comprising a data input module, a capacity planning module and a computing module, the method comprising: receiving machine information from a production machine of each production station, and product information and order information input by the data input module; planning a satisfied number of order by the capacity planning module, deciding a capacity utilization rate of each of the orders as a capacity allocation proportion of order and arranging a production sequence of each of the orders as a satisfied priority of order, combining the capacity allocation proportion of order and the satisfied priority of order as a resource allocation to form a gene combination by a chromosome encoding method; using a multi-objective genetic algorithm for the evolution of the gene combination for generating a plurality of new candidate solutions by the computing module; using a plurality of planning objectives as the evaluation criteria to sort the plurality of new 
candidate solutions for generating a new gene combination by the computing module; repeating the calculation to form a candidate solution set by the computing module until a stop condition is satisfied; and transforming the candidate solution set into a plurality of suggestive plans and selecting one of the plurality of suggestive plans to arrange the production stations for manufacturing a product. 7. The method of claim 6, further comprising the following step: using a revenue, a profit, a gross margin, a production quantity or a capacity utilization as an index of the plurality of planning objectives. 8. The method of claim 6, further comprising the following step: serving a revenue maximization, a profit maximization and a gross margin maximization as the plurality of planning objectives through the planning module. 9. The method of claim 6, further comprising the following step: sorting by a Pareto front method and generating the new gene combination. 10. The method of claim 6, further comprising the following step: presenting the plurality of suggestive plans by using a report module.
Please help me write a proper abstract based on the patent claims.
Disclosed is a multi-objective semiconductor product capacity planning system and method thereof. The system comprises a data input module, a capacity planning module and a computing module. The machine information of the production stations, the product information and the order information are input by the data input module. According to the demand quantity of order, capacity information and product information, the capacity planning module plans a capacity allocation to determine the satisfied quantity of orders. The capacity allocation information is used to form a gene combination by a chromosome encoding method. The computing module calculates the gene combination several times to generate numerous candidate solutions by a multi-objective genetic algorithm. The numerous candidate solutions are sorted to generate a new gene combination, and the calculation is repeated to form a candidate solution set until a stop condition is satisfied. The candidate solution set is transformed into numerous suggestive plans as options.
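The Pareto front sorting recited in claim 4 keeps the candidate plans that no other plan beats on every objective. A minimal sketch with all objectives maximized; revenue, profit and gross margin are the objectives named in claim 3, and the scores below are made-up examples:

```python
def dominates(a, b):
    # Plan a dominates plan b if it is at least as good on every objective
    # and strictly better on at least one (all objectives maximized).
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(plans):
    # Non-dominated subset of candidate plans; each plan is a tuple of
    # objective scores, e.g. (revenue, profit, gross_margin).
    return [p for p in plans
            if not any(dominates(q, p) for q in plans if q is not p)]
```

In a full genetic algorithm, the surviving front would seed the next gene combination, and the loop would repeat until the stop condition is satisfied.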
1. A method for providing a platform for building a dynamic knowledgebase, the method comprising: a. importing a traditional knowledgebase from an entity, wherein the traditional knowledgebase is imported at regular intervals; b. receiving login credentials from a user, wherein the login credentials are received via at least a web portal or a mobile application; c. retrieving social knowledgebase from a social network associated with the user, wherein the social knowledgebase comprises at least one of one or more articles created by the user, comments on the one or more articles, wherein the social knowledgebase information is retrieved upon receiving an approval from the user; d. providing recommendations to the user, wherein the recommendations are based on articles that are more clickable in the traditional knowledgebase and the social knowledgebase; e. sending a notification to the user, wherein the notification comprises the latest happenings in at least the web portal or the mobile application; f. monitoring a list of top contributors and influencers of the social knowledgebase in the social network associated with the user; and g. sending the list of top contributors and influencers to a backend system, wherein the backend system provides targeted campaigns and call center services to the user. h. allowing users to submit articles for the growth of the social knowledgebase. 2. The method as claimed in claim 1, wherein the traditional knowledgebase comprises at least one of product catalogues, frequently asked questions, usage instructions, troubleshooting tips, customer reviews, visual tours, customer information, dealer information, videos and URLs. 3. The method as claimed in claim 1, wherein the login credentials are received upon the user inputting the login credentials into the web portal or the mobile application on a communication device associated with the user. 4. 
The method as claimed in claim 1, wherein the login credentials are fetched from a social network profile associated with the user, wherein the login credentials are fetched upon receiving approval from the user. 5. The method as claimed in claim 1, wherein the recommendations are at least one of text, image, video. 6. The method as claimed in claim 1, wherein the articles that are more clickable are based on one or more parameters. 7. The method as claimed in claim 6, wherein the one or more parameters are at least one of keywords, context, trends, (user's personal) social network reading preferences/feedback (View/soft/hard recommendation/comments). 8. The method as claimed in claim 1, wherein the notifications are sent to the user by at least one of the web portal, mobile application, E-mail, message on the social network associated with the user. 9. A system for providing a platform for building a dynamic knowledgebase, the system comprising: a. an import module, wherein the import module is configured to import a traditional knowledgebase from an entity, wherein the import module imports the traditional knowledgebase at regular intervals; b. a receiving module, wherein the receiving module is configured to receive login credentials from a user, wherein the receiving module receives the login credentials via a web portal or mobile application; c. a retrieval module, wherein the retrieval module is configured to retrieve a social knowledgebase from a social network associated with the user, wherein the social knowledgebase comprises at least one of one or more articles created by the user, comments on the one or more articles, wherein the social knowledge is retrieved upon receiving an approval from the entity; d. 
a recommendation module, wherein the recommendation module is configured to provide recommendations to the user, wherein the recommendations are based on articles that are more clickable in the traditional knowledgebase and the social knowledgebase; e. a notification module, wherein the notification module is configured to send a notification to the user, wherein the notification comprises the latest happenings in the web portal or mobile application; f. a monitoring module, wherein the monitoring module is configured to monitor a list of top contributors and influencers of the social knowledgebase in the social network associated with the user; and g. a transmission module, wherein the transmission module is configured to send the list of top contributors and influencers to a backend system, wherein the backend system provides targeted campaigns and call center services to the user. h. a submission module, wherein the submission module is configured to allow the user to write articles and contribute to the growth of the social knowledgebase. 10. The system as claimed in claim 9, wherein the recommendations sent by the recommendation module are at least one of text, image, video. 11. The system as claimed in claim 9, wherein the notification module sends the notifications to the user by at least one of the web portal, mobile application, E-mail, message on the social network associated with the user.
Please help me write a proper abstract based on the patent claims.
The present invention provides a method and system for providing a platform for building a dynamic social knowledgebase. The method includes importing a traditional knowledgebase from an entity at regular intervals and receiving login credentials from a user via a web portal or mobile application. The method also includes retrieving a social knowledgebase from a social network associated with the user, providing recommendations to the user based on articles that are more clickable in the traditional knowledgebase and the social knowledgebase, sending a notification to the user based on the latest happenings in the web portal or mobile application, monitoring a list of top contributors and influencers of the social knowledgebase in the social network associated with the user, and sending the list of top contributors and influencers to a backend system for analysis and use.
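The recommendation step (claim 1d) ranks articles by clickability across both knowledgebases. A minimal sketch, under the assumptions that clickability is a raw click count and each article is a (title, clicks) pair; the claims leave the actual clickability parameters (keywords, context, trends, feedback) open:

```python
def recommend(traditional, social, top_n=3):
    # Merge articles from the traditional and social knowledgebases,
    # summing click counts for articles present in both, then return
    # the titles of the most clickable ones.
    merged = {}
    for title, clicks in traditional + social:
        merged[title] = merged.get(title, 0) + clicks
    ranked = sorted(merged.items(), key=lambda kv: kv[1], reverse=True)
    return [title for title, _ in ranked[:top_n]]
```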
1. An information processing apparatus comprising: a course setting unit that sets a course containing at least one place associated with positional information; a course information generation unit that generates first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course; and a course information provision unit that provides the first course information to a second user different from the first user. 2. The information processing apparatus according to claim 1, further comprising: a place identification unit that identifies the place the first user has visited, wherein the course information generation unit generates the first course information which displays, in association with the place, the first user behavior information generated at the place in a case where the place the first user has visited is contained in the course. 3. The information processing apparatus according to claim 2, wherein the place identification unit further identifies the place the second user has visited, and the information processing apparatus further includes a course information update unit that updates the first course information by additionally associating second user behavior information generated from a behavior of the second user at the place with the place in a case where the place the second user has visited is contained in the course. 4. The information processing apparatus according to claim 3, wherein the course information update unit updates the first course information by adding a new place to the first course information and associating the second user behavior information with the place in a case where the place the second user has visited is not contained in the course. 5. The information processing apparatus according to claim 4, wherein the course information update unit adds a new place to the first course information and also adds the place to the course. 6. 
The information processing apparatus according to claim 2, wherein the course setting unit sets the course containing the place the first user has visited. 7. The information processing apparatus according to claim 2, wherein the place identification unit further identifies a place the second user has visited, and in a case where the place the second user has visited is contained in the course, the course information generation unit generates second course information regarding the course, the second course information displaying, in association with the place, second user behavior information generated from a behavior of the second user at the place. 8. The information processing apparatus according to claim 2, wherein the place identification unit calculates a moving speed of the first user on the basis of a history of positional information of the first user, distinguishes between a staying period and a moving period of the first user on the basis of the moving speed, and identifies a location of the first user in the staying period as the place the first user has visited. 9. The information processing apparatus according to claim 8, wherein the place identification unit distinguishes a period in which the moving speed is smaller than a first threshold as the staying period and distinguishes a period in which the moving speed is larger than the first threshold as the moving period. 10. The information processing apparatus according to claim 9, wherein in a case where a difference between the first threshold and a local maximum value or a local minimum value of the moving speed in a first staying period or a first moving period is smaller than or equal to a predetermined value, the place identification unit combines the first staying period or the first moving period with a second staying period or a second moving period before or after the first staying period or the first moving period. 11. 
The information processing apparatus according to claim 10, wherein the place identification unit combines the first staying period or the first moving period with one of the second staying period and the second moving period which has a larger difference between the first threshold and the local maximum value or the local minimum value of the moving speed in the period than the other has. 12. The information processing apparatus according to claim 11, wherein the first threshold is set on the basis of a frequency of staying and a frequency of moving for each moving speed in a behavior recognition result for the first user or a behavior recognition result for an average user. 13. The information processing apparatus according to claim 8, wherein the place identification unit calculates a moving acceleration of the first user on the basis of the history of the positional information, removes noise data from the history of the positional information on the basis of the moving acceleration, and then, calculates the moving speed of the first user on the basis of the history of the positional information. 14. The information processing apparatus according to claim 2, further comprising: a route identification unit that identifies a moving route of the first user, wherein the course information generation unit generates the first course information which displays, on the moving route, the place the first user has visited. 15. The information processing apparatus according to claim 14, wherein the route identification unit calculates a moving acceleration of the first user on the basis of a history of the positional information, removes noise data from the history of the positional information on the basis of the moving acceleration, and then, traces the history of the positional information to identify the moving route. 16. 
The information processing apparatus according to claim 15, wherein in a case where the moving acceleration when the first user moves from a first point to a second point is larger than a positive threshold, the route identification unit removes data corresponding to the second point as the noise data. 17. The information processing apparatus according to claim 15, wherein in a case where the moving acceleration when the first user moves from a first point to a second point is smaller than a negative threshold, the route identification unit refers to a history of a moving distance of the last three sections having, as the latest section, a section from the first point to the second point, and removes, as the noise data, data corresponding to a point sandwiched by sections of the last three sections the moving distance of which is not the smallest. 18. An information processing method comprising: setting a course containing at least one place associated with positional information; generating first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course; and providing the first course information to a second user different from the first user. 19. A program causing a computer to execute: a function of setting a course containing at least one place associated with positional information; a function of generating first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course; and a function of providing the first course information to a second user different from the first user.
Please help me write a proper abstract based on the patent claims.
Provided is an information processing apparatus including a course setting unit that sets a course containing at least one place associated with positional information, a course information generation unit that generates first course information regarding the course on the basis of first user behavior information generated from a behavior of a first user having visited the course, and a course information provision unit that provides the first course information to a second user different from the first user.
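Claims 8 and 9 recover visited places by thresholding the user's moving speed computed from a history of positional information. A small sketch of that segmentation; the sampling interval and the speed threshold are illustrative assumptions:

```python
import math

def moving_speeds(positions, dt=1.0):
    # Per-step speed from a history of (x, y) positions sampled every
    # dt seconds.
    return [math.hypot(x1 - x0, y1 - y0) / dt
            for (x0, y0), (x1, y1) in zip(positions, positions[1:])]

def segment_periods(speeds, threshold=1.5):
    # Steps slower than the threshold belong to a staying period, faster
    # steps to a moving period (claim 9); consecutive steps with the same
    # label are merged into one (label, start, end) period.
    periods = []
    for i, v in enumerate(speeds):
        label = "staying" if v < threshold else "moving"
        if periods and periods[-1][0] == label:
            periods[-1][2] = i
        else:
            periods.append([label, i, i])
    return [tuple(p) for p in periods]
```

The location during each staying period would then be identified as a place the user has visited, and short spurious periods could be merged into their neighbors as claims 10 and 11 describe.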
1. A method for conducting an interaction with a human user, the method comprising: collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user; making a set of inferences about the user in accordance with the data; and tailoring an output to be delivered to the user in accordance with the set of inferences. 2. The method of claim 1, wherein the set of inferences relates to one or more personal characteristics of the user. 3. The method of claim 2, wherein the one or more personal characteristics include an age of the user. 4. The method of claim 2, wherein the one or more personal characteristics include a gender of the user. 5. The method of claim 2, wherein the one or more personal characteristics include a socioeconomic group to which the user belongs. 6. The method of claim 1, wherein the set of inferences relates to a current affective state of the user. 7. The method of claim 1, wherein the making the set of inferences comprises: extracting at least one feature from the data; and classifying the at least one feature in accordance with at least one model that defines a potential characteristic of the user. 8. The method of claim 7, wherein the plurality of features include at least one of: a lexical content of an utterance made by the user or a linguistic content of an utterance made by the user. 9. The method of claim 7, wherein the plurality of features include at least one of: one or more pauses within an utterance made by the user, one or more increments in a duration of phones uttered by the user relative to a pre-computed average, a latency of the user to produce a response to a prompt, a probability distribution of unit durations, or timing Information related to one or more user interruptions to a previous output. 10. 
The method of claim 7, wherein the plurality of features include at least one of: a fundamental frequency range within an utterance made by the user, a fundamental frequency slope along one or more words, a probability distribution of a slope, or a probability distribution of a plurality of fundamental frequency values. 11. The method of claim 7, wherein the plurality of features include at least one of: a range of energy excursions within an utterance made by the user, a slope of energy within an utterance made by the user, a probability distribution of normalized energy, or a probability distribution of energy slopes. 12. The method of claim 7, wherein the plurality of features include at least one of: a color of at least a portion of a face of the user, a shape of at least a portion of a face of the user, a texture of at least a portion of a face of the user, an orientation of at least a portion of a face of the user, or a movement of at least a portion of a face of the user. 13. The method of claim 7, wherein the plurality of features include at least one of: whether the user is looking at a display on which the output is to be presented, a percentage of time spent by the user looking at a display on which the output is to be presented, a part of a display on which the output is to be presented on which the user is focused, how close a focus of the user is to a desired part of a display on which the output is to be presented, or a percentage of time spent by the user looking at a desired part of a display on which the output is to be presented. 14. The method of claim 7, wherein the plurality of features include at least one of: a shape of an area below a face of the user, a color of an area below a face of the user, or a texture of an area below a face of the user. 15. 
The method of claim 7, wherein the plurality of features include at least one of: a pose of a portion of a body of the user as a function of time or a motion of a portion of a body of the user as a function of time. 16. The method of claim 7, wherein the plurality of features include at least one of: a shape of footwear worn by the user, a color of footwear worn by the user, or a texture of footwear worn by the user. 17. The method of claim 7, wherein the classifying is performed using a statistical classifier. 18. The method of claim 7, wherein the classifying is performed using a training-based classifier. 19. A non-transitory computer readable medium containing an executable program for conducting an interaction with a human user, where the program performs steps comprising: collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user; making a set of inferences about the user in accordance with the data; and tailoring an output to be delivered to the user in accordance with the set of inferences. 20. A system for conducting an interaction with a human user, the system comprising: a plurality of multimodal sensors positioned in a vicinity of the user for collecting data about the user; a plurality of classifiers for making a set of inferences about the user in accordance with the data; and an output selection module for tailoring an output to be delivered to the user in accordance with the set of inferences.
Please help me write a proper abstract based on the patent claims.
The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences.
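The extract-then-classify step of claim 7, with the statistical classifier of claim 17, can be sketched as a nearest-centroid rule over a fused multimodal feature vector. The modality names, feature values and class labels below are illustrative assumptions, not the patent's feature set:

```python
def fuse(features_by_modality):
    # Concatenate per-modality feature vectors (e.g. prosody, face, gaze)
    # into one multimodal vector, in a fixed (sorted) modality order.
    return [v for name in sorted(features_by_modality)
            for v in features_by_modality[name]]

class NearestCentroid:
    # A minimal statistical classifier: each class is represented by the
    # mean of its training vectors, and prediction picks the closest mean.
    def fit(self, vectors, labels):
        self.centroids = {}
        for label in set(labels):
            rows = [x for x, l in zip(vectors, labels) if l == label]
            self.centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
        return self

    def predict(self, x):
        return min(self.centroids, key=lambda label: sum(
            (a - b) ** 2 for a, b in zip(x, self.centroids[label])))
```

The predicted label plays the role of an inference about the user (such as an age group or affective state) that the output selection module would then use to tailor the response.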
1. A method comprising the steps of: building a forecast for an autonomous agent, said building at least comprising: assigning a selected parameter of said autonomous agent to a state value; adding a new policy to a set of policies, said new policy mapping actions of said autonomous agent for optimizing said state value; and adding a new forecast to a set of forecasts, said forecast at least comprising a prediction of a next state of said autonomous agent following execution of said new policy; initiating said mapped actions according to said new policy; evaluating a state of said autonomous agent following completion of said mapped actions, said evaluation at least comprising comparing said state with said forecast; and determining whether to build an additional forecast, said determining at least in part based on said evaluation. 2. The method as recited in claim 1, further comprising the steps of: determining if said forecast is ineffective; and pruning said forecast from said set of forecasts upon said determination. 3. The method as recited in claim 1, further comprising the step of determining whether to terminate the method, said determining at least in part based on said evaluation. 4. The method as recited in claim 1, in which said set of forecasts comprises a hierarchical structure. 5. The method as recited in claim 1, in which said new policy further comprises starting and stopping criteria. 6. The method as recited in claim 1, in which a state predicted by any forecast in said set of forecasts is associated with at least one of said policies in said set of policies. 7. The method as recited in claim 1, in which said selected parameter comprises at least one of an observation signal, a forecast of interest, a function of a combination of observation signals, and a function of forecast values in said set of forecasts. 8. 
The method as recited in claim 3, in which said step of determining whether to terminate the method is further based on a threshold value. 9. The method as recited in claim 1, in which said forecast comprises General Value Functions. 10. A method comprising: steps for building a forecast for an autonomous agent; steps for initiating mapped actions; steps for evaluating a state of said autonomous agent following completion of said mapped actions; and steps for determining whether to build an additional forecast. 11. The method as recited in claim 10, further comprising: steps for determining if said forecast is ineffective; steps for pruning said forecast from said set of forecasts upon said determination; and steps for determining whether to terminate the method. 12. A non-transitory computer-readable storage medium with an executable program stored thereon, wherein the program instructs one or more processors to perform the following steps: building a forecast for an autonomous agent, said building at least comprising: assigning a selected parameter of said autonomous agent to a state value; adding a new policy to a set of policies, said new policy mapping actions of said autonomous agent for optimizing said state value; and adding a new forecast to a set of forecasts, said forecast at least comprising a prediction of a next state of said autonomous agent following execution of said new policy; initiating said mapped actions according to said new policy; evaluating a state of said autonomous agent following completion of said mapped actions, said evaluation at least comprising comparing said state with said forecast; and determining whether to build an additional forecast, said determining at least in part based on said evaluation. 13. The program instructing the one or more processors as recited in claim 12, further comprising the steps of: determining if said forecast is ineffective; and pruning said forecast from said set of forecasts upon said determination. 
14. The program instructing the one or more processors as recited in claim 12, further comprising the step of determining whether to terminate the method, said determining at least in part based on said evaluation. 15. The program instructing the one or more processors as recited in claim 12, in which said set of forecasts comprises a hierarchical structure. 16. The program instructing the one or more processors as recited in claim 12, in which said new policy further comprises starting and stopping criteria. 17. The program instructing the one or more processors as recited in claim 12, in which a state predicted by any forecast in said set of forecasts is associated with at least one of said policies in said set of policies. 18. The program instructing the one or more processors as recited in claim 12, in which said selected parameter comprises at least one of an observation signal, a forecast of interest, a function of a combination of observation signals, and a function of forecast values in said set of forecasts. 19. The program instructing the one or more processors as recited in claim 14, in which said step of determining whether to terminate the method is further based on a threshold value. 20. The program instructing the one or more processors as recited in claim 12, in which said forecast comprises General Value Functions.
Please help me write a proper abstract based on the patent claims.
A method comprises building a forecast for an autonomous agent. The building at least comprises assigning a selected parameter of the autonomous agent to a state value, adding a new policy to a set of policies where the new policy maps actions of the autonomous agent for optimizing the state value, and adding a new forecast to a set of forecasts where the forecast at least comprises a prediction of a next state of the autonomous agent following execution of the new policy. The mapped actions are initiated according to the new policy. A state of the autonomous agent is evaluated following completion of the mapped actions. The evaluation at least comprises comparing the state with the forecast. Whether to build an additional forecast is determined. The determining is at least in part based on the evaluation.
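The build–act–evaluate cycle recited in the claims above can be sketched in a few lines of Python. This is a toy illustration only: the data structures, the function names (`build_forecast`, `run_cycle`), and the ±0.1 noise model standing in for the environment are invented for the sketch and are not part of the claims.

```python
import random

def build_forecast(policies, forecasts, parameter):
    """Build one forecast: assign a selected parameter to a state value, add a
    new policy that maps actions to optimize that value, and record the
    predicted next state after executing the policy."""
    state_value = parameter()                                   # selected parameter -> state value
    policy = {"target": state_value, "actions": ["adjust"]}     # new policy mapping actions
    policies.append(policy)
    forecast = {"policy": policy, "predicted_next_state": state_value + 1.0}
    forecasts.append(forecast)                                  # new forecast added to the set
    return forecast

def run_cycle(policies, forecasts, parameter, tolerance=0.5):
    """One iteration: build a forecast, initiate the mapped actions, evaluate
    the resulting state against the forecast, and decide whether to build more."""
    forecast = build_forecast(policies, forecasts, parameter)
    # "initiate mapped actions": here simulated as reaching a noisy next state
    actual_next_state = forecast["predicted_next_state"] + random.uniform(-0.1, 0.1)
    # evaluation: compare the observed state with the forecast
    error = abs(actual_next_state - forecast["predicted_next_state"])
    build_more = error > tolerance      # determination based on the evaluation
    return error, build_more
```

A real agent would replace the noise model with its environment and could prune forecasts whose prediction error stays high, as in dependent claim 2.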
1. A computer-implemented method for generating a character response during an interaction with a user, the method comprising: evaluating user input data that is associated with a user device to determine a user intent and an assessment domain; selecting at least one inference algorithm from a plurality of inference algorithms based on at least one of the user intent and the assessment domain, wherein the at least one inference algorithm implements machine learning functionality; computing a character response to the user input data based on the at least one inference algorithm, the user input data, a set of personality characteristics associated with a character, and data representing knowledge associated with the character; and causing the user device to output the character response to the user. 2. The computer-implemented method of claim 1, wherein the at least one inference algorithm comprises at least a first inference algorithm and a second inference algorithm, wherein the first inference algorithm implements machine learning functionality. 3. The computer-implemented method of claim 1, wherein computing the character response to the user input data comprises: generating an inference based on the at least one inference algorithm, the user input data, and the data representing knowledge associated with the character; selecting the set of personality characteristics from a plurality of sets of personality characteristics based on at least one of the inference, the user intent, and the assessment domain; and generating the character response based on the inference and the set of personality characteristics. 4. The computer-implemented method of claim 1, wherein the data representing knowledge associated with the character includes information obtained from at least one of a World Wide Web, a script, a book, and a user-specific history. 5. 
The computer-implemented method of claim 1, further comprising updating the data representing knowledge associated with the character based on at least one of the user input data and the character response. 6. The computer-implemented method of claim 1, further comprising, in a batch mode: generating training data based on at least one of the user input data, the character response, the data representing knowledge associated with the character, and one or more data sources; and performing one or more operations that train the at least one inference algorithm based on the training data. 7. The computer-implemented method of claim 5, wherein the one or more data sources include a gamification platform that includes at least one of software and hardware that implement game mechanics to entice the user to provide input that can be used to train the at least one inference algorithm. 8. The computer-implemented method of claim 1, wherein causing the user device to output the character response comprises generating at least one of a physical action, a sound, and an image. 9. The computer-implemented method of claim 1, wherein the user device comprises a robot, a walk around character, a toy, or a computing device. 10. The computer-implemented method of claim 1, wherein the set of personality characteristics comprises a plurality of parameters, wherein each parameter is associated with a different personality dimension. 11. The computer-implemented method of claim 1, wherein the at least one inference algorithm comprises a Markov model, a computer vision system, a theory of mind system, a neural network, or a support vector machine. 12. 
A character engine that executes on one or more processors, the character engine comprising: a user intent engine that, when executed by the one or more processors, evaluates user input data that is associated with a user device to determine a user intent; a domain engine that, when executed by the one or more processors, evaluates at least one of the user input data and the user intent to determine an assessment domain; and an inference engine that, when executed by the one or more processors: selects at least one inference algorithm from a plurality of inference algorithms based on at least one of the user intent and the assessment domain, wherein the at least one inference algorithm implements machine learning functionality; and computes a character response to the user input data based on the at least one inference algorithm, the user input data, a set of personality characteristics associated with a character, and data representing knowledge associated with the character; and an output device abstraction infrastructure that, when executed by the one or more processors, causes the user device to output the character response to the user. 13. The character engine of claim 12, wherein the at least one inference algorithm comprises at least a first inference algorithm and a second inference algorithm, wherein the first inference algorithm implements machine learning functionality. 14. The character engine of claim 12, wherein the inference engine computes the character response to the user input data by: generating an inference based on the at least one inference algorithm, the user input data, and the data representing knowledge associated with the character; selecting the set of personality characteristics from a plurality of sets of personality characteristics based on at least one of the inference, the user intent, and the assessment domain; and generating the character response based on the inference and the set of personality characteristics. 15. 
The character engine of claim 12, wherein the data representing knowledge associated with the character includes information obtained from at least one of a World Wide Web, a movie script, a book, and a user-specific history. 16. The character engine of claim 12, wherein causing the user device to output the character response comprises generating at least one of a physical action, a sound, and an image. 17. The character engine of claim 12, wherein the user device comprises a robot, a walk around character, a toy, or a computing device. 18. The character engine of claim 12, wherein the set of personality characteristics comprises a plurality of parameters, wherein each parameter is associated with a different personality dimension. 19. The character engine of claim 12, wherein the at least one inference algorithm comprises a Markov model, a computer vision system, a theory of mind system, a neural network, or a support vector machine. 20. A computer-readable storage medium including instructions that, when executed by a processor, cause the processor to generate a character response during an interaction with a user by performing the steps of: selecting at least one inference algorithm from a plurality of inference algorithms based on at least one of a user intent and an assessment domain, wherein the at least one inference algorithm implements machine learning functionality; causing the at least one inference algorithm to compute an inference based on user input data and data representing knowledge associated with a character; and causing a personality engine associated with the character to compute a character response to the user input data based on the inference.
Please help me write a proper abstract based on the patent claims.
In one embodiment, a character engine models a character that interacts with users. The character engine receives user input data from a user device, and analyzes the user input data to determine a user intent and an assessment domain. Subsequently, the character engine selects inference algorithm(s) that include machine learning capabilities based on the intent and the assessment domain. The character engine computes a response to the user input data based on the selected inference algorithm(s) and a set of personality characteristics that are associated with the character. Finally, the character engine causes the user device to output the response to the user. In this fashion, the character engine includes sensing functionality, thinking and learning functionality, and expressing functionality. By aggregating advanced sensing techniques, inference algorithms, character-specific personality characteristics, and expressing algorithms, the character engine provides a realistic illusion that users are interacting with the character.
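The sense–select–respond pipeline described above can be mimicked in a short sketch. Everything here is hypothetical: the keyword-based intent detector, the `(intent, domain)` algorithm registry, and the string-concatenation "personality" are stand-ins for the machine-learning components the claims actually cover.

```python
# Hypothetical registry of inference algorithms; names are illustrative only.
def keyword_intent(text):
    """Toy user-intent engine: treat anything ending in '?' as a question."""
    return "question" if text.strip().endswith("?") else "statement"

def trivia_inference(text, knowledge):
    """Toy inference algorithm drawing on the character's knowledge."""
    return knowledge.get("fact", "I am not sure.")

def chat_inference(text, knowledge):
    """Fallback inference algorithm for unmatched intent/domain pairs."""
    return "Interesting!"

ALGORITHMS = {("question", "trivia"): trivia_inference}

def character_response(text, domain, knowledge, personality):
    intent = keyword_intent(text)                             # evaluate input -> user intent
    algo = ALGORITHMS.get((intent, domain), chat_inference)   # select algorithm by intent/domain
    inference = algo(text, knowledge)                         # inference from input + knowledge
    return f"{personality['style']} {inference}"              # blend in personality characteristics
```

In the claimed system the registry would hold real learners (Markov models, neural networks, support vector machines) and the personality step would select among parameterized personality dimensions rather than prepend a catchphrase.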
1. A system for facilitating communication between one or more applications and one or more rules engines, the system comprising: a design tool configured to provide a platform syntax for defining rules for the one or more applications and to associate each rule with a respective predetermined one of the one or more rules engines, wherein the association of each rule with the respective predetermined one of the rules engines is based on predetermined criteria; a rule processor configured to translate each rule defined using the design tool from the platform syntax into a rules engine syntax of the respective predetermined one of the rules engines; a repository configured to store each rule, each translated rule defined using the design tool, an association of each rule defined using the design tool to its respective translated rule, and the association of each rule defined using the design tool with its respective predetermined one of the rules engines; and a rules execution platform configured to receive a request from one of the applications to execute one of the rules defined using the design tool, identify the respective predetermined one of the rules engines associated with the requested rule, and transmit the requested rule to the respective predetermined one of the rules engines associated with the requested rule. 2. The system of claim 1 wherein the predetermined criteria comprises an environment in which the design tool is initiated. 3. The system of claim 1 wherein the predetermined criteria comprises information corresponding to a user that defined the requested rule using the design tool. 4. The system of claim 3 wherein the information comprises a role of the user. 5. The system of claim 4 wherein the role comprises a functional role of the user. 6. The system of claim 1 wherein the design tool is configured to present various views depending on a role of a user accessing the design tool. 7. 
The system of claim 6 wherein the role comprises a system role.
Please help me write a proper abstract based on the patent claims.
Methods, mediums, and systems are described for providing a platform coupled to one or more rules engines. The platform may provide runtime rule services to one or more applications. Different rules engines may be used for different types of rules, such as calculations, decisions, process control, transformation, and validation. Rules engines can be added, removed, and reassigned to the platform. When the platform receives a request for services from an application, the platform selects one of the rules engines to handle the request and instructs the selected rules engine to execute the rule. The rules engine may be selected automatically. The platform may be implemented through a service-oriented architecture.
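The design-time association, translation, and runtime routing described above can be outlined as a small dispatcher. The class and method names are ours, and the lambda "engines" are placeholders for real rules engines with their own syntaxes.

```python
class RulesPlatform:
    """Toy routing layer: each rule is associated with a rules engine at design
    time, translated into that engine's syntax, and dispatched at runtime."""

    def __init__(self):
        self.engines = {}      # engine name -> callable(translated_rule, facts)
        self.repository = {}   # rule name -> (engine name, translated rule)

    def register_engine(self, name, engine):
        self.engines[name] = engine

    def define_rule(self, name, rule, engine_name, translate):
        # design tool + rule processor: translate the rule from the platform
        # syntax and store its association with the chosen engine
        self.repository[name] = (engine_name, translate(rule))

    def execute(self, name, facts):
        # rules execution platform: identify the associated engine and dispatch
        engine_name, translated = self.repository[name]
        return self.engines[engine_name](translated, facts)
```

A production platform would also persist the untranslated rule and apply selection criteria (environment, user role) when choosing the engine, per dependent claims 2 through 6.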
1. A method for classifying binary data, the method comprising: obtaining training data having a predefined sample size, wherein the training data is composed of separable datasets; determining an exact bound on the Vapnik-Chervonenkis (VC) dimension of a hyperplane classifier for the training data, wherein the exact bound is based on one or more variables defining the hyperplane; minimizing the exact bound on the VC dimension; based on the minimizing of the exact bound, determining the optimal values of the one or more variables defining the hyperplane; and generating a binary classifier for predicting one class to which a given data sample belongs. 2. The method as claimed in claim 1, wherein the exact bound is a function of the distances of the closest and furthest points amongst the training data from the hyperplane, wherein the hyperplane classifies a plurality of points within the training data with zero error. 3. The method as claimed in claim 1, wherein the datasets are one of linearly separable datasets and non-linearly separable datasets. 4. The method as claimed in claim 2, wherein for the notional hyperplane depicted by the following relation: u^T x + v = 0, the exact bound on the VC dimension for the hyperplane classifier is a function of h, defined by: h = (Max_{i=1,2,…,M} |u^T x_i + v|) / (Min_{i=1,2,…,M} |u^T x_i + v|), wherein x_i, i=1,2,…,M depict data points within the training data. 5. The method as claimed in claim 2, wherein the function to be minimized is another function of h added to a misclassification error parameter. 6. The method as claimed in claim 4, wherein minimizing the exact bound further comprises: reducing the linear fractional programming problem of minimizing h to obtain a linear programming problem; and, by solving the linear programming problem so obtained, obtaining a decision function for classifying the test data. 7. 
The method as claimed in claim 6, wherein the decision function has a low VC dimension. 8. The method as claimed in claim 6, wherein the objective of the linear programming problem includes a function of the misclassification error. 9. A system for classifying test data, the system comprising: a processor; and a data classification module, wherein the data classification module is to: obtain training data having a predefined sample size, wherein the training data is composed of separable datasets; determine an exact bound on the Vapnik-Chervonenkis (VC) dimension of a hyperplane classifier for the training data, wherein the exact bound depends on the variables defining the said hyperplane; minimize the exact bound on the VC dimension; and, based on the minimizing of the exact bound, determine the optimal values of the variables defining the hyperplane, thus generating a binary classifier for predicting one class to which a given data sample belongs. 10. The system as claimed in claim 9, wherein the data classification module, for nonlinearly separable datasets in a first dimension, is to map samples of training data from the first dimension to a higher dimension using a mapping function φ. 11. The system as claimed in claim 10, wherein for a notional hyperplane depicted by the relation u^T φ(x) + v = 0, the data classification module is to: minimize an exact bound on the VC dimension of a hyperplane classifier, wherein the said classifier separates samples that have been transformed from the input dimension to a higher dimension by means of the mapping function φ; wherein the minimization task is achieved by solving a fractional programming problem that has been reduced to a linear programming problem. 12. 
The system as claimed in claim 9, wherein the data classification module utilizes a Kernel function K, wherein K is a function of two input vectors 'a' and 'b' with K being positive definite; and K(a,b) = φ(a)^T φ(b), with K(a,b) being an inner product of the vectors obtained by transforming vectors 'a' and 'b' into a higher dimensional space by using the mapping function φ. 13. The system as claimed in claim 9, wherein, alternatively, the data classification module is to further: obtain a tolerance regression parameter for a plurality of points within the training data; obtain the value of a hypothetical function or measurement at each of said training samples; derive a classification problem in which the samples of each of the two classes are determined by using the given data and the tolerance parameter; define a notional hyperplane, wherein the notional hyperplane classifies the plurality of points within the derived classification problem with minimal error; and, based on the notional hyperplane, generate a regressor corresponding to the plurality of points. 14. The system as claimed in claim 13, wherein for the notional hyperplane defined by w^T x + ηy + v = 0, the data classification module generates the regressor defined by: y = -(1/η)(w^T x + v). 15. The system as claimed in claim 14, wherein for the points forming a linearly separable dataset, the regressor is a linear regressor. 16. The system as claimed in claim 14, wherein for the points forming a nonlinearly separable dataset, the regressor is a kernel regressor. 17. The system as claimed in claim 14, wherein the regressor further includes an error parameter. 18. The method as claimed in claim 8, in which the solution of the linear programming problem yields a set of weights or co-efficients, with each weight corresponding to an input feature, attribute, or co-ordinate, and wherein the set of input features with non-zero weights constitutes a set of selected features to allow feature selection. 19. 
The method as claimed in claim 18, in which only the selected features are then used to compute a classifier, thus eliminating the noise or confusion introduced by features that are less discriminative. 20. The method as claimed in claim 5, in which the constraints are modified so that one of the terms of the objective function is non-essential and can be removed. 21. The method as claimed in claim 20, in which the removal of a term in the objective function removes the need to choose a hyper-parameter weighting the mis-classification error, thus simplifying the use of the said method. 22. The method as claimed in claim 17, in which the constraints are modified so that one of the terms of the objective function is non-essential and can be removed. 23. The method as claimed in claim 4, in which the Max function is replaced by a "soft Max" function in which distance is measured as a weighted function of distances from a plurality of hyperplanes, and in which the Min function is replaced by a "soft Min" function. 24. The system as claimed in claim 14, in which the Max function is replaced by a "soft Max" function in which distance is measured as a weighted function of distances from a plurality of hyperplanes, and in which the Min function is replaced by a "soft Min" function.
Please help me write a proper abstract based on the patent claims.
Systems and methods for classifying test data based on a maximum margin classifier are described. In one implementation, the method includes obtaining training data having a predefined sample size, wherein the training data is composed of separable datasets. An exact bound on the Vapnik-Chervonenkis (VC) dimension of a hyperplane classifier for the training data is then determined. The exact bound may be minimized to obtain the minimum-VC-dimension classifier for predicting at least one class to which samples of the training data belong.
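The exact bound h from claim 4 — the ratio of the furthest to the closest training point's (unnormalized) distance from the hyperplane — is easy to compute for a candidate hyperplane. A minimal NumPy sketch; the function name and interface are ours, not the patent's, and the actual invention minimizes this quantity via linear programming rather than merely evaluating it:

```python
import numpy as np

def exact_bound_h(u, v, X):
    """h = max_i |u^T x_i + v| / min_i |u^T x_i + v| for a hyperplane
    u^T x + v = 0 that classifies every row of X with zero error
    (so no sample lies exactly on the plane)."""
    d = np.abs(X @ u + v)      # |u^T x_i + v| for each training sample
    return d.max() / d.min()
```

For samples at distances 1, 2, and 4 from the plane, h = 4; driving h toward 1 pushes all samples into a thin slab, which is what bounds the classifier's VC dimension.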
1. A method for performing block retrieval on a block to be processed of a urine sediment image, comprising: using a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises a judgment node and a leaf node, and the judgment node judges the block to be processed to make it reach the leaf node by using a block retrieval feature in a block retrieval feature set to form a block retrieval result at the leaf node, wherein the block retrieval result comprises a retrieved block, and at least two decision trees in the plurality of decision trees are different in structures thereof and/or judgments performed by the judgment nodes thereof by using the block retrieval feature; and integrating the block retrieval results of the plurality of decision trees so as to form a final block retrieval result. 2. The method according to claim 1, characterized in that the step of integrating the block retrieval results of the plurality of decision trees comprises: voting for the blocks retrieved by the plurality of decision trees, wherein if there are m decision trees in the plurality of decision trees altogether which retrieve a specific block, the ballot of the specific block is m, with m being a positive integer; and arranging the blocks retrieved by the plurality of decision trees in a descending order of the ballot. 3. The method according to claim 2, characterized in that only the retrieved blocks with ballots greater than a preset threshold value are listed. 4. 
The method according to claim 1, characterized in that the step of using a plurality of decision trees to perform block retrieval on the block to be processed comprises: on each decision tree, in response to the block to be processed being judged by the judgment node and reaching the leaf node, acquiring a block belonging to the leaf node as a block retrieval result, wherein the block belonging to the leaf node is set in a manner as follows: training the plurality of decision trees by using a training sample block in a training sample block set so that on each decision tree, the training sample block is judged by the judgment node and reaches a corresponding leaf node, and becomes a block belonging to the corresponding leaf node. 5. The method according to claim 4, characterized in that a classification tag is preset for the training sample block in the training sample block set so that the retrieved blocks comprised in the block retrieval result also carry classification tags. 6. An apparatus for performing block retrieval on a block to be processed of a urine sediment image, comprising: a block retrieval unit configured to use a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises a judgment node and a leaf node, and the judgment node judges the block to be processed to make it reach the leaf node by using a block retrieval feature in a block retrieval feature set to form a block retrieval result at the leaf node, wherein the block retrieval result comprises a retrieved block, and at least two decision trees in the plurality of decision trees are different in structures thereof and/or judgments performed by the judgment nodes thereof by using the block retrieval feature; and an integration unit configured to integrate the block retrieval results of the plurality of decision trees so as to form a final block retrieval result. 7. 
The apparatus according to claim 6, characterized in that the integration unit is further configured to: vote for the blocks retrieved by the plurality of decision trees, wherein if there are m decision trees in the plurality of decision trees altogether which retrieve a specific block, the ballot of the specific block is m, with m being a positive integer; and arrange the blocks retrieved by the plurality of decision trees in a descending order of the ballot. 8. The apparatus according to claim 7, characterized in that the integration unit is further configured to only list the retrieved blocks with ballots greater than a preset threshold value. 9. The apparatus according to claim 6, characterized in that the block retrieval unit is configured to, on each decision tree, in response to the block to be processed being judged by the judgment node and reaching the leaf node, acquire a block belonging to the leaf node as a block retrieval result, wherein the block belonging to the leaf node is set in a manner as follows: training the plurality of decision trees by using a training sample block in a training sample block set so that on each decision tree, the training sample block is judged by the judgment node and reaches a corresponding leaf node, and becomes a block belonging to the corresponding leaf node. 10. The apparatus according to claim 9, characterized in that a classification tag is preset for the training sample block in the training sample block set so that the retrieved blocks comprised in the block retrieval result also carry classification tags. 11. A device for performing block retrieval on a block to be processed of a urine sediment image, comprising: a memory for storing executable instructions, the executable instructions, when executed, implementing the method of claim 1; and a processor for executing the executable instructions. 12. 
A machine-readable medium on which an executable instruction is stored, wherein when the executable instruction is executed, a machine is caused to perform the method of claim 1.
Please help me write a proper abstract based on the patent claims.
The inventive concepts herein relate to performing block retrieval on a block to be processed of a urine sediment image. The method comprises: using a plurality of decision trees to perform block retrieval on the block to be processed, wherein each of the plurality of decision trees comprises a judgment node and a leaf node, and the judgment node judges the block to be processed to make it reach the leaf node by using a block retrieval feature in a block retrieval feature set to form a block retrieval result at the leaf node, and at least two decision trees in the plurality of decision trees are different in structures thereof and/or judgments performed by the judgment nodes thereof by using the block retrieval feature; and integrating the block retrieval results of the plurality of decision trees so as to form a final block retrieval result.
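The voting-based integration step of claims 2 and 3 can be sketched directly: each tree contributes one retrieval result, ballots are tallied per block, and blocks are listed in descending ballot order above a threshold. Function and variable names are illustrative.

```python
def integrate_retrievals(tree_results, threshold=0):
    """Integrate per-tree block retrieval results by voting: a block retrieved
    by m of the decision trees gets ballot m; return (block, ballot) pairs in
    descending ballot order, keeping only ballots greater than the threshold."""
    ballots = {}
    for retrieved in tree_results:            # one retrieval result per tree
        for block in retrieved:
            ballots[block] = ballots.get(block, 0) + 1
    ranked = sorted(ballots.items(), key=lambda kv: -kv[1])
    return [(block, m) for block, m in ranked if m > threshold]
```

In the claimed system the `tree_results` would come from blocks attached to the leaf nodes reached by the block to be processed, and the retrieved blocks would carry the classification tags preset on the training sample blocks.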
1. A method of updating a set of classifiers comprising: applying a first set of classifiers to a first set of data; and requesting, from a remote device, a classifier update based at least in part on at least one of an output of the first set of classifiers or a performance measure of the application of the first set of classifiers. 2. The method of claim 1, in which the requesting is based at least in part on context information. 3. The method of claim 1, in which the performance measure comprises an accuracy of the classifiers, a level of agreement of multiple classifiers, or a speed of computation of the classifiers. 4. The method of claim 1, in which the first set of classifiers and the classifier update are built on a same feature generator. 5. The method of claim 1, in which the first set of classifiers comprises a general classifier and the classifier update comprises a specific classifier. 6. The method of claim 5, further comprising applying the specific classifier to an object to identify a specific class of the object. 7. The method of claim 1, in which the remote device is configured to apply the first set of classifiers. 8. The method of claim 7, further comprising: computing features and transmitting the computed features to the remote device, the remote device applying the first set of classifiers to the computed features to compute a classification. 9. An apparatus for updating a set of classifiers comprising: a memory; and at least one processor coupled to the memory, the at least one processor being configured: to apply a first set of classifiers to a first set of data; and to request, from a remote device, a classifier update based at least in part on at least one of an output of the first set of classifiers or a performance measure of the application of the first set of classifiers. 10. 
The apparatus of claim 9, in which the at least one processor is further configured to request the classifier update based at least in part on context information. 11. The apparatus of claim 9, in which the performance measure comprises an accuracy of the classifiers, a level of agreement of multiple classifiers, or a speed of computation of the classifiers. 12. The apparatus of claim 9, in which the first set of classifiers and the classifier update are built on a same feature generator. 13. The apparatus of claim 9, in which the first set of classifiers comprises a general classifier and the classifier update comprises a specific classifier. 14. The apparatus of claim 13, in which the at least one processor is further configured to apply the specific classifier to an object to identify a specific class of the object. 15. The apparatus of claim 9, in which the remote device is configured to apply the first set of classifiers. 16. The apparatus of claim 15, in which the at least one processor is further configured: to compute features and transmit the computed features to the remote device, the remote device applying the first set of classifiers to the computed features to compute a classification. 17. An apparatus for updating a set of classifiers comprising: means for applying a first set of classifiers to a first set of data; and means for requesting, from a remote device, a classifier update based at least in part on at least one of an output of the first set of classifiers or a performance measure of the application of the first set of classifiers. 18. 
A computer program product for updating a set of classifiers comprising: a non-transitory computer readable medium having encoded thereon program code, the program code comprising: program code to apply a first set of classifiers to a first set of data; and program code to request, from a remote device, a classifier update based at least in part on at least one of an output of the first set of classifiers or a performance measure of the application of the first set of classifiers.
Please help me write a proper abstract based on the patent claims.
A method of updating a set of classifiers includes applying a first set of classifiers to a first set of data. The method further includes requesting, from a remote device, a classifier update based on an output of the first set of classifiers or a performance measure of the application of the first set of classifiers.
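One of the claimed performance measures — the level of agreement among multiple classifiers — lends itself to a short sketch of the update-request logic. The agreement metric, thresholds, and `request_fn` callback are assumptions made for illustration, not the patent's specification.

```python
def maybe_request_update(classifiers, samples, agreement_floor=0.6, request_fn=None):
    """Apply the current set of classifiers and request a classifier update
    from a (hypothetical) remote device when inter-classifier agreement,
    averaged over the samples, falls below a floor."""
    agree = 0.0
    for x in samples:
        votes = [clf(x) for clf in classifiers]                 # output of each classifier
        majority = max(set(votes), key=votes.count)
        agree += votes.count(majority) / len(votes)             # fraction agreeing with majority
    level = agree / len(samples)                                # performance measure
    if level < agreement_floor and request_fn is not None:
        return request_fn(level)    # e.g. fetch a more specific classifier update
    return None
```

In the claimed split-computation variant, the device would instead transmit computed features to the remote device and receive classifications back.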
1. A method for implementing a convolutional neural network (CNN) accelerator on a target, comprising: identifying characteristics and parameters for the CNN accelerator; identifying resources on the target; and generating a design for the CNN accelerator in response to the characteristics and parameters of the CNN accelerator and the resources on the target. 2. The method of claim 1, wherein identifying characteristics and parameters for the CNN accelerator comprises receiving the characteristics from a user. 3. The method of claim 2, wherein the characteristics for the CNN accelerator comprises: a number and sequence of stages of layers; sizes and coefficients of filters; and sizes, strides, and padding of images. 4. The method of claim 1, wherein the characteristics for the CNN accelerator comprises a range of characteristics that allows the CNN accelerator to execute a plurality of CNN algorithms. 5. The method of claim 4, wherein generating the design for the CNN accelerator comprises implementing configurable status registers (CSR), programmable by a user at runtime, to configure the target to support characteristics required for executing one of the plurality of CNN algorithms. 6. The method of claim 1, wherein generating the design for the CNN accelerator comprises assigning an appropriate size for buffers in response to sizes of images to be processed by the CNN accelerator. 7. The method of claim 1, wherein generating the design for the CNN accelerator comprises generating computation units in response to available resources on the target. 8. The method of claim 7, wherein generating the computation units comprises generating processing elements utilizing digital signal processor blocks, memory blocks, and adders on the target. 9. 
The method of claim 1, wherein generating the design for the CNN accelerator comprises generating a sequencer unit that coordinates transmission of data to appropriate processing elements on the CNN accelerator at appropriate times in order to time multiplex computations on the processing elements. 10. The method of claim 1 further comprising: identifying a CNN algorithm to execute on the CNN accelerator; identifying a variation of the CNN accelerator that supports execution of the CNN algorithm; and setting configurable status registers on the target to support the variation of the CNN accelerator. 11. A method for implementing a convolutional neural network (CNN) accelerator on a target, comprising: identifying a CNN algorithm to execute on the CNN accelerator; identifying a variation of the CNN accelerator that supports execution of the CNN algorithm; and setting configurable status registers on the target to support the variation of the CNN accelerator. 12. The method of claim 11 further comprising: determining whether a different CNN algorithm is to be executed on the CNN accelerator; identifying a different variation of the CNN accelerator that supports execution of the different CNN algorithm; and setting the configurable status registers on the target to support the different variation of the CNN accelerator. 13. The method of claim 11, wherein setting the configurable status registers adds or subtracts convolution layers on the CNN accelerator. 14. The method of claim 11, wherein setting the configurable status registers sets filter coefficients. 15. The method of claim 11, wherein setting the configurable status registers removes one or more pooling layers. 16. The method of claim 11, wherein setting the configurable status registers reduces a size of a filter. 17. The method of claim 11 further comprising programming the target to implement the CNN accelerator with a configuration file. 18. 
A non-transitory computer readable medium including a sequence of instructions stored thereon for causing a computer to execute a method for implementing a convolutional neural network (CNN) accelerator on a target, the method comprising: identifying characteristics and parameters for the CNN accelerator; identifying resources on the target; and generating a design for the CNN accelerator in response to the characteristics and parameters of the CNN accelerator and the resources on the target. 19. The non-transitory computer readable medium of claim 18, wherein the characteristics for the CNN accelerator comprise a range of characteristics that allows the CNN accelerator to execute a plurality of CNN algorithms. 20. The non-transitory computer readable medium of claim 19, wherein generating the design for the CNN accelerator comprises implementing configurable status registers (CSR), programmable by a user at runtime, to configure the target to support characteristics required for executing one of the plurality of CNN algorithms.
Please help me write a proper abstract based on the patent claims.
A method for implementing a convolutional neural network (CNN) accelerator on a target includes identifying characteristics and parameters for the CNN accelerator. Resources on the target are identified. A design for the CNN accelerator is generated in response to the characteristics and parameters of the CNN accelerator and the resources on the target.
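The design-generation step in the claims above (sizing buffers from image dimensions, deriving computation units from available resources) can be pictured with a minimal sketch. The `generate_design` function, its dictionary keys, and the sizing heuristics are hypothetical stand-ins for illustration, not the claimed implementation:

```python
def generate_design(characteristics, resources):
    """Derive a toy CNN accelerator design from algorithm characteristics
    and the resources available on the target (illustrative only)."""
    # Buffer sized to hold the largest image the accelerator must process
    # (cf. claim 6: buffer size chosen in response to image sizes).
    buffer_size = max(w * h for (w, h) in characteristics["image_sizes"])
    # One processing element per available DSP block, capped by the filter
    # count (cf. claim 7: computation units follow available resources).
    num_pe = min(resources["dsp_blocks"], characteristics["num_filters"])
    return {"buffer_size": buffer_size, "processing_elements": num_pe}

design = generate_design(
    {"image_sizes": [(224, 224), (112, 112)], "num_filters": 64},
    {"dsp_blocks": 48},
)
```

Here the resource count, not the algorithm, limits the number of processing elements, which is the resource-driven trade-off the claims describe.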
1. A system comprising: a fuzzy logic-based adaptive power management system; a photovoltaic system; a first, capacitor-based energy storage system; a second, battery-based energy storage system; and a storage of knowledge of system operation and of operation of energy storage related devices within the system, wherein the power management system communicates with the photovoltaic system, the capacitor-based and battery-based energy storage systems, and the storage of knowledge to influence energy fluctuations ahead of detailed control loops in power electronic devices, the fuzzy logic-based adaptive system including a filter-based power coordination layer for power conditioning among the energy storage systems and a fuzzy logic-based control adjustment for monitoring the operational status of all energy storage devices, taking into account their dynamic characteristics, to fine-tune control settings within the system adaptively and influence optimal power or energy distributions within the system. 2. The system of claim 1, wherein the filter-based power coordination ensures that a supercapacitor storage device compensates sudden changes in rapidly fluctuating PV output power while the battery covers a smoothing power profile, and during normal operation periods, the references for different energy storage devices work sufficiently, with the references being modifiable under certain conditions in order to improve the overall system performance. 3. 
The system of claim 2, wherein the fuzzy logic-based control adjustment provides that, during system operation, the energy storage devices keep switching among different operation modes and present different dynamics; achieving smooth changes over the various operation modes and maintaining consistent system performance includes tuning the control along with those changes, the fuzzy logic adjustment providing advantages in non-linear system control without requiring precise mathematical modeling or sophisticated computations in certain situations. 4. The system of claim 1, wherein the fuzzy logic-based control adjustment comprises fuzzy logic-based smoothing control, fuzzy logic-based battery power control and fuzzy logic-based ultracapacitor power control. 5. The system of claim 1, wherein the fuzzy logic-based smoothing control comprises low-pass filtering that influences a smoothing power profile, a difference between the smoothing power and the actual power being covered by discharging or charging of a hybrid electrical energy storage system, a parameter in the smoothing control determining a smoothing performance in that, as the parameter becomes larger, more fluctuating power needs to be compensated by the hybrid ESS, which means more energy and power will be requested out of the energy storages. 6. The system of claim 1, wherein the fuzzy logic-based smoothing control comprises preventing the saturation or depletion of energy capacity and ensuring sustainable system operation, with states of charge for the battery and ultracapacitor being updatable when different unit sizes are applied in the PV system; as a power-intensive storage, the ultracapacitor may present a relatively fluctuating state-of-charge profile and is prone to energy depletion and saturation, so the positive-big and negative-big ranges take up a larger range than for the battery in the system. 7. 
The system of claim 1, wherein the fuzzy logic-based control adjustment comprises fuzzy logic-based ultracapacitor power control that adjusts a simulated ultracapacitor reference current by adding a deviating value, an output of the ultracapacitor reference current being directly applicable on a converter current control loop, and including fuzzy rules for preventing the ultracapacitor from energy depletion or saturation. 8. The system of claim 1, wherein the fuzzy logic-based control adjustment comprises fuzzy logic-based battery power control that adjusts a simulated battery reference current by adding a deviating value, an output of the battery reference current being directly applicable on a converter current control loop. 9. The system of claim 1, wherein fuzzy control in the fuzzy logic-based adaptive power management system is configured from the heuristic knowledge of the system operation in the storage of knowledge, the fuzzy control being tunable through system simulation studies, and configuration of the adaptive power management system keeping the system in a sustainable operation status and preserving acceptable life cycles for the energy storage devices. 10. 
A method comprising: employing a fuzzy logic-based adaptive power management system in an electrical energy system; coupling a photovoltaic system to the power management system; coupling a first, capacitor-based energy storage system to the power management system; coupling a second, battery-based energy storage system to the power management system; and coupling a storage of knowledge of system operation and of operation of energy storage related devices within the electrical energy system to the power management system, the power management system communicating with the photovoltaic system, the capacitor-based and battery-based energy storage systems, and the storage of knowledge for influencing energy fluctuations ahead of detailed control loops in power electronic devices, the fuzzy logic-based adaptive system including a filter-based power coordination layer for power conditioning among the energy storage systems and a fuzzy logic-based control adjustment for monitoring the operational status of all energy storage devices, taking into account their dynamic characteristics, for fine-tuning control settings within the system adaptively and influencing optimal power or energy distributions within the system. 11. The method of claim 10, wherein the filter-based power coordination ensures that a supercapacitor storage device provides for compensating sudden changes in rapidly fluctuating PV output power while the battery covers a smoothing power profile, and during normal operation periods, the references for different energy storage devices work sufficiently, with the references being modifiable under certain conditions in order to improve the overall system performance. 12. 
The method of claim 11, wherein the fuzzy logic-based control adjustment provides that, during system operation, the energy storage devices keep switching among different operation modes and present different dynamics; achieving smooth changes over the various operation modes and maintaining consistent system performance includes tuning the control along with those changes, while also providing advantages in non-linear system control without requiring precise mathematical modeling or sophisticated computations in certain situations. 13. The method of claim 10, wherein the fuzzy logic-based control adjustment comprises fuzzy logic-based smoothing control, fuzzy logic-based battery power control and fuzzy logic-based ultracapacitor power control. 14. The method of claim 10, wherein the fuzzy logic-based smoothing control comprises low-pass filtering that influences a smoothing power profile, a difference between the smoothing power and the actual power being covered by discharging or charging of a hybrid electrical energy storage system, a parameter in the smoothing control determining a smoothing performance in that, as the parameter becomes larger, more fluctuating power needs to be compensated by the hybrid electric storage system, which means more energy and power will be requested out of the energy storages. 15. The method of claim 10, wherein the fuzzy logic-based smoothing control comprises preventing the saturation or depletion of energy capacity and ensuring sustainable system operation, with states of charge for the battery and ultracapacitor being updatable when different unit sizes are applied in the PV system; as a power-intensive storage, the ultracapacitor may present a relatively fluctuating state-of-charge profile and is prone to energy depletion and saturation, so the positive-big and negative-big ranges take up a larger range than for the battery in the system. 16. 
The method of claim 10, wherein the fuzzy logic-based control adjustment comprises fuzzy logic-based ultracapacitor power control adjusting a simulated ultracapacitor reference current by adding a deviating value, an output of the ultracapacitor reference current being directly applicable on a converter current control loop, and including fuzzy rules for preventing the ultracapacitor from energy depletion or saturation. 17. The method of claim 10, wherein the fuzzy logic-based control adjustment comprises fuzzy logic-based battery power control adjusting a simulated battery reference current by adding a deviating value, an output of the battery reference current being directly applicable on a converter current control loop. 18. The method of claim 10, wherein fuzzy control in the fuzzy logic-based adaptive power management system comprises configuring the fuzzy control from heuristic knowledge of the system operation in the storage of knowledge, and tuning the fuzzy control through system simulation and configuration of the adaptive power management system while keeping the system in a sustainable operation status and preserving acceptable life cycles for the energy storage devices. 19. The method of claim 10, wherein the fuzzy logic-based control adjustment comprises fuzzy logic-based smoothing control, fuzzy logic-based battery power control and fuzzy logic-based ultracapacitor power control.
Please help me write a proper abstract based on the patent claims.
A hybrid ultracapacitor-battery energy storage system is integrated with a photovoltaic system to help mitigate power fluctuations. A fuzzy-logic-based adaptive power management system enables optimization of the power/energy distributions; it comprises a filter-based power coordination layer serving as a rudimentary step for power coordination among the hybrid storage system and a fuzzy-logic-based control adjustment layer that continuously monitors the operation status of all the energy storage devices, taking into account their dynamic characteristics, and fine-tunes the control settings adaptively.
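The filter-based coordination in the claims above splits PV power into a smooth component (battery reference) and a fast-fluctuating residual (ultracapacitor reference). A minimal sketch using a first-order exponential low-pass filter, where `split_pv_power` and the filter constant `alpha` are illustrative assumptions rather than the claimed controller:

```python
def split_pv_power(pv_samples, alpha=0.1):
    """Low-pass filter the PV power series: the smoothed component is the
    battery reference, the high-frequency residual the ultracapacitor's."""
    battery_ref, ucap_ref = [], []
    smooth = pv_samples[0]
    for p in pv_samples:
        smooth += alpha * (p - smooth)   # first-order low-pass update
        battery_ref.append(smooth)       # slow profile -> battery
        ucap_ref.append(p - smooth)      # fast residual -> ultracapacitor
    return battery_ref, ucap_ref
```

By construction the two references sum back to the measured PV power at every sample; a larger `alpha` passes more fluctuation through to the battery, mirroring the smoothing parameter described in claims 5 and 14.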
1. A computer-implemented method for determining if an account identifier is computer-generated, comprising: receiving the account identifier; dividing the account identifier into a plurality of fragments; determining one or more features of at least one of the fragments; determining the commonness of at least one of the fragments; and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments. 2. The method of claim 1, wherein determining if the account identifier is computer-generated comprises providing the features of at least one of the fragments, and the commonness of at least one of the fragments, to a probabilistic classifier model. 3. The method of claim 2, wherein the probabilistic classifier model is trained with a training set of a plurality of account identifiers that are known to be or known to not be computer-generated, and wherein the training set allows the probabilistic classifier model to independently weigh the one or more features in order to more accurately determine if the account identifier is computer-generated. 4. The method of claim 1, further comprising: determining one or more features of the account identifier by counting characters by character type. 5. The method of claim 1, wherein determining the commonness of at least one of the fragments comprises determining the frequency of occurrence of at least one of the fragments in data in a data store. 6. The method of claim 1, wherein at least one of the fragments is truncated to contain only consonants. 7. The method of claim 1, wherein each character of at least one of the fragments is hashed according to character type. 8. The method of claim 7, wherein each character type is selected from a group including consonant, vowel, number, and punctuation mark. 9. 
A system for determining if an account identifier is computer-generated, the system including: a data storage device storing instructions for determining if an account identifier is computer-generated; and a processor configured to execute the instructions to perform a method including: receiving the account identifier; dividing the account identifier into a plurality of fragments; determining one or more features of at least one of the fragments; determining the commonness of at least one of the fragments; and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments. 10. The system of claim 9, wherein determining if the account identifier is computer-generated comprises providing the features of at least one of the fragments, and the commonness of at least one of the fragments, to a probabilistic classifier model. 11. The system of claim 10, wherein the probabilistic classifier model is trained with a training set of a plurality of account identifiers that are known to be or known to not be computer-generated, and wherein the training set allows the probabilistic classifier model to independently weigh the one or more features in order to more accurately determine if the account identifier is computer-generated. 12. The system of claim 9, wherein the processor is further configured for: determining one or more features of the account identifier by counting characters by character type. 13. The system of claim 9, wherein determining the commonness of at least one of the fragments comprises determining the frequency of occurrence of at least one of the fragments in data in a data store. 14. The system of claim 9, wherein at least one of the fragments is truncated to contain only consonants. 15. The system of claim 9, wherein each character of at least one of the fragments is hashed according to character type. 16. 
A non-transitory computer-readable medium storing instructions that, when executed by a processor, cause the processor to perform a method for determining whether an account identifier is computer-generated, the method including: receiving the account identifier; dividing the account identifier into a plurality of fragments; determining one or more features of at least one of the fragments; determining the commonness of at least one of the fragments; and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments. 17. The non-transitory computer-readable medium of claim 16, wherein determining if the account identifier is computer-generated comprises providing the features of at least one of the fragments, and the commonness of at least one of the fragments, to a probabilistic classifier model. 18. The non-transitory computer-readable medium of claim 17, wherein the probabilistic classifier model is trained with a training set of a plurality of account identifiers that are known to be or known to not be computer-generated, and wherein the training set allows the probabilistic classifier model to independently weigh the one or more features in order to more accurately determine if the account identifier is computer-generated. 19. The non-transitory computer-readable medium of claim 16, wherein determining the commonness of at least one of the fragments comprises determining the frequency of occurrence of at least one of the fragments in data in a data store. 20. The non-transitory computer-readable medium of claim 16, wherein each character of at least one of the fragments is hashed according to character type.
Please help me write a proper abstract based on the patent claims.
Systems and methods are disclosed for determining if an account identifier is computer-generated. One method includes receiving the account identifier, dividing the account identifier into a plurality of fragments, and determining one or more features of at least one of the fragments. The method further includes determining the commonness of at least one of the fragments, and determining if the account identifier is computer-generated based on the features of at least one of the fragments, and the commonness of at least one of the fragments.
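The pipeline in the claims above (fragment the identifier, count characters by type, look up fragment commonness) can be sketched briefly. The fragmentation rule, the `fragment_features` helper, and the corpus-count dictionary are hypothetical choices for illustration; the claims leave these details open:

```python
import re

def fragment_features(identifier, corpus_counts):
    """Split an identifier into letter/digit fragments, count character
    types, and look up each fragment's frequency in a reference corpus."""
    fragments = re.findall(r"[a-z]+|[0-9]+", identifier.lower())
    features = {
        "n_letters": sum(c.isalpha() for c in identifier),
        "n_digits": sum(c.isdigit() for c in identifier),
        "n_fragments": len(fragments),
        # Commonness: corpus frequency of each fragment (0 if never seen).
        "commonness": [corpus_counts.get(f, 0) for f in fragments],
    }
    return fragments, features

frags, feats = fragment_features("john1984", {"john": 250})
```

These per-fragment features and commonness values would then be fed to a probabilistic classifier (claims 2 and 17), which is trained separately on identifiers known to be, or not to be, computer-generated.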
1. A method comprising: for a port of a neural module comprising a plurality of neurons and a plurality of ports: classifying the port into one of a plurality of port types, wherein the port is classified into a different port type than another port of the plurality of ports; interconnecting the port to at least one synapse classified into the same port type as the port; maintaining synaptic connectivity information indicative of the at least one synapse, a total sum of synaptic weights of the at least one synapse, and a target sum; and selectively updating a set of learning rules for the at least one synapse based on the total sum and the target sum. 2. The method of claim 1, wherein the synaptic connectivity information comprises a list of all synapses the port is connected to. 3. The method of claim 1, wherein a first port and a second port of the plurality of ports are classified into a first port type and a second port type, respectively, and the first port type and the second port type are different port types. 4. The method of claim 1, wherein the selectively updating comprises: determining whether the total sum exceeds or is less than the target sum; and updating the set of learning rules in response to determining the total sum exceeds or is less than the target sum. 5. The method of claim 1, further comprising: for a neuron of the neural module: maintaining neural information indicative of a membrane potential of the neuron, a first set of ports of the plurality of ports the neuron sends output to, and a second set of ports of the plurality of ports the neuron receives input from. 6. 
A system comprising a computer processor, a computer-readable hardware storage medium, and program code embodied with the computer-readable hardware storage medium for execution by the computer processor to implement a method comprising: for a port of a neural module comprising a plurality of neurons and a plurality of ports: classifying the port into one of a plurality of port types, wherein the port is classified into a different port type than another port of the plurality of ports; interconnecting the port to at least one synapse classified into the same port type as the port; maintaining synaptic connectivity information indicative of the at least one synapse, a total sum of synaptic weights of the at least one synapse, and a target sum; and selectively updating a set of learning rules for the at least one synapse based on the total sum and the target sum. 7. The system of claim 6, wherein the synaptic connectivity information comprises a list of all synapses the port is connected to. 8. The system of claim 6, wherein a first port and a second port of the plurality of ports are classified into a first port type and a second port type, respectively, and the first port type and the second port type are different port types. 9. The system of claim 6, wherein the selectively updating comprises: determining whether the total sum exceeds or is less than the target sum; and updating the set of learning rules in response to determining the total sum exceeds or is less than the target sum. 10. The system of claim 6, the method further comprising: for a neuron of the neural module: maintaining neural information indicative of a membrane potential of the neuron, a first set of ports of the plurality of ports the neuron sends output to, and a second set of ports of the plurality of ports the neuron receives input from. 11. 
A computer program product comprising a computer-readable hardware storage medium having program code embodied therewith, the program code being executable by a computer to implement a method comprising: for a port of a neural module comprising a plurality of neurons and a plurality of ports: classifying the port into one of a plurality of port types, wherein the port is classified into a different port type than another port of the plurality of ports; interconnecting the port to at least one synapse classified into the same port type as the port; maintaining synaptic connectivity information indicative of the at least one synapse, a total sum of synaptic weights of the at least one synapse, and a target sum; and selectively updating a set of learning rules for the at least one synapse based on the total sum and the target sum. 12. The computer program product of claim 11, wherein the synaptic connectivity information comprises a list of all synapses the port is connected to. 13. The computer program product of claim 11, wherein a first port and a second port of the plurality of ports are classified into a first port type and a second port type, respectively, and the first port type and the second port type are different port types. 14. The computer program product of claim 11, wherein the selectively updating comprises: determining whether the total sum exceeds or is less than the target sum; and updating the set of learning rules in response to determining the total sum exceeds or is less than the target sum. 15. The computer program product of claim 11, the method further comprising: for a neuron of the neural module: maintaining neural information indicative of a membrane potential of the neuron, a first set of ports of the plurality of ports the neuron sends output to, and a second set of ports of the plurality of ports the neuron receives input from.
Please help me write a proper abstract based on the patent claims.
The present invention provides an event-driven universal neural network circuit. The circuit comprises a plurality of neural modules. Each neural module comprises multiple digital neurons such that each neuron in a neural module has a corresponding neuron in another neural module. An interconnection network comprising a plurality of digital synapses interconnects the neural modules. Each synapse interconnects a first neural module to a second neural module by interconnecting a neuron in the first neural module to a corresponding neuron in the second neural module. Corresponding neurons in the first neural module and the second neural module communicate via the synapses. Each synapse comprises a learning rule associating a neuron in the first neural module with a corresponding neuron in the second neural module. A control module generates signals which define a set of time steps for event-driven operation of the neurons and event communication via the interconnection network.
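The selective-update step in the claims above fires only when the total synaptic weight sum deviates from the target sum. The claims do not specify the update itself, so in this sketch a proportional rescaling of the weights stands in for the learning-rule change; `update_learning_rules` and `rate` are illustrative assumptions:

```python
def update_learning_rules(weights, target_sum, rate=0.1):
    """Selective update: leave the weights alone when the total sum equals
    the target sum; otherwise nudge every weight proportionally toward it.
    Assumes a nonzero total (weights are positive synaptic strengths)."""
    total = sum(weights)
    if total == target_sum:          # no deviation -> no update (claim 4)
        return list(weights)
    scale = 1 + rate * (target_sum - total) / total
    return [w * scale for w in weights]
```

A call with a matching total returns the weights unchanged, while a deficit or excess moves the sum a step toward the target, mirroring the exceeds-or-is-less-than test in claims 4, 9, and 14.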
1. An encoding method of a processor for a pattern matching application comprising the steps of: encoding a plurality of input data before streaming them into the processor, encoding and placing a plurality of patterns on loop structures on the processor, and matching the encoded input data on the processor using the loop structures, wherein the plurality of input data are application data, and the loop structures contain the encoded patterns. 2. The encoding method according to claim 1, wherein both the plurality of input data and the plurality of patterns are encoded into subsets of characters. 3. The encoding method according to claim 1, wherein the encoded input data are put in a single self-loop state transition element (STE) or a loop structure with multiple STEs on the processor. 4. The encoding method according to claim 3, wherein the single self-loop STE or the loop structure with multiple STEs contains a set of characters. 5. The encoding method according to claim 4, wherein the looped STE structure remains activated when the set of characters in the looped STE structure and another set of characters streamed in serial are identical, and the looped STE structure is turned off when the set of characters in the looped STE structure and the another set of characters streamed in serial are not identical. 6. The encoding method according to claim 3, wherein an STE comprises an array of memory, and a value in the memory cell indicates whether the encoded input data matches with the encoded patterns on the processor. 7. The encoding method according to claim 1, wherein the processor is a non-von Neumann processor based on the architecture of a dynamic random-access memory (DRAM). 8. The encoding method according to claim 6, wherein the processor further comprises a routing matrix for implementing connections among STEs, Boolean logic gates, and counters on the processor. 9. 
The encoding method according to claim 1, wherein the encoding and the matching are performed in parallel. 10. An automata design method of the processor applying the encoding method according to claim 1, comprising: exact matching automata, Hamming distance automata, Levenshtein automata, and Damerau-Levenshtein automata. 11. The automata design method according to claim 10, wherein in the exact matching automata, whether the plurality of input data and the plurality of patterns are exactly identical is determined. 12. The automata design method according to claim 10, wherein in the Hamming distance automata, a one-to-one encoding method, a one-to-many encoding method, a many-to-one encoding method, and a many-to-many encoding method are used. 13. The automata design method according to claim 10, wherein in the Hamming distance automata, a ladder structure with a predetermined level is constructed to match the plurality of input data with the plurality of patterns within a predetermined distance. 14. The automata design method according to claim 10, wherein the Hamming distance automata is used to match the plurality of input data and the plurality of patterns with sliding windows, and a size of the plurality of input data is larger than that of the plurality of patterns. 15. The automata design method according to claim 10, wherein in the Levenshtein automata, left-shifted and right-shifted encoding are used for capturing insertions and deletions. 16. The automata design method according to claim 10, wherein in the Damerau-Levenshtein automata, AND logic gates are used for capturing transpositions of adjacent characters.
Please help me write a proper abstract based on the patent claims.
The subset encoding method and related automata designs for improving the space efficiency of many applications on the Automata Processor (AP) are presented. The method is a general method that can take advantage of the character-OR capability of STEs (State Transition Elements) on the AP, and can relieve the problems of limited hardware capacity and inefficient routing. Experimental results show that after applying the subset encoding method to Hamming distance automata, up to 3.2× more patterns can be placed on the AP if a sliding window is required. If a sliding window is not required, up to 192× more patterns can be placed on the AP. For Levenshtein distance, the subset encoding can split the Levenshtein automata into small chunks and make them routable on the AP. The impact of the subset encoding method depends on the character size of the AP.
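The self-loop STE behavior described in claims 3 to 5, where a looped element holds a set of characters and stays activated only while the streamed symbols agree with that set, can be approximated in software. The `LoopedSTE` class below is a simplified illustrative model, not the AP hardware element itself:

```python
class LoopedSTE:
    """Toy model of a self-loop state transition element holding a set of
    characters: it stays active while every streamed symbol belongs to the
    set, and turns off permanently at the first mismatching symbol."""
    def __init__(self, charset):
        self.charset = set(charset)
        self.active = True

    def stream(self, symbol):
        # A symbol outside the stored set deactivates the loop (claim 5).
        if symbol not in self.charset:
            self.active = False
        return self.active
```

Streaming "a" then "c" into `LoopedSTE("acgt")` keeps it active, while an "x" turns it off, and it stays off for the rest of the stream, which is the match/mismatch behavior claim 5 describes.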
1. A machine learning system including a computer for evaluating at least one characteristic of a heart valve, an inflow tract, an outflow tract or a combination thereof, the machine learning system comprising: a transformation function configured to predict at least one of an unknown anatomical characteristic or an unknown physiological characteristic of at least one of a heart valve, an inflow tract or an outflow tract, using at least one of a known anatomical characteristic or a known physiological characteristic of the at least one heart valve, inflow tract or outflow tract; and a production mode configured to use the transformation function to predict at least one of the unknown anatomical characteristic or the unknown physiological characteristic of the at least one heart valve, inflow tract or outflow tract, based on at least one of the known anatomical characteristic or the known physiological characteristic of the at least one heart valve, inflow tract or outflow tract, wherein the production mode is further configured to receive one or more feature vectors. 2. A machine learning system as in claim 1, wherein the machine learning system is configured to compute and store in a feature vector the at least one known anatomical characteristic or known physiological characteristic of the at least one heart valve, inflow tract or outflow tract. 3. A machine learning system as in claim 2, wherein the machine learning system is configured to calculate an approximate blood flow through the at least one heart valve, inflow tract or outflow tract. 4. A machine learning system as in claim 3, wherein the machine learning system is further configured to store quantities associated with the approximate blood flow through the at least one heart valve, inflow tract or outflow tract. 5. 
A machine learning system as in claim 4, wherein the machine learning system is further configured to perturb the at least one known anatomical characteristic or known physiological characteristic of the at least one heart valve, inflow tract or outflow tract stored in the feature vector. 6. A machine learning system as in claim 5, wherein the machine learning system is further configured to calculate a new approximate blood flow through the at least one heart valve, inflow tract or outflow tract with the perturbed at least one known anatomical characteristic or known physiological characteristic. 7. A machine learning system as in claim 6, wherein the machine learning system is further configured to store quantities associated with the new approximate blood flow through the perturbed at least one heart valve, inflow tract or outflow tract. 8. A machine learning system as in claim 7, wherein the machine learning system is further configured to repeat the perturbing, calculating and storing steps to create a set of feature vectors and quantity vectors and to generate the transformation function. 9. A machine learning system as in claim 1, wherein the production mode is configured to apply the transformation function to the feature vectors. 10. A machine learning system as in claim 9, wherein the production mode is configured to generate one or more quantities of interest. 11. A machine learning system as in claim 10, wherein the production mode is configured to store the quantities of interest. 12. A machine learning system as in claim 11, wherein the production mode is configured to process the quantities of interest to provide data for use in at least one of evaluation, diagnosis, prognosis, treatment or treatment planning related to a heart in which the heart valve resides. 13. 
A computer-implemented machine learning method for evaluating at least one characteristic of a heart valve, an inflow tract, an outflow tract or a combination thereof, the method comprising: predicting, with a transformation function on a computer, at least one of an unknown anatomical characteristic or an unknown physiological characteristic of at least one of a heart valve, an inflow tract or an outflow tract, using at least one of a known anatomical characteristic or a known physiological characteristic of the at least one heart valve, inflow tract or outflow tract; maintaining, in a feature vector on the computer, the at least one known anatomical characteristic or known physiological characteristic of the at least one heart valve, inflow tract or outflow tract; and using a production mode of a machine learning system on the computer to direct the transformation function to predict at least one of the unknown anatomical characteristic or the unknown physiological characteristic of the at least one heart valve, inflow tract or outflow tract, based on at least one of the known anatomical characteristic or the known physiological characteristic of the at least one heart valve, inflow tract or outflow tract. 14. A method as in claim 13, further comprising using the computer to calculate an approximate blood flow through the at least one heart valve, inflow tract or outflow tract. 15. A method as in claim 14, further comprising using the computer to store quantities associated with the approximate blood flow through the at least one heart valve, inflow tract or outflow tract. 16. A method as in claim 15, further comprising using the computer to perturb the at least one known anatomical characteristic or known physiological characteristic of the at least one heart valve, inflow tract or outflow tract stored in the feature vector. 17. 
A method as in claim 16, further comprising using the computer to calculate a new approximate blood flow through the at least one heart valve, inflow tract or outflow tract with the perturbed at least one known anatomical characteristic or known physiological characteristic. 18. A method as in claim 17, further comprising using the computer to store quantities associated with the new approximate blood flow through the perturbed at least one heart valve, inflow tract or outflow tract. 19. A method as in claim 18, further comprising using the computer to repeat the perturbing, calculating and storing steps to create a set of feature vectors and quantity vectors and to generate the transformation function. 20. A method as in claim 13, further comprising using the computer to perform the following steps: receiving patient-specific data selected from the group consisting of anatomic data, physiologic data, and hemodynamic data; generating a digital model of the at least one heart valve, inflow tract or outflow tract, based on the received data; discretizing the digital model; applying boundary conditions to at least one inflow portion and at least one outflow portion of the digital model; and initializing and solving mathematical equations of blood flow through the digital model. 21. A method as in claim 20, further comprising storing quantities and parameters that characterize at least one of an anatomic state or a physiologic state of the digital model and the blood flow. 22. A method as in claim 21, further comprising perturbing at least one of an anatomic parameter or a physiologic parameter that characterizes the digital model. 23. A method as in claim 22, further comprising at least one of re-discretizing or re-solving the mathematical equations with the at least one anatomic parameter or physiologic parameter. 24. 
A method as in claim 23, further comprising storing quantities and parameters that characterize at least one of the anatomic state or the physiologic state of the perturbed model and blood flow. 25. A method as in claim 13, further comprising receiving one or more feature vectors with the production mode. 26. A method as in claim 25, further comprising using the production mode to apply the transformation function to the feature vectors. 27. A method as in claim 26, further comprising using the production mode to generate one or more quantities of interest. 28. A method as in claim 27, further comprising using the production mode to process the quantities of interest to provide data for use in at least one of evaluation, diagnosis, prognosis, treatment or treatment planning related to a heart in which the at least one heart valve, inflow tract or outflow tract resides.
Please help me write a proper abstract based on the patent claims.
A machine learning system for evaluating at least one characteristic of a heart valve, an inflow tract, an outflow tract or a combination thereof may include a training mode and a production mode. The training mode may be configured to train a computer and construct a transformation function to predict an unknown anatomical characteristic and/or an unknown physiological characteristic of a heart valve, inflow tract and/or outflow tract, using a known anatomical characteristic and/or a known physiological characteristic of the heart valve, inflow tract and/or outflow tract. The production mode may be configured to use the transformation function to predict the unknown anatomical characteristic and/or the unknown physiological characteristic of the heart valve, inflow tract and/or outflow tract, based on the known anatomical characteristic and/or the known physiological characteristic of the heart valve, inflow tract and/or outflow tract.
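The training-mode loop described in the claims (perturb the known characteristics, recompute an approximate flow, store feature/quantity pairs, generate a transformation function) can be sketched in miniature. Everything below is a hypothetical stand-in: `approximate_flow` replaces the full blood-flow computation of the claims with a toy formula, and the transformation function is reduced to a linear least-squares fit.

```python
import numpy as np

# Hypothetical stand-in for the flow solver: maps two illustrative
# characteristics (e.g. valve area, pressure gradient) to a quantity
# of interest. The formula is invented for this sketch only.
def approximate_flow(features):
    area, gradient = features
    return 0.9 * area * np.sqrt(gradient)

rng = np.random.default_rng(0)
base = np.array([2.0, 16.0])  # known characteristics (feature vector)

# Training mode: perturb, solve, and store (feature vector, quantity) pairs.
X, y = [], []
for _ in range(200):
    f = base * (1 + 0.1 * rng.standard_normal(2))
    X.append(f)
    y.append(approximate_flow(f))
X, y = np.array(X), np.array(y)

# Generate the transformation function: here a linear least-squares fit.
theta, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)

# Production mode: predict the quantity for a new feature vector.
pred = np.array([2.1, 15.0, 1.0]) @ theta
```

In the claimed system the solver would be a discretized model with boundary conditions, and the transformation function could be any learned regressor; the loop structure is the point of the sketch.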
1. A system, comprising: a memory; and one or more processors electronically coupled to the memory and configured to: access a training set stored in the memory, the training set having a plurality of personal data training records and a plurality of categories associated with each personal data training record, wherein the plurality of categories comprises an action taken by an individual corresponding to the associated personal data training record; determine a subset of the plurality of categories based on at least a first plurality of personal data training records in the training set; determine a prediction function that outputs an outcome score based on values of the subset of the plurality of categories; test the accuracy of the prediction function based at least on a second plurality of personal data training records in the training set different from the first plurality of personal data training records in the training set; access a data set stored in memory, the data set having a number of personal data records greater than the number of personal data training records in the training set, the data set having the plurality of categories associated with each personal data record; and process a subset of the personal data records from the data set based on the prediction function to determine an outcome score for each personal data record in the subset of personal data records. 2. The system of claim 1, wherein determining the subset of the plurality of categories comprises determining a weight for each category. 3. The system of claim 2, wherein determining a weight for each category comprises: initializing a coefficient; and computing a weight vector that optimizes a function based on the subset of the plurality of categories, the function including the coefficient multiplied by the sum of the absolute values of each element in the weight vector; wherein each element in the weight vector corresponds to a weight for a category. 4. 
The system of claim 3, wherein the one or more processors are further configured to iterate the steps of determining a prediction function and testing the accuracy of the prediction function using successively smaller coefficients until the difference between successive accuracies is less than a threshold. 5. The system of claim 3, wherein the size of the subset of the plurality of categories is related to the magnitude of the coefficient. 6. The system of claim 1, wherein the prediction function substantially satisfies the equation P = e^(θ^T x) / (1 + e^(θ^T x)); wherein P is the prediction function, wherein e is Euler's number, wherein θ is a column vector of parameters, and wherein x is a column vector of values corresponding to categories of a personal data record. 7. The system of claim 6, wherein determining the subset of the plurality of categories comprises determining a value for θ that substantially minimizes the equation Y(θ, α₁) = −Σ_{m ∈ M} [A_m log P_m(θ) + (1 − A_m) log(1 − P_m(θ))] − α₁(α₂‖θ‖₁ + ½(1 − α₂)‖θ‖₂²), wherein M is the first plurality of personal data training records in the training set, wherein A_m is the action taken by an individual corresponding to the personal data record m ∈ M, wherein P_m(θ) = e^(θ^T x_m) / (1 + e^(θ^T x_m)) is the outcome score for personal data record m, wherein α₁ is a coefficient, and wherein α₂ is a constant coefficient. 8. 
A computer implemented method, comprising: accessing a training set stored in memory, the training set having a plurality of personal data training records and a plurality of categories associated with each personal data training record, and the plurality of categories comprises an action taken by an individual corresponding to the associated personal data training record; determining a subset of the plurality of categories based on the action of at least a first personal data training record in the training set; determining a prediction function that outputs an outcome score based on the subset of the plurality of categories; testing the accuracy of the prediction function based on at least a second personal data training record in the training set different from the first personal data training record in the training set; accessing a data set stored in memory, the data set having a number of personal data records greater than the number of personal data training records in the training set, the data set having a plurality of categories associated with each personal data record; and processing a subset of the personal data records from the data set based on the subset of categories to determine an outcome score for each personal data record in the subset of personal data records. 9. The method of claim 8, further comprising: receiving an action taken by an individual corresponding to at least one personal data record in the subset of personal data records from the data set; replacing the outcome score for the at least one personal data record with the received outcome; and moving the at least one personal data record from the data set to the training set. 10. The method of claim 8, wherein determining the subset of the plurality of categories comprises determining a weight for each category. 11. 
The method of claim 10, wherein determining a weight for each category comprises: initializing a coefficient; and computing a weight vector that optimizes a function based on the subset of the plurality of categories, the function including the coefficient multiplied by the sum of the absolute values of each element in the weight vector; wherein each element in the weight vector corresponds to a weight for a category. 12. The method of claim 11, wherein the one or more processors are further configured to iterate the steps of determining a prediction function and testing the accuracy of the prediction function using successively smaller coefficients until the difference between successive accuracies is less than a threshold. 13. The method of claim 11, wherein the size of the subset of the plurality of categories is related to the magnitude of the coefficient. 14. The method of claim 8, wherein the prediction function substantially satisfies the equation P = e^(θ^T x) / (1 + e^(θ^T x)); wherein P is the prediction function, wherein e is Euler's number, wherein θ is a column vector of parameters, and wherein x is a column vector of values corresponding to categories of a personal data record. 15. 
A tangible computer-readable device having instructions stored thereon that, when executed by at least one computing device, causes the at least one computing device to perform operations comprising: accessing a training set stored in memory, the training set having a plurality of personal data training records and a plurality of categories associated with each personal data training record, and the plurality of categories comprises an action taken by an individual corresponding to the associated personal data training record; determining a subset of the plurality of categories based on the outcome of at least a first personal data training record in the training set; determining a prediction function based on the subset of the plurality of categories; testing the accuracy of the prediction function based on at least a second personal data training record in the training set different from the first personal data training record in the training set; accessing a data set stored in memory, the data set having a number of personal data records greater than the number of personal data training records in the training set, the data set having a plurality of categories associated with each personal data record; and processing a subset of the personal data records from the data set based on the subset of categories to determine an outcome score for each personal data record in the subset of personal data records. 16. The computer-readable device of claim 15, wherein the operation of determining the subset of the plurality of categories comprises determining a weight for each category. 17. 
The computer-readable device of claim 16, wherein the operation of determining a weight for each category comprises: initializing a coefficient; and computing a weight vector that optimizes a function based on the subset of the plurality of categories, the function including the coefficient multiplied by the sum of the absolute values of each element in the weight vector; wherein each element in the weight vector corresponds to a weight for a category. 18. The computer-readable device of claim 17, wherein the one or more processors are further configured to iterate the steps of determining a prediction function and testing the accuracy of the prediction function using successively smaller coefficients until the difference between successive accuracies is less than a threshold. 19. The computer-readable device of claim 17, wherein the size of the subset of the plurality of categories is related to the magnitude of the coefficient. 20. The computer-readable device of claim 15, wherein the prediction function substantially satisfies the equation P = e^(θ^T x) / (1 + e^(θ^T x)); wherein P is the prediction function, wherein e is Euler's number, wherein θ is a column vector of parameters, and wherein x is a column vector of values corresponding to categories of a personal data record.
Please help me write a proper abstract based on the patent claims.
Disclosed herein are system, method, and computer program product embodiments for performing a regression analysis on lawfully collected personal data records. The analysis enables discovery of individuals likely to perform certain actions based on their personal data records and the personal data records and actions of others. The disclosed system, method, and computer program product may process vast quantities of data, including personal data records with thousands of categories and lawfully stored databases with millions of personal data records. Through the regression analysis, the disclosed system, method, and computer program product learn the most relevant categories for predicting an individual's actions based on input data provided by a user. The analysis then analyzes the categories of personal data records stored in a lawfully stored database to predict actions of individuals associated with those records and outputs results to the user.
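The claims describe a logistic prediction function with an elastic-net style penalty: an L1 term whose coefficient controls how many categories survive (category selection) plus an L2 term. A rough numpy sketch of that family of objectives, written in the standard convention of adding the penalty to the negative log-likelihood and optimizing by plain (sub)gradient descent, might be:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_elastic_net_logistic(X, A, a1=0.1, a2=0.5, lr=0.1, steps=2000):
    """Fit theta for P = sigmoid(theta^T x) by minimizing the negative
    log-likelihood plus a1 * (a2*||theta||_1 + 0.5*(1-a2)*||theta||_2^2).
    X: records x categories; A: 0/1 actions taken by each individual."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        P = sigmoid(X @ theta)
        grad = X.T @ (P - A)                                   # data term
        grad += a1 * (a2 * np.sign(theta) + (1 - a2) * theta)  # penalty term
        theta -= lr * grad / len(A)
    return theta

# Toy data: the action correlates with the first category only; the L1
# term drives the irrelevant weights toward zero.
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 5))
A = (sigmoid(2.0 * X[:, 0]) > rng.random(300)).astype(float)
theta = fit_elastic_net_logistic(X, A)
scores = sigmoid(X @ theta)  # outcome score per personal data record
```

A production implementation would use a proximal method or an off-the-shelf solver rather than subgradient descent, and would sweep the L1 coefficient downward while testing held-out accuracy, as the iteration claims describe.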
1. A method for determining complex interactions among system inputs, comprising: using semi-Restricted Boltzmann Machines (RBMs) with factorized gated interactions of different orders to model complex interactions among system inputs, applying semi-RBMs to train a deep neural network with high-order within-layer interactions for learning a distance metric and a feature mapping; and tuning the deep neural network by minimizing margin violations between positive query document pairs and corresponding negative pairs. 2. The method of claim 1, comprising identifying complex nonlinear system input interactions for data denoising and data visualization. 3. The method of claim 1, wherein the semi-RBMs have gated interactions with a combination of orders ranging from 1 to m to approximate an arbitrary-order combinatorial input feature interactions in words and in Transcription Factors (TFs). 4. The method of claim 1, wherein hidden units of the semi-RBMs act as binary switches controlling interactions between input features. 5. The method of claim 1, comprising using factorization to reduce the number of parameters. The method of claim 1, comprising sampling from the semi-RBMs by using either fast deterministic damped mean-field updates or prolonged Gibbs sampling. 6. The method of claim 1, wherein parameters of semi-RBMs are learned using Contrastive Divergence. 7. The method of claim 1, wherein after a semi-RBM is learned, comprising treating inferred hidden activities of input data as new data to learn another semi-RBM and forming a deep belief net with gated high order interactions. 8. The method of claim 1, wherein with pairs of discrete representations of a query and a document, using semi-RBMs with gated arbitrary-order interactions to pre-train a deep neural network and generating a similarity score between a query and a document, in which a penultimate layer corresponds to a non-linear feature embedding of the original system input features. 9. 
The method of claim 8, further comprising using back-propagation to fine-tune parameters of the deep gated high-order neural network so that positive query-document pairs always have larger similarity scores than negative pairs, based on margin maximization. 10. The method of claim 1, comprising modeling complex interactions between different words in documents and queries and predicting the bindings of TFs given some other TFs for understanding deep semantic information for information retrieval and TF binding redundancy and TF interactions for gene regulation. 11. The method of claim 1, comprising applying high-order semi-RBMs for modeling feature interactions including word interactions in documents or protein interactions in biology. 12. The method of claim 1, wherein the deep neural network has multiple layers. 13. The method of claim 1, comprising providing a given discretized query and document representation as input to a non-linear SSI system, and applying the semi-RBMs to pre-train the SSI system. 14. The method of claim 13, comprising fine-tuning the non-linear SSI system using back-propagation to minimize a margin-based rank loss. 15. The method of claim 13, wherein the discrete document representation includes a Bag-of-Words representation or a discretized term frequency-inverse document frequency (TF-IDF) representation. 16. The method of claim 1, comprising training by minimizing a margin ranking loss on a tuple (q, d+, d−): Σ_{(q, d+, d−)} max(0, 1 − f(q, d+) + f(q, d−)), where q is the query, d+ is a relevant document, d− is an irrelevant document, and f(·,·) is a similarity score.
Please help me write a proper abstract based on the patent claims.
Systems and method are disclosed for determining complex interactions among system inputs by using semi-Restricted Boltzmann Machines (RBMs) with factorized gated interactions of different orders to model complex interactions among system inputs; applying semi-RBMs to train a deep neural network with high-order within-layer interactions for learning a distance metric and a feature mapping; and tuning the deep neural network by minimizing margin violations between positive query document pairs and corresponding negative pairs.
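The margin ranking loss quoted in the claims, Σ max(0, 1 − f(q, d+) + f(q, d−)), reduces to a few lines of numpy. A minimal sketch, with `f_pos`/`f_neg` standing in for similarity scores that the network would produce for positive and negative query-document pairs:

```python
import numpy as np

def margin_rank_loss(f_pos, f_neg, margin=1.0):
    """Sum over (q, d+, d-) triples of max(0, margin - f(q,d+) + f(q,d-)).
    A triple contributes zero once the relevant document outscores the
    irrelevant one by at least the margin."""
    return np.maximum(0.0, margin - f_pos + f_neg).sum()

f_pos = np.array([0.9, 0.4, 2.0])  # scores for relevant documents
f_neg = np.array([0.2, 0.7, 0.5])  # scores for irrelevant documents
loss = margin_rank_loss(f_pos, f_neg)
```

Only the second triple is badly ordered (0.4 < 0.7), and only the third clears the margin entirely; back-propagating this loss is what pushes positive pairs above negatives.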
1. A method comprising: constructing a model of a stochastic decision process that receives delayed observations at run time, wherein the stochastic decision process is executed by an agent; finding an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon; bounding an error of the agent policy according to an observation delay of the received delayed observations; and offering a reward to the agent using the agent policy having the error bounded according to the observation delay of the received delayed observations. 2. The method of claim 1, wherein finding the agent policy comprises: updating an agent belief state upon receiving each of the delayed observations; and determining a next agent action according to the expected total reward of a remaining decision epoch given an updated agent belief state. 3. The method of claim 2, wherein the agent belief state is updated using the delayed observation, a history of observations at runtime and a history of agent actions at runtime. 4. The method of claim 2, wherein the agent executes the next agent action in a next decision epoch. 5. The method of claim 1, further comprising: storing a history of observations at runtime; storing a history of agent actions at runtime; and recalling the history of observations at runtime and the history of agent actions at runtime to find the agent policy. 6. The method of claim 1, wherein the expected total reward comprises all rewards that the agent receives when a given agent action is executed in a current agent belief state. 7. The method of claim 1, wherein the observation delay of the received delayed observations is a maximum observation delay among the received delayed observations that is considered by the model. 8. 
A computer program product for planning in uncertain environments, the computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the processor to perform a method comprising: receiving a model of a stochastic decision process that receives delayed observations at run time, wherein the stochastic decision process is executed by an agent; finding an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon; and bounding an error of the agent policy according to an observation delay of the received delayed observations. 9. The computer program product of claim 8, wherein finding the agent policy comprises: updating an agent belief state upon receiving each of the delayed observations; and determining a next agent action according to the expected total reward of a remaining decision epoch given an updated agent belief state. 10. The computer program product of claim 9, wherein the agent belief state is updated using the delayed observation, a history of observations at runtime and a history of agent actions at runtime. 11. The computer program product of claim 8, further comprising: storing a history of observations at runtime; storing a history of agent actions at runtime; and recalling the history of observations at runtime and the history of agent actions at runtime to find the agent policy. 12. The computer program product of claim 8, wherein the expected total reward comprises all rewards that the agent receives when a given agent action is executed in a current agent belief state. 13. The computer program product of claim 8, wherein the observation delay of the received delayed observations is a maximum observation delay among the received delayed observations that is considered by the model. 14. 
A decision engine configured to execute a stochastic decision process receiving delayed observations using an agent policy, comprising: a computer program product comprising a computer readable storage medium having program instructions embodied therewith, the program instructions executable by a processor to cause the decision engine to: receive a model of the stochastic decision process that receives a plurality of delayed observations at run time, wherein the stochastic decision process is executed by an agent; find an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon; and bound an error of the agent policy according to an observation delay of the received delayed observations. 15. The decision engine of claim 14, wherein the agent policy comprises: an agent belief state updated upon receiving each of the delayed observations; and a next agent action extracted according to the expected total reward of a remaining decision epoch given the agent belief state. 16. The decision engine of claim 15, wherein the agent belief state is updated using the delayed observation, a history of observations at runtime and a history of agent actions at runtime. 17. The decision engine of claim 14, wherein the program instructions are executable by the processor to cause the decision engine to: store a history of observations at runtime; store a history of agent actions at runtime; and recall the history of observations at runtime and the history of agent actions at runtime to find the agent policy. 18. The decision engine of claim 14, wherein the expected total reward comprises all rewards that the agent receives when a given agent action is executed in a current agent belief state. 19. 
The decision engine of claim 14, wherein the observation delay of the received delayed observations is a maximum observation delay among the received delayed observations that is considered by the model.
Please help me write a proper abstract based on the patent claims.
A method for determining a policy that considers observations delayed at runtime is disclosed. The method includes constructing a model of a stochastic decision process that receives delayed observations at run time, wherein the stochastic decision process is executed by an agent, finding an agent policy according to a measure of an expected total reward of a plurality of agent actions within the stochastic decision process over a given time horizon, and bounding an error of the agent policy according to an observation delay of the received delayed observations.
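The belief-update step in the claims (incorporate a delayed observation using the histories of observations and actions) can be illustrated on a toy two-state model: condition the belief from the epoch the observation refers to, then re-propagate it to the present through the transition model. The matrices below are invented for illustration only, and a single transition matrix stands in for the action history.

```python
import numpy as np

# Tiny two-state example (not from the claims).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # T[s, s'] = P(s' | s) under the chosen action
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])   # O[s, z] = P(z | s)

def update_delayed(belief_hist, z, delay):
    """Incorporate an observation z referring to `delay` epochs ago:
    Bayes-condition the stored belief from that epoch on z, then roll
    it forward through the transition model to the current epoch."""
    b = belief_hist[-1 - delay].copy()
    b *= O[:, z]          # condition on the late observation
    b /= b.sum()
    for _ in range(delay):  # re-propagate to the present
        b = b @ T
    return b

belief_hist = [np.array([0.5, 0.5])]
for _ in range(2):          # two decision epochs pass with no observation
    belief_hist.append(belief_hist[-1] @ T)
b = update_delayed(belief_hist, z=0, delay=2)
```

Since z=0 is more likely in state 0, the updated belief puts more mass on state 0 than the observation-free belief did; this is the sense in which the stored histories let the agent recover from delay.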
1. A method for providing electronic assistance to a user, the method comprising: providing a virtual assistant platform configured to share data across a plurality of virtual assistants; activating a first agent in one of the virtual assistants, the first agent located on a device client installed on a device of the user, and the first agent being configured to perform one or more tasks; and activating a second agent in one of the virtual assistants, the second agent located on the device of the user or on another device and facilitating communication with the first agent. 2. The method of claim 1, further comprising configuring the first agent and second agent to access one or more shared data stores, the shared data stores providing the virtual assistants with shared capabilities. 3. The method of claim 2, wherein one or more of the shared data stores comprises a world ontology understood by all of the virtual assistants. 4. The method of claim 1, wherein the first agent comprises a main agent configured to manage tasks of one or more other agents on the device, the method further comprising activating at least one of the other agents on the device. 5. The method of claim 4, wherein at least one of the other agents on the device is an adapter agent configured to communicate with an object. 6. The method of claim 4, further comprising providing an agent bus configured to deliver only communications between the main agent and the other agents on the device. 7. The method of claim 1, further comprising providing on the virtual assistant platform an agent store from which the user may obtain at least one additional agent. 8. The method of claim 7, further comprising registering each of the additional agents for use on the user's device. 9. 
A virtual assistant platform (VAP) operating on one or more computer servers and on one or more devices, the VAP comprising: a plurality of virtual assistants, each of the virtual assistants comprising at least one agent; and one or more shared data stores accessible by each of the virtual assistants, the shared data stores providing the virtual assistants with shared capabilities. 10. The VAP of claim 9, wherein one or more of the shared data stores comprises a world ontology understood by all of the virtual assistants. 11. The VAP of claim 10, wherein the world ontology is included in an ontology hierarchy that further includes one or more domain ontologies within one or more of the data stores. 12. The VAP of claim 9, further comprising a group virtual assistant to which one or more of the virtual assistants subscribes, the group virtual assistant being configured to distribute information to each of the subscribing virtual assistants according to a status of the virtual assistant. 13. The VAP of claim 9, wherein one of the virtual assistants is an administrator virtual assistant configured to communicate with all of the other virtual assistants. 14. The VAP of claim 13, further comprising a virtual assistant bus configured to deliver only communications between the virtual assistants. 15. The VAP of claim 9, further comprising a device client installed on each of the devices on which the VAP operates, the device client modifying operations of the device so that one or more of the virtual assistants operate on the device. 16. The VAP of claim 15, wherein in each device, the virtual assistant operating on the device includes a main agent and a plurality of other agents, wherein the main agent communicates with the other agents and each of the other agents performs one or more tasks. 17. 
The VAP of claim 16, further comprising an agent bus on each device, the agent bus configured to deliver only communications between the main agent and the other agents on the device. 18. The VAP of claim 16, further comprising a device bus configured to deliver only communications between main agents of the devices on which one of the virtual assistants is operating. 19. The VAP of claim 9, further comprising an execution environment including a plurality of VAP-implementation services for configuring one or more of the VAs and one or more of the agents. 20. The VAP of claim 19, wherein the execution environment further includes an application programming interface for agents to access the VAP-implementation services.
Please help me write a proper abstract based on the patent claims.
A virtual assistant platform (“VAP”) provides a self-supporting and expandable architectural framework for virtual assistants (“VAs”) to communicate with a user via an electronic device. VAs may communicate with other devices, software programs, and other VAs. VAs may include intelligent agents configured to perform particular tasks. The VAP may include an execution environment that provides an interface between the VA and the electronic device and a framework of services for the intelligent agents. A VA may participate in or coordinate a group of VAs in which knowledge and tasks can be shared and cooperatively executed. The execution environment may include an agent store for registering agents for use on the VAP, storing agent code and data, and distributing agents to requesting users. Through the agent store, new VAs and agents may be distributed to users to expand their use of the VAP.
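The per-device architecture in claims 16–17 — a main agent exchanging messages with task agents over an agent bus that carries only main-agent traffic — can be sketched as follows. This is an illustrative sketch, not from the patent text; the class and method names (`AgentBus`, `register`, `send`) and the weather agent are hypothetical.

```python
# Hypothetical sketch of the per-device agent bus from claims 16-17:
# the main agent communicates with the other agents, and the bus
# delivers only communications between the main agent and those agents.

class AgentBus:
    def __init__(self, main_agent):
        self.main_agent = main_agent
        self.handlers = {}  # agent name -> message handler

    def register(self, name, handler):
        self.handlers[name] = handler

    def send(self, sender, recipient, message):
        # Claim 17: the bus carries only main-agent <-> agent traffic,
        # so direct agent-to-agent delivery is refused.
        if self.main_agent not in (sender, recipient):
            raise ValueError("bus only carries main-agent traffic")
        return self.handlers[recipient](message)

bus = AgentBus(main_agent="main")
bus.register("weather", lambda msg: f"forecast for {msg}")
print(bus.send("main", "weather", "Friday"))
```

A device bus between main agents of different devices (claim 18) could follow the same pattern one level up.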
1. An apparatus for estimating water demand, the apparatus comprising: a water demand estimation setting unit configured to collect user input data; a control unit configured to collect the record data and the user input data from the SCADA system, set an upper limit value and a lower limit value among the data collected, compare a value with the upper limit value and the lower limit value to extract a value (normal data) present within the limit value range, set the extracted normal data as water demand estimation data, perform a learning process on each of a plurality of algorithm combination groups including at least one algorithm to select any one algorithm combination group, and input the record data and the user input data to the selected algorithm combination group to calculate water demand estimation data; a storage unit configured to store the record data collected from the SCADA system, store the user input data, and store the water demand estimation data; and a water demand estimation output unit configured to output the calculated water demand estimation data. 2. 
The apparatus of claim 1, wherein the control unit includes: a record data collecting unit configured to collect the record data from the SCADA system; a record data processing unit configured to set an upper limit value and a lower limit value among the data collected, compare a value with the upper limit value and the lower limit value to extract a value (normal data) present within the limit value range, and set the extracted normal data as water demand estimation data by the record data collecting unit; an algorithm combining unit configured to generate a plurality of algorithm combination groups including at least one algorithm and select any one algorithm combination group among the generated algorithm combination groups; and a water demand estimation result calculating unit configured to calculate water demand estimation data by inputting the record data and the user data to the selected algorithm combination group. 3. The apparatus of claim 2, wherein the algorithm combining unit sets the number of algorithms to be included in at least one algorithm combination group, performs a learning process on each of the algorithm combination groups including algorithms combined according to the number, and extracts any one algorithm combination group. 4. The apparatus of claim 3, wherein the algorithm combining unit gives a weighted value of each algorithm according to the learning process performed on each of the combined algorithm groups, and extracts an algorithm combination having the uppermost weight value or an algorithm combination having the smallest error value with respect to reference estimation result data. 5. The apparatus of claim 2, wherein the control unit further includes a calculation result verifying unit configured to verify the result calculated by the water demand estimation result calculating unit. 6. 
The apparatus of claim 2, wherein the control unit further includes: an error compensating unit configured to give a weighted value to water demand estimation data within a threshold range period from the current time, among the water demand estimation data calculated by the water demand estimation result calculating unit, and compensate an error with respect to hourly estimation result data. 7. The apparatus of claim 1, wherein the storage unit includes: a record storage unit configured to store record data collected from the SCADA server; a setting storage unit configured to store user input data; and an estimation data storage unit configured to store calculated water demand estimation data under the control of the control unit. 8. The apparatus of claim 7, wherein the record storage unit collects record data periodically from the SCADA server or outputs data collected in real time to the control unit periodically.
Please help me write a proper abstract based on the patent claims.
There is provided an apparatus for forecasting water demand of a water system using an automation system. The apparatus for estimating water demand includes a water demand estimation setting unit configured to collect user input data, a control unit configured to collect the record data and the user input data from the SCADA system, perform a learning process on each of a plurality of algorithm combination groups including at least one algorithm to select any one algorithm combination group, and input the record data and the user input data to the selected algorithm combination group to calculate water demand estimation data, a storage unit configured to store the record data collected from the SCADA system, store the user input data, and store the water demand estimation data, and a water demand estimation output unit configured to output the calculated water demand estimation data.
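The limit-range filtering step in claim 1 — keeping only values between the configured upper and lower limits as "normal data" for estimation — can be sketched as below. This is an illustrative sketch; the function name and the sample flow values are hypothetical, not from the patent.

```python
# Sketch of the normal-data extraction from claim 1: compare each
# collected SCADA value with the upper and lower limit values and
# keep only values inside the limit range as water demand
# estimation data.

def extract_normal_data(records, lower, upper):
    """Return only the values present within the limit value range."""
    return [v for v in records if lower <= v <= upper]

# hypothetical hourly flow readings; the spike and the negative
# reading fall outside the limits and are discarded
records = [120.0, 5000.0, 135.5, -3.0, 142.1]
normal = extract_normal_data(records, lower=50.0, upper=500.0)
print(normal)  # [120.0, 135.5, 142.1]
```

The selected algorithm combination group (claims 3–4) would then be trained on this filtered series rather than the raw SCADA feed.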
1. A method, in a data processing system, for tailoring question answering system output based on user expertise, the method comprising: receiving, by the data processing system, an input question from a questioning user; determining, by the data processing system, a set of features associated with text of the input question; determining, by the data processing system, an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model; generating, by the data processing system, one or more candidate answers for the input question; and tailoring, by the data processing system, output of the one or more candidate answers based on the expertise level of the questioning user. 2. The method of claim 1, wherein determining the set of features associated with the text of the input question comprises: extracting a plurality of features from the text of the input question using an annotation engine pipeline in the data processing system. 3. The method of claim 2, wherein the plurality of features comprises at least one of content words formed into unigram/ngram lexical features, social hedges, specificity of words, specific experience level indicators, or references to external expertise. 4. The method of claim 2, wherein determining the set of features associated with the text of the input question further comprises: obtaining features from the questioning user's posting history within a collection of question and answer postings. 5. The method of claim 4, wherein determining the set of features associated with the text of the input question further comprises: obtaining features from responses by other users within the collection of question and answer postings. 6. 
The method of claim 1, wherein the trained expertise model comprises a question partition trained using questions in a collection of question and answer postings and an answer partition trained using answers in the collection of question and answer postings and wherein determining the expertise level of the questioning user comprises determining the expertise level of the questioning user using the question partition of the trained expertise model. 7. The method of claim 1, wherein generating the one or more candidate answers for the input question comprises generating the one or more candidate answers from a collection of question and answer postings. 8. The method of claim 7, wherein tailoring output of the one or more candidate answers comprises: determining an expertise level of a contributing user providing evidence for a given candidate answer, comprising: obtaining features from the contributing user's posting history within the collection of question and answer postings; and obtaining features from responses by other users within the collection of question and answer postings. 9. The method of claim 1, wherein tailoring output of the one or more candidate answers comprises: determining an expertise level of each of the one or more candidate answers using the trained expertise model; and ranking the one or more candidate answers based on the expertise levels of the one or more candidate answers. 10. The method of claim 1, wherein tailoring output of the one or more candidate answers comprises: selecting only candidate answers that have a high confidence score and match the expertise level of the questioning user. 11. 
The method of claim 1, wherein training the trained expertise model comprises: harvesting a collection of question and answer postings; labeling questions and answers in the collection with predetermined expertise levels; determining a set of features associated with text of each question and answer; and training a machine learning model based on the predetermined expertise levels and the sets of features associated with the text of the questions and answers to form the trained expertise model. 12. The method of claim 11, wherein determining the set of features associated with text of a given question or answer comprises: extracting a plurality of features from the text of the given question or answer using an annotation engine pipeline; obtaining features from posting history of a contributing user associated with the given question or answer; and obtaining features from responses by other users within the collection of question and answer postings. 13. A computer program product comprising a computer readable storage medium having a computer readable program stored therein, wherein the computer readable program, when executed on a computing device, causes the computing device to: receive an input question from a questioning user; determine a set of features associated with text of the input question; determine an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model; generate one or more candidate answers for the input question; and tailor output of the one or more candidate answers based on the expertise level of the questioning user. 14. The computer program product of claim 13, wherein determining the set of features associated with the text of the input question comprises: extracting a plurality of features from the text of the input question using an annotation engine pipeline. 15. 
The computer program product of claim 13, wherein tailoring output of the one or more candidate answers comprises: determining an expertise level of each of the one or more candidate answers using the trained expertise model; and ranking the one or more candidate answers based on the expertise levels of the one or more candidate answers. 16. The computer program product of claim 13, wherein tailoring output of the one or more candidate answers comprises: selecting only candidate answers that have a high confidence score and match the expertise level of the questioning user. 17. The computer program product of claim 13, wherein training the trained expertise model comprises: harvesting a collection of question and answer postings; labeling questions and answers in the collection with predetermined expertise levels; determining a set of features associated with text of each question and answer; and training a machine learning model based on the predetermined expertise levels and the sets of features associated with the text of the questions and answers to form the trained expertise model. 18. An apparatus comprising: a processor; and a memory coupled to the processor, wherein the memory comprises instructions which, when executed by the processor, cause the processor to: receive an input question from a questioning user; determine a set of features associated with text of the input question; determine an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model; generate one or more candidate answers for the input question; and tailor output of the one or more candidate answers based on the expertise level of the questioning user. 19. 
The apparatus of claim 18, wherein tailoring output of the one or more candidate answers comprises: determining an expertise level of each of the one or more candidate answers using the trained expertise model; and ranking the one or more candidate answers based on the expertise levels of the one or more candidate answers. 20. The apparatus of claim 18, wherein training the trained expertise model comprises: harvesting a collection of question and answer postings; labeling questions and answers in the collection with predetermined expertise levels; determining a set of features associated with text of each question and answer; and training a machine learning model based on the predetermined expertise levels and the sets of features associated with the text of the questions and answers to form the trained expertise model.
Please help me write a proper abstract based on the patent claims.
A mechanism is provided in a data processing system for tailoring question answering system output based on user expertise. The mechanism receives an input question from a questioning user and determines a set of features associated with text of the input question. The mechanism determines an expertise level of the questioning user based on the set of features associated with the text of the input question using a trained expertise model. The mechanism generates one or more candidate answers for the input question and tailors output of the one or more candidate answers based on the expertise level of the questioning user.
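The tailoring step in claims 10 and 16 — keeping only candidate answers with a high confidence score that match the questioning user's expertise level — can be sketched as follows. This is an illustrative sketch; the dictionary fields, threshold value, and sample candidates are hypothetical.

```python
# Sketch of claim 10/16: select only candidate answers that have a
# high confidence score and match the expertise level of the
# questioning user.

def tailor_answers(candidates, user_level, min_confidence=0.8):
    """Filter candidates by confidence and expertise-level match."""
    return [c for c in candidates
            if c["confidence"] >= min_confidence
            and c["expertise_level"] == user_level]

candidates = [
    {"text": "a", "confidence": 0.90, "expertise_level": "novice"},
    {"text": "b", "confidence": 0.95, "expertise_level": "expert"},
    {"text": "c", "confidence": 0.50, "expertise_level": "novice"},
]
print(tailor_answers(candidates, "novice"))
```

Ranking by per-answer expertise level (claim 9) would replace the equality match with a sort key over the same fields.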
1. A computer-implemented system for cooperatively evolving a deep neural network structure that solves a provided problem when trained on a source of training data containing labeled examples of data sets for the problem, the system comprising: a memory storing a candidate supermodule genome database having a pool of candidate supermodules, each of the candidate supermodules identifying respective values for a plurality of supermodule hyperparameters of the supermodule, the supermodule hyperparameters including supermodule global topology hyperparameters identifying a plurality of modules in the candidate supermodule and module interconnects among the modules in the candidate supermodule, at least one of the modules in each candidate supermodule including a neural network, each candidate supermodule having associated therewith storage for an indication of a respective supermodule fitness value; the memory further storing a blueprint genome database having a pool of candidate blueprints for solving the provided problem, each of the candidate blueprints identifying respective values for a plurality of blueprint topology hyperparameters of the blueprint, the blueprint topology hyperparameters including a number of included supermodules, and interconnects among the included supermodules, each candidate blueprint having associated therewith storage for an indication of a respective blueprint fitness value; an instantiation module which instantiates each of at least a training subset of the blueprints in the pool of candidate blueprints, at least one of the blueprints being instantiated more than once, each instantiation of a candidate blueprint including identifying for the instantiation a supermodule from the pool of candidate supermodules for each of the supermodules identified in the blueprint; a training module which trains neural networks on training data from the source of training data, the neural networks are modules which are identified by supermodules in each 
of the blueprint instantiations, the training includes modifying submodules of the neural network modules in dependence upon back-propagation algorithms; an evaluation module which, for each given one of the blueprints in the training subset of blueprints: evaluates each instantiation of the given blueprint on validation data, to develop a blueprint instantiation fitness value associated with each of the blueprint instantiations, updates fitness values of all supermodules identified for inclusion in each instantiation of the given blueprint in dependence upon the fitness value of the blueprint instantiation, and updates a blueprint fitness value for the given blueprint in dependence upon the fitness values for the instantiations of the blueprint; a competition module which: selects blueprints for discarding from the pool of candidate blueprints in dependence upon their updated fitness values, and selects supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values; a procreation module which: forms new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules, and forms new blueprints in dependence upon a respective set of at least one parent blueprint from the pool of candidate blueprints; and a solution harvesting module providing for deployment a selected one of the blueprints remaining in the candidate blueprint pool, instantiated with supermodules selected from the candidate supermodule pool. 2. 
The system of claim 1, wherein each supermodule in the pool of candidate supermodules further belongs to a subpopulation of the supermodules, wherein the blueprint topology hyperparameters of blueprints in the pool of candidate blueprints also identify a supermodule subpopulation for each included supermodule, wherein the instantiation module selects, for each supermodule identified in the blueprint, a supermodule from the subpopulation of supermodules which is identified by the blueprint, wherein the competition module, in selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values, does so in further dependence upon the subpopulation to which the supermodules belong, wherein the procreation module, in forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules, forms the new supermodules only in dependence upon parent supermodules which belong to the same subpopulation, and wherein the system is further configured to comprise a re-speciation module which re-speciates the supermodules in the pool of candidate supermodules into updated subpopulations. 3. The system of claim 2, wherein the competition module selects supermodules for discarding from the subpopulation with a same subpopulation identifier (ID). 4. The system of claim 1, further configured to comprise a control module which repeatedly invokes, for each of a plurality of generations, the training module, the evaluation module, the competition module, and the procreation module. 5. The system of claim 1, wherein the validation data is data previously unseen during training of a particular supermodule. 6. 
The system of claim 1, wherein a particular supermodule is identified in a plurality of blueprint instantiations, and wherein the evaluation module updates a supermodule fitness value associated with the particular supermodule in dependence of respective blueprint instantiation fitness values associated with each of the blueprint instantiations in the plurality. 7. The system of claim 6, wherein the supermodule fitness value is an average of the respective blueprint instantiation fitness values. 8. The system of claim 1, wherein the evaluation module assigns a supermodule fitness value to a particular supermodule if the supermodule fitness value is previously undetermined. 9. The system of claim 1, wherein the evaluation module, for a particular supermodule, merges a current supermodule fitness value with a previously determined supermodule fitness. 10. The system of claim 9, wherein the merging includes averaging. 11. The system of claim 1, wherein the evaluation module updates the blueprint fitness value for the given blueprint by averaging the fitness values for the instantiations of the blueprint. 12. The system of claim 1, wherein the supermodule hyperparameters further comprise module topology hyperparameters that identify a plurality of submodules of the neural network and interconnections among the submodules. 13. The system of claim 12, wherein crossover and mutation of the module topology hyperparameters during procreation includes modifying a number of submodules and/or interconnections among them. 14. 
A computer-implemented method of cooperatively evolving a deep neural network structure that solves a provided problem when trained on a source of training data containing labeled examples of data sets for the problem, the method including: storing a candidate supermodule genome database having a pool of candidate supermodules, each of the candidate supermodules identifying respective values for a plurality of supermodule hyperparameters of the supermodule, the supermodule hyperparameters including supermodule global topology hyperparameters identifying a plurality of modules in the candidate supermodule and module interconnects among the modules in the candidate supermodule, at least one of the modules in each candidate supermodule including a neural network, each candidate supermodule having associated therewith storage for an indication of a respective supermodule fitness value; storing a blueprint genome database having a pool of candidate blueprints for solving the provided problem, each of the candidate blueprints identifying respective values for a plurality of blueprint topology hyperparameters of the blueprint, the blueprint topology hyperparameters including a number of included supermodules, and interconnects among the included supermodules, each candidate blueprint having associated therewith storage for an indication of a respective blueprint fitness value; instantiating each of at least a training subset of the blueprints in the pool of candidate blueprints, at least one of the blueprints being instantiated more than once, each instantiation of a candidate blueprint including identifying for the instantiation a supermodule from the pool of candidate supermodules for each of the supermodules identified in the blueprint; training neural networks on training data from the source of training data, the neural networks are modules which are identified by supermodules in each of the blueprint instantiations, the training further includes modifying submodules 
of the neural networks in dependence upon back-propagation algorithms; for each given one of the blueprints in the training subset of blueprints, evaluating each instantiation of the given blueprint on validation data, to develop a blueprint instantiation fitness value associated with each of the blueprint instantiations, updating fitness values of all supermodules identified for inclusion in each instantiation of the given blueprint in dependence upon the fitness value of the blueprint instantiation, and updating a blueprint fitness value for the given blueprint in dependence upon the fitness values for the instantiations of the blueprint; selecting blueprints for discarding from the pool of candidate blueprints in dependence upon their updated fitness values; selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values; forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules; forming new blueprints in dependence upon a respective set of at least one parent blueprint from the pool of candidate blueprints; and deploying a selected one of the blueprints remaining in the candidate blueprint pool, instantiated with supermodules selected from the candidate supermodule pool. 15. 
The computer-implemented method of claim 14, wherein each supermodule in the pool of candidate supermodules further belongs to a subpopulation of the supermodules, wherein the blueprint topology hyperparameters of blueprints in the pool of candidate blueprints also identify a supermodule subpopulation for each included supermodule, wherein the instantiating includes selecting, for each supermodule identified in the blueprint, a supermodule from the subpopulation of supermodules which is identified by the blueprint, wherein, in selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values, doing so in further dependence upon the subpopulation to which the supermodules belong, wherein, in forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules, forming the new supermodules only in dependence upon parent supermodules which belong to the same subpopulation, and wherein re-speciating the supermodules in the pool of candidate supermodules into updated subpopulations. 16. The computer-implemented method of claim 14, further including repeatedly performing, for each of a plurality of generations, the training, the evaluating, the updating, the selecting, the forming, and the deploying. 17. The computer-implemented method of claim 14, wherein a particular supermodule is identified in a plurality of blueprint instantiations, and further including updating a supermodule fitness value associated with the particular supermodule in dependence of respective blueprint instantiation fitness values associated with each of the blueprint instantiations in the plurality. 18. 
A non-transitory computer readable storage medium impressed with computer program instructions to cooperatively evolve a deep neural network structure that solves a provided problem when trained on a source of training data containing labeled examples of data sets for the problem, the instructions, when executed on a processor, implement a method comprising: storing a candidate supermodule genome database having a pool of candidate supermodules, each of the candidate supermodules identifying respective values for a plurality of supermodule hyperparameters of the supermodule, the supermodule hyperparameters including supermodule global topology hyperparameters identifying a plurality of modules in the candidate supermodule and module interconnects among the modules in the candidate supermodule, at least one of the modules in each candidate supermodule including a neural network, each candidate supermodule having associated therewith storage for an indication of a respective supermodule fitness value; storing a blueprint genome database having a pool of candidate blueprints for solving the provided problem, each of the candidate blueprints identifying respective values for a plurality of blueprint topology hyperparameters of the blueprint, the blueprint topology hyperparameters including a number of included supermodules, and interconnects among the included supermodules, each candidate blueprint having associated therewith storage for an indication of a respective blueprint fitness value; instantiating each of at least a training subset of the blueprints in the pool of candidate blueprints, at least one of the blueprints being instantiated more than once, each instantiation of a candidate blueprint including identifying for the instantiation a supermodule from the pool of candidate supermodules for each of the supermodules identified in the blueprint; training neural networks on training data from the source of training data, the neural networks are modules which 
are identified by supermodules in each of the blueprint instantiations, the training further includes modifying submodules of the neural networks in dependence upon back-propagation algorithms; for each given one of the blueprints in the training subset of blueprints, evaluating each instantiation of the given blueprint on validation data, to develop a blueprint instantiation fitness value associated with each of the blueprint instantiations, updating fitness values of all supermodules identified for inclusion in each instantiation of the given blueprint in dependence upon the fitness value of the blueprint instantiation, and updating a blueprint fitness value for the given blueprint in dependence upon the fitness values for the instantiations of the blueprint; selecting blueprints for discarding from the pool of candidate blueprints in dependence upon their updated fitness values; selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values; forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules; forming new blueprints in dependence upon a respective set of at least one parent blueprint from the pool of candidate blueprints; and deploying a selected one of the blueprints remaining in the candidate blueprint pool, instantiated with supermodules selected from the candidate supermodule pool. 19. 
The non-transitory computer readable storage medium of claim 18, wherein each supermodule in the pool of candidate supermodules further belongs to a subpopulation of the supermodules, wherein the blueprint topology hyperparameters of blueprints in the pool of candidate blueprints also identify a supermodule subpopulation for each included supermodule, wherein the instantiating includes selecting, for each supermodule identified in the blueprint, a supermodule from the subpopulation of supermodules which is identified by the blueprint, wherein, in selecting supermodules from the candidate supermodule pool for discarding in dependence upon their updated fitness values, doing so in further dependence upon the subpopulation to which the supermodules belong, wherein, in forming new supermodules in dependence upon a respective set of at least one parent supermodule from the pool of candidate supermodules, forming the new supermodules only in dependence upon parent supermodules which belong to the same subpopulation, and wherein re-speciating the supermodules in the pool of candidate supermodules into updated subpopulations. 20. The non-transitory computer readable storage medium of claim 18, implementing the method further comprising repeatedly performing, for each of a plurality of generations, the training, the evaluating, the updating, the selecting, the forming, and the deploying.
Please help me write a proper abstract based on the patent claims.
The technology disclosed relates to evolving a deep neural network based solution to a provided problem. In particular, it relates to providing an improved cooperative evolution technique for deep neural network structures. It includes creating blueprint structures that include a plurality of supermodule structures. The supermodule structures include a plurality of modules. The modules are neural networks. A first loop of evolution executes at the blueprint level. A second loop of evolution executes at the supermodule level. Further, multiple mini-loops of evolution execute at each of the subpopulations of the supermodules. The first loop, the second loop, and the mini-loops execute in parallel.
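The fitness-sharing step in claims 6–7 — where a supermodule appearing in several blueprint instantiations receives the average of those instantiations' fitness values — can be sketched as below. This is an illustrative sketch under simplified assumptions; the data layout and identifiers are hypothetical, not from the patent.

```python
# Sketch of claims 6-7: each supermodule's fitness is updated in
# dependence on the fitness values of every blueprint instantiation
# that identifies it, here merged by averaging (claim 7).
from collections import defaultdict

def update_supermodule_fitness(blueprint_instantiations):
    """Average each supermodule's fitness over the blueprint
    instantiations in which it appears."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for inst in blueprint_instantiations:
        for sm_id in inst["supermodules"]:
            totals[sm_id] += inst["fitness"]
            counts[sm_id] += 1
    return {sm: totals[sm] / counts[sm] for sm in totals}

insts = [
    {"supermodules": ["A", "B"], "fitness": 0.8},
    {"supermodules": ["A", "C"], "fitness": 0.6},
]
print(update_supermodule_fitness(insts))  # A averages to 0.7
```

A blueprint's own fitness (claim 11) is averaged the same way, but over its instantiations rather than over its supermodules.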
1. A method comprising: receiving data comprising values corresponding to a plurality of variables; generating a score based on the received data and using a boosted ensemble of segmented scorecard models, the boosted ensemble of segmented scorecard models comprising two or more segmented scorecard models; and providing data comprising the score. 2. A method as in claim 1, wherein the score ƒ is calculated by the boosted ensemble of segmented scorecard models based on: ƒ = β*s + w_0 + Σ_{i=1}^{I} ƒ_i(x_i), wherein s is a score derived from a previous ensemble, β is a shrinking parameter; and wherein the enhanced scorecard model optimizes β and predictor bin weights so that the score ƒ is better than the score s. 3. A method as in claim 1, wherein the providing data comprises at least one of: displaying the score, transmitting data comprising the score to a remote computing system, loading data comprising the score into memory, or storing data comprising the score. 4. A method as in claim 1, wherein at least one of the receiving, generating, and providing is implemented by at least one data processor forming part of at least one computing system. 5. A method as in claim 1, wherein the boosted ensemble of segmented scorecard models is generated by: training a first segmented scorecard model; identifying or generating a second segmented scorecard model that provides an enhanced score relative to the first segmented scorecard model; enumerating split variables and split values in the second segmented scorecard model using a segmentation search algorithm; and forming the boosted ensemble of segmented scorecard models based on both the first segmented scorecard model and the second segmented scorecard model. 6. 
A non-transitory computer program product storing instructions which, when executed by at least one data processor forming part of at least one computing system, result in operations comprising: receiving data comprising values corresponding to a plurality of variables; generating a score based on the received data and using a boosted ensemble of segmented scorecard models, the boosted ensemble of segmented scorecard models comprising two or more segmented scorecard models; and providing data comprising the score. 7. A computer program product as in claim 6, wherein the score ƒ is calculated by the boosted ensemble of segmented scorecard models based on: ƒ = β·s + w₀ + Σ_{i=1}^{I} ƒ_i(x_i), wherein s is a score derived from a previous ensemble and β is a shrinkage parameter; and wherein the enhanced scorecard model optimizes β and the predictor bin weights so that the score ƒ is better than the score s. 8. A computer program product as in claim 6, wherein the providing data comprises at least one of: displaying the score, transmitting data comprising the score to a remote computing system, loading data comprising the score into memory, or storing data comprising the score. 9. A computer program product as in claim 6, wherein the boosted ensemble of segmented scorecard models is generated by: training a first segmented scorecard model; identifying or generating a second segmented scorecard model that provides an enhanced score relative to the first segmented scorecard model; enumerating split variables and split values in the second segmented scorecard model using a segmentation search algorithm; and forming the boosted ensemble of segmented scorecard models based on both the first segmented scorecard model and the second segmented scorecard model. 10. 
A method comprising: training a first segmented scorecard model; identifying or generating a second segmented scorecard model that provides an enhanced score relative to the first segmented scorecard model; enumerating split variables and split values in the second segmented scorecard model using a segmentation search algorithm; and forming a boosted ensemble of scorecard models based on both the first segmented scorecard model and the second segmented scorecard model. 11. A method as in claim 10, wherein the first segmented scorecard model is trained using at least one of a scorecard model, a regression, or a neural network. 12. A method as in claim 11, wherein at least one of the training, identifying, enumerating, and forming is implemented by at least one data processor forming part of at least one computing system. 13. A method as in claim 10, further comprising: enabling local or remote access to the boosted ensemble of segmented scorecard models to enable scores to be generated.
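As a rough, non-authoritative sketch of the score update recited in claims 2 and 7 (ƒ = β·s + w₀ + Σ_{i=1}^{I} ƒ_i(x_i)), the example below treats each ƒ_i as a binned scorecard predictor. All bin edges, bin weights, and parameter values are made-up illustrations, not taken from the claims.

```python
import bisect

def bin_score(value, bin_edges, bin_weights):
    """Return the weight of the bin that `value` falls into; edges define bins."""
    return bin_weights[bisect.bisect_right(bin_edges, value)]

def boosted_score(prior_score, record, predictors, beta, w0):
    """f = beta * s + w0 + sum_i f_i(x_i), with each f_i a binned predictor."""
    total = beta * prior_score + w0
    for name, (edges, weights) in predictors.items():
        total += bin_score(record[name], edges, weights)
    return total

# Hypothetical predictors: each has len(edges) + 1 bins.
predictors = {
    "income":      ([20_000, 60_000], [-5.0, 0.0, 4.0]),
    "utilization": ([0.3, 0.8],       [3.0, 0.0, -6.0]),
}
s = 600.0  # score produced by the previous ensemble
f = boosted_score(s, {"income": 45_000, "utilization": 0.9},
                  predictors, beta=0.95, w0=10.0)
```

In boosting terms, β shrinks the contribution of the previous ensemble's score s, and training would fit w₀ and the bin weights so that ƒ improves on s.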
Please help me write a proper abstract based on the patent claims.
Data is received that includes values corresponding to a plurality of variables. A score is then generated based on the received data and using a boosted ensemble of segmented scorecard models. The boosted ensemble of segmented scorecard models includes two or more segmented scorecard models. Subsequently, data including the score can be provided (e.g., displayed, transmitted, loaded, stored, etc.). Related apparatus, systems, techniques, and articles are also described.
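A minimal sketch of the flow the abstract describes (route a record to a segment via a split variable and split value, score it with that segment's scorecard, and combine the member scores) might look like the following; every split variable, split value, and scoring function here is hypothetical:

```python
def segmented_score(record, split_var, split_value, segment_models):
    """Pick a segment by comparing the split variable against the split value,
    then score the record with that segment's model."""
    segment = "low" if record[split_var] <= split_value else "high"
    return segment_models[segment](record)

def ensemble_score(record, members):
    """Combine member scores additively (one simple way to pool an ensemble)."""
    return sum(member(record) for member in members)

# Two toy members, each segmenting on a different hypothetical variable.
member_1 = lambda r: segmented_score(r, "age", 30, {
    "low":  lambda r: 500 + 2 * r["age"],
    "high": lambda r: 520 + r["age"],
})
member_2 = lambda r: segmented_score(r, "income", 50_000, {
    "low":  lambda r: -10.0,
    "high": lambda r: 15.0,
})

score = ensemble_score({"age": 40, "income": 80_000}, [member_1, member_2])
```

The segmentation search described in the claims would be responsible for choosing the split variables and split values; here they are simply hard-coded.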