example patent applications from validation set
- patent_application1.csv +2 -0
- patent_application2.csv +2 -0
patent_application1.csv
ADDED
@@ -0,0 +1,2 @@
1 +
patent_number,decision,title,abstract,claims,background,summary,description,cpc_label,ipc_label,filing_date,patent_issue_date,date_published,examiner_id
2 +
['14365653'],['REJECTED'],['SYSTEM AND METHOD FOR DEVICE ACTION AND CONFIGURATION BASED ON USER CONTEXT DETECTION FROM SENSORS IN PERIPHERAL DEVICES'],"['A system and method for device action and configuration based on user context detection from sensors in peripheral devices are disclosed. A particular embodiment includes: a peripheral device including one or more sensors to produce sensor data; and logic, at least a portion of which is partially implemented in hardware, the logic configured to determine a context from the sensor data and to perform at least one action based on the determined context, the at least one action including modifying a configuration in a mobile device for sending notifications to a user.']","['1-20. (canceled) 21. A mobile device comprising: logic, at least a portion of which is partially implemented in hardware, the logic configured to determine a context from sensor data and to perform at least one action based on the determined context, the at least one action including modifying a configuration in a mobile device for sending notifications to a user. 22. The mobile device as claimed in claim 21 wherein the sensor data being encoded with audio signals and received on a microphone line via a microphone conductor of an audio jack. 23. The mobile device as claimed in claim 21 including a sensor data receiver to receive sensor data produced by one or more sensors in a peripheral device and to provide the received sensor data to the logic for processing. 24. The mobile device as claimed in claim 23 wherein the sensor data receiver includes a wireless transceiver, the sensor data being received via a wireless data transmission. 25. The mobile device as claimed in claim 21 wherein the sensor data is of a type from the group consisting of: biometric data, heart rate data, temperature data, pressure data, acceleration data, galvanic skin response data, and global positioning system data. 26. The mobile device as claimed in claim 21 wherein the mobile device is a mobile phone. 27. A system comprising: a peripheral device including one or more sensors to produce sensor data; and logic, at least a portion of which is partially implemented in hardware, the logic configured to determine a context from the sensor data and to perform at least one action based on the determined context, the at least one action including modifying a configuration in a mobile device for sending notifications to a user. 28. The system as claimed in claim 27 wherein the sensor data being encoded with audio signals and received on a microphone line via a microphone conductor of an audio jack. 29. The system as claimed in claim 28 wherein the peripheral device including a microcontroller coupled to the one or more sensors to receive the sensor data generated by the one or more sensors, the microcontroller being further configured to encode the sensor data into an audio band signal, the peripheral device including an adder to combine the encoded data with audio signals on the microphone line, the adder being further configured to transfer the combined audio signals via the microphone conductor of the audio jack. 30. The system as claimed in claim 27 including a sensor data receiver to receive the sensor data produced by the one or more sensors in the peripheral device and to provide the received sensor data to the logic for processing. 31. The system as claimed in claim 30 wherein the peripheral device includes a wireless transceiver, the sensor data being sent via a wireless data transmission. 32. 
The system as claimed in claim 27 wherein the sensor data produced by the one or more sensors in the peripheral device is biometric data. 33. The system as claimed in claim 27 wherein the sensor data is of a type from the group consisting of: heart rate data, temperature data, pressure data, acceleration data, galvanic skin response data, and global positioning system data. 34. The system as claimed in claim 27 wherein the logic is implemented in a mobile phone. 35. The system as claimed in claim 27 wherein the peripheral device is from the group consisting of: a headset and an earbud accessory. 36. A non-transitory machine-useable storage medium embodying instructions which, when executed by a machine, cause the machine to: receive sensor data produced by one or more sensors in a peripheral device; transfer the sensor data to a mobile device for processing; determine a context from the sensor data; and perform at least one action based on the determined context, the at least one action including modifying a configuration in the mobile device for sending notifications to a user. 37. The machine-useable storage medium as claimed in claim 36 wherein the instructions being further configured to receive the sensor data on a microphone line via a microphone conductor of an audio jack. 38. The machine-useable storage medium as claimed in claim 36 wherein the instructions being further configured to receive the sensor data via a wireless data transmission. 39. The machine-useable storage medium as claimed in claim 36 wherein the sensor data produced by the one or more sensors in the peripheral device is biometric data. 40. The machine-useable storage medium as claimed in claim 36 wherein the sensor data is of a type from the group consisting of: heart rate data, temperature data, pressure data, acceleration data, galvanic skin response data, and global positioning system data.']","[""<SOH> BACKGROUND <EOH>Smartphones are becoming the predominant link between people and information. Most current smartphones or other mobile devices provide a capability to use mobile software applications (apps). A mobile software application (app) can embody a defined set of functionality and can be installed and executed on a mobile device, such as a smartphone, as tablet device, laptop computer, a digital camera, or other form of mobile computing, imaging, or communications device. Conventional mobile apps are available that focus on particular applications or functionality sets. Additionally, most standard mobile phones and other mobile devices have an audio/microphone connector or audio jack into which a headset, earbuds, or other peripheral device connector can be plugged. Most standard headsets or earbud accessories also include a microphone so the user can both hear audio from the phone and speak into the phone via the headset or earbud accessory. A plug connected to the headsets, earbuds, or other peripheral device can include separate conductive elements to transfer electrical signals corresponding to the left ear audio, right ear audio, microphone audio, and ground. The plug is compatible with the mobile device audio jack. The standard headsets or earbud accessories are configured to be placed over or attached to the ear(s) of a person, and include one or more speakers and a microphone. The headset may also include an arm that is attached to a housing that supports the microphone. The arm may be movable between a stored position and an extended, operative position. 
The headset, earbuds, the arm, and/or other types of peripheral devices may include one or more physiological or biometric sensors, environmental sensors, and/or other types of data-producing elements. Computing devices, communication devices, imaging devices, electronic devices, accessories, or other types of peripheral devices designed to be worn or attached to a user (denoted as wearables or wearable devices) and the associated user experience are also becoming very popular. Mobile phone headsets and earbud accessories are examples of such wearables. Because wearable devices are typically worn by or attached to the user all or most of the time, it is important that wearables serve as a helpful tool aiding the user when needed, and not become an annoying distraction when the user is trying to focus on other things. One form of as wearable device is a heart rate (FIR) monitor. Existing heart rate monitoring solutions in the market are mostly electrocardiogram (ECG) based chest straps that transmit data to a watch that has a display. An electrocardiogram (EKG or ECG) is a test that determines heart rate based on the electrical activity of the heart. Other types of conventional HR monitors are also ECG based, but only have a watch on one hand and the user needs to pause to measure HR by touching it with the other hand. A Valencell™ brand product has a PPG (photoplethysmography) based solution for HR monitoring in earphones. PPG is an optical sensing technique that allows measurement of blood pulsation from the skin surface. The Valencell™ brand product has a sensor in the earbud and as digital signal processor (DSP) and Bluetooth™ radio in a medallion or other separate component connected to the earbuds. The user can clip the separate medallion on their clothes or wear the separate component. HR data is wirelessly transmitted periodically from the medallion or other separate component to an app in a mobile phone. Other biometric data like calories, VO2 (oxygen consumption), etc. can also be calculated by the app in the mobile phone. However, for wearable devices and other peripheral devices, it is very important to be able to ascertain the user's environment and context. Although existing systems gather some forms of biometric data, this data is not used to determine a user's environment and context nor used to make decisions based on a user's dynamically determined context.""]","['<SOH> BRIEF DESCRIPTION OF THE DRAWINGS <EOH>The various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which: FIG. 1 illustrates an example embodiment configured for sending data from a peripheral device to a mobile device via the audio/microphone wire and the audio jack; FIG. 2 illustrates an example embodiment configured for sending data from a peripheral device to a mobile device via a wireless data connection; FIG. 3 illustrates a system diagram of an example embodiment; FIGS. 4 through 6 illustrate examples of the placement of sensors in various types of peripheral devices (e.g., headsets and earbuds); FIGS. 7 through 9 illustrate example embodiments in which accelerometer data with a microphone input can be used to detect movement and/or sounds of the user associated with chewing and a type of food being eaten; FIG. 10 is a processing flow chart illustrating an example embodiment of a method as described herein; and FIG. 
11 shows a diagrammatic representation of a machine in the example form of a mobile computing and/or communication system within which a set of instructions when executed and/or processing logic when activated may cause the machine to perform any one or more of the methodologies described and/or claimed herein. detailed-description description=""Detailed Description"" end=""lead""?']","[""TECHNICAL FIELD This patent application relates to electronic systems, peripheral devices, mobile devices, and computer-implemented software, according to various example embodiments, and more specifically to a system and method for device action and configuration based on user context detection from sensors in peripheral devices. BACKGROUND Smartphones are becoming the predominant link between people and information. Most current smartphones or other mobile devices provide a capability to use mobile software applications (apps). A mobile software application (app) can embody a defined set of functionality and can be installed and executed on a mobile device, such as a smartphone, as tablet device, laptop computer, a digital camera, or other form of mobile computing, imaging, or communications device. Conventional mobile apps are available that focus on particular applications or functionality sets. Additionally, most standard mobile phones and other mobile devices have an audio/microphone connector or audio jack into which a headset, earbuds, or other peripheral device connector can be plugged. Most standard headsets or earbud accessories also include a microphone so the user can both hear audio from the phone and speak into the phone via the headset or earbud accessory. A plug connected to the headsets, earbuds, or other peripheral device can include separate conductive elements to transfer electrical signals corresponding to the left ear audio, right ear audio, microphone audio, and ground. The plug is compatible with the mobile device audio jack. The standard headsets or earbud accessories are configured to be placed over or attached to the ear(s) of a person, and include one or more speakers and a microphone. The headset may also include an arm that is attached to a housing that supports the microphone. The arm may be movable between a stored position and an extended, operative position. The headset, earbuds, the arm, and/or other types of peripheral devices may include one or more physiological or biometric sensors, environmental sensors, and/or other types of data-producing elements. Computing devices, communication devices, imaging devices, electronic devices, accessories, or other types of peripheral devices designed to be worn or attached to a user (denoted as wearables or wearable devices) and the associated user experience are also becoming very popular. Mobile phone headsets and earbud accessories are examples of such wearables. Because wearable devices are typically worn by or attached to the user all or most of the time, it is important that wearables serve as a helpful tool aiding the user when needed, and not become an annoying distraction when the user is trying to focus on other things. One form of as wearable device is a heart rate (FIR) monitor. Existing heart rate monitoring solutions in the market are mostly electrocardiogram (ECG) based chest straps that transmit data to a watch that has a display. An electrocardiogram (EKG or ECG) is a test that determines heart rate based on the electrical activity of the heart. 
Other types of conventional HR monitors are also ECG based, but only have a watch on one hand and the user needs to pause to measure HR by touching it with the other hand. A Valencell™ brand product has a PPG (photoplethysmography) based solution for HR monitoring in earphones. PPG is an optical sensing technique that allows measurement of blood pulsation from the skin surface. The Valencell™ brand product has a sensor in the earbud and as digital signal processor (DSP) and Bluetooth™ radio in a medallion or other separate component connected to the earbuds. The user can clip the separate medallion on their clothes or wear the separate component. HR data is wirelessly transmitted periodically from the medallion or other separate component to an app in a mobile phone. Other biometric data like calories, VO2 (oxygen consumption), etc. can also be calculated by the app in the mobile phone. However, for wearable devices and other peripheral devices, it is very important to be able to ascertain the user's environment and context. Although existing systems gather some forms of biometric data, this data is not used to determine a user's environment and context nor used to make decisions based on a user's dynamically determined context. BRIEF DESCRIPTION OF THE DRAWINGS The various embodiments are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings in which: FIG. 1 illustrates an example embodiment configured for sending data from a peripheral device to a mobile device via the audio/microphone wire and the audio jack; FIG. 2 illustrates an example embodiment configured for sending data from a peripheral device to a mobile device via a wireless data connection; FIG. 3 illustrates a system diagram of an example embodiment; FIGS. 4 through 6 illustrate examples of the placement of sensors in various types of peripheral devices (e.g., headsets and earbuds); FIGS. 7 through 9 illustrate example embodiments in which accelerometer data with a microphone input can be used to detect movement and/or sounds of the user associated with chewing and a type of food being eaten; FIG. 10 is a processing flow chart illustrating an example embodiment of a method as described herein; and FIG. 11 shows a diagrammatic representation of a machine in the example form of a mobile computing and/or communication system within which a set of instructions when executed and/or processing logic when activated may cause the machine to perform any one or more of the methodologies described and/or claimed herein. DETAILED DESCRIPTION In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments. It will be evident, however, to one of ordinary skill in the art that the various embodiments may be practiced without these specific details. In the various embodiments described herein, a system and method for device action and configuration based on user context detection from sensors in peripheral devices are disclosed. The various embodiments described herein provide various ways to determine status and detect events to ascertain the user's context, and to make actionable decisions based on the determined context. In an example embodiment described herein, a peripheral device, such as a wearable device (e.g., a headset or earbuds), is configured to include a data-producing component. 
In one embodiment, the data-producing component can be a biometric sensor, such as a heart rate sensor, which can produce sensor data in the peripheral device. In the example embodiment, this sensor data can be transmitted to a mobile device, such as a mobile phone, with which the peripheral device is in data communications via a wired or a wireless data connection. In an embodiment using a wireless data connection, a standard wireless protocol, such as a Bluetooth link, or frequency modulation (FM) radio can be used. In an embodiment using a wired data connection, the peripheral device can be coupled to a mobile device via an audio/microphone wire and an audio jack of the mobile device. The sensor data can be transferred from the peripheral device to the mobile device via the microphone conductor of the audio jack. In various embodiments, the described data-producing component(s) in the peripheral device can be an accelerometer a galvanic skin response (GSR) detector, a temperature sensor, a pressure sensor, and/or the like. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that many other types of data-producing components in the peripheral device may be similarly deployed. For example, these other types of data-producing components can include environmental sensors, motion sensors, image or video-producing devices, audio capture devices, global positioning systems (GPS), and the like. Additionally, these data-producing components in the peripheral device can be grouped into sensor modules that include a variety of different types of sensors or other types of data-producing components. In each case, the data captured or generated by the data-producing components in the peripheral device can be transferred to a mobile device via a wired or wireless data connection as described. Various embodiments are described in more detail below. In an example embodiment described herein, the data captured generated by the data-producing components in the peripheral device (denoted sensor data) can be transferred to a software application (app) executing in the mobile device. The app can use the sensor data to detect status and events based on the dynamic conditions measured or determined by the sensors in the peripheral devices that are used regularly by people. The sensor data allows the app in a mobile device to determine the user's context (e.g., if the user is engaged in activities and does not want to be disturbed, if the user is looking for help and suggestions the device, or the like). In other words, the sensor data received from the data-producing, components in the peripheral device allow the system to determine the context of the user. From the user context, the system can also offer the help that the user is looking for more easily. Based on the dynamically determined context, the system can also automatically perform actions, suppress actions, or configure system functionality in a manner consistent with the dynamically determined context. These data-producing components in the peripheral device also allow the user and the system to monitor user wellness in real-time and over extended periods of time thereby enabling the user to make positive lifestyle changes. The various embodiments described herein enable the system to receive sensor data from a plurality of peripheral device sensors, determine user context from the sensor data, and to make contextually-appropriate decisions for the user. 
In this manner, the system can be is useful tool for the user. The system can automatically determine user context based on real-time user and environmental context events and status that are detected using data from sensors installed in peripheral devices. The context events that can be dynamically determined, by the system can include: what the user is doing, how the user is feeling, what kind of assistance the user needs, whether the user wants assistance or wants not be disturbed at all, how is the user's health impacted by certain activites, and a variety of other user-relevant states and/or events. Referring now to FIG. 1 an example embodiment 100 described herein is configured for sending data from a peripheral device to a mobile device via the audio/microphone wire and the audio jack. In the embodiment of FIG. 1, a peripheral device 110 (e.g., headsets, earbuds, or the like) can include one or more sensors 112. As described above, these sensors can be biometric sensors, environmental sensors, or other data-producing components. In a particular example embodiment, the sensors can he optical sensors for detecting heart rate, an infrared (IR) LED, an accelerometer, and/or the like. The peripheral device 110 can also include a microphone 114, which can transfer audio signals from the peripheral device 110 to a mobile device 130 via an electrical (audio/microphone) wire and audio jack in a standard manner. The peripheral device 110 can also be configured to include a microcontroller (e.g., an MSP430, or other type of microcontroller). It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of standard microcontrollers, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuits, or other circuitry or logic can be similarly used as the microcontroller of the example embodiments. The microcontroller 116 can receive the sensor data produced by the sensors 112. The sensor data produced by the one or more sensors 112 in the peripheral device 110 can be encoded into a modulation format and sent to the microcontroller 116 for processing. In one example embodiment, the sensor data is provided as I2C signals. I2C (also denoted PC or Inter-Integrated Circuit) is a multimaster, serial, single-ended computer bus used for attaching low-speed peripherals to a motherboard, embedded system, cellphone, or other electronic device. It will be apparent to those of ordinary skill in the art that the sensor data can be provided in a variety of different forms, formats, protocols, or signals. The microcontroller 116 can convert the sensor data to an audio band signal using FSK (frequency-shift keying) or other well-known encoding technique. The converted data from the sensors 112 is added into or otherwise combined with the audio/microphone wire signals using an adder 118 for transfer to a mobile device 130 via the standard audio jack 140. Referring still to FIG. 1, a mobile device 130 of an example embodiment is shown coupled to the peripheral device 110 via audio jack 140. It will be apparent to those of ordinary skill in the art that devices other than a mobile phone can be similarly used. For example, the mobile device 130 can also include a smartphone, a tablet device, laptop computer, a personal digital assistant (PDA), global positioning system (GPS) device, an imaging device, an audio or video player or capture device, or other form of mobile computing, communications, or imaging device. 
Such mobile devices 130 can include standard components, such as an audio encoder/decoder (codec) 132 and analog-to-digital converter (ADC) 124 as part of a sensor data receiver 133. As described above, mobile device 110 can also include an application (app) 131, which can comprise downloaded software, firmware, or other form of customized processing logic. App 131 can be configured to include a filtering component 142 and Context Detection and Device Action Logic 332. Filtering component 142 can include a low pass filter (LPF) 144 and a high pass filter (HPF) 146. App 131 can also be configured as processing logic or logic, at least a portion of which is partially implemented in hardware, the logic including the filtering component 142 and the Context Detection and Device Action Logic 332. The Context Detection and Device Action Logic 332 of an example embodiment is described in more detail below. Sensor data sent from the peripheral device 110 to the mobile device 130 via the audio/microphone wire and the audio jack 140 is received at the sensor data receiver 133 and sampled in the standard codec 132 provided in a conventional mobile device 130. The codec 132 can use the analog-to-digital converter (ADC) 134, to produce digital signals that are received by the filtering component 142 of the app 131 executing on the mobile device 130. The LPF 144 can be used to isolate the standard audio signals produced by microphone 114. These audio signals can be passed to an audio modem. The HPF 146 can be used to isolate the encoded sensor data received from the sensors 112. The isolated sensor data can be passed to a decoder component, which processes and analyzes the sensor data produced in peripheral device 110. In this manner, the example embodiment can send sensor data produced in as peripheral device to a mobile device for processing by a mobile device app via the audio/microphone wire and the audio jack of the mobile device. The described embodiment provides the advantage that sensor data can be transferred from the peripheral device to the mobile device via the audio jack without having to modify the hardware of the mobile device. Further, the embodiment does not require a wireless connection to the mobile device. However, referring now to FIG. 2, in another example embodiment 10 data transfer from the peripheral device 150 to the mobile device 170 can be effected using standard Bluetooth™ Low Energy technology or frequency modulation (FM) radio signals provided by a wireless transceiver 158 in the peripheral device 150. In the example embodiment shown in FIG. 2, the peripheral device 150 (e.g., headsets, earbuds, or the like) can include one or more sensors 112. As described above, these sensors can be biometric sensors, environmental sensors, or other data-producing component. Peripheral device 150 can also be configured to include a microcontroller 156. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of standard microcontrollers, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuits, or other circuitry or logic can be similarly used as the microcontroller of the example embodiments. The microcontroller 156 can receive the sensor data produced by the sensors 112. The sensor data produced by the one or more sensors 112 in the peripheral device 150 can be encoded into a pre-defined data format by the microcontroller 156 and sent to the wireless transceiver 158. 
The wireless transceiver 158 allows the peripheral device 150 to wirelessly transmit peripheral device data, such as sensor data from sensors 112 to the mobile device 170. A wireless transceiver 172 of the sensor data receiver 173 in the mobile device 170 allows the mobile device 170 to receive sensor data wirelessly from the peripheral device 150. As described above, mobile device 170 can also include an application (app) 171, which can comprise downloaded software, firmware, or other form of customized processing logic. The app 171 can include Context Detection and Device Action Logic 332. The Context Detection and Device Action Logic 332 of an example embodiment is described in more detail below. The app 171 can receive the sensor data from the wireless transceiver 172 via the sensor data receiver 173. In this manner, the example embodiment can transfer sensor data produced in a peripheral device to a mobile device for processing by a mobile device app via a wireless data connection. The Bluetooth™ solution would be simpler, but would also be more costly and would consume more electrical power. The FM solution would require modifications to the mobile device and may not work with any mobile phone. The various embodiments described herein detect a particular state or event based on sensor data received from a peripheral device, and then determine the broader user context based on the state/event detection. For example, sensor data received from a peripheral device can be used to infer the user context, which can be used to determine if the user is having a meal, or snacking, or drinking, or engaged in other identifiable activities, so the system can take actions based on the broader context. According to various example embodiments, the following usages describe examples of the system behaviors and capabilities in response to detection of certain user context events or states. Referring now to FIG. 3, a system diagram 300 of an example embodiment is illustrated. As described above, a peripheral device 310 can include a plurality of sensors or other data-generating components 112. Examples of the placement of sensors in various types of peripheral devices (e.g. headsets and earbuds) are shown in FIGS. 4 through 6. Sensor data generated by these sensors 112 can be transferred to the mobile device 330 and received by the sensor data receiver 333 in several ways as also described above. The sensor data is processed in the mobile device 330 by Context Detection and Device Action Logic 332 executing in a processing environment provided by app 331. Logic 332 comprises a plurality of processing modules for processing the sensor data to determine a context and to perform actions based on the determined context. In particular, the Logic 332 includes Context Determination Logic 350, Decision Logic 360, an Event Recorder 370, and an Action Dispatcher 380. The data processing performed by each of these processing modules is described below in relation to several described example embodiments. In a first example embodiment, an accelerometer and/or a microphone or other audio capture device of data-generating components 112 in the peripheral device 310 is used for detecting that a user is chewing. As shown in FIGS. 7 through 9, accelerometer data with a microphone input can be used to detect movement and/or sounds of the user associated with chewing and a type of food being eaten (e.g., crunchy, chewy, or soft food). 
This data can be used by the Context Determination Logic 350 to determine if the user is having a meal. In this example embodiment, the determined context is one associated with the user having a meal. This context determination is passed from the Context Determination Logic 350 to the Decision Logic 360. This context determination and any associated detected events or states can also be logged by an Event Recorder 370. The Decision Logic 360 can use the context determination to make a decision related to performing (or not performing) an action based on the determined context. For example in a particular embodiment, the Decision Logic 360 can cause the mobile device 330, via the Action Dispatcher 380, to trigger or configure one or more of the actions described below based on the determined context: Device Action 1: If the user doesn't want to be disturbed during dinner based on a pre-configured preference, the mobile device 330 can be configured by the Action Dispatcher 380 to suppress notifications during the meal or other detected events. Device Action 2: Because the determined context is one associated with the user having a meal, the mobile device 330 can be configured by the Action Dispatcher 380 to set a reminder that is triggered after completion of the meal. For example, the mobile device 330 can be configured to automatically remind the user to take his/her medicine after lunch. Device Action 3: Based on the sensor data, such as the accelerometer data and/or microphone input used to detect movement and/or sounds of the user associated with chewing, the Context Determination Logic 350 can determine the rate at which the user is chewing and swallowing. If the user is determined to be swallowing too quickly based on the user's determined rate in comparison to pre-stored data corresponding to normative standards for human chewing and swallowing, the mobile device 330 can be configured by the Action Dispatcher 380 to issue a notification to the user to gently coach her/him to slow down and chew/swallow properly. Device Action 4: Based on the sensor data and the determination that the user is eating, the Context Determination Logic 350 can also determine the times of day and lengths of time when the user is eating. If the user is determined to be eating for a short period of time, or intermittently, then the Context Determination Logic 350 can determine the user is snacking and log the activity using the Event Recorder 370. Based on the sensor data, such as the accelerometer data and/or microphone input used to detect movement author sounds of the user associated with chewing, the Context Determination Logic 350 can also determine the likely type of food being consumed (e.g., a crunchy, chewy, or soft food). The Context Determination Logic 350 can also be configured to prompt the user to enter information identifying the type of snack they are consuming. This log can give an accurate calorie consumption picture for the user over a pre-determined time frame. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of different actions can be triggered or configured based on the detection of a context associated with a user consuming a meal. In a second example embodiment, a heart rate monitor or sensor and/or GSR (galvanic skin response) sensor of data-generating components 112 in the peripheral device 310 can be used for detecting stress in the user. 
The heart rate of the user as detected by the heart rate sensor can be compared with pre-stored normative standards of human heart rates. Elevated heart rates can be indicative of stress. The GSR sensor measures the electrical conductance of the skin, which can be indicative of moisture or sweat on the skin. Skin moisture/sweat levels can be compared with pre-stored normative standards of human skin moisture/sweat levels. Elevated skin moisture/sweat levels can he indicative of stress. This data can be used by the Context Determination Logic 350 to determine if the user is experiencing a stress episode. The Context Determination Logic 350 can also determine the timing, length, and severity of the detected stress episode. This information can be logged using the Event Recorder 370. Additionally, the context determination (e.g., a stress episode) can be passed from the Context Determination Logic 350 to the Decision Logic 360. The Decision Logic 360 can use the context determination to make a decision related to performing (or not performing) an action based on the determined context. For example in a particular embodiment, the Decision Logic 360 can cause the mobile device 330, via the Action Dispatcher 380, to trigger or configure one or more of the actions described below based on the determined context: Device Action 5: Upon the detection of the user stress episode, the mobile device 330 can be configured by the Action Dispatcher 380 to issue a notification or warning to the user and suggest that s/he take a break and relax. The mobile device 330 can also be configured by the Action Dispatcher 380 to issue a notification or warning to a third party (e.g., call or text paramedics) based on the timing, length, and/or severity of the detected stress episode. The mobile device 330 can also be configured by the Action Dispatcher 380 to issue a notification or warning, to a third party based on the detection of the cessation of heart beat or other events associated with emergency situations or severe medical conditions. Device Action 6: Given the detection of user stress over time, the Context Determination Logic 350 and the Decision Logic 360 can build datasets to enable the user to look at his/her cumulative stress data and determine stress patterns, such as the time of the day or the specific tasks being performed that are producing higher levels of stress in the user. Device Action 7: Upon the detection of the user stress episode, the mobile device 330 can be configured by the. Action Dispatcher 380 to suppress notifications until the stress level is reduced. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of different actions can be triggered or configured based on the detection of a context associated with a user stress episode or medical condition. In a third example embodiment, a temperature sensor (thermometer) of generating components components 112 in the peripheral device 310 can be used for detecting and monitoring the user's core body temperature in real-time. The user's real-time body temperature as measured by the thermometer can be compared with pre-stored normative standards of human body temperature. Elevated body temperature can be indicative of disease, infection, stress, or other medical condition. 
This data can be used by the Context Determination Logic 350 to determine if the user is experiencing as medical condition, The Context Determination Logic 350 can also determine the timing, length, and severity of the detected medical condition. This information can be logged using the Event Recorder 370. Additionally, the context determination (e.g., a medical condition) can be passed from the Context Determination Logic 350 to the Decision Logic 360. The Decision Logic 360 can use the context determination to make a decision related to performing (or not performing) an action based on the determined context. For example in a particular embodiment, the Decision Logic 360 can cause the mobile device 330, via the Action Dispatcher 380, to trigger or configure one or more of the actions described below based on the determined context: Device Action 8: Upon the detection of the elevated user body temperature relative to comparisons with nominal core body temperature data, the mobile device 330 can be configured by the Action Dispatcher 380 to issue a notification or warning to the user notifying the user of the presence of slight fevers that may be signs of oncoming infection. The mobile device 330 can also be configured by the Action Dispatcher 380 to issue a notification or warning to a third party (e.g., call or text paramedics) based on the timing, length, and/or severity of the detected medical condition. The mobile device 330 can also be configured by the Action Dispatcher 380 to issue a notification or warning to a third party based on the detection of the user's core body temperature being above or below a safe level or other events associated with emergency situations or severe medical conditions. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of different actions can be triggered or configured based on the detection of a context associated with as user medical condition. In a fourth example embodiment, a heart rate monitor or sensor of data generating components 112 in the peripheral device 310 can be used for detecting the user's mood. The heart rate of the user as detected by the heart rate sensor can he compared with pre-stored normative standards of human heart rates associated with particular moods. Elevated, heart rates can be indicative of energetic or active moods. Slower heart rates can be indicative of more mellow or somber moods. This data can he used by the Context Determination Logic 350 to determine the user's mood. The Context Determination Logic 350 can also determine the timing, length, and severity of the detected mood. This information can be logged using the Event Recorder 370. Additionally, the context determination (e.g., the user's mood) can be passed from the Context Determination Logic 350 to the Decision Logic 360. The Decision Logic 360 can use the context determination to make a decision related to performing (or not performing) an action based on the determined context. For example in a particular embodiment, the Decision Logic 360 can cause the mobile device 330, via the Action Dispatcher 380, to trigger or configure one or more of the actions described below based on the determined context: Device Action 9: Upon the detection of the user's mood, the mobile device 330 can be configured by the Action Dispatcher 380 to play only relevant sections of a song to maintain the user's heart rate. 
Software in the app 331 running in the mobile device 330 can analyze a song and determine the song's BPM (beats per minute) at different sections of the song. Different songs or various portions of a song can be matched to the current heart rate of the user as measured by the heart rate monitor in the peripheral device 310. For example, if the user's heart rate, and thus the user's mood, is suggestive of music with a pace of 180 BPM, the app 331 can play only the part of a song where the song's BPM is 180, instead of playing the complete song. If the target pace is 180 RPM and the current song has ended, the next song can be played 30 seconds, for example, from its start to avoid a low tempo beginning, so the user doesn't slow down and the target pace is maintained. In this manner, the embodiment can match multimedia content with the current mood of the consumer. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of different actions can be triggered or configured based on the detection of a context associated with a user's mood. In other various embodiments, the sensors of data-generating components 112 in the peripheral device 310 can be used for detecting other user contexts. For example, the pressure sensor can be used to measure atmospheric pressure and thereby infer certain weather conditions. A user can be notified of rapid changes in pressure, which may he indicative of the approach of weather events. In other embodiments, a global positioning system (GPS) receiver of the data-generating components 112 can be used to determine the location of the user. For example, the Context Determination Logic 350 can use GPS data to determine if a user is currently at work or at a residence. The mobile device 330 can be configured differently depending on the location of the user. The GPS data can also be used to determine if the user is stationary or moving. In other embodiments, image or audio capture devices of the data-generating components 112 can be used to record audio or video clips in the proximity of the user. Static images can also be taken. The recordings can transferred to the app 331 where they can be parsed and analyzed to extract context information related to the user's current situation. For example, the audio and image information can be used to determine that the user is walking in the city based on traffic noise or images corresponding to city locations. It will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of other well-known sensing devices or technologies can be included in the sensor modules added to the peripheral device 310. As such, it will be apparent to those of ordinary skill in the art in view of the disclosure herein that a variety of additional user-associated states and events can be detected and contextually relevant. ac lions can be taken (or suppressed) in response thereto. Referring now to FIG. 10, a processing flow diagram illustrates an example embodiment of as method for device action and configuration based on user context detection from sensors in peripheral devices as described herein. 
The method 1000 of an example embodiment includes: receiving, sensor data produced by one or more sensors in a peripheral device (processing block 1010); transferring the sensor data to a mobile device for processing (processing block 1020); determining a context from the sensor data (processing block 1030); and performing at least one action based on the determined context, the at least one action including modifying a configuration in the mobile device for sending notifications to a use (processing block 1040). FIG. 11 shows a diagrammatic representation of a machine in the example form of a mobile computing and/or communication system 700 within which a set of instructions when executed processing logic when activated may cause the machine to perform any one or more of the methodologies described and/or claimed herein. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity or a server or a client machine in server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a laptop computer, a tablet computing system, a Personal Digital Assistant (PDA), a cellular telephone, a smartphone, a web appliance, as set-top box (STB), a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) or activating processing logic that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” can also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions or processing logic to perform any one or more of the methodologies described and/or claimed herein. The example mobile computing and/or communication system 700 includes a data processor 702 (e.g., a System-on-a-Chip (SOC), general processing core, graphics core, and optionally other processing logic) and a memory 704, which can communicate with each other via a bus or other data transfer system 706. The mobile computing and/or communication system 700 may further include various input/output (I/O) devices and/or interfaces 710, such as a touchscreen display, an audio jack, and optionally a network interface 712. In an example embodiment, the network interface 712 can include one or more radio transceivers configured for compatibility with any one or more standard wireless and/or cellular protocols or access technologies (e.g., 2nd (2G), 2.5, 3rd (3G), 4th (4G) generation, and future generation radio access for cellular systems, Global System for Mobile communication (GSM), General Packet Radio Services (GPRS Enhanced Data GSM Environment EDGE), Wideband Code Division Multiple Access (WCDMA), LTE, CDMA2000, WLAN, Wireless Router (WR) mesh, and the like). Network interface 712 may also be configured for use with various other wired and/or wireless communication protocols, including TCP/IP, UDP, SIP, SMS, RTP, WAP, CDMA, TDMA, UMTS, UWB, WiFi, WiMax, Bluetooth, IEEE 802,11x, and the like. In essence, network interface 712 may include or support virtually any wired and/or wireless communication mechanisms by which information may travel between the mobile computing and/or communication system 700 and another computing or communication system via network 714. 
The memory 704 can represent a machine-readable medium on which is stored one or more sets of instructions, software, firmware, or other processing logic (e.g., logic 708) embodying any one or more of the methodologies or functions described and/or claimed herein. The logic 708, or a portion thereof, may also reside, completely or at least partially within the processor 702 during execution thereof by the mobile computing and/or communication system 700. As such, the memory 704 and the processor 702 may also constitute machine-readable media. The logic 708, or a portion thereof, may also be configured as processing logic or logic, at least a portion of which is partially implemented in hardware. The logic 708, or a portion thereof, may further be transmitted or received over a network 714 via the network interface 712. While the machine-readable medium of an example embodiment can be is single medium, the term “machine-readable medium” should be taken to include a single non-transitory medium or multiple non-transitory media (e.g., a centralized or distributed database, and/or associated caches and computing systems) that store the one or more sets of instructions. The term “machine-readable medium” can also be taken to include any non-transitory medium that is capable of storing, encoding or carrying a set of instructions fur execution by the machine and that cause the machine to perform any one or more of the methodologies of the various embodiments, or that is capable of storing, encoding or carrying data structures utilized by or associated with such a set of instructions. The term “machine-readable medium” can accordingly be taken to include, but not be limited to, solid-state memories, optical media, and magnetic media. In various embodiments as described herein, example embodiments include at least the following examples. A mobile device comprising: logic, at least a portion of which is partially implemented in hardware, the logic configured to determine a context from sensor data and to perform at least one action based on the determined context, the at least one action including modifying a configuration in a mobile device for sending notifications to a user. The mobile device as claimed above wherein the sensor data being encoded with audio signals and received on a microphone line via a microphone conductor of an audio jack. The mobile device as claimed above including a sensor data receiver to receive sensor data produced by one or more sensors in a peripheral device and to provide the received sensor data to the logic for processing. The mobile device as claimed above wherein the sensor data receiver includes a wireless transceiver, the sensor data being received via a wireless data transmission. The mobile device as claimed above wherein the sensor data is of a type from the group consisting of: biometric data, heart rate data, temperature data, pressure data, acceleration data, galvanic skin response data, and global positioning system data. The mobile device as claimed above wherein the mobile device is a mobile phone. A system comprising: a peripheral device including one or more sensors to produce sensor data; and logic, at least a portion of which is partially implemented in hardware, the logic configured to determine a context from the sensor data and to perform at least one action based on the determined context, the at least one action including modifying a configuration in a mobile device for sending notifications to a user. 
The system as claimed above wherein the sensor data being encoded with audio signals and received on a microphone line via a microphone conductor of an audio jack. The system as claimed above wherein the peripheral device including a microcontroller coupled to the one or more sensors to receive the sensor data generated by the one or more sensors, the microcontroller being further configured to encode the sensor data into an audio band signal, the peripheral device including an adder to combine the encoded data with audio signals on the microphone line, the adder being further configured to transfer the combined audio signals via the microphone conductor of the audio jack. The system as claimed above including a sensor data receiver to receive the sensor data produced by the one or more sensors in the peripheral device and to provide the received sensor data to the logic for processing. The system as claimed above wherein the peripheral device includes a wireless transceiver, the sensor data being sent via a wireless data transmission. The system as claimed, above wherein the sensor data produced by the one or more sensors in the peripheral device is biometric data. The system as claimed above wherein the sensor data is of a type from the group consisting of: heart rate data, temperature data, pressure data, acceleration data, galvanic skin response data, and global positioning system data. The system as claimed above wherein the logic is implemented in a mobile phone. The system as claimed above wherein the peripheral device is from the group consisting of a headset and an earbud accessory. A non-transitory machine-useable storage medium embodying instructions which, when executed by a machine, cause the machine to receive sensor data produced by one or more sensors in a peripheral device; transfer the sensor data to a mobile device for processing; determine a context from the sensor data; and perform at least one action based on the determined context, the at least one action including modifying a configuration in the mobile device for sending notifications to as user. The machine-useable storage medium as claimed above wherein the instructions being further configured to receive the sensor data on a microphone line via a microphone conductor of an audio jack. The machine-useable storage medium as claimed above wherein the instructions being further configured to receive the sensor data via a wireless data transmission. The machine-useable storage medium as claimed above wherein the sensor data produced by the one or more sensors in the peripheral device is biometric data. The machine-useable storage medium as claimed above wherein the sensor data is of a type from the group consisting of heart rate data, temperature data, pressure data, acceleration data galvanic skin response data, and global positioning system data. A method comprising: determining a context from sensor data; and performing at least one action based on the determined context, the at least one action including modifying a configuration in a mobile device for sending notifications to a user. The method as claimed above wherein the sensor data being encoded with audio signals and received on a microphone line via a microphone conductor of an audio jack. The method as claimed above including receiving sensor data produced by one or more sensors in a peripheral device and providing the received sensor data to logic for processing. 
The method as claimed above wherein the sensor data being received via a wireless data transmission. The method as claimed above wherein the sensor data is of a type from the group consisting of biometric data, heart rate data temperature data, pressure data, acceleration data, galvanic skin response data, and global positioning system data. The method as claimed above wherein the mobile device is a mobile phone. A mobile apparatus comprising: logic means, at least a portion of which is partially implemented. In hardware, the logic means con flamed to determine a context from sensor data and to perform at least one action based on the determined context, the at least one action including modifying a configuration in a mobile device for sending, notifications to a user. The mobile apparatus as claimed above wherein the sensor data being encoded with audio signals and received on a microphone line via a microphone conductor of an audio jack. The mobile apparatus as claimed above including as sensor data receiving means to receive sensor data produced by one or more sensors in a peripheral device and to provide the received sensor data to the logic means for processing. The mobile apparatus as claimed above wherein the sensor data receiving means includes a wireless transceiver, the sensor data being received via a wireless data transmission. The mobile apparatus as claimed above wherein the sensor data is of a type from the group consisting of: biometric data, heart rate data, temperature data, pressure data, acceleration data, galvanic skin response data, and global positioning system data. The mobile apparatus as claimed above wherein the mobile device is a mobile phone. The Abstract of the Disclosure is provided to comply with 37 C.F.R. §1.72(b), requiring an abstract that will allow the reader to quickly ascertain the nature of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, it eau be seen that various features are grouped together in a single embodiment for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus, the following claims are hereby incorporated into the Detailed Description, with each claim standing on its own as a separate embodiment.""]",['H04Q900'],['H04Q900'],['20160129'],[''],['20160630'],['59257.0']
patent_application2.csv
ADDED
@@ -0,0 +1,2 @@
1 +
patent_number,decision,title,abstract,claims,background,summary,description,cpc_label,ipc_label,filing_date,patent_issue_date,date_published,examiner_id
2 +
['14374144'],['PENDING'],['INFORMATION EXTRACTION FROM SEMANTIC DATA'],"['Technologies and implementations for extracting information from semantic data available, for example, on the World Wide Web, are generally disclosed.']","['1. A method to extract information from semantic data on the world wide web, the method comprising: generating a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a plurality of statements of the ontology; determining information candidates based at least in part on syntax of information representation language; and validating the information candidates based at least in part on the plurality of assertions. 2. The method of claim 1, wherein generating the plurality of assertions comprises generating one or more assertions based at least in part upon a terminological box (Tbox) classification and an assertion box (Abox) sampling. 3. The method of claim 2, wherein generating the plurality of assertions comprises determining a concept hierarchy tree and a role hierarchy tree, both being based at least in part on the Tbox classification. 4. The method of claim 2, wherein generating the plurality of assertions comprises determining an assertion pattern based at least in part on the Abox sampling. 5. The method of claim 4, wherein determining the assertion pattern comprises generating a plurality of distilled assertions based at least in part on the Abox sampling and the Tbox classification. 6. The method of claim 1, wherein determining information candidates comprises determining information candidates based at least in part on a description logic. 7. The method of claim 6, wherein determining information candidates based at least in part on the description logic comprises determining information candidates based at least in part on Web Ontology Language (OWL). 8. The method of claim 1, wherein determining information candidates comprises determining information candidates based at least in part on syntax of information representation language and signatures included in the Tbox classification. 9. The method of claim 1, wherein determining information candidates comprises determining information candidates based at least in part on a novelty rule. 10. The method of claim 1, wherein determining information candidates comprises determining information candidates based at least in part on a simplicity rule. 11. The method of claim 1, wherein validating the information candidates comprises determining an approximate Abox sampling. 12. The method of claim 1, wherein validating the information candidates comprises calculating a certainty level for a concept candidate based at least in part on a majority rule. 13. A machine readable non-transitory medium having stored therein instructions that, when executed by one or more processors, operatively enable a semantic data processing module to: generate a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a terminological box (Tbox) classification and an assertion box (Abox) sampling; determine information candidates based at least in part on syntax of information representation language; and validate the information candidates based at least in part on the plurality of assertions. 14. 
The machine readable non-transitory medium of claim 13, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to determine a concept hierarchy tree and a role hierarchy tree, both being based at least in part on the Tbox classification. 15. The machine readable non-transitory medium of claim 14, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to assign instances to at least one of concepts and roles based at least in part on the concept hierarchy tree and the role hierarchy tree. 16. The machine readable non-transitory medium of claim 13, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to determine an assertion pattern based at least in part on the Abox sampling. 17. The machine readable non-transitory medium of claim 16, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to generate a plurality of distilled assertions based at least in part on the Abox sampling and the Tbox classification. 18. The machine readable non-transitory medium of claim 13, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to determine information candidates based at least in part on a description logic. 19. The machine readable non-transitory medium of claim 18, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to determine information candidates based at least in part on Web Ontology Language (OWL). 20. The machine readable non-transitory medium of claim 13, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to determine information candidates based at least in part on syntax of information representation language and signatures included in the Tbox classification. 21. The machine readable non-transitory medium of claim 13, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to determine an approximate Abox sampling. 22. The machine readable non-transitory medium of claim 13, wherein the stored instructions, when executed by one or more processors, further operatively enable the semantic data processing module to calculate a certainty level for a concept candidate based at least in part on a majority rule. 23. A system to extract information from semantic data on the world wide web comprising: a processor; and a semantic data processing module communicatively coupled to the processor, the semantic data processing module configured to: generate a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a terminological box (Tbox) classification and an assertion box (Abox) sampling; determine information candidates based at least in part on syntax of information representation language; and validate the information candidates based at least in part on the plurality of assertions. 24. 
The system of claim 23, wherein the semantic data processing module is further configured to determine a concept hierarchy tree and a role hierarchy tree, both being based at least in part on the Tbox classification. 25. The system of claim 24, wherein the semantic data processing module is further configured to assign instances to at least one of concepts and roles based at least in part on the concept hierarchy tree and the role hierarchy tree. 26. The system of claim 23, wherein the semantic data processing module is further configured to determine an assertion pattern based at least in part on the Abox sampling. 27. The system of claim 26, wherein the semantic data processing module is further configured to generate a plurality of distilled assertions based at least in part on the Abox sampling and the Tbox classification. 28. The system of claim 23, wherein the semantic data processing module is further configured to determine information candidates based at least in part on a description logic. 29. The system of claim 28, wherein the semantic data processing module is further configured to determine information candidates based at least in part on Web Ontology Language (OWL). 30. The system of claim 23, wherein the semantic data processing module is further configured to determine information candidates based at least in part on syntax of information representation language and signatures included in the Tbox classification. 31. The system of claim 23, wherein the semantic data processing module is further configured to determine an approximate Abox sampling. 32. The system of claim 23, wherein the semantic data processing module is further configured to calculate a certainty level for a concept candidate based at least in part on a majority rule.']","['<SOH> BACKGROUND <EOH>Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section. Large amounts of semantic data may be accessible from a computer. For example, large amounts of semantic data may be available on the World Wide Web (WWW). Due to the potentially vast amounts of semantic data, extracting information from the semantic data (e.g., using computers, or the like) may be difficult.']","['<SOH> SUMMARY <EOH>Described herein are various illustrative methods for extracting information from semantic data on the World Wide Web. Example methods may include generating a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a plurality of statements of the ontology, determining information candidates based at least in part on syntax of information representation language, and validating the information candidates based at least in part on the plurality of assertions. The present disclosure also describes various example machine readable non-transitory media having stored therein instructions that, when executed by one or more processors, operatively enable a semantic data processing module to generate a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a terminological box (Tbox) classification and an assertion box (Abox) sampling, determine information candidates based at least in part on syntax of information representation language, and validate the information candidates based at least in part on the plurality of assertions. The present disclosure additionally describes example systems. 
Example systems may include a processor, and a semantic data processing module communicatively coupled to the processor, the semantic data processing module configured to generate a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a terminological box (Tbox) classification and an assertion box (Abox) sampling, determine information candidates based at least in part on syntax of information representation language, and validate the information candidates based at least in part on the plurality of assertions. The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description.']","[""BACKGROUND Unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section. Large amounts of semantic data may be accessible from a computer. For example, large amounts of semantic data may be available on the World Wide Web (WWW). Due to the potentially vast amounts of semantic data, extracting information from the semantic data (e.g., using computers, or the like) may be difficult. SUMMARY Described herein are various illustrative methods for extracting information from semantic data on the World Wide Web. Example methods may include generating a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a plurality of statements of the ontology, determining information candidates based at least in part on syntax of information representation language, and validating the information candidates based at least in part on the plurality of assertions. The present disclosure also describes various example machine readable non-transitory media having stored therein instructions that, when executed by one or more processors, operatively enable a semantic data processing module to generate a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a terminological box (Tbox) classification and an assertion box (Abox) sampling, determine information candidates based at least in part on syntax of information representation language, and validate the information candidates based at least in part on the plurality of assertions. The present disclosure additionally describes example systems. Example systems may include a processor, and a semantic data processing module communicatively coupled to the processor, the semantic data processing module configured to generate a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a terminological box (Tbox) classification and an assertion box (Abox) sampling, determine information candidates based at least in part on syntax of information representation language, and validate the information candidates based at least in part on the plurality of assertions. The foregoing summary is illustrative only and is not intended to be in any way limiting. In addition to the illustrative aspects, embodiments, and features described above, further aspects, embodiments, and features will become apparent by reference to the drawings and the following detailed description. 
BRIEF DESCRIPTION OF THE DRAWINGS Subject matter is particularly pointed out and distinctly claimed in the concluding portion of the specification. The foregoing and other features of the present disclosure will become more fully apparent from the following description and appended claims, taken in conjunction with the accompanying drawings. Understanding that these drawings depict only several embodiments in accordance with the disclosure, and are, therefore, not to be considered limiting of its scope. The disclosure will be described with additional specificity and detail through use of the accompanying drawings. In the drawings: FIG. 1 illustrates a block diagram of a system configured to extract information from semantic data on the WWW; FIG. 2 is a flow chart of an example method for extracting information from semantic data on the WWW; FIG. 3 illustrates an example computer program product; and FIG. 4 illustrates a block diagram of an example computing device, all arranged in accordance with at least some embodiments described herein. DETAILED DESCRIPTION The following description sets forth various examples along with specific details to provide a thorough understanding of claimed subject matter. It will be understood by those skilled in the art that claimed subject matter might be practiced without some of the specific details disclosed herein. Further, in some circumstances, well-known methods, procedures, systems, components and/or circuits have not been described in detail, in order to avoid unnecessarily obscuring claimed subject matter. In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, similar symbols typically identify similar components, unless context dictates otherwise. The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here. It will be readily understood that the aspects of the present disclosure, as generally described herein, and illustrated in the Figures, can be arranged, substituted, combined, and designed in a wide variety of different configurations, all of which are explicitly contemplated and made part of this disclosure. This disclosure is drawn, inter alia, to methods, devices, systems and computer readable media related to information extraction from semantic data. Large amounts of semantic data may be available (e.g., on the WWW, on a LAN, in a data center, on a server, or the like). The available semantic data may correspond to a variety of different subjects (e.g., science, history, sports, economics, society, technology, etc.). Due to the large amounts of semantic data that may be available, extracting information (e.g., patterns, statistics, inferences, potentially useful facts, etc.) from the semantic data may be difficult. For example, large amounts of semantic data related to cancer may be available on the WWW. Extracting information (e.g., possible cause of cancer, etc.) from the semantic data may be difficult. Additionally, some techniques for extracting information from data stored in a database may not be applicable to extracting information from semantic data. More particularly, as data stored in a database may have a different format than semantic data (e.g., relational vs. 
graph-based, etc.), techniques for extracting information from data stored in a database may not be applicable to extracting information from semantic data. In general, semantic data may be organized based at least in part on a terminological box (Tbox) classification and an assertion box (Abox) sampling. In general, a TBox classification may define relationships among concepts and/or roles within the semantic data. An ABox sampling may describe information about one or more entities, using the concepts and roles defined by the TBox. As an example, semantic data may correspond to patients in a hospital. Such semantic data may have a TBox classification that describes the concept “hospital patient.” The semantic data may also have an ABox sampling that describes any number of entities (e.g., persons, animals, or the like) that are “hospital patients.” Various embodiments described herein may be provided for extracting information from semantic data. In some examples, information may be extracted from semantic data by generating assertions from the semantic data, determining information candidates from the semantic data, and applying a verification process on the determined information candidates using the generated assertions. Some examples presented herein may describe extracting information from semantic data available on the WWW. However, this is not intended to be limiting. For example, information may be extracted from semantic data available in a data center, on a LAN, on a server, or the like. In some examples, a computing device, coupled to the Internet, may be configured to both generate assertions and determine information candidates from semantic data available on the WWW. The computing device may further be configured to validate the determined information candidates based at least in part on the generated assertions. The computing device may generate multiple assertions from an ontology corresponding to the semantic data based at least in part on the TBox classification and/or the ABox sampling. In some embodiments, the computing device may generate assertions by assigning entities referenced in the ABox sampling to a concept and/or role from the TBox classification (e.g., based on a concept hierarchy tree and/or based on a role hierarchy tree). Alternatively and/or additionally, the computing device may generate assertions by identifying patterns (e.g., used by a majority of assertions in the ABox sampling, or the like) in the ABox sampling. The computing device may determine information candidates based at least in part on a “simplicity rule”. For example, information candidates may be restricted to a particular length. In some examples, the length may be based on the syntax of information representation language. The computing device may determine information candidates based at least in part on a “novelty rule”. For example, information candidates may be required to be “new” (e.g., not already described by the TBox, or the like). The computing device may validate the determined information candidates based at least in part on the generated assertions. In some embodiments, the computing device may validate the information candidates based at least in part on a “majority rule”. For example, the computing device may determine information candidates that satisfy a majority of the generated assertions. FIG. 
1 illustrates an example system 100 configured to extract information from semantic data on the WWW, arranged in accordance with at least some embodiments described herein. As depicted, the system 100 may include a computing device 110 configured to extract information from semantic data on the WWW. In general, the computing device 110 may be configured to generate assertions and determine information candidates from some semantic data on the WWW. For example, the computing device 110 may be configured to generate assertions and determine information candidates from some semantic data related to one or more causes of cancer that may be available on the WWW. The computing device 110 may further be configured to validate the determined information candidates based at least in part on the generated assertions. More details and examples of the computing device 110 generating assertions from semantic data will be provided below while discussing FIG. 1 and FIG. 2, as well as elsewhere herein. As depicted in this figure, the computing device 110 may access semantic data 120 available on the WWW 130 via connection 140. In some embodiments, the computing device 110 may access an amount of semantic data 120 sufficient for computing device 110 to generate assertions and determine information candidates as described herein. The computing device 110 may be any type of computing device connectable to the Internet. For example, the computing device 110 may be a laptop, a desktop, a server, a virtual machine, a cloud computing system, a distributed computing system, and/or the like. The connection 140 may be any type of connection to the Internet. For example, the connection 140 may be a wired connection, a wireless connection, a cellular data connection, and/or the like. The semantic data 120 may be any ontology describing entities and the entities' relationship to a concept and/or a role using a TBox classification 122 and an ABox sampling 124. The TBox classification 122 may include sentences describing concept hierarchies (e.g., relationships between concepts) and/or role hierarchies (e.g., relationships between roles). The ABox sampling 124 may include sentences stating where in the hierarchy one or more entities belong (e.g., relationships between entities and the concepts). TBox classification and ABox sampling facilitate or allow for the determination of an approximate ABox, since calculation of the complete ABox (derivation of all implicit assertions) may be difficult, especially for a very large semantic data set. On the other hand, more implicit assertions allow for or correlate to more accurate ABox sampling wherein derivation of all implicit assertions may be desired. Optimally, a balance point may be found between derivation of all implicit assertions and a sufficiently large number of implicit assertions obtained to achieve a desired ABox sampling accuracy. Since TBox classification is efficient and some implicit assertions can be easily obtained, TBox classification for the original ABox is executed before the ABox sampling, although TBox classification may be replaced by other efficient methods. One purpose of TBox classification is to make the subsequent ABox sampling process more accurate, i.e., to capture important patterns based on more assertions. Furthermore, computed assertions (ABox1) before ABox sampling can also be used to generate a combined set of assertions, e.g., ABox1∪ABox2. The semantic data 120 may be expressed using any suitable language. 
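To make the TBox/ABox split described above concrete, the following is a minimal, non-authoritative Python sketch of the hospital example; the dictionary and tuple representations and all names are illustrative assumptions, not structures from the application.

# Minimal sketch of the TBox/ABox split described above, using the hospital
# example. The data layout and names are illustrative assumptions only.

# TBox: relationships among concepts (a tiny concept hierarchy).
TBOX = {
    "HospitalPatient": "Person",  # HospitalPatient is a sub-concept of Person
    "Person": None,               # root concept
}

# ABox: assertions about concrete entities, stated with TBox concepts.
ABOX = [("alice", "HospitalPatient"), ("bob", "HospitalPatient")]

def super_concepts(concept):
    """Walk the TBox hierarchy upward from a concept."""
    parents = []
    parent = TBOX.get(concept)
    while parent is not None:
        parents.append(parent)
        parent = TBOX.get(parent)
    return parents

# Every HospitalPatient assertion implies a Person assertion: an implicit
# assertion of the kind an efficient TBox classification can derive cheaply.
for entity, concept in ABOX:
    for sup in super_concepts(concept):
        print(f"implied assertion: {sup}({entity})")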
For example, the semantic data 120 may be expressed using the Resource Description Framework (RDF), the Web Ontology Language (OWL), Extensible Markup Language (XML), or the like. Similarly, the semantic data 120 may be expressed using a variety of description logics (e.g., SHOIN, SHIF, SROIQ, or the like). The computing device 110 may include a semantic data processing module 112. In general, the semantic data processing module 112 may be configured to extract information from the semantic data 120 as described herein. Simply stated, the semantic data processing module 112 may be configured to generate assertions 114 and determine information candidates 116 from the semantic data 120. The semantic data processing module 112 may further be configured to validate the determined information candidates 116 based at least in part on the generated assertions 114. In general, the generated assertions 114 may include multiple assertions. Similarly, the determined information candidates 116 may include multiple information candidates. In some portions of the present disclosure, the generated assertions 114 and the determined information candidates 116 are referred to in the plural form. As such, the “set” of generated assertions 114 or the “set” of determined information candidates 116 may be referenced. Additionally, in some portions of the present disclosure, a single one of the generated assertions 114 or a single one of the determined information candidates 116 is referred to. Although care is taken to distinguish between plural and singular references, it is to be appreciated that in some references to the plural form, the singular form may be implied and vice versa. The semantic data processing module 112 may determine the assertions 114 based at least in part on the TBox classification 122 and/or the ABox sampling 124. For example, the semantic data processing module 112 may generate assertions by assigning entities referenced in the original ABox in the TBox classification algorithm to a concept and/or role from the TBox classification 122 (e.g., based on a concept hierarchy tree and/or based on a role hierarchy tree). As another example, the semantic data processing module 112 may generate assertions by identifying patterns (e.g., used by a majority of assertions in the ABox sampling 124, or the like) in the ABox sampling 124. The semantic data processing module 112 may generate information candidates 116 based at least in part on restricting the determined information candidates to a particular length (e.g., based on syntax of information representation language, or the like). As another example, the semantic data processing module 112 may require determined information candidates 116 to be “new” (e.g., not already described by the TBox, or the like). The semantic data processing module 112 may validate the determined information candidates 116 based at least in part on the determined assertions 114. In response to, or as part of, the validation, the semantic data processing module 112 may generate a validation result 118. In some examples, the determined information candidates 116 that satisfy a majority of the generated assertions 114 may be included in the validation result 118. FIG. 2 illustrates a flow diagram of an example method for extracting information from semantic data on the WWW, arranged in accordance with at least some embodiments described herein. 
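As a rough illustration of the simplicity and novelty rules just described (and not the application's actual implementation), the following Python sketch filters candidate expressions by an assumed maximum length and drops candidates already present in the TBox; the nested-tuple candidate encoding, the names, and the MAX_LENGTH value are all assumptions.

# Minimal sketch of the "simplicity rule" (length cap) and the "novelty
# rule" (not already described by the TBox). Candidate encoding, names,
# and the length cap are illustrative assumptions.

MAX_LENGTH = 4  # simplicity rule: assumed cap on candidate length

def length(candidate):
    """1 for an atomic concept; 1 plus the parts' lengths otherwise."""
    if isinstance(candidate, str):   # atomic concept, e.g. "Person"
        return 1
    _op, *parts = candidate          # e.g. ("and", "A", "B")
    return 1 + sum(length(p) for p in parts)

def filter_candidates(candidates, tbox_axioms):
    kept = []
    for c in candidates:
        if length(c) > MAX_LENGTH:   # simplicity rule
            continue
        if c in tbox_axioms:         # novelty rule
            continue
        kept.append(c)
    return kept

tbox = {("sub", "HospitalPatient", "Person")}
candidates = [
    "Person",
    ("and", "Person", "FeelsGood"),
    ("sub", "HospitalPatient", "Person"),           # not novel: in the TBox
    ("and", "A", ("and", "B", ("and", "C", "D"))),  # too long: length 7
]
print(filter_candidates(candidates, tbox))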
In some portions of the description, illustrative implementations of the method are described with reference to elements of the system 100 depicted in FIG. 1. However, the described embodiments are not limited to these depictions. More specifically, some elements depicted in FIG. 1 may be omitted from some implementations of the methods detailed herein. Furthermore, other elements not depicted in FIG. 1 may be used to implement example methods detailed herein. Additionally, FIG. 2 employs block diagrams to illustrate the example methods detailed therein. These block diagrams may set out various functional blocks or actions that may be described as processing steps, functional operations, events and/or acts, etc., and may be performed by hardware, software, and/or firmware. Numerous alternatives to the functional blocks detailed may be practiced in various implementations. For example, intervening actions not shown in the figures and/or additional actions not shown in the figures may be employed and/or some of the actions shown in the figures may be eliminated. In some examples, the actions shown in one figure may be operated using techniques discussed with respect to another figure. Additionally, in some examples, the actions shown in these figures may be operated using parallel processing techniques. The above described, and other not described, rearrangements, substitutions, changes, modifications, etc., may be made without departing from the scope of claimed subject matter. FIG. 2 illustrates an example method 200 for extracting information from semantic data on the WWW. Beginning at block 210 (“Generate Assertions From an Ontology Corresponding to Semantic Data”), the semantic data processing module 112 may include logic and/or features to generate assertions from semantic data on the WWW. In general, at block 210, the semantic data processing module 112 may generate the assertions 114 from the semantic data 120. In some examples, the semantic data processing module 112 may, at block 210, generate assertions 114 by assigning entities referenced in the original ABox in the TBox classification algorithm to a concept and/or role from the TBox classification 122 (e.g., based on a concept hierarchy tree and/or based on a role hierarchy tree). Alternatively, and/or additionally, the semantic data processing module 112 may, at block 210, generate assertions 114 by identifying patterns (e.g., used by a majority of assertions in the ABox sampling 124, or the like) in the ABox sampling 124. For example, the semantic data processing module 112 may, at block 210, determine a concept hierarchy tree and/or a role hierarchy tree based in part on the roles and/or concepts defined in the TBox classification 122. The semantic data processing module 112 may assign entities referenced in the original ABox in the TBox classification algorithm to concepts and/or roles in the determined hierarchy trees. The following pseudo code is provided as an illustrative example for how the semantic data processing module 112 may generate assertions 114 from semantic data 120.

FUNCTION: Generate Assertions From Semantic Data (O) 120.
INPUT: TBox classification 122 and the original ABox.
OUTPUT: A New ABox (ABox1) That Includes One or More Generated Assertions.
Start
  Process the TBox classification 122 to generate a concept hierarchy tree (T1) and a role hierarchy tree (T2).
  For each concept assertion C(a) in the ABox 124
    Generate an assertion D(a) by assigning entity a to each super-concept (D) of C in T1.
    Add the assertion D(a) to ABox1.
  End For
  For each role assertion R(b,c) in the ABox 124
    Generate an assertion S(b,c) by assigning entities b and c to each super-role (S) of R in T2.
    Add the assertion S(b,c) to ABox1.
  End For
End

As another example, the semantic data processing module 112 may, at block 210, identify assertion patterns that are used by more than a threshold number of assertions in the ABox sampling 124. For example, the semantic data processing module 112 may determine the number of entities in the ABox sampling 124 (where a1, a2, . . . , an represent entities in the ABox sampling 124) that use a particular pattern (where C(x) represents a pattern). The semantic data processing module 112 may determine if the number of entities using the pattern C(x) exceeds a threshold value, and if so, generate an assertion based on the pattern. Assuming that the semantic data processing module 112 determines that a number of entities in the ABox sampling 124 greater than the threshold number use the pattern C(x), the semantic data processing module 112 may generate an assertion C(anew) based on the identified pattern C. For example, assume there are 1000 patients in the hospital, and 306 patients feel good about the services of the hospital, denoted by feelGood (pi, hospitalServices), where pi is a patient. Assuming the threshold is 30%, the pattern feelGood (pi, hospitalServices) is selected. All feelGood (pi, hospitalServices) assertions may then be removed from the ABox, and a feelGood (pnew, hospitalServices) may be added into the ABox. In the meantime, the mapping relation between pnew and pi is recorded. In some examples, the threshold number may correspond to a number equal to or greater than a majority (e.g., 50%, or the like) of the entities referenced in the ABox sampling 124. The following pseudo code is provided as an illustrative example of how the semantic data processing module 112 may generate assertions 114 from semantic data 120.

FUNCTION: Generate Assertions From Semantic Data (O) 120.
INPUT: Concept Hierarchy Tree (T1), Role Hierarchy Tree (T2), TBox classification 122, ABox sampling 124, and a Threshold Number Representing Majority Rule (d).
OUTPUT: A New ABox Sampling (ABox2) That Includes One or More Generated Assertions.
Start
  n = 1
  1. Process the TBox classification 122 to identify all n-dimensional patterns based on the concepts and the roles in the TBox classification 122.
  For each identified pattern
    Determine the number of assertions (x) that satisfy the pattern.
    If x > d Then
      Add the pattern into a new ABox sampling (ABox3) and the relationship between the pattern and the represented assertions into a mapping table M.
    End If
  End For
  If at least one pattern satisfied the majority rule Then
    n++; go back to step 1.
  Else
    Determine all assertions based on T1, T2, and ABox3. (Comment: In the above operation, algorithms are used to find implicit assertions that cannot be computed by the TBox classification (assertions in ABox1).)
    Generate corresponding assertions using M.
    Add all generated assertions to ABox2.
END

In some examples, one or more of the patterns in the ABox sampling 124 may be multi-dimensional (e.g., contain more than one axiom, or the like). For example, the pattern C(x) may be a one-dimensional pattern while the pattern C1(x), C2(x) may be a two-dimensional pattern. 
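A rough, non-authoritative Python sketch of the one-dimensional, majority-rule pattern step above follows, using the hospital numbers from the example (1000 patients, 306 feelGood assertions, 30% threshold); the tuple encoding of role assertions, the placeholder entity name, and the function names are illustrative assumptions.

# Minimal sketch of majority-rule pattern mining over role assertions,
# loosely following the second pseudo code block above. Only
# one-dimensional patterns of the form (role, object) are covered.
from collections import defaultdict

THRESHOLD = 0.30  # threshold d from the example (30%)

def mine_patterns(abox, num_entities):
    """Group role assertions (role, subject, object) into patterns
    (role, object), keep patterns used by more than THRESHOLD of the
    entities, and record a mapping table M of pattern -> entities."""
    by_pattern = defaultdict(list)
    for role, subject, obj in abox:
        by_pattern[(role, obj)].append(subject)
    distilled, mapping = [], {}
    for pattern, subjects in by_pattern.items():
        if len(subjects) / num_entities > THRESHOLD:
            placeholder = "p_new"                    # fresh entity, like pnew
            distilled.append((pattern[0], placeholder, pattern[1]))
            mapping[pattern] = subjects              # record M
    return distilled, mapping

# 306 of 1000 patients feel good about hospital services; 94 do not.
abox = [("feelGood", f"p{i}", "hospitalServices") for i in range(306)]
abox += [("feelBad", f"p{i}", "hospitalServices") for i in range(306, 400)]
distilled, mapping = mine_patterns(abox, num_entities=1000)
print(distilled)  # [('feelGood', 'p_new', 'hospitalServices')]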
As shown in the above pseudo code, multi-dimensional patterns may be incrementally explored, until no patterns of that dimensionality satisfy the majority rule. In some examples, assertions from leaf concepts and/or leaf roles may be directly assigned to their super concepts and/or roles. As stated above, in some examples, the semantic data processing module 112 may generate the assertions 114 using a variety of different approaches. For example, the generated assertions in ABox1 and ABox2 may be combined (e.g., ABox1∪ABox2, or the like) to form the set of generated assertions 114. Continuing from block 210 to block 220 (“Determine Information Candidates From the Semantic Data”), the semantic data processing module 112 may include logic and/or features to determine information candidates. In general, at block 220, the semantic data processing module 112 may be configured to determine the information candidates 116 from the semantic data 120. For example, the semantic data processing module 112 may determine the information candidates 116 based on the syntax of information representation language corresponding to the semantic data 120. The semantic data processing module 112 may determine the information candidates 116 by limiting the length of the determined candidates based in part on a simplicity rule. Alternatively, and/or additionally, the semantic data processing module 112 may determine information candidates based in part on the TBox classification 122 (e.g., using a novelty rule, or the like). For example, the semantic data processing module 112 may remove any information candidates from the generated information candidates 116 that are already described and/or implied by the TBox classification 122. In some examples, the semantic data processing module 112 may determine information candidates IC = {I1, I2, . . . } using the following rules, where {C, . . . } is a set of concepts and {R, . . . } a set of roles from the TBox classification 122 and n is a non-negative integer. It is noted that the following rules are expressed using SHOIN description logic and OWL, which is not intended to be in any way limiting.

Concept construction rule: C → ¬C | C1 ⊓ C2 | C1 ⊔ C2 | ∃R.C | ∀R.C | ≥nR | ≤nR
Role construction rule: Trans(R), R1 ⊑ R2, R−

In some examples, the length of an information candidate may be restricted to a length L, which may be determined based in part on the following equations, which also use SHOIN description logic and OWL.

|D| = 1, for a concept (D)
|¬C| = |C| + 1
|C1 ⊓ C2| = |C1 ⊔ C2| = |C1| + |C2| + 1
|∃R.C| = |∀R.C| = |C| + 2
|≥nR| = |≤nR| = n + 1
|Trans(R)| = 2
|R1 ⊑ R2| = 3
|R−| = 2

Continuing from block 220 to block 230 (“Validate the Information Candidates Based at Least in Part on the Generated Assertions”), the semantic data processing module 112 may include logic and/or features to validate the determined information candidates. In general, at block 230, the semantic data processing module 112 may validate the determined information candidates 116 based at least in part on the generated assertions 114 (e.g., ABox1, and/or ABox2, or the like). The semantic data processing module 112 may provide the validated information candidates 116 as the validation result 118. In some examples, the semantic data processing module 112 may, at block 230, validate the determined information candidates 116 based in part on the syntax of information representation language corresponding to the semantic data 120. As an illustrative example of the syntax of an information representation language, Table 1 is provided. 
Table 1, shown below, depicts some example syntaxes and semantics based on the SHOIN description logic.

TABLE 1
Syntax | Semantics
⊤ | ΔI
⊥ | ∅
¬C | ΔI \ CI
C1 ⊓ C2 | C1I ∩ C2I
C1 ⊔ C2 | C1I ∪ C2I
∃r.C | {d ∈ ΔI | there is an e ∈ ΔI with (d,e) ∈ rI and e ∈ CI}
∀r.C | {d ∈ ΔI | for all e ∈ ΔI, (d,e) ∈ rI implies e ∈ CI}
≤nR.C | ∀y1, . . . , yn+1 : R(x,yi) ∧ C(yi) → yi ≈ yj
≥nR.C | ∃y1, . . . , yn : R(x,yi) ∧ C(yi) ∧ yi ≉ yj
R1 ⊑ R2 | ∀x,y : R1(x,y) → R2(x,y)
Trans(R) | ∀x,y,z : R(x,y) ∧ R(y,z) → R(x,z)
R− | ∀x,y : R(x,y) ↔ R−(y,x)

The semantic data processing module 112 may validate the determined information candidates 116 based in part on determining a degree of certainty for each of the information candidates in the set of information candidates 116. For example, assume all entities in the original ABox sampling 124 correspond to the domain ΔI. The semantic data processing module 112 may, at block 230, determine a degree of certainty for an information candidate (ICk) based in part on the following equations, where ICc is a concept information candidate and ICr is a role information candidate.

certainty(ICc) = (number of assertions which satisfy ICc in ABox1 ∪ ABox2) / |ΔI|
certainty(ICr) = (number of assertions which satisfy ICr in ABox1 ∪ ABox2) / |ΔI × ΔI|

In some examples, the semantic data processing module 112 may, at block 230, determine if the certainty of an information candidate is greater than a threshold value. The semantic data processing module 112 may add the information candidate to the validation result 118 based on the determination that the certainty of the information candidate is greater than a threshold level. In some embodiments, the semantic data processing module 112 may, at block 230, determine whether a selected information candidate (ICi) models another selected information candidate (ICj) (e.g., ICi |= ICj). In some examples, if the semantic data processing module 112 determines that ICi |= ICj, the selected information candidates may be validated based on the following formulas.

certainty(ICj) > ζ → certainty(ICi) > ζ
certainty(ICi) < ζ → certainty(ICj) < ζ

Accordingly, the semantic data processing module 112 may, at block 230, determine that the certainty of an information candidate (ICi) exceeds the threshold value if the certainty of its implied information candidate (ICj) exceeds the threshold value. In which case, the semantic data processing module may add the selected concept information candidate (ICi) to the validated results 118. Similarly, the semantic data processing module 112 may, at block 230, determine that the certainty of an information candidate (ICj) does not exceed the threshold value if the certainty of the selected concept information candidate (ICi) does not exceed the threshold value. In which case, the semantic data processing module 112 may not add the selected information candidate (ICj) to the validated results 118. In general, the method described with respect to FIG. 2 and elsewhere herein may be implemented as a computer program product, executable on any suitable computing system, or the like. 
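As a rough, non-authoritative Python sketch of the certainty calculation above: a concept candidate is scored against the domain size |ΔI| and a role candidate against |ΔI × ΔI|, and candidates whose certainty exceeds a threshold ζ are validated. The predicate-based candidate encoding, the threshold value, and all names are assumptions.

# Minimal sketch of the certainty equations above. The candidate is
# encoded as a predicate over assertion tuples; ZETA is an assumed value.

ZETA = 0.5  # threshold ζ (assumed value)

def certainty(candidate_pred, assertions, domain, is_role=False):
    """Fraction of generated assertions (ABox1 ∪ ABox2) satisfying the
    candidate, normalized by |ΔI| for concepts or |ΔI × ΔI| for roles."""
    satisfied = sum(1 for a in assertions if candidate_pred(a))
    denom = len(domain) ** 2 if is_role else len(domain)
    return satisfied / denom

domain = {f"p{i}" for i in range(10)}                    # ΔI
abox = [("HospitalPatient", f"p{i}") for i in range(7)]  # ABox1 ∪ ABox2

is_patient = lambda a: a[0] == "HospitalPatient"
score = certainty(is_patient, abox, domain)
print(score, "validated" if score > ZETA else "rejected")  # 0.7 validated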
For example, a computer program product for extracting information from semantic data on the WWW may be provided. Example computer program products are described with respect to FIG. 3 and elsewhere herein. FIG. 3 illustrates an example computer program product 300, arranged in accordance with at least some embodiments described herein. Computer program product 300 may include a machine readable non-transitory medium having stored therein instructions that, when executed, cause the machine to extract information from semantic data on the WWW according to the processes and methods discussed herein. Computer program product 300 may include a signal bearing medium 302. Signal bearing medium 302 may include one or more machine-readable instructions 304, which, when executed by one or more processors, may operatively enable a computing device to provide the functionality described herein. In various examples, some or all of the machine-readable instructions may be used by the devices discussed herein. In some examples, the machine readable instructions 304 may include instructions to generate a plurality of assertions from an ontology corresponding to the semantic data based at least in part on a terminological box (Tbox) classification and an assertion box (Abox) sampling. In some examples, the machine readable instructions 304 may include instructions to determine information candidates based at least in part on syntax of information representation language. In some examples, the machine readable instructions 304 may include instructions to validate the information candidates based at least in part on the plurality of assertions. In some examples, the machine readable instructions 304 may include instructions to determine a concept hierarchy tree and a role hierarchy tree, both being based at least in part on the Tbox classification. In some examples, the machine readable instructions 304 may include instructions to assign instances to at least one of concepts and roles based at least in part on the concept hierarchy tree and the role hierarchy tree. In some examples, the machine readable instructions 304 may include instructions to generate a plurality of distilled assertions based at least in part on the Abox sampling and the Tbox classification. In some examples, the machine readable instructions 304 may include instructions to determine information candidates based at least in part on a description logic. In some implementations, signal bearing medium 302 may encompass a computer-readable medium 306, such as, but not limited to, a hard disk drive, a Compact Disc (CD), a Digital Versatile Disk (DVD), a digital tape, memory, etc. In some implementations, the signal bearing medium 302 may encompass a recordable medium 308, such as, but not limited to, memory, read/write (R/W) CDs, R/W DVDs, etc. In some implementations, the signal bearing medium 302 may encompass a communications medium 310, such as, but not limited to, a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communication link, a wireless communication link, etc.). In some examples, the signal bearing medium 302 may encompass a machine readable non-transitory medium. In general, the methods described with respect to FIG. 2 and elsewhere herein may be implemented in any suitable computing system. Example systems may be described with respect to FIG. 4 and elsewhere herein. In general, the system may be configured to extract information from semantic data on the WWW. FIG. 
4 illustrates a block diagram illustrating an example computing device 400, arranged in accordance with at least some embodiments described herein. In various examples, computing device 400 may be configured to extract information from semantic data on the WWW as discussed herein. In one example of a basic configuration 401, computing device 400 may include one or more processors 410 and a system memory 420. A memory bus 430 can be used for communicating between the one or more processors 410 and the system memory 420. Depending on the desired configuration, the one or more processors 410 may be of any type including but not limited to a microprocessor (μP), a microcontroller (μC), a digital signal processor (DSP), or any combination thereof. The one or more processors 410 may include one or more levels of caching, such as a level one cache 411 and a level two cache 412, a processor core 413, and registers 414. The processor core 413 can include an arithmetic logic unit (ALU), a floating point unit (FPU), a digital signal processing core (DSP Core), or any combination thereof. A memory controller 415 can also be used with the one or more processors 410, or in some implementations the memory controller 415 can be an internal part of the processor 410. Depending on the desired configuration, the system memory 420 may be of any type including but not limited to volatile memory (such as RAM), non-volatile memory (such as ROM, flash memory, etc.) or any combination thereof. The system memory 420 may include an operating system 421, one or more applications 422, and program data 424. The one or more applications 422 may include semantic data processing module application 423 that can be arranged to perform the functions, actions, and/or operations as described herein including the functional blocks, actions, and/or operations described herein. The program data 424 may include semantic data, assertion data, and/or information candidate data 425 for use with the semantic data processing module application 423. In some example embodiments, the one or more applications 422 may be arranged to operate with the program data 424 on the operating system 421. This described basic configuration 401 is illustrated in FIG. 4 by those components within the dashed line. Computing device 400 may have additional features or functionality, and additional interfaces to facilitate communications between the basic configuration 401 and any required devices and interfaces. For example, a bus/interface controller 440 may be used to facilitate communications between the basic configuration 401 and one or more data storage devices 450 via a storage interface bus 441. The one or more data storage devices 450 may be removable storage devices 451, non-removable storage devices 452, or a combination thereof. Examples of removable storage and non-removable storage devices include magnetic disk devices such as flexible disk drives and hard-disk drives (HDD), optical disk drives such as compact disk (CD) drives or digital versatile disk (DVD) drives, solid state drives (SSD), and tape drives to name a few. Example computer storage media may include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. The system memory 420, the removable storage 451 and the non-removable storage 452 are all examples of computer storage media. 
The computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which may be used to store the desired information and which may be accessed by the computing device 400. Any such computer storage media may be part of the computing device 400. The computing device 400 may also include an interface bus 442 for facilitating communication from various interface devices (e.g., output interfaces, peripheral interfaces, and communication interfaces) to the basic configuration 401 via the bus/interface controller 440. Example output interfaces 460 may include a graphics processing unit 461 and an audio processing unit 462, which may be configured to communicate to various external devices such as a display or speakers via one or more A/V ports 463. Example peripheral interfaces 470 may include a serial interface controller 471 or a parallel interface controller 472, which may be configured to communicate with external devices such as input devices (e.g., keyboard, mouse, pen, voice input device, touch input device, etc.) or other peripheral devices (e.g., printer, scanner, etc.) via one or more I/O ports 473. An example communication interface 480 includes a network controller 481, which may be arranged to facilitate communications with one or more other computing devices 483 over a network communication via one or more communication ports 482. A communication connection is one example of communication media. The communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and may include any information delivery media. A “modulated data signal” may be a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared (IR) and other wireless media. The term computer readable media as used herein may include both storage media and communication media. The computing device 400 may be implemented as a portion of a small-form factor portable (or mobile) electronic device such as a cell phone, a mobile phone, a tablet device, a laptop computer, a personal data assistant (PDA), a personal media player device, a wireless web-watch device, a personal headset device, an application specific device, or a hybrid device that includes any of the above functions. The computing device 400 may also be implemented as a personal computer including both laptop computer and non-laptop computer configurations. In addition, the computing device 400 may be implemented as part of a wireless base station or other wireless system or device. Some portions of the foregoing detailed description are presented in terms of algorithms or symbolic representations of operations on data bits or binary digital signals stored within a computing system memory, such as a computer memory. These algorithmic descriptions or representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. 
An algorithm is here, and generally, considered to be a self-consistent sequence of operations or similar processing leading to a desired result. In this context, operations or processing involve physical manipulation of physical quantities. Typically, although not necessarily, such quantities may take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared or otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to such signals as bits, data, values, elements, symbols, characters, terms, numbers, numerals or the like. It should be understood, however, that all of these and similar terms are to be associated with appropriate physical quantities and are merely convenient labels. Unless specifically stated otherwise, as apparent from the following discussion, it is appreciated that throughout this specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining” or the like refer to actions or processes of a computing device that manipulates or transforms data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing device. The claimed subject matter is not limited in scope to the particular implementations described herein. For example, some implementations may be in hardware, such as employed to operate on a device or combination of devices, for example, whereas other implementations may be in software and/or firmware. Likewise, although claimed subject matter is not limited in scope in this respect, some implementations may include one or more articles, such as a signal bearing medium, a storage medium and/or storage media. This storage media, such as CD-ROMs, computer disks, flash memory, or the like, for example, may have instructions stored thereon, that, when executed by a computing device, such as a computing system, computing platform, or other system, for example, may result in execution of a processor in accordance with the claimed subject matter, such as one of the implementations previously described, for example. As one possibility, a computing device may include one or more processing units or processors, one or more input/output devices, such as a display, a keyboard and/or a mouse, and one or more memories, such as static random access memory, dynamic random access memory, flash memory, and/or a hard drive. There is little distinction left between hardware and software implementations of aspects of systems; the use of hardware or software is generally (but not always, in that in certain contexts the choice between hardware and software can become significant) a design choice representing cost vs. efficiency tradeoffs. There are various vehicles by which processes and/or systems and/or other technologies described herein can be effected (e.g., hardware, software, and/or firmware), and the preferred vehicle will vary with the context in which the processes and/or systems and/or other technologies are deployed. For example, if an implementer determines that speed and accuracy are paramount, the implementer may opt for a mainly hardware and/or firmware vehicle; if flexibility is paramount, the implementer may opt for a mainly software implementation; or, yet again alternatively, the implementer may opt for some combination of hardware, software, and/or firmware. 
The foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, flowcharts, and/or examples. Insofar as such block diagrams, flowcharts, and/or examples contain one or more functions and/or operations, it will be understood by those within the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, several portions of the subject matter described herein may be implemented via Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), digital signal processors (DSPs), or other integrated formats. However, those skilled in the art will recognize that some aspects of the embodiments disclosed herein, in whole or in part, can be equivalently implemented in integrated circuits, as one or more computer programs running on one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs running on one or more processors (e.g., as one or more programs running on one or more microprocessors), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one skilled in the art in light of this disclosure. In addition, those skilled in the art will appreciate that the mechanisms of the subject matter described herein are capable of being distributed as a program product in a variety of forms, and that an illustrative embodiment of the subject matter described herein applies regardless of the particular type of signal bearing medium used to actually carry out the distribution. Examples of a signal bearing medium include, but are not limited to, the following: a recordable type medium such as a flexible disk, a hard disk drive (HDD), a Compact Disc (CD), a Digital Versatile Disk (DVD), a digital tape, a computer memory, etc.; and a transmission type medium such as a digital and/or an analog communication medium (e.g., a fiber optic cable, a waveguide, a wired communications link, a wireless communication link, etc.). Those skilled in the art will recognize that it is common within the art to describe devices and/or processes in the fashion set forth herein, and thereafter use engineering practices to integrate such described devices and/or processes into data processing systems. That is, at least a portion of the devices and/or processes described herein can be integrated into a data processing system via a reasonable amount of experimentation. Those having skill in the art will recognize that a typical data processing system generally includes one or more of a system unit housing, a video display device, a memory such as volatile and non-volatile memory, processors such as microprocessors and digital signal processors, computational entities such as operating systems, drivers, graphical user interfaces, and applications programs, one or more interaction devices, such as a touch pad or screen, and/or control systems including feedback loops and control motors (e.g., feedback for sensing position and/or velocity; control motors for moving and/or adjusting components and/or quantities). 
A typical data processing system may be implemented utilizing any suitable commercially available components, such as those typically found in data computing/communication and/or network computing/communication systems. The herein described subject matter sometimes illustrates different components contained within, or connected with, different other components. It is to be understood that such depicted architectures are merely exemplary, and that in fact many other architectures can be implemented which achieve the same functionality. In a conceptual sense, any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality can be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected” or “operably coupled” to each other to achieve the desired functionality, and any two components capable of being so associated can also be viewed as being “operably couplable” to each other to achieve the desired functionality. Specific examples of operably couplable include but are not limited to physically mateable and/or physically interacting components and/or wirelessly interactable and/or wirelessly interacting components and/or logically interacting and/or logically interactable components. With respect to the use of substantially any plural and/or singular terms herein, those having skill in the art can translate from the plural to the singular and/or from the singular to the plural as is appropriate to the context and/or application. The various singular/plural permutations may be expressly set forth herein for the sake of clarity. It will be understood by those within the art that, in general, terms used herein, and especially in the appended claims (e.g., bodies of the appended claims), are generally intended as “open” terms (e.g., the term “including” should be interpreted as “including but not limited to,” the term “having” should be interpreted as “having at least,” the term “includes” should be interpreted as “includes but is not limited to,” etc.). It will be further understood by those within the art that if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases should not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim recitation to subject matter containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” should typically be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations.
In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such recitation should typically be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, typically means at least two recitations, or two or more recitations). Furthermore, in those instances where a convention analogous to “at least one of A, B, and C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, and C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). In those instances where a convention analogous to “at least one of A, B, or C, etc.” is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., “a system having at least one of A, B, or C” would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase “A or B” will be understood to include the possibilities of “A” or “B” or “A and B.” Reference in the specification to “an implementation,” “one implementation,” “some implementations,” or “other implementations” may mean that a particular feature, structure, or characteristic described in connection with one or more implementations may be included in at least some implementations, but not necessarily in all implementations. The various appearances of “an implementation,” “one implementation,” or “some implementations” in the preceding description are not necessarily all referring to the same implementations. While certain exemplary techniques have been described and shown herein using various methods and systems, it should be understood by those skilled in the art that various other modifications may be made, and equivalents may be substituted, without departing from claimed subject matter. Additionally, many modifications may be made to adapt a particular situation to the teachings of claimed subject matter without departing from the central concept described herein. Therefore, it is intended that claimed subject matter not be limited to the particular examples disclosed, but that such claimed subject matter also may include all implementations falling within the scope of the appended claims, and equivalents thereof.""]",['G06F172725'],['G06F1727'],['20160129'],[''],['20160519'],['77994.0']