id (string, 1-169 chars) | pr-title (string, 2-190 chars) | pr-article (string, 0-65k chars) | pr-summary (string, 47-4.27k chars) | sc-title (2 classes) | sc-article (string, 0-2.03M chars) | sc-abstract (2 classes) | sc-section_names (sequence, length 0) | sc-sections (sequence, length 0) | sc-authors (sequence, length 0) | source (2 classes) | Topic (10 classes) | Citation (string, 4-4.58k chars) | Paper_URL (string, 4-213 chars) | News_URL (string, 4-119 chars) | pr-summary-and-article (string, 49-66.1k chars) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
380 | Cheap, User-Friendly Smartphone App Predicts Vineyard Yields | Cornell engineers and plant scientists have teamed up to develop a low-cost system that allows grape growers to predict their yields much earlier in the season and more accurately than costly traditional methods.
The new method allows a grower to use a smartphone to video grape vines while driving a tractor or walking through the vineyard at night. Growers may then upload their video to a server to process the data. The system relies on computer vision to improve the reliability of yield estimates.
Traditional methods for estimating grape cluster numbers are often done manually by workers, who count a subset of clusters on vines and then scale their numbers up to account for the entire vineyard. This strategy is laborious, costly and inaccurate, with average cluster count error rates of up to 24% of actual yields. The new method cuts those maximum average error rates by almost half.
"This could be a real game-changer for small and medium-sized farms in the Northeast," said Kirstin Petersen , assistant professor of electrical and computer engineering in the College of Engineering. Petersen is a co-author of the paper, " Low-Cost, Computer Vision-Based, Prebloom Cluster Count Prediction in Vineyards ," which published April 8 in the journal Frontiers in Agronomy. Jonathan Jaramillo, a doctoral student in Petersen's lab, is the paper's first author; Justine Vanden Heuvel , professor in the School of Integrative Plant Science Horticulture Section in the College of Agriculture and Life Sciences, is a co-author.
When workers manually count clusters on a vine, accuracy greatly depends on the person counting. In an experiment, the researchers found that for a panel of four vines containing 320 clusters, manual counts ranged from 237 to 309. Workers will count the number of grape clusters in a small portion of the vineyard to get an average number of clusters per row. Farmers will then multiply the average by the total number of rows to predict yields for a vineyard. When cluster numbers are miscounted, multiplying only further amplifies inaccurate yield predictions.
One advantage of the technology is that it counts every vine. Even though the new method also results in counting errors, the numbers aren't magnified by counting a subset of vines and then multiplying.
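To see why the multiplication step matters, here is a toy simulation (illustrative numbers only, not data from the study): scaling up a small hand-counted sample bakes both sampling error and counting error into the final figure, while counting every vine lets individual counting errors largely cancel out.

```python
import random

random.seed(0)

# Hypothetical vineyard: 500 vines with roughly 30-50 clusters each (made-up numbers).
true_counts = [random.randint(30, 50) for _ in range(500)]
total = sum(true_counts)

def noisy_count(n, error=0.15):
    """Simulate a count (human or computer vision) that is off by up to +/-15%."""
    return n * (1 + random.uniform(-error, error))

# Strategy 1: hand-count a 5% sample of vines, then scale up to the whole vineyard.
sample = random.sample(true_counts, k=25)
scaled_estimate = sum(noisy_count(n) for n in sample) / len(sample) * len(true_counts)

# Strategy 2: count every vine, as the video-based system does; per-vine errors
# are added once rather than multiplied, so they tend to average out.
full_estimate = sum(noisy_count(n) for n in true_counts)

print(f"true total:          {total}")
print(f"scaled from sample:  {scaled_estimate:.0f} ({abs(scaled_estimate - total) / total:.1%} error)")
print(f"counted every vine:  {full_estimate:.0f} ({abs(full_estimate - total) / total:.1%} error)")
```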
"We showed that compared to the technology, a farmer would have to manually count 70% of their vineyard to gain the same level of confidence in their yield prediction," Petersen said, "and no one would do that."
High tech, accurate robot counters do exist but they cost upwards of $12,000, making them inaccessible to small and medium-sized growers. Another disadvantage: they count grapes when they are closer to ripening, late in the season, in September or October. The new method counts clusters in May to June.
"Having good predictions earlier in the season gives farmers more time to act on information," Jaramillo said. Famers may then secure labor and buyers in advance. Or, if they are making wine, they can acquire the right amount of equipment for producing and packaging it. "Not having these things lined up in advance can cause problems for growers last minute and ultimately reduce profits," Jaramillo said.
Now an unskilled laborer can simply drive a tractor up and down the rows with a smartphone set up on a gimbal. While details of a public release are still being worked out, the researchers will field test an app this summer. The researchers intend for the app to be open source, with the machine learning components set up so that users simply upload their video to a server that will process the data for them.
"The success of this project relied on the combined knowledge of engineering, farmer practices, and plant sciences, which is a great example of interdisciplinary sciences at Cornell." Petersen said. "There's nothing else like it."
The project is funded by the National Science Foundation, the U.S. Department of Agriculture's National Institute of Food and Agriculture and the Cornell Institute for Digital Agriculture. | Cornell University engineers and plant scientists have developed an inexpensive machine learning application to predict vineyard yields earlier in the season and with greater accuracy than costlier manual counting techniques. Growers can use a smartphone to record video of their grapevines, then upload the footage to a server to process; the system uses computer vision to improve yield estimates. Cornell's Kirstin Petersen said, "Compared to the technology, a farmer would have to manually count 70% of their vineyard to gain the same level of confidence in their yield prediction, and no one would do that." As a result, Petersen said, "This could be a real game-changer for small and medium-sized farms in the Northeast." | [] | [] | [] | scitechnews | None | None | None | None | Cornell University engineers and plant scientists have developed an inexpensive machine learning application to predict vineyard yields earlier in the season and with greater accuracy than costlier manual counting techniques. Growers can use a smartphone to record video of their grapevines, then upload the footage to a server to process; the system uses computer vision to improve yield estimates. Cornell's Kirstin Petersen said, "Compared to the technology, a farmer would have to manually count 70% of their vineyard to gain the same level of confidence in their yield prediction, and no one would do that." As a result, Petersen said, "This could be a real game-changer for small and medium-sized farms in the Northeast."
|||
381 | Digital Nose Stimulation Enables Smelling in Stereo | Humans have two nostrils, which you'd think would allow us to determine the direction of smells, in the same way that two ears let us determine the direction of sounds. But that's not how it works, sadly - humans, in general, are not stereo smellers. We can track down a smell by moving our head and body while sniffing, searching for increasing smell strength, but that's much different from stereo smelling, which would allow us to localize smells based on different intensities wafting into each nostril.
Branching off from earlier work accessing alternative physiological smelling systems, researchers at the Human Computer Integration lab at the University of Chicago have developed a way to augment our sense of smell with a small piece of nose-worn hardware that uses tiny electrical impulses to give us the power of directional smell.
When you smell a smell, your olfactory bulb gets most of the credit for what that smell smells like, but there's also a complex facial nerve system called the trigeminal nerve that adds some smell sensations, and your brain fuses them together into one distinct smell. The trigeminal nerve and olfactory bulb react to different smells in different ways, and with some particular kinds of smells, like mint, that "cold" smell is coming from your trigeminal nerve. The trigeminal nerve is also responsible for the "hot" smell of peppers, and the "sharp" smell of vinegar.
While research has shown that humans cannot consciously determine the directionality of smells using our olfactory bulb, we can determine direction with very high accuracy if the smell triggers our trigeminal nerve, meaning that you can localize the smell of mint chocolate chip ice cream, but not regular chocolate chip ice cream.
One way of triggering the trigeminal nerve is through smells, but you can also do it with direct electrical stimulation. This works on the olfactory bulb as well, but to do that, you have to stick electrodes up your nose. Way up your nose, way up there, way waaaaay up there, since your olfactory bulb is back behind your eyeballs (!). Meanwhile, your trigeminal nerve extends all around your face and into your nasal septum, meaning that you can interface with it pretty easily using just a small bit of electronic kit:
Images: Human Computer Integration lab/University of Chicago
You'd think that the way to mimic a stereo smell with this kind of device would be to stimulate one side of your nose differently than the other side, but remarkably, it turns out that you can generate stereo smell sensations (as well as smell intensity sensations) using only electrical waveform variation. The wireless, battery-powered device uses magnets to keep itself attached to the inside of your nose; it can detect when you inhale, and then uses electrodes to stimulate your septum. The current implementation communicates with external sensors, and this system works so well that completely untrained people can use the device to localize virtual smells, following electrically-induced virtual odors around a room.
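As a purely illustrative sketch of that control idea (not the actual firmware, and the waveform mapping below is invented for the example), the logic amounts to: on each detected inhale, deliver a stimulation waveform whose parameters encode the direction and intensity reported by an external sensor.

```python
# Illustrative only: an invented mapping from a virtual odor's direction and
# intensity to stimulation-waveform parameters, triggered on each detected inhale.
def waveform_params(direction_deg, intensity):
    """Map direction (-90..90 degrees) and intensity (0..1) to pulse settings."""
    assert -90 <= direction_deg <= 90 and 0 <= intensity <= 1
    frequency_hz = 10 + 40 * intensity             # stronger odor -> faster pulses
    duty_cycle = 0.5 + 0.4 * (direction_deg / 90)  # left/right encoded in the waveform
    return frequency_hz, duty_cycle

def on_inhale(sensor_reading):
    freq, duty = waveform_params(*sensor_reading)
    print(f"stimulate septum: {freq:.0f} Hz pulses, duty cycle {duty:.2f}")

# Example: an external sensor reports a moderate odor 30 degrees to the right.
on_inhale((30, 0.6))
```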
Image: Human Computer Integration lab/University of Chicago
The real question, of course, is what does it actually feel like to have this thing in your schnoz? Apparently, it doesn't feel like electricity in the nose or anything like that - it's actually smell-like, somehow. We asked Jas Brooks, first author on a paper being presented at CHI this week, to try and explain:
"The sensation our device produces can feel like a 'tickling' or 'sting,' not far from that of wasabi or the smell of white vinegar, except it is clearly directional."
This device could be of immediate use as an assistive device for people experiencing anosmia, or loss of olfactory function, since in many cases (including anosmia due to Covid-19), damage to the olfactory bulb does not extend to the trigeminal nerve. In the future, the researchers suggest that it may be possible to map between trigeminal stimulation and olfactory stimulation, meaning that a wider range of smell sensations could be electrically induced. It may also be possible to use this device to make people into super-smellers, not only localizing odors but also leveraging external sensors to smell things that they'd never be able to on their own. Like, imagine being able to smell carbon monoxide, or perhaps something more exotic, like radioactivity. Or, as the researchers suggest, a mapping application - smell your way home. | Researchers at the University of Chicago (UChicago) have developed a wireless, nose-worn device that can enable directional smell using tiny electrical impulses. The battery-powered hardware has magnets to keep itself attached to the inside of the nose; it can detect inhalation, and employs electrodes to stimulate the septum and trigger the trigeminal nerve. The model, which connects with external sensors, works well enough that completely untrained people can use it to localize electrically-triggered virtual odors. UChicago's Jas Brooks said, "The sensation our device produces can feel like a 'tickling' or 'sting,' not far from that of wasabi or the smell of white vinegar, except it is clearly directional." The stimulator could be used as an assistive device for people suffering loss of olfactory function. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the University of Chicago (UChicago) have developed a wireless, nose-worn device that can enable directional smell using tiny electrical impulses. The battery-powered hardware has magnets to keep itself attached to the inside of the nose; it can detect inhalation, and employs electrodes to stimulate the septum and trigger the trigeminal nerve. The model, which connects with external sensors, works well enough that completely untrained people can use it to localize electrically-triggered virtual odors. UChicago's Jas Brooks said, "The sensation our device produces can feel like a 'tickling' or 'sting,' not far from that of wasabi or the smell of white vinegar, except it is clearly directional." The stimulator could be used as an assistive device for people suffering loss of olfactory function.
|||
383 | Envisioning Safer Cities with AI | Artificial intelligence is providing new opportunities in a range of fields, from business to industrial design to entertainment. But how about civil engineering and city planning? How might machine- and deep-learning help us create safer, more sustainable, and resilient built environments?
A team of researchers from the NSF NHERI SimCenter, a computational modeling and simulation center for the natural hazards engineering community headquartered at the University of California, Berkeley, have developed a suite of tools called BRAILS - Building Recognition using AI at Large-Scale - that can automatically identify characteristics of buildings in a city and even detect the risks that a city's structures would face in an earthquake, hurricane, or tsunami. The team is comprised of researchers from UC Berkeley, International Computer Science Institute, Stanford, and UCLA.
Charles Wang, a postdoctoral researcher at the University of California, Berkeley, and the lead developer of BRAILS, says the project grew out of a need to quickly and reliably characterize the structures in a city.
"We want to simulate the impact of hazards on all of the buildings in a region, but we don't have a description of the building attributes," Wang said. "For example, in the San Francisco Bay area, there are millions of buildings. Using AI, we are able to get the needed information. We can train neural network models to infer building information from images and other sources of data."
A schematic showing the workflow for Building Recognition using AI at Large-Scale (BRAILS). [Credit: Chaofeng Wang, UC Berkeley]
BRAILS uses machine learning, deep learning, and computer vision to extract information about the built environment. It is envisioned as a tool for architects, engineers and planning professionals to more efficiently plan, design, and manage buildings and infrastructure systems.
The SimCenter recently released BRAILS version 2.0 which includes modules to predict a larger spectrum of building characteristics. These include occupancy class (commercial, single-family, or multi-family), roof type (flat, gabled, or hipped), foundation elevation, year built, number of floors, and whether a building has a "soft-story" - a civil engineering term for structures that include ground floors with large openings (like storefronts) that may be more prone to collapse during an earthquake.
The basic BRAILS framework developed by Wang and his collaborators automatically extracts building information from satellite and ground level images drawn from Google Maps and merges these with data from several sources, such as Microsoft Footprint Data and OpenStreetMap - a collaborative project to create a free editable map of the world. The framework also provides the option to fuse this data with tax records, city surveys, and other information, to complement the computer vision component.
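A rough sketch of what that fusion step can look like is below (illustrative only: the column names and records are hypothetical, and this is not the actual BRAILS API).

```python
import pandas as pd

# Hypothetical inputs: building footprints, per-building predictions from image
# classifiers, and a partial set of tax records.
footprints = pd.DataFrame({
    "building_id": [101, 102, 103],
    "lat": [37.871, 37.872, 37.873],
    "lon": [-122.273, -122.274, -122.275],
})
predictions = pd.DataFrame({
    "building_id": [101, 102, 103],
    "roof_type": ["gabled", "flat", "hipped"],  # e.g. from satellite images
    "num_floors": [2, 3, 2],                    # e.g. from street-level images
    "soft_story": [False, True, False],
})
tax_records = pd.DataFrame({
    "building_id": [101, 103],
    "year_built": [1978, 1995],
    "occupancy": ["single-family", "commercial"],
})

# Fuse the sources; a left join keeps every footprint even when a record is missing,
# leaving gaps that can later be filled statistically, as described below.
inventory = (footprints
             .merge(predictions, on="building_id", how="left")
             .merge(tax_records, on="building_id", how="left"))
print(inventory)
```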
"Given the importance of regional simulations and the need for large inventory data to execute these, machine learning is really the only option for making progress," noted SimCenter Principal Investigator and co-Director Sanjay Govindjee. "It is exciting to see civil engineers learning these new technologies and applying them to real world problems."
Leverage Crowdsourcing Power
Recently, the SimCenter launched a project on the citizen science web portal, Zooniverse, to collect additional labelled data. The project, called "Building Detective for Disaster Preparedness," enables the public to identify specific architectural features of structures, like roofs, windows, and chimneys. These labels will be used to train additional feature extraction modules.
"We launched the Zooniverse project in March and within a couple of weeks we had a thousand volunteers, and 20,000 images annotated," Wang said.
Since no data source is complete or fully accurate, BRAILS performs data enhancements using logical and statistical methods to fill in gaps. It also computes the uncertainty for its estimates.
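For instance, a simple statistical gap-fill with an attached uncertainty might look like the following sketch (hypothetical data; not the SimCenter's actual imputation code).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
inventory = pd.DataFrame({
    "building_id": [101, 102, 103, 104],
    "year_built": [1978.0, np.nan, 1995.0, np.nan],
})

observed = inventory["year_built"].dropna()
missing = inventory["year_built"].isna()

# Fill gaps by sampling from the observed distribution rather than a single constant,
# and record a rough uncertainty (the spread of observed values) for imputed entries.
inventory.loc[missing, "year_built"] = rng.choice(observed, size=missing.sum())
inventory["year_built_sigma"] = np.where(missing, observed.std(), 0.0)
print(inventory)
```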
The "Building Detective For Disaster Preparedness" project in Zooniverse invites citizen scientists to label data that helps train the BRAILS tool.
After developing and testing the accuracy of these modules individually, the team combined them to create the CityBuilder tool inside BRAILS. Inputting a given city or region into CityBuilder can automatically generate a characterization of every structure in that geographic area.
Wang and his collaborators performed a series of validation demonstrations, or as they call them, testbeds, to determine the accuracy of the AI-derived models. Each testbed generates an inventory of structures and simulates the impact of a hazard based on historical or plausible events. The team has created testbeds for earthquakes in San Francisco; and hurricanes in Lake Charles, Louisiana, the Texas coast, and Atlantic City, New Jersey.
"Our objectives are two-fold," Wang said. "First, to mitigate the damage in the future by doing simulations and providing results to decision- and policy-makers. And second, to use this data to quickly simulate a real scenario - instantly following a new event, before the reconnaissance team is deployed. We hope near-real-time simulation results can help guide emergency response with greater accuracy."
The team outlined their framework in the February 2021 issue of Automation in Construction. They showed that their neural network could generate realistic spatial distributions of buildings in a region and described how it could be used for large-scale natural hazard risk management using five coastal cities in New Jersey.
The team presented a testbed for Hurricane Laura (2020), the strongest hurricane to make landfall in Louisiana, at the 2021 Workshop on SHared Operational REsearch Logistics In the Nearshore Environment (SHORELINE21).
Computational Resources
To train the BRAILS modules and run the simulations, the researchers used supercomputers at the Texas Advanced Computing Center (TACC) - notably Frontera, the fastest academic supercomputer in the world, and Maverick 2, a GPU-based system designed for deep learning.
"For one model, the training could be finished in a few hours, but this depends on the number of images, the number of GPUs, the learning rate, etc.," Wang explained.
TACC, like the SimCenter, is a funded partner in the NSF NHERI program. TACC designed and maintains the DesignSafe-CI (Cyberinfrastructure) - a platform for computation, data analysis, and tools used by natural hazard researchers.
"This project is a great example of how advanced computing through DesignSafe can enable new avenues of natural hazards research and new tools, with many components of NHERI working together," said Ellen Rathje, professor of civil engineering at The University of Texas at Austin and principal investigator of the DesignSafe project.
BRAILS/CityBuilder is designed to work seamlessly with the SimCenter Regional Resilience Determination (R2D) tool. R2D is a graphical user interface for the SimCenter application framework for quantifying the regional impact from natural hazards. Its outputs include the damage state and the loss ratio - the percentage of a building's repair cost to its replacement value - of each building across an entire city or region, and the degree of confidence in the prediction.
"The hazard event simulations - applying wind fields or ground shaking to thousands or millions of buildings to assess the impact of a hurricane or earthquake - requires a lot of computing resources and time," Wang said. "For one city-wide simulation, depending on the size, it typically takes hours to run on TACC."
TACC is an ideal environment for this research, Wang says. It provides most of the computation his team needs. "Working on NSF projects related to DesignSafe, I can compute almost without limitations. It's awesome."
Impacts
To make our communities more resilient to natural hazards, we need to know what level of damage we will have in the future, to inform residents and policymakers about whether to strengthen buildings or move people to other places.
"That's what the simulation and modeling can provide, " Wang said. "All to create a more resilient built environment."
The team is comprised of researchers from UC Berkeley, UC Berkeley's International Computer Science Institute, Stanford, and UCLA. Contributing members of the team working on BRAILS are Yunhui Guo, Qian Yu, Sascha Hornauer, Barbaros Cetiner, Frank McKenna, Stella Yu, Ertugrul Taciroglu, Satish Rao, and Kincho Law. | University of California, Berkeley (UC Berkeley) researchers have designed an artificial intelligence toolkit for automatically identifying building properties, and for gauging urban structures' resilience. BRAILS (Building Recognition using AI at Large-Scale) applies machine learning, deep learning, and computer vision on data about the built environment as a tool for more efficient urban planning, design, and management of buildings and infrastructure. The basic BRAILS framework derives building characteristics from satellite and ground-level images drawn from Google Maps, combining them with data from sources like Microsoft Footprint Data and OpenStreetMap. The researchers trained the BRAILS modules and ran simulations using supercomputers at the Texas Advanced Computing Center. UC Berkeley's Charles Wang said the research aims "to create a more resilient built environment." | [] | [] | [] | scitechnews | None | None | None | None | University of California, Berkeley (UC Berkeley) researchers have designed an artificial intelligence toolkit for automatically identifying building properties, and for gauging urban structures' resilience. BRAILS (Building Recognition using AI at Large-Scale) applies machine learning, deep learning, and computer vision on data about the built environment as a tool for more efficient urban planning, design, and management of buildings and infrastructure. The basic BRAILS framework derives building characteristics from satellite and ground-level images drawn from Google Maps, combining them with data from sources like Microsoft Footprint Data and OpenStreetMap. The researchers trained the BRAILS modules and ran simulations using supercomputers at the Texas Advanced Computing Center. UC Berkeley's Charles Wang said the research aims "to create a more resilient built environment."
|||
384 | 'Blind' Robot Successfully Navigates Stairs | It's routine for four-legged robots with computer vision to navigate stairs, but getting a "blind" bipedal robot to do it is a whole other challenge. Now, researchers from Oregon State University have accomplished the feat with a bipedal robot called Cassie (from Agility Robotics) by training it in a simulator.
Why would you want a blind robot to navigate stairs? As the researchers point out, robots can't always rely completely on cameras or other sensors because of possible dim lighting, fog and other issues. So ideally, they'd also use "proprioception" (body awareness) to navigate unknown environments.
The researchers used a technique called sim-to-real Reinforcement Learning (RL) to establish how the robot will walk. They noted that "for biped locomotion, the training will involve many falls and crashes, especially early in training," so a simulator allowed them to do that without breaking the robot. They taught the robot virtually to handle a number of situations, including stairs and flat ground.
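As a cartoon of that train-in-simulation idea (not the controller used on Cassie; the environment and "policy" below are deliberately trivial), the sketch trains a single parameter against randomized simulated terrain; the real work replaces the toy environment with a physics simulator and the parameter with a neural-network policy updated by reinforcement learning.

```python
import random

class SimulatedTerrain:
    """Toy stand-in for a physics simulator: each episode presents flat ground,
    a curb, or a stair riser, sensed only through the reward (no cameras)."""
    def __init__(self):
        self.height = random.choice([0.0, 0.1, 0.17])  # metres

    def reward(self, step_height):
        # Better reward the closer the commanded step height is to the terrain.
        return -abs(step_height - self.height)

def train(episodes=5000, lr=0.05):
    policy = 0.0  # a one-parameter "policy": the preferred step height
    for _ in range(episodes):
        env = SimulatedTerrain()
        candidate = policy + random.gauss(0, 0.05)      # exploration noise
        if env.reward(candidate) > env.reward(policy):  # keep what works in simulation
            policy += lr * (candidate - policy)
    return policy

if __name__ == "__main__":
    print(f"learned step height: {train():.3f} m")  # falls in simulation cost nothing
```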
With simulated training done, the researchers took the robot around the university campus to tackle staircases and different types of terrain. It proved to be an apt pupil, handling curbs, logs and other uneven terrain that it had never seen before. On the stairs, the researchers did 10 trials ascending stairs and 10 descending, and it handled those with 80 percent and 100 percent efficiency, respectively.
There were a few caveats in the first trials, as the robot had to run at a standard speed - it tended to fail if it came in too fast or too slow. It's also highly dependent on a memory mechanism due to the challenge of navigating an unknown environment while blind. The researchers plan future tests to see if the efficiency improves with the addition of computer vision. All told though, "this work has demonstrated surprising capabilities for blind locomotion and leaves open the question of where the limits lie," they wrote. | A "blind" bipedal robot trained in a simulator by Oregon State University (OSU) researchers can negotiate varying terrain, including the climbing of stairs. The researchers applied sim-to-real Reinforcement Learning to establish how Agility Robotics' Cassie robot would ambulate. The OSU team taught Cassie virtually to manage various situations, including stairs and flat surfaces. In real-world tests, the robot could handle curbs, logs, and other uneven terrain it had never encountered before, and ascended and descended stairs with 80% and 100% efficiency, respectively. According to the researchers, "This work has demonstrated surprising capabilities for blind locomotion and leaves open the question of where the limits lie." | [] | [] | [] | scitechnews | None | None | None | None | A "blind" bipedal robot trained in a simulator by Oregon State University (OSU) researchers can negotiate varying terrain, including the climbing of stairs. The researchers applied sim-to-real Reinforcement Learning to establish how Agility Robotics' Cassie robot would ambulate. The OSU team taught Cassie virtually to manage various situations, including stairs and flat surfaces. In real-world tests, the robot could handle curbs, logs, and other uneven terrain it had never encountered before, and ascended and descended stairs with 80% and 100% efficiency, respectively. According to the researchers, "This work has demonstrated surprising capabilities for blind locomotion and leaves open the question of where the limits lie."
|||
387 | Amazon Blocks Police from Using Its Facial Recognition Software Indefinitely | Amazon has indefinitely extended its ban on police use of its facial recognition software, which lawmakers and company employees have said discriminates against African-Americans. When announcing the moratorium last June, the Internet retailer said it hoped a year would give Congress sufficient time to develop legislation regulating the ethical use of the technology. The American Civil Liberties Union (ACLU) said it was glad to see the ban extended indefinitely. ACLU's Nathan Freed said, "Now, the Biden administration and legislatures across the country must further protect communities from the dangers of this technology by ending its use by law enforcement entirely, regardless which company is selling it." | [] | [] | [] | scitechnews | None | None | None | None | Amazon has indefinitely extended its ban on police use of its facial recognition software, which lawmakers and company employees have said discriminates against African-Americans. When announcing the moratorium last June, the Internet retailer said it hoped a year would give Congress sufficient time to develop legislation regulating the ethical use of the technology. The American Civil Liberties Union (ACLU) said it was glad to see the ban extended indefinitely. ACLU's Nathan Freed said, "Now, the Biden administration and legislatures across the country must further protect communities from the dangers of this technology by ending its use by law enforcement entirely, regardless which company is selling it."
|
||||
388 | Helping Drone Swarms Avoid Obstacles Without Hitting Each Other | There is strength in numbers. That's true not only for humans, but for drones too. By flying in a swarm, they can cover larger areas and collect a wider range of data, since each drone can be equipped with different sensors.
Preventing drones from bumping into each other
One reason why drone swarms haven't been used more widely is the risk of gridlock within the swarm. Studies on the collective movement of animals show that each agent tends to coordinate its movements with the others, adjusting its trajectory so as to keep a safe inter-agent distance or to travel in alignment, for example.
"In a drone swarm, when one drone changes its trajectory to avoid an obstacle, its neighbors automatically synchronize their movements accordingly," says Dario Floreano, a professor at EPFL's School of Engineering and head of the Laboratory of Intelligent Systems (LIS) . "But that often causes the swarm to slow down, generates gridlock within the swarm or even leads to collisions."
Not just reacting, but also predicting
Enrica Soria, a PhD student at LIS, has come up with a new method for getting around that problem. She has developed a predictive control model that allows drones to not just react to others in a swarm, but also to anticipate their own movements and predict those of their neighbors. "Our model gives drones the ability to determine when a neighbor is about to slow down, meaning the slowdown has less of an effect on their own flight," says Soria. The model works by programming in locally controlled, simple rules, such as a minimum inter-agent distance to maintain, a set velocity to keep, or a specific direction to follow. Soria's work has just been published in Nature Machine Intelligence.
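To make those locally programmed rules concrete, here is an illustrative sketch (not the published predictive controller; the gains and constants are invented) of how one drone could combine separation, velocity alignment, and a shared cruise velocity using only information about nearby neighbors.

```python
import numpy as np

MIN_DIST = 2.0                        # assumed minimum inter-agent distance (m)
CRUISE = np.array([1.0, 0.0, 0.0])    # assumed shared cruise velocity for the swarm (m/s)

def local_command(own_pos, own_vel, neighbor_pos, neighbor_vel, dt=0.1):
    """Blend separation, alignment and the cruise velocity into one velocity command."""
    separation = np.zeros(3)
    for p in neighbor_pos:
        offset = own_pos - p
        d = np.linalg.norm(offset)
        if 1e-6 < d < MIN_DIST:
            separation += offset / d * (MIN_DIST - d)   # push away from close neighbors
    alignment = (np.mean(neighbor_vel, axis=0) - own_vel) if neighbor_vel else np.zeros(3)
    command = own_vel + 1.5 * separation + 0.5 * alignment + 0.3 * (CRUISE - own_vel)
    return own_pos + command * dt, command              # next position, commanded velocity

# One drone reacting to two nearby neighbors:
pos, vel = np.array([0.0, 0.0, 10.0]), np.array([1.0, 0.0, 0.0])
n_pos = [np.array([1.0, 0.5, 10.0]), np.array([-1.5, 0.0, 10.0])]
n_vel = [np.array([0.9, 0.1, 0.0]), np.array([1.1, 0.0, 0.0])]
print(local_command(pos, vel, n_pos, n_vel))
```

In the published model, each drone additionally predicts its own and its neighbors' upcoming movements, which is what lets it anticipate a slowdown rather than merely react to it.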
With Soria's model, drones are much less dependent on commands issued by a central computer. Drones in aerial light shows, for example, get their instructions from a computer that calculates each one's trajectory to avoid a collision. "But with our model, drones are commanded using local information and can modify their trajectories autonomously," says Soria.
A model inspired by nature
Tests run at LIS show that Soria's system improves the speed, order and safety of drone swarms in areas with a lot of obstacles. "We don't yet know if, or to what extent, animals are able to predict the movements of those around them," says Floreano. "But biologists have recently suggested that the synchronized direction changes observed in some large groups would require a more sophisticated cognitive ability than what has been believed until now." | A predictive control model developed by engineers at Swiss Federal Institute of Technology Lausanne (EPFL) allows individual drones to predict their own behavior and that of neighboring drones in a swarm, to keep them from bumping into each other. EPFL's Enrica Soria said, "Our model gives drones the ability to determine when a neighbor is about to slow down, meaning the slowdown has less of an effect on their own flight." In the new model, Soria explained, "Drones are commanded using local information and can modify their trajectories autonomously." Tests conducted in the university's Laboratory of Intelligent Systems found that in areas with multiple obstacles, the model improves a drone swarm's speed, order, and safety. | [] | [] | [] | scitechnews | None | None | None | None | A predictive control model developed by engineers at Swiss Federal Institute of Technology Lausanne (EPFL) allows individual drones to predict their own behavior and that of neighboring drones in a swarm, to keep them from bumping into each other. EPFL's Enrica Soria said, "Our model gives drones the ability to determine when a neighbor is about to slow down, meaning the slowdown has less of an effect on their own flight." In the new model, Soria explained, "Drones are commanded using local information and can modify their trajectories autonomously." Tests conducted in the university's Laboratory of Intelligent Systems found that in areas with multiple obstacles, the model improves a drone swarm's speed, order, and safety.
|||
389 | France Embraces Google, Microsoft in Quest to Safeguard Sensitive Data | The French government has indicated that cloud computing technology developed by Google and Microsoft could be used to store sensitive state and corporate data, providing it is licensed to French companies. French Finance Minister Bruno Le Maire acknowledged U.S. technological superiority in the field but said guaranteeing the location of servers on French soil, and European ownership of the companies that store and process the data, could help ensure a "trustworthy" cloud computing alternative. Companies that offer cloud computing services that meet these principles and other conditions set forth by France's cybersecurity agency ANSSI could receive a "trustworthy cloud" designation. Two French companies already meet the criteria. | [] | [] | [] | scitechnews | None | None | None | None | The French government has indicated that cloud computing technology developed by Google and Microsoft could be used to store sensitive state and corporate data, providing it is licensed to French companies. French Finance Minister Bruno Le Maire acknowledged U.S. technological superiority in the field but said guaranteeing the location of servers on French soil, and European ownership of the companies that store and process the data, could help ensure a "trustworthy" cloud computing alternative. Companies that offer cloud computing services that meet these principles and other conditions set forth by France's cybersecurity agency ANSSI could receive a "trustworthy cloud" designation. Two French companies already meet the criteria.
|
||||
391 | COVID-19 Wrecked the Algorithms That Set Airfares, but They Won't Stay Dumb | The COVID-19 pandemic crippled the reliability of algorithms used to set air fares based on historical data and has accelerated a hybrid model that combines historical and live data. Before the pandemic, airlines used the algorithms to predict how strong ticket demand would be on a particular day and time, or exactly when people will fly to visit relatives before a holiday. Corporate travel constitutes a large share of airline profits, with business fliers avoiding Tuesdays and Wednesdays, favoring short trips over week-long ones, and booking late. The pandemic undermined historical demand patterns while cancellations undercut live data, causing the algorithms to post absurd prices. Overall, the pandemic has stress-tested useful advancements to the algorithms, like assigning greater weight to recent booking numbers, and applying online searches to forecast when and where demand will manifest. | [] | [] | [] | scitechnews | None | None | None | None | The COVID-19 pandemic crippled the reliability of algorithms used to set air fares based on historical data and has accelerated a hybrid model that combines historical and live data. Before the pandemic, airlines used the algorithms to predict how strong ticket demand would be on a particular day and time, or exactly when people will fly to visit relatives before a holiday. Corporate travel constitutes a large share of airline profits, with business fliers avoiding Tuesdays and Wednesdays, favoring short trips over week-long ones, and booking late. The pandemic undermined historical demand patterns while cancellations undercut live data, causing the algorithms to post absurd prices. Overall, the pandemic has stress-tested useful advancements to the algorithms, like assigning greater weight to recent booking numbers, and applying online searches to forecast when and where demand will manifest.
392 | AI Uses Timing, Weather Data to Accurately Predict Cardiac Arrest Risk | Machine learning model combines timing and weather data.
A branch of artificial intelligence (AI), called machine learning, can accurately predict the risk of an out-of-hospital cardiac arrest - when the heart suddenly stops beating - using a combination of timing and weather data, finds research published online in the journal Heart.
Machine learning is the study of computer algorithms and is based on the idea that systems can learn from data and identify patterns to inform decisions with minimal intervention.
The risk of a cardiac arrest was highest on Sundays, Mondays, public holidays and when temperatures dropped sharply within or between days, the findings show.
This information could be used as an early warning system for citizens, to lower their risk and improve their chances of survival, and to improve the preparedness of emergency medical services, suggest the researchers.
Out of hospital cardiac arrest is common around the world, but is generally associated with low rates of survival. Risk is affected by prevailing weather conditions.
But meteorological data are extensive and complex, and machine learning has the potential to pick up associations not identified by conventional one-dimensional statistical approaches, say the Japanese researchers.
To explore this further, they assessed the capacity of machine learning to predict daily out-of-hospital cardiac arrest, using daily weather (temperature, relative humidity, rainfall, snowfall, cloud cover, wind speed, and atmospheric pressure readings) and timing (year, season, day of the week, hour of the day, and public holidays) data.
Of 1,299,784 cases occurring between 2005 and 2013, machine learning was applied to 525,374, using either weather or timing data, or both (training dataset).
The results were then compared with 135,678 cases occurring in 2014-15 to test the accuracy of the model for predicting the number of daily cardiac arrests in other years (testing dataset).
And to see how accurate the approach might be at the local level, the researchers carried out a 'heatmap analysis,' using another dataset drawn from the location of out-of-hospital cardiac arrests in Kobe city between January 2016 and December 2018.
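As a rough illustration of the approach described above - training on the 2005-2013 cases and testing on 2014-15 - the Python sketch below fits a simple regressor on combined weather and timing features to predict daily case counts. The file name, column names, and choice of a gradient-boosting model are assumptions made for illustration only; the study's actual implementation is not described in this article.

    # Minimal sketch: predict daily out-of-hospital cardiac arrest (OHCA) counts
    # from combined weather and timing features. File name, column names, and the
    # choice of a gradient-boosting regressor are illustrative assumptions only.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    df = pd.read_csv("daily_ohca.csv", parse_dates=["date"])  # hypothetical dataset

    # Timing features derived from the date
    df["year"] = df["date"].dt.year
    df["month"] = df["date"].dt.month
    df["day_of_week"] = df["date"].dt.dayofweek

    weather_cols = ["temp_mean", "humidity", "rainfall", "snowfall",
                    "cloud_cover", "wind_speed", "pressure"]      # hypothetical columns
    timing_cols = ["year", "month", "day_of_week", "is_holiday"]  # is_holiday: 0/1 flag
    features = weather_cols + timing_cols

    # Mirror the split described above: train on 2005-2013, test on 2014-15
    train = df[df["year"] <= 2013]
    test = df[df["year"].between(2014, 2015)]

    model = GradientBoostingRegressor(random_state=0)
    model.fit(train[features], train["ohca_count"])

    pred = model.predict(test[features])
    print("MAE on 2014-15 hold-out:", mean_absolute_error(test["ohca_count"], pred))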
The combination of weather and timing data most accurately predicted an out-of-hospital cardiac arrest in both the training and testing datasets.
It predicted that Sundays, Mondays, public holidays, winter, low temperatures, and sharp temperature drops within and between days were more strongly associated with cardiac arrest than either the weather or timing data alone.
The researchers acknowledge that they didn't have detailed information on the location of cardiac arrests except in Kobe city, nor did they have any data on pre-existing medical conditions, both of which may have influenced the results.
But they suggest: "Our predictive model for daily incidence of [out of hospital cardiac arrest] is widely generalizable for the general population in developed countries, because this study had a large sample size and used comprehensive meteorological data."
They add: "The methods developed in this study serve as an example of a new model for predictive analytics that could be applied to other clinical outcomes of interest related to life-threatening acute cardiovascular disease."
And they conclude: "This predictive model may be useful for preventing [out of hospital cardiac arrest] and improving the prognosis of patients...via a warning system for citizens and [emergency medical services] on high-risk days in the future."
In a linked editorial, Dr. David Foster Gaieski, of Sidney Kimmel Medical College at Thomas Jefferson University, agrees.
"Knowing what the weather will most likely be in the coming week can generate 'cardiovascular emergency warnings' for people at risk - notifying the elderly and others about upcoming periods of increased danger similar to how weather data are used to notify people of upcoming hazardous road conditions during winter storms," he explains.
"These predictions can be used for resource deployment, scheduling, and planning so that emergency medical services systems, emergency department resuscitation resources, and cardiac catheterization laboratory staff are aware of, and prepared for, the number of expected [cases] during the coming days," he adds.
References:
"Machine learning model for predicting out-of-hospital cardiac arrests using meteorological and chronological data" by Takahiro Nakashima, Soshiro Ogata, Teruo Noguchi, Yoshio Tahara, Daisuke Onozuka, Satoshi Kato, Yoshiki Yamagata, Sunao Kojima, Taku Iwami, Tetsuya Sakamoto, Ken Nagao, Hiroshi Nonogi, Satoshi Yasuda, Koji Iihara, Robert Neumar and Kunihiro Nishimura, 17 May 2021, Heart. DOI: 10.1136/heartjnl-2020-318726
"Next week's weather forecast: cloudy, cold, with a chance of cardiac arrest" by David Foster Gaieski, 17 May 2021, Heart. DOI: 10.1136/heartjnl-2021-318950
Funding: Environmental Restoration and Conservation Agency of Japan; Japan Society for the Promotion of Science; Intramural Research Fund of Cardiovascular Disease of the National Cerebral and Cardiovascular Centre
393 | Top Educational Apps for Children Might Not Be as Beneficial as Promised | UNIVERSITY PARK, Pa. - Log on to any app store, and parents will find hundreds of options for children that claim to be educational. But new research suggests these apps might not be as beneficial to children as they seem.
A new study analyzed some of the most downloaded educational apps for kids, using a set of four criteria designed to evaluate whether an app provides a high-quality educational experience for children. The researchers found that most of the apps scored low, with free apps scoring even lower than their paid counterparts on some criteria.
Jennifer Zosh, associate professor of human development and family studies at Penn State Brandywine, said the study - recently published in the Journal of Children and Media - suggests apps shouldn't replace human interaction nor do they guarantee learning.
"Parents shouldn't automatically trust that something marked 'educational' in an app store is actually educational," Zosh said. "By co-playing apps with their children, talking to them about what is happening as they play, pointing out what is happening in the real world that relates to something shown in an app, and selecting apps that minimize distraction, they are able to leverage the pillars of learning and can successfully navigate this new digital childhood."
According to previous research, about 98% of kids ages eight and under live in a home with some type of mobile device, like a smartphone or tablet. While watching videos and playing games are popular ways children spend their time on these devices, the researchers said there are also many apps that are not only popular but claim to be educational.
Marisa Meyer, a research assistant at the University of Michigan, said the idea for the study came about while reviewing the top-downloaded apps on the Google Play marketplace for a separate research project.
"We noticed a concerning number of apps being marketed to children as 'educational' without reputable justification or verification of these educational claims," Meyer said. "Our study was an effort to create a coding scheme that would allow us to evaluate apps marketed as educational and have a framework to verify, or refute, those claims."
For the study, the researchers developed a system for evaluating educational apps that was based on Zosh's previous work in the journal Psychological Science in the Public Interest, which used decades of research on the science of learning to uncover the "pillars" of learning - or the contexts and traits of truly educational experiences. In that piece, Zosh said, "we explored how these pillars might give us insight into how to leverage new technology to create truly educational experiences for young children."
In the current study, Zosh and the other researchers tested the apps children are actually using against these pillars to uncover what today's apps are doing well and where they struggle in supporting learning in young children. The researchers deemed an app high-quality based on how it performed across each pillar.
"The first pillar is to facilitate active, minds-on thinking in the children - asking them to question, guess, evaluate and think deeply, rather than simply tapping or reacting to on-screen stimuli," Zosh said. "The second is that it helps children stay tuned into the learning at hand, rather than distracting them with overwhelming sound effects, flash ads and gimmicky rewards."
The researchers said the third pillar is that the app contain relevant and meaningful content that facilitates a connection of app-based learning to the user's external world. Finally, the fourth pillar is that the app provides opportunities for social interaction, either in-person or mediated by the screen.
The top 100 children's educational apps from the Google Play and Apple app stores, as well as 24 apps most frequently played by preschool-age children in a separate longitudinal cohort study, were analyzed for the study. Each app was given a score of "0" (low) to "3" (high) for each pillar. Apps that had a combined score of less than five after adding the scores for each pillar were considered low quality.
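To make the scoring rule concrete, a small worked example is sketched below in Python; the pillar keys are paraphrased from the article and the app ratings are invented purely for illustration.

    # Worked example of the scoring rule described above: each app is rated 0-3 on
    # four pillars, and a combined score below 5 is treated as low quality. The
    # pillar keys are paraphrased and the example ratings are invented.
    PILLARS = ["active_thinking", "engagement", "meaningful_content", "social_interaction"]

    def classify(ratings):
        total = sum(ratings[p] for p in PILLARS)
        return total, ("low quality" if total < 5 else "higher quality")

    example_app = {"active_thinking": 1, "engagement": 1,
                   "meaningful_content": 1, "social_interaction": 0}
    print(classify(example_app))  # -> (3, 'low quality')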
After analyzing the data, the researchers found that a score of "1" was the most common rating for all four pillars. For the fourth pillar - Social Interaction - a score of "0" was the second most common rating.
According to the researchers, because these apps might not provide high-quality educational experiences for kids, they risk parents choosing them over other activities - such as reading, physical activity or pretend play - that actually could be more beneficial.
Meyer added that the study also has implications not just for parents, but for app developers, as well.
"If app designers intend to engender and advertise educational gains through use of their apps, we recommend collaborating with child development experts in order to develop apps rooted in the ways children learn most effectively," Meyer said. "We also recommend that app designers and app stores work with child development experts to create evidence-based ratings of apps, so that higher-quality products with fewer distracting enhancements can be easily identified by parents."
Caroline McLaren, University of Michigan; Michael Robb, Common Sense Media; Harlan McCafferty, University of Michigan; Roberta Michnick Golinkoff, University of Delaware; and Kathy Hirsh-Pasek, Temple University, also participated in this work.
The National Institute of Child Health and Human Development helped support this research.
394 | Exxon Mobil's Messaging Shifted Blame for Warming to Consumers | Exxon Mobil Corp. has used language to systematically shift blame for climate change from fossil fuel companies onto consumers, according to a new paper by Harvard University researchers.
The paper, published yesterday in the journal One Earth, could bolster efforts to hold the oil giant accountable in court for its alleged deception about global warming.
"This is the first computational assessment of how Exxon Mobil has used language in subtle yet systematic ways to shape the way the public talks about and thinks about climate change," Geoffrey Supran, a research fellow at Harvard and co-author of the paper, said in an interview with E&E News.
"One of our overall findings is that Exxon Mobil has used rhetoric mimicking the tobacco industry to downplay the reality and seriousness of climate change and to shift responsibility for climate change away from itself and onto consumers," he added.
A spokesperson for Exxon Mobil disputed the paper, calling it part of a coordinated legal campaign against the company.
Supran and co-author Naomi Oreskes, a professor of the history of science at Harvard (and Scientific American columnist), conducted a computational analysis of 180 Exxon Mobil documents from 1972 to 2019, including peer-reviewed publications, advertorials in The New York Times and internal memos.
Using a series of algorithms, the Harvard researchers found that Exxon Mobil had privately relied on some terms while publicly avoiding them altogether.
For example, Exxon Mobil's internal documents frequently described climate change as a problem caused by "fossil fuel combustion." But in public-facing documents, the company referred to global warming as a problem caused by the "energy demand" of "consumers."
The public communications sought to deflect responsibility for climate change away from the oil giant and onto individual consumers who heat their homes and fill their cars with gas, the researchers wrote.
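The kind of comparison involved can be sketched in a few lines of Python: count how often selected terms appear in each corpus and contrast their relative frequencies. This is only a toy illustration - the documents and terms below are invented, and the authors' actual pipeline is more sophisticated.

    # Toy illustration of contrasting term frequencies between two document sets
    # (e.g., internal vs. public-facing). The documents and terms are invented and
    # this is not the authors' actual pipeline.
    from collections import Counter
    import re

    def term_freq(docs, terms):
        words = Counter(re.findall(r"[a-z]+", " ".join(docs).lower()))
        total = sum(words.values()) or 1
        return {t: round(words[t] / total, 3) for t in terms}

    internal = ["fossil fuel combustion is the dominant source of rising emissions"]
    public = ["meeting consumers growing energy demand while managing climate risk"]

    terms = ["combustion", "demand", "risk", "consumers"]
    print("internal:", term_freq(internal, terms))
    print("public:  ", term_freq(public, terms))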
Following their merger in 1999, Exxon and Mobil also increasingly presented climate change as a "risk," rather than a reality, the analysis found.
This language sought to minimize the dangers of global warming while not denying their existence outright, the paper says.
The researchers drew a direct comparison between Exxon Mobil and major tobacco companies, which they said used similar tactics to shape public discourse about smoking cigarettes.
"These patterns mimic the tobacco industry's documented strategy of shifting responsibility away from corporations - which knowingly sold a deadly product while denying its harms - and onto consumers," they wrote.
In the 1990s, attorneys general from all 50 states sued the largest U.S. tobacco companies over their alleged deception about the harmful health effects of smoking cigarettes and the addictive nature of nicotine. The suits culminated in a $206 billion master settlement agreement (Climatewire, March 10).
Since 2017, five states and more than a dozen municipalities have filed similar suits alleging that the biggest U.S. oil and gas companies misled the public about the climate risks of burning fossil fuels. The complaints ask the oil industry to help cover the costs of addressing floods, wildfires and other disasters fueled by rising global temperatures.
Supran said that although he didn't conduct the study with the climate liability litigation in mind, the findings can nonetheless inform the legal fights.
"Obviously, we did this completely independently. I've never spoken to any of the lawyers involved," Supran said. "But certainly with hindsight, our insights may be relevant, especially to these more nascent cases alleging deceptive marketing."
Last month, New York City filed a lawsuit asserting that Exxon Mobil, BP PLC and Royal Dutch Shell PLC violated the city's consumer protection law by engaging in "green washing," or the practice of making their fossil fuel products seem more environmentally friendly than they really are.
Asked for comment on the paper, Exxon spokesperson Casey Norton said in an email to E&E News: "This research is clearly part of a litigation strategy against Exxon Mobil and other energy companies."
Norton noted that Oreskes is on retainer with Sher Edling LLP, a San Francisco-based law firm that represents a slew of the challengers in the climate liability suits. "Oreskes did not disclose this blatant conflict of interest," he said.
Norton also pointed to the paper's partial funding from the Rockefeller family, which Exxon has accused of supporting a climate conspiracy against the company (Climatewire, May 4).
"The research was paid for by the Rockefeller Family Fund, which is helping finance climate change litigation against energy companies," he said. "This follows a previous study attacking Exxon Mobil that used a similar discredited methodology."
Norton was referring to a 2017 paper by Supran and Oreskes in the journal Environmental Research Letters that found Exxon's communications from 1977 to 2014 misled the public about climate science.
Vijay Swarup, Exxon's vice president of research and development, previously blasted the 2017 paper as "fundamentally flawed" in a comment in Environmental Research Letters last year.
Norton concluded by stressing that Exxon supports climate action, noting that the oil giant recently proposed a massive carbon capture project in Houston (Energywire, April 20).
"Exxon Mobil supports the Paris climate agreement, and is working to reduce company emissions and helping customers reduce their emissions while working on new lower-emission technologies and advocating for effective policies," he said.
In a joint statement to E&E News, Supran and Oreskes pushed back on Norton's allegations.
"Sher Edling played no role in the paper we published today, nor in any other academic work we have done. They have not funded any of our studies; they have not reviewed our data or interpretations prior to its peer-reviewed publication; and we have never discussed any of our work concerning ExxonMobil with them. Therefore, there is no conflict of interest and nothing to disclose," the researchers said.
"For context, we note that over the past four years, ExxonMobil have attacked us and our work," they added. "It has become a familiar pattern. We publish science, ExxonMobil offers spin and character assassination. ExxonMobil is now misleading the public about its history of misleading the public."
Reprinted from E&E News with permission from POLITICO, LLC. Copyright 2021. E&E News provides essential news for energy and environment professionals.
395 | Using ML to Predict High-Impact Research | An artificial intelligence framework built by MIT researchers can give an "early-alert" signal for future high-impact technologies, by learning from patterns gleaned from previous scientific publications.
In a retrospective test of its capabilities, DELPHI, short for Dynamic Early-warning by Learning to Predict High Impact, was able to identify all pioneering papers on an experts' list of key foundational biotechnologies, sometimes as early as the first year after their publication.
James W. Weis, a research affiliate of the MIT Media Lab, and Joseph Jacobson, a professor of media arts and sciences and head of the Media Lab's Molecular Machines research group, also used DELPHI to highlight 50 recent scientific papers that they predict will be high impact by 2023. Topics covered by the papers include DNA nanorobots used for cancer treatment, high-energy density lithium-oxygen batteries, and chemical synthesis using deep neural networks, among others.
The researchers see DELPHI as a tool that can help humans better leverage funding for scientific research, identifying "diamond in the rough" technologies that might otherwise languish and offering a way for governments, philanthropies, and venture capital firms to more efficiently and productively support science.
"In essence, our algorithm functions by learning patterns from the history of science, and then pattern-matching on new publications to find early signals of high impact," says Weis. "By tracking the early spread of ideas, we can predict how likely they are to go viral or spread to the broader academic community in a meaningful way."
The paper has been published in Nature Biotechnology.
Searching for the "diamond in the rough"
The machine learning algorithm developed by Weis and Jacobson takes advantage of the vast amount of digital information that is now available with the exponential growth in scientific publication since the 1980s. But instead of using one-dimensional measures, such as the number of citations, to judge a publication's impact, DELPHI was trained on a full time-series network of journal article metadata to reveal higher-dimensional patterns in their spread across the scientific ecosystem.
The result is a knowledge graph that contains the connections between nodes representing papers, authors, institutions, and other types of data. The strength and type of the complex connections between these nodes determine their properties, which are used in the framework. "These nodes and edges define a time-based graph that DELPHI uses to learn patterns that are predictive of high future impact," explains Weis.
Together, these network features are used to predict scientific impact, with papers that fall in the top 5 percent of time-scaled node centrality five years after publication considered the "highly impactful" target set that DELPHI aims to identify. These top 5 percent of papers constitute 35 percent of the total impact in the graph. DELPHI can also use cutoffs of the top 1, 10, and 15 percent of time-scaled node centrality, the authors say.
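A conceptual sketch of that labeling step might look like the Python below: build a graph over papers and related metadata nodes, score each paper's centrality, and flag the top 5 percent as the high-impact target class. The edges are invented and plain eigenvector centrality is used as a stand-in for DELPHI's time-scaled measure, so this is only a toy proxy, not the authors' code.

    # Conceptual sketch of the labeling step: build a graph over papers and other
    # metadata nodes, score each paper's centrality, and mark the top 5 percent as
    # the high-impact target class. Edges are invented and plain eigenvector
    # centrality stands in for DELPHI's time-scaled measure.
    import networkx as nx
    import numpy as np

    G = nx.Graph()
    # Hypothetical links drawn from publication metadata: paper-paper citations
    # ("p*" nodes) and paper-author relationships ("a*" nodes)
    G.add_edges_from([("p1", "p2"), ("p1", "a1"), ("p2", "a2"), ("p3", "p1"),
                      ("p4", "p1"), ("p5", "p3"), ("p5", "a1")])

    centrality = nx.eigenvector_centrality(G, max_iter=1000)
    papers = {n: c for n, c in centrality.items() if n.startswith("p")}

    cutoff = np.percentile(list(papers.values()), 95)  # top 5% of papers
    high_impact = sorted(p for p, c in papers.items() if c >= cutoff)
    print("high-impact targets:", high_impact)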
DELPHI suggests that highly impactful papers spread almost virally outside their disciplines and smaller scientific communities. Two papers can have the same number of citations, but highly impactful papers reach a broader and deeper audience. Low-impact papers, on the other hand, "aren't really being utilized and leveraged by an expanding group of people," says Weis.
The framework might be useful in "incentivizing teams of people to work together, even if they don't already know each other - perhaps by directing funding toward them to come together to work on important multidisciplinary problems," he adds.
Compared to citation number alone, DELPHI identifies over twice the number of highly impactful papers, including 60 percent of "hidden gems," or papers that would be missed by a citation threshold.
"Advancing fundamental research is about taking lots of shots on goal and then being able to quickly double down on the best of those ideas," says Jacobson. "This study was about seeing whether we could do that process in a more scaled way, by using the scientific community as a whole, as embedded in the academic graph, as well as being more inclusive in identifying high-impact research directions."
The researchers were surprised at how early in some cases the "alert signal" of a highly impactful paper shows up using DELPHI. "Within one year of publication we are already identifying hidden gems that will have significant impact later on," says Weis.
He cautions, however, that DELPHI isn't exactly predicting the future. "We're using machine learning to extract and quantify signals that are hidden in the dimensionality and dynamics of the data that already exist."
Fair, efficient, and effective funding
The hope, the researchers say, is that DELPHI will offer a less-biased way to evaluate a paper's impact, as other measures such as citations and journal impact factor number can be manipulated, as past studies have shown.
"We hope we can use this to find the most deserving research and researchers, regardless of what institutions they're affiliated with or how connected they are," Weis says.
As with all machine learning frameworks, however, designers and users should be alert to bias, he adds. "We need to constantly be aware of potential biases in our data and models. We want DELPHI to help find the best research in a less-biased way - so we need to be careful our models are not learning to predict future impact solely on the basis of sub-optimal metrics like h-Index, author citation count, or institutional affiliation."
DELPHI could be a powerful tool to help scientific funding become more efficient and effective, and perhaps be used to create new classes of financial products related to science investment.
"The emerging metascience of science funding is pointing toward the need for a portfolio approach to scientific investment," notes David Lang, executive director of the Experiment Foundation. "Weis and Jacobson have made a significant contribution to that understanding and, more importantly, its implementation with DELPHI."
It's something Weis has thought about a lot after his own experiences in launching venture capital funds and laboratory incubation facilities for biotechnology startups.
"I became increasingly cognizant that investors, including myself, were consistently looking for new companies in the same spots and with the same preconceptions," he says. "There's a giant wealth of highly-talented people and amazing technology that I started to glimpse, but that is often overlooked. I thought there must be a way to work in this space - and that machine learning could help us find and more effectively realize all this unmined potential."
397 | Academics Edge Closer to Research on Cloud Platforms | In the race to harness the power of cloud computing, and further develop artificial intelligence, academics have a new concern: falling behind a fast-moving tech industry.
In the US, 22 higher education institutions, including Stanford and Carnegie Mellon, have signed up to a National Research Cloud initiative seeking access to the computational power they need to keep up.
It is one of several cloud projects being called for by academics globally, and is being explored by the US Congress, given the potential of the technology to deliver breakthroughs in healthcare and climate change.
Under the US proposal, authored by Fei-Fei Li and John Etchemendy from the Stanford Institute for Human-Centered Artificial Intelligence, a national cloud platform would enable more academic and industry researchers to work at the leading edge of AI, and help train a new generation of experts.
Li and Etchemendy's NRC proposal cautions about declining government funding for basic and foundational research and highlights the US's history of federally funding research into innovations - from gene sequencing to the internet itself. However, between 2000 and 2017, the share of basic research funded by the US federal government declined from 58 to 42 per cent.
"If we kneecap academia from being an active participant in AI development, we threaten the innovation ecosystem as a whole," warns Li, who is the Sequoia Professor in computer science at Stanford University, a former vice-president at Google and, as of last year, an independent director at Twitter. "Academia provides a space to think beyond profits and pushes boundaries in science that can benefit all of humanity."
US Congress has authorised an AI task force to explore the viability of the NRC and make recommendations in the coming months. Amazon, Google and Microsoft have also backed the idea of a US research cloud.
Here, the US may seek inspiration from Australia, which established a national research cloud in 2012. That was made possible thanks to the country having "a critical mass of researchers but [being] small enough to co-ordinate a national initiative," says Rosie Hicks, chief executive of the Australian Research Data Commons, which co-ordinated the project.
The ARDC helps researchers to analyse, share and retain large and complex data sets in fields including supercomputing, nanofabrication, health, environment and urban sustainability.
The project, which has received nearly A$200m ($160m) in federal funding, goes beyond offering access to data storage and collaboration tools, and is actively promoting the expansion of research infrastructure.
"We are not interested in a vanilla computing environment. We want to be pushing boundaries and create an environment that researchers cannot get elsewhere," says Hicks.
Meanwhile, Europe - a research powerhouse hosting some of the world's leading universities and technical facilities - is building a continental infrastructure called the European Open Science Cloud (EOSC).
Juan Bicarregui, co-ordinator of the pilot phase of EOSC, describes the European project as a "web of shared data for research," which is supporting roughly 20 projects. The goal is not to create a public cloud but to "bring together existing infrastructures," Bicarregui says.
Its vision is to empower the continent's 1.7m science researchers to collaborate more effectively through sharing data, software and methodologies.
"The way science has been communicated for the past 300 years is through publishing academic papers," says Bicarregui. The EOSC, in contrast, aims to make research and data accessible to all by allowing open access not only to academic papers but also to the data, methodologies and software involved.
This could help researchers avoid duplication and tackle the "reproducibility crisis" - the difficulty in trying to replicate a study to test its outcomes. This will enable researchers to understand more about the development of projects and ultimately accelerate innovation.
"The open science movement is about changing culture," says Bicarregui. "Data is what enables publications, so people can be worried about sharing data, thinking that others are going to scoop their idea. But that's only because of how much we obsess about publications. The rewards and recognition mechanisms of science have not yet evolved to rewarding more data-sharing."
A second motivator for the initiative is the trend towards larger teams in scientific research, as the community turns its attention to knottier problems.
"The big challenges of the [UN's] Sustainable Development Goals and the experience of the pandemic, require a multidisciplinary approach to solve," says Bicarregui. "EOSC is about putting in place the infrastructure you need for that collaboration." | A group of 22 higher education institutions, concerned about trailing the technology sector in tapping cloud computing and advancing artificial intelligence (AI), have enrolled in a National Research Cloud initiative to access the computational power they need to keep pace. The proposal for the initiative, authored by Fei-Fei Li and John Etchemendy of the Stanford Institute for Human-Centered Artificial Intelligence, would establish a national cloud platform on which academics and industry players could work on the advancement of AI, while helping to train new experts in the field. Li said, "If we kneecap academia from being an active participant in AI development, we threaten the innovation ecosystem as a whole." | [] | [] | [] | scitechnews | None | None | None | None | A group of 22 higher education institutions, concerned about trailing the technology sector in tapping cloud computing and advancing artificial intelligence (AI), have enrolled in a National Research Cloud initiative to access the computational power they need to keep pace. The proposal for the initiative, authored by Fei-Fei Li and John Etchemendy of the Stanford Institute for Human-Centered Artificial Intelligence, would establish a national cloud platform on which academics and industry players could work on the advancement of AI, while helping to train new experts in the field. Li said, "If we kneecap academia from being an active participant in AI development, we threaten the innovation ecosystem as a whole."
398 | Facebook Loses Bid to Block Ruling on EU-U.S. Data Flows | Facebook has lost its attempt to block a European Union privacy ruling that could bar its sending of information about European users to U.S. computer servers. Ireland's High Court rejected Facebook's procedural complaints about a preliminary decision on data flows from the country's Data Protection Commission (DPC), dismissing Facebook's arguments that the regulator had allowed too little time for the company to respond and had issued its judgment prematurely. Legal experts say the reasoning in Ireland's provisional directive could apply to other large technology companies that are subject to U.S. surveillance statutes, potentially disrupting trans-Atlantic data flows and billions of dollars for the cloud computing, social media, and advertising sectors.
400 | Simulating Sneezes, Coughs to Show How COVID-19 Spreads | ALBUQUERQUE, N.M. - Two groups of researchers at Sandia National Laboratories have published papers on the droplets of liquid sprayed by coughs or sneezes and how far they can travel under different conditions.
Both teams drew on Sandia's decades of experience with advanced computer simulations of how liquids and gases move, developed for its nuclear stockpile stewardship mission.
Their findings reinforce the importance of wearing masks, maintaining social distancing, avoiding poorly ventilated indoor spaces and washing your hands frequently, especially with the emergence of new, more transmissible variants of SARS-CoV-2, the virus that causes COVID-19.
One study used Sandia-developed high-performance computer simulation tools to model coughing with and without a breeze and with and without protective barriers. This work was recently published in the scientific journal Atomization and Sprays.
Stefan Domino, the lead computer scientist on the paper, said his team found that while protective barriers, such as plexiglass partitions in grocery stores, offer protection from larger droplets, very tiny particles can persist in the air for an extended time and travel some distance depending on the environmental conditions.
Separate computer modeling research at Sandia looked at what happens to the smaller aerosol droplets under different conditions, including when a person is wearing a face covering. That study showed that face masks and shields keep even the small droplets from a cough from dispersing great distances, said researcher Cliff Ho, who is leading that effort. This work was published in the journal Applied Mathematical Modelling on Feb. 24.
Simulating coughs shows persistent particles
In simulations run by Domino's team through Sandia's high-performance computers, larger droplets from a cough with no crosswind and no face coverings fell at most approximately three meters, or roughly nine feet away. They also found that the dry "droplet nuclei," or aerosols, left over after the liquid evaporates from a droplet traveled about the same distance but stuck around in the air for the two minutes they modeled.
Add a plexiglass partition into the mix, and their computer simulations showed that larger droplets cling to the barrier, which mitigates the risk of direct transmission, but the smaller droplet nuclei persist in the air, Domino said.
When they added a 10-meter-per-second breeze from the back to the simulation without a barrier, the larger droplets traveled up to 11 1/2 feet and the droplet nuclei traveled farther.
This study does not call into question the social-distancing standard of 6 feet recommended by the Centers for Disease Control and Prevention designed to prevent direct contact from the majority of larger droplets. In a typical cough from an infected person, roughly 35% of the droplets might have the virus present, but models of how much SARS-CoV-2 and its variants are needed to infect another person are still being developed, Domino said.
"A recent review paper on the transmission of SARS-CoV-2 that appeared in the Annals of Internal Medicine suggests that respiratory transmission is the dominant route for transmission. As such, we feel that establishing a credible modeling and simulation tool to model transport of pathogen-containing droplets emanating from coughs and how they persist in public spaces that we all inhabit represents a critical piece of the required science," he said. Partitions, masks, social distancing, staying home when feeling unwell and getting vaccinated are still important to help cut down transmission, especially with the new more transmissible variants.
Domino also conducted computer modeling of outdoor open spaces and found that standing people exposed to a cough from someone in a kneeling position had relatively low risk of exposure compared to people who were seated. This was because of how the droplets and aerosols interact with the complex breezes that move around people. This work was published in the International Journal of Computational Fluid Dynamics on April 1. Domino's simulations used over four million hours of computer processing time and were run on many computer processors at the same time.
Simulations support social distancing, masks
Ho used a commercially available fluid dynamics computer model to simulate various events that expel moist fluid, such as coughing, sneezing, talking and even breathing, to understand how they affect transport and transmission of airborne pathogens. He assumed that viral pathogens were aerosolized in tiny droplets and that the pathogen distribution and concentration could be represented by the concentration of the simulated exhaled vapor.
"I introduced spatial and temporal concentrations into the modeling to develop quantified risks of exposure based on separation distance, exposure duration and environmental conditions, such as airflow and face coverings," said Ho. "I could then determine the probability of infection based on spatial and temporal aerosol concentrations, viral load, infectivity rate, viral viability, lung-deposition probability and inhalation rate."
The model also confirmed that wearing a face mask or face shield significantly reduced the forward travel of exhaled vapor and exposure risk by about tenfold. However, the vapor concentrations near the face persisted longer than without face coverings.
Overall, the model showed that social distancing significantly reduced the exposure risk from aerosols by at least tenfold and allowed time for dilution and dispersion of the exhaled viral plume. Other models quantified the degree that being upwind or crosswind of the source of the cough reduced exposure risks, and the degree being directly downwind of the cough increased exposure risks.
The exposure risks decreased with increasing distance, but the greatest increase in benefit was at three feet. Ho's models also quantified the degree that wearing a mask reduces exposure risks at various distances.
In short, the computer modeling confirmed the importance of social distancing and wearing masks. In addition, staying upwind and increasing fresh air ventilation in places like grocery stores, restaurants and schools can help to reduce the exposure risk.
Ho also conducted computer modeling of school buses and found that opening windows on school buses increased ventilation and reduced exposure risks. Specifically, to achieve sufficient ventilation, at least two sets of windows should be opened, one near the front of the bus and one near the back of the bus.
Sandia's stockpile stewardship work aids simulations
Sandia researchers were able to apply many of the same computational tools used in their nuclear stockpile stewardship mission, along with Sandia's advanced high-performance computing resources, to simulate droplets from coughs and sneezes. For the nuclear deterrence mission, these tools are used to study such things as how turbulent jets, plumes and propellant fires react in different conditions.
"We can deploy our simulation tool capability to other applications," Domino said. "If you look at the physics of a cough or a sneeze, it includes attributes of these physics that we normally study at Sandia. We can simulate the trajectory of droplets and how they interact in the environment."
Those environmental conditions can include variables, such as temperature, humidity, launch trajectory, and crosswind strength and direction. They can also include natural and manmade barriers.
Along with studies done by others on cough spray, Sandia's computer-simulation capabilities add the value of seeing how droplets from a cough will react to different conditions. Sandia's simulation tools combine the mass, momentum and energy of the droplets to capture detailed evaporation physics that support the ability to distinguish between droplets that deposit and those that persist in the environment.
The research projects were funded by Sandia's Laboratory Directed Research and Development Rapid Response program and by the Department of Energy's Office of Science through the National Virtual Biotechnology Laboratory, a consortium of DOE national laboratories focused on response to COVID-19, with support provided by the Coronavirus CARES Act.
Sandia National Laboratories is a multimission laboratory operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy's National Nuclear Security Administration. Sandia Labs has major research and development responsibilities in nuclear deterrence, global security, defense, energy technologies and economic competitiveness, with main facilities in Albuquerque, New Mexico, and Livermore, California.
Sandia news media contact: Mollie Rappe, mrappe@sandia.gov, 505-228-6123 | Two groups of computer scientists used computer facilities at the U.S. Department of Energy's Sandia National Laboratories to create detailed simulations of droplets sprayed by coughs or sneezes, to demonstrate how COVID-19 spreads. A study that modeled coughing with and without a breeze and with and without protective barriers found that protective barriers offer protection from larger droplets, while tiny particles can remain airborne for an extended time and can travel farther, depending on environmental conditions. A second study, which modeled smaller aerosol droplets under various conditions, found that face masks and shields can prevent them from traveling far.
402 | Police Departments Adopting Facial Recognition Tech Amid Allegations of Wrongful Arrests | In the past few years, facial recognition technology has become a critical tool for law enforcement: the FBI has used it to help identify suspects who stormed the Capitol on January 6 and state and local police say it's been instrumental in cases of robberies, assaults, and murders. Facial recognition software uses complex mathematical algorithms or instructions to compare a picture of a suspect's face to potentially millions of others in a database. But it's not just mugshots the software may be searching through. If you have a driver's license, there's a good chance your picture might have been searched, even if you've never committed a crime in your life.
In January 2020, Robert Williams arrived home from work to find two Detroit police officers waiting for him outside his house in the quiet suburb of Farmington Hills, Michigan.
They had a warrant for his arrest.
He'd never been in trouble with the law before and had no idea what he was being accused of.
Robert Williams: I'm like, "You can't just show up to my house and tell me I'm under arrest."
Robert Williams: The cop gets a piece of paper and it says, "Felony larceny," on it. I'm like, "Bro, I didn't steal nothin'. I'm like, "Y'all got the wrong person."
Williams, who is 43 and now recovering from a stroke, was handcuffed in front of his wife, Melissa, and their two daughters.
Melissa Williams: We thought it was maybe a mistaken identity. Like, did someone use his name?
He was brought to this detention center and locked in a cell overnight. Police believed Williams was this man in the red hat, recorded in a Detroit store in 2018 stealing $3,800 worth of watches.
When Detroit detectives finally questioned Williams more than 12 hours after his arrest, they showed him some photos - including from the store security camera.
Robert Williams: So, he turns over the paper and he's like, "So, that's not you?" And I looked at it, picked it up and held it up to my face and I said, "I hope y'all don't think all Black people look alike." At this point, I'm upset. Like, bro, why am I even here? He's like, "So, I guess the computer got it wrong."
Anderson Cooper: The-- police officer said, "The computer got it wrong"?
Robert Williams: Yeah, the computer got it wrong.
Williams didn't know the computer they were referring to was a facial recognition program. Detroit police showed us how their system works. First, a photo of an unknown suspect is run against a database of hundreds of thousands of mugshots, which we've blurred for this demonstration.
James Craig: It's an electronic mug book, if you will.
James Craig has been Detroit's chief of police since 2013.
James Craig: Once we insert a photograph, a probe photo-- into the software-- the computer may generate 100 probables. And then they rank these photographs in order of what the computer suggests or the software suggests is the most likely suspect.
Anderson Cooper: The software ranks it in order of-- of most likely suspect to least likely?
James Craig: Absolutely.
It's then up to an analyst to compare each of those possible matches to the suspect and decide whether any of them should be investigated further.
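To make that ranking step concrete, the sketch below assumes a face-recognition model has already converted each photo into a numeric feature vector (an embedding); the probe is compared against every gallery embedding by cosine similarity, and the closest candidates are returned in ranked order for an analyst to review. The function names, similarity measure and random data are illustrative assumptions, not details of the Detroit or Michigan systems:

```python
import numpy as np

def rank_candidates(probe_embedding, gallery_embeddings, gallery_ids, top_k=100):
    """Return the top_k gallery identities ranked by cosine similarity.

    probe_embedding: 1-D feature vector for the unknown face.
    gallery_embeddings: 2-D array, one row per enrolled photo (mugshots, IDs).
    gallery_ids: identity label for each gallery row.
    The output is an investigative lead list for an analyst, not a match decision.
    """
    probe = probe_embedding / np.linalg.norm(probe_embedding)
    gallery = gallery_embeddings / np.linalg.norm(gallery_embeddings, axis=1, keepdims=True)
    scores = gallery @ probe                      # cosine similarity per gallery photo
    order = np.argsort(scores)[::-1][:top_k]      # highest similarity first
    return [(gallery_ids[i], float(scores[i])) for i in order]

# Hypothetical usage with random vectors standing in for a real face-embedding model.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(50_000, 128))
ids = [f"person_{i}" for i in range(50_000)]
probe = rng.normal(size=128)
leads = rank_candidates(probe, gallery, ids, top_k=5)
print(leads)  # ranked candidate list; a human analyst reviews it before any arrest
```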
In the Robert Williams case, Detroit police say they didn't have an analyst on duty that day, so they asked Michigan state police to run this photo through their system which had a database of nearly 50 million faces taken from mug shots but also state IDs and driver's licenses. This old drivers' license photo of Robert Williams popped up. We learned it was ranked 9th among 243 possible matches. An analyst then sent it to Detroit police as an investigative lead only, not probable cause to arrest.
Anderson Cooper: What happened in the case of Robert Williams? What went wrong?
James Craig: Sloppy, sloppy investigative work.
Sloppy, Chief Craig says because the detective in Detroit did little other investigative work before getting a warrant to arrest Williams.
James Craig: The response by this administration-- that detective was disciplined. And, subsequently, a commanding officer of that command has been de-appointed. But it wasn't facial recognition that failed. What failed was a horrible investigation.
Two weeks after he was arrested, the charges against Robert Williams were dismissed.
Anderson Cooper: The police in Detroit say that the Williams case was in large part a result of sloppy detective work. Do you think that's true?
Clare Garvie: That's part of the story, for sure. But face recognition's also part of the story.
Clare Garvie, a lawyer at Georgetown's Center on Privacy and Technology, estimates facial recognition has been involved in hundreds of thousands of cases.
She's been tracking its use by police for years. Legislators, law enforcement and software developers have all sought out Garvie's input on the topic.
Clare Garvie: Because it's a machine, because it's math that does the face recognition match in these systems, we're giving it too much credence. We're saying, "It's math, therefore, it must be right."
Anderson Cooper: "The computer must be right"?
Clare Garvie: Exactly. When we want to agree with the computer, we are gonna go to find evidence that agrees with it
But Garvie says the computer is only as good as the software it runs on.
Clare Garvie: There are some good algorithms; there are some terrible algorithms, and everything in between.
Patrick Grother has examined most of them. He is a computer scientist at a little-known government agency called the National Institute of Standards and Technology.
Every year, more than a hundred facial recognition developers around the world send his lab prototypes to test for accuracy.
Anderson Cooper: Are computers good at recognizing faces?
Patrick Grother: They're very good. I-- they're-- they're better than humans today.
Anderson Cooper: Are these algorithms flawless?
Patrick Grother: By no means.
Software developers train facial recognition algorithms by showing them huge numbers of human faces so they can learn how to spot similarities and differences, but they don't work the way you might expect.
Anderson Cooper: We've all seen in movies computers that-- form a map of the face. Is that what's happening with facial recognition?
Patrick Grother: I mean, historically, yes. But nowadays that is not the-- the approach.
Anderson Cooper: So it's not taking your nose and comparing it to noses in its database, or and then taking eyes and comparing it to eyes.
Patrick Grother: Nothing so explicit. It could be looking at eyebrows or eyes or nose or lips. It could be looking at-- skin texture. So if we have 20 photos of Anderson Cooper what is consistent? It's trying to uncover what makes Anderson Cooper, Anderson Cooper.
A year and a half ago, Grother and his team published a landmark study which found that many facial recognition algorithms had a harder time making out differences in Black, Asian, and female faces.
Patrick Grother: They make mistakes-- false negative errors, where they don't find the correct face, and false positive errors, where they find somebody else. In a criminal justice context, it could lead to a -- you know an incorrect arrest.
Anderson Cooper: Why would race or gender of a person lead to misidentification?
Patrick Grother: The algorithm has been built on a finite number of photos that may or may not be demographically balanced.
Anderson Cooper: Might not have enough females, enough Asian people, enough Black people among the photos that it's using to teach the algorithm.
Patrick Grother: Yeah, to teach the algorithm how to-- do identity.
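The bookkeeping behind the demographic comparisons Grother describes can be sketched as follows; this is a generic illustration with made-up scores and a made-up threshold, not NIST's evaluation code. Same-person pairs scored below the decision threshold count as false negatives, and different-person pairs scored above it count as false positives, tallied separately for each demographic group:

```python
from collections import defaultdict

def error_rates_by_group(comparisons, threshold):
    """comparisons: iterable of (group, same_person: bool, score: float).

    Returns {group: (false_negative_rate, false_positive_rate)}.
    A false negative misses the correct face; a false positive 'finds' the wrong one.
    """
    counts = defaultdict(lambda: {"fn": 0, "same": 0, "fp": 0, "diff": 0})
    for group, same_person, score in comparisons:
        c = counts[group]
        if same_person:
            c["same"] += 1
            if score < threshold:
                c["fn"] += 1
        else:
            c["diff"] += 1
            if score >= threshold:
                c["fp"] += 1
    return {g: (c["fn"] / max(c["same"], 1), c["fp"] / max(c["diff"], 1))
            for g, c in counts.items()}

# Tiny hypothetical example: two groups, a fixed decision threshold of 0.6.
data = [("A", True, 0.9), ("A", False, 0.2), ("A", True, 0.5),
        ("B", True, 0.8), ("B", False, 0.7), ("B", False, 0.3)]
print(error_rates_by_group(data, threshold=0.6))
```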
Clare Garvie points out that the potential for mistakes makes it all the more important how police use the information generated by the software.
Anderson Cooper: Is facial recognition being misused by police?
Clare Garvie: In the absence of rules, what-- I mean, what's misuse? There are very few rules in most places around how the technology's used.
It turns out there aren't well-established national guidelines and it's up to states, cities and local law enforcement agencies to decide how they use the technology: who can run facial recognition searches, what kind of formal training, if any, is needed, and what kind of images can be used in a search.
In a 2019 report, Garvie found surprising practices at some police departments, including the editing of suspects' facial features before running the photos in a search.
Clare Garvie: Most photos that police are-- are dealing with understandably are partially obscured. Police departments say, "No worries. Just cut and paste someone else's features, someone else's chin and mouth into that photo before submitting it."
Anderson Cooper: But, that's, like, half of somebody's face.
Clare Garvie: I agree. If we think about this, face recognition is considered a biometric, like fingerprinting. It's unique to the individual. So how can you swap out people's eyes and expect to still get a good result?
Detroit's police chief, James Craig, says they don't allow that kind of editing with their facial recognition system, which cost an estimated $1.8 million and has been in use since 2017. He says it's become a crucial tool in combating one of the highest violent crime rates in the country and was used in 117 investigations last year alone.
James Craig: We understand that the-- the software is not perfect. We know that.
Anderson Cooper: It's gotta be tempting for some officers to use it as more than just a lead.
James Craig: Well, as I like to always say to my staff, the end never justifies the means.
After Robert Williams, another alleged wrongful arrest by Detroit police came to light. Chief Craig says they've put in place stricter policies: limiting the use of facial recognition to violent crime, requiring police to disclose its use to prosecutors when seeking a warrant, and adding new layers of oversight.
James Craig: So analyst number one will go through the m-- methodical work of trying to identify the suspect. A second analyst has to go through the same level of rigor. The last step is if the supervisor concurs, now that person can be used as a lead. A lead only. A suspect cannot be arrested alone and charged alone based on the recognition.
But there are police departments like this one in Woodbridge, New Jersey, where it's unclear what rules are in place. In February 2019, police in Woodbridge arrested 33-year-old Nijeer Parks.
Anderson Cooper: Did they tell you why they thought it was you?
Nijeer Parks: Pretty much just facial recognition. When I asked 'em like, "Well, how did you come to get to me?" Like, "The-- like, the computer brung you up. The computer's not just gonna lie."
Police said a suspect allegedly shoplifted from this Hampton Inn and nearly hit an officer with a car while escaping the scene. They ran this driver's license the suspect gave them through a facial recognition program. It's not clear if this is the perpetrator since law enforcement said it's a fake ID. According to police reports, the search returned, "a high-profile comparison to Nijeer Parks." He spent more than a week in jail before he was released to fight his case in court.
Nijeer Parks: I think they looked at my background, they pretty much figured, like, "He had the jacket, we got 'em. He ain't-- he's not gonna fight it."
Anderson Cooper: When you say you had a jacket, you had prior convictions.
Nijeer Parks: Yes.
Anderson Cooper: What prior convictions do you have?
Nijeer Parks: I've been convicted for selling drugs, I've been in prison twice for it. But I've been home since 2016, had a job nonstop the whole time.
Facing 20 years in jail, Parks says he considered taking a plea deal.
Nijeer Parks: I knew I didn't do it, but it's like, I got a chance to be home, spending more time with my son, or I got a chance to come home, and he's a grown man and might have his own son.
Anderson Cooper: 'Cause I think most people think, "Well, if I didn't commit a crime, there's no way I would accept a plea deal."
Nijeer Parks: You would say that until you're sittin' right there.
After eight months, prosecutors failed to produce any evidence in court linking Parks to the crime, and in October 2019, the case was dismissed. Parks is now the third Black man since last summer to come forward and sue for wrongful arrest involving facial recognition. Woodbridge police and the prosecutor involved in Parks' case declined to speak with us. A spokesman for the town told us that they've denied Parks' civil claims in court filings.
Anderson Cooper: If this has been used hundreds of thousands of times as leads in investigations, and you can only point to three arrests based on misidentification by this technology, in the balance is that so bad?
Clare Garvie: The fact that we only know of three misidentifications is more a product of how little we know about the technology than how accurate it is.
Anderson Cooper: Do you think there're more?
Clare Garvie: Yes. I have every reason to believe there are more. And this is why the person who's accused almost never finds out that it was used.
Robert Williams says he only found out it was used after detectives told him the computer got it wrong. He's sued with the help of the ACLU. The city of Detroit has denied any legal responsibility and told us they hope to settle.
While the use of facial recognition technology by police departments continues to grow, so do calls for greater oversight. Within the last two years, one city and one state have put a moratorium on the technology and 19 cities have banned it outright.
Produced by Nichole Marks and David M. Levine. Associate producer, Annabelle Hanflig. Edited by Joe Schanzer. | U.S. police departments are adopting facial recognition technology, despite complaints of wrongful arrests resulting from its use. Clare Garvie at Georgetown University Law's Center on Privacy and Technology estimates facial recognition has been involved in hundreds of thousands of cases, in which users too often assume the technology is faultless, given the mathematical basis of its matches. The U.S. National Institute of Standards and Technology's Patrick Grother evaluates prototype facial recognition algorithms, and his team published a landmark study which determined that many facial recognition algorithms found it difficult to distinguish between Black, Asian, and female faces. Grother said such errors, particularly false positives that match the wrong person, could lead to wrongful arrests. Since last summer, three Black men have sued for wrongful arrest involving facial recognition; said Garvie, "The fact that we only know of three misidentifications is more a product of how little we know about the technology than how accurate it is."
In the past few years, facial recognition technology has become a critical tool for law enforcement: the FBI has used it to help identify suspects who stormed the Capitol on January 6 and state and local police say it's been instrumental in cases of robberies, assaults, and murders. Facial recognition software uses complex mathematical algorithms or instructions to compare a picture of a suspect's face to potentially millions of others in a database. But it's not just mugshots the software may be searching through. If you have a driver's license, there's a good chance your picture might have been searched, even if you've never committed a crime in your life.
In January 2020, Robert Williams arrived home from work to find two Detroit police officers waiting for him outside his house in the quiet suburb of Farmington Hills, Michigan.
They had a warrant for his arrest.
He'd never been in trouble with the law before and had no idea what he was being accused of.
Robert Williams: I'm like, "You can't just show up to my house and tell me I'm under arrest."
Robert Williams: The cop gets a piece of paper and it says, "Felony larceny," on it. I'm like, "Bro, I didn't steal nothin'. I'm like, "Y'all got the wrong person."
Williams, who is 43 and now recovering from a stroke, was handcuffed in front of his wife, Melissa, and their two daughters.
Melissa Williams: We thought it was maybe a mistaken identity. Like, did someone use his name?
He was brought to this detention center and locked in a cell overnight. Police believed Williams was this man in the red hat, recorded in a Detroit store in 2018 stealing $3,800 worth of watches.
When Detroit detectives finally questioned Williams more than 12 hours after his arrest, they showed him some photos - including from the store security camera.
Robert Williams: So, he turns over the paper and he's like, "So, that's not you?" And I looked at it, picked it up and held it up to my face and I said, "I hope y'all don't think all Black people look alike." At this point, I'm upset. Like, bro, why am I even here? He's like, "So, I guess the computer got it wrong."
Anderson Cooper: The-- police officer said, "The computer got it wrong"?
Robert Williams: Yeah, the computer got it wrong.
Williams didn't know the computer they were referring to was a facial recognition program. Detroit police showed us how their system works. First, a photo of an unknown suspect is run against a database of hundreds of thousands of mugshots, which we've blurred for this demonstration.
James Craig: It's an electronic mug book, if you will.
James Craig has been Detroit's chief of police since 2013.
James Craig: Once we insert a photograph, a probe photo-- into the software-- the computer may generate 100 probables. And then they rank these photographs in order of what the computer suggests or the software suggests is the most likely suspect.
Anderson Cooper: The software ranks it in order of-- of most likely suspect to least likely?
James Craig: Absolutely.
It's then up to an analyst to compare each of those possible matches to the suspect and decide whether any of them should be investigated further.
In the Robert Williams case, Detroit police say they didn't have an analyst on duty that day, so they asked Michigan state police to run this photo through their system which had a database of nearly 50 million faces taken from mug shots but also state IDs and driver's licenses. This old drivers' license photo of Robert Williams popped up. We learned it was ranked 9th among 243 possible matches. An analyst then sent it to Detroit police as an investigative lead only, not probable cause to arrest.
Anderson Cooper: What happened in the case of Robert Williams? What went wrong?
James Craig: Sloppy, sloppy investigative work.
Sloppy, Chief Craig says because the detective in Detroit did little other investigative work before getting a warrant to arrest Williams.
James Craig: The response by this administration-- that detective was disciplined. And, subsequently, a commanding officer of that command has been de-appointed. But it wasn't facial recognition that failed. What failed was a horrible investigation.
Two weeks after he was arrested, the charges against Robert Williams were dismissed.
Anderson Cooper: The police in Detroit say that the Williams case was in large part a result of sloppy detective work. Do you think that's true?
Clare Garvie: That's part of the story, for sure. But face recognition's also part of the story.
Clare Garvie, a lawyer at Georgetown's Center on Privacy and Technology, estimates facial recognition has been involved in hundreds of thousands of cases.
She's been tracking its use by police for years. Legislators, law enforcement and software developers have all sought out Garvie's input on the topic.
Clare Garvie: Because it's a machine, because it's math that does the face recognition match in these systems, we're giving it too much credence. We're saying, "It's math, therefore, it must be right."
Anderson Cooper: "The computer must be right"?
Clare Garvie: Exactly. When we want to agree with the computer, we are gonna go to find evidence that agrees with it
But Garvie says the computer is only as good as the software it runs on.
Clare Garvie: There are some good algorithms; there are some terrible algorithms, and everything in between.
Patrick Grother has examined most of them. He is a computer scientist at a little-known government agency called the National Institute of Standards and Technology.
Every year, more than a hundred facial recognition developers around the world send his lab prototypes to test for accuracy.
Anderson Cooper: Are computers good at recognizing faces?
Patrick Grother: They're very good. I-- they're-- they're better than humans today.
Anderson Cooper: Are these algorithms flawless?
Patrick Grother: By no means.
Software developers train facial recognition algorithms by showing them huge numbers of human faces so they can learn how to spot similarities and differences, but they don't work the way you might expect.
Anderson Cooper: We've all seen in movies computers that-- form a map of the face. Is that what's happening with facial recognition?
Patrick Grother: I mean, historically, yes. But nowadays that is not the-- the approach.
Anderson Cooper: So it's not taking your nose and comparing it to noses in its database, or and then taking eyes and comparing it to eyes.
Patrick Grother: Nothing so explicit. It could be looking at eyebrows or eyes or nose or lips. It could be looking at-- skin texture. So if we have 20 photos of Anderson Cooper what is consistent? It's trying to uncover what makes Anderson Cooper, Anderson Cooper.
A year and a half ago, Grother and his team published a landmark study which found that many facial recognition algorithms had a harder time making out differences in Black, Asian, and female faces.
Patrick Grother: They make mistakes-- false negative errors, where they don't find the correct face, and false positive errors, where they find somebody else. In a criminal justice context, it could lead to a -- you know an incorrect arrest.
Anderson Cooper: Why would race or gender of a person lead to misidentification?
Patrick Grother: The algorithm has been built on a finite number of photos that may or may not be demographically balanced.
Anderson Cooper: Might not have enough females, enough Asian people, enough Black people among the photos that it's using to teach the algorithm.
Patrick Grother: Yeah, to teach the algorithm how to-- do identity.
Clare Garvie points out the potential for mistakes makes it all the more important how police use the information generated by the software.
Anderson Cooper: Is facial recognition being misused by police?
Clare Garvie: In the absence of rules, what-- I mean, what's misuse? There are very few rules in most places around how the technology's used.
It turns out there aren't well-established national guidelines and it's up to states, cities and local law enforcement agencies to decide how they use the technology: who can run facial recognition searches, what kind of formal training, if any, is needed, and what kind of images can be used in a search.
In a 2019 report, Garvie found surprising practices at some police departments, including the editing of suspects' facial features before running the photos in a search.
Clare Garvie: Most photos that police are-- are dealing with understandably are partially obscured. Police departments say, "No worries. Just cut and paste someone else's features, someone else's chin and mouth into that photo before submitting it."
Anderson Cooper: But, that's, like, half of somebody's face.
Clare Garvie: I agree. If we think about this, face recognition is considered a biometric, like fingerprinting. It's unique to the individual. So how can you swap out people's eyes and expect to still get a good result?
Detroit's police chief, James Craig, says they don't allow that kind of editing with their facial recognition system, which cost an estimated $1.8 million and has been in use since 2017. He says it's become a crucial tool in combating one of the highest violent crime rates in the country and was used in 117 investigations last year alone.
James Craig: We understand that the-- the software is not perfect. We know that.
Anderson Cooper: It's gotta be tempting for some officers to use it as more than just a lead.
James Craig: Well, as I like to always say to my staff, the end never justifies the means.
After Robert Williams, another alleged wrongful arrest by Detroit police came to light. Chief Craig says they've put in place stricter policies limiting the use of facial recognition to violent crime.
Requiring police to disclose its use to prosecutors when seeking a warrant and adding new layers of oversight.
James Craig: So analyst number one will go through the m-- methodical work of trying to identify the suspect. A second analyst has to go through the same level of rigor. The last step is if the supervisor concurs, now that person can be used as a lead. A lead only. A suspect cannot be arrested alone and charged alone based on the recognition.
But there are police departments like this one in Woodbridge, New Jersey, where it's unclear what rules are in place. In February 2019, police in Woodbridge arrested 33-year-old Nijeer Parks.
Anderson Cooper: Did they tell you why they thought it was you?
Nijeer Parks: Pretty much just facial recognition. When I asked 'em like, "Well, how did you come to get to me?" Like, "The-- like, the computer brung you up. The computer's not just gonna lie."
Police said a suspect allegedly shoplifted from this Hampton Inn and nearly hit an officer with a car while escaping the scene. They ran this driver's license the suspect gave them through a facial recognition program. It's not clear if this is the perpetrator since law enforcement said it's a fake ID. According to police reports, the search returned, "a high-profile comparison to Nijeer Parks." He spent more than a week in jail before he was released to fight his case in court.
Nijeer Parks: I think they looked at my background, they pretty much figured, like, "He had the jacket, we got 'em. He ain't-- he's not gonna fight it."
Anderson Cooper: When you say you had a jacket, you had prior convictions.
Nijeer Parks: Yes.
Anderson Cooper: What prior convictions do you have?
Nijeer Parks: I've been convicted for selling drugs, I've been in prison twice for it. But I've been home since 2016, had a job nonstop the whole time.
Facing 20 years in jail, Parks says he considered taking a plea deal.
Nijeer Parks: I knew I didn't do it, but it's like, I got a chance to be home, spending more time with my son, or I got a chance to come home, and he's a grown man and might have his own son.
Anderson Cooper: 'Cause I think most people think, "Well, if I didn't commit a crime, there's no way I would accept a plea deal."
Nijeer Parks: You would say that until you're sittin' right there.
After eight months, prosecutors failed to produce any evidence in court linking Parks to the crime, and in October 2019, the case was dismissed. Parks is now the third Black man since last summer to come forward and sue for wrongful arrest involving facial recognition. Woodbridge police and the prosecutor involved in Parks' case declined to speak with us. A spokesman for the town told us that they've denied Parks' civil claims in court filings.
Anderson Cooper: If this has been used hundreds of thousands of times as leads in investigations, and you can only point to three arrests based on misidentification by this technology, in the balance is that so bad?
Clare Garvie: The fact that we only know of three misidentifications is more a product of how little we know about the technology than how accurate it is.
Anderson Cooper: Do you think there're more?
Clare Garvie: Yes. I have every reason to believe there are more. And this is why the person who's accused almost never finds out that it was used.
Robert Williams says he only found out it was used after detectives told him the computer got it wrong. He's sued with the help of the ACLU. The city of Detroit has denied any legal responsibility and told us they hope to settle.
While the use of facial recognition technology by police departments continues to grow, so do calls for greater oversight. Within the last two years, one city and one state have put a moratorium on the technology and 19 cities have banned it outright.
Produced by Nichole Marks and David M. Levine. Associate producer, Annabelle Hanflig. Edited by Joe Schanzer.
Irish Health System Targeted in 'Serious' Ransomware Attack

Ireland's health service said a ransomware attack led by "international criminals" forced the shutdown of its information technology systems on May 14. Deputy Prime Minister Leo Varadkar, who called the incident "very serious," said it could last for days. Steve Forbes at U.K. Web domain registry Nominet said the breach highlights concerns about the vulnerability of critical infrastructure to worsening attacks by hacker gangs and criminals, and threatens to exacerbate a health system already strained by the pandemic. Forbes said the Irish hack and the recent disruption of the Colonial Pipeline in the U.S. show that "criminal groups are choosing targets that will have the greatest impact on governments and the public, regardless of the collateral damage, in order to apply the most leverage."
The Way We Use Emojis Evolves Like Language, Changes Their Meaning

By Chris Stokel-Walker
Image caption: The meaning of emojis changes over time. Credit: karnoff/Shutterstock
The meaning of emojis changes depending on the context in which they're used and when they've been posted, according to the first study of their use over time.
Alexander Robertson at the University of Edinburgh, UK, and his colleagues tracked how emojis were used between 2012 and 2018 by Twitter users. In all, 1.7 billion tweets were checked to see if they contained an emoji, with duplicate content and non-English tweets filtered out.
The tweets were then analysed with models that recognise the semantics of how words are used based on the surrounding words, allowing the researchers to attribute meanings to the emojis and to track how those meanings changed.

The first study of emoji use over time found that their use and meaning evolves like language, with changes dictated by context. Researchers at the U.K.'s University of Edinburgh tracked emoji use between 2012 and 2018 on Twitter, reviewing 1.7 billion tweets (with duplicate content and non-English tweets filtered out). They used models that identify the semantics of how words are used based on surrounding words to analyze the tweets, to attribute meanings to the emojis used and to note changes to those meanings. Edinburgh's Alexander Robertson said the researchers found patterns in the meanings of emojis that are also found in words, like seasonality (different meanings ascribed depending on the time of year). Effie Le Moignan at the U.K.'s Newcastle University said the research is important but limited, because "this does not generalize beyond Twitter."
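The kind of analysis described above can be approximated with standard word-embedding tools. The sketch below is a minimal illustration, assuming the gensim library and a tiny invented corpus rather than the study's 1.7 billion tweets: it trains separate embedding models on tweets from two different years and compares an emoji's nearest neighbours to see whether its contextual meaning has drifted.

```python
from gensim.models import Word2Vec

# Toy stand-ins for tokenized tweets from two years; the real study used
# roughly 1.7 billion English tweets from 2012-2018.
tweets_2013 = [["merry", "christmas", "🎄", "family"],
               ["home", "for", "🎄", "holidays"]] * 50
tweets_2017 = [["new", "phone", "🎄", "gift", "unboxing"],
               ["🎄", "sale", "shopping", "deals"]] * 50

def train(corpus):
    # Small vectors and window because the toy corpus is tiny.
    return Word2Vec(corpus, vector_size=32, window=3, min_count=1, epochs=50, seed=1)

model_2013 = train(tweets_2013)
model_2017 = train(tweets_2017)

# Compare the words that appear in the most similar contexts to the emoji
# in each year; a change in neighbours suggests a shift in usage.
print("🎄 neighbours in 2013:", model_2013.wv.most_similar("🎄", topn=3))
print("🎄 neighbours in 2017:", model_2017.wv.most_similar("🎄", topn=3))
```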
Flash Memory's 2D Cousin is 5,000 Times Speedier

A 2D cousin of flash memory is not only roughly 5,000 times faster, but can store multiple bits of data instead of just zeroes and ones, a new study finds.
Flash drives, hard disks, magnetic tape and other forms of non-volatile memory help store data even after the power is removed. One key weakness of these devices is that they are often slow, typically requiring at least hundreds of microseconds to write data, a few orders of magnitude longer than their volatile counterparts.
Now researchers have developed non-volatile memory that only takes nanoseconds to write data. This makes it thousands of times faster than commercial flash memory and roughly as speedy as the dynamic RAM found in most computers. They detailed their findings online this month in the journal Nature Nanotechnology .
The new device is made of layers of atomically thin 2-D materials. Previous research found that when two or more atomically thin layers of different materials are placed on top of each other to form so-called heterostructures, novel hybrid properties can emerge. These layers are typically held together by weak electric forces known as van der Waals interactions , the same forces that often make adhesive tapes sticky.
Scientists at the Chinese Academy of Sciences' Institute of Physics in Beijing and their colleagues noted that silicon-based memory is ultimately limited in speed because of unavoidable defects on ultra-thin silicon films that degrade performance. They reasoned that atomically flat van der Waals heterostructures could avoid such problems.
The researchers fabricated a van der Waals heterostructure consisting of an indium selenide semiconducting layer, a hexagonal boron nitride insulating layer, and multiple electrically conductive graphene layers sitting on top of a wafer of silicon dioxide and silicon. A voltage pulse lasting only 21 nanoseconds can inject electric charge into graphene to write or erase data. These pulses are roughly as strong as those used to write and erase in commercial flash memory.
Besides speed, a key feature of this new memory is the possibility of multi-bit storage. A conventional memory device can store a bit of data, either a zero or one, by switching between, say, a highly electrically conductive state and a less electrically conductive state. The researchers note their new device could theoretically store multiple bits of data with multiple electric states, each written and erased using a different sequence of voltage pulses.
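A minimal sketch of the multi-bit idea: if a cell can be set to one of several distinguishable states rather than just two, each state can encode a group of bits, and readout becomes a nearest-level decision. The four levels and the two-bit grouping below are illustrative assumptions, not parameters reported for the device in the study.

```python
import numpy as np

# Four hypothetical, well-separated conductance states (arbitrary units),
# each standing for a two-bit symbol: a "multi-level cell."
LEVELS = {0.25: "00", 0.50: "01", 0.75: "10", 1.00: "11"}

def write(bits: str) -> float:
    """Program the cell to the conductance level that encodes two bits."""
    for level, symbol in LEVELS.items():
        if symbol == bits:
            return level
    raise ValueError("expected a two-bit string like '10'")

def read(measured: float) -> str:
    """Decode by picking the nearest nominal level (tolerates small noise)."""
    nearest = min(LEVELS, key=lambda level: abs(level - measured))
    return LEVELS[nearest]

stored = write("10")
noisy = stored + np.random.default_rng(3).normal(scale=0.02)  # read-out noise
print(read(noisy))  # -> "10": two bits recovered from a single cell
```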
"Memory can become much more powerful when a single device can store more bits of information - it helps build denser and denser memory architectures," says electrical engineer Deep Jariwala at the University of Pennsylvania, who did not take part in this research.
The scientists projected their devices can store data for 10 years. They noted another Chinese group recently achieved similar results with a van der Waals heterostructure made of molybdenum disulfide, hexagonal boron nitride and multi-layer graphene.
A major question now is whether or not researchers can make such devices on commercial scales. "This is the Achilles heel of most of these devices," Jariwala says. "When it comes to real applications, scalability and the ability to integrate these devices on top of silicon processors are really challenging issues."

A two-dimensional relative of flash memory is about 5,000 times faster than standard flash drives and can store multiple bits of data instead of just zeroes and ones, according to new research. Scientists at the Chinese Academy of Sciences theorized that atomically flat van der Waals heterostructures could eliminate performance-degrading defects on silicon films, and they produced one from an indium selenide semiconducting layer, a hexagonal boron nitride insulating layer, and multiple electrically conductive graphene layers atop a silicon dioxide-silicon wafer. The researchers said the device could theoretically store multi-bit data with multiple electric states, each written and erased using a different voltage-pulse sequence, for as long as a decade.
Ford Launches Over-the-Air Upgrades to Millions of Cars

Ford Motor said it has launched technology to enable significant over-the-air (OTA) upgrades to its vehicles, with plans to roll it out to 33 million autos by 2028. Ford calls the remote updating technology Power-Up. The automaker said it already has sent OTA updates to more than 100,000 F-150 pickups and Mustang Mach-E customers since late March; according to Ford, one update to the F-150 to fix a battery drainage problem saved the company over $20 million in warranty costs. OTA updates also directly connect manufacturers to consumers, boosting potential earnings from data fleet monetization and recurring revenue opportunities for new features or upgrades.
Researchers 3D-Print Complex Micro-Optics with Improved Imaging Performance

13 May 2021
Tiny lenses could correct color distortions for digital cameras and medical endoscopes
WASHINGTON - In a new study, researchers have shown that 3D printing can be used to make highly precise and complex miniature lenses with sizes of just a few microns. The microlenses can be used to correct color distortion during imaging, enabling small and lightweight cameras that can be designed for a variety of applications.

"The ability to 3D print complex micro-optics means that they can be fabricated directly onto many different surfaces such as the CCD or CMOS chips used in digital cameras," said Michael Schmid, a member of the research team from University of Stuttgart in Germany. "The micro-optics can also be printed on the end of optical fibers to create very small medical endoscopes with excellent imaging quality."

Image caption: Researchers used 3D printing to make highly precise and complex apochromatic miniature lenses that can be used to correct color distortion during imaging. Credit: Michael Schmid, University of Stuttgart

In The Optical Society (OSA) journal Optics Letters, researchers led by Harald Giessen detail how they used a type of 3D printing known as two-photon lithography to create lenses that combine refractive and diffractive surfaces. They also show that combining different materials can improve the optical performance of these lenses.

"3D printing of micro-optics has improved drastically over the past few years and offers a design freedom not available from other methods," said Schmid. "Our optimized approach for 3D printing complex micro-optics opens many possibilities for creating new and innovative optical designs that can benefit many research fields and applications."

Pushing the limits of 3D printing

Two-photon lithography uses a focused laser beam to solidify, or polymerize, a liquid light-sensitive material known as photoresist. The optical phenomenon known as two-photon absorption allows cubic micrometer volumes of the photoresist to be polymerized, which enables fabrication of complex optical structures on the micron scale. The research team has been investigating and optimizing micro-optics made with two-photon lithography for the past 10 years.

"We noticed that color errors known as chromatic aberrations were present in some of the images created with our micro-optics, so we set out to design 3D printed lenses with improved optical performance to reduce these errors," said Schmid.

Chromatic aberrations occur because the way that light bends, or refracts, when it enters a lens depends on the color, or wavelength, of the light. This means that without correction, red light will focus to a different spot than blue light, for example, causing fringes or color seams to appear in images.

The researchers designed miniature versions of lenses traditionally used to correct for chromatic aberrations. They began with an achromatic lens, which combines a refractive and diffractive component to limit the effects of chromatic aberration by bringing two wavelengths into focus on the same plane. The researchers used a commercially available two-photon lithography instrument made by NanoScribe GmbH to add a diffractive surface to a printed smooth refractive lens in one step.

They then took this a step further by designing an apochromatic lens by combining the refractive-diffractive lens with another lens made from a different photoresist with different optical properties. Topping the two-material lens with the refractive-diffractive surface reduces chromatic aberrations even more, thus improving imaging performance.
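To see why a single refractive lens focuses colors at different depths, recall the thin-lens relation 1/f = (n - 1)(1/R1 - 1/R2): because the refractive index n varies with wavelength, the focal length f does too. The short sketch below works through that effect with assumed dispersion values for a generic polymer; the indices and radius are illustrative, not measured properties of the photoresists used in this work.

```python
# Focal length of a thin plano-convex lens: 1/f = (n - 1) * (1/R1 - 1/R2),
# with R2 -> infinity, so f = R1 / (n - 1).
R1_um = 200.0  # front-surface radius of curvature in micrometers (illustrative)

# Assumed refractive indices of a generic polymer at blue/green/red wavelengths.
indices = {"450 nm (blue)": 1.545, "550 nm (green)": 1.535, "650 nm (red)": 1.528}

focals = {}
for color, n in indices.items():
    focals[color] = R1_um / (n - 1.0)
    print(f"{color}: f = {focals[color]:.1f} um")

shift = focals["650 nm (red)"] - focals["450 nm (blue)"]
print(f"red-blue focal shift: {shift:.1f} um")
# An achromat pairs this refractive element with a diffractive surface whose
# chromatic dispersion has the opposite sign, pulling two wavelengths back to a
# common focus; the apochromat described above corrects a third wavelength as well.
```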
The design was performed by Simon Thiele from the Institute of Technical Optics in Stuttgart, who recently spun out the company PrintOptics, which gives customers access to the entire value chain from design over prototyping to a series of micro-optical systems.

Image caption: In tests of the new lenses, the reference lens (left) shows color seams due to chromatic aberrations. The 3D printed achromat lenses (middle) reduced these drastically, while images taken with the apochromat (right) completely eliminated the color distortion. Credit: Michael Schmid, University of Stuttgart

Testing the micro-optics

To show that the new apochromatic lens could reduce chromatic aberration, the researchers measured the focal spot location for three wavelengths and compared them to a simple refractive lens with no color correction. While the reference lens with no chromatic correction showed focal spots separated by many microns, the apochromatic lenses exhibited focal spots that aligned within 1 micron.

The researchers also used the lenses to acquire images. Images taken using the simple reference lens showed strong color seams. Although the 3D printed achromat reduced these drastically, only images taken with the apochromat completely eliminated the color seams.

"Our test results showed that the performance of 3D printed micro-optics can be improved and that two-photon lithography can be used to combine refractive and diffractive surfaces as well as different photo resists," said Schmid.

The researchers point out that fabrication time will become faster in the future, which makes this approach more practical. It currently can take several hours to create one micro-optical element, depending on size. As the technology continues to mature, the researchers are working to create new lens designs for different applications.

Paper: M. Schmid, F. Sterl, S. Thiele, A. Herkommer, H. Giessen, "3D Printed Hybrid Refractive/Diffractive Achromat and Apochromat for the Visible Wavelength Range," Opt. Lett., 46, 10, 2485-2488 (2021).
DOI: https://doi.org/10.1364/OL.423196
About Optics Letters
Optics Letters offers rapid dissemination of new results in all areas of optical science with short, original, peer-reviewed communications. Optics Letters accepts papers that are noteworthy to a substantial part of the optics community. Published by The Optical Society and led by Editor-in-Chief Miguel Alonso, Institut Fresnel, École Centrale de Marseille and Aix-Marseille Université, France, University of Rochester, USA. Optics Letters is available online at OSA Publishing.
About The Optical Society
The Optical Society (OSA) is dedicated to promoting the generation, application, archiving, and dissemination of knowledge in optics and photonics worldwide. Founded in 1916, it is the leading organization for scientists, engineers, business professionals, students, and others interested in the science of light. OSA's renowned publications, meetings, online resources, and in-person activities fuel discoveries, shape real-life applications and accelerate scientific, technical, and educational achievement.
Media Contact
mediarelations@osa.org

A team of researchers at Germany's University of Stuttgart used three-dimensional (3D) printing to generate micron-scale lenses that can be used to correct color distortion during imaging and enable small, lightweight cameras for various applications. The researchers used two-photon lithography to fabricate apochromatic lenses that integrate refractive and diffractive surfaces, and showed these lenses could reduce chromatic aberration by measuring the focal spot location for three wavelengths and comparing them to a simple refractive lens with no color correction. Images captured by the 3D-printed achromatic lens reduced color seams significantly, but only images taken with the apochromatic lens completely eliminated them.
Research Could Improve Cache Efficiency by 60%

Research from Carnegie Mellon University may soon help Twitter run faster and more efficiently.
Juncheng Yang , a Ph.D. candidate in computer science, and Rashmi Vinayak , an assistant professor in the Computer Science Department , worked with Yao Yue from Twitter to develop Segcache to make better use of DRAM cache.
"We performed a large-scale study on how items were stored and accessed in the cache, and based on our research, we developed a system to make better use of the precious cache space," Yang said. "This could potentially allow Twitter to reduce the largest cache cluster size by 60%."
The team's research won the Community Award for being one of the best papers at last month's USENIX Symposium on Networked Systems Design and Implementation .
Most computers, from personal laptops to servers housing millions of tweets, store items in one of two systems: hard drives or dynamic random-access memory (DRAM). Hard drives store items permanently, while DRAM houses on-demand items, like files stored in the cache. Items in the DRAM can be retrieved quickly, but DRAM is relatively small, expensive and energy-consuming. How to better use that limited space has always been a hard problem to solve.
When you open Twitter, the tweets displayed immediately in the feed come from the cache. Without it, loading the homepage requires retrieving tweets from everyone you follow from the hard drive - which takes a long time and consumes system resources.
Segcache applies two techniques to better use cache space. First, it groups items to allow metadata sharing between them. Items in the cache are usually small - the most common length of a tweet is 33 characters. However, existing systems store large amounts of metadata with each item, wasting precious cache space. Grouping similar items and sharing their metadata reduces this overhead and uses the cache more efficiently.
The second technique is redesigning the system to identify and remove expired items more effectively. Cached items typically have a short lifetime, and when expired items linger in the cache they waste valuable space. The new design removes these items more quickly and with fewer scans than existing approaches, which need to scan all items periodically.
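A rough sketch of those two ideas in code: items with similar time-to-live values are grouped into segments that share a single expiration timestamp (standing in for shared metadata), so expired entries can be dropped a whole segment at a time rather than scanned one by one. This illustrates the general approach only and is not the actual Segcache implementation; the bucket size and record layout are made up for the example.

```python
import time

class SegmentedCache:
    """Toy cache: items with similar TTLs share a segment and its metadata."""

    def __init__(self, ttl_bucket_seconds=60):
        self.ttl_bucket = ttl_bucket_seconds
        # segment key -> {"expires_at": shared timestamp, "items": {key: value}}
        self.segments = {}

    def put(self, key, value, ttl):
        # Round the expiration down to a bucket so many items share one
        # expiration timestamp (shared metadata instead of per-item copies).
        expires_at = (int(time.time() + ttl) // self.ttl_bucket) * self.ttl_bucket
        seg = self.segments.setdefault(expires_at, {"expires_at": expires_at, "items": {}})
        seg["items"][key] = value

    def get(self, key):
        now = time.time()
        for expires_at, seg in list(self.segments.items()):
            if seg["expires_at"] <= now:
                # Expired: drop the whole segment in one step, no per-item scan.
                del self.segments[expires_at]
            elif key in seg["items"]:
                return seg["items"][key]
        return None

cache = SegmentedCache()
cache.put("tweet:42", "33-character tweet body goes here", ttl=120)
print(cache.get("tweet:42"))
```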
Yang and Vinayak said the collaboration with Twitter was crucial to their work, as the company allowed them to study the social media network's production system. Twitter is now working to incorporate the team's research into its production system.
"We and our collaborators at Twitter are very excited about this work," Vinayak said. "Changing a production system is cumbersome, and companies rarely do it to incorporate the latest research. When the research that we do is used in the real world, it is very exciting." | Researchers from Carnegie Mellon University (CMU) collaborated with Twitter to develop a system that could improve the speed and efficiency of the social media platform. The researchers examined how Twitter items were stored and accessed in dynamic random-access memory (DRAM) cache, and developed Segcache to improve the use of limited cache space. Tweets immediately displayed in a user's Twitter feed come from the cache; without it, tweets from everyone they follow would have to be retrieved from the hard drive. Segcache groups items in the cache to permit metadata sharing and allows for expired cache items to be removed more quickly and with fewer scans. CMU's Juncheng Yang said the use of Segcache "could potentially allow Twitter to reduce the largest cache cluster size by 60%." Twitter is working to incorporate the research into its production system. | [] | [] | [] | scitechnews | None | None | None | None | Researchers from Carnegie Mellon University (CMU) collaborated with Twitter to develop a system that could improve the speed and efficiency of the social media platform. The researchers examined how Twitter items were stored and accessed in dynamic random-access memory (DRAM) cache, and developed Segcache to improve the use of limited cache space. Tweets immediately displayed in a user's Twitter feed come from the cache; without it, tweets from everyone they follow would have to be retrieved from the hard drive. Segcache groups items in the cache to permit metadata sharing and allows for expired cache items to be removed more quickly and with fewer scans. CMU's Juncheng Yang said the use of Segcache "could potentially allow Twitter to reduce the largest cache cluster size by 60%." Twitter is working to incorporate the research into its production system.
Ayanna Howard Named ACM Athena Lecturer

ACM has named Ayanna Howard, dean of The Ohio State University College of Engineering, as the 2021-2022 ACM Athena Lecturer. Howard is recognized for fundamental contributions to the development of accessible human-robotic systems and artificial intelligence, along with forging new paths to broaden participation in computing through entrepreneurial and mentoring efforts. Her contributions span theoretical foundations, experimental evaluation, and practical applications.
Howard is a leading roboticist, entrepreneur, and educator whose research includes dexterous manipulation, robot learning, field robotics, and human-robot interaction. She is a leader in studying the overtrust that people place in robots in various autonomous decision-making settings. In addition to her stellar research record, Howard has a strong record of service that demonstrates her commitment to advancing the field and broadening participation.
"Ayanna Howard is a trailblazer in vital research areas, including topics such as trust and bias in AI, which will continue to be front-and-center in society in the coming years," said ACM President Gabriele Kotsis. "The quality of her research has made her a thought leader in developing accessible human-robot interaction systems. Both as an entrepreneur and mentor, Ayanna Howard has worked to increase the participation of women and underrepresented groups in computing. For all these reasons, she is precisely the kind of leader ACM seeks to recognize with the Athena Lecturer Award."
KEY TECHNICAL CONTRIBUTIONS
Robotic Manipulation

Her doctoral research on dexterous robotic manipulation of deformable objects proposed some of the first ideas on the modeling of deformable objects via physical simulation, such that they could be robustly grasped by robot arms. This work also demonstrated how neural networks could be trained to extract the minimum force required for subsequent deformable object manipulation tasks.
Terrain Classification of Field Robots

Terrain classification is critical for many robots operating in unstructured natural field environments, including navigating the Arctic or determining safe landing locations on the surface of Mars. Howard's work introduced fuzzy logic methods to model environmental uncertainty that advanced the state of the art in field robotics, including finding evidence of never-before-observed life on Antarctica's sea floor.
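As a generic illustration of fuzzy-logic classification of this kind (not Howard's actual algorithms; the membership functions and the single rule below are invented for the example), a terrain roughness measurement can be mapped to overlapping "smooth" and "rough" categories instead of a hard threshold, and the resulting degrees of membership combined into a traversability score.

```python
def triangular(x, left, peak, right):
    """Triangular fuzzy membership: 0 outside [left, right], 1 at peak."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def traversability(roughness_cm):
    # Overlapping fuzzy sets over a roughness measurement (illustrative ranges).
    smooth = triangular(roughness_cm, -5.0, 0.0, 6.0)
    rough = triangular(roughness_cm, 3.0, 10.0, 30.0)
    # One simple rule set: smooth terrain supports traversal, rough terrain opposes it.
    support, oppose = smooth, rough
    # Defuzzify into a single score in [0, 1].
    return support / (support + oppose) if (support + oppose) > 0 else 0.5

for r in (1.0, 4.5, 12.0):
    print(f"roughness {r:4.1f} cm -> traversability {traversability(r):.2f}")
```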
Robotics for Children with Special Needs

Howard studied the ways in which socially-effective robots could improve the access and scalability of services for children with special needs, as well as potentially improve outcomes through the engaging nature of robots. In adapting her contributions to real-world settings for assistive technology for children, her work has also provided first-of-its-kind computer vision techniques that analyze the movement of children to devise therapeutic measures.
Overtrust in Robotics and AI Systems

Howard is a leader in modeling trust among humans, robots and AI systems, including conversational agents, emergency response scenarios, autonomous navigation systems, child-robot interaction, and the use of lethal force. Her work introduced human-robot interaction algorithms that, for the first time, quantified the impact of robot mistakes on human trust in a realistic, simulated, and very high-risk scenario. This work has led to better understanding of the biases and social inequities underlying AI and robotic systems.
BROADENING PARTICIPATION/SERVICE TO THE FIELD
Howard has created and led numerous programs designed to engage, recruit, and retain students and faculty from groups that are historically underrepresented in computing, including several NSF-funded Broadening Participation in Computing initiatives. She was the principal investigator (PI)/co-PI for Popularizing Computing in the Mainstream, which focused on creating interventions to engage underrepresented groups in the computing field; Advancing Robotics for Societal Impact Alliance, an initiative to provide mentorship to computer science faculty and students at Historically Black Colleges and Universities (HBCUs); and Accessible Robotic Programming for Students with Disabilities, an initiative to engage middle- and high school students with disabilities in robotics-based programming activities. She also led and co-founded efforts to broaden participation in the field through the IEEE Robotics PhD Forum and the CRA-WP Graduate Cohort Workshop for Inclusion, Diversity, Equity, Accessibility, and Leadership Skills.
As part of her service to the field, Howard has held key roles on various editorial boards and conference/program committees. Some of her more high-profile efforts have included co-organizing the AAAI Symposium on Accessible Hands-on AI and Robotics Education, the International Joint Conference on Neural Networks, the International Conference on Social Robotics, and the IEEE Workshop on Advanced Robotics and Its Social Impacts.
Media Coverage: MIT Technology Review, Robotics 24/7, HPC Wire

ACM named Ayanna Howard, dean of the Ohio State University College of Engineering, its 2021-2022 ACM Athena Lecturer for her contributions to the development of accessible human-robotic systems and artificial intelligence (AI), and for boosting participation in computing. Howard proposed some of the first concepts for simulating deformable objects via physical modeling, to enable robust robot grasping; she also introduced the modeling of environmental uncertainty through fuzzy logic, furthering the state of the art in field robotics. Howard also has spearheaded modeling trust among humans, robots, and AI systems, including conversational agents, emergency response situations, autonomous navigation, child-robot interaction, and use of lethal force. ACM president Gabriele Kotsis said, "Both as an entrepreneur and mentor, Ayanna Howard has worked to increase the participation of women and underrepresented groups in computing."
Biden Signs Executive Order to Strengthen U.S. Cybersecurity Defenses After Colonial Pipeline Hack

In the wake of the Colonial Pipeline ransomware attack, President Biden has signed an executive order to fortify U.S. cybersecurity defenses. The pipeline hack is the latest in a string of high-profile attacks on private and federal entities conducted by criminal groups or state actors. Biden's directive requires information technology service providers to alert the government to cybersecurity breaches that could impact U.S. networks, and lifts contractual barriers that might prevent them from flagging breaches. The order also calls for a standardized playbook and definitions for federal responses to cyber incidents; upgrades to cloud services and other cyber infrastructure security; a mandate that software developers share certain security data publicly; and a Cybersecurity Safety Review Board to analyze breaches and make recommendations.
Patients May Not Take Advice from AI Doctors Who Know Their Names

UNIVERSITY PARK, Pa. - As the use of artificial intelligence (AI) in health applications grows, health providers are looking for ways to improve patients' experience with their machine doctors.
Researchers from Penn State and University of California, Santa Barbara (UCSB) found that people may be less likely to take health advice from an AI doctor when the robot knows their name and medical history. On the other hand, patients want to be on a first-name basis with their human doctors.
When the AI doctor used the first name of the patients and referred to their medical history in the conversation, study participants were more likely to consider an AI health chatbot intrusive and also less likely to heed the AI's medical advice, the researchers added. However, while chatting online with human doctors they expected the doctors to differentiate them from other patients and were less likely to comply when a human doctor failed to remember their information.
The findings offer further evidence that machines walk a fine line in serving as doctors, said S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory at Penn State.
"Machines don't have the ability to feel and experience, so when they ask patients how they are feeling, it's really just data to them," said Sundar, who is also an affiliate of Penn State's Institute for Computational and Data Sciences (ICDS) . "It's possibly a reason why people in the past have been resistant to medical AI."
Machines do have advantages as medical providers, said Joseph B. Walther, distinguished professor in communication and the Mark and Susan Bertelsen Presidential Chair in Technology and Society at UCSB. He said that, like a family doctor who has treated a patient for a long time, computer systems could - hypothetically - know a patient's complete medical history. In comparison, seeing a new doctor or a specialist who knows only your latest lab tests might be a more common experience, said Walther, who is also director of the Center for Information Technology and Society at UCSB .
"This struck us with the question: 'Who really knows us better: a machine that can store all this information, or a human who has never met us before or hasn't developed a relationship with us, and what do we value in a relationship with a medical expert?'" said Walther. "So this research asks, who knows us better - and who do we like more?"
The team designed five chatbots for the two-phase study, recruiting a total of 295 participants for the first phase, 223 of whom returned for the second phase. In the first part of the study, participants were randomly assigned to interact with either a human doctor, an AI doctor, or an AI-assisted doctor through the chat function.
In the second phase of the study, the participants were assigned to interact with the same doctor again. However, when the doctor initiated the conversation in this phase, they either identified the participant by the first name and recalled information from the last interaction or they asked again how the patient preferred to be addressed and repeated questions about their medical history.
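A minimal sketch of the kind of manipulation described here, not the researchers' actual chatbot code: a scripted health chatbot branches between an individuated greeting that recalls a stored name and history and a neutral greeting that asks for them again. The patient record and wording are placeholders.

```python
def greeting(patient, individuated: bool) -> str:
    """Return the chatbot's opening line for one of the two study-style conditions."""
    if individuated and patient.get("name"):
        history = ", ".join(patient.get("reported_symptoms", [])) or "no symptoms on record"
        return (f"Welcome back, {patient['name']}. Last time you reported: {history}. "
                f"How are you feeling today?")
    # Non-individuated condition: ask again rather than recalling stored details.
    return "Hello. How would you like to be addressed, and what symptoms are you experiencing?"

# Placeholder patient record for illustration only.
patient = {"name": "Jordan", "reported_symptoms": ["dry cough", "fatigue"]}

print(greeting(patient, individuated=True))
print(greeting(patient, individuated=False))
```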
In both phases, the chatbots were programmed to ask eight questions concerning COVID-19 symptoms and behaviors, and offer diagnosis and recommendations, said Jin Chen, doctoral student in mass communications, Penn State and first author of the paper.
"We chose to focus this on COVID-19 because it was a salient health issue during the study period," said Jin Chen.
Accepting AI doctors
As medical providers look for cost-effective ways to provide better care, AI medical services may provide one alternative. However, AI doctors must provide care and advice that patients are willing to accept, according to Cheng Chen, doctoral student in mass communications at Penn State.
"One of the reasons we conducted this study was that we read in the literature a lot of accounts of how people are reluctant to accept AI as a doctor," said Chen. "They just don't feel comfortable with the technology and they don't feel that the AI recognizes their uniqueness as a patient. So, we thought that because machines can retain so much information about a person, they can provide individuation, and solve this uniqueness problem."
The findings suggest that this strategy can backfire. "When an AI system recognizes a person's uniqueness, it comes across as intrusive, echoing larger concerns with AI in society," said Sundar.
In a perplexing finding, about 78% of the participants in the experimental condition that featured a human doctor believed that they were interacting with an AI doctor, the researchers said. Sundar added that a tentative explanation for this finding is that people may have become more accustomed to online health platforms during the pandemic and may have expected a richer interaction.
In the future, the researchers expect more investigations into the roles that authenticity and the ability of machines to engage in back-and-forth questioning may play in developing better rapport with patients.
The researchers presented their findings today at the virtual 2021 ACM CHI Conference on Human Factors in Computing Systems - the premier international conference for research on Human-Computer Interaction.
414 | Singapore Researchers Control Venus Flytraps Using Smartphones | SINGAPORE, May 12 (Reuters) - Researchers in Singapore have found a way of controlling a Venus flytrap using electric signals from a smartphone, an innovation they hope will have a range of uses from robotics to employing the plants as environmental sensors.
Luo Yifei, a researcher at Singapore's Nanyang Technological University (NTU), showed in a demonstration how a signal from a smartphone app sent to tiny electrodes attached to the plant could make its trap close as it does when catching a fly.
"Plants are like humans, they generate electric signals, like the ECG (electrocardiogram) from our hearts," said Luo, who works at NTU's School of Materials Science and Engineering.
"We developed a non-invasive technology to detect these electric signals from the surface of plants without damaging them," Luo said.
The scientists have also detached the trap portion of the Venus flytrap and attached it to a robotic arm so it can, when given a signal, grip something thin and light like a piece of wire.
In this way, the plant could be used as a "soft robot," the scientists say, to pick up fragile things that might be damaged by industrial grippers, as well as being more environmentally friendly.
Communication between humans and plants is not necessarily entirely one-way.
The NTU research team hopes their technology can be used to detect signals from plants about abnormalities or potential diseases before full-blown symptoms appear.
"We are exploring using plants as living sensors to monitor environmental pollution like gas, toxic gas, or water pollution," said Luo, who stressed there was a long way to go before such plant technology could be used commercially.
But for Darren Ng, an enthusiast of the carnivorous plants and founder of SG VenusFlytrap, a group that sells the plants and offers care tips, the research is welcome.
"If the plant can talk back to us, maybe growing all these plants may be even easier," he says.
415 | Amazon Cloud Technology Aids NFL in Schedule Making | The National Football League (NFL) used Amazon Web Services' (AWS) cloud platform to arrange its just-released 272-game 2021 schedule. The NFL's Mike North said, "We've got 5,000 computers each building up schedules." The cloud computers worked through trillions of possibilities for what day, time, and network to play each game, and league officials studied over 80,000 candidate schedules before making a final choice. Each morning officials receive new schedules from the computers, determine whether any of them rates as the new leader, and instruct the computers by eliminating certain seed games or shifting different spots in hopes of resolving other problems; the process repeats until a final option is reached.
416 | Escape COVID-19: Game Can Help Healthcare Warriors Unwind, Combat Spread | Researchers at the Geneva University Hospitals (HUG) in Switzerland have developed a computer game to help healthcare workers unwind while educating them on how to change their behaviors to curtail the spread of COVID-19. The game, "Escape COVID-19," guides players through scenarios encountered by healthcare workers every day. A study of almost 300 emergency room workers in Geneva, who were given either written materials about proper protocols or an opportunity to play the computer game, showed that the game was more effective at inspiring behavioral change. HUG's Melanie Suppan said, "Those who played the game were three times more likely to say they wanted to change their behavior compared to those who received the regular material."
418 | IBM Just Solved This Quantum Computing Problem 120 Times Faster Than Previously Possible | Using a combination of tweaked algorithms, improved control systems and a new quantum service called Qiskit Runtime, IBM researchers have managed to resolve a quantum problem 120 times faster than the previous time they gave it a go.
Back in 2017, Big Blue announced that, equipped with a seven-qubit quantum processor, its researchers had successfully simulated the behavior of a small molecule called lithium hydride (LiH). At the time, the operation took 45 days. Now, four years later, the IBM Quantum team has announced that the same problem was solved in only nine hours.
The simulation was run entirely on the cloud, through IBM's Qiskit platform - an open-source library of tools that lets developers around the world create quantum programs and run them on prototype quantum devices that IBM makes available over the cloud.
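For readers unfamiliar with Qiskit, the basic loop the article describes - build a circuit on a classical machine, submit it to a backend, read back the counts - looks roughly like the sketch below. It uses the pre-1.0, 2021-era Qiskit API and a local simulator rather than an IBM cloud device.

```python
from qiskit import QuantumCircuit, Aer, execute

# Build a small two-qubit circuit on the classical side.
qc = QuantumCircuit(2, 2)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])

# Submit the circuit to a backend and collect measurement counts. Swapping the
# local simulator for a cloud backend is what introduces the network round trips
# discussed below.
backend = Aer.get_backend("qasm_simulator")
job = execute(qc, backend, shots=1024)
print(job.result().get_counts())
```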
The speed-up that was observed was largely made possible thanks to a new quantum service, Qiskit Runtime, which was key to reducing latencies during the simulation.
IBM teased Qiskit Runtime earlier this year as part of the company's software roadmap for quantum computing, and at the time estimated that the new service would lead to a 100-time speed-up in workloads. With a reported 120-time speed-up, therefore, it seems that Big Blue has exceeded its own objectives.
Classical computing remains a fundamental part of Qiskit, and of any quantum operation carried out over the cloud. A quantum program effectively involves two parts: classical hardware, such as a laptop, from which developers send queries over the cloud; and the quantum hardware itself - in this case, IBM's quantum computation center in Poughkeepsie, New York.
"The quantum method isn't just a quantum circuit that you execute," Blake Johnson, quantum platform lead at IBM Quantum, tells ZDNet. "There is an interaction between a classical computing resource that makes queries to the quantum hardware, then interprets those results to make new queries. That conversation is not a one-off thing - it's happening over and over again, and you need it to be fast."
With every request that is sent, a few tens of thousands of quantum circuits are executed. To simulate the small LiH molecule, for example, 4.1 billion circuits were executed, which corresponds to millions of queries going back and forth between the classical resource and the quantum one.
When this conversation happens in the cloud, over an internet connection, between a user's laptop and IBM's US-based quantum processors, latency can quickly become a significant hurdle.
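A back-of-the-envelope calculation shows why. The 4.1 billion circuits are from the article; the batch size and per-query overhead below are illustrative assumptions, not IBM figures.

```python
total_circuits = 4.1e9        # circuits executed for the LiH simulation (from the article)
circuits_per_query = 20_000   # assumed batch size ("a few tens of thousands" per request)
overhead_per_query_s = 10.0   # assumed network, queuing and dispatch overhead per round trip

queries = total_circuits / circuits_per_query
overhead_days = queries * overhead_per_query_s / 86_400
print(f"{queries:,.0f} round trips -> roughly {overhead_days:.0f} days of overhead alone")
# Co-locating the classical loop with the quantum hardware (Qiskit Runtime's approach)
# attacks exactly this overhead term.
```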
Case in point: while solving a problem as complex as molecular simulation in 45 days is a start, it isn't enough to achieve the quantum strides that scientists are getting excited about.
"We currently have a system that isn't architected intrinsically around the fact that real workloads have these quantum-classical loops," says Johnson.
Based on this observation, IBM's quantum team set out to build Qiskit Runtime - a system that is built to natively accelerate the execution of a quantum program by removing some of the friction associated with the back-and-forth that is on-going between the quantum and the classical world.
Qiskit Runtime creates a containerized execution environment located beside the quantum hardware. Rather than sending many queries from their device to the cloud-based quantum computer, developers can therefore send entire programs to the Runtime environment, where the IBM hybrid cloud uploads and executes the work for them.
In other words, the loops that happen between the classical and the quantum environment are contained within Runtime - which itself is near to the quantum processor. This effectively slashes the latencies that emerge from communicating between a user's computer and the quantum processor.
"The classical part, which generates queries to the quantum hardware, can now be run in a container platform that is co-located with the quantum hardware," explains Johnson. "The program executing there can ask a question to the quantum hardware and get a response back very quickly. It is a very low-cost interaction, so those loops are now suddenly much faster."
Improving the accuracy and scale of quantum calculations is no easy task.
Until now, explains Johnson, much of the research effort has focused on improving the quality of the quantum circuit. In practice, this has meant developing software that helps correct errors and add fault tolerance to the quantum hardware.
Qiskit Runtime, in this sense, marks a change in thinking: instead of working on the quality of quantum hardware, says Johnson, the system increases the overall program's capacity.
It remains true that the 120-times speed-up would not have been possible without additional tweaks to the hardware performance.
Algorithmic improvements, for example, reduced the number of iterations of the model required to reach a final answer by a factor of two to 10, while better processor performance meant that each iteration of the algorithm required fewer circuit runs.
At the same time, upgrades to the system software and control systems reduced the amount of time per circuit execution for each iteration.
"The quality is a critical ingredient that also makes the whole system run faster," says Johnson. "It is the harmonious improvement of quality and capacity working together that makes the system faster."
Now that the speed-up has been demonstrated in simulating the LiH molecule, Johnson is hoping to see developers use the improved technology to experiment with quantum applications in a variety of different fields beyond chemistry.
In another demonstration, for example, IBM's quantum team used Qiskit Runtime to run a machine-learning program for a classification task. The new system was able to execute the workload and find the optimal model to label a set of data in a timescale that Johnson described as "meaningful."
Qiskit Runtime will initially be released in beta, for a select number of users from IBM's Q Network, and will come with a fixed set-up of programs that are configurable. IBM expects that the system will be available to every user of the company's quantum services in the third quarter of 2021.
Combined with the 127-qubit IBM Quantum Eagle processor slated for later this year, the speed-up enabled by Runtime will, Big Blue hopes, make achievable many tasks that were once thought impractical on quantum computers.
The system certainly sets IBM on track to meet the objectives laid out in the company's quantum software roadmap, which projects that there will be frictionless quantum computing in a number of applications by 2025.
420 | Smaller Chips Open Door to RFID Applications | Researchers at North Carolina State University have made what is believed to be the smallest state-of-the-art RFID chip, which should drive down the cost of RFID tags. In addition, the chip's design makes it possible to embed RFID tags into high value chips, such as computer chips, boosting supply chain security for high-end technologies.
"As far as we can tell, it's the world's smallest Gen2-compatible RFID chip," says Paul Franzon, corresponding author of a paper on the work and Cirrus Logic Distinguished Professor of Electrical and Computer Engineering at NC State.
Gen2 RFID chips are state of the art and are already in widespread use. One of the things that sets these new RFID chips apart is their size. They measure 125 micrometers (μm) by 245μm. Manufacturers were able to make smaller RFID chips using earlier technologies, but Franzon and his collaborators have not been able to identify smaller RFID chips that are compatible with the current Gen2 technology.
"The size of an RFID tag is largely determined by the size of its antenna - not the RFID chip," Franzon says. "But the chip is the expensive part."
The smaller the chip, the more chips you can get from a single silicon wafer. And the more chips you can get from the silicon wafer, the less expensive they are.
"In practical terms, this means that we can manufacture RFID tags for less than one cent each if we're manufacturing them in volume," Franzon says.
That makes it more feasible for manufacturers, distributors or retailers to use RFID tags to track lower-cost items. For example, the tags could be used to track all of the products in a grocery store without requiring employees to scan items individually.
"Another advantage is that the design of the circuits we used here is compatible with a wide range of semiconductor technologies, such as those used in conventional computer chips," says Kirti Bhanushali, who worked on the project as a Ph.D. student at NC State and is first author of the paper. "This makes it possible to incorporate RFID tags into computer chips, allowing users to track individual chips throughout their life cycle. This could help to reduce counterfeiting, and allow you to verify that a component is what it says it is."
"We've demonstrated what is possible, and we know that these chips can be made using existing manufacturing technologies," Franzon says. "We're now interested in working with industry partners to explore commercializing the chip in two ways: creating low-cost RFID at scale for use in sectors such as grocery stores; and embedding RFID tags into computer chips in order to secure high-value supply chains."
The paper, "A 125μm×245μm Mainly Digital UHF EPC Gen2 Compatible RFID tag in 55nm CMOS process," was presented April 29 at the IEEE International Conference on RFID. The paper was co-authored by Wenxu Zhao, who worked on the project as a Ph.D. student at NC State; and Shepherd Pitts, who worked on the project while a research assistant professor at NC State.
The work was done with support from the National Science Foundation, under grant 1422172; and from NC State's Chancellor's Innovation Fund.
Note to Editors: The study abstract follows.
"A 125μm×245μm Mainly Digital UHF EPC Gen2 Compatible RFID tag in 55nm CMOS process"
Authors : Kirti Bhanushali, Microsoft; Wenxu Zhao, Broadcom; W. Shepherd Pitts and Paul Franzon, North Carolina State University
Presented : April 29, IEEE International Conference on RFID
Abstract: This paper presents a compact and largely digital UHF EPC Gen2-compatible RFID implemented using digital IP blocks that are easily portable. This is the first demonstration of a digital Gen2-compatible RFID tag chip with an area of 125μm×245μm and -2 dBm sensitivity operating in the 860-960MHz band. It is enabled by a) largely standard cell-based digital implementation using dual-phase RF-only logic approach, b) near-threshold voltage operation, and c) elimination of area intensive, complex, and less scalable rectifiers, storage capacitors, and power management units used in conventional RFID tags. In this demonstration, all but six cells were directly used from the standard cell library provided by the foundry. This makes it suitable for cost-sensitive applications, and as embedded RFIDs for tagging counterfeit Integrated Circuits (ICs).
421 | Public Health Tweets Struggled to Reflect Local Realities at Start of Pandemic: Study | A new study that examined thousands of tweets from Canadian public health agencies and officials during the first few months of the COVID-19 pandemic suggests many struggled to tailor messaging to local needs.
The study, published online this month in the journal Health & Place, analyzed close to 7,000 tweets from public health agencies and officials at all levels of government over the first six months of last year.
But the researchers found the messages often failed to reflect the situation and risk level in local communities, despite significant variations in transmission levels and other factors.
"Despite the need for public health communications to effectively convey the level of COVID-19 infection risk in particular jurisdictions, the tweets we analyzed did not always contain relevant messaging or risk communication strategies that would have helped citizens in those jurisdictions assess risks to health," the study said.
Accounts related to urban areas largely used tweets to disseminate information, rather than for other purposes, and the percentage of tweets aimed at promoting specific actions decreased over time, the study found.
"Given that the risks of community transmission of COVID-19 are higher in denser urban areas with larger populations... action tweets could be viewed as a useful communication tool to help drive changes to behaviour among urban individuals to reduce disease spread," it said.
In comparison, accounts related to rural areas - where transmission was typically lower - primarily used Twitter to encourage certain actions, though residents may have benefited from more information about the virus, the study found.
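The information-versus-action distinction can be illustrated with a toy keyword tagger; this is not the coding scheme the researchers used, only a sketch of the idea.

```python
ACTION_CUES = ("stay home", "wear a mask", "wash your hands", "get tested", "avoid gatherings")
INFO_CUES = ("cases", "update", "report", "data")

def tag_tweet(text: str) -> str:
    t = text.lower()
    if any(cue in t for cue in ACTION_CUES):
        return "action"
    if any(cue in t for cue in INFO_CUES):
        return "information"
    return "other"

tweets = [
    "Daily update: 120 new cases reported in the region today.",
    "Please stay home if you feel unwell and get tested.",
]
print([tag_tweet(t) for t in tweets])  # ['information', 'action']
```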
While some local agencies tweeted messaging that was relevant to their particular circumstances, those accounts did not have large numbers of followers, drawing fewer per capita than provincial or national accounts, the study found.
"Tweets containing particular messaging deployed at specific times for audiences located in specific places could be better utilized to tackle periods of increased disease transmission during the COVID-19 pandemic and other future public health crises," it said.
"Crafting communications that are relevant for the levels of risk that audience members are likely encountering in a given geographic context could increase the uptake of those communications and result in better population health outcomes."
The study also found only two per cent of tweets examined addressed misinformation and myths surrounding COVID-19.
Tweets debunking COVID-19 myths were issued more frequently by local accounts, a finding the study said was "somewhat surprising" given that provincial and national accounts are primarily responsible for disseminating information about the pandemic.
The study also highlighted a relative lack of "community-building" messages that could have been used to foster institutional trust, calling it "a missed opportunity" to do more than simply share information about the pandemic.
The researchers also cited acknowledging uncertainty and public concerns as a key part of building trust and promoting health measures during a public health crisis.
"It has been critical for public health officials, who are often considered trusted experts, to provide quick and clear information on disease transmission, what constitutes safe and risky behaviour and what community supports are available to slow the spread of the virus," lead author Catherine Slavik, a graduate student of health geography at McMaster University, said in a statement.
"Tweets that focus on community efforts to fight the pandemic ... are really important for building institutional trust, for establishing human connections between the community and local officials who are there to serve them. We were surprised public health officials did not put more emphasis on messages showcasing people coming together or local programs helping to keep us safe." | Researchers at McMaster University and the University of Waterloo studied nearly 7,000 tweets from Canadian public health agencies and officials in the first months of the COVID-19 pandemic and found the messages often did not reflect the specific risk level in local communities. Accounts tied to urban areas, where community transmission risks were higher, issued tweets primarily to disseminate information, while tweets to promote specific actions declined over time. Accounts related to rural areas with lower transmission risks largely were used to encourage actions, rather than to provide information about the virus. Only 2% of the studied tweets addressed misinformation and myths. McMaster's Catherine Slavik said, "We were surprised public health officials did not put more emphasis on messages showcasing people coming together or local programs helping to keep us safe." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at McMaster University and the University of Waterloo studied nearly 7,000 tweets from Canadian public health agencies and officials in the first months of the COVID-19 pandemic and found the messages often did not reflect the specific risk level in local communities. Accounts tied to urban areas, where community transmission risks were higher, issued tweets primarily to disseminate information, while tweets to promote specific actions declined over time. Accounts related to rural areas with lower transmission risks largely were used to encourage actions, rather than to provide information about the virus. Only 2% of the studied tweets addressed misinformation and myths. McMaster's Catherine Slavik said, "We were surprised public health officials did not put more emphasis on messages showcasing people coming together or local programs helping to keep us safe."
422 | Reconfigurable Optical Networks Will Move Supercomputer Data 100X Faster | Imagine being able to read an entire book in a single second, but only receiving the pages individually over the course of a minute. This is analogous to the woes of a supercomputer.
Supercomputer processors can handle whopping amounts of data per second, but the flow of data between the processor and computer subsystems is not nearly as efficient, creating a data transfer bottleneck. To address this issue, one group of researchers has devised a system design involving re-configurable networks called FLEET - which could potentially speed up the transfer of data 100-fold. The initial design, part of a "DARPA-hard" project, is described in a study published on April 30 in IEEE Internet Computing.
Network interface cards are critical hardware components that link computers to networks, facilitating the transfer of data. However, these components currently lag far behind computer processors in terms of how fast they can handle data.
"Processors and optical networks operate at Terabits per second (Tbps), but [current] network interfaces used to transfer data in and out typically operate in gigabit per second ranges," explains Seth Robertson, Chief Research Scientist with Peraton Labs (previously named Perspecta Labs) who has been co-leading the design of FLEET.
Part of his team's solution is the development of Optical Network Interface Cards (O-NICs), which can be plugged into existing computer hardware. Whereas traditional network interface cards typically have one port, the newly designed O-NICs have two ports and can support data transfer among many different kinds of computer subcomponents. The O-NICs are connected to optical switches, which allow the system to quickly re-configure the flow of data as needed.
"The connections can be modified before or during execution to match different devices over time," explains Fred Douglis, a Chief Research Scientist with Peraton Labs and co-Principal Investigator of FLEET. He likens the concept to the peripatetic Grand Staircase in the Harry Potter series's Hogwarts School. "Imagine Hogwarts staircases if they always appeared just as you needed to walk someplace new," he says.
To support re-configurability, the researchers have designed a new software planner that determines the best configuration and adjusts the flow of data accordingly. "On the software side, a planner that can actually make use of this flexibility is essential to realizing the performance improvements we expect," Douglis emphasizes. "The wide range of topologies can result in many tens of terabits of data in flight at a given moment."
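The planner itself is not described in detail here, so the following is only a toy illustration of the general idea - given pairwise traffic demands and a limited number of reconfigurable optical links, dedicate the links to the heaviest flows; all device names and numbers are hypothetical.

```python
# Toy topology planner (illustrative only; not the FLEET planner).
# demands[(a, b)] = expected traffic between devices a and b, in arbitrary units.
demands = {
    ("gpu0", "storage"): 40,
    ("gpu0", "gpu1"): 25,
    ("gpu1", "storage"): 10,
    ("cpu", "gpu1"): 5,
}
available_optical_links = 2  # hypothetical limit set by switch ports

# Greedily assign the reconfigurable optical links to the heaviest flows; the rest
# would fall back to the conventional network until the next reconfiguration.
plan = sorted(demands.items(), key=lambda kv: kv[1], reverse=True)[:available_optical_links]
for (a, b), load in plan:
    print(f"configure optical path {a} <-> {b} (expected load {load})")
```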
The development of FLEET is still in its early stages. The initial design of the O-NICs and software planner was achieved in the first year of what is expected to be a four-year project. But once complete, the team anticipates that the new network interface will reach speeds of 12 Tbps based on the current (fifth) generation of PCIe (an interface standard that connects interface network cards and other high-performance peripheral devices), and could reach higher speeds with newer generations of PCIe.
Importantly, Robertson notes that FLEET will depend almost entirely on off-the-shelf components, with the exception of the newly designed O-NICs, meaning FLEET can be easily integrated into existing computer systems.
"Once we can prove [FLEET] meets its performance targets, we'd like to work to standardize its interfaces and see traditional hardware vendors make this highly adaptable networking topology widely available," says Robertson, noting that his team plans to open-source the software.
This article appears in the June 2021 print issue as "Computing on FLEET."
Germany to Support Quantum Computing with €2 Billion

Germany's economy and science ministries announced an approximately €2-billion ($2.4-billion) allocation to develop the country's first competitive quantum computer and associated technologies in the next four years. The science ministry will invest €1.1 billion ($1.3 billion) by 2025 to support quantum computing research and development, while the economy ministry will spend €878 million ($1.06 billion) on practical applications. The economy ministry said most subsidies will go to Germany's Aerospace Center, which will partner with industrial companies, midsized enterprises, and startups to establish two consortia. Economy Minister Peter Altmaier cited management of supply and demand in the energy sector, improved traffic control, and faster testing of new active substances as areas that quantum computing could potentially revolutionize.
UConn Researchers Study Anti-Vax Facebook Groups in Early Days of COVID-19 Pandemic

Social media has become a powerful force for spreading information and, unfortunately, misinformation. Anti-vaccine groups have established a strong presence on social media sites like Facebook. A pair of UConn researchers recently found these groups quickly seized on the COVID-19 pandemic as their next avenue for fear-mongering over a vaccine before it even existed.
UConn researchers Seth Kalichman, professor of psychology, and Lisa Eaton , professor of human development and family sciences, recently published their findings of a study of anti-vaccine Facebook groups' communications during the early days of the COVID-19 pandemic in the Journal of Public Health.
Kalichman and Eaton collaborated with University of Delaware colleagues Natalie Brousseau and Valerie Earnshaw, who earned her Ph.D. at UConn.
The researchers found that, as early as February 2020, anti-vaccine groups had identified COVID-19 as a significant public health threat that would likely require a vaccine. The study covered the period from February to May 2020.
"When COVID-19 emerged my colleagues and I were interested in understanding how the anti-vaccine groups might respond," Kalichman says. "We were surprised to see they started as early as they did."
This work drew upon Eaton and Kalichman's earlier research on anti-vaccine groups online. From 2013 to 2015, they conducted a study funded by the Bill and Melinda Gates Foundation. Many of the same trends appeared in the new study.
The researchers selected four groups to focus on for this study: Dr. Tenpenny on Vaccines, the National Vaccine Information Center (NVIC), the Vaccination Information Network (VINE), and the Vaccine Machine. The first three of these groups were all popular during their initial study.
These groups, which have tens of thousands of followers, are highly active on Facebook, a widely used social media platform.
Kalichman says the messaging and tactics these groups used for COVID-19 are largely the same as what they use for other diseases.
"The framework for them is already in place, it doesn't matter what disease, what vaccine," Kalichman says. "It's a worldview."
The goal of this work is not to convince fervent disbelievers in vaccines that they are wrong, but rather to prevent other people who may be ambivalent about getting vaccinated from getting sucked into this swirl of misinformation.
These anti-vaccine groups often capitalize on appeals to emotion and personal anecdotes, strategies which have proven effective.
"People can really gravitate to anecdotal accounts, as opposed to Dr. Fauci lecturing people to get vaccinated," Kalichman says.
The aim of this work is to help inform public health officials about how anti-vaccine groups are communicating and appealing to the public so officials can better combat misinformation earlier in the next public health crisis.
"By better understanding how anti-vaccine groups formulate their initial communications and start to sow seeds of doubt in vaccines long before there even is a vaccine can help inform public health officials in how they communicate and how they might head off some of the anti-vaccine rhetoric," Kalichman says.
Eaton and Kalichman recommend public health agencies should establish a robust presence on all social media platforms. Messaging that is credible, clear, and appealing will be more effective than dense scientific language or having a limited to nonexistent social media presence.
While this study did not examine the impact of anti-COVID-19 vaccine posts on people's hesitancy to receive one of the vaccines once they were approved, the researchers work under the reasonable assumption that exposure to these posts has a negative impact.
The researchers are now interested in analyzing email newsletters from the far-right news outlet The Epoch Times and Dr. Tenpenny.
Other potential avenues of interest in this field include looking at these groups' posting patterns once vaccines were approved, how they operate on other social media platforms like Twitter, and characterizing the kinds of language the groups and commenters use.
This work was supported by the Office of the Vice President for Research.
Kalichman holds a Ph.D. from the University of South Carolina. He is a researcher in the UConn Institute for Collaboration on Health, Intervention, and Policy (InCHIP). His research interests include health disparities and HIV-AIDS behavior research.
Eaton holds a Ph.D. from the University of Connecticut and completed a post-doctoral fellowship at Yale University. She is a researcher in InCHIP. Her research interests include health disparities, stigma and its effects on marginalized populations, social determinants of disease, and human sexuality and positive well-being.
3D Printing Lays Foundation for Range of Diagnostic Tests

Researchers at KU Leuven have developed a 3D printing technique that extends the possibilities of lateral flow testing. These tests are widespread in the form of the classic pregnancy test and the COVID-19 self-tests. With the new printing technique, advanced diagnostic tests can be produced that are quick, cheap, and easy to use.
The COVID-19 pandemic has made everyone aware of the importance of rapid diagnosis. The sale of self-tests in pharmacies has been permitted in Belgium since the end of March. This self-test is a so-called lateral flow test. Using a swab, a sample is taken through the nose. Next, it is dissolved in a solvent and applied to the test kit. Absorbent material in the kit moves the sample downstream and brings it in contact with an antibody. If virus is present, a coloured line appears. The advantage of these tests is that they are cheap and do not require any specialised appliances.
Lateral flow tests are useful for simple tests that result in a yes-no answer, but not for tests that require a multi-step protocol. That is why bioengineers at KU Leuven set out to develop a new type of lateral flow test with more capabilities.
Using a 3D printer, the researchers fabricated a 3D version of a lateral flow test. The basis is a small block of porous polymer, in which 'inks' with specific properties are printed at precise locations. In this way, a network of channels and small 'locks' is printed that let the flow through or block it where and when necessary, without the need for moving parts. During the test, the sample is automatically guided through the different test steps. That way, even complex protocols can be followed.
The researchers evaluated their technique by reproducing an ELISA test (Enzyme-Linked Immunosorbent Assay), which is used to detect immunoglobulin E (IgE). IgE is measured to diagnose allergies. In the lab, this test requires several steps, with different rinses and a change in acidity. The research team was able to run this entire protocol using a printed test kit the size of a thick credit card.
"The great thing about 3D printing is that you can quickly adapt a test's design to accommodate another protocol, for example, to detect a cancer biomarker. For the 3D printer it does not matter how complex the network of channels is," says Dr. Cesar Parra. The 3D printing technique is also affordable and scalable. "In our lab, producing the Ig E prototype test costs about $ 1.50, but if we can scale it up, it would be less than $ 1," says Dr. Parra. The technique not only offers opportunities for cheaper and faster diagnosis in developed countries, but also in countries where the medical infrastructure is less accessible and where there is a strong need for affordable diagnostic tests.
The research group is currently designing its own 3D printer, which will be more flexible than the commercial model used in the current study. "An optimised printer is kind of like a mobile mini factory which can quickly produce diagnostics. You could then create different types of tests by simply loading a different design file and ink. We want to continue our research on diagnostic challenges and applications with the help of partners," concludes innovation manager Bart van Duffel.
96% of U.S. Users Opt Out of App Tracking in iOS 14.5, Analytics Find

It seems that in the United States, at least, app developers and advertisers who rely on targeted mobile advertising for revenue are seeing their worst fears realized: Analytics data published this week suggests that US users choose to opt out of tracking 96 percent of the time in the wake of iOS 14.5.
The change met fierce resistance from companies like Facebook, whose market advantages and revenue streams are built on leveraging users' data to target the most effective ads at those users. Facebook went so far as to take out full-page newspaper ads claiming that the change would not just hurt Facebook but would destroy small businesses around the world. Shortly after, Apple CEO Tim Cook attended a data privacy conference and delivered a speech that harshly criticized Facebook's business model.
Nonetheless, Facebook and others have complied with Apple's new rule to avoid being rejected from the iPhone's App Store, though some apps present a screen explaining why users should opt in before the Apple-mandated prompt to opt in or out appears.
This new data comes from Verizon-owned Flurry Analytics, which claims to be used in more than one million mobile apps. Flurry says it will update the data daily so followers can see the trend as it progresses.
Based on the data from those one million apps, Flurry Analytics says US users agree to be tracked only four percent of the time. The global number is significantly higher at 12 percent, but that's still below some advertising companies' estimates.
The data from Flurry Analytics shows users rejecting tracking at much higher rates than were predicted by surveys that were conducted before iOS 14.5 went live. One of those surveys found that just shy of 40 percent, not 4 percent, would opt in to tracking when prompted.
Flurry Analytics' data doesn't break things down by app, though, so it's impossible to know from this data whether the numbers are skewed against app tracking opt-in by, say, users' distrust of Facebook. It's possible users are being more trusting of some types of apps than others, but that data is not available.
Technique Predicts Response of Brain Tumors to Chemoradiation

AUSTIN, Texas - A team studying malignant brain tumors has developed a new technique for predicting how individual patients will respond to chemoradiation, a major step forward in efforts to personalize cancer treatment.
Researchers at The University of Texas at Austin's Oden Institute for Computational Engineering and Sciences, Texas Advanced Computing Center (TACC) and The University of Texas MD Anderson Cancer Center have merged various quantitative imaging measurements with computational simulations to create an accurate model for calculating the progression of high-grade glioma.
High-grade gliomas are the most common cancerous primary brain tumors found in adults. Current treatments involve surgical resection of the tumor followed by radiation therapy and chemotherapy. Despite this aggressive treatment, prognosis for patients who undergo this approach is generally poor. The growth and behavior of these tumors varies from patient to patient, making the need for techniques to personalize therapy on an individual patient level particularly important.
In a paper published in Nature Scientific Reports, the authors used a combination of anatomical and structural imaging to inform a computational mechanistic model that predicts high-grade glioma tumor progression.
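The press release does not reproduce the model equations. Image-based glioma models in this line of research are commonly built on reaction-diffusion ("proliferation-invasion") equations, in which imaging supplies the initial cell-density map and tissue properties. The minimal one-dimensional sketch below illustrates that general family only; it is not the model from the paper, and the grid and parameter values are invented.

```python
# Minimal 1D sketch of the reaction-diffusion ("proliferation-invasion")
# family of tumor growth models often used in image-based modeling.
# Illustration only, not the authors' model; parameters are assumed.

import numpy as np

nx, dx, dt = 100, 0.1, 0.01   # grid points, spacing (cm), time step (days)
D = 0.01                      # diffusion coefficient (cm^2/day), assumed
k = 0.1                       # proliferation rate (1/day), assumed
theta = 1.0                   # carrying capacity (normalized cell density)

# Initial condition: a small lesion in the middle of the domain, standing
# in for a cell-density map that would come from imaging.
N = np.zeros(nx)
N[45:55] = 0.5

def step(N):
    # Second spatial derivative with rough zero-flux boundaries.
    lap = np.zeros_like(N)
    lap[1:-1] = (N[2:] - 2 * N[1:-1] + N[:-2]) / dx**2
    lap[0], lap[-1] = lap[1], lap[-2]
    # Forward-Euler update: diffusion plus logistic proliferation.
    return N + dt * (D * lap + k * N * (1 - N / theta))

for _ in range(1000):         # simulate 10 days
    N = step(N)

print("peak normalized cell density after 10 days:", round(float(N.max()), 3))
```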
"This project couldn't be attempted without close collaboration between engineers and clinicians," said David Hormuth of the Center for Computational Oncology at UT Austin's Oden Institute.
"Our approach of using individual patient imaging data in a predictive mechanistic model, that incorporates both the anatomical appearance of the tumor on MRI and measurements from a specific MRI scanning technique called diffusion tensor imaging, is showing real promise," said Dr. Caroline Chung of MD Anderson.
Current radiation therapy methods are already tailored to patients using mostly anatomical imaging data prior to the start of radiation therapy and can be adapted in reaction to major changes in tumor appearance during treatment. However, this new technique is a first step toward providing radiation oncologists with the information they need to personalize treatment plans based on a predicted spatial map of the tumor's resistance to radiation.
Throughout this project, researchers at the Oden Institute and MD Anderson have gone back and forth on the type of data needed, model components and the overall goal or application of this model. The Oden Institute brought the expertise in tumor mechanics and modeling, an innovative, physics-based research approach led by Tom Yankeelov of UT Austin over several years. Once paired with Chung's quantitative imaging and clinical brain tumor expertise, the researchers successfully translated prior preclinical efforts in high-grade glioma.
TACC, the third partner in the collaboration to end cancer, made it possible for the researchers to simultaneously calibrate a large family of biologically based mathematical models for each patient.
"In total, we had roughly 6,000 different calibrations or forecast scenarios that would take years to run on a standard laptop," Hormuth said. "By using the Lonestar 5 system to run our model calibration and forecasting approach in parallel, we were able to evaluate all of these scenarios in a matter of days." | A new technique for predicting brain tumors' response to chemoradiation (a combination of chemotherapy and radiation therapy) could help personalize cancer treatment. Researchers at the University of Texas at Austin (UT Austin), the Texas Advanced Computing Center (TACC), and the University of Texas MD Anderson Cancer Center combined anatomical and structural imaging to inform a computational mechanistic model that forecasts high-grade glioma tumor progression. UT Austin's David Hormuth said, "We had roughly 6,000 different calibrations or forecast scenarios that would take years to run on a standard laptop." Hormuth said using TACC's Lonestar 5 high-performance computing system to run the model calibration and forecasting approach in parallel allowed the researchers "to evaluate all of these scenarios in a matter of days." | [] | [] | [] | scitechnews | None | None | None | None | A new technique for predicting brain tumors' response to chemoradiation (a combination of chemotherapy and radiation therapy) could help personalize cancer treatment. Researchers at the University of Texas at Austin (UT Austin), the Texas Advanced Computing Center (TACC), and the University of Texas MD Anderson Cancer Center combined anatomical and structural imaging to inform a computational mechanistic model that forecasts high-grade glioma tumor progression. UT Austin's David Hormuth said, "We had roughly 6,000 different calibrations or forecast scenarios that would take years to run on a standard laptop." Hormuth said using TACC's Lonestar 5 high-performance computing system to run the model calibration and forecasting approach in parallel allowed the researchers "to evaluate all of these scenarios in a matter of days."
Self-Learning Robots Go Full Steam Ahead

Researchers from AMOLF's Soft Robotic Matter group have shown that a group of small autonomous, self-learning robots can adapt easily to changing circumstances. They connected these simple robots in a line, after which each individual robot taught itself to move forward as quickly as possible. The results were published today in the scientific journal PNAS.
Robots are ingenious devices that can do an awful lot. There are robots that can dance and walk up and down stairs, and swarms of drones that can independently fly in a formation, just to name a few. However, all of those robots are programmed to a considerable extent - different situations or patterns have been planted in their brain in advance, they are centrally controlled, or a complex computer network teaches them behavior through machine learning. Bas Overvelde, Principal Investigator of the Soft Robotic Matter group at AMOLF, wanted to go back to the basics: a self-learning robot that is as simple as possible. "Ultimately, we want to be able to use self-learning systems constructed from simple building blocks, which for example only consist of a material like a polymer. We would also refer to these as robotic materials."
The researchers succeeded in getting very simple, interlinked robotic carts that move on a track to learn how they could move as fast as possible in a certain direction. The carts did this without being programmed with a route or knowing what the other robotic carts were doing. "This is a new way of thinking in the design of self-learning robots. Unlike most traditional, programmed robots, this kind of simple self-learning robot does not require any complex models to enable it to adapt to a strongly changing environment," explains Overvelde. "In the future, this could have an application in soft robotics, such as robotic hands that learn how different objects can be picked up or robots that automatically adapt their behavior after incurring damage."
Breathing robots

The self-learning system consists of several linked building blocks of a few centimeters in size, the individual robots. These robots consist of a microcontroller (a minicomputer), a motion sensor, a pump that pumps air into a bellows and a needle to let the air out. This combination enables the robot to breathe, as it were. If you link a second robot via the first robot's bellows, they push each other away. That is what enables the entire robotic train to move. "We wanted to keep the robots as simple as possible, which is why we chose bellows and air. Many soft robots use this method," says PhD student Luuk van Laake.
The only thing that the researchers do in advance is to tell each robot a simple set of rules with a few lines of computer code (a short algorithm): switch the pump on and off every few seconds - this is called the cycle - and then try to move in a certain direction as quickly as possible. The chip on the robot continuously measures the speed. Every few cycles, the robot makes small adjustments to when the pump is switched on and determines whether these adjustments move the robotic train forward faster. Therefore, each robotic cart continuously conducts small experiments.
If you allow two or more robots to push and pull each other in this way, the train will move in a single direction sooner or later. Consequently, the robots learn that this is the better setting for their pump without the need to communicate and without precise programming on how to move forward. The system slowly optimizes itself. The videos published with the article show how the train slowly but surely moves over a circular trajectory.
Tackling new situations

The researchers used two different versions of the algorithm to see which worked better. The first algorithm saves the best speed measurements of the robot and uses this to decide the best setting for the pump. The second algorithm only uses the last speed measurement to decide the best moment for the pump to be switched on in each cycle. That latter algorithm works far better. It can tackle situations without these being programmed in advance because it wastes no time on behavior that might have worked well in the past but no longer does so in the new situation. For example, it could swiftly overcome an obstacle on the trajectory, whereas robots programmed with the other algorithm came to a standstill. "If you manage to find the right algorithm, then this simple system is very robust," says Overvelde. "It can cope with a range of unexpected situations."
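The article describes the update rule only in words. A minimal, hypothetical sketch of the better-performing variant (keep a timing nudge only if the most recent speed reading improved) could look like the following; the measurement function, perturbation size, and numbers are invented, not taken from the actual controller.

```python
# Toy sketch of the per-robot learning loop described above: every few
# cycles the robot nudges its pump-on timing and keeps the change only if
# the most recent speed measurement improved. Names and numbers are
# hypothetical.

import random

def measure_speed(phase):
    """Placeholder for the onboard motion sensor reading (m/s)."""
    # Pretend the train moves fastest when the pump fires at phase 0.3.
    return 1.0 - abs(phase - 0.3) + random.gauss(0, 0.02)

phase = random.random()        # when in the cycle the pump switches on (0..1)
nudge = 0.05                   # size of each experimental adjustment
last_speed = measure_speed(phase)

for _ in range(200):           # a few hundred pump cycles
    trial = (phase + random.choice([-nudge, nudge])) % 1.0
    speed = measure_speed(trial)
    if speed > last_speed:     # the nudge helped: keep it
        phase = trial
    last_speed = speed         # remember only the most recent measurement

print(f"learned pump phase: {phase:.2f}")
```

Because only the latest reading is remembered, a change in the environment simply resets what counts as an improvement, which is one way to picture why this variant adapts to obstacles or damage.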
Pulling off a leg

However simple they might be, the researchers feel the robots have come to life. For one of the experiments, they wanted to damage a robot to see how the entire system would recover. "We removed the needle that acts as the nozzle. That felt a bit strange. As if we were pulling off its leg." The robots also adapted their behavior in the case of this maiming so that the train once again moved in the right direction. It was yet another piece of evidence for the system's robustness.
The system is easy to scale up; the researchers have already managed to produce a moving train of seven robots. The next step is building robots that exhibit more complex behavior. "One such example is an octopus-like construction," says Overvelde. "It is interesting to see whether the individual building blocks will behave like the arms of an octopus. Those also have a decentralized nervous system, a sort of independent brain, just like our robotic system."
Reference

Giorgio Oliveri, Lucas C. van Laake, Cesare Carissimo, Clara Miette, and Johannes T.B. Overvelde, Continuous learning of emergent behavior in robotic matter, PNAS 118 (2021), DOI: 10.1073/pnas.2017015118
Smart Finger Ring with Integrated RFID Chip

Now, where's my house key - could I have left it in the office? And when we want to pull out our wallet at the supermarket checkout, we often find that it has somehow made its way to the bottom of the shopping bag in all the hustle and bustle. A smart ring could soon put an end to such frantic searches: Concealed inside the ring is an RFID tag that is able to pay at the checkout, open the smart front door, act as our health insurance card when attending a medical appointment or replace the key card in a hotel. It might also be possible to save medical data such as our blood group or drug intolerances on this chip: In an accident, the emergency physician would have all the necessary information to hand. Researchers at Fraunhofer IGCV developed the intelligent ring as part of the MULTIMATERIAL Center Augsburg. The large-scale project, sponsored by the Bavarian Ministry of Economic Affairs, Regional Development and Energy, is divided into ten individual projects - including the KINEMATAM project, which came up with the idea and the demonstrator model of the smart part.
More important than the ring itself, however, are the manufacturing process and the ability to integrate electronics while a component is being produced - even at places within the component that would otherwise be inaccessible. The inside of a ring, for example. We can refer to 3D printing in the broadest sense to describe a production process, but in technical jargon, it would be called "powder bed-based additive manufacturing." The principle is this: A laser beam is guided over a bed of fine metal powder. At the point where the 80 micrometer laser spot hits the powder, the powder melts and then solidifies to form a composite material - the rest of the metal, which is not exposed, retains its powder form. The ring is built up layer by layer, with a cavity left for the electronics. Midstream, the process is halted: A robot system automatically picks up an RFID component from a magazine and places it in the recess before the printing process continues. This precisely controllable production technology is opening the door to a host of possibilities for realizing completely individualized ring designs. And the chip is sealed by the ring, making it tamper-proof.
3D printing itself has been around for a long time. The main focus of the development was extending the laser beam melting unit with the internally developed automated process that places the electronics. "Converting the hardware technology to allow electronic components to be integrated during the manufacturing process is unique," says Maximilian Binder, Senior Researcher and Group Manager in the Additive Manufacturing unit at Fraunhofer IGCV. The second focus of the development was to answer this question: How can the electromagnetic signals from the RFID chip be sent through metal? Metal, you see, is normally an effective shield against signals. The research team carried out numerous simulations and experiments - and found a suitable solution. "We use a frequency of 125 kilohertz: This has a shorter range - which is exactly what we want here - and is less effectively shielded by the metal," explains Binder. Plus, the tag is affixed in such a way that its signals have to penetrate just one millimeter of metal. The design of the cavity and the way the electronics are embedded into it are also instrumental in propagating the signal since the walls can reflect or absorb the signals. Another challenge was to protect the sensitive electronics of the RFID tags from the high temperatures, reaching over 1000 degrees Celsius, involved in the manufacturing process.
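The article gives no numbers behind that frequency choice, but the standard skin-depth formula for a conductor makes the reasoning concrete. The back-of-the-envelope calculation below assumes textbook values for a non-magnetic stainless steel (the alloy actually used is not stated) and is only meant to show the order of magnitude.

```python
# Back-of-the-envelope skin-depth comparison illustrating why a low RFID
# frequency penetrates a metal wall better. Material values are generic
# textbook figures for a non-magnetic (austenitic) stainless steel and are
# assumptions, not data from Fraunhofer IGCV.

import math

rho = 6.9e-7         # resistivity of austenitic stainless steel (ohm*m), assumed
mu = 4e-7 * math.pi  # permeability, roughly that of free space for such alloys

def skin_depth(freq_hz):
    """Depth at which the field falls to 1/e of its surface value (meters)."""
    omega = 2 * math.pi * freq_hz
    return math.sqrt(2 * rho / (omega * mu))

for f in (125e3, 13.56e6):   # LF RFID versus the common HF RFID band
    print(f"{f/1e3:>8.0f} kHz -> skin depth ~ {skin_depth(f)*1e3:.2f} mm")

# At 125 kHz the skin depth comes out at a bit over 1 mm, on the order of
# the ring's one-millimeter wall, whereas at 13.56 MHz it is roughly a
# tenth of a millimeter, so the signal would be attenuated far more.
```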
The technology can be used wherever the conventional method of integrating the electronics proves difficult. The researchers are currently working, for example, on an application in the production technology sector: They are implementing sensors in gear wheels, the aim being for them to send, live during operation, information about the load state, temperatures at various positions, and other important parameters to an evaluation unit in a wireless fashion. Is initial damage already occurring on a tooth? The measured vibration will tell us. The integrated sensors receive the energy they need via a printed RFID antenna on the outside - the sensors then work passively, meaning without a battery or other separate power supply. Consequently, the integrated sensors will, in the future, be able to realize a monitoring potential that would otherwise not have been possible due to the fast rotational speed of the gear wheels.
435 | Graphene Key for Novel Hardware Security | 5/10/2021
By Gabrielle Stewart
UNIVERSITY PARK, Pa. - As more private data is stored and shared digitally, researchers are exploring new ways to protect data against attacks from bad actors. Current silicon technology exploits microscopic differences between computing components to create secure keys, but artificial intelligence (AI) techniques can be used to predict these keys and gain access to data. Now, Penn State researchers have designed a way to make the encrypted keys harder to crack.
Led by Saptarshi Das, assistant professor of engineering science and mechanics, the researchers used graphene - a layer of carbon one atom thick - to develop a novel low-power, scalable, reconfigurable hardware security device with significant resilience to AI attacks. They published their findings in Nature Electronics today (May 10).
"There has been more and more breaching of private data recently," Das said. "We developed a new hardware security device that could eventually be implemented to protect these data across industries and sectors."
The device, called a physically unclonable function (PUF), is the first demonstration of a graphene-based PUF, according to the researchers. The physical and electrical properties of graphene, as well as the fabrication process, make the novel PUF more energy-efficient, scalable, and secure against AI attacks that pose a threat to silicon PUFs.
The team first fabricated nearly 2,000 identical graphene transistors, which switch current on and off in a circuit. Despite their structural similarity, the transistors' electrical conductivity varied due to the inherent randomness arising from the production process. While such variation is typically a drawback for electronic devices, it's a desirable quality for a PUF - and one that silicon-based devices do not share.
After the graphene transistors were implemented into PUFs, the researchers modeled their characteristics to create a simulation of 64 million graphene-based PUFs. To test the PUFs' security, Das and his team used machine learning, a method that allows AI to study a system and find new patterns. The researchers trained the AI with the graphene PUF simulation data, testing to see if the AI could use this training to make predictions about the encrypted data and reveal system insecurities.
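The attack test described here can be illustrated with a generic toy experiment (not the team's graphene model or data): train an off-the-shelf classifier on challenge-response pairs from two stand-in PUFs - one with hidden linear structure an attacker can learn, and one that behaves like a keyed random function - and compare how well the attacker predicts responses it has not seen. The sketch assumes NumPy and scikit-learn are available; the device models, the secret key, and all parameters are invented.

```python
import hashlib
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N_BITS, N_TRAIN, N_TEST = 32, 20000, 5000
challenges = rng.integers(0, 2, size=(N_TRAIN + N_TEST, N_BITS))

# (a) A learnable stand-in PUF: responses come from a hidden linear threshold,
#     so an attacker can model it from observed challenge/response pairs.
hidden_w = rng.normal(size=N_BITS)
weak_responses = (challenges @ hidden_w > 0).astype(int)

# (b) A random-like stand-in PUF: responses behave like a keyed random function,
#     leaving no structure for the attacker's model to latch onto.
SECRET = b"device-secret"
def random_like_response(bits):
    digest = hashlib.sha256(SECRET + np.packbits(bits.astype(np.uint8)).tobytes()).digest()
    return digest[0] & 1
random_responses = np.array([random_like_response(c) for c in challenges])

for name, resp in (("learnable PUF", weak_responses), ("random-like PUF", random_responses)):
    attacker = LogisticRegression(max_iter=2000)
    attacker.fit(challenges[:N_TRAIN], resp[:N_TRAIN])
    acc = attacker.score(challenges[N_TRAIN:], resp[N_TRAIN:])
    print(f"{name}: attacker accuracy on unseen challenges = {acc:.2f}")
```

The learnable toy is predicted almost perfectly, while the random-like one stays near the 50% coin-flip line - the latter is the behavior the Penn State team reports for their device.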
"Neural networks are very good at developing a model from a huge amount of data, even if humans are unable to," Das said. "We found that AI could not develop a model, and it was not possible for the encryption process to be learned."
This resistance to machine learning attacks makes the PUF more secure because potential hackers could not use breached data to reverse engineer a device for future exploitation, Das said. Even if the key could be predicted, the graphene PUF could generate a new key through a reconfiguration process requiring no additional hardware or replacement of components.
"Normally, once a system's security has been compromised, it is permanently compromised," said Akhil Dodda, an engineering science and mechanics graduate student conducting research under Das's mentorship. "We developed a scheme where such a compromised system could be reconfigured and used again, adding tamper resistance as another security feature."
With these features, as well as the capacity to operate across a wide range of temperatures, the graphene-based PUF could be used in a variety of applications. Further research can open pathways for its use in flexible and printable electronics, household devices and more.
Paper co-authors include Dodda, Shiva Subbulakshmi Radhakrishnan, Thomas Schranghamer and Drew Buzzell from Penn State; and Parijat Sengupta from Purdue University. Das is also affiliated with the Penn State Department of Materials Science and Engineering and the Materials Research Institute. | Researchers at Pennsylvania State University (Penn State) have demonstrated the first graphene-based physically unclonable function (PUF), a hardware security device resistant to the use of artificial intelligence (AI) techniques to crack encrypted keys. The researchers said graphene's physical and electrical properties ensure the novel PUF is more energy-efficient, scalable, and secure than silicon PUFs. The researchers tested the PUF's security by using a simulation of 64 million graphene-based PUFs to train an AI to determine whether it could make predictions about the encrypted data and identify system insecurities. Penn State's Saptarshi Das said, "We found that AI could not develop a model, and it was not possible for the encryption process to be learned."
436 | Ransomware Attack Leads to Shutdown of Major U.S. Pipeline System | A ransomware attack forced operators of the Colonial Pipeline to shut down its network on Friday, highlighting the vulnerability of industrial sectors to such threats. A U.S. official and another source familiar with the matter said the attack appears to have been conducted by DarkSide, an Eastern European-based criminal gang believed to operate primarily out of Russia. Private companies that probe cyberattacks say they are handling cases involving DarkSide targeting U.S. industrial firms with ransomware, while many other ransomware gangs also appear to be attacking such companies in greater numbers than previously known. Eric Goldstein at the U.S. Department of Homeland Security's Cybersecurity and Infrastructure Security Agency said, "We encourage every organization to take action to strengthen their cybersecurity posture to reduce their exposure to these types of threats."
437 | 'Intelligent' Shoe Helps Blind People Avoid Obstacles | Computer scientists have created an 'intelligent' shoe that helps blind and visually-impaired people avoid multiple obstacles.
The £2,700 (€3,200) product, called InnoMake, has been developed by Austrian company Tec-Innovation, backed by Graz University of Technology (TU Graz).
The product consists of waterproof ultrasonic sensors attached to the tip of each shoe, which trigger vibrations and warning sounds when an obstacle is detected nearby.
The closer the wearer gets to an obstacle, the faster the vibration becomes, much like a parking sensor on the back of a vehicle.
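The parking-sensor analogy translates directly into a simple mapping from measured distance to vibration rate. The function below is purely illustrative - the four-meter cutoff comes from the article, but the pulse-interval numbers are invented and are not Tec-Innovation's firmware logic.

```python
def vibration_interval_s(distance_m, max_range_m=4.0,
                         min_interval_s=0.05, max_interval_s=1.0):
    """Map an ultrasonic distance reading to the pause between vibration pulses.

    Beyond max_range_m nothing is signalled; as the obstacle gets closer the
    pause shrinks linearly toward min_interval_s, so the buzzing speeds up.
    """
    if distance_m is None or distance_m >= max_range_m:
        return None                    # nothing in range -> stay quiet
    frac = max(distance_m, 0.0) / max_range_m
    return min_interval_s + frac * (max_interval_s - min_interval_s)

for d in (3.5, 2.0, 1.0, 0.3):
    print(f"obstacle at {d:.1f} m -> pulse every {vibration_interval_s(d):.2f} s")
```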
Tec-Innovation is now working on embedding an AI-powered camera as part of a new iteration of the product.
'Ultrasonic sensors on the toe of the shoe detect obstacles up to four meters [13 feet] away,' said Markus Raffer, a founder of Tec-Innovation and himself visually impaired.
'The wearer is then warned by vibration and/or acoustic signals. This works very well and is already a great help to me personally.'
The product price includes one device per foot, along with one pair of shoes (or installation on an existing pair of shoes), as well as a USB charger.
The system detects two pieces of information that are key to avoiding obstacles, the scientists say - the nature of an obstacle and its directional path, especially if downward facing, such as holes or stairs leading into a subway.
'Not only is the warning that I am facing an obstacle relevant, but also the information about what kind of obstacle I am facing, because it makes a big difference whether it's a wall, a car or a staircase,' said Raffer.
The approved medical device, which is available to buy on Tec-Innovation's website, is just the first version of the product, however.
The scientists are working on integrating a camera-based recognition system that's powered by machine learning, a type of artificial intelligence (AI).
Images captured by the embedded camera will essentially allow it to detect more about each obstacle as the wearer walks around.
'We have developed state-of-the-art deep-learning algorithms modelled on neural networks that can do two main things after detecting and interpreting the content of the image,' said Friedrich Fraundorfer at TU Graz.
'They use camera images from the foot perspective to determine an area that is free of obstacles and thus safe to walk on, and they can recognise and distinguish objects.'
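A minimal sketch of the second task - deciding from a per-pixel "walkable" mask whether the strip straight ahead is clear - might look like the following. The segmentation model that produces the mask is not shown, and the corridor size and threshold are assumptions, not values from TU Graz.

```python
import numpy as np

def path_ahead_is_clear(walkable_mask, corridor_frac=0.4, min_clear=0.95):
    """Check a vertical corridor in the lower half of the frame.

    walkable_mask is an HxW boolean array produced by some segmentation model
    (not shown here); True means the pixel was labelled safe to walk on.
    """
    h, w = walkable_mask.shape
    half = int(w * corridor_frac / 2)
    corridor = walkable_mask[h // 2:, w // 2 - half: w // 2 + half]
    return corridor.mean() >= min_clear

# toy frame: everything walkable except an obstacle blob in the corridor
mask = np.ones((120, 160), dtype=bool)
mask[90:110, 70:95] = False
print(path_ahead_is_clear(mask))   # False -> warn the wearer
```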
Tec-Innovation is now working on integrating the camera system into a new prototype so it's both robust and comfortable.
The firm also wants to combine the information collected while wearing the shoe into a kind of 'street view navigation map' for visually impaired people.
'As it currently stands, only the wearer benefits in each case from the data the shoe collects as he or she walks,' said Fraundorfer.
'It would be much more sustainable if this data could also be made available to other people as a navigation aid.'
A funding application is currently being submitted to the Austrian Research Promotion Agency FFG to bring the navigation map to fruition, which researchers say would likely happen in the 'distant future'. | Computer scientists at Austria's Graz University of Technology (TU Graz) and medical technology maker Tec-Innovation have designed a shoe with ultrasonic sensors that can detect obstacles and emit vibrations to warn blind and visually impaired wearers. The sensors are attached to the toe of each InnoMake shoe, which will sell for £2,700 ($3,759) per pair, and the vibrations become faster the closer the wearer gets to an obstacle. Tec-Innovation's Markus Raffer said the sensors detect obstacles up to four meters (13 feet) off, with the device able to identify the nature of an obstacle and its directional path, especially if it is downward-facing.
439 | Learning on the Fly: Computational Model Demonstrates Similarity in How Humans, Insects Learn | Even the humble fruit fly craves a dose of the happy hormone, according to a new study from the University of Sussex which shows how they may use dopamine to learn in a similar manner to humans.
Informatics experts at the University of Sussex have developed a new computational model that demonstrates a long-sought-after link between insect and mammalian learning, as detailed in a new paper published today in Nature Communications.
Incorporating anatomical and functional data from recent experiments, Dr James Bennett and colleagues modelled how the anatomy and physiology of the fruit fly's brain can support learning according to the reward prediction error (RPE) hypothesis.
The computational model indicates how dopamine neurons in an area of a fruit fly's brain, known as the mushroom body, can produce similar signals to dopamine neurons in mammals, and how these dopamine signals can reliably instruct learning.
The academics believe that establishing whether flies also use prediction errors to learn could lead to more humane animal research, allowing researchers to replace animals with simpler insect species in future studies into the mechanisms of learning.
By opening up new opportunities to study neural mechanisms of learning, the researchers hope the model could also be helpful in illuminating greater understanding of mental health issues such as depression or addiction which are underpinned by the RPE hypothesis.
Dr Bennett, research fellow in the University of Sussex's School of Engineering and Informatics , said: "Using our computational model, we were able to show that data from insect experiments did not necessarily conflict with predictions from the RPE hypothesis, as had been thought previously.
"Establishing a bridge between insect and mammal studies on learning may open up the possibility to exploit the powerful genetic tools available for performing experiments in insects, and the smaller scale of their brains, to make sense of brain function and disease in mammals, including humans."
Understanding of how mammals learn has come a long way thanks to the RPE hypothesis, which suggests that associative memories are learned in proportion to how inaccurate they are.
The hypothesis has had considerable success explaining experimental data about learning in mammals, and has been extensively applied to decision-making and mental health illnesses such as addiction and depression. But scientists have encountered difficulties when applying the hypothesis to learning in insects due to conflicting results from different experiments.
The University of Sussex research team created a computational model to show how the major features of mushroom body anatomy and physiology can implement learning according to the RPE hypothesis.
The model simulates a simplification of the mushroom body, including different neuron types and the connections between them, and shows how the activity of those neurons promotes learning and influences the decisions a fly makes when certain choices are rewarded.
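A deliberately stripped-down sketch of that idea (not the authors' published model) is shown below: sparse Kenyon-cell odor patterns drive a mushroom body output neuron, and a dopamine-like reward prediction error updates only the active synapses. Cell counts, sparsity, and the learning rate are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N_KC, N_ACTIVE = 200, 10      # Kenyon cells, and how many fire per odor
N_TRIALS, LR = 40, 0.05       # LR * N_ACTIVE = 0.5, so learning converges smoothly

def random_odor():
    """A sparse, random Kenyon-cell activity pattern standing in for one odor."""
    x = np.zeros(N_KC)
    x[rng.choice(N_KC, N_ACTIVE, replace=False)] = 1.0
    return x

odor_a, odor_b = random_odor(), random_odor()   # A is paired with sugar, B is not
w = np.zeros(N_KC)                              # KC -> output-neuron weights (the memory)

for trial in range(N_TRIALS):
    for x, reward in ((odor_a, 1.0), (odor_b, 0.0)):
        predicted = w @ x              # output neuron's estimate of upcoming reward
        rpe = reward - predicted       # dopamine-like reward prediction error
        w += LR * rpe * x              # only synapses from active cells change
    if trial % 10 == 0:
        print(f"trial {trial:2d}: predicted reward  A={w @ odor_a:.2f}  B={w @ odor_b:.2f}")
```

The prediction for the rewarded odor climbs toward 1 while the unrewarded odor stays near 0, and the error signal - the stand-in for dopamine - shrinks as the memory becomes accurate, which is the core of the RPE account.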
To further understanding of learning in fly brains, the research team used their model to make five novel predictions about the influence different neurons in the mushroom body have on learning and decision-making, in the hope that they promote future experimental work.
Dr Bennett said: "While other models of the mushroom body have been created, to the best of our knowledge no other model until now has included connections between dopamine neurons and another set of neurons that predict and drive behaviour towards rewards. For example, when the reward is the sugar content of food, these connections would allow the predicted sugar availability to be compared with the actual sugar ingested, allowing more accurate predictions and appropriate sugar-seeking behaviours to be learned.
"The model can explain a large array of behaviours exhibited by fruit flies when the activity of particular neurons in their brains are either silenced or activated artificially in experiments. We also propose connections between dopamine neurons and other neurons in the mushroom body, which have not yet been reported in experiments, but would help to explain even more experimental data."
Thomas Nowotny, Professor of Informatics at the University of Sussex, said: "The model brings together learning theory and experimental knowledge in a way that allows us to think systematically how fly brains actually work. The results show how learning in simple flies might be more similar to how we learn than previously thought." | A computational model developed by researchers at the U.K.'s University of Sussex shows similarities in the way insects and mammals learn. The model demonstrates that dopamine neurons in an area of a fruit fly's brain known as the mushroom body produce signals similar to those of dopamine neurons in mammals, and that these signals support learning according to the reward prediction error (RPE) hypothesis. Sussex researcher James Bennett said, "Establishing a bridge between insect and mammal studies on learning may open up the possibility to exploit the powerful genetic tools available for performing experiments in insects, and the smaller scale of their brains, to make sense of brain function and disease in mammals, including humans."
440 | 60% of School Apps Are Sharing Kids' Data with 3rd Parties | Over the past year, we've seen schools shift to digital services at an unprecedented rate as a way to educate kids safely during the c ovid-19 pandemic. We've also seen these digital tools slurp up these kid's data at a similarly unprecedented rate, suffer massive breaches , and generally handle student's personal information with a lot less care than they should.
Case in point: A new report published Tuesday by the tech-focused nonprofit Me2B Alliance found the majority of school utility apps were sharing some amount of student data with third-party marketing companies. The Me2B team surveyed a few dozen so-called "utility" apps for school districts - the kind that students and parents download to, say, review their school's calendar or bussing schedules - and found roughly 60% of them sharing everything from a student's location to their entire contact list, to their phone's mobile ad identifiers , all with companies these students and their parents likely never heard of.
In order to figure out what kind of data these apps were sharing, Me2B analyzed the software development kits (or SDKs) that these apps came packaged with. While SDKs can do all sorts of things, these little libraries of code often help developers monetize their free-to-download apps by sharing some sort of data with third-party ad networks. Facebook has some super popular SDKs, as does Google. Of the 73 apps surveyed in the report, there were 486 total SDKs throughout - with an average of just over 10 SDKs per app surveyed.
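A toy version of that kind of audit - grepping a decompiled app for a handful of well-known ad/analytics package prefixes - could look like this. The prefix list, file layout, and hypothetical path are examples only and are not the Me2B Alliance's actual methodology.

```python
import os
from collections import Counter

# Package prefixes commonly associated with ad/analytics SDKs.
# Example list only -- not the one used in the Me2B report.
SDK_PREFIXES = {
    "com.facebook": "Facebook",
    "com.google.android.gms.ads": "Google AdMob",
    "com.google.firebase.analytics": "Firebase Analytics",
    "com.adcolony": "AdColony",
}

def scan_decompiled_app(root_dir):
    """Count files in a decompiled app tree that mention each known SDK prefix."""
    hits = Counter()
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            if not name.endswith((".smali", ".java", ".xml")):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    text = fh.read()
            except OSError:
                continue
            for prefix, vendor in SDK_PREFIXES.items():
                if prefix in text or prefix.replace(".", "/") in text:
                    hits[vendor] += 1
    return hits

# hypothetical usage: print(scan_decompiled_app("decompiled/school_app/"))
```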
Of those 486 total bits of code, nearly 63% (306) were owned and operated by either Facebook or Google. The rest of those SDKs were sharing data with some lesser-known third parties, with names like AdColony and AdMob.
But the data sharing didn't stop there. As the report points out, these lesser-known SDKs would often share the data pulled from these student apps with dozens - if not hundreds - of other little-known third parties. What's interesting here is that these SDKs, in particular, were found abundantly in Android apps, but way fewer iOS apps ended up bringing these pieces of tech onboard (91% versus 26%, respectively).
There are a few reasons why this might be the case. First, even if Apple isn't always careful about following its own privacy rules, the company does set a certain standard that every iOS developer needs to follow, particularly when it comes to tracking and targeting the people using their apps. Most recently, Apple turned this up to 11 by mandating App Tracking Transparency (ATT) reports for the apps in its store, which literally request a user's permission in order to track their activity outside of the app.
Even though Android does have its own review process for apps, historically, we've seen some insecure apps slip through the cracks and onto countless people's devices. Also, there's a good chance that many apps developed for Android are beaming some degree of data right back to Google.
And with Apple slowly tightening its standards surrounding ATT, it's possible that the divide between the two operating systems will only keep broadening - which leaves students' data stuck in the middle. | A study by technology-focused nonprofit Me2B Alliance analyzed 73 "utility" apps for school districts and found that about 60% share some student data with third-party marketing companies. These apps are downloaded by students and parents to review school calendars or bus schedules, among other things. The data shared includes the student's location, their contact list, and their phone's mobile ad identifiers. The researchers found 486 software development kits (SDKs), small libraries of code that help monetize the apps by sharing data with third-party ad networks, across the 73 apps. About two-thirds of the SDKs were owned and operated by Facebook or Google, and the rest shared data with lesser-known third parties that shared data with dozens, if not hundreds, of other lesser-known third parties.
441 | Researchers Confront Major Hurdle in Quantum Computing | Quantum science has the potential to revolutionize modern technology with more efficient computers, communication, and sensing devices. But challenges remain in achieving these technological goals, especially when it comes to effectively transferring information in quantum systems.
A regular computer consists of billions of transistors, called bits. Quantum computers, on the other hand, are based on quantum bits, also known as qubits, which can be made from a single electron.
Unlike ordinary transistors, which can be either "0" (off) or "1" (on), qubits can be both "0" and "1" at the same time. The ability of individual qubits to occupy these so-called superposition states, where they are in multiple states simultaneously, underlies the great potential of quantum computers. Just like ordinary computers, however, quantum computers need a way to transfer quantum information between distant qubits - and that presents a major experimental challenge.
In a series of papers published in Nature Communications, researchers at the University of Rochester, including John Nichol, an assistant professor of physics and astronomy, and graduate students Yadav Kandel and Haifeng Qiao, the lead authors of the papers, report major strides in enhancing quantum computing by improving the transfer of information between electrons in quantum systems.
In one paper , the researchers demonstrated a route of transferring information between qubits, called adiabatic quantum state transfer (AQT), for the first time with electron-spin qubits. Unlike most methods of transferring information between qubits, which rely on carefully tuned electric or magnetic-field pulses, AQT isn't as affected by pulse errors and noise.
To envision how AQT works, imagine you are driving your car and want to park it. If you don't hit your brakes at the proper time, the car won't be where you want it, with potential negative consequences. In this sense, the control pulses - the gas and brake pedals - to the car must be tuned carefully. AQT is different in that it doesn't really matter how long you press the pedals or how hard you press them: the car will always end up in the right spot. As a result, AQT has the potential to improve the transfer of information between qubits, which is essential for quantum networking and error correction.
The researchers demonstrated AQT's effectiveness by exploiting entanglement - one of the basic concepts of quantum physics in which the properties of one particle affect the properties of another, even when the particles are separated by a large distance. The researchers were able to use AQT to transfer one electron's quantum spin state across a chain of four electrons in semiconductor quantum dots - tiny, nanoscale semiconductors with remarkable properties. This is the longest chain over which a spin state has ever been transferred, tying the record set by the researchers in a previous Nature paper.
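For intuition, adiabatic transfer can be reproduced in a few lines for a toy three-site chain: ramp the two couplings in "counter-intuitive" order and the excitation follows a zero-energy dark state from one end to the other. The NumPy sketch below is a single-excitation idealization with invented pulse shapes and timings, not a model of the four-quantum-dot experiment.

```python
import numpy as np

# Single-excitation subspace of a 3-site chain, basis |100>, |010>, |001>.
# Ramping the couplings in "counter-intuitive" order keeps the system in the
# zero-energy dark state, which rotates from site 1 to site 3.
T, STEPS, J_MAX, SIGMA, OFFSET = 60.0, 3000, 1.0, 12.0, 10.0
dt = T / STEPS

def pulse(t, centre):
    return J_MAX * np.exp(-((t - centre) / SIGMA) ** 2)

state = np.array([1.0, 0.0, 0.0], dtype=complex)   # excitation starts on site 1
for k in range(STEPS):
    t = (k + 0.5) * dt
    j12 = pulse(t, T / 2 + OFFSET)   # coupling 1<->2, ramped up late
    j23 = pulse(t, T / 2 - OFFSET)   # coupling 2<->3, ramped up early
    H = np.array([[0, j12, 0],
                  [j12, 0, j23],
                  [0, j23, 0]], dtype=complex)
    evals, evecs = np.linalg.eigh(H)                 # exact propagator for this step
    U = evecs @ np.diag(np.exp(-1j * evals * dt)) @ evecs.conj().T
    state = U @ state

print("final site populations:", np.round(np.abs(state) ** 2, 3))
# Slow enough ramps leave most of the excitation on site 3, with little ever on site 2.
```

The exact pulse durations matter much less than in a conventional swap sequence, which is the robustness the Rochester team highlights.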
"Because AQT is robust against pulse errors and noise, and because of its major potential applications in quantum computing, this demonstration is a key milestone for quantum computing with spin qubits," Nichol says.
In a second paper , the researchers demonstrated another technique of transferring information between qubits, using an exotic state of matter called time crystals. A time crystal is a strange state of matter in which interactions between the particles that make up the crystal can stabilize oscillations of the system in time indefinitely. Imagine a clock that keeps ticking forever; the pendulum of the clock oscillates in time, much like the oscillating time crystal.
By implementing a series of electric-field pulses on electrons, the researchers were able to create a state similar to a time crystal. They found that they could then exploit this state to improve the transfer of an electron's spin state in a chain of semiconductor quantum dots.
"Our work takes the first steps toward showing how strange and exotic states of matter, like time crystals, can potentially be used for quantum information processing applications, such as transferring information between qubits," Nichol says. "We also theoretically show how this scenario can implement other single- and multi-qubit operations that could be used to improve the performance of quantum computers."
Both AQT and time crystals, while different, could be used simultaneously with quantum computing systems to improve performance.
"These two results illustrate the strange and interesting ways that quantum physics allows for information to be sent from one place to another, which is one of the main challenges in constructing viable quantum computers and networks," Nichol says.
University of Rochester (UR) researchers have reported significant progress in improving data transfer between electrons in quantum systems. One study described adiabatic quantum state transfer (AQT) between quantum bits (qubits) using electron-spin qubits, which is immune to pulse errors and noise. The UR team successfully transferred one electron's quantum spin state across a chain of four electrons in semiconductor quantum dots via AQT. A second study highlighted data transfer between qubits using an exotic state of matter called time crystals, in a chain of semiconductor quantum dots. UR's John Nichol said, "These two results illustrate the strange and interesting ways that quantum physics allows for information to be sent from one place to another, which is one of the main challenges in constructing viable quantum computers and networks."
442 | The Next Frontier for Gesture Control: Teeth | Is totally unobtrusive control of my devices really too much to ask? Apparently, it is, because every device that I own insists that I either poke it, yammer at it, or wave it around to get it to do even the simplest of things. This is mildly annoying when I'm doing just about anything that isn't lying on the couch, and majorly annoying when I'm doing some specific tasks like washing dishes or riding my bike.
Personally, I think that the ideal control system would be something that I can use when my hands are full, or when there's a lot of ambient noise, or when I simply want to be unobtrusive about telling my phone what I want it to do, whether that's muting an incoming call during dinner or in any number of other situations that might be far more serious. Options at this point are limited. Toe control? Tongue control? Let's go even simpler, and make teeth control of devices a thing.
When we talk about controlling stuff with our teeth, the specific method does not involve replacing teeth with buttons or tiny little joysticks or anything, however cool that might be. Rather, you can think of your teeth as a system for generating gestures that also produce noises at the same time. All you have to do is gently bite in a specific area, and you've produced a repeatable sound and motion that can be detected by a combination of microphones and IMUs:
In this work by the Smart Computer Interfaces for Future Interactions (SciFi) Lab at Cornell , researchers developed a prototype of a wearable system called TeethTap that was able to detect and distinguish 13 different teeth-tapping gestures with a real-time classification accuracy of over 90% in a controlled environment. The system uses IMUs just behind the bottom of the ear where the jawline begins, along with contact microphones up against the temporal bone behind the ear. Obviously, the prototype is not what anybody wants to be wearing, but that's because it's just a proof of concept, and the general idea is that the electronics ought to be small enough to integrate into a set of headphones, earpiece, or even possibly the frame of a pair of glasses.
Photos: Cornell SciFi Lab
During extended testing, TeethTap managed to work (more or less) while study participants were in the middle of talking with one of the researchers, writing on paper while talking, walking or running around the lab, and even while they were eating or drinking, which is pretty remarkable. The system is tuned so that you're much more likely to get a false negative over a false positive, and the researchers are already working on optimization strategies to improve accuracy, especially if you're using the system while moving.
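A bare-bones version of that pipeline - window the two sensor streams, pull out a few features, and match against per-gesture templates with a reject threshold that favors false negatives - might look like the sketch below. The synthetic signals, sample rate, and thresholds are all invented stand-ins for real TeethTap data.

```python
import numpy as np

rng = np.random.default_rng(2)
FS, WIN = 1000, 256    # assumed sample rate (Hz) and window length

def synth_tap(tone_hz, imu_amp):
    """Synthetic stand-in for one tap window: a decaying tone on the contact
    microphone plus a short jolt on the IMU magnitude channel."""
    t = np.arange(WIN) / FS
    mic = np.sin(2 * np.pi * tone_hz * t) * np.exp(-30 * t) + 0.05 * rng.normal(size=WIN)
    imu = imu_amp * np.exp(-50 * t) + 0.02 * rng.normal(size=WIN)
    return imu, mic

def features(imu, mic):
    spectrum = np.abs(np.fft.rfft(mic))
    return np.array([np.sqrt(np.mean(imu ** 2)),           # IMU energy
                     np.sqrt(np.mean(mic ** 2)),           # mic energy
                     float(np.argmax(spectrum[1:]) + 1)])  # dominant mic bin

# one template (centroid) per gesture, built from a handful of example windows
GESTURES = {"left tap": (120, 1.0), "right tap": (220, 0.6)}
CENTROIDS = {g: np.mean([features(*synth_tap(f, a)) for _ in range(20)], axis=0)
             for g, (f, a) in GESTURES.items()}

def classify(imu, mic, reject_dist=4.0):
    """Nearest centroid, with a reject threshold: an ambiguous window is dropped
    (a false negative) rather than mapped to the wrong gesture."""
    x = features(imu, mic)
    dists = {g: np.linalg.norm(x - c) for g, c in CENTROIDS.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < reject_dist else None

print(classify(*synth_tap(120, 1.0)))   # usually 'left tap'
```

Raising the reject distance trades missed taps for more accidental triggers, which is the tuning knob described above.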
While it's tempting to look at early-stage work like this and not take it seriously, I honestly hope that something comes of TeethTap. If it could be as well integrated as the researchers are suggesting that it could be, I would be an enthusiastic early adopter.
TeethTap will be presented next week at CHI 2021, and you can read the paper on arXiv here. | Researchers at Cornell University's Smart Computer Interfaces for Future Interactions Lab have developed a prototype wearable system controlled by teeth-tapping gestures. The prototype features an inertial measurement unit (IMU) located behind the bottom of the ear where the jawline begins, and contact microphones that sit against the temporal bone behind the ear. The TeethTap system was capable of identifying and distinguishing 13 different teeth-tapping gestures in a controlled environment with a real-time classification accuracy rate of more than 90%. The researchers found TeethTap worked while study participants were talking, writing, walking, running, eating, or drinking.
|||
443 | Where Should Our Digital Data Go After We Die? | People want control over what personal digital data is passed along after they die, along with tools to make it easier to do so, according to a new case study by computer science researchers at the University of British Columbia.
On the other hand, people found the idea of creating AI-powered replicas of a deceased person "creepy."
The study's full findings - among the first to look at ways of preparing personal digital data for death - will be presented next week at the 2021 Human Computer Interaction Conference (CHI.)
"As someone growing up completely in the digital revolution, all the memories captured of my life are stored digitally. It struck me that many of the platforms I use don't have great tools to support that data after I'm gone," said lead author Janet Chen , who was an undergraduate student in the department of computer science at the time of the study. "We wanted to look at how to curate this data both while living, and after death."
Using a method called 'research through design,' Chen and her co-authors, Francesco Vitale and Dr. Joanna McGrenere, created 12 rough design concepts for data management. The concepts were then presented to study participants, aged 18 to 81.
The researchers also explored different levels of user-control by presenting human-selected, computer-selected and AI-powered alternatives. These employed techniques such as nudging the user to complete tasks, collaborating with family and friends and gamifying the process.
Some of the concepts leveraged design elements from existing tools, such as videos that are auto-generated from iPhone photos. One entirely new concept was 'Generation Cloud,' whereby a family could upload meaningful data to something like Google Drive, then access it, or even contribute more, in the future.
Other tools included:
"Users were receptive to the autogenerated video tool, because they could easily pass a pre-made video along to family and friends, as well as really liking Generation Cloud," said Chen. "One concept people clearly did not like at all was an AI-powered replica of the deceased person, which would interact with future generations. They said it was scary and creepy."
Overall, the researchers found that participants had not really thought about their digital data, but when presented with concepts, preferred ideas that allowed them to preserve their sense of agency and control over what was passed along, with tools to make that process easier.
At the moment, there are only a few tools available that consider data after death, such as Facebook's Legacy Contact or Google's Inactive Account manager, but the researchers say these are platform-specific and limited in range.
"Ten years from now it will likely be quite commonplace for people to be thinking about the reams of data they have online, and there is a huge opportunity for research with larger populations and new interfaces to support people who care about what happens to their data," said Dr. McGrenere, a professor in the department of computer science. "Tools need to be lightweight to use, and designed so that they support the range of individual differences that we saw in our participants in terms of how they want to manage their data."
The paper "What Happens After Death? Using a Design Workbook to Understand User Expectations for Preparing their Data," will be presented on May 11 and 12 at the 2021 Human Computer Interaction Conference (CHI.) | A study by computer scientists at the University of British Columbia (UBC) in Canada considered ways for people to control what happens to their personal digital data after they die. The researchers presented study participants ages 18 to 81 with 12 rough design concepts for data management featuring different levels of user-control, including human-selected, computer-selected, and artificial intelligence (AI) -powered options. The researchers found study participants generally had not previously thought about what happens to their digital data after death, but when presented with the study's concepts, preferred ideas that let them preserve their sense of agency over what remains online after their passing. Observed UBC's Janet Chen, "One concept people clearly did not like at all was an AI-powered replica of the deceased person, which would interact with future generations. They said it was scary and creepy." | [] | [] | [] | scitechnews | None | None | None | None | A study by computer scientists at the University of British Columbia (UBC) in Canada considered ways for people to control what happens to their personal digital data after they die. The researchers presented study participants ages 18 to 81 with 12 rough design concepts for data management featuring different levels of user-control, including human-selected, computer-selected, and artificial intelligence (AI) -powered options. The researchers found study participants generally had not previously thought about what happens to their digital data after death, but when presented with the study's concepts, preferred ideas that let them preserve their sense of agency over what remains online after their passing. Observed UBC's Janet Chen, "One concept people clearly did not like at all was an AI-powered replica of the deceased person, which would interact with future generations. They said it was scary and creepy."
|||
445 | 3D Detectors Measure Social Distancing to Help Fight COVID-19 | "When Switzerland went into lockdown last year, we were working on an algorithm for self-driving cars," says Lorenzo Bertoni, a PhD student at EPFL's Visual Intelligence for Transportation (VITA) Laboratory. "But we quickly saw that by adding just a few features, we could make our program a useful tool for managing the pandemic." The VITA lab is headed by tenure-track assistant professor Alexandre Alahi.
After spending several weeks reading up on how the Covid-19 virus is spread, Bertoni and his team began to realize - along with the rest of the scientific community - that microdroplets play a key role in spreading the virus and that it's essential for people to maintain a distance of at least 1.5 meters if they're not wearing a face mask. The researchers therefore began tweaking their algorithm, which was initially designed to detect the presence of another car or a pedestrian on the road and instruct the self-driving car to slow down, stop, change direction or accelerate. The researchers just published their work in IEEE Transactions on Intelligent Transportation Systems and will present it at the International Conference on Robotics and Automation (ICRA) on 2 June 2021.
A different calculation method
Distance detectors currently on the market use fixed-place cameras and LiDAR (laser-based) sensors. But EPFL's 3D detector, called MonoLoco, can be easily attached to any kind of camera or video recorder - even those sold by consumer electronics retailers - or to a smartphone. That's because it uses an innovative approach which entails calculating the dimensions of human silhouettes and the distance between them. In other words, it estimates how far apart two people are based on their sizes instead of on ground measurements. "Most detectors locate individuals in the 3D space by assuming they're on the same flat surface. The camera has to be perfectly still and its utility is therefore limited - there are problems with accuracy if, for example, someone is coming up the stairs," says Bertoni, the study's lead author. "So we wanted to develop a detector that was more accurate and wouldn't mistake a streetlight for a pedestrian."
Other innovative features of EPFL's algorithm are that it can identify people's body orientation, determine how a group of people are interacting - and especially whether they're talking - and evaluate whether they're staying 1.5 m apart. That's all because it uses a different calculation method than existing detectors. What's more, MonoLoco keeps the faces and silhouettes of people who are filmed completely anonymous because it measures only the distances between their joints (i.e., their shoulders, wrists, hips and knees). It takes a picture or video of a given area and converts the people's bodies into unidentifiable silhouettes sketched out with lines and dots. This information lets the algorithm calculate how far apart they are and their respective body orientation. "Our program doesn't need to store the original pictures and videos. And we believe that's a step in the right direction with regard to protecting people's privacy," says Bertoni.
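As a rough illustration of the downstream distancing check - not MonoLoco's actual implementation - once each detected person has an estimated 3D position, flagging pairs closer than 1.5 meters is a simple pairwise computation. The coordinates below are invented example values.

```python
# Illustrative sketch: flag pairs of people estimated to be closer than 1.5 m.
# The 3D positions would come from a monocular localization model such as
# MonoLoco; the coordinates below are invented example values in meters.
from itertools import combinations
import math

THRESHOLD_M = 1.5

people = {              # person id -> estimated (x, y, z) in the camera frame
    "p1": (0.2, 0.0, 3.1),
    "p2": (1.1, 0.0, 3.4),
    "p3": (4.0, 0.0, 7.9),
}

violations = [
    (a, b, round(math.dist(people[a], people[b]), 2))
    for a, b in combinations(people, 2)
    if math.dist(people[a], people[b]) < THRESHOLD_M
]
print(violations)       # [('p1', 'p2', 0.95)]
```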
Several possible applications "We came up with several possible applications for our program during a pandemic," says Bertoni. "On public transport, of course, but also in shops, restaurants, offices and train stations - and even in factories, since it could let people work safely by maintaining the necessary distance." And the distance requirement can be configured at up to 40 meters apart, whether between people or objects or both, as can their orientation. The researchers have published their algorithm's source code on the VITA website and are planning an initial deployment in Swiss postal buses through a joint project with Swiss Post. | Researchers at the Swiss Federal Institute of Technology, Lausanne, repurposed an algorithm initially created for autonomous vehicles to help people comply with COVID-19-related social distancing requirements. Using a camera, the three-dimensional MonoLoco algorithm calculates the dimensions of human silhouettes and the distance between them to determine whether individuals are maintaining proper infection-preventive distances, without collecting personal data. MonoLoco can identify bodily orientation, determine how a group of people are interacting, and assess whether they remain 1.5 meters (about five feet) apart. Said EPFL's Lorenzo Bertoni, "When Switzerland went into lockdown last year, we were working on an algorithm for self-driving cars, but we quickly saw that by adding just a few features, we could make our program a useful tool for managing the pandemic." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the Swiss Federal Institute of Technology, Lausanne, repurposed an algorithm initially created for autonomous vehicles to help people comply with COVID-19-related social distancing requirements. Using a camera, the three-dimensional MonoLoco algorithm calculates the dimensions of human silhouettes and the distance between them to determine whether individuals are maintaining proper infection-preventive distances, without collecting personal data. MonoLoco can identify bodily orientation, determine how a group of people are interacting, and assess whether they remain 1.5 meters (about five feet) apart. Said EPFL's Lorenzo Bertoni, "When Switzerland went into lockdown last year, we were working on an algorithm for self-driving cars, but we quickly saw that by adding just a few features, we could make our program a useful tool for managing the pandemic."
"When Switzerland went into lockdown last year, we were working on an algorithm for self-driving cars," says Lorenzo Bertoni, a PhD student at EPFL's Visual Intelligence for Transportation (VITA) Laboratory. "But we quickly saw that by adding just a few features, we could make our program a useful tool for managing the pandemic." The VITA lab is headed by tenure-track assistant professor Alexandre Alahi.
After spending several weeks reading up on how the Covid-19 virus is spread, Bertoni and his team began to realize - along with the rest of the scientific community - that microdroplets play a key role in spreading the virus and that it's essential for people to maintain a distance of at least 1.5 meters if they're not wearing a face mask. The researchers therefore began tweaking their algorithm, which was initially designed to detect the presence of another car or a pedestrian on the road and instruct the self-driving car to slow down, stop, change direction or accelerate. The researchers just published their work in IEEE Transactions on Intelligent Transportation Systems and will present it at the International Conference on Robotics and Automation (ICRA) on 2 June 2021.
A different calculation method Distance detectors currently on the market use fixed-place cameras and LiDAR (laser-based) sensors. But EPFL's 3D detector, called MonoLoco, can be easily attached to any kind of camera or video recorder - even those sold by consumer electronics retailers - or to a smartphone. That's because it uses an innovative approach which entails calculating the dimensions of human silhouettes and the distance between them. In other words, it estimates how far apart two people are based on their sizes instead of on ground measurements. "Most detectors locate individuals in the 3D space by assuming they're on the same flat surface. The camera has to be perfectly still and its utility is therefore limited - there are problems with accuracy if, for example, someone is coming up the stairs," says Bertoni, the study's lead author. "So we wanted to develop a detector that was more accurate and wouldn't mistake a streetlight for a pedestrian."
Other innovative features of EPFL's algorithm are that it can identify people's body orientation, determine how a group of people are interacting - and especially whether they're talking - and evaluate whether they're staying 1.5 m apart. That's all because it uses a different calculation method than existing detectors. What's more, MonoLoco keeps the faces and silhouettes of people who are filmed completely anonymous because it measures only the distances between their joints (i.e., their shoulders, wrists, hips and knees). It takes a picture or video of a given area and converts the people's bodies into unidentifiable silhouettes sketched out with lines and dots. This information lets the algorithm calculate how far apart they are and their respective body orientation. "Our program doesn't need to store the original pictures and videos. And we believe that's a step in the right direction with regard to protecting people's privacy," says Bertoni.
Several possible applications "We came up with several possible applications for our program during a pandemic," says Bertoni. "On public transport, of course, but also in shops, restaurants, offices and train stations - and even in factories, since it could let people work safely by maintaining the necessary distance." And the distance requirement can be configured at up to 40 meters apart, whether between people or objects or both, as can their orientation. The researchers have published their algorithm's source code on the VITA website and are planning an initial deployment in Swiss postal buses through a joint project with Swiss Post. |
|||
446 | Lab Launches Free Library of Virtual, AI-Calculated Organic Compounds | Alán Aspuru-Guzik 's research group at the University of Toronto has launched an open-access tool that promises to accelerate the discovery of new chemical reactions that underpin the development of everything from smartphones to life-saving drugs.
The free tool, called Kraken , is a library of virtual, machine-learning calculated organic compounds - roughly 300,000 of them, with 190 descriptors each.
It was created through a collaboration between Aspuru-Guzik's Matter Lab , the Sigman Research Group at the University of Utah, Technische Universität Berlin, Karlsruhe Institute of Technology, Vector Institute for Artificial Intelligence, the Center for Computer Assisted Synthesis at the University of Notre Dame, IBM Research and AstraZeneca
"The world has no time for science as usual," says Aspuru-Guzik, a professor in U of T's departments of chemistry and computer science in the Faculty of Arts & Science. "Neither for science done in a silo.
"This is a collaborative effort to accelerate catalysis science that involves a very exciting team from academia and industry."
When developing a transition-metal catalyzed chemical reaction, a chemist must find a suitable combination of metal and ligand. Despite the innovations in computer-optimized ligand design led by the Sigman group, ligands would typically be identified by trial and error in the lab. With Kraken, however, chemists will eventually have a vast data-rich collection at their fingertips, reducing the number of trials necessary to achieve optimal results.
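As a sketch of how a descriptor library like this might be queried programmatically - the column names and thresholds below are invented and are not Kraken's actual schema - a simple tabular filter is often enough to shortlist candidate ligands before any lab work:

```python
# Illustrative sketch: shortlisting ligands from a descriptor table.
# Column names and thresholds are invented for demonstration and do not
# reflect Kraken's actual schema or API.
import pandas as pd

ligands = pd.DataFrame({
    "ligand_id":      ["L001", "L002", "L003", "L004"],
    "cone_angle":     [145.0, 170.0, 128.0, 162.0],  # steric descriptor (degrees)
    "donor_strength": [0.62, 0.48, 0.71, 0.55],      # electronic descriptor (arbitrary units)
})

# Keep ligands inside a steric window and above an electronic cutoff.
shortlist = ligands[
    ligands["cone_angle"].between(130, 165) & (ligands["donor_strength"] > 0.5)
]
print(shortlist["ligand_id"].tolist())   # ['L001', 'L004']
```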
"It takes a long time, a lot of money, and a whole lot of human resources to discover, develop and understand new catalysts and chemical reactions." says co-lead author and Banting Postdoctoral Fellow Gabriel dos Passos Gomes . "These are some of the tools that allow molecular scientists to precisely develop materials and drugs, from the plastics in your smartphone to the probes that allowed for humanity to achieve the COVID-19 vaccines at an unforeseen pace.
"This work shows how machine learning can change the field."
The Kraken library features organophosphorus ligands, what Tobias Gensch - one of the co-lead authors of this work - described as "some of the most prevalent ligands in homogeneous catalysis."
"We worked extremely hard to make this not only open and available to the community, but as convenient and easy to use as we possibly could," says Gomes, who worked with computer science graduate student Théophile Gaudin in the development of the web application. "With that in mind, we created a web app where users can search for ligands and their properties in a straightforward manner."
While 330,000 compounds will be available at launch, the team plans to create a much larger library of more than 190 million ligands. In comparison, similar libraries have been limited to compounds in the hundreds - with far fewer properties.
"This is very exciting as it shows the potential of AI for scientific research," says Aspuru-Guzik. "In this context, the University of Toronto has launched a global initiative called the Acceleration Consortium which hopes to bring academia, government, and industry together to tackle AI-driven materials discovery.
"It is exciting to have Professor Matthew Sigman on board with the consortium and seeing results of this collaborative work come to fruition."
In January 2022, Gomes will take on a new role as assistant professor in the departments of chemistry and chemical engineering at Carnegie Mellon University, where he aims to pioneer research on the design of catalysts and reaction discovery.
Kraken can be freely accessed online and t he preprint describing how the dataset was elaborated and how the tool can be used for reaction optimization can be accessed at ChemRxiv . | Researchers at Canada's University of Toronto (U of T) have launched a free open access tool containing a library of 330,000 virtual machine learning-calculated organic compounds to accelerate catalysis science. The Kraken tool features organophosphorus ligands, and U of T's Théophile Gaudin said the team created a Web application "where users can search for ligands and their properties in a straightforward manner." Their hope is that the library will allow chemists to reduce the number of trials needed to realize optimal results in their work. U of T's Alan Aspuru-Guzik said, "The world has no time for science as usual; neither for science done in a silo. This is a collaborative effort to accelerate catalysis science that involves a very exciting team from academia and industry." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Canada's University of Toronto (U of T) have launched a free open access tool containing a library of 330,000 virtual machine learning-calculated organic compounds to accelerate catalysis science. The Kraken tool features organophosphorus ligands, and U of T's Théophile Gaudin said the team created a Web application "where users can search for ligands and their properties in a straightforward manner." Their hope is that the library will allow chemists to reduce the number of trials needed to realize optimal results in their work. U of T's Alan Aspuru-Guzik said, "The world has no time for science as usual; neither for science done in a silo. This is a collaborative effort to accelerate catalysis science that involves a very exciting team from academia and industry."
|||
447 | Spotify Urged to Rule Out 'Invasive' Voice Recognition Tech | (Thomson Reuters Foundation) - A coalition of musicians and human rights groups urged music streaming company Spotify on Tuesday to rule out possible use of a speech recognition tool it recently developed to suggest songs - describing the technology as "creepy" and "invasive."
In January, Sweden-based Spotify patented a technology that analyses users' speech and background noise to suggest tracks based on their mood, gender, age, accent or surroundings.
The company did not immediately reply to a request for comment, pointing instead to a letter it published in April in which it said it has never implemented the tool in its products and does not plan to do so in the future.
But in an open letter, more than 180 artists and activists called on the firm to abandon the project altogether and make a public commitment to never use, license, sell, or monetise it.
"This recommendation technology is dangerous, a violation of privacy and other human rights, and should not be implemented by Spotify or any other company," the letter said.
"Any use of this technology is unacceptable."
Signatories included American guitarist Tom Morello of Rage Against the Machine, rapper Talib Kweli, Laura Jane Grace of rock band Against Me!, and advocacy groups Amnesty International and Access Now.
"You can't rock out when you're under constant corporate surveillance," Morello said in a statement.
In the patent application first filed in 2018, Spotify, which has 356 million active users, said it was common for a media streaming application to include features that provide personalized recommendations to users.
But tailoring suggestions around someone's taste usually requires them to "tediously input answers to multiple queries," it said.
The technology aimed to streamline the process for suggesting songs that fit people's mood or setting, with background noise that could be used to infer whether someone is listening to music alone, in a car or in a group.
But the letter's signatories said that raised privacy concerns as devices could take in private information and make inferences about other people in the room who might not be aware that they were being listened to.
Using artificial intelligence to recommend music could also exacerbate existing disparities in the music industry, they said.
"Claiming to be able to infer someone's taste in music based on their accent or detect their gender based on the sound of their voice is racist, transphobic, and just plain creepy," musician Evan Greer said in statement.
Voice recognition software is increasingly being used in a range of sectors from customer services to automatic translations and digital assistants.
But the technology suffers from some of the same issues as facial recognition in terms of potential discrimination, inaccuracy, and surveillance, said Daniel Leufer, Europe policy analyst at Access Now.
"When designing voice recognition systems, certain languages, dialects, and even accents are prioritised over others," Leufer told the Thomson Reuters Foundation.
"This ends up effectively either excluding people who don't speak those languages, dialects, or with those accents, or forcing them to adapt their speech to what is hardcoded into these systems as 'normal'," he said in an emailed statement. | A coalition of musicians and human rights groups has called on music streaming service Spotify to exclude the use of a recently developed speech recognition tool it developed for suggesting songs, describing the product as "invasive." The tool analyzes users' speech and background noise to suggest tracks based on mood, gender, age, accent, or surroundings. In its original patent application, Spotify said the tool was designed to streamline the tedious process of personalizing music suggestions to users' tastes; the coalition warned such devices could absorb private information and make deductions about other people in the room who might be unaware they were being surveilled. An open letter by the coalition called the technology "dangerous, a violation of privacy and other human rights," and urged Spotify to discard it altogether, and publicly vow never to "use, license, sell, or monetize it." | [] | [] | [] | scitechnews | None | None | None | None | A coalition of musicians and human rights groups has called on music streaming service Spotify to exclude the use of a recently developed speech recognition tool it developed for suggesting songs, describing the product as "invasive." The tool analyzes users' speech and background noise to suggest tracks based on mood, gender, age, accent, or surroundings. In its original patent application, Spotify said the tool was designed to streamline the tedious process of personalizing music suggestions to users' tastes; the coalition warned such devices could absorb private information and make deductions about other people in the room who might be unaware they were being surveilled. An open letter by the coalition called the technology "dangerous, a violation of privacy and other human rights," and urged Spotify to discard it altogether, and publicly vow never to "use, license, sell, or monetize it."
|||
449 | 3D-Printed 'Artificial Leaves' Could Provide Sustainable Energy on Mars | A group of international researchers led by the Delft University of Technology (TU Delft) in Netherlands used 3D printing to create a living material made of algae that could lead to sustainable energy production on Mars as well as a number of other applications, a TU Delft press release explains .
The researchers used a novel bioprinting technique to print microalgae into a living, resilient material that is capable of photosynthesis. Their research is published in the journal Advanced Functional Materials .
"We created a material that can produce energy simply by placing it into the light," Kui Yu, a Ph.D. student involved in the work, explained in the release. "The biodegradable nature of the material itself and the recyclable nature of microalgal cells make it a sustainable living material."
The researchers combined non-living bacterial cellulose and living microalgae to print a unique material with the photosynthetic capability of the microalgae and the tough resilience of the bacterial cellulose. They say the material is also eco-friendly, biodegradable, and scalable for mass production.
"The printing of living cells is an attractive technology for the fabrication of engineered living materials." Marie-Eve Aubin-Tam, an associate professor from the Faculty of Applied Sciences. "Our photosynthetic living material has the unique advantage of being sufficiently mechanically robust for applications in real-life settings."
One of the applications touted by the TU Delft team is as a sustainable source of energy on space colonies, such as the future colony planned for Mars .
The team says the material could be used to create artificial leaves that could produce sustainable energy and oxygen in environments where plants typically don't grow well, such as in space.
The leaves would store energy in chemical form as sugars, which can then be converted into fuels. Oxygen could also be collected during photosynthesis.
Their research adds to a growing list of scientific literature on solutions for growing plants in space rather than sending supplies from Earth, which would be prohibitively expensive - it costs approximately $10,000 to send a pound (453 grams) of materials into low-earth orbit.
In 2017, for example, Germany's space agency (DLR) tested growing tomatoes in recycled astronaut urine aboard the ISS. Española peppers were also chosen as the first fruit to grow in space due to their resilience, something the TU Delft team says their artificial leaves would also incorporate.
As for carrying the material to Mars or any other future space colony, you might ask? The TU Delft team says the microalgae in the artificial leaves regenerate, meaning that a small batch can, in theory, grow into a much larger quantity out in space. | A novel bioprinting technique developed by an international team led by researchers at Delft University of Technology in the Netherlands could help pave the way for sustainable energy production on Mars. The researchers used non-living bacterial cellulose and living microalgae to generate a living, resilient material capable of photosynthesis via three-dimensional (3D) printing. Researcher Kui Yu said the material "can produce energy simply by placing it into the light." The researchers said the material could be used to make artificial leaves that could produce sustainable energy and oxygen in environments not conducive to plant growth. | [] | [] | [] | scitechnews | None | None | None | None | A novel bioprinting technique developed by an international team led by researchers at Delft University of Technology in the Netherlands could help pave the way for sustainable energy production on Mars. The researchers used non-living bacterial cellulose and living microalgae to generate a living, resilient material capable of photosynthesis via three-dimensional (3D) printing. Researcher Kui Yu said the material "can produce energy simply by placing it into the light." The researchers said the material could be used to make artificial leaves that could produce sustainable energy and oxygen in environments not conducive to plant growth.
|||
450 | IBM Unveils Two-Nanometer Chip Technology for Faster Computing | May 6 (Reuters) - For decades, each generation of computer chips got faster and more power-efficient because their most basic building blocks, called transistors, got smaller.
The pace of those improvements has slowed, but International Business Machines Corp (IBM.N) on Thursday said that silicon has at least one more generational advance in store.
IBM introduced what it says is the world's first 2-nanometer chipmaking technology. The technology could be as much as 45% faster than the mainstream 7-nanometer chips in many of today's laptops and phones and up to 75% more power efficient, the company said.
The technology likely will take several years to come to market. Once a major manufacturer of chips, IBM now outsources its high-volume chip production to Samsung Electronics Co Ltd (005930.KS) but maintains a chip manufacturing research center in Albany, New York that produces test runs of chips and has joint technology development deals with Samsung and Intel Corp (INTC.O) to use IBM's chipmaking technology.
The 2-nanometer chips will be smaller and faster than today's leading-edge 5-nanometer chips, which are just now showing up in premium smartphones like Apple Inc's (AAPL.O) iPhone 12 models, and the 3-nanometer chips expected to come after 5-nanometer.
The technology IBM showed Thursday is the most basic building block of a chip: a transistor, which acts like an electrical on-off switch to form the 1s and 0s of binary digits at the foundation of all modern computing.
Making the switches very tiny makes them faster and more power efficient, but it also creates problems with electrons leaking when the switches are supposed to be off. Darío Gil, senior vice president and director of IBM Research, told Reuters in an interview that scientists were able to drape sheets of insulating material just a few nanometers thick to stop leaks.
"In the end, there's transistors, and everything else (in computing) relies on whether that transistor gets better or not. And it's not a guarantee that there will be a transistor advance generation to generation anymore. So it's a big deal every time we get a chance to say there will be another," Gil said.
Our Standards: The Thomson Reuters Trust Principles. | IBM has unveiled what it is calling the world's first 2-nanometer chipmaking technology, which will be smaller and faster than current leading-edge 5-nanometer processors, as well as the 3-nanometer chips that are expected to follow. The company said the new chips could be up to 45% faster than the mainstream 7-nanometer chips used in many modern laptops and phones, and up to 75% more power-efficient. IBM Research's Dario Gil said miniaturizing the chips' transistors boosts their speed and efficiency, while also creating problems with electron leakage when the switches are supposed to be off. Gil said the IBM scientists draped sheets of insulating material just a few nanometers thick to stop leaks. The company said the 2-nanometer chips will take several years to come to market. | [] | [] | [] | scitechnews | None | None | None | None | IBM has unveiled what it is calling the world's first 2-nanometer chipmaking technology, which will be smaller and faster than current leading-edge 5-nanometer processors, as well as the 3-nanometer chips that are expected to follow. The company said the new chips could be up to 45% faster than the mainstream 7-nanometer chips used in many modern laptops and phones, and up to 75% more power-efficient. IBM Research's Dario Gil said miniaturizing the chips' transistors boosts their speed and efficiency, while also creating problems with electron leakage when the switches are supposed to be off. Gil said the IBM scientists draped sheets of insulating material just a few nanometers thick to stop leaks. The company said the 2-nanometer chips will take several years to come to market.
|||
454 | An Uncrackable Combination of Invisible Ink, AI | "Paper Information Recording and Security Protection Using Invisible Ink and Artificial Intelligence" ACS Applied Materials & Interfaces
Coded messages in invisible ink sound like something only found in espionage books, but in real life, they can have important security purposes. Yet, they can be cracked if their encryption is predictable. Now, researchers reporting in ACS Applied Materials & Interfaces have printed complexly encoded data with normal ink and a carbon nanoparticle-based invisible ink, requiring both UV light and a computer that has been taught the code to reveal the correct messages.
Even as electronic records advance, paper is still a common way to preserve data. Invisible ink can hide classified economic, commercial or military information from prying eyes, but many popular inks contain toxic compounds or can be seen with predictable methods, such as light, heat or chemicals. Carbon nanoparticles, which have low toxicity, can be essentially invisible under ambient lighting but can create vibrant images when exposed to ultraviolet (UV) light - a modern take on invisible ink. In addition, advances in artificial intelligence (AI) models - made by networks of processing algorithms that learn how to handle complex information - can ensure that messages are only decipherable on properly trained computers. So, Weiwei Zhao, Kang Li, Jie Xu and colleagues wanted to train an AI model to identify and decrypt symbols printed in a fluorescent carbon nanoparticle ink, revealing hidden messages when exposed to UV light.
The researchers made carbon nanoparticles from citric acid and cysteine, which they diluted with water to create an invisible ink that appeared blue when exposed to UV light. The team loaded the solution into an ink cartridge and printed a series of simple symbols onto paper with an inkjet printer. Then, they taught an AI model, composed of multiple algorithms, to recognize symbols illuminated by UV light and decode them using a special codebook. Finally, they tested the AI model's ability to decode messages printed using a combination of both regular red ink and the UV fluorescent ink. With 100% accuracy, the AI model read the regular ink symbols as "STOP," but when a UV light was shone on the writing, the invisible ink illustrated the desired message "BEGIN." Because these algorithms can notice minute modifications in symbols, this approach has the potential to encrypt messages securely using hundreds of different unpredictable symbols, the researchers say.
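A minimal sketch of that decoding step - not the authors' model - looks like the following: a trained classifier labels each symbol imaged under UV light, and a private codebook maps the label sequence to the hidden message. The symbol labels and codebook here are invented.

```python
# Illustrative sketch of the decode step: a trained classifier assigns a label
# to each symbol segmented from a UV photo, and a private codebook maps labels
# to characters. The labels and codebook below are invented for demonstration.

CODEBOOK = {"sym_07": "B", "sym_12": "E", "sym_03": "G", "sym_19": "I", "sym_04": "N"}

def classify_symbols(uv_patches):
    """Stand-in for the trained model; here it just echoes precomputed labels."""
    return [patch["label"] for patch in uv_patches]

# Pretend these patches were segmented from a UV image of the printed page.
patches = [{"label": l} for l in ["sym_07", "sym_12", "sym_03", "sym_19", "sym_04"]]

message = "".join(CODEBOOK.get(label, "?") for label in classify_symbols(patches))
print(message)   # BEGIN
```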
The authors acknowledge funding from the Shenzhen Peacock Team Plan and the Bureau of Industry and Information Technology of Shenzhen through the Graphene Manufacturing Innovation Center (201901161514). | Researchers have printed complexly encoded data using a carbon nanoparticle-based ink that can be read only by an artificial intelligence (AI) model when exposed to ultraviolet (UV) light. The researchers created the 'invisible' ink, which appears blue when exposed to UV light, using carbon nanoparticles from citric acid and cysteine. They then trained an AI model to identify symbols written in the ink and illuminated by UV light, and to use a special codebook to decode them. The model, which was tested using a combination of normal red ink and UV fluorescent ink, read the messages with 100% accuracy. The researchers said the algorithms potentially could be used for secure encryption with hundreds of unpredictable symbols because they can detect minute modifications in symbols. | [] | [] | [] | scitechnews | None | None | None | None | Researchers have printed complexly encoded data using a carbon nanoparticle-based ink that can be read only by an artificial intelligence (AI) model when exposed to ultraviolet (UV) light. The researchers created the 'invisible' ink, which appears blue when exposed to UV light, using carbon nanoparticles from citric acid and cysteine. They then trained an AI model to identify symbols written in the ink and illuminated by UV light, and to use a special codebook to decode them. The model, which was tested using a combination of normal red ink and UV fluorescent ink, read the messages with 100% accuracy. The researchers said the algorithms potentially could be used for secure encryption with hundreds of unpredictable symbols because they can detect minute modifications in symbols.
"Paper Information Recording and Security Protection Using Invisible Ink and Artificial Intelligence" ACS Applied Materials & Interfaces
Coded messages in invisible ink sound like something only found in espionage books, but in real life, they can have important security purposes. Yet, they can be cracked if their encryption is predictable. Now, researchers reporting in ACS Applied Materials & Interfaces have printed complexly encoded data with normal ink and a carbon nanoparticle-based invisible ink, requiring both UV light and a computer that has been taught the code to reveal the correct messages.
Even as electronic records advance, paper is still a common way to preserve data. Invisible ink can hide classified economic, commercial or military information from prying eyes, but many popular inks contain toxic compounds or can be seen with predictable methods, such as light, heat or chemicals. Carbon nanoparticles, which have low toxicity, can be essentially invisible under ambient lighting but can create vibrant images when exposed to ultraviolet (UV) light - a modern take on invisible ink. In addition, advances in artificial intelligence (AI) models - made by networks of processing algorithms that learn how to handle complex information - can ensure that messages are only decipherable on properly trained computers. So, Weiwei Zhao, Kang Li, Jie Xu and colleagues wanted to train an AI model to identify and decrypt symbols printed in a fluorescent carbon nanoparticle ink, revealing hidden messages when exposed to UV light.
The researchers made carbon nanoparticles from citric acid and cysteine, which they diluted with water to create an invisible ink that appeared blue when exposed to UV light. The team loaded the solution into an ink cartridge and printed a series of simple symbols onto paper with an inkjet printer. Then, they taught an AI model, composed of multiple algorithms, to recognize symbols illuminated by UV light and decode them using a special codebook. Finally, they tested the AI model's ability to decode messages printed using a combination of both regular red ink and the UV fluorescent ink. With 100% accuracy, the AI model read the regular ink symbols as "STOP," but when a UV light was shone on the writing, the invisible ink revealed the intended message "BEGIN." Because these algorithms can detect minute modifications in symbols, this approach has the potential to encrypt messages securely using hundreds of different unpredictable symbols, the researchers say.
The authors acknowledge funding from the Shenzhen Peacock Team Plan and the Bureau of Industry and Information Technology of Shenzhen through the Graphene Manufacturing Innovation Center (201901161514). |
|||
455 | A Modular Building Platform for the Most Ingenious of Robots | As one would do with a Lego system, the scientists can freely combine individual components. The building blocks or voxels - which could be described as 3D pixels - are made of different materials: from basic matrix materials that hold up the construction to magnetic components enabling the control of the soft machine. "You can put the individual soft parts together in any way you wish, with no limitations on what you can achieve. In this way, each robot has an individual magnetisation profile," says Jiachen Zhang. Together with Ziyu Ren and Wenqi Hu, he is a first author of the paper entitled "Voxelated three-dimensional miniature magnetic soft machines via multimaterial heterogeneous assembly." The paper was published in Science Robotics on April 28, 2021. (Caption: A submillimeter-scale soft machine that can change its shape.) The project builds on many previous projects conducted in the Physical Intelligence Department at MPI-IS. For many years, scientists there have been working on magnetically controlled robots for wireless medical device applications at the small scale, from millimeters down to micrometers. While the state-of-the-art designs they have developed to date have attracted attention around the world, they were limited by the single material with which they were made, which constrained their functionality. (Caption: The heterogeneous assembly approach fabricates a flower-shaped soft machine with a complex stiffness distribution and reversible shape-morphing; three of its petals bloomed first at lower magnetic field strength and the other three bloomed later as the field strength increased.) "When building soft-bodied miniature robots, we have many different and often complex designs. As a result of their small size, the available fabrication capability is very limited, and this poses a major challenge. For years, researchers have been trying to develop an innovative fabrication platform that provides scientists with completely new capabilities. Our team has now succeeded in demonstrating a new way to construct much more complex soft robots with different components rather than just one. By mixing and matching, we enable tailor-made functionalities and complex robot morphologies. Our new modular building platform will pave the way for many new functional wireless robots, some of which could potentially become the minimally-invasive medical devices of the future," says Metin Sitti, who leads the Physical Intelligence Department and has pioneered many wireless medical and bio-inspired miniature robots. "We have seen 3-D printing or mold casting of voxels with only one material. That limits the functionality - a single material can only do so many things," says Ziyu Ren. "If you want more functionality like we did and a unique magnetisation profile, you have to introduce a whole set of different materials, for instance by mixing various non-magnetic and magnetic materials." "Previously, each robot's magnetisation profile was limited to certain patterns due to the strong coupling with the geometry of the robot. Now, we have created a platform that can achieve a flexible magnetisation profile. We can do so by freely integrating multiple magnetic parts together in one system," Jiachen Zhang adds. (Caption: Illustrations of shape-morphing miniature robots fabricated using the heterogeneous assembly platform.)
Voxels made of multimaterials are freely integrated together to create robots with arbitrary 3D geometries and magnetization profiles. The new building platform enables many new designs and is an important milestone in the research field of soft robotics. The Physical Intelligence Department has already developed a wide variety of robots, from a crawling and rolling caterpillar-inspired robot, a spider-like construct that can jump high, to a robotic grasshopper leg and magnetically-controlled machines that swim as gracefully as jellyfish. The new platform will accelerate the momentum and open up a world of new possibilities to construct even more state-of-the-art miniature soft-bodied machines. The scientists base each construction on two material categories. The base is mainly a polymer that holds up the matrix. But also other kinds of soft elastomers, including biocompatible materials like gelatin, are used. The second category comprises materials embedded with magnetic micro- or nanoparticles that make the robot controllable and responsive to a magnetic field. A heterogeneous assembly platform fabricates multimaterial shape-morphing miniature robots. Thousands of voxels are fabricated in one step. Like dough being distributed in a cookie tray, the scientists use tiny mold casts to create the individual blocks - each of which is no longer than around 100 micrometers. The composition then happens manually under a microscope, as automating the process of putting the particles together is still too complex. While the team integrated simulation before building a robot, they took a trial and error approach to the design until they achieved perfection. Ultimately, however, the team aims for automation; only then can they reap the economies of scale should they commercialize the robots in the future. "In our work, automated fabrication will become a high priority," Jiachen Zhang says. "As for the robot designs we do today, we rely on our intuition based on extensive experience working with different materials and soft robots." Video | Scientists at Germany's Max Planck Institute for Intelligent Systems (MPI-IS) have developed a system for fabricating soft miniature robots in a modular fashion. MPI-IS's Jiachen Zhang said, "You can put the individual soft parts together in any way you wish, with no limitations on what you can achieve. In this way, each robot has an individual magnetization profile." The process fabricates thousands of voxels (three-dimensional pixels) in one step, and the researchers use tiny mold casts to generate individual blocks no longer than about 100 micrometers. Said MPI-IS's Metin Sitti, "By mixing and matching, we enable tailor-made functionalities and complex robot morphologies." | [] | [] | [] | scitechnews | None | None | None | None | Scientists at Germany's Max Planck Institute for Intelligent Systems (MPI-IS) have developed a system for fabricating soft miniature robots in a modular fashion. MPI-IS's Jiachen Zhang said, "You can put the individual soft parts together in any way you wish, with no limitations on what you can achieve. In this way, each robot has an individual magnetization profile." The process fabricates thousands of voxels (three-dimensional pixels) in one step, and the researchers use tiny mold casts to generate individual blocks no longer than about 100 micrometers. Said MPI-IS's Metin Sitti, "By mixing and matching, we enable tailor-made functionalities and complex robot morphologies."
As one would do with a Lego system, the scientists can randomly combine individual components. The building blocks or voxels - which could be described as 3D pixels - are made of different materials: from basic matrix materials that hold up the construction to magnetic components enabling the control of the soft machine. "You can put the individual soft parts together in any way you wish, with no limitations on what you can achieve. In this way, each robot has an individual magnetisation profile," says Jiachen Zhang. Together with Ziyu Ren and Wenqi Hu he is first author of the paper entitled " Voxelated three-dimensional miniature magnetic soft machines via multimaterial heterogeneous assembly ." The paper was published in Science Robotics on April 28, 2021. A submillimeter small soft machine that can change its shape. The project is the result of many previous projects conducted in the Physical Intelligence Department at MPI-IS. For many years, scientists there have been working on magnetically controlled robots for wireless medical device applications at the small scale, from millimeters down to micrometers size. While the state-of-the-art designs they have developed to date have attracted attention around the world, they were limited by the single material with which they were made, which constrained their functionality. The heterogeneous assembly approach fabricates a flower-shaped soft machine with complex stiffness distribution and reversible shape-morphing. Three of its petals bloomed first in lower magnetic field strength and the rest three bloomed later as the field strength increased. "When building soft-bodied miniature robots, we have many different and often complex designs. As a result of their small size, the available fabrication capability is very limited, and this poses a major challenge. For years, researchers have been trying to develop an innovative fabrication platform that provides scientists with completely new capabilities. Our team has now succeeded in demonstrating a new way to construct much more complex soft robots with different components rather than just one. By mixing and matching, we enable tailor-made functionalities and complex robot morphologies. Our new modular building platform will pave the way for many new functional wireless robots, some of which could potentially become the minimally-invasive medical devices of the future," says Metin Sitti, who leads the Physical Intelligence Department and has pioneered many wireless medical and bio-inspired miniature robots. "We have seen 3-D printing or mold casting of voxels with only one material. That limits the functionality - a single material can only do so many things," says Ziyu Ren. "If you want more functionality like we did and a unique magnetisation profile, you have to introduce a whole set of different materials, for instance by mixing various non-magnetic and magnetic materials. "Previously, each robot's magnetisation profile was limited to certain patterns due to the strong coupling with the geometry of the robot. Now, we have created a platform that can achieve a flexible magnetisation profile. We can do so by freely integrating multiple magnetic parts together in one system," Jiachen Zhang adds. Illustrations of shape-morphing miniature robots fabricated using the heterogeneous assembly platform. Voxels made of multimaterials are freely integrated together to create robots with arbitrary 3D geometries and magnetization profiles. 
The new building platform enables many new designs and is an important milestone in the research field of soft robotics. The Physical Intelligence Department has already developed a wide variety of robots, from a crawling and rolling caterpillar-inspired robot, a spider-like construct that can jump high, to a robotic grasshopper leg and magnetically-controlled machines that swim as gracefully as jellyfish. The new platform will accelerate the momentum and open up a world of new possibilities to construct even more state-of-the-art miniature soft-bodied machines. The scientists base each construction on two material categories. The base is mainly a polymer that holds up the matrix. But also other kinds of soft elastomers, including biocompatible materials like gelatin, are used. The second category comprises materials embedded with magnetic micro- or nanoparticles that make the robot controllable and responsive to a magnetic field. A heterogeneous assembly platform fabricates multimaterial shape-morphing miniature robots. Thousands of voxels are fabricated in one step. Like dough being distributed in a cookie tray, the scientists use tiny mold casts to create the individual blocks - each of which is no longer than around 100 micrometers. The composition then happens manually under a microscope, as automating the process of putting the particles together is still too complex. While the team integrated simulation before building a robot, they took a trial and error approach to the design until they achieved perfection. Ultimately, however, the team aims for automation; only then can they reap the economies of scale should they commercialize the robots in the future. "In our work, automated fabrication will become a high priority," Jiachen Zhang says. "As for the robot designs we do today, we rely on our intuition based on extensive experience working with different materials and soft robots." Video |
|||
456 | Fertility Apps Collect, Share Intimate Data Without Users' Knowledge or Permission | The majority of top-rated fertility apps collect and even share intimate data without the users' knowledge or permission, a collaborative study by Newcastle University and Umea University has found.
Researchers are now calling for a tightening of the categorization of these apps by platforms to protect women from intimate and deeply personal information being exploited and sold.
For hundreds of millions of women, fertility tracking applications offer an affordable solution for trying to conceive or managing a pregnancy. But as this technology grows in popularity, experts have revealed that most of the top-rated fertility apps collect and share sensitive private information without users' consent.
Dr Maryam Mehrnezhad, of Newcastle University's School of Computing and Dr Teresa Almeida, from the Department of Informatics, Umeå University, Sweden, explored the privacy risks that can originate from the mismanagement, misuse, and misappropriation of intimate data, which are entwined in individual life events and in public health issues such as abortion, infertility, and pregnancy.
Dr Mehrnezhad and Dr Almeida analysed the privacy notices and tracking practices of 30 apps, available at no cost and dedicated to fertility tracking. The apps were selected from the top search results in the Google Play Store and let a user regularly input personal and intimate information, including temperature, mood, sexual activity, orgasm and medical records.
Once the apps were downloaded, the researchers analysed GDPR requirements, privacy notices and tracking practices. They found that the majority of these apps do not comply with the GDPR in terms of their privacy notices and tracking practices. The study also shows that these apps activate 3.8 trackers on average right after they are installed and opened by the user, even if the user does not engage with the privacy notice.
Presenting their findings at the CHI 2021 Conference, taking place on May 8-13, Dr Mehrnezhad and Dr Almeida warn that the approach of these apps to user privacy has implications for reproductive health and rights.
Dr Almeida added: "[That] data are kept in such a vulnerable condition - one in which a default setting allows not only for monetizing data but also to sustain systems of interpersonal violence or harm, such as in cases of pregnancy loss or abortion - demands a more careful approach to how technology is designed and developed.
"While digital health technologies help people better manage their reproductive lives, risks increase when data given voluntarily are not justly protected and data subjects see their reproductive rights challenged to the point of e.g. personal safety."
The study shows that the majority of these fertility apps are classified as 'Health & Fitness', a few as 'Medical', and one as 'Communication'. The authors argue that miscategorising an insecure app which contains medical records as 'Health & Fitness' enables its developers to avoid potential consequences - for example, by letting the app remain in the app market without drawing significant attention to it. This means that fertility app data could continue to be sold to third parties for a variety of unauthorised uses, such as advertising and app development.
The team is currently looking into the security, privacy, bias and trust in IoT devices in Femtech. In light of their research, these researchers are calling for more adequate, lawful, and ethical processes when dealing with this data to ensure women get protection for the intimate information that is being collected by such technologies. They advise to seek to improve the understanding of how marginalised user groups can help to shape the design and use of cybersecurity and privacy of such technologies. | A study by researchers at Newcastle University in the U.K. and Sweden's Umea University found that many top-rated fertility apps collect and share personal information without the knowledge or permission of users. The researchers studied the privacy notices and tracking practices of 30 free fertility apps chosen from the top search results in the Google Play Store. They determined that the privacy notices and tracking practices of the majority of these apps do not comply with the EU's General Data Protection Regulation. The researchers also found that regardless of whether the user engages with the apps' privacy notices, an average of 3.8 trackers were activated as soon the apps were installed and opened. The researchers believe more adequate lawful and ethical processes are needed to handle such data. | [] | [] | [] | scitechnews | None | None | None | None | A study by researchers at Newcastle University in the U.K. and Sweden's Umea University found that many top-rated fertility apps collect and share personal information without the knowledge or permission of users. The researchers studied the privacy notices and tracking practices of 30 free fertility apps chosen from the top search results in the Google Play Store. They determined that the privacy notices and tracking practices of the majority of these apps do not comply with the EU's General Data Protection Regulation. The researchers also found that regardless of whether the user engages with the apps' privacy notices, an average of 3.8 trackers were activated as soon the apps were installed and opened. The researchers believe more adequate lawful and ethical processes are needed to handle such data.
The majority of top-rated fertility apps collect and even share intimate data without the users' knowledge or permission, a collaborative study by Newcastle University and Umea University has found.
Researchers are now calling for a tightening of the categorization of these apps by platforms to protect women from intimate and deeply personal information being exploited and sold.
For hundreds of millions of women, fertility tracking applications offer an affordable solution for trying to conceive or managing a pregnancy. But as this technology grows in popularity, experts have revealed that most of the top-rated fertility apps collect and share sensitive private information without users' consent.
Dr Maryam Mehrnezhad, of Newcastle University's School of Computing and Dr Teresa Almeida, from the Department of Informatics, Umeå University, Sweden, explored the privacy risks that can originate from the mismanagement, misuse, and misappropriation of intimate data, which are entwined in individual life events and in public health issues such as abortion, infertility, and pregnancy.
Dr Mehrnezhad and Dr Almeida analysed the privacy notices and tracking practices of 30 apps, available at no cost and dedicated to fertility tracking. The apps were selected from the top search results in the Google Play Store and let a user regularly input personal and intimate information, including temperature, mood, sexual activity, orgasm and medical records.
Once the apps were downloaded, the researchers analysed GDPR requirements, privacy notices and tracking practices. They found that the majority of these apps do not comply with the GDPR in terms of their privacy notices and tracking practices. The study also shows that these apps activate 3.8 trackers on average right after they are installed and opened by the user, even if the user does not engage with the privacy notice.
Presenting their findings at the CHI 2021 Conference, taking place on May 8-13, Dr Mehrnezhad and Dr Almeida warn that the approach of these apps to user privacy has implications for reproductive health and rights.
Dr Almeida added: "[That] data are kept in such a vulnerable condition - one in which a default setting allows not only for monetizing data but also to sustain systems of interpersonal violence or harm, such as in cases of pregnancy loss or abortion - demands a more careful approach to how technology is designed and developed.
"While digital health technologies help people better manage their reproductive lives, risks increase when data given voluntarily are not justly protected and data subjects see their reproductive rights challenged to the point of e.g. personal safety."
The study shows that the majority of these fertility apps are classified as 'Health & Fitness', a few as 'Medical', and one as 'Communication'. The authors argue that miscategorising an insecure app which contains medical records as 'Health & Fitness' enables its developers to avoid potential consequences - for example, by letting the app remain in the app market without drawing significant attention to it. This means that fertility app data could continue to be sold to third parties for a variety of unauthorised uses, such as advertising and app development.
The team is currently looking into the security, privacy, bias and trust in IoT devices in Femtech. In light of their research, these researchers are calling for more adequate, lawful, and ethical processes when dealing with this data to ensure women get protection for the intimate information that is being collected by such technologies. They advise to seek to improve the understanding of how marginalised user groups can help to shape the design and use of cybersecurity and privacy of such technologies. |
|||
457 | Millions of Older Broadband Routers Have Security Flaws, Warn Researchers | Millions of households in the UK are using old broadband routers that could fall prey to hackers, according to a new investigation carried out by consumer watchdog Which? in collaboration with security researchers.
After surveying more than 6,000 adults, Which? identified 13 older routers that are still commonly used by consumers across the country, and sent them to security specialists from technology consultancy Red Maple Technologies. Nine of the devices, it was found, did not meet modern security standards.
Up to 7.5 million users in the UK could potentially be affected, estimated Which?, as vulnerable routers present an opportunity for malicious actors to spy on people as they browse, or to direct them to spam websites.
One major issue concerns the lack of updates that older routers receive. Some of the models that respondents reported using haven't been updated since 2018, and in some cases not since 2016.
The devices highlighted for their lack of updates included Sky's SR101 and SR102, the Virgin Media Super Hub and Super Hub 2, and TalkTalk's HG523a, HG635, and HG533.
Most of the providers, when they were contacted by Which?, said that they regularly monitor the devices for threats and update them if needed.
Virgin dismissed the research, saying that 90% of its customers are using later-generation routers. TalkTalk told ZDNet that it had nothing to add to the release.
The researchers also found a local network vulnerability with EE's Brightbox 2, which could let a hacker take full control of the device.
An EE spokesperson told ZDNet: "We take the security of our products and services very seriously. As detailed in the report, this is a very low risk vulnerability for the small number of our customers who still use the EE Brightbox 2. (...) We would like to reassure EE Brightbox 2 customers that we are working on a service patch which we will be pushing out to affected devices in an upcoming background update."
In addition, BT Group - which owns EE - told Which? that older routers still receive security patches if problems are found. Red Maple's researchers found that old devices from BT have been recently updated, and so did routers from Plusnet.
The consumer watchdog advised that consumers who are still using one of the router models that are no longer being updated ask their providers for a new device as soon as possible.
This, however, is by no means a given: while Virgin Media says that it gives free upgrades for customers with older routers, the policy is not always as clear with other providers.
"It doesn't hurt to ask," said Hollie Hennessy, senior researcher at Which?. "While an internet provider is not obliged to provide you with a new router for free, if you call and explain your concerns you might get lucky, especially if your router is quite old."
For consumers whose contracts are expiring soon, Hennessy suggested asking for a new router as a condition to stick with a given provider - and consider switching if the request is not met.
On top of being denied regular updates, many older routers were also found to come with weak default passwords, which can be easily guessed by hackers and grant an outsider access.
This was the case of the same TalkTalk and Sky routers, as well as the Virgin Media Super Hub 2 and the Vodafone HHG2500.
The first thing to do, for consumers who own one of these models, is to change the password to a stronger one, as opposed to the default password provided, said Which?.
The organization, in fact, is calling for the government to ban default passwords and prevent manufacturers from allowing consumers to set weak passwords as part of new legislation proposed last month.
As part of an effort to make devices "secure by design," the UK's department for Digital, Culture, Media and Sport has announced a new law that will stop manufacturers from using default passwords such as "password" or "admin," to better protect consumers from cyberattacks.
The future law would also make it mandatory to tell customers how long their new product will receive security updates for. In addition, manufacturers would have to provide a public point of contact to make it easier to report security vulnerabilities in the products.
In a similar vein, Which? called for more transparency from internet service providers. The organization said that providers should be more upfront about how long routers will be receiving firmware and security updates, and should actively upgrade customers who are at risk.
Only Sky, Virgin Media and Vodafone appear to have a web page dedicated to letting researchers submit the vulnerabilities that they found in the companies' products, according to Which?. | Millions of U.K. households use old broadband routers that hackers could exploit, according to a probe conducted by consumer watchdog Which? and security researchers at consultancy Red Maple Technologies. Which? polled over 6,000 adults and flagged 13 older routers still commonly used by consumers across Britain; Red Maple analysts determined nine of the 13 devices did not meet modern security standards. Which? calculated that up to 7.5 million U.K. users could potentially be affected, as vulnerable routers present an opportunity for hackers to spy on people as they browse, or to steer them to spam websites. The researchers also highlighted weak default passwords as a vulnerability in older routers. | [] | [] | [] | scitechnews | None | None | None | None | Millions of U.K. households use old broadband routers that hackers could exploit, according to a probe conducted by consumer watchdog Which? and security researchers at consultancy Red Maple Technologies. Which? polled over 6,000 adults and flagged 13 older routers still commonly used by consumers across Britain; Red Maple analysts determined nine of the 13 devices did not meet modern security standards. Which? calculated that up to 7.5 million U.K. users could potentially be affected, as vulnerable routers present an opportunity for hackers to spy on people as they browse, or to steer them to spam websites. The researchers also highlighted weak default passwords as a vulnerability in older routers.
Millions of households in the UK are using old broadband routers that could fall prey to hackers, according to a new investigation carried out by consumer watchdog Which? in collaboration with security researchers.
After surveying more than 6,000 adults, Which? identified 13 older routers that are still commonly used by consumers across the country, and sent them to security specialists from technology consultancy Red Maple Technologies. Nine of the devices, it was found, did not meet modern security standards.
Up to 7.5 million users in the UK could potentially be affected, estimated Which?, as vulnerable routers present an opportunity for malicious actors to spy on people as they browse, or to direct them to spam websites.
One major issue concerns the lack of updates that older routers receive. Some of the models that respondents reported using haven't been updated since 2018, and in some cases not since 2016.
The devices highlighted for their lack of updates included Sky's SR101 and SR102, the Virgin Media Super Hub and Super Hub 2, and TalkTalk's HG523a, HG635, and HG533.
Most of the providers, when they were contacted by Which?, said that they regularly monitor the devices for threats and update them if needed.
Virgin dismissed the research, saying that 90% of its customers are using later-generation routers. TalkTalk told ZDNet that it had nothing to add to the release.
The researchers also found a local network vulnerability with EE's Brightbox 2, which could let a hacker take full control of the device.
An EE spokesperson told ZDNet: "We take the security of our products and services very seriously. As detailed in the report, this is a very low risk vulnerability for the small number of our customers who still use the EE Brightbox 2. (...) We would like to reassure EE Brightbox 2 customers that we are working on a service patch which we will be pushing out to affected devices in an upcoming background update."
In addition, BT Group - which owns EE - told Which? that older routers still receive security patches if problems are found. Red Maple's researchers found that old devices from BT have been recently updated, and so did routers from Plusnet.
The consumer watchdog advised that consumers who are still using one of the router models that are no longer being updated ask their providers for a new device as soon as possible.
This, however, is by no means a given: while Virgin Media says that it gives free upgrades for customers with older routers, the policy is not always as clear with other providers.
"It doesn't hurt to ask," said Hollie Hennessy, senior researcher at Which?. "While an internet provider is not obliged to provide you with a new router for free, if you call and explain your concerns you might get lucky, especially if your router is quite old."
For consumers whose contracts are expiring soon, Hennessy suggested asking for a new router as a condition to stick with a given provider - and consider switching if the request is not met.
On top of being denied regular updates, many older routers were also found to come with weak default passwords, which can be easily guessed by hackers and grant an outsider access.
This was the case of the same TalkTalk and Sky routers, as well as the Virgin Media Super Hub 2 and the Vodafone HHG2500.
The first thing to do, for consumers who own one of these models, is to change the password to a stronger one, as opposed to the default password provided, said Which?.
The organization, in fact, is calling for the government to ban default passwords and prevent manufacturers from allowing consumers to set weak passwords as part of new legislation proposed last month.
As part of an effort to make devices "secure by design," the UK's department for Digital, Culture, Media and Sport has announced a new law that will stop manufacturers from using default passwords such as "password" or "admin," to better protect consumers from cyberattacks.
The future law would also make it mandatory to tell customers how long their new product will receive security updates for. In addition, manufacturers would have to provide a public point of contact to make it easier to report security vulnerabilities in the products.
In a similar vein, Which? called for more transparency from internet service providers. The organization said that providers should be more upfront about how long routers will be receiving firmware and security updates, and should actively upgrade customers who are at risk.
Only Sky, Virgin Media and Vodafone appear to have a web page dedicated to letting researchers submit the vulnerabilities that they found in the companies' products, according to Which?. |
|||
459 | T-GPS Processes a Graph with a Trillion Edges on a Single Computer | Trillion-scale graph processing simulation on a single computer presents a new concept of graph processing. A KAIST research team has developed a new technology that makes it possible to run a large-scale graph algorithm without storing the graph in main memory or on disks. Named T-GPS (Trillion-scale Graph Processing Simulation) by its developer, Professor Min-Soo Kim from the School of Computing at KAIST, it can process a graph with one trillion edges using a single computer. Graphs are widely used to represent and analyze real-world objects in many domains such as social networks, business intelligence, biology, and neuroscience. As the number of graph applications increases rapidly, developing and testing new graph algorithms is becoming more important than ever before. Nowadays, many industrial applications require a graph algorithm to process a large-scale graph (e.g., one trillion edges). So, when developing and testing graph algorithms for such a large-scale graph, a synthetic graph is usually used instead of a real graph, because sharing and utilizing large-scale real graphs is very limited: they are often proprietary or practically impossible to collect. Conventionally, developing and testing graph algorithms is done via the following two-step approach: generating and storing a graph, and executing an algorithm on the graph using a graph processing engine. The first step generates a synthetic graph and stores it on disks. The synthetic graph is usually generated by either parameter-based generation methods or graph upscaling methods. The former extracts a small number of parameters that can capture some properties of a given real graph and generates the synthetic graph with the parameters. The latter upscales a given real graph to a larger one so as to preserve the properties of the original real graph as much as possible. The second step loads the stored graph into the main memory of a graph processing engine such as Apache GraphX and executes a given graph algorithm on the engine. Since the size of the graph is too large to fit in the main memory of a single computer, the graph engine typically runs on a cluster of several tens or hundreds of computers. Therefore, the cost of the conventional two-step approach is very high. The research team solved this problem: T-GPS does not generate and store a large-scale synthetic graph. Instead, it just loads the initial small real graph into main memory. Then, T-GPS processes a graph algorithm on the small real graph as if the large-scale synthetic graph that would be generated from the real graph existed in main memory. After the algorithm is done, T-GPS returns exactly the same result as the conventional two-step approach. The key idea of T-GPS is to generate, on the fly, only the part of the synthetic graph that the algorithm needs to access, and to modify the graph processing engine to treat the part generated on the fly as if it were part of an actually generated synthetic graph. The research team showed that T-GPS can process a graph of 1 trillion edges using a single computer, while the conventional two-step approach can only process a graph of 1 billion edges using a cluster of eleven computers of the same specification. Thus, T-GPS outperforms the conventional approach by 10,000 times in terms of computing resources.
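The key idea above, answering the algorithm's neighbor requests from a virtual upscaled graph instead of a stored one, can be illustrated with a toy sketch. The code below is not T-GPS and does not follow the paper's upscaling method; the tiny real graph, the copy-based layout and the hash-derived cross edges are invented purely to show that a standard algorithm (here, BFS) can run against edges that are generated on demand.

```python
# Toy illustration, not the T-GPS implementation: neighbor lists of a large "virtual"
# synthetic graph are generated on demand from a small real graph, so the algorithm
# never needs the full synthetic edge list in memory or on disk.
import hashlib
from collections import deque

REAL = {0: [1, 2], 1: [2], 2: [0]}      # tiny "real" graph: node -> out-neighbors
N, K = len(REAL), 4                     # upscale into K virtual copies (K*N virtual nodes)

def virtual_neighbors(v):
    """Return neighbors of virtual node v, computed on the fly instead of stored."""
    copy_id, u = divmod(v, N)
    nbrs = [copy_id * N + w for w in REAL[u]]        # edges inherited from the real graph
    digest = int(hashlib.sha256(str(v).encode()).hexdigest(), 16)
    other = (copy_id + 1 + digest % (K - 1)) % K     # one deterministic cross-copy edge
    return nbrs + [other * N + u]

def bfs_reach(src):
    """Plain BFS; it only ever materializes the parts of the virtual graph it touches."""
    seen, queue = {src}, deque([src])
    while queue:
        v = queue.popleft()
        for w in virtual_neighbors(v):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen)

print(bfs_reach(0))   # nodes reached out of the K*N = 12 virtual nodes, edges never stored
```

In T-GPS itself, the engine is modified so that existing graph algorithms see such generated parts as if they belonged to a fully materialized synthetic graph, which is what keeps the results identical to the conventional two-step approach.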
The team also showed that the speed of processing an algorithm in T-GPS is up to 43 times faster than the conventional approach. This is because T-GPS has no network communication overhead, while the conventional approach has a lot of communication overhead among computers. Professor Kim believes that this work will have a large impact on the IT industry where almost every area utilizes graph data, adding, "T-GPS can significantly increase both the scale and efficiency of developing a new graph algorithm." This work was supported by the National Research Foundation (NRF) of Korea and the Institute of Information & Communications Technology Planning & Evaluation (IITP). Publication: Park, H., et al. (2021) "Trillion-scale Graph Processing Simulation based on Top-Down Graph Upscaling," presented at IEEE ICDE 2021 (April 19-22, 2021, Chania, Greece). Profile: Min-Soo Kim, Associate Professor, minsoo.k@kaist.ac.kr, http://infolab.kaist.ac.kr, School of Computing, KAIST | A new technology developed by researchers at South Korea's Korea Advanced Institute of Science and Technology (KAIST) requires just a single computer to process a graph with 1 trillion edges. Traditionally, developing and testing graph algorithms involves generating a synthetic graph and storing it on disks, then loading the stored graph into the main memory of a graph processing engine and executing the graph algorithm. With T-GPS (Trillion-scale Graph Processing Simulation), the initial small real graph is loaded into main memory, and the graph algorithm is processed on the small real graph, generating the same result as the conventional approach. The researchers found that T-GPS outperforms the conventional approach by 10,000 times in terms of computing resources, and is up to 43 times faster due to the lack of network communication. | [] | [] | [] | scitechnews | None | None | None | None | A new technology developed by researchers at South Korea's Korea Advanced Institute of Science and Technology (KAIST) requires just a single computer to process a graph with 1 trillion edges. Traditionally, developing and testing graph algorithms involves generating a synthetic graph and storing it on disks, then loading the stored graph into the main memory of a graph processing engine and executing the graph algorithm. With T-GPS (Trillion-scale Graph Processing Simulation), the initial small real graph is loaded into main memory, and the graph algorithm is processed on the small real graph, generating the same result as the conventional approach. The researchers found that T-GPS outperforms the conventional approach by 10,000 times in terms of computing resources, and is up to 43 times faster due to the lack of network communication.
Trillion-scale graph processing simulation on a single computer presents a new concept of graph processing A KAIST research team has developed a new technology that enables to process a large-scale graph algorithm without storing the graph in the main memory or on disks. Named as T-GPS (Trillion-scale Graph Processing Simulation) by the developer Professor Min-Soo Kim from the School of Computing at KAIST, it can process a graph with one trillion edges using a single computer. Graphs are widely used to represent and analyze real-world objects in many domains such as social networks, business intelligence, biology, and neuroscience. As the number of graph applications increases rapidly, developing and testing new graph algorithms is becoming more important than ever before. Nowadays, many industrial applications require a graph algorithm to process a large-scale graph (e.g., one trillion edges). So, when developing and testing graph algorithms such for a large-scale graph, a synthetic graph is usually used instead of a real graph. This is because sharing and utilizing large-scale real graphs is very limited due to their being proprietary or being practically impossible to collect. Conventionally, developing and testing graph algorithms is done via the following two-step approach: generating and storing a graph and executing an algorithm on the graph using a graph processing engine. The first step generates a synthetic graph and stores it on disks. The synthetic graph is usually generated by either parameter-based generation methods or graph upscaling methods. The former extracts a small number of parameters that can capture some properties of a given real graph and generates the synthetic graph with the parameters. The latter upscales a given real graph to a larger one so as to preserve the properties of the original real graph as much as possible. The second step loads the stored graph into the main memory of the graph processing engine such as Apache GraphX and executes a given graph algorithm on the engine. Since the size of the graph is too large to fit in the main memory of a single computer, the graph engine typically runs on a cluster of several tens or hundreds of computers. Therefore, the cost of the conventional two-step approach is very high. The research team solved the problem of the conventional two-step approach. It does not generate and store a large-scale synthetic graph. Instead, it just loads the initial small real graph into main memory. Then, T-GPS processes a graph algorithm on the small real graph as if the large-scale synthetic graph that should be generated from the real graph exists in main memory. After the algorithm is done, T-GPS returns the exactly same result as the conventional two-step approach. The key idea of T-GPS is generating only the part of the synthetic graph that the algorithm needs to access on the fly and modifying the graph processing engine to recognize the part generated on the fly as the part of the synthetic graph actually generated. The research team showed that T-GPS can process a graph of 1 trillion edges using a single computer, while the conventional two-step approach can only process of a graph of 1 billion edges using a cluster of eleven computers of the same specification. Thus, T-GPS outperforms the conventional approach by 10,000 times in terms of computing resources. The team also showed that the speed of processing an algorithm in T-GPS is up to 43 times faster than the conventional approach. 
This is because T-GPS has no network communication overhead, while the conventional approach has a lot of communication overhead among computers. Professor Kim believes that this work will have a large impact on the IT industry where almost every area utilizes graph data, adding, "T-GPS can significantly increase both the scale and efficiency of developing a new graph algorithm." This work was supported by the National Research Foundation (NRF) of Korea and Institute of Information & communications Technology Planning & Evaluation (IITP). Publication: Park, H., et al. (2021) "Trillion-scale Graph Processing Simulation based on Top-Down Graph Upscaling," Presented at the I E E E I C D E 2021 (April 19-22, 2021, Chania, Greece) Profile: Min-Soo Kim Associate Professor minsoo.k@kaist.ac.kr http://infolab.kaist.ac.kr School of Computing KAIST |
|||
461 | Ancient Australian 'Superhighways' Suggested by Massive Supercomputing Study | A multi-institutional team of researchers used supercomputers to plot the most likely migration routes of ancient humans across Australia. The team built the first detailed topographic map of the ancient Sahul landmass from satellite, aerial, and undersea mapping data, then calculated the optimal walking routes across this landscape via least-cost path analysis. Devin White at the U.S. Department of Energy's Sandia National Laboratories said a supercomputer operated by the U.S. government ran the simulations over weeks, which yielded a network of "optimal superhighways" featuring the most appealing combinations of easy walking, water, and landmarks. The University of Montana, Missoula's Kyle Bocinsky said, "This is a really compelling illustration of the power of using these [simulation] techniques, at a huge, continental scale, to understand how people navigate landscapes. It's impressive, extreme computing." | [] | [] | [] | scitechnews | None | None | None | None | A multi-institutional team of researchers used supercomputers to plot the most likely migration routes of ancient humans across Australia. The team built the first detailed topographic map of the ancient Sahul landmass from satellite, aerial, and undersea mapping data, then calculated the optimal walking routes across this landscape via least-cost path analysis. Devin White at the U.S. Department of Energy's Sandia National Laboratories said a supercomputer operated by the U.S. government ran the simulations over weeks, which yielded a network of "optimal superhighways" featuring the most appealing combinations of easy walking, water, and landmarks. The University of Montana, Missoula's Kyle Bocinsky said, "This is a really compelling illustration of the power of using these [simulation] techniques, at a huge, continental scale, to understand how people navigate landscapes. It's impressive, extreme computing."
|
||||
462 | Speeding New COVID Treatments with Computational Tool | "Becoming aware of this, I was like, 'Wait a minute, there's enough data here for us to build solid machine learning models,'" Oprea says. The results from NCATS laboratory assays gauged each molecule's ability to inhibit viral entry, infectivity and reproduction, such as the cytopathic effect - the ability to protect a cell from being killed by the virus.
Biomedicine researchers often tend to focus on the positive findings from their studies, but in this case, the NCATS scientists also reported which molecules had no virus-fighting effects. The inclusion of negative data actually enhances the accuracy of machine learning, Oprea says.
"The idea was that we identify molecules that fit the perfect profile," he says. "You want to find molecules that do all these things and don't do the things that we don't want them to do."
The coronavirus is a wily adversary, Oprea says. "I don't think there is a drug that will fit everything to a T." Instead, researchers will likely devise a multi-drug cocktail that attacks the virus on multiple fronts. "It goes back to the one-two punch," he says.
REDIAL-2020 is based on machine learning algorithms capable of rapidly processing huge amounts of data and teasing out hidden patterns that might not be perceivable by a human researcher. Oprea's team validated the machine learning predictions based on the NCATS data by comparing them against the known effects of approved drugs in UNM's DrugCentral database.
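As a rough sketch of this kind of screening workflow, and explicitly not the REDIAL-2020 code, descriptors or data, the snippet below trains a classifier on assay outcomes that include both active and inactive molecules and then ranks unscreened candidates by predicted activity; the fingerprint vectors and labels are random placeholders.

```python
# Hedged sketch of an assay-outcome classifier; not the REDIAL-2020 models or descriptors.
# The "fingerprints" and activity labels below are random placeholders for real data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n_train, n_bits = 200, 128
X_train = rng.integers(0, 2, size=(n_train, n_bits))   # placeholder molecular fingerprints
y_train = rng.integers(0, 2, size=n_train)              # 1 = active, 0 = inactive (negatives kept)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

X_candidates = rng.integers(0, 2, size=(10, n_bits))    # unscreened candidate molecules
scores = model.predict_proba(X_candidates)[:, 1]         # predicted probability of activity
ranking = np.argsort(scores)[::-1]                       # most promising candidates first
for idx in ranking:
    print(f"candidate {idx}: predicted activity {scores[idx]:.3f}")
```

Including the inactive (negative) results, as the NCATS screens did, is what gives such a classifier something to contrast the actives against, which is the point made earlier in the article.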
In principle, this computational workflow is flexible and could be trained to evaluate compounds against other pathogens, as well as evaluate chemicals that have not yet been approved for human use, Oprea says.
"Our main intent remains drug repurposing, but we're actually focusing on any small molecule," he says. "It doesn't have to be an approved drug. Anyone who tests their molecule could come up with something important." | Scientists at the University of New Mexico (UNM) and the University of Texas at El Paso have developed a computational tool to help drug researchers quickly identify anti-COVID molecules before the virus invades human cells or disable it in the early stages of infection. The team unveiled REDIAL-2020, an open source suite of computational models that can help to rapidly screen small molecules for potential COVID-fighting traits. REDIAL-2020 is based on machine learning (ML) algorithms that quickly process massive volumes of data and tease out patterns that might be missed by human researchers. The team validated the ML forecasts by comparing datasets from the National Center for Advancing Translational Sciences to the known effects of approved drugs in UNM's DrugCentral database. | [] | [] | [] | scitechnews | None | None | None | None | Scientists at the University of New Mexico (UNM) and the University of Texas at El Paso have developed a computational tool to help drug researchers quickly identify anti-COVID molecules before the virus invades human cells or disable it in the early stages of infection. The team unveiled REDIAL-2020, an open source suite of computational models that can help to rapidly screen small molecules for potential COVID-fighting traits. REDIAL-2020 is based on machine learning (ML) algorithms that quickly process massive volumes of data and tease out patterns that might be missed by human researchers. The team validated the ML forecasts by comparing datasets from the National Center for Advancing Translational Sciences to the known effects of approved drugs in UNM's DrugCentral database.
"Becoming aware of this, I was like, 'Wait a minute, there's enough data here for us to build solid machine learning models,'" Oprea says. The results from NCATS laboratory assays gauged each molecule's ability to inhibit viral entry, infectivity and reproduction, such as the cytopathic effect - the ability to protect a cell from being killed by the virus.
Biomedicine researchers often tend to focus on the positive findings from their studies, but in this case, the NCATS scientists also reported which molecules had no virus-fighting effects. The inclusion of negative data actually enhances the accuracy of machine learning, Oprea says.
"The idea was that we identify molecules that fit the perfect profile," he says. "You want to find molecules that do all these things and don't do the things that we don't want them to do."
The coronavirus is a wily adversary, Oprea says. "I don't think there is a drug that will fit everything to a T." Instead, researchers will likely devise a multi-drug cocktail that attacks the virus on multiple fronts. "It goes back to the one-two punch," he says.
REDIAL-2020 is based on machine learning algorithms capable of rapidly processing huge amounts of data and teasing out hidden patterns that might not be perceivable by a human researcher. Oprea's team validated the machine learning predictions based on the NCATS data by comparing them against the known effects of approved drugs in UNM's DrugCentral database.
In principle, this computational workflow is flexible and could be trained to evaluate compounds against other pathogens, as well as evaluate chemicals that have not yet been approved for human use, Oprea says.
"Our main intent remains drug repurposing, but we're actually focusing on any small molecule," he says. "It doesn't have to be an approved drug. Anyone who tests their molecule could come up with something important." |
|||
465 | Patch Issued to Tackle Critical Security Issues Present in Dell Driver Software Since 2009 | Five serious vulnerabilities in a driver used by Dell devices have been disclosed by researchers.
On Tuesday, SentinelLabs said the vulnerabilities were discovered by security researcher Kasif Dekel, who explored Dell's DBUtil BIOS driver -- software used in the vendor's desktop and laptop PCs, notebooks, and tablet products.
The team says that the driver has been vulnerable since 2009, although there is no evidence, at present, that the bugs have been exploited in the wild.
The DBUtil BIOS driver comes on many Dell machines running Windows and contains a component -- the dbutil_2_3.sys module -- which is installed and loaded on-demand by initiating the firmware update process and then unloaded after a system reboot -- and this module was subject to Dekel's scrutiny.
Dell has assigned one CVE ( CVE-2021-21551 ), CVSS 8.8, to cover the five vulnerabilities disclosed by SentinelLabs.
Two are memory corruption issues in the driver, two are security failures caused by a lack of input validation, and one logic issue was found that could be exploited to trigger denial-of-service.
"These multiple critical vulnerabilities in Dell software could allow attackers to escalate privileges from a non-administrator user to kernel mode privileges," the researchers say.
The team notes that the most crucial issue in the driver is that access-control list (ACL) requirements, which set permissions, are not invoked during Input/Output Control (IOCTL) requests.
As drivers often operate with high levels of privilege, this means requests can be sent locally by non-privileged users.
"[This] can be invoked by a non-privileged user," the researchers say. "Allowing any process to communicate with your driver is often a bad practice since drivers operate with the highest of privileges; thus, some IOCTL functions can be abused "by design."
Functions in the driver were also exposed, creating read/write vulnerabilities usable to overwrite tokens and escalate privileges.
Another interesting bug was the possibility to use arbitrary operands to run IN/OUT (I/O) instructions in kernel mode.
"Since IOPL (I/O privilege level) equals to CPL (current privilege level), it is obviously possible to interact with peripheral devices such as the HDD and GPU to either read/write directly to the disk or invoke DMA operations," the team noted. "For example, we could communicate with ATA port IO for directly writing to the disk, then overwrite a binary that is loaded by a privileged process."
SentinelLabs commented:
Proof-of-Concept (PoC) code is being withheld until June to allow users time to patch.
Dell was made aware of Dekel's findings on December 1, 2020. Following triage and issues surrounding some fixes for end-of-life products, Dell worked with Microsoft and has now issued a fixed driver for Windows machines.
The PC giant has issued an advisory (DSA-2021-088) and a FAQ document containing remediation steps to patch the bugs. Dell has described the security flaw as "a driver (dbutil_2_3.sys) packaged with Dell Client firmware update utility packages and software tools [which] contains an insufficient access control vulnerability which may lead to escalation of privileges, denial of service, or information disclosure."
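For administrators who want a quick local check to complement those remediation steps, the hedged sketch below looks for leftover copies of the module in a couple of commonly reported drop locations; the paths are assumptions rather than an exhaustive list, and Dell's advisory and FAQ remain the authoritative guidance.

```python
# Hedged helper: search for leftover copies of the vulnerable dbutil_2_3.sys module.
# The roots below are assumed, commonly reported drop locations, not an exhaustive list;
# follow Dell's DSA-2021-088 advisory and FAQ for the official remediation steps.
import os

SEARCH_ROOTS = [
    os.environ.get("TEMP", r"C:\Windows\Temp"),   # per-user temp directory, if set
    r"C:\Windows\Temp",
]

def find_dbutil(roots=SEARCH_ROOTS, name="dbutil_2_3.sys"):
    hits = []
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            hits.extend(os.path.join(dirpath, f)
                        for f in filenames if f.lower() == name)
    return hits

if __name__ == "__main__":
    found = find_dbutil()
    print("\n".join(found) if found else "No dbutil_2_3.sys found under the searched paths.")
```

Finding a copy does not by itself confirm exploitation; it simply indicates that the old firmware-update module is still present and that Dell's recommended cleanup and driver update should be applied.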
"Local authenticated user access is first required before this vulnerability can be exploited," Dell added.
"We remediated a vulnerability (CVE-2021-21551) in a driver (dbutil_2_3.sys) affecting certain Windows-based Dell computers," a Dell spokesperson said. "We have seen no evidence this vulnerability has been exploited by malicious actors to date. We appreciate the researchers working directly with us to resolve the issue."
Update 18.35 BST: Inclusion and improved clarity of the module's loading process.
Computer vendor Dell has issued a patch to remedy five longstanding vulnerabilities in driver software discovered by a team at threat intelligence solutions provider SentinelLabs. Security researcher Kasif Dekel found the flaws by exploring the DBUtil BIOS driver found in Dell's desktop and laptop PCs, notebooks, and tablets. The focus of his investigation was the software's dbutil_2_3.sys module, which is installed and loaded on-demand by initiating the firmware update process, then unloaded after a system reboot. Two of the flaws identified were memory corruption issues in the driver, another two were security failures rooted in a lack of input validation, and the final issue found could be leveraged to trigger a denial of service. The SentinelLabs team said these vulnerabilities have been present since 2009, although there is no evidence of exploitation in the wild.
466 | How Much Does It Itch? | Itch torments its sufferers and can be as debilitating as chronic pain.
But it's a hard symptom to measure - particularly for the 10 million U.S. children with atopic dermatitis, also known as eczema. They can't always verbalize or quantify their suffering via a survey or scale.
It can also be difficult to objectively measure itch for adults with liver disease, kidney disease and certain cancers who experience its symptoms.
So, it's hard to track how well treatments and drugs are working.
But now there is a soft, wearable sensor developed by Northwestern University scientists that actually quantifies itch by measuring scratching when placed on the hand. While it was tested in patients with atopic dermatitis, it can be used in any condition that causes itch. The novel sensor can support clinical trials for new treatments, track treatment response and monitor for disease worsening - all in the home setting.
This is the first sensor able to capture all forms of scratching - finger, wrist and elbow motion related. It also is the first validated in a pediatric population where conditions like atopic dermatitis are the most common.
"Itch torments so many patients across so many conditions. It can be as debilitating as chronic pain," said lead author Dr. Shuai "Steve" Xu, assistant professor of dermatology and of pediatrics at Northwestern University Feinberg School of Medicine. "If we're able to quantify scratching accurately, then we have the ability to objectively quantify itching. This is really important in patients - like children - who can't always verbalize or quantify their suffering."
The paper will be published April 30 in Science Advances.
Xu also is an assistant professor of biomedical engineering at McCormick School of Engineering and Applied Science and medical director of the Querrey Simpson Institute for Bioelectronics, both at Northwestern.
About 10 million U.S. children have atopic dermatitis. The hallmark symptom is itch leading to sleep disturbance, poor neurocognitive development and, on average, a full night of sleep lost per week.
"Atopic dermatitis is so much more than just itchy skin," Xu said. "It is a devastating disease that causes tremendous suffering worldwide. The quality of life of severe atopic dermatitis (not only for the child but also the parent) is equivalent to many life-threatening diseases.
"Patients with atopic dermatitis are 44% more likely to report suicidal thoughts as a result of the itch compared to controls. Thus, the ability to quantify their symptoms is really important to help new drugs get approved, but also support their day to day lives. In some ways - it's like measuring glucose for diabetes ... measuring itching in an atopic dermatitis patient may be just as important."
"This is an exciting time for children and adults with atopic dermatitis -- or eczema -- because of the flurry of activity in developing new therapeutics," said Dr. Amy Paller, chair of dermatology at Northwestern. "Nothing is more important to measure a medication's effectiveness for eczema than itch, the symptom that both defines eczema and has the greatest impact on quality of life. This sensor could play a critical role in this regard, especially for children."
In addition, clinicians and parents have the ability to track how well itch is being controlled in patients at home to monitor for treatment response, as well as early signs of worsening disease, Xu said.
This sensor marries advances in soft, flexible electronics that wrap seamlessly around the hand with machine learning algorithms that specifically identify scratching without being tricked by similar motion-related movements (e.g. hand waving). The sensor measures both low-frequency motion and high-frequency vibrations from the hand to significantly improve accuracy compared to wrist-watch tools.
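The exact pipeline is not spelled out here, but the idea of pairing low-frequency motion with high-frequency vibration can be sketched as a feature extractor feeding a classifier. Everything below (sampling rate, band edges, model choice) is an illustrative assumption, not the published algorithm.

```python
# Illustrative sketch: summarize one accelerometer window with a
# low-frequency "hand motion" band and a high-frequency "vibration" band,
# then train a classifier to separate scratching from other movements.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import RandomForestClassifier

FS = 400  # Hz, assumed sampling rate of the hand-mounted sensor

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def features(window):
    """window: (n_samples, 3) accelerometer block for one epoch."""
    mag = np.linalg.norm(window, axis=1)
    freqs, psd = welch(mag, fs=FS, nperseg=min(256, len(mag)))
    return np.array([
        band_power(freqs, psd, 0.5, 5.0),     # low-frequency arm/hand motion
        band_power(freqs, psd, 20.0, 150.0),  # high-frequency scratch vibration
        mag.std(),
    ])

def train(epochs, labels):
    """epochs: list of windows; labels: 1 = scratching, 0 = other motion."""
    X = np.vstack([features(w) for w in epochs])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```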
The sensor was accepted into the Food and Drug Administration's Drug Discovery Tool program. This program allows novel devices like this sensor to be qualified to aid in the approval of new drugs.
The study was conducted in two parts. The first part involved training the sensor to pick up scratching in healthy adults doing voluntary scratching behaviors. The second part tested the sensors on pediatric patients with atopic dermatitis. Parents set up an infrared camera to serve as the "gold standard." The algorithm and sensor were then used to count scratches in this pediatric patient population. More than 300 hours of sleep data was manually reviewed and scored for scratching and linked to the sensors.
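A sketch of the comparison step is below, assuming per-epoch 0/1 labels from the video review and from the sensor algorithm; the epoch length and metric set are illustrative, not the study's actual scoring code.

```python
# Sketch: align per-epoch sensor predictions with video-annotated "gold
# standard" labels and report agreement. Epoch length is an assumption.
import numpy as np
from sklearn.metrics import confusion_matrix

def agreement(video_labels, sensor_labels):
    """Both inputs: arrays of 0/1 per epoch (1 = scratching observed)."""
    video_labels = np.asarray(video_labels)
    sensor_labels = np.asarray(sensor_labels)
    tn, fp, fn, tp = confusion_matrix(video_labels, sensor_labels,
                                      labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    precision = tp / (tp + fp) if tp + fp else float("nan")
    return {"sensitivity": sensitivity, "precision": precision,
            "scratch_epochs_video": int(video_labels.sum()),
            "scratch_epochs_sensor": int(sensor_labels.sum())}

# Example: ten one-minute epochs from one night of sleep
print(agreement([0, 1, 1, 0, 0, 1, 0, 0, 1, 1],
                [0, 1, 1, 0, 1, 1, 0, 0, 1, 0]))
```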
Other Northwestern authors are Keum San Chun, Youn J. Kang, Jong Yoon Lee, Morgan Nguyen, Brad Lee, Rachel Lee, Han Heul Jo, Emily Allen, Hope Chen, Jungwoo Kim, Lian Yu, Xiaoyue Ni, KunHyuck Lee, Hyoyoung Jeong, JooHee Lee, Yoonseok Park, Ha Uk Chung, Alvin W. Li, Peter A. Lio, Albert Yang, Anna B. Fishbein and John A. Rogers.
The research was supported by FDA grant U01FD007001, Pfizer ASPIRE award and Novartis Pharmaceuticals. The work was also supported by the Querrey Simpson Institute for Bioelectronics at Northwestern University.
A soft, wearable sensor developed by Northwestern University scientists can measure the itchiness suffered by children with atopic dermatitis (eczema), as well as by adults with liver disease, kidney disease, and certain cancers who suffer similar symptoms. The sensor quantifies itch by measuring scratching when placed on the hand, including finger-, wrist-, and elbow motion-related scratching. Incorporated into the device are machine learning algorithms that identify scratching without misidentifying similar motion-related movement. The sensor gauges both low-frequency motion and high-frequency vibrations from the hand to improve accuracy compared to wrist-watch tools. Said Northwestern's Shuai Xu, "Patients with atopic dermatitis are 44% more likely to report suicidal thoughts as a result of the itch compared to controls. Thus, the ability to quantify their symptoms is really important to help new drugs get approved, but also support their day-to-day lives."
467 | Researchers Successfully Use 3D 'Bioprinting' to Create Nose Cartilage | A team of University of Alberta researchers has discovered a way to use 3-D bioprinting technology to create custom-shaped cartilage for use in surgical procedures. The work aims to make it easier for surgeons to safely restore the features of skin cancer patients living with nasal cartilage defects after surgery.
The researchers used a specially designed hydrogel - a material similar to Jell-O - that could be mixed with cells harvested from a patient and then printed in a specific shape captured through 3-D imaging. Over a matter of weeks, the material is cultured in a lab to become functional cartilage.
"It takes a lifetime to make cartilage in an individual, while this method takes about four weeks. So you still expect that there will be some degree of maturity that it has to go through, especially when implanted in the body. But functionally it's able to do the things that cartilage does," said Adetola Adesida , a professor of surgery in the Faculty of Medicine & Dentistry and member of the Cancer Research Institute of Northern Alberta .
"It has to have certain mechanical properties and it has to have strength. This meets those requirements with a material that (at the outset) is 92 per cent water," added Yaman Boluk , a professor in the Faculty of Engineering .
Adesida, Boluk and graduate student Xiaoyi Lan led the project to create the 3-D printed cartilage in hopes of providing a better solution for a clinical problem facing many patients with skin cancer.
Each year upwards of three million people in North America are diagnosed with non-melanoma skin cancer. Of those, 40 per cent will have lesions on their noses, with many requiring surgery to remove them. As part of the procedure, many patients may have cartilage removed, leaving facial disfiguration.
Traditionally, surgeons would take cartilage from one of the patient's ribs and reshape it to fit the needed size and shape for reconstructive surgery. But the procedure comes with complications.
"When the surgeons restructure the nose, it is straight. But when it adapts to its new environment, it goes through a period of remodelling where it warps, almost like the curvature of the rib," said Adesida. "Visually on the face, that's a problem.
"The other issue is that you're opening the rib compartment, which protects the lungs, just to restructure the nose. It's a very vital anatomical location. The patient could have a collapsed lung and has a much higher risk of dying," he added.
The researchers say their work is an example of both precision medicine and regenerative medicine. Lab-grown cartilage printed specifically for the patient can remove the risk of lung collapse, infection in the lungs and severe scarring at the site of a patient's ribs.
"This is to the benefit of the patient. They can go on the operating table, have a small biopsy taken from their nose in about 30 minutes, and from there we can build different shapes of cartilage specifically for them," said Adesida. "We can even bank the cells and use them later to build everything needed for the surgery. This is what this technology allows you to do."
The team is continuing its research and is now testing whether the lab-grown cartilage retains its properties after transplantation in animal models. The team hopes to move the work to a clinical trial within the next two to three years.
The research was supported by grants from the Canadian Institutes of Health Research, Alberta Cancer Foundation, Canadian Foundation for Innovation, University Hospital Foundation, Natural Sciences and Engineering Research Council of Canada and Edmonton Civic Employees Charitable Assistance Fund.
The study, " Bioprinting of human nasoseptal chondrocytes‐laden collagen hydrogel for cartilage tissue engineering ," was published in The FASEB Journal . | A three-dimensional (3D) bioprinting technique developed by researchers at Canada's University of Alberta (U of A) can generate customized cartilage for use in restorative surgeries. The team employed a specially designed hydrogel that is combined with cells harvested from a patient, then printed in a specific configuration captured through 3D imaging. The material is cultured in a laboratory to become functional cartilage, which U of A's Adetola Adesida said can be ready for implantation within four weeks. Adesida said with this technology, a patient "can go on the operating table, have a small biopsy taken from their nose in about 30 minutes, and from there we can build different shapes of cartilage specifically for them. We can even bank the cells and use them later to build everything needed for the surgery." | [] | [] | [] | scitechnews | None | None | None | None | A three-dimensional (3D) bioprinting technique developed by researchers at Canada's University of Alberta (U of A) can generate customized cartilage for use in restorative surgeries. The team employed a specially designed hydrogel that is combined with cells harvested from a patient, then printed in a specific configuration captured through 3D imaging. The material is cultured in a laboratory to become functional cartilage, which U of A's Adetola Adesida said can be ready for implantation within four weeks. Adesida said with this technology, a patient "can go on the operating table, have a small biopsy taken from their nose in about 30 minutes, and from there we can build different shapes of cartilage specifically for them. We can even bank the cells and use them later to build everything needed for the surgery."
468 | Multinode Quantum Network is a Breakthrough for Quantum Internet | Scientists have gotten one step closer to a quantum internet by creating the world's first multinode quantum network.
Researchers at the QuTech research center in the Netherlands created the system, which is made up of three quantum nodes entangled by the spooky laws of quantum mechanics that govern subatomic particles. It is the first time that more than two quantum bits, or "qubits," that do the calculations in quantum computing have been linked together as "nodes," or network endpoints.
Researchers expect the first quantum networks to unlock a wealth of computing applications that can't be performed by existing classical devices - such as faster computation and improved cryptography.
"It will allow us to connect quantum computers for more computing power, create unhackable networks and connect atomic clocks and telescopes together with unprecedented levels of coordination," Matteo Pompili, a member of the QuTech research team that created the network at Delft University of Technology in the Netherlands, told Live Science. "There are also loads of applications that we can't really foresee. One could be to create an algorithm that will run elections in a secure way, for instance."
In much the same way that the traditional computer bit is the basic unit of digital information, the qubit is the basic unit of quantum information. Like the bit, the qubit can be either a 1 or a 0, which represent two possible positions in a two-state system.
But that's just about where the similarities end. Thanks to the bizarre laws of the quantum world, the qubit can exist in a superposition of both the 1 and 0 states until the moment it is measured, when it will randomly collapse into either a 1 or a 0. This strange behavior is the key to the power of quantum computing, as it allows a qubit to perform multiple calculations simultaneously.
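As a toy numerical illustration of that paragraph (unrelated to the actual hardware in the experiment), simulating many measurements of an equal superposition reproduces the 50/50 statistics predicted by the squared amplitudes:

```python
# Toy illustration: a qubit in an equal superposition of 0 and 1 gives random
# outcomes on measurement, with frequencies set by the squared amplitudes
# (the Born rule). Pure simulation -- nothing here models the diamond-spin
# qubits used in the actual experiment.
import numpy as np

rng = np.random.default_rng(0)

# |psi> = alpha|0> + beta|1>, here the equal superposition (|+> state)
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)
probs = np.array([abs(alpha) ** 2, abs(beta) ** 2])

shots = 10_000
outcomes = rng.choice([0, 1], size=shots, p=probs)
print("fraction of 0s:", np.mean(outcomes == 0))  # ~0.5
print("fraction of 1s:", np.mean(outcomes == 1))  # ~0.5
```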
The biggest challenge in linking those qubits together into a quantum network is in establishing and maintaining a process called entanglement , or what Albert Einstein dubbed "spooky action at a distance." This is when two qubits become coupled, linking their properties so that any change in one particle will cause a change in the other, even if they are separated by vast distances.
You can entangle quantum nodes in a lot of ways, but one common method works by first entangling the stationary qubits (which form the network's nodes) with photons, or light particles, before firing the photons at each other. When they meet, the two photons also become entangled, thereby entangling the qubits. This binds the two stationary nodes that are separated by a distance. Any change made to one is reflected by an instantaneous change to the other.
"Spooky action at a distance" lets scientists change the state of a particle by altering the state of its distant entangled partner, effectively teleporting information across big gaps. But maintaining a state of entanglement is a tough task, especially as the entangled system is always at risk of interacting with the outside world and being destroyed by a process called decoherence.
This means, first, that the quantum nodes have to be kept at extremely cold temperatures inside devices called cryostats to minimize the chances that the qubits will interfere with something outside the system. Second, the photons used in the entanglement can't travel very long distances before they are absorbed or scattered, destroying the signal being sent between two nodes.
"The problem is, unlike classical networks, you cannot amplify quantum signals. If you try to copy the qubit, you destroy the original copy," Pompili said, referring to physics' "no-cloning theorem," which states that it is impossible to create an identical copy of an unknown quantum state. "This really limits the distances we can send quantum signals to the tens of hundreds of kilometers. If you want to set up quantum communication with someone on the other side of the world, you'll need relay nodes in between."
To solve the problem, the team created a network with three nodes, in which photons essentially "pass" the entanglement from a qubit at one of the outer nodes to one at the middle node. The middle node has two qubits - one to acquire an entangled state and one to store it. Once the entanglement between one outer node and the middle node is stored, the middle node entangles the other outer node with its spare qubit. With all of this done, the middle node entangles its two qubits, causing the qubits of the outer nodes to become entangled.
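This "pass the entanglement through the middle node" step is known as entanglement swapping, and its bookkeeping can be checked with a small state-vector calculation. The sketch below ignores photons, loss, and the memory qubit entirely; it only verifies that projecting the two middle qubits onto a Bell state leaves the two outer qubits entangled.

```python
# Small state-vector check of entanglement swapping: start with A entangled
# with M1 and M2 entangled with B, project the middle pair (M1, M2) onto a
# Bell state, and verify the outer qubits A and B end up entangled.
import numpy as np

s2 = 1 / np.sqrt(2)
bell = s2 * np.array([1.0, 0.0, 0.0, 1.0])  # |Phi+> = (|00> + |11>)/sqrt(2)
I2 = np.eye(2)

# Qubit order: A, M1, M2, B. Initial state: |Phi+>_{A,M1} (x) |Phi+>_{M2,B}
psi = np.kron(bell, bell)                   # 16-dimensional state vector

# Projector onto |Phi+> for the two middle qubits (M1, M2)
P_bell = np.outer(bell, bell)               # 4x4 projector
P = np.kron(np.kron(I2, P_bell), I2)        # I_A (x) P_{M1,M2} (x) I_B

post = P @ psi
prob = np.vdot(post, post).real             # probability of this Bell outcome
post = post / np.sqrt(prob)                 # renormalize

# Expected result: A,B now share |Phi+>, while M1,M2 are left in |Phi+>
bell_mat = s2 * np.eye(2)                   # |Phi+> amplitudes as a 2x2 matrix
target = np.einsum("ad,bc->abcd", bell_mat, bell_mat).reshape(16)

print("probability of the Phi+ outcome:", round(prob, 3))           # 0.25
print("fidelity with |Phi+>_{A,B} (x) |Phi+>_{M1,M2}:",
      round(abs(np.vdot(target, post)) ** 2, 3))                     # 1.0
```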
But designing this weird quantum mechanical spin on the classic "river crossing puzzle" was the least of the researchers' troubles - weird, for sure, but not too tricky an idea. To make the entangled photons and beam them to the nodes in the right way, the researchers had to use a complex system of mirrors and laser light. The really tough part was the technological challenge of reducing pesky noise in the system, as well as making sure all of the lasers used to produce the photons were perfectly synchronized.
"We're talking about having three to four lasers for every node, so you start to have 10 lasers and three cryostats that all need to work at the same time, along with all of the electronics and the synchronization," Pompili said.
The three-node system is particularly useful as the memory qubit allows researchers to establish entanglement across the network node by node, rather than the more demanding requirement of doing it all at once. As soon as this is done, information can be beamed across the network.
Some of the researchers' next steps with their new network will be to attempt this information beaming, along with improving essential components of the network's computing abilities so that they can work like regular computer networks do. All of these things will set the scale that the new quantum network could reach.
They also want to see if their system will allow them to establish entanglement between Delft and The Hague, two Dutch cities that are roughly 6 miles (10 kilometers) apart.
"Right now, all of our nodes are within 10 to 20 meters [32 to 66 feet] of each other," Pompili said. "If you want something useful, you need to go to kilometers. This is going to be the first time that we're going to make a link between long distances."
The researchers published their findings April 16 in the journal Science.
Originally published on Live Science.
Scientists at the Delft University of Technology (TU Delft) in the Netherlands have designed the world's first multinode quantum network, comprised of three entangled quantum bits (qubits) or nodes. The researchers used an intricate system of mirrors and laser light to produce and beam the entangled photons to the nodes correctly; mitigating noise and ensuring perfect laser synchronization were key challenges. TU Delft's Matteo Pompili said the multinode system "will allow us to connect quantum computers for more computing power, create unhackable networks, and connect atomic clocks and telescopes together with unprecedented levels of coordination."
469 | Machine Vision System for Almond Grading, Safety | 03 May 2021
Researchers at UniSA have developed a world-first automated technique for simultaneously grading almond quality and detecting potentially serious mycotoxin contamination in kernels.
In 2019-2020, Australia's almond crop was worth just over $1 billion, and the value of the sector is expected to expand to $1.5 billion in coming years, with Australian almond growing conditions among the best in the world.
Given the local industry is now exporting to more than 50 nations, accurate and consistent grading of almonds is paramount, ensuring international markets can trust the Australian product.
Traditionally, almonds have been graded manually, with samples taken hourly from production lines to check for consistency of appearance, chips and scratches, double kernels, insect and mould damage, and other defects.
This process, however, is labour intensive, slow, and subjective, all of which can lead to inaccurate and inconsistent grading, particularly from season to season due to staff turnover.
In conjunction with industry partner SureNut, researchers at the University of South Australia have developed a machine that dramatically improves the accuracy of almond grading, in addition to detecting potentially fatal contaminants common in almond kernels.
Funded through the Cooperative Research Centres Projects program, a research team led by Associate Professor Sang-Heon Lee combined two high-definition cameras, a hyperspectral camera and purpose-developed AI algorithms to create a system that can examine almond quality in far greater detail than the human eye.
The system can accurately assess physical defects such as chips and scratches and detect harmful contaminants, including the presence of aflatoxin B1, a potent carcinogen that may be implicated in more than 20 per cent of global liver cancer cases.
"Our goal with this innovation was not to simply replicate what a human being could do, but to go far beyond that," Assoc Prof Lee says.
"So, in respect to physical appearance, this machine can detect defects more quickly and more accurately than manual grading, and by using two high definition cameras and a transparent viewing surface, it can also view both sides of the nut simultaneously."
While this visual functionality alone puts the SureNut system at the forefront of innovation in this field, the addition of the hyperspectral camera for contamination detection is a ground-breaking world first.
UniSA engineering researchers, Dr Wilmer Ariza and Dr Gayatri Mishra, developed the hyperspectral system used on the SureNut machine, and Dr Ariza says almonds presented a unique challenge for the technology.
"We are the first team to successfully use hyperspectral imaging this way with almonds, even though other researchers have tried," Dr Ariza says.
"Certain characteristics of the nut kernels - which we are keeping a secret - made the process extremely difficult, but we overcame that, and now we can deliver highly accurate analysis using hyperspectral imaging.
"Through the process, we also discovered some new information about hyperspectral imaging in general, which we will share with the wider research community in time."
Thanks to this hyperspectral innovation, the SureNut system can monitor four key indicators in almonds - moisture content; free fatty acid content (FFA) and peroxide value (PV), which are associated with rancidity; and aflatoxin B1 content.
"Moisture, FFA, PV and aflatoxin B1 content were all correctly predicted by the developed model with accuracy of 95 per cent, 93 per cent, 91 per cent and 94 per cent, respectively," Assoc Prof Lee says.
"Previously, the only way to detect these contaminants was through laboratory methods that require the sample to be ground up and chemically treated, making validation difficult, so our technique is a major safety improvement for the industry given rancidity and aflatoxin have significant effect on consumer health."
While the hyperspectral analysis has only been tested on a lab prototype device at this stage, a SureNut machine running the UniSA-developed grading system has recently been field tested at Riverland Almonds, one of South Australia's leading almond producers.
Quality Assurance Supervisor at Riverland Almonds, Deanne Crawford, says the trials conducted during this season's harvest show the huge potential for the SureNut system.
"Riverland Almonds have continued to work closely with SureNut and have been trialing a unit during the 2021 production season," Crawford says.
"Whilst the unit is a prototype, and still under development, we are encouraged by the progress made and optimistic that the results achieved over the course of this season will demonstrate that the technology has a commercial application within the industry.
"We are particularly encouraged by the value-add features that SureNut are developing with a view to integrating into the unit."
An automated technique developed by researchers at the University of South Australia (UniSA) and industry partner SureNut can grade almond quality while detecting potential mycotoxin contamination in almond kernels. The researchers developed a machine equipped with two high-definition cameras, a hyperspectral camera, and purpose-developed artificial intelligence algorithms, which UniSA's Sang-Heon Lee said "can detect defects more quickly and more accurately than manual grading." Lee said the model correctly predicted moisture, free fatty acid content, and peroxide value, which are associated with rancidity, with accuracy rates of 95%, 93%, and 91%, respectively. It also correctly predicted aflatoxin B1 content with 94% accuracy.
470 | Untangle Your Hair With Help From Robots | With rapidly growing demands on health care systems, nurses typically spend 18 to 40 percent of their time performing direct patient care tasks , oftentimes for many patients and with little time to spare. Personal care robots that brush your hair could provide substantial help and relief.
This may seem like a truly radical form of "self-care," but crafty robots for things like shaving, hair washing, and make-up are not new. In 2011, the tech giant Panasonic developed a robot that could wash, massage, and even blow dry hair, explicitly designed to help support "safe and comfortable living of the elderly and people with limited mobility, while reducing the burden of caregivers."
Hair combing bots, however, proved to be less explored, leading scientists from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Soft Math Lab at Harvard University to develop a robotic arm setup with a sensorized soft brush, equipped with a camera that helps the arm "see" and assess curliness, to let the system plan a delicate and time-efficient brush-out.
Their control strategy is adaptive to the degree of tangling in the fiber bunch, and they put "RoboWig" to the test by brushing wigs ranging from straight to very curly hair.
While the hardware set-up of "RoboWig" looks futuristic and shiny, the underlying model of the hair fibers is what makes it tick. CSAIL postdoc Josie Hughes and her team's approach examined entangled soft fiber bundles as sets of entwined double helices - think classic DNA strands. This level of granularity provided key insights into mathematical models and control systems for manipulating bundles of soft fibers, with a wide range of applications in the textile industry, animal care, and other fibrous systems.
RoboWig could also potentially assist with, in pure "Lady and the Tramp" fashion, efficiently manipulating spaghetti.
"By developing a model of tangled fibers, we understand from a model-based perspective how hairs must be entangled: starting from the bottom and slowly working the way up to prevent 'jamming' of the fibers," says Hughes, the lead author on a paper about RoboWig. "This is something everyone who has brushed hair has learned from experience, but is now something we can demonstrate through a model, and use to inform a robot."
SPLIT ENDS
The task at hand is a tangled one. Every head of hair is different, and the intricate interplay between hairs when combing can easily lead to knots. What's more, if the incorrect brushing strategy is used, the process can be very painful and damaging to the hair.
Previous research in the brushing domain has mostly been on the mechanical, dynamic and visual properties of hair, as opposed to RoboWig's refined focus on tangling and combing behavior.
To brush and manipulate the hair, the researchers added a soft-bristled sensorized brush to the robot arm, to allow forces during brushing to be measured. They combined this setup with something called a "closed-loop control system," which takes feedback from an output and automatically performs an action without human intervention. This created "force feedback" from the brush -- a control method that lets the user feel what the device is doing -- so the length of the stroke could be optimized to take into account both the potential "pain," and time taken to brush.
Initial tests preserved the human head - for now - and instead were done on a number of wigs of various hair styles and types. The model provided insight into the behaviors of the combing, related to the number of entanglements, and how those could be efficiently and effectively brushed out by choosing appropriate brushing lengths. For example, for curlier hair, the pain cost would dominate, so shorter brush lengths were optimal.
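The trade-off described above can be made concrete with a toy cost model. The sketch below is purely illustrative: the cost functions, weights and the 30 cm hair-section length are invented assumptions, not the model or parameters from the RoboWig work. It only shows how a higher tangle density pushes the optimal stroke toward shorter lengths.

```python
# Toy illustration only: invented cost terms for choosing a brush stroke length.
# These are NOT the functions or parameters from the RoboWig paper.

def stroke_cost(length_cm, tangle_density, pain_weight=1.0, time_weight=1.0):
    """Sum an assumed 'pain' term (worse for long strokes through tangled hair)
    and a 'time' term (fewer, longer strokes finish a section sooner)."""
    pain = pain_weight * tangle_density * length_cm ** 2
    time_cost = time_weight * (30.0 / length_cm)  # assume a 30 cm section of hair
    return pain + time_cost

def best_stroke_length(tangle_density, candidates=range(2, 31, 2)):
    return min(candidates, key=lambda length: stroke_cost(length, tangle_density))

# Curlier, more tangled hair -> the pain term dominates -> shorter strokes win.
for density in (0.05, 0.2, 0.8):
    print(f"tangle density {density}: best stroke {best_stroke_length(density)} cm")
```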
The team wants to eventually perform more realistic experiments on humans, to better understand the performance of the robot with respect to their experience of pain - a metric that is obviously highly subjective, as one person's "two" could be another's "eight."
"To allow robots to extend their task solving abilities to more complex tasks such as hairbrushing, we need not only novel safe hardware, but also an understanding of the complex behavior of the soft hair and tangled fibers," says Hughes. "In addition to hair brushing, the insights provided by our approach could be applied to brushing of fibers for textiles, or animal fibers."
Hughes wrote the paper alongside Harvard School of Engineering and Applied Sciences PhD students Thomas Bolton Plumb-Reyes and Nicholas Charles; professor L. Mahadevan of the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Departments of Physics and Organismic and Evolutionary Biology at Harvard University; and MIT professor and CSAIL director Daniela Rus. They presented the paper virtually at the IEEE Conference on Soft Robotics (RoboSoft) earlier this month.
The project was co-supported by the NSF Emerging Frontiers in Research and Innovation program between MIT CSAIL and the Soft Math Lab at Harvard. | Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Harvard University's Soft Math Lab teamed up to develop a robotic arm setup that can comb tangled hair. "RoboWig" features a sensorized soft brush equipped with a camera to assess curliness, so the system can adapt to the degree of hair tangling it encounters. CSAIL's Josie Hughes said, "By developing a model of tangled fibers, we understand from a model-based perspective how hairs must be entangled: starting from the bottom and slowly working the way up to prevent 'jamming' of the fibers." Tests on wigs of various hair styles and hair types helped determine appropriate brushing lengths, taking into consideration the number of entanglements and pain levels.
|||
471 | Digital Decision-Aiding Tool to Personalize Choice of Test for Patients with Chest Pain | The choice between two non-invasive diagnostic tests is a common dilemma in patients who present with chest pain. Yale cardiologist Rohan Khera, MD, MS, and colleagues have developed ASSIST © , a new digital decision-aiding tool.
By applying machine learning techniques to data from two large clinical trials, this new tool identifies which imaging test to pursue in patients who may have coronary artery disease or CAD, a condition caused by plaque buildup in the arterial wall.
The new tool, described in a study published April 21 in the European Heart Journal , focuses on the long-term outcome for a given patient.
"There are strengths and limitations for each of these diagnostic tests," said Khera, an assistant professor of cardiology at Yale School of Medicine. Patients may have calcium in their blood vessels or a more advanced stage of the disease than can be missed. "If you are able to establish the diagnosis correctly, you would be more likely to pursue optimal medical and procedural therapy, which may then influence the outcomes of patients."
Recent clinical trials have attempted to determine if one test is optimal. The PROMISE and SCOT-HEART clinical trials have suggested that anatomical imaging has similar outcomes to stress testing, but may improve long-term outcomes in certain patients.
"When patients present with chest pain you have two major testing strategies. Large clinical trials have been done without a conclusive answer, so we wanted to see if the trial data could be used to better understand whether a given patient would benefit from one testing strategy or the other," said Khera. Both strategies are currently used in clinical practice.
To create ASSIST, Khera and his team obtained data from 9,572 patients who were enrolled in the PROMISE trial through the National Heart, Lung and Blood Institute and created a novel strategy that embedded local data experiments within the larger clinical trial.
The tool also proved effective in a distinct population of patients in the SCOT-HEART trial. Among 2,135 patients who underwent functional-first or anatomical-first testing, the authors observed a two-fold lower risk of adverse cardiac events when there was agreement between the test performed and the one recommended by ASSIST. Khera said he hopes this tool will provide further insight to clinicians while they make the choice between anatomical or functional testing in chest pain evaluation.
Functional testing, commonly known as a stress test, examines patients for CAD by detecting reduced blood flow to the heart. The second option, anatomical testing, or coronary computed tomography angiography (CCTA), identifies blockages in the blood vessels. Using machine learning algorithms, ASSIST provides a recommendation for each patient.
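As a rough illustration of how such a recommendation could be produced, the sketch below fits one outcome model per testing strategy on synthetic trial-style data and recommends the strategy with the lower predicted risk for a new patient. The feature set, synthetic data, and simple two-model ("T-learner") setup are assumptions for illustration only; they are not the published ASSIST model or the PROMISE trial data.

```python
# Hypothetical sketch of a per-patient test recommendation. The features,
# synthetic data, and simple two-model ("T-learner") setup are illustrative
# assumptions, not the published ASSIST model or the PROMISE trial data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 4))                # routinely captured characteristics
arm = rng.integers(0, 2, size=n)           # 0 = functional-first, 1 = anatomical-first
risk = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * arm * X[:, 1])))
y = rng.binomial(1, 0.2 * risk)            # adverse cardiac event during follow-up

# Fit one outcome model per testing strategy.
model_functional = LogisticRegression().fit(X[arm == 0], y[arm == 0])
model_anatomical = LogisticRegression().fit(X[arm == 1], y[arm == 1])

def recommend(patient_features):
    p_functional = model_functional.predict_proba([patient_features])[0, 1]
    p_anatomical = model_anatomical.predict_proba([patient_features])[0, 1]
    return "anatomical-first" if p_anatomical < p_functional else "functional-first"

print(recommend(X[0]))
```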
"While we used advanced methods to derive ASSIST, its application is practical for the clinical setting. It relies on routinely captured patient characteristics and can be used by clinicians with a simple online calculator or can be incorporated in the electronic health record," said Evangelos Oikonomou, MD, DPhil, a resident physician in Internal Medicine at Yale and the study's first author.
ASSIST is part of a broader enterprise concept called Evidence2Health. Khera will present this new concept at the Yale Innovation Summit on May 19. Hosted by the Yale Office of Cooperative Research, the Yale Innovation Summit is the largest gathering in Connecticut of venture capital and institutional investors from around the country. | A new digital decision-aiding tool, ASSIST, developed by Yale University researchers, uses machine learning to identify which of two imaging tests should be used on patients who may have coronary artery disease. The researchers applied machine learning techniques to data from two large clinical trials. They developed a novel strategy using data from 9,572 patients enrolled in the PROMISE trial that embedded local data experiments within the larger clinical trial. They also found that data on 2,135 patients in the SCOT-HEART trial who underwent functional-first or anatomical-first testing showed the risk of adverse cardiac events was two-fold lower when the test performed and the one recommended by ASSIST were in agreement. Yale's Dr. Rohan Khera said, "A unique aspect of our approach is that we leverage both arms of a clinical trial, overcoming the limitation of real-world data, where decisions made by clinicians can introduce bias into algorithms."
|||
472 | Protocol Makes Bitcoin Transactions More Secure, Faster Than Lightning | Cryptocurrencies like Bitcoin are becoming increasingly popular. At first glance, they have many advantages: Transactions are usually anonymous, fast and inexpensive. But sometimes there are problems with them. In certain situations, fraud is possible, users can discover information about other users that should be kept secret, and sometimes delays occur.
The research unit "Security and Privacy" at TU Wien (Lukas Aumayr and his supervisor Prof. Matteo Maffei) in collaboration with the IMDEA Software Institute (Prof. Pedro Moreno-Sanchez, previously postdoc at TU Wien) and the Purdue University (Prof. Aniket Kate) analyzed these problems and developed an improved protocol. It has now been published and will be presented this year at the USENIX Security Symposium - one of the "Big Four" IT security conferences worldwide, which are considered very prestigious.
"It has long been known that Bitcoin and other blockchain technologies have a scalability problem: There can only be a maximum of ten transactions per second," says Lukas Aumayr of the Security and Privacy research unit at TU Wien. "That's very few compared to credit card companies, for example, which perform tens of thousands of transactions per second worldwide."
An approach to solve this problem is the "Lightning Network" - an additional network of payment channels between blockchain users. For example, if two people want to process many transactions in a short period of time, they can exchange payments directly between each other in this way, without each individual transaction being published on the blockchain. Only at the beginning and end of this series of transactions is there an official entry in the blockchain.
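The off-chain bookkeeping idea described above can be illustrated in a few lines of Python. This is conceptual only - real Lightning channels involve signed commitment transactions and on-chain enforcement - but it shows why thousands of payments between two parties can map to just two blockchain entries.

```python
# Conceptual bookkeeping only - not real Lightning or Bitcoin code. It shows why
# thousands of off-chain payments can correspond to just two on-chain entries.

class PaymentChannel:
    def __init__(self, deposit_a, deposit_b):
        self.balances = {"A": deposit_a, "B": deposit_b}
        self.on_chain_entries = 1                  # the funding transaction

    def pay(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount            # off-chain state update only
        self.balances[receiver] += amount

    def close(self):
        self.on_chain_entries += 1                 # the settlement transaction
        return self.balances, self.on_chain_entries

channel = PaymentChannel(deposit_a=50.0, deposit_b=50.0)
for _ in range(1000):                              # 2,000 payments in total
    channel.pay("A", "B", 0.01)
    channel.pay("B", "A", 0.01)
print(channel.close())                             # still only 2 on-chain entries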
These "side branches" of the blockchain can also be made relatively complicated, with chains of multiple users. "Problems can arise in the process," says Lukas Aumayr. "In certain cases, users can then get hold of data about other users. In addition, everyone in this chain has to contribute a certain amount of money, which is locked as collateral. Sometimes a transaction fails, and then a lot of money can remain locked for a relatively long time - the more people involved, the longer."
The research team at TU Wien analyzed how this transaction protocol can be improved and developed an alternative construction. "You can analyze the security of such protocols using formal methods. So we can mathematically prove that our new protocol does not allow certain errors and problems in any situation," says Aumayr.
This makes it possible to rule out very specific security-critical attacks that were previously possible, and also to prevent long-term money blocking: "Previously, two rounds of communication were necessary: In the first round, the money is locked, in the second round it is released - or refunded if there were problems. That could mean an extra day of delay for each user in that chain. With our protocol, the communication chain only has to be run through once," explains Lukas Aumayr.
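A back-of-the-envelope way to see the saving Aumayr describes: a lock-then-release scheme traverses an n-hop payment chain twice, while a one-round design traverses it once. The message counting below is a deliberate simplification for illustration only, not the actual Blitz protocol.

```python
# Deliberately simplified message counting, not the actual Blitz protocol:
# a lock-then-release scheme traverses an n-hop chain twice, a one-round
# design traverses it once.

def messages_two_phase(hops):
    return 2 * hops      # one pass to lock collateral, one to release or refund

def messages_one_round(hops):
    return hops          # a single pass through the chain

for hops in (3, 5, 10):
    print(hops, "hops:", messages_two_phase(hops), "vs", messages_one_round(hops))
```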
However, it is not only the fundamental logical structure of the new protocol that is important, but also its practicality. Therefore, the team simulated in a payment channel network how the new technology behaves compared to the previous Lightning network. The advantages of the new protocol became particularly apparent: depending on the situation, such as whether or not there are attacks and fraud attempts, the new protocol results in a factor of 4 to 33 fewer failed transactions than with the conventional Lightning network.
The TU Wien team is already in contact with the Lightning network's development organizations. "Of course, we hope that our technology will be quickly deployed, or at least offered as a more secure alternative to the current technology," says Lukas Aumayr. "Technically, this could be implemented immediately."
L. Aumayr, P. Moreno-Sanchez, A. Kate, M. Maffei, Blitz: Secure Multi-Hop Payments Without Two-Phase Commits, USENIX Security Symposium 2021
Dipl.-Ing. Lukas Aumayr
Research Unit Security and Privacy
Institute of Logic and Computation
Technische Universität Wien (TU Wien)
Favoritenstraße 9-11, 1040 Vienna, Austria
+43 1 58801 192611, lukas.aumayr@tuwien.ac.at | Researchers at Austria's Vienna University of Technology (TU Wien), Spain's IMDEA Software Institute, and Purdue University have developed an improved protocol for faster, more secure Bitcoin transactions. The researchers sought to improve on the "Lightning Network" of payment channels between blockchain users, which allows many transactions to be processed in a short amount of time. A simulation showed the new protocol results in a factor of four to 33 fewer failed transactions, compared with the Lightning Network. TU Wien's Lukas Aumayr said, "We can mathematically prove that our new protocol does not allow certain errors and problems in any situation."
|||
473 | Neural Nets Used to Rethink Material Design | NEWS RELEASE
Jeff Falk 713-348-6775 jfalk@rice.edu
Mike Williams 713-348-6728 mikewilliams@rice.edu
HOUSTON - (April 30, 2021) - The microscopic structures and properties of materials are intimately linked, and customizing them is a challenge. Rice University engineers are determined to simplify the process through machine learning.
To that end, the Rice lab of materials scientist Ming Tang , in collaboration with physicist Fei Zhou at Lawrence Livermore National Laboratory, introduced a technique to predict the evolution of microstructures - structural features between 10 nanometers and 100 microns - in materials.
Their open-access paper in the Cell Press journal Patterns shows how neural networks (computer models that mimic the brain's neurons) can train themselves to predict how a structure will grow under a certain environment, much like a snowflake forms from moisture in nature.
In fact, snowflake-like, dendritic crystal structures were one of the examples the lab used in its proof-of-concept study.
"In modern material science, it's widely accepted that the microstructure often plays a critical role in controlling a material's properties," Tang said. "You not only want to control how the atoms are arranged on lattices, but also what the microstructure looks like, to give you good performance and even new functionality.
"The holy grail of designing materials is to be able to predict how a microstructure will change under given conditions, whether we heat it up or apply stress or some other type of stimulation," he said.
Tang has worked to refine microstructure prediction for his entire career, but said the traditional equation-based approach faces significant challenges to allow scientists to keep up with the demand for new materials.
"The tremendous progress in machine learning encouraged Fei at Lawrence Livermore and us to see if we could apply it to materials," he said.
Fortunately, there was plenty of data from the traditional method to help train the team's neural networks, which view the early evolution of microstructures to predict the next step, and the next one, and so on.
"This is what machinery is good at, seeing the correlation in a very complex way that the human mind is not able to," Tang said. "We take advantage of that."
The researchers tested their neural networks on four distinct types of microstructure: plane-wave propagation , grain growth , spinodal decomposition and dendritic crystal growth.
In each test, the networks were fed between 1,000 and 2,000 sets of 20 successive images illustrating a material's microstructure evolution as predicted by the equations. After learning the evolution rules from these data, the network was then given from 1 to 10 images to predict the next 50 to 200 frames, and usually did so in seconds.
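The windowing-and-rollout scheme described above can be sketched generically: build (preceding frames, next frame) training pairs from simulated sequences, fit a predictor, then roll it forward autoregressively. In the sketch below, the synthetic data and linear least-squares "model" are stand-ins for the paper's phase-field datasets and neural networks; they only illustrate the data flow.

```python
# Generic stand-in for the data flow only: synthetic "simulations" and a linear
# least-squares predictor replace the paper's phase-field data and neural nets.
import numpy as np

rng = np.random.default_rng(1)
T, H, W, window = 20, 16, 16, 2
# Pretend these are 20-frame microstructure evolutions from an equation solver.
sims = [np.cumsum(rng.normal(size=(T, H, W)), axis=0) for _ in range(200)]

X, Y = [], []
for seq in sims:
    for t in range(window, T):
        X.append(seq[t - window:t].ravel())   # the preceding frames
        Y.append(seq[t].ravel())              # the frame to predict
X, Y = np.array(X), np.array(Y)
weights, *_ = np.linalg.lstsq(X, Y, rcond=None)

def rollout(first_frames, n_future):
    """Feed predictions back in to extend the sequence autoregressively."""
    frames = list(first_frames)
    for _ in range(n_future):
        x = np.stack(frames[-window:]).ravel()
        frames.append((x @ weights).reshape(H, W))
    return frames

predicted = rollout(sims[0][:window], n_future=50)   # 50 predicted frames
print(len(predicted))
```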
The new technique's advantages quickly became clear: The neural networks, powered by graphic processors, sped the computations up to 718 times for grain growth, compared to the previous algorithm. When run on a standard central processor, they were still up to 87 times faster than the old method. The prediction of other types of microstructure evolution showed similar, though not as dramatic, speed increases.
Comparisons with images from the traditional simulation method proved the predictions were largely on the mark, Tang said. "Based on that, we see how we can update the parameters to make the prediction more and more accurate," he said. "Then we can use these predictions to help design materials we have not seen before.
"Another benefit is that it's able to make predictions even when we do not know everything about the material properties in a system," Tang said. "We couldn't do that with the equation-based method, which needs to know all the parameter values in the equations to perform simulations."
Tang said the computation efficiency of neural networks could accelerate the development of novel materials. He expects that will be helpful in his lab's ongoing design of more efficient batteries. "We're thinking about novel three-dimensional structures that will help charge and discharge batteries much faster than what we have now," Tang said. "This is an optimization problem that is perfect for our new approach."
Rice graduate student Kaiqi Yang is lead author of the paper. Co-authors are Rice alumnus Yifan Cao and graduate students Youtian Zhang and Shaoxun Fan; and researchers Daniel Aberg and Babak Sadigh of Lawrence Livermore. Zhou is a physicist at Lawrence Livermore. Tang is an assistant professor of materials science and nanoengineering at Rice.
The Department of Energy, the National Science Foundation and the American Chemical Society Petroleum Research Fund supported the research.
-30-
Read the paper at https://www.cell.com/patterns/fulltext/S2666-3899(21)00063-5.
Follow Rice News and Media Relations via Twitter @RiceUNews.
Related materials:
Mesoscale Materials Science Group: http://tanggroup.rice.edu/research/
Department of Materials Science and NanoEngineering: https://msne.rice.edu
George R. Brown School of Engineering: https://engineering.rice.edu
Video:
https://youtu.be/nWXuAb_JJ0Y
Image for download:
https://news.rice.edu/files/2021/04/0503_MICROSTRUCTURE-1-WEB.jpg
Engineers at Rice University and Lawrence Livermore National Laboratory are using neural networks to accelerate the prediction of how microstructures of materials evolve. This example predicts snowflake-like dendritic crystal growth. (Credit: Mesoscale Materials Science Group/Rice University)
Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation's top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,978 undergraduates and 3,192 graduate students, Rice's undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 1 for quality of life by the Princeton Review. Rice is also rated as a best value among private universities by Kiplinger's Personal Finance. | A technique developed by researchers at Rice University and Lawrence Livermore National Laboratory uses machine learning to predict the evolution of microstructures in materials. The researchers demonstrated that neural networks can train themselves to predict a structure's growth in a particular environment. The researchers trained their neural networks using data from the traditional equation-based approach to predict microstructure changes and tested them on four microstructure types: plane-wave propagation, grain growth, spinodal decomposition, and dendritic crystal growth. The neural networks were 718 times faster for grain growth when powered by graphic processors compared to the prior algorithm, and 87 times faster when run on a standard central processor. Rice's Ming Tang said the new method can "make predictions even when we do not know everything about the material properties in a system," and will be useful in designing more efficient batteries.
|||
474 | The Robot Surgeon Will See You Now | Sitting on a stool several feet from a long-armed robot, Dr. Danyal Fer wrapped his fingers around two metal handles near his chest.
As he moved the handles - up and down, left and right - the robot mimicked each small motion with its own two arms. Then, when he pinched his thumb and forefinger together, one of the robot's tiny claws did much the same. This is how surgeons like Dr. Fer have long used robots when operating on patients. They can remove a prostate from a patient while sitting at a computer console across the room.
But after this brief demonstration, Dr. Fer and his fellow researchers at the University of California, Berkeley, showed how they hope to advance the state of the art. Dr. Fer let go of the handles, and a new kind of computer software took over. As he and the other researchers looked on, the robot started to move entirely on its own.
With one claw, the machine lifted a tiny plastic ring from an equally tiny peg on the table, passed the ring from one claw to the other, moved it across the table and gingerly hooked it onto a new peg. Then the robot did the same with several more rings, completing the task as quickly as it had when guided by Dr. Fer. | Scientists are developing autonomous surgical robots, using the underlying technologies of driverless cars, autonomous drones, and warehouse robots. Such projects aim to reduce surgeons' workloads and possibly raise surgical success rates by automating particular stages of surgery. Johns Hopkins University's Greg Hager said while total surgical automation is not possible without human oversight, "We can start to build automation tools that make the life of a surgeon a little bit easier." Upgrades to computer vision driven by artificial intelligence could enable robots to perform surgical tasks by themselves, without light-emitting markers to guide their movements. Key to this advancement are neural networks, which learn from images captured by surgical robots, and are incorporated into the University of California, Berkeley's da Vinci Surgical System.
|||
475 | 'Bat-Sense' Tech Generates Images From Sound | Scientists have found a way to equip everyday objects like smartphones and laptops with a bat-like sense of their surroundings.
At the heart of the technique is a sophisticated machine-learning algorithm which uses reflected echoes to generate images, similar to the way bats navigate and hunt using echolocation.
The algorithm measures the time it takes for blips of sound emitted by speakers or radio waves pulsed from small antennas to bounce around inside an indoor space and return to the sensor. By cleverly analysing the results, the algorithm can deduce the shape, size and layout of a room, as well as pick out the presence of objects or people. The results are displayed as a video feed which turns the echo data into three-dimensional vision.
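For the acoustic case, the underlying arithmetic is the standard time-of-flight relation: an echo that returns after time t has covered the out-and-back path, so the reflector is roughly (speed × t) / 2 away. The short example below uses the textbook speed of sound in air and illustrative delays; the numbers are not taken from the paper.

```python
# Standard time-of-flight arithmetic: an echo returning after t seconds has
# travelled out and back, so the reflector is roughly (speed * t) / 2 away.
# The delays below are illustrative, not measurements from the paper.
SPEED_OF_SOUND = 343.0        # metres per second in air at roughly 20 C

def echo_distance(delay_seconds, speed=SPEED_OF_SOUND):
    return speed * delay_seconds / 2

for delay_ms in (5, 10, 20):
    print(f"{delay_ms} ms echo -> reflector about {echo_distance(delay_ms / 1000):.2f} m away")
```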
One key difference between the team's achievement and the echolocation of bats is that bats have two ears to help them navigate, while the algorithm is tuned to work with data collected from a single point, like a microphone or a radio antenna.
The researchers say that the technique could be used to generate images through potentially any devices equipped with microphones and speakers or radio antennae.
The research, outlined in a paper published today by computing scientists and physicists from the University of Glasgow in the journal Physical Review Letters, could have applications in security and healthcare.
Dr Alex Turpin and Dr Valentin Kapitany, of the University of Glasgow's School of Computing Science and School of Physics and Astronomy, are the lead authors of the paper.
Dr Turpin said: "Echolocation in animals is a remarkable ability, and science has managed to recreate the ability to generate three-dimensional images from reflected echoes in a number of different ways, like RADAR and LiDAR.
"What sets this research apart from other systems is that, firstly, it requires data from just a single input - the microphone or the antenna - to create three-dimensional images. Secondly, we believe that the algorithm we've developed could turn any device with either of those pieces of kit into an echolocation device.
"That means that the cost of this kind of 3D imaging could be greatly reduced, opening up many new applications. A building could be kept secure without traditional cameras by picking up the signals reflected from an intruder, for example. The same could be done to keep track of the movements of vulnerable patients in nursing homes. We could even see the system being used to track the rise and fall of a patient's chest in healthcare settings, alerting staff to changes in their breathing." The paper outlines how the researchers used the speakers and microphone from a laptop to generate and receive acoustic waves in the kilohertz range. They also used an antenna to do the same with radio-frequency sounds in the gigahertz range.
In each case, they collected data about the reflections of the waves taken in a room as a single person moved around. At the same time, they also recorded data about the room using a special camera which uses a process known as time-of-flight to measure the dimensions of the room and provide a low-resolution image.
By combining the echo data from the microphone and the image data from the time-of-flight camera, the team 'trained' their machine-learning algorithm over hundreds of repetitions to associate specific delays in the echoes with images. Eventually, the algorithm had learned enough to generate its own highly accurate images of the room and its contents from the echo data alone, giving it the 'bat-like' ability to sense its surroundings.
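In outline, the training setup pairs each echo recording with the corresponding time-of-flight image and fits a model to map one to the other. The sketch below uses random placeholder arrays and a plain linear least-squares fit purely to show that pairing; the actual work uses real recordings and a neural network whose architecture the article does not specify.

```python
# Placeholder data and a plain linear fit, purely to show the pairing of echo
# recordings with time-of-flight depth images; the real system uses actual
# recordings and a neural network whose architecture the article does not give.
import numpy as np

rng = np.random.default_rng(2)
n_examples, n_samples, H, W = 500, 1024, 8, 8
echoes = rng.normal(size=(n_examples, n_samples))      # microphone traces
depth_maps = rng.normal(size=(n_examples, H * W))      # time-of-flight labels

weights, *_ = np.linalg.lstsq(echoes, depth_maps, rcond=None)

def echoes_to_image(echo_trace):
    return (echo_trace @ weights).reshape(H, W)

print(echoes_to_image(echoes[0]).shape)                # an (8, 8) depth estimate
```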
The research builds on previous work by the team, which trained a neural-network algorithm to build three-dimensional images by measuring the reflections from flashes of light using a single-pixel detector.
Dr Turpin added: "We've now been able to demonstrate the effectiveness of this algorithmic machine-learning technique using light and sound, which is very exciting. It's clear that there is a lot of potential here for sensing the world in new ways, and we're keen to continue exploring the possibilities of generating more high-resolution images in the future."
The team's paper, titled '3D imaging from multipath temporal echoes', is published in Physical Review Letters. The research was supported by funding from the Royal Academy of Engineering and the Engineering and Physical Sciences Research Council (EPSRC), part of UK Research and Innovation (UKRI).
First published: 30 April 2021 | The means for equipping everyday objects with a bat-like sense of their surroundings has been developed by scientists at the U.K.'s University of Glasgow. They used a machine learning algorithm to produce images via reflected echoes, by measuring the time it takes for sound blips emitted by speakers or radio waves pulsed from antennas to bounce within an indoor environment and return to the sensor. The program can infer the shape, size, and layout of a room, and identify the presence of objects or people, with the results displayed as a video feed that renders the echo data into three-dimensional vision. The researchers said the technique could be used to generate images through potentially any devices outfitted with microphones and speakers, or radio antennae.
|||
477 | 3D-Printed Home in Dutch City Expands Housing Options | EINDHOVEN, Netherlands (AP) - Elize Lutz and Harrie Dekkers' new home is a 94-square meter (1,011-square foot) two-bedroom bungalow that resembles a boulder with windows.
The curving lines of its gray concrete walls look and feel natural. But they are actually at the cutting edge of housing construction technology in the Netherlands and around the world: They were 3D printed at a nearby factory.
"It's special. It's a form that's unusual, and when I saw it for the first time, it reminds me of something you knew when you were young," Lutz said Friday. She will rent the house with Dekkers for six months for 800 euros ($970) per month.
The house, for now, looks strange with its layers of printed concrete clearly visible - even a few places where printing problems caused imperfections.
In the future, as the Netherlands seeks ways to tackle a chronic housing shortage, such construction could become commonplace. The country needs to build hundreds of thousands of new homes this decade to accommodate a growing population.
Theo Salet, a professor at Eindhoven's Technical University, is working in 3D printing, also known as additive manufacturing, to find ways of making concrete construction more sustainable. He figures houses can be 3D printed in the future using 30% less material.
"Why? The answer is sustainability," he said. "And the first way to do that is by cutting down the amount of concrete that we use."
He explained that 3D printing can deposit the material only where you need it.
A new generation of start-ups in the United States also are among the companies looking to bring 3D-printed homes into the mainstream.
Fittingly, Lutz and Dekkers' new house is in Eindhoven, a city that markets itself as a center of innovation.
The home is made up of 24 concrete elements "printed" by a machine that squirts layer upon layer of concrete at a factory in the city before being trucked to a neighborhood of other new homes. There, the finishing touches - including a roof - were added.
The layers give a ribbed texture to its walls, inside and out. The house complies with all Dutch construction codes and the printing process took just 120 hours.
The home is the product of Project Milestone, a collaboration between city hall, Eindhoven's Technical University and construction companies. The partners are planning to build a total of five houses, honing their techniques with each one. Future homes will have more than one floor.
The process uses concrete with the consistency of toothpaste, Salet said. That ensures it is strong enough to build with but also wet enough so the layers stick to one another. The printed elements are hollow and filled with insulation material.
The hope is that such homes, which are quicker to build than traditional houses and use less concrete, could become a factor in solving housing shortages in a nation that is one-third of the size of Florida with a population of 17.4 million people and rising.
In a report this month, the Netherlands Environmental Assessment Agency said that education and innovation can spur the construction industry in the long term. But other measures are needed to tackle Dutch housing shortages, including reforming zoning.
Salet believes 3D printing can help by digitizing the design and production of houses.
"If you ask me, 'will we build 1 million of the houses, as you see here?' The answer is no. But will we use this technology as part of other houses combined with wooden structures? Combined with other materials? Then my answer is yes," he said.
Dekkers has already noticed great acoustics in the home even when he's just playing music on his phone. And when he's not listening to music, he enjoys the silence that the insulated walls provide.
"It gives a very good feel, because if you're inside you don't hear anything from outside," he said.
___
Read all AP stories about climate change and sustainability at https://apnews.com/hub/climate. | Elize Lutz and Harrie Dekkers' new three-dimensionally (3D) printed home in Eindhoven, the Netherlands, could become commonplace as the country deals with a housing shortage. The residence is composed of 24 concrete elements that were printed layer by layer through additive manufacturing at a factory in Eindhoven, before being transported to a neighborhood. The home is part of Project Milestone, an initiative of Eindhoven's city hall, its Technical University, and construction companies. The university's Theo Salet said the process uses concrete with the consistency of toothpaste, to ensure sufficient strength for construction and sufficient wetness so the layers stick together.
|||
479 | Advanced Core Processing: Robot Technology Appealing for Apple Growers | 28 April 2021
Monash University engineers have developed a robot capable of performing autonomous apple harvesting.
New autonomous robotic technology developed by Monash University researchers has the potential to become the 'apple of my eye' for Australia's food industry as it deals with labour shortages and an increased demand for fresh produce.
A research team, led by Dr Chao Chen in Monash University's Department of Mechanical and Aerospace Engineering , has developed an autonomous harvesting robot capable of identifying, picking and depositing apples in as little as seven seconds at full capacity.
Following extensive trials in February and March at Fankhauser Apples in Drouin, Victoria, the robot was able to harvest more than 85 per cent of all reachable apples in the canopy as identified by its vision system.
Of all apples harvested, less than 6 per cent were damaged due to stem removal. Apples without stems can still be sold, but don't necessarily fit the cosmetic guidelines of some retailers.
With the robot limited to half its maximum speed, the median harvest rate was 12.6 seconds per apple. In streamlined pick-and-drop scenarios, the cycle time was reduced to roughly nine seconds.
By using the robot's capacity speed, individual apple harvesting time can drop to as little as seven seconds.
"Our developed vision system can not only positively identify apples in a tree within its range in an outdoors orchard environment by means of deep learning, but also identify and categorise obstacles, such as leaves and branches, to calculate the optimum trajectory for apple extraction," Dr Chen, the Director of Laboratory of Motion Generation and Analysis (LMGA), said.
Automatic harvesting robots, while a promising technology for the agricultural industry, pose challenges for fruit and vegetable growers.
Robotic harvesting of fruit and vegetables requires the vision system to detect and localise the produce. To increase the success rate and reduce damage to the produce during harvesting, information on the fruit's shape and on the stem-branch joint location and orientation is also required.
To counter this problem, researchers created a state-of-the-art motion-planning algorithm featuring fast generation of collision-free trajectories to minimise processing and travel times between apples, reducing harvesting time and maximising the number of apples that can be harvested at a single location.
The robot's vision system can identify more than 90 per cent of all visible apples seen within the camera's view from a distance of approximately 1.2m. The system can work in all types of lighting and weather conditions, including intense sunlight and rain, and takes less than 200 milliseconds to process the image of an apple.
"We also implemented a 'path-planning' algorithm that was able to generate collision-free trajectories for more than 95 per cent of all reachable apples in the canopy. It takes just eight seconds to plan the entire trajectory for the robot to grasp and deposit an apple," Dr Chen said.
"The robot grasps apples with a specially designed, pneumatically powered, soft gripper with four independently actuated fingers and suction system that grasps and extracts apples efficiently, while minimising damage to the fruit and the tree itself.
"In addition, the suction system draws the apple from the canopy into the gripper, reducing the need for the gripper to reach into the canopy and potentially damaging its surroundings. The gripper can extract more than 85 per cent of all apples from the canopy that were planned for harvesting."
Dr Chen said the system can help address the current labour shortage in Australia's agricultural sector, a future food crisis as the population grows, and decreasing arable land. He said technological advances could also help increase the productivity of fruit growing and attract younger people to work on farms with this technology.
The research team comprises Dr Chao Chen, Dr Wesley Au, Mr Xing Wang, Mr Hugh Zhou, and Dr Hanwen Kang in LMGA at Monash. The project is funded by the Australian Research Council Industrial Transformation Research Hubs scheme (ARC Nanocomm Hub - IH150100006).
MEDIA ENQUIRIES Leigh Dawson T: +61 455 368 260 E: media@monash.edu | Researchers at Australia's Monash University have developed autonomous robotic technology capable of harvesting apples. At full capacity, the robot can identify, pick, and deposit an apple in as little as seven seconds, with a median rate of 12.6 seconds per apple. Trials showed the robot could harvest over 85% of reachable apples within a canopy as identified by its vision system, with less than 6% of the harvest damaged by stem removal. Monash's Chao Chen said the vision system uses deep learning to identify apples within its range, and to identify and categorize obstacles like branches. Said Chen, "We also implemented a 'path-planning' algorithm that was able to generate collision-free trajectories for more than 95% of all reachable apples in the canopy."
|||
481 | As More Retailers Turn to Tech, Macy's Store Employees Score Victory in Challenging Self-Checkout in Mobile App | A union for employees at department store chain Macy's has won a victory against automation, as an independent arbitrator ruled the retailer breached its bargaining agreement and must exclude departments that have commission-based pay from self-checkout. The United Food and Commercial Workers union sued on behalf of about 600 of its members at Macy's stores in Boston and Rhode Island on the grounds the mobile scan and pay self-checkout application prevented plaintiffs from earning commissions on sales. The suit highlights the tension between technology and retail employees as electronic commerce steals business from brick-and-mortar stores, a situation compounded by the pandemic. Santiago Gallino at the Wharton School of the University of Pennsylvania said retailers are facing pressure "to reinvent themselves and rethink the role of employees," or risk being driven out of business.
|||
483 | Drones Provide Bird's Eye View of How Turbulent Tidal Flows Affect Seabird Foraging Habits | The foraging behaviour of seabirds is dramatically affected by turbulence caused by natural coastal features and manmade ocean structures, new research has shown.
In a first-of-its-kind study, scientists from the UK and Germany used drones to provide a synchronised bird's eye view of what seabirds see and how their behaviour changes depending on the movement of tidal flows beneath them.
The research focused on the wake of a tidal turbine structure set in a tidal channel - Strangford Lough in Northern Ireland - that has previously been identified as a foraging hotspot for terns.
Through a combination of drone tracking and advanced statistical modelling, it showed that terns were more likely to actively forage over vortices (swirling patches of water). However, eruptions of upwelling water (boils) ahead of the terns' flight path prompted them to stay on course as they approached.
Writing in the Royal Society's flagship biological research journal, Proceedings of the Royal Society B, the researchers say their findings offer a never-before-seen insight into how tidal turbulence can impact foraging behaviours.
They also say it potentially gives them the ability to predict how species might respond to environmental changes such as the increased future development of ocean renewable energy sites and climate change.
The study was conducted by researchers from Queen's University Belfast and the University of Plymouth (UK), and Bielefeld University (Germany).
Dr Lilian Lieber, Bryden Centre Research Fellow at Queen's and the study's lead investigator, said:
Co-investigator Professor Roland Langrock, Professor in Statistics and Data Analysis at Bielefeld, said: | Researchers from Queen's University Belfast and the University of Plymouth in the U.K. and Bielefeld University in Germany used drones and machine learning to determine how a seabird's behavior changes based on the movement of tidal flows. Their research was focused on a popular foraging spot for terns in the wake of a tidal turbine structure set in a tidal channel in Northern Ireland. Using drone tracking in conjunction with advanced statistical modeling, they found that terns actively foraged over swirling patches of water, but would stay on course when eruptions of upwelling water were detected ahead of their flight path.
|||
484 | Microsoft Finds Memory Allocation Holes in Range of IoT, Industrial Technology | The security research group for Azure Defender for IoT, dubbed Section 52, has found a batch of bad memory allocation operations in code used in Internet of Things and operational technology (OT) such as industrial control systems that could lead to malicious code execution.
Given the trendy vulnerability name of BadAlloc, the vulnerabilities stem from improper validation of input, which leads to heap overflows and can ultimately result in code execution.
"All of these vulnerabilities stem from the usage of vulnerable memory functions such as malloc, calloc, realloc, memalign, valloc, pvalloc, and more," the research team wrote in a blog post .
The use of these functions becomes problematic when external input that can cause an integer overflow or wraparound is passed as values to the functions.
"The concept is as follows: When sending this value, the returned outcome is a freshly allocated memory buffer," the team said.
"While the size of the allocated memory remains small due to the wraparound, the payload associated with the memory allocation exceeds the actual allocated buffer, resulting in a heap overflow. This heap overflow enables an attacker to execute malicious code on the target device."
Microsoft said it worked with the US Department of Homeland Security to alert the impacted vendors and patch the vulnerabilities.
The list of affected products in the advisory includes devices from Google Cloud, Arm, Amazon, Red Hat, Texas Instruments, and Samsung Tizen. CVSS v3 scores range from 3.2 in the case of Tizen to 9.8 for Red Hat newlib prior to version 4.
As with most vulnerabilities, Microsoft's primary piece of advice is to patch the affected products. But because industrial equipment can be hard to update, Redmond suggests disconnecting devices from the internet if possible or putting them behind a VPN with 2FA authentication, having a form of network security and monitoring in place to detect behavioural indicators of compromise, and using network segmentation to protect critical assets.
"Network segmentation is important for zero trust because it limits the attacker's ability to move laterally and compromise your crown jewel assets, after the initial intrusion," the team wrote.
"In particular, IoT devices and OT networks should be isolated from corporate IT networks using firewalls." | The security research unit for Microsoft's new Azure Defender for IoT product discovered a number of poor memory allocation operations in code used in Internet of Things (IoT) and operational technology (OT), like industrial control systems, that could fuel malicious code execution. Dubbed BadAlloc, the exploits are associated with improperly validating input, which leads to heap overflows. The team, called Section 52, said the use of these functions becomes problematic when passed external input that can trigger an integer overflow or wraparound as values to the functions. Microsoft said it alerted the affected vendors (including Google Cloud, ARM, Amazon, Red Hat, Texas Instruments, and Samsung Tizen) and patched the vulnerabilities in cooperation with the U.S. Department of Homeland Security. The team recommended the isolation of IoT devices and OT networks from corporate information technology networks using firewalls. | [] | [] | [] | scitechnews | None | None | None | None | The security research unit for Microsoft's new Azure Defender for IoT product discovered a number of poor memory allocation operations in code used in Internet of Things (IoT) and operational technology (OT), like industrial control systems, that could fuel malicious code execution. Dubbed BadAlloc, the exploits are associated with improperly validating input, which leads to heap overflows. The team, called Section 52, said the use of these functions becomes problematic when passed external input that can trigger an integer overflow or wraparound as values to the functions. Microsoft said it alerted the affected vendors (including Google Cloud, ARM, Amazon, Red Hat, Texas Instruments, and Samsung Tizen) and patched the vulnerabilities in cooperation with the U.S. Department of Homeland Security. The team recommended the isolation of IoT devices and OT networks from corporate information technology networks using firewalls.
|||
486 | ACM Chuck Thacker Breakthrough Award Goes to Innovator Who Transformed Web Applications | NEW YORK, April 28, 2021 - ACM, the Association for Computing Machinery, today announced that Michael Franz of the University of California, Irvine is the recipient of the ACM Charles P. "Chuck" Thacker Breakthrough in Computing Award. Franz is recognized for the development of just-in-time compilation techniques that enable fast and feature-rich web services on the internet. Every day, millions of people around the world use online applications such as Gmail and Facebook. These web applications would not have been possible without the groundbreaking compilation technique Franz developed in the mid 1990s. | ACM has named Michael Franz at the University of California, Irvine, recipient of the Charles P. "Chuck" Thacker Breakthrough in Computing Award, for developing just-in-time (JIT) dynamic compilation methods that facilitate rapid, feature-rich Web services online. Franz invented a new compilation technique on which he based a JIT compiler for JavaScript, then worked with Mozilla to incorporate it into the Firefox browser. Franz later devised "trace tree" program-loop optimization and a compiler that operated in various settings, upgrading the JIT compiler's performance on JavaScript five- to 10-fold. ACM President Gabriele Kotsis said, "Franz displayed foresight in working with Mozilla to implement his ideas on their browser and in making his technology open source so that it could be continually refined and adapted by developers worldwide."
|||
487 | Can a Gaze Measurement App Help Spot Potential Signs of Autism in Toddlers? | Autism spectrum disorder (ASD) is a complex developmental condition that involves challenges in social interaction, nonverbal communication, speech, and repetitive behaviors.
In the United States alone, about 1 in 54 children is diagnosed with ASD. Boys are four times more likely to be diagnosed with autism than girls. Early diagnosis is crucial for treatment and therapies.
Researchers at Duke University in Durham, North Carolina, USA, used computational methods based on computer vision analysis and a smartphone or tablet to help detect early signs of ASD.
The study, published in JAMA Network, detected distinctive eye-gaze patterns using a mobile device application on a smartphone or tablet. The method can help distinguish toddlers with ASD from typically developing toddlers.
The mobile app, which allows the child to watch short videos, can track gaze, detecting ASD with 90 percent accuracy.
Autism spectrum disorder (ASD) is a developmental disability that causes problems with communication, social interaction, and behavior. Although autism can be diagnosed at any age, the signs and symptoms usually appear in the first two years of a child's life.
People with ASD may communicate, behave, interact, and learn in different ways. Some children with ASD may need assistance in their daily lives; others need less. Also, the learning and thinking abilities of those with the condition may range from gifted to severely challenged.
The common signs and symptoms include making little or inconsistent eye contact, tending not to look at or listen to people, rarely sharing enjoyment of objects or activities by pointing, being unable or slow to respond to someone calling their name, facial expressions that do not match what is being said, and problems understanding another person's actions.
Those with ASD may also have restrictive and repetitive behaviors, such as repeating specific behaviors or phrases, having narrowly focused interests, getting upset at slight changes in a routine, and being sensitive to stimuli such as light, noise, or temperature.
Toddlers with ASD also tend to prefer looking at objects rather than people. This is common among children as young as 17 months who are later diagnosed with autism.
The study highlights the need for technology-based detection tools to spot ASD early. This way, children will be given the appropriate treatment and support more promptly.
The researchers conducted a prospective study in primary care clinics between December 2018 and March 2020. They compared toddlers with and without ASD.
More than 1,500 caregivers of toddlers were invited to participate during a well-child visit. Overall, 993 toddlers completed the study measures, including children between the ages of 16 and 38 months.
First, the children were screened using the Modified Checklist for Autism in Toddlers during routine care.
The team used computer vision analysis that quantified eye-gaze patterns elicited by the app to detect toddlers with probable ASD. The results were compared between toddlers with ASD and those with typical development.
The app was not designed to diagnose autism but to help healthcare practitioners determine whether a child should be referred to a pediatric specialist for a formal diagnostic evaluation.
The team found distinctive eye-gaze patterns in toddlers with ASD, characterized by reduced gaze toward social stimuli and social moments during the videos. Each child took a test lasting less than 10 minutes, in which three videos were shown - an adult playing with a toy or two adults conversing, and two control videos showing a puppy or floating bubbles.
The phone or tablet's camera recorded the child's gaze as they watched the clips. From there, the researchers referred 79 children to specialists for an autism evaluation. Of these, 40 were later diagnosed with ASD.
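A minimal Python sketch of the kind of gaze summary such an analysis could produce is shown below, assuming gaze has already been estimated as on-screen (x, y) points and that the social and non-social regions of the frame are known; the function, screen split, and sample points are illustrative assumptions rather than the study's actual method.

```python
# Minimal sketch of one possible gaze statistic: the fraction of a toddler's
# gaze samples that fall on the "social" side of the screen (the person)
# versus the "non-social" side (the toy). Gaze points and the left/right
# screen split are illustrative assumptions, not the study's method.
def social_gaze_fraction(gaze_points, screen_width, social_on_left=True):
    """gaze_points: list of (x, y) gaze estimates in screen pixels."""
    if not gaze_points:
        return 0.0
    midline = screen_width / 2
    on_social = sum(
        1 for x, _ in gaze_points
        if (x < midline) == social_on_left
    )
    return on_social / len(gaze_points)

samples = [(200, 310), (240, 305), (900, 420), (880, 400), (210, 300)]
print(f"social-gaze fraction: {social_gaze_fraction(samples, screen_width=1280):.2f}")  # 0.60
```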
The toddlers diagnosed with ASD spent more time looking at the videos with toys than at the person playing with them. Further, the team observed that the children with ASD did not follow the flow of conversation with their gaze as closely as the non-autistic children did. | Potential signs of autism spectrum disorder (ASD) can be identified in toddlers using computational gaze-measurement methods developed by Duke University researchers. The technique can detect eye-gaze patterns characteristic of ASD with 90% accuracy, using a mobile app on a smartphone or tablet. The children viewed videos in the app while the phone or tablet's camera recorded their gaze. The Duke team then used computer-vision analysis to measure the eye-gaze patterns elicited by the app to detect the likelihood of ASD in those toddlers. The researchers said, "These novel results may have the potential for developing scalable autism screening tools, exportable to natural settings, and enabling data sets amenable to machine learning."
|||
488 | Girl Scout Cookies Take Flight in Virginia Drone Deliveries | Wing, a subsidiary of Google's corporate parent Alphabet, is adding Girl Scout cookies to its commercial drone delivery tests in Christiansburg, VA. Since 2019, the company has used drones to deliver drugstore orders, FedEx packages, and food to residents of the suburb. Wing reached out to local Girl Scout troops, which have had a hard time selling cookies during the pandemic. Wing's autonomous drones, which made their first deliveries in Christiansburg in 2019, feature two forward propellers on their wings, 12 smaller vertical propellers, and a tether to drop the package as the drone hovers over the recipient's front lawn.
|||
489 | 'Brain-Like Device' Mimics Human Learning in Major Computing Breakthrough | Scientists have developed a device modelled on the human brain that can learn by association in the same way as Pavlov's dog.
In the famous experiment, Russian physiologist Ivan Pavlov conditioned a dog to associate a bell with food. In order to replicate this way of learning, researchers from Northwestern University in the US and the University of Hong Kong developed so-called "synaptic transistors" capable of simultaneously processing and storing information in the same way as a brain.
Instead of a bell and food, the researchers conditioned the circuit to associate light with pressure by pulsing an LED lightbulb and then immediately applying pressure with a finger press.
The organic electrochemical material allowed the device to build memories and after five training cycles, the circuit associated light with pressure in such a way that light alone was able to trigger a signal for the pressure.
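The conditioning effect can be sketched with a toy Python simulation in which paired presentations strengthen the light input until it alone crosses a firing threshold; the learning rule and numbers are illustrative assumptions, not a model of the device's physics.

```python
# Toy simulation of the associative conditioning described above: pressure
# alone always triggers the output (like food for Pavlov's dog), light starts
# out ineffective, and repeated paired presentations strengthen the
# light-to-output weight until light alone crosses the firing threshold.
THRESHOLD = 1.0
LEARNING_RATE = 0.25

w_pressure, w_light = 1.2, 0.0      # initial connection strengths

def responds(light, pressure):
    return light * w_light + pressure * w_pressure >= THRESHOLD

print("before training, light alone:", responds(light=1, pressure=0))  # False

for cycle in range(5):              # five paired light+pressure presentations
    if responds(light=1, pressure=1):
        w_light += LEARNING_RATE    # Hebbian-style strengthening of the active input

print("after training, light alone:", responds(light=1, pressure=0))   # True (w_light = 1.25)
```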
This novel way of learning over time overcomes many of the limitations of traditional computing.
"Although the modern computer is outstanding, the human brain can easily outperform it in some complex and unstructured tasks, such as pattern recognition, motor control and multisensory integration," said Jonathan Rivnay, an assistant professor of biomedical engineering at Northwestern University.
"This is thanks to the plasticity of the synapse, which is the basic building block of the brain's computational power. These synapses enable the brain to work in a highly parallel, fault tolerant and energy-efficient manner... mimicking key functions of a biological synapse."
Conventional computers store data and process data using separate systems, meaning data-intensive tasks consume a lot of energy.
Xudong Ji, a postdoctoral researcher in Dr Rivnay's group, explained that their goal was to "bring those two separate functions together," in order to "save space and save on energy costs."
In recent years, researchers have used memory resistors - known as "memristors" - to combine the processing and memory units in the same way as the human brain. However, these are also energy costly and are less biocompatible - meaning they cannot be used in biological applications.
"While our application is a proof-of-concept, our proposed circuit can be further extended to include more sensory inputs and integrated with other electronics to enable on-site, low-power computation," Dr Rivnay said.
"Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics."
The research was published today in the journal Nature Communications. | A device modeled after the human brain by researchers at Northwestern University and the University of Hong Kong can learn by association, via synaptic transistors that simultaneously process and store information. The researchers programmed the circuit to associate light with pressure by pulsing a light-emitting diode (LED) lightbulb and then applying pressure with a finger press. The organic electrochemical material enabled the device to construct memories, and after five training cycles it associated light with pressure and could detect pressure from light alone. Northwestern's Jonathan Rivnay said, "Because it is compatible with biological environments, the device can directly interface with living tissue, which is critical for next-generation bioelectronics."
|||
490 | Technology Could Turn You Into a Tiffany | "Many craftsmen thought integrating technology would erase their work," Hélène Poulit-Duquesne, Boucheron's chief executive, said in a video interview. "I've always fought to explain to them the hands of human beings are so important, they will always be at the center, but technology should be at the service of those hands."
Ms. Poulit-Duquesne is one of the few executives in the high jewelry arena who is comfortable speaking openly about technology. She and Claire Choisne, Boucheron's creative director, employed A.I. for the first time last summer to create the house's Contemplation collection, which included a quivering cloudlike necklace of 7,000 titanium wires set with more than 4,000 diamonds and 2,000 glass beads.
"Claire wanted the woman to wear a cloud so we worked with mathematicians to create an algorithm," Ms. Poulit-Duquesne said.
A kindred spirit lurks in Nick Koss, founder of Volund Jewelry, a private jeweler based in Vancouver, British Columbia. After several years of research and experimentation with A.I., he relied on an algorithm to help create his 2020 Night collection, a homage to the reliquaries and the candlelit roadside shrines that captured his imagination on a trip through southern Italy in the early 2000s.
Mr. Koss said he believed that computers had a singular ability to re-create patterns found in nature, and that humans played a critical role in how the machines generate those patterns. He offered an example of a collection inspired by leaves. | A number of companies are developing digital tools for the design of personalized jewelry, which can be fabricated via sophisticated three-dimensional (3D) metal printing. Joining these emergent businesses are conventional jewelry houses like France's Boucheron, which used artificial intelligence (AI) for the first time last summer to create its Contemplation collection, crafting an algorithm with mathematicians to generate a cloudlike diamond-and-bead necklace. Nick Koss at Canada-based Volund Jewelry also uses AI to create jewelry through generative design. Another AI-enabled process considered to have even more potential for the jewelry industry is parametric design, which simplifies and expedites making patterns and exploring alternatives. High-end jeweler Tiffany & Company uses parametric design and 3D printing to create prototypes at its Jewelry Design and Innovation Workshop in New York City.
"Many craftsmen thought integrating technology would erase their work," Hélène Poulit-Duquesne , Boucheron's chief executive, said in a video interview. "I've always fought to explain to them the hands of human beings are so important, they will always be at the center, but technology should be at the service of those hands."
Ms. Poulit-Duquesne is one of the few executives in the high jewelry arena who is comfortable speaking openly about technology. She and Claire Choisne, Boucheron's creative director, employed A.I. for the first time last summer to create the house's Contemplation collection, which included a quivering cloudlike necklace of 7,000 titanium wires set with more than 4,000 diamonds and 2,000 glass beads.
"Claire wanted the woman to wear a cloud so we worked with mathematicians to create an algorithm," Ms. Poulit-Duquesne said.
A kindred spirit lurks in Nick Koss, founder of Volund Jewelry, a private jeweler based in Vancouver, British Columbia. After several years of research and experimentation with A.I., he relied on an algorithm to help create his 2020 Night collection, a homage to the reliquaries and the candlelit roadside shrines that captured his imagination on a trip through southern Italy in the early 2000s.
Mr. Koss said he believed that computers had a singular ability to re-create patterns found in nature, and that humans played a critical role in how the machines generate those patterns. He offered an example of a collection inspired by leaves. |
|||
491 | U.S. Army Technique Enhances Robot Battlefield Operations | ADELPHI, Md. -- Army researchers developed a technique that allows robots to remain resilient when faced with intermittent communication losses on the battlefield.
The technique, called α-shape, provides an efficient method for resolving goal conflicts between multiple robots that may want to visit the same area during missions including unmanned search and rescue, robotic reconnaissance, perimeter surveillance and robotic detection of physical phenomena, such as radiation and underwater concentration of lifeforms.
Researchers from the U.S. Army Combat Capabilities Development Command (DEVCOM) Army Research Laboratory and the University of Nebraska Omaha Computer Science Department collaborated on the work, which led to a paper featured in the journal Robotics and Autonomous Systems on ScienceDirect.
"Robots working in teams need a method to ensure that they do not duplicate effort," said Army researcher Dr. Bradley Woosley. "When all robots can communicate, there are many techniques that can be used; however, in environments where the robots cannot communicate widely due to needing to stay covert, clutter leading to radios not working for long distance communications, or to preserve battery or bandwidth for more important messages, the robots will need a method to coordinate with as few communications as possible."
This coordination is accomplished through sharing their next task with the team, and select team members will remember this information, allowing other robots to ask if any other robot will perform that task without needing to communicate directly with the robot that selected the task, Woosley said.
Which robot remembers a given task is determined by the topology of the wireless communications network and the geometric layout of the robots, he said. Each robot is assigned a bounding shape representing the area of the environment for which it caches goal locations, which enables a quick search in the communications network to find the robot that would know whether any goals have been requested in that area.
"This research enables coordination between robots when each robot is empowered to make decisions about its next tasks without requiring it to check in with the rest of the team first," Woosley said. "Allowing the robots to make progress towards what the robots feel is the most important next step while handling any conflicts between two robots as they are discovered when robots move in and out of communications range with each other."
The technique uses a geometric approximation called α-shape to group together regions of the environment that a robot can communicate with other robots using multi-hop communications over a communications network. This technique is integrated with an intelligent search algorithm over the robots' communication tree to find conflicts and store them even if the robot that selects the goal disconnects from the communication tree before reaching the goal.
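The article describes the approach only at a high level, so the following Python sketch is an illustration rather than the authors' implementation: it assumes simple axis-aligned bounding boxes in place of the α-shapes, and all class, function and robot names (RegionCache, try_claim, "jackal_1") are hypothetical. It shows the core caching idea: each robot answers conflict queries for goals falling inside its assigned region, so a newcomer can detect a clash without ever reaching the robot that originally claimed the goal.

```python
from dataclasses import dataclass, field

@dataclass
class RegionCache:
    """One robot's cache of goal claims for its assigned region of the map.
    Axis-aligned boxes stand in for the alpha-shape regions in the paper."""
    xmin: float
    ymin: float
    xmax: float
    ymax: float
    claims: dict = field(default_factory=dict)  # goal (x, y) -> id of claiming robot

    def covers(self, goal):
        x, y = goal
        return self.xmin <= x <= self.xmax and self.ymin <= y <= self.ymax

    def try_claim(self, goal, robot_id, min_sep=1.0):
        """Register a goal unless another robot already claimed one nearby."""
        for (cx, cy), owner in self.claims.items():
            if owner != robot_id and (cx - goal[0]) ** 2 + (cy - goal[1]) ** 2 < min_sep ** 2:
                return owner          # conflict: a teammate is already headed there
        self.claims[goal] = robot_id  # no conflict: remember the claim for later queries
        return None

# Each robot queries whichever teammate's region covers the goal (via multi-hop messages).
caches = [RegionCache(0, 0, 10, 10), RegionCache(10, 0, 20, 10)]

def claim_goal(goal, robot_id):
    for cache in caches:  # stand-in for the search over the communication tree
        if cache.covers(goal):
            return cache.try_claim(goal, robot_id)
    return None

print(claim_goal((12.0, 4.0), "jackal_1"))  # None -> claim succeeds
print(claim_goal((12.4, 4.2), "jackal_2"))  # "jackal_1" -> conflict detected
```

In the actual system the regions come from α-shapes built over multi-hop connectivity, and the query is routed through a search over the robots' communication tree rather than a flat loop over a shared list.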
The team reported experimental results on simulated robots within multiple environments and physical Clearpath Jackal Robots.
"To our knowledge, this work is one of the first attempts to integrate geometry-based prediction of potential conflict regions to improve multi-robot information collection under communication constraints, while gracefully handling intermittent connectivity loss between robots," Woosley said.
According to Woosley, other available approaches can only get input from the robots that are inside the same communications network, which is less efficient when robots can move in and out of communications range with the team.
In contrast, he said, this research provides a mechanism for the robot to quickly find potential conflicts between its goal and the goal another robot selected, but is not in the communications network anymore.
Woosley said that he is optimistic this research will pave the way for other communications-limited cooperation methods that will be helpful when robots are deployed on missions that require covert communications.
He and the research team, including DEVCOM ARL researchers Dr. John Rogers and Jeffrey Twigg and Naval Research Laboratory research scientist Dr. Prithviraj Dasgupta, will continue to work on collaboration between robotic team members through limited communications, especially in directions of predicting the other robot's actions in order to avoid conflicting tasks to begin with.
DEVCOM Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command . As the Army's corporate research laboratory, ARL is operationalizing science to achieve transformational overmatch. Through collaboration across the command's core technical competencies, DEVCOM leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more successful at winning the nation's wars and come home safely. DEVCOM is a major subordinate command of the Army Futures Command . | Researchers at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory (ARL) and the University of Nebraska, Omaha, have developed a technique that preserves the resilience of robots working in teams amid patchy battlefield communications. ARL's Bradley Woosley said the robots coordinate by sharing their next task with the team; specific members of the team will recall this information, enabling others to ask if any other robot will execute that task without engaging with the robot that selected the task. A geometric approximation (a-shape) clusters regions of the environment in which one robot can communicate with others using multi-hop communications. Said Woosley, "To our knowledge, this work is one of the first attempts to integrate geometry-based prediction of potential conflict regions to improve multi-robot information collection under communication constraints, while gracefully handling intermittent connectivity loss between robots." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory (ARL) and the University of Nebraska, Omaha, have developed a technique that preserves the resilience of robots working in teams amid patchy battlefield communications. ARL's Bradley Woosley said the robots coordinate by sharing their next task with the team; specific members of the team will recall this information, enabling others to ask if any other robot will execute that task without engaging with the robot that selected the task. A geometric approximation (a-shape) clusters regions of the environment in which one robot can communicate with others using multi-hop communications. Said Woosley, "To our knowledge, this work is one of the first attempts to integrate geometry-based prediction of potential conflict regions to improve multi-robot information collection under communication constraints, while gracefully handling intermittent connectivity loss between robots."
|||
492 | AI, Captain! First Autonomous Ship Prepares for Maiden Voyage | The autonomous ship Mayflower 400 is preparing for a transatlantic journey from England to Plymouth, MA. The solar-powered, radar- and camera-equipped trimaran has an onboard artificial intelligence which learned to identify maritime obstacles by analyzing thousands of photos. IBM's Rosie Lickorish said the unmanned craft provided an advantage in the "unforgiving environment" of the open ocean; "Having a ship without people on board allows scientists to expand the area they can observe." The Mayflower 400 will analyze marine pollution and plastic in the water, as well as tracking aquatic mammals; the data it collects will be released for free. | [] | [] | [] | scitechnews | None | None | None | None | The autonomous ship Mayflower 400 is preparing for a transatlantic journey from England to Plymouth, MA. The solar-powered, radar- and camera-equipped trimaran has an onboard artificial intelligence which learned to identify maritime obstacles by analyzing thousands of photos. IBM's Rosie Lickorish said the unmanned craft provided an advantage in the "unforgiving environment" of the open ocean; "Having a ship without people on board allows scientists to expand the area they can observe." The Mayflower 400 will analyze marine pollution and plastic in the water, as well as tracking aquatic mammals; the data it collects will be released for free.
|
||||
493 | With Smartphones, Anyone Can Help Track Brood X - and Maybe Unlock Cicada Mysteries | Tens of thousands of people have signed up to participate in a crowdsourced project to track Brood X, the largest emergence of cicadas in the U.S., using a smartphone application developed by researchers at Mount St. Joseph University in Cincinnati, OH. The Cicada Safari app lets users of any expertise level participate, building a community as they share photos and videos, marking Brood X sightings with green pins on maps. The University of Maryland's Mike Raupp said the app has already yielded significant findings, which could inform his own investigations into how heat emissions from cities could doom cicadas that surface prematurely. Cicada Safari creator Gene Kritsky said he hopes "boots on the ground" cicada tracking will offer scientists "a really good baseline map" to evaluate anthropogenic effects on cicadas. | [] | [] | [] | scitechnews | None | None | None | None | Tens of thousands of people have signed up to participate in a crowdsourced project to track Brood X, the largest emergence of cicadas in the U.S., using a smartphone application developed by researchers at Mount St. Joseph University in Cincinnati, OH. The Cicada Safari app lets users of any expertise level participate, building a community as they share photos and videos, marking Brood X sightings with green pins on maps. The University of Maryland's Mike Raupp said the app has already yielded significant findings, which could inform his own investigations into how heat emissions from cities could doom cicadas that surface prematurely. Cicada Safari creator Gene Kritsky said he hopes "boots on the ground" cicada tracking will offer scientists "a really good baseline map" to evaluate anthropogenic effects on cicadas.
|
||||
494 | Category Killers of the Internet Are Significantly Reducing Online Diversity | The number of distinctive sources and voices on the internet is in long-term decline, according to new research.
A paper entitled 'Evolution of diversity and dominance of companies in online activity', published in the scientific journal PLOS One, has shown that between 60 and 70 per cent of all attention on key social media platforms in different market segments is focused on just 10 popular domains.
In stark contrast, new competitors are struggling to survive against such dominant players, with just 3% of online domains born in 2015 still active today, compared to nearly 40% of those formed back in 2006.
The researchers say if these were organic lives, the infant mortality rate would be considered a crisis.
Paul X. McCarthy, a co-author of the paper and Adjunct Professor from UNSW Sydney's Engineering faculty, said: "The internet started as a source of innovation, new ideas and inspiration, a technology that opens up the playing field. But it is now becoming a medium that actually stifles competition, promotes monopolies and the dominance of a small number of players.
"The results indicate the end state of many new industries is likely to be more concentrated than in the analogue economy with a winner-takes-most outcome for many.
"It means that there's not as much natural competition in established domains, for example in retail with Amazon or in music with Spotify. This may lead to non-competitive behaviour by market leaders such as price discrimination and the use of market power to control suppliers and stifle potential future rivals.
"That is why some see a new role for market regulators to step in here."
The research team, which also included Dr Marian-Andrei Rizoiu from UTS, Sina Eghbal from ANU, and Dr Daniel Falster and Xian Gong from UNSW, performed a large-scale longitudinal study to quantify the distribution of attention given in the online environment to competing organisations.
They tallied the number of external links to an organisation's main domain posted on two large social media channels, namely Reddit and Twitter, as a proxy for online attention towards an organisation.
More than 6 billion user comments were analysed from Reddit, dating back to 2006, while the Twitter data comprised 11.8 billion posts published since 2011. In total, a massive trove of 5.6 terabytes of data was analysed from over a decade of global activity online - a data set more than four times the size of the original data from the Hubble Space Telescope.
And the results showed that over the long run, a small number of competitive giants are likely to dominate each functional market segment, such as search, retail and social media.
For example, the top 10 most popular domains mentioned on Reddit received around 35% of all links in 2006, which grew to 60% in 2019. On Twitter, the top 10 domains grew from 50% in 2011 to 70% in 2019.
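The concentration metric behind those figures is simple to compute on any collection of posts. The short Python sketch below (the helper name and sample links are illustrative, not taken from the study) extracts the domain from each outbound link, tallies links per domain, and reports the share captured by the most-linked domains.

```python
from collections import Counter
from urllib.parse import urlparse

def top_domain_share(urls, k=10):
    """Share of all outbound links that point at the k most-linked domains."""
    # removeprefix requires Python 3.9+; it merges "www.example.com" with "example.com".
    domains = Counter(urlparse(u).netloc.lower().removeprefix("www.") for u in urls)
    top_k_total = sum(count for _, count in domains.most_common(k))
    return top_k_total / sum(domains.values())

# Illustrative links only -- the study tallied ~6 billion Reddit comments and ~11.8 billion tweets.
sample_links = [
    "https://www.youtube.com/watch?v=abc",
    "https://twitter.com/some_user/status/1",
    "https://smallblog.example.org/post/42",
    "https://www.youtube.com/watch?v=def",
]
print(f"Top-2 domain share: {top_domain_share(sample_links, k=2):.0%}")  # 75% for this toy list
```

Applied to the full Reddit and Twitter corpora described above, this kind of tally yields the top-10 shares quoted in the paragraph before this sketch.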
The paper notes: "If there are too few competitors or a small number of players become too dominant within any economic sector, there emerges the potential for artificially high prices and constraints to supply. Even more importantly, in the long-term, this gives rise to constraints on innovation."
Co-author Dr. Marian-Andrei Rizoiu, who leads the Behavioural Data Science lab at the UTS Data Science Institute, said: "This research indicates the environment for new players online is becoming increasingly difficult. In the way pine trees sterilise the ground under their branches to prevent other trees competing with them, once they are established dominant players online crowd out competitors in their functional niche.
"Diversity of sources is in long term decline and although the worldwide web continues to grow, the attention is focused on fewer and few organisations.
"Attention online is a new form of currency. This new research illustrates that using a variety of analytic techniques and large-scale, global, longitudinal data, we can reveal patterns hitherto unseen on a global scale."
The research team's review of data going back 15 years allowed them to observe some specific dynamics that often play out when a new online market or function starts to flourish.
At first, there is a great burst of diverse businesses that appear and attempt to serve the market. After that comes a development phase when the number of competitors peaks and then starts to dwindle.
Finally, in the maturity phase, there is a significant reduction in diversity as users converge around a single dominant organisation - such as Google in terms of web searching and Amazon for online shopping.
Co-author Dr Daniel Falster, who leads his own lab in Evolutionary Ecology in the School of Biological, Earth and Environmental Sciences at UNSW, said: "As with the natural environment, we can see the birth, growth and survival patterns of companies online when we look at them on a large enough scale.
"Now we can see consistent patterns of competitive dynamics that are becoming clearer and clearer. The loss of diversity is a cause for concern."
Despite the findings, Prof. McCarthy says there are still opportunities for new global businesses to emerge and thrive.
"Businesses looking to grow and expand in the online world should be focused on innovation and collaboration - two salient features of many of the biggest winners in the digital economy," he said.
"This new research - inspired in part by ecology - illustrates while 'species diversity' is in decline, there are also many new functional openings continually emerging in the jungle with global potential." | A multi-institutional team of Australian researchers has found the diversity of online players is declining, although the Internet continues to grow along with functional and geographic opportunities. The large-scale longitudinal study measured the distribution of attention given in the online environment to competing organizations going back 15 years, and showed that a small number of competitors are likely to gain control of each functional market segment. Between 60% and 70% of all attention on key social media platforms in different market segments is concentrated on just 10 popular domains, according to the study. The University of New South Wales Sydney's Paul X. McCarthy said the Internet "is now becoming a medium that actually stifles competition, promotes monopolies, and the dominance of a small number of players." | [] | [] | [] | scitechnews | None | None | None | None | A multi-institutional team of Australian researchers has found the diversity of online players is declining, although the Internet continues to grow along with functional and geographic opportunities. The large-scale longitudinal study measured the distribution of attention given in the online environment to competing organizations going back 15 years, and showed that a small number of competitors are likely to gain control of each functional market segment. Between 60% and 70% of all attention on key social media platforms in different market segments is concentrated on just 10 popular domains, according to the study. The University of New South Wales Sydney's Paul X. McCarthy said the Internet "is now becoming a medium that actually stifles competition, promotes monopolies, and the dominance of a small number of players."
|||
495 | Is Amazon Recommending Books on QAnon, White Nationalism? | A study by London-based think tank the Institute for Strategic Dialogue (ISD) found that Amazon's recommendation algorithms directs people to books about conspiracy theories and extremism, including those by authors banned by other online platforms. People browsing a book about a conspiracy on Amazon are likely to receive suggestions of more books on that subject, as well as books about other conspiracy theories. ISD's Chloe Colliver said features like auto-complete in the search bar and content suggestions for the author or similar authors also can steer users toward extremist content. Said Colliver, "Given how vital the recommendation systems are to Amazon's sales functions, it is safe to assume that recommendations of dangerous extremist or conspiracy content could be extremely pervasive." | [] | [] | [] | scitechnews | None | None | None | None | A study by London-based think tank the Institute for Strategic Dialogue (ISD) found that Amazon's recommendation algorithms directs people to books about conspiracy theories and extremism, including those by authors banned by other online platforms. People browsing a book about a conspiracy on Amazon are likely to receive suggestions of more books on that subject, as well as books about other conspiracy theories. ISD's Chloe Colliver said features like auto-complete in the search bar and content suggestions for the author or similar authors also can steer users toward extremist content. Said Colliver, "Given how vital the recommendation systems are to Amazon's sales functions, it is safe to assume that recommendations of dangerous extremist or conspiracy content could be extremely pervasive."
|
||||
496 | Spending on Cloud Computing Hits US$42 Billion Worldwide: Tracker | Market tracker Canalys said global cloud computing spending reached a record-high US$41.8 billion in the first quarter of 2021 as businesses used the Internet heavily to weather the pandemic. Worldwide spending on cloud infrastructure services rose nearly US$11 billion year over year, according to Canalys. The company's Blake Murray said, "Organizations depended on digital services and being online to maintain operations and adapt to the unfolding situation," although most businesses have not yet made the "digital transformation." Canalys ranked Amazon Web Services as the world's top cloud service provider, accounting for 32% of the market, followed by Microsoft's Azure platform with 19% and Google Cloud with 7%. Going forward, Murray expects continued migration to the cloud amid improving economic confidence and the revitalization of postponed projects. | [] | [] | [] | scitechnews | None | None | None | None | Market tracker Canalys said global cloud computing spending reached a record-high US$41.8 billion in the first quarter of 2021 as businesses used the Internet heavily to weather the pandemic. Worldwide spending on cloud infrastructure services rose nearly US$11 billion year over year, according to Canalys. The company's Blake Murray said, "Organizations depended on digital services and being online to maintain operations and adapt to the unfolding situation," although most businesses have not yet made the "digital transformation." Canalys ranked Amazon Web Services as the world's top cloud service provider, accounting for 32% of the market, followed by Microsoft's Azure platform with 19% and Google Cloud with 7%. Going forward, Murray expects continued migration to the cloud amid improving economic confidence and the revitalization of postponed projects.
|
||||
497 | Computer Model Helps Bring the Sun Into the Laboratory | Every day, the sun ejects large amounts of a hot particle soup known as plasma toward Earth, where it can disrupt telecommunications satellites and damage electrical grids. Now, scientists at the U.S. Department of Energy's (DOE) Princeton Plasma Physics Laboratory (PPPL) and Princeton University's Department of Astrophysical Sciences have made a discovery that could lead to better predictions of this space weather and help safeguard sensitive infrastructure.
The discovery comes from a new computer model that predicts the behavior of the plasma in the region above the surface of the sun known as the solar corona. The model was originally inspired by a similar model that describes the behavior of the plasma that fuels fusion reactions in doughnut-shaped fusion facilities known as tokamaks .
Fusion , the power that drives the sun and stars, combines light elements in the form of plasma - the hot, charged state of matter composed of free electrons and atomic nuclei - that generates massive amounts of energy. Scientists are seeking to replicate fusion on Earth for a virtually inexhaustible supply of power to generate electricity.
The Princeton scientists made their findings while studying roped-together magnetic fields that loop into and out of the sun. Under certain conditions, the loops can cause hot particles to erupt from the sun's surface in enormous burps known as coronal mass ejections. Those particles can eventually hit the magnetic field surrounding Earth and cause auroras, as well as interfere with electrical and communications systems.
"We need to understand the causes of these eruptions to predict space weather," said Andrew Alt, a graduate student in the Princeton Program in Plasma Physics at PPPL and lead author of the paper reporting the results in the Astrophysical Journal .
The model relies on a new mathematical method that incorporates a novel insight from Alt and collaborators into what causes the instability. The scientists found that a type of jiggling known as the "torus instability" could cause roped magnetic fields to untether from the sun's surface, triggering a flood of plasma.
The torus instability loosens some of the forces keeping the ropes tied down. Once those forces weaken, another force causes the ropes to expand and lift further off the solar surface. "Our model's ability to accurately predict the behavior of magnetic ropes indicates that our method could ultimately be used to improve space weather prediction," Alt said.
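The news story stays qualitative, so for readers who want the conventional mathematical statement: the onset of the torus instability is usually expressed through the decay index of the external "strapping" field that holds the rope down. The form below is standard textbook background with a commonly quoted threshold; it is not a formula taken from the Princeton paper, whose contribution is a more careful, laboratory-calibrated treatment, and the exact threshold depends on the geometry of the rope and its surroundings.

```latex
% Standard torus-instability onset condition (textbook background, not the paper's
% new model): the rope's outward hoop force wins once the external "strapping"
% field B_ext falls off steeply enough with height R above the solar surface.
n(R) \;=\; -\frac{\mathrm{d}\,\ln B_{\mathrm{ext}}(R)}{\mathrm{d}\,\ln R}
\;\gtrsim\; n_{\mathrm{crit}} \approx 1.5
```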
The scientists have also developed a way to more accurately translate laboratory results to conditions on the sun. Past models have relied on assumptions that made calculations easier but did not always simulate plasma precisely. The new technique relies only on raw data. "The assumptions built into previous models remove important physical effects that we want to consider," Alt said. "Without these assumptions, we can make more accurate predictions."
To conduct their research, the scientists created magnetic flux ropes inside PPPL's Magnetic Reconnection Experiment (MRX), a barrel-shaped machine designed to study the coming together and explosive breaking apart of the magnetic field lines in plasma. But flux ropes created in the lab behave differently than ropes on the sun, since, for example, the flux ropes in the lab have to be contained by a metal vessel.
The researchers made alterations to their mathematical tools to account for these differences, ensuring that results from MRX could be translated to the sun. "There are conditions on the sun that we cannot mimic in the laboratory," said PPPL physicist Hantao Ji, a Princeton University professor who advises Alt and contributed to the research. "So, we adjust our equations to account for the absence or presence of certain physical properties. We have to make sure our research compares apples to apples so our results will be accurate."
Discovery of the jiggling plasma behavior could also lead to more efficient generation of fusion-powered electricity. Magnetic reconnection and related plasma behavior occur in tokamaks as well as on the sun, so any insight into these processes could help scientists control them in the future.
Support for this research came from the DOE, the National Aeronautics and Space Administration, and the German Research Foundation. Research partners include Princeton University, Sandia National Laboratories, the University of Potsdam, the Harvard-Smithsonian Center for Astrophysics, and the Bulgarian Academy of Sciences.
PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas - ultra-hot, charged gases - and to developing practical solutions for the creation of fusion energy. The Laboratory is managed by the University for the U.S. Department of Energy's Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States and is working to address some of the most pressing challenges of our time. For more information, visit https://energy.gov/science | Scientists at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) and Princeton University have developed a computer model that predicts the behavior of plasma above the surface of the sun. In doing so, the team discovered that "torus instability," or jiggling plasma behavior, causes roped-together magnetic fields to escape the sun's surface as coronal mass ejections, which can disrupt electrical and communications systems when they strike the Earth's magnetic field. PPPL's Andrew Alt said, "Our model's ability to accurately predict the behavior of magnetic ropes indicates that our method could ultimately be used to improve space weather prediction." | [] | [] | [] | scitechnews | None | None | None | None | Scientists at the U.S. Department of Energy's Princeton Plasma Physics Laboratory (PPPL) and Princeton University have developed a computer model that predicts the behavior of plasma above the surface of the sun. In doing so, the team discovered that "torus instability," or jiggling plasma behavior, causes roped-together magnetic fields to escape the sun's surface as coronal mass ejections, which can disrupt electrical and communications systems when they strike the Earth's magnetic field. PPPL's Andrew Alt said, "Our model's ability to accurately predict the behavior of magnetic ropes indicates that our method could ultimately be used to improve space weather prediction."
|||
498 | Algorithm for the Diagnostics of Dementia | A top-level international research team including researchers from the University of Eastern Finland has developed a new algorithm for the diagnostics of dementia. The algorithm is based on blood and cerebrospinal fluid biomarker measurements. These biomarkers can be used to aid setting of an exact diagnosis already in the early phases of dementia. | A new biomarker-based algorithm for the diagnosis of dementia has been developed by an international team of researchers. The algorithm, developed by scientists at Finland's Universities of Eastern Finland and Oulu and their international colleagues, will help to differentiate patients with different forms of dementia, and can help select patients for clinical drug trials. The program also enables diagnosis of Alzheimer's disease based on blood sample analysis, while cerebrospinal fluid-based analyses might be required to diagnose rarer types of dementia. The researchers think the algorithm's results will expedite the accessibility of biomarker measurements in the near future. | [] | [] | [] | scitechnews | None | None | None | None | A new biomarker-based algorithm for the diagnosis of dementia has been developed by an international team of researchers. The algorithm, developed by scientists at Finland's Universities of Eastern Finland and Oulu and their international colleagues, will help to differentiate patients with different forms of dementia, and can help select patients for clinical drug trials. The program also enables diagnosis of Alzheimer's disease based on blood sample analysis, while cerebrospinal fluid-based analyses might be required to diagnose rarer types of dementia. The researchers think the algorithm's results will expedite the accessibility of biomarker measurements in the near future.
|||
500 | How Data Is Changing the Way Offices Are Run | Commercial real estate developers are tapping property technology (proptech) to use data to improve office buildings, in order to reduce costs and streamline operations. Property managers are applying proptech to enhance control systems like heating, lighting, air quality, and even the flow of workers via data collection and artificial intelligence (AI). Said Charlie Kuntz at real estate investment firm Hines, "There will be a dramatic increase in the information we have about how people use our buildings, and sensors will be more common." One example is Swedish developer Skanska's planned office tower in Houston, which will have environmental controls and smart building features, including a network of sensors tracking movement, occupancy, and efficiency for AI analysis. Data collection raises issues of privacy and cybersecurity, and Hines and other property developers say they strictly use anonymized data and do not track individuals. | [] | [] | [] | scitechnews | None | None | None | None | Commercial real estate developers are tapping property technology (proptech) to use data to improve office buildings, in order to reduce costs and streamline operations. Property managers are applying proptech to enhance control systems like heating, lighting, air quality, and even the flow of workers via data collection and artificial intelligence (AI). Said Charlie Kuntz at real estate investment firm Hines, "There will be a dramatic increase in the information we have about how people use our buildings, and sensors will be more common." One example is Swedish developer Skanska's planned office tower in Houston, which will have environmental controls and smart building features, including a network of sensors tracking movement, occupancy, and efficiency for AI analysis. Data collection raises issues of privacy and cybersecurity, and Hines and other property developers say they strictly use anonymized data and do not track individuals.
|
||||
503 | Cancer Algorithm Flags Genetic Weaknesses in Tumors | The MMRDetect clinical algorithm makes it possible to identify tumours that have 'mismatch repair deficiencies' and then improve the personalisation of cancer therapies to exploit those weaknesses.
The study, led by researchers from the University of Cambridge's Department of Medical Genetics and MRC Cancer Unit, identified nine DNA repair genes that are critical guardians of the human genome from damage caused by oxygen and water, as well as errors during cell division.
The team used a genome editing technology, CRISPR-Cas9, to 'knock out' (make inoperative) these repair genes in healthy human stem cells. In doing so, they observed strong mutation patterns, or mutational signatures, which offer useful markers of those genes and the repair pathways they are involved in, failing.
The study, funded by Cancer Research UK and published today in the journal Nature Cancer , suggests that these signatures of repair pathway defects are on-going and could therefore serve as crucial biomarkers in precision medicine.
Senior author, Dr Serena Nik-Zainal, a Cancer Research UK Advanced Clinician Scientist at Cambridge University's MRC Cancer Unit, said: "When we knock out different DNA repair genes, we find a kind of fingerprint of that gene or pathway being erased. We can then use those fingerprints to figure out which repair pathways have stopped working in each person's tumour, and what treatments should be used specifically to treat their cancer."
The new computer algorithm, MMRDetect, uses the mutational signatures that were identified in the knockout experiments, and was trained on whole genome sequencing data from NHS cancer patients in the 100,000 Genomes Project, to identify tumours with 'mismatch repair deficiency', which makes them sensitive to checkpoint inhibitors, a class of immunotherapies. Having developed the algorithm on tumours in this study, the team now plans to roll it out across all cancers picked up by Genomics England.
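The MMRDetect implementation itself is described in the Nature Cancer paper rather than in this article, so the Python fragment below is only a schematic of the general idea: summarise a tumour's substitutions as a normalised profile and flag the sample if that profile closely matches a mismatch-repair-deficiency signature. The function names, the toy signature weights and the threshold are all illustrative assumptions, not values from the study.

```python
import math
from collections import Counter

# The 12 possible single-base substitutions, in a fixed order.
CHANNELS = [(ref, alt) for ref in "ACGT" for alt in "ACGT" if ref != alt]

def substitution_profile(mutations):
    """Normalised substitution counts; `mutations` is a list like [('C', 'T'), ...]."""
    counts = Counter(mutations)
    total = sum(counts.values()) or 1
    return [counts.get(channel, 0) / total for channel in CHANNELS]

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
    return dot / norm if norm else 0.0

def looks_mmr_deficient(mutations, mmr_signature, threshold=0.8):
    """Toy stand-in for MMRDetect: flag a tumour whose substitution profile
    closely resembles a mismatch-repair-deficiency signature. The published
    tool is trained on whole-genome data and is far richer than this sketch."""
    return cosine(substitution_profile(mutations), mmr_signature) >= threshold

# Entirely made-up signature weights and tumour calls, for illustration only.
toy_signature = [0.0] * len(CHANNELS)
toy_signature[CHANNELS.index(("C", "T"))] = 0.5
toy_signature[CHANNELS.index(("G", "A"))] = 0.5

tumour_calls = [("C", "T")] * 40 + [("G", "A")] * 35 + [("A", "G")] * 5
print(looks_mmr_deficient(tumour_calls, toy_signature))  # True for this toy profile
```

The flag-if-similar logic above is only meant to convey why an ongoing repair defect leaves a detectable fingerprint; the published tool works from whole genome sequencing calls, as described above.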
The breakthrough demonstrates the value of researchers working with the 100,000 Genomes Project, a pioneering national whole genome sequencing endeavour.
Parker Moss, Chief Commercial and Partnerships Officer at Genomics England, said: "We are very excited to see such impactful research being supported by the 100,000 Genomes Project, and that our data has helped to develop a clinically significant tool. This is a fantastic example of how the sheer size and richness of the 100,000 Genomes Project data can contribute to important research.
"The outcomes from Dr Nik-Zainal and her team's work demonstrate perfectly how quickly and effectively we can return value to patient care by bringing together a community of leading researchers through Genomics England's platform."
The study offers important insights into where DNA damage comes from in our bodies. Water and oxygen are essential for life but are also the biggest sources of internal DNA damage in humans.
Dr Nik-Zainal said: "Because we are alive, we need oxygen and water, yet they cause a constant drip of DNA damage in our cells. Our DNA repair pathways are normally working to limit that damage, which is why, when we knocked out some of the crucial genes, we immediately saw lots of mutations."
"Some DNA repair genes are like precision tools, able to fix very specific kinds of DNA damage. Human DNA has four building blocks: adenine, cytosine, guanine and thymine. As an example, the OGG1 gene has a very specific role of fixing guanine when it is damaged by oxygen. When we knocked out OGG1, this crucial defence was severely weakened resulting in a very specific pattern of guanines that had mutated into thymines throughout the genome."
To be most effective, the MMRDetect algorithm could be used as soon as a patient has received a cancer diagnosis and their tumour characterised by genome sequencing. The team believes that this tool could help to transform the way a wide range of cancers are treated and save many lives.
Michelle Mitchell, Chief Executive of Cancer Research UK, said: "Determining the right treatments for patients will give them the best chance of surviving their disease. Immunotherapy in particular can be powerful, but it doesn't work on everyone, so figuring out how to tell when it will work is vital to making it the most useful treatment it can be.
"Our ability to map and mine useful information from the genomes of tumours has improved massively over the past decade. Thanks to initiatives like the 100,000 Genomes Project, we are beginning to see how we might use this information to benefit patients. We look forward to seeing how this research develops, and its possibilities in helping future patients."
This study was funded by Cancer Research UK (CRUK), Wellcome, Medical Research Council, Dr Josef Steiner Foundation and supported by the Cambridge NIHR Biomedical Research Campus.
Reference
Xueqing Zou et al., ' A systematic CRISPR screen defines mutational mechanisms underpinning signatures caused by replication errors and endogenous DNA damage ', Nature Cancer (26 April 2021). DOI: 10.1038/s43018-021-00200-0. | The MMRDetect clinical algorithm developed by researchers at the U.K.'s University of Cambridge can flag tumors with mismatch repair (MMR) deficiencies, and then enhance personalized therapies to exploit those genetic weaknesses. The team used CRISPR-Cas9 gene-editing technology to render repair genes inoperative in healthy human stem cells, and noticed the failure of strong mutational signatures. The implication is that these signatures of repair pathway defects are continuous, and could function as critical biomarkers in personalized medicine. MMRDetect employs these signatures, and was trained on whole genome sequencing data from U.K. National Health Service cancer patients in the 100,000 Genomes Project, to detect tumors with MMR deficiency. The algorithm could maximize effectiveness if it is applied as soon as a patient has received a cancer diagnosis and their tumor has been characterized by genome sequencing. | [] | [] | [] | scitechnews | None | None | None | None | The MMRDetect clinical algorithm developed by researchers at the U.K.'s University of Cambridge can flag tumors with mismatch repair (MMR) deficiencies, and then enhance personalized therapies to exploit those genetic weaknesses. The team used CRISPR-Cas9 gene-editing technology to render repair genes inoperative in healthy human stem cells, and noticed the failure of strong mutational signatures. The implication is that these signatures of repair pathway defects are continuous, and could function as critical biomarkers in personalized medicine. MMRDetect employs these signatures, and was trained on whole genome sequencing data from U.K. National Health Service cancer patients in the 100,000 Genomes Project, to detect tumors with MMR deficiency. The algorithm could maximize effectiveness if it is applied as soon as a patient has received a cancer diagnosis and their tumor has been characterized by genome sequencing.
The MMRDetect clinical algorithm makes it possible to identify tumours that have 'mismatch repair deficiencies' and then improve the personalisation of cancer therapies to exploit those weaknesses.
The study, led by researchers from the University of Cambridge's Department of Medical Genetics and MRC Cancer Unit, identified nine DNA repair genes that are critical guardians of the human genome from damage caused by oxygen and water, as well as errors during cell division.
The team used a genome editing technology, CRISPR-Cas9, to 'knock out' (make inoperative) these repair genes in healthy human stem cells. In doing so, they observed strong mutation patterns, or mutational signatures, which offer useful markers that those genes, and the repair pathways they are involved in, have failed.
The study, funded by Cancer Research UK and published today in the journal Nature Cancer, suggests that these signatures of repair pathway defects are on-going and could therefore serve as crucial biomarkers in precision medicine.
Senior author, Dr Serena Nik-Zainal, a Cancer Research UK Advanced Clinician Scientist at Cambridge University's MRC Cancer Unit, said: "When we knock out different DNA repair genes, we find a kind of fingerprint of that gene or pathway being erased. We can then use those fingerprints to figure out which repair pathways have stopped working in each person's tumour, and what treatments should be used specifically to treat their cancer."
The new computer algorithm, MMRDetect, uses the mutational signatures identified in the knockout experiments and was trained on whole genome sequencing data from NHS cancer patients in the 100,000 Genomes Project to identify tumours with 'mismatch repair deficiency', which makes them sensitive to checkpoint inhibitors, a class of immunotherapy. Having developed the algorithm on the tumours in this study, the team now plans to roll it out across all cancers picked up by Genomics England.
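The release does not describe MMRDetect's internal workings, but the broad idea of matching a tumour's mutation profile against experimentally derived knockout signatures can be sketched in a few lines of Python. This is a hypothetical illustration only: the gene names, the random 96-channel spectra and the similarity threshold are invented for the example, and the published tool is considerably more sophisticated.

    # Illustrative sketch only - not the published MMRDetect code. It assumes a tumour's
    # mutation catalogue has been reduced to a 96-channel substitution spectrum and
    # compares it against hypothetical knockout-derived signatures by cosine similarity.
    import numpy as np

    def cosine(a, b):
        """Cosine similarity between two non-negative mutation spectra."""
        a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def flag_repair_defect(tumour_spectrum, knockout_signatures, threshold=0.8):
        """Return the best-matching knockout signature, its score, and whether the
        match clears an (illustrative) reporting threshold."""
        scores = {gene: cosine(tumour_spectrum, sig) for gene, sig in knockout_signatures.items()}
        best = max(scores, key=scores.get)
        return best, scores[best], scores[best] >= threshold

    # Toy example with random spectra standing in for real signatures.
    rng = np.random.default_rng(0)
    signatures = {"MLH1_knockout": rng.random(96), "MSH2_knockout": rng.random(96)}
    tumour = signatures["MSH2_knockout"] + 0.05 * rng.random(96)  # noisy copy of one signature
    print(flag_repair_defect(tumour, signatures))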
The breakthrough demonstrates the value of researchers working with the 100,000 Genomes Project, a pioneering national whole genome sequencing endeavour.
Parker Moss, Chief Commercial and Partnerships Officer at Genomics England, said: "We are very excited to see such impactful research being supported by the 100,000 Genomes Project, and that our data has helped to develop a clinically significant tool. This is a fantastic example of how the sheer size and richness of the 100,000 Genomes Project data can contribute to important research.
"The outcomes from Dr Nik-Zainal and her team's work demonstrate perfectly how quickly and effectively we can return value to patient care by bringing together a community of leading researchers through Genomics England's platform."
The study offers important insights into where DNA damage comes from in our bodies. Water and oxygen are essential for life but are also the biggest sources of internal DNA damage in humans.
Dr Nik-Zainal said: "Because we are alive, we need oxygen and water, yet they cause a constant drip of DNA damage in our cells. Our DNA repair pathways are normally working to limit that damage, which is why, when we knocked out some of the crucial genes, we immediately saw lots of mutations."
"Some DNA repair genes are like precision tools, able to fix very specific kinds of DNA damage. Human DNA has four building blocks: adenine, cytosine, guanine and thymine. As an example, the OGG1 gene has a very specific role of fixing guanine when it is damaged by oxygen. When we knocked out OGG1, this crucial defence was severely weakened resulting in a very specific pattern of guanines that had mutated into thymines throughout the genome."
To be most effective, the MMRDetect algorithm could be used as soon as a patient has received a cancer diagnosis and their tumour characterised by genome sequencing. The team believes that this tool could help to transform the way a wide range of cancers are treated and save many lives.
Michelle Mitchell, Chief Executive of Cancer Research UK, said: "Determining the right treatments for patients will give them the best chance of surviving their disease. Immunotherapy in particular can be powerful, but it doesn't work on everyone, so figuring out how to tell when it will work is vital to making it the most useful treatment it can be.
"Our ability to map and mine useful information from the genomes of tumours has improved massively over the past decade. Thanks to initiatives like the 100,000 Genomes Project, we are beginning to see how we might use this information to benefit patients. We look forward to seeing how this research develops, and its possibilities in helping future patients."
This study was funded by Cancer Research UK (CRUK), Wellcome, Medical Research Council, Dr Josef Steiner Foundation and supported by the Cambridge NIHR Biomedical Research Campus.
Reference
Xueqing Zou et al., 'A systematic CRISPR screen defines mutational mechanisms underpinning signatures caused by replication errors and endogenous DNA damage', Nature Cancer (26 April 2021). DOI: 10.1038/s43018-021-00200-0.
|||
504 | AI Tool Calculates Materials' Stress and Strain Based on Photos | Isaac Newton may have met his match.
For centuries, engineers have relied on physical laws - developed by Newton and others - to understand the stresses and strains on the materials they work with. But solving those equations can be a computational slog, especially for complex materials.
MIT researchers have developed a technique to quickly determine certain properties of a material, like stress and strain, based on an image of the material showing its internal structure. The approach could one day eliminate the need for arduous physics-based calculations, instead relying on computer vision and machine learning to generate estimates in real time.
The researchers say the advance could enable faster design prototyping and material inspections. "It's a brand new approach," says Zhenze Yang, adding that the algorithm "completes the whole process without any domain knowledge of physics."
The research appears today in the journal Science Advances. Yang is the paper's lead author and a PhD student in the Department of Materials Science and Engineering. Co-authors include former MIT postdoc Chi-Hua Yu and Markus Buehler, the McAfee Professor of Engineering and the director of the Laboratory for Atomistic and Molecular Mechanics.
Engineers spend lots of time solving equations. They help reveal a material's internal forces, like stress and strain, which can cause that material to deform or break. Such calculations might suggest how a proposed bridge would hold up amid heavy traffic loads or high winds. Unlike Sir Isaac, engineers today don't need pen and paper for the task. "Many generations of mathematicians and engineers have written down these equations and then figured out how to solve them on computers," says Buehler. "But it's still a tough problem. It's very expensive - it can take days, weeks, or even months to run some simulations. So, we thought: Let's teach an AI to do this problem for you."
The researchers turned to a machine learning technique called a Generative Adversarial Neural Network. They trained the network with thousands of paired images - one depicting a material's internal microstructure subject to mechanical forces, and the other depicting that same material's color-coded stress and strain values. With these examples, the network uses principles of game theory to iteratively figure out the relationships between the geometry of a material and its resulting stresses.
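The article does not give the network's exact architecture or loss terms, but the paired-image setup it describes is in the spirit of conditional image-to-image GAN training. The following Python (PyTorch) sketch shows one such adversarial training step on stand-in tensors; the two tiny networks, the loss weighting and every variable name are placeholders rather than the models used in the paper.

    # Hedged sketch of one training step for a paired image-to-image GAN
    # (microstructure image -> stress/strain field). Architectures are placeholders.
    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):          # placeholder encoder-decoder
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1))
        def forward(self, x):
            return self.net(x)

    class TinyDiscriminator(nn.Module):      # placeholder patch critic on (input, field) pairs
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1))
        def forward(self, micro, field):
            return self.net(torch.cat([micro, field], dim=1))

    G, D = TinyGenerator(), TinyDiscriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    micro = torch.rand(4, 1, 64, 64)         # stand-in microstructure images
    field = torch.rand(4, 1, 64, 64)          # stand-in stress/strain maps

    # Discriminator step: real pairs vs. generated pairs.
    fake = G(micro)
    d_real = D(micro, field)
    d_fake = D(micro, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: fool the discriminator while staying close to the target field.
    d_fake = D(micro, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, field)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()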
"So, from a picture, the computer is able to predict all those forces: the deformations, the stresses, and so forth," Buehler says. "That's really the breakthrough - in the conventional way, you would need to code the equations and ask the computer to solve partial differential equations. We just go picture to picture."
That image-based approach is especially advantageous for complex, composite materials. Forces on a material may operate differently at the atomic scale than at the macroscopic scale. "If you look at an airplane, you might have glue, a metal, and a polymer in between. So, you have all these different faces and different scales that determine the solution," says Buehler. "If you go the hard way - the Newton way - you have to walk a huge detour to get to the answer."
But the researchers' network is adept at dealing with multiple scales. It processes information through a series of "convolutions," which analyze the images at progressively larger scales. "That's why these neural networks are a great fit for describing material properties," says Buehler.
The fully trained network performed well in tests, successfully rendering stress and strain values given a series of close-up images of the microstructure of various soft composite materials. The network was even able to capture "singularities," like cracks developing in a material. In these instances, forces and fields change rapidly across tiny distances. "As a material scientist, you would want to know if the model can recreate those singularities," says Buehler. "And the answer is yes."
The advance could "significantly reduce the iterations needed to design products," according to Suvranu De, a mechanical engineer at Rensselaer Polytechnic Institute who was not involved in the research. "The end-to-end approach proposed in this paper will have a significant impact on a variety of engineering applications - from composites used in the automotive and aircraft industries to natural and engineered biomaterials. It will also have significant applications in the realm of pure scientific inquiry, as force plays a critical role in a surprisingly wide range of applications from micro/nanoelectronics to the migration and differentiation of cells."
In addition to saving engineers time and money, the new technique could give nonexperts access to state-of-the-art materials calculations. Architects or product designers, for example, could test the viability of their ideas before passing the project along to an engineering team. "They can just draw their proposal and find out," says Buehler. "That's a big deal."
Once trained, the network runs almost instantaneously on consumer-grade computer processors. That could enable mechanics and inspectors to diagnose potential problems with machinery simply by taking a picture.
In the new paper, the researchers worked primarily with composite materials that included both soft and brittle components in a variety of random geometrical arrangements. In future work, the team plans to use a wider range of material types. "I really think this method is going to have a huge impact," says Buehler. "Empowering engineers with AI is really what we're trying to do here."
Funding for this research was provided, in part, by the Army Research Office and the Office of Naval Research. | A technique for rapidly assessing material properties like stress and strain, based on an image of the internal structure, has been developed by Massachusetts Institute of Technology (MIT) researchers. The team trained a Generative Adversarial Neural Network with thousands of paired images - respectively depicting a material's internal microstructure subject to mechanical forces, and its color-coded stress and strain values; the network iteratively determined relationships between a material's geometry and its ensuing stresses using principles of game theory. MIT's Markus Buehler said the computer can essentially predict the various forces that act on the material, as opposed to the conventional way, in which "you would need to code the equations and ask the computer to solve partial differential equations. We just go picture to picture." Buehler said the network is well-suited for describing material properties, as it can process data through a series of convolutions, which analyze the images at progressively larger scales. | [] | [] | [] | scitechnews | None | None | None | None | A technique for rapidly assessing material properties like stress and strain, based on an image of the internal structure, has been developed by Massachusetts Institute of Technology (MIT) researchers. The team trained a Generative Adversarial Neural Network with thousands of paired images - respectively depicting a material's internal microstructure subject to mechanical forces, and its color-coded stress and strain values; the network iteratively determined relationships between a material's geometry and its ensuing stresses using principles of game theory. MIT's Markus Buehler said the computer can essentially predict the various forces that act on the material, as opposed to the conventional way, in which "you would need to code the equations and ask the computer to solve partial differential equations. We just go picture to picture." Buehler said the network is well-suited for describing material properties, as it can process data through a series of convolutions, which analyze the images at progressively larger scales.
Isaac Newton may have met his match.
For centuries, engineers have relied on physical laws - developed by Newton and others - to understand the stresses and strains on the materials they work with. But solving those equations can be a computational slog, especially for complex materials.
MIT researchers have developed a technique to quickly determine certain properties of a material, like stress and strain, based on an image of the material showing its internal structure. The approach could one day eliminate the need for arduous physics-based calculations, instead relying on computer vision and machine learning to generate estimates in real time.
The researchers say the advance could enable faster design prototyping and material inspections. "It's a brand new approach," says Zhenze Yang, adding that the algorithm "completes the whole process without any domain knowledge of physics."
The research appears today in the journal Science Advances. Yang is the paper's lead author and a PhD student in the Department of Materials Science and Engineering. Co-authors include former MIT postdoc Chi-Hua Yu and Markus Buehler, the McAfee Professor of Engineering and the director of the Laboratory for Atomistic and Molecular Mechanics.
Engineers spend lots of time solving equations. They help reveal a material's internal forces, like stress and strain, which can cause that material to deform or break. Such calculations might suggest how a proposed bridge would hold up amid heavy traffic loads or high winds. Unlike Sir Isaac, engineers today don't need pen and paper for the task. "Many generations of mathematicians and engineers have written down these equations and then figured out how to solve them on computers," says Buehler. "But it's still a tough problem. It's very expensive - it can take days, weeks, or even months to run some simulations. So, we thought: Let's teach an AI to do this problem for you."
The researchers turned to a machine learning technique called a Generative Adversarial Neural Network. They trained the network with thousands of paired images - one depicting a material's internal microstructure subject to mechanical forces, and the other depicting that same material's color-coded stress and strain values. With these examples, the network uses principles of game theory to iteratively figure out the relationships between the geometry of a material and its resulting stresses.
"So, from a picture, the computer is able to predict all those forces: the deformations, the stresses, and so forth," Buehler says. "That's really the breakthrough - in the conventional way, you would need to code the equations and ask the computer to solve partial differential equations. We just go picture to picture."
That image-based approach is especially advantageous for complex, composite materials. Forces on a material may operate differently at the atomic scale than at the macroscopic scale. "If you look at an airplane, you might have glue, a metal, and a polymer in between. So, you have all these different faces and different scales that determine the solution," says Buehler. "If you go the hard way - the Newton way - you have to walk a huge detour to get to the answer."
But the researchers' network is adept at dealing with multiple scales. It processes information through a series of "convolutions," which analyze the images at progressively larger scales. "That's why these neural networks are a great fit for describing material properties," says Buehler.
The fully trained network performed well in tests, successfully rendering stress and strain values given a series of close-up images of the microstructure of various soft composite materials. The network was even able to capture "singularities," like cracks developing in a material. In these instances, forces and fields change rapidly across tiny distances. "As a material scientist, you would want to know if the model can recreate those singularities," says Buehler. "And the answer is yes."
The advance could "significantly reduce the iterations needed to design products," according to Suvranu De, a mechanical engineer at Rensselaer Polytechnic Institute who was not involved in the research. "The end-to-end approach proposed in this paper will have a significant impact on a variety of engineering applications - from composites used in the automotive and aircraft industries to natural and engineered biomaterials. It will also have significant applications in the realm of pure scientific inquiry, as force plays a critical role in a surprisingly wide range of applications from micro/nanoelectronics to the migration and differentiation of cells."
In addition to saving engineers time and money, the new technique could give nonexperts access to state-of-the-art materials calculations. Architects or product designers, for example, could test the viability of their ideas before passing the project along to an engineering team. "They can just draw their proposal and find out," says Buehler. "That's a big deal."
Once trained, the network runs almost instantaneously on consumer-grade computer processors. That could enable mechanics and inspectors to diagnose potential problems with machinery simply by taking a picture.
In the new paper, the researchers worked primarily with composite materials that included both soft and brittle components in a variety of random geometrical arrangements. In future work, the team plans to use a wider range of material types. "I really think this method is going to have a huge impact," says Buehler. "Empowering engineers with AI is really what we're trying to do here."
Funding for this research was provided, in part, by the Army Research Office and the Office of Naval Research. |
|||
505 | Apple Will Spend $1 Billion to Open 3,000-Employee Campus in North Carolina | Apple announced plans Monday to open a new campus in the Raleigh, North Carolina, area.
Apple will spend over $1 billion on the campus, and it will employ 3,000 people working on technology including software engineering and machine learning.
The campus is a sign of Apple's continued expansion beyond its headquarters in Cupertino, California, where most of its engineering has been based. Apple's $1 billion campus in Austin, Texas, is expected to open next year.
Apple's expansion will be located in North Carolina's Research Triangle area, which gets its name from nearby North Carolina State University, Duke University and the University of North Carolina. Apple CEO Tim Cook and COO Jeff Williams have MBAs from Duke. Apple senior vice president Eddy Cue, who is in charge of the company's online services, graduated from Duke. | Apple plans to open a new campus in the Raleigh, NC, area, with 3,000 employees working on software engineering, machine learning, and other technologies. The company will spend more than $1 billion on the campus, adding to previous plans to open a $1 billion campus in Austin, TX, next year. The NC campus will be located in the Research Triangle area, referring to its proximity to North Carolina State University, Duke University, and the University of North Carolina at Chapel Hill. The move comes as other technology companies, like Oracle, Google, and Amazon, expand outside the San Francisco Bay area to tap a larger pool of talent in areas with lower costs of living. Meanwhile, Apple announced that it would add 20,000 jobs nationwide over the next five years in cities including San Diego, Culver City, Boulder, and Seattle. | [] | [] | [] | scitechnews | None | None | None | None | Apple plans to open a new campus in the Raleigh, NC, area, with 3,000 employees working on software engineering, machine learning, and other technologies. The company will spend more than $1 billion on the campus, adding to previous plans to open a $1 billion campus in Austin, TX, next year. The NC campus will be located in the Research Triangle area, referring to its proximity to North Carolina State University, Duke University, and the University of North Carolina at Chapel Hill. The move comes as other technology companies, like Oracle, Google, and Amazon, expand outside the San Francisco Bay area to tap a larger pool of talent in areas with lower costs of living. Meanwhile, Apple announced that it would add 20,000 jobs nationwide over the next five years in cities including San Diego, Culver City, Boulder, and Seattle.
Apple announced plans Monday to open a new campus in the Raleigh, North Carolina, area.
Apple will spend over $1 billion on the campus, and it will employ 3,000 people working on technology including software engineering and machine learning.
The campus is a sign of Apple's continued expansion beyond its headquarters in Cupertino, California, where most of its engineering has been based. Apple's $1 billion campus in Austin, Texas, is expected to open next year.
Apple's expansion will be located in North Carolina's Research Triangle area, which gets its name from nearby North Carolina State University, Duke University and the University of North Carolina. Apple CEO Tim Cook and COO Jeff Williams have MBAs from Duke. Apple senior vice president Eddy Cue, who is in charge of the company's online services, graduated from Duke. |
|||
507 | SolarWinds, Microsoft Hacks Prompt Focus on Zero-Trust Security | Mr. Sherman said that so-called zero-trust models, which set up internal defenses that constantly verify whether a device, user or program should be able to do what it is asking to, should be more widely adopted by the public and private sectors. This is in contrast to the more reactive approach of traditional cybersecurity defenses, which seek to block hackers from entering a network.
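As a rough illustration of the per-request verification described above - and not any particular vendor's product - a zero-trust policy check can be thought of as a function that evaluates identity, device health and least-privilege rules on every call. Everything in the sketch below, from the attribute names to the policy table, is invented for the example.

    # Minimal illustration of the "never trust, always verify" idea. The attributes and
    # policy rules are made up; real zero-trust deployments combine identity providers,
    # device posture services, continuous monitoring and logging.
    from dataclasses import dataclass

    @dataclass
    class Request:
        user: str
        mfa_passed: bool
        device_compliant: bool     # e.g., patched OS, disk encryption on
        resource: str
        action: str

    ROLE_PERMISSIONS = {            # hypothetical least-privilege policy
        "analyst": {("reports", "read")},
        "admin": {("reports", "read"), ("reports", "write"), ("users", "write")},
    }
    USER_ROLES = {"alice": "analyst", "bob": "admin"}

    def authorize(req: Request) -> bool:
        """Evaluate every request on its own; nothing is trusted for being 'inside'."""
        if not (req.mfa_passed and req.device_compliant):
            return False
        allowed = ROLE_PERMISSIONS.get(USER_ROLES.get(req.user, ""), set())
        return (req.resource, req.action) in allowed

    print(authorize(Request("alice", True, True, "reports", "read")))   # True
    print(authorize(Request("alice", True, True, "reports", "write")))  # False: not permitted
    print(authorize(Request("bob", True, False, "users", "write")))     # False: non-compliant device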
Analysis of the breaches, which exploited vulnerabilities in software from SolarWinds Corp. and Microsoft Corp. , from the Cybersecurity and Infrastructure Security Agency, the National Security Agency and the Federal Bureau of Investigation found that the hackers were often able to gain broad systems access. In many cases the hackers moved through networks unfettered to set up back doors and administrator accounts.
The concept of zero trust has been around since the turn of the century in various forms. However, misconceptions about what it involves have slowed adoption, said Chase Cunningham, chief strategy officer at cybersecurity firm Ericom Software Ltd.
For instance, he said, zero-trust frameworks don't abolish firewalls and other tools that guard the borders of networks, known in the industry as the perimeter. Rather, they add a layer of defense.
"No one who actually understands zero trust says abandon the perimeter," he said. "But the reality of it is that you need to understand your perimeter's probably already compromised, especially when you're in a remote space."
The Pentagon is working toward establishing a zero-trust model, Mr. Sherman said, though Wanda Jones-Heath, chief information security officer in the Office of the Secretary of the Air Force, noted that putting zero trust in place takes time and research. Others warned that cybersecurity vendors often label their products as zero-trust in ways that are misleading.
"Zero trust is not a technology, it's not something you buy, it's a strategy," said Gregory Touhill, director of the computer emergency readiness team at Carnegie Mellon University's Software Engineering Institute and former federal CISO in the Obama administration. "And we've got too many folks in industry that are trying to peddle themselves as zero-trust vendors selling the same stuff that wasn't good enough the first time, really."
At the Billington event, federal CISO Chris DeRusha advocated for the use of zero-trust models, but stressed the importance of information sharing between the public and private sectors in conjunction with enhancing defenses.
The response to the SolarWinds attack, which was discovered by cybersecurity firm FireEye Inc., spurred extraordinary cooperation, he said.
The FBI was eventually able to identify a list of about 100 companies and nine federal agencies that were victims of the attack. Investigators and officials have suspected that Russia was behind the hack since it was discovered, and the U.S. government formally blamed the country on April 15, issuing fresh sanctions over the cyberattack and other matters. Russia denies the allegations.
The joint investigative work between businesses and government officials, Mr. DeRusha said, had a direct effect on the speed of recovery, and should continue.
"What I want to think about is how we bottle lightning here and we move forward in our public-private partnerships," he said.
Write to James Rundle at james.rundle@wsj.com | At an April 22 virtual event hosted by Cyber Education Institute LLC's Billington Cybersecurity unit, U.S. Department of Defense's John Sherman said the public and private sectors should adopt zero-trust models that constantly verify whether a device, user, or program should be able to do what it is asking to do. Ericom Software Ltd.'s Chase Cunningham said, "No one who actually understands zero trust says abandon the perimeter. But the reality of it is that you need to understand your perimeter's probably already compromised, especially when you're in a remote space." Carnegie Mellon University's Gregory Touhill stressed that zero trust is not a technology but a strategy, and "we've got too many folks in industry that are trying to peddle themselves as zero-trust vendors selling the same stuff that wasn't good enough the first time." | [] | [] | [] | scitechnews | None | None | None | None | At an April 22 virtual event hosted by Cyber Education Institute LLC's Billington Cybersecurity unit, U.S. Department of Defense's John Sherman said the public and private sectors should adopt zero-trust models that constantly verify whether a device, user, or program should be able to do what it is asking to do. Ericom Software Ltd.'s Chase Cunningham said, "No one who actually understands zero trust says abandon the perimeter. But the reality of it is that you need to understand your perimeter's probably already compromised, especially when you're in a remote space." Carnegie Mellon University's Gregory Touhill stressed that zero trust is not a technology but a strategy, and "we've got too many folks in industry that are trying to peddle themselves as zero-trust vendors selling the same stuff that wasn't good enough the first time."
Mr. Sherman said that so-called zero-trust models, which set up internal defenses that constantly verify whether a device, user or program should be able to do what it is asking to, should be more widely adopted by the public and private sectors. This is in contrast to the more reactive approach of traditional cybersecurity defenses, which seek to block hackers from entering a network.
Analysis of the breaches, which exploited vulnerabilities in software from SolarWinds Corp. and Microsoft Corp. , from the Cybersecurity and Infrastructure Security Agency, the National Security Agency and the Federal Bureau of Investigation found that the hackers were often able to gain broad systems access. In many cases the hackers moved through networks unfettered to set up back doors and administrator accounts.
The concept of zero trust has been around since the turn of the century in various forms. However, misconceptions about what it involves have slowed adoption, said Chase Cunningham, chief strategy officer at cybersecurity firm Ericom Software Ltd.
For instance, he said, zero-trust frameworks don't abolish firewalls and other tools that guard the borders of networks, known in the industry as the perimeter. Rather, they add a layer of defense.
"No one who actually understands zero trust says abandon the perimeter," he said. "But the reality of it is that you need to understand your perimeter's probably already compromised, especially when you're in a remote space."
The Pentagon is working toward establishing a zero-trust model, Mr. Sherman said, though Wanda Jones-Heath, chief information security officer in the Office of the Secretary of the Air Force, noted that putting zero trust in place takes time and research. Others warned that cybersecurity vendors often label their products as zero-trust in ways that are misleading.
"Zero trust is not a technology, it's not something you buy, it's a strategy," said Gregory Touhill, director of the computer emergency readiness team at Carnegie Mellon University's Software Engineering Institute and former federal CISO in the Obama administration. "And we've got too many folks in industry that are trying to peddle themselves as zero-trust vendors selling the same stuff that wasn't good enough the first time, really."
At the Billington event, federal CISO Chris DeRusha advocated for the use of zero-trust models, but stressed the importance of information sharing between the public and private sectors in conjunction with enhancing defenses.
The response to the SolarWinds attack, which was discovered by cybersecurity firm FireEye Inc., spurred extraordinary cooperation, he said.
The FBI was eventually able to identify a list of about 100 companies and nine federal agencies that were victims of the attack. Investigators and officials have suspected that Russia was behind the hack since it was discovered, and the U.S. government formally blamed the country on April 15, issuing fresh sanctions over the cyberattack and other matters. Russia denies the allegations.
The joint investigative work between businesses and government officials, Mr. DeRusha said, had a direct effect on the speed of recovery, and should continue.
"What I want to think about is how we bottle lightning here and we move forward in our public-private partnerships," he said.
Write to James Rundle at james.rundle@wsj.com |
|||
508 | Researchers Rev Up Innovative ML Strategies to Reclaim Energy, Time, and Money Lost in Traffic Jams | Inching forward in bumper-to-bumper traffic, drivers bemoan the years of their lives
sacrificed in bad commutes. Even with the pandemic dramatically reducing the volume
of traffic, Americans still lost an average of 26 hours last year to road congestion.
In a typical year, U.S. drivers spend closer to 46 hours stuck behind the wheel - which
can add up to thousands of hours in the course of a lifetime.
Traffic jams not only waste time and more than 3.3 billion gallons of fuel each year,
but they also translate into 8.8 billion hours of lost productivity and surges in
polluting emissions. Recent research led by the U.S. Department of Energy's Oak Ridge
National Laboratory (ORNL) and supported by the National Renewable Energy Laboratory
(NREL) reveals the potential to untangle traffic snarls through a combination of next-generation
sensors and controls with high-performance computing, analytics, and machine learning.
These innovative congestion-combatting strategies target reducing vehicle energy consumption
by up to 20% and recovering as much as $100 billion in lost productivity in the next
10 years.
The NREL team created a series of simulations (or a "digital twin") of Chattanooga,
Tennessee, traffic conditions using real-time data collected via a wide range of sensor
devices. The simulations help identify which controls - in the form of traffic signal
programming, alternative routing, speed harmonization, ramp metering, dynamic speed
limits, and more - can deliver the greatest energy efficiency, while optimizing travel
time, highway speed, and safety. The resulting information can be used by urban planners,
technology developers, automakers, and fleet operators to develop systems and equipment
that will streamline commutes and deliveries.
"Chattanooga provided an ideal microcosm of conditions and opportunities to work with
an exceptional roster of municipal and state partners," said NREL's Vehicle Technologies
Laboratory Program Manager John Farrell. "Eventually, the plan is to apply these solutions
to larger metropolitan areas and regional corridors across the country."
Sensors were used to continuously collect data from more than 500 sources including
automated cameras, traffic signals, on-board GPS devices, radar detectors, and weather
stations. This information fed into simulation, modeling, and select machine-learning
activities headed up by NREL researchers for the ORNL-led project.
The NREL team has developed state-of-the-art techniques and tools to identify and
quantify energy lost to traffic congestion and evaluate and validate mitigation strategies.
By pairing data from multiple sources with high-fidelity machine learning, NREL researchers
can estimate energy use and energy loss, determine where and why systems are losing
energy, and model realistic reactions to changes in conditions and controls. This
provides a scientific basis for strategies to improve traffic flow, which the team
can then assess through simulations and validate through field studies.
For the Chattanooga project, the NREL team created a method for estimating and visualizing
real-time and historic traffic volume, speed, and energy consumption, making it possible
to pinpoint areas with the greatest potential for energy savings through application
of congestion relief strategies. The team also developed machine-learning techniques
to help evaluate traffic signal performance while collaborating with ORNL researchers
on other machine learning and artificial intelligence strategies.
NREL's analyses looked beyond data, using machine learning, data from GPS devices
and vehicle sensors, and visual analytics to examine the underlying causes of congestion.
For example, the team discovered that traffic signals along one major corridor had
not been timed to optimize lighter, off-peak midday traffic flow, which resulted in
a high incidence of delays due to excessive stops at red lights.
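A crude proxy for that kind of loss - counting stops and idle seconds in a vehicle's speed trace as it approaches a signal - can be sketched as follows. This is only an illustration: the 1 Hz trace, the stop threshold and the numbers are made up, and NREL's actual energy models are far more detailed.

    # Hedged sketch: counting stops and idle seconds in a 1 Hz GPS speed trace as a
    # rough proxy for congestion losses at a signal. Thresholds are invented.
    def stops_and_idle(speeds_mps, stop_threshold=0.5):
        """Return (number of distinct stops, total seconds spent stopped)."""
        stops, idle_seconds, stopped = 0, 0, False
        for v in speeds_mps:
            if v < stop_threshold:
                idle_seconds += 1
                if not stopped:
                    stops += 1
                    stopped = True
            else:
                stopped = False
        return stops, idle_seconds

    # Example trace: approach, wait at a red light, then clear the intersection.
    trace = [12, 10, 6, 2, 0, 0, 0, 0, 0, 3, 8, 12, 13]
    print(stops_and_idle(trace))   # (1, 5)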
The team revealed that the same corridor could act as a strategic area for reducing
energy consumption, with a simulation model of the corridor indicating that optimized
traffic signal settings had the potential to reduce energy consumption at that location
by as much as 17%. Researchers then recommended to Chattanooga Department of Transportation
engineers specific improvements to four signal controllers along the corridor. Real-world
results showed as much as a 16% decrease in fuel use for vehicles on that stretch
of road - almost meeting the target of 20% reductions - through the deployment of very
limited strategies.
"Optimizing the control of the traffic systems could help save significant amounts
of energy and reduce mobility-related emissions in the real world," said Qichao Wang,
NREL postdoctoral researcher and lead for the traffic control effort in this project.
The real-time data crunching required to produce these complex, large-scale simulations
relied on high-performance computing on the Eagle supercomputer at NREL. This computer
can carry out 8 million-billion calculations per second, allowing researchers to complete
in hours, minutes, or seconds computations that would have previously taken days,
weeks, or even months.
"The intersection of high-performance computing, high-fidelity data, machine learning,
and transportation research can deliver powerful results, far beyond what has been
possible in the past with legacy technology," said Juliette Ugirumurera, NREL computational
scientist and co-lead of the laboratory's project team.
More than 11 billion tons of freight are transported across U.S. highways each year, amounting
to more than $32 billion worth of goods each day. This gives commercial freight carriers
even greater motivation than individual drivers to avoid wasting fuel and money in
traffic congestion. Researchers have recently started working with regional and national
carriers in Georgia and Tennessee to explore how to most effectively tailor the simulations
and controls to trucking fleets.
"Up until now, our city-scale prototype has focused more tightly on passenger vehicles
and individual travel patterns," said Wesley Jones, NREL scientific computing group
manager and co-lead of the laboratory's project team. "As we expand our research to
examine freight operations, we'll also take a broader look at the regional and national
routes they travel."
Eventually, it is anticipated that these technologies for passenger and freight transportation
will be applied across the country, with additional sensors and control equipment
integrated in infrastructure and connected and autonomous vehicles.
Other project partners include the City of Chattanooga, the Tennessee Department of
Transportation, the Georgia Department of Transportation, University of Tennessee,
Vanderbilt University, Wayne State University, TomTom, FedEx, USXpress, Covenant Transport
Services, and Freight Waves.
Learn more about NREL's computational science and transportation and mobility research. | A research team led by the National Renewable Energy Laboratory (NREL) found that next-generation sensors and controls in combination with high-performance computing, analytics, and machine learning could minimize road congestion. The researchers used real-time data gathered by a wide range of sensors to develop a series of simulations of traffic conditions in Chattanooga, TN, and identify which controls can achieve the greatest energy efficiency while optimizing travel time, highway speed, and safety. The researchers also analyzed the underlying causes of congestion using machine learning, data from GPS devices and vehicle sensors, and visual analytics. The data could help urban planners, technology developers, automakers, and fleet operators design systems and equipment to make commutes and deliveries more efficient. NREL's Juliette Ugirumurera said, "The intersection of high-performance computing, high-fidelity data, machine learning, and transportation research can deliver powerful results, far beyond what has been possible in the past with legacy technology." | [] | [] | [] | scitechnews | None | None | None | None | A research team led by the National Renewable Energy Laboratory (NREL) found that next-generation sensors and controls in combination with high-performance computing, analytics, and machine learning could minimize road congestion. The researchers used real-time data gathered by a wide range of sensors to develop a series of simulations of traffic conditions in Chattanooga, TN, and identify which controls can achieve the greatest energy efficiency while optimizing travel time, highway speed, and safety. The researchers also analyzed the underlying causes of congestion using machine learning, data from GPS devices and vehicle sensors, and visual analytics. The data could help urban planners, technology developers, automakers, and fleet operators design systems and equipment to make commutes and deliveries more efficient. NREL's Juliette Ugirumurera said, "The intersection of high-performance computing, high-fidelity data, machine learning, and transportation research can deliver powerful results, far beyond what has been possible in the past with legacy technology."
Inching forward in bumper-to-bumper traffic, drivers bemoan the years of their lives
sacrificed in bad commutes. Even with the pandemic dramatically reducing the volume
of traffic, Americans still lost an average of 26 hours last year to road congestion.
In a typical year, U.S. drivers spend closer to 46 hours stuck behind the wheel - which
can add up to thousands of hours in the course of a lifetime.
Traffic jams not only waste time and more than 3.3 billion gallons of fuel each year,
but they also translate into 8.8 billion hours of lost productivity and surges in
polluting emissions. Recent research led by the U.S. Department of Energy's Oak Ridge
National Laboratory (ORNL) and supported by the National Renewable Energy Laboratory
(NREL) reveals the potential to untangle traffic snarls through a combination of next-generation
sensors and controls with high-performance computing, analytics, and machine learning.
These innovative congestion-combatting strategies target reducing vehicle energy consumption
by up to 20% and recovering as much as $100 billion in lost productivity in the next
10 years.
The NREL team created a series of simulations (or a "digital twin") of Chattanooga,
Tennessee, traffic conditions using real-time data collected via a wide range of sensor
devices. The simulations help identify which controls - in the form of traffic signal
programming, alternative routing, speed harmonization, ramp metering, dynamic speed
limits, and more - can deliver the greatest energy efficiency, while optimizing travel
time, highway speed, and safety. The resulting information can be used by urban planners,
technology developers, automakers, and fleet operators to develop systems and equipment
that will streamline commutes and deliveries.
"Chattanooga provided an ideal microcosm of conditions and opportunities to work with
an exceptional roster of municipal and state partners," said NREL's Vehicle Technologies
Laboratory Program Manager John Farrell. "Eventually, the plan is to apply these solutions
to larger metropolitan areas and regional corridors across the country."
Sensors were used to continuously collect data from more than 500 sources including
automated cameras, traffic signals, on-board GPS devices, radar detectors, and weather
stations. This information fed into simulation, modeling, and select machine-learning
activities headed up by NREL researchers for the ORNL-led project.
The NREL team has developed state-of-the-art techniques and tools to identify and
quantify energy lost to traffic congestion and evaluate and validate mitigation strategies.
By pairing data from multiple sources with high-fidelity machine learning, NREL researchers
can estimate energy use and energy loss, determine where and why systems are losing
energy, and model realistic reactions to changes in conditions and controls. This
provides a scientific basis for strategies to improve traffic flow, which the team
can then assess through simulations and validate through field studies.
For the Chattanooga project, the NREL team created a method for estimating and visualizing
real-time and historic traffic volume, speed, and energy consumption, making it possible
to pinpoint areas with the greatest potential for energy savings through application
of congestion relief strategies. The team also developed machine-learning techniques
to help evaluate traffic signal performance while collaborating with ORNL researchers
on other machine learning and artificial intelligence strategies.
NREL's analyses looked beyond data, using machine learning, data from GPS devices
and vehicle sensors, and visual analytics to examine the underlying causes of congestion.
For example, the team discovered that traffic signals along one major corridor had
not been timed to optimize lighter, off-peak midday traffic flow, which resulted in
a high incidence of delays due to excessive stops at red lights.
The team revealed that the same corridor could act as a strategic area for reducing
energy consumption, with a simulation model of the corridor indicating that optimized
traffic signal settings had the potential to reduce energy consumption at that location
by as much as 17%. Researchers then recommended to Chattanooga Department of Transportation
engineers specific improvements to four signal controllers along the corridor. Real-world
results showed as much as a 16% decrease in fuel use for vehicles on that stretch
of road - almost meeting the target of 20% reductions - through the deployment of very
limited strategies.
"Optimizing the control of the traffic systems could help save significant amounts
of energy and reduce mobility-related emissions in the real world," said Qichao Wang,
NREL postdoctoral researcher and lead for the traffic control effort in this project.
The real-time data crunching required to produce these complex, large-scale simulations
relied on high-performance computing on the Eagle supercomputer at NREL. This computer
can carry out 8 million-billion calculations per second, allowing researchers to complete
in hours, minutes, or seconds computations that would have previously taken days,
weeks, or even months.
"The intersection of high-performance computing, high-fidelity data, machine learning,
and transportation research can deliver powerful results, far beyond what has been
possible in the past with legacy technology," said Juliette Ugirumurera, NREL computational
scientist and co-lead of the laboratory's project team.
More than 11 billion tons of freight are transported across U.S. highways each year, amounting
to more than $32 billion worth of goods each day. This gives commercial freight carriers
even greater motivation than individual drivers to avoid wasting fuel and money in
traffic congestion. Researchers have recently started working with regional and national
carriers in Georgia and Tennessee to explore how to most effectively tailor the simulations
and controls to trucking fleets.
"Up until now, our city-scale prototype has focused more tightly on passenger vehicles
and individual travel patterns," said Wesley Jones, NREL scientific computing group
manager and co-lead of the laboratory's project team. "As we expand our research to
examine freight operations, we'll also take a broader look at the regional and national
routes they travel."
Eventually, it is anticipated that these technologies for passenger and freight transportation
will be applied across the country, with additional sensors and control equipment
integrated in infrastructure and connected and autonomous vehicles.
Other project partners include the City of Chattanooga, the Tennessee Department of
Transportation, the Georgia Department of Transportation, University of Tennessee,
Vanderbilt University, Wayne State University, TomTom, FedEx, USXpress, Covenant Transport
Services, and Freight Waves.
Learn more about NREL's computational science and transportation and mobility research. |
|||
509 | A Growing Problem of 'Deepfake Geography': How AI Falsifies Satellite Images | April 21, 2021
A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.
Both images exemplify what a new University of Washington-led study calls "location spoofing." The photos - created by different people, for different purposes - are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such "deepfake geography" could become a growing problem.
So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking.
"This isn't just Photoshopping things. It's making data look uncannily realistic," said Bo Zhao , assistant professor of geography at the UW and lead author of the study , which published April 21 in the journal Cartography and Geographic Information Science. "The techniques are already there. We're just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it."
As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That's due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term "paper towns" describes discreetly placed fake cities, mountains, rivers or other features on a map to prevent copyright infringement. On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of "Beatosu" and "Goblu," a play on "Beat OSU" and "Go Blue," because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map.
But with the prevalence of geographic information systems, Google Earth and other satellite imaging systems, location spoofing involves far greater sophistication, researchers say, and carries with it more risks. In 2019, the director of the National Geospatial Intelligence Agency, the organization charged with supplying maps and analyzing satellite images for the U.S. Department of Defense, implied that AI-manipulated satellite images can be a severe national security threat.
To study how satellite images can be faked, Zhao and his team turned to an AI framework that has been used in manipulating other types of digital files. When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images from an urban area, then generates a deepfake image by feeding the learned satellite image characteristics onto a different base map - similar to how popular image filters can map the features of a human face onto a cat.
Next, the researchers combined maps and satellite images from three cities - Tacoma, Seattle and Beijing - to compare features and create new images of one city, drawn from the characteristics of the other two. They designated Tacoma their "base map" city and then explored how geographic features and urban structures of Seattle (similar in topography and land use) and Beijing (different in both) could be incorporated to produce deepfake images of Tacoma.
In the example below, a Tacoma neighborhood is shown in mapping software (top left) and in a satellite image (top right). The subsequent deep fake satellite images of the same neighborhood reflect the visual patterns of Seattle and Beijing. Low-rise buildings and greenery mark the "Seattle-ized" version of Tacoma on the bottom left, while Beijing's taller buildings, which AI matched to the building structures in the Tacoma image, cast shadows - hence the dark appearance of the structures in the image on the bottom right. Yet in both, the road networks and building locations are similar.
The untrained eye may have difficulty detecting the differences between real and fake, the researchers point out. A casual viewer might attribute the colors and shadows simply to poor image quality. To try to identify a "fake," researchers homed in on more technical aspects of image processing, such as color histograms and frequency and spatial domains.
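As a rough illustration of those cues - and not the study's actual detection pipeline - one can compare a candidate image's color histogram and frequency spectrum against a trusted reference. The arrays, the color-channel tweak and the distance measure below are all invented for the example.

    # Hedged sketch of the kinds of cues mentioned above: compare an image's color
    # histogram and log-frequency spectrum against a reference image. Real detectors
    # are trained on many such features; the distances printed here are illustrative.
    import numpy as np

    def color_histogram(img, bins=16):
        """Per-channel histogram of an HxWx3 uint8 array, normalized to sum to 1."""
        hists = [np.histogram(img[..., c], bins=bins, range=(0, 256))[0] for c in range(3)]
        h = np.concatenate(hists).astype(float)
        return h / h.sum()

    def spectrum_profile(img):
        """Normalized log-magnitude 2-D FFT of the grayscale image."""
        gray = img.mean(axis=2)
        mag = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
        return (mag / mag.sum()).ravel()

    def l1_distance(a, b):
        return float(np.abs(a - b).sum())

    rng = np.random.default_rng(1)
    reference = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    candidate = reference.copy()
    candidate[..., 2] = np.clip(candidate[..., 2] * 1.4, 0, 255).astype(np.uint8)  # shifted color statistics

    print("histogram distance:", l1_distance(color_histogram(reference), color_histogram(candidate)))
    print("spectrum distance: ", l1_distance(spectrum_profile(reference), spectrum_profile(candidate)))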
Some simulated satellite imagery can serve a purpose, Zhao said, especially when representing geographic areas over periods of time to, say, understand urban sprawl or climate change. There may be a location for which there are no images for a certain period of time in the past, or in forecasting the future, so creating new images based on existing ones - and clearly identifying them as simulations - could fill in the gaps and help provide perspective.
The study's goal was not to show that geospatial data can be falsified, Zhao said. Rather, the authors hope to learn how to detect fake images so that geographers can begin to develop the data literacy tools, similar to today's fact-checking services, for public benefit.
"As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data," Zhao said. "We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary," he said.
Co-authors on the study were Yifan Sun, a graduate student in the UW Department of Geography; Shaozeng Zhang and Chunxue Xu of Oregon State University; and Chengbin Deng of Binghamton University.
For more information, contact Zhao at zhaobo@uw.edu . | Researchers at the University of Washington (UW), Oregon State University, and Binghamton University used satellite photos of three cities and manipulation of video and audio files to identify new methods of detecting deepfake satellite images. The team used an artificial intelligence framework that can infer the characteristics of satellite images from an urban area, then produce deepfakes by feeding the characteristics of the learned satellite image properties onto a different base map. The researchers combined maps and satellite imagery from Tacoma, WA, Seattle, and Beijing to compare features and generate deepfakes of Tacoma, based on the characteristics of the other cities. UW's Bo Zhao said, "This study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data. We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the University of Washington (UW), Oregon State University, and Binghamton University used satellite photos of three cities and manipulation of video and audio files to identify new methods of detecting deepfake satellite images. The team used an artificial intelligence framework that can infer the characteristics of satellite images from an urban area, then produce deepfakes by feeding the characteristics of the learned satellite image properties onto a different base map. The researchers combined maps and satellite imagery from Tacoma, WA, Seattle, and Beijing to compare features and generate deepfakes of Tacoma, based on the characteristics of the other cities. UW's Bo Zhao said, "This study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data. We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary."
April 21, 2021
A fire in Central Park seems to appear as a smoke plume and a line of flames in a satellite image. Colorful lights on Diwali night in India, seen from space, seem to show widespread fireworks activity.
Both images exemplify what a new University of Washington-led study calls "location spoofing." The photos - created by different people, for different purposes - are fake but look like genuine images of real places. And with the more sophisticated AI technologies available today, researchers warn that such "deepfake geography" could become a growing problem.
So, using satellite photos of three cities and drawing upon methods used to manipulate video and audio files, a team of researchers set out to identify new ways of detecting fake satellite photos, warn of the dangers of falsified geospatial data and call for a system of geographic fact-checking.
"This isn't just Photoshopping things. It's making data look uncannily realistic," said Bo Zhao , assistant professor of geography at the UW and lead author of the study , which published April 21 in the journal Cartography and Geographic Information Science. "The techniques are already there. We're just trying to expose the possibility of using the same techniques, and of the need to develop a coping strategy for it."
As Zhao and his co-authors point out, fake locations and other inaccuracies have been part of mapmaking since ancient times. That's due in part to the very nature of translating real-life locations to map form, as no map can capture a place exactly as it is. But some inaccuracies in maps are spoofs created by the mapmakers. The term "paper towns" describes discreetly placed fake cities, mountains, rivers or other features on a map to prevent copyright infringement. On the more lighthearted end of the spectrum, an official Michigan Department of Transportation highway map in the 1970s included the fictional cities of "Beatosu" and "Goblu," a play on "Beat OSU" and "Go Blue," because the then-head of the department wanted to give a shoutout to his alma mater while protecting the copyright of the map.
But with the prevalence of geographic information systems, Google Earth and other satellite imaging systems, location spoofing involves far greater sophistication, researchers say, and carries with it more risks. In 2019, the director of the National Geospatial-Intelligence Agency, the organization charged with supplying maps and analyzing satellite images for the U.S. Department of Defense, implied that AI-manipulated satellite images can be a severe national security threat.
To study how satellite images can be faked, Zhao and his team turned to an AI framework that has been used in manipulating other types of digital files. When applied to the field of mapping, the algorithm essentially learns the characteristics of satellite images from an urban area, then generates a deepfake image by feeding the learned satellite image characteristics onto a different base map - similar to how popular image filters can map the features of a human face onto a cat.
Next, the researchers combined maps and satellite images from three cities - Tacoma, Seattle and Beijing - to compare features and create new images of one city, drawn from the characteristics of the other two. They designated Tacoma their "base map" city and then explored how geographic features and urban structures of Seattle (similar in topography and land use) and Beijing (different in both) could be incorporated to produce deepfake images of Tacoma.
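The article stops at a high-level description of the framework. As a rough illustration of how such map-to-satellite "style transfer" can work, the sketch below sets up a minimal adversarial generator/discriminator pair in PyTorch; the architecture, losses, and training loop are assumptions made for brevity (a full CycleGAN-style model would add cycle-consistency terms), not the authors' code.

```python
# Illustrative sketch only: an adversarial setup for turning base-map tiles into
# satellite-style tiles. Network sizes, losses, and data handling are assumptions
# made for brevity, not the configuration used in the study.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 3-channel map tile to a 3-channel satellite-style tile."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores whether a tile looks like a real satellite image of the style city."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-level real/fake scores
        )
    def forward(self, x):
        return self.net(x)

def training_step(gen, disc, map_tile, sat_tile, g_opt, d_opt):
    """One adversarial step: push map->satellite outputs toward being
    indistinguishable from real satellite tiles of the style city."""
    bce = nn.BCEWithLogitsLoss()

    # 1) Discriminator update on real vs. generated tiles.
    fake = gen(map_tile).detach()
    real_scores, fake_scores = disc(sat_tile), disc(fake)
    d_loss = bce(real_scores, torch.ones_like(real_scores)) + \
             bce(fake_scores, torch.zeros_like(fake_scores))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Generator update: try to fool the discriminator.
    fake_scores = disc(gen(map_tile))
    g_loss = bce(fake_scores, torch.ones_like(fake_scores))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```

Once such a generator has been trained on one city's imagery (Seattle or Beijing tiles, say), it can be run on Tacoma base-map tiles to produce the kind of "Seattle-ized" or "Beijing-ized" fakes described below.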
In the example below, a Tacoma neighborhood is shown in mapping software (top left) and in a satellite image (top right). The subsequent deepfake satellite images of the same neighborhood reflect the visual patterns of Seattle and Beijing. Low-rise buildings and greenery mark the "Seattle-ized" version of Tacoma on the bottom left, while Beijing's taller buildings, which the AI matched to the building structures in the Tacoma image, cast shadows - hence the dark appearance of the structures in the image on the bottom right. Yet in both, the road networks and building locations are similar.
The untrained eye may have difficulty detecting the differences between real and fake, the researchers point out. A casual viewer might attribute the colors and shadows simply to poor image quality. To try to identify a "fake," researchers homed in on more technical aspects of image processing, such as color histograms and frequency and spatial domains.
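The article names the cues (color histograms, frequency- and spatial-domain statistics) but not an implementation; one plausible way to turn them into a detector is sketched below. The specific features, bin counts, and logistic-regression classifier are assumptions, not the study's pipeline.

```python
# Illustrative only: color-histogram and frequency-domain features for telling
# real satellite tiles (label 0) from generated ones (label 1). Feature choices
# and the classifier are assumptions, not the study's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

def tile_features(tile):
    """tile: H x W x 3 uint8 array (assume tiles of at least 128 x 128 pixels)."""
    feats = []
    # Color histograms, 32 bins per channel.
    for c in range(3):
        hist, _ = np.histogram(tile[..., c], bins=32, range=(0, 255), density=True)
        feats.append(hist)
    # Frequency-domain summary: radially averaged power spectrum of the gray image.
    gray = tile.mean(axis=2)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), weights=spectrum.ravel()) / np.maximum(counts, 1)
    feats.append(np.log1p(radial[:64]))  # keep the low-frequency part
    return np.concatenate(feats)

def train_detector(real_tiles, fake_tiles):
    X = np.stack([tile_features(t) for t in real_tiles + fake_tiles])
    y = np.array([0] * len(real_tiles) + [1] * len(fake_tiles))
    return LogisticRegression(max_iter=1000).fit(X, y)
```

In practice, hand-crafted cues like these would need to be validated on held-out fakes, since a generator can be retrained to defeat any fixed set of features.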
Some simulated satellite imagery can serve a purpose, Zhao said, especially when representing geographic areas over periods of time to, say, understand urban sprawl or climate change. There may be no images of a location for a certain period in the past, or none yet for a projected future, so creating new images based on existing ones - and clearly identifying them as simulations - could fill in the gaps and help provide perspective.
The study's goal was not to show that geospatial data can be falsified, Zhao said. Rather, the authors hope to learn how to detect fake images so that geographers can begin to develop the data literacy tools, similar to today's fact-checking services, for public benefit.
"As technology continues to evolve, this study aims to encourage more holistic understanding of geographic data and information, so that we can demystify the question of absolute reliability of satellite images or other geospatial data," Zhao said. "We also want to develop more future-oriented thinking in order to take countermeasures such as fact-checking when necessary," he said.
Co-authors on the study were Yifan Sun, a graduate student in the UW Department of Geography; Shaozeng Zhang and Chunxue Xu of Oregon State University; and Chengbin Deng of Binghamton University.
For more information, contact Zhao at zhaobo@uw.edu.
511 | Algorithm Reveals Birdsong Features That May be Key for Courtship

Researchers from McGill University and the University of California, San Francisco have developed a new algorithm capable of identifying features of male zebra finch songs that may underlie the distinction between a short phrase sung during courtship and the same phrase sung in a non-courtship context.
In a recent study published in PLOS Computational Biology , the team looked at how male zebra finches adapted their vocal signals for specific audiences. Though they may sing the same sequence of syllables during courtship interactions with females as when singing alone, they will do so with subtle modifications. However, humans cannot detect these differences, and it was not clear that female zebra finches could, either.
The researchers first conducted behavioral experiments demonstrating that female zebra finches are indeed highly adept at discriminating between short segments of males' songs recorded in courtship versus non-courtship settings.
Subsequently, they sought to expand on earlier studies that have focused on just a few specific song features that may underlie the distinction between courtship and non-courtship song. Taking a 'bottom-up' approach, the researchers extracted over 5,000 song features from recordings and trained an algorithm to use those features to distinguish between courtship and non-courtship song phrases.
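The bottom-up pipeline is described here only as "extract thousands of features, then train a classifier." The sketch below illustrates that shape with generic spectral summary statistics and a random forest; the feature definitions, library calls, and model choice are assumptions for illustration, not the authors' actual feature set.

```python
# Illustrative sketch: turn song-phrase audio clips into summary features and
# train a classifier to separate courtship (label 1) from non-courtship (label 0)
# renditions. Feature definitions and the model are assumptions made for brevity.
import numpy as np
from scipy.signal import spectrogram
from sklearn.ensemble import RandomForestClassifier

def phrase_features(waveform, sr=44100):
    """waveform: 1-D float array holding one song phrase. Returns a feature vector."""
    freqs, times, spec = spectrogram(waveform, fs=sr, nperseg=512, noverlap=256)
    spec = np.log1p(spec)
    feats = []
    # Per-frequency-band summary statistics across time.
    for band in np.array_split(spec, 16, axis=0):
        feats += [band.mean(), band.std(), band.max()]
    # Spectral centroid trajectory and its variability (pitch-like cues).
    centroid = (freqs[:, None] * spec).sum(axis=0) / (spec.sum(axis=0) + 1e-9)
    feats += [centroid.mean(), centroid.std(), np.diff(centroid).std()]
    # Amplitude envelope and duration (tempo/intensity cues).
    env = np.abs(waveform)
    feats += [env.mean(), env.std(), float(len(waveform)) / sr]
    return np.array(feats)

def train_song_classifier(courtship_clips, noncourtship_clips, sr=44100):
    X = np.stack([phrase_features(c, sr) for c in courtship_clips + noncourtship_clips])
    y = np.array([1] * len(courtship_clips) + [0] * len(noncourtship_clips))
    return RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
```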
"As vocal communicators ourselves, we have a tendency to focus on aspects of communication signals that are salient to us," explains Sarah Woolley, Associate Professor in the Department of Biology at McGill and one of the co-authors of this study. "Using our bottom-up approach, we identified features that might never have been on our radar."
The trained algorithm uncovered features that may be key for song perception, some of which had not been identified previously. It also made predictions about the distinction capabilities of female zebra finches that aligned well with the results of the behavioral experiments. These findings highlight the potential for bottom-up approaches to reveal acoustic features important for communication and social discrimination.
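Continuing the hypothetical sketch above, one common way such a model "uncovers" candidate cues is simply to rank features by how heavily the trained classifier relies on them; this snippet assumes the random-forest model and numpy import from the earlier block.

```python
# Continuing the illustrative sketch: rank features by importance as a stand-in
# for "uncovering" candidate acoustic cues worth testing behaviorally.
def top_features(model, feature_names, k=10):
    order = np.argsort(model.feature_importances_)[::-1][:k]
    return [(feature_names[i], float(model.feature_importances_[i])) for i in order]
```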
In terms of next steps, the researchers plan to test whether manipulating the acoustic features they discovered alters what female finches think about those songs. They also hope to evaluate how well their findings might generalize to courtship and non-courtship songs in other species.
Founded in Montreal, Quebec, in 1821, McGill University is Canada's top ranked medical doctoral university. McGill is consistently ranked as one of the top universities, both nationally and internationally. It is a world-renowned institution of higher learning with research activities spanning two campuses, 11 faculties, 13 professional schools, 300 programs of study and over 40,000 students, including more than 10,200 graduate students. McGill attracts students from over 150 countries around the world, its 12,800 international students making up 31% of the student body. Over half of McGill students claim a first language other than English, including approximately 19% of our students who say French is their mother tongue.
Scientists at Canada's McGill University and the University of California, San Francisco have developed a new algorithm that can identify characteristics of male zebra finch songs that may underpin differences between a phrase sung during and outside of courtship. The researchers used a bottom-up approach to extract more than 5,000 song features from recordings, and trained the algorithm to apply those features to distinguish courtship from non-courtship song phrases. The algorithm flagged features that had not been identified previously, and its predictions about the distinction capabilities of female finches were in line with the results of behavioral experiments. This highlights the potential for bottom-up approaches to uncover acoustic features important for communication and social discrimination. The researchers hope to test whether manipulating acoustic features changes what female finches think about those songs, and to assess how their findings might generalize to courtship and non-courtship songs in other species.
512 | AI's Carbon Footprint Is Big, But Easy to Reduce, Google Researchers Say

Researchers at the University of California, Berkeley, and Google have released the most accurate estimates to date for the carbon footprint of large artificial intelligence (AI) systems. They determined that OpenAI's powerful language model GPT-3, for example, produced the equivalent of 552 metric tons of carbon dioxide during its training. The researchers found the carbon footprint of training AI algorithms depends on their design, the computer hardware used to train them, and the nature of electric power generation in the location where the training occurs; changing all three factors could lower that carbon footprint by a factor of up to 1,000. A reduction by a factor of 10 could be achieved through the use of "sparse" neural network algorithms, in which most of the artificial neurons are connected to relatively few other neurons.
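The three factors the researchers cite (model design, hardware efficiency, and the local power grid) enter the accounting multiplicatively, which is why the savings can compound. A back-of-the-envelope sketch of that accounting, using entirely hypothetical numbers rather than figures from the study, looks like this:

```python
# Back-of-the-envelope sketch of training emissions; every number below is a
# hypothetical placeholder, not a figure from the study.
def training_emissions_kg(train_hours, num_chips, avg_chip_power_kw,
                          datacenter_pue, grid_kgco2e_per_kwh):
    energy_kwh = train_hours * num_chips * avg_chip_power_kw * datacenter_pue
    return energy_kwh * grid_kgco2e_per_kwh

# Dense model on older hardware in a carbon-heavy region (hypothetical).
baseline = training_emissions_kg(24 * 30, 1000, 0.3, 1.6, 0.6)
# Sparse model (10x less compute) on efficient chips in a cleaner region (hypothetical).
improved = training_emissions_kg(24 * 30 / 10, 1000, 0.2, 1.1, 0.05)
print(baseline / 1000, "t CO2e vs", improved / 1000, "t CO2e")
```

In these made-up numbers the two scenarios already differ by a factor of a few hundred, showing how independent improvements in algorithm, hardware, and grid mix can multiply out to the large reductions the researchers describe.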
514 | Researchers Say Changing Simple iPhone Setting Fixes Long-Standing Privacy Bug

Scammers could exploit a bug in the AirDrop feature of iPhones and MacBooks to access owners' email addresses and phone numbers, according to researchers at Germany's Technical University of Darmstadt (TU Darmstadt). AirDrop allows users with both Bluetooth and Wi-Fi activated to discover nearby Apple devices and share documents and other files; however, strangers in range of such devices can extract emails and phone numbers when users open AirDrop, because the function checks such data against the other user's address book during the authentication process. The researchers said they alerted Apple to the vulnerability nearly two years ago, but the company "has neither acknowledged the problem nor indicated that they are working on a solution." They recommend that users disable AirDrop, avoid opening the sharing menu, and only activate the function when file sharing is needed, then deactivate it when done.
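The TU Darmstadt analysis attributes the leak to contact identifiers being exchanged as hashes during the AirDrop handshake, which a nearby observer can brute-force. The sketch below only illustrates why hashing a phone number is weak protection on its own; the SHA-256 choice and 10-digit number format are assumptions for the example, not a reproduction of Apple's protocol.

```python
# Illustrative only: why a bare hash does not hide a phone number. The hash
# function and number format here are assumptions for the example.
import hashlib
from typing import Optional

def hash_identifier(phone_number: str) -> str:
    return hashlib.sha256(phone_number.encode()).hexdigest()

def brute_force(observed_hash: str, area_code: str = "607") -> Optional[str]:
    # A 10-digit number with a known area code leaves only 10^7 candidates,
    # which a laptop can hash in seconds to minutes.
    for n in range(10_000_000):
        candidate = area_code + f"{n:07d}"
        if hash_identifier(candidate) == observed_hash:
            return candidate
    return None

# Example: recover a "protected" number from its hash.
print(brute_force(hash_identifier("6075551234")))
```

Longer identifiers such as email addresses are harder to enumerate exhaustively, but they can still be guessed from dictionaries of known addresses.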
515 | Newer Planes Providing Airlines a Trove of Useful Data

The retirement of older aircraft during the pandemic has left airlines with fleets equipped with digital technologies that can collect more information about emissions, safety, and other factors. Kevin Michaels of aerospace consultancy AeroDynamic Advisory notes that the latest Airbus airliner, the A350, usually records 800 megabytes of data per flight, double the amount recorded by the Airbus A380. As the number of modern aircraft in airline fleets grows, so will the amount of data available. New broadcast tracking signals are flight-specific, but can provide information useful for navigation services and arrival planning to help manage the stream of traffic in the air and at airports.
516 | From Individual Receptors Towards Whole-Brain Function

The RUB studies covered by the article were supported in part by funds from the Collaborative Research Centre (SFB) 874, which the German Research Foundation has been funding since 2010. Under the subject of "Integration and Representation of Sensory Processes," the SFB examines how sensory signals influence complex behaviour and memory formation.

Teams of researchers at Germany's Ruhr-Universität Bochum, Spain's Pompeu Fabra University, and the U.K.'s Oxford University developed concepts to measure receptor-specific modulations of brain states, and a computer model for predicting the impact of individual receptor types on brain activity. The researchers simulated the impact of individual receptor types on whole-brain dynamics by combining data from three imaging methods: diffusion-weighted magnetic resonance imaging (MRI) to record the brain's anatomical connectivity, functional MRI to capture participants' resting-state activity, and positron emission tomography to map the distribution of receptor types. From these, the researchers constructed an individualized "receptome" for each subject, reflecting the overall distribution of receptor types in their brain. The receptome model enabled the simulation of interactions between neurons dependent on activations of individual receptor types, which the researchers hope to apply to diagnosing and treating mental disorders.
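The summary gives only the high-level recipe (structural connectivity from diffusion MRI, PET receptor maps as a modulator, resting-state fMRI for comparison). As a rough illustration, the toy model below scales each region's response gain by its receptor density and couples regions through the structural connectivity matrix; the equations and parameters are assumptions, not the published model.

```python
# Illustrative sketch: a toy whole-brain rate model in which regional gain is
# modulated by a receptor-density map. The dynamics, parameters, and the way the
# receptor map enters the gain are assumptions, not the authors' model.
import numpy as np

def simulate_receptome_model(C, receptor_density, steps=2000, dt=0.01,
                             G=0.5, alpha=1.0, noise=0.01, seed=0):
    """
    C: (N, N) structural connectivity from diffusion-weighted MRI.
    receptor_density: (N,) PET-derived density of one receptor type, scaled to [0, 1].
    Returns the (steps, N) simulated regional activity.
    """
    rng = np.random.default_rng(seed)
    N = C.shape[0]
    x = rng.standard_normal(N) * 0.1
    gain = 1.0 + alpha * receptor_density          # receptor map scales regional gain
    out = np.empty((steps, N))
    for t in range(steps):
        inp = G * C @ x                            # input arrives via anatomical connections
        dx = -x + np.tanh(gain * inp)              # leaky rate dynamics
        x = x + dt * dx + np.sqrt(dt) * noise * rng.standard_normal(N)
        out[t] = x
    return out

# Hypothetical usage: run with and without the receptor term (alpha = 0) and compare
# the simulated functional connectivity against each subject's resting-state fMRI.
```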