Dataset columns (name: type, value range):

- id: string (length 1 to 169)
- pr-title: string (length 2 to 190)
- pr-article: string (length 0 to 65k)
- pr-summary: string (length 47 to 4.27k)
- sc-title: string (2 classes)
- sc-article: string (length 0 to 2.03M)
- sc-abstract: string (2 classes)
- sc-section_names: sequence (length 0)
- sc-sections: sequence (length 0)
- sc-authors: sequence (length 0)
- source: string (2 classes)
- Topic: string (10 classes)
- Citation: string (length 4 to 4.58k)
- Paper_URL: string (length 4 to 213)
- News_URL: string (length 4 to 119)
- pr-summary-and-article: string (length 49 to 66.1k)

| id | pr-title | pr-article | pr-summary | sc-title | sc-article | sc-abstract | sc-section_names | sc-sections | sc-authors | source | Topic | Citation | Paper_URL | News_URL | pr-summary-and-article |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
248 | Security Flaw Found in 2G Mobile Data Encryption Standard | BERLIN (AP) - Cybersecurity researchers in Europe say they have discovered a flaw in an encryption algorithm used by cellphones that may have allowed attackers to eavesdrop on some data traffic for more than two decades.
In a paper published Wednesday, researchers from Germany, France and Norway said the flaw affects the GPRS - or 2G - mobile data standard.
While most phones now use 4G or even 5G standards, GPRS remains a fallback for data connections in some countries.
The vulnerability in the GEA-1 algorithm is unlikely to have been an accident, the researchers said. Instead, it was probably created intentionally to provide law enforcement agencies with a "backdoor" and comply with laws restricting the export of strong encryption tools.
"According to our experimental analysis, having six correct numbers in the German lottery twice in a row is about as likely as having these properties of the key occur by chance," Christof Beierle of the Ruhr University Bochum in Germany, a co-author of the paper, said.
The GEA-1 algorithm was meant to be phased out from cellphones as early as 2013, but the researchers said they found it in current Android and iOS smartphones.
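For a sense of the scale behind Beierle's lottery comparison above: the odds of matching all six numbers in a single 6-out-of-49 draw (the assumed format of the German lottery) are about one in 14 million, and the odds of doing it twice in a row are that figure squared, on the order of one in 2 x 10^14, or roughly 2^-47. The short calculation below only illustrates that order of magnitude; it is not a figure taken from the paper.

```python
from math import comb, log2

# Odds of matching all six numbers in one 6-out-of-49 draw
# (assumed format of the German lottery referenced in the quote).
single_draw = comb(49, 6)          # 13,983,816 possible combinations

# Odds of doing it twice in a row: the single-draw odds squared.
twice_in_a_row = single_draw ** 2  # ~1.96e14

print(f"one in {single_draw:,} for a single draw")
print(f"one in {twice_in_a_row:,} for two consecutive draws")
print(f"about a 2^-{log2(twice_in_a_row):.1f} probability")  # ~2^-47.5
```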
Cellphone manufacturers and standards organizations have been notified to fix the flaw, they said. | Cybersecurity researchers from Germany, France, and Norway have identified a flaw in the GEA-1 encryption algorithm that affects the GPRS or 2G mobile data standard. The vulnerability may have enabled attackers to eavesdrop on some data traffic for decades. The researchers said it likely was created intentionally as a "backdoor" for law enforcement agencies. Germany-based Ruhr University Bochum's Christof Beierle said, "According to our experimental analysis, having six correct numbers in the German lottery twice in a row is about as likely as having these properties of the key occur by chance." The GEA-1 algorithm was found in current Android and iOS smartphones, though it was supposed to have been phased out starting in 2013. Most current phones use 4G or 5G mobile data standards, but GPRS remains a fallback for data connections in some countries. | [] | [] | [] | scitechnews | None | None | None | None | Cybersecurity researchers from Germany, France, and Norway have identified a flaw in the GEA-1 encryption algorithm that affects the GPRS or 2G mobile data standard. The vulnerability may have enabled attackers to eavesdrop on some data traffic for decades. The researchers said it likely was created intentionally as a "backdoor" for law enforcement agencies. Germany-based Ruhr University Bochum's Christof Beierle said, "According to our experimental analysis, having six correct numbers in the German lottery twice in a row is about as likely as having these properties of the key occur by chance." The GEA-1 algorithm was found in current Android and iOS smartphones, though it was supposed to have been phased out starting in 2013. Most current phones use 4G or 5G mobile data standards, but GPRS remains a fallback for data connections in some countries.
BERLIN (AP) - Cybersecurity researchers in Europe say they have discovered a flaw in an encryption algorithm used by cellphones that may have allowed attackers to eavesdrop on some data traffic for more than two decades.
In a paper published Wednesday, researchers from Germany, France and Norway said the flaw affects the GPRS - or 2G - mobile data standard.
While most phones now use 4G or even 5G standards, GPRS remains a fallback for data connections in some countries.
The vulnerability in the GEA-1 algorithm is unlikely to have been an accident, the researchers said. Instead, it was probably created intentionally to provide law enforcement agencies with a "backdoor" and comply with laws restricting the export of strong encryption tools.
"According to our experimental analysis, having six correct numbers in the German lottery twice in a row is about as likely as having these properties of the key occur by chance," Christof Beierle of the Ruhr University Bochum in Germany, a co-author of the paper, said.
The GEA-1 algorithm was meant to be phased out from cellphones as early as 2013, but the researchers said they found it in current Android and iOS smartphones.
Cellphone manufacturers and standards organizations have been notified to fix the flaw, they said. |
|||
249 | BSC Researcher Receives HPDC Achievement Award 2021 | June 15, 2021 - Rosa M. Badia, the Workflows and distributed computing group manager at the Barcelona Supercomputing Center (BSC) and coordinator of the EuroHPC project eFlows4HPC, has received the HPDC Achievement Award 2021 for her innovations in parallel task-based programming models, workflow applications and systems, and leadership in the high performance computing research community.
The International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC) recognizes with this prize an individual who has made long-lasting, influential contributions to the foundations or practice of the field of high-performance parallel and distributed computing (HPDC).
Badia is the first researcher based in Europe to be recognized with this award: "I am very pleased to receive this award for the achievements in my research on parallel programming models for distributed computing, as well as for my community activities. This is for the first time given to a European-based researcher and encourages me to continue my activities in making easier the development of applications for complex computing platforms, as we are doing in the eFlows4HPC project," says Rosa M. Badia.
The 30th HPDC will take place online on June 21-25, 2021, and Badia will present the talk "Superscalar programming models: a perspective from Barcelona."
About Rosa M. Badia
Rosa M. Badia holds a PhD in Computer Science (1994) from the Technical University of Catalonia (UPC). She is the manager of the Workflows and Distributed Computing research group at the Barcelona Supercomputing Center (BSC). She is considered one of the key researchers in parallel programming models for multicore and distributed computing due to her contribution to task-based programming models during the last 15 years. The research group focuses on PyCOMPSs/COMPSs, a parallel task-based programming model for distributed computing, and its application to the development of large heterogeneous workflows that combine HPC, Big Data, and Machine Learning. The group is also doing research on dislib, a parallel machine learning library parallelized with PyCOMPSs. Dr. Badia has published nearly 200 papers in international conferences and journals on the topics of her research. She has been very active in projects funded by the European Commission and in contracts with industry. She has been actively contributing to the BDEC international initiative and is a member of the HiPEAC Network of Excellence. She received the Euro-Par Achievement Award 2019 for her contributions to parallel processing and the DonaTIC award in the Academia/Researcher category in 2019. She is the principal investigator of the EuroHPC project eFlows4HPC.
About HPDC
HPDC is a premier computer science conference for presenting new research relating to high performance parallel and distributed systems used in both science and industry. HPDC is sponsored by the Association for Computing Machinery and the conference proceedings are archived in the ACM Digital Library.
Source: HPDC | The International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC) named Rosa M. Badia of Spain's Barcelona Supercomputing Center (BSC) recipient of the HPDC Achievement Award 2021. Badia, the first researcher working in Europe to receive the award, is considered a key innovator in parallel programming models for multicore and distributed computing, thanks to her work with task-based programming models. As manager of BSC's Workflows and Distributed Computing research group, Badia supervises investigation into PyCOMPSs/COMPSs, a parallel task-based programming distributed computing model, and its application to the development of large heterogeneous workflows integrating high-performance computing (HPC), big data, and machine learning. Badia also coordinates the EuroHPC project eFlows4HPC, which she said focuses on "making easier the development of applications for complex computing platforms." | [] | [] | [] | scitechnews | None | None | None | None | The International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC) named Rosa M. Badia of Spain's Barcelona Supercomputing Center (BSC) recipient of the HPDC Achievement Award 2021. Badia, the first researcher working in Europe to receive the award, is considered a key innovator in parallel programming models for multicore and distributed computing, thanks to her work with task-based programming models. As manager of BSC's Workflows and Distributed Computing research group, Badia supervises investigation into PyCOMPSs/COMPSs, a parallel task-based programming distributed computing model, and its application to the development of large heterogeneous workflows integrating high-performance computing (HPC), big data, and machine learning. Badia also coordinates the EuroHPC project eFlows4HPC, which she said focuses on "making easier the development of applications for complex computing platforms."
June 15, 2021 - Rosa M. Badia, the Workflows and distributed computing group manager at the Barcelona Supercomputing Center (BSC) and coordinator of the EuroHPC project eFlows4HPC, has received the HPDC Achievement Award 2021 for her innovations in parallel task-based programming models, workflow applications and systems, and leadership in the high performance computing research community.
The International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC) recognizes with this prize an individual who has made long-lasting, influential contributions to the foundations or practice of the field of high-performance parallel and distributed computing (HPDC).
Badia is the first researcher based in Europe to be recognized with this award: "I am very pleased to receive this award for the achievements in my research on parallel programming models for distributed computing, as well as for my community activities. This is for the first time given to a European-based researcher and encourages me to continue my activities in making easier the development of applications for complex computing platforms, as we are doing in the eFlows4HPC project," says Rosa M. Badia.
The 30th HPDC will take place online on June 21-25, 2021, and Badia will present the talk "Superscalar programming models: a perspective from Barcelona."
About Rosa M. Badia
Rosa M. Badia holds a PhD in Computer Science (1994) from the Technical University of Catalonia (UPC). She is the manager of the Workflows and Distributed Computing research group at the Barcelona Supercomputing Center (BSC). She is considered one of the key researchers in parallel programming models for multicore and distributed computing due to her contribution to task-based programming models during the last 15 years. The research group focuses on PyCOMPSs/COMPSs, a parallel task-based programming model for distributed computing, and its application to the development of large heterogeneous workflows that combine HPC, Big Data, and Machine Learning. The group is also doing research on dislib, a parallel machine learning library parallelized with PyCOMPSs. Dr. Badia has published nearly 200 papers in international conferences and journals on the topics of her research. She has been very active in projects funded by the European Commission and in contracts with industry. She has been actively contributing to the BDEC international initiative and is a member of the HiPEAC Network of Excellence. She received the Euro-Par Achievement Award 2019 for her contributions to parallel processing and the DonaTIC award in the Academia/Researcher category in 2019. She is the principal investigator of the EuroHPC project eFlows4HPC.
About HPDC
HPDC is a premier computer science conference for presenting new research relating to high performance parallel and distributed systems used in both science and industry. HPDC is sponsored by the Association for Computing Machinery and the conference proceedings are archived in the ACM Digital Library.
Source: HPDC |
|||
250 | WWW Code That Changed the World Up for Auction as NFT | Computer scientist Tim Berners-Lee's original source code for what would become the World Wide Web now is part of a non-fungible token (NFT) that Sotheby's will auction off, with bidding to start at $1,000. The digitally signed Ethereum blockchain NFT features the source code, an animated visualization, a letter by Berners-Lee, and a digital poster of the code from the original files, which include implementations of the three languages and protocols that Berners-Lee authored: Hypertext Markup Language (HTML), Hypertext Transfer Protocol (HTTP), and Uniform Resource Identifiers. Berners-Lee said the NFT is "a natural thing to do ... when you're a computer scientist and when you write code and have been for many years. It feels right to digitally sign my autograph on a completely digital artifact." | [] | [] | [] | scitechnews | None | None | None | None | Computer scientist Tim Berners-Lee's original source code for what would become the World Wide Web now is part of a non-fungible token (NFT) that Sotheby's will auction off, with bidding to start at $1,000. The digitally signed Ethereum blockchain NFT features the source code, an animated visualization, a letter by Berners-Lee, and a digital poster of the code from the original files, which include implementations of the three languages and protocols that Berners-Lee authored: Hypertext Markup Language (HTML), Hypertext Transfer Protocol (HTTP), and Uniform Resource Identifiers. Berners-Lee said the NFT is "a natural thing to do ... when you're a computer scientist and when you write code and have been for many years. It feels right to digitally sign my autograph on a completely digital artifact."
|
||||
251 | U.S. Task Force to Study Opening Government Data for AI Research | WASHINGTON - The Biden administration launched an initiative Thursday aiming to make more government data available to artificial intelligence researchers, part of a broader push to keep the U.S. on the cutting edge of the crucial new technology.
The National Artificial Intelligence Research Resource Task Force, a group of 12 members from academia, government, and industry led by officials from the White House Office of Science and Technology Policy and the National Science Foundation, will draft a strategy for creating an AI research resource that could, in part, give researchers secure access to stores of anonymous data about Americans, from demographics to health and driving habits. | The Biden administration's new National Artificial Intelligence Research Resource Task Force is tasked with developing a strategy for making government data available to artificial intelligence (AI) scientists. The task force's 12 members hail from academia, government, and industry, and are supervised by officials at the White House Office of Science and Technology Policy (OSTP) and the U.S. National Science Foundation. The panel's strategy could provide researchers with secure access to anonymized data about Americans, as well as to the computing power needed to analyze the data. OSTP's Lynne Parker said the group intends to provide Congress with guidance for establishing a standard AI research infrastructure for non-governmental personnel. | [] | [] | [] | scitechnews | None | None | None | None | The Biden administration's new National Artificial Intelligence Research Resource Task Force is tasked with developing a strategy for making government data available to artificial intelligence (AI) scientists. The task force's 12 members hail from academia, government, and industry, and are supervised by officials at the White House Office of Science and Technology Policy (OSTP) and the U.S. National Science Foundation. The panel's strategy could provide researchers with secure access to anonymized data about Americans, as well as to the computing power needed to analyze the data. OSTP's Lynne Parker said the group intends to provide Congress with guidance for establishing a standard AI research infrastructure for non-governmental personnel.
WASHINGTON - The Biden administration launched an initiative Thursday aiming to make more government data available to artificial intelligence researchers, part of a broader push to keep the U.S. on the cutting edge of the crucial new technology.
The National Artificial Intelligence Research Resource Task Force, a group of 12 members from academia, government, and industry led by officials from the White House Office of Science and Technology Policy and the National Science Foundation, will draft a strategy for creating an AI research resource that could, in part, give researchers secure access to stores of anonymous data about Americans, from demographics to health and driving habits. |
|||
252 | Snails Carrying World's Smallest Computer Help Solve Mass Extinction Survivor Mystery | More than 50 species of tree snail in the South Pacific Society Islands were wiped out following the introduction of an alien predatory snail in the 1970s, but the white-shelled Partula hyalina survived.
Now, thanks to a collaboration between University of Michigan biologists and engineers with the world's smallest computer, scientists understand why: P. hyalina can tolerate more sunlight than its predator, so it was able to persist in sunlit forest edge habitats.
"We were able to get data that nobody had been able to obtain," said David Blaauw, the Kensall D. Wise Collegiate Professor of Electrical Engineering and Computer Science. "And that's because we had a tiny computing system that was small enough to stick on a snail."
The Michigan Micro Mote (M3), considered the world's smallest complete computer, was announced in 2014 by a team Blaauw co-led. This was its first field application.
"The sensing computers are helping us understand how to protect endemic species on islands," said Cindy Bick, who received a Ph.D. in ecology and evolutionary biology from U-M in 2018. "If we are able to map and protect these habitats through appropriate conservation measures, we can figure out ways to ensure the survival of the species."
P. hyalina is important culturally for Polynesians because of its unique color, making it attractive for use in shell leis and jewelry. Tree snails also play a vital role in island forest ecosystems, as the dominant group of native grazers.
The giant African land snail was introduced to the Society Islands, including Tahiti, to be cultivated as a food source, but it became a major pest. To control its population, agricultural scientists introduced the rosy wolf snail in 1974. But unfortunately, most of the 61 known species of native Society Islands tree snails were easy prey for the rosy wolf. P. hyalina is one of only five survivors in the wild. Partula snails are called the "Darwin finches of the snail world" for their island-bound diversity, and the loss of so many species is a blow to biologists studying evolution.
"The endemic tree snails had never encountered a predator like the alien rosy wolf snail before its deliberate introduction. It can climb trees and very quickly drove most of the valley populations to local extinction," said Diarmaid Ó Foighil, professor of ecology and evolutionary biology and curator of the U-M Museum of Zoology.
In 2015, Ó Foighil and Bick hypothesized that P. hyalina's distinctive white shell might give it an important advantage in forest edge habitats, by reflecting rather than absorbing light radiation levels that would be deadly to its darker-shelled predator. To test their idea, they needed to be able to track the light exposure levels P. hyalina and rosy wolf snails experienced in a typical day.
Bick and Ó Foighil wanted to attach light sensors to the snails, but a system made using commercially available chips would have been too big. Bick found news of a smart sensor system that was just 2x5x2 mm, and the developers were at her own institution. But could it be altered to sense light?
"It was important to understand what the biologists were thinking and what they needed," said Inhee Lee , an assistant professor of electrical and computer engineering at the University of Pittsburgh who received a Ph.D. from U-M electrical and computer engineering in 2014. Lee adapted the M3 for the study.
The first step was to figure out how to measure the light intensity of the snails' habitats. At the time, the team had just added an energy harvester to the M3 system to recharge the battery using tiny solar cells. Lee realized he could measure the light level continuously by measuring the speed at which the battery was charging.
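The article does not spell out the conversion, but the idea is that the solar cells' harvesting current, and hence how quickly the battery voltage climbs over a fixed interval, scales with the light reaching the device. The sketch below illustrates that logic in plain Python; the function names, sampling interval, and linear calibration constant are hypothetical stand-ins, not values from the M3 firmware.

```python
import time

# Hypothetical calibration constant: battery charge rate (volts/second)
# per unit of relative light intensity, obtained by logging the device
# under known illumination. Not an actual M3 parameter.
VOLTS_PER_SEC_PER_UNIT_LIGHT = 2.5e-4

def read_battery_voltage() -> float:
    """Placeholder for the device's battery-voltage readout."""
    raise NotImplementedError

def estimate_light_level(sample_seconds: float = 60.0) -> float:
    """Estimate relative light intensity from how fast the battery charges."""
    v_start = read_battery_voltage()
    time.sleep(sample_seconds)
    v_end = read_battery_voltage()
    charge_rate = (v_end - v_start) / sample_seconds  # volts per second
    return max(charge_rate, 0.0) / VOLTS_PER_SEC_PER_UNIT_LIGHT
```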
After testing enabled by local Michigan snails, 50 M3s made it to Tahiti in 2017. Bick and Lee joined forces with Trevor Coote, a well-known conservation field biologist and specialist on the French Polynesian snails.
The team glued the sensors directly to the rosy wolf snails, but P. hyalina is a protected species and required an indirect approach. They are nocturnal, typically sleeping during the day while attached underneath leaves. Using magnets, the team placed M3s both on the tops and undersides of leaves harboring the resting P. hyalina. At the end of each day, Lee wirelessly downloaded the data from each of the M3s.
During the noon hour, the P. hyalina habitat received on average 10 times more sunlight than the rosy wolf snails. The researchers suspect that the rosy wolf doesn't venture far enough into the forest edge to catch P. hyalina, even under cover of darkness, because they wouldn't be able to escape to shade before the sun became too hot.
"The M3 really opens up the window of what we can do with invertebrate behavioral ecology and we're just at the foothills of those possibilities," Ó Foighil said.
This project has already facilitated a subsequent collaboration between engineering and ecology and evolutionary biology tracking monarch butterflies.
The article in the journal Communications Biology is titled, "Millimeter-sized smart sensors reveal that a solar refuge protects tree snail Partula hyalina from extirpation."
The project was supported by U-M's MCubed program, created to stimulate and support innovative research among interdisciplinary teams. Additional funding was provided by the Department of Ecology and Evolutionary Biology and by National Science Foundation and Arm Ltd. funding to the Blaauw lab. | University of Michigan (U-M) biologists and engineers used the world's smallest computer to learn how the South Pacific Society Islands tree snail | [] | [] | [] | scitechnews | None | None | None | None | University of Michigan (U-M) biologists and engineers used the world's smallest computer to learn how the South Pacific Society Islands tree snail
More than 50 species of tree snail in the South Pacific Society Islands were wiped out following the introduction of an alien predatory snail in the 1970s, but the white-shelled Partula hyalina survived.
Now, thanks to a collaboration between University of Michigan biologists and engineers with the world's smallest computer, scientists understand why: P. hyalina can tolerate more sunlight than its predator, so it was able to persist in sunlit forest edge habitats.
"We were able to get data that nobody had been able to obtain," said David Blaauw, the Kensall D. Wise Collegiate Professor of Electrical Engineering and Computer Science. "And that's because we had a tiny computing system that was small enough to stick on a snail."
The Michigan Micro Mote (M3), considered the world's smallest complete computer, was announced in 2014 by a team Blaauw co-led. This was its first field application.
"The sensing computers are helping us understand how to protect endemic species on islands," said Cindy Bick, who received a Ph.D. in ecology and evolutionary biology from U-M in 2018. "If we are able to map and protect these habitats through appropriate conservation measures, we can figure out ways to ensure the survival of the species."
P. hyalina is important culturally for Polynesians because of its unique color, making it attractive for use in shell leis and jewelry. Tree snails also play a vital role in island forest ecosystems, as the dominant group of native grazers.
The giant African land snail was introduced to the Society Islands, including Tahiti, to be cultivated as a food source, but it became a major pest. To control its population, agricultural scientists introduced the rosy wolf snail in 1974. But unfortunately, most of the 61 known species of native Society Islands tree snails were easy prey for the rosy wolf. P. hyalina is one of only five survivors in the wild. Partula snails are called the "Darwin finches of the snail world" for their island-bound diversity, and the loss of so many species is a blow to biologists studying evolution.
"The endemic tree snails had never encountered a predator like the alien rosy wolf snail before its deliberate introduction. It can climb trees and very quickly drove most of the valley populations to local extinction," said Diarmaid Ó Foighil, professor of ecology and evolutionary biology and curator of the U-M Museum of Zoology.
In 2015, Ó Foighil and Bick hypothesized that P. hyalina's distinctive white shell might give it an important advantage in forest edge habitats, by reflecting rather than absorbing light radiation levels that would be deadly to its darker-shelled predator. To test their idea, they needed to be able to track the light exposure levels P. hyalina and rosy wolf snails experienced in a typical day.
Bick and Ó Foighil wanted to attach light sensors to the snails, but a system made using commercially available chips would have been too big. Bick found news of a smart sensor system that was just 2x5x2 mm, and the developers were at her own institution. But could it be altered to sense light?
"It was important to understand what the biologists were thinking and what they needed," said Inhee Lee , an assistant professor of electrical and computer engineering at the University of Pittsburgh who received a Ph.D. from U-M electrical and computer engineering in 2014. Lee adapted the M3 for the study.
The first step was to figure out how to measure the light intensity of the snails' habitats. At the time, the team had just added an energy harvester to the M3 system to recharge the battery using tiny solar cells. Lee realized he could measure the light level continuously by measuring the speed at which the battery was charging.
After testing enabled by local Michigan snails, 50 M3s made it to Tahiti in 2017. Bick and Lee joined forces with Trevor Coote, a well-known conservation field biologist and specialist on the French Polynesian snails.
The team glued the sensors directly to the rosy wolf snails, but P. hyalina is a protected species and required an indirect approach. They are nocturnal, typically sleeping during the day while attached underneath leaves. Using magnets, the team placed M3s both on the tops and undersides of leaves harboring the resting P. hyalina. At the end of each day, Lee wirelessly downloaded the data from each of the M3s.
During the noon hour, the P. hyalina habitat received on average 10 times more sunlight than the rosy wolf snails. The researchers suspect that the rosy wolf doesn't venture far enough into the forest edge to catch P. hyalina, even under cover of darkness, because they wouldn't be able to escape to shade before the sun became too hot.
"The M3 really opens up the window of what we can do with invertebrate behavioral ecology and we're just at the foothills of those possibilities," Ó Foighil said.
This project has already facilitated a subsequent collaboration between engineering and ecology and evolutionary biology tracking monarch butterflies.
The article in the journal Communications Biology is titled, "Millimeter-sized smart sensors reveal that a solar refuge protects tree snail Partula hyalina from extirpation."
The project was supported by U-M's MCubed program, created to stimulate and support innovative research among interdisciplinary teams. Additional funding was provided by the Department of Ecology and Evolutionary Biology and by National Science Foundation and Arm Ltd. funding to the Blaauw lab. |
|||
253 | DNA-Based Storage System with Files and Metadata | DNA-based data storage appears to offer solutions to some of the problems created by humanity's ever-growing capacity to create data we want to hang on to. Compared to most other media, DNA offers phenomenal data densities. If stored in the right conditions, DNA doesn't require any energy to maintain the data for centuries. And due to DNA's centrality to biology, we're always likely to maintain the ability to read it.
But DNA is not without its downsides. Right now, there's no standard method of encoding bits in the pattern of bases of a DNA strand. Synthesizing specific sequences remains expensive. And accessing the data using current methods is slow and depletes the DNA being used for storage. Try to access the data too many times and you have to restore it in some way - a process that risks introducing errors.
A team from MIT and the Broad Institute has decided to tackle some of these issues. In the process, the researchers have created a DNA-based image-storage system that is somewhere between a file system and a metadata-based database.
Recent systems for storing data in DNA (such as one we've covered) involve adding specific sequence tags to the stretches of DNA that contain data. To get the data you want, you simply add bits of DNA that can base-pair with the right tags and use them to amplify the full sequence. Think of it like tagging every image in a collection with an ID, then setting things up so that only one specific ID gets amplified.
This method is effective, but it's limited in two ways. First, the amplification step, done using a process called PCR , has limits on the size of the sequence that can be amplified. And each tag takes up some of that limited space, so adding more detailed tags (as might be needed for a complicated file system) cuts into the amount of space for data.
The other limit is that the PCR reaction that amplifies specific pieces of data-containing DNA consumes some of the original library of DNA. In other words, each time you pull out some data, you destroy piles of unrelated data. Access data often enough and you'll end up burning through the entire repository. While there are ways to re-amplify everything, each time this is done, it increases the chance of introducing an error.
The new research has separated out the tag information from data storage. In addition, the researchers created a system in which it's possible to access just the DNA data you're interested in and leave the rest of the data untouched, providing a greater longevity to the data storage.
The basic technology is based on the fact that DNA will stick to silicon-dioxide glass beads. This attraction is independent of the size of the DNA, so you can store arbitrarily large chunks of data using this system (in this case, the fragments were over 10 times the size of the typical chunk of DNA data storage used in the past). Just as importantly, no tags were stored in the data-carrying DNA itself, so there was no competition between data storage and file system information.
Once the DNA was on the surface of these beads, the researchers polymerized some additional silicon dioxide on top of it. This process coated the DNA and protected it from the environment. Using a fluorescent tag, the researchers confirmed that the system was efficient; essentially, all of the particles created this way contained DNA.
Only once this shell was in place did the researchers add tags, which were chemically linked to the outer shell. The tags were made of single-stranded DNA, and it was possible to have several distinct tags attached to a single glass shell.
The researchers handled the process separately for each block of data, and once everything was in place, the tagged glass spheres could be mixed into a single data library. While not as compact as the storage of pure DNA, the library still has the advantages of being stable for the long term and requiring no energy for maintenance.
But the fun part is accessing data. The researchers stored a keyword-associated collection of images in the DNA, with each keyword encoded in the DNA attached to the exterior of the glass shell. To use their example, an image of an orange pet cat would be associated with the keywords "orange," "cat," and "domestic," while an image of a tiger would just have "orange" and "cat."
Because these tags were single-stranded, it was possible to design a matching sequence that would base-pair with it to form a double helix. The tags were linked to differently colored fluorescent molecules so that any glass shells linked to the right tags would start glowing specific colors. We already have machines that use lasers to separate things based on what color they glow (normally, the machines are used to sort fluorescently tagged cells). In this machine, an orange domestic cat bead would glow at different wavelengths than an orange cat bead, so the house cat could be pulled out of the library.
The rest of the library would remain untouched, so there's no significant loss of data each time this process occurs. And because the beads are denser than water, it's easy to concentrate the data storage again simply by using a centrifuge to spin the unused portion of the library down to the bottom of a test tube.
The researchers used a glass-etching solution to liberate the DNA, which could then be inserted into bacteria. The DNA used for storage was set up to allow bacteria to make many copies of it for reading the data.
Interestingly, the system allows for Boolean searches with multiple terms. By selecting for or against different tags one after the other, you can build up fairly complicated conditions: true for cat, false for domesticated, true for black, and so on. Labeling two tags with the same fluorescent color would give you the equivalent of a logical OR if you grab anything with that color.
Because each of these tags can be viewed as a piece of metadata about the image stored by the DNA, the collection of beads ends up acting as a metadata-driven image database.
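In software terms, each bead behaves like a record whose metadata is the set of tags on its shell, and a search is a sequence of keep/discard decisions over those tags. The sketch below simulates only that selection logic; the example beads and the query helper are invented for illustration and say nothing about the wet-lab protocol itself.

```python
# Each "bead" is modeled as a record with a set of metadata tags.
library = [
    {"id": "house_cat", "tags": {"orange", "cat", "domestic"}},
    {"id": "tiger",     "tags": {"orange", "cat"}},
    {"id": "black_cat", "tags": {"black", "cat", "domestic"}},
]

def select(beads, require=(), exclude=()):
    """Keep beads carrying every required tag and none of the excluded ones."""
    return [
        bead for bead in beads
        if set(require) <= bead["tags"] and not (set(exclude) & bead["tags"])
    ]

# "True for cat, false for domestic" pulls out the tiger.
print(select(library, require=["cat"], exclude=["domestic"]))

# Giving two tags the same fluorescent color acts like a logical OR:
print([bead for bead in library if bead["tags"] & {"orange", "black"}])
```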
While this research represents a significant leap in complexity for DNA-based storage, it's still just DNA-based storage. That means it's slow on a scale that makes even tape drives seem quick. The researchers calculate that even if they crammed far more data into each glass bead, searches would start topping out at about 1GB of data per second. That would mean searching a petabyte of data would take a bit over two weeks.
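As a back-of-the-envelope check of that estimate: at 1 GB per second, scanning a petabyte takes on the order of a million seconds, i.e., roughly two weeks, with the exact figure depending on whether decimal or binary prefixes are assumed.

```python
SECONDS_PER_DAY = 86_400

# Decimal prefixes: 1 PB = 1e15 bytes, 1 GB/s = 1e9 bytes/s.
decimal_days = 1e15 / 1e9 / SECONDS_PER_DAY    # ~11.6 days

# Binary prefixes: 1 PiB = 2**50 bytes, 1 GiB/s = 2**30 bytes/s.
binary_days = 2**50 / 2**30 / SECONDS_PER_DAY  # ~12.1 days

print(f"{decimal_days:.1f} to {binary_days:.1f} days, i.e., about two weeks")
```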
And that's just finding the right glass beads. Cracking them open and getting the DNA into bacteria and then doing the sequencing needed to actually determine what is stored in the bead would likely add a couple of days to the process.
But of course, nobody is suggesting that we use DNA storage because it's quick; its helpful properties, as we mentioned above, are in the areas of energy use and data stability. We'd only store something in DNA if we're convinced that we won't want to access it very often. Given that, any methods of making access more functional and flexible are potentially valuable.
Nature Materials, 2021. DOI: 10.1038/s41563-021-01021-3 (About DOIs).
DNA-based data storage appears to offer solutions to some of the problems created by humanity's ever-growing capacity to create data we want to hang on to. Compared to most other media, DNA offers phenomenal data densities. If stored in the right conditions, DNA doesn't require any energy to maintain the data for centuries. And due to DNA's centrality to biology, we're always likely to maintain the ability to read it.
But DNA is not without its downsides. Right now, there's no standard method of encoding bits in the pattern of bases of a DNA strand. Synthesizing specific sequences remains expensive. And accessing the data using current methods is slow and depletes the DNA being used for storage. Try to access the data too many times and you have to restore it in some way - a process that risks introducing errors.
A team from MIT and the Broad Institute has decided to tackle some of these issues. In the process, the researchers have created a DNA-based image-storage system that is somewhere between a file system and a metadata-based database.
Recent systems for storing data in DNA (such as one we've covered) involve adding specific sequence tags to the stretches of DNA that contain data. To get the data you want, you simply add bits of DNA that can base-pair with the right tags and use them to amplify the full sequence. Think of it like tagging every image in a collection with an ID, then setting things up so that only one specific ID gets amplified.
This method is effective, but it's limited in two ways. First, the amplification step, done using a process called PCR , has limits on the size of the sequence that can be amplified. And each tag takes up some of that limited space, so adding more detailed tags (as might be needed for a complicated file system) cuts into the amount of space for data.
The other limit is that the PCR reaction that amplifies specific pieces of data-containing DNA consumes some of the original library of DNA. In other words, each time you pull out some data, you destroy piles of unrelated data. Access data often enough and you'll end up burning through the entire repository. While there are ways to re-amplify everything, each time this is done, it increases the chance of introducing an error.
The new research has separated out the tag information from data storage. In addition, the researchers created a system in which it's possible to access just the DNA data you're interested in and leave the rest of the data untouched, providing a greater longevity to the data storage.
The basic technology is based on the fact that DNA will stick to silicon-dioxide glass beads. This attraction is independent of the size of the DNA, so you can store arbitrarily large chunks of data using this system (in this case, the fragments were over 10 times the size of the typical chunk of DNA data storage used in the past). Just as importantly, no tags were stored in the data-carrying DNA itself, so there was no competition between data storage and file system information.
Once the DNA was on the surface of these beads, the researchers polymerized some additional silicon dioxide on top of it. This process coated the DNA and protected it from the environment. Using a fluorescent tag, the researchers confirmed that the system was efficient; essentially, all of the particles created this way contained DNA.
Only once this shell was in place did the researchers add tags, which were chemically linked to the outer shell. The tags were made of single-stranded DNA, and it was possible to have several distinct tags attached to a single glass shell.
The researchers handled the process separately for each block of data, and once everything was in place, the tagged glass spheres could be mixed into a single data library. While not as compact as the storage of pure DNA, the library still has the advantages of being stable for the long term and requiring no energy for maintenance.
But the fun part is accessing data. The researchers stored a keyword-associated collection of images in the DNA, with each keyword encoded in the DNA attached to the exterior of the glass shell. To use their example, an image of an orange pet cat would be associated with the keywords "orange," "cat," and "domestic," while an image of a tiger would just have "orange" and "cat."
Because these tags were single-stranded, it was possible to design a matching sequence that would base-pair with it to form a double helix. The tags were linked to differently colored fluorescent molecules so that any glass shells linked to the right tags would start glowing specific colors. We already have machines that use lasers to separate things based on what color they glow (normally, the machines are used to sort fluorescently tagged cells). In this machine, an orange domestic cat bead would glow at different wavelengths than an orange cat bead, so the house cat could be pulled out of the library.
The rest of the library would remain untouched, so there's no significant loss of data each time this process occurs. And because the beads are denser than water, it's easy to concentrate the data storage again simply by using a centrifuge to spin the unused portion of the library down to the bottom of a test tube.
The researchers used a glass-etching solution to liberate the DNA, which could then be inserted into bacteria. The DNA used for storage was set up to allow bacteria to make many copies of it for reading the data.
Interestingly, the system allows for Boolean searches with multiple terms. By selecting for or against different tags one after the other, you can build up fairly complicated conditions: true for cat, false for domesticated, true for black, and so on. Labeling two tags with the same fluorescent color would give you the equivalent of a logical OR if you grab anything with that color.
Because each of these tags can be viewed as a piece of metadata about the image stored by the DNA, the collection of beads ends up acting as a metadata-driven image database.
While this research represents a significant leap in complexity for DNA-based storage, it's still just DNA-based storage. That means it's slow on a scale that makes even tape drives seem quick. The researchers calculate that even if they crammed far more data into each glass bead, searches would start topping out at about 1GB of data per second. That would mean searching a petabyte of data would take a bit over two weeks.
And that's just finding the right glass beads. Cracking them open and getting the DNA into bacteria and then doing the sequencing needed to actually determine what is stored in the bead would likely add a couple of days to the process.
But of course, nobody is suggesting that we use DNA storage because it's quick; its helpful properties, as we mentioned above, are in the areas of energy use and data stability. We'd only store something in DNA if we're convinced that we won't want to access it very often. Given that, any methods of making access more functional and flexible are potentially valuable.
Nature Materials, 2021. DOI: 10.1038/s41563-021-01021-3 (About DOIs).
|||
254 | Combining Classical, Quantum Computing Opens Door to Discoveries | Researchers have discovered a new and more efficient computing method for pairing the reliability of a classical computer with the strength of a quantum system.
This new computing method opens the door to different algorithms and experiments that bring quantum researchers closer to near-term applications and discoveries of the technology.
"In the future, quantum computers could be used in a wide variety of applications including helping to remove carbon dioxide from the atmosphere, developing artificial limbs and designing more efficient pharmaceuticals," said Christine Muschik, a principal investigator at the Institute for Quantum Computing (IQC) and a faculty member in physics and astronomy at the University of Waterloo.
Click image to play video explaining how the researchers are pairing classical computer and quantum system to solve hard optimization problems.
The research team from IQC in partnership with the University of Innsbruck is the first to propose the measurement-based approach in a feedback loop with a regular computer, inventing a new way to tackle hard computing problems. Their method is resource-efficient and therefore can use small quantum states because they are custom-tailored to specific types of problems.
Hybrid computing, where a regular computer's processor and a quantum co-processor are paired into a feedback loop, gives researchers a more robust and flexible approach than trying to use a quantum computer alone.
While researchers are currently building hybrid computers based on quantum gates, Muschik's research team was interested in the quantum computations that could be done without gates. They designed an algorithm in which a hybrid quantum-classical computation is carried out by performing a sequence of measurements on an entangled quantum state.
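In any hybrid scheme of this kind, the classical side is essentially an optimization loop: it proposes parameters, asks the quantum co-processor for a measured cost value, and updates the parameters based on the result. The sketch below shows only that generic classical feedback skeleton, with the quantum measurement step stubbed out; it is not the measurement-based algorithm from the paper, just an illustration of the loop structure.

```python
import numpy as np

def measure_cost(params: np.ndarray) -> float:
    """Stub for the quantum co-processor: perform measurements with these
    settings on the entangled resource state and return the estimated
    cost (e.g., an energy expectation value)."""
    raise NotImplementedError

def hybrid_feedback_loop(n_params: int, iterations: int = 200,
                         step: float = 0.1, eps: float = 1e-2) -> np.ndarray:
    """Classical outer loop: finite-difference gradient descent on the
    cost reported back by the quantum co-processor."""
    rng = np.random.default_rng(0)
    params = rng.uniform(0.0, 2.0 * np.pi, n_params)
    for _ in range(iterations):
        grad = np.zeros(n_params)
        for i in range(n_params):
            shift = np.zeros(n_params)
            shift[i] = eps
            grad[i] = (measure_cost(params + shift)
                       - measure_cost(params - shift)) / (2.0 * eps)
        params -= step * grad
    return params
```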
The team's theoretical research is good news for quantum software developers and experimentalists because it provides a new way of thinking about optimization algorithms. The algorithm offers high error tolerance, often an issue in quantum systems, and works for a wide range of quantum systems, including photonic quantum co-processors.
Hybrid computing is a novel frontier in near-term quantum applications. By removing the reliance on quantum gates, Muschik and her team have removed the struggle with finicky and delicate resources and instead, by using entangled quantum states, they believe they will be able to design feedback loops that can be tailored to the datasets that the computers are researching in a more efficient manner.
"Quantum computers have the potential to solve problems that supercomputers can't, but they are still experimental and fragile," said Muschik.
The study, "A measurement-based variational quantum eigensolver," which details the researchers' work, was recently published in the journal Physical Review Letters.
This project is funded by CIFAR. | Researchers at the University of Waterloo's Institute for Quantum Computing in Canada and Austria's University of Innsbruck have developed a resource-efficient technique for combining classical computing's reliability with quantum computing's robustness. The method couples a standard computer's processor and a quantum computer's co-processor in a feedback loop to meet difficult computing challenges, using small quantum states customized to specific types of problems. The process utilizes an algorithm designed to execute hybrid quantum-classical computation by conducting a sequence of measurements on an entangled quantum state. The program has high error tolerance, and is applicable across a broad spectrum of quantum systems. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the University of Waterloo's Institute for Quantum Computing in Canada and Austria's University of Innsbruck have developed a resource-efficient technique for combining classical computing's reliability with quantum computing's robustness. The method couples a standard computer's processor and a quantum computer's co-processor in a feedback loop to meet difficult computing challenges, using small quantum states customized to specific types of problems. The process utilizes an algorithm designed to execute hybrid quantum-classical computation by conducting a sequence of measurements on an entangled quantum state. The program has high error tolerance, and is applicable across a broad spectrum of quantum systems.
Researchers have discovered a new and more efficient computing method for pairing the reliability of a classical computer with the strength of a quantum system.
This new computing method opens the door to different algorithms and experiments that bring quantum researchers closer to near-term applications and discoveries of the technology.
"In the future, quantum computers could be used in a wide variety of applications including helping to remove carbon dioxide from the atmosphere, developing artificial limbs and designing more efficient pharmaceuticals," said Christine Muschik, a principal investigator at the Institute for Quantum Computing (IQC) and a faculty member in physics and astronomy at the University of Waterloo.
Click image to play video explaining how the researchers are pairing classical computer and quantum system to solve hard optimization problems.
The research team from IQC in partnership with the University of Innsbruck is the first to propose the measurement-based approach in a feedback loop with a regular computer, inventing a new way to tackle hard computing problems. Their method is resource-efficient and therefore can use small quantum states because they are custom-tailored to specific types of problems.
Hybrid computing, where a regular computer's processor and a quantum co-processor are paired into a feedback loop, gives researchers a more robust and flexible approach than trying to use a quantum computer alone.
While researchers are currently building hybrid computers based on quantum gates, Muschik's research team was interested in the quantum computations that could be done without gates. They designed an algorithm in which a hybrid quantum-classical computation is carried out by performing a sequence of measurements on an entangled quantum state.
The team's theoretical research is good news for quantum software developers and experimentalists because it provides a new way of thinking about optimization algorithms. The algorithm offers high error tolerance, often an issue in quantum systems, and works for a wide range of quantum systems, including photonic quantum co-processors.
Hybrid computing is a novel frontier in near-term quantum applications. By removing the reliance on quantum gates, Muschik and her team have removed the struggle with finicky and delicate resources and instead, by using entangled quantum states, they believe they will be able to design feedback loops that can be tailored to the datasets that the computers are researching in a more efficient manner.
"Quantum computers have the potential to solve problems that supercomputers can't, but they are still experimental and fragile," said Muschik.
The study, "A measurement-based variational quantum eigensolver," which details the researchers' work, was recently published in the journal Physical Review Letters.
This project is funded by CIFAR. |
|||
255 | A Big Step Towards Cybersecurity's Holy Grail | The trek towards the holy grail of cybersecurity - a user-friendly computing environment where the guarantee of security is as strong as a mathematical proof - is making big strides.
A team of Carnegie Mellon University CyLab researchers just revealed a new provably secure computing environment that protects users' communication with their devices, such as keyboard, mouse, or display, from all other compromised operating system and application software and other devices. That means that even if malicious hackers compromise operating systems and other applications, this secure environment is protected; "sniffing" users' keystrokes, capturing confidential screen output, stealing or modifying data stored on user-pluggable devices for example, is impossible.
"In contrast to our platform, most existing endpoint-security tools such as antivirus or firewalls offer only limited protection against powerful cyberattacks," says CyLab's Virgil Gligor , a professor of electrical and computer engineering (ECE) and a co-author of the work. "None of them achieve the high assurance of our platform. Protection like this has not been possible to date."
The groundbreaking work was presented by Miao Yu, a postdoctoral researcher in ECE and the team's lead implementor, at last month's IEEE Symposium on Security and Privacy, the world's oldest and most prestigious security and privacy symposium.
Specifically, the researchers presented an I/O separation model, which defines precisely what it means to protect the communications of isolated applications running on frequently compromised operating systems such as Windows, Linux, or MacOS. According to the researchers, the I/O model is the first mathematically-proven model that achieves communication separation for all types of I/O hardware and I/O kernels, the programs that facilitate interactions between software and hardware components.
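One way to build intuition for what communication separation demands is a toy model in which each isolated application owns its device channels outright and every transfer is checked against that ownership, so data can never flow over a channel the application does not own. The sketch below is only such an intuition aid written for this summary; it is not the formal I/O separation model or the verified kernel from the paper.

```python
class SeparationViolation(Exception):
    pass

class ToyIOKernel:
    """Toy model: transfers are allowed only between an application and
    the device channels exclusively assigned to it."""

    def __init__(self):
        self.owner_of = {}  # channel name -> owning application

    def assign(self, channel: str, app: str) -> None:
        if channel in self.owner_of:
            raise SeparationViolation(f"{channel} is already owned")
        self.owner_of[channel] = app

    def transfer(self, app: str, channel: str, data: bytes) -> bytes:
        if self.owner_of.get(channel) != app:
            raise SeparationViolation(f"{app} may not use {channel}")
        return data  # delivered only along a channel the app owns

kernel = ToyIOKernel()
kernel.assign("keyboard", "banking_app")
kernel.transfer("banking_app", "keyboard", b"PIN digits")  # allowed
# kernel.transfer("malware", "keyboard", b"sniff")         # would raise
```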
Imagine that you need to transfer some money online, and the transactions you are about to execute are so sensitive that you'd like a guarantee they will remain private even if your computer has unknowingly been compromised with malware. Performing those transactions in this environment would be provably secure; even your completely compromised operating system would be unable to steal or modify the private data you input using your keyboard or mouse and display on your screen.
This type of secure environment is even more important with the rise of remote work, as more and more workers are utilizing Virtual Desktop Infrastructures (VDIs), which allow them to operate remote desktops.
"Business, government, and industry can benefit from using this platform and its VDI application because of the steady and permanent shift to remote work and the need to protect sensitive applications from future attacks," says Gligor. "Consumers can also benefit from adopting this platform and its VDI clients to secure access banking and investment accounts, perform provably secure e-commerce transactions, and protect digital currency."
This platform is still in the development phase, but Gligor and his team aim to commercialize it in the coming years.
An I/O Separation Model for Formal Verification of Kernel Implementations | Carnegie Mellon University (CMU) scientists have unveiled a provably secure computing environment that employs users' device communications to grant them immunity from compromised components. The researchers proposed an input/output (I/O) separation model that precisely describes mechanisms to safeguard the communications of isolated applications running on often-vulnerable operating systems like Windows, Linux, or MacOS. The CMU team said this is the first mathematically-proven model that enables communication separation for all types of I/O hardware and I/O kernels. CMU's Virgil Gilgor said, "Business, government, and industry can benefit from using this platform and its VDI [Virtual Desktop Infrastructure] application because of the steady and permanent shift to remote work and the need to protect sensitive applications from future attacks. Consumers can also benefit from adopting this platform and its VDI clients to secure access banking and investment accounts, perform provably secure e-commerce transactions, and protect digital currency." | [] | [] | [] | scitechnews | None | None | None | None | Carnegie Mellon University (CMU) scientists have unveiled a provably secure computing environment that employs users' device communications to grant them immunity from compromised components. The researchers proposed an input/output (I/O) separation model that precisely describes mechanisms to safeguard the communications of isolated applications running on often-vulnerable operating systems like Windows, Linux, or MacOS. The CMU team said this is the first mathematically-proven model that enables communication separation for all types of I/O hardware and I/O kernels. CMU's Virgil Gilgor said, "Business, government, and industry can benefit from using this platform and its VDI [Virtual Desktop Infrastructure] application because of the steady and permanent shift to remote work and the need to protect sensitive applications from future attacks. Consumers can also benefit from adopting this platform and its VDI clients to secure access banking and investment accounts, perform provably secure e-commerce transactions, and protect digital currency."
|||
256 | Computers Predict People's Tastes in Art | Do you like the thick brush strokes and soft color palettes of an impressionist painting such as those by Claude Monet? Or do you prefer the bold colors and abstract shapes of a Rothko? Individual art tastes have a certain mystique to them, but now a new Caltech study shows that a simple computer program can accurately predict which paintings a person will like.
The new study, appearing in the journal Nature Human Behaviour, utilized Amazon's crowdsourcing platform Mechanical Turk to enlist more than 1,500 volunteers to rate paintings in the genres of impressionism, cubism, abstract, and color field. The volunteers' answers were fed into a computer program and, after this training period, the computer could predict the volunteers' art preferences far better than chance.
"I used to think the evaluation of art was personal and subjective, so I was surprised by this result," says lead author Kiyohito Iigaya, a postdoctoral scholar who works in the laboratory of Caltech professor of psychology John O'Doherty , an affiliated member of the Tianqiao and Chrissy Chen Institute for Neuroscience at Caltech.
The findings not only demonstrated that computers can make these predictions but also led to a new understanding about how people judge art.
"The main point is that we are gaining an insight into the mechanism that people use to make aesthetic judgments," says O'Doherty. "That is, that people appear to use elementary image features and combine over them. That's a first step to understanding how the process works."
In the study, the team programmed the computer to break a painting's visual attributes down into what they called low-level features - traits like contrast, saturation, and hue - as well as high-level features, which require human judgment and include traits such as whether the painting is dynamic or still.
"The computer program then estimates how much a specific feature is taken into account when making a decision about how much to like a particular piece of art," explains Iigaya. "Both the low- and high-level features are combined together when making these decisions. Once the computer has estimated that, then it can successfully predict a person's liking for another previously unseen piece of art."
The researchers also discovered that the volunteers tended to cluster into three general categories: those who like paintings with real-life objects, such as an impressionist painting; those who like colorful abstract paintings, such as a Rothko; and those who like complex paintings, such as Picasso's cubist portraits. The majority of people fell into the first "real-life object" category. "Many people liked the impressionism paintings," says Iigaya.
What is more, the researchers found that they could also train a deep convolutional neural network (DCNN) to predict the volunteers' art preferences with a similar level of accuracy. A DCNN is a type of machine-learning program, in which a computer is fed a series of training images so that it can learn to classify objects, such as cats versus dogs. These neural networks have units that are connected to each other like neurons in a brain. By changing the strength of the connection of one unit to another, the network can "learn."
In this case, the deep-learning approach did not include any of the selected low- or high-level visual features used in the first part of the study, so the computer had to "decide" what features to analyze on its own.
"In deep-neural-network models, we do not actually know exactly how the network is solving a particular task because the models learn by themselves much like real brains do," explains Iigaya. "It can be very mysterious, but when we looked inside the neural network, we were able to tell that it was constructing the same feature categories we selected ourselves." These results hint at the possibility that features used for determining aesthetic preference might emerge naturally in a brain-like architecture.
"We are now actively looking at whether this is indeed the case by looking at people's brains while they make these same types of decisions," says O'Doherty.
In another part of the study, the researchers also demonstrated that their simple computer program, which had already been trained on art preferences, could accurately predict which photos volunteers would like. They showed the volunteers photographs of swimming pools, food, and other scenes, and saw similar results to those involving paintings. Additionally, the researchers showed that reversing the order also worked: after first training volunteers on photos, they could use the program to accurately predict the subjects' art preferences.
While the computer program was successful at predicting the volunteers' art preferences, the researchers say there is still more to learn about the nuances that go into any one individual's taste.
"There are aspects of preferences unique for a given individual that we have not succeeded in explaining using this method," says O'Doherty. "This more idiosyncratic component may relate to semantic features, or the meaning of a painting, past experiences, and other individual personal traits that might influence valuation. It still may be possible to identify and learn about those features in a computer model, but to do so will involve a more detailed study of each individual's preferences in a way that may not generalize across individuals as we found here."
The study, titled, " Aesthetic preference for art can be predicted from a mixture of low- and high-level visual features ," was funded by the National Institute of Mental Health (through Caltech's Conte Center for the Neurobiology of Social Decision Making), the National Institute on Drug Abuse, the Japan Society for Promotion of Science, the Swartz Foundation, the Suntory Foundation, and the William H. and Helen Lang Summer Undergraduate Research Fellowship. Other Caltech authors include Sanghyun Yi, Iman A. Wahle (BS '20), and Koranis Tanwisuth, who is now a graduate student at UC Berkeley. | Researchers at the California Institute of Technology (Caltech) used a program to predict people's art preferences. The team recruited more than 1,500 volunteers via Amazon's Mechanical Turk crowdsourcing platform to rate paintings in various genres and color fields, then fed this data to the program. The researchers taught the computer to deconstruct a painting's visual properties into low-level features (contrast, saturation, and hue) and high-level features that require human evaluation. Caltech's Kiyohito Iigaya said the program combines these features to calculate how much a specific feature is accounted for when deciding on the artwork's appeal; afterwards, the computer can accurately forecast a person's preference for a previously unseen work of art. Caltech's John O'Doherty said the research reveals insights about the underpinnings of human aesthetic judgments, "that people appear to use elementary image features and combine over them." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the California Institute of Technology (Caltech) used a program to predict people's art preferences. The team recruited more than 1,500 volunteers via Amazon's Mechanical Turk crowdsourcing platform to rate paintings in various genres and color fields, then fed this data to the program. The researchers taught the computer to deconstruct a painting's visual properties into low-level features (contrast, saturation, and hue) and high-level features that require human evaluation. Caltech's Kiyohito Iigaya said the program combines these features to calculate how much a specific feature is accounted for when deciding on the artwork's appeal; afterwards, the computer can accurately forecast a person's preference for a previously unseen work of art. Caltech's John O'Doherty said the research reveals insights about the underpinnings of human aesthetic judgments, "that people appear to use elementary image features and combine over them."
|||
259 | Creating 'Digital Twins' at Scale | Picture this: A delivery drone suffers some minor wing damage on its flight. Should it land immediately, carry on as usual, or reroute to a new destination? A digital twin, a computer model of the drone that has been flying the same route and now experiences the same damage in its virtual world, can help make the call.
Digital twins are an important part of engineering, medicine, and urban planning, but in most of these cases each twin is a bespoke, custom implementation that only works with a specific application. Michael Kapteyn SM '18, PhD '21 has now developed a model that can enable the deployment of digital twins at scale - creating twins for a whole fleet of drones, for instance.
A mathematical representation called a probabilistic graphical model can be the foundation for predictive digital twins, according to a new study by Kapteyn and his colleagues in the journal Nature Computational Science. The researchers tested out the idea on an unpiloted aerial vehicle (UAV) in a scenario like the one described above.
"The custom implementations that have been demonstrated so far typically require a significant amount of resources, which is a barrier to real-world deployment," explains Kapteyn, who recently received his doctorate in computational science and engineering from the MIT Department of Aeronautics and Astronautics.
"This is exacerbated by the fact that digital twins are most useful in situations where you are managing many similar assets," he adds. "When developing our model, we always kept in mind the goal of creating digital twins for an entire fleet of aircraft, or an entire farm of wind turbines, or a population of human cardiac patients."
"Their work pushes the boundaries of digital twins' custom implementations that require considerable deployment resources and a high level of expertise," says Omer San, an assistant professor of mechanical and aerospace engineering at Oklahoma State University who was not involved in the research.
Kapteyn's co-authors on the paper include his PhD advisor Karen Willcox SM '96, PhD '00, MIT visiting professor and director of the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin, and former MIT engineering and management master's student Jacob Pretorius '03, now chief technology officer of The Jessara Group.
Evolving twins
Digital twins have a long history in aerospace engineering, dating back to one of their earliest uses by NASA in devising strategies to bring the crippled Apollo 13 moon mission home safely in 1970. Researchers in the medical field have been using digital twins for applications like cardiology, to consider treatments such as valve replacement before a surgery.
However, expanding the use of digital twins to guide the flight of hundreds of satellites, or recommend precision therapies for thousands of heart patients, requires a different approach than the one-off, highly specific digital twins that are usually created, the researchers write.
To resolve this, Kapteyn and colleagues sought out a unifying mathematical representation of the relationship between a digital twin and its associated physical asset that was not specific to a particular application or use. The researchers' model mathematically defines a pair of physical and digital dynamic systems, coupled together via two-way data streams as they evolve over time. In the case of the UAV, for example, the parameters of the digital twin are first calibrated with data collected from the physical UAV so that its twin is an accurate reflection from the start.
As the overall state of the UAV changes over time (through processes such as mechanical wear and tear and flight time logged, among others), these changes are observed by the digital twin and used to update its own state so that it matches the physical UAV. This updated digital twin can then predict how the UAV will change in the future, using this information to optimally direct the physical asset going forward.
The graphical model allows each digital twin "to be based on the same underlying computational model, but each physical asset must maintain a unique 'digital state' that defines a unique configuration of this model," Kapteyn explains. This makes it easier to create digital twins for a large collection of similar physical assets.
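A minimal sketch of that loop, under simplifying assumptions, is shown below: the shared computational model is a small probabilistic state machine, each UAV keeps its own belief over discrete health states as its digital state, and a noisy strain reading updates that belief before it is used to pick an action. The transition and sensor probabilities are invented for illustration and are not the model from the paper.

    import numpy as np

    health_states = ["healthy", "light damage", "severe damage"]
    belief = np.array([0.98, 0.015, 0.005])          # calibrated initial digital state

    transition = np.array([[0.95, 0.04, 0.01],       # assumed degradation model:
                           [0.00, 0.90, 0.10],       # row = current state, column = next state
                           [0.00, 0.00, 1.00]])
    p_high_strain = np.array([0.05, 0.60, 0.95])     # assumed sensor model: P(high strain | state)

    def update(belief, high_strain_observed):
        predicted = transition.T @ belief                            # time update (wear and tear)
        likelihood = p_high_strain if high_strain_observed else 1.0 - p_high_strain
        posterior = likelihood * predicted                           # measurement update
        return posterior / posterior.sum()

    belief = update(belief, high_strain_observed=True)               # sensor sticker reports high strain
    action = "fly aggressive maneuvers" if belief[0] > 0.9 else "fly conservative maneuvers"
    print(dict(zip(health_states, belief.round(3))), "->", action)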
UAV test case
To test their model, the team used a 12-foot wingspan UAV designed and built together with Aurora Flight Sciences and outfitted with sensor "stickers" from The Jessara Group that were used to collect strain, acceleration, and other relevant data from the UAV.
The UAV was the test bed for everything from calibration experiments to a simulated "light damage" event. Its digital twin was able to analyze sensor data to extract damage information, predict how the structural health of the UAV would change in the future, and recommend changes in its maneuvering to accommodate those changes.
The UAV case shows how similar digital-twin modeling could be useful in other situations where environmental wear and tear plays a significant role in operation, such as a wind turbine, a bridge, or a nuclear reactor, the researchers note in their paper.
"I think this idea of maintaining a persistent set of computational models that are constantly being updated and evolved alongside a physical asset over its entire life cycle is really the essence of digital twins," says Kapteyn, "and is what we have tried to capture in our model."
The probabilistic graphical model approach helps to "seamlessly span different phases of the asset life cycle," he notes. "In our particular case, this manifests as the graphical model seamlessly extending from the calibration phase into our operational, in-flight phase, where we actually start to use the digital twin for decision-making."
The research could help make the use of digital twins more widespread, since "even with existing limitations, digital twins are providing valuable decision support in many different application areas," Willcox said in a recent interview.
"Ultimately, we would like to see the technology used in every engineering system," she added. "At that point, we can start thinking not just about how a digital twin might change the way we operate the system, but also how we design it in the first place."
This work was partially supported by the Air Force Office of Scientific Research, the SUTD-MIT International Design Center, and the U.S. Department of Energy. | The Massachusetts Institute of Technology (MIT) 's Michael Kapteyn and colleagues have designed a model for generating digital twins - precise computer simulations - at scale. The researchers tested the probabilistic graphical model in scenarios involving an unpiloted aerial vehicle (UAV). The model mathematically characterizes a pair of physical and digital dynamic systems connected via two-way data streams as they evolve; the parameters of the UAV's digital twin are initially aligned with data collected from the physical counterpart, to accurately reflect the original at the onset. This ensures the digital twin matches any changes the physical asset undergoes over time, and can anticipate the physical asset's future changes. Kapteyn said this simplifies the production of digital twins for a large number of similar physical assets. | [] | [] | [] | scitechnews | None | None | None | None | The Massachusetts Institute of Technology (MIT) 's Michael Kapteyn and colleagues have designed a model for generating digital twins - precise computer simulations - at scale. The researchers tested the probabilistic graphical model in scenarios involving an unpiloted aerial vehicle (UAV). The model mathematically characterizes a pair of physical and digital dynamic systems connected via two-way data streams as they evolve; the parameters of the UAV's digital twin are initially aligned with data collected from the physical counterpart, to accurately reflect the original at the onset. This ensures the digital twin matches any changes the physical asset undergoes over time, and can anticipate the physical asset's future changes. Kapteyn said this simplifies the production of digital twins for a large number of similar physical assets.
|||
260 | App Tracks Human Mobility, COVID-19 | Analyzing how people move about in their daily lives has long been important to urban planners, traffic engineers, and others developing new infrastructure projects.
But amid the social restrictions and quarantine policies imposed during the global spread of COVID-19 - which is directly linked to the movement of people - human mobility patterns changed dramatically.
To understand just how COVID-19 affected human movement on a global scale, Shouraseni Sen Roy, a professor in the College of Arts and Sciences Department of Geography and Sustainable Development, and graduate student Christopher Chapin developed COVID-19 vs. Human Mobility, an innovative and interactive web application, described in a new study, that shows the connections between human mobility, government policies, and cases of COVID-19.
"At a macro level, understanding movement patterns of people can help influence decision making for higher-level policies, like social gathering restrictions, mask recommendations, and tracking and tracing the spread of infectious diseases," said Sen Roy. "At a local level, understanding the movement of people can lead to more specific decisions, like where to set up testing sites or vaccination sites."
Using a collection of big data sets, Chapin, who in May earned his Master of Science in Business Analytics with a minor in geospatial technology, developed the web application from three independent sources: Apple Maps, which provides data on human movement via walking, driving, and public transportation; Oxford University's COVID-19 Government Response Tracker, which provides data on government policies implemented during the pandemic; and global cases of COVID-19 gathered by Johns Hopkins University.
"Putting together this data application was a very ambitious project," said Chapin, the lead author of the study in the Journal of Geovisualization and Spatial Analysis. "I'm really proud of the end result and grateful that Dr. Sen Roy pushed me to get the application published. Now other researchers can access the massive amount of data on COVID-19 and human mobility on a global scale."
Users of the interactive web application can select a country, or a specific state or county in the U.S., and view comparisons between human mobility and COVID-19 cases across time. They also can view information on government policies in relation to the spread of COVID-19.
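As a rough sketch of the kind of alignment behind such a view, the snippet below joins a made-up daily mobility index with made-up cumulative case counts for one region so the two series can be compared over time; the column names and values are placeholders, not the application's actual schema.

    import pandas as pd

    mobility = pd.DataFrame({
        "date": pd.date_range("2020-02-20", periods=5),
        "driving_index": [112, 135, 160, 98, 74],            # 100 = pre-pandemic baseline
    })
    cases = pd.DataFrame({
        "date": pd.date_range("2020-02-20", periods=5),
        "confirmed": [1, 1, 2, 4, 9],                        # cumulative confirmed cases
    })

    merged = mobility.merge(cases, on="date").sort_values("date")
    merged["new_cases"] = merged["confirmed"].diff().fillna(merged["confirmed"])
    print(merged[["date", "driving_index", "new_cases"]])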
"Since the initial launch, we have continued to update the application with appropriate data at regular intervals," said Sen Roy. "The web application produces interesting visualizations that can reveal fascinating trends specific to a given area that might otherwise not be recognized."
During their exploration of the data, the researchers found a handful of case studies that suggested interesting trends. For example, in New Orleans, the application shows a spike in human mobility at the end of February 2020, which coincided with Mardi Gras celebrations. A corresponding spike in COVID-19 cases followed almost a month after the event.
"We are hoping to garner more conversation and interest in the application that can help us and other researchers continue to see how COVID-19 has and continues to impact our world," said Sen Roy.
Although the application is specific to the pandemic, she noted that the framework could be modified rather easily to create a similar application for natural disasters - as long as appropriate data sets are available.
"Understanding historic mobility patterns, both under normal circumstances and in response to extreme events like a pandemic or a natural disaster, is surely needed for policy makers to make informed decisions regarding transportation systems and more," she said. "In this context, we hope that our application can be of use."
The study, " A Spatial Web Application to Explore the Interactions between Human Mobility, Government Policies and COVID-19 Cases ," is available online. | The COVID-19 vs. Human Mobility Web application can map the coronavirus pandemic's global impact on human movement. The University of Miami's Shouraseni Sen Roy and Christopher Chapin based the interactive app on Apple Maps' dataset on human movement through walking, driving, and public transit; Oxford University's COVID-19 Government Response Tracker, detailing government policies deployed during the pandemic; and Johns Hopkins University's compiled global cases of COVID-19. Users can choose a country, or a U.S. state or county, and compare human mobility and coronavirus cases over time, as well as data on government policies associated with COVID-19's spread. Sen Roy said, "Understanding historic mobility patterns, both under normal circumstances and in response to extreme events like a pandemic or a natural disaster, is surely needed for policymakers to make informed decisions regarding transportation systems and more." | [] | [] | [] | scitechnews | None | None | None | None | The COVID-19 vs. Human Mobility Web application can map the coronavirus pandemic's global impact on human movement. The University of Miami's Shouraseni Sen Roy and Christopher Chapin based the interactive app on Apple Maps' dataset on human movement through walking, driving, and public transit; Oxford University's COVID-19 Government Response Tracker, detailing government policies deployed during the pandemic; and Johns Hopkins University's compiled global cases of COVID-19. Users can choose a country, or a U.S. state or county, and compare human mobility and coronavirus cases over time, as well as data on government policies associated with COVID-19's spread. Sen Roy said, "Understanding historic mobility patterns, both under normal circumstances and in response to extreme events like a pandemic or a natural disaster, is surely needed for policymakers to make informed decisions regarding transportation systems and more."
|||
261 | ML Can Reduce Worry About Nanoparticles in Food | While crop yield has achieved a substantial boost from nanotechnology in recent years, alarms over the health risks posed by nanoparticles within fresh produce and grains have also increased. In particular, nanoparticles entering the soil through irrigation, fertilizers and other sources have raised concerns about whether plants absorb these minute particles enough to cause toxicity.
In a new study published online in the journal Environmental Science and Technology, researchers at Texas A&M University have used machine learning to evaluate the salient properties of metallic nanoparticles that make them more susceptible to plant uptake. The researchers said their algorithm could indicate how much of these nanoparticles plants accumulate in their roots and shoots.
Nanoparticles are a burgeoning trend in several fields, including medicine, consumer products and agriculture. Depending on the type of nanoparticle, some have favorable surface properties, charge and magnetism, among other features. These qualities make them ideal for a number of applications. For example, in agriculture, nanoparticles may be used as antimicrobials to protect plants from pathogens. Alternatively, they can be used to bind to fertilizers or insecticides and then programmed for slow release to increase plant absorption.
These agricultural practices and others, like irrigation, can cause nanoparticles to accumulate in the soil. However, with the different types of nanoparticles that could exist in the ground and a staggeringly large number of terrestrial plant species, including food crops, it is not clearly known if certain properties of nanoparticles make them more likely to be absorbed by some plant species than others.
"As you can imagine, if we have to test the presence of each nanoparticle for every plant species, it is a huge number of experiments, which is very time-consuming and expensive," said Xingmao "Samuel" Ma, associate professor in the Zachry Department of Civil and Environmental Engineering. "To give you an idea, silver nanoparticles alone can have hundreds of different sizes, shapes and surface coatings, and so, experimentally testing each one, even for a single plant species, is impractical."
Instead, for their study, the researchers chose two different machine learning algorithms, an artificial neural network and gene-expression programming. They first trained these algorithms on a database created from past research on different metallic nanoparticles and the specific plants in which they accumulated. In particular, their database contained the size, shape and other characteristics of different nanoparticles, along with information on how much of these particles were absorbed from soil or nutrient-enriched water into the plant body.
Once trained, their machine learning algorithms could correctly predict the likelihood of a given metallic nanoparticle to accumulate in a plant species. Also, their algorithms revealed that when plants are in a nutrient-enriched or hydroponic solution, the chemical makeup of the metallic nanoparticle determines the propensity of accumulation in the roots and shoots. But if plants are grown in soil, the contents of organic matter and the clay in soil are key to nanoparticle uptake.
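A minimal sketch of that kind of tabular model is below, using scikit-learn's small feed-forward neural network; the descriptor columns and the synthetic relationship between them and uptake are assumptions made up for the example, not the study's trained models or database.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    # columns: particle size (nm), zeta potential (mV), soil organic matter (%), clay (%)
    X = rng.uniform([5, -40, 0.5, 5], [100, 10, 6.0, 40], size=(200, 4))
    # synthetic target: smaller particles and less organic matter -> higher relative uptake
    y = 50.0 / X[:, 0] + 0.1 * (6.0 - X[:, 2]) + rng.normal(0, 0.05, 200)

    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
    model.fit(X, y)

    candidate = np.array([[20.0, -25.0, 2.0, 15.0]])   # hypothetical particle/soil combination
    print(model.predict(candidate))                     # predicted relative uptake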
Ma said that while the machine learning algorithms could make predictions for most food crops and terrestrial plants, they might not yet be ready for aquatic plants. He also noted that the next step in his research would be to investigate if the machine learning algorithms could predict nanoparticle uptake from leaves rather than through the roots.
"It is quite understandable that people are concerned about the presence of nanoparticles in their fruits, vegetables and grains," said Ma. "But instead of not using nanotechnology altogether, we would like farmers to reap the many benefits provided by this technology but avoid the potential food safety concerns."
Other contributors include Xiaoxuan Wang, Liwei Liu and Weilan Zhang from the civil and environmental engineering department.
This research is partly funded by the National Science Foundation and the Ministry of Science and Technology, Taiwan under the Graduate Students Study Abroad Program. | Texas A&M University scientists used two machine learning (ML) algorithms to assess the properties of metallic nanoparticles that make their absorption by plants more likely. The team trained an artificial neural network and gene-expression programming on a database culled from previous research on metallic nanoparticles and the plants in which they had collected. The algorithms can accurately predict a given metallic nanoparticle's likelihood to accumulate in a plant species, and how its chemical composition influences the tendency for absorption among plants in a nutrient-rich or hydroponic medium. Texas A&M's Xingmao Ma said, "It is quite understandable that people are concerned about the presence of nanoparticles in their fruits, vegetables, and grains. But instead of not using nanotechnology altogether, we would like farmers to reap the many benefits provided by this technology but avoid the potential food safety concerns." | [] | [] | [] | scitechnews | None | None | None | None | Texas A&M University scientists used two machine learning (ML) algorithms to assess the properties of metallic nanoparticles that make their absorption by plants more likely. The team trained an artificial neural network and gene-expression programming on a database culled from previous research on metallic nanoparticles and the plants in which they had collected. The algorithms can accurately predict a given metallic nanoparticle's likelihood to accumulate in a plant species, and how its chemical composition influences the tendency for absorption among plants in a nutrient-rich or hydroponic medium. Texas A&M's Xingmao Ma said, "It is quite understandable that people are concerned about the presence of nanoparticles in their fruits, vegetables, and grains. But instead of not using nanotechnology altogether, we would like farmers to reap the many benefits provided by this technology but avoid the potential food safety concerns."
|||
262 | U.S.-Born Computer Professionals Earn Far More Than Other U.S. Workers | Native-born information technology (IT) professionals and people with computer-related majors chose their careers well. New research shows native-born Americans working in computer fields earn much higher salaries than workers in other fields.
"U.S. natives who work in a computer-related occupations or have a college degree in a computer-related major earn substantially more than other professional workers or college graduates with other majors, including other science, technology, engineering and math (STEM) majors," according to a new study from the National Foundation for American Policy (NFAP) by economist Madeline Zavodny.
The analysis used data from the Current Population Survey, American Community Survey, and National Survey of College Graduates and showed a significant premium for information technology professionals or computer and information systems-related (CIS) majors that has remained stable or risen over time.
"Despite oft-voiced concerns that U.S. IT workers and computer-related majors are disadvantaged by having to compete with foreign-born workers, either via offshoring or immigration, the evidence clearly indicates that IT professionals and computer-related majors have relatively high earnings," concluded Zavodny, an economics professor at the University of North Florida and formerly an economist at the Federal Reserve Bank of Atlanta. "IT professionals earn more than other professionals across all education groups examined here, and they earn more, on average, than other professionals who have similar demographics characteristics, live in the same state, and work in the same industry. Workers who have a bachelor's in a computer-related field earn more than their counterparts with a degree in another STEM field or in a non-STEM field. The same is true for recent bachelor's or master's degree recipients."
Other findings from the research include:
· "Median earnings of IT professionals were 40% higher than median earnings of other professionals, according to data on U.S.-born workers from the Current Population Survey for the period 2002 to 2020. There is a sizable earnings premium for all education groups examined here, including workers who have at least a bachelor's degree. The premium would be even larger if computer and information systems managers were classified with IT professionals instead of other professionals.
· "IT professionals earn significantly more than other professionals even when controlling for differences in observable demographic characteristics, state of residence, and broad industry. The earnings gap between IT professionals and other professionals as a whole remained fairly stable over the period 2002 to 2020 but rose among college graduates who work full-time, year-round in salaried jobs.
· "Median earnings of college graduates with a computer-related major are 35% higher than other STEM majors and fully 83% higher than non-STEM majors, according to data on U.S.-born college graduates from the American Community Survey for the period 2009 to 2019. The earning gap narrows but remains statistically significant when controlling for differences in observable demographic characteristics, state of residence and broad industry.
· "The earnings gap between college graduates with a major in computer and information systems or another computer-related field and other STEM majors has increased over time. The gap between computer and information systems-related (CIS) majors and non-STEM majors has remained stable over time at very high levels.
· "Median earnings of recent bachelor's degree recipients with a computer-related major are about 15% to 40% higher than other STEM majors, depending on the year, according to an analysis of data on recent U.S.-born bachelor's and master's degree recipients from the National Survey of College Graduates in 2010, 2013, 2015, and 2017. The gap is substantially larger in 2017 than in the other years. Recent bachelor's degree recipients with computer-related majors continue to earn significantly more than other STEM majors when controlling for differences in observable demographic characteristics and region of residence.
· "Median earnings of recent master's degree recipients with a computer-related major are about 10% to 40% higher than other STEM majors, depending on the year. Like with bachelor's degree recipients, the gap is larger in 2017 than in the other years and remains sizable when controlling for differences in observable demographic characteristics and region of residence."
The report adds to a growing body of research about foreign and native-born workers in technology fields. "The stable-to-increasing earnings premium among U.S.-born IT professionals and computer-related majors during a period that critics characterize as high levels of immigration is consistent with a large literature that concludes that highly educated immigrants have not harmed U.S.-born workers," writes Zavodny. "Indeed, studies show that highly educated U.S. natives may even see their earnings increase as a result of highly skilled immigration since it can boost firms' productivity, spur additional innovation, prompt more U.S. natives to move into communications-intensive jobs that are their comparative advantage, and slow offshoring by U.S. firms, among other benefits.
"The substantial earnings premium for IT professionals and computer-related majors is consistent with persistently strong demand for workers with these technical skills. Even during a period of temporary and permanent immigration into the U.S. of skilled foreign-born workers and offshoring of technical jobs outside of the U.S., U.S.-born IT professionals and computer and information systems majors continued to earn, on average, substantially more than other professional workers and other majors."
The report is good news and a signal to policymakers that native-born information technology professionals do not need protection from H-1B visa holders and employment-based immigrants. Other research shows attempts to enact such restrictions harms the U.S. economy and native-born workers by slowing innovation and pushing more jobs outside the United States. | A National Foundation for American Policy study by the University of North Florida's Madeline Zavodny found that U.S.-born information technology (IT) professionals and those who earned computer-related degrees in college make much more money than peers in other fields. Analysis of data from the Current Population Survey, American Community Survey, and National Survey of College Graduates yielded a substantial premium for IT professionals or computer and information systems-related majors that has climbed or held steady. Said Zavodny, "Workers who have a bachelor's in a computer-related field earn more than their counterparts with a degree in another STEM [science, technology, engineering, and math] field or in a non-STEM field." | [] | [] | [] | scitechnews | None | None | None | None | A National Foundation for American Policy study by the University of North Florida's Madeline Zavodny found that U.S.-born information technology (IT) professionals and those who earned computer-related degrees in college make much more money than peers in other fields. Analysis of data from the Current Population Survey, American Community Survey, and National Survey of College Graduates yielded a substantial premium for IT professionals or computer and information systems-related majors that has climbed or held steady. Said Zavodny, "Workers who have a bachelor's in a computer-related field earn more than their counterparts with a degree in another STEM [science, technology, engineering, and math] field or in a non-STEM field."
Native-born information technology (IT) professionals and people with computer-related majors chose their careers well. New research shows native-born Americans working in computer fields earn much higher salaries than workers in other fields.
"U.S. natives who work in a computer-related occupations or have a college degree in a computer-related major earn substantially more than other professional workers or college graduates with other majors, including other science, technology, engineering and math (STEM) majors," according to a new study from the National Foundation for American Policy (NFAP) by economist Madeline Zavodny.
The analysis used data from the Current Population Survey, American Community Survey, and National Survey of College Graduates and showed a significant premium for information technology professionals or computer and information systems-related (CIS) majors that has remained stable or risen over time.
"Despite oft-voiced concerns that U.S. IT workers and computer-related majors are disadvantaged by having to compete with foreign-born workers, either via offshoring or immigration, the evidence clearly indicates that IT professionals and computer-related majors have relatively high earnings," concluded Zavodny, an economics professor at the University of North Florida and formerly an economist at the Federal Reserve Bank of Atlanta. "IT professionals earn more than other professionals across all education groups examined here, and they earn more, on average, than other professionals who have similar demographics characteristics, live in the same state, and work in the same industry. Workers who have a bachelor's in a computer-related field earn more than their counterparts with a degree in another STEM field or in a non-STEM field. The same is true for recent bachelor's or master's degree recipients."
Other findings from the research include:
· "Median earnings of IT professionals were 40% higher than median earnings of other professionals, according to data on U.S.-born workers from the Current Population Survey for the period 2002 to 2020. There is a sizable earnings premium for all education groups examined here, including workers who have at least a bachelor's degree. The premium would be even larger if computer and information systems managers were classified with IT professionals instead of other professionals.
· "IT professionals earn significantly more than other professionals even when controlling for differences in observable demographic characteristics, state of residence, and broad industry. The earnings gap between IT professionals and other professionals as a whole remained fairly stable over the period 2002 to 2020 but rose among college graduates who work full-time, year-round in salaried jobs.
· "Median earnings of college graduates with a computer-related major are 35% higher than other STEM majors and fully 83% higher than non-STEM majors, according to data on U.S.-born college graduates from the American Community Survey for the period 2009 to 2019. The earning gap narrows but remains statistically significant when controlling for differences in observable demographic characteristics, state of residence and broad industry.
· "The earnings gap between college graduates with a major in computer and information systems or another computer-related field and other STEM majors has increased over time. The gap between computer and information systems-related (CIS) majors and non-STEM majors has remained stable over time at very high levels.
· "Median earnings of recent bachelor's degree recipients with a computer-related major are about 15% to 40% higher than other STEM majors, depending on the year, according to an analysis of data on recent U.S.-born bachelor's and master's degree recipients from the National Survey of College Graduates in 2010, 2013, 2015, and 2017. The gap is substantially larger in 2017 than in the other years. Recent bachelor's degree recipients with computer-related majors continue to earn significantly more than other STEM majors when controlling for differences in observable demographic characteristics and region of residence.
· "Median earnings of recent master's degree recipients with a computer-related major are about 10% to 40% higher than other STEM majors, depending on the year. Like with bachelor's degree recipients, the gap is larger in 2017 than in the other years and remains sizable when controlling for differences in observable demographic characteristics and region of residence."
The report adds to a growing body of research about foreign and native-born workers in technology fields. "The stable-to-increasing earnings premium among U.S.-born IT professionals and computer-related majors during a period that critics characterize as high levels of immigration is consistent with a large literature that concludes that highly educated immigrants have not harmed U.S.-born workers," writes Zavodny. "Indeed, studies show that highly educated U.S. natives may even see their earnings increase as a result of highly skilled immigration since it can boost firms' productivity, spur additional innovation, prompt more U.S. natives to move into communications-intensive jobs that are their comparative advantage, and slow offshoring by U.S. firms, among other benefits.
"The substantial earnings premium for IT professionals and computer-related majors is consistent with persistently strong demand for workers with these technical skills. Even during a period of temporary and permanent immigration into the U.S. of skilled foreign-born workers and offshoring of technical jobs outside of the U.S., U.S.-born IT professionals and computer and information systems majors continued to earn, on average, substantially more than other professional workers and other majors."
The report is good news and a signal to policymakers that native-born information technology professionals do not need protection from H-1B visa holders and employment-based immigrants. Other research shows attempts to enact such restrictions harm the U.S. economy and native-born workers by slowing innovation and pushing more jobs outside the United States.
|||
263 | How Do We Improve the Virtual Classroom? | The COVID pandemic precipitated a major shift to virtual learning - an unplanned test of whether these technologies can scale effectively. But did they?
Researchers in the UC San Diego Department of Computer Science and Engineering (CSE) wanted to look beyond the anecdotal evidence to better understand where remote education fell short and how we might improve it. In a study presented at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI), the team examined faculty and student attitudes towards virtual classrooms and proposed several technological refinements that could improve their experience, such as flexible distribution of student video feeds and enhanced chat functions.
"We wanted to understand instructor and student perspectives and see how we can marry them," said CSE Associate Professor Nadir Weibel, senior author on the paper. "How can we improve students' experience and give better tools to instructors?"
The project was initiated and led by CSE Ph.D. student and first author Matin Yarmand. With coauthors Scott Klemmer, a professor of cognitive science and computer science, and Carnegie Mellon University Ph.D. student Jaemarie Solyst, the team interviewed seven UC San Diego faculty members to better understand their experience. Their chief complaint about online learning is that many students never turn their cameras on.
"When I'm presenting the lecture content, it feels like I'm talking into a void," said one professor. "How do I tell whether students are engaged?" asked another. "You have no sense if people are getting it or not getting it."
The researchers then distributed student surveys. The 102 responses they received showed - not surprisingly - that students resist turning on their cameras and often remain muted. Some don't want their peers to see them; others are eating or performing unrelated tasks; some are unaware video feedback helps instructors; and a hardcore few see no need to turn their cameras on at all.
While students feel awkward asking questions on video, they overwhelmingly like chat functions, which make them more likely to participate. "I'm more comfortable asking questions in chat, since I feel less like I'm interrupting the lecture," said one respondent. The survey also showed chat rooms drive more community among students. As a result, the authors propose that text communication could promote student engagement and community, even in in-person classrooms.
Students also have trouble connecting with instructors. Online classes lack opportunities for short conversations and questions that can happen during "hallway time" before and after live lectures.
In response to these concerns, the researchers have suggested potential solutions. Technology that reads social cues, such as facial expressions or head nods, could provide invaluable feedback for instructors. Video feeds could also be refined to make instructors the center of attention, as they are in real-life classrooms, rather than one of many co-equal video boxes.
Increased chat use could improve both online and in-person classes, as some students have always been reluctant to call attention to themselves.
Weibel and colleagues have also been exploring how virtual reality environments could improve online learning. Using a tool called Gather , they have been testing an instructor lounge, in which students and faculty could meet before or even during class.
"The way Gather works is you move your avatar around in the space, and if it gets close enough to somebody, you can have a one-to-one conversation with just that person," said Weibel. "Nobody else is part of it. People can just come and talk to me directly or chat with their friends and peers."
The team is also working on a natural language processing bot that could make the experience more interactive. For example, if two people are struggling on the same assignment, it could connect them to improve collaboration or point to additional resources on Canvas.
The authors believe online classes, such as MOOCs, which were popular before COVID, will continue into the future. Weibel hopes this research and other efforts will refine the technology and improve the online learning experience.
"Some classes will be back in person," he said. "Some will be only online and some will be hybrid, but I think online learning is probably here to stay." | Researchers in the University of California, San Diego Department of Computer Science and Engineering (CSE) and Carnegie Mellon University investigated virtual learning's shortcomings and proposed approaches for improving the experience. The researchers found that faculty members' chief complaint is that many students never activate their cameras; student surveys, meanwhile, found widespread learner resistance to turning cameras on. Students overwhelmingly favor chat functions, which increase the likelihood of participation, so the authors propose adding text-based chat to promote engagement and community. CSE's Nadir Weibel anticipates in the future, "Some classes will be back in person. Some will be only online, and some will be hybrid, but I think online learning is probably here to stay." | [] | [] | [] | scitechnews | None | None | None | None | Researchers in the University of California, San Diego Department of Computer Science and Engineering (CSE) and Carnegie Mellon University investigated virtual learning's shortcomings and proposed approaches for improving the experience. The researchers found that faculty members' chief complaint is that many students never activate their cameras; student surveys, meanwhile, found widespread learner resistance to turning cameras on. Students overwhelmingly favor chat functions, which increase the likelihood of participation, so the authors propose adding text-based chat to promote engagement and community. CSE's Nadir Weibel anticipates in the future, "Some classes will be back in person. Some will be only online, and some will be hybrid, but I think online learning is probably here to stay."
The COVID pandemic precipitated a major shift to virtual learning - an unplanned test of whether these technologies can scale effectively. But did they?
Researchers in the UC San Diego Department of Computer Science and Engineering (CSE) wanted to look beyond the anecdotal evidence to better understand where remote education fell short and how we might improve it. In a study presented at the Association for Computing Machinery's Conference on Human Factors in Computing Systems (CHI), the team examined faculty and student attitudes towards virtual classrooms and proposed several technological refinements that could improve their experience, such as flexible distribution of student video feeds and enhanced chat functions.
"We wanted to understand instructor and student perspectives and see how we can marry them," said CSE Associate Professor Nadir Weibel, senior author on the paper. "How can we improve students' experience and give better tools to instructors?"
The project was initiated and led by CSE Ph.D. student and first author Matin Yarmand. With coauthors Scott Klemmer, a professor of cognitive science and computer science, and Carnegie Mellon University Ph.D. student Jaemarie Solyst, the team interviewed seven UC San Diego faculty members to better understand their experience. Their chief complaint about online learning is that many students never turn their cameras on.
"When I'm presenting the lecture content, it feels like I'm talking into a void," said one professor. "How do I tell whether students are engaged?" asked another. "You have no sense if people are getting it or not getting it."
The researchers then distributed student surveys. The 102 responses they received showed - not surprisingly - that students resist turning on their cameras and often remain muted. Some don't want their peers to see them; others are eating or performing unrelated tasks; some are unaware video feedback helps instructors; and a hardcore few see no need to turn their cameras on at all.
While students feel awkward asking questions on video, they overwhelmingly like chat functions, which make them more likely to participate. "I'm more comfortable asking questions in chat, since I feel less like I'm interrupting the lecture," said one respondent. The survey also showed chat rooms drive more community among students. As a result, the authors propose that text communication could promote student engagement and community, even in in-person classrooms.
Students also have trouble connecting with instructors. Online classes lack opportunities for short conversations and questions that can happen during "hallway time" before and after live lectures.
In response to these concerns, the researchers have suggested potential solutions. Technology that reads social cues, such as facial expressions or head nods, could provide invaluable feedback for instructors. Video feeds could also be refined to make instructors the center of attention, as they are in real-life classrooms, rather than one of many co-equal video boxes.
Increased chat use could improve both online and in-person classes, as some students have always been reluctant to call attention to themselves.
Weibel and colleagues have also been exploring how virtual reality environments could improve online learning. Using a tool called Gather , they have been testing an instructor lounge, in which students and faculty could meet before or even during class.
"The way Gather works is you move your avatar around in the space, and if it gets close enough to somebody, you can have a one-to-one conversation with just that person," said Weibel. "Nobody else is part of it. People can just come and talk to me directly or chat with their friends and peers."
The team is also working on a natural language processing bot that could make the experience more interactive. For example, if two people are struggling on the same assignment, it could connect them to improve collaboration or point to additional resources on Canvas.
The authors believe online classes, such as MOOCs, which were popular before COVID, will continue into the future. Weibel hopes this research and other efforts will refine the technology and improve the online learning experience.
"Some classes will be back in person," he said. "Some will be only online and some will be hybrid, but I think online learning is probably here to stay." |
|||
264 | Meet Grace, the Healthcare Robot COVID-19 Created | Hong Kong-based Hanson Robotics has developed a prototype robot to engage with seniors and those isolated by the COVID-19 pandemic. The Grace robot is dressed like a nurse, has a thermal camera in her chest to take temperatures and measure responsiveness, uses artificial intelligence (AI) to diagnose patients, and can speak English, Mandarin, and Cantonese. Hanson Robotics founder David Hanson said Grace resembles a healthcare professional and facilitates social interactions to ease the workload of front-line hospital staff inundated during the pandemic. He also said Grace, whose facial features resemble a Western-Asian fusion of anime characters, can mimic the action of more than 48 major facial muscles. A beta version of Grace is slated to be mass-produced by Awakening Health, a joint venture between Hanson Robotics and enterprise AI developer Singularity Studio, by August, according to the venture's CEO, David Lake. | [] | [] | [] | scitechnews | None | None | None | None | Hong Kong-based Hanson Robotics has developed a prototype robot to engage with seniors and those isolated by the COVID-19 pandemic. The Grace robot is dressed like a nurse, has a thermal camera in her chest to take temperatures and measure responsiveness, uses artificial intelligence (AI) to diagnose patients, and can speak English, Mandarin, and Cantonese. Hanson Robotics founder David Hanson said Grace resembles a healthcare professional and facilitates social interactions to ease the workload of front-line hospital staff inundated during the pandemic. He also said Grace, whose facial features resemble a Western-Asian fusion of anime characters, can mimic the action of more than 48 major facial muscles. A beta version of Grace is slated to be mass-produced by Awakening Health, a joint venture between Hanson Robotics and enterprise AI developer Singularity Studio, by August, according to the venture's CEO, David Lake.
|
||||
265 | 3D-Printing Process Promises Highly Bespoke Prosthetics | A new 3D-printing process has allowed researchers to tailor-make artificial body parts and other medical devices with built-in functionality.
The advance from Nottingham University is said to offer better shape and durability, while simultaneously cutting the risk of bacterial infection.
In a statement, study lead, Dr Yinfeng He, from the Centre for Additive Manufacturing , said: "Most mass-produced medical devices fail to completely meet the unique and complex needs of their users. Similarly, single-material 3D-printing methods have design limitations that cannot produce a bespoke device with multiple biological or mechanical functions.
"But for the first time, using a computer-aided, multi-material 3D-print technique, we demonstrate it is possible to combine complex functions within one customised healthcare device to enhance patient wellbeing."
The University hopes the design process can be applied to 3D-print any medical device that needs customisable shapes and functions, such as highly-bespoke one-piece prosthetic limbs or joints to replace a lost finger or leg. Similarly, the process could custom print polypills optimised to release into the body in a pre-designed therapeutic sequence.
For this study, the researchers applied a computer algorithm to design and manufacture - pixel by pixel - 3D-printed objects made up of two polymer materials of differing stiffness that also prevent the build-up of bacterial biofilm. By optimising the stiffness in this way, they achieved custom-shaped and -sized parts that offer the required flexibility and strength.
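As a rough conceptual sketch of what assigning one of two materials pixel by pixel can look like - this is illustrative Python, not the Nottingham team's algorithm, and the material values, grid size and dithering scheme are assumptions for the example - a continuous target stiffness map can be approximated by picking either the soft or the stiff polymer at each pixel and diffusing the rounding error to neighbouring pixels, so that local averages track the designed stiffness:

    import numpy as np

    def dither_materials(target, soft=0.0, stiff=1.0):
        # Assign one of two materials per pixel so that local averages approximate
        # a continuous target stiffness map (values between the normalised soft and
        # stiff moduli). Uses Floyd-Steinberg error diffusion.
        work = target.astype(float).copy()
        choice = np.zeros(work.shape, dtype=int)   # 0 = soft polymer, 1 = stiff polymer
        rows, cols = work.shape
        threshold = (soft + stiff) / 2.0
        for y in range(rows):
            for x in range(cols):
                old = work[y, x]
                new = stiff if old >= threshold else soft
                choice[y, x] = 1 if new == stiff else 0
                err = old - new
                # push the quantisation error onto pixels not yet visited
                if x + 1 < cols:
                    work[y, x + 1] += err * 7 / 16
                if y + 1 < rows and x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                if y + 1 < rows:
                    work[y + 1, x] += err * 5 / 16
                if y + 1 < rows and x + 1 < cols:
                    work[y + 1, x + 1] += err * 1 / 16
        return choice

    # Example: a slice of a part that should be flexible at one end and stiff at the other.
    target = np.tile(np.linspace(0.1, 0.9, 64), (16, 1))
    material_map = dither_materials(target)
    print(material_map.mean())   # fraction of stiff pixels, close to the target map's mean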
Current artificial finger joint replacements use silicone and metal parts that offer a standardised level of dexterity for the wearer, while still being rigid enough to implant into bone. As a demonstrator for the study, the team 3D-printed a finger joint offering these dual requirements in one device, while also being able to customise its size and strength to meet individual patient requirements.
With an added level of design control, the team is said to have performed their new style of 3D-printing with multi-materials that are bacteria-resistant and bio-functional, allowing them to be implanted and combat infection without the use of added antibiotic drugs.
The team also used a new high-resolution characterisation technique (3D orbitSIMS) to 3D-map the chemistry of the print structures and to test the bonding between them throughout the part.
The study was carried out by the Centre for Additive Manufacturing (CfAM) and funded by the Engineering and Physical Sciences Research Council. The complete findings are published in Advanced Science .
Prior to commercialising the 3D-printing process, the researchers will broaden its potential uses by testing it on more advanced materials with extra functionalities such as controlling immune responses and promoting stem cell attachment | A new three-dimensional (3D) printing method developed by researchers at the U.K.'s Nottingham University reportedly can improve the shape and resilience of implantable medical devices, resulting in reduced risk of bacterial infection. The team used an algorithm to design and fabricate 3D-printed objects composed of two polymer materials of differing stiffness, which optimize stiffness while preventing bacterial biofilm accrual. Researchers also applied the new 3D orbitSIMS high-resolution characterization approach to plot the chemistry of the print structures, and to test bonding throughout the component. Said Nottingham's Yinfeng He, "Using a computer-aided, multi-material 3D-print technique, we demonstrate it is possible to combine complex functions within one customized healthcare device to enhance patient wellbeing." | [] | [] | [] | scitechnews | None | None | None | None | A new three-dimensional (3D) printing method developed by researchers at the U.K.'s Nottingham University reportedly can improve the shape and resilience of implantable medical devices, resulting in reduced risk of bacterial infection. The team used an algorithm to design and fabricate 3D-printed objects composed of two polymer materials of differing stiffness, which optimize stiffness while preventing bacterial biofilm accrual. Researchers also applied the new 3D orbitSIMS high-resolution characterization approach to plot the chemistry of the print structures, and to test bonding throughout the component. Said Nottingham's Yinfeng He, "Using a computer-aided, multi-material 3D-print technique, we demonstrate it is possible to combine complex functions within one customized healthcare device to enhance patient wellbeing."
A new 3D-printing process has allowed researchers to tailor-make artificial body parts and other medical devices with built-in functionality.
The advance from Nottingham University is said to offer better shape and durability, while simultaneously cutting the risk of bacterial infection.
In a statement, study lead, Dr Yinfeng He, from the Centre for Additive Manufacturing , said: "Most mass-produced medical devices fail to completely meet the unique and complex needs of their users. Similarly, single-material 3D-printing methods have design limitations that cannot produce a bespoke device with multiple biological or mechanical functions.
"But for the first time, using a computer-aided, multi-material 3D-print technique, we demonstrate it is possible to combine complex functions within one customised healthcare device to enhance patient wellbeing."
The University hopes the design process can be applied to 3D-print any medical device that needs customisable shapes and functions, such as highly-bespoke one-piece prosthetic limbs or joints to replace a lost finger or leg. Similarly, the process could custom print polypills optimised to release into the body in a pre-designed therapeutic sequence.
For this study, the researchers applied a computer algorithm to design and manufacture - pixel by pixel - 3D-printed objects made up of two polymer materials of differing stiffness that also prevent the build-up of bacterial biofilm. By optimising the stiffness in this way, they achieved custom-shaped and -sized parts that offer the required flexibility and strength.
Current artificial finger joint replacements use silicone and metal parts that offer a standardised level of dexterity for the wearer, while still being rigid enough to implant into bone. As a demonstrator for the study, the team 3D-printed a finger joint offering these dual requirements in one device, while also being able to customise its size and strength to meet individual patient requirements.
With an added level of design control, the team is said to have performed their new style of 3D-printing with multi-materials that are bacteria-resistant and bio-functional, allowing them to be implanted and combat infection without the use of added antibiotic drugs.
The team also used a new high-resolution characterisation technique (3D orbitSIMS) to 3D-map the chemistry of the print structures and to test the bonding between them throughout the part.
The study was carried out by the Centre for Additive Manufacturing (CfAM) and funded by the Engineering and Physical Sciences Research Council. The complete findings are published in Advanced Science .
Prior to commercialising the 3D-printing process, the researchers will broaden its potential uses by testing it on more advanced materials with extra functionalities such as controlling immune responses and promoting stem cell attachment.
|||
266 | Tech Firms Use Remote Monitoring to Help Honey Bees | Hit by a deadly parasitic mite, pesticides and climate change, a survey showed that between April 2019 and 2020 43.7% of US hives were lost . That was the second-highest annual figure since that particular study started in 2010. | Technology companies are monitoring honey bee colonies remotely to investigate hive die-offs, in efforts to improve their survival. The U.S.-based beehive management firm Best Bees installs hives on commercial and residential properties while staff monitor and record their health using software, sharing this data with Harvard University and Massachusetts Institute of Technology scientists. Meanwhile, Ireland's ApisProtect produces wireless in-hive sensors that collect and transmit data to an online dashboard, where machine learning software converts the data into useful information on hive health to determine when intervention by beekeepers may be necessary. Israeli firm Beewise builds solar-powered hive farms, or Beehomes, that operate autonomously or via a mobile application, using cameras, sensors, and robotic arms to take action against pests or other threats. | [] | [] | [] | scitechnews | None | None | None | None | Technology companies are monitoring honey bee colonies remotely to investigate hive die-offs, in efforts to improve their survival. The U.S.-based beehive management firm Best Bees installs hives on commercial and residential properties while staff monitor and record their health using software, sharing this data with Harvard University and Massachusetts Institute of Technology scientists. Meanwhile, Ireland's ApisProtect produces wireless in-hive sensors that collect and transmit data to an online dashboard, where machine learning software converts the data into useful information on hive health to determine when intervention by beekeepers may be necessary. Israeli firm Beewise builds solar-powered hive farms, or Beehomes, that operate autonomously or via a mobile application, using cameras, sensors, and robotic arms to take action against pests or other threats.
Hit by a deadly parasitic mite, pesticides and climate change, US honey bee colonies are struggling: a survey showed that between April 2019 and 2020, 43.7% of US hives were lost. That was the second-highest annual figure since that particular study started in 2010.
|||
267 | DNA-Based Circuits May Be the Future of Medicine, and This Software Program Will Get Us There Faster | Biological circuits , made of synthetic DNA, have incredibly vast and important medical applications. Even though this technology is still early-stage, the approach has been used to create tests for diagnosing cancer and identifying internal injuries , such as traumatic brain injury, hemorrhagic shock, and more. As well, synthetic biological circuits can be used to precisely deliver drugs into cells, at specific doses as needed.
The number of possible applications of biological circuits is vast, as too are the calculations required to identify the appropriate chemical reactions for them. But designing these circuits will now be easier, thanks to a newly improved upon software program. The advancement is described in a recent study published in IEEE Design & Test .
Renan Marks, an Adjunct Professor of the Faculty of Computing at the Universidade Federal de Mato Grosso do Sul (UFMS), was involved in the study. His team initially created a software program called DNAr, which researchers can use to simulate various chemical reactions and subsequently design new biological circuits. In their most recent work, they developed a software extension for the program, called DNAr-Logic, that allows scientists to describe their desired circuits at a high level. The software takes this high-level description of a logical circuit and converts it to chemical reaction networks that can be synthesized in DNA strands.
Marks says an advantage of his team's new software extension is that it will allow scientists to focus more on designing the circuits, rather than worry about the calculations and details of the chemical chain reactions. "They can design and simulate [biological circuits] using DNAr-Logic without previous knowledge in chemistry and without writing hundreds of reactions - and differential equations needed to simulate its dynamic behavior - by hand," says Marks. "The software lifts the burden of chemical reactions details from the scientist's shoulders."
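To make that concrete, here is a minimal, hypothetical sketch in Python - not the actual DNAr or DNAr-Logic packages, whose API may differ - of the kind of abstraction being described: a two-input AND-like gate written as a single abstract chemical reaction, X1 + X2 -> Y, whose mass-action dynamics are integrated numerically. The output concentration ends up high only when both inputs start high; DNAr-Logic automates generating and simulating much larger networks like this and converting them into reactions that can be realised in DNA strands.

    # Illustrative only: one abstract reaction X1 + X2 -> Y acting as an AND gate.
    # The rate constant, concentrations and output threshold are made-up example values.
    def simulate_and_gate(x1_0, x2_0, k=1.0, dt=0.01, steps=5000):
        x1, x2, y = x1_0, x2_0, 0.0
        for _ in range(steps):            # simple forward-Euler integration
            rate = k * x1 * x2            # mass-action rate of X1 + X2 -> Y
            x1 -= rate * dt
            x2 -= rate * dt
            y += rate * dt
        return y                          # final output concentration

    for inputs in [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
        out = simulate_and_gate(*inputs)
        print(inputs, "->", "HIGH" if out > 0.5 else "LOW")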
His team tested the new software in a series of simulations. "The results revealed that logic circuits could be flawlessly designed, simulated, and tested," says Marks, noting that they were able to use DNAr-Logic to design some synthetic biological circuits capable of generating up to 600 different reactions.
However, there are still a number of barriers to fully realizing this technology in medical applications. One outstanding issue is that biological circuits made up of loose strands of DNA may undergo "leak reactions." This is when some strands might inadvertently react with other strands in the solution, resulting in an incorrect "computation." Marks acknowledges that, while issues such as leak reactions still need to be addressed, synthetic biological circuits have an immense amount of potential. "This new field of research opens endless possibilities," he says.
Moving forward, Marks says, "I plan to continue developing new extensions to expand the DNAr software with new capabilities which other researchers could rely upon. Also, I plan to use DNAr as a framework to assist in researching and developing new circuits based on algorithms that can help health professionals diagnose illnesses faster and be more effective in health treatments."
This article appears in the August 2021 print issue as "Building DNA Logic." | Enhanced software promises to improve the design of DNA-based circuits used to deliver drugs. Researchers at Brazil's Universidade Federal de Mato Grosso do Sul (UFMS) initially developed the DNAr software program, which can be used to model chemical reactions and engineer new biological circuits; an extension called DNAr-Logic lets scientists convert high-level descriptions of desired circuits into chemical reaction networks that can be generated in DNA strands. UFMS' Renan Marks said DNAr-Logic will enable researchers to design and simulate biological circuits "without previous knowledge in chemistry and without writing hundreds of reactions - and differential equations needed to simulate its dynamic behavior - by hand." Marks said tests showed DNAr-Logic can be used to design biological circuits capable of producing as many as 600 distinct reactions. | [] | [] | [] | scitechnews | None | None | None | None | Enhanced software promises to improve the design of DNA-based circuits used to deliver drugs. Researchers at Brazil's Universidade Federal de Mato Grosso do Sul (UFMS) initially developed the DNAr software program, which can be used to model chemical reactions and engineer new biological circuits; an extension called DNAr-Logic lets scientists convert high-level descriptions of desired circuits into chemical reaction networks that can be generated in DNA strands. UFMS' Renan Marks said DNAr-Logic will enable researchers to design and simulate biological circuits "without previous knowledge in chemistry and without writing hundreds of reactions - and differential equations needed to simulate its dynamic behavior - by hand." Marks said tests showed DNAr-Logic can be used to design biological circuits capable of producing as many as 600 distinct reactions.
Biological circuits , made of synthetic DNA, have incredibly vast and important medical applications. Even though this technology is still early-stage, the approach has been used to create tests for diagnosing cancer and identifying internal injuries , such as traumatic brain injury, hemorrhagic shock, and more. As well, synthetic biological circuits can be used to precisely deliver drugs into cells, at specific doses as needed.
The number of possible applications of biological circuits is vast, as too are the calculations required to identify the appropriate chemical reactions for them. But designing these circuits will now be easier, thanks to a newly improved upon software program. The advancement is described in a recent study published in IEEE Design & Test .
Renan Marks, an Adjunct Professor of the Faculty of Computing at the Universidade Federal de Mato Grosso do Sul (UFMS), was involved in the study. His team initially created a software program called DNAr, which researchers can use to simulate various chemical reactions and subsequently design new biological circuits. In their most recent work, they developed a software extension for the program, called DNAr-Logic, that allows scientists to describe their desired circuits at a high level. The software takes this high-level description of a logical circuit and converts it to chemical reaction networks that can be synthesized in DNA strands.
Marks says an advantage of his team's new software extension is that it will allow scientists to focus more on designing the circuits, rather than worry about the calculations and details of the chemical chain reactions. "They can design and simulate [biological circuits] using DNAr-Logic without previous knowledge in chemistry and without writing hundreds of reactions - and differential equations needed to simulate its dynamic behavior - by hand," says Marks. "The software lifts the burden of chemical reactions details from the scientist's shoulders."
His team tested the new software in a series of simulations. "The results revealed that logic circuits could be flawlessly designed, simulated, and tested," says Marks, noting that they were able to use DNAr-Logic to design some synthetic biological circuits capable of generating up to 600 different reactions.
However, there are still a number of barriers to fully realizing this technology in medical applications. One outstanding issue is that biological circuits made up of loose strands of DNA may undergo "leak reactions." This is when some strands might inadvertently react with other strands in the solution, resulting in an incorrect "computation." Marks acknowledges that, while issues such as leak reactions still need to be addressed, synthetic biological circuits have an immense amount of potential. "This new field of research opens endless possibilities," he says.
Moving forward, Marks says, "I plan to continue developing new extensions to expand the DNAr software with new capabilities which other researchers could rely upon. Also, I plan to use DNAr as a framework to assist in researching and developing new circuits based on algorithms that can help health professionals diagnose illnesses faster and be more effective in health treatments."
This article appears in the August 2021 print issue as "Building DNA Logic." |
|||
268 | Quantum Holds the Key to Secure Conference Calls | The world is one step closer to ultimately secure conference calls, thanks to a collaboration between Quantum Communications Hub researchers and their German colleagues, enabling a quantum-secure conversation to take place between four parties simultaneously.
The demonstration, led by Hub researchers based at Heriot-Watt University and published in Science Advances, is a timely advance, given the global reliance on remote collaborative working, including conference calls, since the start of the COVID-19 pandemic.
There have been reports of significant escalation of cyber-attacks on popular teleconferencing platforms in the last year. This advance in quantum secured communications could lead to conference calls with inherent unhackable security measures, underpinned by the principles of quantum physics.
Senior author, Professor Alessandro Fedrizzi, who led the team at Heriot-Watt, said: "We've long known that quantum entanglement, which Albert Einstein called 'spooky action at a distance' can be used for distributing secure keys. Our work is the first example where this was achieved via 'spooky action' between multiple users at the same time -- something that a future quantum internet will be able to exploit."
Secure communications rely upon the sharing of cryptographic keys. The keys used in most systems are relatively short and can therefore be compromised by hackers, and the key distribution procedure is under increasing threat from quickly advancing quantum computers. These growing threats to data security require new, secure methods of key distribution.
A mature quantum technology called Quantum Key Distribution (QKD), deployed in this demonstration in a network scenario for the first time, harnesses the properties of quantum physics to facilitate guaranteed secure distribution of cryptographic keys.
QKD has been used to secure communications for over three decades, facilitating communications over more than 400 km of terrestrial optical fibre and recently even through space. Crucially, however, these communications have only ever occurred between two parties, limiting the technology's practicality for securing conversations between multiple users.
The system demonstrated by the team here utilises a key property of quantum physics, entanglement, which gives correlations - stronger than any with which we are familiar in everyday life - between two or more quantum systems, even when these are separated by large distances.
By harnessing multi-party entanglement, the team were able to share keys simultaneously between the four parties, through a process known as 'Quantum Conference Key Agreement', overcoming the limitations of traditional QKD systems to share keys between just two users, and enabling the first quantum conference call to occur with an image of a Cheshire cat shared between the four parties, separated by up to 50 km of optical fibre.
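As a purely classical illustration of why a shared conference key is useful - hypothetical Python, not the experiment's software - once quantum conference key agreement has left all four parties holding the same secret bytes, any one of them can encrypt a broadcast (such as the Cheshire cat image) as a one-time pad that the other three can decrypt, while an eavesdropper without the key learns nothing:

    # Stand-in for the quantum-distributed conference key: in the experiment the
    # shared randomness comes from multi-party entanglement; here random bytes
    # are generated locally purely to illustrate how such a key would be used.
    import secrets

    message = b"...bytes of the Cheshire cat image would go here..."
    conference_key = secrets.token_bytes(len(message))   # a one-time pad needs key length >= message length

    def xor_bytes(data, key):
        return bytes(d ^ k for d, k in zip(data, key))

    ciphertext = xor_bytes(message, conference_key)    # the sender broadcasts this over any channel
    recovered = xor_bytes(ciphertext, conference_key)  # each of the other three parties decrypts
    assert recovered == message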
Entanglement-based quantum networks are just one part of a large programme of work that the Quantum Communications Hub, which is funded by EPSRC as part of the UK National Quantum Technologies Programme, is undertaking to deliver future quantum secured networks.
The technology demonstrated here has potential to drastically reduce the resource costs for conference calls in quantum networks when compared to standard two-party QKD methods. It is one of the first examples of the expected benefits of a future quantum internet, which is expected to supply entanglement to a system of globally distributed nodes.
You can find out more about the Quantum Communications Hub and the UK National Quantum Technologies Programme on their websites. | Scientists in the Quantum Communications Hub at the U.K.'s Heriot-Watt University, working with German colleagues, facilitated a quantum-secure four-way conversation, the result of deploying Quantum Key Distribution (QKD) in a network scenario for the first time. The team applied a process called Quantum Conference Key Agreement to surmount the constraints of traditional QKD systems to share keys between only two users. This enabled the first quantum conference call to share an image of a Cheshire cat between four parties, separated by up to 50 kilometers (31 miles) of optical fiber. Said Heriot-Watt's Alessandro Fedrizzi, "Our work is the first example where this was achieved via 'spooky action' between multiple users at the same time, something that a future quantum Internet will be able to exploit." | [] | [] | [] | scitechnews | None | None | None | None | Scientists in the Quantum Communications Hub at the U.K.'s Heriot-Watt University, working with German colleagues, facilitated a quantum-secure four-way conversation, the result of deploying Quantum Key Distribution (QKD) in a network scenario for the first time. The team applied a process called Quantum Conference Key Agreement to surmount the constraints of traditional QKD systems to share keys between only two users. This enabled the first quantum conference call to share an image of a Cheshire cat between four parties, separated by up to 50 kilometers (31 miles) of optical fiber. Said Heriot-Watt's Alessandro Fedrizzi, "Our work is the first example where this was achieved via 'spooky action' between multiple users at the same time, something that a future quantum Internet will be able to exploit."
The world is one step closer to ultimately secure conference calls, thanks to a collaboration between Quantum Communications Hub researchers and their German colleagues, enabling a quantum-secure conversation to take place between four parties simultaneously.
The demonstration, led by Hub researchers based at Heriot-Watt University and published in Science Advances, is a timely advance, given the global reliance on remote collaborative working, including conference calls, since the start of the COVID-19 pandemic.
There have been reports of significant escalation of cyber-attacks on popular teleconferencing platforms in the last year. This advance in quantum secured communications could lead to conference calls with inherent unhackable security measures, underpinned by the principles of quantum physics.
Senior author, Professor Alessandro Fedrizzi, who led the team at Heriot-Watt, said: "We've long known that quantum entanglement, which Albert Einstein called 'spooky action at a distance' can be used for distributing secure keys. Our work is the first example where this was achieved via 'spooky action' between multiple users at the same time -- something that a future quantum internet will be able to exploit."
Secure communications rely upon the sharing of cryptographic keys. The keys used in most systems are relatively short and can therefore be compromised by hackers, and the key distribution procedure is under increasing threat from quickly advancing quantum computers. These growing threats to data security require new, secure methods of key distribution.
A mature quantum technology called Quantum Key Distribution (QKD), deployed in this demonstration in a network scenario for the first time, harnesses the properties of quantum physics to facilitate guaranteed secure distribution of cryptographic keys.
QKD has been used to secure communications for over three decades, facilitating communications over more than 400 km of terrestrial optical fibre and recently even through space. Crucially, however, these communications have only ever occurred between two parties, limiting the technology's practicality for securing conversations between multiple users.
The system demonstrated by the team here utilises a key property of quantum physics, entanglement, which gives correlations - stronger than any with which we are familiar in everyday life - between two or more quantum systems, even when these are separated by large distances.
By harnessing multi-party entanglement, the team were able to share keys simultaneously between the four parties, through a process known as 'Quantum Conference Key Agreement', overcoming the limitations of traditional QKD systems to share keys between just two users, and enabling the first quantum conference call to occur with an image of a Cheshire cat shared between the four parties, separated by up to 50 km of optical fibre.
Entanglement-based quantum networks are just one part of a large programme of work that the Quantum Communications Hub, which is funded by EPSRC as part of the UK National Quantum Technologies Programme, is undertaking to deliver future quantum secured networks.
The technology demonstrated here has potential to drastically reduce the resource costs for conference calls in quantum networks when compared to standard two-party QKD methods. It is one of the first examples of the expected benefits of a future quantum internet, which is expected to supply entanglement to a system of globally distributed nodes.
You can find out more about the Quantum Communications Hub and the UK National Quantum Technologies Programme on their websites. |
|||
269 | TLS Attack Lets Attackers Launch Cross-Protocol Attacks Against Secure Sites | Researchers have disclosed a new type of attack that exploits misconfigurations in transport layer security (TLS) servers to redirect HTTPS traffic from a victim's web browser to a different TLS service endpoint located on another IP address to steal sensitive information.
The attacks have been dubbed ALPACA , short for "Application Layer Protocol Confusion - Analyzing and mitigating Cracks in tls Authentication," by a group of academics from Ruhr University Bochum, Münster University of Applied Sciences, and Paderborn University.
"Attackers can redirect traffic from one subdomain to another, resulting in a valid TLS session," the study said. "This breaks the authentication of TLS and cross-protocol attacks may be possible where the behavior of one protocol service may compromise the other at the application layer."
TLS is a cryptographic protocol underpinning several application layer protocols like HTTPS, SMTP, IMAP, POP3, and FTP to secure communications over a network with the goal of adding a layer of authentication and preserving integrity of exchanged data while in transit.
ALPACA attacks are possible because TLS does not bind a TCP connection to the intended application layer protocol, the researchers elaborated. The failure of TLS to protect the integrity of the TCP connection could therefore be abused to "redirect TLS traffic for the intended TLS service endpoint and protocol to another, substitute TLS service endpoint and protocol."
Given a client (i.e., web browser) and two application servers (i.e., the intended and substitute), the goal is to trick the substitute server into accepting application data from the client, or vice versa. Since the client uses a specific protocol to open a secure channel with the intended server (say, HTTPS) while the substitute server employs a different application layer protocol (say, FTP) and runs on a separate TCP endpoint, the mix-up culminates in what's called a cross-protocol attack.
At least three hypothetical cross-protocol attack scenarios have been uncovered, which can be leveraged by an adversary to circumvent TLS protections and target FTP and email servers. The attacks, however, hinge on the prerequisite that the perpetrator can intercept and divert the victim's traffic at the TCP/IP layer.
Put simply, the attacks take the form of a man-in-the-middle (MitM) scheme wherein the malicious actor entices a victim into opening a website under their control to trigger a cross-origin HTTPS request with a specially crafted FTP payload. This request is then redirected to an FTP server that uses a certificate that's compatible with that of the website, thus spawning a valid TLS session.
Consequently, the misconfiguration in TLS services can be exploited to exfiltrate authentication cookies or other private data to the FTP server (Upload Attack), retrieve a malicious JavaScript payload from the FTP server in a stored XSS attack (Download Attack), or even execute a reflected XSS in the context of the victim website (Reflection Attack).
All TLS servers that have compatible certificates with other TLS services are expected to be affected. In an experimental setup, the researchers found that at least 1.4 million web servers were vulnerable to cross-protocol attacks, with 114,197 of the servers considered prone to attacks using an exploitable SMTP, IMAP, POP3, or FTP server with a trusted and compatible certificate.
To counter cross-protocol attacks, the researchers propose utilizing Application Layer Protocol Negotiation ( ALPN ) and Server Name Indication ( SNI ) extensions to TLS that can be used by a client to let the server know about the intended protocol to be used over a secure connection and the hostname it's attempting to connect to at the start of the handshake process.
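To see what the proposed countermeasure looks like in practice, here is a minimal, hypothetical sketch using Python's standard ssl module - not code from the researchers, and the hostname, port and certificate paths are placeholders: the server only serves connections that negotiated the expected application protocol via ALPN and that asked for the expected hostname via SNI, so traffic redirected from a different protocol or subdomain is rejected at or immediately after the handshake.

    import socket
    import ssl

    EXPECTED_HOSTNAME = "www.example.com"                  # hypothetical host this endpoint serves

    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain("server.crt", "server.key")    # placeholder certificate files
    context.set_alpn_protocols(["http/1.1"])               # strict ALPN: this endpoint only speaks HTTP/1.1

    def check_sni(ssl_socket, server_name, ctx):
        # Strict SNI: abort handshakes that were not addressed to this hostname.
        if server_name != EXPECTED_HOSTNAME:
            return ssl.ALERT_DESCRIPTION_HANDSHAKE_FAILURE
        return None

    context.sni_callback = check_sni

    with socket.create_server(("0.0.0.0", 8443)) as listener:
        while True:
            raw_conn, addr = listener.accept()
            try:
                conn = context.wrap_socket(raw_conn, server_side=True)
            except ssl.SSLError:
                raw_conn.close()
                continue
            # Strict ALPN: a client that negotiated nothing, or a different
            # protocol, is not a browser speaking HTTPS to this endpoint.
            if conn.selected_alpn_protocol() != "http/1.1":
                conn.close()
                continue
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
            conn.close()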
The findings are expected to be presented at Black Hat USA 2021 and at USENIX Security Symposium 2021. Additional artifacts relevant to the ALPACA attack can be accessed via GitHub here . | A new transport layer security (TLS) attack allows hackers to reroute HTTPS traffic from a target's Web browser to a different TLS service endpoint on another Internet Protocol (IP) address, according to researchers at Germany's Ruhr University Bochum, Munster University of Applied Sciences, and Paderborn University. The researchers said the ALPACA (Application Layer Protocol Confusion - Analyzing and mitigating Cracks in TLS Authentication) exploit is basically a man-in-the-middle scheme in which the malefactor tricks a victim into accessing a malicious Website to invoke a cross-origin HTTPS request with a specially engineered file transfer protocol payload. The team proposed using Application Layer Protocol Negotiation and Server Name Indication extensions to TLS so servers are aware of the intended protocol to be employed over a secure connection and the hostname to which it tries to connect at the beginning of the handshake process. | [] | [] | [] | scitechnews | None | None | None | None | A new transport layer security (TLS) attack allows hackers to reroute HTTPS traffic from a target's Web browser to a different TLS service endpoint on another Internet Protocol (IP) address, according to researchers at Germany's Ruhr University Bochum, Munster University of Applied Sciences, and Paderborn University. The researchers said the ALPACA (Application Layer Protocol Confusion - Analyzing and mitigating Cracks in TLS Authentication) exploit is basically a man-in-the-middle scheme in which the malefactor tricks a victim into accessing a malicious Website to invoke a cross-origin HTTPS request with a specially engineered file transfer protocol payload. The team proposed using Application Layer Protocol Negotiation and Server Name Indication extensions to TLS so servers are aware of the intended protocol to be employed over a secure connection and the hostname to which it tries to connect at the beginning of the handshake process.
Researchers have disclosed a new type of attack that exploits misconfigurations in transport layer security (TLS) servers to redirect HTTPS traffic from a victim's web browser to a different TLS service endpoint located on another IP address to steal sensitive information.
The attacks have been dubbed ALPACA , short for "Application Layer Protocol Confusion - Analyzing and mitigating Cracks in tls Authentication," by a group of academics from Ruhr University Bochum, Münster University of Applied Sciences, and Paderborn University.
"Attackers can redirect traffic from one subdomain to another, resulting in a valid TLS session," the study said. "This breaks the authentication of TLS and cross-protocol attacks may be possible where the behavior of one protocol service may compromise the other at the application layer."
TLS is a cryptographic protocol underpinning several application layer protocols like HTTPS, SMTP, IMAP, POP3, and FTP to secure communications over a network with the goal of adding a layer of authentication and preserving integrity of exchanged data while in transit.
ALPACA attacks are possible because TLS does not bind a TCP connection to the intended application layer protocol, the researchers elaborated. The failure of TLS to protect the integrity of the TCP connection could therefore be abused to "redirect TLS traffic for the intended TLS service endpoint and protocol to another, substitute TLS service endpoint and protocol."
Given a client (i.e., web browser) and two application servers (i.e., the intended and substitute), the goal is to trick the substitute server into accepting application data from the client, or vice versa. Since the client uses a specific protocol to open a secure channel with the intended server (say, HTTPS) while the substitute server employs a different application layer protocol (say, FTP) and runs on a separate TCP endpoint, the mix-up culminates in what's called a cross-protocol attack.
At least three hypothetical cross-protocol attack scenarios have been uncovered, which can be leveraged by an adversary to circumvent TLS protections and target FTP and email servers. The attacks, however, hinge on the prerequisite that the perpetrator can intercept and divert the victim's traffic at the TCP/IP layer.
Put simply, the attacks take the form of a man-in-the-middle (MitM) scheme wherein the malicious actor entices a victim into opening a website under their control to trigger a cross-origin HTTPS request with a specially crafted FTP payload. This request is then redirected to an FTP server that uses a certificate that's compatible with that of the website, thus spawning a valid TLS session.
Consequently, the misconfiguration in TLS services can be exploited to exfiltrate authentication cookies or other private data to the FTP server (Upload Attack), retrieve a malicious JavaScript payload from the FTP server in a stored XSS attack (Download Attack), or even execute a reflected XSS in the context of the victim website (Reflection Attack).
All TLS servers that have compatible certificates with other TLS services are expected to be affected. In an experimental setup, the researchers found that at least 1.4 million web servers were vulnerable to cross-protocol attacks, with 114,197 of the servers considered prone to attacks using an exploitable SMTP, IMAP, POP3, or FTP server with a trusted and compatible certificate.
To counter cross-protocol attacks, the researchers propose utilizing Application Layer Protocol Negotiation ( ALPN ) and Server Name Indication ( SNI ) extensions to TLS that can be used by a client to let the server know about the intended protocol to be used over a secure connection and the hostname it's attempting to connect to at the start of the handshake process.
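For illustration, the snippet below is a minimal sketch (not taken from the paper) of how a client can send both extensions using Python's standard ssl module; the hostname is a placeholder. A server that strictly checks the received ALPN value and SNI hostname can then reject cross-protocol connections of the kind described above.

```python
import socket
import ssl

HOST = "example.com"  # placeholder hostname, not a real endpoint from the study

# Build a client context that advertises the intended application protocol (ALPN)
# and the intended hostname (SNI) during the TLS handshake.
context = ssl.create_default_context()      # certificate verification on by default
context.set_alpn_protocols(["http/1.1"])    # ALPN: "this connection is meant for HTTP"

with socket.create_connection((HOST, 443)) as sock:
    # server_hostname fills in the SNI extension of the ClientHello
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        print("negotiated ALPN protocol:", tls.selected_alpn_protocol())
        print("negotiated TLS version:  ", tls.version())
```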
The findings are expected to be presented at Black Hat USA 2021 and at USENIX Security Symposium 2021. Additional artifacts relevant to the ALPACA attack can be accessed via GitHub here . |
|||
271 | Amazon Details Warehouse Robots, 'Ernie' and 'Bert' | Retail giant Amazon is testing new robots designed to reduce worker stress and potential for injury. An Amazon blog post said the test involves four robots programmed to move items across warehouses, in close proximity to workers. Ernie helps remove items from a robotic shelf; Amazon said testing indicates it could improve worker safety. Bert is one of Amazon's first independently navigating Autonomous Mobile Robots (AMRs), which the company said can function safely even when moving among employees who also are in motion. Two more robots currently under development, Scooter and Kermit, are cart-transporting AMRs that Amazon said could move empty packages across warehouses, allowing workers to concentrate on less-strenuous tasks that involve critical thinking. | [] | [] | [] | scitechnews | None | None | None | None | Retail giant Amazon is testing new robots designed to reduce worker stress and potential for injury. An Amazon blog post said the test involves four robots programmed to move items across warehouses, in close proximity to workers. Ernie helps remove items from a robotic shelf; Amazon said testing indicates it could improve worker safety. Bert is one of Amazon's first independently navigating Autonomous Mobile Robots (AMRs), which the company said can function safely even when moving among employees who also are in motion. Two more robots currently under development, Scooter and Kermit, are cart-transporting AMRs that Amazon said could move empty packages across warehouses, allowing workers to concentrate on less-strenuous tasks that involve critical thinking.
|
||||
274 | Teams Engineer Complex Human Tissues, Win Top Prizes in NASA Challenge | Two scientific teams at the Wake Forest Institute for Regenerative Medicine (WFIRM) have placed first and second in the U.S. National Aeronautics and Space Administration (NASA) Vascular Tissue Challenge, a contest to advance tissue engineering to benefit people on Earth and future space explorers. Teams Winston and WFRIM three-dimensionally (3D) -printed laboratory-cultured human liver tissues that could survive and function in a manner like their in-body counterparts. Each team assembled a cube-shaped tissue that could function for 30 days in the lab, using gel-like scaffolds with a network of channels to maintain oxygen and nutrient levels. Team Winston will work with the International Space Station U.S. National Laboratory to adapt its technique for space. | [] | [] | [] | scitechnews | None | None | None | None | Two scientific teams at the Wake Forest Institute for Regenerative Medicine (WFIRM) have placed first and second in the U.S. National Aeronautics and Space Administration (NASA) Vascular Tissue Challenge, a contest to advance tissue engineering to benefit people on Earth and future space explorers. Teams Winston and WFRIM three-dimensionally (3D) -printed laboratory-cultured human liver tissues that could survive and function in a manner like their in-body counterparts. Each team assembled a cube-shaped tissue that could function for 30 days in the lab, using gel-like scaffolds with a network of channels to maintain oxygen and nutrient levels. Team Winston will work with the International Space Station U.S. National Laboratory to adapt its technique for space.
|
||||
275 | Facing Shortage of High-Skilled Workers, Employers Are Seeking More Immigrant Talent, Study Finds | The U.S. does not have enough high-skilled workers to meet demand for computer-related jobs, and employers are seeking immigrant talent to help fill that gap, according to a new report released Thursday.
For every unemployed computer or math worker in the country in 2020, there were more than seven job postings for computer-related occupations, bipartisan immigration research group New American Economy found.
"More nuanced and responsive policy around employment-based immigration could be one way to help the U.S. more quickly and more robustly bounce back from the Covid-19 [pandemic] and future economic disruptions and crises," the report said.
The study comes as record job openings in the U.S. coincide with persistent unemployment , suggesting a mismatch in labor demand and supply. The U.S. Chamber of Commerce last week launched a campaign calling for an increase in employment-based immigration to address the worker shortage.
NAE, which was founded by billionaire Mike Bloomberg, analyzed data from Labor Certification Applications for foreign-born skilled workers, unemployment numbers from the Bureau of Labor Statistics, and job postings data from the website Burning Glass Technologies.
Employers in the U.S. posted 1.36 million job openings for computer-related roles in 2020, according to NAE's analysis. Yet there were only 177,000 unemployed workers in computer and math occupations last year, NAE found, using Labor Department data.
"Even something as powerful and traumatic and unprecedented as Covid did not put a dent in the country's demand and shortage of high-skilled STEM talent," said Dick Burke, president and CEO of Envoy Global, an immigration services firm that co-authored the study.
Employers continued to seek high-skilled immigrant workers to fill labor shortages during the pandemic. There were 371,641 foreign labor requests for computer-related jobs filed in 2020, NAE reported.
The U.S. disproportionately relies on foreign-born talent in computer-related jobs. Immigrants made up 25% of the computer workforce in 2019, according to NAE's analysis of Census data, compared with 17.4% of the broader labor force, according to the Labor Department .
"The evidence in this report is really adding more support to the idea that there are still needs from employers in the United States for computer-related workers that are not being addressed by current immigration policy in the United States," said Andrew Lim, director of quantitative research at NAE.
Seven of the 10 fastest-growing jobs for immigrant workers, as measured by Labor Certification Applications requests, were computer-related, NAE reported. | U.S. employers are seeking out immigrants for computer-related jobs amid a shortage of domestic talent, according to a study by bipartisan immigration research group New American Economy (NAE). While U.S. employers posted 1.36 million openings for computer-related jobs last year, Labor Department data indicated only 177,000 computer and math workers were unemployed. NAE found more than seven job postings for computer-related occupations for each unemployed U.S computer or math worker. Census data indicated immigrants constituted 25% of the computer workforce in 2019, while Labor estimated they comprised 17.4% of the broader workforce. Said NAE's Andrew Lim, "The evidence in this report is really adding more support to the idea that there are still needs from employers in the U.S. for computer-related workers that are not being addressed by current immigration policy in the U.S." | [] | [] | [] | scitechnews | None | None | None | None | U.S. employers are seeking out immigrants for computer-related jobs amid a shortage of domestic talent, according to a study by bipartisan immigration research group New American Economy (NAE). While U.S. employers posted 1.36 million openings for computer-related jobs last year, Labor Department data indicated only 177,000 computer and math workers were unemployed. NAE found more than seven job postings for computer-related occupations for each unemployed U.S computer or math worker. Census data indicated immigrants constituted 25% of the computer workforce in 2019, while Labor estimated they comprised 17.4% of the broader workforce. Said NAE's Andrew Lim, "The evidence in this report is really adding more support to the idea that there are still needs from employers in the U.S. for computer-related workers that are not being addressed by current immigration policy in the U.S."
The U.S. does not have enough high-skilled workers to meet demand for computer-related jobs, and employers are seeking immigrant talent to help fill that gap, according to a new report released Thursday.
For every unemployed computer or math worker in the country in 2020, there were more than seven job postings for computer-related occupations, bipartisan immigration research group New American Economy found.
"More nuanced and responsive policy around employment-based immigration could be one way to help the U.S. more quickly and more robustly bounce back from the Covid-19 [pandemic] and future economic disruptions and crises," the report said.
The study comes as record job openings in the U.S. coincide with persistent unemployment , suggesting a mismatch in labor demand and supply. The U.S. Chamber of Commerce last week launched a campaign calling for an increase in employment-based immigration to address the worker shortage.
NAE, which was founded by billionaire Mike Bloomberg, analyzed data from Labor Certification Applications for foreign-born skilled workers, unemployment numbers from the Bureau of Labor Statistics, and job postings data from the website Burning Glass Technologies.
Employers in the U.S. posted 1.36 million job openings for computer-related roles in 2020, according to NAE's analysis. Yet there were only 177,000 unemployed workers in computer and math occupations last year, NAE found, using Labor Department data.
"Even something as powerful and traumatic and unprecedented as Covid did not put a dent in the country's demand and shortage of high-skilled STEM talent," said Dick Burke, president and CEO of Envoy Global, an immigration services firm that co-authored the study.
Employers continued to seek high-skilled immigrant workers to fill labor shortages during the pandemic. There were 371,641 foreign labor requests for computer-related jobs filed in 2020, NAE reported.
The U.S. disproportionately relies on foreign-born talent in computer-related jobs. Immigrants made up 25% of the computer workforce in 2019, according to NAE's analysis of Census data, compared with 17.4% of the broader labor force, according to the Labor Department .
"The evidence in this report is really adding more support to the idea that there are still needs from employers in the United States for computer-related workers that are not being addressed by current immigration policy in the United States," said Andrew Lim, director of quantitative research at NAE.
Seven of the 10 fastest-growing jobs for immigrant workers, as measured by Labor Certification Applications requests, were computer-related, NAE reported. |
|||
276 | Hackers Breach Electronic Arts, Stealing Game Source Code and Tools | (CNN Business) Hackers have broken into the systems of Electronic Arts, one of the world's biggest video game publishers, and stolen source code used in company games, a spokesperson confirmed to CNN Business on Thursday. | A spokesperson for video game publisher Electronic Arts (EA) verified that hackers have compromised the company's systems and stolen game source code and other assets. The hackers claimed in online forum posts that they had acquired 780 gigabytes of data, including the Frostbite source code undergirding a series of video games, and were offering "full capability of exploiting on all EA services." The hackers also said they had stolen software development tools and server code for player matchmaking in several other games. The EA spokesperson said, "No player data was accessed, and we have no reason to believe there is any risk to player privacy," adding that the company is "actively working with law enforcement officials and other experts as part of this ongoing criminal investigation." | [] | [] | [] | scitechnews | None | None | None | None | A spokesperson for video game publisher Electronic Arts (EA) verified that hackers have compromised the company's systems and stolen game source code and other assets. The hackers claimed in online forum posts that they had acquired 780 gigabytes of data, including the Frostbite source code undergirding a series of video games, and were offering "full capability of exploiting on all EA services." The hackers also said they had stolen software development tools and server code for player matchmaking in several other games. The EA spokesperson said, "No player data was accessed, and we have no reason to believe there is any risk to player privacy," adding that the company is "actively working with law enforcement officials and other experts as part of this ongoing criminal investigation."
(CNN Business) Hackers have broken into the systems of Electronic Arts, one of the world's biggest video game publishers, and stolen source code used in company games, a spokesperson confirmed to CNN Business on Thursday. |
|||
278 | Following E-Cigarette Conversations on Twitter Using AI | The advertising of nicotine products is highly restricted, but social media allows a way for these products to be marketed to young people. What's more, e-cigarette flavourings make them particularly appealing to teenagers and young adults. A team of researchers have developed machine learning methods to track the conversations on social media about flavoured products by one of the most popular e-cigarette brands, JUUL .
'An increasing amount of discussions on e-cigarettes is taking place online, in particular in popular social media such as Twitter, Instagram, and Facebook. As the content related to e-cigarettes is often targeted at youth - who are also very active on many social media platforms - it is important to explore these conversations' says Dr Aqdas Malik , Postdoctoral Researcher in the Department of Computer Science at Aalto University.
Previous research has shown that young people find the flavouring of e-cigarettes appealing, and Malik himself has used AI to study how vaping companies are using Instagram to promote their products to young people. In their new work, the team developed machine learning methods to study key themes and sentiment revolving around the Twitter conversations about JUUL flavors.
The team analysed over 30,000 tweets, and found many positive tweets about the different flavours. 'Popular flavors, such as mango, mint, and cucumber are highly appealing but also addictive for young people, and must be further regulated,' said Malik. 'There is also a need to cap the promotional activities by e-cigarettes retailers such as giveaways, announcing new stock arrivals, discounts, and "buy more, save more" campaigns.'
Overall, the tweets were overwhelmingly positive in tone, though some arguments were made against the product and the addictiveness of its flavours. Another core theme among negative conversations was proposed legislation, mostly from anti-tobacco campaigners and news outlets.
The team hopes that the AI tools that they have developed, which are built upon the open-source BERT platform by Google, could be used by regulators to help monitor how e-cigarette products are promoted to youngsters. Trained on web-based data, Google BERT is a relatively new machine learning technique for natural language processing and has been previously shown to excel at predicting sentiment -- allowing the team to label individual tweets as positive or negative.
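As a rough illustration of this kind of sentiment labelling (not the study's own model or training pipeline), a pretrained BERT-family classifier from the open-source Hugging Face transformers library can be applied to example tweets; the tweets below are invented.

```python
# Generic BERT-style sentiment labelling with Hugging Face transformers;
# the pretrained model is a stand-in for the fine-tuned classifier described in the paper.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a pretrained model on first use

tweets = [
    "Just tried the mango flavour and it is honestly so good",
    "These flavoured pods are getting kids hooked - they should be regulated",
]
for tweet, result in zip(tweets, classifier(tweets)):
    print(f"{result['label']:>8}  ({result['score']:.2f})  {tweet}")
```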
While this work has focused on Twitter messaging, the tools used can be easily applied to textual data on other social media platforms, too. For the next stage of their work, Malik's team will apply their machine learning methods to understand trends in how people talk about e-cigarettes and other substances on TikTok, Reddit, and YouTube.
The paper, "Modelling Public Sentiments about Juul Flavors on Twitter through Machine Learning," by Aqdas Malik, Muhammad Irfan Khan, Habib Karbasian, Marko Nieminen, Muhammad Ammad-Ud-Din, and Suleiman Khan, in Nicotine & Tobacco Research, 2021, ntab098, is available to read online at https://doi.org/10.1093/ntr/ntab098
Aqdas Malik Postdoctoral Researcher +358 408 682 398 [email protected] | Researchers at Finland's Aalto University designed machine learning (ML) techniques to follow Twitter-based conversations about the flavors of electronic cigarettes made by JUUL. The researchers built the tools on Google's open source Bidirectional Encoder Representations from Transformers platform, which previously demonstrated excellent sentiment-prediction ability. They analyzed over 30,000 tweets and found positive tweets about the different JUUL flavors; Aalto's Aqdas Malik said flavors like mango, mint, and cucumber appeal to young people but also are addictive, making a case for regulation. In addition, Malik said, "There is also a need to cap the promotional activities by e-cigarettes retailers, such as giveaways, announcing new stock arrivals, discounts, and 'buy more, save more' campaigns." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Finland's Aalto University designed machine learning (ML) techniques to follow Twitter-based conversations about the flavors of electronic cigarettes made by JUUL. The researchers built the tools on Google's open source Bidirectional Encoder Representations from Transformers platform, which previously demonstrated excellent sentiment-prediction ability. They analyzed over 30,000 tweets and found positive tweets about the different JUUL flavors; Aalto's Aqdas Malik said flavors like mango, mint, and cucumber appeal to young people but also are addictive, making a case for regulation. In addition, Malik said, "There is also a need to cap the promotional activities by e-cigarettes retailers, such as giveaways, announcing new stock arrivals, discounts, and 'buy more, save more' campaigns."
The advertising of nicotine products is highly restricted, but social media allows a way for these products to be marketed to young people. What's more, e-cigarette flavourings make them particularly appealing to teenagers and young adults. A team of researchers have developed machine learning methods to track the conversations on social media about flavoured products by one of the most popular e-cigarette brands, JUUL .
'An increasing amount of discussions on e-cigarettes is taking place online, in particular in popular social media such as Twitter, Instagram, and Facebook. As the content related to e-cigarettes is often targeted at youth - who are also very active on many social media platforms - it is important to explore these conversations' says Dr Aqdas Malik , Postdoctoral Researcher in the Department of Computer Science at Aalto University.
Previous research has shown that young people find the flavouring of e-cigarettes appealing, and Malik himself has used AI to study how vaping companies are using Instagram to promote their products to young people. In their new work, the team developed machine learning methods to study key themes and sentiment revolving around the Twitter conversations about JUUL flavors.
The team analysed over 30,000 tweets, and found many positive tweets about the different flavours. 'Popular flavors, such as mango, mint, and cucumber are highly appealing but also addictive for young people, and must be further regulated,' said Malik. 'There is also a need to cap the promotional activities by e-cigarettes retailers such as giveaways, announcing new stock arrivals, discounts, and "buy more, save more" campaigns.'
Overall, the tweets were overwhelmingly positive in tone, though some arguments were made against the product and the addictiveness of its flavours. Another core theme among negative conversations was proposed legislation, mostly from anti-tobacco campaigners and news outlets.
The team hopes that the AI tools that they have developed, which are built upon the open-source BERT platform by Google, could be used by regulators to help monitor how e-cigarette products are promoted to youngsters. Trained on web-based data, Google BERT is a relatively new machine learning technique for natural language processing and has been previously shown to excel at predicting sentiment -- allowing the team to label individual tweets as positive or negative.
While this work has focused on Twitter messaging, the tools used can be easily applied to textual data on other social media platforms, too. For the next stage of their work, Malik's team will apply their machine learning methods to understand trends in how people talk about e-cigarettes and other substances on TikTok, Reddit, and YouTube.
The paper, "Modelling Public Sentiments about Juul Flavors on Twitter through Machine Learning," by Aqdas Malik, Muhammad Irfan Khan, Habib Karbasian, Marko Nieminen, Muhammad Ammad-Ud-Din, and Suleiman Khan, in Nicotine & Tobacco Research, 2021, ntab098, is available to read online at https://doi.org/10.1093/ntr/ntab098
Aqdas Malik Postdoctoral Researcher +358 408 682 398 [email protected] |
|||
279 | Tech Companies Want to Make Holograms Part of Office Life | WeWork last month announced a partnership with ARHT Media Inc., a hologram technology company, to bring holograms to 100 WeWork buildings in 16 locations around the world. The effort begins this month with New York, Los Angeles and Miami.
And Microsoft Corp. in March introduced what it calls a mixed-reality service, Microsoft Mesh, which integrates three-dimensional images of people and content into the compatible displays of smart glasses or other devices.
The companies say holograms and related technology will soon become common in conference rooms all over the world. Still, the costs involved mean holograms have yet to prove useful for everyday interactions.
Three-dimensional representations improve on traditional phone and video calls because they make it easier to read body language and feel more personal, backers say.
"There's Zoom fatigue, there's a lot of friction to being on video all day - it is exhausting," said Brianne Kimmel, founder and managing partner of WorkLife Ventures, a venture-capital firm that specializes in the future of work technologies. Holograms and avatars enable "a new style of communication, where you'll have better, more frequent interactions," Ms. Kimmel said.
Although the companies were experimenting with holograms before the pandemic, they say the past year created a more urgent need for them. The technology could aid employers' visions for hybrid offices where some workers are present on a given day while others report in from home.
But holograms and similar technologies are likely to have limits in the workplace, analysts said.
Workplace holograms might be best suited for situations such as recorded events, trainings or seminars, said Kanishka Chauhan, principal research analyst at Gartner Inc., a research firm. Live hologram meetings are likely to be bogged down by complex and time-consuming logistics, he said.
WeWork, however, envisions holograms for a variety of uses. Customers will be able to record or live stream three-dimensional videos for a virtual audience via videoconferencing, a physical audience at a WeWork, or a combination of both. The holograms are viewable on an ARHT Media "HoloPod," an 8-foot-tall screen structure with a camera, microphone and projector, or a "HoloPresence," a screen meant to be used on a stage, a computer, or a tablet.
Pricing will vary. WeWork will charge $2,500 for holograms to be displayed on a standard HoloPod at a single location, for example, and $25,000 for multiple holograms that appear simultaneously on a shared virtual stage.
Holograms give remote interaction a more natural feel than standard team video calls, where people talk at the same time by accident and participants can't see body language cues, according to Hamid Hashemi, chief product and experience officer at WeWork.
"This is something that technically you can do with Zoom, but it is not as effective," Mr. Hashemi said.
Google's Project Starline, the company's experimental 3D video-chat technology, so far exists only in Google's offices, where it remains under development, but the company plans to test the technology this year with "select enterprise partners," it said in a blog post.
The newer players join companies that have been experimenting with holographic technology for a number of years. Portl Inc. expanded the chief application of its system from trade show displays to include helping celebrities make holographic appearances at events such as the iHeartRadio Music Festival last year as well as bringing executives "onstage" at conferences around the world.
The Portl holograms appear on a 7-foot-tall booth called the Epic or a 24-inch box called the Mini. The company has raised $3 million in funding from investors including venture capitalist Tim Draper.
"I really do believe this is a communication and broadcast tool and that these will be used in conference rooms all over the world," said David Nussbaum, chief executive of Portl.
Write to Ann-Marie Alcántara at ann-marie.alcantara@wsj.com | To reduce Zoom fatigue, some companies want to implement holograms in the workplace. Proponents of hologram technology say three-dimensional (3D) representations of people on video calls feel more personal and help participants read body language. WorkLife Ventures' Brianne Kimmel said holograms and avatars allow for "a new style of communication, where you'll have better, more frequent interactions." Gartner's Kanishka Chauhan said holograms may be best for recorded events, trainings, or seminars, citing the complex and time-consuming logistics of live hologram meetings. WeWork, which has partnered with hologram technology company ARHT Media to bring holograms to 100 WeWork buildings across the globe, expects to use them for recorded or livestreamed videos to virtual, physical, and hybrid audiences. | [] | [] | [] | scitechnews | None | None | None | None | To reduce Zoom fatigue, some companies want to implement holograms in the workplace. Proponents of hologram technology say three-dimensional (3D) representations of people on video calls feel more personal and help participants read body language. WorkLife Ventures' Brianne Kimmel said holograms and avatars allow for "a new style of communication, where you'll have better, more frequent interactions." Gartner's Kanishka Chauhan said holograms may be best for recorded events, trainings, or seminars, citing the complex and time-consuming logistics of live hologram meetings. WeWork, which has partnered with hologram technology company ARHT Media to bring holograms to 100 WeWork buildings across the globe, expects to use them for recorded or livestreamed videos to virtual, physical, and hybrid audiences.
WeWork last month announced a partnership with ARHT Media Inc., a hologram technology company, to bring holograms to 100 WeWork buildings in 16 locations around the world. The effort begins this month with New York, Los Angeles and Miami.
And Microsoft Corp. in March introduced what it calls a mixed-reality service, Microsoft Mesh, which integrates three-dimensional images of people and content into the compatible displays of smart glasses or other devices.
The companies say holograms and related technology will soon become common in conference rooms all over the world. Still, the costs involved mean holograms have yet to prove useful for everyday interactions.
Three-dimensional representations improve on traditional phone and video calls because they make it easier to read body language and feel more personal, backers say.
"There's Zoom fatigue, there's a lot of friction to being on video all day - it is exhausting," said Brianne Kimmel, founder and managing partner of WorkLife Ventures, a venture-capital firm that specializes in the future of work technologies. Holograms and avatars enable "a new style of communication, where you'll have better, more frequent interactions," Ms. Kimmel said.
Although the companies were experimenting with holograms before the pandemic, they say the past year created a more urgent need for them. The technology could aid employers' visions for hybrid offices where some workers are present on a given day while others report in from home.
But holograms and similar technologies are likely to have limits in the workplace, analysts said.
Workplace holograms might be best suited for situations such as recorded events, trainings or seminars, said Kanishka Chauhan, principal research analyst at Gartner Inc., a research firm. Live hologram meetings are likely to be bogged down by complex and time-consuming logistics, he said.
WeWork, however, envisions holograms for a variety of uses. Customers will be able to record or live stream three-dimensional videos for a virtual audience via videoconferencing, a physical audience at a WeWork, or a combination of both. The holograms are viewable on an ARHT Media "HoloPod," an 8-foot-tall screen structure with a camera, microphone and projector, or a "HoloPresence," a screen meant to be used on a stage, a computer, or a tablet.
Pricing will vary. WeWork will charge $2,500 for holograms to be displayed on a standard HoloPod at a single location, for example, and $25,000 for multiple holograms that appear simultaneously on a shared virtual stage.
Holograms give remote interaction a more natural feel than standard team video calls, where people talk at the same time by accident and participants can't see body language cues, according to Hamid Hashemi, chief product and experience officer at WeWork.
"This is something that technically you can do with Zoom, but it is not as effective," Mr. Hashemi said.
Google's Project Starline, the company's experimental 3D video-chat technology, so far exists only in Google's offices, where it remains under development, but the company plans to test the technology this year with "select enterprise partners," it said in a blog post.
The newer players join companies that have been experimenting with holographic technology for a number of years. Portl Inc. expanded the chief application of its system from trade show displays to include helping celebrities make holographic appearances at events such as the iHeartRadio Music Festival last year as well as bringing executives "onstage" at conferences around the world.
The Portl holograms appear on a 7-foot-tall booth called the Epic or a 24-inch box called the Mini. The company has raised $3 million in funding from investors including venture capitalist Tim Draper.
"I really do believe this is a communication and broadcast tool and that these will be used in conference rooms all over the world," said David Nussbaum, chief executive of Portl.
Write to Ann-Marie Alcántara at ann-marie.alcantara@wsj.com |
|||
280 | Could Your Smart Watch Alert You to Risk of Sudden Death? | Every year in the UK thousands of people die of sudden cardiac death (SCD), where the heart develops a chaotic rhythm that impairs its ability to pump blood. Usually, identifying people at risk of SCD requires a visit to hospital for tests. This new algorithm could, in future, enable everyday wearable technology to detect potentially deadly changes in the wearer's heart rhythm.
The algorithm was developed by researchers from Queen Mary University of London and University College London. They found that it was able to identify changes on electrocardiograms (ECGs, which measure electrical activity in the heart) that were significantly associated with the risk of being hospitalised or dying due to an abnormal heart rhythm.
The team used data from nearly 24,000 participants from the UK Biobank Imaging study, which was part-funded by the British Heart Foundation, to get a reference for normal T waves on an ECG. The T wave represents the time it takes the ventricles (the two larger chambers of the heart) to relax once they have pumped blood out of the heart. An abnormal T wave can indicate an increased risk of ventricular arrhythmia, an abnormal heartbeat that begins in the ventricles (main pumping chambers) of the heart. Ventricular arrhythmias are a major cause of sudden death.
They then applied the algorithm to ECG data from over 50,000 other people in the UK Biobank study to look for an association between changes in the shape of the T wave on a resting ECG and the risk of being hospitalised or dying because of arrhythmia, heart attack or heart failure. They found that people with the biggest changes in their T waves over time were significantly more likely to be hospitalised or die due to ventricular arrhythmias.
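The exact marker used in the study is more sophisticated, but the general idea - scoring how far an individual's average T-wave shape drifts from a population reference - can be sketched as follows. This is a schematic illustration only, using synthetic data and an invented deviation score, not the published algorithm.

```python
import numpy as np

def t_wave_deviation(individual_t_waves: np.ndarray, reference_t_wave: np.ndarray) -> float:
    """Crude shape-deviation score: distance between an individual's average,
    amplitude-normalised T wave and a population reference template."""
    mean_wave = individual_t_waves.mean(axis=0)
    mean_wave = mean_wave / np.linalg.norm(mean_wave)      # normalise so signal gain doesn't matter
    reference = reference_t_wave / np.linalg.norm(reference_t_wave)
    return float(np.linalg.norm(mean_wave - reference))

# Toy usage with synthetic beats: a half-sine "T wave" plus noise.
rng = np.random.default_rng(42)
reference = np.sin(np.linspace(0, np.pi, 80))
beats = reference + rng.normal(0.0, 0.05, size=(50, 80))
print("deviation score:", round(t_wave_deviation(beats, reference), 4))
```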
Dr Julia Ramirez, Lecturer at Queen Mary University of London, led the study. She said:
"Previously, finding warning signs that someone was at risk of arrhythmias and sudden death would have required them to have an ECG while undergoing an exercise test. We've been able to develop this algorithm so it can analyse ECGs from people taken while they're resting. This will make it much easier to roll this out for everyday use in the general population.
"Our algorithm was also better at predicting risk of arrhythmia than standard ECG risk markers. We still need to test it in more people, including different cohorts, to ensure it works as it is supposed to. However, once we've done this, we'll be ready to start studying the integration of the algorithm into wearable technology."
Professor Metin Avkiran, Associate Medical Director at the British Heart Foundation, said:
"Identifying people who are at risk of sudden cardiac death is a major challenge. This algorithm could act as a warning sign that someone is at risk of a life-threatening disturbance in their heart rhythm.
"While more work is needed to test the algorithm, this research is a step forward in our ability to identify people who could be at risk of severe arrhythmias and sudden death and take preventive action."
This research was funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 786833 and by the Medical Research Council. | A new algorithm engineered by researchers at the U.K.'s Queen Mary University of London (QMUL) and University College London could eventually enable everyday wearables to alert users to potentially fatal changes in heart rhythm. The algorithm can recognize electrocardiogram (ECG) readings correlating with the risk of hospitalization or death resulting from an abnormal heart rhythm. The team obtained a reference for normal T waves (the time for heart's ventricles to relax once they have pumped blood out) on an ECG from data on some 24,000 U.K. Biobank Imaging study participants, then applied the algorithm to ECG data from over 50,000 other participants. Results indicated that people with the biggest T-wave changes over time were more likely to be hospitalized or die from ventricular arrhythmias. QMUL's Julia Ramirez said the algorithm is "better at predicting risk of arrhythmia than standard ECG risk markers." | [] | [] | [] | scitechnews | None | None | None | None | A new algorithm engineered by researchers at the U.K.'s Queen Mary University of London (QMUL) and University College London could eventually enable everyday wearables to alert users to potentially fatal changes in heart rhythm. The algorithm can recognize electrocardiogram (ECG) readings correlating with the risk of hospitalization or death resulting from an abnormal heart rhythm. The team obtained a reference for normal T waves (the time for heart's ventricles to relax once they have pumped blood out) on an ECG from data on some 24,000 U.K. Biobank Imaging study participants, then applied the algorithm to ECG data from over 50,000 other participants. Results indicated that people with the biggest T-wave changes over time were more likely to be hospitalized or die from ventricular arrhythmias. QMUL's Julia Ramirez said the algorithm is "better at predicting risk of arrhythmia than standard ECG risk markers."
Every year in the UK thousands of people die of sudden cardiac death (SCD), where the heart develops a chaotic rhythm that impairs its ability to pump blood. Usually, identifying people at risk of SCD requires a visit to hospital for tests. This new algorithm could, in future, enable everyday wearable technology to detect potentially deadly changes in the wearer's heart rhythm.
The algorithm was developed by researchers from Queen Mary University of London and University College London. They found that it was able to identify changes on electrocardiograms (ECGs, which measure electrical activity in the heart) that were significantly associated with the risk of being hospitalised or dying due to an abnormal heart rhythm.
The team used data from nearly 24,000 participants from the UK Biobank Imaging study, which was part-funded by the British Heart Foundation, to get a reference for normal T waves on an ECG. The T wave represents the time it takes the ventricles (the two larger chambers of the heart) to relax once they have pumped blood out of the heart. An abnormal T wave can indicate an increased risk of ventricular arrhythmia, an abnormal heartbeat that begins in the ventricles (main pumping chambers) of the heart. Ventricular arrhythmias are a major cause of sudden death.
They then applied the algorithm to ECG data from over 50,000 other people in the UK Biobank study to look for an association between changes in the shape of the T wave on a resting ECG and the risk of being hospitalised or dying because of arrhythmia, heart attack or heart failure. They found that people with the biggest changes in their T waves over time were significantly more likely to be hospitalised or die due to ventricular arrhythmias.
Dr Julia Ramirez, Lecturer at Queen Mary University of London, led the study. She said:
"Previously, finding warning signs that someone was at risk of arrhythmias and sudden death would have required them to have an ECG while undergoing an exercise test. We've been able to develop this algorithm so it can analyse ECGs from people taken while they're resting. This will make it much easier to roll this out for everyday use in the general population.
"Our algorithm was also better at predicting risk of arrhythmia than standard ECG risk markers. We still need to test it in more people, including different cohorts, to ensure it works as it is supposed to. However, once we've done this, we'll be ready to start studying the integration of the algorithm into wearable technology."
Professor Metin Avkiran, Associate Medical Director at the British Heart Foundation, said:
"Identifying people who are at risk of sudden cardiac death is a major challenge. This algorithm could act as a warning sign that someone is at risk of a life-threatening disturbance in their heart rhythm.
"While more work is needed to test the algorithm, this research is a step forward in our ability to identify people who could be at risk of severe arrhythmias and sudden death and take preventive action."
This research was funded by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 786833 and by the Medical Research Council. |
|||
281 | Researchers Create 'Un-Hackable' Quantum Network Over Hundreds of Kilometers Using Optical Fiber | Researchers from Toshiba have successfully sent quantum information over 600-kilometer-long optical fibers, creating a new distance record and paving the way for large-scale quantum networks that could be used to exchange information securely between cities and even countries.
Working from the company's R&D lab in Cambridge in the UK, the scientists demonstrated that they could transmit quantum bits (or qubits) over hundreds of kilometers of optical fiber without scrambling the fragile quantum data encoded in the particles, thanks to a new technology that stabilizes the environmental fluctuations occurring in the fiber.
This could go a long way in helping to create a next-generation quantum internet that scientists hope will one day span global distances.
The quantum internet, which will take the shape of a global network of quantum devices connected by long-distance quantum communication links, is expected to enable use-cases that are impossible with today's web applications . They range from generating virtually un-hackable communications, to creating clusters of inter-connected quantum devices that together could surpass the compute power of classical devices.
But in order to communicate, quantum devices need to send and receive qubits - tiny particles that exist in a special, but extremely fragile, quantum state. Finding the best way to transmit qubits without having them fall from their quantum state has got scientists around the world scratching their heads for many years.
One approach consists of shooting qubits down optical fibers that connect quantum devices. The method has been successful but is limited in scale: small changes in the environment, such as temperature fluctuations, cause the fibers to expand and contract, and risk messing with the qubits.
This is why experiments with optical fiber, until now, have typically been limited to a range of hundreds of kilometers; in other words, nowhere near enough to create the large-scale, global quantum internet dreamed up by scientists.
To tackle the unstable conditions inside optical fibers, Toshiba's researchers developed a new technique called "dual band stabilization." The method sends two signals down the optical fiber at different wavelengths. The first wavelength is used to cancel out rapidly varying fluctuations, while the second, which matches the wavelength of the qubits, is used for finer adjustments of the phase.
Put simply, the two wavelengths combine to cancel environmental fluctuations inside the fiber in real time, which, according to Toshiba's researchers, enabled qubits to travel safely over 600 kilometers.
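A toy numerical sketch of the underlying idea (not Toshiba's implementation): because the reference band travels through the same fibre, it picks up essentially the same phase drift as the quantum signal, so measuring the reference lets that drift be subtracted in real time. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000

# Slowly wandering phase picked up in the fibre (a random walk), shared by both bands.
fibre_drift = np.cumsum(rng.normal(0.0, 0.05, n))

signal_phase = 0.3 + fibre_drift                          # phase we want to hold at 0.3 rad
reference_phase = fibre_drift + rng.normal(0.0, 0.01, n)  # same drift, measured on the second band

corrected = signal_phase - reference_phase                # subtract the measured drift

print("phase std without correction:", np.std(signal_phase - 0.3).round(3))
print("phase std with correction:   ", np.std(corrected - 0.3).round(3))
```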
Already, the company's team has used the technology to trial one of the most well-known applications of quantum networks: quantum-based encryption.
Known as Quantum Key Distribution (QKD), the protocol leverages quantum networks to create security keys that are impossible to hack, meaning that users can securely exchange confidential information, like bank statements or health records, over an untrusted communication channel such as the internet.
During a communication, QKD works by having one of the two parties encrypt a piece of data by encoding the cryptography key onto qubits and sending those qubits over to the other person thanks to a quantum network. Because of the laws of quantum mechanics, however, it is impossible for a spy to intercept the qubits without leaving a sign of eavesdropping that can be seen by the users - who, in turn, can take steps to protect the information.
Unlike classical cryptography, therefore, QKD does not rely on the mathematical complexity of solving security keys, but rather leverages the laws of physics. This means that even the most powerful computers would be unable to hack the qubits-based keys. It is easy to see why the idea is gathering the attention of players from all parts, ranging from financial institutions to intelligence agencies.
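To make the idea concrete, here is a toy simulation of BB84, the classic QKD protocol - a pedagogical sketch only, not the protocol variant or hardware used in the Toshiba experiment. Without an eavesdropper the sifted keys match; an interceptor who measured the qubits in guessed bases would show up as errors in the comparison step.

```python
import secrets

N = 32  # number of qubits sent in this toy run

alice_bits  = [secrets.randbelow(2) for _ in range(N)]
alice_bases = [secrets.randbelow(2) for _ in range(N)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(N)]

# Bob's outcome is Alice's bit when his basis matches hers; otherwise it is random.
bob_bits = [a if ab == bb else secrets.randbelow(2)
            for a, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: both sides publicly announce their bases and keep only matching positions.
sifted = [(a, b) for a, b, ab, bb in zip(alice_bits, bob_bits, alice_bases, bob_bases)
          if ab == bb]

# Comparing a sample of the sifted key reveals eavesdropping as an error rate.
errors = sum(a != b for a, b in sifted)
print(f"sifted key length: {len(sifted)}, disagreements: {errors}")
```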
Toshiba's new technique to reduce fluctuations in optical fibers enabled the researchers to carry out QKD over a much larger distance than previously possible. "This is a very exciting result," said Mirko Pittaluga, research scientist at Toshiba Europe. "With the new techniques we have developed, further extensions of the communication distance for QKD are still possible and our solutions can also be applied to other quantum communications protocols and applications."
When it comes to carrying out QKD using optical fiber, Toshiba's 600-kilometer mark is a record-breaker, which the company predicts will enable secure links to be created between cities like London, Paris, Brussels, Amsterdam and Dublin.
Other research groups, however, have focused on different methods to transmit qubits, which have enabled QKD to happen over even larger distances. Chinese scientists, for example, are using a mix of satellite-based transmissions communicating with optical fibers on the ground, and recently succeeded in carrying out QKD over a total distance of 4,600 kilometers .
Every approach has its pros and cons: using satellite technologies is more costly and could be harder to scale up. But one thing is certain: research groups in the UK, China and the US are experimenting at pace to make quantum networks a reality.
Toshiba's research was partially funded by the EU, which is showing a keen interest in developing quantum communications. Meanwhile, China's latest five-year plan also allocates a special place for quantum networks ; and the US recently published a blueprint laying out a step-by-step guide leading to the establishment of a global quantum internet. | Toshiba researchers in the U.K. transmitted quantum information over 600-kilometer (372-mile) -long optical fibers without disruption, demonstrating technology that stabilizes environmental fluctuations within the fibers. The researchers utilized dual-band stabilization to send two signals down the fiber at differing wavelengths, with one signal canceling out rapidly varying fluctuations, while the other made finer quantum-phase adjustments. The Toshiba team said this enabled the safe routing of quantum bits over the optical fiber, which it used to employ quantum-based encryption in the form of the Quantum Key Distribution protocol. Said Toshiba Europe's Mirko Pittaluga, "Further extensions of the communication distance for QKD are still possible ,and our solutions can also be applied to other quantum communications protocols and applications." | [] | [] | [] | scitechnews | None | None | None | None | Toshiba researchers in the U.K. transmitted quantum information over 600-kilometer (372-mile) -long optical fibers without disruption, demonstrating technology that stabilizes environmental fluctuations within the fibers. The researchers utilized dual-band stabilization to send two signals down the fiber at differing wavelengths, with one signal canceling out rapidly varying fluctuations, while the other made finer quantum-phase adjustments. The Toshiba team said this enabled the safe routing of quantum bits over the optical fiber, which it used to employ quantum-based encryption in the form of the Quantum Key Distribution protocol. Said Toshiba Europe's Mirko Pittaluga, "Further extensions of the communication distance for QKD are still possible ,and our solutions can also be applied to other quantum communications protocols and applications."
Researchers from Toshiba have successfully sent quantum information over 600-kilometer-long optical fibers, creating a new distance record and paving the way for large-scale quantum networks that could be used to exchange information securely between cities and even countries.
Working from the company's R&D lab in Cambridge in the UK, the scientists demonstrated that they could transmit quantum bits (or qubits) over hundreds of kilometers of optical fiber without scrambling the fragile quantum data encoded in the particles, thanks to a new technology that stabilizes the environmental fluctuations occurring in the fiber.
This could go a long way in helping to create a next-generation quantum internet that scientists hope will one day span global distances.
The quantum internet, which will take the shape of a global network of quantum devices connected by long-distance quantum communication links, is expected to enable use-cases that are impossible with today's web applications . They range from generating virtually un-hackable communications, to creating clusters of inter-connected quantum devices that together could surpass the compute power of classical devices.
But in order to communicate, quantum devices need to send and receive qubits - tiny particles that exist in a special, but extremely fragile, quantum state. Finding the best way to transmit qubits without having them fall from their quantum state has got scientists around the world scratching their heads for many years.
One approach consists of shooting qubits down optical fibers that connect quantum devices. The method has been successful but is limited in scale: small changes in the environment, such as temperature fluctuations, cause the fibers to expand and contract, and risk messing with the qubits.
This is why experiments with optical fiber, until now, have typically been limited to a range of hundreds of kilometers; in other words, nowhere near enough to create the large-scale, global quantum internet dreamed up by scientists.
To tackle the unstable conditions inside optical fibers, Toshiba's researchers developed a new technique called "dual band stabilization." The method sends two signals down the optical fiber at different wavelengths. The first wavelength is used to cancel out rapidly varying fluctuations, while the second, which matches the wavelength of the qubits, is used for finer adjustments of the phase.
Put simply, the two wavelengths combine to cancel environmental fluctuations inside the fiber in real time, which, according to Toshiba's researchers, enabled qubits to travel safely over 600 kilometers.
Already, the company's team has used the technology to trial one of the most well-known applications of quantum networks: quantum-based encryption.
Known as Quantum Key Distribution (QKD), the protocol leverages quantum networks to create security keys that are impossible to hack, meaning that users can securely exchange confidential information, like bank statements or health records, over an untrusted communication channel such as the internet.
During a communication, QKD works by having one of the two parties encrypt a piece of data by encoding the cryptography key onto qubits and sending those qubits over to the other person thanks to a quantum network. Because of the laws of quantum mechanics, however, it is impossible for a spy to intercept the qubits without leaving a sign of eavesdropping that can be seen by the users - who, in turn, can take steps to protect the information.
Unlike classical cryptography, therefore, QKD does not rely on the mathematical complexity of solving security keys, but rather leverages the laws of physics. This means that even the most powerful computers would be unable to hack the qubits-based keys. It is easy to see why the idea is gathering the attention of players from all parts, ranging from financial institutions to intelligence agencies.
Toshiba's new technique to reduce fluctuations in optical fibers enabled the researchers to carry out QKD over a much larger distance than previously possible. "This is a very exciting result," said Mirko Pittaluga, research scientist at Toshiba Europe. "With the new techniques we have developed, further extensions of the communication distance for QKD are still possible and our solutions can also be applied to other quantum communications protocols and applications."
When it comes to carrying out QKD using optical fiber, Toshiba's 600-kilometer mark is a record-breaker, which the company predicts will enable secure links to be created between cities like London, Paris, Brussels, Amsterdam and Dublin.
Other research groups, however, have focused on different methods to transmit qubits, which have enabled QKD to happen over even larger distances. Chinese scientists, for example, are using a mix of satellite-based transmissions communicating with optical fibers on the ground, and recently succeeded in carrying out QKD over a total distance of 4,600 kilometers .
Every approach has its pros and cons: using satellite technologies is more costly and could be harder to scale up. But one thing is certain: research groups in the UK, China and the US are experimenting at pace to make quantum networks a reality.
Toshiba's research was partially funded by the EU, which is showing a keen interest in developing quantum communications. Meanwhile, China's latest five-year plan also allocates a special place for quantum networks ; and the US recently published a blueprint laying out a step-by-step guide leading to the establishment of a global quantum internet. |
|||
283 | Rates of Anxiety, Depression Among College Students Continue to Soar, App-Based Research Shows | A four-year study by Dartmouth College researchers uncovered higher rates of anxiety and depression among college students since the onset of the coronavirus pandemic, accompanied by less sleep and greater phone usage. Dartmouth's Andrew Campbell co-developed the StudentLife app, which records data on the user's location, phone use, sleep duration, and sedentary habits. The researchers tracked 217 students who began as freshmen in 2017, and Campbell said depression and anxiety rates have skyrocketed since the pandemic started, with no sign of decelerating. Dartmouth's Dante Mack said, "Interest in covid fatigue is a unique tool that allows us to understand how the 'new normal' may be associated with poor mental health outcomes." | [] | [] | [] | scitechnews | None | None | None | None | A four-year study by Dartmouth College researchers uncovered higher rates of anxiety and depression among college students since the onset of the coronavirus pandemic, accompanied by less sleep and greater phone usage. Dartmouth's Andrew Campbell co-developed the StudentLife app, which records data on the user's location, phone use, sleep duration, and sedentary habits. The researchers tracked 217 students who began as freshmen in 2017, and Campbell said depression and anxiety rates have skyrocketed since the pandemic started, with no sign of decelerating. Dartmouth's Dante Mack said, "Interest in covid fatigue is a unique tool that allows us to understand how the 'new normal' may be associated with poor mental health outcomes."
|
||||
285 | New Method to Untangle 3D Cancer Genome | Northwestern Medicine scientists have invented a new method, published in Nature Methods, for resolving rearranged chromosomes and their 3D structures in cancer cells, which can reveal key gene regulators that lead to the development of tumors.
Feng Yue, PhD , the Duane and Susan Burnham Professor of Molecular Medicine and senior author of the study, said this method could help identify new targets for therapy.
"Cancer genomes contain tons of rearrangement. We frequently observe that big chunk of DNA fragments are lost, duplicated, or shuffled to another chromosome. Many of the known oncogenes and tumor suppressors are affected by such events," said Yue, who is also an associate professor of Biochemistry and Molecular Genetics , of Pathology and the director of the Center for Cancer Genomics at the Robert H. Lurie Comprehensive Cancer Center of Northwestern University.
Within each cell, strands of DNA, which if laid flat would stretch over two meters long, need to be properly folded and organized so that they can fit inside the nucleus, which is usually only a few micrometers in diameter. Often, DNA ends up forming "loops" that bring together genomic elements that are usually very far apart when the entire genome is unfurled.
Normal human cells have 46 chromosomes, but cancerous cells often have more, with many chromosomes cut into pieces and fused with parts from other chromosomes. Sometimes a cancer-specific chromosome is formed by stitching pieces of DNA from several different chromosomes, according to Yue.
"It is extremely challenging to figure out the composition of cancer genomes," said Yue, who is also the director of the Center for Advanced Molecular Analysis at the Institute for Augmented Intelligence in Medicine .
Fusion events, in combination with chromatin loops within these cancer-specific chromosomes, can lead to activation of oncogenes, but until now there was no systematic way to find the link between the two. To remedy this, Yue and his group members developed "NeoLoopFinder," a complete computational framework that analyzes the 3D structure of cancer chromosomes and identifies the essential regulators for key oncogenes.
It does so by resolving the complex genome rearrangement events, reconstructing the local chromatin interaction maps, estimating how many copies of each rearrangement exist in the cancer cells, and finally applying a machine-learning algorithm the team previously developed to find the chromatin loops.
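The steps above can be pictured with a small, self-contained sketch. This is not NeoLoopFinder's actual code or API; the function names, the simple z-score loop test, and the input conventions (a binned Hi-C contact matrix, fragment coordinates, and a per-bin copy-number vector) are assumptions made purely for illustration.

```python
import numpy as np

def assemble_contact_map(hic, region_a, region_b):
    """Stitch the Hi-C submatrices of two fragments fused by a structural variant."""
    a0, a1 = region_a
    b0, b1 = region_b
    top = np.hstack([hic[a0:a1, a0:a1], hic[a0:a1, b0:b1]])
    bottom = np.hstack([hic[b0:b1, a0:a1], hic[b0:b1, b0:b1]])
    return np.vstack([top, bottom])

def normalize_by_copy_number(contact_map, copy_number):
    """Divide out copy-number effects so amplified segments do not mimic loops."""
    cn = np.asarray(copy_number, dtype=float)  # one value per bin of the assembled map
    return contact_map / np.outer(cn, cn)

def call_candidate_loops(contact_map, z_cutoff=3.0):
    """Toy loop caller: flag bin pairs far above the typical count at their distance.
    (The real framework uses a previously trained machine-learning loop caller.)"""
    n = contact_map.shape[0]
    loops = []
    for d in range(2, n):
        diag = np.diagonal(contact_map, offset=d)
        mu, sd = diag.mean(), diag.std() + 1e-9
        loops += [(i, i + d) for i, v in enumerate(diag) if (v - mu) / sd > z_cutoff]
    return loops
```

In the published framework, the final step is handled by the group's previously developed machine-learning loop caller rather than a simple statistical cutoff.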
"This helps us identify the control element for the oncogenes - the switch that turns the oncogene on or off," Yue said.
The importance of these loops in cancer has become increasingly recognized by the field, Yue said, with such events observed in nearly every type of cancer they analyzed. In the study, Northwestern Medicine scientists studied 50 different types of cancer cells, such as leukemia, breast cancer and prostate cancer. In two recently published studies, Yue and collaborators also used this method to examine chromatin loops in pediatric brain cancer and bladder cancer.
Using NeoLoopFinder, the investigators compiled a database of control elements - mainly enhancers - associated with oncogenes. As a proof-of-concept, they used CRISPR-Cas9 genome editing to disable a gene enhancer linked to an oncogene (ETV1) for prostate cancer, and found that disabling the enhancer shut down the oncogene.
"Normally you wouldn't imagine that disabling an enhancer on chromosome 14 would shut down an oncogene on chromosome 7, but now with our tool, more scientists will be able to identify such events and dissect their clinical implications in cancer," Yue said.
The computational framework is freely available to researchers at GitHub.
Xiaotao Wang, PhD, a postdoctoral fellow in the Yue laboratory, was the lead author of the study.
This study was supported by National Institutes of Health grants R35GM124820 and R01HG009906. | Researchers in the Feinberg School of Medicine of Northwestern Medicine have devised a new approach for determining the three-dimensional (3D) composition of cancer cell structures, which could help identify gene regulators that control the development of tumors. Northwestern Medicine's Feng Yue and his team designed NeoLoopFinder, a computational framework that analyzes the 3D architecture of cancer chromosomes and highlights critical regulators of cancer-causing genes (oncogenes). The researchers reviewed 50 types of cancer cells and used NeoLoopFinder to compile a database of oncogene-associated control elements; they then used CRISPR-Cas9 genome editing to disable a gene enhancer tied to a prostate cancer oncogene as validation of this approach. Said Yue, "With our tool, more scientists will be able to identify such events and dissect their clinical implications in cancer."
287 | Leader in Power-Efficient Computer Architecture Receives Eckert-Mauchly Award | ACM and the IEEE Computer Society have named Princeton University's Margaret Martonosi recipient of the 2021 Eckert-Mauchly Award for her role in designing, modeling, and confirming power-efficient computer architecture. Martonosi was a pioneer in the design and modeling of power-aware microarchitectures, including the use of narrow bit-widths, thermal-issue modeling and response, and conducting power estimation. Martonosi and co-author David Brooks greatly reduced processor power consumption through a paper that introduced two optimizations. She and Brooks later demonstrated that a central processing unit can be engineered for a much lower maximum power rating, and minimally impact typical applications. Martonosi also presented the potential of fast, early-stage, formal methods to verify the correctness of memory consistency model deployment, through the Check verification tool suite.
288 | FBI Secretly Ran Anom Messaging Platform, Yielding Hundreds of Arrests in Global Sting | Global authorities have arrested hundreds of suspected members of international criminal networks by tricking them into using Anom, an encrypted communications platform run by the U.S. Federal Bureau of Investigation (FBI). A bureau-led international law enforcement coalition monitored Anom, which makes and distributes mobile phones equipped with a covert communications application service. The FBI's San Diego field office co-opted Anom in 2018; with the cooperation of a confidential source, the FBI and its law-enforcement partners secretly embedded the ability to covertly intercept and decrypt messages. FBI special agent Suzanne Turner said, "The immense and unprecedented success of Operation Trojan Shield should be a warning to international criminal organizations - your criminal communications may not be secure; and you can count on law enforcement world-wide working together to combat dangerous crime that crosses international borders."
289 | U.S. Senate Passes Bill to Encourage Tech Competition, Especially with China | The U.S. Senate passed legislation to ramp up semiconductor production and development of advanced technology amid intensifying global competition, especially from China. The bill allocates $50 billion to the Commerce Department to prop up chip development and fabrication via research and incentive programs previously greenlit by Congress. One provision would establish a new artificial intelligence- and quantum science-focused directorate within the National Science Foundation, authorizing up to $29 billion over five years for the new unit, and another $52 billion for its initiatives. Said Senate Majority Leader Chuck Schumer (D-NY), "Whoever wins the race to the technologies of the future is going to be the global economic leader, with profound consequences for foreign policy and national security as well." The House Science Committee is expected to consider that chamber's version of the legislation soon.
291 | Less Nosy Smart Speakers | Microphones are perhaps the most common electronic sensor in the world, with an estimated 320 million listening for our commands in the world's smart speakers. The trouble, of course, is that they're capable of hearing everything else, too. But now, a team of University of Michigan researchers has developed a system that can inform a smart home - or listen for the signal that would turn on a smart speaker - without eavesdropping on audible sound.
The key to the device, called PrivacyMic, is ultrasonic sound at frequencies above the range of human hearing. Running dishwashers, computer monitors, even finger snaps, all generate ultrasonic sounds, which have a frequency of 20 kilohertz or higher. We can't hear them - but dogs, cats and PrivacyMic can.
The system pieces together the ultrasonic information that's all around us to identify when its services are needed, and sense what's going on around it. Researchers have demonstrated that it can identify household and office activities with greater than 95% accuracy.
"There are a lot of situations where we want our home automation system or our smart speaker to understand what's going on in our home, but we don't necessarily want it listening to our conversations," said Alanson Sample, a U-M associate professor of electrical engineering and computer science and the senior author on a paper on the work recently presented at CHI 2021, the Association for Computing Machinery's Virtual Conference on Human Factors in Computing Systems. "And what we've found is that you can have a system that understands what's going on, and a hard guarantee that it will never record any audible information."
PrivacyMic can filter out audible information right on the device. That makes it more secure than encryption or other security measures that take steps to secure audio data after it's recorded, or limit who has access to it. Those measures could all leave sensitive information vulnerable to hackers, but with PrivacyMic, the information simply doesn't exist.
While smart speakers are an obvious application, the research team envisions many others that, while less common, may be more important. In-home ultrasonic devices, for example, could monitor the homes of the elderly for signs that they need help, monitor lung function in respiratory patients, or listen to clinical trial participants for sonic signatures that could reveal medication side effects or other problems.
"A conventional microphone placed in somebody's home for months at a time could give doctors richer information than they've ever had before, but patients just aren't willing to do that with today's technology," Sample said. "But an ultrasonic device could give doctors and medical schools unprecedented insight into what their patients' lives are really like in a way that the patients are much more likely to accept."
The idea behind PrivacyMic began when the team was classifying previously recorded audio. Looking at a visual graph of the data, they realized that audible sound was only a small piece of what was available.
"We realized that we were sitting on a lot of interesting information that was being ignored," explained Yasha Iravantchi, a graduate student research assistant in electrical engineering and computer science and the first author on the paper. "We could actually get a picture of what was going on in a home or office without using any audio at all."
Armed with this insight, a laptop and an ultrasonic microphone, the team then went to work capturing tooth brushing, toilet flushing, vacuum cleaners, dishwashers, computer monitors and hundreds of other common activities. They then compressed the ultrasonic signatures into smaller files that included key bits of information while stripping out noise within the range of human hearing - a bit like an ultrasonic MP3 - and built a Raspberry Pi-based device to listen for them.
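As a rough illustration of that processing chain - not the authors' implementation - the sketch below high-pass filters a recording at about 20 kHz so that speech and other audible content never reaches storage, then reduces what remains to a compact feature vector that a standard classifier could map to activity labels. The filter order, the number of spectral bands, and the assumption of a microphone sampled well above 40 kHz are all illustrative choices.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def ultrasonic_features(audio, sample_rate, cutoff_hz=20_000, n_bands=32):
    """Summarize only the inaudible (>~20 kHz) part of a recording as band energies."""
    # High-pass filtering above human hearing removes speech and other audible
    # sound before anything is stored (assumes sample_rate is well above 40 kHz).
    sos = butter(6, cutoff_hz, btype="highpass", fs=sample_rate, output="sos")
    ultrasonic = sosfiltfilt(sos, audio)

    # Compress what remains into a small spectral signature, loosely in the
    # spirit of the "ultrasonic MP3"-style files described above.
    power = np.abs(np.fft.rfft(ultrasonic)) ** 2
    bands = np.array_split(power, n_bands)
    return np.log1p(np.array([band.sum() for band in bands]))

# A standard classifier can then map these signatures to activity labels, e.g.:
# from sklearn.ensemble import RandomForestClassifier
# model = RandomForestClassifier().fit(train_features, train_labels)
# model.predict([ultrasonic_features(clip, 96_000)])
```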
The device, which can be set to filter out speech, or to strip out all audible content, accurately identified common activities more than 95% of the time. The team also conducted a trial where study participants listened to the audio collected by the device and found that not a single participant could make out human speech.
While the device detailed in the paper was a simple proof of concept, Sample says that implementing similar technology in a device like a smart speaker would require only minor modifications - software that listens for ultrasonic sound and a microphone capable of picking it up, which are inexpensive and readily available. While such a device is likely several years off, the team has already applied for patent protection through the U-M Office of Technology Transfer.
"Smart technology today is an all-or-nothing proposition.You can either have nothing, or you can have a device that's capable of constant audio recording," Sample said. "PrivacyMic offers another layer of privacy - you can interact with your device using audio if you choose, or you can have another setting where the device can glean information without picking up audio."
The paper is titled "PrivacyMic: Utilizing Inaudible Frequencies for Privacy Preserving Daily Activity Recognition." It was presented on May 12 at the CHI Conference on Human Factors in Computing Systems. Other researchers on the project included Karan Ahuja, Mayank Goel and Chris Harrison, all at Carnegie Mellon University. | University of Michigan (U-M) researchers have designed a device called PrivacyMic to reduce eavesdropping by smart speakers by notifying household devices of important data without recording speech. PrivacyMic pieces together ambient ultrasonic information that indicate when its services are required. The system compresses these ultrasonic signatures into smaller files that feature key bits of information, while removing noise within the range of human hearing. The researchers showed that PrivacyMic was more than 95% accurate in identifying household and office activities. U-M's Alanson Sample said, "What we've found is that you can have a system that understands what's going on, and a hard guarantee that it will never record any audible information."
293 | Feds Recover More Than $2 Million in Ransomware Payments from Colonial Pipeline Hackers | U.S. officials say more than $2 million in cryptocurrency payments to the hackers who held Colonial Pipeline hostage in May has been recovered, marking the first recovery by the U.S. Department of Justice's new ransomware task force. Federal Bureau of Investigation deputy director Paul Abbate said the bureau seized proceeds paid to the Russian DarkSide hacker ring from a digital "wallet" containing the ransom, after securing a warrant from a federal judge. An affidavit said the bureau acquired the wallet's "private key," while officials have not disclosed how it was obtained. In announcing the seizure, Deputy Attorney General Lisa Monaco said, "The sophisticated use of technology to hold businesses and even whole cities hostage for profit is decidedly a 21st century challenge. But the adage, 'follow the money', still applies."
294 | Super Productive 3D Bioprinter Could Speed Drug Development | June 8, 2021 -- A 3D printer that rapidly produces large batches of custom biological tissues could help make drug development faster and less costly. Nanoengineers at the University of California San Diego developed the high-throughput bioprinting technology, which 3D prints with record speed - it can produce a 96-well array of living human tissue samples within 30 minutes. Having the ability to rapidly produce such samples could accelerate high-throughput preclinical drug screening and disease modeling, the researchers said.
The process for a pharmaceutical company to develop a new drug can take up to 15 years and cost up to $2.6 billion. It generally begins with screening tens of thousands of drug candidates in test tubes. Successful candidates then get tested in animals, and any that pass this stage move on to clinical trials. With any luck, one of these candidates will make it into the market as an FDA approved drug.
The high-throughput 3D bioprinting technology developed at UC San Diego could accelerate the first steps of this process. It would enable drug developers to rapidly build up large quantities of human tissues on which they could test and weed out drug candidates much earlier.
"With human tissues, you can get better data - real human data - on how a drug will work," said Shaochen Chen, a professor of nanoengineering at the UC San Diego Jacobs School of Engineering. "Our technology can create these tissues with high-throughput capability, high reproducibility and high precision. This could really help the pharmaceutical industry quickly identify and focus on the most promising drugs."
The work was published in the journal Biofabrication.
The researchers note that while their technology might not eliminate animal testing, it could minimize failures encountered during that stage.
"What we are developing here are complex 3D cell culture systems that will more closely mimic actual human tissues, and that can hopefully improve the success rate of drug development," said Shangting You, a postdoctoral researcher in Chen's lab and co-first author of the study.
The technology rivals other 3D bioprinting methods not only in terms of resolution - it prints lifelike structures with intricate, microscopic features, such as human liver cancer tissues containing blood vessel networks - but also speed. Printing one of these tissue samples takes about 10 seconds with Chen's technology; printing the same sample would take hours with traditional methods. Also, it has the added benefit of automatically printing samples directly in industrial well plates. This means that samples no longer have to be manually transferred one at a time from the printing platform to the well plates for screening.
"When you're scaling this up to a 96-well plate, you're talking about a world of difference in time savings - at least 96 hours using a traditional method plus sample transfer time, versus around 30 minutes total with our technology," said Chen.
Reproducibility is another key feature of this work. The tissues that Chen's technology produces are highly organized structures, so they can be easily replicated for industrial scale screening. It's a different approach than growing organoids for drug screening, explained Chen. "With organoids, you're mixing different types of cells and letting them self-organize to form a 3D structure that is not well controlled and can vary from one experiment to another. Thus, they are not reproducible for the same property, structure and function. But with our 3D bioprinting approach, we can specify exactly where to print different cell types, the amounts and the micro-architecture."
How it works
To print their tissue samples, the researchers first design 3D models of biological structures on a computer. These designs can even come from medical scans, so they can be personalized for a patient's tissues. The computer then slices the model into 2D snapshots and transfers them to millions of microscopic-sized mirrors. Each mirror is digitally controlled to project patterns of violet light - 405 nanometers in wavelength, which is safe for cells - in the form of these snapshots. The light patterns are shined onto a solution containing live cell cultures and light-sensitive polymers that solidify upon exposure to light. The structure is rapidly printed one layer at a time in a continuous fashion, creating a 3D solid polymer scaffold encapsulating live cells that will grow and become biological tissue.
The digitally controlled micromirror array is key to the printer's high speed. Because it projects entire 2D patterns onto the substrate as it prints layer by layer, it produces 3D structures much faster than other printing methods, which scan each layer line by line using either a nozzle or a laser.
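A minimal sketch of the slicing side of that idea is shown below, assuming the design is already a boolean voxel grid; the canvas size and the example cylinder are illustrative assumptions, not details from the paper.

```python
import numpy as np

def slice_to_dmd_frames(volume):
    """Turn a 3D design (boolean voxel grid, z-axis last) into per-layer mirror patterns."""
    # Each 2D frame tells the micromirror array which pixels should reflect the
    # violet light pattern onto the solution, curing that entire layer at once.
    return [volume[:, :, z].astype(bool) for z in range(volume.shape[2])]

# Example: a 100-layer cylinder standing in for a simple printed structure.
x, y = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
disk = (x - 32) ** 2 + (y - 32) ** 2 <= 20 ** 2
volume = np.repeat(disk[:, :, None], 100, axis=2)
frames = slice_to_dmd_frames(volume)   # one whole-layer pattern per exposure
```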
"An analogy would be comparing the difference between drawing a shape using a pencil versus a stamp," said Henry Hwang, a nanoengineering Ph.D. student in Chen's lab who is also co-first author of the study. "With a pencil, you'd have to draw every single line until you complete the shape. But with a stamp, you mark that entire shape all at once. That's what the digital micromirror device does in our technology. It's orders of magnitude difference in speed."
This recent work builds on the 3D bioprinting technology that Chen's team invented in 2013. It started out as a platform for creating living biological tissues for regenerative medicine. Past projects include 3D printing liver tissues, blood vessel networks, heart tissues and spinal cord implants, to name a few. In recent years, Chen's lab has expanded the use of their technology to print coral-inspired structures that marine scientists can use for studying algae growth and for aiding coral reef restoration projects.
Now, the researchers have automated the technology in order to do high-throughput tissue printing. Allegro 3D, Inc., a UC San Diego spin-off company co-founded by Chen and a nanoengineering Ph.D. alumnus from his lab, Wei Zhu, has licensed the technology and recently launched a commercial product.
Paper: "High throughput direct 3D bioprinting in multiwell plates." Co-authors include Xuanyi Ma, Leilani Kwe, Grace Victorine, Natalie Lawrence, Xueyi Wan, Haixu Shen and Wei Zhu.
This work was supported in part by the National Institutes of Health (R01EB021857, R21AR074763, R21HD100132, R33HD090662) and the National Science Foundation (1903933, 1937653).
Liezel Labios Jacobs School of Engineering 858-246-1124 llabios@ucsd.edu | A three-dimensional (3D) bioprinter developed by researchers at the University of California San Diego (UCSD) that can produce large batches of custom biological tissues at record speed could accelerate drug development. The new bioprinting method can produce a tissue sample in just 10 seconds, compared to hours with traditional methods. The researchers designed 3D models of biological structures on a computer, which slices the models into 2D snapshots and transfers them to millions of microscopic-sized mirrors, which are digitally controlled to project patterns of violet light in the form of these snapshots. After the light patterns are shined onto a solution that solidifies upon exposure to light, the structure is printed a layer at a time in a continuous fashion. UCSD's Shangting You said, "What we are developing here are complex 3D cell culture systems that will more closely mimic actual human tissues, and that can hopefully improve the success rate of drug development."
296 | Want Your Nails Done? Let a Robot Do It. | Three startups are developing robotics for automatically manicuring fingernails using technology that combines fingernail-painting hardware with machine learning software to differentiate the nail from the surrounding skin. The companies - Nimble, Clockwork, and Coral - use a database of thousands of nail shapes recorded by cameras when customers have manicures. Clockwork is testing a tabletop device at a pop-up location in San Francisco; the device incorporates computer vision and artificial intelligence (AI), while a gantry applies polish via multiaxis movements. Clockwork co-founder Aaron Feldstein said the product has a plastic-tipped cartridge to prevent piercing fingers, and thwarts hacking by not being connected to the Internet. Meanwhile, Nimble founder Omri Moran said his startup's technology blends computer vision, AI, and a robotic arm to polish and dry nails within 10 minutes, in a device about the size of a toaster.
297 | Researchers Fine-Tune Control Over AI Image Generation | Researchers from North Carolina State University have developed a new state-of-the-art method for controlling how artificial intelligence (AI) systems create images. The work has applications for fields from autonomous robotics to AI training.
At issue is a type of AI task called conditional image generation, in which AI systems create images that meet a specific set of conditions. For example, a system could be trained to create original images of cats or dogs, depending on which animal the user requested. More recent techniques have built on this to incorporate conditions regarding an image layout. This allows users to specify which types of objects they want to appear in particular places on the screen. For example, the sky might go in one box, a tree might be in another box, a stream might be in a separate box, and so on.
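A toy example of what such a layout condition can look like in code is sketched below: each requested object is a labeled bounding box, and the boxes are rasterized into a per-class mask that a conditional generator would take as input alongside noise or style codes. The class names, canvas size, and representation are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

# Hypothetical layout: each entry is (class name, x0, y0, x1, y1) on a 64x64 canvas.
CLASSES = {"sky": 0, "tree": 1, "stream": 2}
layout = [("sky", 0, 0, 64, 20), ("tree", 40, 20, 60, 55), ("stream", 0, 45, 64, 64)]

def rasterize_layout(boxes, size=64, n_classes=len(CLASSES)):
    """Convert box annotations into a per-class mask a conditional generator can consume."""
    mask = np.zeros((n_classes, size, size), dtype=np.float32)
    for name, x0, y0, x1, y1 in boxes:
        mask[CLASSES[name], y0:y1, x0:x1] = 1.0   # mark where this object may appear
    return mask

condition = rasterize_layout(layout)   # shape (3, 64, 64); fed to the generator together
                                       # with per-object style codes that set appearance
```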
The new work builds on those techniques to give users more control over the resulting images, and to retain certain characteristics across a series of images.
"Our approach is highly reconfigurable," says Tianfu Wu, co-author of a paper on the work and an assistant professor of computer engineering at NC State. "Like previous approaches, ours allows users to have the system generate an image based on a specific set of conditions. But ours also allows you to retain that image and add to it. For example, users could have the AI create a mountain scene. The users could then have the system add skiers to that scene."
In addition, the new approach allows users to have the AI manipulate specific elements so that they are identifiably the same, but have moved or changed in some way. For example, the AI might create a series of images showing skiers turn toward the viewer as they move across the landscape.
"One application for this would be to help autonomous robots 'imagine' what the end result might look like before they begin a given task," Wu says. "You could also use the system to generate images for AI training. So, instead of compiling images from external sources, you could use this system to create images for training other AI systems."
The researchers tested their new approach using the COCO-Stuff dataset and the Visual Genome dataset. Based on standard measures of image quality, the new approach outperformed the previous state-of-the-art image creation techniques.
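One widely used image-quality measure for generative models of this kind is the Fréchet Inception Distance (FID), which compares the statistics of feature vectors extracted from real and generated images; whether FID was among the specific measures used in this study is an assumption here. Given two precomputed feature matrices, the core computation is short:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(real_feats, fake_feats):
    """Fréchet distance between two sets of image feature vectors (rows = images)."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)
    covmean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(covmean):      # tiny imaginary parts can appear numerically
        covmean = covmean.real
    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```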
"Our next step is to see if we can extend this work to video and three-dimensional images," Wu says.
Training for the new approach requires a fair amount of computational power; the researchers used a 4-GPU workstation. However, deploying the system is less computationally expensive.
"We found that one GPU gives you almost real-time speed," Wu says.
"In addition to our paper, we've made our source code for this approach available on GitHub . That said, we're always open to collaborating with industry partners."
The paper, " Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis ," is published in the journal IEEE Transactions on Pattern Analysis and Machine Intelligence . First author of the paper is Wei Sun, a recent Ph.D. graduate from NC State.
The work was supported by the National Science Foundation, under grants 1909644, 1822477, 2024688 and 2013451; by the U.S. Army Research Office, under grant W911NF1810295; and by the Administration for Community Living, under grant 90IFDV0017-01-00.
-shipman-
Note to Editors: The study abstract follows.
"Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis"
Authors: Wei Sun and Tianfu Wu, North Carolina State University
Published: May 10, IEEE Transactions on Pattern Analysis and Machine Intelligence
DOI: 10.1109/TPAMI.2021.3078577
Abstract: With the remarkable recent progress on learning deep generative models, it becomes increasingly interesting to develop models for controllable image synthesis from reconfigurable structured inputs. This paper focuses on a recently emerged task, layout-to-image, whose goal is to learn generative models for synthesizing photo-realistic images from a spatial layout (i.e., object bounding boxes configured in an image lattice) and its style codes (i.e., structural and appearance variations encoded by latent vectors). This paper first proposes an intuitive paradigm for the task, layout-to-mask-to-image, which learns to unfold object masks in a weakly-supervised way based on an input layout and object style codes. The layout-to-mask component deeply interacts with layers in the generator network to bridge the gap between an input layout and synthesized images. Then, this paper presents a method built on Generative Adversarial Networks (GANs) for the proposed layout-to-mask-to-image synthesis with layout and style control at both image and object levels. The controllability is realized by a proposed novel Instance-Sensitive and Layout-Aware Normalization (ISLA-Norm) scheme. A layout semi-supervised version of the proposed method is further developed without sacrificing performance. In experiments, the proposed method is tested in the COCO-Stuff dataset and the Visual Genome dataset with state-of-the-art performance obtained. | Refined control over artificial intelligence (AI)-driven conditional image generation by North Carolina State University (NC State) researchers has potential for use in fields ranging from autonomous robotics to AI training. NC State's Tianfu Wu said, "Like previous approaches, ours allows users to have the system generate an image based on a specific set of conditions. But ours also allows you to retain that image and add to it." The approach also can rig specific components to be identifiably the same, but shifted position or somehow altered. In testing the approach using the COCO-Stuff and the Visual Genome datasets, the technique bested previous state-of-the-art image generation methods. Wu suggested applications for the technique like helping autonomous robots "imagine" the appearance of an end result before undertaking a given task, or producing images for AI training.
Researchers from North Carolina State University have developed a new state-of-the-art method for controlling how artificial intelligence (AI) systems create images. The work has applications for fields from autonomous robotics to AI training.
At issue is a type of AI task called conditional image generation, in which AI systems create images that meet a specific set of conditions. For example, a system could be trained to create original images of cats or dogs, depending on which animal the user requested. More recent techniques have built on this to incorporate conditions regarding an image layout. This allows users to specify which types of objects they want to appear in particular places on the screen. For example, the sky might go in one box, a tree might be in another box, a stream might be in a separate box, and so on.
The new work builds on those techniques to give users more control over the resulting images, and to retain certain characteristics across a series of images.
"Our approach is highly reconfigurable," says Tianfu Wu, co-author of a paper on the work and an assistant professor of computer engineering at NC State. "Like previous approaches, ours allows users to have the system generate an image based on a specific set of conditions. But ours also allows you to retain that image and add to it. For example, users could have the AI create a mountain scene. The users could then have the system add skiers to that scene."
In addition, the new approach allows users to have the AI manipulate specific elements so that they are identifiably the same, but have moved or changed in some way. For example, the AI might create a series of images showing skiers turn toward the viewer as they move across the landscape.
"One application for this would be to help autonomous robots 'imagine' what the end result might look like before they begin a given task," Wu says. "You could also use the system to generate images for AI training. So, instead of compiling images from external sources, you could use this system to create images for training other AI systems."
The researchers tested their new approach using the COCO-Stuff dataset and the Visual Genome dataset. Based on standard measures of image quality, the new approach outperformed the previous state-of-the-art image creation techniques.
"Our next step is to see if we can extend this work to video and three-dimensional images," Wu says.
Training for the new approach requires a fair amount of computational power; the researchers used a 4-GPU workstation. However, deploying the system is less computationally expensive.
"We found that one GPU gives you almost real-time speed," Wu says.
"In addition to our paper, we've made our source code for this approach available on GitHub . That said, we're always open to collaborating with industry partners."
The paper, " Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis ," is published in the journal IEEE Transactions on Pattern Analysis and Machine Intelligence . First author of the paper is Wei Sun, a recent Ph.D. graduate from NC State.
The work was supported by the National Science Foundation, under grants 1909644, 1822477, 2024688 and 2013451; by the U.S. Army Research Office, under grant W911NF1810295; and by the Administration for Community Living, under grant 90IFDV0017-01-00.
-shipman-
Note to Editors: The study abstract follows.
"Learning Layout and Style Reconfigurable GANs for Controllable Image Synthesis"
Authors : Wei Sun and Tianfu Wu, North Carolina State University
Published : May 10, IEEE Transactions on Pattern Analysis and Machine Intelligence
DOI : 10.1109/TPAMI.2021.3078577
Abstract: With the remarkable recent progress on learning deep generative models, it becomes increasingly interesting to develop models for controllable image synthesis from reconfigurable structured inputs. This paper focuses on a recently emerged task, layout-to-image, whose goal is to learn generative models for synthesizing photo-realistic images from a spatial layout (i.e., object bounding boxes configured in an image lattice) and its style codes (i.e., structural and appearance variations encoded by latent vectors). This paper first proposes an intuitive paradigm for the task, layout-to-mask-to-image, which learns to unfold object masks in a weakly-supervised way based on an input layout and object style codes. The layout-to-mask component deeply interacts with layers in the generator network to bridge the gap between an input layout and synthesized images. Then, this paper presents a method built on Generative Adversarial Networks (GANs) for the proposed layout-to-mask-to-image synthesis with layout and style control at both image and object levels. The controllability is realized by a proposed novel Instance-Sensitive and Layout-Aware Normalization (ISLA-Norm) scheme. A layout semi-supervised version of the proposed method is further developed without sacrificing performance. In experiments, the proposed method is tested in the COCO-Stuff dataset and the Visual Genome dataset with state-of-the-art performance obtained. |
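As a rough illustration of the layout-to-mask-to-image idea in the abstract, the sketch below predicts a soft mask for each (box, style code) pair and hands the masks to a stand-in generator. It is written against standard PyTorch, the layer sizes are arbitrary, and ISLA-Norm and adversarial training are omitted entirely, so it should be read as a paraphrase of the pipeline rather than the authors' implementation.

```python
# Rough paraphrase of layout-to-mask-to-image in PyTorch (sizes and modules are arbitrary).
import torch
import torch.nn as nn

class MaskPredictor(nn.Module):
    """Unfolds each (bounding box, style code) pair into a soft object mask."""
    def __init__(self, style_dim=64, mask_size=32):
        super().__init__()
        self.mask_size = mask_size
        self.net = nn.Sequential(
            nn.Linear(style_dim + 4, 256), nn.ReLU(),
            nn.Linear(256, mask_size * mask_size), nn.Sigmoid(),
        )

    def forward(self, boxes, styles):            # boxes: (N, 4), styles: (N, style_dim)
        x = torch.cat([boxes, styles], dim=-1)
        return self.net(x).view(-1, self.mask_size, self.mask_size)

class ToyGenerator(nn.Module):
    """Stand-in for the mask-conditioned generator: collapses masks into an RGB image."""
    def __init__(self):
        super().__init__()
        self.to_rgb = nn.Conv2d(1, 3, kernel_size=3, padding=1)

    def forward(self, masks):                    # masks: (N, H, W)
        canvas = masks.sum(dim=0, keepdim=True).unsqueeze(0)   # (1, 1, H, W)
        return torch.tanh(self.to_rgb(canvas))                 # (1, 3, H, W)

if __name__ == "__main__":
    boxes = torch.tensor([[0.0, 0.0, 1.0, 0.4],   # sky
                          [0.1, 0.3, 0.4, 0.9]])  # tree
    styles = torch.randn(2, 64)                   # per-object style codes
    masks = MaskPredictor()(boxes, styles)
    print(ToyGenerator()(masks).shape)            # torch.Size([1, 3, 32, 32])
```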
|||
301 | U.S. Supreme Court Narrows Scope of Sweeping Cybercrime Law | The majority ruling, written by Justice Amy Coney Barrett, is largely devoted to a meticulous parsing of the statute's language. However, she also noted the dangers of the approach prosecutors have advocated.
"The Government's interpretation of the statute would attach criminal penalties to a breathtaking amount of commonplace computer activity," Barrett wrote. "If the 'exceeds authorized access' clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals."
While insisting that the court arrived at its ruling based solely on reading the statute, and not considering its potential effects, Barrett concurred with critics who said the broader interpretation would "criminalize everything from embellishing an online-dating profile to using a pseudonym on Facebook."
In dissent, Justice Clarence Thomas said the majority's reading was contrived and off-base. He also said there are many areas of law where permission given to do something for one purpose does not imply permission for an unrelated purpose.
"A valet, for example, may take possession of a person's car to park it, but he cannot take it for a joyride," Thomas wrote in an opinion joined by Chief Justice John Roberts and Justice Samuel Alito.
Thomas also noted that violations of the law are typically a misdemeanor, and he said the breadth of the statute is no reason to misread it. "Much of the Federal Code criminalizes common activity," he wrote. "It is understandable to be uncomfortable with so much conduct being criminalized, but that discomfort does not give us authority to alter statutes."
Past controversies involving the law included a two-year prison sentence for a journalist who helped hackers deface the Los Angeles Times' website and, most notoriously, a prosecution that led to the suicide of a prominent internet freedom activist who faced the possibility of decades behind bars for downloading millions of scientific journal articles.
The case decided on Thursday, Van Buren v. United States , involved a former police officer convicted of violating the CFAA for searching a license plate database in exchange for a bribe as part of an FBI sting operation. The officer appealed the conviction, arguing that the law did not cover the unauthorized use of a computer system that the user was allowed to access as part of his job.
The Supreme Court agreed, holding that Nathan Van Buren's conviction was invalid.
A broad coalition of technology experts, civil-society activists and transparency advocates had poured amicus briefs into the high court as it considered its first-ever case involving the law.
The National Whistleblower Center warned that applying the CFAA to any unauthorized use of computer data would invite "retaliation against whistleblowers who provide evidence of criminal fraud and other criminal activity" to authorities. The libertarian Americans for Prosperity Foundation said the government's interpretation of the law would cover "violations of the fine print in website terms of service , company computer-use policies, and other breaches of contract" and "wrongly criminalize a wide swath of innocent, innocuous conduct."
Free-press advocates warned that a ruling for the government "would significantly chill First Amendment activity," while technologists said it would allow prosecutors to go after good-faith security researchers attempting to raise awareness of digital vulnerabilities.
But supporters of the broad use of the CFAA said it was necessary to combat insider threats facing businesses and government agencies' sensitive computer systems. Narrowing the law "would allow any person who has legitimate access to the data carte blanche to access and use (or indeed in many cases destroy) that data for any manifestly blameworthy reason they choose," the Federal Law Enforcement Officers Association told the court . | The U.S. Supreme Court has ruled that the 1986 Computer Fraud and Abuse Act (CFAA) cannot be invoked to prosecute people who misuse databases they are otherwise entitled to access. The 6-3 ruling follows concerns raised by justices that the federal government's interpretation of the statute could penalize people for commonplace activities, such as checking social media on their work computers. Dissenting Justice Clarence Thomas called the majority's view contrived and unfounded, contending there are many areas of law where consent to do something for one purpose does not imply permission for an unconnected purpose. | [] | [] | [] | scitechnews | None | None | None | None | The U.S. Supreme Court has ruled that the 1986 Computer Fraud and Abuse Act (CFAA) cannot be invoked to prosecute people who misuse databases they are otherwise entitled to access. The 6-3 ruling follows concerns raised by justices that the federal government's interpretation of the statute could penalize people for commonplace activities, such as checking social media on their work computers. Dissenting Justice Clarence Thomas called the majority's view contrived and unfounded, contending there are many areas of law where consent to do something for one purpose does not imply permission for an unconnected purpose.
The majority ruling, written by Justice Amy Coney Barrett, is largely devoted to a meticulous parsing of the statute's language. However, she also noted the dangers of the approach prosecutors have advocated.
"The Government's interpretation of the statute would attach criminal penalties to a breathtaking amount of commonplace computer activity," Barrett wrote. "If the 'exceeds authorized access' clause criminalizes every violation of a computer-use policy, then millions of otherwise law-abiding citizens are criminals."
While insisting that the court arrived at its ruling based solely on reading the statute, and not considering its potential effects, Barrett concurred with critics who said the broader interpretation would "criminalize everything from embellishing an online-dating profile to using a pseudonym on Facebook."
In dissent, Justice Clarence Thomas said the majority's reading was contrived and off-base. He also said there are many areas of law where permission given to do something for one purpose does not imply permission for an unrelated purpose.
"A valet, for example, may take possession of a person's car to park it, but he cannot take it for a joyride," Thomas wrote in an opinion joined by Chief Justice John Roberts and Justice Samuel Alito.
Thomas also noted that violations of the law are typically a misdemeanor, and he said the breadth of the statute is no reason to misread it. "Much of the Federal Code criminalizes common activity," he wrote. "It is understandable to be uncomfortable with so much conduct being criminalized, but that discomfort does not give us authority to alter statutes."
Past controversies involving the law included a two-year prison sentence for a journalist who helped hackers deface the Los Angeles Times' website and, most notoriously, a prosecution that led to the suicide of a prominent internet freedom activist who faced the possibility of decades behind bars for downloading millions of scientific journal articles.
The case decided on Thursday, Van Buren v. United States , involved a former police officer convicted of violating the CFAA for searching a license plate database in exchange for a bribe as part of an FBI sting operation. The officer appealed the conviction, arguing that the law did not cover the unauthorized use of a computer system that the user was allowed to access as part of his job.
The Supreme Court agreed, holding that Nathan Van Buren's conviction was invalid.
A broad coalition of technology experts, civil-society activists and transparency advocates had poured amicus briefs into the high court as it considered its first-ever case involving the law.
The National Whistleblower Center warned that applying the CFAA to any unauthorized use of computer data would invite "retaliation against whistleblowers who provide evidence of criminal fraud and other criminal activity" to authorities. The libertarian Americans for Prosperity Foundation said the government's interpretation of the law would cover "violations of the fine print in website terms of service , company computer-use policies, and other breaches of contract" and "wrongly criminalize a wide swath of innocent, innocuous conduct."
Free-press advocates warned that a ruling for the government "would significantly chill First Amendment activity," while technologists said it would allow prosecutors to go after good-faith security researchers attempting to raise awareness of digital vulnerabilities.
But supporters of the broad use of the CFAA said it was necessary to combat insider threats facing businesses and government agencies' sensitive computer systems. Narrowing the law "would allow any person who has legitimate access to the data carte blanche to access and use (or indeed in many cases destroy) that data for any manifestly blameworthy reason they choose," the Federal Law Enforcement Officers Association told the court . |
|||
303 | Google Boosts Android Privacy Protections in Attempt to Rival Apple | Google will unveil additional safeguards for users of the Android mobile operating system so advertisers cannot track them when they switch between applications. Google said the extra protections will ensure any marketer trying to access Android users who have opted out of sharing their Advertising ID "will receive a string of zeros instead of the identifier." Although users can already restrict ad tracking or reset their Advertising IDs, developers can bypass those settings via alternative device identifiers. The Android OS revamp will let billions of users opt out of interest-based advertising, and sever marketers from the wealth of data they use to personalize messaging. | [] | [] | [] | scitechnews | None | None | None | None | Google will unveil additional safeguards for users of the Android mobile operating system so advertisers cannot track them when they switch between applications. Google said the extra protections will ensure any marketer trying to access Android users who have opted out of sharing their Advertising ID "will receive a string of zeros instead of the identifier." Although users can already restrict ad tracking or reset their Advertising IDs, developers can bypass those settings via alternative device identifiers. The Android OS revamp will let billions of users opt out of interest-based advertising, and sever marketers from the wealth of data they use to personalize messaging.
|
||||
305 | TikTok Gave Itself Permission to Collect Biometric Information on U.S. Users | Chinese video-sharing social networking service TikTok has revised its U.S. privacy policy to say it is permitted to "collect biometric identifiers and biometric information" from users' content, including "faceprints and voiceprints." A newly-added Image and Audio Information section on TikTok about information it collects automatically says the app may collect data about images and audio in users' content "such as identifying the objects and scenery that appear, the existence and location within an image of face and body features and attributes, the nature of the audio, and the text of the words spoken in your User Content." The disclosure of the service's biometric data collection followed the $92-million settlement of a class action lawsuit against TikTok over its violation of Illinois' Biometric Information Privacy Act. | [] | [] | [] | scitechnews | None | None | None | None | Chinese video-sharing social networking service TikTok has revised its U.S. privacy policy to say it is permitted to "collect biometric identifiers and biometric information" from users' content, including "faceprints and voiceprints." A newly-added Image and Audio Information section on TikTok about information it collects automatically says the app may collect data about images and audio in users' content "such as identifying the objects and scenery that appear, the existence and location within an image of face and body features and attributes, the nature of the audio, and the text of the words spoken in your User Content." The disclosure of the service's biometric data collection followed the $92-million settlement of a class action lawsuit against TikTok over its violation of Illinois' Biometric Information Privacy Act.
|
||||
306 | How AI Could Alert Firefighters of Imminent Danger | Firefighting is a race against time. Exactly how much time? For firefighters, that part is often unclear. Building fires can turn from bad to deadly in an instant, and the warning signs are frequently difficult to discern amid the mayhem of an inferno.
Seeking to remove this major blind spot, researchers at the National Institute of Standards and Technology (NIST) have developed P-Flash, or the Prediction Model for Flashover. The artificial-intelligence-powered tool was designed to predict and warn of a deadly phenomenon in burning buildings known as flashover , when flammable materials in a room ignite almost simultaneously, producing a blaze only limited in size by available oxygen. The tool's predictions are based on temperature data from a building's heat detectors, and, remarkably, it is designed to operate even after heat detectors begin to fail, making do with the remaining devices.
The team tested P-Flash's ability to predict imminent flashovers in over a thousand simulated fires and more than a dozen real-world fires. Research, just published in the Proceedings of the AAAI Conference on Artificial Intelligence , suggests the model shows promise in anticipating simulated flashovers and shows how real-world data helped the researchers identify an unmodeled physical phenomenon that if addressed could improve the tool's forecasting in actual fires. With further development, P-Flash could enhance the ability of firefighters to hone their real-time tactics, helping them save building occupants as well as themselves.
Flashovers are so dangerous in part because it's challenging to see them coming. There are indicators to watch, such as increasingly intense heat or flames rolling across the ceiling. However, these signs can be easy to miss in many situations, such as when a firefighter is searching for trapped victims with heavy equipment in tow and smoke obscuring the view. And from the outside, as firefighters approach a scene, the conditions inside are even less clear.
"I don't think the fire service has many tools technology-wise that predict flashover at the scene," said NIST researcher Christopher Brown, who also serves as a volunteer firefighter. "Our biggest tool is just observation, and that can be very deceiving. Things look one way on the outside, and when you get inside, it could be quite different."
Computer models that predict flashover based on temperature are not entirely new, but until now, they have relied on constant streams of temperature data, which are obtainable in a lab but not guaranteed during a real fire.
Heat detectors, which are commonly installed in commercial buildings and can be used in homes alongside smoke alarms, are for the most part expected to operate only at temperatures up to 150 degrees Celsius (302 degrees Fahrenheit), far below the 600 degrees Celsius (1,100 degrees Fahrenheit) at which a flashover typically begins to occur. To bridge the gap created by lost data, NIST researchers applied a form of artificial intelligence known as machine learning.
"You lose the data, but you've got the trend up to where the heat detector fails, and you've got other detectors. With machine learning, you could use that data as a jumping-off point to extrapolate whether flashover is going to occur or already occurred," said NIST chemical engineer Thomas Cleary, a co-author of the study.
Machine-learning algorithms uncover patterns in large datasets and build models based on their findings. These models can be useful for predicting certain outcomes, such as how much time will pass before a room is engulfed in flames.
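As a rough illustration of that idea (not P-Flash itself), the sketch below truncates each synthetic temperature trace at the detector's failure temperature, summarizes what survives into a few features, and fits a generic classifier to predict whether flashover is coming. The 150-degree cutoff matches the article; the synthetic data, the choice of features, and the use of scikit-learn are assumptions made only for the example.

```python
# Illustration only: extrapolating from heat-detector data that cuts out near 150 C.
import numpy as np
from sklearn.linear_model import LogisticRegression

RNG = np.random.default_rng(0)
FAIL_TEMP_C = 150.0

def synthetic_trace(flashover: bool, n=120):
    """One detector's temperature readings, rising faster when flashover is coming."""
    rate = RNG.uniform(2.0, 4.0) if flashover else RNG.uniform(0.2, 1.0)
    return 20.0 + rate * np.arange(n) + RNG.normal(0.0, 2.0, n)

def featurize(trace):
    """Keep only readings before the detector's failure temperature, then summarize."""
    hot = trace >= FAIL_TEMP_C
    failed = int(np.argmax(hot)) if hot.any() else trace.size
    alive = trace[:max(failed, 2)]
    return np.array([alive[-1], np.gradient(alive).mean(), alive.size])

X, y = [], []
for _ in range(500):
    label = bool(RNG.random() < 0.5)
    X.append(featurize(synthetic_trace(label)))
    y.append(int(label))

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print("training accuracy:", clf.score(np.array(X), np.array(y)))
```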
To build P-Flash, the authors fed their algorithm temperature data from heat detectors in a burning three-bedroom, one-story ranch-style home - the most common type of home in a majority of states. This building was of a digital rather than brick-and-mortar variety, however.
Because machine learning algorithms require great quantities of data, and conducting hundreds of large-scale fire tests was not feasible, the team burned this virtual building repeatedly using NIST's Consolidated Model of Fire and Smoke Transport, or CFAST , a fire modeling program validated by real fire experiments, Cleary said.
The authors ran 5,041 simulations, with slight but critical variations between each. Different pieces of furniture throughout the house ignited with every run. Windows and bedroom doors were randomly configured to be open or closed. And the front door, which always started closed, opened up at some point to represent evacuating occupants. Heat detectors placed in the rooms produced temperature data until they were inevitably disabled by the intense heat.
To learn about P-Flash's ability to predict flashovers after heat detectors fail, the researchers split up the simulated temperature recordings, allowing the algorithm to learn from a set of 4,033 while keeping the others out of sight. Once P-Flash had wrapped up a study session, the team quizzed it on a set of 504 simulations, fine-tuned the model based on its grade and repeated the process. After attaining a desired performance, the researchers put P-Flash up against a final set of 504.
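A minimal sketch of that 4,033/504/504 split, using a generic scikit-learn utility rather than the authors' pipeline, looks like this:

```python
# The 4,033 / 504 / 504 split described above, expressed with scikit-learn.
import numpy as np
from sklearn.model_selection import train_test_split

indices = np.arange(5041)                                   # one index per simulated fire
train_idx, rest = train_test_split(indices, train_size=4033, random_state=0)
val_idx, test_idx = train_test_split(rest, train_size=504, random_state=0)
print(len(train_idx), len(val_idx), len(test_idx))          # 4033 504 504
```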
The researchers found that the model correctly predicted flashovers one minute beforehand for about 86% of the simulated fires. Another important aspect of P-Flash's performance was that even when it missed the mark, it mostly did so by producing false positives - predictions that an event would happen earlier than it actually did - which is better than the alternative of giving firefighters a false sense of security.
"You always want to be on the safe side. Even though we can accept a small number of false positives, our model development places a premium on minimizing or, better yet, eliminating false negatives," said NIST mechanical engineer and corresponding author Wai Cheong Tam.
The initial tests were promising, but the team had not grown complacent.
"One very important question remained, which was, can our model be trusted if we only train our model using synthetic data?" Tam said.
Luckily, the researchers came across an opportunity to find answers in real-world data produced by Underwriters Laboratories (UL) in a recent study funded by the National Institute of Justice . UL had carried out 13 experiments in a ranch-style home matching the one P-Flash was trained on, and as with the simulations, ignition sources and ventilation varied between each fire.
The NIST team trained P-Flash on thousands of simulations as before, but this time they swapped in temperature data from the UL experiments as the final test. And this time, the predictions played out a bit differently.
P-Flash, attempting to predict flashovers up to 30 seconds beforehand, performed well when fires started in open areas such as the kitchen or living room. But when fires started in a bedroom, behind closed doors, the model could almost never tell when flashover was imminent.
The team identified a phenomenon called the enclosure effect as a possible explanation for the sharp drop-off in accuracy. When fires burn in small, closed-off spaces, heat has little ability to dissipate, so temperature rises quickly. However, many of the experiments that form the basis of P-Flash's training material were carried out in open lab spaces, Tam said. As such, temperatures from the UL experiments shot up nearly twice as fast as the synthetic data.
Despite revealing a weak spot in the tool, the team finds the results to be encouraging and a step in the right direction. The researchers' next task is to zero in on the enclosure effect and represent it in simulations. To do that they plan on performing more full-scale experiments themselves.
When its weak spots are patched and its predictions sharpened, the researchers envision that their system could be embedded in hand-held devices able to communicate with detectors in a building through the cloud, Tam said.
Firefighters would not only be able to tell their colleagues when it's time to escape, but they would be able to know danger spots in the building before they arrive and adjust their tactics to maximize their chances of saving lives.
Paper: E.Y. Fu, W.C. Tam, R. Peacock, P. Reneke, G. Ngai, H. Leong and T. Cleary. Predicting Flashover Occurrence using Surrogate Temperature Data. Proceedings of the AAAI Conference on Artificial Intelligence . | The artificial intelligence-driven Prediction Model for Flashover (P-Flash) tool is designed to warn firefighters of flashover, the near-simultaneous ignition of flammable materials in a room. Developed by researchers at the U.S. National Institute of Standards and Technology (NIST), P-Flash makes predictions based on temperature data from a building's heat detectors, and is engineered to function even after those detectors fail. The investigators developed the tool by feeding a machine learning algorithm temperature data from heat detectors in 5,041 simulations of a burning three-bedroom, one-story ranch-style home using NIST's Consolidated Model of Fire and Smoke Transport fire modeling program. In tests of the tool's ability to anticipate imminent flashovers in more than 1,000 simulated fires and over a dozen actual fires, it correctly predicted flashovers one minute in advance for about 86% of the simulated fires. | [] | [] | [] | scitechnews | None | None | None | None | The artificial intelligence-driven Prediction Model for Flashover (P-Flash) tool is designed to warn firefighters of flashover, the near-simultaneous ignition of flammable materials in a room. Developed by researchers at the U.S. National Institute of Standards and Technology (NIST), P-Flash makes predictions based on temperature data from a building's heat detectors, and is engineered to function even after those detectors fail. The investigators developed the tool by feeding a machine learning algorithm temperature data from heat detectors in 5,041 simulations of a burning three-bedroom, one-story ranch-style home using NIST's Consolidated Model of Fire and Smoke Transport fire modeling program. In tests of the tool's ability to anticipate imminent flashovers in more than 1,000 simulated fires and over a dozen actual fires, it correctly predicted flashovers one minute in advance for about 86% of the simulated fires.
Firefighting is a race against time. Exactly how much time? For firefighters, that part is often unclear. Building fires can turn from bad to deadly in an instant, and the warning signs are frequently difficult to discern amid the mayhem of an inferno.
Seeking to remove this major blind spot, researchers at the National Institute of Standards and Technology (NIST) have developed P-Flash, or the Prediction Model for Flashover. The artificial-intelligence-powered tool was designed to predict and warn of a deadly phenomenon in burning buildings known as flashover , when flammable materials in a room ignite almost simultaneously, producing a blaze only limited in size by available oxygen. The tool's predictions are based on temperature data from a building's heat detectors, and, remarkably, it is designed to operate even after heat detectors begin to fail, making do with the remaining devices.
The team tested P-Flash's ability to predict imminent flashovers in over a thousand simulated fires and more than a dozen real-world fires. Research, just published in the Proceedings of the AAAI Conference on Artificial Intelligence , suggests the model shows promise in anticipating simulated flashovers and shows how real-world data helped the researchers identify an unmodeled physical phenomenon that if addressed could improve the tool's forecasting in actual fires. With further development, P-Flash could enhance the ability of firefighters to hone their real-time tactics, helping them save building occupants as well as themselves.
Flashovers are so dangerous in part because it's challenging to see them coming. There are indicators to watch, such as increasingly intense heat or flames rolling across the ceiling. However, these signs can be easy to miss in many situations, such as when a firefighter is searching for trapped victims with heavy equipment in tow and smoke obscuring the view. And from the outside, as firefighters approach a scene, the conditions inside are even less clear.
"I don't think the fire service has many tools technology-wise that predict flashover at the scene," said NIST researcher Christopher Brown, who also serves as a volunteer firefighter. "Our biggest tool is just observation, and that can be very deceiving. Things look one way on the outside, and when you get inside, it could be quite different."
Computer models that predict flashover based on temperature are not entirely new, but until now, they have relied on constant streams of temperature data, which are obtainable in a lab but not guaranteed during a real fire.
Heat detectors, which are commonly installed in commercial buildings and can be used in homes alongside smoke alarms, are for the most part expected to operate only at temperatures up to 150 degrees Celsius (302 degrees Fahrenheit), far below the 600 degrees Celsius (1,100 degrees Fahrenheit) at which a flashover typically begins to occur. To bridge the gap created by lost data, NIST researchers applied a form of artificial intelligence known as machine learning.
"You lose the data, but you've got the trend up to where the heat detector fails, and you've got other detectors. With machine learning, you could use that data as a jumping-off point to extrapolate whether flashover is going to occur or already occurred," said NIST chemical engineer Thomas Cleary, a co-author of the study.
Machine-learning algorithms uncover patterns in large datasets and build models based on their findings. These models can be useful for predicting certain outcomes, such as how much time will pass before a room is engulfed in flames.
To build P-Flash, the authors fed their algorithm temperature data from heat detectors in a burning three-bedroom, one-story ranch-style home - the most common type of home in a majority of states. This building was of a digital rather than brick-and-mortar variety, however.
Because machine learning algorithms require great quantities of data, and conducting hundreds of large-scale fire tests was not feasible, the team burned this virtual building repeatedly using NIST's Consolidated Model of Fire and Smoke Transport, or CFAST , a fire modeling program validated by real fire experiments, Cleary said.
The authors ran 5,041 simulations, with slight but critical variations between each. Different pieces of furniture throughout the house ignited with every run. Windows and bedroom doors were randomly configured to be open or closed. And the front door, which always started closed, opened up at some point to represent evacuating occupants. Heat detectors placed in the rooms produced temperature data until they were inevitably disabled by the intense heat.
To learn about P-Flash's ability to predict flashovers after heat detectors fail, the researchers split up the simulated temperature recordings, allowing the algorithm to learn from a set of 4,033 while keeping the others out of sight. Once P-Flash had wrapped up a study session, the team quizzed it on a set of 504 simulations, fine-tuned the model based on its grade and repeated the process. After attaining a desired performance, the researchers put P-Flash up against a final set of 504.
The researchers found that the model correctly predicted flashovers one minute beforehand for about 86% of the simulated fires. Another important aspect of P-Flash's performance was that even when it missed the mark, it mostly did so by producing false positives - predictions that an event would happen earlier than it actually did - which is better than the alternative of giving firefighters a false sense of security.
"You always want to be on the safe side. Even though we can accept a small number of false positives, our model development places a premium on minimizing or, better yet, eliminating false negatives," said NIST mechanical engineer and corresponding author Wai Cheong Tam.
The initial tests were promising, but the team had not grown complacent.
"One very important question remained, which was, can our model be trusted if we only train our model using synthetic data?" Tam said.
Luckily, the researchers came across an opportunity to find answers in real-world data produced by Underwriters Laboratories (UL) in a recent study funded by the National Institute of Justice . UL had carried out 13 experiments in a ranch-style home matching the one P-Flash was trained on, and as with the simulations, ignition sources and ventilation varied between each fire.
The NIST team trained P-Flash on thousands of simulations as before, but this time they swapped in temperature data from the UL experiments as the final test. And this time, the predictions played out a bit differently.
P-Flash, attempting to predict flashovers up to 30 seconds beforehand, performed well when fires started in open areas such as the kitchen or living room. But when fires started in a bedroom, behind closed doors, the model could almost never tell when flashover was imminent.
The team identified a phenomenon called the enclosure effect as a possible explanation for the sharp drop-off in accuracy. When fires burn in small, closed-off spaces, heat has little ability to dissipate, so temperature rises quickly. However, many of the experiments that form the basis of P-Flash's training material were carried out in open lab spaces, Tam said. As such, temperatures from the UL experiments shot up nearly twice as fast as the synthetic data.
Despite revealing a weak spot in the tool, the team finds the results to be encouraging and a step in the right direction. The researchers' next task is to zero in on the enclosure effect and represent it in simulations. To do that they plan on performing more full-scale experiments themselves.
When its weak spots are patched and its predictions sharpened, the researchers envision that their system could be embedded in hand-held devices able to communicate with detectors in a building through the cloud, Tam said.
Firefighters would not only be able to tell their colleagues when it's time to escape, but they would be able to know danger spots in the building before they arrive and adjust their tactics to maximize their chances of saving lives.
Paper: E.Y. Fu, W.C. Tam, R. Peacock, P. Reneke, G. Ngai, H. Leong and T. Cleary. Predicting Flashover Occurrence using Surrogate Temperature Data. Proceedings of the AAAI Conference on Artificial Intelligence . |
|||
309 | PNNL's Shadow Figment Technology Foils Cyberattacks | RICHLAND, Wash. - Scientists have created a cybersecurity technology called Shadow Figment that is designed to lure hackers into an artificial world, then stop them from doing damage by feeding them illusory tidbits of success.
The aim is to sequester bad actors by captivating them with an attractive - but imaginary - world.
The technology is aimed at protecting physical targets - infrastructure such as buildings, the electric grid , water and sewage systems, and even pipelines. The technology was developed by scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory .
The starting point for Shadow Figment is an oft-deployed technology called a honeypot - something attractive to lure an attacker, perhaps a desirable target with the appearance of easy access.
But while most honeypots are used to lure attackers and study their methods, Shadow Figment goes much further. The technology uses artificial intelligence to deploy elaborate deception to keep attackers engaged in a pretend world - the figment - that mirrors the real world. The decoy interacts with users in real time, responding in realistic ways to commands.
"Our intention is to make interactions seem realistic, so that if someone is interacting with our decoy, we keep them involved, giving our defenders extra time to respond," said Thomas Edgar, a PNNL cybersecurity researcher who led the development of Shadow Figment.
The system rewards hackers with false signals of success, keeping them occupied while defenders learn about the attackers' methods and take actions to protect the real system.
The credibility of the deception relies on a machine learning program that learns from observing the real-world system where it is installed. The program responds to an attack by sending signals that illustrate that the system under attack is responding in plausible ways. This "model-driven dynamic deception" is much more realistic than a static decoy, a more common tool that is quickly recognized by experienced cyberattackers.
Shadow Figment spans two worlds that years ago were independent but are now intertwined: the cyber world and the physical world, with elaborate structures that rely on complex industrial control systems. Such systems are more often in the crosshairs of hackers than ever before. Examples include the takedown of large portions of the electric grid in Ukraine in 2015, an attack on a Florida water supply earlier this year, and the recent hacking of the Colonial Pipeline that affected gasoline supplies along the East Coast.
Physical systems are so complex and immense that the number of potential targets - valves, controls, pumps, sensors, chillers and so on - is boundless. Thousands of devices work in concert to bring us uninterrupted electricity, clean water and comfortable working conditions. False readings fed into a system maliciously could cause electricity to shut down. They could drive up the temperature in a building to uncomfortable or unsafe levels, or change the concentration of chemicals added to a water supply.
Shadow Figment creates interactive clones of such systems in all their complexity, in ways that experienced operators and cyber criminals would expect. For example, if a hacker turns off a fan in a server room in the artificial world, Shadow Figment responds by signaling that air movement has slowed and the temperature is rising. If a hacker changes a setting on a water boiler, the system adjusts the water flow rate accordingly.
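The toy sketch below captures the flavor of that behavior with a hardcoded server-room model that keeps answering attacker commands with consistent readings. Shadow Figment drives its decoys with a machine-learning model of the installed system, so the class name, command set, and temperature drift here are purely illustrative.

```python
# Toy decoy: answers attacker commands with readings that stay consistent over time.
class DecoyServerRoom:
    def __init__(self, temp_c: float = 21.0, fan_on: bool = True):
        self.temp_c = temp_c
        self.fan_on = fan_on

    def handle(self, command: str) -> str:
        if command == "fan_off":
            self.fan_on = False
            return "OK: fan stopped"
        if command == "read_temp":
            if not self.fan_on:
                self.temp_c += 1.5            # fake room heats up while the fake fan is off
            return f"temp={self.temp_c:.1f}C airflow={'normal' if self.fan_on else 'low'}"
        return "ERROR: unknown command"

if __name__ == "__main__":
    decoy = DecoyServerRoom()
    for cmd in ["read_temp", "fan_off", "read_temp", "read_temp"]:
        print(cmd, "->", decoy.handle(cmd))
```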
The intent is to distract bad actors from the real control systems, to funnel them into an artificial system where their actions have no impact.
"We're buying time so the defenders can take action to stop bad things from happening," Edgar said. "Even a few minutes is sometimes all you need to stop an attack. But Shadow Figment needs to be one piece of a broader program of cybersecurity defense. There is no one solution that is a magic bullet."
PNNL has applied for a patent on the technology, which has been licensed to Attivo Networks. Shadow Figment is one of five cybersecurity technologies created by PNNL and packaged together in a suite called PACiFiC .
"The development of Shadow Figments is yet another example of how PNNL scientists are focused on protecting the nation's critical assets and infrastructure," said Kannan Krishnaswami, a commercialization manager at PNNL. "This cybersecurity tool has far-reaching applications in government and private sectors - from city municipalities, to utilities, to banking institutions, manufacturing, and even health providers."
The team's most recent results were published in the spring issue of the Journal of Information Warfare .
Edgar's colleagues on the project include William Hofer, Juan Brandi-Lozano, Garrett Seppala, Katy Nowak and Draguna Vrabie. The work was funded by PNNL and by DOE's Office of Technology Transitions.
# # #
Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the U.S. Department of Energy's Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE's Office of Science is working to address some of the most pressing challenges of our time. For more information, visit PNNL's News Center . Follow us on Twitter , Facebook , LinkedIn , and Instagram . | Shadow Figment technology is designed to contain cyberattacks by luring hackers into artificial environments and feeding them false indicators of success. Scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) developed Shadow Figment to extend beyond typical honeypot technology, employing artificial intelligence to keep attackers decoyed in an imaginary world that mimics the real world. Shadow Figment adds credibility to its false-success signals through an algorithm that learns from observing the real-world system where it is deployed, and responds to attacks in a seemingly plausible manner by using an interactive clone of the system. PNNL's Thomas Edgar said, "Our intention is to make interactions seem realistic, so that if someone is interacting with our decoy, we keep them involved, giving our defenders extra time to respond." | [] | [] | [] | scitechnews | None | None | None | None | Shadow Figment technology is designed to contain cyberattacks by luring hackers into artificial environments and feeding them false indicators of success. Scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory (PNNL) developed Shadow Figment to extend beyond typical honeypot technology, employing artificial intelligence to keep attackers decoyed in an imaginary world that mimics the real world. Shadow Figment adds credibility to its false-success signals through an algorithm that learns from observing the real-world system where it is deployed, and responds to attacks in a seemingly plausible manner by using an interactive clone of the system. PNNL's Thomas Edgar said, "Our intention is to make interactions seem realistic, so that if someone is interacting with our decoy, we keep them involved, giving our defenders extra time to respond."
RICHLAND, Wash. - Scientists have created a cybersecurity technology called Shadow Figment that is designed to lure hackers into an artificial world, then stop them from doing damage by feeding them illusory tidbits of success.
The aim is to sequester bad actors by captivating them with an attractive - but imaginary - world.
The technology is aimed at protecting physical targets - infrastructure such as buildings, the electric grid , water and sewage systems, and even pipelines. The technology was developed by scientists at the U.S. Department of Energy's Pacific Northwest National Laboratory .
The starting point for Shadow Figment is an oft-deployed technology called a honeypot - something attractive to lure an attacker, perhaps a desirable target with the appearance of easy access.
But while most honeypots are used to lure attackers and study their methods, Shadow Figment goes much further. The technology uses artificial intelligence to deploy elaborate deception to keep attackers engaged in a pretend world - the figment - that mirrors the real world. The decoy interacts with users in real time, responding in realistic ways to commands.
"Our intention is to make interactions seem realistic, so that if someone is interacting with our decoy, we keep them involved, giving our defenders extra time to respond," said Thomas Edgar, a PNNL cybersecurity researcher who led the development of Shadow Figment.
The system rewards hackers with false signals of success, keeping them occupied while defenders learn about the attackers' methods and take actions to protect the real system.
The credibility of the deception relies on a machine learning program that learns from observing the real-world system where it is installed. The program responds to an attack by sending signals that illustrate that the system under attack is responding in plausible ways. This "model-driven dynamic deception" is much more realistic than a static decoy, a more common tool that is quickly recognized by experienced cyberattackers.
Shadow Figment spans two worlds that years ago were independent but are now intertwined: the cyber world and the physical world, with elaborate structures that rely on complex industrial control systems. Such systems are more often in the crosshairs of hackers than ever before. Examples include the takedown of large portions of the electric grid in Ukraine in 2015, an attack on a Florida water supply earlier this year, and the recent hacking of the Colonial Pipeline that affected gasoline supplies along the East Coast.
Physical systems are so complex and immense that the number of potential targets - valves, controls, pumps, sensors, chillers and so on - is boundless. Thousands of devices work in concert to bring us uninterrupted electricity, clean water and comfortable working conditions. False readings fed into a system maliciously could cause electricity to shut down. They could drive up the temperature in a building to uncomfortable or unsafe levels, or change the concentration of chemicals added to a water supply.
Shadow Figment creates interactive clones of such systems in all their complexity, in ways that experienced operators and cyber criminals would expect. For example, if a hacker turns off a fan in a server room in the artificial world, Shadow Figment responds by signaling that air movement has slowed and the temperature is rising. If a hacker changes a setting on a water boiler, the system adjusts the water flow rate accordingly.
The intent is to distract bad actors from the real control systems, to funnel them into an artificial system where their actions have no impact.
"We're buying time so the defenders can take action to stop bad things from happening," Edgar said. "Even a few minutes is sometimes all you need to stop an attack. But Shadow Figment needs to be one piece of a broader program of cybersecurity defense. There is no one solution that is a magic bullet."
PNNL has applied for a patent on the technology, which has been licensed to Attivo Networks. Shadow Figment is one of five cybersecurity technologies created by PNNL and packaged together in a suite called PACiFiC .
"The development of Shadow Figments is yet another example of how PNNL scientists are focused on protecting the nation's critical assets and infrastructure," said Kannan Krishnaswami, a commercialization manager at PNNL. "This cybersecurity tool has far-reaching applications in government and private sectors - from city municipalities, to utilities, to banking institutions, manufacturing, and even health providers."
The team's most recent results were published in the spring issue of the Journal of Information Warfare .
Edgar's colleagues on the project include William Hofer, Juan Brandi-Lozano, Garrett Seppala, Katy Nowak and Draguna Vrabie. The work was funded by PNNL and by DOE's Office of Technology Transitions.
# # #
Pacific Northwest National Laboratory draws on its distinguishing strengths in chemistry, Earth sciences, biology and data science to advance scientific knowledge and address challenges in sustainable energy and national security. Founded in 1965, PNNL is operated by Battelle for the U.S. Department of Energy's Office of Science, which is the single largest supporter of basic research in the physical sciences in the United States. DOE's Office of Science is working to address some of the most pressing challenges of our time. For more information, visit PNNL's News Center . Follow us on Twitter , Facebook , LinkedIn , and Instagram . |
|||
310 | Amazon's Ring Will Ask Police to Publicly Request User Videos | Amazon subsidiary and Internet-connected doorbell maker Ring said police departments that require help in investigations must publicly request home security video from doorbells and cameras. Law enforcement agencies now must post such Requests for Assistance on Neighbors, Ring's video-sharing and safety-related community discussion portal; nearby users with potentially helpful videos can click a link within the post and select which videos they wish to submit. Ring, which has been accused of having a too-cozy relationship with law enforcement, explained on its blog that it has been working with independent third-party experts to help give people greater insight into law enforcement's use of its technology. | [] | [] | [] | scitechnews | None | None | None | None | Amazon subsidiary and Internet-connected doorbell maker Ring said police departments that require help in investigations must publicly request home security video from doorbells and cameras. Law enforcement agencies now must post such Requests for Assistance on Neighbors, Ring's video-sharing and safety-related community discussion portal; nearby users with potentially helpful videos can click a link within the post and select which videos they wish to submit. Ring, which has been accused of having a too-cozy relationship with law enforcement, explained on its blog that it has been working with independent third-party experts to help give people greater insight into law enforcement's use of its technology.
|
||||
311 | White House Sends Memo to Private Sector on Cyberattack Protections | A memo issued by the White House offers recommendations for private sector organizations to guard against cyberattacks, following recent high-profile incidents including those affecting Colonial Pipeline and SolarWinds. Deputy National Security Advisor for Cyber and Emerging Technology Anne Neuberger stressed that "all organizations must recognize that no company is safe from being targeted by ransomware, regardless of size or location." The memo calls on business executives to "convene their leadership teams to discuss the ransomware threat and review corporate security posture and business continuity plans to ensure you have the ability to continue or quickly restore operations." Companies are urged to, among other things, deploy multifactor authentication, test backups and update patches on a regular basis, test incident response plans, and restrict Internet access to operational networks. | [] | [] | [] | scitechnews | None | None | None | None | A memo issued by the White House offers recommendations for private sector organizations to guard against cyberattacks, following recent high-profile incidents including those affecting Colonial Pipeline and SolarWinds. Deputy National Security Advisor for Cyber and Emerging Technology Anne Neuberger stressed that "all organizations must recognize that no company is safe from being targeted by ransomware, regardless of size or location." The memo calls on business executives to "convene their leadership teams to discuss the ransomware threat and review corporate security posture and business continuity plans to ensure you have the ability to continue or quickly restore operations." Companies are urged to, among other things, deploy multifactor authentication, test backups and update patches on a regular basis, test incident response plans, and restrict Internet access to operational networks.
|
||||
312 | EU Plans Digital ID Wallet for Post-Pandemic Life | The European Union (EU) on Thursday announced plans for a post-pandemic smartphone application to enable EU residents to access services across the bloc. Europeans would be able to store digital credentials such as driver's licenses, prescriptions, and school diplomas through the European Digital Identity Wallet, and access online and offline public/private services while keeping personal data secure. The European Commission (EC) said the e-wallet would be available to all EU residents, although its use is not mandatory. Dominant online platforms, however, would have to accept the wallet, in line with the EC's agenda to regulate big technology companies and their control over personal information. | [] | [] | [] | scitechnews | None | None | None | None | The European Union (EU) on Thursday announced plans for a post-pandemic smartphone application to enable EU residents to access services across the bloc. Europeans would be able to store digital credentials such as driver's licenses, prescriptions, and school diplomas through the European Digital Identity Wallet, and access online and offline public/private services while keeping personal data secure. The European Commission (EC) said the e-wallet would be available to all EU residents, although its use is not mandatory. Dominant online platforms, however, would have to accept the wallet, in line with the EC's agenda to regulate big technology companies and their control over personal information.
|
||||
313 | Engineers Create a Programmable Fiber | MIT researchers have created the first fiber with digital capabilities, able to sense, store, analyze, and infer activity after being sewn into a shirt.
Yoel Fink, who is a professor in the departments of materials science and engineering and electrical engineering and computer science, a Research Laboratory of Electronics principal investigator, and the senior author on the study, says digital fibers expand the possibilities for fabrics to uncover the context of hidden patterns in the human body that could be used for physical performance monitoring, medical inference, and early disease detection.
Or, you might someday store your wedding music in the gown you wore on the big day - more on that later.
Fink and his colleagues describe the features of the digital fiber today in Nature Communications . Until now, electronic fibers have been analog - carrying a continuous electrical signal - rather than digital, where discrete bits of information can be encoded and processed in 0s and 1s.
"This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally," Fink says.
MIT PhD student Gabriel Loke and MIT postdoc Tural Khudiyev are the lead authors on the paper. Other co-authors include MIT postdoc Wei Yan; MIT undergraduates Brian Wang, Stephanie Fu, Ioannis Chatziveroglou, Syamantak Payra, Yorai Shaoul, Johnny Fung, and Itamar Chinn; John Joannopoulos, the Francis Wright Davis Chair Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT; Harrisburg University of Science and Technology master's student Pin-Wen Chou; and Rhode Island School of Design Associate Professor Anna Gitelson-Kahn. The fabric work was facilitated by Professor Anais Missakian, who holds the Pevaroff-Cohn Family Endowed Chair in Textiles at RISD.
Memory and more
The new fiber was created by placing hundreds of square silicon microscale digital chips into a preform that was then used to create a polymer fiber. By precisely controlling the polymer flow, the researchers were able to create a fiber with continuous electrical connection between the chips over a length of tens of meters.
The fiber itself is thin and flexible and can be passed through a needle, sewn into fabrics, and washed at least 10 times without breaking down. According to Loke, "When you put it into a shirt, you can't feel it at all. You wouldn't know it was there."
Making a digital fiber "opens up different areas of opportunities and actually solves some of the problems of functional fibers," he says.
For instance, it offers a way to control individual elements within a fiber, from one point at the fiber's end. "You can think of our fiber as a corridor, and the elements are like rooms, and they each have their own unique digital room numbers," Loke explains. The research team devised a digital addressing method that allows them to "switch on" the functionality of one element without turning on all the elements.
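One way to picture that addressing scheme is the sketch below: every chip sees every frame on the shared line, but only the chip whose address matches acts on it. The frame format and commands are invented for illustration and are not the fiber's actual protocol.

```python
# Illustrative addressing on a shared line: every chip sees every frame,
# but only the matching "room number" responds.
from dataclasses import dataclass

@dataclass
class Frame:
    address: int        # the element's "digital room number"
    command: str        # e.g. "wake" or "sleep"

class FiberChip:
    def __init__(self, address: int):
        self.address = address
        self.active = False

    def on_frame(self, frame: Frame):
        if frame.address != self.address:
            return None                       # frame is for a different element
        self.active = frame.command == "wake"
        return f"chip {self.address}: {frame.command} (active={self.active})"

if __name__ == "__main__":
    chips = [FiberChip(addr) for addr in range(4)]
    for frame in [Frame(2, "wake"), Frame(0, "wake"), Frame(2, "sleep")]:
        for chip in chips:                    # broadcast down the shared conductors
            reply = chip.on_frame(frame)
            if reply:
                print(reply)
```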
A digital fiber can also store a lot of information in memory. The researchers were able to write, store, and read information on the fiber, including a 767-kilobit full-color short movie file and a 0.48 megabyte music file. The files can be stored for two months without power.
When they were dreaming up "crazy ideas" for the fiber, Loke says, they thought about applications like a wedding gown that would store digital wedding music within the weave of its fabric, or even writing the story of the fiber's creation into its components.
Fink notes that the research at MIT was in close collaboration with the textile department at RISD led by Missakian. Gitelson-Kahn incorporated the digital fibers into a knitted garment sleeve, thus paving the way to creating the first digital garment.
On-body artificial intelligence
The fiber also takes a few steps forward into artificial intelligence by including, within the fiber memory, a neural network of 1,650 connections. After sewing it around the armpit of a shirt, the researchers used the fiber to collect 270 minutes of surface body temperature data from a person wearing the shirt, and analyze how these data corresponded to different physical activities. Trained on these data, the fiber was able to determine with 96 percent accuracy what activity the person wearing it was engaged in.
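For a sense of scale, the sketch below builds a PyTorch network whose weight count comes out to exactly 1,650 by assuming 50 temperature features, 30 hidden units, and 5 activity classes (50×30 + 30×5 = 1,650, ignoring biases). The paper's actual architecture and inputs are not described in this article, so the sizing is only an assumption.

```python
# A 1,650-connection network under the assumed sizing (50 -> 30 -> 5).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(50, 30), nn.ReLU(),
    nn.Linear(30, 5),
)

n_connections = sum(p.numel() for name, p in model.named_parameters() if name.endswith("weight"))
print("connections:", n_connections)                    # 1650

window = torch.randn(1, 50)                             # one window of temperature features
print("predicted activity class:", model(window).argmax(dim=1).item())
```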
Adding an AI component to the fiber further increases its possibilities, the researchers say. Fabrics with digital components can collect a lot of information across the body over time, and these "lush data" are perfect for machine learning algorithms, Loke says.
"This type of fabric could give quantity and quality open-source data for extracting out new body patterns that we did not know about before," he says.
With this analytic power, the fibers someday could sense and alert people in real-time to health changes like a respiratory decline or an irregular heartbeat, or deliver muscle activation or heart rate data to athletes during training.
The fiber is controlled by a small external device, so the next step will be to design a new chip as a microcontroller that can be connected within the fiber itself.
"When we can do that, we can call it a fiber computer," Loke says.
This research was supported by the U.S. Army Institute of Soldier Nanotechnologies, National Science Foundation, the U.S. Army Research Office, the MIT Sea Grant, and the Defense Threat Reduction Agency. | The first programmable digital fiber has been designed by engineers at the Massachusetts Institute of Technology (MIT), the Harrisburg University of Science and Technology, and the Rhode Island School of Design. The researchers deposited silicon microscale digital chips into a preform that was used to fabricate a polymer fiber, which could support continuous electrical connection between the chips across tens of meters. The fiber also incorporates a neural network of 1,650 links within its memory. When sewed into a shirt, the fiber collected 270 minutes of surface body temperature data from the wearer, and when trained on this data could determine the wearer's current activity with 96% accuracy. MIT's Yoel Fink said, "This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally." | [] | [] | [] | scitechnews | None | None | None | None | The first programmable digital fiber has been designed by engineers at the Massachusetts Institute of Technology (MIT), the Harrisburg University of Science and Technology, and the Rhode Island School of Design. The researchers deposited silicon microscale digital chips into a preform that was used to fabricate a polymer fiber, which could support continuous electrical connection between the chips across tens of meters. The fiber also incorporates a neural network of 1,650 links within its memory. When sewed into a shirt, the fiber collected 270 minutes of surface body temperature data from the wearer, and when trained on this data could determine the wearer's current activity with 96% accuracy. MIT's Yoel Fink said, "This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally."
MIT researchers have created the first fiber with digital capabilities, able to sense, store, analyze, and infer activity after being sewn into a shirt.
Yoel Fink, who is a professor in the departments of materials science and engineering and electrical engineering and computer science, a Research Laboratory of Electronics principal investigator, and the senior author on the study, says digital fibers expand the possibilities for fabrics to uncover the context of hidden patterns in the human body that could be used for physical performance monitoring, medical inference, and early disease detection.
Or, you might someday store your wedding music in the gown you wore on the big day - more on that later.
Fink and his colleagues describe the features of the digital fiber today in Nature Communications. Until now, electronic fibers have been analog - carrying a continuous electrical signal - rather than digital, where discrete bits of information can be encoded and processed in 0s and 1s.
"This work presents the first realization of a fabric with the ability to store and process data digitally, adding a new information content dimension to textiles and allowing fabrics to be programmed literally," Fink says.
MIT PhD student Gabriel Loke and MIT postdoc Tural Khudiyev are the lead authors on the paper. Other co-authors include MIT postdoc Wei Yan; MIT undergraduates Brian Wang, Stephanie Fu, Ioannis Chatziveroglou, Syamantak Payra, Yorai Shaoul, Johnny Fung, and Itamar Chinn; John Joannopoulos, the Francis Wright Davis Chair Professor of Physics and director of the Institute for Soldier Nanotechnologies at MIT; Harrisburg University of Science and Technology master's student Pin-Wen Chou; and Rhode Island School of Design Associate Professor Anna Gitelson-Kahn. The fabric work was facilitated by Professor Anais Missakian, who holds the Pevaroff-Cohn Family Endowed Chair in Textiles at RISD.
Memory and more
The new fiber was created by placing hundreds of square silicon microscale digital chips into a preform that was then used to create a polymer fiber. By precisely controlling the polymer flow, the researchers were able to create a fiber with continuous electrical connection between the chips over a length of tens of meters.
The fiber itself is thin and flexible and can be passed through a needle, sewn into fabrics, and washed at least 10 times without breaking down. According to Loke, "When you put it into a shirt, you can't feel it at all. You wouldn't know it was there."
Making a digital fiber "opens up different areas of opportunities and actually solves some of the problems of functional fibers," he says.
For instance, it offers a way to control individual elements within a fiber, from one point at the fiber's end. "You can think of our fiber as a corridor, and the elements are like rooms, and they each have their own unique digital room numbers," Loke explains. The research team devised a digital addressing method that allows them to "switch on" the functionality of one element without turning on all the elements.
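As a loose illustration of that addressing idea - not the fiber's actual protocol, which the article does not spell out - every element can listen on the shared line but act only on messages carrying its own "room number":

```python
class FiberElement:
    """One digital chip (a "room") along the shared fiber (the "corridor")."""
    def __init__(self, address):
        self.address = address
        self.active = False

    def handle(self, address, command):
        if address == self.address:      # ignore traffic addressed to other rooms
            self.active = (command == "ON")

elements = [FiberElement(a) for a in range(4)]

def broadcast(address, command):
    for element in elements:             # a single line reaches every element
        element.handle(address, command)

broadcast(2, "ON")
print([e.active for e in elements])      # only element 2 switches on
```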
A digital fiber can also store a lot of information in memory. The researchers were able to write, store, and read information on the fiber, including a 767-kilobit full-color short movie file and a 0.48 megabyte music file. The files can be stored for two months without power.
When they were dreaming up "crazy ideas" for the fiber, Loke says, they thought about applications like a wedding gown that would store digital wedding music within the weave of its fabric, or even writing the story of the fiber's creation into its components.
Fink notes that the research at MIT was in close collaboration with the textile department at RISD led by Missakian. Gitelson-Kahn incorporated the digital fibers into a knitted garment sleeve, thus paving the way to creating the first digital garment.
On-body artificial intelligence
The fiber also takes a few steps forward into artificial intelligence by including, within the fiber memory, a neural network of 1,650 connections. After sewing it around the armpit of a shirt, the researchers used the fiber to collect 270 minutes of surface body temperature data from a person wearing the shirt, and analyze how these data corresponded to different physical activities. Trained on these data, the fiber was able to determine with 96 percent accuracy what activity the person wearing it was engaged in.
Adding an AI component to the fiber further increases its possibilities, the researchers say. Fabrics with digital components can collect a lot of information across the body over time, and these "lush data" are perfect for machine learning algorithms, Loke says.
"This type of fabric could give quantity and quality open-source data for extracting out new body patterns that we did not know about before," he says.
With this analytic power, the fibers someday could sense and alert people in real-time to health changes like a respiratory decline or an irregular heartbeat, or deliver muscle activation or heart rate data to athletes during training.
The fiber is controlled by a small external device, so the next step will be to design a new chip as a microcontroller that can be connected within the fiber itself.
"When we can do that, we can call it a fiber computer," Loke says.
This research was supported by the U.S. Army Institute of Soldier Nanotechnologies, National Science Foundation, the U.S. Army Research Office, the MIT Sea Grant, and the Defense Threat Reduction Agency. |
|||
314 | Quantum Memory Crystals Are a Step Towards Futuristic Internet | By Matthew Sparkes
This quantum memory is made from yttrium orthosilicate crystals (Image: ICFO)
A secure quantum internet is one step closer thanks to a quantum memory made from a crystal, which could form a crucial part of a device able to transmit entangled photons over a distance of 5 kilometres. Crucially, it is entirely compatible with existing communication networks, making it suitable for real-world use.
There has long been a vision of a quantum version of the internet , which would allow quantum computers to communicate across long distances by exchanging particles of light called photons that have been linked together with quantum entanglement , allowing them to transmit quantum states.
The problem is that photons get lost when they are transmitted through long lengths of fibre-optic cable. For normal photons, this isn't an issue, because networking equipment can simply measure and retransmit them after a certain distance, which is how normal fibre data connections work. But for entangled photons, any attempt to measure or amplify them changes their state.
The solution to this is a procedure called quantum teleportation. This involves simultaneously measuring the state of one photon from each of two pairs of entangled photons, which effectively links the most distant two photons in the chain.
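That step - physicists usually call it entanglement swapping - can be checked with a small state-vector calculation. The sketch below is purely illustrative, a four-qubit toy rather than anything like the photonic experiment, and it keeps only the measurement outcome that needs no correction:

```python
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
phi_plus = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)  # (|00>+|11>)/sqrt(2)

# Qubits (1,2) share one entangled pair, qubits (3,4) another.
state = np.kron(phi_plus, phi_plus)

# Joint (Bell) measurement on the middle qubits 2 and 3; keep the |phi+> outcome.
project_23 = np.kron(np.eye(2), np.kron(np.outer(phi_plus, phi_plus), np.eye(2)))
unnormalised = project_23 @ state
probability = np.vdot(unnormalised, unnormalised).real        # 0.25 for this outcome
post = unnormalised / np.sqrt(probability)

# Contract qubits 2 and 3 away and look at what is left on the outer qubits 1 and 4.
amplitudes = post.reshape(2, 2, 2, 2)                          # indices: q1, q2, q3, q4
outer_pair = np.einsum("abcd,bc->ad", amplitudes, phi_plus.reshape(2, 2)).reshape(4)
print(round(probability, 2), round(abs(outer_pair @ phi_plus) ** 2, 2))  # 0.25 1.0
```

The other three possible outcomes of the joint measurement also leave the outer photons entangled, up to a known correction, which is why the measurement result still has to be sent over an ordinary classical channel.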
"The photons are used not to send the information, but to share the entanglement. Then I can use that entanglement. I can teleport the quantum information I want from A to B," says Myungshik Kim at Imperial College London.
But that introduces another problem - all of your entangled pairs have to be ready at the same time to form a chain, which becomes more difficult over longer distances. To solve this, you need a quantum memory.
"The idea is that you try one link, and when you have a success, then you stall this entanglement and this link and you wait for the other link to be also ready. And when the other links are ready, then you can combine them together. This will extend the entanglement towards larger and larger distances," says Hugues de Riedmatten at the Institute of Photonic Sciences in Castelldefels, Spain.
De Riedmatten and his team used yttrium orthosilicate crystals to store pairs of entangled photons for 25 microseconds in two separate quantum memories. They performed the experiment between two labs, linked by 50 metres of fibre-optic cable, but theoretically this amount of storage time would allow devices up to 5 kilometres apart to communicate.
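A toy simulation of the "hold one link while the other catches up" logic that the memory enables might look like this; the per-attempt success probability and the number of attempt slots a 25-microsecond memory can bridge are invented numbers, not values from the paper:

```python
import random

random.seed(1)
P_SUCCESS = 0.3    # assumed chance that one half-link becomes entangled in a slot
HOLD_SLOTS = 25    # assumed number of slots the quantum memory can keep a success alive

def slots_until_both_links_ready():
    ready = {"left": None, "right": None}           # slot at which each half-link succeeded
    for slot in range(1, 100_000):
        for link, since in list(ready.items()):
            if since is None and random.random() < P_SUCCESS:
                ready[link] = slot                   # store this half in the memory
            elif since is not None and slot - since > HOLD_SLOTS:
                ready[link] = None                   # stored entanglement expired; retry
        if all(value is not None for value in ready.values()):
            return slot                              # both halves ready: swap entanglement
    return None

print(slots_until_both_links_ready())
```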
Crucially, the researchers were able to store and retrieve photons in the order they were sent, and transmit them using frequencies and fibre-optic cables already used in data networks, showing that the approach should work outside the lab. They now hope to increase the distance between the two memory devices by increasing the maximum storage time and make a fully functioning quantum repeater.
Journal reference: Nature, DOI: 10.1038/s41586-021-03481-8
By Matthew Sparkes
This quantum memory is made from yttrium orthosilicate crystals (Image: ICFO)
A secure quantum internet is one step closer thanks to a quantum memory made from a crystal, which could form a crucial part of a device able to transmit entangled photons over a distance of 5 kilometres. Crucially, it is entirely compatible with existing communication networks, making it suitable for real-world use.
There has long been a vision of a quantum version of the internet , which would allow quantum computers to communicate across long distances by exchanging particles of light called photons that have been linked together with quantum entanglement , allowing them to transmit quantum states.
The problem is that photons get lost when they are transmitted through long lengths of fibre-optic cable. For normal photons, this isn't an issue, because networking equipment can simply measure and retransmit them after a certain distance, which is how normal fibre data connections work. But for entangled photons, any attempt to measure or amplify them changes their state.
The solution to this is a procedure called quantum teleportation. This involves simultaneously measuring the state of one photon from each of two pairs of entangled photons, which effectively links the most distant two photons in the chain.
"The photons are used not to send the information, but to share the entanglement. Then I can use that entanglement. I can teleport the quantum information I want from A to B," says Myungshik Kim at Imperial College London.
But that introduces another problem - all of your entangled pairs have to be ready at the same time to form a chain, which becomes more difficult over longer distances. To solve this, you need a quantum memory.
"The idea is that you try one link, and when you have a success, then you stall this entanglement and this link and you wait for the other link to be also ready. And when the other links are ready, then you can combine them together. This will extend the entanglement towards larger and larger distances," says Hugues de Riedmatten at the Institute of Photonic Sciences in Castelldefels, Spain.
De Riedmatten and his team used yttrium orthosilicate crystals to store pairs of entangled photons for 25 microseconds in two separate quantum memories. They performed the experiment between two labs, linked by 50 metres of fibre-optic cable, but theoretically this amount of storage time would allow devices up to 5 kilometres apart to communicate.
Crucially, the researchers were able to store and retrieve photons in the order they were sent, and transmit them using frequencies and fibre-optic cables already used in data networks, showing that the approach should work outside the lab. They now hope to increase the distance between the two memory devices by increasing the maximum storage time and make a fully functioning quantum repeater.
Journal reference: Nature, DOI: 10.1038/s41586-021-03481-8
|||
315 | Shoot Better Drone Videos with a Single Word | The pros make it look easy, but making a movie with a drone can be anything but.
First, it takes skill to fly the often expensive pieces of equipment smoothly and without crashing. And once you've mastered flying, there are camera angles, panning speeds, trajectories and flight paths to plan.
With all the sensors and processing power onboard a drone and embedded in its camera, there must be a better way to capture the perfect shot.
"Sometimes you just want to tell the drone to make an exciting video," said Rogerio Bonatti , a Ph.D. candidate in Carnegie Mellon University's Robotics Institute .
Bonatti was part of a team from CMU, the University of Sao Paulo and Facebook AI Research that developed a model that enables a drone to shoot a video based on a desired emotion or viewer reaction. The drone uses camera angles, speeds and flight paths to generate a video that could be exciting, calm, enjoyable or nerve-wracking - depending on what the filmmaker tells it.
The team presented their paper on the work at the 2021 International Conference on Robotics and Automation this month. The presentation can be viewed on YouTube .
"We are learning how to map semantics, like a word or emotion, to the motion of the camera," Bonatti said.
But before "Lights! Camera! Action!" the researchers needed hundreds of videos and thousands of viewers to capture data on what makes a video evoke a certain emotion or feeling. Bonatti and the team collected a few hundred diverse videos. A few thousand viewers then watched 12 pairs of videos and gave them scores based on how the videos made them feel.
The researchers then used the data to train a model that directed the drone to mimic the cinematography corresponding to a particular emotion. If fast moving, tight shots created excitement, the drone would use those elements to make an exciting video when the user requested it. The drone could also create videos that were calm, revealing, interesting, nervous and enjoyable, among other emotions and their combinations, like an interesting and calm video.
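As a very rough sketch of what "mapping a requested feeling to camera motion" can mean - using made-up numbers and a plain linear fit rather than the team's actual learned model - one could score candidate shots against crowd-sourced ratings like this:

```python
import numpy as np

# Toy training clips: [speed m/s, distance to subject m, pan rate deg/s] with
# viewer "excitement" ratings in [0, 1]; all values are invented for illustration.
clips = np.array([[12, 3, 40], [2, 20, 5], [8, 5, 25], [1, 30, 2], [15, 2, 60]], float)
ratings = np.array([0.90, 0.20, 0.60, 0.10, 0.95])

features = np.hstack([clips, np.ones((len(clips), 1))])     # add a bias column
weights, *_ = np.linalg.lstsq(features, ratings, rcond=None)

def predicted_excitement(shot):
    return float(np.append(shot, 1.0) @ weights)

# "Make it exciting": pick the candidate shot whose predicted rating is closest to 0.9.
candidates = [np.array([3.0, 15, 10]), np.array([10.0, 4, 35]), np.array([14.0, 2, 55])]
best = min(candidates, key=lambda shot: abs(predicted_excitement(shot) - 0.9))
print("chosen shot (speed, distance, pan rate):", best)
```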
"I was surprised that this worked," said Bonatti. "We were trying to learn something incredibly subjective, and I was surprised that we obtained good quality data."
The team tested their model by creating sample videos, like a chase scene or someone dribbling a soccer ball, and asked viewers for feedback on how the videos felt. Bonatti said that not only did the team create videos intended to be exciting or calming that actually felt that way, but they also achieved different degrees of those emotions.
The team's work aims to improve the interface between people and cameras, whether that be helping amateur filmmakers with drone cinematography or providing on-screen directions on a smartphone to capture the perfect shot.
"This opens the door to many other applications, even outside filming or photography," Bonatti said. "We designed a model that maps emotions to robot behavior." | Aerial drones can shoot video according to emotional desires or viewer reactions through a model developed by researchers at Carnegie Mellon University (CMU), Brazil's University of Sao Paulo, and Facebook AI Research. CMU's Rogerio Bonatti said the new model is part of an effort "to map semantics, like a word or emotion, to the motion of the camera." The team first compiled several hundred videos, and then a few thousand viewers watched and scored 12 pairs of videos on the emotions they elicited. The researchers fed this data to a model that instructed the drone to imitate cinematography associated with specific emotions. Bonatti said the videos not only elicited intended emotions in viewers, but also could evoke different levels of emotions. | [] | [] | [] | scitechnews | None | None | None | None | Aerial drones can shoot video according to emotional desires or viewer reactions through a model developed by researchers at Carnegie Mellon University (CMU), Brazil's University of Sao Paulo, and Facebook AI Research. CMU's Rogerio Bonatti said the new model is part of an effort "to map semantics, like a word or emotion, to the motion of the camera." The team first compiled several hundred videos, and then a few thousand viewers watched and scored 12 pairs of videos on the emotions they elicited. The researchers fed this data to a model that instructed the drone to imitate cinematography associated with specific emotions. Bonatti said the videos not only elicited intended emotions in viewers, but also could evoke different levels of emotions.
The pros make it look easy, but making a movie with a drone can be anything but.
First, it takes skill to fly the often expensive pieces of equipment smoothly and without crashing. And once you've mastered flying, there are camera angles, panning speeds, trajectories and flight paths to plan.
With all the sensors and processing power onboard a drone and embedded in its camera, there must be a better way to capture the perfect shot.
"Sometimes you just want to tell the drone to make an exciting video," said Rogerio Bonatti , a Ph.D. candidate in Carnegie Mellon University's Robotics Institute .
Bonatti was part of a team from CMU, the University of Sao Paulo and Facebook AI Research that developed a model that enables a drone to shoot a video based on a desired emotion or viewer reaction. The drone uses camera angles, speeds and flight paths to generate a video that could be exciting, calm, enjoyable or nerve-wracking - depending on what the filmmaker tells it.
The team presented their paper on the work at the 2021 International Conference on Robotics and Automation this month. The presentation can be viewed on YouTube .
"We are learning how to map semantics, like a word or emotion, to the motion of the camera," Bonatti said.
But before "Lights! Camera! Action!" the researchers needed hundreds of videos and thousands of viewers to capture data on what makes a video evoke a certain emotion or feeling. Bonatti and the team collected a few hundred diverse videos. A few thousand viewers then watched 12 pairs of videos and gave them scores based on how the videos made them feel.
The researchers then used the data to train a model that directed the drone to mimic the cinematography corresponding to a particular emotion. If fast moving, tight shots created excitement, the drone would use those elements to make an exciting video when the user requested it. The drone could also create videos that were calm, revealing, interesting, nervous and enjoyable, among other emotions and their combinations, like an interesting and calm video.
"I was surprised that this worked," said Bonatti. "We were trying to learn something incredibly subjective, and I was surprised that we obtained good quality data."
The team tested their model by creating sample videos, like a chase scene or someone dribbling a soccer ball, and asked viewers for feedback on how the videos felt. Bonatti said that not only did the team create videos intended to be exciting or calming that actually felt that way, but they also achieved different degrees of those emotions.
The team's work aims to improve the interface between people and cameras, whether that be helping amateur filmmakers with drone cinematography or providing on-screen directions on a smartphone to capture the perfect shot.
"This opens the door to many other applications, even outside filming or photography," Bonatti said. "We designed a model that maps emotions to robot behavior." |
|||
317 | Researchers Develop Prototype Robotic Device to Pick, Trim Button Mushrooms | UNIVERSITY PARK, Pa. - Researchers in Penn State's College of Agricultural Sciences have developed a robotic mechanism for mushroom picking and trimming and demonstrated its effectiveness for the automated harvesting of button mushrooms.
In a new study, the prototype, which is designed to be integrated with a machine vision system, showed that it is capable of both picking and trimming mushrooms growing in a shelf system.
The research is consequential, according to lead author Long He , assistant professor of agricultural and biological engineering, because the mushroom industry has been facing labor shortages and rising labor costs. Mechanical or robotic picking can help alleviate those problems.
"The mushroom industry in Pennsylvania is producing about two-thirds of the mushrooms grown nationwide, and the growers here are having a difficult time finding laborers to handle the harvesting, which is a very labor intensive and difficult job," said He. "The industry is facing some challenges, so an automated system for harvesting like the one we are working on would be a big help."
The button mushroom - Agaricus bisporus - is an important agricultural commodity. A total of 891 million pounds of button mushrooms valued at $1.13 billion were consumed in the U.S. from 2017 to 2018. Of this production, 91% were for the fresh market, according to the U.S. Department of Agriculture, and were picked by hand, one by one, to ensure product quality, shelf life and appearance. Labor costs for mushroom harvesting account for 15% to 30% of the production value, He pointed out.
Developing a device to effectively harvest mushrooms was a complex endeavor, explained He. In hand-picking, a picker first locates a mature mushroom and detaches it with one hand, typically using three fingers. A knife, in the picker's other hand, is then used to remove the stipe end. Sometimes the picker waits until there are two or three mushrooms in hand and cuts them one by one. Finally, the mushroom is placed in a collection box. A robotic mechanism had to achieve an equivalent picking process.
The researchers designed a robotic mushroom-picking mechanism that included a picking "end-effector" based on a bending motion, a "4-degree-of-freedom positioning" end-effector for moving the picking end-effector, a mushroom stipe-trimming end-effector, and an electro-pneumatic control system. They fabricated a laboratory-scale prototype to validate the performance of the mechanism.
The research team used a suction cup mechanism to latch onto mushrooms and conducted bruise tests on the mushroom caps to analyze the influence of air pressure and acting time of the suction cup.
The test results, recently published in Transactions of the American Society of Agricultural and Biological Engineers, showed that the picking end-effector was successfully positioned to the target locations and its success rate was 90% at first pick, increasing to 94.2% after second pick.
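Those two figures are consistent with a simple retry calculation: if roughly 42% of the first-attempt failures succeed on the second try - a rate inferred here, not reported directly - the overall success rate rises from 90% to 94.2%:

```python
first_pick = 0.90
retry_success = 0.42                      # inferred from the two published figures
overall = first_pick + (1 - first_pick) * retry_success
print(f"{overall:.1%}")                   # 94.2%
```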
The trimming end-effector achieved a success rate of 97% overall. The bruise tests indicated that the air pressure was the main factor affecting the bruise level, compared to the suction-cup acting time, and an optimized suction cup may help to alleviate the bruise damage, the researchers noted. The laboratory test results indicated that the developed picking mechanism has potential to be implemented in automatic mushroom harvesting.
Button mushrooms for the study were grown in tubs at Penn State's Mushroom Research Center on the University Park campus. Fabrication and experiments were conducted at the Fruit Research and Extension Center in Biglerville. A total of 70 picking tests were conducted to evaluate the robotic picking mechanism. The working pressures of the pneumatic system and the suction cup were set at 80 and 25 pounds per square inch, respectively.
Other Penn State researchers involved in the study were Daeun Choi, assistant professor of agricultural and biological engineering, and John Pecchia, associate research professor, Department of Plant Pathology and Environmental Microbiology. Research team members also included doctoral students Mingsen Huang, from Jiangsu University, Zhenjiang, China; and Xiaohu Jiang , from Jilin University, Changchun, China, both visiting Penn State's Department of Agricultural and Biological Engineering.
The Penn State Mushroom Research Competitive Grants Program supported this research. | A prototype robotic mushroom-picker/trimmer engineered by Pennsylvania State University (Penn State) researchers successfully harvested button mushrooms growing in a shelf system. The device, designed to be integrated with a machine vision system, uses a suction cup mechanism to grip mushrooms. Laboratory testing showed the device's picking mechanism could be potentially implemented in automatic mushroom harvesting with a success rate of 90% at first pick, increasing to 94.2% after second pick. Testing also showed the trimming end-effector achieved a success rate of 97% overall. Penn State's Long He said this achievement is significant, given labor shortages and mounting labor costs in Pennsylvania's mushroom industry. | [] | [] | [] | scitechnews | None | None | None | None | A prototype robotic mushroom-picker/trimmer engineered by Pennsylvania State University (Penn State) researchers successfully harvested button mushrooms growing in a shelf system. The device, designed to be integrated with a machine vision system, uses a suction cup mechanism to grip mushrooms. Laboratory testing showed the device's picking mechanism could be potentially implemented in automatic mushroom harvesting with a success rate of 90% at first pick, increasing to 94.2% after second pick. Testing also showed the trimming end-effector achieved a success rate of 97% overall. Penn State's Long He said this achievement is significant, given labor shortages and mounting labor costs in Pennsylvania's mushroom industry.
UNIVERSITY PARK, Pa. - Researchers in Penn State's College of Agricultural Sciences have developed a robotic mechanism for mushroom picking and trimming and demonstrated its effectiveness for the automated harvesting of button mushrooms.
In a new study, the prototype, which is designed to be integrated with a machine vision system, showed that it is capable of both picking and trimming mushrooms growing in a shelf system.
The research is consequential, according to lead author Long He , assistant professor of agricultural and biological engineering, because the mushroom industry has been facing labor shortages and rising labor costs. Mechanical or robotic picking can help alleviate those problems.
"The mushroom industry in Pennsylvania is producing about two-thirds of the mushrooms grown nationwide, and the growers here are having a difficult time finding laborers to handle the harvesting, which is a very labor intensive and difficult job," said He. "The industry is facing some challenges, so an automated system for harvesting like the one we are working on would be a big help."
The button mushroom - Agaricus bisporus - is an important agricultural commodity. A total of 891 million pounds of button mushrooms valued at $1.13 billion were consumed in the U.S. from 2017 to 2018. Of this production, 91% were for the fresh market, according to the U.S. Department of Agriculture, and were picked by hand, one by one, to ensure product quality, shelf life and appearance. Labor costs for mushroom harvesting account for 15% to 30% of the production value, He pointed out.
Developing a device to effectively harvest mushrooms was a complex endeavor, explained He. In hand-picking, a picker first locates a mature mushroom and detaches it with one hand, typically using three fingers. A knife, in the picker's other hand, is then used to remove the stipe end. Sometimes the picker waits until there are two or three mushrooms in hand and cuts them one by one. Finally, the mushroom is placed in a collection box. A robotic mechanism had to achieve an equivalent picking process.
The researchers designed a robotic mushroom-picking mechanism that included a picking "end-effector" based on a bending motion, a "4-degree-of-freedom positioning" end-effector for moving the picking end-effector, a mushroom stipe-trimming end-effector, and an electro-pneumatic control system. They fabricated a laboratory-scale prototype to validate the performance of the mechanism.
The research team used a suction cup mechanism to latch onto mushrooms and conducted bruise tests on the mushroom caps to analyze the influence of air pressure and acting time of the suction cup.
The test results, recently published in Transactions of the American Society of Agricultural and Biological Engineers, showed that the picking end-effector was successfully positioned to the target locations and its success rate was 90% at first pick, increasing to 94.2% after second pick.
The trimming end-effector achieved a success rate of 97% overall. The bruise tests indicated that the air pressure was the main factor affecting the bruise level, compared to the suction-cup acting time, and an optimized suction cup may help to alleviate the bruise damage, the researchers noted. The laboratory test results indicated that the developed picking mechanism has potential to be implemented in automatic mushroom harvesting.
Button mushrooms for the study were grown in tubs at Penn State's Mushroom Research Center on the University Park campus. Fabrication and experiments were conducted at the Fruit Research and Extension Center in Biglerville. A total of 70 picking tests were conducted to evaluate the robotic picking mechanism. The working pressures of the pneumatic system and the suction cup were set at 80 and 25 pounds per square inch, respectively.
Other Penn State researchers involved in the study were Daeun Choi, assistant professor of agricultural and biological engineering, and John Pecchia, associate research professor, Department of Plant Pathology and Environmental Microbiology. Research team members also included doctoral students Mingsen Huang, from Jiangsu University, Zhenjiang, China; and Xiaohu Jiang , from Jilin University, Changchun, China, both visiting Penn State's Department of Agricultural and Biological Engineering.
The Penn State Mushroom Research Competitive Grants Program supported this research. |
|||
319 | Something Bothering You? Tell It to Woebot. | "You can deliver it pretty readily in a digital framework, help people grasp these concepts and practice the exercises that help them think in a more rational manner," said Jesse Wright, a psychiatrist who studies digital forms of C.B.T. and is the director of the University of Louisville Depression Center. "Whereas trying to put something like psychoanalysis into a digital format would seem pretty formidable."
Dr. Wright said several dozen studies had shown that computer algorithms could take someone through a standard C.B.T. process, step by step, and get results similar to in-person therapy. Those programs generally follow a set length and number of sessions and require some guidance from a human clinician.
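A heavily simplified, rule-based sketch of a single step in such a scripted exchange is shown below; it is illustrative only and has no connection to Woebot's actual code or clinical content:

```python
DISTORTION_CUES = {
    "always": "overgeneralization",
    "never": "overgeneralization",
    "should": "a 'should' statement",
    "ruined": "catastrophizing",
    "everyone": "mind reading",
}

def respond_to_thought(thought: str) -> str:
    """One scripted CBT-style prompt: name a possible distortion, then ask for a reframe."""
    found = sorted({label for cue, label in DISTORTION_CUES.items() if cue in thought.lower()})
    if found:
        return (f"That thought sounds like {', '.join(found)}. "
                "What might you say to a friend who told you the same thing?")
    return "Thanks for sharing. What evidence supports that thought, and what contradicts it?"

print(respond_to_thought("I always ruin everything"))
```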
But most smartphone apps don't work that way, he said. People tend to use therapy apps in short, fragmented spurts, without clinician oversight. Outside of limited company-sponsored research, Dr. Wright said he knew of no rigorous studies of that model.
And some automated conversations can be clunky and frustrating when the bot fails to pick up on the user's exact meaning. Dr. Wright said A.I. is not advanced enough to reliably duplicate a natural conversation.
"The chances of a bot being as wise, sympathetic, empathic, knowing, creative and being able to say the right thing at the right time as a human therapist is pretty slim," he said. "There's a limit to what they can do, a real limit."
John Torous, director of digital psychiatry for Beth Israel Deaconess Medical Center in Boston, said therapeutic bots might be promising, but he's worried they are being rolled out too soon, before the technology has caught up to the psychiatry.
"If you deliver C.B.T. in these bite-size parts, how much exposure to bite-size parts equals the original?" he said. "We don't have a good way to predict who's going to respond to them or not - or who it's good or bad for." | A chatbot application offered by Woebot Health uses the principles of cognitive behavioral therapy (CBT) to counsel patients via natural language processing and learned responses. Many mental health experts think CBT's structure and focus on fostering skills to change negative behaviors lends itself to algorithmic deployment to some extent. The Woebot app can emulate conversation, recall past sessions, and provide advice on sleep, anxiety, and stress. Woebot Health founder Alison Darcy said a well-designed bot can bond with users in an empathetic and therapeutic manner. Although Woebot does not approach actual therapy, the company is pursuing U.S. Food and Drug Administration clearance to extend the app to help treat postpartum and adolescent depression. | [] | [] | [] | scitechnews | None | None | None | None | A chatbot application offered by Woebot Health uses the principles of cognitive behavioral therapy (CBT) to counsel patients via natural language processing and learned responses. Many mental health experts think CBT's structure and focus on fostering skills to change negative behaviors lends itself to algorithmic deployment to some extent. The Woebot app can emulate conversation, recall past sessions, and provide advice on sleep, anxiety, and stress. Woebot Health founder Alison Darcy said a well-designed bot can bond with users in an empathetic and therapeutic manner. Although Woebot does not approach actual therapy, the company is pursuing U.S. Food and Drug Administration clearance to extend the app to help treat postpartum and adolescent depression.
"You can deliver it pretty readily in a digital framework, help people grasp these concepts and practice the exercises that help them think in a more rational manner," said Jesse Wright, a psychiatrist who studies digital forms of C.B.T. and is the director of the University of Louisville Depression Center. "Whereas trying to put something like psychoanalysis into a digital format would seem pretty formidable."
Dr. Wright said several dozen studies had shown that computer algorithms could take someone through a standard C.B.T. process, step by step, and get results similar to in-person therapy. Those programs generally follow a set length and number of sessions and require some guidance from a human clinician.
But most smartphone apps don't work that way, he said. People tend to use therapy apps in short, fragmented spurts, without clinician oversight. Outside of limited company-sponsored research, Dr. Wright said he knew of no rigorous studies of that model.
And some automated conversations can be clunky and frustrating when the bot fails to pick up on the user's exact meaning. Dr. Wright said A.I. is not advanced enough to reliably duplicate a natural conversation.
"The chances of a bot being as wise, sympathetic, empathic, knowing, creative and being able to say the right thing at the right time as a human therapist is pretty slim," he said. "There's a limit to what they can do, a real limit."
John Torous, director of digital psychiatry for Beth Israel Deaconess Medical Center in Boston, said therapeutic bots might be promising, but he's worried they are being rolled out too soon, before the technology has caught up to the psychiatry.
"If you deliver C.B.T. in these bite-size parts, how much exposure to bite-size parts equals the original?" he said. "We don't have a good way to predict who's going to respond to them or not - or who it's good or bad for." |
|||
320 | Taking Underwater Communications, Power to New Depths with Light | JONAS ALLERT/UNSPLASH
Scientists have made much progress in using light to transmit data in the open air, as well as to power various devices from a distance-but how to accomplish these feats underwater has been a bit murkier. However, in a new study published May 4 in IEEE Transactions on Wireless Communications , researchers have identified a new way to boost the transfer of power and data to devices underwater using light.
The ocean and other bodies of water are full of mysteries yet to be observed. Networks of underwater sensors are increasingly being deployed to gather information. Currently, the most common approach for remotely transmitting signals underwater is via sound waves, which easily travel long distances through the watery depths. However, sound cannot carry nearly as much data as light can.
"Visible light communication can provide data rates at several orders of magnitude beyond the capabilities of traditional acoustic techniques and is particularly suited for emerging bandwidth-hungry underwater applications," explains Murat Uysal , a professor with the Department of Electrical and Electronics Engineering at Ozyegin University , in Turkey.
He also notes that powering sensors and other devices underwater is another challenge, as replacing batteries in marine environments can be particularly difficult. Conveniently, any device that uses a solar panel to receive data via light signals could also be used to harvest energy simultaneously. In such a scenario, an autonomous underwater vehicle passing by a sensor could use a laser to both collect data and transfer power to the device.
Currently, the most effective method to do this is through an approach in which the power derived from the light signal is separated into Alternating Current (AC) and Direct Current (DC), whereby the AC signal is used to transmit data and the DC signal is used as a power source. This is called the AC-DC separation (ADS) method.
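In signal terms, that separation amounts to splitting the received photocurrent into its average (DC) level and the fluctuation (AC) around it, as in this toy on-off-keying sketch with invented numbers:

```python
import numpy as np

bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

# Received optical intensity: a steady bias that carries power, plus a small
# on-off modulation that carries the data.
received = 1.0 + 0.2 * (2 * bits - 1)

dc_level = received.mean()          # routed to the energy-harvesting circuit
ac_part = received - dc_level       # routed to the data detector
decoded = (ac_part > 0).astype(int)

print("bits recovered:", bool((decoded == bits).all()), "| DC level:", dc_level)
```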
However, some scientists, including Uysal's team, have been trying to build upon a different approach that strategically switches between energy harvesting and data transfer as needed to optimize performance. This approach is called simultaneous lightwave information and power transfer (SLIPT). Yet, despite its sophistication, the SLIPT technique has not surpassed the traditional ADS method in terms of efficiency - until now.
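One common way to realize that switching is to divide each received frame in time - harvest first, then listen - choosing the smallest harvesting share that still covers the node's energy budget. The values below are placeholders, not figures from the study:

```python
ENERGY_NEEDED = 2.0e-6   # joules the sensor must harvest per frame (assumed)
HARVEST_POWER = 10e-6    # watts collected while in harvesting mode (assumed)
FRAME_TIME = 0.5         # seconds per frame (assumed)
BIT_RATE = 1.0e6         # bits per second while in data mode (assumed)

harvest_fraction = min(1.0, ENERGY_NEEDED / (HARVEST_POWER * FRAME_TIME))
data_bits = (1.0 - harvest_fraction) * FRAME_TIME * BIT_RATE

print(f"harvest {harvest_fraction:.0%} of the frame, receive {data_bits:,.0f} bits")
```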
In their study, Uysal and his colleagues devised a SLIPT optimization algorithm that allows energy to be more efficiently extracted from the light spectrum. Uysal notes that this allows their SLIPT method to "significantly outperform" the traditional ADS method.
"The feasibility of wireless power was already successfully demonstrated in underwater environments [using light], despite the fact that seawater conductivity, temperature, pressure, water currents, and biofouling phenomenon impose additional challenges," says Uysal.
These examples have been largely experimental to date: For the real-world implementation of SLIPT, he says the commercialization of underwater devices capable of harvesting energy wirelessly will be necessary, as well as advances in underwater modems that can support communication using visible light. In the meantime, his team plans to explore ways of optimizing the trajectories of underwater autonomous vehicles, which could one day travel across vast areas of the world's oceans, simultaneously collecting data from the futuristic sensors and remotely powering them using light. | Researchers at Turkey's Ozyegin University have determined a way to transfer more power and data to underwater vehicles using light, with greater efficiency than the traditional alternating current-direct current (AC/DC) separation technique. Ozyegin's Murat Uysal and his team crafted a simultaneous lightwave information and power transfer (SLIPT) optimization algorithm that facilitates more efficient energy extraction from the light spectrum. Uysal said deploying SLIPT in real-world conditions requires the commercialization of underwater devices that can harvest energy wirelessly, in addition to innovative underwater modems that can enable communication via visible light. Said Uysal, "The feasibility of wireless power was already successfully demonstrated in underwater environments [using light], despite the fact that seawater conductivity, temperature, pressure, water currents, and biofouling phenomenon impose additional challenges." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Turkey's Ozyegin University have determined a way to transfer more power and data to underwater vehicles using light, with greater efficiency than the traditional alternating current-direct current (AC/DC) separation technique. Ozyegin's Murat Uysal and his team crafted a simultaneous lightwave information and power transfer (SLIPT) optimization algorithm that facilitates more efficient energy extraction from the light spectrum. Uysal said deploying SLIPT in real-world conditions requires the commercialization of underwater devices that can harvest energy wirelessly, in addition to innovative underwater modems that can enable communication via visible light. Said Uysal, "The feasibility of wireless power was already successfully demonstrated in underwater environments [using light], despite the fact that seawater conductivity, temperature, pressure, water currents, and biofouling phenomenon impose additional challenges."
JONAS ALLERT/UNSPLASH
Scientists have made much progress in using light to transmit data in the open air, as well as to power various devices from a distance-but how to accomplish these feats underwater has been a bit murkier. However, in a new study published May 4 in IEEE Transactions on Wireless Communications , researchers have identified a new way to boost the transfer of power and data to devices underwater using light.
The ocean and other bodies of water are full of mysteries yet to be observed. Networks of underwater sensors are increasingly being deployed to gather information. Currently, the most common approach for remotely transmitting signals underwater is via sound waves, which easily travel long distances through the watery depths. However, sound cannot carry nearly as much data as light can.
"Visible light communication can provide data rates at several orders of magnitude beyond the capabilities of traditional acoustic techniques and is particularly suited for emerging bandwidth-hungry underwater applications," explains Murat Uysal , a professor with the Department of Electrical and Electronics Engineering at Ozyegin University , in Turkey.
He also notes that powering sensors and other devices underwater is another challenge, as replacing batteries in marine environments can be particularly difficult. Conveniently, any device that uses a solar panel to receive data via light signals could also be used to harvest energy simultaneously. In such a scenario, an autonomous underwater vehicle passing by a sensor could use a laser to both collect data and transfer power to the device.
Currently, the most effective method to do this is through an approach in which the power derived from the light signal is separated into Alternating Current (AC) and Direct Current (DC), whereby the AC signal is used to transmit data and the DC signal is used as a power source. This is called the AC-DC separation (ADS) method.
However, some scientists, including Uysal's team, have been trying to build upon a different approach that strategically switches between energy harvesting and data transfer as needed to optimize performance. This approach is called simultaneous lightwave information and power transfer (SLIPT). Yet, despite its sophistication, the SLIPT technique has not surpassed the traditional ADS method in terms of efficiency - until now.
In their study, Uysal and his colleagues devised a SLIPT optimization algorithm that allows energy to be more efficiently extracted from the light spectrum. Uysal notes that this allows their SLIPT method to "significantly outperform" the traditional ADS method.
"The feasibility of wireless power was already successfully demonstrated in underwater environments [using light], despite the fact that seawater conductivity, temperature, pressure, water currents, and biofouling phenomenon impose additional challenges," says Uysal.
These examples have been largely experimental to date: For the real-world implementation of SLIPT, he says the commercialization of underwater devices capable of harvesting energy wirelessly will be necessary, as well as advances in underwater modems that can support communication using visible light. In the meantime, his team plans to explore ways of optimizing the trajectories of underwater autonomous vehicles, which could one day travel across vast areas of the world's oceans, simultaneously collecting data from the futuristic sensors and remotely powering them using light. |
|||
323 | UTSA Researchers Among Collaborative Improving Computer Vision for AI | The University of Texas at San Antonio is dedicated to the advancement of knowledge through research and discovery, teaching and learning, community engagement and public service. As an institution of access and excellence, UTSA embraces multicultural traditions and serves as a center for intellectual and creative resources as well as a catalyst for socioeconomic development and the commercialization of intellectual property - for Texas, the nation and the world.
To be a premier public research university, providing access to educational excellence and preparing citizen leaders for the global environment.
We encourage an environment of dialogue and discovery, where integrity, excellence, inclusiveness, respect, collaboration and innovation are fostered.
UTSA is a proud Hispanic Serving Institution (HSI) as designated by the U.S. Department of Education .
The University of Texas at San Antonio, a Hispanic Serving Institution situated in a global city that has been a crossroads of peoples and cultures for centuries, values diversity and inclusion in all aspects of university life. As an institution expressly founded to advance the education of Mexican Americans and other underserved communities, our university is committed to ending generations of discrimination and inequity. UTSA, a premier public research university, fosters academic excellence through a community of dialogue, discovery and innovation that embraces the uniqueness of each voice. | A new method to improve computer vision for artificial intelligence (AI) was developed by researchers at the University of Texas at San Antonio (UTSA), the University of Central Florida, the Air Force Research Laboratory, and SRI International. The researchers injected noise, or pixilation, into every layer of a neural network, compared with the conventional approach of injecting noise into only the input layer. The result, UTSA's Sumit Jha said, is that "The network is now forced to learn a more robust representation of the input in all of its internal layers. If every layer experiences more perturbations in every training, then the image representation will be more robust and you won't see the AI fail just because you change a few pixels of the input image." | [] | [] | [] | scitechnews | None | None | None | None | A new method to improve computer vision for artificial intelligence (AI) was developed by researchers at the University of Texas at San Antonio (UTSA), the University of Central Florida, the Air Force Research Laboratory, and SRI International. The researchers injected noise, or pixilation, into every layer of a neural network, compared with the conventional approach of injecting noise into only the input layer. The result, UTSA's Sumit Jha said, is that "The network is now forced to learn a more robust representation of the input in all of its internal layers. If every layer experiences more perturbations in every training, then the image representation will be more robust and you won't see the AI fail just because you change a few pixels of the input image."
The University of Texas at San Antonio is dedicated to the advancement of knowledge through research and discovery, teaching and learning, community engagement and public service. As an institution of access and excellence, UTSA embraces multicultural traditions and serves as a center for intellectual and creative resources as well as a catalyst for socioeconomic development and the commercialization of intellectual property - for Texas, the nation and the world.
To be a premier public research university, providing access to educational excellence and preparing citizen leaders for the global environment.
We encourage an environment of dialogue and discovery, where integrity, excellence, inclusiveness, respect, collaboration and innovation are fostered.
UTSA is a proud Hispanic Serving Institution (HSI) as designated by the U.S. Department of Education .
The University of Texas at San Antonio, a Hispanic Serving Institution situated in a global city that has been a crossroads of peoples and cultures for centuries, values diversity and inclusion in all aspects of university life. As an institution expressly founded to advance the education of Mexican Americans and other underserved communities, our university is committed to ending generations of discrimination and inequity. UTSA, a premier public research university, fosters academic excellence through a community of dialogue, discovery and innovation that embraces the uniqueness of each voice. |
|||
324 | AI Tool Helps Doctors Manage COVID-19 | Artificial intelligence (AI) technology developed by researchers at Waterloo Engineering is capable of assessing the severity of COVID-19 cases with a promising degree of accuracy.
The new work, part of the COVID-Net open-source initiative launched more than a year ago, involved researchers from Waterloo and spin-off startup company DarwinAI , as well as radiologists at the Stony Brook School of Medicine and the Montefiore Medical Center in New York.
Deep-learning AI was trained to analyze the extent and opacity of infection in the lungs of COVID-19 patients based on chest x-rays. Its scores were then compared to assessments of the same x-rays by expert radiologists.
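The comparison itself comes down to measuring agreement between two sets of severity scores. A minimal sketch of that step, with made-up numbers rather than the study's data, could look like this:

```python
import numpy as np

# Hypothetical extent scores for eight chest x-rays (0 = clear, 8 = severe).
model_scores = np.array([2, 5, 7, 1, 4, 6, 3, 8], float)
radiologist_scores = np.array([2, 4, 7, 1, 5, 6, 3, 7], float)

correlation = np.corrcoef(model_scores, radiologist_scores)[0, 1]
mean_abs_error = np.abs(model_scores - radiologist_scores).mean()

print(f"correlation {correlation:.2f}, mean absolute error {mean_abs_error:.2f}")
```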
For both extent and opacity, important indicators of the severity of infections, predictions made by the AI software were in good alignment with scores provided by the human experts.
Alexander Wong , a systems design engineering professor and co-founder of DarwinAI, said the technology could give doctors an important tool to help them manage cases.
"Assessing the severity of a patient with COVID-19 is a critical step in the clinical workflow for determining the best course of action for treatment and care, be it admitting the patient to ICU, giving a patient oxygen therapy, or putting a patient on a mechanical ventilator," he said.
"The promising results in this study show that artificial intelligence has a strong potential to be an effective tool for supporting frontline healthcare workers in their decisions and improving clinical efficiency, which is especially important given how much stress the ongoing pandemic has placed on healthcare systems around the world."
A paper on the research, Towards computer-aided severity assessment via deep neural networks for geographic and opacity extent scoring of SARS-CoV-2 chest X-rays , appears in the journal Scientific Reports.
Photo: Chest x-rays used in the COVID-Net study show differing infection extent and opacity in the lungs of COVID-19 patients. | Researchers at Canada's University of Waterloo have developed artificial intelligence (AI) technology to evaluate the degree of COVID-19 severity, as part of the open source COVID-Net project between Waterloo, spinoff startup DarwinAI, the Stony Brook School of Medicine, and the Montefiore Medical Center. The researchers trained the deep learning AI to extrapolate the extent and opacity of infection in the lungs of COVID-19 patients from chest x-rays. The software's evaluations were compared to expert radiologists' evaluations of the same images, and were found to align well with them. Waterloo's Alexander Wong said, "The promising results in this study show that artificial intelligence has a strong potential to be an effective tool for supporting frontline healthcare workers in their decisions and improving clinical efficiency, which is especially important given how much stress the ongoing pandemic has placed on healthcare systems around the world." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Canada's University of Waterloo have developed artificial intelligence (AI) technology to evaluate the degree of COVID-19 severity, as part of the open source COVID-Net project between Waterloo, spinoff startup DarwinAI, the Stony Brook School of Medicine, and the Montefiore Medical Center. The researchers trained the deep learning AI to extrapolate the extent and opacity of infection in the lungs of COVID-19 patients from chest x-rays. The software's evaluations were compared to expert radiologists' evaluations of the same images, and were found to align well with them. Waterloo's Alexander Wong said, "The promising results in this study show that artificial intelligence has a strong potential to be an effective tool for supporting frontline healthcare workers in their decisions and improving clinical efficiency, which is especially important given how much stress the ongoing pandemic has placed on healthcare systems around the world."
Artificial intelligence (AI) technology developed by researchers at Waterloo Engineering is capable of assessing the severity of COVID-19 cases with a promising degree of accuracy.
The new work, part of the COVID-Net open-source initiative launched more than a year ago, involved researchers from Waterloo and spin-off startup company DarwinAI , as well as radiologists at the Stony Brook School of Medicine and the Montefiore Medical Center in New York.
Deep-learning AI was trained to analyze the extent and opacity of infection in the lungs of COVID-19 patients based on chest x-rays. Its scores were then compared to assessments of the same x-rays by expert radiologists.
For both extent and opacity, important indicators of the severity of infections, predictions made by the AI software were in good alignment with scores provided by the human experts.
Alexander Wong , a systems design engineering professor and co-founder of DarwinAI, said the technology could give doctors an important tool to help them manage cases.
"Assessing the severity of a patient with COVID-19 is a critical step in the clinical workflow for determining the best course of action for treatment and care, be it admitting the patient to ICU, giving a patient oxygen therapy, or putting a patient on a mechanical ventilator," he said.
"The promising results in this study show that artificial intelligence has a strong potential to be an effective tool for supporting frontline healthcare workers in their decisions and improving clinical efficiency, which is especially important given how much stress the ongoing pandemic has placed on healthcare systems around the world."
A paper on the research, Towards computer-aided severity assessment via deep neural networks for geographic and opacity extent scoring of SARS-CoV-2 chest X-rays , appears in the journal Scientific Reports.
Photo: Chest x-rays used in the COVID-Net study show differing infection extent and opacity in the lungs of COVID-19 patients. |
|||
325 | Russian Hackers Launch Major Cyberattack Through U.S. Aid Agency's Email System, Microsoft Says | Microsoft reported that Nobelium, the Russian hacking group believed to be responsible for last year's SolarWinds attack, has targeted more than 150 organizations in at least 24 countries in the last week in another major cyberattack. More than 3,000 email accounts received phishing emails as part of the latest attack. Microsoft's Tom Burt said at least 25% of the affected organizations are involved in international development and humanitarian and human rights work. The hackers gained access to the U.S. Agency for International Development's email marketing account to distribute the phishing emails. The malicious file distributed as part of the attack contains the NativeZone backdoor, which Burt said can "enable a wide range of activities from stealing data to infecting other computers on a network." | [] | [] | [] | scitechnews | None | None | None | None | Microsoft reported that Nobelium, the Russian hacking group believed to be responsible for last year's SolarWinds attack, has targeted more than 150 organizations in at least 24 countries in the last week in another major cyberattack. More than 3,000 email accounts received phishing emails as part of the latest attack. Microsoft's Tom Burt said at least 25% of the affected organizations are involved in international development and humanitarian and human rights work. The hackers gained access to the U.S. Agency for International Development's email marketing account to distribute the phishing emails. The malicious file distributed as part of the attack contains the NativeZone backdoor, which Burt said can "enable a wide range of activities from stealing data to infecting other computers on a network."
|
||||
326 | How to Get More Women into Technology | Recent initiatives aimed at swelling the ranks of women in science, technology, engineering, and math (STEM) fields include enrichment programs, mentorships, and engagement with potential employers. Cornell Tech's Break Through Tech program initially focused on women attending the City University of New York, to get more students hired for summer internships. The program's founder, Judith Spitz, said it "tried to act as a concierge facilitator" to introduce employers to those students, but their eligibility criteria rarely matched students' experience. To address this problem, Break Through Tech now offers paid internships during academic recess, algorithmically matching students with employers. Other efforts are attempting to transform computing education for girls by removing biases and other factors that discourage women from pursuing STEM, as well as consulting with partner employers to hire more women. | [] | [] | [] | scitechnews | None | None | None | None | Recent initiatives aimed at swelling the ranks of women in science, technology, engineering, and math (STEM) fields include enrichment programs, mentorships, and engagement with potential employers. Cornell Tech's Break Through Tech program initially focused on women attending the City University of New York, to get more students hired for summer internships. The program's founder, Judith Spitz, said it "tried to act as a concierge facilitator" to introduce employers to those students, but their eligibility criteria rarely matched students' experience. To address this problem, Break Through Tech now offers paid internships during academic recess, algorithmically matching students with employers. Other efforts are attempting to transform computing education for girls by removing biases and other factors that discourage women from pursuing STEM, as well as consulting with partner employers to hire more women.
|
||||
328 | In Post-Pandemic Europe, Migrants Face Digital Fortress | PEPLO, Greece (AP) - As the world begins to travel again, Europe is sending migrants a loud message: Stay away!
Greek border police are firing bursts of deafening noise from an armored truck over the frontier into Turkey. Mounted on the vehicle, the long-range acoustic device, or "sound cannon," is the size of a small TV set but can match the volume of a jet engine.
It's part of a vast array of physical and experimental new digital barriers being installed and tested during the quiet months of the coronavirus pandemic at the 200-kilometer (125-mile) Greek border with Turkey to stop people entering the European Union illegally.
A new steel wall, similar to recent construction on the U.S.-Mexico border, blocks commonly-used crossing points along the Evros River that separates the two countries.
Nearby observation towers are being fitted with long-range cameras, night vision, and multiple sensors. The data will be sent to control centers to flag suspicious movement using artificial intelligence analysis.
"We will have a clear 'pre-border' picture of what's happening," Police Maj. Dimonsthenis Kamargios, head of the region's border guard authority, told the Associated Press.
The EU has poured 3 billion euros ($3.7 billion) into security tech research following the refugee crisis in 2015-16, when more than 1 million people - many escaping wars in Syria, Iraq and Afghanistan - fled to Greece and on to other EU countries.
The automated surveillance network being built on the Greek-Turkish border is aimed at detecting migrants early and deterring them from crossing, with river and land patrols using searchlights and long-range acoustic devices.
Key elements of the network will be launched by the end of the year, Kamargios said. "Our task is to prevent migrants from entering the country illegally. We need modern equipment and tools to do that."
Researchers at universities around Europe, working with private firms, have developed futuristic surveillance and verification technology, and tested more than a dozen projects at Greek borders.
AI-powered lie detectors and virtual border-guard interview bots have been piloted, as well as efforts to integrate satellite data with footage from drones on land, air, sea and underwater. Palm scanners record the unique vein pattern in a person's hand to use as a biometric identifier, and the makers of live camera reconstruction technology promise to erase foliage virtually, exposing people hiding near border areas.
Testing has also been conducted in Hungary, Latvia and elsewhere along the eastern EU perimeter.
The more aggressive migration strategy has been advanced by European policymakers over the past five years, funding deals with Mediterranean countries outside the bloc to hold migrants back and transforming the EU border protection agency, Frontex, from a coordination mechanism to a full-fledged multinational security force.
But regional migration deals have left the EU exposed to political pressure from neighbors.
Earlier this month, several thousand migrants crossed from Morocco into the Spanish enclave of Ceuta in a single day, prompting Spain to deploy the army. A similar crisis unfolded on the Greek-Turkish border and lasted three weeks last year.
Greece is pressing the EU to let Frontex patrol outside its territorial waters to stop migrants reaching Lesbos and other Greek islands, the most common route in Europe for illegal crossing in recent years.
Armed with new tech tools, European law enforcement authorities are leaning further outside borders.
Not all the surveillance programs being tested will be included in the new detection system, but human rights groups say the emerging technology will make it even harder for refugees fleeing wars and extreme hardship to find safety.
Patrick Breyer, a European lawmaker from Germany, has taken an EU research authority to court, demanding that details of the AI-powered lie detection program be made public.
"What we are seeing at the borders, and in treating foreign nationals generally, is that it's often a testing field for technologies that are later used on Europeans as well. And that's why everybody should care, in their own self-interest," Breyer of the German Pirates Party told the AP.
He urged authorities to allow broad oversight of border surveillance methods to review ethical concerns and prevent the sale of the technology through private partners to authoritarian regimes outside the EU.
Ella Jakubowska, of the digital rights group EDRi, argued that EU officials were adopting "techno-solutionism" to sideline moral considerations in dealing with the complex issue of migration.
"It is deeply troubling that, time and again, EU funds are poured into expensive technologies which are used in ways that criminalize, experiment with and dehumanize people on the move," she said.
The London-based group Privacy International argued the tougher border policing would provide a political reward to European leaders who have adopted a hard line on migration.
"If people migrating are viewed only as a security problem to be deterred and challenged, the inevitable result is that governments will throw technology at controlling them," said Edin Omanovic, an advocacy director at the group.
"It's not hard to see why: across Europe we have autocrats looking for power by targeting foreigners, otherwise progressive leaders who have failed to come up with any alternatives to copying their agendas, and a rampant arms industry with vast access to decision-makers."
Migration flows have slowed in many parts of Europe during the pandemic, interrupting an increase recorded over years. In Greece, for example, the number of arrivals dropped from nearly 75,000 in 2019 to 15,700 in 2020, a 78% decrease.
But the pressure is sure to return. Between 2000 and 2020, the world's migrant population rose by more than 80% to reach 272 million, according to United Nations data, fast outpacing international population growth.
At the Greek border village of Poros, the breakfast discussion at a cafe was about the recent crisis on the Spanish-Moroccan border.
Many of the houses in the area are abandoned and in a gradual state of collapse, and life is adjusting to that reality.
Cows use the steel wall as a barrier for the wind and rest nearby.
Panagiotis Kyrgiannis, a Poros resident, says the wall and other preventive measures have brought migrant crossings to a dead stop.
"We are used to seeing them cross over and come through the village in groups of 80 or a 100," he said. "We were not afraid. ... They don't want to settle here. All of this that's happening around us is not about us."
___
Associated Press writer Kelvin Chan in London contributed to this report.
___
Follow Derek Gatopoulos at https://twitter.com/dgatopoulos and Costas Kantouris at https://twitter.com/CostasKantouris
___
Follow AP's global migration coverage at https://apnews.com/hub/migration | European governments are installing and testing new digital technologies to bar migrants' illegal entry in the post-pandemic era. Observation towers are being equipped along the Greek-Turkish border with long-range cameras, night vision, and multiple sensors that will send data to control centers to flag suspicious movement through artificial intelligence (AI) analysis. The automated surveillance network is designed to detect migrants early and prevent them from crossing the border. Academic researchers across Europe, working with private companies, have developed surveillance and verification tools such as AI-powered lie detectors and virtual border-guard interview bots, and tested them in more than a dozen projects at Greek borders. | [] | [] | [] | scitechnews | None | None | None | None | European governments are installing and testing new digital technologies to bar migrants' illegal entry in the post-pandemic era. Observation towers are being equipped along the Greek-Turkish border with long-range cameras, night vision, and multiple sensors that will send data to control centers to flag suspicious movement through artificial intelligence (AI) analysis. The automated surveillance network is designed to detect migrants early and prevent them from crossing the border. Academic researchers across Europe, working with private companies, have developed surveillance and verification tools such as AI-powered lie detectors and virtual border-guard interview bots, and tested them in more than a dozen projects at Greek borders.
|||
329 | Google Sees Sweeping German Antitrust Probes into Data Terms | Germany's Federal Cartel Office announced the launch of two antitrust probes of Google under an expansion of its investigative authority, focusing on the company's data processing terms and whether it offers users "sufficient choice as to how Google will use their data." Cartel Office president Andreas Mundt said, "Due to the large number of digital services offered by Google, such as the Google search engine, YouTube, Google Maps, the Android operating system, or the Chrome browser, the company could be considered to be of paramount significance for competition across markets. It is often very difficult for other companies to challenge this position of power." Limiting Google's data collection could potentially jeopardize its business model of tracking people online to help serve up personalized advertising. Google spokesperson Ralf Bremer said the company will cooperate fully with the Cartel Office investigations. | [] | [] | [] | scitechnews | None | None | None | None | Germany's Federal Cartel Office announced the launch of two antitrust probes of Google under an expansion of its investigative authority, focusing on the company's data processing terms and whether it offers users "sufficient choice as to how Google will use their data." Cartel Office president Andreas Mundt said, "Due to the large number of digital services offered by Google, such as the Google search engine, YouTube, Google Maps, the Android operating system, or the Chrome browser, the company could be considered to be of paramount significance for competition across markets. It is often very difficult for other companies to challenge this position of power." Limiting Google's data collection could potentially jeopardize its business model of tracking people online to help serve up personalized advertising. Google spokesperson Ralf Bremer said the company will cooperate fully with the Cartel Office investigations.
|
||||
330 | Eye-Tracking Software Could Make Video Calls Feel More Lifelike | By Chris Stokel-Walker
Teaching over video calls can be challenging Robert Nickelsberg/Getty Images
A system that tracks your eye movements could help make video calls truer to life.
Shlomo Dubnov at the University of California, San Diego (UCSD), was frustrated by the inability to smoothly teach an online music class during the coronavirus pandemic. "With the online setting, we miss a lot of these little non-verbal body gestures and communications," he says.
With Ross Greer , a colleague at UCSD, he developed a machine learning system that monitors a presenter's eye movements to track who they are ... | University of California, San Diego (UCSD) researchers have designed an eye-tracking system that could make video conversations truer to life. UCSD's Shlomo Dubnov and Ross Greer developed software that employs two neural networks: one network captures a videoconferencing system's screen and records the location of each participant's video window and their name; the other network uses the call leader's camera feed to locate their face and eye position, then analyzes their eye movements to estimate where on the screen they are looking, and at whom. The system cross-checks that with the first network to determine who is in that position on screen, and shows their name to all participants. The algorithm, once trained, is able to estimate where participants were looking, and to get within 2 centimeters of the correct point on a 70-by-39-centimeter screen. | [] | [] | [] | scitechnews | None | None | None | None | University of California, San Diego (UCSD) researchers have designed an eye-tracking system that could make video conversations truer to life. UCSD's Shlomo Dubnov and Ross Greer developed software that employs two neural networks: one network captures a videoconferencing system's screen and records the location of each participant's video window and their name; the other network uses the call leader's camera feed to locate their face and eye position, then analyzes their eye movements to estimate where on the screen they are looking, and at whom. The system cross-checks that with the first network to determine who is in that position on screen, and shows their name to all participants. The algorithm, once trained, is able to estimate where participants were looking, and to get within 2 centimeters of the correct point on a 70-by-39-centimeter screen.
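The last step of the pipeline described here, deciding which participant the presenter is looking at once a gaze point has been estimated, amounts to a lookup over the on-screen window layout. The minimal sketch below shows that step only; the window layout, names, tolerance, and the `looked_at` function are hypothetical placeholders, not the UCSD system's code.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Window:
    """Screen-space bounding box of one participant's video tile, in pixels."""
    name: str
    x: float  # left edge
    y: float  # top edge
    w: float  # width
    h: float  # height

def looked_at(gaze_x: float, gaze_y: float, windows: List[Window]) -> Optional[str]:
    """Return the participant whose tile contains the estimated gaze point.
    If the point falls outside every tile, fall back to the nearest tile
    centre within a small tolerance (assumed here to be about 75 pixels)."""
    for win in windows:
        if win.x <= gaze_x <= win.x + win.w and win.y <= gaze_y <= win.y + win.h:
            return win.name

    def dist2(win: Window) -> float:
        cx, cy = win.x + win.w / 2, win.y + win.h / 2
        return (cx - gaze_x) ** 2 + (cy - gaze_y) ** 2

    nearest = min(windows, key=dist2)
    return nearest.name if dist2(nearest) < 75 ** 2 else None

# hypothetical two-tile layout on a 1920x1080 screen
layout = [Window("Ana", 0, 0, 960, 1080), Window("Ben", 960, 0, 960, 1080)]
print(looked_at(1400.0, 500.0, layout))  # -> Ben
```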
|||
332 | Tesla Activates In-Car Camera to Monitor Drivers Using Autopilot | Electric vehicle manufacturer Tesla has turned the in-car camera in its Model 3 and Model Y vehicles into a monitor for when its Autopilot advanced driver assistance system is in use. A Tesla software update specified that the "cabin camera above the rearview mirror can now detect and alert driver inattentiveness while Autopilot is engaged," and that the system can only save or transit information if data sharing is intentionally enabled. Tesla has been criticized for failing to activate its in-vehicle driver monitoring technology amid growing evidence that owners were misusing Autopilot. | [] | [] | [] | scitechnews | None | None | None | None | Electric vehicle manufacturer Tesla has turned the in-car camera in its Model 3 and Model Y vehicles into a monitor for when its Autopilot advanced driver assistance system is in use. A Tesla software update specified that the "cabin camera above the rearview mirror can now detect and alert driver inattentiveness while Autopilot is engaged," and that the system can only save or transit information if data sharing is intentionally enabled. Tesla has been criticized for failing to activate its in-vehicle driver monitoring technology amid growing evidence that owners were misusing Autopilot.
|
||||
333 | Technology to Manage Mental Health at Your Fingertips | To help patients manage their mental wellness between appointments, researchers at Texas A&M University have developed a smart device-based electronic platform that can continuously monitor the state of hyperarousal, one of the key signs of psychiatric distress.
They say this advanced technology can read facial cues, analyze voice patterns and integrate readings from built-in vital signs sensors on smartwatches to determine if a patient is under stress. Furthermore, the researchers note that the technology could provide feedback and alert care teams if there is an abrupt deterioration in a patient's mental health.
"Mental health can change very rapidly, and a lot of these changes remain hidden from providers or counselors," said Farzan Sasangohar, assistant professor in the Wm Michael Barnes '64 Department of Industrial and Systems Engineering. "Our technology will give providers and counselors continuous access to patient variables and patient status, and I think it's going to have a lifesaving implication because they can reach out to patients when they need it. Plus, it will empower patients to manage their mental health better."
The researchers' integrated electronic monitoring and feedback platform is described in the Journal of Psychiatric Practice.
Unlike many physical illnesses, which can usually be treated with a few visits to the doctor, mental health conditions can require an extended period of care. Between visits to a health care provider, information on a patient's mental health status has been lacking, so unforeseen deterioration in mental health has little chance of being addressed.
For example, a patient with anxiety disorder may experience a stressful life event, triggering extreme irritability and restlessness that may need immediate medical attention. But this patient may be between appointments. Meanwhile, health care professionals have no way to know about their patients' ongoing struggle with mental health, which can prevent them from providing the appropriate care.
Patient-reported outcomes between visits are critical for designing effective health care interventions for mental health so that there is continued improvement in the patient's wellbeing. To fill this gap, Sasangohar and his team worked with clinicians and researchers in the Department of Psychiatry at Houston Methodist Hospital to develop a smart electronic platform to help assess a patient's mental wellbeing.
"The hospital has the largest inpatient psychiatry clinic in the Houston area," Sasangohar said. "With this collaboration, we could include thousands of patients that had given consent for psychiatric monitoring."
Sasangohar's collaborators at Houston Methodist Hospital were already using an off-the-shelf patient navigation tool called CareSense. This software can be used to send reminders and monitoring questions to patients to better assess their wellbeing. For instance, individuals at risk for self-harm can be prompted to take questionnaires for major depressive disorder periodically.
Rather than solely relying on the patients' subjective assessment of their mental health, Sasangohar and his team also developed a suite of software for automated hyperarousal analysis that can be easily installed on smartphones and smartwatches. These programs gather input from face and voice recognition applications and from sensors already built into smartwatches, such as heart rate sensors and pedometers. The data from these sources then train machine-learning algorithms to recognize patterns aligned with the normal state of arousal. Once trained, the algorithms continuously monitor readings from the sensors and recognition applications to determine whether an individual is in an elevated arousal state.
"The key here is triangulation," Sasangohar said. "Each of these methods on their own, say facial sentiment analysis, show promise to detect the mental state, albeit with limitations. But when you combine that information with the voice sentiment analysis, as well as physiological indicators of distress, the diagnosis and inference become much more powerful and clearer."
Sasangohar said both the subjective evaluation of mental state and the objective evaluation from the machine-learning algorithms are integrated to make a final assessment of the state of arousal for a given individual.
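A minimal sketch of that triangulation idea follows: each modality contributes its own score, and the scores are fused into a single hyperarousal estimate that can trigger an alert. The features, weights, and threshold below are illustrative assumptions, not the team's published model.

```python
import math

def fused_arousal_probability(face_score: float,
                              voice_score: float,
                              heart_rate_bpm: float,
                              resting_hr_bpm: float,
                              steps_last_5min: int) -> float:
    """Fuse per-modality evidence into one hyperarousal probability (0-1).
    face_score and voice_score are assumed to be 0-1 outputs of the facial
    and voice sentiment models; the physiological term flags a heart-rate
    elevation that is not explained by recent physical activity."""
    hr_elevation = max(0.0, heart_rate_bpm - resting_hr_bpm) / resting_hr_bpm
    activity_discount = min(1.0, steps_last_5min / 500)  # walking explains some elevation
    physio_score = max(0.0, hr_elevation - 0.3 * activity_discount)

    # weighted fusion followed by a logistic squash; the weights are placeholders
    z = 2.0 * face_score + 2.0 * voice_score + 3.0 * physio_score - 3.0
    return 1.0 / (1.0 + math.exp(-z))

def should_alert_care_team(probability: float, threshold: float = 0.7) -> bool:
    """Raise an alert only when the fused evidence of distress is strong."""
    return probability >= threshold

print(should_alert_care_team(fused_arousal_probability(0.8, 0.7, 110, 65, 40)))  # True
```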
While the prototype of their technology is ready, the researchers said they still need to improve the battery life of smartphones carrying their software, since the algorithms consume a lot of power. They also have to address usability issues that keep patients from using the technology, such as difficulty navigating the application.
"Because of the stigmatization that surrounds mental illness, we wanted to build a mental health monitoring device that was very discreet," Sasangohar said. "So, we chose off-the-shelf products, like smartphones, and then build sophisticated applications that operate within these devices to make monitoring mental health discreet."
Other contributors to the study include Dr. Christopher Fowler and Dr. Alok Madan from The University of Texas McGovern School of Medicine and Baylor College of Medicine; Courtenay Bruce and Dr. Stephen Jones from the Houston Methodist Institute for Academic Medicine; Dr. Christopher Frueh from The University of Texas McGovern School of Medicine and the University of Hawaii; and Bita Kash from the Methodist Institute for Academic Medicine and Texas A&M.
This research is funded by the Texas A&M University President's Excellence Grant (X-Grant). | An electronic platform can read facial cues and vocal patterns, and integrate readings from smartwatch sensors to detect psychological stress, according to Texas A&M University (TAMU) researchers. They developed the monitoring and feedback system with Houston Methodist Hospital collaborators and other researchers in Texas and Hawaii, using smartwatch-collected data to train machine learning algorithms to recognize patterns that correspond with the normal state of arousal. The algorithms can then continuously monitor readings from the sensors and recognition applications to identify the state of hyperarousal, a sign of psychiatric distress. TAMU's Farzan Sasangohar said the technology "will give providers and counselors continuous access to patient variables and patient status, and I think it's going to have a lifesaving implication because they can reach out to patients when they need it. Plus, it will empower patients to manage their mental health better." | [] | [] | [] | scitechnews | None | None | None | None | An electronic platform can read facial cues and vocal patterns, and integrate readings from smartwatch sensors to detect psychological stress, according to Texas A&M University (TAMU) researchers. They developed the monitoring and feedback system with Houston Methodist Hospital collaborators and other researchers in Texas and Hawaii, using smartwatch-collected data to train machine learning algorithms to recognize patterns that correspond with the normal state of arousal. The algorithms can then continuously monitor readings from the sensors and recognition applications to identify the state of hyperarousal, a sign of psychiatric distress. TAMU's Farzan Sasangohar said the technology "will give providers and counselors continuous access to patient variables and patient status, and I think it's going to have a lifesaving implication because they can reach out to patients when they need it. Plus, it will empower patients to manage their mental health better."
|||
334 | A Helping Hand for Working Robots | A reimagined robot hand combines strength with resilience, sidestepping the problems that accompany existing designs.
Researchers at the Department of Robotics Engineering at South Korea's Daegu Gyeongbuk Institute of Science and Technology (DGIST) have developed and tested a new type of human-like mechanical hand that combines the benefits of existing robot hands while eliminating their weaknesses. They describe their new design in the journal Soft Robotics.
Until now, competing types of robotic hand designs offered a trade-off between strength and durability. One commonly used design, employing a rigid pin joint that mimics the mechanism in human finger joints, can lift heavy payloads, but is easily damaged in collisions, particularly if hit from the side. Meanwhile, fully compliant hands, typically made of molded silicone, are more flexible, harder to break, and better at grasping objects of various shapes, but they fall short on lifting power.
[ Prof. Dongwon Yun (Left) & Junmo Yang, Integrated M.S & Ph.D course student (Right) ]
The DGIST research team investigated the idea that a partially-compliant robot hand, using a rigid link connected to a structure known as a Crossed Flexural Hinge (CFH), could increase the robot's lifting power while minimizing damage in the event of a collision. Generally, a CFH is made of two strips of metal arranged in an X-shape that can flex or bend in one position while remaining rigid in others, without creating friction.
"Smart industrial robots and cooperative robots that interact with humans need both resilience and strength," says Dongwon Yun , who heads the DGIST BioRobotics and Mechatronics Lab and led the research team. "Our findings show the advantages of both a rigid structure and a compliant structure can be combined, and this will overcome the shortcomings of both."
The team 3D-printed the metal strips that serve as the CFH joints connecting segments in each robotic finger, which allow the robotic fingers to curve and straighten similar to a human hand. The researchers demonstrated the robotic hand's ability to grasp different objects, including a box of tissues, a small fan and a wallet. The CFH-jointed robot hand was shown to have 46.7 percent more shock absorption than a pin joint-oriented robotic hand. It was also stronger than fully compliant robot hands, with the ability to hold objects weighing up to four kilograms.
Further improvements are needed before robots with these partially-compliant hands are able to go to work alongside or directly with humans. The researchers note that additional analysis of materials is required, as well as field experiments to pinpoint the best practical applications.
"The industrial and healthcare settings where robots are widely used are dynamic and demanding places, so it's important to keep improving robots' performance," says DGIST engineering Ph.D. student Junmo Yang, the first paper author.
• • •
For more information, contact: Dongwon Yun, Associate Professor, Department of Robotics Engineering, Daegu Gyeongbuk Institute of Science and Technology (DGIST), E-mail: mech@dgist.ac.kr
Associated Links: Research paper in Soft Robotics, DOI: 10.1089/soro.2020.0067
Journal Reference Junmo Yang, Jeongseok Kim, Donghyun Kim, and Dongwon Yun, "Shock Resistive Flexure-based Anthropomorphic Hand with Enhanced Payload," Soft Robotics, on-line published on 5 Mar, 2021. | A new human-like mechanical hand developed by researchers at South Korea's Daegu Gyeongbuk Institute of Science and Technology (DGIST) is designed to be both strong and resilient. The new design features a crossed flexural hinge (CFH) that can flex or bend in one position and remain rigid in others without creating fiction. DGIST's Dongwon Yun said, "Our findings show the advantages of both a rigid structure and a compliant structure can be combined, and this will overcome the shortcomings of both." The robotic hand was found to be 46.7% more shock-absorbent than pin joint-oriented robotic hands; it also is capable of holding objects weighing up to four kilograms (8.8 lbs.). | [] | [] | [] | scitechnews | None | None | None | None | A new human-like mechanical hand developed by researchers at South Korea's Daegu Gyeongbuk Institute of Science and Technology (DGIST) is designed to be both strong and resilient. The new design features a crossed flexural hinge (CFH) that can flex or bend in one position and remain rigid in others without creating fiction. DGIST's Dongwon Yun said, "Our findings show the advantages of both a rigid structure and a compliant structure can be combined, and this will overcome the shortcomings of both." The robotic hand was found to be 46.7% more shock-absorbent than pin joint-oriented robotic hands; it also is capable of holding objects weighing up to four kilograms (8.8 lbs.).
|||
335 | Bristol Researchers' Camera Knows Exactly Where It Is | Overview of the on-sensor mapping. The system moves around and, as it does, builds a visual catalogue of what it observes. This is the map that is later used to know whether it has been there before. University of Bristol
Right: the system moves around the world. Left: a new image is seen and a decision is made whether to add it to the visual catalogue (top left); this is the pictorial map that can then be used to localise the system later. University of Bristol
During localisation the incoming image is compared to the visual catalogue (Descriptor database) and if a match is found, the system will tell where it is (Predicted node, small white rectangle at the top) relative to the catalogue. Note how the system is able to match images even if there are changes in illumination or objects like people moving. | A camera developed by researchers at the U.K.'s University of Bristol can construct a pictorial map to ascertain its current location. The camera employs a processing-on-sensor Pixel Processor Array (PPA) that can recognize objects at thousands of frames per second, to generate and use maps at the time of image capture, in conjunction with an on-board mapping algorithm. When presented with a new image, the algorithm determines whether it is sufficiently different to previously observed images, and stores or discards data based on that assessment. As the PPA device is moved around by a person or robot, for example, it compiles a visual catalog of views that can be used to match any new image in localization mode. The PPA does not send out images, which boosts the system's energy efficiency and privacy. | [] | [] | [] | scitechnews | None | None | None | None | A camera developed by researchers at the U.K.'s University of Bristol can construct a pictorial map to ascertain its current location. The camera employs a processing-on-sensor Pixel Processor Array (PPA) that can recognize objects at thousands of frames per second, to generate and use maps at the time of image capture, in conjunction with an on-board mapping algorithm. When presented with a new image, the algorithm determines whether it is sufficiently different to previously observed images, and stores or discards data based on that assessment. As the PPA device is moved around by a person or robot, for example, it compiles a visual catalog of views that can be used to match any new image in localization mode. The PPA does not send out images, which boosts the system's energy efficiency and privacy.
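The captions above describe a simple loop: keep a view only if it looks sufficiently different from everything already stored, then localise later by matching new views against the stored ones. The code below is a toy illustration of that loop under assumed descriptors and thresholds, not the Bristol team's on-sensor implementation.

```python
import numpy as np

class VisualCatalogue:
    """Key-frame map of the kind sketched above: store one compact descriptor
    per distinctive view during mapping, then localise by matching against
    the stored descriptors. Descriptor and thresholds are toy assumptions."""

    def __init__(self, add_threshold: float = 0.35, match_threshold: float = 0.25):
        self.descriptors = []                    # the "map" of stored views
        self.add_threshold = add_threshold       # how new a view must look to be stored
        self.match_threshold = match_threshold   # how close a view must be to count as a revisit

    @staticmethod
    def describe(image: np.ndarray) -> np.ndarray:
        """Toy descriptor: a normalised 8x8 intensity thumbnail of a greyscale image."""
        h, w = image.shape
        crop = image[: h - h % 8, : w - w % 8].astype(np.float64)
        thumb = crop.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3)).flatten()
        return thumb / (np.linalg.norm(thumb) + 1e-9)

    def observe(self, image: np.ndarray):
        """Mapping step: store the view only if it differs enough from the catalogue.
        Returns the new node index, or None if the view was discarded."""
        d = self.describe(image)
        if all(np.linalg.norm(d - ref) > self.add_threshold for ref in self.descriptors):
            self.descriptors.append(d)
            return len(self.descriptors) - 1
        return None

    def localise(self, image: np.ndarray):
        """Localisation step: return the index of the best-matching stored view, if any."""
        if not self.descriptors:
            return None
        d = self.describe(image)
        distances = [np.linalg.norm(d - ref) for ref in self.descriptors]
        best = int(np.argmin(distances))
        return best if distances[best] < self.match_threshold else None
```

On the actual device the descriptor is computed on the Pixel Processor Array itself, so only this compact catalogue, never the images, needs to leave the sensor.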
|||
337 | Google, Hospital Chain Partner in Push to Boost Efficiency | Google Cloud and national hospital chain HCA Healthcare announced an alliance to upgrade medical care efficiency by producing "a secure and dynamic data analytics platform for HCA Healthcare and [enabling] the development of next-generation operational models focused on actionable insights and improved workflows." The partners said they will strive to store medical devices and digital health records with Google Data. The companies also said they aim to design more effective algorithms that will "empower physicians, nurses, and others with workflow tools, analysis, and alerts on their mobile devices to help clinicians respond quickly to changes in a patient's condition." | [] | [] | [] | scitechnews | None | None | None | None | Google Cloud and national hospital chain HCA Healthcare announced an alliance to upgrade medical care efficiency by producing "a secure and dynamic data analytics platform for HCA Healthcare and [enabling] the development of next-generation operational models focused on actionable insights and improved workflows." The partners said they will strive to store medical devices and digital health records with Google Data. The companies also said they aim to design more effective algorithms that will "empower physicians, nurses, and others with workflow tools, analysis, and alerts on their mobile devices to help clinicians respond quickly to changes in a patient's condition."
|
||||
339 | Drones May Have Attacked Humans Fully Autonomously for the First Time | A recent report by the United Nations Security Council's Panel of Experts reveals that an incident in Libya last year may have marked the first time military drones autonomously attacked humans. Full details of the incident have not been released, but the report said retreating forces affiliated with Khalifa Haftar, commander of the Libyan National Army, were "hunted down" by Kargu-2 quadcopters during a civil war conflict in March 2020. The drones, produced by the Turkish firm STM, locate and identify targets in autonomous mode using on-board cameras with artificial intelligence, and attack by flying into the target and detonating. The report called the attack "highly effective" and said the drones did not require data connectivity with an operator. | [] | [] | [] | scitechnews | None | None | None | None | A recent report by the United Nations Security Council's Panel of Experts reveals that an incident in Libya last year may have marked the first time military drones autonomously attacked humans. Full details of the incident have not been released, but the report said retreating forces affiliated with Khalifa Haftar, commander of the Libyan National Army, were "hunted down" by Kargu-2 quadcopters during a civil war conflict in March 2020. The drones, produced by the Turkish firm STM, locate and identify targets in autonomous mode using on-board cameras with artificial intelligence, and attack by flying into the target and detonating. The report called the attack "highly effective" and said the drones did not require data connectivity with an operator.
|
||||
340 | Pipelines Now Must Report Cybersecurity Breaches | The U.S. Department of Homeland Security (DHS) ' Transportation Security Administration (TSA) has announced new reporting mandates for pipeline operators following the ransomware attack on the Colonial Pipeline. Operators are required to report any cyberattacks on their systems to the federal government within 12 hours; they also must appoint a round-the-clock, on-call cybersecurity coordinator to work with the government in the event of an attack, and then have 30 days to evaluate their cyber practices. Pipeline operators must report cyberattacks to the Cybersecurity and Infrastructure Security Agency, or face fines starting at $7,000 a day. DHS says roughly 100 pipelines have been deemed critical and subject to the new directive; a DHS official said additional actions will be taken "in the not-too-distant future." | [] | [] | [] | scitechnews | None | None | None | None | The U.S. Department of Homeland Security (DHS) ' Transportation Security Administration (TSA) has announced new reporting mandates for pipeline operators following the ransomware attack on the Colonial Pipeline. Operators are required to report any cyberattacks on their systems to the federal government within 12 hours; they also must appoint a round-the-clock, on-call cybersecurity coordinator to work with the government in the event of an attack, and then have 30 days to evaluate their cyber practices. Pipeline operators must report cyberattacks to the Cybersecurity and Infrastructure Security Agency, or face fines starting at $7,000 a day. DHS says roughly 100 pipelines have been deemed critical and subject to the new directive; a DHS official said additional actions will be taken "in the not-too-distant future."
|
||||
341 | Archaeologists vs. Computers: Study Tests Who's Best at Sifting the Past | A key piece of an archaeologist's job involves the tedious process of categorizing shards of pottery into subtypes. Ask archaeologists why they have put a fragment into a particular category and it's often difficult for them to say what exactly led them to that conclusion.
"It's kind of like looking at a photograph of Elvis Presley and looking at a photo of an impersonator," said Christian Downum, an anthropology professor at Northern Arizona University. "You know something is off with the impersonator, but it's hard to specify why it's not Elvis."
But archaeologists have now demonstrated that it's possible to program a computer to do this critical part of their job as well as they can. In a study published in the June issue of The Journal of Archaeological Science, researchers reported that a deep-learning model sorted images of decorated shards as accurately as - and occasionally more precisely than - four expert archaeologists did.
"It doesn't hurt my feelings," Dr. Downum, one of the study's authors, said. Rather, he said, it should improve the field by freeing up time and replacing "the subjective and difficult-to-describe process of classification with a system that gives the same result every time." | Computers can sort pottery shards into subtypes at least as accurately as human archaeologists, as demonstrated by Northern Arizona University researchers. The researchers pitted a deep learning neural network against four expert archaeologists in classifying thousands of images of Tusayan White Ware pottery among nine known types; the networks outperformed two experts and equaled the other two. The network also sifted through all 3,000 photos in minutes, while each expert's analysis took three to four months. The network also could more specifically communicate its reasoning for certain categorizations than its human counterparts, and offered a single answer for each classification. | [] | [] | [] | scitechnews | None | None | None | None | Computers can sort pottery shards into subtypes at least as accurately as human archaeologists, as demonstrated by Northern Arizona University researchers. The researchers pitted a deep learning neural network against four expert archaeologists in classifying thousands of images of Tusayan White Ware pottery among nine known types; the networks outperformed two experts and equaled the other two. The network also sifted through all 3,000 photos in minutes, while each expert's analysis took three to four months. The network also could more specifically communicate its reasoning for certain categorizations than its human counterparts, and offered a single answer for each classification.
|||
342 | AI Technology Protects Privacy | To address this problem, an interdisciplinary team at TUM has worked with researchers at Imperial College London and the non-profit OpenMined to develop a unique combination of AI-based diagnostic processes for radiological image data that safeguard data privacy. In a paper published in Nature Machine Intelligence, the team has now presented a successful application: a deep learning algorithm that helps to classify pneumonia conditions in x-rays of children.
"We have tested our models against specialized radiologists. In some cases the models showed comparable or better accuracy in diagnosing various types of pneumonia in children," says Prof. Marcus R. Makowski, the Director of the Department of Diagnostic and Interventional Radiology at the Klinikum rechts der Isar of TUM . | Technology developed by researchers at Germany's Technical University of Munich (TUM) ensures that the training of artificial intelligence (AI) algorithms does not infringe on patients' personal data. The team, collaborating with researchers at the U.K.'s Imperial College London and the OpenMined private AI technology nonprofit, integrated AI-based diagnostic processes for radiological image data that preserve privacy. TUM's Alexander Ziller said the models were trained in various hospitals on local data, so "data owners did not have to share their data and retained complete control." The researchers also used data aggregation to block the identification of institutions where the algorithm was trained, while a third technique was utilized to guarantee differential privacy. TUM's Rickmer Braren said, "It is often claimed that data protection and the utilization of data must always be in conflict. But we are now proving that this does not have to be true." | [] | [] | [] | scitechnews | None | None | None | None | Technology developed by researchers at Germany's Technical University of Munich (TUM) ensures that the training of artificial intelligence (AI) algorithms does not infringe on patients' personal data. The team, collaborating with researchers at the U.K.'s Imperial College London and the OpenMined private AI technology nonprofit, integrated AI-based diagnostic processes for radiological image data that preserve privacy. TUM's Alexander Ziller said the models were trained in various hospitals on local data, so "data owners did not have to share their data and retained complete control." The researchers also used data aggregation to block the identification of institutions where the algorithm was trained, while a third technique was utilized to guarantee differential privacy. TUM's Rickmer Braren said, "It is often claimed that data protection and the utilization of data must always be in conflict. But we are now proving that this does not have to be true."
To address this problem, an interdisciplinary team at TUM has worked with researchers at Imperial College London and the non-profit OpenMined to develop a unique combination of AI-based diagnostic processes for radiological image data that safeguard data privacy. In a paper published in Nature Machine Intelligence, the team has now presented a successful application: a deep learning algorithm that helps to classify pneumonia conditions in x-rays of children.
"We have tested our models against specialized radiologists. In some cases the models showed comparable or better accuracy in diagnosing various types of pneumonia in children," says Prof. Marcus R. Makowski, the Director of the Department of Diagnostic and Interventional Radiology at the Klinikum rechts der Isar of TUM . |
|||
343 | You'll Soon be Able to Use Your Apple Watch Without Touching the Screen | Apple will launch features later this year that enable hand gesture-based control of Apple Watches and eye-motion control of iPads. The AssistiveTouch hand-movement control feature was developed to facilitate touch-free smartwatch control for users with upper-body limb differences; an Apple spokesman said it will be available to all users at launch. Apple also plans to update the iPad operating system to support third-party eye-tracking devices, and to introduce "background sounds" that iPhone users can use to minimize distractions and maintain calm. Jonathan Hassell at U.K.-based accessibility consultancy Hassell Inclusion said, "The big guys tend to do the right thing. The thing that now needs to be done is for everybody to follow their lead." | [] | [] | [] | scitechnews | None | None | None | None | Apple will launch features later this year that enable hand gesture-based control of Apple Watches and eye-motion control of iPads. The AssistiveTouch hand-movement control feature was developed to facilitate touch-free smartwatch control for users with upper-body limb differences; an Apple spokesman said it will be available to all users at launch. Apple also plans to update the iPad operating system to support third-party eye-tracking devices, and to introduce "background sounds" that iPhone users can use to minimize distractions and maintain calm. Jonathan Hassell at U.K.-based accessibility consultancy Hassell Inclusion said, "The big guys tend to do the right thing. The thing that now needs to be done is for everybody to follow their lead."
|
||||
344 | Massive Phishing Campaign Delivers Password-Stealing Malware Disguised as Ransomware | A massive phishing campaign is distributing what looks like ransomware but is in fact trojan malware that creates a backdoor into Windows systems to steal usernames, passwords and other information from victims.
Detailed by cybersecurity researchers at Microsoft , the latest version of the Java-based STRRAT malware is being sent out via a large email campaign , which uses compromised email accounts to distribute messages claiming to be related to payments, alongside an image posing as a PDF attachment that looks like it has information about the supposed transfer.
When the user opens this file, they're connected to a malicious domain that downloads STRRAT malware onto the machine.
The updated version of the malware is what researchers describe as "notably more obfuscated and modular than previous versions," but it retains the same backdoor functions, including the ability to collect passwords, log keystrokes, run remote commands and PowerShell, and more - ultimately giving the attacker full control over the infected machine.
As part of the infection process, the malware adds a .crimson file name extension to files in an attempt to make the attack look like ransomware - although no files are actually encrypted.
This could be an attempt to distract the victim and hide the fact that the PC has actually been compromised with a remote access trojan - a highly stealthy form of malware, as opposed to a much more overt ransomware attack .
It's likely that this spam campaign - or similar phishing campaigns - is still active as cyber criminals continue attempts to distribute STRRAT malware to more victims.
Given how the malware is able to gain access to usernames and passwords, it's possible that anyone whose system becomes infected could see their email account abused by attackers in an effort to further spread STRRAT with new phishing emails.
However, as the malware campaign relies on phishing emails, there are steps that can be taken to avoid becoming a new victim of the attack. These include being wary of unexpected or unusual messages - particularly those that appear to offer a financial incentive - as well as taking caution when it comes to opening emails and attachments being delivered from strange or unknown email addresses.
Using antivirus software to detect and identify threats can also help prevent malicious emails from landing in inboxes in the first place, removing the risk of someone opening the message and clicking the malicious link. | Microsoft cybersecurity researchers said the latest version of the Java-based STRRAT malware is being distributed as part of a massive phishing campaign. The messages, sent via compromised email accounts, claim to be related to payments and contain an image that looks like a PDF attachment with information about the transfer; when opened, the file connects users to a malicious domain that downloads the malware. The addition of a .crimson file name extension aims to make the attack appear like ransomware, but actually a remote access trojan is placed on the PC to steal usernames, passwords, and other information through a backdoor into Windows systems. Victims' email accounts could be used by the attackers in new phishing emails to spread the STRRAT malware. Users can protect themselves by using antivirus software and exerting caution when opening emails and attachments from unknown senders. | [] | [] | [] | scitechnews | None | None | None | None | Microsoft cybersecurity researchers said the latest version of the Java-based STRRAT malware is being distributed as part of a massive phishing campaign. The messages, sent via compromised email accounts, claim to be related to payments and contain an image that looks like a PDF attachment with information about the transfer; when opened, the file connects users to a malicious domain that downloads the malware. The addition of a .crimson file name extension aims to make the attack appear like ransomware, but actually a remote access trojan is placed on the PC to steal usernames, passwords, and other information through a backdoor into Windows systems. Victims' email accounts could be used by the attackers in new phishing emails to spread the STRRAT malware. Users can protect themselves by using antivirus software and exerting caution when opening emails and attachments from unknown senders.
A massive phishing campaign is distributing what looks like ransomware but is in fact trojan malware that creates a backdoor into Windows systems to steal usernames, passwords and other information from victims.
Detailed by cybersecurity researchers at Microsoft , the latest version of the Java-based STRRAT malware is being sent out via a large email campaign , which uses compromised email accounts to distribute messages claiming to be related to payments, alongside an image posing as a PDF attachment that looks like it has information about the supposed transfer.
When the user opens this file, they're connected to a malicious domain that downloads STRRAT malware onto the machine.
The updated version of the malware is what researchers describe as "notably more obfuscated and modular than previous versions," but it retains the same backdoor functions, including the ability to collect passwords, log keystrokes, run remote commands and PowerShell, and more - ultimately giving the attacker full control over the infected machine.
As part of the infection process, the malware adds a .crimson file name extension to files in an attempt to make the attack look like ransomware - although no files are actually encrypted.
This could be an attempt to distract the victim and hide the fact that the PC has actually been compromised with a remote access trojan - a highly stealthy form of malware, as opposed to a much more overt ransomware attack .
It's likely that this spam campaign - or similar phishing campaigns - is still active as cyber criminals continue attempts to distribute STRRAT malware to more victims.
Given how the malware is able to gain access to usernames and passwords, it's possible that anyone whose system becomes infected could see their email account abused by attackers in an effort to further spread STRRAT with new phishing emails.
However, as the malware campaign relies on phishing emails, there are steps that can be taken to avoid becoming a new victim of the attack. These include being wary of unexpected or unusual messages - particularly those that appear to offer a financial incentive - as well as taking caution when it comes to opening emails and attachments being delivered from strange or unknown email addresses.
Using antivirus software to detect and identify threats can also help prevent malicious emails from landing in inboxes in the first place, removing the risk of someone opening the message and clicking the malicious link. |
|||
345 | Technique Breaks the Mold for 3D-Printing Medical Implants | O'Connell said other approaches were able to create impressive structures, but only with precisely-tailored materials, tuned with particular additives or modified with special chemistry.
"Importantly, our technique is versatile enough to use medical grade materials off-the-shelf," he said.
"It's extraordinary to create such complex shapes using a basic 'high school' grade 3D printer.
"That really lowers the bar for entry into the field, and brings us a significant step closer to making tissue engineering a medical reality."
The research, published in Advanced Materials Technologies, was conducted at BioFab3D@ACMD, a state-of-the-art bioengineering research, education and training hub located at St Vincent's Hospital Melbourne.
Co-author Associate Professor Claudia Di Bella, an orthopedic surgeon at St Vincent's Hospital Melbourne, said the study showcases the possibilities that open up when clinicians, engineers and biomedical scientists come together to address a clinical problem.
"A common problem faced by clinicians is the inability to access technological experimental solutions for the problems they face daily," Di Bella said.
"While a clinician is the best professional to recognise a problem and think about potential solutions, biomedical engineers can turn that idea into reality.
"Learning how to speak a common language across engineering and medicine is often an initial barrier, but once this is overcome, the possibilities are endless." | The development of three-dimensionally (3D) -printed molds by researchers at Australia's Royal Melbourne Institute of Technology (RMIT) invert the traditional 3D printing of medical implants. The Negative Embodied Sacrificial Template 3D (NEST3D) printing method generates molds featuring intricately patterned cavities filled with biocompatible materials; these are dissolved in water and leave behind fingernail-sized bioscaffolds with elaborate structures that standard 3D printers could not previously produce. The RMIT researchers developed NEST3D with collaborators at the University of Melbourne and St. Vincent's Hospital Melbourne. RMIT's Cathal O'Connell said, "We essentially draw the structure we want in the empty space inside our 3D-printed mold. This allows us to create the tiny, complex microstructures where cells will flourish." RMIT's Stephanie Doyle said the technique's versatility allowed the production of dozens of trial bioscaffolds using a range of materials. | [] | [] | [] | scitechnews | None | None | None | None | The development of three-dimensionally (3D) -printed molds by researchers at Australia's Royal Melbourne Institute of Technology (RMIT) invert the traditional 3D printing of medical implants. The Negative Embodied Sacrificial Template 3D (NEST3D) printing method generates molds featuring intricately patterned cavities filled with biocompatible materials; these are dissolved in water and leave behind fingernail-sized bioscaffolds with elaborate structures that standard 3D printers could not previously produce. The RMIT researchers developed NEST3D with collaborators at the University of Melbourne and St. Vincent's Hospital Melbourne. RMIT's Cathal O'Connell said, "We essentially draw the structure we want in the empty space inside our 3D-printed mold. This allows us to create the tiny, complex microstructures where cells will flourish." RMIT's Stephanie Doyle said the technique's versatility allowed the production of dozens of trial bioscaffolds using a range of materials.
O'Connell said other approaches were able to create impressive structures, but only with precisely-tailored materials, tuned with particular additives or modified with special chemistry.
"Importantly, our technique is versatile enough to use medical grade materials off-the-shelf," he said.
"It's extraordinary to create such complex shapes using a basic 'high school' grade 3D printer.
"That really lowers the bar for entry into the field, and brings us a significant step closer to making tissue engineering a medical reality."
The research, published in Advanced Materials Technologies, was conducted at BioFab3D@ACMD, a state-of-the-art bioengineering research, education and training hub located at St Vincent's Hospital Melbourne.
Co-author Associate Professor Claudia Di Bella, an orthopedic surgeon at St Vincent's Hospital Melbourne, said the study showcases the possibilities that open up when clinicians, engineers and biomedical scientists come together to address a clinical problem.
"A common problem faced by clinicians is the inability to access technological experimental solutions for the problems they face daily," Di Bella said.
"While a clinician is the best professional to recognise a problem and think about potential solutions, biomedical engineers can turn that idea into reality.
"Learning how to speak a common language across engineering and medicine is often an initial barrier, but once this is overcome, the possibilities are endless." |
|||
346 | Germany Greenlights Driverless Vehicles on Public Roads | Legislation passed by the lower house of Germany's parliament would permit driverless vehicles on that nation's public roads by 2022. The bill specifically addresses vehicles with the Society of Automobile Engineers' Level 4 autonomy designation, which means all driving is handled by the vehicle's computer in certain conditions. The legislation also details possible initial applications for self-driving cars, including public passenger transport, business and supply trips, logistics, company shuttles, and trips between medical centers and retirement homes. Commercial driverless vehicle operators would have to carry liability insurance and be able to stop autonomous operations remotely, among other requirements. The bill still needs the approval of the upper chamber of parliament to be enacted into law. | [] | [] | [] | scitechnews | None | None | None | None | Legislation passed by the lower house of Germany's parliament would permit driverless vehicles on that nation's public roads by 2022. The bill specifically addresses vehicles with the Society of Automobile Engineers' Level 4 autonomy designation, which means all driving is handled by the vehicle's computer in certain conditions. The legislation also details possible initial applications for self-driving cars, including public passenger transport, business and supply trips, logistics, company shuttles, and trips between medical centers and retirement homes. Commercial driverless vehicle operators would have to carry liability insurance and be able to stop autonomous operations remotely, among other requirements. The bill still needs the approval of the upper chamber of parliament to be enacted into law.
|
||||
347 | UVA Develops Tools to Battle Cancer, Advance Genomics Research | UVA's Chongzhi Zang, PhD, and his colleagues and students have developed a new computational method to map the folding patterns of our chromosomes in three dimensions.
School of Medicine scientists have developed important new resources that will aid the battle against cancer and advance cutting-edge genomics research.
UVA's Chongzhi Zang, PhD, and his colleagues and students have developed a new computational method to map the folding patterns of our chromosomes in three dimensions from experimental data. This is important because the configuration of genetic material inside our chromosomes actually affects how our genes work. In cancer, that configuration can go wrong, so scientists want to understand the genome architecture of both healthy cells and cancerous ones. This will help them develop better ways to treat and prevent cancer, in addition to advancing many other areas of medical research.
Using their new approaches, Zang and his colleagues and students have already unearthed a treasure trove of useful data, and they are making their techniques and findings available to their fellow scientists. To advance cancer research, they've even built an interactive website that brings together their findings with vast amounts of data from other resources. They say their new website, bartcancer.org , can provide "unique insights" for cancer researchers.
"The folding pattern of the genome is highly dynamic; it changes frequently and differs from cell to cell. Our new method aims to link this dynamic pattern to the control of gene activities," said Zang, a computational biologist with UVA's Center for Public Health Genomics and UVA Cancer Center. "A better understanding of this link can help unravel the genetic cause of cancer and other diseases and can guide future drug development for precision medicine."
Zang's new approach to mapping the folding of our genome is called BART3D. Essentially, it compares available three-dimensional configuration data about one region of a chromosome with many of its neighbors. It can then extrapolate from this comparison to fill in blanks in the blueprints of genetic material using "Binding Analysis for Regulation of Transcription," or BART, a novel algorithm they recently developed. The result is a map that offers unprecedented insights into how our genes interact with the "transcriptional regulators" that control their activity. Identifying these regulators helps scientists understand what turns particular genes on and off - information they can use in the battle against cancer and other diseases.
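As a very rough illustration of the idea of comparing a chromosomal region's 3D contact profile with its neighbors (this is not the BART3D algorithm itself; the window size, toy contact matrices, and scoring below are invented for the sketch), one can score, for each genomic bin, how much its contacts with nearby bins change between two conditions:

```python
import numpy as np

def neighborhood_contact_change(contacts_a: np.ndarray, contacts_b: np.ndarray,
                                window: int = 5) -> np.ndarray:
    """For each genomic bin, sum the change in contact frequency with its `window`
    neighbors on either side between two Hi-C-style contact matrices.
    Purely illustrative scoring, not the BART3D method."""
    n = contacts_a.shape[0]
    diff = contacts_b - contacts_a
    scores = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        scores[i] = diff[i, lo:hi].sum()
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((100, 100)); a = (a + a.T) / 2                    # toy symmetric contact map
    b = a + rng.normal(0, 0.05, size=a.shape); b = (b + b.T) / 2     # perturbed condition
    print(neighborhood_contact_change(a, b)[:10])
```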
The researchers have built a web server, BARTweb, to offer the BART tool to their fellow scientists. It's available, for free, at http://bartweb.org . The source code is available at https://github.com/zanglab/bart2 . Test runs demonstrated that the server outperformed several existing tools for identifying the transcriptional regulators that control particular sets of genes, the researchers report.
The UVA team also built the BART Cancer database to advance research into 15 different types of cancer, including breast, lung, colorectal and prostate cancer. Scientists can search the interactive database to see which regulators are more active and which are less active in each cancer.
"While a cancer researcher can browse our database to screen potential drug targets, any biomedical scientist can use our web server to analyze their own genetic data," Zang said. "We hope that the tools and resources we develop can benefit the whole biomedical research community by accelerating scientific discoveries and future therapeutic development."
The researchers have published their findings in a trio of new scientific papers: They describe BART3D in the scientific journal Bioinformatics in an article by Zhenjia Wang, Yifan Zhang and Chongzhi Zang; they describe BARTweb in NAR Genomics and Bioinformatics in an article by Wenjing Ma, Zhenjia Wang, Yifan Zhang, Neal E. Magee, Yayi Feng, Ruoyao Shi, Yang Chen and Chongzhi Zang; and they describe BART Cancer in NAR Cancer in a paper by Zachary V. Thomas, Zhenjia Wang and Chongzhi Zang.
Chongzhi Zang is a member of the School of Medicine's Department of Public Health Sciences and Department of Biochemistry and Molecular Genetics. He is also part of UVA's Department of Biomedical Engineering, a collaboration of the School of Medicine and the School of Engineering.
The work was supported by the National Institutes of Health, grants R35GM133712 and K22CA204439; a Phi Beta Psi Sorority Research Grant; and a Seed Award from the Jayne Koskinas Ted Giovanis Foundation for Health and Policy.
UVA's Chongzhi Zang, PhD, and his colleagues and students have developed a new computational method to map the folding patterns of our chromosomes in three dimensions.
School of Medicine scientists have developed important new resources that will aid the battle against cancer and advance cutting-edge genomics research.
UVA's Chongzhi Zang, PhD, and his colleagues and students have developed a new computational method to map the folding patterns of our chromosomes in three dimensions from experimental data. This is important because the configuration of genetic material inside our chromosomes actually affects how our genes work. In cancer, that configuration can go wrong, so scientists want to understand the genome architecture of both healthy cells and cancerous ones. This will help them develop better ways to treat and prevent cancer, in addition to advancing many other areas of medical research.
Using their new approaches, Zang and his colleagues and students have already unearthed a treasure trove of useful data, and they are making their techniques and findings available to their fellow scientists. To advance cancer research, they've even built an interactive website that brings together their findings with vast amounts of data from other resources. They say their new website, bartcancer.org , can provide "unique insights" for cancer researchers.
"The folding pattern of the genome is highly dynamic; it changes frequently and differs from cell to cell. Our new method aims to link this dynamic pattern to the control of gene activities," said Zang, a computational biologist with UVA's Center for Public Health Genomics and UVA Cancer Center. "A better understanding of this link can help unravel the genetic cause of cancer and other diseases and can guide future drug development for precision medicine."
Zang's new approach to mapping the folding of our genome is called BART3D. Essentially, it compares available three-dimensional configuration data about one region of a chromosome with many of its neighbors. It can then extrapolate from this comparison to fill in blanks in the blueprints of genetic material using "Binding Analysis for Regulation of Transcription," or BART, a novel algorithm they recently developed. The result is a map that offers unprecedented insights into how our genes interact with the "transcriptional regulators" that control their activity. Identifying these regulators helps scientists understand what turns particular genes on and off - information they can use in the battle against cancer and other diseases.
The researchers have built a web server, BARTweb, to offer the BART tool to their fellow scientists. It's available, for free, at http://bartweb.org . The source code is available at https://github.com/zanglab/bart2 . Test runs demonstrated that the server outperformed several existing tools for identifying the transcriptional regulators that control particular sets of genes, the researchers report.
The UVA team also built the BART Cancer database to advance research into 15 different types of cancer, including breast, lung, colorectal and prostate cancer. Scientists can search the interactive database to see which regulators are more active and which are less active in each cancer.
"While a cancer researcher can browse our database to screen potential drug targets, any biomedical scientist can use our web server to analyze their own genetic data," Zang said. "We hope that the tools and resources we develop can benefit the whole biomedical research community by accelerating scientific discoveries and future therapeutic development."
The researchers have published their findings in a trio of new scientific papers: They describe BART3D in the scientific journal Bioinformatics in an article by Zhenjia Wang, Yifan Zhang and Chongzhi Zang; they describe BARTweb in NAR Genomics and Bioinformatics in an article by Wenjing Ma, Zhenjia Wang, Yifan Zhang, Neal E. Magee, Yayi Feng, Ruoyao Shi, Yang Chen and Chongzhi Zang; and they describe BART Cancer in NAR Cancer in a paper by Zachary V. Thomas, Zhenjia Wang and Chongzhi Zang.
Chongzhi Zang is a member of the School of Medicine's Department of Public Health Sciences and Department of Biochemistry and Molecular Genetics. He is also part of UVA's Department of Biomedical Engineering, a collaboration of the School of Medicine and the School of Engineering.
The work was supported by the National Institutes of Health, grants R35GM133712 and K22CA204439; a Phi Beta Psi Sorority Research Grant; and a Seed Award from the Jayne Koskinas Ted Giovanis Foundation for Health and Policy.
|||
348 | Slender Robotic Finger Senses Buried Items | Over the years, robots have gotten quite good at identifying objects - as long as they're out in the open.
Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.
MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs. | Massachusetts Institute of Technology (MIT) researchers have developed a slender robot finger with a sharp tip and tactile sensing capabilities that can help identify buried objects. Dubbed "Digger Finger," the robot can sense the shape of items submerged within granular materials like sand or rice. The researchers adapted their GelSight tactile sensor for the Digger Finger, and added vibration to make it easier for the robot to clear jams in the granular media that occur when particles lock together. MIT's Radhen Patel said the Digger Finger's motion pattern must be adjusted based on the type of media in which it is searching, and the size and shape of its grains. MIT's Edward Adelson said the Digger Finger "would be helpful if you're trying to find and disable buried bombs, for example." | [] | [] | [] | scitechnews | None | None | None | None | Massachusetts Institute of Technology (MIT) researchers have developed a slender robot finger with a sharp tip and tactile sensing capabilities that can help identify buried objects. Dubbed "Digger Finger," the robot can sense the shape of items submerged within granular materials like sand or rice. The researchers adapted their GelSight tactile sensor for the Digger Finger, and added vibration to make it easier for the robot to clear jams in the granular media that occur when particles lock together. MIT's Radhen Patel said the Digger Finger's motion pattern must be adjusted based on the type of media in which it is searching, and the size and shape of its grains. MIT's Edward Adelson said the Digger Finger "would be helpful if you're trying to find and disable buried bombs, for example."
Over the years, robots have gotten quite good at identifying objects - as long as they're out in the open.
Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.
MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs. |
|||
349 | Fetal Heart Defect Detection Improved by Using ML | UC San Francisco researchers have found a way to double doctors' accuracy in detecting the vast majority of complex fetal heart defects in utero - when interventions could either correct them or greatly improve a child's chance of survival - by combining routine ultrasound imaging with machine-learning computer tools.
The team, led by UCSF cardiologist Rima Arnaout, MD, trained a group of machine-learning models to mimic the tasks that clinicians follow in diagnosing complex congenital heart disease (CHD). Worldwide, humans detect as few as 30 percent to 50 percent of these conditions before birth. However, the combination of human-performed ultrasound and machine analysis allowed the researchers to detect 95 percent of CHD in their test dataset.
The findings appear in the May issue of Nature Medicine.
Fetal ultrasound screening is universally recommended during the second trimester of pregnancy in the United States and by the World Health Organization. Diagnosis of fetal heart defects, in particular, can improve newborn outcomes and enable further research on in utero therapies, the researchers said.
"Second-trimester screening is a rite of passage in pregnancy to tell if the fetus is a boy or girl, but it is also used to screen for birth defects," said Arnaout, a UCSF assistant professor and lead author of the paper. Typically, the imaging includes five cardiac views that could allow clinicians to diagnosis up to 90 percent of congenital heart disease, but in practice, only about half of those are detected at non-expert centers.
"On the one hand, heart defects are the most common kind of birth defect, and it's very important to diagnose them before birth," Arnaout said. "On the other hand, they are still rare enough that detecting them is difficult even for trained clinicians, unless they are highly sub-specialized. And all too often, in clinics and hospitals worldwide, sensitivity and specificity can be quite low."
The UCSF team, which included fetal cardiologist and senior author Anita Moon-Grady, MD, trained the machine tools to mimic clinicians' work in three steps. First, they utilized neural networks to find five views of the heart that are important for diagnosis. Then, they again used neural networks to decide whether each of these views was normal or not. Finally, a third algorithm combined the results of the first two steps to give a final result of whether the fetal heart was normal or abnormal.
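The three-step structure described above can be sketched as a simple pipeline. The code below is only an illustration of that structure: the view names, stub scoring functions, threshold, and array shapes are invented placeholders, not the UCSF networks or their decision rule.

```python
import numpy as np

VIEW_NAMES = [f"view_{k}" for k in range(1, 6)]  # five diagnostic views; placeholder names

def detect_views(frames: np.ndarray) -> dict:
    """Step 1: for each diagnostic view, pick the frame that best matches it.
    A random score matrix stands in for the view-detection neural network."""
    rng = np.random.default_rng(0)
    scores = rng.random((len(VIEW_NAMES), len(frames)))
    return {view: frames[int(np.argmax(scores[i]))] for i, view in enumerate(VIEW_NAMES)}

def score_view_abnormality(frame: np.ndarray) -> float:
    """Step 2: per-view classifier returning a pseudo P(abnormal); the mean pixel
    value stands in for a per-view neural network."""
    return float(frame.mean())

def classify_heart(frames: np.ndarray, threshold: float = 0.5) -> str:
    """Step 3: combine the per-view scores into a single normal/abnormal call.
    Taking the maximum score is an arbitrary combination rule for illustration."""
    views = detect_views(frames)
    composite = max(score_view_abnormality(img) for img in views.values())
    return "abnormal" if composite >= threshold else "normal"

if __name__ == "__main__":
    fake_clip = np.random.default_rng(1).random((200, 64, 64))  # 200 toy ultrasound frames
    print(classify_heart(fake_clip))
```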
"We hope this work will revolutionize screening for these birth defects," said Arnaout, a member of the UCSF Bakar Computational Health Sciences Institute , the UCSF Center for Intelligent Imaging , and a Chan Zuckerberg Biohub Intercampus Research Award Investigator. "Our goal is to help forge a path toward using machine learning to solve diagnostic challenges for the many diseases where ultrasound is used in screening and diagnosis."
Co-authors include Lara Curran, MBBS; Yili Zhao, PhD, and Erin Chinn, MS, from UCSF; and Jami Levine from Boston Children's Hospital. The project was supported by the UCSF Academic Research Systems, National Institutes of Health (UL1 TR991872 and R01HL150394), the American Heart Association and the Department of Defense. Please see the paper for additional authors, funding details, and disclosures. | University of California, San Francisco (UCSF) researchers doubled the accuracy of doctors in detecting fetal heart defects in unborn children by integrating ultrasound imaging with machine learning (ML). The researchers trained ML models to mimic tasks that doctors conduct in diagnosing congenital heart disease (CHD). The technique employs neural networks to find five views of the heart, then uses neural networks again to decide whether each view is normal; finally, a third algorithm combines the results of the first two into a diagnosis of fetal-heart normality or abnormality. Humans typically detect 30% to 50% of CHD cases in utero, while the UCSF system detected 95% of CHD cases in the test dataset. UCSF's Rima Arnaout said, "Our goal is to help forge a path toward using machine learning to solve diagnostic challenges for the many diseases where ultrasound is used in screening and diagnosis." | [] | [] | [] | scitechnews | None | None | None | None | University of California, San Francisco (UCSF) researchers doubled the accuracy of doctors in detecting fetal heart defects in unborn children by integrating ultrasound imaging with machine learning (ML). The researchers trained ML models to mimic tasks that doctors conduct in diagnosing congenital heart disease (CHD). The technique employs neural networks to find five views of the heart, then uses neural networks again to decide whether each view is normal; finally, a third algorithm combines the results of the first two into a diagnosis of fetal-heart normality or abnormality. Humans typically detect 30% to 50% of CHD cases in utero, while the UCSF system detected 95% of CHD cases in the test dataset. UCSF's Rima Arnaout said, "Our goal is to help forge a path toward using machine learning to solve diagnostic challenges for the many diseases where ultrasound is used in screening and diagnosis."
UC San Francisco researchers have found a way to double doctors' accuracy in detecting the vast majority of complex fetal heart defects in utero - when interventions could either correct them or greatly improve a child's chance of survival - by combining routine ultrasound imaging with machine-learning computer tools.
The team, led by UCSF cardiologist Rima Arnaout, MD, trained a group of machine-learning models to mimic the tasks that clinicians follow in diagnosing complex congenital heart disease (CHD). Worldwide, humans detect as few as 30 percent to 50 percent of these conditions before birth. However, the combination of human-performed ultrasound and machine analysis allowed the researchers to detect 95 percent of CHD in their test dataset.
The findings appear in the May issue of Nature Medicine.
Fetal ultrasound screening is universally recommended during the second trimester of pregnancy in the United States and by the World Health Organization. Diagnosis of fetal heart defects, in particular, can improve newborn outcomes and enable further research on in utero therapies, the researchers said.
"Second-trimester screening is a rite of passage in pregnancy to tell if the fetus is a boy or girl, but it is also used to screen for birth defects," said Arnaout, a UCSF assistant professor and lead author of the paper. Typically, the imaging includes five cardiac views that could allow clinicians to diagnosis up to 90 percent of congenital heart disease, but in practice, only about half of those are detected at non-expert centers.
"On the one hand, heart defects are the most common kind of birth defect, and it's very important to diagnose them before birth," Arnaout said. "On the other hand, they are still rare enough that detecting them is difficult even for trained clinicians, unless they are highly sub-specialized. And all too often, in clinics and hospitals worldwide, sensitivity and specificity can be quite low."
The UCSF team, which included fetal cardiologist and senior author Anita Moon-Grady, MD, trained the machine tools to mimic clinicians' work in three steps. First, they utilized neural networks to find five views of the heart that are important for diagnosis. Then, they again used neural networks to decide whether each of these views was normal or not. Finally, a third algorithm combined the results of the first two steps to give a final result of whether the fetal heart was normal or abnormal.
"We hope this work will revolutionize screening for these birth defects," said Arnaout, a member of the UCSF Bakar Computational Health Sciences Institute , the UCSF Center for Intelligent Imaging , and a Chan Zuckerberg Biohub Intercampus Research Award Investigator. "Our goal is to help forge a path toward using machine learning to solve diagnostic challenges for the many diseases where ultrasound is used in screening and diagnosis."
Co-authors include Lara Curran, MBBS; Yili Zhao, PhD, and Erin Chinn, MS, from UCSF; and Jami Levine from Boston Children's Hospital. The project was supported by the UCSF Academic Research Systems, National Institutes of Health (UL1 TR991872 and R01HL150394), the American Heart Association and the Department of Defense. Please see the paper for additional authors, funding details, and disclosures. |
|||
350 | ACM Recognizes Far-Reaching Technical Achievements with Special Awards | New York, NY, May 26, 2021 - ACM, the Association for Computing Machinery, today announced the recipients of four prestigious technical awards. These leaders were selected by their peers for making contributions that extend the boundaries of research, advance industry, and lay the foundation for technologies that transform society.
Shyamnath Gollakota, University of Washington, is the recipient of the 2020 ACM Grace Murray Hopper Award for contributions to the use of wireless signals in creating novel applications, including battery-free communications, health monitoring, gesture recognition, and bio-based wireless sensing. His work has revolutionized and reimagined what can be done using wireless systems and has a feel of technologies depicted in science fiction novels.
Gollakota defined the technology referred to today as ambient backscatter - a mechanism by which an unpowered, battery-less device can harvest existing wireless signals (such as broadcast TV or WiFi) in the environment for energy and use it to transmit encoded data. In addition, he has developed techniques that can use sonar signals from smartphones to support numerous healthcare applications. Examples include detection and diagnosis of breathing anomalies such as apnea, detection of ear infections, and even detection of life-threatening opioid overdoses. These innovations have the potential to transform the way healthcare systems will be designed and delivered in the future, and some of these efforts are now being commercialized for real-world use.
Gollakota also opened up a new field of extremely lightweight mobile sensors and controllers attached to insects, demonstrating how wireless technology can stream video data from the backs of tiny insects. Some observers believe this could be a first step to creating an internet of biological things, in which insects are employed as delivery vehicles for mobile sensors.
The ACM Grace Murray Hopper Award is given to the outstanding young computer professional of the year, selected on the basis of a single recent major technical or service contribution. This award is accompanied by a prize of $35,000. The candidate must have been 35 years of age or less at the time the qualifying contribution was made. Financial support for this award is provided by Microsoft.
Margo Seltzer, University of British Columbia; Mike Olson, formerly of Cloudera; and Keith Bostic, MongoDB, receive the ACM Software System Award for Berkeley DB, which was an early exemplar of the NoSQL movement and pioneered the "dual-license" approach to software licensing.
Since 1991, Berkeley DB has been a pervasive force underlying the modern internet: it is a part of nearly every POSIX or POSIX-like system, as well as the GNU standard C library (glibc) and many higher-level scripting languages. Berkeley DB was the transactional key/value store for a range of first- and second-generation internet services, including account management, mail and identity servers, online trading platforms and many other software-as-a-service platforms.
As an open source package, Berkeley DB is an invaluable teaching tool, allowing students to see under the hood of a tool that they have grown familiar with by use. The code is clean, well structured, and well documented - it had to be, as it was meant to be consumed and used by an unlimited number of software developers.
As originally created by Seltzer, Olson and Bostic, Berkeley DB was distributed as part of the University of California's Fourth Berkeley Software Distribution. Seltzer and Bostic subsequently founded Sleepycat Software in 1996 to continue development of Berkeley DB and provide commercial support. Olson joined in 1997, and for 10 years, Berkeley DB was the de facto data store for major web infrastructure. As the first production-quality commercial key/value store, it helped launch the NoSQL movement; as the engine behind Amazon's Dynamo and the University of Michigan's SLAPD server, Berkeley DB helped move non-relational databases into the public eye.
Sleepycat Software pioneered the "dual-license" model of software licensing: use and redistribution in Open Source applications was always free, and companies could choose a commercial license for support or to distribute Berkeley DB as part of proprietary packages. This model pointed the way for a number of other open source companies, and this innovation has been widely adopted in open source communities. The open source Berkeley DB release includes all the features of the complete commercial version, and developers building prototypes with open source releases suffer no delay when transitioning to a proprietary product that embeds Berkeley DB.
In summary, Berkeley DB has been one of the most useful, powerful, reliable, and long-lived software packages. The longevity of Berkeley DB's contribution is particularly impressive in an industry with frequent software system turnover.
The ACM Software System Award is presented to an institution or individual(s) recognized for developing a software system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both. The Software System Award carries a prize of $35,000. Financial support for the Software System Award is provided by IBM.
Yossi Azar, Tel Aviv University; Andrei Broder, Google Research; Anna Karlin, University of Washington; Michael Mitzenmacher, Harvard University; and Eli Upfal, Brown University, receive the ACM Paris Kanellakis Theory and Practice Award for the discovery and analysis of balanced allocations, known as the power of two choices, and their extensive applications to practice.
Azar, Broder, Karlin, Mitzenmacher and Upfal introduced the Balanced Allocations framework, also known as the power of two choices paradigm, an elegant theoretical work that had a widespread practical impact.
When n balls are thrown into n bins chosen uniformly at random, it is known that with high probability, the maximum load on any bin is bounded by (lg n/lg lg n)(1 + o(1)). Azar, Broder, Karlin, and Upfal (STOC 1994) proved that adding a little bit of choice makes a big difference. When throwing each ball, instead of choosing one bin at random, choose two bins at random, and then place the ball in the bin with the lesser load. This minor change brings on an exponential improvement; now with high probability, the maximal load in any bin is bounded by (lg lg n/lg 2) + O(1).
In the same work, they showed that if each ball has d choices, then the maximum load drops with high probability to (ln ln n/ln d) + O(1). These results were greatly extended by Mitzenmacher in his 1996 PhD dissertation, where he removed the sequential setting and developed a framework for using the power of two choices in queueing systems.
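To make the gap concrete, here is a small, self-contained simulation of the balanced-allocations experiment described above; it is an illustrative sketch rather than code from the awarded work, and the bin count, seeding, and tie-breaking rule are arbitrary assumptions.

```python
import random

def max_load_one_choice(n: int) -> int:
    """Throw n balls into n bins; each ball goes to one uniformly random bin."""
    bins = [0] * n
    for _ in range(n):
        bins[random.randrange(n)] += 1
    return max(bins)

def max_load_two_choices(n: int) -> int:
    """Throw n balls into n bins; each ball samples two random bins and is placed
    in whichever currently holds fewer balls (the 'power of two choices')."""
    bins = [0] * n
    for _ in range(n):
        i, j = random.randrange(n), random.randrange(n)
        bins[i if bins[i] <= bins[j] else j] += 1
    return max(bins)

if __name__ == "__main__":
    n = 500_000  # illustrative problem size
    print("one random choice, max bin load: ", max_load_one_choice(n))
    print("best of two choices, max bin load:", max_load_two_choices(n))
```

On a typical run the two-choice maximum load is visibly smaller than the one-choice maximum, mirroring the exponential gap between the two bounds quoted above.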
Since bins and balls are the basic model for analyzing data structures, such as hashing, or processes like load balancing of jobs in servers, it is not surprising that the power of two choices, which requires only a local decision rather than global coordination, has led to a wide range of practical applications. These include Google's web index, Akamai's overlay routing network, and highly reliable distributed data storage systems used by Microsoft and Dropbox, which are all based on variants of the power of two choices paradigm. There are many other software systems that use balanced allocations as an important ingredient.
The Balanced Allocations paper and the follow-up work on the power of two choices are elegant theoretical results, and their content had, and will surely continue to have, a demonstrable effect on the practice of computing.
The ACM Paris Kanellakis Theory and Practice Award honors specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing. This award is accompanied by a prize of $10,000 and is endowed by contributions from the Kanellakis family, with additional financial support provided by ACM's Special Interest Groups on Algorithms and Computation Theory (SIGACT), Design Automation (SIGDA), Management of Data (SIGMOD), and Programming Languages (SIGPLAN), the ACM SIG Projects Fund, and individual contributions.
Hector Levesque and Moshe Vardi receive the ACM-AAAI Allen Newell Award.
Hector Levesque, University of Toronto, is recognized for fundamental contributions to knowledge representation and reasoning, and their broader influence within theoretical computer science, databases, robotics, and the study of Boolean satisfiability.
Levesque is recognized for his outstanding contributions to the broad core of logic-inspired artificial intelligence and the impact they have had across multiple sub-disciplines within computer science. With collaborators, he has made fundamental contributions to cognitive robotics, multi-agent systems, theoretical computer science, and database systems, as well as in philosophy and cognitive psychology. These have inspired applications such as the semantic web and automated verification. He is internationally recognized as one of the deepest and most original thinkers within AI and a researcher who has advanced the flame that AI pioneer Alan Newell lit.
On the representation side, Levesque has worked on the formalization of several concepts pertaining to artificial and natural agents including belief, goals, intentions, ability, and the interaction between knowledge, perception and action.
On the reasoning side, his research has focused on how automated reasoning can be kept computationally tractable, including the use of greedy local search methods. He is recognized for his fundamental contributions to the development of several new fields of research including the fields of description logic, the study of tractability in knowledge representation, the study of intention and teamwork, the hardness of satisfiability problems, and cognitive robotics. Levesque has also made fundamental contributions to the development of the systematic use of beliefs, desires, and intentions in the development of intelligent software, where his formalization of many aspects of intention and teamwork has shaped the entire approach to the use of these terms and the design of intelligent agents.
Moshe Vardi, Rice University, is cited for contributions to the development of logic as a unifying foundational framework and a tool for modeling computational systems.
Vardi has made major contributions to a wide variety of fields, including database theory, program verification, finite-model theory, reasoning about knowledge, and constraint satisfaction. He is perhaps the most influential researcher working at the interface of logic and computer science, building bridges between communities in computer science and beyond. With his collaborators he has made fundamental contributions to major research areas, including: 1) investigation of the logical theory of databases, where his focus on the trade-off between expressiveness and computational complexity laid the foundations for work on integrity constraints, complexity of query evaluation, incomplete information, database updates, and logic programming in databases; 2) the automata-theoretic approach to reactive systems, which laid mathematical foundations for verifying that a program meets its specifications, and 3) reasoning about knowledge through his development of epistemic logic.
In database theory, Vardi developed a theory of general data dependencies, finding axiomatizations and resolving their decision problem; introduced two basic notions for measuring the complexity of query-evaluation algorithms (data complexity and query complexity), which soon became standard in the field; created a logical theory of data updates; and characterized the expressive power of query languages and related them to complexity classes.
In software and hardware verification, Vardi introduced an automata-theoretic approach to the verification of reactive systems that revolutionized the field, using automata on infinite strings and trees to represent both the system being analyzed and undesirable computations of the system. Vardi's automata-theoretic approach has played a central role over the last 30 years of research in the field and in the development of verification tools.
In knowledge theory, Vardi developed rigorous foundations for reasoning about the knowledge of multi-agent and distributed systems, a problem of central importance in many disciplines; his co-authored book on the subject is the definitive source for this field.
The ACM-AAAI Allen Newell Award is presented to an individual selected for career contributions that have breadth within computer science, or that bridge computer science and other disciplines. The Newell award is accompanied by a prize of $10,000, provided by ACM and the Association for the Advancement of Artificial Intelligence (AAAI), and by individual contributions.
ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.
New York, NY, May 26, 2021 - ACM, the Association for Computing Machinery, today announced the recipients of four prestigious technical awards. These leaders were selected by their peers for making contributions that extend the boundaries of research, advance industry, and lay the foundation for technologies that transform society.
Shyamnath Gollakota, University of Washington, is the recipient of the 2020 ACM Grace Murray Hopper Award for contributions to the use of wireless signals in creating novel applications, including battery-free communications, health monitoring, gesture recognition, and bio-based wireless sensing. His work has revolutionized and reimagined what can be done using wireless systems, evoking technologies depicted in science fiction novels.
Gollakota defined the technology referred to today as ambient backscatter - a mechanism by which an unpowered, battery-less device can harvest existing wireless signals (such as broadcast TV or WiFi) in the environment for energy and use it to transmit encoded data. In addition, he has developed techniques that can use sonar signals from smartphones to support numerous healthcare applications. Examples include detection and diagnosis of breathing anomalies such as apnea, detection of ear infections, and even detection of life-threatening opioid overdoses. These innovations have the potential to transform the way healthcare systems will be designed and delivered in the future, and some of these efforts are now being commercialized for real-world use.
Gollakota also opened up a new field of extremely lightweight mobile sensors and controllers attached to insects, demonstrating how wireless technology can stream video data from the backs of tiny insects. Some observers believe this could be a first step to creating an internet of biological things, in which insects are employed as delivery vehicles for mobile sensors.
The ACM Grace Murray Hopper Award is given to the outstanding young computer professional of the year, selected on the basis of a single recent major technical or service contribution. This award is accompanied by a prize of $35,000. The candidate must have been 35 years of age or less at the time the qualifying contribution was made. Financial support for this award is provided by Microsoft.
Margo Seltzer, University of British Columbia; Mike Olson, formerly of Cloudera; and Keith Bostic, MongoDB, receive the ACM Software System Award for Berkeley DB, which was an early exemplar of the NoSQL movement and pioneered the "dual-license" approach to software licensing.
Since 1991, Berkeley DB has been a pervasive force underlying the modern internet: it is a part of nearly every POSIX or POSIX-like system, as well as the GNU standard C library (glibc) and many higher-level scripting languages. Berkeley DB was the transactional key/value store for a range of first- and second-generation internet services, including account management, mail and identity servers, online trading platforms and many other software-as-a-service platforms.
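At its core, that role rests on the simple, dict-like key/value interface such stores expose: put a value under a key, get it back by key. As a rough illustration of the pattern only - using Python's standard dbm module as a stand-in, not Berkeley DB's own API, with an invented file name and record - a minimal sketch might look like this:

```python
# Illustrative key/value usage via Python's standard dbm module (a stand-in,
# not Berkeley DB's own API). Keys and values are stored as bytes on disk.
import dbm

with dbm.open("accounts.db", "c") as db:   # "c": create the file if it does not exist
    db[b"user:42"] = b'{"name": "alice"}'  # put a record under a key
    print(db[b"user:42"].decode())         # get it back by key
    print(b"user:42" in db)                # membership test, like a dict
```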
As an open source package, Berkeley DB is an invaluable teaching tool, allowing students to see under the hood of a tool that they have grown familiar with by use. The code is clean, well structured, and well documented - it had to be, as it was meant to be consumed and used by an unlimited number of software developers.
As originally created by Seltzer, Olson and Bostic, Berkeley DB was distributed as part of the University of California's Fourth Berkeley Software Distribution. Seltzer and Bostic subsequently founded Sleepycat Software in 1996 to continue development of Berkeley DB and provide commercial support. Olson joined in 1997, and for 10 years, Berkeley DB was the de facto data store for major web infrastructure. As the first production-quality commercial key/value store, it helped launch the NoSQL movement; as the engine behind Amazon's Dynamo and the University of Michigan's SLAPD server, Berkeley DB helped move non-relational databases into the public eye.
Sleepycat Software pioneered the "dual-license" model of software licensing: use and redistribution in Open Source applications was always free, and companies could choose a commercial license for support or to distribute Berkeley DB as part of proprietary packages. This model pointed the way for a number of other open source companies, and this innovation has been widely adopted in open source communities. The open source Berkeley DB release includes all the features of the complete commercial version, and developers building prototypes with open source releases suffer no delay when transitioning to a proprietary product that embeds Berkeley DB.
In summary, Berkeley DB has been one of the most useful, powerful, reliable, and long-lived software packages. The longevity of Berkeley DB's contribution is particularly impressive in an industry with frequent software system turnover.
The ACM Software System Award is presented to an institution or individual(s) recognized for developing a software system that has had a lasting influence, reflected in contributions to concepts, in commercial acceptance, or both. The Software System Award carries a prize of $35,000. Financial support for the Software System Award is provided by IBM.
Yossi Azar, Tel Aviv University; Andrei Broder, Google Research; Anna Karlin, University of Washington; Michael Mitzenmacher, Harvard University; and Eli Upfal, Brown University, receive the ACM Paris Kanellakis Theory and Practice Award for the discovery and analysis of balanced allocations, known as the power of two choices, and their extensive applications to practice.
Azar, Broder, Karlin, Mitzenmacher and Upfal introduced the Balanced Allocations framework, also known as the power of two choices paradigm, an elegant theoretical work that had a widespread practical impact.
When n balls are thrown into n bins chosen uniformly at random, it is known that with high probability, the maximum load on any bin is bounded by (lg n / lg lg n)(1 + o(1)). Azar, Broder, Karlin, and Upfal (STOC 1994) proved that adding a little bit of choice makes a big difference. When throwing each ball, instead of choosing one bin at random, choose two bins at random, and then place the ball in the bin with the lesser load. This minor change brings on an exponential improvement; now with high probability, the maximal load in any bin is bounded by (lg lg n / lg 2) + O(1).
In the same work, they have shown that, if each ball has d choices, then the maximum load drops with high probability to (ln ln n / ln d) + O(1). These results were greatly extended by Mitzenmacher in his 1996 PhD dissertation, where he removed the sequential setting and developed a framework for using the power of two choices in queueing systems.
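To make the rule concrete, a minimal simulation sketch (illustrative code, not the authors'; the value of n and the random seed are arbitrary) contrasts one random choice per ball with the two-choice rule:

```python
# Balanced allocations demo: throw n balls into n bins, placing each ball in
# the least-loaded of `choices` bins sampled uniformly at random, then report
# the maximum bin load. With choices=1 the max load grows like lg n / lg lg n;
# with choices=2 it drops to roughly lg lg n, an exponential improvement.
import random

def max_load(n, choices, rng):
    bins = [0] * n
    for _ in range(n):
        candidates = [rng.randrange(n) for _ in range(choices)]
        best = min(candidates, key=lambda b: bins[b])  # pick the lesser-loaded bin
        bins[best] += 1
    return max(bins)

rng = random.Random(0)
n = 100_000
print("one choice :", max_load(n, 1, rng))  # typically around 8 for this n
print("two choices:", max_load(n, 2, rng))  # typically 4 or 5 for this n
```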
Since bins and balls are the basic model for analyzing data structures, such as hashing, or processes like load balancing of jobs in servers, it is not surprising that the power of two choices, which requires only a local decision rather than global coordination, has led to a wide range of practical applications. These include Google's web index, Akamai's overlay routing network, and highly reliable distributed data storage systems used by Microsoft and Dropbox, which are all based on variants of the power of two choices paradigm. There are many other software systems that use balanced allocations as an important ingredient.
The Balanced Allocations paper and the follow-up work on the power of two choices are elegant theoretical results, and their content had, and will surely continue to have, a demonstrable effect on the practice of computing.
The ACM Paris Kanellakis Theory and Practice Award honors specific theoretical accomplishments that have had a significant and demonstrable effect on the practice of computing. This award is accompanied by a prize of $10,000 and is endowed by contributions from the Kanellakis family, with additional financial support provided by ACM's Special Interest Groups on Algorithms and Computation Theory (SIGACT), Design Automation (SIGDA), Management of Data (SIGMOD), and Programming Languages (SIGPLAN), the ACM SIG Projects Fund, and individual contributions.
Hector Levesque and Moshe Vardi receive the ACM-AAAI Allen Newell Award.
Hector Levesque, University of Toronto, is recognized for fundamental contributions to knowledge representation and reasoning, and their broader influence within theoretical computer science, databases, robotics, and the study of Boolean satisfiability.
Levesque is recognized for his outstanding contributions to the broad core of logic-inspired artificial intelligence and the impact they have had across multiple sub-disciplines within computer science. With collaborators, he has made fundamental contributions to cognitive robotics, multi-agent systems, theoretical computer science, and database systems, as well as in philosophy and cognitive psychology. These have inspired applications such as the semantic web and automated verification. He is internationally recognized as one of the deepest and most original thinkers within AI and a researcher who has advanced the flame that AI pioneer Allen Newell lit.
On the representation side, Levesque has worked on the formalization of several concepts pertaining to artificial and natural agents including belief, goals, intentions, ability, and the interaction between knowledge, perception and action.
On the reasoning side, his research has focused on how automated reasoning can be kept computationally tractable, including the use of greedy local search methods. He is recognized for his fundamental contributions to the development of several new fields of research, including description logic, the study of tractability in knowledge representation, the study of intention and teamwork, the hardness of satisfiability problems, and cognitive robotics. Levesque has also made fundamental contributions to the systematic use of beliefs, desires, and intentions in building intelligent software, where his formalization of many aspects of intention and teamwork has shaped the entire approach to the use of these terms and the design of intelligent agents.
Moshe Vardi, Rice University, is cited for contributions to the development of logic as a unifying foundational framework and a tool for modeling computational systems.
Vardi has made major contributions to a wide variety of fields, including database theory, program verification, finite-model theory, reasoning about knowledge, and constraint satisfaction. He is perhaps the most influential researcher working at the interface of logic and computer science, building bridges between communities in computer science and beyond. With his collaborators he has made fundamental contributions to major research areas, including: 1) investigation of the logical theory of databases, where his focus on the trade-off between expressiveness and computational complexity laid the foundations for work on integrity constraints, complexity of query evaluation, incomplete information, database updates, and logic programming in databases; 2) the automata-theoretic approach to reactive systems, which laid mathematical foundations for verifying that a program meets its specifications, and 3) reasoning about knowledge through his development of epistemic logic.
In database theory, Vardi developed a theory of general data dependencies, finding axiomatizations and resolving their decision problem; introduced two basic notions for measuring the complexity of algorithms for evaluating queries - data-complexity and query-complexity - which soon became standard in the field; created a logical theory of data updates; and characterized the expressive power of query languages and related them to complexity classes.
In software and hardware verification, Vardi introduced an automata-theoretic approach to the verification of reactive systems that revolutionized the field, using automata on infinite strings and trees to represent both the system being analyzed and undesirable computations of the system. Vardi's automata-theoretic approach has played a central role over the last 30 years of research in the field and in the development of verification tools.
In knowledge theory, Vardi developed rigorous foundations for reasoning about the knowledge of multi-agent and distributed systems, a problem of central importance in many disciplines; his co-authored book on the subject is the definitive source for this field.
The ACM-AAAI Allen Newell Award is presented to an individual selected for career contributions that have breadth within computer science, or that bridge computer science and other disciplines. The Newell award is accompanied by a prize of $10,000, provided by ACM and the Association for the Advancement of Artificial Intelligence (AAAI), and by individual contributions.
ACM, the Association for Computing Machinery, is the world's largest educational and scientific computing society, uniting educators, researchers and professionals to inspire dialogue, share resources and address the field's challenges. ACM strengthens the computing profession's collective voice through strong leadership, promotion of the highest standards, and recognition of technical excellence. ACM supports the professional growth of its members by providing opportunities for life-long learning, career development, and professional networking.
|||
351 | Jan V. Vandenberg, Computer Scientist and Key Collaborator with Galaxy Zoo, Dies | Johns Hopkins University (JHU) computer scientist Jan V. Vandenberg has died of colon cancer at 48. At JHU's Institute for Data Intensive Engineering and Science, Vandenberg served as chief systems architect, designing and building pioneering computer systems for data-intensive science. Vandenberg also was a key collaborator on the joint U.S.-U.K. online Galaxy Zoo project, which offered access to astronomy images from the Sloan Digital Sky Survey (SDSS) to volunteer citizen scientists worldwide. As a member of the team, Vandenberg shared this year's ACM Special Interest Group on Management of Data (SIGMOD) Award with his SDSS colleagues. | [] | [] | [] | scitechnews | None | None | None | None | Johns Hopkins University (JHU) computer scientist Jan V. Vandenberg has died of colon cancer at 48. At JHU's Institute for Data Intensive Engineering and Science, Vandenberg served as chief systems architect, designing and building pioneering computer systems for data-intensive science. Vandenberg also was a key collaborator on the joint U.S.-U.K. online Galaxy Zoo project, which offered access to astronomy images from the Sloan Digital Sky Survey (SDSS) to volunteer citizen scientists worldwide. As a member of the team, Vandenberg shared this year's ACM Special Interest Group on Management of Data (SIGMOD) Award with his SDSS colleagues.
|
||||
352 | Going to the Moon via the Cloud | Researchers, scientists and engineers can use any desktop computer and browser to easily access supercomputing through cloud services where resources are on-demand and billed by consumption. As demand for computing resources continues to grow, cloud services are growing in popularity among research and development groups and applied science fields because of their accessibility, flexibility and minimal upfront time and cost investments.
A single high-performance-computing workload to optimize an aircraft wing design can cost $20,000, while machine learning workloads used in the earlier phases of development can easily be far more expensive. Firefly says it typically spends thousands to tens of thousands of dollars an hour on its computations - still far less than the cost of building and maintaining a high-performance computer. | The wide availability of high-performance computing accessed through the cloud is fostering creativity worldwide, allowing the Firefly Aerospace startup, for example, to build a rocket for lunar flights using high-performance computing simulations. Although the latest supercomputers can run 1 quadrillion calculations per second, they are prohibitively expensive and have huge space and power needs; less powerful but more nimble networked computer clusters can nearly equal supercomputers' capabilities. Moreover, most cloud computing firms supply access to high-performance computing hardware with more versatility than supercomputers. High-performance cloud computing company Rescale estimates roughly 12% of such computing is currently cloud-based, but that number - approximately $5.3 billion - is expanding 25% annually. Cloud services are growing increasingly popular among research and development groups and applied science fields, amid spiking demand for computing resources. | [] | [] | [] | scitechnews | None | None | None | None | The wide availability of high-performance computing accessed through the cloud is fostering creativity worldwide, allowing the Firefly Aerospace startup, for example, to build a rocket for lunar flights using high-performance computing simulations. Although the latest supercomputers can run 1 quadrillion calculations per second, they are prohibitively expensive and have huge space and power needs; less powerful but more nimble networked computer clusters can nearly equal supercomputers' capabilities. Moreover, most cloud computing firms supply access to high-performance computing hardware with more versatility than supercomputers. High-performance cloud computing company Rescale estimates roughly 12% of such computing is currently cloud-based, but that number - approximately $5.3 billion - is expanding 25% annually. Cloud services are growing increasingly popular among research and development groups and applied science fields, amid spiking demand for computing resources.
Researchers, scientists and engineers can use any desktop computer and browser to easily access supercomputing through cloud services where resources are on-demand and billed by consumption. As demand for computing resources continues to grow, cloud services are growing in popularity among research and development groups and applied science fields because of their accessibility, flexibility and minimal upfront time and cost investments.
A single high-performance-computing workload to optimize an aircraft wing design can cost $20,000, while machine learning workloads used in the earlier phases of development can easily be far more expensive. Firefly says it typically spends thousands to tens of thousands of dollars an hour on its computations - still far less than the cost of building and maintaining a high-performance computer. |
|||
353 | Warehouses Look to Robots to Fill Labor Gaps, Speed Deliveries | The push toward automation comes as businesses say they can't hire warehouse workers fast enough to meet surging online demand for everything from furniture to frozen food in pandemic-disrupted supply chains. The crunch is accelerating the adoption of robots and other technology in a sector that still largely relies on workers pulling carts.
"This is not about taking over your job, it's about taking care of those jobs we can't fill," said
Kristi Montgomery,
vice president of innovation, research and development for Kenco Logistics Services LLC, a third-party logistics provider based in Chattanooga, Tenn.
Kenco is rolling out a fleet of self-driving robots from Locus Robotics Corp. to bridge a labor gap by helping workers fill online orders at the company's largest e-commerce site, in Jeffersonville, Ind. The company is also testing autonomous tractors that tow carts loaded with pallets.
To save on labor and space at a distribution center for heating, ventilation and air-conditioning equipment, the company is installing an automated storage and retrieval system set to go online this fall that uses robots to fetch goods packed closely together in dense rows of stacks.
Kenco and France-based logistics provider Geodis SA are also testing remote-operated forklifts equipped with technology from startup Phantom Auto that drivers can operate remotely using real-time video and audio streams.
The technology allows operators to switch between vehicles in different locations depending on demand, opening up those jobs to workers in various regions. It could also let Kenco access untapped sections of the labor market, such as people who are physically disabled, Ms. Montgomery said.
Logistics-automation companies say demand for their technology has grown during the pandemic as companies look for ways to cope with big swings in volume when workers are scarce and social distancing requirements limit building occupancy.
"Robots are beginning to fill that void," said
Dwight Klappich,
a supply-chain research vice president at Gartner Inc. The technology-research firm forecasts that demand for robotic systems that deliver goods to human workers will quadruple through 2023.
"We have been benefiting from that significantly since the second half of last year," said
Jerome Dubois,
co-founder and co-chief executive of robotics provider 6 River Systems, which is owned by Shopify Inc. "The driver here is not to reduce costs, but simply to serve the customer's needs. They simply cannot hire."
The growth of e-commerce demand during the coronavirus pandemic added strains to what was already a tight labor market for logistics and distribution work.
U.S. warehousing and storage companies added nearly 168,000 jobs between April 2020 and April of this year, federal figures show, a rise of 13.6%. But sector payrolls contracted by 4,300 jobs from March to April, according to a preliminary report by the Labor Department.
Many logistics employers say they can't add enough staff to keep pace with strong demand as the U.S. economy emerges from the pandemic.
The staffing shortfall is driving up wages as logistics operators compete with heavyweights including Amazon.com Inc., which plans to hire another 75,000 warehouse workers this year. Logistics-staffing firm ProLogistix, which works with companies including Walmart Inc. and Target Corp., said its average starting pay for warehouse workers was $16.58 an hour in April, up 8.9% from the same month in 2020.
Users say mobile robots and other logistics technology can also boost output and efficiency, helping businesses handle sudden spikes in demand without investing millions of dollars in fixed infrastructure.
XPO Logistics Inc. said its use of robots in warehousing operations increased efficiency by as much as six times in some cases. The company plans to roughly double the number of robots in its warehouses this year.
Crocs Inc., whose foam-plastic footwear is riding a wave of resurgent popularity, set up a pop-up e-commerce fulfillment operation over last year's holiday sales season that used 83 mobile robots from 6 River Systems to assist 55 workers. Post-peak, the company now has 51 robots supporting 30 people. The robots have nearly tripled productivity, according to Crocs, which said the move to automate was driven largely by the rapid growth in demand.
Seattle-based sports gear and apparel retailer evo, which generates 70% of its business from online sales, had been considering automation before the pandemic made hiring even tougher. The company used Locus robots to support higher order volumes last year and added units during the 2020 peak, reducing congestion in the warehouse and taking the pressure off labor recruitment.
"Now, as we get ready for peak season, we won't be as challenged to find the same number of workers we would typically need to meet the seasonal volume demands," evo said in a statement.
Write to Jennifer Smith at jennifer.smith@wsj.com | Warehouses are deploying robots to offset staff shortages and deliver orders rapidly as online demand for products surges due to the pandemic. For example, third-party logistics provider Kenco Logistics Service is launching a fleet of self-driving robots from Locus Robotics to help employees fill online orders at the company's biggest e-commerce outlet in Indiana; Kenco also is testing autonomous tractors that tow pallet-loaded carts. Meanwhile, Kenco and French logistics provider Geodis are testing remote-operated forklifts featuring technology from startup Phantom Auto that drivers can operate remotely via real-time video and audio. Technology research firm Gartner predicts a quadrupling of demand for robotic delivery systems through 2023. Users say logistics technology, including mobile robots, can improve output and efficiency so businesses can accommodate spikes in demand without expensive investments in fixed infrastructure. | [] | [] | [] | scitechnews | None | None | None | None | Warehouses are deploying robots to offset staff shortages and deliver orders rapidly as online demand for products surges due to the pandemic. For example, third-party logistics provider Kenco Logistics Service is launching a fleet of self-driving robots from Locus Robotics to help employees fill online orders at the company's biggest e-commerce outlet in Indiana; Kenco also is testing autonomous tractors that tow pallet-loaded carts. Meanwhile, Kenco and French logistics provider Geodis are testing remote-operated forklifts featuring technology from startup Phantom Auto that drivers can operate remotely via real-time video and audio. Technology research firm Gartner predicts a quadrupling of demand for robotic delivery systems through 2023. Users say logistics technology, including mobile robots, can improve output and efficiency so businesses can accommodate spikes in demand without expensive investments in fixed infrastructure.
The push toward automation comes as businesses say they can't hire warehouse workers fast enough to meet surging online demand for everything from furniture to frozen food in pandemic-disrupted supply chains. The crunch is accelerating the adoption of robots and other technology in a sector that still largely relies on workers pulling carts.
"This is not about taking over your job, it's about taking care of those jobs we can't fill," said
Kristi Montgomery,
vice president of innovation, research and development for Kenco Logistics Services LLC, a third-party logistics provider based in Chattanooga, Tenn.
Kenco is rolling out a fleet of self-driving robots from Locus Robotics Corp. to bridge a labor gap by helping workers fill online orders at the company's largest e-commerce site, in Jeffersonville, Ind. The company is also testing autonomous tractors that tow carts loaded with pallets.
To save on labor and space at a distribution center for heating, ventilation and air-conditioning equipment, the company is installing an automated storage and retrieval system set to go online this fall that uses robots to fetch goods packed closely together in dense rows of stacks.
Kenco and France-based logistics provider Geodis SA are also testing remote-operated forklifts equipped with technology from startup Phantom Auto that drivers can operate remotely using real-time video and audio streams.
The technology allows operators to switch between vehicles in different locations depending on demand, opening up those jobs to workers in various regions. It could also let Kenco access untapped sections of the labor market, such as people who are physically disabled, Ms. Montgomery said.
Logistics-automation companies say demand for their technology has grown during the pandemic as companies look for ways to cope with big swings in volume when workers are scarce and social distancing requirements limit building occupancy.
"Robots are beginning to fill that void," said
Dwight Klappich,
a supply-chain research vice president at Gartner Inc. The technology-research firm forecasts that demand for robotic systems that deliver goods to human workers will quadruple through 2023.
"We have been benefiting from that significantly since the second half of last year," said
Jerome Dubois,
co-founder and co-chief executive of robotics provider 6 River Systems, which is owned by Shopify Inc. "The driver here is not to reduce costs, but simply to serve the customer's needs. They simply cannot hire."
The growth of e-commerce demand during the coronavirus pandemic added strains to what was already a tight labor market for logistics and distribution work.
U.S. warehousing and storage companies added nearly 168,000 jobs between April 2020 and April of this year, federal figures show, a rise of 13.6%. But sector payrolls contracted by 4,300 jobs from March to April, according to a preliminary report by the Labor Department.
Many logistics employers say they can't add enough staff to keep pace with strong demand as the U.S. economy emerges from the pandemic.
The staffing shortfall is driving up wages as logistics operators compete with heavyweights including Amazon.com Inc., which plans to hire another 75,000 warehouse workers this year. Logistics-staffing firm ProLogistix, which works with companies including Walmart Inc. and Target Corp., said its average starting pay for warehouse workers was $16.58 an hour in April, up 8.9% from the same month in 2020.
Users say mobile robots and other logistics technology can also boost output and efficiency, helping businesses handle sudden spikes in demand without investing millions of dollars in fixed infrastructure.
XPO Logistics Inc. said its use of robots in warehousing operations increased efficiency by as much as six times in some cases. The company plans to roughly double the number of robots in its warehouses this year.
Crocs Inc., whose foam-plastic footwear is riding a wave of resurgent popularity, set up a pop-up e-commerce fulfillment operation over last year's holiday sales season that used 83 mobile robots from 6 River Systems to assist 55 workers. Post-peak, the company now has 51 robots supporting 30 people. The robots have nearly tripled productivity, according to Crocs, which said the move to automate was driven largely by the rapid growth in demand.
Seattle-based sports gear and apparel retailer evo, which generates 70% of its business from online sales, had been considering automation before the pandemic made hiring even tougher. The company used Locus robots to support higher order volumes last year and added units during the 2020 peak, reducing congestion in the warehouse and taking the pressure off labor recruitment.
"Now, as we get ready for peak season, we won't be as challenged to find the same number of workers we would typically need to meet the seasonal volume demands," evo said in a statement.
Write to Jennifer Smith at jennifer.smith@wsj.com |
|||
354 | The Army's Latest Night-Vision Tech Looks Like Something Out of a Video Game | The U.S. Army is training on a new night-vision system that generates a videogame-like view of a scene. The helmet-mounted Enhanced Night Vision Goggle-Binocular device features thermal imaging and augmented reality (AR) capabilities, incorporating smartphone and gaming-systems technology into traditional night-vision hardware. The goggles have a mode that outlines objects with glowing white light, and an AR overlay that can display maps and navigation data. The system, which also features an intensity tool that allows it to be used in daylight, is a result of years of work to modernize the tools used by the military. | [] | [] | [] | scitechnews | None | None | None | None | The U.S. Army is training on a new night-vision system that generates a videogame-like view of a scene. The helmet-mounted Enhanced Night Vision Goggle-Binocular device features thermal imaging and augmented reality (AR) capabilities, incorporating smartphone and gaming-systems technology into traditional night-vision hardware. The goggles have a mode that outlines objects with glowing white light, and an AR overlay that can display maps and navigation data. The system, which also features an intensity tool that allows it to be used in daylight, is a result of years of work to modernize the tools used by the military.
|
||||
356 | Drones, Sensors Could Spot Fires Before They Go Wild | The speed at which a wildfire can rip through an area and wreak havoc is nothing short of awe-inspiring and terrifying. Early detection of these events is critical for fire management efforts, whether that involves calling in firefighters or evacuating nearby communities.
Currently, early fire detection in remote areas is typically done by satellite - but this approach can be hindered by cloud cover. What's more, even the most advanced satellite systems detect fires only once the burning area reaches an average size of 18.4 km² (7.1 square miles).
To detect wildfires earlier on, some researchers are proposing a novel solution that harnesses a network of Internet of Things (IoT) sensors and a fleet of drones, or unmanned aerial vehicles (UAVs). The researchers tested their approach through simulations, described in a study published May 5 in IEEE Internet of Things Journal, finding that it can detect fires that are just 2.5 km² (just under one square mile) in size with near-perfect accuracy.
Their idea is timely, as climate change is driving an increase in wildfires around many regions of the world, as seen recently in California and Australia.
"In the last few years, the number, frequency, and severity of wildfires have increased dramatically worldwide, significantly impacting countries' economies, ecosystems, and communities. Wildfire management presents a significant challenge in which early fire detection is key," emphasizes Osama Bushnaq, a senior researcher at the Autonomous Robotics Research Center of the Technology Innovation Institute in Abu Dhabi, who was involved in the study.
The approach that Bushnaq and his colleagues are proposing involves a network of IoT sensors scattered throughout regions of concern, such as a national park or forests situated near communities. If a fire ignites, IoT devices deployed in the area will detect it and wait until a patrolling UAV is within transmission range to report their measurements. If a UAV receives multiple positive detections by the IoT devices, it will notify the nearby firefighting department that a wildfire has been verified.
The researchers evaluated a number of different UAVs and IoT sensors based on cost and features to determine the optimal combinations. Next, they tested their UAV-IoT approach through simulations, whereby 420 IoT sensors were deployed per square kilometer of simulated forest and 18 UAVs patrolled a forest with an area of 400 square kilometers. The system could detect fires covering 2.5 km² with greater than 99 percent accuracy. For smaller fires covering 0.5 km², the approach yielded 69 percent accuracy.
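The confirmation step at the heart of this scheme is easy to sketch. The toy code below is an illustration only, not the paper's simulation: the sensor layout, burning patch, transmission range, and the threshold of three positive reports are all invented for the example.

```python
# Toy sketch of UAV-IoT wildfire confirmation: a patrolling UAV gathers the
# readings of IoT sensors within its radio range and confirms a fire only
# when several of them report positive detections.
import random
from dataclasses import dataclass

@dataclass
class SensorReading:
    x: float             # sensor position (km)
    y: float
    fire_detected: bool  # the sensor's local reading

def uav_confirms_fire(readings, uav_x, uav_y, radio_range, min_positives=3):
    in_range = [r for r in readings
                if (r.x - uav_x) ** 2 + (r.y - uav_y) ** 2 <= radio_range ** 2]
    return sum(r.fire_detected for r in in_range) >= min_positives

# 420 sensors scattered over a 1 km x 1 km patch, with a burning area around (0.2, 0.3).
rng = random.Random(1)
readings = []
for _ in range(420):
    x, y = rng.random(), rng.random()
    burning = (x - 0.2) ** 2 + (y - 0.3) ** 2 < 0.1 ** 2  # inside the burning patch?
    readings.append(SensorReading(x, y, burning))
print(uav_confirms_fire(readings, 0.2, 0.3, radio_range=0.15))  # expected: True near the fire
```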
These results suggest that, if an optimal number of UAVs and IoT devices are present, wildfires can be detected in a much shorter time than with satellite imaging. But Bushnaq acknowledges that this approach has its limitations. "UAV-IoT networks can only cover relatively smaller areas," he explains. "Therefore, the UAV-IoT network would be particularly suitable for wildfire detection at high-risk regions."
For these reasons, the researchers are proposing that the UAV-IoT approach be used alongside satellite imaging, which can cover vast areas but with lower wildfire detection speed and reliability.
Moving forward, the team plans to explore ways of further improving upon this approach, for example by optimizing the trajectory of the UAVs or addressing issues related to the battery life of UAVs.
Bushnaq envisions such UAV-IoT systems having much broader applications, too. "Although the system is designed for wildfire detection, it can be used for monitoring different forest parameters, such as wind speed, moisture content, or temperature estimation," he says, noting that such a system could also be extended beyond the forest setting, for example by monitoring oil spills in bodies of water.
This article appears in the July 2021 print issue as "Unmanned Aerial Firespotters." | A research team led by the UAE's Technology Innovation Institute demonstrated that a network of Internet of Things (IoT) sensors and a fleet of unmanned aerial vehicles (UAVs) can detect wildfires faster than satellite systems. The team's novel approach involves placing IoT sensors across regions of concern, and when a fire ignites in their area, the sensors report their measurements to a patrolling UAV. The UAV notifies firefighting departments when multiple positive detections by are reported by the sensors. In simulations, the system detected fires covering an area of 2.5 square kilometers with more than 99% accuracy, and fires covering 0.5 square kilometers with 69% accuracy. | [] | [] | [] | scitechnews | None | None | None | None | A research team led by the UAE's Technology Innovation Institute demonstrated that a network of Internet of Things (IoT) sensors and a fleet of unmanned aerial vehicles (UAVs) can detect wildfires faster than satellite systems. The team's novel approach involves placing IoT sensors across regions of concern, and when a fire ignites in their area, the sensors report their measurements to a patrolling UAV. The UAV notifies firefighting departments when multiple positive detections by are reported by the sensors. In simulations, the system detected fires covering an area of 2.5 square kilometers with more than 99% accuracy, and fires covering 0.5 square kilometers with 69% accuracy.
The speed at which a wildfire can rip through an area and wreak havoc is nothing short of awe-inspiring and terrifying. Early detection of these events is critical for fire management efforts, whether that involves calling in firefighters or evacuating nearby communities.
Currently, early fire detection in remote areas is typically done by satellite - but this approach can be hindered by cloud cover. What's more, even the most advanced satellite systems detect fires only once the burning area reaches an average size of 18.4 km² (7.1 square miles).
To detect wildfires earlier on, some researchers are proposing a novel solution that harnesses a network of Internet of Things (IoT) sensors and a fleet of drones, or unmanned aerial vehicles (UAVs). The researchers tested their approach through simulations, described in a study published May 5 in IEEE Internet of Things Journal, finding that it can detect fires that are just 2.5 km² (just under one square mile) in size with near-perfect accuracy.
Their idea is timely, as climate change is driving an increase in wildfires around many regions of the world, as seen recently in California and Australia.
"In the last few years, the number, frequency, and severity of wildfires have increased dramatically worldwide, significantly impacting countries' economies, ecosystems, and communities. Wildfire management presents a significant challenge in which early fire detection is key," emphasizes Osama Bushnaq, a senior researcher at the Autonomous Robotics Research Center of the Technology Innovation Institute in Abu Dhabi, who was involved in the study.
The approach that Bushnaq and his colleagues are proposing involves a network of IoT sensors scattered throughout regions of concern, such as a national park or forests situated near communities. If a fire ignites, IoT devices deployed in the area will detect it and wait until a patrolling UAV is within transmission range to report their measurements. If a UAV receives multiple positive detections by the IoT devices, it will notify the nearby firefighting department that a wildfire has been verified.
The researchers evaluated a number of different UAVs and IoT sensors based on cost and features to determine the optimal combinations. Next, they tested their UAV-IoT approach through simulations, whereby 420 IoT sensors were deployed per square kilometer of simulated forest and 18 UAVs patrolled a forest with an area of 400 square kilometers. The system could detect fires covering 2.5 km² with greater than 99 percent accuracy. For smaller fires covering 0.5 km², the approach yielded 69 percent accuracy.
These results suggest that, if an optimal number of UAVs and IoT devices are present, wildfires can be detected in a much shorter time than with satellite imaging. But Bushnaq acknowledges that this approach has its limitations. "UAV-IoT networks can only cover relatively smaller areas," he explains. "Therefore, the UAV-IoT network would be particularly suitable for wildfire detection at high-risk regions."
For these reasons, the researchers are proposing that the UAV-IoT approach be used alongside satellite imaging, which can cover vast areas but with lower wildfire detection speed and reliability.
Moving forward, the team plans to explore ways of further improving upon this approach, for example by optimizing the trajectory of the UAVs or addressing issues related to the battery life of UAVs.
Bushnaq envisions such UAV-IoT systems having much broader applications, too. "Although the system is designed for wildfire detection, it can be used for monitoring different forest parameters, such as wind speed, moisture content, or temperature estimation," he says, noting that such a system could also be extended beyond the forest setting, for example by monitoring oil spills in bodies of water.
This article appears in the July 2021 print issue as "Unmanned Aerial Firespotters." |
|||
357 | Dutch Researchers Build Security Software to Mimic Human Immune System | As a research organisation, TNO is not the party bringing the software to the market commercially. The organisation has made the self-healing software available under an open source licence and hopes that organisations, like IT service providers, will use the possibilities of the software in their own security products.
"We try to inspire and hope that the market will then pick this up," said Gijsen.
Companies from outside the Netherlands are also invited to use the self-healing security software of TNO. | Researchers at Dutch research institute TNO, working with Dutch banks and insurers, have developed self-healing security software modeled after the human immune system. TNO's Bart Gijsen said the work yielded decentralized disposability for information technology; "TNO did this by building a system that is decentralized, repairs itself, and also recognizes the moment to do so." At the core of this regenerative technique is existing container software, which Gijsen said "already contains the option of restarting and renewing, but we have added functionality to our software that allows containers to renew themselves at pre-set intervals." That, said Gijsen, "ensures that a faster response is possible in the event of an attack. Moreover, it offers cybersecurity specialists the opportunity to focus on the cause instead of constantly putting out fires." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Dutch research institute TNO, working with Dutch banks and insurers, have developed self-healing security software modeled after the human immune system. TNO's Bart Gijsen said the work yielded decentralized disposability for information technology; "TNO did this by building a system that is decentralized, repairs itself, and also recognizes the moment to do so." At the core of this regenerative technique is existing container software, which Gijsen said "already contains the option of restarting and renewing, but we have added functionality to our software that allows containers to renew themselves at pre-set intervals." That, said Gijsen, "ensures that a faster response is possible in the event of an attack. Moreover, it offers cybersecurity specialists the opportunity to focus on the cause instead of constantly putting out fires."
As a research organisation, TNO is not the party bringing the software to the market commercially. The organisation has made the self-healing software available under an open source licence and hopes that organisations, like IT service providers, will use the possibilities of the software in their own security products.
"We try to inspire and hope that the market will then pick this up," said Gijsen.
Companies from outside the Netherlands are also invited to use the self-healing security software of TNO. |
|||
358 | Simple Diagnostic Tool Predicts Individual Risk of Alzheimer's | Approximately 20-30% of patients with Alzheimer's disease are wrongly diagnosed within specialist healthcare, and diagnostic work-up is even more difficult in primary care. Accuracy can be significantly improved by measuring the proteins tau and beta-amyloid via a spinal fluid sample, or PET scan. However, those methods are expensive and only available at a relatively few specialized memory clinics worldwide. Early and accurate diagnosis of AD is becoming even more important, as new drugs that slow down the progression of the disease will hopefully soon become available.
A research group led by Professor Oskar Hansson at Lund University have now shown that a combination of relatively easily acccessible tests can be used for early and reliable diagnosis of Alzheimer's disease. The study examined 340 patients with mild memory impairment in the Swedish BioFINDER Study, and the results were confirmed in a North American study of 543 people.
A combination of a simple blood test (measuring a variant of the tau protein and a risk gene for Alzheimer's) and three brief cognitive tests that take only 10 minutes to complete predicted with over 90% certainty which patients would develop Alzheimer's dementia within four years. This simple prognostic algorithm was significantly more accurate than the clinical predictions of the dementia experts who examined the patients but did not have access to expensive spinal fluid testing or PET scans, said Oskar Hansson.
"Our algorithm is based on a blood analysis of phosphylated tau and a risk gene for Alzheimer's, combined with testing of memory and executive function. We have now developed a prototype online tool to estimate the individual risk of a person with mild memory complaints developing Alzheimer's dementia within four years," explains Sebastian Palmqvist, first author of the study and associate professor at Lund University.
One clear advantage of the algorithm is that it has been developed for use in clinics without access to advanced diagnostic instruments. In the future, the algorithm might therefore make a major difference in the diagnosis of Alzheimer's within primary healthcare.
"The algorithm has currently only been tested on patients who have been examined in memory clinics. Our hope is that it will also be validated for use in primary healthcare as well as in developing countries with limited resources," says Sebastian Palmqvist.
Simple diagnostic tools for Alzheimer's could also improve the development of drugs, as it is difficult to recruit suitable study participants for drug trials in a time- and cost-effective manner.
"The algorithm will enable us to recruit people with Alzheimer's at an early stage, which is when new drugs have a better chance of slowing the course of the disease," concludes Professor Oskar Hansson. | Researchers at Sweden's Lund University have developed an algorithm that can predict an individual's risk for Alzheimer's disease (AD). The researchers combined data from a simple blood test that measures a phosphylated tau protein variant and a risk gene for Alzheimer's with data from three cognitive tests. The algorithm forecast with more than 90% confidence which patients would develop AD within four years. Lund's Oskar Hansson said the algorithm made more accurate predictions than dementia experts who examined the same patients but lacked access to spinal fluid testing or positron-emission tomography scans. Said Hansson, "The algorithm will enable us to recruit people with Alzheimer's at an early stage, which is when new drugs have a better chance of slowing the course of the disease." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Sweden's Lund University have developed an algorithm that can predict an individual's risk for Alzheimer's disease (AD). The researchers combined data from a simple blood test that measures a phosphylated tau protein variant and a risk gene for Alzheimer's with data from three cognitive tests. The algorithm forecast with more than 90% confidence which patients would develop AD within four years. Lund's Oskar Hansson said the algorithm made more accurate predictions than dementia experts who examined the same patients but lacked access to spinal fluid testing or positron-emission tomography scans. Said Hansson, "The algorithm will enable us to recruit people with Alzheimer's at an early stage, which is when new drugs have a better chance of slowing the course of the disease."
Approximately 20-30% of patients with Alzheimer's disease are wrongly diagnosed within specialist healthcare, and diagnostic work-up is even more difficult in primary care. Accuracy can be significantly improved by measuring the proteins tau and beta-amyloid via a spinal fluid sample or a PET scan. However, those methods are expensive and only available at relatively few specialized memory clinics worldwide. Early and accurate diagnosis of AD is becoming even more important, as new drugs that slow down the progression of the disease will hopefully soon become available.
A research group led by Professor Oskar Hansson at Lund University have now shown that a combination of relatively easily acccessible tests can be used for early and reliable diagnosis of Alzheimer's disease. The study examined 340 patients with mild memory impairment in the Swedish BioFINDER Study, and the results were confirmed in a North American study of 543 people.
A combination of a simple blood test (measuring a variant of the tau protein and a risk gene for Alzheimer's) and three brief cognitive tests that take only 10 minutes to complete predicted with over 90% certainty which patients would develop Alzheimer's dementia within four years. This simple prognostic algorithm was significantly more accurate than the clinical predictions of the dementia experts who examined the patients but did not have access to expensive spinal fluid testing or PET scans, said Oskar Hansson.
"Our algorithm is based on a blood analysis of phosphylated tau and a risk gene for Alzheimer's, combined with testing of memory and executive function. We have now developed a prototype online tool to estimate the individual risk of a person with mild memory complaints developing Alzheimer's dementia within four years," explains Sebastian Palmqvist, first author of the study and associate professor at Lund University.
One clear advantage of the algorithm is that it has been developed for use in clinics without access to advanced diagnostic instruments. In the future, the algorithm might therefore make a major difference in the diagnosis of Alzheimer's within primary healthcare.
"The algorithm has currently only been tested on patients who have been examined in memory clinics. Our hope is that it will also be validated for use in primary healthcare as well as in developing countries with limited resources," says Sebastian Palmqvist.
Simple diagnostic tools for Alzheimer's could also improve the development of drugs, as it is difficult to recruit suitable study participants for drug trials in a time- and cost-effective manner.
"The algorithm will enable us to recruit people with Alzheimer's at an early stage, which is when new drugs have a better chance of slowing the course of the disease," concludes Professor Oskar Hansson. |
|||
359 | Stanford Bioengineers Develop Algorithm to Compare Cells Across Species | Cells are the building blocks of life, present in every living organism. But how similar do you think your cells are to a mouse? A fish? A worm?
Comparing cell types in different species across the tree of life can help biologists understand how cell types arose and how they have adapted to the functional needs of different life forms. This has been of increasing interest to evolutionary biologists in recent years because new technology now allows sequencing and identifying all cells throughout whole organisms. "There's essentially a wave in the scientific community to classify all types of cells in a wide variety of different organisms," explained Bo Wang, an assistant professor of bioengineering at Stanford University.
In response to this opportunity, Wang's lab developed an algorithm to link similar cell types across evolutionary distances. Their method, detailed in a paper published May 4 in eLife, is designed to compare cell types in different species.
For their research, the team used seven species to compare 21 different pairings and were able to identify cell types present in all species along with their similarities and differences.
According to Alexander Tarashansky, a graduate student in bioengineering who works in Wang's laboratory, the idea to create the algorithm came when Wang walked into the lab one day and asked him if he could analyze cell-type datasets from two different worms the lab studies at the same time.
"I was struck by how stark the differences are between them," said Tarashansky, who was lead author of the paper and is a Stanford Bio-X Interdisciplinary Fellow . "We thought that they should have similar cell types, but when we try analyzing them using standard techniques, the method doesn't recognize them as being similar."
He wondered if it was a problem with the technique or if the cell types were just too different to match across species. Tarashansky then began working on the algorithm to better match cell types across species.
"Let's say I want to compare a sponge to a human," said Tarashansky. "It's really not clear which sponge gene corresponds to which human gene because as organisms evolve, genes duplicate, they change, they duplicate again. And so now you have one gene in the sponge that may be related to many genes in humans."
Instead of trying to find a one-to-one gene match like previous methods for data matching, the researchers' mapping method matches the one gene in the sponge to all potentially corresponding human genes. Then the algorithm proceeds to figure out which is the right one.
Tarashansky says trying to find only one-to-one gene pairs has limited scientists looking to map cell types in the past. "I think the main innovation here is that we account for features that have changed over the course of hundreds of millions of years of evolution for long-range comparisons."
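A stripped-down sketch of that idea - keep all the many-to-many homology candidates and let expression similarity decide afterwards - could look like the following. It is an illustration under stated assumptions, not the published method; the gene names and scores are made up.

```python
# Simplified many-to-many gene matching: each species-A gene keeps all of its
# candidate counterparts in species B, and the counterpart whose expression
# pattern agrees best (highest similarity score) is selected afterwards.
from collections import defaultdict

def best_counterparts(homology_pairs, similarity):
    candidates = defaultdict(list)
    for gene_a, gene_b in homology_pairs:  # one gene_a may pair with many gene_b
        candidates[gene_a].append(gene_b)
    return {a: max(bs, key=lambda b: similarity.get((a, b), 0.0))
            for a, bs in candidates.items()}

# Toy example: one sponge gene with three possible human counterparts.
pairs = [("spongeX", "humanA"), ("spongeX", "humanB"), ("spongeX", "humanC")]
scores = {("spongeX", "humanA"): 0.2, ("spongeX", "humanB"): 0.7, ("spongeX", "humanC"): 0.4}
print(best_counterparts(pairs, scores))  # -> {'spongeX': 'humanB'}
```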
"How can we use the ever-evolving genes to recognize the same cell type that are also constantly changing in different species?" Said Wang, who is senior author of the paper. "Evolution has been understood using genes and organismal traits, I think we are now at an exciting turning point to bridge the scales by looking at how cells evolve."
Using their mapping approach, the team discovered a number of conserved genes and cell type families across species.
Tarashansky said a highlight of the research was when they were comparing stem cells between two very different flatworms.
"The fact that we did find one-to-one matches in their stem cell populations was really exciting," he said. "I think that basically unlocked a lot of new and exciting information about how stem cells look inside a parasitic flatworm that infects hundreds of millions of people all over the world."
The results of the team's mapping also suggest there's a strong conservation of characteristics of neurons and muscle cells from very simple animal types, such as sponges, to more complex mammals like mice and humans.
"That really suggests those cell types arose very early on in animal evolution," Wang said.
Now that the team has built the tool for cell comparison, researchers can continue to collect data on a wide variety of species for analysis. As more datasets from more species are collected and compared, biologists will be able to trace the trajectory of cell types in different organisms and the ability to recognize novel cell types will improve.
"If you only have sponges and then worms and you're missing everything in between, it's hard to know how the sponge cell types evolved or how their ancestors have diversified into sponges and worms," said Tarashansky. "We want to fill in as many nodes along the tree of life as possible to be able to facilitate this type of evolutionary analysis and transfer of knowledge across species." | An algorithm designed by Stanford University bioengineers can compare cell types in different species. Stanford's Alexander Tarashansky said, "I think the main innovation here is that we account for features that have changed over the course of hundreds of millions of years of evolution for long-range comparisons." The researchers employed seven species to compare 21 different cell pairings; their technique unearthed an array of conserved genes and cell-type families across species. The researchers said that as the algorithm is used to collect and compare more datasets from more species, biologists will be able to trace the trajectory of cell types in different organisms, and will be better able to recognize novel cell types. | [] | [] | [] | scitechnews | None | None | None | None | An algorithm designed by Stanford University bioengineers can compare cell types in different species. Stanford's Alexander Tarashansky said, "I think the main innovation here is that we account for features that have changed over the course of hundreds of millions of years of evolution for long-range comparisons." The researchers employed seven species to compare 21 different cell pairings; their technique unearthed an array of conserved genes and cell-type families across species. The researchers said that as the algorithm is used to collect and compare more datasets from more species, biologists will be able to trace the trajectory of cell types in different organisms, and will be better able to recognize novel cell types.
Cells are the building blocks of life, present in every living organism. But how similar do you think your cells are to a mouse? A fish? A worm?
Comparing cell types in different species across the tree of life can help biologists understand how cell types arose and how they have adapted to the functional needs of different life forms. This has been of increasing interest to evolutionary biologists in recent years because new technology now allows sequencing and identifying all cells throughout whole organisms. "There's essentially a wave in the scientific community to classify all types of cells in a wide variety of different organisms," explained Bo Wang, an assistant professor of bioengineering at Stanford University.
In response to this opportunity, Wang's lab developed an algorithm to link similar cell types across evolutionary distances. Their method, detailed in a paper published May 4 in eLife, is designed to compare cell types in different species.
For their research, the team used seven species to compare 21 different pairings and were able to identify cell types present in all species along with their similarities and differences.
According to Alexander Tarashansky, a graduate student in bioengineering who works in Wang's laboratory, the idea to create the algorithm came when Wang walked into the lab one day and asked him if he could analyze cell-type datasets from two different worms the lab studies at the same time.
"I was struck by how stark the differences are between them," said Tarashansky, who was lead author of the paper and is a Stanford Bio-X Interdisciplinary Fellow . "We thought that they should have similar cell types, but when we try analyzing them using standard techniques, the method doesn't recognize them as being similar."
He wondered if it was a problem with the technique or if the cell types were just too different to match across species. Tarashansky then began working on the algorithm to better match cell types across species.
"Let's say I want to compare a sponge to a human," said Tarashansky. "It's really not clear which sponge gene corresponds to which human gene because as organisms evolve, genes duplicate, they change, they duplicate again. And so now you have one gene in the sponge that may be related to many genes in humans."
Instead of trying to find a one-to-one gene match like previous methods for data matching, the researchers' mapping method matches the one gene in the sponge to all potentially corresponding human genes. Then the algorithm proceeds to figure out which is the right one.
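To make that idea concrete, here is a minimal Python sketch of one-to-many matching. Everything in it - the gene names, homology scores, expression profiles and the correlation-based reweighting - is invented for illustration and stands in for, rather than reproduces, the team's published method:

```python
# Minimal sketch of one-to-many gene matching. The gene names, homology
# scores, expression profiles and the correlation-based reweighting are all
# invented for illustration; this is not the team's published implementation.
import numpy as np

# Candidate homologs: one sponge gene may correspond to several human genes,
# each with a sequence-similarity score from a homology search.
candidates = {
    "sponge_geneA": {"HUMAN_G1": 0.62, "HUMAN_G2": 0.58, "HUMAN_G3": 0.31},
}

# Toy expression levels for each gene measured across four cell types.
expression = {
    "sponge_geneA": np.array([9.0, 0.5, 0.2, 7.5]),
    "HUMAN_G1":     np.array([8.5, 0.7, 0.1, 7.0]),
    "HUMAN_G2":     np.array([0.3, 6.0, 5.5, 0.2]),
    "HUMAN_G3":     np.array([1.0, 1.2, 0.9, 1.1]),
}

def best_match(src_gene):
    """Weight each candidate by homology score times expression correlation,
    keeping the candidate that is both similar in sequence and used in the
    same cell types."""
    scores = {}
    for cand, homology in candidates[src_gene].items():
        corr = np.corrcoef(expression[src_gene], expression[cand])[0, 1]
        scores[cand] = homology * max(corr, 0.0)  # ignore anti-correlated pairs
    return max(scores, key=scores.get), scores

match, all_scores = best_match("sponge_geneA")
print(match)  # HUMAN_G1: similar sequence and similar usage across cell types
```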
Tarashansky says trying to find only one-to-one gene pairs has limited scientists looking to map cell types in the past. "I think the main innovation here is that we account for features that have changed over the course of hundreds of millions of years of evolution for long-range comparisons."
"How can we use the ever-evolving genes to recognize the same cell type that are also constantly changing in different species?" Said Wang, who is senior author of the paper. "Evolution has been understood using genes and organismal traits, I think we are now at an exciting turning point to bridge the scales by looking at how cells evolve."
Using their mapping approach, the team discovered a number of conserved genes and cell type families across species.
Tarashansky said a highlight of the research was when they were comparing stem cells between two very different flatworms.
"The fact that we did find one-to-one matches in their stem cell populations was really exciting," he said. "I think that basically unlocked a lot of new and exciting information about how stem cells look inside a parasitic flatworm that infects hundreds of millions of people all over the world."
The results of the team's mapping also suggest there's a strong conservation of characteristics of neurons and muscle cells from very simple animal types, such as sponges, to more complex mammals like mice and humans.
"That really suggests those cell types arose very early on in animal evolution," Wang said.
Now that the team has built the tool for cell comparison, researchers can continue to collect data on a wide variety of species for analysis. As more datasets from more species are collected and compared, biologists will be able to trace the trajectory of cell types in different organisms and the ability to recognize novel cell types will improve.
"If you only have sponges and then worms and you're missing everything in between, it's hard to know how the sponge cell types evolved or how their ancestors have diversified into sponges and worms," said Tarashansky. "We want to fill in as many nodes along the tree of life as possible to be able to facilitate this type of evolutionary analysis and transfer of knowledge across species." |
|||
361 | More Efficient LiDAR Sensing for Self-Driving Cars | If you see a self-driving car out in the wild, you might notice a giant spinning cylinder on top of its roof. That's a lidar sensor, and it works by sending out pulses of infrared light and measuring the time it takes for them to bounce off objects. This creates a map of 3D points that serve as a snapshot of the car's surroundings.
One downside of lidar is that its 3D data is immense and computationally intensive. A typical 64-channel sensor, for example, produces more than 2 million points per second. Due to the additional spatial dimension, state-of-the-art 3D models require 14x more computation at inference time than their 2D image counterparts. This means that, in order to navigate effectively, engineers typically have to collapse the data into 2D first - with the side effect of significant information loss.
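For readers curious what "collapsing the data into 2D" usually looks like, the short sketch below spherically projects a point cloud onto a 64-row range image. The field-of-view numbers are typical of a 64-channel sensor but are assumptions here, and this is not the MIT team's code:

```python
# Illustrative only: spherically project an N x 3 lidar point cloud onto a
# 64 x 1024 "range image" so that 2D networks can consume it.
import numpy as np

def to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                   # azimuth, -pi..pi
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1.0, 1.0))
    up, down = np.radians(fov_up), np.radians(fov_down)
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w    # column index
    v = np.clip((up - pitch) / (up - down) * h, 0, h - 1).astype(int)  # row index
    image = np.full((h, w), np.inf)
    np.minimum.at(image, (v, u), r)                          # keep nearest return per pixel
    return image

cloud = np.random.uniform(-40.0, 40.0, size=(120_000, 3))    # fake scan
print(to_range_image(cloud).shape)                           # (64, 1024)
```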
But a team from MIT has been working on a self-driving system that uses machine learning so that custom hand-tuning isn't needed. Their new end-to-end framework can navigate autonomously using only raw 3D point cloud data and low-resolution GPS maps, similar to those available on smartphones today.
End-to-end learning from raw lidar data is a computationally intensive process, since it involves giving the computer huge amounts of rich sensory information for learning how to steer. Because of this, the team had to design new deep learning components that leveraged modern GPU hardware more efficiently in order to control the vehicle in real time.
"We've optimized our solution from both algorithm and system perspectives, achieving a cumulative speedup of roughly 9x compared to existing 3D lidar approaches," says PhD student Zhijian Liu, who was the co-lead author on this paper alongside Alexander Amini.
In tests, the researchers showed that their system reduced how often a human driver had to take over control from the machine, and that it could even withstand severe sensor failures.
For example, picture yourself driving through a tunnel and then emerging into the sunlight - for a split-second, your eyes will likely have problems seeing because of the glare. A similar problem arises with the cameras in self-driving cars, as well as with the systems' lidar sensors when weather conditions are poor.
To handle this, the MIT team's system can estimate how certain it is about any given prediction, and can therefore give more or less weight to that prediction in making its decisions. (In the case of emerging from a tunnel, it would essentially disregard any prediction that should not be trusted due to inaccurate sensor data.)
The team calls their approach "hybrid evidential fusion," because it fuses the different control predictions together to arrive at its motion-planning choices.
"By fusing the control predictions according to the model's uncertainty, the system can adapt to unexpected events," says MIT professor Daniela Rus, one of the senior authors on the paper.
In many respects, the system itself is a fusion of three previous MIT projects:
"We've taken the benefits of a mapless driving approach and combined it with end-to-end machine learning so that we don't need expert programmers to tune the system by hand," says Amini.
As a next step, the team plans to continue to scale their system to increasing amounts of complexity in the real world, including adverse weather conditions and dynamic interaction with other vehicles.
Liu and Amini co-wrote the new paper with MIT professors Song Han and Daniela Rus. Their other co-authors include research assistant Sibo Zhu and associate professor Sertac Karaman. The paper will be presented later this month at the International Conference on Robotics and Automation (ICRA). | A new machine learning system for driverless cars uses an end-to-end mapless driving framework that taps raw Light Detection and Ranging (LiDAR) data for autonomous navigation. Researchers at the Massachusetts Institute of Technology (MIT) engineered new deep learning elements which harnessed modern global positioning system hardware more efficiently to enable real-time vehicle control. MIT's Zhijian Liu said, "We've optimized our solution from both algorithm and system perspectives, achieving a cumulative speedup of roughly 9x compared to existing [three-dimensional] LiDAR approaches." Tests demonstrated that the system reduced how frequently a human driver had to assume vehicle control, and was resilient against severe sensor malfunctions. | [] | [] | [] | scitechnews | None | None | None | None | A new machine learning system for driverless cars uses an end-to-end mapless driving framework that taps raw Light Detection and Ranging (LiDAR) data for autonomous navigation. Researchers at the Massachusetts Institute of Technology (MIT) engineered new deep learning elements which harnessed modern global positioning system hardware more efficiently to enable real-time vehicle control. MIT's Zhijian Liu said, "We've optimized our solution from both algorithm and system perspectives, achieving a cumulative speedup of roughly 9x compared to existing [three-dimensional] LiDAR approaches." Tests demonstrated that the system reduced how frequently a human driver had to assume vehicle control, and was resilient against severe sensor malfunctions.
If you see a self-driving car out in the wild, you might notice a giant spinning cylinder on top of its roof. That's a lidar sensor, and it works by sending out pulses of infrared light and measuring the time it takes for them to bounce off objects. This creates a map of 3D points that serve as a snapshot of the car's surroundings.
One downside of lidar is that its 3D data is immense and computationally intensive. A typical 64-channel sensor, for example, produces more than 2 million points per second. Due to the additional spatial dimension, state-of-the-art 3D models require 14x more computation at inference time than their 2D image counterparts. This means that, in order to navigate effectively, engineers typically have to collapse the data into 2D first - with the side effect of significant information loss.
But a team from MIT has been working on a self-driving system that uses machine learning so that custom hand-tuning isn't needed. Their new end-to-end framework can navigate autonomously using only raw 3D point cloud data and low-resolution GPS maps, similar to those available on smartphones today.
End-to-end learning from raw lidar data is a computationally intensive process, since it involves giving the computer huge amounts of rich sensory information for learning how to steer. Because of this, the team had to design new deep learning components that leveraged modern GPU hardware more efficiently in order to control the vehicle in real time.
"We've optimized our solution from both algorithm and system perspectives, achieving a cumulative speedup of roughly 9x compared to existing 3D lidar approaches," says PhD student Zhijian Liu, who was the co-lead author on this paper alongside Alexander Amini.
In tests, the researchers showed that their system reduced how often a human driver had to take over control from the machine, and that it could even withstand severe sensor failures.
For example, picture yourself driving through a tunnel and then emerging into the sunlight - for a split-second, your eyes will likely have problems seeing because of the glare. A similar problem arises with the cameras in self-driving cars, as well as with the systems' lidar sensors when weather conditions are poor.
To handle this, the MIT team's system can estimate how certain it is about any given prediction, and can therefore give more or less weight to that prediction in making its decisions. (In the case of emerging from a tunnel, it would essentially disregard any prediction that should not be trusted due to inaccurate sensor data.)
The team calls their approach "hybrid evidential fusion," because it fuses the different control predictions together to arrive at its motion-planning choices.
"By fusing the control predictions according to the model's uncertainty, the system can adapt to unexpected events," says MIT professor Daniela Rus, one of the senior authors on the paper.
In many respects, the system itself is a fusion of three previous MIT projects:
"We've taken the benefits of a mapless driving approach and combined it with end-to-end machine learning so that we don't need expert programmers to tune the system by hand," says Amini.
As a next step, the team plans to continue to scale their system to increasing amounts of complexity in the real world, including adverse weather conditions and dynamic interaction with other vehicles.
Liu and Amini co-wrote the new paper with MIT professors Song Han and Daniela Rus. Their other co-authors include research assistant Sibo Zhu and associate professor Sertac Karaman. The paper will be presented later this month at the International Conference on Robotics and Automation (ICRA). |
|||
362 | Researchers Study Phone Use Behavior, Geometrics on Urban, Rural Roads | Ding - a notification goes off on a cell phone. A driver looks down and their eyes briefly leave the road ahead and crash!! Phone use while driving is a significant source of distracted driving that leads to traffic accidents, which are considered preventable.
Researchers at Texas A&M University investigated the relationship between phone use behavior and road geometrics, determining that using a phone while driving is more than just a personal choice. The presence, alone or in combination, of a shoulder, a median, a higher speed limit and extra lanes could encourage more phone use while driving. The results also confirmed the correlation between the frequency of phone use and distracted crashes on urban roads.
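As a toy illustration of the kind of segment-level association being described (all numbers below are invented; the actual study joined phone-use events from a telematics provider with the Texas road inventory and state crash records):

```python
# Toy illustration only: the figures are invented and the aggregation is far
# simpler than the study's statistical modeling.
import numpy as np

# One row per road segment: has_shoulder, has_median, speed limit (mph), lanes.
segments = np.array([
    [1, 1, 70, 4],
    [1, 0, 65, 3],
    [1, 1, 60, 3],
    [0, 0, 45, 2],
    [0, 0, 35, 2],
])
phone_events_per_mile = np.array([12.4, 9.8, 8.7, 3.1, 2.2])
distracted_crashes = np.array([7, 4, 6, 2, 1])

has_shoulder = segments[:, 0] == 1
print("events per mile, shoulder vs. none:",
      round(phone_events_per_mile[has_shoulder].mean(), 1),
      round(phone_events_per_mile[~has_shoulder].mean(), 1))

r = np.corrcoef(phone_events_per_mile, distracted_crashes)[0, 1]
print(f"phone-use frequency vs. distracted crashes: r = {r:.2f}")
```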
This study could help transportation agencies identify roadway countermeasures to reduce distraction-related crashes, and it gives researchers a new perspective for studying phone-related behavior rather than focusing on drivers' personalities.
"This study finds patterns for where the locations are where phone use while driving behavior most occurs. These findings are unique and informative and have not been documented elsewhere yet," said Xiaoqiang "Jack" Kong, a doctoral student in the Zachry Department of Civil and Environmental Engineering and graduate research assistant at the Texas A&M Transportation Institute (TTI). "While I am driving, I always notice many drivers who are on their phones talking, texting or scrolling. There are many times the cars in front of my car didn't move after traffic lights turn green. It seems to happen to me every day. As a transportation Ph.D. student, I started to wonder how exactly this behavior could impact traffic safety."
The findings of this study were published in Accident Analysis & Prevention. The paper's authors also include Dr. Subasish Das, assistant research scientist in TTI's Roadway Safety Division; Dr. Hongmin "Tracy" Zhou, associate transportation researcher in TTI's Research and Implementation Division; and Dr. Yunglong Zhang, professor, associate department head of graduate programs in civil and environmental engineering.
Phone use while driving is a complex psychological behavior driven by many factors, including the driver's personality, environmental factors and roadway operational factors. | A study by Texas A&M University researchers found a relationship between drivers' phone use and road geometrics, the physical shapes of roads themselves. The researchers used an dataset from a private data service provider on phone use while driving, integrating all 'phone use while driving' events with the Texas road inventory and the distracted crash count on each road segment in the road inventory from the state crash database. The researchers found that some road geometrics give drivers a sense of security, and could encourage more distracted driving cases on both rural and urban roadways. Drivers also might feel safer or become less cautious on interstate highways, where drivers enter and exit without traffic lights. Texas A&M's Xiaoqiang Kong said the study identified "patterns for where the locations are where phone use while driving behavior most occurs." | [] | [] | [] | scitechnews | None | None | None | None | A study by Texas A&M University researchers found a relationship between drivers' phone use and road geometrics, the physical shapes of roads themselves. The researchers used an dataset from a private data service provider on phone use while driving, integrating all 'phone use while driving' events with the Texas road inventory and the distracted crash count on each road segment in the road inventory from the state crash database. The researchers found that some road geometrics give drivers a sense of security, and could encourage more distracted driving cases on both rural and urban roadways. Drivers also might feel safer or become less cautious on interstate highways, where drivers enter and exit without traffic lights. Texas A&M's Xiaoqiang Kong said the study identified "patterns for where the locations are where phone use while driving behavior most occurs."
Ding - a notification goes off on a cell phone. A driver looks down and their eyes briefly leave the road ahead and crash!! Phone use while driving is a significant source of distracted driving that leads to traffic accidents, which are considered preventable.
Researchers at Texas A&M University investigated the relationship between phone use behavior and road geometrics, determining that using a phone while driving is more than just a personal choice. The presence, alone or in combination, of a shoulder, a median, a higher speed limit and extra lanes could encourage more phone use while driving. The results also confirmed the correlation between the frequency of phone use and distracted crashes on urban roads.
This study could help transportation agencies identify roadway countermeasures to reduce distraction-related crashes, and it gives researchers a new perspective for studying phone-related behavior rather than focusing on drivers' personalities.
"This study finds patterns for where the locations are where phone use while driving behavior most occurs. These findings are unique and informative and have not been documented elsewhere yet," said Xiaoqiang "Jack" Kong, a doctoral student in the Zachry Department of Civil and Environmental Engineering and graduate research assistant at the Texas A&M Transportation Institute (TTI). "While I am driving, I always notice many drivers who are on their phones talking, texting or scrolling. There are many times the cars in front of my car didn't move after traffic lights turn green. It seems to happen to me every day. As a transportation Ph.D. student, I started to wonder how exactly this behavior could impact traffic safety."
The findings of this study were published in Accident Analysis & Prevention. The paper's authors also include Dr. Subasish Das, assistant research scientist in TTI's Roadway Safety Division; Dr. Hongmin "Tracy" Zhou, associate transportation researcher in TTI's Research and Implementation Division; and Dr. Yunglong Zhang, professor, associate department head of graduate programs in civil and environmental engineering.
Phone use while driving is a complex psychological behavior driven by many factors, including the driver's personality, environmental factors and roadway operational factors. |
|||
364 | This Robotic Extra Thumb Can Be Controlled by Moving Your Toes | By Chris Stokel-Walker
People equipped with an additional, robotic thumb learned to control it with their toes - but prolonged use may come at the cost of their brains being less certain about how their hands work.
Danielle Clode at University College London and her colleagues gave 36 people a prosthetic thumb that wrapped around their wrist and sat underneath their little finger. All were right-handed, and wore the device on their dominant hand.
The third thumb's movement was controlled by sensors attached to the user's big toes, and communications were sent using wireless technology affixed at the wrist and ankle. By wiggling each toe, the augmented humans could move the thumb in different directions and clench its grip.
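A hypothetical version of that toe-to-thumb mapping might look like the sketch below; the ranges and the simple linear mapping are assumptions for illustration, not the device's actual firmware:

```python
# Hypothetical mapping from toe pressure to thumb commands; the sensor ranges
# and the linear mapping are assumptions, not the device's actual firmware.

def thumb_command(left_toe, right_toe):
    """Each input is a normalized pressure in [0, 1] read from a sensor under
    one big toe; one toe sweeps the thumb across the palm, the other curls it."""
    sweep_deg = left_toe * 90.0    # 0 = resting under the little finger, 90 = across the palm
    flex_deg = right_toe * 120.0   # 0 = straight, 120 = fully curled grip
    return {"sweep_deg": round(sweep_deg, 1), "flex_deg": round(flex_deg, 1)}

print(thumb_command(left_toe=0.3, right_toe=0.9))
# {'sweep_deg': 27.0, 'flex_deg': 108.0}
```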
A third thumb can help you blow bubbles (Image: Dani Clode)
For five days, participants were encouraged to use the thumb both in laboratory settings and in the wider world. "One of the goals of the training was to push the participants about what was possible and train them in unique new ways of handling objects," says Clode.
The additional thumb could cradle a cup of coffee while the same hand's forefingers held a spoon to stir in milk, for instance, while some participants used the thumb to flick through pages of a book they were holding in the same hand. The average user wore the thumb for just under 3 hours a day.
To understand how the extra thumb affected people's brains, the researchers gave them an MRI scan before and after the experiment.
"Technology is advancing, but no one is talking about whether our brain can deal with that," says team member Paulina Kieliba, also at UCL.
"In our augmented population, on the right hand, the representation of individual fingers collapsed on each other," says Kieliba - meaning the brain perceived each finger as more similar to each other than it did before the experiment. A week later, 12 of the participants returned for a third brain scan, where the effect of the brain changes had begun to wear off.
Jonathan Aitken at the University of Sheffield, UK, is surprised at how quickly participants adapted to the thumb. "The incorporation of such an unfamiliar tool - and one that requires operation by the toes to control action - and the rapid speed of learning is very interesting," he says.
Journal reference: Science Robotics , DOI: 10.1126/scirobotics.abd7935 | A prosthetic robotic thumb controlled by users' toes could come with a cognitive cost, according to scientists at the U.K.'s University College London (UCL). UCL's Danielle Clode and colleagues outfitted 36 people with the robotic thumb, which is controlled by sensors worn on the big toes, with commands transmitted via wireless hardware on the wrist and ankle. Wiggling each toe lets users move the thumb in different directions or clench its grip. UCL's Paulina Kieliba said magnetic resonance imaging before and after the experiment showed that participants' brains perceived each finger on the hand with the robotic thumb as more similar to each other than they did before the experiment. | [] | [] | [] | scitechnews | None | None | None | None | A prosthetic robotic thumb controlled by users' toes could come with a cognitive cost, according to scientists at the U.K.'s University College London (UCL). UCL's Danielle Clode and colleagues outfitted 36 people with the robotic thumb, which is controlled by sensors worn on the big toes, with commands transmitted via wireless hardware on the wrist and ankle. Wiggling each toe lets users move the thumb in different directions or clench its grip. UCL's Paulina Kieliba said magnetic resonance imaging before and after the experiment showed that participants' brains perceived each finger on the hand with the robotic thumb as more similar to each other than they did before the experiment.
By Chris Stokel-Walker
People equipped with an additional, robotic thumb learned to control it with their toes - but prolonged use may come at the cost of their brains being less certain about how their hands work.
Danielle Clode at University College London and her colleagues gave 36 people a prosthetic thumb that wrapped around their wrist and sat underneath their little finger. All were right-handed, and wore the device on their dominant hand.
The third thumb's movement was controlled by sensors attached to the user's big toes, and communications were sent using wireless technology affixed at the wrist and ankle. By wiggling each toe, the augmented humans could move the thumb in different directions and clench its grip.
A third thumb can help you blow bubbles (Image: Dani Clode)
For five days, participants were encouraged to use the thumb both in laboratory settings and in the wider world. "One of the goals of the training was to push the participants about what was possible and train them in unique new ways of handling objects," says Clode.
The additional thumb could cradle a cup of coffee while the same hand's forefingers held a spoon to stir in milk, for instance, while some participants used the thumb to flick through pages of a book they were holding in the same hand. The average user wore the thumb for just under 3 hours a day.
To understand how the extra thumb affected people's brains, the researchers gave them an MRI scan before and after the experiment.
"Technology is advancing, but no one is talking about whether our brain can deal with that," says team member Paulina Kieliba, also at UCL.
"In our augmented population, on the right hand, the representation of individual fingers collapsed on each other," says Kieliba - meaning the brain perceived each finger as more similar to each other than it did before the experiment. A week later, 12 of the participants returned for a third brain scan, where the effect of the brain changes had begun to wear off.
Jonathan Aitken at the University of Sheffield, UK, is surprised at how quickly participants adapted to the thumb. "The incorporation of such an unfamiliar tool - and one that requires operation by the toes to control action - and the rapid speed of learning is very interesting," he says.
Journal reference: Science Robotics , DOI: 10.1126/scirobotics.abd7935 |
|||
365 | You Can Pet a Virtual Cat, Feel Its Simulated Fur Using Elaborate VR Controller | For some pet owners, being away from their furry companions for an extended period can be heartbreaking. Visiting a beloved pet on a video call just isn't the same, so researchers at National Taiwan University developed a VR controller that allows the user to feel simulated fur while petting a virtual animal.
Created at the university's Interactive Graphics (and Multimedia) Laboratory, in collaboration with Taiwan's National Chengchi University, "HairTouch" was presented at the 2021 Computer-Human Interaction conference this week, and it's another attempt to bridge the real world and virtual reality to make simulated experiences feel more authentic by engaging more than just a user's sense of sight and sound. A VR controller, the motions of which can be tracked by a virtual reality headset so the movements of a user's hands are mirrored in the simulation, was augmented with an elaborate contraption that uses a couple of tufts of fake fur that a finger can feel.
The HairTouch controller not only presents the fake fur when a user touches a furry animal in VR, but it's also capable of simulating the feeling of different types of fur, and other surfaces, by manipulating those hairs as they extend and contract. By controlling the length of hairs, the fake fur can be made to feel softer and more pliable when it's fully extended, or stiffer and more coarse when only a small amount of the fibers are sticking up.
To accurately simulate a pet, whose fur coat doesn't stick straight up like the fibers on a paint brush do, the fake fur on the HairTouch controller can also be bent from side to side, depending on the user's hand and finger movements in the simulation, and the orientation of the virtual animal. Petting your dog from 3,000 miles away doesn't seem like the best use of hundreds of dollars worth of VR gear (unless you're a really devoted dog owner), but the controller can be used to simulate the feel of other textures, too, including fabrics, so the research could also be a welcome upgrade to virtual shopping - a promised use of the technology that hasn't really moved past the concept stage.
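As a rough sketch of the control idea described above (with assumed numbers, not the researchers' implementation), the controller only needs two commands per texture - how far the hair extends and how far it leans:

```python
# Rough sketch: longer, fully extended hair feels soft and pliable, shorter
# hair feels stiff, and the tuft can be tilted sideways. The travel range,
# tilt limit and linear mapping are assumptions for illustration.

def hair_actuation(softness, bend):
    """softness in [0, 1]: 1 = long, pliable fur; 0 = short, coarse bristles.
    bend in [-1, 1]: direction the fur should lean under the fingertip."""
    extension_mm = 5.0 + softness * 25.0   # assumed 5-30 mm of hair exposed
    bend_deg = bend * 40.0                 # assumed +/- 40 degree sideways tilt
    return extension_mm, bend_deg

print(hair_actuation(softness=0.8, bend=-0.25))   # (25.0, -10.0): soft fur leaning left
```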
Don't expect to see the HairTouch available as an official Oculus accessory anytime soon (or even ever), as it's currently just a research project and the prototype isn't quite as sleek as the VR hardware available to consumers now. But it's a clever idea that could find its way into other hardware, and other applications, helping virtual reality blur the lines with reality. | A virtual reality (VR) controller developed by researchers at Taiwan's National Taiwan and National Chengchi universities lets users feel simulated fur while petting a virtual animal. The prototype HairTouch controller augments a VR headset that tracks and mirrors hand movements in a simulated environment with a device that uses tufts of artificial fur. The controller not only presents the fake fur when a user touches a furry creature in VR, but it also simulates the sensations of different types of fur and other surfaces by manipulating the hairs as they extend and contract. The controller can also be used to simulate the feel of other textures, including fabrics. | [] | [] | [] | scitechnews | None | None | None | None | A virtual reality (VR) controller developed by researchers at Taiwan's National Taiwan and National Chengchi universities lets users feel simulated fur while petting a virtual animal. The prototype HairTouch controller augments a VR headset that tracks and mirrors hand movements in a simulated environment with a device that uses tufts of artificial fur. The controller not only presents the fake fur when a user touches a furry creature in VR, but it also simulates the sensations of different types of fur and other surfaces by manipulating the hairs as they extend and contract. The controller can also be used to simulate the feel of other textures, including fabrics.
For some pet owners, being away from their furry companions for an extended period can be heartbreaking. Visiting a beloved pet on a video call just isn't the same, so researchers at National Taiwan University developed a VR controller that allows the user to feel simulated fur while petting a virtual animal.
Created at the university's Interactive Graphics (and Multimedia) Laboratory, in collaboration with Taiwan's National Chengchi University, "HairTouch" was presented at the 2021 Computer-Human Interaction conference this week, and it's another attempt to bridge the real world and virtual reality to make simulated experiences feel more authentic by engaging more than just a user's sense of sight and sound. A VR controller, the motions of which can be tracked by a virtual reality headset so the movements of a user's hands are mirrored in the simulation, was augmented with an elaborate contraption that uses a couple of tufts of fake fur that a finger can feel.
The HairTouch controller not only presents the fake fur when a user touches a furry animal in VR, but it's also capable of simulating the feeling of different types of fur, and other surfaces, by manipulating those hairs as they extend and contract. By controlling the length of hairs, the fake fur can be made to feel softer and more pliable when it's fully extended, or stiffer and more coarse when only a small amount of the fibers are sticking up.
To accurately simulate a pet, whose fur coat doesn't stick straight up like the fibers on a paint brush do, the fake fur on the HairTouch controller can also be bent from side to side, depending on the user's hand and finger movements in the simulation, and the orientation of the virtual animal. Petting your dog from 3,000 miles away doesn't seem like the best use of hundreds of dollars worth of VR gear (unless you're a really devoted dog owner), but the controller can be used to simulate the feel of other textures, too, including fabrics, so the research could also be a welcome upgrade to virtual shopping - a promised use of the technology that hasn't really moved past the concept stage.
Don't expect to see the HairTouch available as an official Oculus accessory anytime soon (or even ever), as it's currently just a research project and the prototype isn't quite as sleek as the VR hardware available to consumers now. But it's a clever idea that could find its way into other hardware, and other applications, helping virtual reality blur the lines with reality. |
|||
366 | Helping Robots Learn What They Can and Can't Do in New Situations | The models that robots use to do tasks work well in the structured environment of the laboratory. Outside the lab, however, even the most sophisticated models may prove inadequate in new situations or in difficult to model tasks, such as working with soft materials like rope and cloth.
To overcome this problem, University of Michigan researchers have created a way for robots to predict when they can't trust their models, and to recover when they find that their model is unreliable.
"We're trying to teach the robot to make do with what it has," said Peter Mitrano, Robotics PhD student.
"When a robot is picking things up and moving them around, it may not know the physics or geometry of everything. Our goal was to have the robot still accomplish useful tasks even with this limited dynamics model that characterises how things move."
To enable robots to handle complex objects or environments, engineers usually rely on one of two approaches.
One is to collect a lot of data, using it to develop a detailed model that attempts to cover every possible scenario. This full dynamics model, however, is usually only accurate for small movements and in fixed settings.
Another method is to check how inaccurate a model is in order to generate the best possible action. However, the inaccuracy of a model is difficult to measure, especially if new, unmodeled items have appeared, and if the robot overestimates the error, it may incorrectly determine that it is impossible to complete a task.
"So you can try to accurately learn the dynamics everywhere, you can try to be conservative and estimate when a model is right, or you can utilize our approach, which is to learn from interacting with the environment where your model is or is not accurate," said Mitrano.
In experiments, the team created a simple model of a rope's dynamics while moving it around an open space. Then, they added obstacles and created a classifier, which learned when this simple rope model was reliable - but did not attempt to learn the more complex behavior of how the rope interacted with the objects. Finally, the team added recovery steps if the robot encountered a situation - such as, say, when the rope collided with an obstacle - and the classifier determined that the simple model was unreliable.
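The loop described here - plan with the simple model, consult the classifier, fall back to a recovery move - can be sketched in a few lines of Python. The toy 2D world and hand-made classifier below are stand-ins, not the authors' code:

```python
# Toy sketch of "simple model + reliability classifier + recovery".
import numpy as np

def simple_model(state, action):
    # Free-space assumption: the held end of the rope just moves with the gripper.
    return state + action

def trusted(state, action):
    # Stand-in for the learned classifier: the simple model is unreliable when
    # the motion would drag the rope through an obstacle band near y = 0.
    nxt = state + action
    return nxt[1] >= 0.2 or not (0.4 < nxt[0] < 0.6)

def recovery(state):
    # Recovery behavior: shift sideways until the classifier trusts the model again.
    return np.array([0.0, 0.05])

def drag_to(goal, state=np.zeros(2), max_steps=100):
    for step in range(max_steps):
        action = np.clip(goal - state, -0.05, 0.05)   # greedy step toward the goal
        if not trusted(state, action):
            action = recovery(state)
        state = simple_model(state, action)
        if np.linalg.norm(goal - state) < 1e-6:
            return state, step + 1
    return state, max_steps

print(drag_to(np.array([1.0, 0.0])))   # reaches the goal by skirting the obstacle band
```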
The team tested their simple model, classifier, and recovery approach against the current state-of-the-art full dynamics approach. Given the task of dragging a rope to a goal position among obstacles, the team's method was successful 84% of the time, compared to 18% of the time for the full dynamics model.
"In our approach, we took inspiration from other realms of science and robotics where simple models, despite their limitations, are still very useful," said Dmitry Berenson, Associate Professor of Electrical Engineering and Computer Science and core faculty member in the Robotics Institute .
"Here, we have a simple model of a rope, and we develop ways to make sure that we are using it in appropriate situations where the model is reliable," said Berenson. "This method can allow robots to generalize their knowledge to new situations that they have never encountered before."
The team also demonstrated the success of their model in two real-world settings: grabbing a phone charging cable and manipulating hoses and straps under the hood of a car.
These examples also show the limitations of their method, in that it doesn't provide a solution for the contact actions necessary to completely finish a task. For example, while it enables moving the charging cord into place, you need a different method in order to plug in the phone. Additionally, as the robot is exploring its capabilities by moving things in the world, the robot must be equipped with safety constraints so that it can explore safely.
The next step in this research is exploring where else a given model might be useful, said Mitrano.
"We have our setup that can drag a phone cable around a table, but can we apply the model to dragging something like a fire hose across a ship?"
The paper, " Learning where to trust unreliable models in an unstructured world for deformable object manipulation ," is published in Science Robotics . Dale McConachie, Robotics PhD '20 and Research Scientist at the Toyota Research Institute (TRI), also contributed to the work.
This work was supported by NSF Grant IIS-1750489 and ONR grant N000141712050, and by Toyota Research Institute (TRI). | University of Michigan researchers have developed a method of helping robots to predict when the model on which they were trained is unreliable, and to learn from interacting with the environment. Their approach involved creating a simple model of a rope's dynamics while moving it around an open space, adding obstacles, creating a classifier that learned when the model was reliable without learning how the rope interacted with the objects, and including recovery steps for when the classifier determined the model was unreliable. The researchers found their approach was successful 84% of the time, versus 18% for a full dynamics model, which aims to incorporate all possible scenarios. The approach also was successful in two real-world settings that involved grabbing a phone charging cable, and manipulating hoses and straps under a car hood. Michigan's Dmitry Berenson said, "This method can allow robots to generalize their knowledge to new situations that they have never encountered before." | [] | [] | [] | scitechnews | None | None | None | None | University of Michigan researchers have developed a method of helping robots to predict when the model on which they were trained is unreliable, and to learn from interacting with the environment. Their approach involved creating a simple model of a rope's dynamics while moving it around an open space, adding obstacles, creating a classifier that learned when the model was reliable without learning how the rope interacted with the objects, and including recovery steps for when the classifier determined the model was unreliable. The researchers found their approach was successful 84% of the time, versus 18% for a full dynamics model, which aims to incorporate all possible scenarios. The approach also was successful in two real-world settings that involved grabbing a phone charging cable, and manipulating hoses and straps under a car hood. Michigan's Dmitry Berenson said, "This method can allow robots to generalize their knowledge to new situations that they have never encountered before."
The models that robots use to do tasks work well in the structured environment of the laboratory. Outside the lab, however, even the most sophisticated models may prove inadequate in new situations or in difficult to model tasks, such as working with soft materials like rope and cloth.
To overcome this problem, University of Michigan researchers have created a way for robots to predict when they can't trust their models, and to recover when they find that their model is unreliable.
"We're trying to teach the robot to make do with what it has," said Peter Mitrano, Robotics PhD student.
"When a robot is picking things up and moving them around, it may not know the physics or geometry of everything. Our goal was to have the robot still accomplish useful tasks even with this limited dynamics model that characterises how things move."
To enable robots to handle complex objects or environments, engineers usually rely on one of two approaches.
One is to collect a lot of data, using it to develop a detailed model that attempts to cover every possible scenario. This full dynamics model, however, is usually only accurate for small movements and in fixed settings.
Another method is to check how inaccurate a model is in order to generate the best possible action. However, the inaccuracy of a model is difficult to measure, especially if new, unmodeled items have appeared, and if the robot overestimates the error, it may incorrectly determine that it is impossible to complete a task.
"So you can try to accurately learn the dynamics everywhere, you can try to be conservative and estimate when a model is right, or you can utilize our approach, which is to learn from interacting with the environment where your model is or is not accurate," said Mitrano.
In experiments, the team created a simple model of a rope's dynamics while moving it around an open space. Then, they added obstacles and created a classifier, which learned when this simple rope model was reliable - but did not attempt to learn the more complex behavior of how the rope interacted with the objects. Finally, the team added recovery steps if the robot encountered a situation - such as, say, when the rope collided with an obstacle - and the classifier determined that the simple model was unreliable.
The team tested their simple model, classifier, and recovery approach against the current state-of-the-art full dynamics approach. Given the task of dragging a rope to a goal position among obstacles, the team's method was successful 84% of the time, compared to 18% of the time for the full dynamics model.
"In our approach, we took inspiration from other realms of science and robotics where simple models, despite their limitations, are still very useful," said Dmitry Berenson, Associate Professor of Electrical Engineering and Computer Science and core faculty member in the Robotics Institute .
"Here, we have a simple model of a rope, and we develop ways to make sure that we are using it in appropriate situations where the model is reliable," said Berenson. "This method can allow robots to generalize their knowledge to new situations that they have never encountered before."
The team also demonstrated the success of their model in two real-world settings: grabbing a phone charging cable and manipulating hoses and straps under the hood of a car.
These examples also show the limitations of their method, in that it doesn't provide a solution for the contact actions necessary to completely finish a task. For example, while it enables moving the charging cord into place, you need a different method in order to plug in the phone. Additionally, as the robot is exploring its capabilities by moving things in the world, the robot must be equipped with safety constraints so that it can explore safely.
The next step in this research is exploring where else a given model might be useful, said Mitrano.
"We have our setup that can drag a phone cable around a table, but can we apply the model to dragging something like a fire hose across a ship?"
The paper, " Learning where to trust unreliable models in an unstructured world for deformable object manipulation ," is published in Science Robotics . Dale McConachie, Robotics PhD '20 and Research Scientist at the Toyota Research Institute (TRI), also contributed to the work.
This work was supported by NSF Grant IIS-1750489 and ONR grant N000141712050, and by Toyota Research Institute (TRI). |
|||
368 | Does Driving Wear You Out? You Might Be Experiencing 'Accelerousal' | Admit it: Daily commutes - those stops, the starts, all that stress - gets on your last nerve.
Or is that just me?
It might be, according to a new study from the University of Houston's Computational Physiology Lab. Ioannis Pavlidis, UH Eckhard Pfeiffer Professor of computer science, and his team of researchers took a look at why some drivers can stay cool behind the wheel while others keep getting more irked.
"We call the phenomenon 'accelerousal.' Arousal being a psychology term that describes stress. Accelarousal is what we identify as stress provoked by acceleration events, even small ones," said Pavlidis, who designed the research. According to the professor, the reason for it goes deeper than you might think.
"It may be partly due to genetic predisposition," Pavlidis said. "It was a very consistent behavior, which means, in all likelihood, this is an innate human characteristic."
To reach these conclusions, UH researchers, in collaboration with the Texas A&M Transportation Institute, took a hard look at how individual drivers reacted to common acceleration, speed and steering events on a carefully monitored itinerary. Results appeared in the May 2021 proceedings of ACM CHI, the premier forum on Human-Computer Interaction research.
"Thanks to our work, we now have an understanding of accelerousal, a phobia that was hidden in plain sight," said Tung Huynh, a research assistant with the team.
For the study, 11 volunteer drivers were monitored for signs of instantaneous physiological stress during separate half-hour drives along the same route in the same Toyota Sienna minivan.
Stress measurements were taken via thermal imaging targeting the drivers' levels of perinasal perspiration, which is an autonomic (involuntary) facial response reflecting a fight-or-flight reaction. Simultaneously, a computer in the Toyota Sienna functioned like an airplane's black box, recording the vehicle's acceleration, speed, brake force and steering.
The driving tests were conducted by Texas A&M Transportation Institute researchers under the direction of Dr. Mike Manser, manager of the Institute's Human Factors Program.
When data was crunched at the University of Houston, researchers found about half the participants consistently exhibited peaked stress during periods of commonplace acceleration, such as happens in stop-and-go progress through red lights. The other half showed no notable changes from their baseline measurements.
"This has all the characteristics of long-term stressor, with all the health and other implications that this may entail," Pavlidis said.
Even more revealing is how far apart the two extremes were.
"The differences were significant, with 'accelaroused' participants logging nearly 50% more stress than non-accelaroused ones," Pavlidis said. "Moreover, psychometric measurements taken through a standardized questionnaire given to every volunteer at the end of the drive revealed that acceleroused drivers felt more overloaded." The anxious drivers were more exhausted after their drives, in other words, than the calm drivers were after theirs.
"This was a clear indication that accelerousal was taking a toll on drivers, and that the drivers were not consciously aware of that," Pavlidis said.
This small-scale study, he suggests, points to the need for deeper research. It also highlights the instrumental role technology could play in understanding human response to demands of driving. Such understanding could not only improve safety on our roads but will also safeguard the long-term health of drivers.
"For instance, delivery drivers, which is an expanding class in the current gig economy, are exposed to stop-and-go events all the time. Therefore, delivery drivers who experience accelerousal - and for now, are unaware - could have a way to detect this condition in themselves and account for its long-term stress effects," Pavlidis explained.
These findings will have even more relevance over coming decades, as automotive innovators move toward semi-automated vehicles that could sense and relieve stressed drivers.
During the recent tests, great care was taken to equalize the volunteers' driving experiences. Each drive happened during daylight hours, in clear weather and light traffic over the same 19-kilometer town itinerary (almost 12 miles). Participants were experienced drivers of similar age (18 to 27) and all had normal vision.
Where would you score on the accelerousal scale? Watch out for signs, the professor urges, and ask yourself: Does driving wear you out more than it does your friends and family?
"That could be a telltale sign of accelerousal," Pavlidis cautioned. | Researchers in the University of Houston (UH) Computational Physiology Lab, working with colleagues at the Texas A&M Transportation Institute, explored why some drivers become more exhausted behind the wheel than others, and found driver stress may be triggered by acceleration events, which they described as "accelerousal." The researchers took thermal stress readings of volunteer drivers during separate half-hour trips along the same route in a Toyota Sienna minivan, with an onboard computer that recorded vehicle acceleration, speed, brake force, and steering. About half the participants consistently showed peak stress during periods of commonplace acceleration, while others exhibited no notable changes from baseline measurements. Said UH's Ioannis Pavlidis, "The differences were significant, with 'accelaroused' participants logging nearly 50% more stress than non-accelaroused ones." | [] | [] | [] | scitechnews | None | None | None | None | Researchers in the University of Houston (UH) Computational Physiology Lab, working with colleagues at the Texas A&M Transportation Institute, explored why some drivers become more exhausted behind the wheel than others, and found driver stress may be triggered by acceleration events, which they described as "accelerousal." The researchers took thermal stress readings of volunteer drivers during separate half-hour trips along the same route in a Toyota Sienna minivan, with an onboard computer that recorded vehicle acceleration, speed, brake force, and steering. About half the participants consistently showed peak stress during periods of commonplace acceleration, while others exhibited no notable changes from baseline measurements. Said UH's Ioannis Pavlidis, "The differences were significant, with 'accelaroused' participants logging nearly 50% more stress than non-accelaroused ones."
Admit it: Daily commutes - those stops, the starts, all that stress - gets on your last nerve.
Or is that just me?
It might be, according to a new study from the University of Houston's Computational Physiology Lab. Ioannis Pavlidis, UH Eckhard Pfeiffer Professor of computer science, and his team of researchers took a look at why some drivers can stay cool behind the wheel while others keep getting more irked.
"We call the phenomenon 'accelerousal.' Arousal being a psychology term that describes stress. Accelarousal is what we identify as stress provoked by acceleration events, even small ones," said Pavlidis, who designed the research. According to the professor, the reason for it goes deeper than you might think.
"It may be partly due to genetic predisposition," Pavlidis said. "It was a very consistent behavior, which means, in all likelihood, this is an innate human characteristic."
To reach these conclusions, UH researchers, in collaboration with the Texas A&M Transportation Institute, took a hard look at how individual drivers reacted to common acceleration, speed and steering events on a carefully monitored itinerary. Results appeared in the May 2021 proceedings of ACM CHI, the premier forum on Human-Computer Interaction research.
"Thanks to our work, we now have an understanding of accelerousal, a phobia that was hidden in plain sight," said Tung Huynh, a research assistant with the team.
For the study, 11 volunteer drivers were monitored for signs of instantaneous physiological stress during separate half-hour drives along the same route in the same Toyota Sienna minivan.
Stress measurements were taken via thermal imaging targeting the drivers' levels of perinasal perspiration, which is an autonomic (involuntary) facial response reflecting a fight-or-flight reaction. Simultaneously, a computer in the Toyota Sienna functioned like an airplane's black box, recording the vehicle's acceleration, speed, brake force and steering.
The driving tests were conducted by Texas A&M Transportation Institute researchers under the direction of Dr. Mike Manser, manager of the Institute's Human Factors Program.
When data was crunched at the University of Houston, researchers found about half the participants consistently exhibited peaked stress during periods of commonplace acceleration, such as happens in stop-and-go progress through red lights. The other half showed no notable changes from their baseline measurements.
"This has all the characteristics of long-term stressor, with all the health and other implications that this may entail," Pavlidis said.
Even more revealing is how far apart the two extremes were.
"The differences were significant, with 'accelaroused' participants logging nearly 50% more stress than non-accelaroused ones," Pavlidis said. "Moreover, psychometric measurements taken through a standardized questionnaire given to every volunteer at the end of the drive revealed that acceleroused drivers felt more overloaded." The anxious drivers were more exhausted after their drives, in other words, than the calm drivers were after theirs.
"This was a clear indication that accelerousal was taking a toll on drivers, and that the drivers were not consciously aware of that," Pavlidis said.
This small-scale study, he suggests, points to the need for deeper research. It also highlights the instrumental role technology could play in understanding human response to demands of driving. Such understanding could not only improve safety on our roads but will also safeguard the long-term health of drivers.
"For instance, delivery drivers, which is an expanding class in the current gig economy, are exposed to stop-and-go events all the time. Therefore, delivery drivers who experience accelerousal - and for now, are unaware - could have a way to detect this condition in themselves and account for its long-term stress effects," Pavlidis explained.
These findings will have even more relevance over coming decades, as automotive innovators move toward semi-automated vehicles that could sense and relieve stressed drivers.
During the recent tests, great care was taken to equalize the volunteers' driving experiences. Each drive happened during daylight hours, in clear weather and light traffic over the same 19-kilometer town itinerary (almost 12 miles). Participants were experienced drivers of similar age (18 to 27) and all had normal vision.
Where would you score on the accelerousal scale? Watch out for signs, the professor urges, and ask yourself: Does driving wear you out more than it does your friends and family?
"That could be a telltale sign of accelerousal," Pavlidis cautioned. |
|||
369 | Microsoft Pushes into Growing Grocery Tech Market with Deal in China | BEIJING - Microsoft's China arm announced Thursday a strategic partnership with Chinese retail tech company Hanshow to collaborate on cloud-based software for store operators worldwide.
The deal marks Microsoft's latest foray into a retail industry that is being forced to accelerate a shift online. The integration of offline with internet-based sales strategies is known as omni-channel retail, and includes grocery delivery , demand for which surged in the wake of the coronavirus pandemic.
Retail is one of the industries that's seen some of the biggest disruptions in recent years, Joe Bao, China strategy officer for Microsoft, said at a signing ceremony at the software company's Beijing offices.
The partnership is not just for the China market, but also for bringing China's technology overseas, Bao said in Mandarin, according to a CNBC translation. He said the agreement comes after five years of Microsoft working with Hanshow.
The American software company entered China in 1992, where it has its biggest overseas research and development center. The strategic partnership comes as U.S. and Chinese companies operate in an increasingly tense political environment that has focused on trade and technology, partly in response to longstanding foreign criticism about unfair Chinese business practices . | Microsoft's Chinese branch last week announced its latest omnichannel retail push to develop cloud-based software for store operators, in partnership with Chinese retail technology provider Hanshow. Hanshow, whose clients are mainly Chinese and European supermarkets, said its products include electronic shelf labels that can display price changes in real time, a system that helps workers pack produce faster for delivery, and a cloud-based platform that lets retailers simultaneously view the temperatures of fresh produce in stores worldwide. The partnership also will develop Internet of Things technology, while Hanshow's Gao Bo said Hanshow will gain access to Microsoft Office 365 software such as Word, and Dynamics 365, a cloud-based customer relationship management system. Joe Bao at Microsoft's China unit said the partnership aims to extend the reach of China's grocery technology globally. | [] | [] | [] | scitechnews | None | None | None | None | Microsoft's Chinese branch last week announced its latest omnichannel retail push to develop cloud-based software for store operators, in partnership with Chinese retail technology provider Hanshow. Hanshow, whose clients are mainly Chinese and European supermarkets, said its products include electronic shelf labels that can display price changes in real time, a system that helps workers pack produce faster for delivery, and a cloud-based platform that lets retailers simultaneously view the temperatures of fresh produce in stores worldwide. The partnership also will develop Internet of Things technology, while Hanshow's Gao Bo said Hanshow will gain access to Microsoft Office 365 software such as Word, and Dynamics 365, a cloud-based customer relationship management system. Joe Bao at Microsoft's China unit said the partnership aims to extend the reach of China's grocery technology globally.
370 | Hiring Troubles Prompt Some Employers to Eye Automation, Machines | Some U.S. employers are ramping up automation as demand for labor outstrips supply, partly driven by innovations in sensors, wireless communications, and optics in the last decade. The COVID-19 pandemic expedited the rollout of automation in industries that previously had been slow to adopt such systems. According to the Association for Advancing Automation, companies outside the auto industry last year constituted over 50% of industrial robot orders for the first time. Bank of America (BofA)'s Ethan Harris said this reflects a gradual "tectonic shift" fueled not just by the pandemic, but also by supply-chain and trade issues. BofA analysts forecast twice as many robots in the global economy by 2025 versus 2019, resulting in years of workforce disruption even after the pandemic ends.
371 | Robotics Hub Carnegie Mellon Lands $150-Million Grant | The grant, announced Thursday, comes from the Richard King Mellon Foundation, which has been investing in computer science initiatives and other areas at the university since the 1960s.
This is the largest grant in the foundation's 74-year history, said Sam Reiman, the foundation's director.
The money comes from the foundation's $1.2 billion total projected grant funding over the next decade in areas such as economic development, conservation and health. "We wanted to make big bets on really bold, visionary projects that have the potential to advance Pittsburgh's economy and position us to be leaders of the new economy of the future," Mr. Reiman said.
Half of the funding will be used to build a new Robotics Innovation Center and to make the university's current Manufacturing Futures Initiative a permanent institute, with both being housed at Hazelwood Green in Pittsburgh, the site of a former steel mill.
Establishing the university's Manufacturing Futures Initiative as a permanent institute means that part of the grant will be used to hire more faculty and staff and provide more funding for research projects over the next few years.
The other $75 million will be used to construct a new science building on the university's campus in the Oakland neighborhood of Pittsburgh.
One of the university's long-term goals is to attract talent needed to build next-generation manufacturing technologies while also providing training and education opportunities for existing manufacturing workers in the Pittsburgh region, said Farnam Jahanian, president of Carnegie Mellon University.
"The importance of advanced manufacturing to both the United States' economic prosperity and competitiveness, and our national security, cannot be underestimated," Mr. Jahanian said.
The university's interest in new manufacturing technologies, including 3-D printing, comes as corporate interest in new digital manufacturing strategies is rising.
The Robotics Innovation Center will encompass about 150,000 square feet and is expected to be complete by the 2025-2026 academic year, Mr. Jahanian said.
The university wants to continue pioneering robotics-focused education, Mr. Jahanian said. Carnegie Mellon founded the first U.S.-based university department devoted to robotics in 1979. Many of the roughly 80 Pittsburgh-based robotics companies today are spinouts of the university, including some that are focused on autonomous vehicle research.
In 2015, ride-hailing company Uber Technologies Inc. poached 40 Carnegie Mellon University researchers and scientists to develop driverless car technology. Last year, Uber sold its self-driving car unit to Aurora Innovation Inc., which has ties to the university.
Over the next 10 years, the university predicts that its research in robotics and artificial intelligence will double. "We're convinced of it, driven by market needs in almost every sector," Mr. Jahanian said.
Write to Sara Castellanos at sara.castellanos@wsj.com | The Richard King Mellon Foundation has allocated a $150-million grant to Carnegie Mellon University (CMU), half of which the university plans to use to expand research and personnel for its advanced manufacturing program and to construct the new Robotics Innovation Center. The other half will be used to build a new science facility on CMU's campus. The university anticipates its research in robotics and artificial intelligence will double over the next decade; "We're convinced of it," CMU's Farnam Jahanian said, "driven by market needs in almost every sector." The university's goals for these initiatives are to draw talent to CMU for new manufacturing technologies, and to advance Pittsburgh's economic development.
372 | Vulnerabilities in Billions of Wi-Fi Devices Let Hackers Bypass Firewalls | One of the things that makes Wi-Fi work is its ability to break big chunks of data into smaller chunks and combine smaller chunks into bigger chunks, depending on the needs of the network at any given moment. These mundane network plumbing features, it turns out, have been harboring vulnerabilities that can be exploited to send users to malicious websites or to attack and tamper with network-connected devices, newly published research shows.
In all, researcher Mathy Vanhoef found a dozen vulnerabilities, either in the Wi-Fi specification or in the way the specification has been implemented in huge numbers of devices. Vanhoef has dubbed the vulnerabilities FragAttacks, short for fragmentation and aggregation attacks, because they all involve frame fragmentation or frame aggregation. Broadly speaking, they allow people within radio range to inject frames of their choice into networks protected by WPA-based encryption.
"It's never good to have someone able to drop packets into your network or target your devices on the network," Mike Kershaw, a Wi-Fi security expert and developer of the open source Kismet wireless sniffer and IDS, wrote in an email. "In some regards, these are no worse than using an unencrypted access point at a coffee shop - someone can do the same to you there, trivially - but because they can happen on networks you'd otherwise think are secure and might have configured as a trusted network, it's certainly bad news."
He added: "Overall, I think they give someone who was already targeting an attack against an individual or company a foothold they wouldn't have had before, which is definitely impactful, but probably don't pose as huge a risk as drive-by attacks to the average person."
While the flaws were disclosed last week in an industry-wide effort nine months in the making, it remains unclear in many cases which devices were vulnerable to which vulnerabilities and which vulnerabilities, if any, have received security updates. It's almost a certainty that many Wi-Fi-enabled devices will never be fixed.
One of the most severe vulnerabilities in the FragAttacks suite resides in the Wi-Fi specification itself. Tracked as CVE-2020-24588, the flaw can be exploited in a way that forces Wi-Fi devices to use a rogue DNS server, which in turn can deliver users to malicious websites rather than the ones they intended. From there, hackers can read and modify any unencrypted traffic. Rogue DNS servers also allow hackers to perform DNS rebinding attacks, in which malicious websites manipulate a browser to attack other devices connected to the same network.
The rogue DNS server is introduced when an attacker injects an ICMPv6 Router Advertisement into Wi-Fi traffic. Routers typically issue these announcements so other devices on the network can locate them. The injected advertisement instructs all devices to use a DNS specified by the attacker for lookups of both IPv6 and IPv4 addresses.
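To make the mechanism concrete, the sketch below (a defensive illustration only) listens for IPv6 Router Advertisements and flags any that advertise DNS servers from an unexpected sender. It assumes a Linux host with the third-party Scapy library, root privileges, a hypothetical interface name ("wlan0"), and a made-up trusted-gateway address.

```python
# Minimal sketch: log IPv6 Router Advertisements and flag RDNSS (DNS server) options
# that do not come from the expected gateway. Interface name and gateway address are
# assumptions for illustration; requires scapy and root privileges.
from scapy.all import sniff
from scapy.layers.inet6 import IPv6, ICMPv6ND_RA, ICMPv6NDOptRDNSS

TRUSTED_ROUTERS = {"fe80::1"}  # hypothetical link-local address of the legitimate router

def inspect_ra(pkt):
    if not pkt.haslayer(ICMPv6ND_RA):
        return
    src = pkt[IPv6].src
    if pkt.haslayer(ICMPv6NDOptRDNSS):
        dns_servers = pkt[ICMPv6NDOptRDNSS].dns
        tag = "ok" if src in TRUSTED_ROUTERS else "SUSPICIOUS"
        print(f"[{tag}] Router Advertisement from {src} advertising DNS servers {dns_servers}")

# Sniff ICMPv6 traffic on the wireless interface and inspect each Router Advertisement.
sniff(iface="wlan0", filter="icmp6", prn=inspect_ra, store=False)
```

A watcher like this can help spot the kind of injected advertisement described above on a monitored network, though it is no substitute for installing vendor patches.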
An exploit demoed in a video Vanhoef published shows the attacker luring the target to a website that stashes the router advertisement in an image.
In an email, Vanhoef explained: "The IPv6 router advertisement is put in the payload (i.e. data portion) of the TCP packet. This data is by default passed on to the application that created the TCP connection. In the demo, that would be the browser, which is expecting an image. This means that by default, the client won't process the IPv6 router advertisement but instead process the TCP payload as application data."
Vanhoef said that it's possible to perform the attack without user interaction when the target's access point is vulnerable to CVE-2020-26139, one of the 12 vulnerabilities that make up the FragAttacks package. The flaw stems from a kernel bug in NetBSD 7.1 that causes Wi-Fi access points to forward Extensible Authentication Protocol over LAN (EAPOL) frames to other devices even when the sender has not yet authenticated to the access point (AP).
Vanhoef also provided a more detailed explanation of the specific software bug and of why the video demo uses a malicious image.
Four of the 12 vulnerabilities that make up the FragAttacks are implementation flaws, meaning they stem from bugs that software developers introduced when writing code based on the Wi-Fi specification. An attacker can exploit them against access points to bypass a key security benefit they provide.
Besides allowing multiple devices to share a single Internet connection, routers prevent incoming traffic from reaching connected devices unless the devices have requested it. This firewall works by using network address translation, or NAT, which maps private IP addresses that the AP assigns each device on the local network to a single IP address that the AP uses to send data over the Internet.
The result is that routers forward data to connected devices only when they have previously requested it from a website, email server, or other machine on the Internet. When one of those machines tries to send unsolicited data to a device behind the router, the router automatically discards it. This arrangement isn't perfect, but it does provide a vital defense that protects billions of devices.
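As a rough illustration of that stateful behavior (a toy model, not any vendor's implementation), the sketch below creates a translation entry for each outbound flow and drops inbound packets that match no entry; all addresses and ports are invented.

```python
# Toy NAT/firewall model: outbound traffic creates a mapping, unsolicited inbound
# traffic finds no mapping and is dropped. Purely illustrative.
import itertools

class ToyNAT:
    def __init__(self, public_ip="203.0.113.7"):
        self.public_ip = public_ip
        self._ports = itertools.count(40000)
        self.out_map = {}   # (private_ip, private_port, remote_ip, remote_port) -> public_port
        self.in_map = {}    # (public_port, remote_ip, remote_port) -> (private_ip, private_port)

    def outbound(self, private_ip, private_port, remote_ip, remote_port):
        key = (private_ip, private_port, remote_ip, remote_port)
        if key not in self.out_map:
            public_port = next(self._ports)
            self.out_map[key] = public_port
            self.in_map[(public_port, remote_ip, remote_port)] = (private_ip, private_port)
        return self.public_ip, self.out_map[key]

    def inbound(self, public_port, remote_ip, remote_port):
        # Returns the internal destination, or None if the packet is unsolicited.
        return self.in_map.get((public_port, remote_ip, remote_port))

nat = ToyNAT()
print(nat.outbound("192.168.1.20", 51000, "93.184.216.34", 443))  # device opens a connection
print(nat.inbound(40000, "93.184.216.34", 443))                   # reply matches the mapping
print(nat.inbound(40001, "198.51.100.9", 6667))                   # unsolicited -> None (dropped)
```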
Vanhoef figured out how to exploit the four vulnerabilities in a way that allows an attacker to, as he put it, "punch a hole through a router's firewall." With the ability to connect directly to devices behind a firewall, an Internet attacker can then send them malicious code or commands.
In one demo in the video, Vanhoef exploits the vulnerabilities to control an Internet-of-things device, specifically to remotely turn on and off a smart power socket. Normally, NAT would prevent a device outside the network from interacting with the socket unless the socket had first initiated a connection. The implementation exploits remove this barrier.
"That means that when an access point is vulnerable, it becomes easy to attack clients!" Vanhoef wrote. "So we're abusing the Wi-Fi implementation flaws in an access point as a first step in order to subsequently attack (outdated) clients ."
Despite Vanhoef spending nine months coordinating patches with more than a dozen hardware and software makers, it's not easy to figure out which devices or software are vulnerable to which vulnerabilities, and of those vulnerable products, which ones have received fixes.
This page provides the status for products from several companies. A more comprehensive list of known advisories is here. Other advisories are available individually from their respective vendors. The vulnerabilities to look for are:
Design flaws:
Implementation vulnerabilities allowing the injection of plaintext frames:
Other implementation flaws:
The most effective way to mitigate the threat posed by FragAttacks is to install all available updates that fix the vulnerabilities. Users will have to do this on each vulnerable computer, router, or other Internet-of-things device. It's likely that a huge number of affected devices will never receive a patch.
The next-best mitigation is to ensure that websites are always using HTTPS connections. That's because the encryption HTTPS provides greatly reduces the damage that can be done when a malicious DNS server directs a victim to a fake website.
Sites that use HTTP Strict Transport Security will always use this protection, but Vanhoef said that only about 20 percent of the web does this. Browser extensions like HTTPS Everywhere were already a good idea, and the mitigation they provide against FragAttacks makes them even more worthwhile.
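A quick way to check whether a given site opts into this protection is to look for the Strict-Transport-Security response header; the short sketch below does so with the third-party requests library against arbitrary example sites.

```python
# Report whether each site sends an HSTS (Strict-Transport-Security) header.
# Site list is arbitrary; requires the "requests" package.
import requests

for site in ("https://example.com", "https://www.wikipedia.org"):
    response = requests.get(site, timeout=10)
    hsts = response.headers.get("Strict-Transport-Security")
    status = f"HSTS enabled ({hsts})" if hsts else "no HSTS header"
    print(f"{site}: {status}")
```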
As noted earlier, FragAttacks aren't likely to be exploited against the vast majority of Wi-Fi users, since the exploits require a high degree of skill as well as proximity - meaning within 100 feet to a half-mile, depending on the equipment used - to the target. The vulnerabilities pose a higher threat to networks used by high-value targets such as retail chains, embassies, or corporate networks where security is key, and then most likely only in concert with other exploits.
When updates become available, by all means install them, but unless you're in this latter group, remember that drive-by downloads and other more mundane types of attacks will probably pose a bigger threat. | Security researcher Mathy Vanhoef found 12 fragmentation vulnerabilities and aggregation attack (FragAttack) exploits in Wi-Fi systems that leave billions of devices potentially vulnerable. FragAttacks let hackers within radio range inject frames into networks shielded by Wi-Fi Protected Access-based encryption; although FragAttacks cannot be used to read passwords or other sensitive data, they can cause other kinds of damage when coupled with other exploits. One particularly severe FragAttack is a flaw in the Wi-Fi specification itself, which if exploited forces devices to use a rogue Domain Name System server, which can subsequently route users to malicious websites. While the most effective way to mitigate the threat is to install all available updates that address the vulnerabilities on each vulnerable computer, router, or Internet-of-things device, it is likely many affected devices will never be patched.
373 | Highly Sensitive LiDAR System Enhances Autonomous Driving Vision | A team of engineers from the University of Texas at Austin and the University of Virginia developed a first-of-its-kind light-detecting device that rapidly amplifies weak signals bouncing off of faraway objects.
In doing so, it could massively improve the vision of self-driving cars, robots, and digital mapping technologies. The workings of their new device are outlined in the journal Nature Photonics.
The engineers developed an avalanche photodiode with a staircase-like alignment that helps to amplify the electrical current for light detection. The pixel-sized device is ideal for Light Detection and Ranging (LiDAR) receivers used in self-driving cars, robotics, and surveillance, the researchers explained in their study.
In their study, the team explained that the new device is more sensitive than existing light detectors, allowing it to create, as an example, a more comprehensive picture for a car's onboard computers. The device also eliminates inconsistencies, also known as noise, typically associated with the self-driving detection process, meaning it could make autonomous vehicles safer.
The new device, essentially, is a physical flight of stairs designed to exploit the photoelectric effect - the electrons are like marbles that roll down the stairs, crashing into each other and releasing enough energy to free another electron. Every step, therefore, doubles the number of electrons.
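A back-of-the-envelope comparison shows why deterministic doubling matters: the sketch below contrasts an ideal staircase, where every step exactly doubles the electron count, with a randomized multiplication process of the same average gain. The step count and the Poisson model are illustrative assumptions, not values from the paper.

```python
# Compare deterministic "staircase" gain (each step exactly doubles the electrons)
# with a randomized multiplication process of the same average gain.
# Step count and statistics are illustrative only.
import numpy as np

STEPS = 7
TRIALS = 20_000
rng = np.random.default_rng(0)

deterministic_gain = 2 ** STEPS  # each staircase step doubles the electron count

def random_gain():
    electrons = 1
    for _ in range(STEPS):
        # Each electron frees on average one extra electron, but the number is random
        # (sum of independent Poisson(1) events), so the output fluctuates.
        electrons += rng.poisson(electrons)
    return electrons

samples = np.array([random_gain() for _ in range(TRIALS)])
print(f"staircase gain: {deterministic_gain} (no spread)")
print(f"random multiplication: mean {samples.mean():.1f}, std dev {samples.std():.1f}")
```

The spread in the randomized case is the kind of multiplication noise that a deterministic staircase is designed to avoid.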
This consistent multiplication of electrons makes the signal from the device more stable and dependable, even in low light conditions, the researchers explained.
"The less random the multiplication is, the weaker the signals you can pick out from the background," Seth Bank, professor in the Cockrell School's Department of Electrical and Computer Engineering, explained in a press statement. "For example, that could allow you to look out to greater distances with a laser radar system for autonomous vehicles."
The researchers also explained that today's technology is much better suited to deliver on the great promise of the staircase avalanche photodiode, which was first proposed in the 1980s by Federico Capasso.
Their device, for example, can operate at room temperature, unlike the most sensitive commercially available light detectors, which have to be maintained at temperatures hundreds of degrees below zero.
The engineers plan to combine their work on their new device with an avalanche photodiode they built last year specifically for near-infrared light. They explained that this device could be used for incredibly accurate fiber-optic communications and thermal imaging.
Interestingly, while most research into self-driving car sensors is aimed at improving LiDAR technology, a team from Princeton University recently proposed a doppler radar system that overcomes LiDAR limitations and would allow autonomous vehicles to see around corners.
Of course, it's not just self-driving cars that could benefit from these new technologies. LiDAR technology gives eVTOL aircraft sight, it can map entire off-world terrains, and it also allows robots such as Boston Dynamics' Spot to navigate their surroundings. Improvements in this field have a cascading effect throughout the technology sector, much like the electrons rolling down the new staircase avalanche photodiode. | A Light Detection and Ranging (LiDAR) system developed by researchers at the University of Texas at Austin and the University of Virginia can enhance weak signals reflected off faraway objects. A new avalanche photodiode they developed can amplify the photoelectric effect for light detection via a staircase-like alignment. The researchers said its greater light sensitivity enabled it to generate a more comprehensive perspective for a car's onboard computers, while also removing noise. The engineers intend to integrate the new device with an avalanche photodiode they invented last year to capture near-infrared light, for greater accuracy in fiber-optic communications and thermal imaging.
374 | Scientists Bring Sense of Touch to a Robotic Arm | Scientists at the University of Pittsburgh, the University of Texas at Austin, and the University of Chicago have conferred a sense of touch to a robotic arm that provides tactile feedback directly to a paralyzed man's brain. The team planted electrodes in a region of the man's brain that processes sensory input, then developed a method of generating signals from the robotic arm/hand that the brain would recognize as making contact with something. Testing showed the patient was able to perform some manual tasks with the robotic arm and hand as quickly as a person using their own hand could.
376 | U.S. Has Almost 500,000 Job Openings in Cybersecurity | Help wanted: thousands and thousands of people interested in a career in cybersecurity.
There are about 465,000 open positions in cybersecurity nationwide as of May 2021, according to Cyber Seek - a tech job-tracking database from the U.S. Commerce Department - and the trade group CompTIA.
The need for more web watchmen spans from private businesses to government agencies, experts say, and most of the job openings are in California, Florida, Texas and Virginia. That means for anyone looking to switch careers and considering a job in cybersecurity, there's no greater time than now to find work, the job trackers said.
"You don't have to be a graduate of MIT to work in cybersecurity," said Tim Herbert, executive vice president for research at CompTIA. "It just requires someone who has the proper training, proper certification and is certainly committed to the work."
Switching careers to cybersecurity could be as easy as grabbing a Network+ or Security+ certification, said Michelle Moore, who teaches cybersecurity operations at the University of San Diego. An eight-week online course could help someone land an entry-level job as a "pen tester," a network security engineer or an incident response analyst, Moore said. Those jobs pay between $60,000 and $90,000 a year, she added.
"Cybersecurity is not rocket science, but it's not like you can just walk in the door and take a job and pick it up like that," Moore said. "But the biggest problem is that people aren't able to fill those positions because they're not finding enough people who are skilled."
Another reason it's been tough to hire cybersecurity professionals is that college students majoring in computer science don't always elect a career in that field, Herbert said. After graduation, the nation's tech students will pick jobs in software development, artificial intelligence, robotics or data science and "a small percentage is going to select cybersecurity," Herbert said.
"Cybersecurity is competing with many other fields," he said, "and right now we find that's not enough to meet the demand."
The demand for cybersecurity professionals comes after large and small organizations alike have watched the damage caused by major hacking attacks in recent history. One of the biggest trends in cyberattacks right now is ransomware, which is malware installed onto vulnerable networks and computers by hackers that threatens to publish private data unless a bounty is paid.
Hackers executed more than 70 ransomware attacks in the first half of 2019, most of which were targeted at local governments. In 2020, Barnes & Noble, Marriott and Twitter were all victims of hacks in which their customers' personal information was exposed. School districts and their employees have also been frequent targets of cyberattacks.
"There are many hacking groups that are opportunists," Herbert said. "They found companies that may have been in a more vulnerable state."
More recently, a hacker in February increased the chemical levels at a water treatment plant in central Florida in an alleged attempt to poison local residents. Earlier this month, a cyber gang took over the computer systems of one of the nation's largest fuel pipelines. Colonial Pipeline, which owns the pipeline, reportedly paid the hackers a $5 million ransom to stop the hijacking. | The U.S. Commerce Department's Cyber Seek technology job-tracking database and the trade group CompTIA count about 465,000 current U.S. cybersecurity job openings. Experts said private businesses and government agencies' need for more cybersecurity staff has unlocked a prime opportunity for anyone considering a job in that field. The University of San Diego's Michelle Moore suggested switching to a cybersecurity career could be as simple as obtaining a Network+ or Security+ certification, while an eight-week online course could help someone gain an entry-level job earning $60,000 to $90,000 a year as a penetration tester, network security engineer, or incident response analyst. Moore cited a lack of skilled cybersecurity personnel as a problem, while CompTIA's Tim Herbert said only a small percentage of computer science graduates pursue cybersecurity careers.
377 | Advanced Technique for Developing Digital Twins Makes Tech Universally Applicable | AUSTIN, Texas - A universally applicable digital twin mathematical model has been co-developed by researchers at The University of Texas at Austin that could be used for systems as diverse as a spacecraft, a person or even an entire city.
In 1970 NASA's Apollo 13 mission to the moon had to be abandoned after one of the oxygen tanks on board the spacecraft exploded, redirecting the crew's attention to survival.
Although the term was not coined for another 40 years, it is now understood that Mission Control's use of Apollo spacecraft simulators to help guide the astronauts safely back to Earth was perhaps the first time "digital twin" technology had been used.
Now, the technology has improved significantly through advanced mathematical modeling techniques, better sensors and more powerful supercomputers. New research conducted by experts from the Oden Institute for Computational Engineering and Sciences, Massachusetts Institute of Technology (MIT) and industry partner The Jessara Group was published in the latest edition of Nature Computational Science . The paper outlines the foundations for a mathematical model that could be used to enable predictive digital twins at various scales and for various situations.
"Digital twins have already been developed for use in specific contexts - like that of a particular engine component or a particular spacecraft mission, but missing has been the foundational mathematical framework that would enable digital twins at scale," said Karen Willcox, director of the Oden Institute and senior author on the paper.
A digital twin is a computational model that evolves over time and continuously represents the structure, behavior and context of a unique physical "asset" such as a spacecraft, a person or even an entire city.
Tailored computational models that reflect the unique characteristics of individual assets enable decision-making that is optimized to the individual rather than based on averages across populations. When is the right time to bring an unmanned aerial vehicle in for servicing? How should your house optimize its energy usage today? Is it time for you to go to the doctor for a thorough check-up?
On the macro scale, smart cities enabled by digital twins and Internet of Things (IoT) devices promise to revolutionize urban planning, resource allocation, sustainability and traffic optimization.
It is difficult to grasp the idea that the same mathematical model could be applied in situations as seemingly disparate as the human body, a space rocket or a building. However, according to Michael Kapteyn, lead author and a doctoral student at MIT, they all share similarities that can be exploited when reduced to mathematical models.
"This is where the power of mathematical abstraction comes into play. Using probabilistic graphical models, we create a mathematical model of the digital twin that applies broadly across application domains," Kapteyn said.
These applications have their own unique requirements, challenges and desired outcomes. But they share commonalities too. There is a set of parameters that describe the state of the system - the structural health of an aircraft wing or the current capacity of an urban roadway. There are data provided by in situ sensors, inspections and other observations of the system. And there are control actions that a decision maker can take. It is the interactions between these quantities - the state, the observational data and the control actions - that the new digital twin model represents mathematically.
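As a minimal illustration of how those three quantities can interact in a probabilistic graphical model, the sketch below runs a discrete Bayesian filter over invented "health" states of an asset, with assumed transition and sensor probabilities; it is a sketch of the general idea, not the model published by the authors.

```python
# Minimal digital-twin-style loop: belief over hidden asset states is predicted forward
# in time, updated with each sensor observation, and used to pick a control action.
# All states, probabilities, and observations here are invented for illustration.
import numpy as np

states = ["healthy", "minor damage", "severe damage"]

transition = np.array([[0.95, 0.05, 0.00],    # P(next state | current state)
                       [0.00, 0.90, 0.10],
                       [0.00, 0.00, 1.00]])

sensor = np.array([[0.80, 0.15, 0.05],        # P(observed strain level | true state)
                   [0.20, 0.60, 0.20],        # columns: low / medium / high strain
                   [0.05, 0.25, 0.70]])

belief = np.array([1.0, 0.0, 0.0])            # twin starts calibrated to a healthy asset

for observation in [0, 1, 1, 2]:              # stream of (made-up) sensor readings
    belief = transition.T @ belief            # predict: the physical asset degrades over time
    belief = belief * sensor[:, observation]  # update: weight by likelihood of the reading
    belief = belief / belief.sum()
    action = "ground for inspection" if belief[2] > 0.3 else "keep operating"
    print(dict(zip(states, belief.round(3))), "->", action)
```

The point of the abstraction is that only the labels and the probability tables change between a drone wing, a building, or a patient; the predict-update-decide loop stays the same.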
The researchers used the new approach to create a structural digital twin of a custom-built unmanned aerial vehicle instrumented with state-of-the-art sensing capabilities. "The value of integrated sensing solutions has been recognized for some time, but combining them with the digital twin concept takes that to a new level," said Jacob Pretorius, chief technology officer of The Jessara Group and co-author on the Nature paper. "We are on the cusp of an exciting future for intelligent engineering systems."
The study was funded by the Air Force Office of Scientific Research, the SUTD-MIT International Design Centre, and the Department of Energy Advanced Scientific Computing Research program. | Researchers at the University of Texas at Austin (UT Austin), the Massachusetts Institute of Technology (MIT), and industry partner The Jessara Group have developed what they're calling a universally applicable digital twin mathematical model. The framework was designed to facilitate predictive digital twins at scale. MIT's Michael Kapteyn said, "Using probabilistic graphical models, we create a mathematical model of the digital twin that applies broadly across application domains." The researchers used this technique to generate a structural digital twin of a custom-built unmanned aerial vehicle equipped with state-of-the-art sensors. Said Jacob Pretorius of the Jessara Group, "The value of integrated sensing solutions has been recognized for some time, but combining them with the digital twin concept takes that to a new level. We are on the cusp of an exciting future for intelligent engineering systems."
378 | Protein Simulation, Experiments Unveil Clues on Origins of Parkinson's Disease | HERSHEY, Pa. - Parkinson's disease is the second most common neurodegenerative disease and affects more than 10 million people around the world. To better understand the origins of the disease, researchers from Penn State College of Medicine and The Hebrew University of Jerusalem have developed an integrative approach, combining experimental and computational methods, to understand how individual proteins may form harmful aggregates, or groupings, that are known to contribute to the development of the disease. They said their findings could guide the development of new therapeutics to delay or even halt the progression of neurodegenerative diseases.
Alpha-synuclein is a protein that helps regulate the release of neurotransmitters in the brain and is found in neurons. It exists as a single unit, but commonly joins together with other units to perform cellular functions. When too many units combine, it can lead to the formation of Lewy bodies, which are associated with neurodegenerative diseases like Parkinson's disease and dementia.
Although researchers know that aggregates of this protein cause disease, how they form is not well understood. Alpha-synuclein is highly disordered, meaning it exists as an ensemble of different conformations, or shapes, rather than a well-folded 3D structure. This characteristic makes the protein difficult to study using standard laboratory techniques - but the research team used computers together with leading-edge experiments to predict and study the different conformations it may fold into.
"Computational biology allows us to study how forces within and outside of a protein may act on it," said Nikolay Dokholyan , professor of pharmacology at the College of Medicine and Penn State Cancer Institute researcher. "Using experiments performed in professor Eitan Lerner's laboratory at the Biological Chemistry Department at The Hebrew University of Jerusalem, a series of algorithms accounts for effective forces acting in and upon a specific protein and can identify the various conformations it will take based on those forces. This allows us to study the conformations of alpha-synuclein in a way that is otherwise difficult to identify in experimental studies alone."
In the paper published today (May 19) in the journal Structure, the researchers detailed their methodology for studying the different conformations of alpha-synuclein. They used data from previous experiments to program the molecular dynamics of the protein into their calculations. Their experiments revealed the conformational ensemble of alpha-synuclein, which is a series of different shapes the protein can assume.
Using leading-edge experiments, the researchers found that some shapes of alpha-synuclein are surprisingly stable and last longer than milliseconds. They said this is much slower than estimates of a disordered protein that constantly changes conformations.
"Prior knowledge showed this spaghetti-like protein would undergo structure changes in microseconds," Lerner said. "Our results indicate that alpha-synuclein is stable in some conformations for milliseconds - slower than previously estimated."
"We believe that we've identified stable forms of alpha-synuclein that allow it to form complexes with itself and other biomolecules," said Jiaxing Chen, a graduate student at the College of Medicine. "This opens up possibilities for the development of drugs that can regulate the function of this protein."
Chen's lead co-author, Sofia Zaer, alongside colleagues at Hebrew University, used a series of experimental techniques to verify that alpha-synuclein could fold into the stable forms the simulation predicted. The research team continues to study these stable conformations as well as the whole process of alpha-synuclein aggregation in the context of Parkinson's disease.
"The information from our study could be used to develop small molecule regulators of alpha-synuclein activity," Lerner said. "Drugs that prevent protein aggregation and enhance its normal neuro-physiological function may interfere with the development and progression of neurodegenerative diseases."
Paz Drori, Joanna Zamel, Khalil Joron and Nir Kalisman of The Hebrew University of Jerusalem also contributed to this research. The authors disclose no conflicts of interest.
This research was supported by the Michael J. Fox Foundation, National Institutes of Health, the Passan Foundation, the Israel Science Foundation, the Milner Fund and the Hebrew University of Jerusalem. This work was also supported by the National Center for Advancing Translational Science through Penn State Clinical and Translational Science Institute (grant UL1 TR002014). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or other funders. | Researchers at Pennsylvania State University (Penn State) and Israel's Hebrew University of Jerusalem blended experimental and computational methods to explore how individual proteins may form aggregates that play into the development of Parkinson's disease. Penn State's Nikolay Dokholyan said, "Using experiments performed in professor Eitan Lerner's laboratory at the Biological Chemistry Department at The Hebrew University of Jerusalem, a series of algorithms accounts for effective forces acting in and upon a specific protein and can identify the various conformations it will take based on those forces." The researchers used data from earlier experiments to feed the molecular dynamics of the alpha-synuclein protein into their calculations, which exposed conformations that in some instances persisted longer than previously estimated. Penn State's Jiaxing Chen said these findings unlock "possibilities for the development of drugs that can regulate the function of this protein." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Pennsylvania State University (Penn State) and Israel's Hebrew University of Jerusalem blended experimental and computational methods to explore how individual proteins may form aggregates that play into the development of Parkinson's disease. Penn State's Nikolay Dokholyan said, "Using experiments performed in professor Eitan Lerner's laboratory at the Biological Chemistry Department at The Hebrew University of Jerusalem, a series of algorithms accounts for effective forces acting in and upon a specific protein and can identify the various conformations it will take based on those forces." The researchers used data from earlier experiments to feed the molecular dynamics of the alpha-synuclein protein into their calculations, which exposed conformations that in some instances persisted longer than previously estimated. Penn State's Jiaxing Chen said these findings unlock "possibilities for the development of drugs that can regulate the function of this protein."
HERSHEY, Pa. - Parkinson's disease is the second most common neurodegenerative disease and affects more than 10 million people around the world. To better understand the origins of the disease, researchers from Penn State College of Medicine and The Hebrew University of Jerusalem have developed an integrative approach, combining experimental and computational methods, to understand how individual proteins may form harmful aggregates, or groupings, that are known to contribute to the development of the disease. They said their findings could guide the development of new therapeutics to delay or even halt the progression of neurodegenerative diseases.
Alpha-synuclein is a protein that helps regulate the release of neurotransmitters in the brain and is found in neurons. It exists as a single unit, but commonly joins together with other units to perform cellular functions. When too many units combine, it can lead to the formation of Lewy bodies, which are associated with neurodegenerative diseases like Parkinson's disease and dementia.
Although researchers know that aggregates of this protein cause disease, how they form is not well understood. Alpha-synuclein is highly disordered, meaning it exists as an ensemble of different conformations, or shapes, rather than a well-folded 3D structure. This characteristic makes the protein difficult to study using standard laboratory techniques - but the research team used computers together with leading-edge experiments to predict and study the different conformations it may fold into.
"Computational biology allows us to study how forces within and outside of a protein may act on it," said Nikolay Dokholyan , professor of pharmacology at the College of Medicine and Penn State Cancer Institute researcher. "Using experiments performed in professor Eitan Lerner's laboratory at the Biological Chemistry Department at The Hebrew University of Jerusalem, a series of algorithms accounts for effective forces acting in and upon a specific protein and can identify the various conformations it will take based on those forces. This allows us to study the conformations of alpha-synuclein in a way that is otherwise difficult to identify in experimental studies alone."
In the paper published today (May 19) in the journal Structure, the researchers detailed their methodology for studying the different conformations of alpha-synuclein. They used data from previous experiments to program the molecular dynamics of the protein into their calculations. Their experiments revealed the conformational ensemble of alpha-synuclein, which is a series of different shapes the protein can assume.
Using leading-edge experiments, the researchers found that some shapes of alpha-synuclein are surprisingly stable and last longer than milliseconds. They said this is much slower than estimates of a disordered protein that constantly changes conformations.
"Prior knowledge showed this spaghetti-like protein would undergo structure changes in microseconds," Lerner said. "Our results indicate that alpha-synuclein is stable in some conformations for milliseconds - slower than previously estimated."
"We believe that we've identified stable forms of alpha-synuclein that allow it to form complexes with itself and other biomolecules," said Jiaxing Chen, a graduate student at the College of Medicine. "This opens up possibilities for the development of drugs that can regulate the function of this protein."
Chen's lead co-author, Sofia Zaer, alongside colleagues at Hebrew University, used a series of experimental techniques to verify that alpha-synuclein could fold into the stable forms the simulation predicted. The research team continues to study these stable conformations as well as the whole process of alpha-synuclein aggregation in the context of Parkinson's disease.
"The information from our study could be used to develop small molecule regulators of alpha-synuclein activity," Lerner said. "Drugs that prevent protein aggregation and enhance its normal neuro-physiological function may interfere with the development and progression of neurodegenerative diseases."
Paz Drori, Joanna Zamel, Khalil Joron and Nir Kalisman of The Hebrew University of Jerusalem also contributed to this research. The authors disclose no conflicts of interest.
This research was supported by the Michael J. Fox Foundation, National Institutes of Health, the Passan Foundation, the Israel Science Foundation, the Milner Fund and the Hebrew University of Jerusalem. This work was also supported by the National Center for Advancing Translational Science through Penn State Clinical and Translational Science Institute (grant UL1 TR002014). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or other funders. |
|||
379 | Novel Approach Identifies Genes Linked to Autism, Patient IQ | According to some estimates, hundreds of genes may be associated with autism spectrum disorders (ASD), but it has been difficult to determine which mutations are truly involved in the disease and which are incidental. New work published in the journal Science Translational Medicine led by researchers at Baylor College of Medicine shows that a novel computational approach can effectively identify genes most likely linked to the condition, as well as predict the severity of intellectual disability in patients with ASD using only rare mutations in genes beyond those already associated with the syndrome.
Knowing which genes contribute to ASD, researchers can then study them to better understand how the condition happens and use them to improve predicting the risk of the syndrome and more effectively advise parents of potential outcomes and treatments.
"ASD is a very complex condition and many cases do not have a clear genetic explanation based on current knowledge," said first author Dr. Amanda Koire, a graduate student in the Dr. Olivier Lichtarge lab during the development of this project. She is currently a psychiatry research resident at Brigham and Women's Hospital, Harvard Medical School.
There is not one gene that causes the majority of ASD cases, the researchers explained. "The most commonly mutated genes linked to the syndrome only account for approximately 2% of the cases," said Lichtarge, Cullen Chair and professor of molecular and human genetics, biochemistry and molecular biology, and pharmacology and chemical biology at Baylor. "The current thought is that the syndrome results from a very large number of gene mutations, each mutation having a mild effect."
The challenge is to identify which gene mutations are indeed involved in the condition, but because the variants that contribute to the development of ASD are individually rare, a patient-by-patient approach to identify them would likely not succeed. Even current studies that compare whole populations of affected individuals and unaffected parents and siblings find genes that only explain a fraction of the cases.
The Baylor group decided to take a completely different perspective. First, they added a vast amount of evolutionary data to their analyses. These data provided an extensive and open, but rarely fully accessed, record of the role of mutations on protein evolution, and, by extension, on the impact of human variants on protein function. With this in hand, the researchers could focus on the mutations most likely to be harmful. Two other steps then further sharpened the resolution of the study: a focus on personal mutations, which are unique to each individual, and on how these mutations add up in each molecular pathway.
Exploring the contribution of de novo missense mutations in ASD
The researchers looked into a group of mutations known as missense variants. While some mutations disrupt the structure of proteins so severely as to render them inactive, missense mutations are much more common but are harder to assess than loss-of-function mutations because their effects range from slightly tweaking the protein's function to severely impairing it.
"Some loss-of-function mutations have been associated with the severity of ASD, measured by diminished motor skills and IQ, but missense mutations had not been linked to the same ASD patient characteristics on a large-scale due to the difficulty in interpreting their impact," said co-author Dr. Panagiotis Katsonis , assistant professor of molecular and human genetics at Baylor. "However, people with ASD are more likely to carry a de novo missense mutation than a de novo loss-of-function mutation and the tools previously developed in our lab can help with the interpretation of this majority of coding variants. De novo or new mutations are those that appear for the first time in a family member, they are not inherited from either parent."
The researchers took on the challenge to identify, among all the de novo missense mutations in a cohort of patients with ASD and their siblings as a whole, those mutations that would distinguish between the patients and the unaffected siblings.
A multilayered approach
The team applied a multilayered strategy to identify a group of genes and mutations that most likely was involved in causing ASD.
They first identified a group of de novo mutations by examining the sequences of all the protein-coding genes of 2,392 families with members with ASD that are in the Simons Simplex Collection. Then, they evaluated the effect of each missense mutation on the fitness or functionality of the corresponding protein using the Evolutionary Action (EA) equation, a computational tool previously developed in the Lichtarge lab. The EA equation provides a score, from 0 to 100, that reflects the effect of the mutation on the fitness of the protein. The higher the score, the lower the fitness of the mutated protein.
The results suggested that among the 1,418 de novo missense mutations affecting 1,269 genes in the patient group, most genes were mutated only once.
"Knowing that ASD is a multigenic condition that presents on a spectrum, we reasoned that the mutations that were contributing to ASD could dispersed amongst the genes of a metabolic pathway when examined at a cohort level, rather than being clustered on a single gene," Koire said. "If any single component of a pathway becomes affected by a rare mutation, it could produce a clinical manifestation of ASD, with slightly different results depending on the specific mutation and the gene."
Without making any a priori assumptions regarding which genes or pathways drive ASD, the researchers looked at the cohort as a whole and asked, in which pathways are there more de novo missense mutations with higher EA scores than expected?
The team found that significantly higher EA scores of grouped de novo missense mutations implicated 398 genes from 23 pathways. For example, they found that axonogenesis, a pathway for the development of new axons in neurons in the brain, stood out among other pathways because it clearly had many missense mutations that together demonstrated a significant bias toward high EA scores, indicating impactful mutations. Synaptic transmission and other neurodevelopmental pathways were also among those affected by mutations with high EA scores.
"As a result of layering together all these different complementary views of potential functional impact of the mutations on the biology, we could identify a set of genes that clearly related to ASD," Lichtarge said. "These genes fell in pathways that were not necessarily surprising, but reassuringly related to neurological function. Some of these genes had been linked to ASD before, but others had not been previously associated with the syndrome."
"We also were very excited to see a relationship between the EA score of the mutations in those genes linked to ASD and the patient's IQ," Koire said. "For the new genes we found linked to ASD, the mutations with higher EA scores were related to a 7 point lower IQ in the patients, which suggests that they have a genuine biological effect."
"This opens doors on many fronts," said co-author Young Won Kim, graduate student in Baylor's Integrative Molecular and Biomedical Sciences Graduate Program working in the Lichtarge lab at the time of research. "It suggests new genes we can study in ASD, and that there is a path forward to advise parents of children with these mutations of the potential outcomes in their child and how to best involve external support in early development intervention, which has shown to make a huge difference in outcome as well."
"Our findings may go beyond ASD," Lichtarge said. "This approach, we hope, could be tested in a wide set of complex diseases. As many genome sequence data become increasingly accessible for research, it should then be possible to interpret the rare mutations which they yield as we showed here. This may then resolve better than now the polygenic basis of various adult diseases and also improve estimates of individual risk and morbidity."
Christie Buchovecky, at Baylor and Columbia University, and Stephen J. Wilson at Baylor also contributed to this work.
This work was supported by the National Institutes of Health (grant numbers GM079656-8, DE025181, GM066099, AG061105), the Oskar Fischer Foundation, the National Science Foundation (grant number DBI1356569) and the Defense Advance Research Project Agency (grant number N66001-15-C-4042). In addition, this study received support from RP160283 - Baylor College of Medicine Comprehensive Cancer Training Program, the Baylor Research Advocates for Student Scientists (BRASS), and the McNair MD/PhD Scholars program. | A study led by Baylor College of Medicine researchers identified a novel computational approach for identifying genes most likely associated with autism spectrum disorders (ASD), and predicting the severity of intellectual disability in ASD patients as a result. The team fed a massive volume of evolutionary data to their analyses on mutations' contribution to protein evolution, and on the impact of human variants on protein function. Researchers concentrated on de novo missense variants in particular, to identify mutations that differentiate ASD patients and unaffected siblings. The Baylor researchers used the Evolutionary Action equation to assess the impact of each missense mutation on a corresponding protein. Baylor's Young Won Kim said the results suggest new genes to study, and "a path forward to advise parents of children with these mutations of the potential outcomes in their child and how to best involve external support in early development intervention, which has shown to make a huge difference in outcome as well." | [] | [] | [] | scitechnews | None | None | None | None | A study led by Baylor College of Medicine researchers identified a novel computational approach for identifying genes most likely associated with autism spectrum disorders (ASD), and predicting the severity of intellectual disability in ASD patients as a result. The team fed a massive volume of evolutionary data to their analyses on mutations' contribution to protein evolution, and on the impact of human variants on protein function. Researchers concentrated on de novo missense variants in particular, to identify mutations that differentiate ASD patients and unaffected siblings. The Baylor researchers used the Evolutionary Action equation to assess the impact of each missense mutation on a corresponding protein. Baylor's Young Won Kim said the results suggest new genes to study, and "a path forward to advise parents of children with these mutations of the potential outcomes in their child and how to best involve external support in early development intervention, which has shown to make a huge difference in outcome as well."
According to some estimates, hundreds of genes may be associated with autism spectrum disorders (ASD), but it has been difficult to determine which mutations are truly involved in the disease and which are incidental. New work published in the journal Science Translational Medicine led by researchers at Baylor College of Medicine shows that a novel computational approach can effectively identify genes most likely linked to the condition, as well as predict the severity of intellectual disability in patients with ASD using only rare mutations in genes beyond those already associated with the syndrome.
Knowing which genes contribute to ASD, researchers can then study them to better understand how the condition happens and use them to improve predicting the risk of the syndrome and more effectively advise parents of potential outcomes and treatments.
"ASD is a very complex condition and many cases do not have a clear genetic explanation based on current knowledge," said first author Dr. Amanda Koire, a graduate student in the Dr. Olivier Lichtarge lab during the development of this project. She is currently a psychiatry research resident at Brigham and Women's Hospital, Harvard Medical School.
There is not one gene that causes the majority of ASD cases, the researchers explained. "The most commonly mutated genes linked to the syndrome only account for approximately 2% of the cases," said Lichtarge, Cullen Chair and professor of molecular and human genetics, biochemistry and molecular biology, and pharmacology and chemical biology at Baylor. "The current thought is that the syndrome results from a very large number of gene mutations, each mutation having a mild effect."
The challenge is to identify which gene mutations are indeed involved in the condition, but because the variants that contribute to the development of ASD are individually rare, a patient-by-patient approach to identify them would likely not succeed. Even current studies that compare whole populations of affected individuals and unaffected parents and siblings find genes that only explain a fraction of the cases.
The Baylor group decided to take a completely different perspective. First, they added a vast amount of evolutionary data to their analyses. These data provided an extensive and open, but rarely fully accessed, record of the role of mutations on protein evolution, and, by extension, on the impact of human variants on protein function. With this in hand, the researchers could focus on the mutations most likely to be harmful. Two other steps then further sharpened the resolution of the study: a focus on personal mutations, which are unique to each individual, and on how these mutations add up in each molecular pathway.
Exploring the contribution of de novo missense mutations in ASD
The researchers looked into a group of mutations known as missense variants. While some mutations disrupt the structure of proteins so severely as to render them inactive, missense mutations are much more common but are harder to assess than loss-of-function mutations because their effects range from slightly tweaking the protein's function to severely impairing it.
"Some loss-of-function mutations have been associated with the severity of ASD, measured by diminished motor skills and IQ, but missense mutations had not been linked to the same ASD patient characteristics on a large-scale due to the difficulty in interpreting their impact," said co-author Dr. Panagiotis Katsonis , assistant professor of molecular and human genetics at Baylor. "However, people with ASD are more likely to carry a de novo missense mutation than a de novo loss-of-function mutation and the tools previously developed in our lab can help with the interpretation of this majority of coding variants. De novo or new mutations are those that appear for the first time in a family member, they are not inherited from either parent."
The researchers took on the challenge to identify, among all the de novo missense mutations in a cohort of patients with ASD and their siblings as a whole, those mutations that would distinguish between the patients and the unaffected siblings.
A multilayered approach
The team applied a multilayered strategy to identify a group of genes and mutations that most likely was involved in causing ASD.
They first identified a group of de novo mutations by examining the sequences of all the protein-coding genes of 2,392 families with members with ASD that are in the Simons Simplex Collection. Then, they evaluated the effect of each missense mutation on the fitness or functionality of the corresponding protein using the Evolutionary Action (EA) equation, a computational tool previously developed in the Lichtarge lab. The EA equation provides a score, from 0 to 100, that reflects the effect of the mutation on the fitness of the protein. The higher the score, the lower the fitness of the mutated protein.
The results suggested that among the 1,418 de novo missense mutations affecting 1,269 genes in the patient group, most genes were mutated only once.
"Knowing that ASD is a multigenic condition that presents on a spectrum, we reasoned that the mutations that were contributing to ASD could dispersed amongst the genes of a metabolic pathway when examined at a cohort level, rather than being clustered on a single gene," Koire said. "If any single component of a pathway becomes affected by a rare mutation, it could produce a clinical manifestation of ASD, with slightly different results depending on the specific mutation and the gene."
Without making any a priori assumptions regarding which genes or pathways drive ASD, the researchers looked at the cohort as a whole and asked, in which pathways are there more de novo missense mutations with higher EA scores than expected?
The team found that significantly higher EA scores of grouped de novo missense mutations implicated 398 genes from 23 pathways. For example, they found that axonogenesis, a pathway for the development of new axons in neurons in the brain, stood out among other pathways because it clearly had many missense mutations that together demonstrated a significant bias toward high EA scores, indicating impactful mutations. Synaptic transmission and other neurodevelopmental pathways were also among those affected by mutations with high EA scores.
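As a rough illustration of what such a pathway-level test can look like, the sketch below groups per-mutation EA scores by pathway and asks, with a permutation test, whether a pathway's scores sit higher than expected by chance. The gene names, scores, pathway memberships and test statistic are hypothetical stand-ins; the study's actual statistical procedure is described in the paper.

```python
# Illustrative sketch only: group per-mutation Evolutionary Action (EA) scores by
# pathway and ask whether a pathway's mutations are shifted toward high scores,
# using a simple permutation test on the mean. The gene sets, scores and choice
# of test statistic are stand-ins, not the study's actual procedure.

import random
from statistics import mean

# Hypothetical input: EA scores (0-100) of de novo missense mutations, keyed by gene.
mutation_scores = {
    "GENE_A": [92.0, 81.5], "GENE_B": [77.0], "GENE_C": [88.5],
    "GENE_D": [35.0], "GENE_E": [22.5, 41.0], "GENE_F": [18.0],
    "GENE_G": [55.0], "GENE_H": [47.5],
}

# Hypothetical pathway membership (a real analysis would use curated gene sets).
pathways = {
    "axonogenesis_like": ["GENE_A", "GENE_B", "GENE_C"],
    "unrelated_pathway": ["GENE_D", "GENE_E", "GENE_F"],
}


def pathway_bias_p_value(pathway_genes, n_perm=20_000, seed=0):
    """One-sided permutation p-value: is the pathway's mean EA score higher than
    that of a random draw of the same number of mutations from the whole cohort?"""
    rng = random.Random(seed)
    in_pathway = [s for g in pathway_genes for s in mutation_scores.get(g, [])]
    all_scores = [s for scores in mutation_scores.values() for s in scores]
    observed = mean(in_pathway)
    hits = sum(mean(rng.sample(all_scores, len(in_pathway))) >= observed
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)


if __name__ == "__main__":
    for name, genes in pathways.items():
        print(f"{name}: one-sided p ~ {pathway_bias_p_value(genes):.4f}")
```

A small p-value for a pathway corresponds to the kind of bias toward high EA scores reported here; the study applies this idea cohort-wide with curated pathway annotations.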
"As a result of layering together all these different complementary views of potential functional impact of the mutations on the biology, we could identify a set of genes that clearly related to ASD," Lichtarge said. "These genes fell in pathways that were not necessarily surprising, but reassuringly related to neurological function. Some of these genes had been linked to ASD before, but others had not been previously associated with the syndrome."
"We also were very excited to see a relationship between the EA score of the mutations in those genes linked to ASD and the patient's IQ," Koire said. "For the new genes we found linked to ASD, the mutations with higher EA scores were related to a 7 point lower IQ in the patients, which suggests that they have a genuine biological effect."
"This opens doors on many fronts," said co-author Young Won Kim, graduate student in Baylor's Integrative Molecular and Biomedical Sciences Graduate Program working in the Lichtarge lab at the time of research. "It suggests new genes we can study in ASD, and that there is a path forward to advise parents of children with these mutations of the potential outcomes in their child and how to best involve external support in early development intervention, which has shown to make a huge difference in outcome as well."
"Our findings may go beyond ASD," Lichtarge said. "This approach, we hope, could be tested in a wide set of complex diseases. As many genome sequence data become increasingly accessible for research, it should then be possible to interpret the rare mutations which they yield as we showed here. This may then resolve better than now the polygenic basis of various adult diseases and also improve estimates of individual risk and morbidity."
Christie Buchovecky, at Baylor and Columbia University, and Stephen J. Wilson at Baylor also contributed to this work.
This work was supported by the National Institutes of Health (grant numbers GM079656-8, DE025181, GM066099, AG061105), the Oskar Fischer Foundation, the National Science Foundation (grant number DBI1356569) and the Defense Advanced Research Projects Agency (grant number N66001-15-C-4042). In addition, this study received support from RP160283 - Baylor College of Medicine Comprehensive Cancer Training Program, the Baylor Research Advocates for Student Scientists (BRASS), and the McNair MD/PhD Scholars program.