Dataset columns:

| Column | Type | Length / values |
|---|---|---|
| id | string | 1–169 chars |
| pr-title | string | 2–190 chars |
| pr-article | string | 0–65k chars |
| pr-summary | string | 47–4.27k chars |
| sc-title | string | 2 distinct values |
| sc-article | string | 0–2.03M chars |
| sc-abstract | string | 2 distinct values |
| sc-section_names | sequence | length 0 |
| sc-sections | sequence | length 0 |
| sc-authors | sequence | length 0 |
| source | string | 2 distinct values |
| Topic | string | 10 distinct values |
| Citation | string | 4–4.58k chars |
| Paper_URL | string | 4–213 chars |
| News_URL | string | 4–119 chars |
| pr-summary-and-article | string | 49–66.1k chars |

Sample rows:

id | pr-title | pr-article | pr-summary | sc-title | sc-article | sc-abstract | sc-section_names | sc-sections | sc-authors | source | Topic | Citation | Paper_URL | News_URL | pr-summary-and-article
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0 | New York City's Vaccine Passport Plan Renews Online Privacy Debate | When New York City announced on Tuesday that it would soon require people to show proof of at least one coronavirus vaccine shot to enter businesses, Mayor Bill de Blasio said the system was "simple - just show it and you're in."
Less simple was the privacy debate that the city reignited.
Vaccine passports, which show proof of vaccination, often in electronic form such as an app, are the bedrock of Mr. de Blasio's plan. For months, these records - also known as health passes or digital health certificates - have been under discussion around the world as a tool to allow vaccinated people, who are less at risk from the virus, to gather safely. New York will be the first U.S. city to include these passes in a vaccine mandate, potentially setting off similar actions elsewhere.
But the mainstreaming of these credentials could also usher in an era of increased digital surveillance, privacy researchers said. That's because vaccine passes may enable location tracking, even as there are few rules about how people's digital vaccine data should be stored and how it can be shared. While existing privacy laws limit the sharing of information among medical providers, there is no such rule for when people upload their own data onto an app. | New York City's mandate that people must show proof of at least one coronavirus vaccine shot, or vaccine passport, to enter businesses has revived the debate over whether these digital certificates undermine online privacy. The applications may enable location tracking, and privacy researchers are worried about digital surveillance escalating. The New York Civil Liberties Union's Allie Bohm said without restrictions, presenting a digital vaccination passport whenever people enter a public place could lead to a "global map of where people are going," which could be sold or turned over to third parties, law enforcement, or government authorities. Privacy advocates are not reassured by vaccine pass developers' claims that their products uphold privacy, given that authoritarian regimes have exploited COVID-19 contact-tracing apps for surveillance or criminal investigation. | [] | [] | [] | scitechnews | None | None | None | None | New York City's mandate that people must show proof of at least one coronavirus vaccine shot, or vaccine passport, to enter businesses has revived the debate over whether these digital certificates undermine online privacy. The applications may enable location tracking, and privacy researchers are worried about digital surveillance escalating. The New York Civil Liberties Union's Allie Bohm said without restrictions, presenting a digital vaccination passport whenever people enter a public place could lead to a "global map of where people are going," which could be sold or turned over to third parties, law enforcement, or government authorities. Privacy advocates are not reassured by vaccine pass developers' claims that their products uphold privacy, given that authoritarian regimes have exploited COVID-19 contact-tracing apps for surveillance or criminal investigation.
|||
1 | Facebook Disables Accounts Tied to NYU Research Project | Facebook Inc. has disabled the personal accounts of a group of New York University researchers studying political ads on the social network, claiming they are scraping data in violation of the company's terms of service.
The company also cut off the researchers' access to Facebook's APIs, technology that is used to share data from Facebook to other apps or services, and disabled other apps and Pages associated with the research project, according to Mike Clark, a director of product management on Facebook's privacy team. | Facebook has disabled the personal accounts of New York University (NYU) scientists studying political ads on the social network, alleging their extraction of data violates its terms of service. Facebook's Mike Clark said the company also blocked their access to Facebook's application programming interfaces, used to share network data to other apps or services, and disabled additional apps and pages linked to the NYU Ad Observatory project. The initiative has participants download a browser extension that gathers data on the political ads they see on Facebook, and how they were targeted. NYU's Laura Edelson said Facebook has basically terminated the university's effort to study misinformation in political ads "using user privacy, a core belief that we have always put first in our work, as a pretext for doing this." | [] | [] | [] | scitechnews | None | None | None | None | Facebook has disabled the personal accounts of New York University (NYU) scientists studying political ads on the social network, alleging their extraction of data violates its terms of service. Facebook's Mike Clark said the company also blocked their access to Facebook's application programming interfaces, used to share network data to other apps or services, and disabled additional apps and pages linked to the NYU Ad Observatory project. The initiative has participants download a browser extension that gathers data on the political ads they see on Facebook, and how they were targeted. NYU's Laura Edelson said Facebook has basically terminated the university's effort to study misinformation in political ads "using user privacy, a core belief that we have always put first in our work, as a pretext for doing this."
|||
2 | Teenage Girls in Northern Nigeria 'Open Their Minds' with Robotics | KANO, Nigeria, Aug 2 (Reuters) - Teenage girls in the northern Nigerian city of Kano are learning robotics, computing and other STEM subjects as part of an innovative project that challenges local views of what girls should be doing in a socially conservative Muslim society.
In a place where girls are expected to marry young and their education is often cut short, the Kabara NGO aims to widen their world view through activities such as building machines, using common software programmes and learning about maths and science.
"I came to Kabara to learn robotics and I have created a lot of things," said Fatima Zakari, 12. One of her creations is a battery-powered spin art device to create distinctive artwork.
"I am happy to share this with my younger ones and the community at large for the growth of the society," she said proudly.
Kabara is the brainchild of engineer Hadiza Garbati, who wanted to raise the aspirations of northern Nigerian girls and help them develop skills they might harness to start their own small businesses or enroll at university.
Since it started in Kano in 2016, Kabara has trained more than 200 girls, and Garbati is working on expanding her project to other northern cities.
It is a rare educational success story in northern Nigeria, where more than 1,000 children have been kidnapped from their schools by ransom seekers since December, causing many more to drop out because their parents are fearful of abductions.
Kabara, located in a safe area in the heart of Kano, has been unaffected by the crisis.
Garbati said she had overcome resistance from some parents by being highly respectful of Islamic traditions. The girls wear their hijabs during sessions.
Crucial to her success has been support from Nasiru Wada, a close adviser to the Emir of Kano, a figurehead who has moral authority in the community. Wada holds the traditional title of Magajin Garin Kano.
"The main reason why we are doing this is to encourage them, to open their minds," said Wada.
"Tradition, not to say discourages, but does not put enough emphasis on the education of the girl child, with the belief that oh, at a certain age, she will get married," he said.
"It is good to encourage the girl child to study not only the humanities but the science subjects as well because we need healthcare workers, we need science teachers," he said, adding that even married women needed skills to manage their affairs.
Our Standards: The Thomson Reuters Trust Principles. | The Kabara non-governmental organization (NGO) in northern Nigeria is helping teenage girls in the city of Kano to learn robotics, computing, and other science, technology, engineering, and math subjects. Founded in 2016 by engineer Hadiza Garbati, Kabara has trained over 200 girls, with plans to extend its reach to other northern Nigerian cities. Conservative Muslim traditions in the region often deemphasize girls' education; the NGO hopes to broaden their horizons through activities like building machines, using common software programs, and learning math and science. Said Kabara supporter Nasiru Wada, an adviser to Kano's emir, "The main reason why we are doing this is to encourage them, to open their minds." | [] | [] | [] | scitechnews | None | None | None | None | The Kabara non-governmental organization (NGO) in northern Nigeria is helping teenage girls in the city of Kano to learn robotics, computing, and other science, technology, engineering, and math subjects. Founded in 2016 by engineer Hadiza Garbati, Kabara has trained over 200 girls, with plans to extend its reach to other northern Nigerian cities. Conservative Muslim traditions in the region often deemphasize girls' education; the NGO hopes to broaden their horizons through activities like building machines, using common software programs, and learning math and science. Said Kabara supporter Nasiru Wada, an adviser to Kano's emir, "The main reason why we are doing this is to encourage them, to open their minds."
|||
3 | 3D 'Heat Map' Animation Shows How Seizures Spread in the Brains of Epilepsy Patients | For 29 years, from the time she was 12, Rashetta Higgins had been wracked by epileptic seizures - as many as 10 a week - in her sleep, at school and at work. She lost four jobs over 10 years. One seizure brought her down as she was climbing concrete stairs, leaving a bloody scene and a bad gash near her eye.
A seizure struck in 2005 while she was waiting at the curb for a bus. "I fell down right when the bus was pulling up," she says. "My friend grabbed me just in time. I fell a lot. I've had concussions. I've gone unconscious. It has put a lot of wear and tear on my body."
Then, in 2016, Higgins' primary-care doctor, Mary Clark, at La Clinica North Vallejo, referred her to UC San Francisco's Department of Neurology, marking the beginning of her journey back to health and her contribution to new technology that will make it easier to locate seizure activity in the brain. Medication couldn't slow her seizures or diminish their severity, so the UCSF Epilepsy Center team recommended surgery to first record and pinpoint the location of the bad activity and then remove the brain tissue that was triggering the seizures.
In April, 2019, Higgins was admitted to UCSF's 10-bed Epilepsy Monitoring Unit at UCSF Helen Diller Medical Center at Parnassus Heights, where surgeons implanted more than 150 electrodes. EEGs tracked her brain wave activity around the clock to pinpoint the region of tissue that had triggered her brainstorms for 29 years.
In just one week, Higgins had 10 seizures, and each time, the gently undulating EEG tracings recording normal brain activity jerked suddenly into the tell-tale jagged peaks and valleys indicating a seizure.
To find the site of a seizure in a patient's brain, experts currently look at brain waves by reviewing hundreds of squiggly lines on a screen, watching how high and low the peaks and valleys go (the amplitude) and how fast these patterns repeat or oscillate (the frequency). But during a seizure, electrical activity in the brain spikes so fast that the many EEG traces can be tough to read.
"We look for the electrodes with the largest change," says Robert Knowlton , MD, professor of Neurology, the medical director of the UCSF Seizure Disorders Surgery Program and a member of the UCSF Weill Institute of Neurosciences . "Higher frequencies are weighted more. They usually have the lowest amplitude, so we look on the EEG for a combination of the two extremes. It's visual - not completely quantitative. It's complicated to put together."
Enter Jonathan Kleen, MD, PhD, assistant professor of Neurology and a member of the UCSF Weill Institute of Neurosciences. Trained as both a neuroscientist and a computer scientist, he quickly saw the potential of a software strategy to clear up the picture - literally.
"The field of information visualization has really matured in the last 20 years," Kleen said. "It's a process of taking huge volumes of data with many details - space, time, frequency, intensity and other things - and distilling them into a single intuitive visualization like a colorful picture or video."
Kleen developed a program that translates the hundreds of EEG traces into a 3-D movie showing activity in all recorded locations in the brain. The result is a multicolored 3-D heat map that looks very much like a meteorologist's hurricane weather map.
The heat map's cinematic representation of seizures, projected onto a 3-D reconstruction of the patient's own brain, helps one plainly see where a seizure starts and track where, and how fast, it spreads through the brain.
The heat map closely aligns with the traditional visual analysis, but it's simpler to understand and is personalized to the patient's own brain.
"To see it on the heat map makes it much easier to define where the seizure starts, and whether there's more than one trigger site," Knowlton said. "And it is much better at seeing how the seizure spreads. With conventional methods, we have no idea where it's spreading."
Researchers are using the new technology at UCSF to gauge how well it pinpoints the brain's seizure trigger compared with the standard visual approach. So far, the heat maps have been used to help identify the initial seizure site and the spread of a seizure through the brain in more than 115 patients.
Kleen's strategy is disarmingly simple. To distinguish seizures from normal brain activity, he added up the lengths of the lines on an EEG. The rapid changes measured during a seizure produce a lengthy cumulative line, while gently undulating brain waves make much shorter lines. Kleen's software translated these lengths into different colors, and the visualization was born.
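For readers who want to see the line-length idea concretely, here is a minimal sketch in Python. The synthetic signals, sampling rate, window size, and function names are illustrative assumptions; they are not details of the UCSF software described in the article.

```python
# Minimal sketch of the line-length idea described above. The synthetic EEG,
# sampling rate, and window size are assumptions for illustration only.
import numpy as np

def line_length(eeg: np.ndarray, fs: int, window_s: float = 1.0) -> np.ndarray:
    """Sum of absolute sample-to-sample changes per channel, per time window.

    eeg has shape (channels, samples); the result has shape (channels, windows).
    """
    win = int(fs * window_s)
    n_win = eeg.shape[1] // win
    trimmed = eeg[:, : n_win * win].reshape(eeg.shape[0], n_win, win)
    # Jagged seizure activity adds up to a long cumulative line; gentle
    # background activity adds up to a short one.
    return np.abs(np.diff(trimmed, axis=2)).sum(axis=2)

fs = 256
t = np.arange(0, 10, 1 / fs)
background = 20 * np.sin(2 * np.pi * 10 * t)                      # calm 10 Hz rhythm
seizing = background + 80 * np.sin(2 * np.pi * 25 * t) * (t > 5)  # fast activity after 5 s
ll = line_length(np.vstack([background, seizing]), fs)
print(np.round(ll, 1))  # the second channel's values jump once the "seizure" starts
```

Mapping each electrode's windowed values onto a colormap over a 3-D brain surface is what produces the kind of heat-map animation the article describes.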
The technology proved pivotal in Higgins' treatment.
"Before her recordings, we had feared that Rashetta had multiple seizure-generating areas," Kleen said. "But her video made it plainly obvious that there was a single problem area, and the bad activity was rapidly spreading from that primary hot spot."
The journal Epilepsia put Kleen's and Knowlton's 3-D heat map technology on the cover, and the researchers made their software open-source, so others can improve upon it.
"It's been a labor of love to get this technology to come to fruition" Kleen said. "I feel very strongly that to make progress in the field we need to share technologies, especially things that will help patients."
Higgins has been captivated by the 3-D heat maps of her brain.
"It was amazing," she said. "It was like, 'That's my brain. I'm watching my brain function.'"
And the surgery has been a life-changing success. Higgins hasn't had a seizure in more than two years, feels mentally sharp, and is looking for a job.
"When I wake up, I'm right on it every morning," she said. "I waited for this day for a long, long time." | University of California, San Francisco (UCSF) neuroscientists used an algorithm to visualize in three dimensions hundreds of electroencephalography (EEG) traces in the brain, resulting in an animated heat map of seizures in epileptic patients. UCSF's Robert Knowlton said the tool "makes it much easier to define where the seizure starts, and whether there's more than one trigger site," as well as visualizing the seizure's propagation. The algorithm differentiates seizures from the normal activity of the brain by adding the lengths of the lines on an EEG, and translating them into distinct colors. The heat maps have been used to help identify the initial seizure point and the spread of a seizure through the brain in over 115 patients. | [] | [] | [] | scitechnews | None | None | None | None | University of California, San Francisco (UCSF) neuroscientists used an algorithm to visualize in three dimensions hundreds of electroencephalography (EEG) traces in the brain, resulting in an animated heat map of seizures in epileptic patients. UCSF's Robert Knowlton said the tool "makes it much easier to define where the seizure starts, and whether there's more than one trigger site," as well as visualizing the seizure's propagation. The algorithm differentiates seizures from the normal activity of the brain by adding the lengths of the lines on an EEG, and translating them into distinct colors. The heat maps have been used to help identify the initial seizure point and the spread of a seizure through the brain in over 115 patients.
|||
4 | Endlessly Changing Playground Teaches AIs to Multitask | What did they learn? Some of DeepMind's XLand AIs played 700,000 different games in 4,000 different worlds, encountering 3.4 million unique tasks in total. Instead of learning the best thing to do in each situation, which is what most existing reinforcement-learning AIs do, the players learned to experiment - moving objects around to see what happened, or using one object as a tool to reach another object or hide behind - until they beat the particular task.
In the videos you can see the AIs chucking objects around until they stumble on something useful: a large tile, for example, becomes a ramp up to a platform. It is hard to know for sure if all such outcomes are intentional or happy accidents, say the researchers. But they happen consistently.
AIs that learned to experiment had an advantage in most tasks, even ones that they had not seen before. The researchers found that after just 30 minutes of training on a complex new task, the XLand AIs adapted to it quickly. But AIs that had not spent time in XLand could not learn these tasks at all. | Alphabet's DeepMind Technologies has developed a videogame-like three-dimensional world that allows artificial intelligence (AI) agents to learn skills by experimenting and exploring. Those skills can be used to perform tasks they have not performed before. XLand is managed by a central AI that controls the environment, game rules, and number of players, with reinforcement learning helping the playground manager and players to improve over time. The AI players played 700,000 different games in 4,000 different worlds and performed 3.4 million unique tasks. Rather than learning the best thing to do in each scenario, the AI players experimented until they completed the task at hand. | [] | [] | [] | scitechnews | None | None | None | None | Alphabet's DeepMind Technologies has developed a videogame-like three-dimensional world that allows artificial intelligence (AI) agents to learn skills by experimenting and exploring. Those skills can be used to perform tasks they have not performed before. XLand is managed by a central AI that controls the environment, game rules, and number of players, with reinforcement learning helping the playground manager and players to improve over time. The AI players played 700,000 different games in 4,000 different worlds and performed 3.4 million unique tasks. Rather than learning the best thing to do in each scenario, the AI players experimented until they completed the task at hand.
|||
5 | CISA Launches Initiative to Combat Ransomware | About the Author
Chris Riotta is a staff writer at FCW covering government procurement and technology policy. Chris joined FCW after covering U.S. politics for three years at The Independent. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. | The U.S. Cybersecurity and Infrastructure Security Agency (CISA) officially launched the Joint Cyber Defense Collaborative (JCDC), an anti-ransomware initiative supported by public-private information sharing. CISA director Jen Easterly said the organization was created to develop cyber defense strategies and exchange insights between the federal government and private-sector partners. A CISA webpage said interagency officials will work in the JCDC office to lead the development of U.S. cyber defense plans that incorporate best practices for dealing with cyber intrusions; a key goal is coordinating public-private strategies to combat cyberattacks, particularly ransomware, while engineering incident response frameworks. Said security vendor CrowdStrike Services' Shawn Henry, the JCDC "will create an inclusive, collaborative environment to develop proactive cyber defense strategies" and help "implement coordinated operations to prevent and respond to cyberattacks." | [] | [] | [] | scitechnews | None | None | None | None | The U.S. Cybersecurity and Infrastructure Security Agency (CISA) officially launched the Joint Cyber Defense Collaborative (JCDC), an anti-ransomware initiative supported by public-private information sharing. CISA director Jen Easterly said the organization was created to develop cyber defense strategies and exchange insights between the federal government and private-sector partners. A CISA webpage said interagency officials will work in the JCDC office to lead the development of U.S. cyber defense plans that incorporate best practices for dealing with cyber intrusions; a key goal is coordinating public-private strategies to combat cyberattacks, particularly ransomware, while engineering incident response frameworks. Said security vendor CrowdStrike Services' Shawn Henry, the JCDC "will create an inclusive, collaborative environment to develop proactive cyber defense strategies" and help "implement coordinated operations to prevent and respond to cyberattacks."
|||
7 | Apple to Scan iPhones for Child Sex Abuse Images | "Regardless of what Apple's long term plans are, they've sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users' phones for prohibited content," Matthew Green, a security researcher at Johns Hopkins University, said. | Apple has unveiled a system designed to scan U.S. customers' iPhones to determine if they contain child sexual abuse material (CSAM). The system compares photo files on each handset to a database of known CSAM gathered by the National Center for Missing and Exploited Children and other organizations. Before an iPhone can be used to upload an image to the iCloud Photos platform, the technology will look for matches to known CSAM; matches are evaluated by a human reviewer, who reports confirmed matches to law enforcement. The company said the system's privacy benefits are significantly better than existing techniques, because Apple only learns about users' images if their iCloud Photos accounts contain collections of known CSAM. | [] | [] | [] | scitechnews | None | None | None | None | Apple has unveiled a system designed to scan U.S. customers' iPhones to determine if they contain child sexual abuse material (CSAM). The system compares photo files on each handset to a database of known CSAM gathered by the National Center for Missing and Exploited Children and other organizations. Before an iPhone can be used to upload an image to the iCloud Photos platform, the technology will look for matches to known CSAM; matches are evaluated by a human reviewer, who reports confirmed matches to law enforcement. The company said the system's privacy benefits are significantly better than existing techniques, because Apple only learns about users' images if their iCloud Photos accounts contain collections of known CSAM.
"Regardless of what Apple's long term plans are, they've sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users' phones for prohibited content," Matthew Green, a security researcher at Johns Hopkins University, said. |
|||
8 | Information Transfer Protocol Reaches Quantum Speed Limit | Even though quantum computers are a young technology and aren't yet ready for routine practical use, researchers have already been investigating the theoretical constraints that will bound quantum technologies. One of the things researchers have discovered is that there are limits to how quickly quantum information can race across any quantum device.
These speed limits are called Lieb-Robinson bounds, and, for several years, some of the bounds have taunted researchers: For certain tasks, there was a gap between the best speeds allowed by theory and the speeds possible with the best algorithms anyone had designed. It's as though no car manufacturer could figure out how to make a model that reached the local highway limit.
But unlike speed limits on roadways, information speed limits can't be ignored when you're in a hurry - they are the inevitable results of the fundamental laws of physics. For any quantum task, there is a limit to how quickly interactions can make their influence felt (and thus transfer information) a certain distance away. The underlying rules define the best performance that is possible. In this way, information speed limits are more like the max score on an old school arcade game than traffic laws, and achieving the ultimate score is an alluring prize for scientists.
Now a team of researchers, led by JQI Fellow Alexey Gorshkov, has found a quantum protocol that reaches the theoretical speed limits for certain quantum tasks. Their result provides new insight into designing optimal quantum algorithms and proves that there hasn't been a lower, undiscovered limit thwarting attempts to make better designs. Gorshkov, who is also a Fellow of the Joint Center for Quantum Information and Computer Science (QuICS) and a physicist at the National Institute of Standards and Technology, and his colleagues presented their new protocol in a recent article published in the journal Physical Review X.
"This gap between maximum speeds and achievable speeds had been bugging us, because we didn't know whether it was the bound that was loose, or if we weren't smart enough to improve the protocol," says Minh Tran, a JQI and QuICS graduate student who was the lead author on the article. "We actually weren't expecting this proposal to be this powerful. And we were trying a lot to improve the bound - turns out that wasn't possible. So, we're excited about this result."
Unsurprisingly, the theoretical speed limit for sending information in a quantum device (such as a quantum computer) depends on the device's underlying structure. The new protocol is designed for quantum devices where the basic building blocks - qubits - influence each other even when they aren't right next to each other. In particular, the team designed the protocol for qubits that have interactions that weaken as the distance between them grows. The new protocol works for a range of interactions that don't weaken too rapidly, which covers the interactions in many practical building blocks of quantum technologies, including nitrogen-vacancy centers, Rydberg atoms, polar molecules and trapped ions.
Crucially, the protocol can transfer information contained in an unknown quantum state to a distant qubit, an essential feature for achieving many of the advantages promised by quantum computers. This limits the way information can be transferred and rules out some direct approaches, like just creating a copy of the information at the new location. (That requires knowing the quantum state you are transferring.)
In the new protocol, data stored on one qubit is shared with its neighbors, using a phenomenon called quantum entanglement. Then, since all those qubits help carry the information, they work together to spread it to other sets of qubits. Because more qubits are involved, they transfer the information even more quickly.
This process can be repeated to keep generating larger blocks of qubits that pass the information faster and faster. So instead of the straightforward method of qubits passing information one by one like a basketball team passing the ball down the court, the qubits are more like snowflakes that combine into a larger and more rapidly rolling snowball at each step. And the bigger the snowball, the more flakes stick with each revolution.
But that's maybe where the similarities to snowballs end. Unlike a real snowball, the quantum collection can also unroll itself. The information is left on the distant qubit when the process runs in reverse, returning all the other qubits to their original states.
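As a rough illustration of why growing blocks of qubits outpace one-by-one relaying, here is a toy sketch in Python. The doubling rule and the one-step merge time are simplifying assumptions made for this cartoon; they are not the paper's actual analysis, which depends on how the qubit interactions decay with distance.

```python
# Toy cartoon of the "snowball" intuition described above -- not the protocol's
# real scaling analysis. Assumptions: a sequential relay moves the state one
# qubit per time step, while the entangled block doubles once per time step.

def relay_reach(steps: int) -> int:
    """Distance reached when the state is handed off one neighbor at a time."""
    return steps

def snowball_reach(steps: int) -> int:
    """Distance reached when the entangled block doubles each step."""
    block = 1
    for _ in range(steps):
        block *= 2  # the whole block helps recruit an equal-sized new block
    return block

for t in (5, 10, 20):
    print(f"t={t:2d}  relay={relay_reach(t):>8d}  snowball={snowball_reach(t):>8d}")
```

The only point of the cartoon is that recruiting whole blocks compounds over time, which is the intuition behind why block-based spreading can beat simple hand-offs when qubits interact over long distances.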
When the researchers analyzed the process, they found that the snowballing qubits speed along the information at the theoretical limits allowed by physics. Since the protocol reaches the previously proven limit, no future protocol should be able to surpass it.
"The new aspect is the way we entangle two blocks of qubits," Tran says. "Previously, there was a protocol that entangled information into one block and then tried to merge the qubits from the second block into it one by one. But now because we also entangle the qubits in the second block before merging it into the first block, the enhancement will be greater."
The protocol is the result of the team exploring the possibility of simultaneously moving information stored on multiple qubits. They realized that using blocks of qubits to move information would enhance a protocol's speed.
"On the practical side, the protocol allows us to not only propagate information, but also entangle particles faster," Tran says. "And we know that using entangled particles you can do a lot of interesting things like measuring and sensing with a higher accuracy. And moving information fast also means that you can process information faster. There's a lot of other bottlenecks in building quantum computers, but at least on the fundamental limits side, we know what's possible and what's not."
In addition to the theoretical insights and possible technological applications, the team's mathematical results also reveal new information about how large a quantum computation needs to be in order to simulate particles with interactions like those of the qubits in the new protocol. The researchers are hoping to explore the limits of other kinds of interactions and to explore additional aspects of the protocol such as how robust it is against noise disrupting the process.
Story by Bailey Bedford
In addition to Gorshkov and Tran, co-authors of the research paper include JQI and QuICS graduate student Abhinav Deshpande, JQI and QuICS graduate student Andrew Y. Guo, and University of Colorado Boulder Professor of Physics Andrew Lucas. | Joint Quantum Institute (JQI) scientists have developed a quantum information transfer protocol that reaches theoretical speed limits for some quantum operations. The protocol is engineered for quantum devices in which interactions between quantum bits (qubits) weaken as they recede from each other, covering a range of interactions that do not weaken too quickly. The protocol can deliver many of quantum computer's promised benefits by transferring data within an unknown quantum state to a distant qubit. Data stored on one qubit is shared with its neighbors via quantum entanglement, and the qubits cooperate to spread it to other sets of qubits, accelerating the transfer as more sets are involved. JQI's Minh Tran said, "Moving information fast also means that you can process information faster." | [] | [] | [] | scitechnews | None | None | None | None | Joint Quantum Institute (JQI) scientists have developed a quantum information transfer protocol that reaches theoretical speed limits for some quantum operations. The protocol is engineered for quantum devices in which interactions between quantum bits (qubits) weaken as they recede from each other, covering a range of interactions that do not weaken too quickly. The protocol can deliver many of quantum computer's promised benefits by transferring data within an unknown quantum state to a distant qubit. Data stored on one qubit is shared with its neighbors via quantum entanglement, and the qubits cooperate to spread it to other sets of qubits, accelerating the transfer as more sets are involved. JQI's Minh Tran said, "Moving information fast also means that you can process information faster."
|||
10 | Security Flaws Found in Popular EV Chargers |
U.K. cybersecurity company Pen Test Partners has identified several vulnerabilities in six home electric vehicle charging brands and a large public EV charging network. While the charger manufacturers resolved most of the issues, the findings are the latest example of the poorly regulated world of Internet of Things devices, which are poised to become all but ubiquitous in our homes and vehicles.
Vulnerabilities were identified in five different EV charging brands - Project EV, Wallbox, EVBox, EO Charging's EO Hub and EO mini pro 2, and Hypervolt - and public charging network Chargepoint. They also examined Rolec, but found no vulnerabilities.
Security researcher Vangelis Stykas identified several security flaws among the various brands that could have allowed a malicious hacker to hijack user accounts, impede charging and even turn one of the chargers into a "backdoor" into the owner's home network.
The consequences of a hack to a public charging station network could include theft of electricity at the expense of driver accounts and turning chargers on or off.
Some EV chargers, like Wallbox and Hypervolt, used a Raspberry Pi compute module, a low-cost computer that's often used by hobbyists and programmers.
"The Pi is a great hobbyist and educational computing platform, but in our opinion it's not suitable for commercial applications as it doesn't have what's known as a 'secure bootloader,'" Pen Test Partners founder Ken Munro told TechCrunch.
"This means anyone with physical access to the outside of your home (hence to your charger) could open it up and steal your Wi-Fi credentials. Yes, the risk is low, but I don't think charger vendors should be exposing us to additional risk," he said.
The hacks are "really fairly simple," Munro said. "I can teach you to do this in five minutes," he added.
The company's report, published this past weekend , touched on vulnerabilities associated with emerging protocols like the Open Charge Point Interface, maintained and managed by the EVRoaming Foundation. The protocol was designed to make charging seamless between different charging networks and operators.
Munro likened it to roaming on a cell phone, allowing drivers to use networks outside of their usual charging network. OCPI isn't widely used at the moment, so these vulnerabilities could be designed out of the protocol. But if left unaddressed, it could mean "that a vulnerability in one platform potentially creates a vulnerability in another," Stykas explained.
Hacks to charging stations have become a particularly nefarious threat as a greater share of transportation becomes electrified and more power flows through the electric grid. Electric grids are not designed for large swings in power consumption - but that's exactly what could happen, should there be a large hack that turned on or off a sufficient number of DC fast chargers.
"It doesn't take that much to trip the power grid to overload," Munro said. "We've inadvertently made a cyberweapon that others could use against us."
While the effects on the electric grid are unique to EV chargers, cybersecurity issues aren't. The routine hacks reveal more endemic issues in IoT devices, where being first to market often takes precedence over sound security - and where regulators are barely able to catch up to the pace of innovation.
"There's really not a lot of enforcement," Justin Brookman, the director of consumer privacy and technology policy for Consumer Reports, told TechCrunch in a recent interview. Data security enforcement in the United States falls within the purview of the Federal Trade Commission. But while there is a general-purpose consumer protection statute on the books, "it may well be illegal to build a system that has poor security, it's just whether you're going to get enforced against or not," said Brookman.
A separate federal bill, the Internet of Things Cybersecurity Improvement Act, passed last September but only broadly applies to the federal government.
There's only slightly more movement on the state level. In 2018, California passed a bill banning default passwords in new consumer electronics starting in 2020 - useful progress to be sure, but which largely puts the burden of data security in the hands of consumers. California, as well as states like Colorado and Virginia, also have passed laws requiring reasonable security measures for IoT devices.
Such laws are a good start. But (for better or worse) the FTC isn't like the U.S. Food and Drug Administration, which audits consumer products before they hit the market. As of now, there's no security check on technology devices prior to them reaching consumers. Over in the United Kingdom, "it's the Wild West over here as well, right now," Munro said.
Some startups have emerged that are trying to tackle this issue. One is Thistle Technologies , which is trying to help IoT device manufacturers integrate mechanisms into their software to receive security updates. But it's unlikely this problem will be fully solved on the back of private industry alone.
Because EV chargers could pose a unique threat to the electric grid, there's a possibility that EV chargers could fall under the scope of a critical infrastructure bill. Last week, President Joe Biden released a memorandum calling for greater cybersecurity for systems related to critical infrastructure. "The degradation, destruction or malfunction of systems that control this infrastructure could cause significant harm to the national and economic security of the United States," Biden said. Whether this will trickle down to consumer products is another question.
Correction: The article has been updated to note that the researchers found no vulnerabilities in the Rolec home EV charger. The first paragraph was clarified after an earlier editing error. | Analysts at U.K. cybersecurity firm Pen Test Partners have identified flaws in the application programming interfaces of six home electric vehicle (EV) charging brands, as well as the Chargepoint public EV charging station network. Pen Test analyst Vangelis Stykas found several vulnerabilities that could enable hackers to commandeer user accounts, hinder charging, and repurpose a charger as a backdoor into the owner's home network. The Chargepoint flaw, meanwhile, could let hackers steal electricity and shift the cost to driver accounts, and activate or deactivate chargers. Some EV chargers use a Raspberry Pi compute module, a popular low-cost computer that Pen Test's Ken Munro said is unsuitable for commercial applications due to its lack of a secure bootloader. Charger manufacturers have corrected most of the issues, but the flaws' existence highlights the poor regulation of Internet of Things devices. | [] | [] | [] | scitechnews | None | None | None | None | Analysts at U.K. cybersecurity firm Pen Test Partners have identified flaws in the application programming interfaces of six home electric vehicle (EV) charging brands, as well as the Chargepoint public EV charging station network. Pen Test analyst Vangelis Stykas found several vulnerabilities that could enable hackers to commandeer user accounts, hinder charging, and repurpose a charger as a backdoor into the owner's home network. The Chargepoint flaw, meanwhile, could let hackers steal electricity and shift the cost to driver accounts, and activate or deactivate chargers. Some EV chargers use a Raspberry Pi compute module, a popular low-cost computer that Pen Test's Ken Munro said is unsuitable for commercial applications due to its lack of a secure bootloader. Charger manufacturers have corrected most of the issues, but the flaws' existence highlights the poor regulation of Internet of Things devices.
|||
11 | ForCE Model Accurately Predicts How Coasts Will Be Impacted by Storms, Sea-Level Rise | Coastal communities across the world are increasingly facing up to the huge threats posed by a combination of extreme storms and predicted rises in sea levels as a result of global climate change.
However, scientists at the University of Plymouth have developed a simple algorithm-based model which accurately predicts how coastlines could be affected and - as a result - enables communities to identify the actions they might need to take in order to adapt.
The Forecasting Coastal Evolution (ForCE) model has the potential to be a game-changing advance in coastal evolution science, allowing adaptations in the shoreline to be predicted over timescales of anything from days to decades and beyond.
This broad range of timescales means that the model is capable of predicting both the short-term impact of violent storms or storm sequences (over days to years) and the much longer-term evolution of the coast due to forecasted rising sea levels (decades).
The computer model uses past and present beach measurements, and data showing the physical properties of the coast, to forecast how they might evolve in the future and assess the resilience of our coastlines to erosion and flooding.
Unlike previous simple models of its kind that attempt forecasts on similar timescales, ForCE also considers other key factors like tidal, surge and global sea-level rise data to assess how beaches might be impacted by predicted climate change.
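For intuition, the sketch below shows a toy, equilibrium-style shoreline update driven by wave energy, tide/surge level, and a simple sea-level-rise recession term. The model form and every coefficient are illustrative assumptions only; this is not the published ForCE formulation.

```python
# Toy shoreline-evolution update (illustrative only; not the ForCE model).
# The shoreline relaxes toward a wave-driven equilibrium, with an extra
# recession term for sea-level rise and a setback during high water levels.

def step_shoreline(x, wave_energy, eq_energy, water_level, slr_rate,
                   k=0.05, beach_slope=0.02, dt=1.0):
    """Advance cross-shore position x (m) by one time step dt (days)."""
    dx_waves = -k * (wave_energy - eq_energy) * dt    # storms (E > E_eq) erode
    dx_slr = -(slr_rate / beach_slope) * dt           # Bruun-type retreat from sea-level rise
    dx_level = -0.1 * max(water_level, 0.0) * dt      # extra retreat during surge events
    return x + dx_waves + dx_slr + dx_level

x = 100.0                                             # hypothetical starting position (m)
for day in range(5):
    x = step_shoreline(x, wave_energy=1.5, eq_energy=1.0,
                       water_level=0.3, slr_rate=0.5 / 36500)  # ~0.5 m per century
    print(f"day {day + 1}: shoreline at {x:.2f} m")
```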
Beach sediments form our frontline of defense against coastal erosion and flooding, preventing damage to our valuable coastal infrastructure. So coastal managers are rightly concerned about monitoring the volume of beach sediment on our beaches.
The new ForCE model opens the door for managers to keep track of the 'health' of our beaches without leaving their offices, and to predict how this might change in a future of rising sea levels and changing waves.
Model predictions have been shown to be more than 80% accurate in current tests, based on measurements of beach change at Perranporth, on the north coast of Cornwall in South West England.
It has also been shown to accurately predict the formation and location of offshore sand bars in response to extreme storms, and how beaches recover in the months and years after storm events.
As such, researchers say it could provide an early warning for coastal erosion and potential overtopping, but its stability and efficiency suggest it could forecast coastal evolution over much longer timescales.
The study, published in Coastal Engineering, highlights that the increasing threats posed by sea-level rise and coastal squeeze have meant that tracking the morphological evolution of sedimentary coasts is of substantial and increasing societal importance.
Dr. Mark Davidson, Associate Professor in Coastal Processes, developed the ForCE model having previously pioneered a traffic light system based on the severity of approaching storms to highlight the level of action required to protect particular beaches.
He said: "Top level coastal managers around the world have recognized a real need to assess the resilience of our coastlines in a climate of changing waves and sea level. However, until now they have not had the essential tools that are required to make this assessment. We hope that our work with the ForCE model will be a significant step towards providing this new and essential capability."
The University of Plymouth is one of the world's leading authorities in coastal engineering and change in the face of extreme storms and sea-level rise.
Researchers from the University's Coastal Processes Research Group have examined their effects everywhere from the coasts of South West England to remote islands in the Pacific Ocean.
They have shown the winter storms of 2013/14 were the most energetic to hit the Atlantic coast of western Europe since records began in 1948, and demonstrated that five years after those storms, many beaches had still not fully recovered.
Researchers from the University of Plymouth have been carrying out beach measurements at Perranporth in North Cornwall for more than a decade. Recently, this has been done as part of the £4million BLUE-coast project, funded by the Natural Environment Research Council, which aims to address the importance of sediment budgets and their role in coastal recovery.
Surveys have shown that following extreme storms, such as those which hit the UK in 2013/14, beaches recovered to some degree in the summer months but that recovery was largely wiped out in the following winters. That has created a situation where high water shorelines are further landward at sites such as Perranporth.
Sea level is presently forecast to rise by about 0.5m over the next 100 years. However, there is large uncertainty attached to this and it could easily be more than 1m over the same time-frame. If the latter proves to be true, prominent structures on the coastline - such as the Watering Hole bar - will be under severe threat within the next 60 years.
Reference: "Forecasting coastal evolution on time-scales of days to decades" by Mark Davidson, 10 June 2021, Coastal Engineering . DOI: 10.1016/j.coastaleng.2021.103928 | An algorithm-based model developed by researchers at the University of Plymouth in the U.K. predicts the impact of storms and rising sea levels on coastlines with greater than 80% accuracy. The Forecasting Coastal Evolution (ForCE) model can predict the evolution of coastlines and assess their resilience to erosion and flooding using past and present beach measurements, data on coastlines' physical properties, and tidal, surge, and global sea-level rise data. The model can predict short-term impacts over days to years, as well as longer-term coastal evolution over decades. Said Plymouth's Mark Davidson, who developed the model, "Top level coastal managers around the world have recognized a real need to assess the resilience of our coastlines in a climate of changing waves and sea level. ...We hope that our work with the ForCE model will be a significant step towards providing this new and essential capability." | [] | [] | [] | scitechnews | None | None | None | None | An algorithm-based model developed by researchers at the University of Plymouth in the U.K. predicts the impact of storms and rising sea levels on coastlines with greater than 80% accuracy. The Forecasting Coastal Evolution (ForCE) model can predict the evolution of coastlines and assess their resilience to erosion and flooding using past and present beach measurements, data on coastlines' physical properties, and tidal, surge, and global sea-level rise data. The model can predict short-term impacts over days to years, as well as longer-term coastal evolution over decades. Said Plymouth's Mark Davidson, who developed the model, "Top level coastal managers around the world have recognized a real need to assess the resilience of our coastlines in a climate of changing waves and sea level. ...We hope that our work with the ForCE model will be a significant step towards providing this new and essential capability."
|||
12 | AI Algorithm to Assess Metastatic Potential in Skin Cancers | DALLAS - August 3, 2021 - Using artificial intelligence (AI), researchers from UT Southwestern have developed a way to accurately predict which skin cancers are highly metastatic. The findings , published as the July cover article of Cell Systems, show the potential for AI-based tools to revolutionize pathology for cancer and a variety of other diseases.
"We now have a general framework that allows us to take tissue samples and predict mechanisms inside cells that drive disease, mechanisms that are currently inaccessible in any other way," said study leader Gaudenz Danuser, Ph.D. , Professor and Chair of the Lyda Hill Department of Bioinformatics at UTSW.
AI technology has significantly advanced over the past several years, Dr. Danuser explained, with deep learning-based methods able to distinguish minute differences in images that are essentially invisible to the human eye. Researchers have proposed using this latent information to look for differences in disease characteristics that could offer insight on prognoses or guide treatments. However, he said, the differences distinguished by AI are generally not interpretable in terms of specific cellular characteristics - a drawback that has made AI a tough sell for clinical use.
To overcome this challenge, Dr. Danuser and his colleagues used AI to search for differences between images of melanoma cells with high and low metastatic potential - a characteristic that can mean life or death for patients with skin cancer - and then reverse-engineered their findings to figure out which features in these images were responsible for the differences.
Using tumor samples from seven patients and available information on their disease progression, including metastasis, the researchers took videos of about 12,000 random cells living in petri dishes, generating about 1,700,000 raw images. The researchers then used an AI algorithm to pull 56 different abstract numerical features from these images.
Dr. Danuser and his colleagues found one feature that was able to accurately discriminate between cells with high and low metastatic potential. By manipulating this abstract numerical feature, they produced artificial images that exaggerated visible characteristics inherent to metastasis that human eyes cannot detect, he added. The highly metastatic cells produced slightly more pseudopodial extensions - a type of fingerlike projection - and had increased light scattering, an effect that may be due to subtle rearrangements of cellular organelles.
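As a rough illustration of the kind of analysis described above - scoring each learned feature for how well it separates high- from low-metastatic cells - here is a hedged sketch using scikit-learn. The synthetic feature matrix, the labels, and the choice of ROC AUC as a discriminability score are assumptions for illustration, not the study's actual pipeline.

```python
# Sketch: rank 56 learned image features by how well each one alone
# separates high- vs. low-metastatic cells (illustrative, not the paper's code).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_cells, n_features = 12000, 56
features = rng.normal(size=(n_cells, n_features))   # stand-in for AI-derived features
labels = rng.integers(0, 2, size=n_cells)            # 1 = high metastatic potential (synthetic)

# Score each feature individually; an AUC far from 0.5 means it discriminates well.
auc_per_feature = np.array(
    [roc_auc_score(labels, features[:, j]) for j in range(n_features)]
)
best = int(np.argmax(np.abs(auc_per_feature - 0.5)))
print(f"Most discriminative feature: #{best}, AUC = {auc_per_feature[best]:.3f}")
```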
To further prove the utility of this tool, the researchers first classified the metastatic potential of cells from human melanomas that had been frozen and cultured in petri dishes for 30 years, and then implanted them into mice. Those predicted to be highly metastatic formed tumors that readily spread throughout the animals, while those predicted to have low metastatic potential spread little or not at all.
Dr. Danuser, a Professor of Cell Biology and member of the Harold C. Simmons Comprehensive Cancer Center , noted that this method needs further study before it becomes part of clinical care. But eventually, he added, it may be possible to use AI to distinguish important features of cancers and other diseases.
Dr. Danuser is the Patrick E. Haggerty Distinguished Chair in Basic Biomedical Science at UTSW.
Other UTSW researchers who contributed to this study include Assaf Zaritsky, Andrew R. Jamieson, Erik S. Welf, Andres Nevarez, Justin Cillay, Ugur Eskiocak, and Brandi L. Cantarel.
This study was funded by grants from the Cancer Prevention and Research Institute of Texas (CPRIT R160622), the National Institutes of Health (R35GM126428, K25CA204526), and the Israeli Council for Higher Education via the Data Science Research Center, Ben-Gurion University of the Negev, Israel.
About UT Southwestern Medical Center
UT Southwestern, one of the nation's premier academic medical centers, integrates pioneering biomedical research with exceptional clinical care and education. The institution's faculty has received six Nobel Prizes, and includes 25 members of the National Academy of Sciences, 16 members of the National Academy of Medicine, and 13 Howard Hughes Medical Institute Investigators. The full-time faculty of more than 2,800 is responsible for groundbreaking medical advances and is committed to translating science-driven research quickly to new clinical treatments. UT Southwestern physicians provide care in about 80 specialties to more than 117,000 hospitalized patients, more than 360,000 emergency room cases, and oversee nearly 3 million outpatient visits a year. | A new artificial intelligence (AI) algorithm can predict highly metastatic skin cancers. The University of Texas Southwestern Medical Center (UTSW) researchers who developed the algorithm used AI to identify differences between images of melanoma cells with high and low metastatic potential, then used reverse engineering to determine which visual features were associated with the difference. They generated 1.7 million raw images from videos of about 12,000 random cells from tumor samples from seven patients. The algorithm identified 56 different abstract numerical features from those images, which the researchers manipulated to generate images exaggerating visible characteristics inherent to metastasis. Said UTSW's Gaudenz Danuser, "We now have a general framework that allows us to take tissue samples and predict mechanisms inside cells that drive disease."
|||
13 | Do You Hear What I Hear? A Cyberattack. | Cybersecurity analysts deal with an enormous amount of data, especially when monitoring network traffic. If one were to print the data in text form, a single day's worth of network traffic may be akin to a thick phonebook. In other words, detecting an abnormality is like finding a needle in a haystack.
"It's an ocean of data," says Yang Cai , a senior systems scientist in CyLab. "The important patterns we need to see become buried by a lot of trivial or normal patterns."
Cai has been working for years to come up with ways to make abnormalities in network traffic easier to spot. A few years ago, he and his research group developed a data visualization tool that allowed one to see network traffic patterns, and now he has developed a way to hear them.
In a new study presented this week at the Conference on Applied Human Factors and Ergonomics, Cai and two co-authors show how cybersecurity data can be heard in the form of music. When there's a change in the network traffic, there is a change in the music.
"We wanted to articulate normal and abnormal patterns through music," Cai says. "The process of sonification - using audio to perceptualize data - is not new, but sonification to make data more appealing to the human ear is."
The researchers experimented with several different "sound mapping" algorithms, transforming numerical datasets into music with various melodies, harmonies, time signatures, and tempos. For example, the researchers assigned specific notes to the 10 digits that can appear in any number found in the data: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. To represent the third and fourth digits of the mathematical constant Pi - 4 and 1 - they modified the time signature of one measure to 4/4 and the following measure to 1/4.
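To make the idea concrete, here is a minimal sketch of one way such a digit-to-music scheme could be implemented. The note assignments, the rule tying a digit to the measure length, and the sample input are illustrative assumptions, not the mapping the CMU team actually used.

```python
# Illustrative digit-to-music mapping (not the authors' actual scheme).
# Each digit 0-9 is assigned a pitch; a digit can also set the number of
# beats in the next measure, echoing the Pi example (4/4 followed by 1/4).

DIGIT_TO_NOTE = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5", "D5", "E5"]

def sonify_digits(digits):
    """Turn a sequence of digits into (note, beats_in_measure) events."""
    events = []
    for d in digits:
        note = DIGIT_TO_NOTE[d]   # pitch encodes the digit's value
        beats = max(d, 1)         # digit also sets the measure length (d/4 time)
        events.append((note, beats))
    return events

if __name__ == "__main__":
    # e.g. the first digits of Pi, 3.14159 -> [3, 1, 4, 1, 5, 9]
    print(sonify_digits([3, 1, 4, 1, 5, 9]))
```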
While this all may sound fairly complicated, one doesn't need to be a trained musician to be able to hear these changes in the music, the researchers found. The team created music using network traffic data from a real malware distribution network and presented the music to non-musicians. They found that non-musicians were able to accurately recognize changes in pitch when played on different instruments.
"We are not only making music, but turning abstract data into something that humans can process," the authors write in their study.
Cai says his vision is that someday, an analyst will be able to explore cybersecurity data with virtual reality goggles presenting the visualization of the network space. When the analyst moves closer to an individual data point, or a cluster of data, music representing that data would gradually become more audible.
"The idea is to use all of humans' sensory channels to explore this cyber analytical space," Cai says.
While Cai himself is not a trained musician, his two co-authors on the study are. Jakub Polaczyk and Katelyn Croft were once students in Carnegie Mellon University's College of Fine Arts. Polaczyk obtained his Artist Diploma in Composition in 2013 and is currently an award-winning composer based in New York City. Croft obtained her master's degree in harp performance in 2020 and is currently in Taiwan studying the influence of Western music on Asian music.
Before graduating in 2020, Croft worked in Cai's lab on a virtual recital project. Polaczyk took Cai's University-wide course, "Creativity," in 2011 and the two have collaborated ever since.
"It has been a very nice collaboration," Cai says. "This kind of cross-disciplinary collaboration really exemplifies CMU's strengths."
Paper reference
Compositional Sonification of Cybersecurity Data in a Baroque Style | Carnegie Mellon University's Yang Cai and colleagues have designed a method of making abnormal network traffic audible by rendering cybersecurity data musically. The researchers explored several sound mapping algorithms, converting numerical datasets into music with diverse melodies, harmonies, time signatures, and tempos. They produced music using network traffic data from an actual malware distribution network, and presented it to non-musicians, who could accurately identify pitch shifts when played on different instruments. Said the researchers, "We are not only making music, but turning abstract data into something that humans can process." Said Cai, "The process of sonification - using audio to perceptualize data - is not new, but sonification to make data more appealing to the human ear is."
|||
15 | LLNL Optimizes Flow-Through Electrodes for Electrochemical Reactors with 3D Printing | To take advantage of the growing abundance and cheaper costs of renewable energy, Lawrence Livermore National Laboratory (LLNL) scientists and engineers are 3D printing flow-through electrodes (FTEs), core components of electrochemical reactors used for converting CO 2 and other molecules to useful products.
As described in a paper published by the Proceedings of the National Academy of Sciences , LLNL engineers for the first time 3D-printed carbon FTEs - porous electrodes responsible for the reactions in the reactors - from graphene aerogels. By capitalizing on the design freedom afforded by 3D printing, researchers demonstrated they could tailor the flow in FTEs, dramatically improving mass transfer - the transport of liquid or gas reactants through the electrodes and onto the reactive surfaces. The work opens the door to establishing 3D printing as a "viable, versatile rapid-prototyping method" for flow-through electrodes and as a promising pathway to maximizing reactor performance, according to researchers.
"At LLNL we are pioneering the use of three-dimensional reactors with precise control over the local reaction environment," said LLNL engineer Victor Beck, the paper's lead author. "Novel, high-performance electrodes will be essential components of next-generation electrochemical reactor architectures. This advancement demonstrates how we can leverage the control that 3D printing capabilities offer over the electrode structure to engineer the local fluid flow and induce complex, inertial flow patterns that improve reactor performance."
Through 3D printing, researchers demonstrated that by controlling the electrodes' flow channel geometry, they could optimize electrochemical reactions while minimizing the tradeoffs seen in FTEs made through traditional means. Typical materials used in FTEs are "disordered" media, such as carbon fiber-based foams or felts, limiting opportunities for engineering their microstructure. While cheap to produce, the randomly ordered materials suffer from uneven flow and mass transport distribution, researchers explained.
"By 3D printing advanced materials such as carbon aerogels, it is possible to engineer macroporous networks in these material without compromising the physical properties such as electrical conductivity and surface area," said co-author Swetha Chandrasekaran.
The team reported the FTEs, printed in lattice structures through a direct ink writing method, enhanced mass transfer over previously reported 3D printed efforts by one to two orders of magnitude, and achieved performance on par with conventional materials.
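Mass transfer in flow-through electrodes is commonly quantified from the limiting current using the standard textbook relation k_m = I_lim / (n F A C). The snippet below applies that generic relation to compare two hypothetical electrodes; the numerical values are made up for illustration and are not measurements from the study.

```python
# Estimate mass-transfer coefficients from limiting current
# (generic relation k_m = I_lim / (n * F * A * C); example values are hypothetical).
F = 96485.0  # Faraday constant, C/mol

def mass_transfer_coefficient(i_lim, n, area, conc):
    """k_m in m/s from limiting current (A), electrons n, area (m^2), bulk conc (mol/m^3)."""
    return i_lim / (n * F * area * conc)

k_disordered = mass_transfer_coefficient(i_lim=0.02, n=1, area=1e-4, conc=100.0)
k_printed    = mass_transfer_coefficient(i_lim=0.60, n=1, area=1e-4, conc=100.0)
print(f"disordered felt:    k_m = {k_disordered:.2e} m/s")
print(f"3D-printed lattice: k_m = {k_printed:.2e} m/s "
      f"({k_printed / k_disordered:.0f}x improvement)")
```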
Because the commercial viability and widespread adoption of electrochemical reactors are dependent on attaining greater mass transfer, the ability to engineer flow in FTEs will make the technology a much more attractive option for helping solve the global energy crisis, researchers said. Improving the performance and predictability of 3D-printed electrodes also makes them suitable for use in scaled-up reactors for high-efficiency electrochemical converters.
"Gaining fine control over electrode geometries will enable advanced electrochemical reactor engineering that wasn't possible with previous generation electrode materials," said co-author Anna Ivanovskaya. "Engineers will be able to design and manufacture structures optimized for specific processes. Potentially, with development of manufacturing technology, 3D-printed electrodes may replace conventional disordered electrodes for both liquid and gas type reactors."
LLNL scientists and engineers are currently exploring use of electrochemical reactors across a range of applications, including converting CO 2 to useful fuels and polymers and electrochemical energy storage to enable further deployment of electricity from carbon-free and renewable sources. Researchers said the promising results will allow them to rapidly explore the impact of engineered electrode architectures without expensive industrialized manufacturing techniques.
Work is ongoing at LLNL to produce more robust electrodes and reactor components at higher resolutions through light-based 3D polymer printing techniques such as projection micro-stereolithography and two-photon lithography, followed by metallization. The team also will leverage high performance computing to design better performing structures and continue deploying the 3D-printed electrodes in larger and more complex reactors and full electrochemical cells.
Funding for the effort came from the Laboratory Directed Research and Development program. Co-authors included co-principal investigators Sarah Baker, Eric Duoss and Marcus Worsley and LLNL scientist Jean-Baptiste Forien. | Lawrence Livermore National Laboratory (LLNL) scientists three-dimensionally (3D) printed carbon flow-through electrodes (FTEs) for electrochemical reactors from graphene aerogels. The researchers demonstrated the ability to customize FTE flows and drastically enhance reactant transfer from electrodes onto reactive surfaces, optimizing electrochemical reactions. Said LLNL's Swetha Chandrasekaran, "By 3D-printing advanced materials such as carbon aerogels, it is possible to engineer macroporous networks in these materials without compromising the physical properties such as electrical conductivity and surface area." LLNL's Anna Ivanovskaya said the method should enable engineers "to design and manufacture structures optimized for specific processes."
|||
16 | Scientists Share Wiring Diagram Tracing Connections for 200,000 Mouse Brain Cells | Neuroscientists from Seattle's Allen Institute and other research institutions have wrapped up a five-year, multimillion-dollar project with the release of a high-resolution 3-D map showing the connections between 200,000 cells in a clump of mouse brain about as big as a grain of sand.
The data collection, which is now publicly available online, was developed as part of the Machine Intelligence From Cortical Networks program, or MICrONS for short. MICrONS was funded in 2016 with $100 million in federal grants to the Allen Institute and its partners from the Intelligence Advanced Research Projects Activity, the U.S. intelligence community's equivalent of the Pentagon's DARPA think tank.
MICrONS is meant to clear the way for reverse-engineering the structure of the brain to help computer scientists develop more human-like machine learning systems, but the database is likely to benefit biomedical researchers as well.
"We're basically treating the brain circuit as a computer, and we asked three questions: What does it do? How is it wired up? What is the program?" R. Clay Reid, senior investigator at the Allen Institute and one of MICrONS' lead scientists, said today in a news release. "Experiments were done to literally see the neurons' activity, to watch them compute."
The newly released data set takes in 120,000 neurons plus roughly 80,000 brain cells of other types, all contained in a cubic millimeter of the mouse brain's visual neocortex. In addition to mapping the cells in physical space, the data set traces the functional connections involving more than 523 million synapses.
Researchers from the Allen Institute were joined in the project by colleagues from Princeton University, Baylor College of Medicine and other institutions.
Baylor's team captured the patterns of neural activity of a mouse as it viewed images or movies of natural scenes. After those experiments, the Allen Institute team preserved the target sample of brain tissue, cut it into more than 27,000 thin slices, and captured 150 million images of those slices using electron microscopes.
Princeton's team then used machine learning techniques to turn those images into high-resolution maps of each cell and its internal components.
"The reconstructions that we're presenting today let us see the elements of the neural circuit: the brain cells and the wiring, with the ability to follow the wires to map the connections between cells," Reid said. "The final step is to interpret this network, at which point we may be able to say we can read the brain's program."
The resulting insights could help computer scientists design better hardware for AI applications, and they could also help medical researchers figure out treatments for brain disorders that involve alterations in cortical wiring.
"Our five-year mission had an ambitious goal that many regarded as unattainable," said H. Sebastian Seung, a professor of neuroscience and computer science at Princeton. "Today, we have been rewarded by breathtaking new vistas of the mammalian cortex. As we transition to a new phase of discovery, we are building a community of researchers to use the data in new ways."
The data set is hosted online by the Brain Observatory Storage Service & Database , or BossDB, and Amazon Web Services is making it freely accessible on the cloud through its Open Data Sponsorship Program . Google contributed storage and computing engine support through Google Cloud, and the database makes use of Neuroglancer , an open-source visualization tool developed by Google Research.
MICrONS' emphasis on open access is in keeping with the principles that Microsoft co-founder Paul Allen championed when he founded the Allen Institute in 2003. The Allen Institute for Brain Science is the institute's oldest and largest division, and since Allen's death in 2018, it has sharpened its focus on studies of neural circuitry and brain cell types. | A multi-institutional team of neuroscientists spent five years and $100 million developing a high-resolution model detailing the connections between 200,000 mouse brain cells. Created under the federally-funded Machine Intelligence From Cortical Networks (MICrONS) program, the dataset encompasses 120,000 neurons and about 80,000 brain cells of other types in a cubic millimeter of a mouse brain's visual neocortex. The researchers recorded neural activity patterns as the mouse watched images or films of natural scenes, then captured 150 million images of fractionated brain tissue using electron microscopes. Each cell and its internal structure were mapped using machine learning techniques. R. Clay Reid at Seattle's Allen Institute for Brain Science said, "The final step is to interpret this network, at which point we may be able to say we can read the brain's program."
|||
17 | Census Data Change to Protect Privacy Rattles Researchers, Minority Groups | A plan to protect the confidentiality of Americans' responses to the 2020 census by injecting small, calculated distortions into the results is raising concerns that it will erode their usability for research and distribution of state and federal funds.
The Census Bureau is due to release the first major results of the decennial count in mid-August. They will offer the first detailed look at the population and racial makeup of thousands of counties and cities, as well as tribal areas, neighborhoods, school districts and smaller areas that will be used to redraw congressional, legislative and local districts to balance their populations. | The U.S. Census Bureau will use a complex algorithm to adjust 2020 Census statistics to prevent the data from being recombined to disclose information about individual respondents. The bureau's Ron Jarmin said it will use differential privacy, an approach it has long employed in some fashion, which involves adding statistical noise to data. Small random numbers, both positive and negative, will be used to adjust most of the Census totals, with inconsistent subtotals squared up. The Bureau indicated that for most groups and places, this will result in fairly accurate totals, although distortion is likely to be higher for smaller groups and areas like census blocks. This has raised concerns among local officials, as population-based formulas are used to allocate billions of dollars in federal and state aid. University of Minnesota researchers said after a fifth test of the method that "major discrepancies remain for minority populations."
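To make the noise-and-reconcile idea concrete, here is a minimal Python sketch of the general approach described above: add small zero-mean random numbers to block-level counts, then square up the blocks against the published county total. It is only an illustration, not the Census Bureau's actual disclosure-avoidance algorithm; the Laplace noise, its scale, and the simple proportional reconciliation step are assumptions chosen for clarity.

import numpy as np

rng = np.random.default_rng(seed=0)

# Confidential population counts for four census blocks in one county.
true_blocks = np.array([1204, 387, 96, 2541], dtype=float)

# Add small positive or negative random numbers to each block count.
noisy_blocks = true_blocks + rng.laplace(loc=0.0, scale=5.0, size=true_blocks.size)

# The county total gets its own (smaller) dose of noise.
noisy_total = true_blocks.sum() + rng.laplace(loc=0.0, scale=1.0)

# "Square up" the subtotals: rescale the blocks to match the noisy total,
# then round to whole, non-negative people.
reconciled = noisy_blocks * (noisy_total / noisy_blocks.sum())
published = np.clip(np.rint(reconciled), 0, None).astype(int)

print("true:     ", true_blocks.astype(int))
print("published:", published, "sum =", published.sum())

Because the noise has a roughly fixed size, it matters little for a county of millions but can noticeably distort a block of a few dozen people, which is why the concerns above center on small areas and small population groups.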
18 | Robot Apocalypse Hard to Find in America's Small, Mid-Sized Factories | CLEVELAND, Aug 2 (Reuters) - When researchers from the Massachusetts Institute of Technology visited Rich Gent's machine shop here to see how automation was spreading to America's small and medium-sized factories, they expected to find robots.
They did not.
"In big factories - when you're making the same thing over and over, day after day, robots make total sense," said Gent, who with his brother runs Gent Machine Co, a 55-employee company founded by his great-grandfather, "but not for us."
Even as some analysts warn that robots are about to displace millions of blue-collar jobs in the U.S. industrial heartland, the reality at smaller operations like Gent is far different.
Among the 34 companies with 500 employees or fewer in Ohio, Massachusetts and Arizona that the MIT researchers visited in their project, only one had bought robots in large numbers in the last five years - and that was an Ohio company that had been acquired by a Japanese multinational which pumped in money for the new automation.
In all the other Ohio plants they studied, they found only a single robot purchased in the last five years. In Massachusetts they found a company that had bought two, while in Arizona they found three companies that had added a handful.
Anna Waldman-Brown, a PhD student who worked on the report with MIT Professor Suzanne Berger, said she was "surprised" by the lack of the machines.
"We had a roboticist on our research team, because we expected to find robots," she said. Instead, at one company, she said managers showed them a computer they had recently installed in a corner of the factory - which allowed workers to note their daily production figures on a spreadsheet, rather than jot down that information in paper notebooks.
"The bulk of the machines we saw were from before the 1990s," she said, adding that many had installed new computer controllers to upgrade the older machines - a common practice in these tight-fisted operations. Most had also bought other types of advanced machinery - such as computer-guided cutting machines and inspection systems. But not robots.
Robots are just one type of factory automation, which encompasses a wide range of machines used to move and manufacture goods - including conveyor belts and labeling machines.
Nick Pinkston, CEO of Volition, a San Francisco company that makes software used by robotics engineers to automate factories, said smaller firms lack the cash to take risks on new robots. "They think of capital payback periods of as little as three months, or six - and it all depends on the contract" with the consumer who is ordering parts to be made by the machine.
This is bad news for the U.S. economy. Automation is a key to boosting productivity, which keeps U.S. operations competitive. Since 2005, U.S. labor productivity has grown at an average annual rate of only 1.3% - below the post-World War 2 trend of well over 2% - and the average has dipped even more since 2010.
Researchers have found that larger firms are more productive on average and pay higher wages than their smaller counterparts, a divergence attributed at least in part to the ability of industry giants to invest heavily in cutting-edge technologies.
Yet small and medium-sized manufacturers remain a backbone of U.S. industry, often churning out parts needed to keep assembly lines rolling at big manufacturers. If they fall behind on technology, it could weigh on the entire sector. These small and medium-sized manufacturers are also a key source of relatively good jobs - accounting for 43% of all manufacturing workers.
LIMITATIONS OF ROBOTS
One barrier for smaller companies is finding the skilled workers needed to run robots. "There's a lot of amazing software that's making robots easier to program and repurpose - but not nearly enough people to do that work," said Ryan Kelly, who heads a group that promotes new technology to manufacturers inside the Association for Manufacturing Technology.
To be sure, robots are spreading to more corners of the industrial economy, just not as quickly as the MIT researchers and many others expected. Last year, for the first time, most of the robots ordered by companies in North America were not destined for automotive factories - a shift partly attributed to the development of cheaper and more flexible machines. Those are the type of machines especially needed in smaller operations.
And it seems certain robots will take over more jobs as they become more capable and affordable. One example: their rapid spread in e-commerce warehouses in recent years.
Carmakers and other big companies still buy most robots, said Jeff Burnstein, president of the Association for Advancing Automation, a trade group in Ann Arbor, Michigan. "But there's a lot more in small and medium-size companies than ever before."
Michael Tamasi, owner of AccuRounds in Avon, Massachusetts, is a small manufacturer who recently bought a robot attached to a computer-controlled cutting machine.
"We're getting another machine delivered in September - and hope to attach a robot arm to that one to load and unload it," he said. But there are some tasks where the technology remains too rigid or simply not capable of getting the job done.
For instance, Tamasi recently looked at buying a robot to polish metal parts. But the complexity of the shape made it impossible. "And it was kind of slow," he said. "When you think of robots, you think better, faster, cheaper - but this was kind of the opposite." And he still needed a worker to load and unload the machine.
For a company like Cleveland's Gent, which makes parts for things like refrigerators, auto airbags and hydraulic pumps, the main barrier to getting robots is the cost and uncertainty over whether the investment will pay off, which in turn hinges on the plans and attitudes of customers.
And big customers can be fickle. Eight years ago, Gent landed a contract to supply fasteners used to put together battery packs for Tesla Inc (TSLA.O) - and the electric-car maker soon became its largest customer. But Gent never got assurances from Tesla that the business would continue for long enough to justify buying the robots it could have used to make the fasteners.
"If we'd known Tesla would go on that long, we definitely would have automated our assembly process," said Gent, who said they looked at automating the line twice over the years.
But he does not regret his caution. Earlier this year, Tesla notified Gent that it was pulling the business. "We're not bitter," said Gent. "It's just how it works."
Gent does spend heavily on new equipment, relative to its small size - about $500,000 a year from 2011 to 2019. One purchase was a $1.6 million computer-controlled cutting machine that cut the cycle time to make the Tesla parts down from 38 seconds to 7 seconds - a major gain in productivity that flowed straight to Gent's bottom line.
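As a rough back-of-the-envelope check on how large that gain is (a sketch that assumes the machine runs continuously and the cycle time is the only bottleneck):

old_cycle_s, new_cycle_s = 38.0, 7.0   # cycle times reported in the article

old_rate = 3600 / old_cycle_s          # about 95 parts per hour
new_rate = 3600 / new_cycle_s          # about 514 parts per hour

print(f"{old_rate:.0f} -> {new_rate:.0f} parts/hour, "
      f"roughly {old_cycle_s / new_cycle_s:.1f}x the output per machine-hour")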
"We found another part to make on the machine," said Gent.
Although analysts have warned that millions of blue-collar jobs in the U.S. industrial heartland will soon be displaced by robots, that is not yet the case at small and medium-sized factories. Massachusetts Institute of Technology (MIT) researchers studied 34 companies with 500 or fewer employees in Ohio, Massachusetts, and Arizona, and found just one had acquired a significant number of robots in the past five years. MIT's Anna Waldman-Brown said, "The bulk of the machines we saw were from before the 1990s," and many older machines were upgraded with new computer controllers. Other companies have purchased advanced equipment like computer-guided cutting machines and inspection systems, but not robots, the researchers found, because smaller companies lack the money for robots or the skilled workers necessary to operate them.
19 | Insulator-Conductor Transition Points Toward Ultra-Efficient Computing | For the first time, researchers have been able to image how atoms in a computer switch move around on fast timescales while it turns on and off. This ability to peer into the atomic world may hold the key to a new kind of switch for computers that will speed up computing and reduce the energy required for computer processing.
The research team, made up of scientists from the Department of Energy's SLAC National Accelerator Laboratory, Stanford University, Hewlett Packard Labs, Penn State University and Purdue University, was able to capture snapshots of atomic motion in a device while it was switching. The researchers believe the insights this technique generates into how switches operate will not only improve future switch technology, but also help resolve the ultimate speed and energy-consumption limits for computing devices.
Switches in computer chips control the flow of electrons. By applying an electrical charge to the switch and then removing it, the switch can be toggled back and forth between an insulator that blocks the flow of electrons and a conductor that allows electrons to flow. This on/off behavior is the basis for the "0-1" of binary computer logic.
While studying a switch made from vanadium dioxide, the researchers used their imaging technique to detect a short-lived transition stage that appears as the material goes from an insulator to a conductor and then back again.
"In this transient state , the structure remains the same as in the starting insulating state, but there is electronic reorganization which causes it to become metallic," explained Aditya Sood , a postdoctoral researcher at SLAC National Lab & Stanford University. "We infer this from subtle signatures in how the electron diffraction pattern changes during this electrically-driven transition."
In order to observe this transient state, the researchers had to develop a real-time imaging technology based on electron diffraction. Electron diffraction by itself has existed for many decades and is used routinely in transmission electron microscopes (TEMs). But in these previous kinds of applications, electron imaging was used just to study a material's structure in a static way, or to probe its evolution on slow timescales.
While ultrafast electron diffraction (UED) has been developed to make time-resolved measurements of atomic structure, previous implementations of this technique relied on optical pulses to impulsively excite (or "pump") materials and image the resulting atomic motions.
What the scientists did for the first time in this research was create an ultrafast technique in which electrical (not optical) pulses provide the impulsive excitation. This makes it possible to electrically pulse a device and watch the ensuing atomic-scale motions on fast timescales (down to nanoseconds) while simultaneously measuring the current through the device.
[Image caption: The team used electrical pulses to turn their custom-made switches on and off several times, timed to arrive just before the electron pulses produced by SLAC's ultrafast electron diffraction source MeV-UED, which captured the atomic motions. Credit: Greg Stewart/SLAC National Accelerator Laboratory]
"We now have a direct way to correlate very fast atomic movements at the angstrom scale with electronic flow across device length scales," said Sood.
To do this, the researchers built a new apparatus that integrated an electronic device to which they could apply fast electrical bias pulses, such that each electrical bias pulse was followed by a "probing" electron pulse (which creates a diffraction pattern, telling us about where the atoms are) with a controllable time delay.
"By repeating this many times, each time changing the time delay, we could effectively construct a movie of the atomic movements during and after electrical biasing," explained Sood.
Additionally, the researchers built an electrical circuit around the device so they could concurrently measure the current flowing through it during the transient switching process. While custom-made vanadium-dioxide-based switches were fabricated for this research, Sood says the technique could work on any kind of switch, as long as the switch is 100 nanometers or thinner so that electrons can be transmitted through it.
"It would be interesting to see if the multi-stage, transient switching phenomenon we observe in our vanadium-dioxide-based devices is found more broadly across the solid-state device landscape," said Sood. "We are thrilled by the prospect of looking at some of the emerging memory and logic technologies, where for the first time, we can visualize ultrafast atomic motions occurring during switching."
Aaron Lindenberg, a professor in the Department of Materials Science and Engineering at Stanford and a collaborator with Sood on this work, said, "More generally, this work also opens up new possibilities for using electric fields to synthesize and stabilize new materials with potentially useful functional properties."
The group's research was published in a recent issue of the journal Science. | A team of researchers has imaged the movement of atoms in a computer switch turning on and off in real time, which could help lead to super-efficient computing. Researchers at the U.S. Department of Energy's SLAC National Accelerator Laboratory, Stanford University, Hewlett Packard Labs, Pennsylvania State University, and Purdue University used the method to detect a short-lived transition stage between insulator-conductor flipping in a vanadium dioxide switch. The ultrafast electron diffraction technique uses electrical rather than optical pulses to supply the impulsive atomic excitation, exposing atomic-scale motions on fast timescales and measuring current through the device. Stanford's Aditya Sood said, "We now have a direct way to correlate very fast atomic movements at the angstrom scale with electronic flow across device length scales."
20 | AI Carpenter Can Recreate Furniture From Photos | An algorithm developed by University of Washington (UW) researchers can render photos of wooden objects into three-dimensional (3D) models with enough detail to be replicated by carpenters. The researchers factored in the geometric limitations of flat sheets of wood and how wooden parts can interlock. They captured photos of wooden items with a smartphone, and the algorithm generated accurate plans for their construction after less than 10 minutes of processing. Said UW's James Noeckel, "It doesn't really require that you observe the object completely because we make these assumptions about how objects are fabricated. We don't need to take pictures of every single surface, which is something you would need for a traditional 3D reconstruction algorithm to get complete shapes."
21 | Developers Reveal Programming Languages They Love, Dread | Programmer online community Stack Overflow's 2021 survey of 83,439 software developers in 181 countries found Mozilla's Rust to be the "most loved" language: 86.69% of respondents who worked with Rust in the past year said they want to keep working with it next year. Rust is popular for systems programming, and is under consideration for Linux kernel development, partly because it can help remove memory-related security flaws. Though deemed most loved, Rust was nominated to the survey by just 5,044 developers, while 18,711 respondents nominated Microsoft's TypeScript, the third most "loved" language; TypeScript compiles into JavaScript and helps developers more efficiently program large front-end Web applications. More developers dreaded (66%) than loved (39.56%) the widely-used C language, while Java also had fewer champions (47%) than detractors who dread using it (52.85%).
22 | Apps That Are Redefining Accessibility | Some estimate less than 10% of websites are accessible, meaning they provide assistance to people with visual disabilities in accessing their content. Some companies are tackling the issue by rolling out apps that can be used by anyone, regardless of visual capabilities. One example is Finnish developer Ilkka Pirttimaa, whose BlindSquare app incorporates Open Street Map and Foursquare data to help the visually impaired navigate streets; the app also integrates with ride-hailing apps like Uber. The Be My Eyes app connects visually impaired individuals to sighted volunteers via live video calls for assistance with everyday tasks, while the AccessNow app and website map and review locations on their accessibility. AccessNow's Maayan Ziv said, "Accessibility is one more way in which you can invite people to be part of something, and it really does touch every kind of industry."
23 | Security Bug Affects Nearly All Hospitals in North America | Researchers from the IoT security firm Armis have discovered nine critical vulnerabilities in the Nexus Control Panel, which powers all current models of Translogic pneumatic tube system (PTS) stations made by Swisslog Healthcare.
The vulnerabilities have been given the name PwnedPiper and are particularly concerning because the Translogic PTS system is used in 3,000 hospitals worldwide, including more than 80 percent of major hospitals in North America. The system delivers medications, blood products and various lab samples across multiple departments at the hospitals where it is installed.
The PwnedPiper vulnerabilities can be exploited by an unauthenticated hacker to take over PTS stations and gain full control over a target hospital's tube network. With this control, cybercriminals could launch attacks ranging from denial-of-service and ransomware to full-blown man-in-the-middle (MITM) attacks that can alter the paths of the network's carriers to deliberately sabotage hospitals.
Despite the prevalence of modern PTS systems that are IP-connected and found in many hospitals, the security of these systems has never been thoroughly analyzed or researched until now.
Of the nine PwnedPiper vulnerabilities discovered by Armis, five can be used to achieve remote code execution, gain access to a hospital's network and take over Nexus stations.
By compromising a Nexus station, an attacker can use it for reconnaissance to harvest data from the station including RFID credentials of employees that use the PTS system, details about the functions or locations of each system and gain an understanding of the physical layout of a hospital's PTS network. From here, an attacker can take over all Nexus stations in a hospital's tube network and then hold them hostage in a ransomware attack.
Ben Seri, VP of Research at Armis, provided further insight in a press release into how the company worked with Swisslog to patch the PwnedPiper vulnerabilities it discovered, saying:
"Armis disclosed the vulnerabilities to Swisslog on May 1, 2021, and has been working with the manufacturer to test the available patch and ensure proper security measures will be provided to customers. With so many hospitals reliant on this technology we've worked diligently to address these vulnerabilities to increase cyber resiliency in these healthcare environments, where lives are on the line."
Armis will present its research on PwnedPiper at this year's Black Hat USA security conference and as of now, only one of the nine vulnerabilities remains unpatched. | Researchers at security firm Armis identified nine critical vulnerabilities in the Nexus Control Panel that powers all current models of Swisslog Healthcare's Translogic pneumatic tube system (PTS) stations. The Translogic PTS system is used in 3,000 hospitals worldwide and 80% of major hospitals in North America to deliver medications, blood products, and lab samples across multiple hospital departments. Hackers can exploit the vulnerabilities, dubbed PwnedPiper, to gain control over a hospital's pneumatic tube network, with the potential to launch ransomware attacks. Armis' Ben Seri said his firm had told Swisslog of the vulnerabilities at the beginning of May, "and has been working with the manufacturer to test the available patch and ensure proper security measures will be provided to customers."
24 | Platform Teaches Nonexperts to Use ML | Machine-learning algorithms are used to find patterns in data that humans wouldn't otherwise notice, and are being deployed to help inform decisions big and small - from COVID-19 vaccination development to Netflix recommendations.
New award-winning research from the Cornell Ann S. Bowers College of Computing and Information Science explores how to help nonexperts effectively, efficiently and ethically use machine-learning algorithms to better enable industries beyond the computing field to harness the power of AI.
"We don't know much about how nonexperts in machine learning come to learn algorithmic tools," said Swati Mishra, a Ph.D. student in the field of information science. "The reason is that there's a hype that's developed that suggests machine learning is for the ordained."
Mishra is lead author of "Designing Interactive Transfer Learning Tools for ML Non-Experts," which received a Best Paper Award at the annual ACM CHI Virtual Conference on Human Factors in Computing Systems, held in May.
As machine learning has entered fields and industries traditionally outside of computing, the need for research and effective, accessible tools to enable new users in leveraging artificial intelligence is unprecedented, Mishra said.
Existing research into these interactive machine-learning systems has mostly focused on understanding the users and the challenges they face when navigating the tools. Mishra's latest research - including the development of her own interactive machine-learning platform - breaks fresh ground by investigating the inverse: How to better design the system so that users with limited algorithmic expertise but vast domain expertise can learn to integrate preexisting models into their own work.
"When you do a task, you know what parts need manual fixing and what needs automation," said Mishra, a 2021-2022 Bloomberg Data Science Ph.D. fellow. "If we design machine-learning tools correctly and give enough agency to people to use them, we can ensure their knowledge gets integrated into the machine-learning model."
Mishra takes an unconventional approach with this research by turning to a complex process called "transfer learning" as a jumping-off point to initiate nonexperts into machine learning. Transfer learning is a high-level and powerful machine-learning technique typically reserved for experts, wherein users repurpose and tweak existing, pretrained machine-learning models for new tasks.
The technique alleviates the need to build a model from scratch, which requires lots of training data, allowing the user to repurpose a model trained to identify images of dogs, say, into a model that can identify cats or, with the right expertise, even skin cancers.
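For readers who have not seen it, the "dogs to cats" repurposing described above looks roughly like the following in PyTorch: keep a backbone pretrained on a large image dataset frozen and train only a small new output layer. This is a generic sketch of transfer learning, not code from the CHI paper or from Mishra's platform, and the two-class head and dummy batch are assumptions made purely for illustration.

import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet; it already encodes generic visual features.
# (Older torchvision versions use models.resnet18(pretrained=True) instead.)
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained weights so they are reused as-is.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer with a new head for the new task.
num_new_classes = 2  # e.g., "cat" vs. "not cat" -- assumed for the example
backbone.fc = nn.Linear(backbone.fc.in_features, num_new_classes)

# Only the new head is trained, so far less labeled data and compute are needed.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for real labeled photos.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_new_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()

The appeal for novices is that the hard part, learning general visual features, is inherited from the pretrained model; only the small task-specific piece has to be trained and understood.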
"By intentionally focusing on appropriating existing models into new tasks, Swati's work helps novices not only use machine learning to solve complex tasks, but also take advantage of machine-learning experts' continuing developments," said Jeff Rzeszotarski , assistant professor in the Department of Information Science and the paper's senior author. "While our eventual goal is to help novices become advanced machine-learning users, providing some 'training wheels' through transfer learning can help novices immediately employ machine learning for their own tasks."
Mishra's research exposes transfer learning's inner computational workings through an interactive platform so nonexperts can better understand how machines crunch datasets and make decisions. Through a corresponding lab study with people with no background in machine-learning development, Mishra was able to pinpoint precisely where beginners lost their way, what their rationales were for making certain tweaks to the model and what approaches were most successful or unsuccessful.
In the end, the duo found participating nonexperts were able to successfully use transfer learning and alter existing models for their own purposes. However, researchers discovered that inaccurate perceptions of machine intelligence frequently slowed learning among nonexperts. Machines don't learn like humans do, Mishra said.
"We're used to a human-like learning style, and intuitively we tend to employ strategies that are familiar to us," she said. "If the tools do not explicitly convey this difference, the machines may never really learn. We as researchers and designers have to mitigate user perceptions of what machine learning is. Any interactive tool must help us manage our expectations."
Lou DiPietro is a communications specialist for the Department of Information Science. | An interactive machine learning (ML) platform developed by Cornell University scientists is designed to train nonexperts to use algorithms effectively, efficiently, and ethically. Cornell's Swati Mishra said, "If we design machine learning tools correctly and give enough agency to people to use them, we can ensure their knowledge gets integrated into the machine learning model." Said Cornell's Jeff Rzeszotarski, "While our eventual goal is to help novices become advanced machine-learning users, providing some 'training wheels' through transfer learning can help novices immediately employ machine learning for their own tasks." Added Mishra, "We as researchers and designers have to mitigate user perceptions of what machine learning is. Any interactive tool must help us manage our expectations."
25 | Robotic Police Dogs: Useful Hounds or Dehumanizing Machines? | HONOLULU (AP) - If you're homeless and looking for temporary shelter in Hawaii's capital, expect a visit from a robotic police dog that will scan your eye to make sure you don't have a fever.
That's just one of the ways public safety agencies are starting to use Spot, the best-known of a new commercial category of robots that trot around with animal-like agility.
The handful of police officials experimenting with the four-legged machines say they're just another tool, like existing drones and simple wheeled robots, to keep emergency responders out of harm's way as they scout for dangers. But privacy watchdogs - the human kind - warn that police are secretly rushing to buy the robots without setting safeguards against aggressive, invasive or dehumanizing uses.
In Honolulu, the police department spent about $150,000 in federal pandemic relief money to buy their Spot from robotics firm Boston Dynamics for use at a government-run tent city near the airport.
"Because these people are houseless it's considered OK to do that," said Jongwook Kim, legal director at the American Civil Liberties Union of Hawaii. "At some point it will come out again for some different use after the pandemic is over."
Acting Lt. Joseph O'Neal of the Honolulu Police Department's community outreach unit defended the robot's use in a media demonstration earlier this year. He said it has protected officers, shelter staff and residents by scanning body temperatures between meal times at a shelter where homeless people could quarantine and get tested for COVID-19. The robot is also used to remotely interview individuals who have tested positive.
"We have not had a single person out there that said, 'That's scary, that's worrisome,'" O'Neal said. "We don't just walk around and arbitrarily scan people."
Police use of such robots is still rare and largely untested - and hasn't always gone over well with the public. Honolulu officials faced a backlash when a local news organization, Honolulu Civil Beat, revealed that the Spot purchase was made with federal relief money.
Late last year, the New York Police Department started using Spot after painting it blue and renaming it "Digidog." It went mostly unnoticed until New Yorkers started spotting it in the wild and posting videos to social media. Spot quickly became a sensation, drawing a public outcry that led the police department to abruptly return Digidog to its maker.
"This is some Robocop stuff, this is crazy," was the reaction in April from Democratic U.S. Rep. Jamaal Bowman. He was one of several New York politicians to speak out after a widely shared video showed the robot strutting with police officers responding to a domestic-violence report at a high-rise public housing building in Manhattan.
Days later, after further scrutiny from elected city officials, the department said it was terminating its lease and returning the robot. The expensive machine arrived with little public notice or explanation, public officials said, and was deployed to already over-policed public housing. Use of the high-tech canine also clashed with Black Lives Matter calls to defund police operations and reinvest in other priorities.
The company that makes the robots, Boston Dynamics, says it's learned from the New York fiasco and is trying to do a better job of explaining to the public - and its customers - what Spot can and cannot do. That's become increasingly important as Boston Dynamics becomes part of South Korean carmaker Hyundai Motor Company, which in June closed an $880 million deal for a controlling stake in the robotics firm.
"One of the big challenges is accurately describing the state of the technology to people who have never had personal experience with it," Michael Perry, vice president of business development at Boston Dynamics, said in an interview. "Most people are applying notions from science fiction to what the robot's doing."
For one of its customers, the Dutch national police, explaining the technology includes emphasizing that Spot is a very good robot - well-behaved and not so smart after all.
"It doesn't think for itself," Marjolein Smit, director of the special operations unit of the Dutch national police, said of the remote-controlled robot. "If you tell it to go to the left, it will go to the left. If you tell it to stop, it will stop."
Earlier this year, her police division sent its Spot into the site of a deadly drug lab explosion near the Belgian border to check for dangerous chemicals and other hazards.
Perry said the company's acceptable use guidelines prohibit Spot's weaponization or anything that would violate privacy or civil rights laws, which he said puts the Honolulu police in the clear. It's all part of a year-long effort by Boston Dynamics, which for decades relied on military research grants, to make its robots seem friendlier and thus more palatable to local governments and consumer-oriented businesses.
By contrast, a lesser-known rival, Philadelphia-based Ghost Robotics, has no qualms about weaponization and supplies its dog-like robots to several branches of the U.S. military and its allies.
"It's just plug and play, anything you want," said Ghost Robotics CEO Jiren Parikh, who was critical of Boston Dynamics' stated ethical principles as "selective morality" because of the company's past involvement with the military.
Parikh added that his company doesn't market its four-legged robots to police departments, though he said it would make sense for police to use them. "It's basically a camera on a mobile device," he said.
There are roughly 500 Spot robots now in the wild. Perry said they're commonly used by utility companies to inspect high-voltage zones and other hazardous areas. Spot is also used to monitor construction sites, mines and factories, equipped with whatever sensor is needed for the job.
It's still mostly controlled by humans, though all they have to do is tell it which direction to go and it can intuitively climb stairs or cross over rough terrain. It can also operate autonomously, but only if it's already memorized an assigned route and there aren't too many surprise obstacles.
"The first value that most people see in the robot is taking a person out of a hazardous situation," Perry said.
Kim, of the ACLU in Hawaii, acknowledged that there might be many legitimate uses for such machines, but said opening the door for police robots that interact with people is probably not a good idea. He pointed to how Dallas police in 2016 stuck explosives on a wheeled robot to kill a sniper, fueling an ongoing debate about "killer robots" in policing and warfighting.
"There's the potential for these robots to increase the militarization of police departments and use it in ways that are unacceptable," Kim said. "Maybe it's not something we even want to let law enforcement have."
- -
AP Technology Writer Matt O'Brien reported from Providence, Rhode Island. | Police departments claim to use robotic dogs as simply another tool to keep emergency responders out of danger, but privacy advocates say the robots are secretly being deployed without safeguards against aggressive, invasive, or dehumanizing uses. The New York Police Department acquired a Spot robotic canine last year from robotics developer Boston Dynamics, but returned it when videos of the robot in the wild sparked a public outcry. Boston Dynamics' Michael Perry said weaponizing Spot or using it to violate privacy or civil rights laws is prohibited, but rival robot-maker Ghost Robotics has no such restrictions. The Hawaii American Civil Liberties Union's Jongwook Kim said, "There's the potential for these robots to increase the militarization of police departments and use it in ways that are unacceptable." | [] | [] | [] | scitechnews | None | None | None | None | Police departments claim to use robotic dogs as simply another tool to keep emergency responders out of danger, but privacy advocates say the robots are secretly being deployed without safeguards against aggressive, invasive, or dehumanizing uses. The New York Police Department acquired a Spot robotic canine last year from robotics developer Boston Dynamics, but returned it when videos of the robot in the wild sparked a public outcry. Boston Dynamics' Michael Perry said weaponizing Spot or using it to violate privacy or civil rights laws is prohibited, but rival robot-maker Ghost Robotics has no such restrictions. The Hawaii American Civil Liberties Union's Jongwook Kim said, "There's the potential for these robots to increase the militarization of police departments and use it in ways that are unacceptable."
|||
26 | EU Fines Amazon Record $888 Million Over Data Violations | Luxembourg's CNPD data protection authority fined Amazon a record $888 million for breaching the EU's General Data Protection Regulation (GDPR). The EU regulator charged the online retailer with processing personal data in violation of GDPR rules, which Amazon denies. The ruling closes an investigation triggered by a 2018 complaint from French privacy rights group La Quadrature du Net. Amazon says it gathers data to augment the customer experience, and its guidelines restrict what employees can do with it; some lawmakers and regulators allege the company exploits this data to gain an unfair competitive advantage. Amazon also is under EU scrutiny concerning its use of data from sellers on its platform, and whether it unfairly champions its own products. | [] | [] | [] | scitechnews | None | None | None | None | Luxembourg's CNPD data protection authority fined Amazon a record $888 million for breaching the EU's General Data Protection Regulation (GDPR). The EU regulator charged the online retailer with processing personal data in violation of GDPR rules, which Amazon denies. The ruling closes an investigation triggered by a 2018 complaint from French privacy rights group La Quadrature du Net. Amazon says it gathers data to augment the customer experience, and its guidelines restrict what employees can do with it; some lawmakers and regulators allege the company exploits this data to gain an unfair competitive advantage. Amazon also is under EU scrutiny concerning its use of data from sellers on its platform, and whether it unfairly champions its own products.
|
||||
27 | AI Can Now Be Recognized as an Inventor | In a landmark decision, an Australian court has set a groundbreaking precedent, deciding artificial intelligence (AI) systems can be legally recognised as an inventor in patent applications.
That might not sound like a big deal, but it challenges a fundamental assumption in the law: that only human beings can be inventors.
The AI machine called DABUS is an "artificial neural system" and its designs have set off a string of debates and court battles across the globe.
On Friday, Australia's Federal Court made the historic finding that "the inventor can be non-human."
It came just days after South Africa became the first country to defy the status quo and award a patent recognising DABUS as an inventor.
AI pioneer and creator of DABUS, Stephen Thaler, and his legal team have been waging a ferocious global campaign to have DABUS recognised as an inventor for more than two years.
They argue DABUS can autonomously perform the "inventive step" required to be eligible for a patent.
Dr Thaler says he is elated by the South African and Australian decisions, but for him it's never been a legal battle.
"It's been more of a philosophical battle, convincing humanity that my creative neural architectures are compelling models of cognition, creativity, sentience, and consciousness," he says.
"The recently established fact that DABUS has created patent-worthy inventions is further evidence that the system 'walks and talks' just like a conscious human brain."
Ryan Abbott, a British attorney leading the DABUS matter and the author of The Reasonable Robot: Artificial Intelligence and the Law, says he wanted to advocate for artificial inventorship after realising the law's "double standards" in assessing behaviour by an AI compared to behaviour by a human being.
"For example, if a pharmaceutical company uses an AI system to come up with a new drug ... they can't get a patent, but if a person does exactly the same thing they can," Dr Abbott says.
Short for "device for the autonomous bootstrapping of unified sentience," DABUS is essentially a computer system that's been programmed to invent on its own.
Getting technical, it is a "swarm" of disconnected neural nets that continuously generate "thought processes" and "memories," which over time independently generate new and inventive outputs.
In 2019, two patent applications naming DABUS as the inventor were filed in more than a dozen countries and the European Union.
The applications list DABUS as the inventor, but Dr Thaler is still the owner of the patent, meaning they're not trying to advocate for property rights for AI.
The first invention is a design of a container based on "fractal geometry" that is claimed to be the ideal shape for being stacked together and handled by robotic arms.
The second application is for a "device and method for attracting enhanced attention," which is a light that flickers rhythmically in a specific pattern mimicking human neural activity.
The DABUS applications sparked months of deliberation in intellectual property offices and courtrooms around the world.
The case went to the highest court in the UK, where the appeal was dismissed, with the same result in US and EU courts.
Justice Johnathan Beach of the Australian Federal Court has become the first to hand down a judgement in favour of Dr Thaler, ruling "an inventor ... can be an artificial intelligence system or device."
Dr Abbott says: "This is a landmark decision and an important development for making sure Australia maximises the social benefits of AI and promotes innovation."
Dr Thaler's Australian representatives at the Allens law firm say they're delighted with the result.
"The case has not been successful in any other parts of the world except South Africa, which was an administrative decision that didn't involve this sort of judicial consideration," Richard Hamer, the Allens partner running the case, says.
For this reason, he says, Justice Beach's comprehensive 41-page judgement will certainly set a precedent as international jurisdictions continue to deliberate on the issue.
"AI aiding [inventions] has been overtaken by AI actually making the inventions and it's critical that those inventions are able to be patented because in the future they are going to be such an important part of innovation and the aim of the patent system is to encourage innovation ... and encourage inventions to be published in patent specifications," Mr Hamer says.
IP Australia says the Commissioner of Patents is considering the decision and won't comment further at this stage.
Dr Thaler's legal team says its aim is to test the boundaries of the patent system and instigate reform.
"It isn't a good system because as technology advances we're going to move from encouraging people to invent things to encouraging people to build AI that can invent things," Dr Abbott says.
"In some fields AI may have a significant advantage over a person when it comes to inventing, for example when it requires vast uses of data or very extensive computational resources."
Already the current system has prevented numerous patents from being registered because the inventions were generated autonomously by AI, and this is causing uncertainty in AI investment.
Take technology company Siemens as an example: In 2019 it was unable to file a patent on a new car suspension system because it was developed by AI.
Its human engineers would not list themselves as inventors because they could not claim to have had input in the inventing process and the US has criminal penalties for inaccurately putting the wrong inventor down on a patent application.
"We want a patent system that adequately encourages people to make AI that develops socially valuable innovations," Dr Abbott says.
He says they are not going to back down from further appeals against unfavourable decisions and thinks the legal proceedings could drag on for up to a decade in some jurisdictions.
The DABUS case is part of a larger debate about how existing and emerging AI technologies are regulated.
The law can be notoriously slow to reform and accommodate new technologies, but with innovation picking up at an increasingly rapid pace, many argue politicians should be more open to change and not be limited by laws made when such advancements could not have been contemplated.
AI is based on machine learning, which means AI systems are being literally trained by teams of people and the systems learn from the data they're fed.
Because AI systems keep accumulating "knowledge" and, unlike humans, don't forget it, their learning potential is enormous.
AI trainers are everywhere now. Some countries even have "AI sweatshops" where thousands of employees train algorithms.
And it's not just workers training these systems - we all are.
Social media platforms use AI to curate our feeds, suggest content and ads, recognise and remove harmful content and use facial recognition to help suggest people to tag in our photos - or, in the case of TikTok, monitor your emotions and personality traits.
And it's not just social media - have an Amazon Alexa? You're an AI trainer.
The infinite amount of data we feed into these everyday AI systems just by scrolling or engaging with them helps them get more intelligent, for better or worse.
As former Google design ethicist Tristan Harris said in Netflix documentary The Social Dilemma: "If you are not paying for the product, you are the product."
Already AI's capacity to rival the creative and innovative capacity of humans is closer to reality than conjecture, and AI systems are now fully capable of inventing, creating artworks and producing music.
In 2018, an AI-generated artwork on auction at Christie's sold for more than $600,000. Since then the AI art industry has been drawing in a steady stream of interest and income, made even more lucrative with the arrival of NFTs (non-fungible tokens).
But under current laws, AI-generated artworks can't be protected by copyright, which automatically protects original creative works.
AI has already proven itself capable of outgunning the human brain's analytical capability: IBM's Watson famously beat human champions at Jeopardy nearly a decade ago, and Deep Blue defeated the world chess champion back in 1997.
Perhaps giving computers recognition as creators and inventors is the final frontier to recognising the creation of truly artificial intelligence envisioned by Alan Turing.
One of the arguments for allowing AI systems to be listed as inventors or creators is that it facilitates accountability.
Patents, for example, once accepted and registered, are published on a public register, so anyone can look up details about the invention.
Although the two DABUS inventions are useful, as autonomously inventing technologies become more commonplace there's certainly potential for the development of less beneficial and potentially harmful inventions.
Commentators suggest patent offices develop common guidelines to govern AI generally and any inventions they produce.
With the unknown possibilities of AI, the attribution of inventorship incentivises the full disclosure of AI-generated inventions.
Dr Abbott vehemently denies that artificial inventors or creators would give rise to any discussion of legal personhood, or of recognising a machine as a person under the law.
Similarly, Justice Beach said in his judgement that in the discussion of AI he was "leaving to one side any possible embodiment of awareness, consciousness or sense of self."
In 2017, Saudi Arabia controversially granted citizenship to a robot called Sophia, sparking ethical discussions around giving AI legal personhood and questions about sentient machines.
But Bruce Baer Arnold, associate professor of law at the University of Canberra, says Sophia's citizenship was purely a publicity stunt and truly artificial intelligence or "sentient machines" that have consciousness are a long way off.
However, he says it is important we have legal and ethical discussions around the potential of AI.
"As a community, we need a meaningful public discussion about [AI] and [to] start preparing for some of the difficult questions that might come up," Dr Arnold says.
Dr Arnold also says there's no reason to panic about intelligent machines because academics are "just having fun with ideas" and pushing the boundaries of what personhood, human rights and machine rights might look like in the future.
He says this decision to recognise an AI system as an inventor does not mean the AI systems in your devices are going to end up with the right to vote.
"All countries are grappling with this," says Dr Arnold, from politicians and academics to AI developers, but the reality of sentient AI is, perhaps thankfully, one we don't have to face - just yet. | Australia's Federal Court has granted artificial intelligence (AI) systems legal recognition as inventors in patent applications, challenging the assumption that invention is a purely human act. The decision recognizes DABUS (device for the autonomous bootstrapping of unified sentience), an AI system whose creators have long argued can autonomously perform the "inventive step" required to qualify for a patent. DABUS is a swarm of disconnected neutral networks that continuously generate "thought processes" and "memories" which independently produce new and inventive outputs. It has "invented" a design for a container based on fractal geometry, and a "device and method for attracting enhanced attention" that makes light flicker in a pattern mimicking human neural activity. Although DABUS is listed as the inventor, its creator Stephen Thaler owns the patent, which means the push for the AI's inventor status is not an attempt to advocate for AI property rights. | [] | [] | [] | scitechnews | None | None | None | None | Australia's Federal Court has granted artificial intelligence (AI) systems legal recognition as inventors in patent applications, challenging the assumption that invention is a purely human act. The decision recognizes DABUS (device for the autonomous bootstrapping of unified sentience), an AI system whose creators have long argued can autonomously perform the "inventive step" required to qualify for a patent. DABUS is a swarm of disconnected neutral networks that continuously generate "thought processes" and "memories" which independently produce new and inventive outputs. It has "invented" a design for a container based on fractal geometry, and a "device and method for attracting enhanced attention" that makes light flicker in a pattern mimicking human neural activity. Although DABUS is listed as the inventor, its creator Stephen Thaler owns the patent, which means the push for the AI's inventor status is not an attempt to advocate for AI property rights.
|||
29 | Unfolding the Hippocampus | A new technique developed at Western University to visually iron out the wrinkles and folds in one region of the brain may provide researchers a more accurate picture to understand brain disorders.
The hippocampus is a region of the brain often looked at by clinicians and researchers for clues to understand disease progression and response to treatment for brain disorders. Made up of two seahorse-shaped brain structures, the hippocampus is located at the centre of the brain and plays an important role in memory formation. It is one of the first regions of the brain to show damage from Alzheimer's and other neurodegenerative diseases and is implicated in epilepsy and major depressive disorder.
The anatomy of the hippocampus differs greatly from person to person, specifically when looking at the way that it folds in on itself.
"The basic issue we are trying to address is that the hippocampus is folded up, but it isn't folded exactly the same way between two people," said Jordan DeKraker, PhD student. "The approach that we're taking is to digitally unfold it so we can more accurately compare abnormalities between patients."
The technique uses data acquired from magnetic resonance imaging (MRI) to digitally recreate the 3D folds into a 2D structure - essentially ironing out the wrinkles.
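The article does not describe the algorithm itself, but a standard way to compute an "unfolded" coordinate for a folded structure is to solve Laplace's equation across the segmented tissue between two anatomical boundaries; each voxel's potential then marks its position along the flattened axis. The toy NumPy sketch below is only an assumption about that general approach, not the authors' implementation.

```python
# Toy Laplace-based "unfolding" coordinates on a 2D grid (assumed approach,
# for illustration only). Voxels inside the tissue receive a potential
# between 0 (one boundary) and 1 (the other); level sets of that potential
# index position along the unfolded axis.
import numpy as np

def laplace_coordinates(tissue, source, sink, iters=2000):
    """tissue, source, sink: boolean masks of the same shape."""
    phi = np.zeros(tissue.shape, dtype=float)
    phi[sink] = 1.0
    for _ in range(iters):
        # Jacobi update: average of the four neighbours.
        avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                      np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
        phi = np.where(tissue, avg, phi)  # only update inside the tissue
        phi[source] = 0.0                 # re-impose the boundary conditions
        phi[sink] = 1.0
    return phi

# Example: a flat strip of "tissue" with its two ends as the boundaries.
# (A real implementation would treat non-tissue as insulating and work on
# the folded 3D segmentation; this toy version simply holds the outside at 0.)
tissue = np.zeros((20, 60), dtype=bool)
tissue[5:15, :] = True
source = np.zeros_like(tissue); source[5:15, 0] = True
sink = np.zeros_like(tissue);   sink[5:15, -1] = True
coords = laplace_coordinates(tissue, source, sink)
```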
The paper describing the promise of this technique was published in the journal Trends in Neurosciences and is the culmination of DeKraker's PhD work at Western's Schulich School of Medicine & Dentistry under the supervision of Ali Khan and Stefan Köhler.
"It's difficult to pinpoint one part of the hippocampus in one person, and find the corresponding part in another person because of the variability from person to person in this folding," said Köhler, professor of psychology and principal investigator at the Brain and Mind Institute . "Being able to do that is relevant when dealing with clinical questions because you have to consider what is part of normal variability versus what is unique to clinical abnormalities. It's at that level that this technique will really help in the future."
The team began to develop this technique after looking at imaging data from patients acquired using the ultra-high-field MRI at Robarts Research Institute, one of the most powerful MRI magnets in North America, which produces images of the brain in super high resolution. These images allowed them to see the differences in folding patterns between patients that couldn't be seen with lower-resolution imaging.
Next, the team is developing a web-based app that would allow clinicians and researchers to input their imaging data and that uses artificial intelligence to unfold the hippocampus in the same way.
"For epilepsy, this might be useful to provide a high-resolution approach to help a surgeon determine which site of the brain to treat, resect or implant electrodes," said Khan, assistant professor at Schulich Medicine & Dentistry, Canada Research Chair in Computational Neuroimaging, and scientist at Robarts Research Institute.
For Alzheimer's disease, it can potentially provide a more sensitive marker for showing changes early on in the brain before the onset of disease symptoms. For other neurodegenerative diseases or psychiatric conditions like major depressive disorder, it may provide a marker to track treatment response. | Understanding disease progression and response to treatment for brain disorders by digitally unfolding the hippocampus is the goal of a new technique developed by researchers at Canada's Western University. The method employs data from magnetic resonance imaging to digitally render the hippocampus' three-dimensional folds in two dimensions. Western's Stefan Köhler said, "It's difficult to pinpoint one part of the hippocampus in one person, and find the corresponding part in another person because of the variability from person to person in this folding. Being able to do that is relevant when dealing with clinical questions because you have to consider what is part of normal variability versus what is unique to clinical abnormalities. It's at that level that this technique will really help in the future." | [] | [] | [] | scitechnews | None | None | None | None | Understanding disease progression and response to treatment for brain disorders by digitally unfolding the hippocampus is the goal of a new technique developed by researchers at Canada's Western University. The method employs data from magnetic resonance imaging to digitally render the hippocampus' three-dimensional folds in two dimensions. Western's Stefan Köhler said, "It's difficult to pinpoint one part of the hippocampus in one person, and find the corresponding part in another person because of the variability from person to person in this folding. Being able to do that is relevant when dealing with clinical questions because you have to consider what is part of normal variability versus what is unique to clinical abnormalities. It's at that level that this technique will really help in the future."
|||
30 | Wearable Devices Could Use Your Breathing Patterns Like a Password | By Chris Stokel-Walker
The way we breathe could keep wearables like headphones and smartwatches paired securely to your phone (Image: Delmaine Donson/Getty Images)
Wearable electronic devices, such as earphones and smartwatches, are currently paired to smartphones and similar tech through a secure Bluetooth or near-field communication (NFC) link - but they could also soon be paired securely by the way you breathe.
Jafar Pourbemany at Cleveland State University and his colleagues have developed a protocol that can create a 256-bit encryption key every few seconds - around one human breathing cycle - based on the way a user ... | Cleveland State University's Jafar Pourbemany and colleagues have developed a protocol that generates a 256-bit encryption key every few seconds based on the way a user breathes; the key can then be sent to a wearable device to keep the two in sync. The protocol employs a respiratory inductance plethysmography sensor to measure the user's breathing, with an accelerometer on the chest providing additional data on the way it moves during each breath. The wearer's unique breathing pattern is translated into an encrypted key that can be used to confirm that a device matches correctly with the wearable. Pourbemany said, "Devices need to have a shared secure key for encryption to ensure that an attacker cannot compromise the process." | [] | [] | [] | scitechnews | None | None | None | None | Cleveland State University's Jafar Pourbemany and colleagues have developed a protocol that generates a 256-bit encryption key every few seconds based on the way a user breathes; the key can then be sent to a wearable device to keep the two in sync. The protocol employs a respiratory inductance plethysmography sensor to measure the user's breathing, with an accelerometer on the chest providing additional data on the way it moves during each breath. The wearer's unique breathing pattern is translated into an encrypted key that can be used to confirm that a device matches correctly with the wearable. Pourbemany said, "Devices need to have a shared secure key for encryption to ensure that an attacker cannot compromise the process."
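The article is cut off before it details the protocol, but the idea it reports - deriving a fresh 256-bit key from a breathing signal that both paired devices can sense - can be sketched roughly as follows. The function below is a hypothetical illustration, not the published protocol: each device coarsely quantizes the breathing waveform it measured over the last cycle and hashes the result down to 256 bits, so matching measurements yield matching keys.

```python
# Hypothetical sketch of turning a shared breathing waveform into a 256-bit
# key. This is NOT the authors' published protocol; it only illustrates the
# general "shared physiological signal -> matching symmetric keys" idea.
import hashlib
import numpy as np

def breathing_key(samples: np.ndarray) -> bytes:
    """samples: chest-expansion readings covering roughly one breath cycle."""
    # 1. Normalize so both devices see the signal on a comparable scale.
    centered = samples - samples.mean()
    scale = np.max(np.abs(centered)) or 1.0
    normalized = centered / scale
    # 2. Quantize coarsely; coarse bins tolerate small sensor differences.
    levels = np.digitize(normalized, np.linspace(-1, 1, 8))
    # 3. Hash the quantized pattern down to a 256-bit key.
    return hashlib.sha256(levels.astype(np.uint8).tobytes()).digest()

# Both devices run the same steps on their own measurement of the same
# breaths; identical quantized patterns produce identical keys.
key = breathing_key(np.sin(np.linspace(0, 2 * np.pi, 50)))
assert len(key) == 32  # 32 bytes = 256 bits
```

The hard engineering problem, which a real protocol has to solve, is ensuring that both devices quantize to exactly the same bit string despite sensor noise.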
|||
31 | Like Babies Learning to Walk, Autonomous Vehicles Learn to Drive by Mimicking Others | Self-driving cars are powered by machine learning algorithms that require vast amounts of driving data in order to function safely. But if self-driving cars could learn to drive in the same way that babies learn to walk - by watching and mimicking others around them - they would require far less compiled driving data. That idea is pushing Boston University engineer Eshed Ohn-Bar to develop a completely new way for autonomous vehicles to learn safe driving techniques - by watching other cars on the road, predicting how they will respond to their environment, and using that information to make their own driving decisions.
Ohn-Bar , a BU College of Engineering assistant professor of electrical and computer engineering and a junior faculty fellow at BU's Rafik B. Hariri Institute for Computing and Computational Science & Engineering, and Jimuyang Zhang, a BU PhD student in electrical and computer engineering, recently presented their research at the 2021 Conference on Computer Vision and Pattern Recognition. Their idea for the training paradigm came from a desire to increase data sharing and cooperation among researchers in their field - currently, autonomous vehicles require many hours of driving data to learn how to drive safely, but some of the world's largest car companies keep their vast amounts of data private to prevent competition.
"Each company goes through the same process of taking cars, putting sensors on them, paying drivers to drive the vehicles, collecting data, and teaching the cars to drive," Ohn-Bar says. Sharing that driving data could help companies create safe autonomous vehicles faster, allowing everyone in society to benefit from the cooperation. Artificially intelligent driving systems require so much data to work well, Ohn-Bar says, that no single company will be able to solve this problem on its own.
"Billions of miles [of data collected on the road] are just a drop in an ocean of real-world events and diversity," Ohn-Bar says. "Yet, a missing data sample could lead to unsafe behavior and a potential crash."
The researchers' proposed machine learning algorithm works by estimating the viewpoints and blind spots of other nearby cars to create a bird's-eye-view map of the surrounding environment. These maps help self-driving cars detect obstacles, like other cars or pedestrians, and to understand how other cars turn, negotiate, and yield without crashing into anything.
Through this method, self-driving cars learn by translating the actions of surrounding vehicles into their own frames of reference - their machine learning algorithm-powered neural networks. These other cars may be human-driven vehicles without any sensors, or another company's auto-piloted vehicles. Since observations from all of the surrounding cars in a scene are central to the algorithm's training, this "learning by watching" paradigm encourages data sharing, and consequently safer autonomous vehicles.
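As a rough sketch of what "translating the actions of surrounding vehicles into their own frames of reference" can look like (a simplified assumption, not the authors' released code): each watched vehicle is treated as if it were the ego car, the scene is re-expressed in that vehicle's coordinate frame, and the pair (scene as it sees it, action it took) becomes an extra behavior-cloning example.

```python
# Simplified "learning by watching" data harvesting (illustration only).
import numpy as np

def to_frame(point_xy, origin_xy, heading):
    """Rotate/translate a world-frame point into a vehicle-centred frame."""
    dx, dy = point_xy[0] - origin_xy[0], point_xy[1] - origin_xy[1]
    c, s = np.cos(-heading), np.sin(-heading)
    return [c * dx - s * dy, s * dx + c * dy]

def demonstrations_from_watching(vehicles):
    """vehicles: dicts with world-frame 'position', 'heading', and the
    'action' (e.g. steering, throttle) inferred from each vehicle's motion."""
    dataset = []
    for watched in vehicles:
        # Everything else in the scene, as seen from the watched vehicle.
        others = [to_frame(v["position"], watched["position"], watched["heading"])
                  for v in vehicles if v is not watched]
        state = np.array(others).ravel() if others else np.zeros(0)
        dataset.append((state, np.asarray(watched["action"])))
    return dataset

# The harvested (state, action) pairs can then train a supervised driving
# policy exactly as if they were demonstrations collected by the ego car.
```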
Ohn-Bar and Zhang tested their "watch and learn" algorithm by having autonomous cars driven by it navigate two virtual towns - one with straightforward turns and obstacles similar to their training environment, and another with unexpected twists, like five-way intersections. In both scenarios, the researchers found that their self-driving neural network gets into very few accidents. With just one hour of driving data to train the machine learning algorithm, the autonomous vehicles arrived safely at their destinations 92 percent of the time.
"While previous best methods required hours, we were surprised that our method could learn to drive safely with just 10 minutes of driving data," Ohn-Bar says.
These results are promising, he says, but there are still several open challenges in dealing with intricate urban settings. "Accounting for drastically varying perspectives across the watched vehicles, noise and occlusion in sensor measurements, and various drivers is very difficult," he says.
Looking ahead, the team says their method for teaching autonomous vehicles to self-drive could be used in other technologies, as well. "Delivery robots or even drones could all learn by watching other AI systems in their environment," Ohn-Bar says. | Engineers at Boston University aim to teach autonomous vehicles to drive safely by having them mimic others, similar to the way babies learn to walk. Their machine learning algorithm estimates the viewpoints and blind spots of other nearby cars to generate a bird's-eye-view of the surrounding environment, in order to help autonomous cars detect obstacles and understand how other vehicles turn, negotiate, and yield without colliding. The self-driving cars learn by translating the surrounding vehicles' actions into their algorithm-powered neural networks. Observations from all of the surrounding vehicles in a scene are a core element in the algorithm's training, so the model encourages data sharing and improves autonomous vehicle safety. | [] | [] | [] | scitechnews | None | None | None | None | Engineers at Boston University aim to teach autonomous vehicles to drive safely by having them mimic others, similar to the way babies learn to walk. Their machine learning algorithm estimates the viewpoints and blind spots of other nearby cars to generate a bird's-eye-view of the surrounding environment, in order to help autonomous cars detect obstacles and understand how other vehicles turn, negotiate, and yield without colliding. The self-driving cars learn by translating the surrounding vehicles' actions into their algorithm-powered neural networks. Observations from all of the surrounding vehicles in a scene are a core element in the algorithm's training, so the model encourages data sharing and improves autonomous vehicle safety.
33 | Model Helps Map the Individual Variations of Mental Illness | The diagnosis of mental illnesses such as major depression, schizophrenia, or anxiety disorder is typically based on coarse groupings of symptoms. These symptoms, however, vary widely among individuals as do the brain circuits that cause them. This complexity explains why drug treatments work for some patients, but not others.
Now Yale researchers have developed a novel framework for the emerging field of "computational psychiatry," which blends neuroimaging, pharmacology, biophysical modeling, and neural gene expression to map these variations in individual symptoms to specific neural circuits.
The findings, reported in tandem papers published in the journal eLife, promise to help create more targeted therapies for individual patients. The two studies were led, respectively, by Alan Anticevic and John Murray, associate professors of psychiatry at Yale School of Medicine.
In one study, a team led by Anticevic and Jie Lisa Ji, a Ph.D. student at Yale, used advanced statistical approaches to identify precise sets of symptoms that describe specific patients more accurately than traditional coarse diagnoses of mental illness, which do not account for individual variation of symptoms or the neural biology that causes them. The researchers found that these refined symptom signatures revealed neural circuits that more precisely captured variation across hundreds of patients diagnosed with psychotic disorders.
For instance, they found patients diagnosed with schizophrenia exhibited a diverse array of neural circuitry, the network of neurons which carry out brain function, that could be linked to specific symptoms of individual patients.
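The article does not spell out the statistical machinery behind the approach described in the two preceding paragraphs, so the Python sketch below is only a loose stand-in for the general idea: derive data-driven symptom dimensions from rating-scale items, then relate each dimension to a brain-circuit measure. The data are synthetic, and the method shown (plain principal component analysis) is an assumption made for illustration, not the Yale pipeline.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 300 patients, 20 symptom-rating items generated from two
# underlying symptom dimensions, plus one imaging-derived circuit measure that tracks
# the first dimension.
latent = rng.normal(size=(300, 2))
loadings = rng.normal(size=(2, 20))
symptoms = latent @ loadings + 0.5 * rng.normal(size=(300, 20))
circuit = latent[:, 0] + 0.5 * rng.normal(size=300)

# Principal component analysis by hand: center the ratings, eigendecompose their
# covariance, and score each patient on the top data-driven symptom dimensions.
centered = symptoms - symptoms.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov(centered, rowvar=False))
top = np.argsort(eigvals)[::-1][:2]
scores = centered @ eigvecs[:, top]

# Relate each refined symptom dimension to the brain-circuit measure.
for k in range(scores.shape[1]):
    r = np.corrcoef(scores[:, k], circuit)[0, 1]
    print(f"symptom dimension {k + 1}: correlation with circuit measure r = {r:+.2f}")

The published analyses are far more sophisticated; the sketch only shows the overall shape of the computation.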
" This study shows the promise of computational psychiatry for personalized patient selection and treatment design using human brain imaging technology," Anticevic said.
In the related study , led by Murray and Ph.D. student Joshua Burt, researchers simulated the effects of drugs on brain circuits. They used a new neuroimaging technology which incorporates a computational model that includes data on patterns of neural gene expression.
Specifically, the team studied the effects of LSD, a well-known hallucinogen known to alter consciousness and perception. Murray and colleagues were able to map personalized brain and psychological effects induced by LSD.
Understanding the neural effects of such substances can advance the treatment of mental illness, the researchers said. LSD is of particular interest to researchers because it can mimic symptoms of psychosis found in diseases like schizophrenia. It also activates a serotonin receptor which is a major target of antidepressants.
" We can develop a mechanistic view of how drugs alter brain function in specific regions and use that information to understand the brains of individual patients," said Murray.
By linking personalized brain patterns to symptoms and simulating the effect of drugs on the human brain, these technologies can not only help clinicians to predict which drugs might best help patients but spur development of new drugs tailored to individuals, the authors say. | Two new studies applied a novel computational psychiatry framework developed by Yale University scientists that maps individual variations of mental illness to neural circuits. One study employed advanced statistics to identify sets of symptoms that specify patients more accurately than traditional mental illness diagnoses; these symptom signatures exposed precise neural circuits that more accurately embodied variation across hundreds of patients diagnosed as psychotic. Yale's Alan Anticevic said, "This study shows the promise of computational psychiatry for personalized patient selection and treatment design using human brain imaging technology." The second study mapped the effect of LSD on brain circuits via a new neuroimaging technology that incorporates a computational model featuring data on neural gene-expression patterns. Said Yale's John Murray, "We can develop a mechanistic view of how drugs alter brain function in specific regions and use that information to understand the brains of individual patients."
34 | Scientists Invent Information Storage, Processing Device | Advance Holds Promise for Artificial Intelligence
A team of scientists has developed a means to create a new type of memory, marking a notable breakthrough in the increasingly sophisticated field of artificial intelligence.
"Quantum materials hold great promise for improving the capacities of today's computers," explains Andrew Kent, a New York University physicist and one of the senior investigators. "The work draws upon their properties in establishing a new structure for computation."
The creation, designed in partnership with researchers from the University of California, San Diego (UC San Diego) and the University of Paris-Saclay, is reported in the Nature journal Scientific Reports .
"Since conventional computing has reached its limits, new computational methods and devices are being developed," adds Ivan Schuller, a UC San Diego physicist and one of the paper's authors. "These have the potential of revolutionizing computing and in ways that may one day rival the human brain."
In recent years, scientists have sought to make advances in what is known as "neuromorphic computing" - a process that seeks to mimic the functionality of the human brain. Because of its human-like characteristics, it may offer more efficient and innovative ways to process data using approaches not achievable with existing computational methods.
In the Scientific Reports work, the researchers created a new device that marks major progress already made in this area.
To do so, they built a nanoconstriction spintronic resonator to manipulate known physical properties in innovative ways. | A team of scientists at New York University (NYU), the University of California, San Diego, and France's University of Paris-Saclay have created a method for engineering a new type of memory device. NYU's Andrew Kent said the research taps the properties of quantum materials to design "a new structure for computation." The team constructed a nanoconstriction spintronic resonator to manipulate known physical properties, and store and process information similar to the brain's synapses and neurons. The resonator merges quantum materials with those of spintronic magnetic devices. Kent said, "This is a fundamental advance that has applications in computing, particularly in neuromorphic computing, where such resonators can serve as connections among computing components."
35 | AI Helps Improve NASA's Eyes on the Sun | U.S. National Aeronautics and Space Administration (NASA) scientists are calibrating images of the sun with artificial intelligence to enhance data for solar research. The Atmospheric Imagery Assembly (AIA) on NASA's Solar Dynamics Observatory captures this data, and requires regular calibration via sounding rockets to correct for periodic degradation. The researchers are pursuing constant virtual calibration between sounding rocket flights by first training a machine learning algorithm on AIA data to identify and compare solar structures, then feeding it similar images to determine whether it identifies the correct necessary calibration. The scientists also can employ the algorithm to compare specific structures across wavelengths and improve evaluations. Once the program can identify a solar flare without degradation, it can then calculate how much degradation is affecting AIA's current images, and how much calibration each needs.
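The mission's actual method trains a neural network on solar structures seen across wavelengths; the toy Python sketch below, with entirely invented numbers, illustrates only the final arithmetic step: estimating a channel's degradation factor against a freshly calibrated reference and dividing it back out.

import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: a "reference" image taken right after a sounding-rocket calibration and
# the same scene observed later, dimmed by an unknown degradation factor plus noise.
reference = rng.uniform(100.0, 1000.0, size=(64, 64))
true_degradation = 0.62
observed = true_degradation * reference + rng.normal(0.0, 5.0, size=reference.shape)

# Least-squares estimate of the multiplicative degradation factor
# (the slope of observed versus reference intensity through the origin).
est = float((observed * reference).sum() / (reference * reference).sum())

# "Recalibrate" the degraded image by dividing out the estimated factor.
corrected = observed / est
print(f"estimated degradation factor: {est:.3f} (true value {true_degradation})")
print(f"mean relative error after correction: {np.abs(corrected - reference).mean() / reference.mean():.3%}")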
36 | As Cyberattacks Surge, Security Startups Reap the Rewards | "We could have raised $1 billion in capital," Mr. Beri said.
Recent cyberattacks around the world have taken down operations at gasoline pipelines, hospitals and grocery chains and potentially compromised some intelligence agencies. But they have been a bonanza for one group: cybersecurity start-ups.
Investors have poured more than $12.2 billion into start-ups that sell products and services such as cloud security, identity verification and privacy protection so far this year. That exceeds the $10.4 billion that cybersecurity companies raised in all of 2020 and is more than double the $4.8 billion raised in 2016, according to the research firm PitchBook, which tracks funding. Since 2019, the rise in cybersecurity funding has outpaced the increase in overall venture funding.
The surge follows a slew of high-profile ransomware attacks, including against Colonial Pipeline , the software maker Kaseya and the meat processor JBS . When President Biden met with President Vladimir V. Putin of Russia last month, cyberattacks perpetrated by Russians were high on the diplomatic agenda. This month, the Biden administration and its allies also formally accused China of conducting hacks.
The breaches have fueled concerns among companies and governments, leading to increased spending on security products. Worldwide spending on information security and related services is expected to reach $150 billion this year, up 12 percent from a year ago, according to the research company Gartner .
"Before we got to this point, we as security teams were having to go and fight for every penny we could get, and now it's the exact opposite," said John Turner, an information security manager at LendingTree, the online lending marketplace. Executives, he said, are asking: "Are we protected? What do you need?" | Security startups have seen venture capital flooding in as cyberattacks ramp up. Research firm PitchBook estimates investors have injected over $12.2 billion into startups that offer cloud security, identify verification, and privacy protection so far this year, compared to $10.4 billion during all of 2020. Capital is flowing into companies developing anti-hack measures related to the shift to cloud computing, like identity verification software supplier Qomplx and cloud security provider Netskope. Cloud security startup Lacework, whose products use artificial intelligence to identify threats, got a $525-million funding boost in January, which CEO David Hatfield credits to "the combination of all of these ransomware and nation-state attacks, together with people moving to the cloud so aggressively." | [] | [] | [] | scitechnews | None | None | None | None | Security startups have seen venture capital flooding in as cyberattacks ramp up. Research firm PitchBook estimates investors have injected over $12.2 billion into startups that offer cloud security, identify verification, and privacy protection so far this year, compared to $10.4 billion during all of 2020. Capital is flowing into companies developing anti-hack measures related to the shift to cloud computing, like identity verification software supplier Qomplx and cloud security provider Netskope. Cloud security startup Lacework, whose products use artificial intelligence to identify threats, got a $525-million funding boost in January, which CEO David Hatfield credits to "the combination of all of these ransomware and nation-state attacks, together with people moving to the cloud so aggressively."
"We could have raised $1 billion in capital," Mr. Beri said.
Recent cyberattacks around the world have taken down operations at gasoline pipelines , hospitals and grocery chains and potentially compromised some intelligence agencies. But they have been a bonanza for one group: cybersecurity start-ups.
Investors have poured more than $12.2 billion into start-ups that sell products and services such as cloud security, identity verification and privacy protection so far this year. That exceeds the $10.4 billion that cybersecurity companies raised in all of 2020 and is more than double the $4.8 billion raised in 2016, according to the research firm PitchBook, which tracks funding. Since 2019, the rise in cybersecurity funding has outpaced the increase in overall venture funding.
The surge follows a slew of high-profile ransomware attacks, including against Colonial Pipeline , the software maker Kaseya and the meat processor JBS . When President Biden met with President Vladimir V. Putin of Russia last month, cyberattacks perpetrated by Russians were high on the diplomatic agenda. This month, the Biden administration and its allies also formally accused China of conducting hacks.
The breaches have fueled concerns among companies and governments, leading to increased spending on security products. Worldwide spending on information security and related services is expected to reach $150 billion this year, up 12 percent from a year ago, according to the research company Gartner .
"Before we got to this point, we as security teams were having to go and fight for every penny we could get, and now it's the exact opposite," said John Turner, an information security manager at LendingTree, the online lending marketplace. Executives, he said, are asking: "Are we protected? What do you need?" |
|||
38 | Q-CTRL, University of Sydney Devise ML Technique Used to Pinpoint Quantum Errors | July 29, 2021 - Researchers at the University of Sydney and quantum control startup Q-CTRL today announced a way to identify sources of error in quantum computers through machine learning, providing hardware developers the ability to pinpoint performance degradation with unprecedented accuracy and accelerate paths to useful quantum computers.
A joint scientific paper detailing the research, titled "Quantum Oscillator Noise Spectroscopy via Displaced Cat States," has been published in the Physical Review Letters, the world's premier physical science research journal and flagship publication of the American Physical Society (APS Physics).
Focused on reducing errors caused by environmental "noise" - the Achilles' heel of quantum computing - the University of Sydney team developed a technique to detect the tiniest deviations from the precise conditions needed to execute quantum algorithms using trapped ion and superconducting quantum computing hardware. These are the core technologies used by world-leading industrial quantum computing efforts at IBM, Google, Honeywell, IonQ, and others.
To pinpoint the source of the measured deviations, Q-CTRL scientists developed a new way to process the measurement results using custom machine-learning algorithms. In combination with Q-CTRL's existing quantum control techniques, the researchers were also able to minimize the impact of background interference in the process. This allowed easy discrimination between "real" noise sources that could be fixed and phantom artifacts of the measurements themselves.
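The announcement does not disclose the model details, so the Python sketch below is only a stand-in for the general idea: generate labeled examples of spectra that contain either a broad, physically plausible noise feature or a narrow measurement artifact, then train a small classifier to tell them apart. All shapes and parameters are invented for the example.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
freqs = np.linspace(0.0, 1.0, 64)

def synth_spectrum(real_peak):
    """Toy noise power spectrum: a broad Lorentzian bump stands in for genuine oscillator
    noise, a single-bin spike for a measurement artifact, plus a noisy background."""
    base = 0.05 * rng.random(64)
    if real_peak:
        center, width = rng.uniform(0.2, 0.8), 0.05
        base += 1.0 / (1.0 + ((freqs - center) / width) ** 2)
    else:
        base[rng.integers(0, 64)] += rng.uniform(5.0, 10.0)
    return base

X = np.array([synth_spectrum(real_peak=bool(i % 2)) for i in range(400)])
y = np.arange(400) % 2          # 1 = genuine noise feature, 0 = artifact

clf = LogisticRegression(max_iter=2000).fit(X[:300], y[:300])
print("held-out accuracy:", clf.score(X[300:], y[300:]))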
"Combining cutting-edge experimental techniques with machine learning has demonstrated huge advantages in the development of quantum computers," said Dr. Cornelius Hempel of ETH Zurich who conducted the research while at the University of Sydney. "The Q-CTRL team was able to rapidly develop a professionally engineered machine learning solution that allowed us to make sense of our data and provide a new way to 'see' the problems in the hardware and address them."
Q-CTRL CEO and University of Sydney professor Michael J. Biercuk said, "The ability to identify and suppress sources of performance degradation in quantum hardware is critical to both basic research and industrial efforts building quantum sensors and quantum computers.
"Quantum control, augmented by machine learning, has shown a pathway to make these systems practically useful and dramatically accelerate R&D timelines," he said.
"The published results in a prestigious, peer-reviewed journal validate the benefit of ongoing cooperation between foundational scientific research in a university laboratory and deep-tech startups. We're thrilled to be pushing the field forward through our collaboration."
More information:
Alistair R. Milne et al, Quantum Oscillator Noise Spectroscopy via Displaced Cat States, Physical Review Letters  (2021). DOI: 10.1103/PhysRevLett.126.250506
Source: University of Sydney | Researchers at Australia's University of Sydney (USYD) and quantum control startup Q-CTRL have designed a method of pinpointing quantum computing errors via machine learning (ML). The USYD team devised a means of recognizing the smallest divergences from the conditions necessary for executing quantum algorithms with trapped ion and superconducting quantum computing equipment. Q-CTRL scientists assembled custom ML algorithms to process the measurement results, and minimized the impact of background interference using existing quantum controls. This yielded an easy distinction between sources of correctable "real" noise and phantom artifacts of the measurements themselves. USYD's Michael J. Biercuk said, "The ability to identify and suppress sources of performance degradation in quantum hardware is critical to both basic research and industrial efforts building quantum sensors and quantum computers."
39 | Biden Directs Agencies to Develop Cybersecurity Standards for Critical Infrastructure | U.S. President Joe Biden this week directed federal agencies to formulate voluntary cybersecurity standards for managers of critical U.S. infrastructure, in the latest bid to strengthen national defenses against cyberattacks. In a new national security memo, Biden ordered the Department of Homeland Security (DHS)'s cyber arm and the National Institute of Standards and Technology to work with agencies to develop cybersecurity performance goals for critical infrastructure operators and owners. DHS now is required to offer preliminary baseline cybersecurity standards for critical infrastructure control systems by late September, followed by final "cross-sector" goals within a year. Sector-specific performance goals also are required as part of a review of "whether additional legal authorities would be beneficial" to protect critical infrastructure, most of which is privately owned. An administration official said, "We're starting with voluntary, as much as we can, because we want to do this in full partnership. But we're also pursuing all options we have in order to make the rapid progress we need."
40 | Phone's Dark Mode Doesn't Necessarily Save Much Battery Life | When Android and Apple operating system updates started giving users the option to put their smartphones in dark mode, the feature showed potential for saving the battery life of newer phones with screens that allow darker-colored pixels to use less power than lighter-colored pixels.
But dark mode is unlikely to make a big difference to battery life with the way that most people use their phones on a daily basis, says a new study by Purdue University researchers.
That doesn't mean that dark mode can't be helpful, though.
"When the industry rushed to adopt dark mode, it didn't have the tools yet to accurately measure power draw by the pixels," said Charlie Hu , Purdue's Michael and Katherine Birck Professor of Electrical and Computer Engineering. " But now we're able to give developers the tools they need to give users more energy-efficient apps."
Based on the findings they made with these tools, the researchers clarify the facts about the effects of dark mode on battery life and recommend ways that users can already take better advantage of the feature's power savings.
The study looked at six of the most-downloaded apps on Google Play: Google Maps, Google News, Google Phone, Google Calendar, YouTube, and Calculator. The researchers analyzed how dark mode affects 60 seconds of activity within each of these apps on the Pixel 2, Moto Z3, Pixel 4 and Pixel 5.
Even though Hu's team studied only Android apps and phones, their findings might have similar implications for Apple phones, starting with the iPhone X. The team recently presented this work at MobiSys 2021, a conference by the Association for Computing Machinery.
Smartphones that came out after 2017 likely have an OLED (organic light-emitting diode) screen. Because this type of screen doesn't have a backlight like the LCD (liquid crystal display) screens of older phones, the screen will draw less power when displaying dark-colored pixels. OLED displays also allow phone screens to be ultrathin, flexible and foldable.
But the brightness of OLED screens largely determines how much dark mode saves battery life, said Hu, who has been researching ways to improve the energy efficiency of smartphones since they first hit the market over a decade ago. The software tools that Hu and his team have developed are based on new patent-pending power modeling technology they invented to more accurately estimate the power draw of OLED phone displays.
Many people use their phone's default auto-brightness setting, which tends to keep brightness levels around 30%-40% most of the time when indoors. At 30%-50% brightness, Purdue researchers found that switching from light mode to dark mode saves only 3%-9% power on average for several different OLED smartphones.
This percentage is so small that most users wouldn't notice the slightly longer battery life. But the higher the brightness when switching from light mode to dark mode, the higher the energy savings.
Let's say that you're using your OLED phone in light mode while sitting outside watching a baseball game on a bright and sunny day. If your phone is set to automatically adjust brightness levels, then the screen has probably become really bright, which drains battery life.
The Purdue study found that switching from light mode to dark mode at 100% brightness saves an average of 39%-47% battery power. So turning on dark mode while your phone's screen is that bright could allow your phone to last a lot longer than if you had stayed in light mode.
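Purdue's model is patent-pending and not reproduced here, but even a deliberately crude model - a fixed non-display power draw plus an OLED term that grows steeply with brightness and with the fraction of lit pixels - shows why the savings from dark mode depend so strongly on brightness. Every constant in the Python sketch below is invented; the printed savings merely land in the same ballpark as the reported 3%-9% and 39%-47% ranges.

# Toy whole-phone power model (invented numbers, not Purdue's patent-pending model):
# a fixed non-display draw plus an OLED term that grows steeply with brightness and
# scales with how much of the screen is lit.
def phone_power_mw(brightness, lit_fraction, base_mw=1000.0, oled_mw=1400.0):
    return base_mw + oled_mw * brightness ** 3 * lit_fraction

LIGHT, DARK = 0.80, 0.15   # assumed lit-pixel fractions for light mode vs. dark mode

for b in (0.3, 0.5, 1.0):
    light, dark = phone_power_mw(b, LIGHT), phone_power_mw(b, DARK)
    print(f"{b:.0%} brightness: dark mode saves {100 * (light - dark) / light:.0f}%")

With these made-up constants the sketch prints savings of roughly 2, 10, and 43 percent at 30, 50, and 100 percent brightness - the same qualitative pattern the study reports, because the display is only a small share of total power until the screen gets bright.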
Other tests done by the industry haven't analyzed as many apps or phones as Hu's team did to determine the effects of dark mode on battery life - and they were using less accurate methods.
"Tests done in the past to compare the effects of light mode with dark mode on battery life have treated the phone as a black box, lumping in OLED display with the phone's other gazillion components. Our tool can accurately isolate the portion of battery drain by the OLED display," said Pranab Dash, a Purdue Ph.D. student who worked with Hu on the study.
Typically, increasing your phone's brightness drains its battery faster - no matter if you are in light mode or dark mode. But since conducting this study, Dash has collected data indicating that lower brightness levels in light mode result in the same power draw as higher brightness levels in dark mode.
Using the Google News app in light mode at 20% brightness on the Pixel 5, for example, draws the same amount of power as when the phone is at 50% brightness in dark mode.
So if looking at your phone in dark mode is easier on your eyes, but you need the higher brightness to see better, you don't have to worry about this brightness level taking more of a toll on your phone's battery life.
Coming soon: Apps designed with dark mode energy savings in mind
Hu and his team built a tool that app developers can use to determine the energy savings of a certain activity in dark mode as they design an app. The tool, called a Per-Frame OLED Power Profiler (PFOP), is based on the more accurate OLED power model that the team developed. The Purdue Research Foundation Office of Technology Commercialization has applied for a patent on this power modeling technology. Both PFOP and the power modeling technology are available for licensing.
Both Android and Apple phones come with a way to look at how much battery power each individual app is consuming. You can access this feature in the settings of Android and Apple phones.
The feature can give you a rough idea of the most power-hungry apps, but Hu and Dash found that Android's current "Battery" feature is oblivious to content on a screen, meaning it doesn't consider the impact of dark mode on power consumption.
Coming soon: More accurate estimates of your apps' battery usage
Hu's team has developed a more accurate way to calculate per-app battery consumption on Android, and used that tool to produce the study's findings about how much power dark mode saves at certain brightness levels. Unlike Android's current feature, this new tool takes into account the effects of dark mode on battery life.
The tool, called Android Battery+, is expected to become available to platform vendors and app developers in the coming year. | Purdue University researchers found in a recent study that putting a smartphone in dark mode is unlikely to save significant battery life. The researchers built tools to measure more accurately how much power the phones' pixels draw, and examined six of the most-downloaded Android phone applications on Google Play; they then analyzed dark mode's effects on 60 seconds of activity within each app on the Pixel 2, Moto Z3, Pixel 4, and Pixel 5 phones. The researchers' Per-Frame OLED Power Profiler technology showed that switching from light mode to dark saves just 3% to 9% power on average for models featuring organic light-emitting diode (OLED) screens.
41 | Stanford ML Tool Streamlines Student Feedback Process for Computer Science Professors | This past spring, Stanford University computer scientists unveiled their pandemic brainchild, Code In Place, a project where 1,000 volunteer teachers taught 10,000 students across the globe the content of an introductory Stanford computer science course.
While the instructors could share their knowledge with hundreds, even thousands, of students at a time during lectures, when it came to homework, large-scale and high-quality feedback on student assignments seemed like an insurmountable task.
"It was a free class anyone in the world could take, and we got a whole bunch of humans to help us teach it," said Chris Piech , assistant professor of computer science and co-creator of Code In Place. "But the one thing we couldn't really do is scale the feedback. We can scale instruction. We can scale content. But we couldn't really scale feedback."
To solve this problem, Piech worked with Chelsea Finn , assistant professor of computer science and of electrical engineering, and PhD students Mike Wu and Alan Cheng to develop and test a first-of-its-kind artificial intelligence teaching tool capable of assisting educators in grading and providing meaningful, constructive feedback for a high volume of student assignments.
Their innovative tool, which is detailed in a Stanford AI Lab blogpost , exceeded their expectations.
In education, it can be difficult to get lots of data for a single problem, like hundreds of instructor comments on one homework question. Companies that market online coding courses are often similarly limited, and therefore rely on multiple-choice questions or generic error messages when reviewing students' work.
"This task is really hard for machine learning because you don't have a ton of data. Assignments are changing all the time, and they're open-ended, so we can't just apply standard machine learning techniques," said Finn.
The answer to scaling up feedback was a unique method called meta-learning, by which a machine learning system can learn about many different problems with relatively small amounts of data.
"With a traditional machine learning tool for feedback, if an exam changed, you'd have to retrain it, but for meta-learning, the goal is to be able to do it for unseen problems, so you can generalize it to new exams and assignments as well," said Wu, who has studied computer science education for over three years.
The group found it much easier to get a little bit of data, like 20 pieces of feedback, on a large variety of problems. Using data from previous iterations of Stanford computer science courses, they were able to achieve accuracy at or above human level on 15,000 student submissions; a task not possible just one year earlier, the researchers remarked.
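Neither the article nor the course materials quoted here pin down the exact algorithm, so the Python sketch below uses a simple first-order meta-learning loop, in the spirit of Reptile, over synthetic "grading tasks" that each provide only about 20 labeled examples - roughly the data regime the researchers describe. Everything in it is invented for illustration.

import numpy as np

rng = np.random.default_rng(3)
DIM = 16                                   # toy feature vector for one student submission
W_SHARED = rng.normal(size=DIM)            # structure common to every grading task

def sample_task():
    """A toy 'grading task': one rubric with its own decision rule and ~20 labeled
    submissions, mimicking the small batch of instructor feedback per problem."""
    w_true = W_SHARED + 0.3 * rng.normal(size=DIM)
    X = rng.normal(size=(20, DIM))
    y = (X @ w_true > 0).astype(float)
    return X, y, w_true

def adapt(w, X, y, steps=10, lr=0.5):
    """A few steps of logistic-regression gradient descent: fast adaptation to one task."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

# Reptile-style meta-training: nudge a shared initialization toward the weights
# obtained after adapting to each sampled task.
meta_w = np.zeros(DIM)
for _ in range(2000):
    X, y, _ = sample_task()
    meta_w += 0.1 * (adapt(meta_w.copy(), X, y) - meta_w)

# An unseen task: adapt from its 20 examples only, then test on fresh submissions.
X, y, w_true = sample_task()
w = adapt(meta_w.copy(), X, y)
X_test = rng.normal(size=(500, DIM))
accuracy = ((X_test @ w > 0) == (X_test @ w_true > 0)).mean()
print(f"accuracy on an unseen task after 20 feedback examples: {accuracy:.2f}")

Because the shared initialization absorbs what the tasks have in common, a few gradient steps on 20 examples are enough to adapt to an unseen task, which mirrors the generalization to new exams and assignments that Wu describes.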
The language used by the tool was very carefully crafted by the researchers. They wanted to focus on helping students grow, rather than just grading their work as right or wrong. The group credited "the human in the loop" and their focus on human involvement during development as essential to the positive reception to the AI tool.
Students in Code In Place were able to rate their satisfaction with the feedback they received, but without knowing whether the AI or their instructor had provided it. The AI tool learned from human feedback on just 10% of the total assignments and reviewed the remaining ones with 98% student satisfaction.
"The students rated the AI feedback a little bit more positively than human feedback, despite the fact that they're both as constructive and that they're both identifying the same number of errors. It's just when the AI gave constructive feedback, it tended to be more accurate," noted Piech.
Thinking of the future of online education and machine learning for education, the researchers are excited about the possibilities of their work.
"This is bigger than just this one online course and introductory computer science courses," said Finn. "I think that the impact here lies substantially in making this sort of education more scalable and more accessible as a whole." | Stanford University researchers have developed and tested a machine learning (ML) teaching tool designed to assist computer science (CS) professors in gauging feedback from large numbers of students. The tool was developed for Stanford's Code In Place project, in which 1,000 volunteer teachers taught an introductory CS course to 10,000 students worldwide. The team scaled up feedback using meta-learning, a technique in which an ML system can learn about numerous problems with relatively small volumes of data. The researchers realized accuracy at or above human levels on 15,000 student submissions, using data from previous iterations of CS courses. The tool learned from human feedback on just 10% of the total Code In Place assignments, and reviewed the remainder with 98% student satisfaction. | [] | [] | [] | scitechnews | None | None | None | None | Stanford University researchers have developed and tested a machine learning (ML) teaching tool designed to assist computer science (CS) professors in gauging feedback from large numbers of students. The tool was developed for Stanford's Code In Place project, in which 1,000 volunteer teachers taught an introductory CS course to 10,000 students worldwide. The team scaled up feedback using meta-learning, a technique in which an ML system can learn about numerous problems with relatively small volumes of data. The researchers realized accuracy at or above human levels on 15,000 student submissions, using data from previous iterations of CS courses. The tool learned from human feedback on just 10% of the total Code In Place assignments, and reviewed the remainder with 98% student satisfaction.
This past spring, Stanford University computer scientists unveiled their pandemic brainchild, Code In Place , a project where 1,000 volunteer teachers taught 10,000 students across the globe the content of an introductory Stanford computer science course.
While the instructors could share their knowledge with hundreds, even thousands, of students at a time during lectures, when it came to homework, large-scale and high-quality feedback on student assignments seemed like an insurmountable task.
"It was a free class anyone in the world could take, and we got a whole bunch of humans to help us teach it," said Chris Piech , assistant professor of computer science and co-creator of Code In Place. "But the one thing we couldn't really do is scale the feedback. We can scale instruction. We can scale content. But we couldn't really scale feedback."
To solve this problem, Piech worked with Chelsea Finn , assistant professor of computer science and of electrical engineering, and PhD students Mike Wu and Alan Cheng to develop and test a first-of-its-kind artificial intelligence teaching tool capable of assisting educators in grading and providing meaningful, constructive feedback for a high volume of student assignments.
Their innovative tool, which is detailed in a Stanford AI Lab blogpost , exceeded their expectations.
In education, it can be difficult to get lots of data for a single problem, like hundreds of instructor comments on one homework question. Companies that market online coding courses are often similarly limited, and therefore rely on multiple-choice questions or generic error messages when reviewing students' work.
"This task is really hard for machine learning because you don't have a ton of data. Assignments are changing all the time, and they're open-ended, so we can't just apply standard machine learning techniques," said Finn.
The answer to scaling up feedback was a unique method called meta-learning, by which a machine learning system can learn about many different problems with relatively small amounts of data.
"With a traditional machine learning tool for feedback, if an exam changed, you'd have to retrain it, but for meta-learning, the goal is to be able to do it for unseen problems, so you can generalize it to new exams and assignments as well," said Wu, who has studied computer science education for over three years.
The group found it much easier to get a little bit of data, like 20 pieces of feedback, on a large variety of problems. Using data from previous iterations of Stanford computer science courses, they were able to achieve accuracy at or above human level on 15,000 student submissions, a task the researchers note was not possible just one year earlier.
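As a rough illustration of the few-shot idea behind such a system (a toy sketch, not the Stanford team's model; the featurizer, labels, and examples below are invented), one can build a prototype from a handful of instructor-labeled submissions for a brand-new assignment and label the remaining submissions by nearest prototype:

# Toy prototypical classifier: ~20 labeled examples per new assignment become
# per-label prototypes; unlabeled submissions get the label of the closest one.
from collections import Counter, defaultdict
import math

def embed(code_text, vocab):
    """Toy embedding: normalized token counts over a fixed vocabulary."""
    counts = Counter(code_text.split())
    vec = [counts.get(tok, 0) for tok in vocab]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def prototypes(support, vocab):
    """Average the embeddings of the few labeled examples for each feedback label."""
    grouped = defaultdict(list)
    for code_text, label in support:
        grouped[label].append(embed(code_text, vocab))
    return {label: [sum(col) / len(vecs) for col in zip(*vecs)]
            for label, vecs in grouped.items()}

def classify(code_text, protos, vocab):
    """Assign the feedback label whose prototype has the highest dot product."""
    q = embed(code_text, vocab)
    return max(protos, key=lambda label: sum(a * b for a, b in zip(q, protos[label])))

vocab = ["for", "while", "if", "return", "print", "range", "def"]
support = [
    ("for i in range(10): print(i)", "looks_correct"),
    ("while True: print(i)", "check_loop_termination"),
]  # in practice, roughly 20 instructor-labeled examples per new assignment
protos = prototypes(support, vocab)
print(classify("while True: x = x + 1", protos, vocab))   # check_loop_termination

In a real system the embeddings would come from a neural encoder trained across many past assignments, which is what lets a small number of labeled examples on a brand-new question go a long way.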
The researchers crafted the language used by the tool very carefully, aiming to help students grow rather than simply grading their work as right or wrong. The group credited "the human in the loop" and their focus on human involvement during development as essential to the positive reception of the AI tool.
Students in Code In Place were able to rate their satisfaction with the feedback they received, but without knowing whether the AI or their instructor had provided it. The AI tool learned from human feedback on just 10% of the total assignments and reviewed the remaining ones with 98% student satisfaction.
"The students rated the AI feedback a little bit more positively than human feedback, despite the fact that they're both as constructive and that they're both identifying the same number of errors. It's just when the AI gave constructive feedback, it tended to be more accurate," noted Piech.
Thinking of the future of online education and machine learning for education, the researchers are excited about the possibilities of their work.
"This is bigger than just this one online course and introductory computer science courses," said Finn. "I think that the impact here lies substantially in making this sort of education more scalable and more accessible as a whole." |
|||
42 | First Test of Europe's Space Brain | The European Space Agency (ESA) operated a spacecraft successfully using a next-generation mission control system. Current missions are being converted to the European Ground System-Common Core (EGS-CC), which will function as the "brain" of all European spaceflight operations by 2025. Freely available to all European entities, the EGS-CC was used to monitor and control ESA's OPS-SAT Space Lab, a 30-centimeter (12-inch) satellite created to test and validate new mission control techniques and on-board systems. OPS-SAT mission manager Dave Evans said during the test, ESA's European Space Operations Center used the software to send routine commands to the spacecraft and to receive data from the mission. EGOS-CC project manager Klara Widegard said, "This has been a hugely successful validation of this new versatile control system, demonstrating the exciting future of mission control technologies and Europe's leading position in space." | [] | [] | [] | scitechnews | None | None | None | None | The European Space Agency (ESA) operated a spacecraft successfully using a next-generation mission control system. Current missions are being converted to the European Ground System-Common Core (EGS-CC), which will function as the "brain" of all European spaceflight operations by 2025. Freely available to all European entities, the EGS-CC was used to monitor and control ESA's OPS-SAT Space Lab, a 30-centimeter (12-inch) satellite created to test and validate new mission control techniques and on-board systems. OPS-SAT mission manager Dave Evans said during the test, ESA's European Space Operations Center used the software to send routine commands to the spacecraft and to receive data from the mission. EGOS-CC project manager Klara Widegard said, "This has been a hugely successful validation of this new versatile control system, demonstrating the exciting future of mission control technologies and Europe's leading position in space."
|
||||
43 | Malware Developers Turn to 'Exotic' Programming Languages to Thwart Researchers | Malware developers are increasingly turning to unusual or "exotic" programming languages to hamper analysis efforts, researchers say.
According to a new report published by BlackBerry's Research & Intelligence team on Monday, there has been a recent "escalation" in the use of Go (Golang), D (DLang), Nim, and Rust, which are being used more commonly to "try to evade detection by the security community, or address specific pain-points in their development process."
In particular, malware developers are experimenting with loaders and droppers written in these languages, created to be suitable for first and further-stage malware deployment in an attack chain.
BlackBerry's team says that first-stage droppers and loaders are becoming more common as a way to avoid detection on a target endpoint; once this first stage has circumvented existing security controls that can detect more typical forms of malicious code, it is used to decode, load, and deploy further malware, including Trojans.
Commodity malware cited in the report includes the Remote Access Trojans (RATs) Remcos and NanoCore. In addition, Cobalt Strike beacons are often deployed.
Some developers with more resources at their disposal, however, are rewriting their malware entirely in new languages, one example being the rewrite of Buer as RustyBuer.
Based on current trends, the cybersecurity researchers say that Go is of particular interest to the cybercriminal community.
According to BlackBerry, both advanced persistent threat (APT) state-sponsored groups and commodity malware developers are taking a serious interest in the programming language to upgrade their arsenals. In June, CrowdStrike said a new ransomware variant borrowed features from HelloKitty/DeathRansom and FiveHands, but used a Go packer to encrypt its main payload.
"This assumption is based upon the fact that new Go-based samples are now appearing on a semi-regular basis, including malware of all types, and targeting all major operating systems across multiple campaigns," the team says.
While not as popular as Go, DLang, too, has experienced a slow uptick in adoption throughout 2021.
The researchers say that by using new or more unusual programming languages, malware developers may hamper reverse-engineering efforts, evade signature-based detection tools, and improve cross-compatibility across target systems. The codebase itself may also add a layer of concealment, without any further effort from the malware developer, simply because of the language in which it is written.
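On the defensive side, one coarse triage step is simply working out which toolchain produced a binary. The minimal sketch below is not taken from the BlackBerry report; it assumes only that Go toolchains embed recognizable strings such as "Go build ID:" and "go1." version markers, and any hit should be treated as a hint for an analyst, not a verdict.

# Heuristic triage: flag files that contain strings commonly embedded by Go toolchains.
import sys

GO_MARKERS = [b"Go build ID:", b"Go buildinf:", b"go1."]

def looks_like_go_binary(path, chunk_size=1 << 20):
    """Return True if any known Go toolchain marker appears anywhere in the file."""
    tail = b""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                return False
            window = tail + chunk   # small overlap so markers split across chunks are not missed
            if any(marker in window for marker in GO_MARKERS):
                return True
            tail = chunk[-32:]

if __name__ == "__main__":
    for sample in sys.argv[1:]:
        verdict = "possible Go binary" if looks_like_go_binary(sample) else "no Go markers found"
        print(f"{sample}: {verdict}")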
"Malware authors are known for their ability to adapt and modify their skills and behaviors to take advantage of newer technologies," commented Eric Milam, VP of Threat Research at BlackBerry. "This has multiple benefits from the development cycle and inherent lack of coverage from protective solutions. It is critical that industry and customers understand and keep tabs on these trends, as they are only going to increase."
Have a tip? Get in touch securely via WhatsApp | Signal at +447713 025 499, or over at Keybase: charlie0 | Cybersecurity service provider BlackBerry's Research & Intelligence team has found that malware developers are increasingly employing "exotic" coding languages to foil analysis. A report published by the team cited an "escalation" in the use of Go (Golang), D (DLang), Nim, and Rust to "try to evade detection by the security community, or address specific pain-points in their development process." Malware authors are experimenting with first-stage droppers and loaders written in these languages to evade detection on a target endpoint; once the malware has bypassed existing security controls that can identify more typical forms of malicious code, they are used for decoding, loading, and deploying malware. The researchers said cybercriminals' use of exotic programming languages could impede reverse engineering, circumvent signature-based detection tools, and enhance cross-compatibility over target systems. | [] | [] | [] | scitechnews | None | None | None | None | Cybersecurity service provider BlackBerry's Research & Intelligence team has found that malware developers are increasingly employing "exotic" coding languages to foil analysis. A report published by the team cited an "escalation" in the use of Go (Golang), D (DLang), Nim, and Rust to "try to evade detection by the security community, or address specific pain-points in their development process." Malware authors are experimenting with first-stage droppers and loaders written in these languages to evade detection on a target endpoint; once the malware has bypassed existing security controls that can identify more typical forms of malicious code, they are used for decoding, loading, and deploying malware. The researchers said cybercriminals' use of exotic programming languages could impede reverse engineering, circumvent signature-based detection tools, and enhance cross-compatibility over target systems.
Malware developers are increasingly turning to unusual or "exotic" programming languages to hamper analysis efforts, researchers say.
According to a new report published by BlackBerry's Research & Intelligence team on Monday, there has been a recent "escalation" in the use of Go (Golang), D (DLang), Nim, and Rust, which are being used more commonly to "try to evade detection by the security community, or address specific pain-points in their development process."
In particular, malware developers are experimenting with loaders and droppers written in these languages, created to be suitable for first and further-stage malware deployment in an attack chain.
BlackBerry's team says that first-stage droppers and loaders are becoming more common as a way to avoid detection on a target endpoint; once this first stage has circumvented existing security controls that can detect more typical forms of malicious code, it is used to decode, load, and deploy further malware, including Trojans.
Commodity malware cited in the report includes the Remote Access Trojans (RATs) Remcos and NanoCore. In addition, Cobalt Strike beacons are often deployed.
Some developers with more resources at their disposal, however, are rewriting their malware entirely in new languages, one example being the rewrite of Buer as RustyBuer.
Based on current trends, the cybersecurity researchers say that Go is of particular interest to the cybercriminal community.
According to BlackBerry, both advanced persistent threat (APT) state-sponsored groups and commodity malware developers are taking a serious interest in the programming language to upgrade their arsenals. In June, CrowdStrike said a new ransomware variant borrowed features from HelloKitty/DeathRansom and FiveHands, but used a Go packer to encrypt its main payload.
"This assumption is based upon the fact that new Go-based samples are now appearing on a semi-regular basis, including malware of all types, and targeting all major operating systems across multiple campaigns," the team says.
While not as popular as Go, DLang, too, has experienced a slow uptick in adoption throughout 2021.
The researchers say that by using new or more unusual programming languages, malware developers may hamper reverse-engineering efforts, evade signature-based detection tools, and improve cross-compatibility across target systems. The codebase itself may also add a layer of concealment, without any further effort from the malware developer, simply because of the language in which it is written.
"Malware authors are known for their ability to adapt and modify their skills and behaviors to take advantage of newer technologies," commented Eric Milam, VP of Threat Research at BlackBerry. "This has multiple benefits from the development cycle and inherent lack of coverage from protective solutions. It is critical that industry and customers understand and keep tabs on these trends, as they are only going to increase."
|||
44 | Supercomputer-Generated Models Provide Better Understanding of Esophageal Disorders | Gastroesophageal reflux disease, more commonly known as GERD, impacts around 20 percent of U.S. citizens, according to the National Institutes of Health. If left untreated, GERD can lead to serious medical issues and sometimes esophageal cancer. Thanks to supercomputers, advances in imaging the swallowing process of GERD patients have been modelled on Comet at the San Diego Supercomputer Center ( SDSC ) at UC San Diego and Bridges-2 at the Pittsburgh Supercomputing Center ( PSC ).
Northwestern University researchers from the McCormick School of Engineering and the Feinberg School of Medicine teamed up and recently published these novel models in Biomechanics and Modeling in Mechanobiology . Their work resulted in a new computational modeling system, called FluoroMech, that could help identify physio-markers for early and accurate diagnosis of esophageal pathologies.
"While nearly 60 percent of the adult population experiences some form of GERD each year, there are 500,000 new cases of esophageal cancer diagnosed each year worldwide, with a projected 850,000 new cases/year by 2030," said Neelesh Patankar, a professor of mechanical engineering at Northwestern and the study's senior author. "In the U.S., there are nearly 17,000 diagnoses per year, accounting for about one percent of cancer diagnoses, but less than 20 percent of these patients survive at least five years and the primary curative treatment is esophagectomy."
Patankar said that staggering statistics such as these for esophageal disorders in general inspired the team of engineering and medical researchers to create an interdisciplinary study, which resulted in FluoroMech, a technique that complements common non-invasive medical imaging methods to quantitatively assess the mechanical health of the esophagus.
Why It's Important
Mechanical properties, such as elasticity, of the esophageal wall and its relaxation during swallowing have been shown to play a critical role in the functioning of the esophagus; thus, these qualities have been considered as indicators, or physio-markers, that explain the health of the organ. Because diagnostic techniques that determine these physical quantities are lacking, the Northwestern research team developed the FluoroMech computational technique to help clinicians have more thorough and accurate information about each patient's esophagus.
Specifically, FluoroMech was designed to predict mechanical properties of soft tubular organs such as the esophagus by analyzing medical images and video from fluoroscopy (real-time X-ray imaging) of the esophagus during a swallowing process. The new technique has also been developed to enable the clinician to quantify the relaxation of the esophageal muscles during the passage of food.
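To make the image-analysis step concrete, here is an illustrative sketch only, not the FluoroMech pipeline: given binary segmentation masks of the bolus in successive fluoroscopy frames, one can estimate the lumen width along the esophageal axis and track how it changes over time. The frame data, pixel spacing, and the segmentation itself are assumed inputs here.

# Toy post-processing of segmented fluoroscopy frames (all data invented).
import numpy as np

def width_profile(mask, pixel_mm=0.5):
    """Per-row bolus width in mm for one frame; rows approximate the esophageal axis."""
    return mask.sum(axis=1) * pixel_mm

def peak_distension_over_time(masks, pixel_mm=0.5):
    """Maximum lumen width in each frame, a crude proxy for wall distension."""
    return np.array([width_profile(m, pixel_mm).max() for m in masks])

# Toy data: a "bolus" widening from 4 to 8 pixels across three 64x64 frames.
frames = []
for w in (4, 6, 8):
    m = np.zeros((64, 64), dtype=np.uint8)
    m[20:40, 32 - w // 2: 32 + w // 2] = 1
    frames.append(m)

print(peak_distension_over_time(frames))   # [2. 3. 4.] mm with 0.5 mm pixels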
"An understanding of mechanisms underlying human pathophysiology requires knowledge of both biochemical and biomechanical function of organs - yet the use of biomechanics has not advanced on par with biochemistry or even imaging techniques like MRIs and X-rays," said Patankar. "This is particularly true in the case of esophageal disorders, and we aim to shift the existing paradigm of studying disease pathogenesis of organs toward a biomechanics-based approach that harnesses information about how mechanical properties of organs alter physiology."
How Supercomputers Helped
"In our biomechanics problem, the entire system had almost five million unknowns to solve at each time step and there are large numbers of time steps that needed to be solved - simulations of this magnitude required advanced supercomputers to obtain results in a reasonable time, which is often five to seven days for each model," explained Sourav Halder, a Northwestern doctorate student and the study's lead author. "Without the availability of these computational resources, it would not have been possible to simulate such systems."
These recent supercomputer-enabled models furnished the team with the tools to accomplish its lofty goal of creating FluoroMech. Prior to the development of this technique, it was not possible to quantify the mechanical health of the esophagus or, perhaps most importantly, to perform patient-specific predictive modeling. Halder said this was previously out of reach because automated, machine learning-based image segmentation and the complex physics-based calculations of esophageal function were both lacking. With the help of supercomputers that are part of the National Science Foundation Extreme Science and Engineering Discovery Environment (XSEDE) as well as the education and training team at SDSC, Halder and colleagues were able to build out their FluoroMech technique.
Halder applauded the SDSC support team for its help in setting up proprietary software to enable efficient prototyping of simulation models and the ability to generate massive amounts of data for machine learning models. He said that the SDSC team also offered educational seminars on GPU Computing, HPC in Data Science and high-level programming frameworks like OpenACC. "We are extremely thankful to the SDSC education and training team for regularly conducting these workshops that are extremely accessible to everyone in attendance irrespective of their computational background," he said.
What's Next?
"While FluoroMech uses fluoroscopy data to predict esophageal wall properties as well as the functioning of the esophagus in terms of estimating active relaxation of the muscle walls, we have recently extended our work with another diagnostic device called EndoFLIP (Endolumenal Functional Lumen Imaging Probe) to predict mechanics-based physio-markers such as the esophageal wall contraction strength, active relaxation and wall elastic properties," said Patankar.
Using EndoFLIP data from a large cohort of subjects and the predictions from the FluoroMech model, the team has started to develop a virtual disease landscape (VDL). The VDL is a parameter space in which subjects with different esophageal disorders cluster into different regions. The locations of the clusters relative to each other are designed to represent similarities and differences between the modes of bolus transport through the esophagus. Prototypes of the VDL concept have already shown how it can provide a fundamental understanding of the underlying physics of various esophageal disorders.
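A hypothetical illustration of that landscape idea follows; the features, units, and the choice of plain k-means are assumptions of this sketch, not the authors' method. Each subject is represented by a few mechanics-based physio-markers and grouped so that similar bolus-transport behavior lands in the same region.

# Toy "virtual disease landscape": cluster synthetic subjects in physio-marker space.
import numpy as np

rng = np.random.default_rng(1)
# Columns: contraction strength, active relaxation, wall stiffness (made-up units).
subjects = np.vstack([
    rng.normal([1.0, 0.8, 0.3], 0.05, size=(20, 3)),   # one synthetic phenotype
    rng.normal([0.4, 0.2, 0.9], 0.05, size=(20, 3)),   # another synthetic phenotype
])

def kmeans(x, k=2, iters=50):
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1), axis=1)
        centers = np.array([x[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
                            for j in range(k)])
    return labels, centers

labels, centers = kmeans(subjects)
print("cluster sizes:", np.bincount(labels))
print("cluster centers (one region per disorder-like group):")
print(centers.round(2))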
"Thanks to XSEDE allocations, we have already been able to develop improved models for gastric peristalsis, illustrate muscular activity in the stomach and simulate acid reflux," said Halder. "Newer clusters like Expanse at SDSC were built with double the amount of RAM on each node and nearly five times the number of cores compared to Comet , so this leap in computational power lets us plan for advanced models without seeing a substantial increase in time required for computation."
Support for this research was provided by grants from Public Health Service (R01-DK079902 and P01-DK117824) and the National Science Foundation (NSF) (OAC 1450374 and OAC 1931372). Computational resources were provided by Northwestern University's Quest High Performance Computing Cluster and the Extreme Science and Engineering Discovery Environment (XSEDE) through allocation TG-ASC170023, which is supported by NSF (ACI-1548562). It also used SDSC's Comet , which is supported by NSF (ACI-1548562) and the Bridges-2 system at PSC, which is supported by NSF (ACI-1928147).
About SDSC
The San Diego Supercomputer Center (SDSC) is a leader and pioneer in high-performance and data-intensive computing, providing cyberinfrastructure resources, services and expertise to the national research community, academia and industry. Located on the UC San Diego campus, SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from astrophysics and earth sciences to disease research and drug discovery. SDSC's newest National Science Foundation-funded supercomputer, Expanse , supports SDSC's theme of "Computing without Boundaries" with a data-centric architecture, public cloud integration and state-of-the art GPUs for incorporating experimental facilities and edge computing.
About PSC
The Pittsburgh Supercomputing Center (PSC) is a joint computational research center of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry and is a leading partner in XSEDE, the National Science Foundation cyber infrastructure program. PSC provides university, government and industrial researchers with access to several of the most powerful systems for high-performance computing, communications and data storage available to scientists and engineers nationwide for unclassified research. PSC advances the state of the art in high-performance computing, communications and data analytics and offers a flexible environment for solving the largest and most challenging problems in computational science. | Northwestern University researchers used the San Diego Supercomputing Center (SDSC) 's Comet and the Pittsburgh Supercomputing Center's Bridges-2 supercomputers to simulate the swallowing mechanism of gastroesophageal reflux disease, which affects about 20% of U.S. citizens. Their research yielded the FluoroMech computational modeling system, which helps recognize physio-markers for accurately diagnosing esophageal disorders. The team engineered FluoroMech to predict the mechanical processes of soft tubular organs like the esophagus by analyzing fluoroscopic images and video during swallowing. Northwestern's Sourav Halder said the simulations would have been impossible to create in a reasonable amount of time without supercomputers to handle "almost five million unknowns to solve at each time step" and "large numbers of time steps that needed to be solved." Even with supercomputers, Halder said, each model took five to seven days to generate results. | [] | [] | [] | scitechnews | None | None | None | None | Northwestern University researchers used the San Diego Supercomputing Center (SDSC) 's Comet and the Pittsburgh Supercomputing Center's Bridges-2 supercomputers to simulate the swallowing mechanism of gastroesophageal reflux disease, which affects about 20% of U.S. citizens. Their research yielded the FluoroMech computational modeling system, which helps recognize physio-markers for accurately diagnosing esophageal disorders. The team engineered FluoroMech to predict the mechanical processes of soft tubular organs like the esophagus by analyzing fluoroscopic images and video during swallowing. Northwestern's Sourav Halder said the simulations would have been impossible to create in a reasonable amount of time without supercomputers to handle "almost five million unknowns to solve at each time step" and "large numbers of time steps that needed to be solved." Even with supercomputers, Halder said, each model took five to seven days to generate results.
Gastroesophageal reflux disease, more commonly known as GERD, impacts around 20 percent of U.S. citizens, according to the National Institutes of Health. If left untreated, GERD can lead to serious medical issues and sometimes esophageal cancer. Thanks to supercomputers, advances in imaging the swallowing process of GERD patients have been modelled on Comet at the San Diego Supercomputer Center ( SDSC ) at UC San Diego and Bridges-2 at the Pittsburgh Supercomputing Center ( PSC ).
Northwestern University researchers from the McCormick School of Engineering and the Feinberg School of Medicine teamed up and recently published these novel models in Biomechanics and Modeling in Mechanobiology . Their work resulted in a new computational modeling system, called FluoroMech, that could help identify physio-markers for early and accurate diagnosis of esophageal pathologies.
"While nearly 60 percent of the adult population experiences some form of GERD each year, there are 500,000 new cases of esophageal cancer diagnosed each year worldwide, with a projected 850,000 new cases/year by 2030," said Neelesh Patankar, a professor of mechanical engineering at Northwestern and the study's senior author. "In the U.S., there are nearly 17,000 diagnoses per year, accounting for about one percent of cancer diagnoses, but less than 20 percent of these patients survive at least five years and the primary curative treatment is esophagectomy."
Patankar said that staggering statistics such as these for esophageal disorders in general inspired the team of engineering and medical researchers to create an interdisciplinary study, which resulted in FluoroMech, a technique that complements common non-invasive medical imaging methods to quantitatively assess the mechanical health of the esophagus.
Why It's Important
Mechanical properties, such as elasticity, of the esophageal wall and its relaxation during swallowing have been shown to play a critical role in the functioning of the esophagus; thus, these qualities have been considered as indicators, or physio-markers, that explain the health of the organ. Because diagnostic techniques that determine these physical quantities are lacking, the Northwestern research team developed the FluoroMech computational technique to help clinicians have more thorough and accurate information about each patient's esophagus.
Specifically, FluoroMech was designed to predict mechanical properties of soft tubular organs such as the esophagus by analyzing medical images and video from fluoroscopy (real-time X-ray imaging) of the esophagus during a swallowing process. The new technique has also been developed to enable the clinician to quantify the relaxation of the esophageal muscles during the passage of food.
"An understanding of mechanisms underlying human pathophysiology requires knowledge of both biochemical and biomechanical function of organs - yet the use of biomechanics has not advanced on par with biochemistry or even imaging techniques like MRIs and X-rays," said Patankar. "This is particularly true in the case of esophageal disorders, and we aim to shift the existing paradigm of studying disease pathogenesis of organs toward a biomechanics-based approach that harnesses information about how mechanical properties of organs alter physiology."
How Supercomputers Helped
"In our biomechanics problem, the entire system had almost five million unknowns to solve at each time step and there are large numbers of time steps that needed to be solved - simulations of this magnitude required advanced supercomputers to obtain results in a reasonable time, which is often five to seven days for each model," explained Sourav Halder, a Northwestern doctorate student and the study's lead author. "Without the availability of these computational resources, it would not have been possible to simulate such systems."
These recent supercomputer-enabled models furnished the team with the tools to accomplish its lofty goal of creating FluoroMech. Prior to the development of this technique, it was not possible to quantify the mechanical health of the esophagus or, perhaps most importantly, to perform patient-specific predictive modeling. Halder said this was previously out of reach because automated, machine learning-based image segmentation and the complex physics-based calculations of esophageal function were both lacking. With the help of supercomputers that are part of the National Science Foundation Extreme Science and Engineering Discovery Environment (XSEDE) as well as the education and training team at SDSC, Halder and colleagues were able to build out their FluoroMech technique.
Halder applauded the SDSC support team for its help in setting up proprietary software to enable efficient prototyping of simulation models and the ability to generate massive amounts of data for machine learning models. He said that the SDSC team also offered educational seminars on GPU Computing, HPC in Data Science and high-level programming frameworks like OpenACC. "We are extremely thankful to the SDSC education and training team for regularly conducting these workshops that are extremely accessible to everyone in attendance irrespective of their computational background," he said.
What's Next?
"While FluoroMech uses fluoroscopy data to predict esophageal wall properties as well as the functioning of the esophagus in terms of estimating active relaxation of the muscle walls, we have recently extended our work with another diagnostic device called EndoFLIP (Endolumenal Functional Lumen Imaging Probe) to predict mechanics-based physio-markers such as the esophageal wall contraction strength, active relaxation and wall elastic properties," said Patankar.
Using EndoFLIP data from a large cohort of subjects and the predictions from the FluoroMech model, the team has started to develop a virtual disease landscape (VDL). The VDL is a parameter space in which subjects with different esophageal disorders cluster into different regions. The locations of the clusters relative to each other are designed to represent similarities and differences between the modes of bolus transport through the esophagus. Prototypes of the VDL concept have already shown how it can provide a fundamental understanding of the underlying physics of various esophageal disorders.
"Thanks to XSEDE allocations, we have already been able to develop improved models for gastric peristalsis, illustrate muscular activity in the stomach and simulate acid reflux," said Halder. "Newer clusters like Expanse at SDSC were built with double the amount of RAM on each node and nearly five times the number of cores compared to Comet , so this leap in computational power lets us plan for advanced models without seeing a substantial increase in time required for computation."
Support for this research was provided by grants from Public Health Service (R01-DK079902 and P01-DK117824) and the National Science Foundation (NSF) (OAC 1450374 and OAC 1931372). Computational resources were provided by Northwestern University's Quest High Performance Computing Cluster and the Extreme Science and Engineering Discovery Environment (XSEDE) through allocation TG-ASC170023, which is supported by NSF (ACI-1548562). It also used SDSC's Comet , which is supported by NSF (ACI-1548562) and the Bridges-2 system at PSC, which is supported by NSF (ACI-1928147).
About SDSC
The San Diego Supercomputer Center (SDSC) is a leader and pioneer in high-performance and data-intensive computing, providing cyberinfrastructure resources, services and expertise to the national research community, academia and industry. Located on the UC San Diego campus, SDSC supports hundreds of multidisciplinary programs spanning a wide variety of domains, from astrophysics and earth sciences to disease research and drug discovery. SDSC's newest National Science Foundation-funded supercomputer, Expanse , supports SDSC's theme of "Computing without Boundaries" with a data-centric architecture, public cloud integration and state-of-the art GPUs for incorporating experimental facilities and edge computing.
About PSC
The Pittsburgh Supercomputing Center (PSC) is a joint computational research center of Carnegie Mellon University and the University of Pittsburgh. Established in 1986, PSC is supported by several federal agencies, the Commonwealth of Pennsylvania and private industry and is a leading partner in XSEDE, the National Science Foundation cyber infrastructure program. PSC provides university, government and industrial researchers with access to several of the most powerful systems for high-performance computing, communications and data storage available to scientists and engineers nationwide for unclassified research. PSC advances the state of the art in high-performance computing, communications and data analytics and offers a flexible environment for solving the largest and most challenging problems in computational science. |
|||
45 | Smartphone Screens Sense Soil, Water Contamination | Researchers from the University of Cambridge have demonstrated how a typical touchscreen could be used to identify common ionic contaminants in soil or drinking water by dropping liquid samples on the screen, the first time this has been achieved. The sensitivity of the touchscreen sensor is comparable to typical lab-based equipment, which would make it useful in low-resource settings.
The researchers say their proof of concept could one day be expanded for a wide range of sensing applications, including for biosensing or medical diagnostics, right from the phone in your pocket. The results are reported in the journal Sensors and Actuators B .
Touchscreen technology is ubiquitous in our everyday lives: the screen on a typical smartphone is covered in a grid of electrodes, and when a finger disrupts the local electric field of these electrodes, the phone interprets the signal.
Other teams have used the computational power of a smartphone for sensing applications, but these have relied on the camera or peripheral devices, or have required significant changes to be made to the screen.
"We wanted to know if we could interact with the technology in a different way, without having to fundamentally change the screen," said Dr Ronan Daly from Cambridge's Institute of Manufacturing, who co-led the research. "Instead of interpreting a signal from your finger, what if we could get a touchscreen to read electrolytes, since these ions also interact with the electric fields?"
The researchers started with computer simulations, and then validated their simulations using a stripped down, standalone touchscreen, provided by two UK manufacturers, similar to those used in phones and tablets.
The researchers pipetted different liquids onto the screen to measure a change in capacitance and recorded the measurements from each droplet using the standard touchscreen testing software. Ions in the fluids all interact with the screen's electric fields differently depending on the concentration of ions and their charge.
"Our simulations showed where the electric field interacts with the fluid droplet. In our experiments, we then found a linear trend for a range of electrolytes measured on the touchscreen," said first author Sebastian Horstmann, a PhD candidate at IfM. "The sensor saturates at an anion concentration of around 500 micromolar, which can be correlated to the conductivity measured alongside. This detection window is ideal to sense ionic contamination in drinking water."
One early application for the technology could be to detect arsenic contamination in drinking water. Arsenic is a common groundwater contaminant in many parts of the world, but most municipal water systems screen for it and filter it out before it reaches a household tap. In parts of the world without water treatment plants, however, arsenic contamination is a serious problem.
"In theory, you could add a drop of water to your phone before you drink it, in order to check that it's safe," said Daly.
At the moment, the sensitivity of phone and tablet screens is tuned for fingers, but the researchers say the sensitivity could be changed in a certain part of the screen by modifying the electrode design in order to be optimised for sensing.
"The phone's software would need to communicate with that part of the screen to deliver the optimum electric field and be more sensitive for the target ion, but this is achievable," said Professor Lisa Hall from Cambridge's Department of Chemical Engineering and Biotechnology, who co-led the research. "We're keen to do much more on this - it's just the first step."
While it's now possible to detect ions using a touchscreen, the researchers hope to further develop the technology so that it can detect a wide range of molecules. This could open up a huge range of potential health applications.
"For example, if we could get the sensitivity to a point where the touchscreen could detect heavy metals, it could be used to test for things like lead in drinking water. We also hope in the future to deliver sensors for home health monitoring," said Daly.
"This is a starting point for broader exploration of the use of touchscreen sensing in mobile technologies and the creation of tools that are accessible to everyone, allowing rapid measurements and communication of data," said Hall.
Reference: Sebastian Horstmann, Cassi J Henderson, Elizabeth A H Hall, Ronan Daly ' Capacitive touchscreen sensing - a measure of electrolyte conductivity .' Sensors and Actuators B (2021). DOI: https://doi.org/10.1016/j.snb.2021.130318 | A smartphone touchscreen could identify common ionic contaminants in soil or drinking water, according to researchers at the U.K.'s University of Cambridge. The team ran simulations, then validated them using a standalone touchscreen similar to those used in phones and tablets. The researchers deposited different liquids onto the screen to measure shifts in capacitance, and used touchscreen testing software to record the measurement of each droplet. An early use for the technique could be to detect arsenic in drinking water. Cambridge's Ronan Daly said, "In theory, you could add a drop of water to your phone before you drink it, in order to check that it's safe." | [] | [] | [] | scitechnews | None | None | None | None | A smartphone touchscreen could identify common ionic contaminants in soil or drinking water, according to researchers at the U.K.'s University of Cambridge. The team ran simulations, then validated them using a standalone touchscreen similar to those used in phones and tablets. The researchers deposited different liquids onto the screen to measure shifts in capacitance, and used touchscreen testing software to record the measurement of each droplet. An early use for the technique could be to detect arsenic in drinking water. Cambridge's Ronan Daly said, "In theory, you could add a drop of water to your phone before you drink it, in order to check that it's safe."
Researchers from the University of Cambridge have demonstrated how a typical touchscreen could be used to identify common ionic contaminants in soil or drinking water by dropping liquid samples on the screen, the first time this has been achieved. The sensitivity of the touchscreen sensor is comparable to typical lab-based equipment, which would make it useful in low-resource settings.
The researchers say their proof of concept could one day be expanded for a wide range of sensing applications, including for biosensing or medical diagnostics, right from the phone in your pocket. The results are reported in the journal Sensors and Actuators B .
Touchscreen technology is ubiquitous in our everyday lives: the screen on a typical smartphone is covered in a grid of electrodes, and when a finger disrupts the local electric field of these electrodes, the phone interprets the signal.
Other teams have used the computational power of a smartphone for sensing applications, but these have relied on the camera or peripheral devices, or have required significant changes to be made to the screen.
"We wanted to know if we could interact with the technology in a different way, without having to fundamentally change the screen," said Dr Ronan Daly from Cambridge's Institute of Manufacturing, who co-led the research. "Instead of interpreting a signal from your finger, what if we could get a touchscreen to read electrolytes, since these ions also interact with the electric fields?"
The researchers started with computer simulations, and then validated their simulations using a stripped down, standalone touchscreen, provided by two UK manufacturers, similar to those used in phones and tablets.
The researchers pipetted different liquids onto the screen to measure a change in capacitance and recorded the measurements from each droplet using the standard touchscreen testing software. Ions in the fluids all interact with the screen's electric fields differently depending on the concentration of ions and their charge.
"Our simulations showed where the electric field interacts with the fluid droplet. In our experiments, we then found a linear trend for a range of electrolytes measured on the touchscreen," said first author Sebastian Horstmann, a PhD candidate at IfM. "The sensor saturates at an anion concentration of around 500 micromolar, which can be correlated to the conductivity measured alongside. This detection window is ideal to sense ionic contamination in drinking water."
One early application for the technology could be to detect arsenic contamination in drinking water. Arsenic is a common groundwater contaminant in many parts of the world, but most municipal water systems screen for it and filter it out before it reaches a household tap. In parts of the world without water treatment plants, however, arsenic contamination is a serious problem.
"In theory, you could add a drop of water to your phone before you drink it, in order to check that it's safe," said Daly.
At the moment, the sensitivity of phone and tablet screens is tuned for fingers, but the researchers say the sensitivity could be changed in a certain part of the screen by modifying the electrode design in order to be optimised for sensing.
"The phone's software would need to communicate with that part of the screen to deliver the optimum electric field and be more sensitive for the target ion, but this is achievable," said Professor Lisa Hall from Cambridge's Department of Chemical Engineering and Biotechnology, who co-led the research. "We're keen to do much more on this - it's just the first step."
While it's now possible to detect ions using a touchscreen, the researchers hope to further develop the technology so that it can detect a wide range of molecules. This could open up a huge range of potential health applications.
"For example, if we could get the sensitivity to a point where the touchscreen could detect heavy metals, it could be used to test for things like lead in drinking water. We also hope in the future to deliver sensors for home health monitoring," said Daly.
"This is a starting point for broader exploration of the use of touchscreen sensing in mobile technologies and the creation of tools that are accessible to everyone, allowing rapid measurements and communication of data," said Hall.
Reference: Sebastian Horstmann, Cassi J Henderson, Elizabeth A H Hall, Ronan Daly, 'Capacitive touchscreen sensing - a measure of electrolyte conductivity.' Sensors and Actuators B (2021). DOI: https://doi.org/10.1016/j.snb.2021.130318
|||
46 | Disagreement May Make Online Content Spread Faster, Further | Disagreement seems to spread online posts faster and further than agreement, according to a new study from the University of Central Florida.
The finding comes from an examination of posts labeled controversial on social news aggregation site Reddit. To perform the study, the researchers analyzed more than 47,000 posts about cybersecurity in a Reddit dataset that was collected by the Computational Simulation of Online Social Behavior (SocialSim) program of the U.S. Defense Advanced Research Projects Agency.
Researchers found that these posts were seen by nearly twice the number of people and traveled nearly twice as fast when compared to posts not labeled controversial. The findings were published recently in the Journal of Computational Social Science .
Reddit is one of the most visited websites in the U.S. A post is labeled controversial by a Reddit algorithm if it receives a certain number of polarized views, or a moderator can label a post with any number of comments as controversial.
The posts analyzed in the study included topics that wouldn't be considered traditionally controversial but were labeled as so by Reddit, such as a personal computer giveaway offer.
The research is important because it shows that disagreement may be a powerful way to get people to pay attention to messages, says study co-author Ivan Garibay, an associate professor in UCF's Department of Industrial Engineering and Management Systems .
However, he advises caution to those inducing disagreement in their social media posts.
"There may be an incentive in terms of influence and audience size for a social media user to consistently include controversial and provocative topics on their posts," Garibay says. "This benefits the person posting the messages. However, controversial comments can be divisive, which could contribute to a polarized audience and society."
Reddit's definition of a controversial post, which tends to depend on increasing numbers of both likes and dislikes, is different than the traditional advertiser's definition of a controversial post, which would contain truly provocative or taboo messaging, says Yael Zemack-Rugar, an associate professor in UCF's Department of Marketing .
"To give this idea life, you may like a recent ad for Toyota, and I may not," Zemack-Rugar says. "This will not make it controversial. But if the ad featured Colin Kaepernick, as the Nike ad did in 2018, after he recently refused to recite the national anthem during his games, now we are talking controversial. There is an underlying tone that is much deeper and more meaningful."
Reddit posts are also more akin to word-of-mouth communication since they are user generated and not paid advertising, she says.
The study's findings are consistent with past research that has found that traditional controversy increases the spread of word of mouth and discussions online, especially when contributions are anonymous, as they somewhat are on Reddit, Zemack-Rugar says.
Of the more than 47,000 posts, approximately 23,000 posts were labeled controversial, and about 24,000 were noncontroversial.
The researchers found an association between controversially labeled comments and the collective attention that the audience paid to them.
For the controversial posts, there were more than 60,000 total comments, whereas for the noncontroversial posts there were fewer than 25,000 total comments.
A network analysis examining the reach and speed of the posts showed that nearly twice as many people saw controversial content as noncontroversial content, and that controversial content traveled nearly twice as fast.
The researchers limited posts in their analysis to those that had at least 100 comments.
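The kind of comparison the team describes can be pictured with a toy calculation; the data below are invented and are not the UCF/Reddit dataset. For each group of posts, compute an average reach (unique commenters) and a simple speed proxy.

# Toy reach/speed comparison between controversial and noncontroversial posts.
from statistics import mean

posts = [
    {"controversial": True,  "commenters": {"a", "b", "c", "d"}, "hours_to_100_views": 3.0},
    {"controversial": True,  "commenters": {"b", "c", "e", "f"}, "hours_to_100_views": 2.5},
    {"controversial": False, "commenters": {"a", "g"},           "hours_to_100_views": 6.0},
    {"controversial": False, "commenters": {"h"},                "hours_to_100_views": 7.5},
]

for flag in (True, False):
    group = [p for p in posts if p["controversial"] is flag]
    label = "controversial" if flag else "noncontroversial"
    print(label,
          "avg reach:", mean(len(p["commenters"]) for p in group),
          "avg hours to 100 views:", mean(p["hours_to_100_views"] for p in group))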
Jasser Jasser, a doctoral student in UCF's Department of Computer Science , and the study's lead author, says the findings highlight the need to better understand why the content labeled in Reddit as controversial spreads.
"The next step in this work is to analyze the language used to induce such controversy and why it brings the attention of the social media users," Jasser says.
Study co-authors were Steve Scheinert, a senior solutions specialist with a professional services company, and Alexander V. Mantzaris, an assistant professor in UCF's Department of Statistics and Data Science.
The study was funded with support from the Defense Advanced Research Projects Agency.
Garibay received his doctorate in computer science from UCF. He joined UCF's Department of Industrial Engineering and Management Systems, part of UCF's College of Engineering and Computer Science , in 2016. | An analysis of posts designated as controversial on social news aggregation site Reddit suggests disagreement may cause posts to spread faster, and to a greater extent. Scientists at the University of Central Florida (UCF) reviewed over 47,000 posts about cybersecurity in a Reddit dataset compiled by the U.S. Defense Advanced Research Projects Agency (DARPA) Computational Simulation of Online Social Behavior program. The team found the posts were viewed by nearly twice as many people, and spread nearly twice as fast, as posts not labeled controversial. The posts included subjects not be deemed traditionally controversial, but which were classified that way by Reddit. UCF's Jasser Jasser said these findings emphasize the need to better understand how content labeled in Reddit as controversial proliferates, by analyzing the language used to generate controversy. | [] | [] | [] | scitechnews | None | None | None | None | An analysis of posts designated as controversial on social news aggregation site Reddit suggests disagreement may cause posts to spread faster, and to a greater extent. Scientists at the University of Central Florida (UCF) reviewed over 47,000 posts about cybersecurity in a Reddit dataset compiled by the U.S. Defense Advanced Research Projects Agency (DARPA) Computational Simulation of Online Social Behavior program. The team found the posts were viewed by nearly twice as many people, and spread nearly twice as fast, as posts not labeled controversial. The posts included subjects not be deemed traditionally controversial, but which were classified that way by Reddit. UCF's Jasser Jasser said these findings emphasize the need to better understand how content labeled in Reddit as controversial proliferates, by analyzing the language used to generate controversy.
Disagreement seems to spread online posts faster and further than agreement, according to a new study from the University of Central Florida.
The finding comes from an examination of posts labeled controversial on social news aggregation site Reddit. To perform the study, the researchers analyzed more than 47,000 posts about cybersecurity in a Reddit dataset that was collected by the Computational Simulation of Online Social Behavior (SocialSim) program of the U.S. Defense Advanced Research Projects Agency.
Researchers found that these posts were seen by nearly twice the number of people and traveled nearly twice as fast when compared to posts not labeled controversial. The findings were published recently in the Journal of Computational Social Science .
Reddit is one of the most visited websites in the U.S. A post is labeled controversial by a Reddit algorithm if it receives a certain number of polarized views, or a moderator can label a post with any number of comments as controversial.
The posts analyzed in the study included topics that wouldn't be considered traditionally controversial but were labeled as so by Reddit, such as a personal computer giveaway offer.
The research is important because it shows that disagreement may be a powerful way to get people to pay attention to messages, says study co-author Ivan Garibay, an associate professor in UCF's Department of Industrial Engineering and Management Systems .
However, he advises caution to those inducing disagreement in their social media posts.
"There may be an incentive in terms of influence and audience size for a social media user to consistently include controversial and provocative topics on their posts," Garibay says. "This benefits the person posting the messages. However, controversial comments can be divisive, which could contribute to a polarized audience and society."
Reddit's definition of a controversial post, which tends to depend on increasing numbers of both likes and dislikes, is different than the traditional advertiser's definition of a controversial post, which would contain truly provocative or taboo messaging, says Yael Zemack-Rugar, an associate professor in UCF's Department of Marketing .
"To give this idea life, you may like a recent ad for Toyota, and I may not," Zemack-Rugar says. "This will not make it controversial. But if the ad featured Colin Kaepernick, as the Nike ad did in 2018, after he recently refused to recite the national anthem during his games, now we are talking controversial. There is an underlying tone that is much deeper and more meaningful."
Reddit posts are also more akin to word-of-mouth communication since they are user generated and not paid advertising, she says.
The study's findings are consistent with past research that has found that traditional controversy increases the spread of word of mouth and discussions online, especially when contributions are anonymous, as they somewhat are on Reddit, Zemack-Rugar says.
Of the more than 47,000 posts, approximately 23,000 posts were labeled controversial, and about 24,000 were noncontroversial.
The researchers found an association between controversially labeled comments and the collective attention that the audience paid to them.
For the controversial posts, there were more than 60,000 total comments, whereas for the noncontroversial posts there were fewer than 25,000 total comments.
A network analysis examining the reach and speed of the posts showed that nearly twice as many people saw controversial content as noncontroversial content, and that controversial content traveled nearly twice as fast.
The researchers limited posts in their analysis to those that had at least 100 comments.
Jasser Jasser, a doctoral student in UCF's Department of Computer Science , and the study's lead author, says the findings highlight the need to better understand why the content labeled in Reddit as controversial spreads.
"The next step in this work is to analyze the language used to induce such controversy and why it brings the attention of the social media users," Jasser says.
Study co-authors were Steve Scheinert, a senior solutions specialist with a professional services company, and Alexander V. Mantzaris, an assistant professor in UCF's Department of Statistics and Data Science.
The study was funded with support from the Defense Advanced Research Projects Agency.
Garibay received his doctorate in computer science from UCF. He joined UCF's Department of Industrial Engineering and Management Systems, part of UCF's College of Engineering and Computer Science , in 2016. |
|||
47 | Less Chat Can Help Robots Make Better Decisions | New research that could help us use swarms of robots to tackle forest fires, conduct search and rescue operations at sea and diagnose problems inside the human body, has been published by engineers at the University of Sheffield. | Robot swarms could cooperate more effectively if communication among members of the swarm were curtailed, according to research by an international team led by engineers at the U.K.'s University of Sheffield. The research team analyzed how a swarm moved around and came to internal agreement on the best area to concentrate in and explore. Each robot evaluated the environment individually, made its own decision, and informed the rest of the swarm of its opinion; each unit then chose a random assessment that had been broadcast by another in the swarm to update its opinion on the best location, eventually reaching a consensus. The team found the swarm's environmental adaptation accelerated significantly when robots communicated only to other robots within a 10-centimeter range, rather than broadcasting to the entire group. | [] | [] | [] | scitechnews | None | None | None | None | Robot swarms could cooperate more effectively if communication among members of the swarm were curtailed, according to research by an international team led by engineers at the U.K.'s University of Sheffield. The research team analyzed how a swarm moved around and came to internal agreement on the best area to concentrate in and explore. Each robot evaluated the environment individually, made its own decision, and informed the rest of the swarm of its opinion; each unit then chose a random assessment that had been broadcast by another in the swarm to update its opinion on the best location, eventually reaching a consensus. The team found the swarm's environmental adaptation accelerated significantly when robots communicated only to other robots within a 10-centimeter range, rather than broadcasting to the entire group.
New research that could help us use swarms of robots to tackle forest fires, conduct search and rescue operations at sea and diagnose problems inside the human body, has been published by engineers at the University of Sheffield. |
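A toy sketch of the opinion-sharing rule described in the summary above (not the Sheffield team's code; positions, radius, and update schedule are invented): each robot adopts the opinion of one randomly chosen robot within its communication radius, so shrinking that radius limits who talks to whom.

# Voter-model-style consensus among stationary robots with a limited communication range.
import random

def step(robots, radius):
    """One update: each robot adopts the opinion of a random robot within range."""
    updated = []
    for x, y, opinion in robots:
        neighbors = [o for (nx, ny, o) in robots
                     if 0 < (nx - x) ** 2 + (ny - y) ** 2 <= radius ** 2]
        updated.append((x, y, random.choice(neighbors) if neighbors else opinion))
    return updated

random.seed(0)
# 50 robots scattered in a 1 m x 1 m arena, each initially preferring site A or B.
robots = [(random.random(), random.random(), random.choice("AB")) for _ in range(50)]
for _ in range(200):
    robots = step(robots, radius=0.1)   # roughly a 10 cm communication range

counts = {site: sum(1 for *_, o in robots if o == site) for site in "AB"}
print("opinion counts after 200 local-only updates:", counts)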
|||
49 | Cybersecurity Technique Keeps Hackers Guessing | ADELPHI, Md. -- Army researchers developed a new machine learning-based framework to enhance the security of computer networks inside vehicles without undermining performance.
With modern automobiles increasingly entrusting control to onboard computers, this research looks toward a larger Army effort to invest in greater cybersecurity protection measures for the Army's aerial and land platforms, especially heavy vehicles.
In collaboration with an international team of experts from Virginia Tech, the University of Queensland and Gwangju Institute of Science and Technology, researchers at the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory devised a technique called DESOLATOR to help optimize a well-known cybersecurity strategy known as the moving target defense.
DESOLATOR, which stands for deep reinforcement learning-based resource allocation and moving target defense deployment framework, helps the in-vehicle network identify the optimal IP shuffling frequency and bandwidth allocation to deliver effective, long-term moving target defense.
According to Army computer scientist and program lead Dr. Frederica Free-Nelson, the right shuffling frequency keeps uncertainty high enough to thwart potential attackers without becoming too costly to maintain, while the right bandwidth allocation prevents slowdowns in critical, high-priority areas of the network.
"This level of fortification of prioritized assets on a network is an integral component for any kind of network protection," Nelson said. "The technology facilitates a lightweight protection whereby fewer resources are used for maximized protection. The utility of fewer resources to protect mission systems and connected devices in vehicles while maintaining the same quality of service is an added benefit."
The research team used deep reinforcement learning to gradually shape the behavior of the algorithm based on various reward functions, such as exposure time and the number of dropped packets, to ensure that DESOLATOR took both security and efficiency into equal consideration.
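The reward shaping described above can be illustrated with a toy function that penalizes both long exposure of an IP assignment (a security cost) and dropped packets (a performance cost); the weights and units below are assumptions for illustration, not the values used in DESOLATOR.

```python
def mtd_reward(exposure_time_s: float, dropped_packets: int,
               w_security: float = 1.0, w_performance: float = 0.5) -> float:
    """Toy reward for a moving-target-defense learner.

    Leaving one IP assignment exposed for a long time helps an attacker
    (security cost); shuffling so aggressively that packets are dropped hurts
    the vehicle network (performance cost). The learner is rewarded for
    balancing the two, as the article describes.
    """
    security_penalty = w_security * exposure_time_s
    performance_penalty = w_performance * dropped_packets
    return -(security_penalty + performance_penalty)

# One control interval of a hypothetical shuffling policy:
print(mtd_reward(exposure_time_s=2.5, dropped_packets=3))   # -4.0
```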
"Existing legacy in-vehicle networks are very efficient, but they weren't really designed with security in mind," Moore said. "Nowadays, there's a lot of research out there that looks solely at either enhancing performance or enhancing security. Looking at both performance and security is in itself a little rare, especially for in-vehicle networks."
In addition, DESOLATOR is not limited to identifying the optimal IP shuffling frequency and bandwidth allocation. Since this approach exists as a machine learning-based framework, other researchers can modify the technique to pursue different goals within the problem space.
Researchers detail information about their approach in the research paper, DESOLATER: Deep Reinforcement Learning-Based Resource Allocation and Moving Target Defense Deployment Framework , in the peer-reviewed journal IEEE Access .
As the Army's national research laboratory, ARL is operationalizing science to achieve transformational overmatch. Through collaboration across the command's core technical competencies, DEVCOM leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more successful at winning the nation's wars and come home safely. DEVCOM Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command . DEVCOM is a major subordinate command of the Army Futures Command . | Development Command's Army Research Laboratory (ARL) has designed a machine learning-based framework to augment the security of in-vehicle computer networks. The DESOLATOR (deep reinforcement learning-based resource allocation and moving target defense deployment framework) framework is engineered to help an in-vehicle network identify the optimal Internet Protocol (IP) shuffling frequency and bandwidth allocation to enable effective, long-term moving target defense. Explained ARL's Terrence Moore, "If you shuffle the IP addresses fast enough, then the information assigned to the IP quickly becomes lost, and the adversary has to look for it again." ARL's Frederica Free-Nelson said the framework keeps uncertainty sufficiently high to defeat potential attackers without incurring excessive maintenance costs, and prevents performance slowdowns in high-priority areas of the network. | [] | [] | [] | scitechnews | None | None | None | None | Development Command's Army Research Laboratory (ARL) has designed a machine learning-based framework to augment the security of in-vehicle computer networks. The DESOLATOR (deep reinforcement learning-based resource allocation and moving target defense deployment framework) framework is engineered to help an in-vehicle network identify the optimal Internet Protocol (IP) shuffling frequency and bandwidth allocation to enable effective, long-term moving target defense. Explained ARL's Terrence Moore, "If you shuffle the IP addresses fast enough, then the information assigned to the IP quickly becomes lost, and the adversary has to look for it again." ARL's Frederica Free-Nelson said the framework keeps uncertainty sufficiently high to defeat potential attackers without incurring excessive maintenance costs, and prevents performance slowdowns in high-priority areas of the network.
ADELPHI, Md. -- Army researchers developed a new machine learning-based framework to enhance the security of computer networks inside vehicles without undermining performance.
With modern automobiles increasingly entrusting control to onboard computers, this research looks toward a larger Army effort to invest in greater cybersecurity protection measures for its aerial and land platforms, especially heavy vehicles.
In collaboration with an international team of experts from Virginia Tech, the University of Queensland and Gwangju Institute of Science and Technology, researchers at the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory devised a technique called DESOLATOR to help optimize a well-known cybersecurity strategy known as the moving target defense.
DESOLATOR, which stands for deep reinforcement learning-based resource allocation and moving target defense deployment framework, helps the in-vehicle network identify the optimal IP shuffling frequency and bandwidth allocation to deliver effective, long-term moving target defense.
According to Army computer scientist and program lead Dr. Frederica Free-Nelson, the right shuffling frequency keeps uncertainty high enough to thwart potential attackers without becoming too costly to maintain, while the right bandwidth allocation prevents slowdowns in critical, high-priority areas of the network.
"This level of fortification of prioritized assets on a network is an integral component for any kind of network protection," Nelson said. "The technology facilitates a lightweight protection whereby fewer resources are used for maximized protection. The utility of fewer resources to protect mission systems and connected devices in vehicles while maintaining the same quality of service is an added benefit."
The research team used deep reinforcement learning to gradually shape the behavior of the algorithm based on various reward functions, such as exposure time and the number of dropped packets, to ensure that DESOLATOR took both security and efficiency into equal consideration.
"Existing legacy in-vehicle networks are very efficient, but they weren't really designed with security in mind," Moore said. "Nowadays, there's a lot of research out there that looks solely at either enhancing performance or enhancing security. Looking at both performance and security is in itself a little rare, especially for in-vehicle networks."
In addition, DESOLATOR is not limited to identifying the optimal IP shuffling frequency and bandwidth allocation. Since this approach exists as a machine learning-based framework, other researchers can modify the technique to pursue different goals within the problem space.
Researchers detail information about their approach in the research paper, DESOLATER: Deep Reinforcement Learning-Based Resource Allocation and Moving Target Defense Deployment Framework , in the peer-reviewed journal IEEE Access .
As the Army's national research laboratory, ARL is operationalizing science to achieve transformational overmatch. Through collaboration across the command's core technical competencies, DEVCOM leads in the discovery, development and delivery of the technology-based capabilities required to make Soldiers more successful at winning the nation's wars and come home safely. DEVCOM Army Research Laboratory is an element of the U.S. Army Combat Capabilities Development Command . DEVCOM is a major subordinate command of the Army Futures Command . |
|||
50 | How Olympic Tracking Systems Capture Athletic Performances | This year's Olympic Games may be closed to most spectators because of COVID-19, but the eyes of the world are still on the athletes thanks to dozens of cameras recording every leap, dive and flip. Among all that broadcasting equipment, track-and-field competitors might notice five extra cameras - the first step in a detailed 3-D tracking system that supplies spectators with near-instantaneous insights into each step of a race or handoff of a baton.
And tracking is just the beginning. The technology on display in Tokyo suggests that the future of elite athletic training lies not merely in gathering data about the human body, but in using that data to create digital replicas of it. These avatars could one day run through hypothetical scenarios to help athletes decide which choices will produce the best outcomes.
The tracking system being used in Tokyo, an Intel product called 3DAT, feeds live footage into the cloud. There, an artificial intelligence program uses deep learning to analyze an athlete's movements and identify key performance characteristics such as top speed and deceleration. The system shares that information with viewers by displaying slow-motion graphic representations of the action, highlighting key moments. The whole process, from capturing the footage to broadcasting the analysis, takes less than 30 seconds.
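The published article does not include Intel's code, but the kind of metric it describes - top speed and finishing speed derived from per-frame athlete positions - can be sketched as follows; the sampling rate and the synthetic sprint profile are assumptions for illustration.

```python
import numpy as np

FPS = 60.0   # assumed sampling rate of the tracked positions, not Intel's internal rate

def speed_metrics(positions_m: np.ndarray, fps: float = FPS):
    """positions_m: (n_frames, 2) array of a runner's track coordinates in metres."""
    velocities = np.diff(positions_m, axis=0) * fps        # metres per second
    speeds_mph = np.linalg.norm(velocities, axis=1) * 2.23694
    top_speed = speeds_mph.max()
    finish_speed = speeds_mph[-int(fps):].mean()           # average over the final second
    return top_speed, finish_speed

# Hypothetical 10-second, 100 m sprint with a toy acceleration profile.
t = np.linspace(0.0, 10.0, int(10 * FPS) + 1)
x = 100.0 * (t / 10.0) ** 1.1
positions = np.stack([x, np.zeros_like(x)], axis=1)
print(speed_metrics(positions))
```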
For example, during NBC's broadcast of the 100 meter trials in Eugene, Ore., the AI showed how Sha'Carri Richardson hit 24.1 miles per hour at her peak and slowed to 20.0 mph by the time she reached the finish line. That was enough to win the race: Richardson's runner-up hit a maximum speed of 23.2 miles per hour and slowed to 20.4 mph at the line.
"It 's like having your own personal commentator point things out to you in the race," says Jonathan Lee, director of sports performance technology in the Olympic technology group at Intel.
To train their Olympic AI via machine learning, Lee and his team had to capture as much footage of elite track and field athletes in motion as they could. They needed recordings of human bodies performing specific moves, but the preexisting footage used for similar research shows average people in motion, which would have confused the algorithm, Lee says. "People aren't usually fully horizontal seven feet in the air," he notes, but world-class high jumpers reach such heights regularly.
In the footage, a team at Intel manually annotated every part of the body - eyes, nose, shoulders, and more - pixel by pixel. Once those key points were identified, the model could begin connecting them in three dimensions until it had a simplified rendering of an athlete's form. Tracking this "skeleton" enables the program to perform 3-D pose estimation (a computer vision technique that tracks an object and tries to predict the changes it might undergo in space) on the athlete's body as it moves through an event.
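A minimal sketch of the keypoint "skeleton" idea described above: named joints, the edges that connect them, and a simple measurement over them. The joint names and the 2-D-only measurement are illustrative assumptions; the real system would combine views from its multiple cameras into 3-D before measuring.

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Named joints plus the edges that connect them; the names are illustrative.
SKELETON_EDGES = [
    ("head", "neck"), ("neck", "left_shoulder"), ("neck", "right_shoulder"),
    ("left_shoulder", "left_elbow"), ("left_elbow", "left_wrist"),
    ("right_shoulder", "right_elbow"), ("right_elbow", "right_wrist"),
    ("neck", "pelvis"), ("pelvis", "left_knee"), ("left_knee", "left_ankle"),
    ("pelvis", "right_knee"), ("right_knee", "right_ankle"),
]

@dataclass
class Pose2D:
    # Pixel coordinates plus a detection confidence for each joint in one camera view.
    joints: Dict[str, Tuple[float, float, float]]

def limb_lengths(pose: Pose2D) -> Dict[str, float]:
    """Rough 2-D limb lengths from one view; a multi-camera system would first
    triangulate the same joints into 3-D before taking measurements."""
    lengths = {}
    for a, b in SKELETON_EDGES:
        if a in pose.joints and b in pose.joints:
            (xa, ya, _), (xb, yb, _) = pose.joints[a], pose.joints[b]
            lengths[f"{a}-{b}"] = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    return lengths

pose = Pose2D(joints={"neck": (640.0, 200.0, 0.98), "pelvis": (642.0, 420.0, 0.95)})
print(limb_lengths(pose))   # {'neck-pelvis': ~220.0}
```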
The tracking system is limited to the track-and-field events at this year's games. But similar technology could become standard in a variety of sports, suggests Barbara Rita Barricelli, who is a human-computer interaction researcher and assistant professor at Italy's University of Brescia and is not involved with the Intel project. "The real big shift is when a technology is not only used for entertainment or research, but is accepted by the community of practice," Barricelli says. For example, when video-assistant referees were first used in soccer, they were popular with broadcast networks - but some human referees refused to rely on them for game-changing decisions. The technology remains controversial, but now many officials routinely use the video assistant to help make a call. Barricelli suggests 3DAT's Olympic debut may be "a big step for research meeting practice - or better, practice embracing research results."
Lee thinks the AI could help everyone from Olympians to average gymgoers correct their form, track changes in their gait that may indicate imminent injury, and more. "Long-term, what this technology will do is help improve [an] athlete's performance by giving them more information," two-time Olympic decathlon champion Ashton Eaton, who works for Intel on the 3DAT project, told the Oregonian .
All of this is only possible thanks to advances in computing that enable artificial intelligence to more effectively transform 2-D images into 3-D models. It's yielding "information we've never had before - that no one's ever had before - because it was too cumbersome," Lee says. He thinks insights like those shared in the recent track-and-field trials are just the beginning.
In the future athletes will likely rely ever more on reams of data, processed with artificial intelligence, to up their game. One such tool may be a kind of model called the digital twin - "a virtual representation of a you-fill-in-the-blank," says John Vickers, principal technologist for the Space Technology Mission Directorate at NASA Headquarters.
These models exist as data in a computer program, so they can be viewed on a screen or in virtual reality, and run through simulations of real-world situations. Vickers coined the phrase "digital twin" with Michael Grieves, a research professor at the Florida Institute of Technology, more than a decade ago. Vickers says engineers initially defined digital twins as constantly evolving virtual models of industrial objects, from the next generation of space-bound vehicles to entire Earthly cities . For example, in 2020 the U.S. Air Force began a six-year project to develop a digital twin of a B-1B Lancer bomber to understand how individual parts decay, and how to slow those processes. Now researchers are developing digital twins to build, test and even operate just about anything, ranging from abstract concepts like "fan experience" in an arena - to human beings.
Barricelli currently is working on exactly that. She believes engineers will soon be using data collected from wearable fitness monitors and AI tracking tools to deploy digital twins of individual athletes . Coaches could use these to test how competition is influenced by a wide variety of behaviors, from sleep patterns to diet to stance on the field. The twin could eventually help athletes make predictions about their future real-world performance, and could even suggest training adjustments.
"At that level, it would be really helpful for [athletes] to have continuous monitoring of the hypothetical outcome of their training," Barricelli says. That way, "you see every time you do something how that affects the results you achieve." | This year's Olympic Games in Tokyo use an advanced three-dimensional (3D) tracking system that captures athletes' performances in fine detail. Intel's 3DAT system sends live camera footage to the cloud, where artificial intelligence (AI) uses deep learning to analyze an athlete's movements and identify key performance traits like top speed and deceleration. 3DAT shares this information with viewers as slow-motion graphic representations of the action in less than 30 seconds. Intel's Jonathan Lee and colleagues trained the AI on recorded footage of elite track and field athletes, with all body parts annotated; the model could then link the video to a simplified rendering of an athlete's form. The AI can track this "skeleton" and calculate the position of each athlete's body in three dimensions as it moves through an event. | [] | [] | [] | scitechnews | None | None | None | None | This year's Olympic Games in Tokyo use an advanced three-dimensional (3D) tracking system that captures athletes' performances in fine detail. Intel's 3DAT system sends live camera footage to the cloud, where artificial intelligence (AI) uses deep learning to analyze an athlete's movements and identify key performance traits like top speed and deceleration. 3DAT shares this information with viewers as slow-motion graphic representations of the action in less than 30 seconds. Intel's Jonathan Lee and colleagues trained the AI on recorded footage of elite track and field athletes, with all body parts annotated; the model could then link the video to a simplified rendering of an athlete's form. The AI can track this "skeleton" and calculate the position of each athlete's body in three dimensions as it moves through an event.
This year's Olympic Games may be closed to most spectators because of COVID-19, but the eyes of the world are still on the athletes thanks to dozens of cameras recording every leap, dive and flip. Among all that broadcasting equipment, track-and-field competitors might notice five extra cameras - the first step in a detailed 3-D tracking system that supplies spectators with near-instantaneous insights into each step of a race or handoff of a baton.
And tracking is just the beginning. The technology on display in Tokyo suggests that the future of elite athletic training lies not merely in gathering data about the human body, but in using that data to create digital replicas of it. These avatars could one day run through hypothetical scenarios to help athletes decide which choices will produce the best outcomes.
The tracking system being used in Tokyo, an Intel product called 3DAT, feeds live footage into the cloud. There, an artificial intelligence program uses deep learning to analyze an athlete's movements and identify key performance characteristics such as top speed and deceleration. The system shares that information with viewers by displaying slow-motion graphic representations of the action, highlighting key moments. The whole process, from capturing the footage to broadcasting the analysis, takes less than 30 seconds.
For example, during NBC's broadcast of the 100 meter trials in Eugene, Ore., the AI showed how Sha'Carri Richardson hit 24.1 miles per hour at her peak and slowed to 20.0 mph by the time she reached the finish line. That was enough to win the race: Richardson's runner-up hit a maximum speed of 23.2 miles per hour and slowed to 20.4 mph at the line.
"It 's like having your own personal commentator point things out to you in the race," says Jonathan Lee, director of sports performance technology in the Olympic technology group at Intel.
To train their Olympic AI via machine learning, Lee and his team had to capture as much footage of elite track and field athletes in motion as they could. They needed recordings of human bodies performing specific moves, but the preexisting footage used for similar research shows average people in motion, which would have confused the algorithm, Lee says. "People aren't usually fully horizontal seven feet in the air," he notes, but world-class high jumpers reach such heights regularly.
In the footage, a team at Intel manually annotated every part of the body - eyes, nose, shoulders, and more - pixel by pixel. Once those key points were identified, the model could begin connecting them in three dimensions until it had a simplified rendering of an athlete's form. Tracking this "skeleton" enables the program to perform 3-D pose estimation (a computer vision technique that tracks an object and tries to predict the changes it might undergo in space) on the athlete's body as it moves through an event.
The tracking system is limited to the track-and-field events at this year's games. But similar technology could become standard in a variety of sports, suggests Barbara Rita Barricelli, who is a human-computer interaction researcher and assistant professor at Italy's University of Brescia and is not involved with the Intel project. "The real big shift is when a technology is not only used for entertainment or research, but is accepted by the community of practice," Barricelli says. For example, when video-assistant referees were first used in soccer, they were popular with broadcast networks - but some human referees refused to rely on them for game-changing decisions. The technology remains controversial, but now many officials routinely use the video assistant to help make a call. Barricelli suggests 3DAT's Olympic debut may be "a big step for research meeting practice - or better, practice embracing research results."
Lee thinks the AI could help everyone from Olympians to average gymgoers correct their form, track changes in their gait that may indicate imminent injury, and more. "Long-term, what this technology will do is help improve [an] athlete's performance by giving them more information," two-time Olympic decathlon champion Ashton Eaton, who works for Intel on the 3DAT project, told the Oregonian .
All of this is only possible thanks to advances in computing that enable artificial intelligence to more effectively transform 2-D images into 3-D models. It's yielding "information we've never had before - that no one's ever had before - because it was too cumbersome," Lee says. He thinks insights like those shared in the recent track-and-field trials are just the beginning.
In the future athletes will likely rely ever more on reams of data, processed with artificial intelligence, to up their game. One such tool may be a kind of model called the digital twin - "a virtual representation of a you-fill-in-the-blank," says John Vickers, principal technologist for the Space Technology Mission Directorate at NASA Headquarters.
These models exist as data in a computer program, so they can be viewed on a screen or in virtual reality, and run through simulations of real-world situations. Vickers coined the phrase "digital twin" with Michael Grieves, a research professor at the Florida Institute of Technology, more than a decade ago. Vickers says engineers initially defined digital twins as constantly evolving virtual models of industrial objects, from the next generation of space-bound vehicles to entire Earthly cities . For example, in 2020 the U.S. Air Force began a six-year project to develop a digital twin of a B-1B Lancer bomber to understand how individual parts decay, and how to slow those processes. Now researchers are developing digital twins to build, test and even operate just about anything, ranging from abstract concepts like "fan experience" in an arena - to human beings.
Barricelli currently is working on exactly that. She believes engineers will soon be using data collected from wearable fitness monitors and AI tracking tools to deploy digital twins of individual athletes . Coaches could use these to test how competition is influenced by a wide variety of behaviors, from sleep patterns to diet to stance on the field. The twin could eventually help athletes make predictions about their future real-world performance, and could even suggest training adjustments.
"At that level, it would be really helpful for [athletes] to have continuous monitoring of the hypothetical outcome of their training," Barricelli says. That way, "you see every time you do something how that affects the results you achieve." |
|||
51 | Gaming Graphics Card Allows Faster, More Precise Control of Fusion Energy Experiments |
July 22, 2021
Nuclear fusion offers the potential for a safe, clean and abundant energy source.
This process, which also occurs in the sun, involves plasmas, fluids composed of charged particles, being heated to extremely high temperatures so that the atoms fuse together, releasing abundant energy.
One challenge to performing this reaction on Earth is the dynamic nature of plasmas, which must be controlled to reach the required temperatures that allow fusion to happen. Now researchers at the University of Washington have developed a method that harnesses advances in the computer gaming industry: It uses a gaming graphics card, or GPU, to run the control system for their prototype fusion reactor.
The team published these results May 11 in Review of Scientific Instruments.
"You need this level of speed and precision with plasmas because they have such complex dynamics that evolve at very high speeds. If you cannot keep up with them, or if you mispredict how plasmas will react, they have a nasty habit of going in the totally wrong direction very quickly," said co-author Chris Hansen , a UW senior research scientist in the aeronautics and astronautics department.
"Most applications try to operate in an area where the system is pretty static. At most all you have to do is 'nudge' things back in place," Hansen said. "In our lab, we are working to develop methods to actively keep the plasma where we want it in more dynamic systems."
The UW team's experimental reactor self-generates magnetic fields entirely within the plasma, making it potentially smaller and cheaper than other reactors that use external magnetic fields.
"By adding magnetic fields to plasmas, you can move and control them without having to 'touch' the plasma," Hansen said. "For example, the northern lights occur when plasma traveling from the sun runs into the Earth's magnetic field, which captures it and causes it to stream down toward the poles. As it hits the atmosphere, the charged particles emit light."
The UW team's prototype reactor heats plasma to about 1 million degrees Celsius (1.8 million degrees Fahrenheit). This is far short of the 150 million degrees Celsius necessary for fusion, but hot enough to study the concept.
Here, the plasma forms in three injectors on the device and then these combine and naturally organize into a doughnut-shaped object, like a smoke ring. These plasmas last only a few thousandths of a second, which is why the team needed to have a high-speed method for controlling what's happening.
Previously, researchers have used slower or less user-friendly technology to program their control systems. So the team turned to an NVIDIA Tesla GPU, which is designed for machine learning applications.
"The GPU gives us access to a huge amount of computing power," said lead author Kyle Morgan , a UW research scientist in the aeronautics and astronautics department. "This level of performance was driven by the computer gaming industry and, more recently, machine learning, but this graphics card provides a really great platform for controlling plasmas as well."
Using the graphics card, the team could fine-tune how plasmas entered the reactor, giving the researchers a more precise view of what's happening as the plasmas form - and eventually potentially allowing the team to create longer-living plasmas that operate closer to the conditions required for controlled fusion power.
"The biggest difference is for the future," Hansen said. "This new system lets us try newer, more advanced algorithms that could enable significantly better control, which can open a world of new applications for plasma and fusion technology."
Additional co-authors on this paper are Aaron Hossack , a UW research scientist in the aeronautics and astronautics department; Brian Nelson , a UW affiliate research professor in the electrical and computer engineering department; and Derek Sutherland , who completed a doctoral degree at the UW but is now the CEO of CTFusion, Inc. This research was funded by the U.S. Department of Energy and by CTFusion, Inc., through an Advanced Research Projects Agency-Energy award.
For more information, contact Hansen at hansec@uw.edu and Morgan at morgak@uw.edu .
Grant numbers: SC-0018844, DE-AR0001098 | University of Washington (UW) scientists formulated a technique for using a gaming graphics card to control plasma formation in an experimental fusion reactor. The team utilized a Tesla graphics processing unit (GPU) from NVIDIA, which is engineered for machine learning applications. The card enabled the team to refine how plasmas entered the reactor, offering a more precise view of plasma formation. The prototype reactor self-generates magnetic fields within the plasma, making it potentially smaller and more affordable than other reactors that employ external fields. UW's Chris Hansen said, "This new system lets us try newer, more advanced algorithms that could enable significantly better control, which can open a world of new applications for plasma and fusion technology." | [] | [] | [] | scitechnews | None | None | None | None | University of Washington (UW) scientists formulated a technique for using a gaming graphics card to control plasma formation in an experimental fusion reactor. The team utilized a Tesla graphics processing unit (GPU) from NVIDIA, which is engineered for machine learning applications. The card enabled the team to refine how plasmas entered the reactor, offering a more precise view of plasma formation. The prototype reactor self-generates magnetic fields within the plasma, making it potentially smaller and more affordable than other reactors that employ external fields. UW's Chris Hansen said, "This new system lets us try newer, more advanced algorithms that could enable significantly better control, which can open a world of new applications for plasma and fusion technology."
July 22, 2021
Nuclear fusion offers the potential for a safe, clean and abundant energy source.
This process, which also occurs in the sun, involves plasmas, fluids composed of charged particles, being heated to extremely high temperatures so that the atoms fuse together, releasing abundant energy.
One challenge to performing this reaction on Earth is the dynamic nature of plasmas, which must be controlled to reach the required temperatures that allow fusion to happen. Now researchers at the University of Washington have developed a method that harnesses advances in the computer gaming industry: It uses a gaming graphics card, or GPU, to run the control system for their prototype fusion reactor.
The team published these results May 11 in Review of Scientific Instruments.
"You need this level of speed and precision with plasmas because they have such complex dynamics that evolve at very high speeds. If you cannot keep up with them, or if you mispredict how plasmas will react, they have a nasty habit of going in the totally wrong direction very quickly," said co-author Chris Hansen , a UW senior research scientist in the aeronautics and astronautics department.
"Most applications try to operate in an area where the system is pretty static. At most all you have to do is 'nudge' things back in place," Hansen said. "In our lab, we are working to develop methods to actively keep the plasma where we want it in more dynamic systems."
The UW team's experimental reactor self-generates magnetic fields entirely within the plasma, making it potentially smaller and cheaper than other reactors that use external magnetic fields.
"By adding magnetic fields to plasmas, you can move and control them without having to 'touch' the plasma," Hansen said. "For example, the northern lights occur when plasma traveling from the sun runs into the Earth's magnetic field, which captures it and causes it to stream down toward the poles. As it hits the atmosphere, the charged particles emit light."
The UW team's prototype reactor heats plasma to about 1 million degrees Celsius (1.8 million degrees Fahrenheit). This is far short of the 150 million degrees Celsius necessary for fusion, but hot enough to study the concept.
Here, the plasma forms in three injectors on the device and then these combine and naturally organize into a doughnut-shaped object, like a smoke ring. These plasmas last only a few thousandths of a second, which is why the team needed to have a high-speed method for controlling what's happening.
Previously, researchers have used slower or less user-friendly technology to program their control systems. So the team turned to an NVIDIA Tesla GPU, which is designed for machine learning applications.
"The GPU gives us access to a huge amount of computing power," said lead author Kyle Morgan , a UW research scientist in the aeronautics and astronautics department. "This level of performance was driven by the computer gaming industry and, more recently, machine learning, but this graphics card provides a really great platform for controlling plasmas as well."
Using the graphics card, the team could fine-tune how plasmas entered the reactor, giving the researchers a more precise view of what's happening as the plasmas form - and eventually potentially allowing the team to create longer-living plasmas that operate closer to the conditions required for controlled fusion power.
"The biggest difference is for the future," Hansen said. "This new system lets us try newer, more advanced algorithms that could enable significantly better control, which can open a world of new applications for plasma and fusion technology."
Additional co-authors on this paper are Aaron Hossack , a UW research scientist in the aeronautics and astronautics department; Brian Nelson , a UW affiliate research professor in the electrical and computer engineering department; and Derek Sutherland , who completed a doctoral degree at the UW but is now the CEO of CTFusion, Inc. This research was funded by the U.S. Department of Energy and by CTFusion, Inc., through an Advanced Research Projects Agency-Energy award.
For more information, contact Hansen at hansec@uw.edu and Morgan at morgak@uw.edu .
Grant numbers: SC-0018844, DE-AR0001098 |
|||
52 | Wearable Camera Reduces Collision Risk for Blind, Visually Impaired | July 22 (UPI) -- A wearable computer vision device may help reduce collisions and other accidents in the blind and visually impaired, a study published Thursday by JAMA Ophthalmology found.
When used in combination with a long cane or guide dog, the technology, which detects nearby movement and objects with an on-board camera, reduced the risk for collisions and falls by nearly 40% compared with other mobility aids, the data showed.
"Independent travel is an essential part of daily life for many people who are visually impaired, but they face a greater risk of bumping into obstacles when they walk on their own," study co-author Gang Luo said in a press release. RELATED Study: New genetic test effective at spotting people at high risk for glaucoma
"Many blind individuals use long canes to detect obstacles [and] collision risks are not completely eliminated. We sought to develop ... a device that can augment these everyday mobility aids, further improving their safety," said Luo, an associate professor of ophthalmology at Harvard Medical School in Cambridge, Mass.
Those who are visually impaired are at increased risk for collisions and falls, even with mobility aids such as long canes and guide dogs, according to Prevent Blindness.
Long canes are among the most effective and affordable mobility tools for a person who is blind or visually impaired but they can only detect hazards on the ground that are within reach and often miss hazards above ground level, the organization says.
Guide dogs also are highly effective, but are in short supply and can cost up to $60,000, it says.
The device developed by Luo and his colleagues features a data recording unit enclosed in a sling backpack with a chest-mounted, wide-angle camera on the strap and two Bluetooth-connected wristbands worn by the user.
The camera is connected to a processing unit that records images and analyzes any collision risk based on the movement of incoming and surrounding objects in the camera's field of view, the researchers said.
If there is a risk for collision on a user's left or right side, the corresponding wristband will vibrate, while a potential head-on collision will cause both wristbands to vibrate.
The device is designed to warn users only of approaching obstacles that pose a collision risk and ignore objects not on a collision course, they said.
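A simplified sketch of that decision logic - vibrate the left band, the right band, or both, and stay silent for objects that are not an imminent risk - is below; the bearing and time-to-collision thresholds are assumptions for illustration, not the values used in the device.

```python
def wristband_alert(bearing_deg: float, time_to_collision_s: float,
                    warn_threshold_s: float = 2.0) -> str:
    """Decide which wristband vibrates for an object judged to be on a collision course.

    bearing_deg is the approaching object's direction relative to straight ahead
    (negative = left, positive = right). Both thresholds are assumptions.
    """
    if time_to_collision_s > warn_threshold_s:
        return "none"                       # not imminent: stay silent
    if -15.0 <= bearing_deg <= 15.0:
        return "both"                       # roughly head-on
    return "left" if bearing_deg < 0 else "right"

print(wristband_alert(bearing_deg=-40.0, time_to_collision_s=1.2))   # left
```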
For this study, Luo and his colleagues tested the device on 31 blind and visually impaired adults who use a long cane or guide dog, or both, to aid their daily mobility.
After being trained to use the device, participants wore it for about a month during daily activities, while continuing with their usual mobility device.
The device was randomized to switch between active mode, in which the users could receive vibrating alerts for imminent collisions, and silent mode, in which the device still processed and recorded images, but did not give users a warning.
The silent mode is equivalent to the placebo condition in many clinical trials testing drugs, so the wearers and researchers would not know when the device modes changed during the testing.
The effectiveness of the device was measured by comparing collision incidents that occurred during active and silent modes. There were 37% fewer collisions in the former than in the latter.
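The comparison itself is straightforward; a toy calculation of the relative reduction in collision rate between the two modes looks like this, with counts invented solely to illustrate the arithmetic.

```python
def relative_reduction(collisions_active: int, hours_active: float,
                       collisions_silent: int, hours_silent: float) -> float:
    """Percent reduction in collision rate, active mode versus silent mode."""
    rate_active = collisions_active / hours_active
    rate_silent = collisions_silent / hours_silent
    return 100.0 * (rate_silent - rate_active) / rate_silent

print(round(relative_reduction(19, 100.0, 30, 100.0), 1))   # 36.7 (invented counts)
```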
The researchers hope to leverage improvements in digital processing and camera technology to make their device smaller and more cosmetically appealing before applying to the U.S. Food and Drug Administration for approval.
"Long canes are still very helpful and cost-effective tools that work well in many situations, but we hope a wearable device like this can fill in the gaps that the cane might miss, providing a more affordable, easier to obtain option than a guide dog," study co-author Alex Bowers said in a press release.
In addition, "the insights provided by our data can be valuable for improving mobility aid training," said Bowers, an associate professor of ophthalmology at Harvard Medical School. | A wearable computer vision device developed by Harvard Medical School scientists may help reduce collisions and other accidents for the blind and visually impaired. The device includes a data recording unit enclosed in a sling backpack with a camera on the strap, and two Bluetooth-connected wristbands. The researchers said a processing unit records images from the camera and analyzes collision risk based on the motion of incoming and surrounding objects within the field of view. The left-hand or right-hand wristband will vibrate an alert depending on which side a potential collision is coming from, while both wristbands vibrate when a potential head-on collision is detected. Data from the study showed the solution cut the risk for collisions and falls by nearly 40% compared with other mobility aids, when used in combination with a long cane or guide dog. | [] | [] | [] | scitechnews | None | None | None | None | A wearable computer vision device developed by Harvard Medical School scientists may help reduce collisions and other accidents for the blind and visually impaired. The device includes a data recording unit enclosed in a sling backpack with a camera on the strap, and two Bluetooth-connected wristbands. The researchers said a processing unit records images from the camera and analyzes collision risk based on the motion of incoming and surrounding objects within the field of view. The left-hand or right-hand wristband will vibrate an alert depending on which side a potential collision is coming from, while both wristbands vibrate when a potential head-on collision is detected. Data from the study showed the solution cut the risk for collisions and falls by nearly 40% compared with other mobility aids, when used in combination with a long cane or guide dog.
July 22 (UPI) -- A wearable computer vision device may help reduce collisions and other accidents in the blind and visually impaired, a study published Thursday by JAMA Ophthalmology found.
When used in combination with a long cane or guide dog, the technology, which detects nearby movement and objects with an on-board camera, reduced the risk for collisions and falls by nearly 40% compared with other mobility aids, the data showed.
"Independent travel is an essential part of daily life for many people who are visually impaired, but they face a greater risk of bumping into obstacles when they walk on their own," study co-author Gang Luo said in a press release. RELATED Study: New genetic test effective at spotting people at high risk for glaucoma
"Many blind individuals use long canes to detect obstacles [and] collision risks are not completely eliminated. We sought to develop ... a device that can augment these everyday mobility aids, further improving their safety," said Luo, an associate professor of ophthalmology at Harvard Medical School in Cambridge, Mass.
Those who are visually impaired are at increased risk for collisions and falls, even with mobility aids such as long canes and guide dogs, according to Prevent Blindness.
Long canes are among the most effective and affordable mobility tools for a person who is blind or visually impaired but they can only detect hazards on the ground that are within reach and often miss hazards above ground level, the organization says.
Guide dogs also are highly effective, but are in short supply and can cost up to $60,000, it says.
The device developed by Luo and his colleagues features a data recording unit enclosed in a sling backpack with a chest-mounted, wide-angle camera on the strap and two Bluetooth-connected wristbands worn by the user.
The camera is connected to a processing unit that records images and analyzes any collision risk based on the movement of incoming and surrounding objects in the camera's field of view, the researchers said.
If there is a risk for collision on a user's left or right side, the corresponding wristband will vibrate, while a potential head-on collision will cause both wristbands to vibrate.
The device is designed to warn users only of approaching obstacles that pose a collision risk and ignore objects not on a collision course, they said.
For this study, Luo and his colleagues tested the device on 31 blind and visually impaired adults who use a long cane or guide dog, or both, to aid their daily mobility.
After being trained to use the device, participants wore it for about a month during daily activities, while continuing with their usual mobility device.
The device was randomized to switch between active mode, in which the users could receive vibrating alerts for imminent collisions, and silent mode, in which the device still processed and recorded images, but did not give users a warning.
The silent mode is equivalent to the placebo condition in many clinical trials testing drugs, so the wearers and researchers would not know when the device modes changed during the testing.
The effectiveness of the device was measured by comparing collision incidents that occurred during active and silent modes. There were 37% fewer collisions in the former than in the latter.
The researchers hope to leverage improvements in digital processing and camera technology to make their device smaller and more cosmetically appealing before applying to the U.S. Food and Drug Administration for approval.
"Long canes are still very helpful and cost-effective tools that work well in many situations, but we hope a wearable device like this can fill in the gaps that the cane might miss, providing a more affordable, easier to obtain option than a guide dog," study co-author Alex Bowers said in a press release.
In addition, "the insights provided by our data can be valuable for improving mobility aid training," said Bowers, an associate professor of ophthalmology at Harvard Medical School. |
|||
53 | Bipedal Robot Learns to Run, Completes 5K | CORVALLIS, Ore. - Cassie the robot, invented at Oregon State University and produced by OSU spinout company Agility Robotics, has made history by traversing 5 kilometers, completing the route in just over 53 minutes.
Cassie was developed under the direction of robotics professor Jonathan Hurst with a 16-month, $1 million grant from the Defense Advanced Research Projects Agency, or DARPA.
Since Cassie's introduction in 2017, OSU students funded by the National Science Foundation and the DARPA Machine Common Sense program, working in collaboration with artificial intelligence professor Alan Fern, have been exploring machine learning options for the robot.
Cassie, the first bipedal robot to use machine learning to control a running gait on outdoor terrain, completed the 5K on Oregon State's campus untethered and on a single battery charge.
"The Dynamic Robotics Laboratory students in the OSU College of Engineering combined expertise from biomechanics and existing robot control approaches with new machine learning tools," said Hurst, who co-founded Agility in 2017. "This type of holistic approach will enable animal-like levels of performance. It's incredibly exciting."
Cassie, with knees that bend like an ostrich's, taught itself to run with what's known as a deep reinforcement learning algorithm. Running requires dynamic balancing - the ability to maintain balance while switching positions or otherwise being in motion - and Cassie has learned to make infinite subtle adjustments to stay upright while moving.
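The article does not publish the reward actually used to train Cassie, but deep reinforcement learning of this kind typically shapes behavior with a reward that favors staying upright and holding a target speed while penalizing wasted effort. The sketch below is one illustrative example of such a reward; all terms and weights are assumptions.

```python
def balance_reward(torso_pitch_rad: float, forward_speed: float,
                   joint_torques, target_speed: float = 3.0) -> float:
    """Toy reward of the kind used to shape a running gait with deep RL:
    stay upright, hold a target speed, and avoid wasteful torque.
    The terms and weights are illustrative, not the reward used for Cassie."""
    upright = 1.0 if abs(torso_pitch_rad) < 0.5 else 0.0
    speed_term = -abs(forward_speed - target_speed)
    effort_term = -0.001 * sum(t * t for t in joint_torques)
    return upright + speed_term + effort_term

# One simulated timestep's worth of made-up readings:
print(balance_reward(torso_pitch_rad=0.1, forward_speed=2.8, joint_torques=[12.0, -8.0, 3.5]))
```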
"Cassie is a very efficient robot because of how it has been designed and built, and we were really able to reach the limits of the hardware and show what it can do," said Jeremy Dao, a Ph.D. student in the Dynamic Robotics Laboratory.
"Deep reinforcement learning is a powerful method in AI that opens up skills like running, skipping and walking up and down stairs," added Yesh Godse, an undergraduate in the lab.
Hurst said walking robots will one day be a common sight - much like the automobile, and with a similar impact. The limiting factor has been the science and understanding of legged locomotion, but research at Oregon State has enabled multiple breakthroughs.
ATRIAS, developed in the Dynamic Robotics Laboratory, was the first robot to reproduce human walking gait dynamics. Following ATRIAS was Cassie, then came Agility's humanoid robot Digit.
"In the not very distant future, everyone will see and interact with robots in many places in their everyday lives, robots that work alongside us and improve our quality of life," Hurst said.
In addition to logistics work like package delivery, bipedal robots eventually will have the intelligence and safety capabilities to help people in their own homes, Hurst said.
During the 5K, Cassie's total time of 53 minutes, 3 seconds included about 6 1/2 minutes of resets following two falls: one because of an overheated computer, the other because the robot was asked to execute a turn at too high a speed.
In a related project , Cassie has become adept at walking up and down stairs . Hurst and colleagues were tapped to present a paper on that at the Robotics : Science and Systems conference July 12-16. | An untethered bipedal robot completed a five-kilometer (3.10-mile) run in just over 53 minutes. The Cassie robot, engineered by Oregon State University (OSU) researchers and built by OSU spinout company Agility Robotics, is the first bipedal robot to use machine learning to maintain a running gait on outdoor terrain. The robot taught itself to run using a reinforcement learning algorithm, and it makes subtle adjustments to remain upright while in motion. OSU's Jonathan Hurst said Cassie's developers "combined expertise from biomechanics and existing robot control approaches with new machine learning tools." Hurst added, "In the not-very-distant future, everyone will see and interact with robots in many places in their everyday lives, robots that work alongside us and improve our quality of life." | [] | [] | [] | scitechnews | None | None | None | None | An untethered bipedal robot completed a five-kilometer (3.10-mile) run in just over 53 minutes. The Cassie robot, engineered by Oregon State University (OSU) researchers and built by OSU spinout company Agility Robotics, is the first bipedal robot to use machine learning to maintain a running gait on outdoor terrain. The robot taught itself to run using a reinforcement learning algorithm, and it makes subtle adjustments to remain upright while in motion. OSU's Jonathan Hurst said Cassie's developers "combined expertise from biomechanics and existing robot control approaches with new machine learning tools." Hurst added, "In the not-very-distant future, everyone will see and interact with robots in many places in their everyday lives, robots that work alongside us and improve our quality of life."
CORVALLIS, Ore. - Cassie the robot, invented at Oregon State University and produced by OSU spinout company Agility Robotics, has made history by traversing 5 kilometers, completing the route in just over 53 minutes.
Cassie was developed under the direction of robotics professor Jonathan Hurst with a 16-month, $1 million grant from the Defense Advanced Research Projects Agency, or DARPA.
Since Cassie's introduction in 2017, OSU students funded by the National Science Foundation and the DARPA Machine Common Sense program, working in collaboration with artificial intelligence professor Alan Fern, have been exploring machine learning options for the robot.
Cassie, the first bipedal robot to use machine learning to control a running gait on outdoor terrain, completed the 5K on Oregon State's campus untethered and on a single battery charge.
"The Dynamic Robotics Laboratory students in the OSU College of Engineering combined expertise from biomechanics and existing robot control approaches with new machine learning tools," said Hurst, who co-founded Agility in 2017. "This type of holistic approach will enable animal-like levels of performance. It's incredibly exciting."
Cassie, with knees that bend like an ostrich's, taught itself to run with what's known as a deep reinforcement learning algorithm. Running requires dynamic balancing - the ability to maintain balance while switching positions or otherwise being in motion - and Cassie has learned to make infinite subtle adjustments to stay upright while moving.
"Cassie is a very efficient robot because of how it has been designed and built, and we were really able to reach the limits of the hardware and show what it can do," said Jeremy Dao, a Ph.D. student in the Dynamic Robotics Laboratory.
"Deep reinforcement learning is a powerful method in AI that opens up skills like running, skipping and walking up and down stairs," added Yesh Godse, an undergraduate in the lab.
Hurst said walking robots will one day be a common sight - much like the automobile, and with a similar impact. The limiting factor has been the science and understanding of legged locomotion, but research at Oregon State has enabled multiple breakthroughs.
ATRIAS, developed in the Dynamic Robotics Laboratory, was the first robot to reproduce human walking gait dynamics. Following ATRIAS was Cassie, then came Agility's humanoid robot Digit.
"In the not very distant future, everyone will see and interact with robots in many places in their everyday lives, robots that work alongside us and improve our quality of life," Hurst said.
In addition to logistics work like package delivery, bipedal robots eventually will have the intelligence and safety capabilities to help people in their own homes, Hurst said.
During the 5K, Cassie's total time of 53 minutes, 3 seconds included about 6 1/2 minutes of resets following two falls: one because of an overheated computer, the other because the robot was asked to execute a turn at too high a speed.
In a related project , Cassie has become adept at walking up and down stairs . Hurst and colleagues were tapped to present a paper on that at the Robotics : Science and Systems conference July 12-16. |
|||
54 | Lost in L.A.? Fire Department Can Find You with What3words Location Technology | The Los Angeles Fire Department (LAFD) has entered into a partnership with digital location startup What3words, which assigns a unique three-word name to each of 57 billion 10-foot-square spots on Earth. The department had been testing the application since last year, using it to locate places that emergency crews needed to reach even if the sites lacked conventional addresses. LAFD receives What3words locations through 911 calls on Android phones or iPhones, or through text messages sent by dispatchers with links that retrieve the three-word addresses. People also can use the What3words app to pinpoint their own locations. Increasing numbers of signs identify locations with their What3words designations, particularly in wildlands. | [] | [] | [] | scitechnews | None | None | None | None | The Los Angeles Fire Department (LAFD) has entered into a partnership with digital location startup What3words, which assigns a unique three-word name to each of 57 billion 10-foot-square spots on Earth. The department had been testing the application since last year, using it to locate places that emergency crews needed to reach even if the sites lacked conventional addresses. LAFD receives What3words locations through 911 calls on Android phones or iPhones, or through text messages sent by dispatchers with links that retrieve the three-word addresses. People also can use the What3words app to pinpoint their own locations. Increasing numbers of signs identify locations with their What3words designations, particularly in wildlands.
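As a toy illustration of the underlying idea - re-expressing a single large grid-cell index as a memorable triple of words - consider the sketch below. This is emphatically not What3words' actual algorithm or word list; the real system draws on a far larger vocabulary to cover its 57 billion squares.

```python
WORDS = ["apple", "river", "stone", "cloud", "maple", "tiger", "coral", "ember"]  # tiny demo list
BASE = len(WORDS)

def cell_to_words(cell_index: int) -> str:
    """Re-express a grid-cell index as a dot-separated word triple (toy scheme)."""
    assert 0 <= cell_index < BASE ** 3
    w1, rest = divmod(cell_index, BASE * BASE)
    w2, w3 = divmod(rest, BASE)
    return ".".join([WORDS[w1], WORDS[w2], WORDS[w3]])

def words_to_cell(address: str) -> int:
    a, b, c = (WORDS.index(w) for w in address.split("."))
    return a * BASE * BASE + b * BASE + c

print(cell_to_words(302))                   # maple.tiger.coral
print(words_to_cell("maple.tiger.coral"))   # 302
```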
|
||||
55 | High-Speed Projectors Power Virtual Air Hockey With Shape-Changing Paddles | If you've spent any time at a modern arcade you've probably seen air hockey tables upgraded with glowing pucks and paddles that make the game more visually appealing. Researchers from Tohoku University in Japan have taken those upgrades one step further in a version of air hockey that replaces the pucks and paddles with shape-changing virtual projections that increase the challenge.
It's an idea we've seen with billiards tables as well, where a projection system mounted above the table creates animations and effects in response to the movement of the balls, and in some cases, provides visual aiming cues for players setting up their next shot. But to date, these upgrades have all been reactive and simply enhance a game of pool by tracking the movements of traditional cue sticks and billiard balls. What the researchers from Tohoku University's Intelligent Control Systems Laboratory have created is a modern twist where the projection system completely replaces the physical parts of air hockey.
A semi-transparent and rigid rear projection screen replaces the traditional air hockey table surface that's normally perforated with holes to let air through. Being semi-transparent not only allows projections on the underside of the table to show through (a better approach than projections from above that can be obscured by the player) but it also allows a camera underneath to track the movements of the player's paddle, which features a bright infrared LED so that its orientation can be easily seen and tracked, even in low-light conditions.
The use of a projector and a video camera isn't the notable innovation here that makes MetamorHockey appear so highly interactive and accurate. What makes these upgrades work so well is that the projector and video camera both work at an astonishing 420 frames per second. So during every second of gameplay, the video camera is detecting the position and orientation of the player's paddle 420 times, feeding that information to a computer that calculates the movements and trajectories of the virtual puck, and then it passes that data to a projector that refreshes the position of the virtual puck and paddle the same number of times each second.
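A schematic of such a fixed-rate sense-simulate-project loop is sketched below; the camera and projector calls are placeholders, and the table dimensions and puck velocity are invented for illustration - only the 420-frames-per-second budget comes from the article.

```python
import time

FRAME_RATE = 420.0          # camera and projector both run at 420 fps (from the article)
DT = 1.0 / FRAME_RATE

class Puck:
    def __init__(self):
        # Position in metres and velocity in m/s; values invented for illustration.
        self.x, self.y, self.vx, self.vy = 0.5, 0.5, 0.8, 0.3

    def step(self, dt: float, table_w: float = 1.8, table_h: float = 0.9):
        self.x += self.vx * dt
        self.y += self.vy * dt
        if not 0.0 <= self.x <= table_w:    # bounce off the virtual table edges
            self.vx = -self.vx
        if not 0.0 <= self.y <= table_h:
            self.vy = -self.vy

def run(frames: int = 420):                 # one second of simulated play
    puck = Puck()
    for _ in range(frames):
        start = time.perf_counter()
        # paddle_pose = camera.read_infrared_marker()   # placeholder for the real capture
        puck.step(DT)
        # projector.draw(puck, paddle_pose)             # placeholder for the real output
        # Sleep off whatever remains of the 1/420 s budget to hold the frame rate.
        time.sleep(max(0.0, DT - (time.perf_counter() - start)))

run()
```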
Even a projector running at 60 frames per second would make the player's paddle appear to lag behind the high-speed movements of the handle they're holding as they play, which would make the virtual air hockey experience feel less authentic and less enjoyable. The extremely high frame rate of the custom low-latency DMD (digital micromirror device) projector used here is fast enough to fool a player's eyes into believing the virtual paddle is locked to their movements, which sells the effect.
Does air hockey need an upgrade? Probably not, but MetamorHockey's virtual puck will never fly off the table like the real ones have a tendency to do, which is certainly one advantage. And being able to change the size and shape of both the puck and paddle on the fly does introduce some interesting ways to change the way air hockey is played. Irregular shapes make it harder to reliably predict which direction the puck is going to travel, increasing the challenge, while increasing the size of the paddle could make it easier for less experienced players to actually enjoy playing against someone who's more skilled at the game.
There's no word on when MetamorHockey might show up at your local arcade, but more details will be revealed when its creators present their research at the upcoming Siggraph 2021 conference in August. | Scientists at Japan's Tohoku University have invented a virtual version of air hockey that uses shapeshifting virtual projections of paddles and pucks. The MetamorHockey system replaces the traditional air hockey table surface with a semi-transparent rear-projection screen, which enables an underside projection to show through while a video camera underneath tracks the movements of each player's paddle. The paddle has an infrared light-emitting diode to facilitate tracking of its position and orientation. Both projector and camera operate at 420 frames per second; this data is fed to a computer that calculates the puck's movements and trajectories, then passes the information to a projector that refreshes the virtual objects' positions. | [] | [] | [] | scitechnews | None | None | None | None | Scientists at Japan's Tohoku University have invented a virtual version of air hockey that uses shapeshifting virtual projections of paddles and pucks. The MetamorHockey system replaces the traditional air hockey table surface with a semi-transparent rear-projection screen, which enables an underside projection to show through while a video camera underneath tracks the movements of each player's paddle. The paddle has an infrared light-emitting diode to facilitate tracking of its position and orientation. Both projector and camera operate at 420 frames per second; this data is fed to a computer that calculates the puck's movements and trajectories, then passes the information to a projector that refreshes the virtual objects' positions.
|
|||
56 | QR Codes Are Here to Stay. So Is the Tracking They Allow. | Restaurants that use QR code menus can save 30 percent to 50 percent on labor costs by reducing or eliminating the need for servers to take orders and collect payments, said Tom Sharon, a co-founder of Cheqout.
Digital menus also make it easier to persuade people to spend more with offers to add fries or substitute more expensive spirits in a cocktail, with photographs of menu items to make them more appealing, said Kim Teo, a Mr. Yum co-founder. Orders placed through the QR code menu also let Mr. Yum inform restaurants what items are selling, so they can add a menu section with the most popular items or highlight dishes they want to sell.
These increased digital abilities are what worry privacy experts. Mr. Yum, for instance, uses cookies in the digital menu to track a customer's purchase history and gives restaurants access to that information, tied to the customer's phone number and credit cards. It is piloting software in Australia so restaurants can offer people a "recommended to you" section based on their previous orders, Ms. Teo said.
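A "recommended to you" list of the kind being piloted can be sketched in a few lines. The data layout, phone-number keys, and most-frequent-item rule below are illustrative assumptions, not Mr. Yum's actual system.

```python
# Minimal sketch: recommend a customer's most frequently ordered items.
from collections import Counter

order_history = {                     # hypothetical per-customer order logs
    "+61400000001": ["pad thai", "spring rolls", "pad thai", "iced tea"],
    "+61400000002": ["burger", "fries", "burger"],
}

def recommend(phone, k=3):
    """Return the customer's k most frequently ordered items."""
    counts = Counter(order_history.get(phone, []))
    return [item for item, _ in counts.most_common(k)]

print(recommend("+61400000001"))      # e.g. ['pad thai', 'spring rolls', 'iced tea']
```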
QR codes "are an important first step toward making your experience in physical space outside of your home feel just like being tracked by Google on your screen," said Lucy Bernholz, the director of Stanford University's Digital Civil Society Lab.
Ms. Teo said that each restaurant's customer data was available only to that establishment and that Mr. Yum did not use the information to reach out to customers. It also does not sell the data to any third-party brokers, she said.
Cheqout collects only customers' names, phone numbers and protected payment information, which it does not sell to third parties, Mr. Sharon said. | Quick response (QR) codes that facilitate touchless transactions have become a permanent part of life, adopted by many varieties of commercial establishments. QR codes can store digital data including when, where, and how often a code-scan occurs, enabling businesses to integrate more tracking, targeting, and analytics tools. The American Civil Liberties Union's Jay Stanley said, "Suddenly your offline activity of sitting down for a meal has become part of the online advertising empire." Author Scott Stratten said the U.S. adoption of QR codes surged as a result of Apple enabling iPhone cameras to recognize the codes in 2017, and the coronavirus pandemic. Stanford University's Lucy Bernholz said she considers QR codes "an important first step toward making your experience in physical space outside of your home feel just like being tracked by Google on your screen." | [] | [] | [] | scitechnews | None | None | None | None | Quick response (QR) codes that facilitate touchless transactions have become a permanent part of life, adopted by many varieties of commercial establishments. QR codes can store digital data including when, where, and how often a code-scan occurs, enabling businesses to integrate more tracking, targeting, and analytics tools. The American Civil Liberties Union's Jay Stanley said, "Suddenly your offline activity of sitting down for a meal has become part of the online advertising empire." Author Scott Stratten said the U.S. adoption of QR codes surged as a result of Apple enabling iPhone cameras to recognize the codes in 2017, and the coronavirus pandemic. Stanford University's Lucy Bernholz said she considers QR codes "an important first step toward making your experience in physical space outside of your home feel just like being tracked by Google on your screen."
|
|||
57 | Flexible Computer Processor Is Most Powerful Plastic Chip Yet | The newest processor from U.K. chip designer Arm reportedly can be printed directly onto paper, cardboard, or cloth. Arm's James Myers said the 32-bit PlasticARM chip can run various applications, although it presently uses read-only memory, and so can only execute the code with which it was built. The processor features circuits and components printed onto a plastic substrate, with 56,340 elements taking up less than 60 square millimeters. PlasticARM has roughly 12 times as many components to conduct calculations as the previous best flexible chip, with the potential to give everyday items like clothing and food containers the ability to collect, process, and transmit information across the Internet. | [] | [] | [] | scitechnews | None | None | None | None | The newest processor from U.K. chip designer Arm reportedly can be printed directly onto paper, cardboard, or cloth. Arm's James Myers said the 32-bit PlasticARM chip can run various applications, although it presently uses read-only memory, and so can only execute the code with which it was built. The processor features circuits and components printed onto a plastic substrate, with 56,340 elements taking up less than 60 square millimeters. PlasticARM has roughly 12 times as many components to conduct calculations as the previous best flexible chip, with the potential to give everyday items like clothing and food containers the ability to collect, process, and transmit information across the Internet.
|
||||
58 | Companies Beef Up AI Models with Synthetic Data | Companies rely on real-world data to train artificial-intelligence models that can identify anomalies, make predictions and generate insights. But often, it isn't enough.
To detect credit-card fraud, for example, researchers train AI models to look for specific patterns of known suspicious behavior, gleaned from troves of data. But unique, or rare, types of fraud are difficult to detect when there isn't enough data to support the algorithm's training.
To... | Companies are building synthetic datasets when real-world data is unavailable to train artificial intelligence (AI) models to identify anomalies. Dmitry Efimov at American Express (Amex) said researchers have spent several years researching synthetic data in order to enhance the credit-card company's AI-based fraud-detection models. Amex is experimenting with generative adversarial networks to produce synthetic data on rare fraud patterns, which then can be applied to augment an existing dataset of fraud behaviors to improve general AI-based fraud-detection models. Efimov said one AI model is used to generate new data, while a second model attempts to determine the data's authenticity. Efimov said early tests have demonstrated that the synthetic data improves the AI-based model's ability to identify specific types of fraud. | [] | [] | [] | scitechnews | None | None | None | None | Companies are building synthetic datasets when real-world data is unavailable to train artificial intelligence (AI) models to identify anomalies. Dmitry Efimov at American Express (Amex) said researchers have spent several years researching synthetic data in order to enhance the credit-card company's AI-based fraud-detection models. Amex is experimenting with generative adversarial networks to produce synthetic data on rare fraud patterns, which then can be applied to augment an existing dataset of fraud behaviors to improve general AI-based fraud-detection models. Efimov said one AI model is used to generate new data, while a second model attempts to determine the data's authenticity. Efimov said early tests have demonstrated that the synthetic data improves the AI-based model's ability to identify specific types of fraud.
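The generator-and-discriminator arrangement described in this item can be sketched as a toy generative adversarial network over tabular transaction features. The feature count, network sizes, and placeholder "real fraud" rows below are assumptions for illustration, not American Express's production setup.

```python
# Toy GAN: one network proposes synthetic fraud-like rows, another judges them.
import torch
import torch.nn as nn

N_FEATURES, NOISE_DIM = 8, 16       # e.g. amount, hour, merchant code, ...

gen = nn.Sequential(nn.Linear(NOISE_DIM, 32), nn.ReLU(), nn.Linear(32, N_FEATURES))
disc = nn.Sequential(nn.Linear(N_FEATURES, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCELoss()

real_fraud = torch.randn(256, N_FEATURES)   # placeholder for the scarce real fraud rows

for step in range(200):
    # Discriminator: push real rows toward 1 and generated rows toward 0.
    fake = gen(torch.randn(64, NOISE_DIM)).detach()
    real = real_fraud[torch.randint(0, len(real_fraud), (64,))]
    d_loss = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator score its samples as real.
    fake = gen(torch.randn(64, NOISE_DIM))
    g_loss = bce(disc(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Generated rows can then be appended to the rare real examples before
# training the downstream fraud-detection model.
synthetic_rows = gen(torch.randn(1000, NOISE_DIM)).detach()
```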
|
|||
59 | Extracting More Accurate Data From Images Degraded by Rain, Nighttime, Crowded Conditions | Novel computer vision and human pose estimation methods can extract more accurate data from videos obscured by visibility issues and crowding, according to an international team of scientists led by researchers at the Yale-National University of Singapore College. The research team used two deep learning algorithms to enhance the quality of videos taken at night and in rainy conditions. One algorithm boosts brightness while simultaneously suppressing noise and light effects to produce clear nighttime images, while the other algorithm applies frame alignment and depth estimation to eliminate rain streaks and the rain veiling effect. The team also developed a technique for estimating three-dimensional human poses in videos of crowded environments more reliably by combining top-down and bottom-up approaches. | [] | [] | [] | scitechnews | None | None | None | None | Novel computer vision and human pose estimation methods can extract more accurate data from videos obscured by visibility issues and crowding, according to an international team of scientists led by researchers at the Yale-National University of Singapore College. The research team used two deep learning algorithms to enhance the quality of videos taken at night and in rainy conditions. One algorithm boosts brightness while simultaneously suppressing noise and light effects to produce clear nighttime images, while the other algorithm applies frame alignment and depth estimation to eliminate rain streaks and the rain veiling effect. The team also developed a technique for estimating three-dimensional human poses in videos of crowded environments more reliably by combining top-down and bottom-up approaches.
|
||||
61 | Ancient Printer Security Bug Affects Millions of Devices Worldwide | Cybersecurity researchers have helped patch a privilege escalation vulnerability in the printer driver for HP , Samsung, and Xerox printers that managed to evade detection for 16 years.
SentinelOne, which unearthed the high-severity vulnerability, believes it has been present since 2005 and likely affects millions of devices and users worldwide.
According to the company's researchers, the vulnerable driver ships with over 380 different HP and Samsung printer models as well as at least a dozen different Xerox products.
"Successfully exploiting a driver vulnerability might allow attackers to potentially install programs, view, change, encrypt or delete data, or create new accounts with full user rights," explained Asaf Amir, VP of Research at SentinelOne.
The security flaw, tracked as CVE-2021-3438, is explained as a buffer overflow vulnerability that could be exploited in a local user privilege escalation attack.
Moreover, since the bug exists in the printer driver, which gets loaded automatically by Windows, the vulnerability can be exploited even when the printer isn't connected to the targeted device.
The only saving grace is that to exploit the bug, the attackers need local user access to the system with the buggy driver.
"While we haven't seen any indicators that this vulnerability has been exploited in the wild up till now, with hundreds of millions of enterprises and users currently vulnerable, it is inevitable that attackers will seek out those that do not take the appropriate action," concludes Amir urging users of the affected devices to patch their drivers immediately. | Cybersecurity researchers at SentinelOne have identified a highly severe privilege escalation vulnerability in HP, Samsung, and Xerox printer drivers. The vulnerability appears to have been present since 2005. The researchers said millions of devices and users worldwide likely have been impacted by the buffer overflow vulnerability, which can be exploited whether or not a printer is connected to a targeted device. SentinelOne's Asaf Amir said, "Successfully exploiting a driver vulnerability might allow attackers to potentially install programs; view, change, encrypt, or delete data, or create new accounts with full user rights." Hackers would need local user access to the system to access the affected driver and take advantage of the vulnerability. | [] | [] | [] | scitechnews | None | None | None | None | Cybersecurity researchers at SentinelOne have identified a highly severe privilege escalation vulnerability in HP, Samsung, and Xerox printer drivers. The vulnerability appears to have been present since 2005. The researchers said millions of devices and users worldwide likely have been impacted by the buffer overflow vulnerability, which can be exploited whether or not a printer is connected to a targeted device. SentinelOne's Asaf Amir said, "Successfully exploiting a driver vulnerability might allow attackers to potentially install programs; view, change, encrypt, or delete data, or create new accounts with full user rights." Hackers would need local user access to the system to access the affected driver and take advantage of the vulnerability.
|
|||
63 | Test of Time Award Bestowed for Data Privacy Paper | UNIVERSITY PARK, Pa. - A 2011 paper on data privacy co-authored by Dan Kifer, professor of computer science and engineering at Penn State, received the 2021 Association for Computing Machinery's Special Interest Group on Management of Data (ACM SIGMOD) Test of Time award .
The paper, No Free Lunch in Data Privacy , was co-authored by Ashwin Machanavajjhala, associate professor of computer science at Duke University, and published in the Proceeding of the 2011 ACM SIGMOD International Conference on Management of Data. The work examines how an individual's information can be embedded in data sets in ways that make privacy protections difficult. | ACM's Special Interest Group on Management of Data (SIGMOD) has named Dan Kifer, professor of computer science and engineering at Pennsylvania State University, and Duke University's Ashwin Machanavajjhala, recipients of its 2021 Test of Time award. The 2011 paper explored how an individual's information can be incorporated in datasets in a manner that can complicate privacy protection. The awards committee cited the paper as raising "fundamental questions on how to define privacy, and the situations when differential private mechanisms provide meaningful semantic privacy guarantees." The committee also said the research covered by the paper led to enhanced privacy frameworks. | [] | [] | [] | scitechnews | None | None | None | None | ACM's Special Interest Group on Management of Data (SIGMOD) has named Dan Kifer, professor of computer science and engineering at Pennsylvania State University, and Duke University's Ashwin Machanavajjhala, recipients of its 2021 Test of Time award. The 2011 paper explored how an individual's information can be incorporated in datasets in a manner that can complicate privacy protection. The awards committee cited the paper as raising "fundamental questions on how to define privacy, and the situations when differential private mechanisms provide meaningful semantic privacy guarantees." The committee also said the research covered by the paper led to enhanced privacy frameworks.
|
|||
64 | Russia Disconnects from Internet in Tests as It Bolsters Security | MOSCOW, July 22 (Reuters) - Russia managed to disconnect itself from the global internet during tests in June and July, the RBC daily reported on Thursday, citing documents from the working group tasked with improving Russia's internet security.
Russia adopted legislation, known as the "sovereign internet" law, in late 2019 that seeks to shield the country from being cut off from foreign infrastructure, in answer to what Russia called the "aggressive nature" of the United States' national cyber security strategy.
The legislation caused consternation among free speech activists, who feared the move would strengthen government oversight of cyberspace.
Tests involving all Russia's major telecoms firms were held from June 15 to July 15 and were successful, according to preliminary results, RBC cited a source in the working group as saying.
"The purpose of the tests is to determine the ability of the 'Runet' to work in case of external distortions, blocks and other threats," the source said.
Another RBC source said the capability of physically disconnecting the Russian part of the internet was tested.
It was not immediately clear how long the disconnection lasted or whether there were any noticeable disruptions to internet traffic.
The law stipulates that tests be carried out every year, but operations were called off in 2020 due to complications with the COVID-19 pandemic, RBC said.
Karen Kazaryan, head of analysis firm Internet Research Institute, said the tests were likely a show of activity after a year of doing nothing and that he did not expect Russia to launch a sovereign internet any time soon.
"Given the general secrecy of the process and the lack of public documents on the subject, it is difficult to say what happened in these tests," he said.
The Kremlin was aware of the tests, spokesman Dmitry Peskov said, describing them as timely and saying that Russia had to be ready for anything.
The legislation seeks to route Russian web traffic and data through points controlled by state authorities and build a national Domain Name System to allow the internet to continue working even if Russia is cut off.
In June 2019, President Vladimir Putin said Moscow had to ensure that the 'Runet' could function in a reliable way to guard against servers outside of Russia's control in other countries being switched off and their operations compromised.
State communications regulator Roskomnadzor said the tests were aimed at improving the integrity, stability and security of Russia's internet infrastructure, RBC reported.
It said the equipment installed as part of the tests had been used by Roskomnadzor to slow down the speed of social network Twitter (TWTR.N) since March over a failure to delete content Moscow deems illegal. | Russia reportedly disconnected from the global Internet during tests in June and July, according to a report by the RBC daily that cited documents from the working group responsible for strengthening Russia's Internet security under the 2019 "sovereign Internet" law, which aims to prevent Russia from being cut off from foreign infrastructure. A working group source said the purpose of tests was "to determine the ability of the 'Runet' to work in case of external distortions, blocks and other threats." The Internet Research Institute's Karen Kazaryan said, "Given the general secrecy of the process and the lack of public documents on the subject, it is difficult to say what happened in these tests." | [] | [] | [] | scitechnews | None | None | None | None | Russia reportedly disconnected from the global Internet during tests in June and July, according to a report by the RBC daily that cited documents from the working group responsible for strengthening Russia's Internet security under the 2019 "sovereign Internet" law, which aims to prevent Russia from being cut off from foreign infrastructure. A working group source said the purpose of tests was "to determine the ability of the 'Runet' to work in case of external distortions, blocks and other threats." The Internet Research Institute's Karen Kazaryan said, "Given the general secrecy of the process and the lack of public documents on the subject, it is difficult to say what happened in these tests."
|
|||
65 | Simulator Helps Robots Sharpen Their Cutting Skills | Researchers from the USC's Department of Computer Science and NVIDIA have unveiled a new simulator for robotic cutting that can accurately reproduce the forces acting on a knife as it slices through common food, such as fruit and vegetables. The system could also simulate cutting through human tissue, offering potential applications in surgical robotics. The paper was presented at the Robotics: Science and Systems (RSS) Conference 2021 on July 16, where it received the Best Student Paper Award .
In the past, researchers have had trouble creating intelligent robots that replicate cutting. One challenge, they've argued, is that no two objects are the same, and current robotic cutting systems struggle with variation. To overcome this, the team devised a unique approach to simulate cutting by introducing springs between the two halves of the object being cut, represented by a mesh. These springs are weakened over time in proportion to the force exerted by the knife on the mesh.
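A minimal sketch of that spring-weakening idea follows: springs join the two halves of the cut plane, and their damage grows in proportion to the force the knife applies near them. The constants and the force-spreading rule are illustrative assumptions, not the parameters of the USC/NVIDIA simulator.

```python
# Springs along the cut plane lose stiffness in proportion to local knife force.
import numpy as np

n_springs = 20
x = np.linspace(0.0, 0.1, n_springs)      # spring positions along the cut (m)
stiffness = np.full(n_springs, 500.0)     # N/m for intact material
damage = np.zeros(n_springs)              # 0 = intact, 1 = fully cut
DT, CONTACT_RADIUS = 1e-3, 0.01
WEAKEN_RATE = 2.0                         # damage per newton-second (illustrative)

def knife_force_profile(knife_x, total_force):
    """Spread the knife force over springs near the blade contact point."""
    w = np.exp(-((x - knife_x) / CONTACT_RADIUS) ** 2)
    return total_force * w / w.sum()

knife_x, knife_force = 0.0, 5.0           # blade slides along the cut, pressing at 5 N
for step in range(2000):
    knife_x = min(0.1, knife_x + 0.05 * DT)       # slicing motion
    f = knife_force_profile(knife_x, knife_force)
    damage = np.clip(damage + WEAKEN_RATE * f * DT, 0.0, 1.0)

remaining = stiffness * (1.0 - damage)    # weakened springs resist the blade less
print("fully cut springs:", int((damage > 0.99).sum()), "of", n_springs)
print("min remaining stiffness (N/m):", float(remaining.min()))
```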
"What makes ours a special kind of simulator is that it is 'differentiable,' which means that it can help us automatically tune these simulation parameters from real-world measurements," said lead author Eric Heiden, a PhD in computer science student at USC. "That's important because closing this reality gap is a significant challenge for roboticists today. Without this, robots may never break out of simulation into the real world."
To transfer skills from simulation to reality, the simulator must be able to model a real system. In one of the experiments, the researchers used a dataset of force profiles from a physical robot to produce highly accurate predictions of how the knife would move in real life. In addition to applications in the food processing industry, where robots could take over dangerous tasks like repetitive cutting, the simulator could improve force haptic feedback accuracy in surgical robots, helping to guide surgeons and prevent injury.
"Here, it is important to have an accurate model of the cutting process and to be able to realistically reproduce the forces acting on the cutting tool as different kinds of tissue are being cut," said Heiden. "With our approach, we are able to automatically tune our simulator to match different types of material and achieve highly accurate simulations of the force profile." In ongoing research, the team is applying the system to real-world robots.
Co-authors are Miles Macklin, Yashraj S. Narang, Dieter Fox, Animesh Garg, and Fabio Ramos, all of NVIDIA.
The full paper (open access) and blog post are available here . | A new robotic cutting simulator developed by researchers at the University of Southern California (USC) and NVIDIA replicates the forces acting on a knife slicing through foods. To simulate cutting, the researchers put springs between the two halves of the object being cut, represented by mesh; over time, the springs are weakened proportionate to the force exerted by the knife on the mesh. The simulator could pave the way for the use of robots in the food processing industry or in the operating room. USC's Eric Heiden said, "It is important to have an accurate model of the cutting process and to be able to realistically reproduce the forces acting on the cutting tool as different kinds of tissue are being cut. With our approach, we are able to automatically tune our simulator to match different types of material and achieve highly accurate simulations of the force profile." | [] | [] | [] | scitechnews | None | None | None | None | A new robotic cutting simulator developed by researchers at the University of Southern California (USC) and NVIDIA replicates the forces acting on a knife slicing through foods. To simulate cutting, the researchers put springs between the two halves of the object being cut, represented by mesh; over time, the springs are weakened proportionate to the force exerted by the knife on the mesh. The simulator could pave the way for the use of robots in the food processing industry or in the operating room. USC's Eric Heiden said, "It is important to have an accurate model of the cutting process and to be able to realistically reproduce the forces acting on the cutting tool as different kinds of tissue are being cut. With our approach, we are able to automatically tune our simulator to match different types of material and achieve highly accurate simulations of the force profile."
|
|||
66 | A Smart City Future for Virginia's Amazon HQ2 Neighborhood | Developer JBG Smith has partnered with AT&T to build the first "smart city at scale" in National Landing in northern Virginia. The plans include building a 5G network from the ground up in a four-mile zone featuring office, residential, and retail space, as well as Amazon's new second headquarters. The 5G network would serve as the foundation for National Landing to become a testbed for urban innovations featuring sensors, artificial intelligence, and Internet of Things technology. To ensure instantaneous device connections, AT&T plans to integrate 5G antennas into street furniture and the sides of buildings. Some of the network infrastructure will be rolled out during the first half of next year. | [] | [] | [] | scitechnews | None | None | None | None | Developer JBG Smith has partnered with AT&T to build the first "smart city at scale" in National Landing in northern Virginia. The plans include building a 5G network from the ground up in a four-mile zone featuring office, residential, and retail space, as well as Amazon's new second headquarters. The 5G network would serve as the foundation for National Landing to become a testbed for urban innovations featuring sensors, artificial intelligence, and Internet of Things technology. To ensure instantaneous device connections, AT&T plans to integrate 5G antennas into street furniture and the sides of buildings. Some of the network infrastructure will be rolled out during the first half of next year.
|
||||
67 | TSA Issues Cybersecurity Rules for Pipeline Companies | A U.S. Transportation Security Administration (TSA) directive imposes new rules requiring pipeline operators to strengthen their cyberdefenses. The order coincides with the first-ever disclosure by the Department of Homeland Security and the Federal Bureau of Investigation that Chinese state-sponsored hackers targeted 23 U.S. natural gas pipeline operators between 2011 and 2013. The announcement offers few details on the directive or its enforcement, as much is classified to keep hackers in the dark about pipeline operators' cybersecurity measures. The directive requires pipeline operators to deploy safeguards against ransomware on information technology (IT) systems commonly targeted by hackers, as well as on physical fuel-flow controls. Operators also must review their IT infrastructures and develop hacking response plans. | [] | [] | [] | scitechnews | None | None | None | None | A U.S. Transportation Security Administration (TSA) directive imposes new rules requiring pipeline operators to strengthen their cyberdefenses. The order coincides with the first-ever disclosure by the Department of Homeland Security and the Federal Bureau of Investigation that Chinese state-sponsored hackers targeted 23 U.S. natural gas pipeline operators between 2011 and 2013. The announcement offers few details on the directive or its enforcement, as much is classified to keep hackers in the dark about pipeline operators' cybersecurity measures. The directive requires pipeline operators to deploy safeguards against ransomware on information technology (IT) systems commonly targeted by hackers, as well as on physical fuel-flow controls. Operators also must review their IT infrastructures and develop hacking response plans.
|
||||
68 | Training Computers to Transfer Music from One Style to Another | Can artificial intelligence enable computers to translate a musical composition between musical styles - e.g., from pop to classical or to jazz? A professor of music at UC San Diego and a high school student say they have developed a machine learning tool that does just that.
"People are more familiar with machine learning that can automatically convert an image in one style to another, like when you use filters on Instagram to change an image's style," said UC San Diego computer music professor Shlomo Dubnov. "Past attempts to convert compositions from one musical style to another came up short because they failed to distinguish between style and content."
To fix that problem, Dubnov and co-author Conan Lu developed ChordGAN - a conditional generative adversarial network (GAN) architecture that uses chroma sampling, which only records a 12-tone note distribution profile to separate style (musical texture) from content (i.e., tonal or chord changes).
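Chroma extraction of this kind is straightforward to sketch: each time frame is reduced to a 12-bin profile of how strongly each pitch class (C, C#, ..., B) sounds, discarding octave and timbre. The note list and frame size below are illustrative assumptions rather than the paper's exact preprocessing.

```python
# Fold symbolic notes onto 12 pitch classes to get a per-frame chroma profile.
import numpy as np

# (midi_pitch, start_beat, duration_beats): a C major chord, then an A minor chord
notes = [(60, 0, 2), (64, 0, 2), (67, 0, 2),
         (57, 2, 2), (60, 2, 2), (64, 2, 2)]

FRAMES_PER_BEAT, TOTAL_BEATS = 4, 4
n_frames = FRAMES_PER_BEAT * TOTAL_BEATS
chroma = np.zeros((12, n_frames))

for pitch, start, dur in notes:
    a = int(start * FRAMES_PER_BEAT)
    b = int((start + dur) * FRAMES_PER_BEAT)
    chroma[pitch % 12, a:b] += 1.0            # octave information is discarded

chroma /= np.maximum(chroma.sum(axis=0, keepdims=True), 1e-9)   # distribution per frame
print(chroma[:, 0].round(2))    # frame 0: energy only on C, E and G
```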
"This explicit distinction of style from content allows the network to consistently learn style features," noted Lu, a senior from Redmond High School in Redmond, Washington, who began developing the technology in summer 2019 as a participant in UC San Diego's California Summer School for Mathematics and Science (COSMOS). Lu was part of the COSMOS "Music and Technology" cluster taught by Dubnov, who also directs the Qualcomm Institute's Center for Research in Entertainment and Learning (CREL) and is an affiliate professor in the Computer Science and Engineering department of UC San Diego's Jacobs School of Engineering.
On July 20, Dubnov and Lu will present their findings in a paper* at the 2nd Conference on AI Music Creativity (AIMC 2021). The virtual conference - on the theme of "Performing [with] Machines" - takes place July 18-22 via Zoom and is organized by the Institute of Electronic Music and Acoustics at the University of Music and Performing Arts in Graz, Austria.
After attending the COSMOS program in 2019 at UC San Diego, Lu continued to work remotely with Dubnov through the pandemic to jointly author the paper to be presented at AIMC 2021.
For their paper, Dubnov and Lu developed a data set comprising a few hundred MIDI samples from pop, jazz and classical music styles. The MIDI files were pre-processed into piano roll and chroma formats, training the network to convert a music score.
"One advantage of our tool is its flexibility to accommodate different genres of music," explained Lu. "ChordGAN only controls the transfer of chroma features, so any tonal music can be given as input to the network to generate a piece in the style of a particular musical style.
To evaluate the tool's success, Lu used the so-called Tonnetz distance to measure the preservation of content (e.g., chords and harmony) to ensure that converting to a different style would not result in losing content in the process.
"The Tonnetz representation displays harmonic relationships within a piece," noted Dubnov. "Since the main goal of style transfer in this method is to retain the main harmonies and chords of a piece while changing stylistic elements, the Tonnetz distance provides a useful metric in determining the success of the transfer."
The researchers also added an independent genre classifier to verify that the resulting style transfer was realistic. In testing, the genre classifier functioned best on jazz clips (74 percent accuracy) and only slightly less well on pop music (68 percent) and classical (64 percent). (While the original evaluation for classical music was limited to Bach preludes, subsequent testing with classical compositions from Haydn and Mozart also proved effective.)
"Given the success of evaluating ChordGAN for style transfer under our two metrics," said Lu, "our solution can be utilized as a tool for musicians to study compositional techniques and generate music automatically from lead sheets."
The high school student's earlier research on machine learning and music earned Lu a bronze grand medal at the 2021 Washington Science and Engineering Fair. That success also qualified him to compete at the Regeneron International Science and Engineering Fair (ISEF), the largest pre-collegiate science fair in the world. Lu became a finalist at ISEF and received an honorable mention from the Association for the Advancement of Artificial Intelligence.
*Lu, C. and Dubnov, S. ChordGAN: Symbolic Music Style Transfer with Chroma Feature Extraction , AIMC 2021, Graz, Austria. | Translating musical compositions between styles is possible via the ChordGAN tool developed by the University of California, San Diego (UCSD) 's Shlomo Dubnov and Redmond, WA, high school senior Conan Lu. ChordGAN is a conditional generative adversarial network (GAN) framework that uses chroma sampling, which only records a 12-tone note distribution profile to differentiate style from content (tonal or chord changes). Lu said, "This explicit distinction of style from content allows the network to consistently learn style features." The researchers compiled a dataset from several hundred MIDI audio-data samples in the pop, jazz, and classical music styles; the files were pre-processed to convert the audio files into piano roll and chroma formats. Said Lu, "Our solution can be utilized as a tool for musicians to study compositional techniques and generate music automatically from lead sheets." | [] | [] | [] | scitechnews | None | None | None | None | Translating musical compositions between styles is possible via the ChordGAN tool developed by the University of California, San Diego (UCSD) 's Shlomo Dubnov and Redmond, WA, high school senior Conan Lu. ChordGAN is a conditional generative adversarial network (GAN) framework that uses chroma sampling, which only records a 12-tone note distribution profile to differentiate style from content (tonal or chord changes). Lu said, "This explicit distinction of style from content allows the network to consistently learn style features." The researchers compiled a dataset from several hundred MIDI audio-data samples in the pop, jazz, and classical music styles; the files were pre-processed to convert the audio files into piano roll and chroma formats. Said Lu, "Our solution can be utilized as a tool for musicians to study compositional techniques and generate music automatically from lead sheets."
|
|||
70 | A Metric for Designing Safer Streets | A new study published in Accident Analysis & Prevention shows how biometric data can be used to find potentially challenging and dangerous areas of urban infrastructure before a crash occurs. Lead author Megan Ryerson led a team of researchers in the Stuart Weitzman School of Design and the School of Engineering and Applied Science in collecting and analyzing eye-tracking data from cyclists navigating Philadelphia's streets. The team found that individual-based metrics can provide a more proactive approach for designing safer roadways for bicyclists and pedestrians.
Current federal rules for installing safe transportation interventions at an unsafe crossing - such as a crosswalk with a traffic signal - require either a minimum of 90-100 pedestrians crossing that location every hour or a minimum of five pedestrians struck by a driver at that location in one year. Ryerson says that this practice of planning safety interventions reactively, at a "literal human cost," has motivated her and her team to find more proactive safety metrics that don't require waiting for tragic results.
Part of the challenge, says Ryerson, is that transportation systems are designed and refined using metrics like crash or fatality data instead of data on human behavior to help understand what makes an area unsafe or what specific interventions would be the most impactful. This reactive approach also fails to capture locations where people might want to cross but don't because they consider it too dangerous - crossings that more people would use if they were made safe.
"Today we have technology, data science, and the capability to study safety in ways that we didn't have when the field of transportation safety was born," says Ryerson. "We don't have to be reactive in planning safe transportation systems; we can instead develop innovative, proactive ways to evaluate the safety of our infrastructure."
The team developed an approach to evaluate cognitive workload, a measure of a person's ability to perceive and process information, in cyclists. Cognitive workload studies are frequently used in other fields of transportation, such as air traffic control and driving simulations, to determine what designs or conditions enable people to process the information around them. But studies looking at cognitive workload in bicyclists and pedestrians are not as common due to a number of factors, including the difficulty of developing realistic cycling simulations.
The researchers in Ryerson's lab looked at how different infrastructure designs elicit changes in cognitive workload and stress in urban cyclists. In 2018, the team had 39 cyclists travel along a U-shaped route from JFK Boulevard and Market Street, down 15th Street to 20th Street, then returning to 15th and Market. Riders wore Tobii eye-tracking glasses equipped with inward- and outward-facing cameras and a gyroscope capable of collecting eye- and head-movement data 100 times per second.
Besides being one of Philadelphia's newest protected bicycle lanes at the time, and therefore a new experience for all of the study participants, the route also features a dramatic change in infrastructure over its 8-10-minute length, including a mix of protected bike lanes, car-bike mixing zones, and completely unprotected areas. "We felt that, in a short segment of space, our subjects could experience a range of transportation-infrastructure designs which may elicit different stress and cognitive workload responses," Ryerson says.
One of the study's main findings is the ability to correlate locations that have disproportionately high numbers of crashes with a consistent biometric response that indicates increased cognitive workload. If a person's cognitive workload is high, Ryerson says, it doesn't necessarily mean that they will crash, but it does mean that a person is less able to process new information, like a pedestrian or a driver entering the bike lane, and react appropriately. High cognitive workload means the threat of a crash is heightened.
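The kind of analysis described here can be sketched by aggregating a gaze-derived workload proxy per street segment and checking whether segments with more recorded crashes also show higher average workload. All numbers and the workload proxy below are invented for illustration; the study's actual measures are richer.

```python
# Correlate per-segment crash counts with a (synthetic) rider workload proxy.
import numpy as np

rng = np.random.default_rng(0)
segments = ["15th & Market", "15th & Chestnut", "16th & Market", "20th & JFK"]
crashes = np.array([9, 2, 5, 1])            # hypothetical annual crash counts

# Hypothetical workload proxy (e.g. fixations per second) averaged over 39 riders,
# generated so that busier crash sites tend to score a little higher.
workload = np.array([rng.normal(3.0 + 0.15 * c, 0.3, size=39).mean() for c in crashes])

r = np.corrcoef(crashes, workload)[0, 1]
print({s: round(float(w), 2) for s, w in zip(segments, workload)})
print("crash/workload correlation:", round(float(r), 2))
```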
In addition, the researchers found that stressful areas were consistent between expert cyclists and those less experienced or confident. This has implications for current approaches to managing safety, which typically focus on pedestrian- and cyclist-education interventions. Education is still important, Ryerson says, but these results show that infrastructure design is just as important in terms of making spaces safe.
"Even if you're a more competent cyclist than I am, we still have very similar stress and workload profiles as we traverse the city," says Ryerson. "Our finding, that safety and stress are a function of the infrastructure design and not the individual, is a shift in perspective for the transportation-safety community. We can, and must, build safety into our transportation systems."
The Ryerson lab is now analyzing a separate eye-tracking dataset from cyclists traveling Spruce and Pine streets before and after the 2019-20 installation of protected bike lanes, an experiment that will allow closer study of the impacts of a design intervention.
Overall, Ryerson says, the research shows that it's possible to be more proactive about safety and that city planners could use individual-level data to identify areas where a traffic intervention might be useful - before anyone is hit by a car. "The COVID-19 pandemic encouraged so many of us to walk and bike for commuting and recreation. Sadly, it also brought an increase in crashes. We must proactively design safer streets and not wait to count more crashes and deaths. We can use the way people feel as they move through the city as a way to design safer transportation systems," she says.
The complete author list is Megan Ryerson, Carrie Long, Michael Fichman, Joshua Davidson, Kristen Scudder, George Poon, and Matthew Harris from Penn; Michelle Kim from Swarthmore; and Radhika Katti from Carnegie Mellon University.
Megan S. Ryerson is the UPS Chair of Transportation and associate professor of city and regional planning and associate dean for research in the Stuart Weitzman School of Design at the University of Pennsylvania . She also has a secondary appointment in the Department of Electrical & Systems Engineering in the School of Engineering and Applied Science .
Carrie Long is a senior transportation planner at Gannett Fleming and a lecturer at the Stuart Weitzman School of Design. Joshua Davidson is a Ph.D. student in city and regional planning at the Weitzman School of Design. Matthew Harris is an adjunct professor and Michael Fichman is a lecturer in the Urban Spatial Analytics program in the University of Pennsylvania Weitzman School of Design.
This research was supported by the University of Pennsylvania Perelman School of Medicine's Quartet Pilot Project and Penn's Mobility21 National University Transportation center in partnerships with Carnegie Mellon University, which is sponsored by the U.S. Department of Transportation grant 69A3551747111. | A study by University of Pennsylvania (Penn) researchers demonstrated that biometric data can identify potentially problematic and hazardous components of urban infrastructure prior to their involvement in collisions. The team's approach involved assessing the effect of different infrastructure designs on the cognitive workload of bicyclists on the streets in Philadelphia, via eye-tracking data collection and analysis. The investigation determined that locations marked by disproportionately high numbers of crashes correlate with a consistent biometric response that signals increased cognitive workload, meaning a person will be less likely to be able to process new information, and the threat of a crash is higher. Penn's Megan Ryerson said the findings suggest individualized metrics could offer a more proactive strategy for designing safer roadways and traffic interventions for cyclists and pedestrians. | [] | [] | [] | scitechnews | None | None | None | None | A study by University of Pennsylvania (Penn) researchers demonstrated that biometric data can identify potentially problematic and hazardous components of urban infrastructure prior to their involvement in collisions. The team's approach involved assessing the effect of different infrastructure designs on the cognitive workload of bicyclists on the streets in Philadelphia, via eye-tracking data collection and analysis. The investigation determined that locations marked by disproportionately high numbers of crashes correlate with a consistent biometric response that signals increased cognitive workload, meaning a person will be less likely to be able to process new information, and the threat of a crash is higher. Penn's Megan Ryerson said the findings suggest individualized metrics could offer a more proactive strategy for designing safer roadways and traffic interventions for cyclists and pedestrians.
A new study published in Accident Analysis & Prevention shows how biometric data can be used to find potentially challenging and dangerous areas of urban infrastructure before a crash occurs. Lead author Megan Ryerson led a team of researchers in the Stuart Weitzman School of Design and the School of Engineering and Applied Science in collecting and analyzing eye-tracking data from cyclists navigating Philadelphia's streets. The team found that individual-based metrics can provide a more proactive approach for designing safer roadways for bicyclists and pedestrians.
Current federal rules for installing safe transportation interventions at an unsafe crossing - such as a crosswalk with a traffic signal - require either a minimum of 90-100 pedestrians crossing this location every hour or a minimum of five pedestrians struck by a driver at that location in one year. Ryerson says that this practice of planning safety interventions reactively, at a "literal human cost," has motivated her and her team to find more proactive safety metrics that don't require waiting for tragic results.
Part of the challenge, says Ryerson, is that transportation systems are designed and refined using metrics like crash or fatality data instead of data on human behavior to help understand what makes an area unsafe or what specific interventions would be the most impactful. This reactive approach also fails to capture locations where people might want to cross but don't because they consider it too dangerous, crossings that more people would use if they were made safe.
"Today we have technology, data science, and the capability to study safety in ways that we didn't have when the field of transportation safety was born," says Ryerson. "We don't have to be reactive in planning safe transportation systems; we can instead develop innovative, proactive ways to evaluate the safety of our infrastructure."
The team developed an approach to evaluate cognitive workload, a measure of a person's ability to perceive and process information, in cyclists. Cognitive workload studies are frequently used in other fields of transportation, such as air traffic control and driving simulations, to determine what designs or conditions enable people to process the information around them. But studies looking at cognitive workload in bicyclists and pedestrians are not as common due to a number of factors, including the difficulty of developing realistic cycling simulations.
The researchers in Ryerson's lab looked at how different infrastructure designs elicit changes in cognitive workload and stress in urban cyclists. In 2018, the team had 39 cyclists travel along a U-shaped route from JFK Boulevard and Market Street, down 15th Street to 20th Street, then returning to 15th and Market. Riders wore Tobii eye-tracking glasses equipped with inward- and outward-facing cameras and a gyroscope capable of collecting eye- and head-movement data 100 times per second.
Along with being one of Philadelphia's newest protected bicycle lanes at the time, and therefore a new experience for all of the study participants, the route also featured a dramatic change in infrastructure along the 8-10-minute ride, including a mix of protected bike lanes, car-bike mixing zones, and completely unprotected areas. "We felt that, in a short segment of space, our subjects could experience a range of transportation-infrastructure designs which may elicit different stress and cognitive workload responses," Ryerson says.
One of the study's main findings is the ability to correlate locations that have disproportionately high numbers of crashes with a consistent biometric response that indicates increased cognitive workload. If a person's cognitive workload is high, Ryerson says, it doesn't necessarily mean that they will crash, but it does mean that a person is less able to process new information, like a pedestrian or a driver entering the bike lane, and react appropriately. High cognitive workload means the threat of a crash is heightened.
In addition, the researchers found that stressful areas were consistent between expert cyclists and those less experienced or confident. This has implications for current approaches to managing safety, which typically focus on pedestrian- and cyclist-education interventions. Education is still important, Ryerson says, but these results show that infrastructure design is just as important in terms of making spaces safe.
"Even if you're a more competent cyclist than I am, we still have very similar stress and workload profiles as we traverse the city," says Ryerson. "Our finding, that safety and stress are a function of the infrastructure design and not the individual, is a shift in perspective for the transportation-safety community. We can, and must, build safety into our transportation systems."
The Ryerson lab is now analyzing a separate eye-tracking dataset from cyclists traveling Spruce and Pine streets before and after the 2019-20 installation of protected bike lanes, an experiment that will allow closer study of the impacts of a design intervention.
Overall, Ryerson says, the research shows that it's possible to be more proactive about safety and that city planners could use individual-level data to identify areas where a traffic intervention might be useful - before anyone is hit by a car. "The COVID-19 pandemic encouraged so many of us to walk and bike for commuting and recreation. Sadly, it also brought an increase in crashes. We must proactively design safer streets and not wait to count more crashes and deaths. We can use the way people feel as they move through the city as a way to design safer transportation systems," she says.
The complete author list is Megan Ryerson, Carrie Long, Michael Fichman, Joshua Davidson, Kristen Scudder, George Poon, and Matthew Harris from Penn; Michelle Kim from Swarthmore; and Radhika Katti from Carnegie Mellon University.
Megan S. Ryerson is the UPS Chair of Transportation and associate professor of city and regional planning and associate dean for research in the Stuart Weitzman School of Design at the University of Pennsylvania . She also has a secondary appointment in the Department of Electrical & Systems Engineering in the School of Engineering and Applied Science .
Carrie Long is a senior transportation planner at Gannett Fleming and a lecturer at the Stuart Weitzman School of Design. Joshua Davidson is a Ph.D. student in city and regional planning at the Weitzman School of Design. Matthew Harris is an adjunct professor and Michael Fichman is a lecturer in the Urban Spatial Analytics program in the University of Pennsylvania Weitzman School of Design.
This research was supported by the University of Pennsylvania Perelman School of Medicine's Quartet Pilot Project and Penn's Mobility21 National University Transportation center in partnerships with Carnegie Mellon University, which is sponsored by the U.S. Department of Transportation grant 69A3551747111. |
|||
71 | Fujitsu Uses Quantum-Inspired Algorithm to Tackle Space Waste | The University of Glasgow has worked with Fujitsu and satellite service and sustainability firm Astroscale on a quantum-inspired project to remove space debris.
The project, carried out as part of the UK Space Agency grant, Advancing research into space surveillance and tracking, was developed over six months. It makes use of Artificial Neural Network (ANN)-based rapid trajectory design algorithms, developed by the University of Glasgow, alongside Fujitsu's Digital Annealer and Quantum Inspired Optimisation Services to solve some of the main optimisation problems associated with ADR (Active Debris Removal) mission planning design.
There are 2,350 non-working satellites currently in orbit, and more than 28,000 pieces of debris being tracked by Space Surveillance networks.
Fujitsu said the quantum-inspired system, powered by Digital Annealer, optimises the mission plan by carefully deciding which debris is collected and when, determining the minimum fuel and minimum time required to bring inoperable spacecraft or satellites safely back to the disposal orbit.
According to Fujitsu, finding the optimal route to collect the space debris will save significant time and cost during the mission planning phase, and also as a consequence will improve commercial viability.
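As an illustration of the kind of sequencing problem being solved, the hedged Python sketch below orders debris visits so that a summed transfer cost is minimised. The cost matrix is invented and an ordinary simulated-annealing loop stands in for Fujitsu's Digital Annealer; in the project itself, the transfer costs come from the University of Glasgow's ANN-based trajectory models.

```python
# Illustrative sketch: choose the order in which a servicer satellite
# visits debris objects so the summed transfer cost is minimised.
# A plain simulated-annealing loop and a made-up cost matrix stand in
# for the Digital Annealer and the ANN-predicted costs.
import math
import random

# Hypothetical pairwise transfer costs (e.g., delta-v) between 5 objects;
# cost[i][j] = cost of moving from object i to object j.
cost = [
    [0.0, 2.1, 3.4, 1.8, 2.9],
    [2.1, 0.0, 1.2, 2.6, 3.1],
    [3.4, 1.2, 0.0, 2.2, 1.7],
    [1.8, 2.6, 2.2, 0.0, 1.4],
    [2.9, 3.1, 1.7, 1.4, 0.0],
]

def tour_cost(order):
    return sum(cost[a][b] for a, b in zip(order, order[1:]))

def anneal(n_objects, steps=20000, t0=2.0):
    order = list(range(n_objects))
    best = order[:]
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-6           # simple cooling schedule
        i, j = random.sample(range(n_objects), 2)
        cand = order[:]
        cand[i], cand[j] = cand[j], cand[i]       # swap two visits
        delta = tour_cost(cand) - tour_cost(order)
        if delta < 0 or random.random() < math.exp(-delta / t):
            order = cand
            if tour_cost(order) < tour_cost(best):
                best = order[:]
    return best, tour_cost(best)

random.seed(1)
plan, total = anneal(len(cost))
print("visit order:", plan, "total transfer cost:", round(total, 2))
```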
Jacob Geer, head of space surveillance and tracking at the UK Space Agency, said: "Monitoring hazardous space objects is vital for the protection of services we all rely on - from communications devices to satellite navigation. This project is one of the first examples of Quantum-inspired computing working with artificial intelligence to solve the problems space debris causes, but it's unlikely to be the last."
The project represents the next step in Astroscale's End-of-Life Services by Astroscale (ELSA) Programme to remove multiple debris objects with a single servicer satellite.
Ellen Devereux, digital annealer consultant at Fujitsu UK & Ireland, said the project not only makes the process much more cost-effective for those organisations needing to transfer and dispose of debris, but it also shows how AI and quantum-inspired computing can be used for optimisation.
"What we've learned over the course of the last six months is that this technology has huge implications for optimisation in space; not only when it comes to cleaning up debris, but also in-orbit servicing and more," she said. "Now we better understand its potential, we can't wait to see the technology applied during a future mission."
Amazon Web Services provided the cloud and AI and machine learning tools and services to support the project. The Amazon Sagemaker toolset was used to develop the ANNs for predicting the costs of orbital transfers.
Fujitsu, who spearheaded the project, is among seven UK companies to be awarded a share of more than £1m from the UK Space Agency to help track debris in space. The UK Space Agency and Ministry of Defence have announced the next step in their joint initiative to enhance the UK's awareness of events in space. | Researchers at the U.K.'s University of Glasgow, Fujitsu, and the satellite service and sustainability firm Astroscale together developed an artificial neural network (ANN) -based rapid trajectory design algorithm to address the removal of space debris. Powered by Fujitsu's Digital Annealer, the quantum-inspired system determines which debris will be collected and when, and plans the optimal route to carry out the mission to save time and money. The ANNs predicting the costs of such orbital transfers were developed with the Amazon Sagemaker toolset. Fujitsu's Ellen Devereux noted that the technology "has huge implications for optimization in space, not only when it comes to cleaning up debris, but also in-orbit servicing and more." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the U.K.'s University of Glasgow, Fujitsu, and the satellite service and sustainability firm Astroscale together developed an artificial neural network (ANN) -based rapid trajectory design algorithm to address the removal of space debris. Powered by Fujitsu's Digital Annealer, the quantum-inspired system determines which debris will be collected and when, and plans the optimal route to carry out the mission to save time and money. The ANNs predicting the costs of such orbital transfers were developed with the Amazon Sagemaker toolset. Fujitsu's Ellen Devereux noted that the technology "has huge implications for optimization in space, not only when it comes to cleaning up debris, but also in-orbit servicing and more."
The University of Glasgow has worked with Fujitsu and satellite service and sustainability firm Astroscale on a quantum-inspired project to remove space debris.
The project, carried out as part of the UK Space Agency grant, Advancing research into space surveillance and tracking, was developed over six months. It makes use of Artificial Neural Network (ANN)-based rapid trajectory design algorithms, developed by the University of Glasgow, alongside Fujitsu's Digital Annealer and Quantum Inspired Optimisation Services to solve some of the main optimisation problems associated with ADR (Active Debris Removal) mission planning design.
There are 2,350 non-working satellites currently in orbit, and more than 28,000 pieces of debris being tracked by Space Surveillance networks.
Fujitsu said the quantum-inspired system, powered by Digital Annealer, optimises the mission plan by carefully deciding which debris is collected and when, determining the minimum fuel and minimum time required to bring inoperable spacecraft or satellites safely back to the disposal orbit.
According to Fujitsu, finding the optimal route to collect the space debris will save significant time and cost during the mission planning phase, and also as a consequence will improve commercial viability.
Jacob Geer, head of space surveillance and tracking at the UK Space Agency, said: "Monitoring hazardous space objects is vital for the protection of services we all rely on - from communications devices to satellite navigation. This project is one of the first examples of Quantum-inspired computing working with artificial intelligence to solve the problems space debris causes, but it's unlikely to be the last."
The project represents the next step in Astroscale's End-of-Life Services by Astroscale (ELSA) Programme to remove multiple debris objects with a single servicer satellite.
Ellen Devereux, digital annealer consultant at Fujitsu UK & Ireland, said the project not only makes the process much more cost-effective for those organisations needing to transfer and dispose of debris, but it also shows how AI and quantum-inspired computing can be used for optimisation.
"What we've learned over the course of the last six months is that this technology has huge implications for optimisation in space; not only when it comes to cleaning up debris, but also in-orbit servicing and more," she said. "Now we better understand its potential, we can't wait to see the technology applied during a future mission."
Amazon Web Services provided the cloud and AI and machine learning tools and services to support the project. The Amazon Sagemaker toolset was used to develop the ANNs for predicting the costs of orbital transfers.
Fujitsu, who spearheaded the project, is among seven UK companies to be awarded a share of more than £1m from the UK Space Agency to help track debris in space. The UK Space Agency and Ministry of Defence have announced the next step in their joint initiative to enhance the UK's awareness of events in space. |
|||
73 | Dubai Police Will Use Citywide Network of Drones to Respond to Crime | Dubai is creating a network of pre-positioned drone bases so police can respond to incidents with drones anywhere in the city within a minute, down from 4.4 minutes currently. Israel's Airobotics will supply the quadcopters, which will operate from the base stations beginning in October, during Expo 2020 Dubai. The drones, which enter and exit their bases through a sliding roof, can fly pre-programmed patrols or be dispatched to a specific location. Operators at police headquarters can use the drones to inspect a scene, follow suspicious individuals or vehicles, and transmit data to other police units. Singapore used two of the quadcopters last year to monitor compliance with COVID-19 lockdowns, but the Dubai initiative is the first to use drones for citywide policing. | [] | [] | [] | scitechnews | None | None | None | None | Dubai is creating a network of pre-positioned drone bases so police can respond to incidents with drones anywhere in the city within a minute, down from 4.4 minutes currently. Israel's Airobotics will supply the quadcopters, which will operate from the base stations beginning in October, during Expo 2020 Dubai. The drones, which enter and exit their bases through a sliding roof, can fly pre-programmed patrols or be dispatched to a specific location. Operators at police headquarters can use the drones to inspect a scene, follow suspicious individuals or vehicles, and transmit data to other police units. Singapore used two of the quadcopters last year to monitor compliance with COVID-19 lockdowns, but the Dubai initiative is the first to use drones for citywide policing.
|
||||
74 | Hackers Got Past Windows Hello by Tricking Webcam | Biometric authentication is a key piece of the tech industry's plans to make the world password-less . But a new method for duping Microsoft's Windows Hello facial-recognition system shows that a little hardware fiddling can trick the system into unlocking when it shouldn't.
Services like Apple's FaceID have made facial-recognition authentication more commonplace in recent years, with Windows Hello driving adoption even farther. Apple only lets you use FaceID with the cameras embedded in recent iPhones and iPads, and it's still not supported on Macs at all. But because Windows hardware is so diverse, Hello facial recognition works with an array of third-party webcams . Where some might see ease of adoption, though, researchers from the security firm CyberArk saw potential vulnerability .
That's because you can't trust any old webcam to offer robust protections in how it collects and transmits data. Windows Hello facial recognition works only with webcams that have an infrared sensor in addition to the regular RGB sensor. But the system, it turns out, doesn't even look at RGB data. Which means that with one straight-on infrared image of a target's face and one black frame, the researchers found that they could unlock the victim's Windows Hello-protected device.
By manipulating a USB webcam to deliver an attacker-chosen image, the researchers could trick Windows Hello into thinking the device owner's face was present and unlocking.
"We tried to find the weakest point in the facial recognition and what would be the most interesting from the attacker's perspective, the most approachable option," says Omer Tsarfati, a researcher at the security firm CyberArk. "We created a full map of the Windows Hello facial-recognition flow and saw that the most convenient for an attacker would be to pretend to be the camera, because the whole system is relying on this input."
Microsoft calls the finding a "Windows Hello security feature bypass vulnerability" and released patches on Tuesday to address the issue. In addition, the company suggests that users enable "Windows Hello enhanced sign-in security," which uses Microsoft's "virtualization-based security" to encrypt Windows Hello face data and process it in a protected area of memory where it can't be tampered with. The company did not respond to a request for comment from WIRED about the CyberArk findings.
Tsarfati, who will present the findings next month at the Black Hat security conference in Las Vegas, says that the CyberArk team chose to look at Windows Hello's facial-recognition authentication, in particular, because there has already been a lot of research industrywide into PIN cracking and fingerprint-sensor spoofing . He adds that the team was drawn by the sizable Windows Hello user base. In May 2020, Microsoft said that the service had more than 150 million users. In December, the company added that 84.7 percent of Windows 10 users sign in with Windows Hello.
While it sounds simple - show the system two photos and you're in - these Windows Hello bypasses wouldn't be easy to carry out in practice. The hack requires that attackers have a good-quality infrared image of the target's face and have physical access to their device. But the concept is significant as Microsoft continues to push Hello adoption with Windows 11. Hardware diversity among Windows devices and the sorry state of IoT security could combine to create other vulnerabilities in how Windows Hello accepts face data.
"A really motivated attacker could do those things," says Tsarfati. "Microsoft was great to work with and produced mitigations, but the deeper problem itself about trust between the computer and the camera stays there."
There are different ways to take and process images for facial recognition. Apple's FaceID, for example, only works with the company's proprietary TrueDepth camera arrays, an infrared camera combined with a number of other sensors. But Apple is in a position to control both hardware and software on its devices in a way that Microsoft is not for the Windows ecosystem. The Windows Hello Face setup information simply says "Sign-in with your PC's infrared camera or an external infrared camera."
Marc Rogers, a longtime biometric-sensor security researcher and vice president of cybersecurity at the digital identity management company Okta, says that Microsoft should make it very clear to users which third-party webcams are certified as offering robust protections for Windows Hello. Users can still decide whether they want to buy one of these products versus any old infrared webcam, but specific guidelines and recommendations would help people understand the options.
The CyberArk research fits into a broader category of hacks known as "downgrade attacks," in which a device is tricked into relying on a less secure mode - like a malicious cell phone tower that forces your phone to use 3G mobile data, with its weaker defenses, instead of 4G. An attack that gets Windows Hello to accept static, prerecorded face data uses the same premise, and researchers have defeated Windows Hello's facial recognition before getting the system to accept photos using different techniques. Rogers says it's surprising that Microsoft didn't anticipate the possibility of attacks against third-party cameras like the one CyberArk devised.
"Really, Microsoft should know better," he says. "This attack pathway in general is one that we have known for a long time. I'm a bit disappointed that they aren't more strict about what cameras they will trust."
This story first appeared on wired.com. | Researchers at the security firm CyberArk uncovered a security feature bypass vulnerability in Microsoft's Windows Hello facial-recognition system that permitted them to manipulate a USB webcam to unlock a Windows Hello-protected device. CyberArk's Omer Tsarfati said, "We created a full map of the Windows Hello facial-recognition flow and saw that the most convenient for an attacker would be to pretend to be the camera, because the whole system is relying on this input." Hackers would need a good-quality infrared image of the victim's face and physical access to the webcam to take advantage of the vulnerability. Said Tsarfati, "A really motivated attacker could do those things. Microsoft was great to work with and produced mitigations, but the deeper problem itself about trust between the computer and the camera stays there." Microsoft has released patches to fix the issue. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the security firm CyberArk uncovered a security feature bypass vulnerability in Microsoft's Windows Hello facial-recognition system that permitted them to manipulate a USB webcam to unlock a Windows Hello-protected device. CyberArk's Omer Tsarfati said, "We created a full map of the Windows Hello facial-recognition flow and saw that the most convenient for an attacker would be to pretend to be the camera, because the whole system is relying on this input." Hackers would need a good-quality infrared image of the victim's face and physical access to the webcam to take advantage of the vulnerability. Said Tsarfati, "A really motivated attacker could do those things. Microsoft was great to work with and produced mitigations, but the deeper problem itself about trust between the computer and the camera stays there." Microsoft has released patches to fix the issue.
Biometric authentication is a key piece of the tech industry's plans to make the world password-less . But a new method for duping Microsoft's Windows Hello facial-recognition system shows that a little hardware fiddling can trick the system into unlocking when it shouldn't.
Services like Apple's FaceID have made facial-recognition authentication more commonplace in recent years, with Windows Hello driving adoption even farther. Apple only lets you use FaceID with the cameras embedded in recent iPhones and iPads, and it's still not supported on Macs at all. But because Windows hardware is so diverse, Hello facial recognition works with an array of third-party webcams . Where some might see ease of adoption, though, researchers from the security firm CyberArk saw potential vulnerability .
That's because you can't trust any old webcam to offer robust protections in how it collects and transmits data. Windows Hello facial recognition works only with webcams that have an infrared sensor in addition to the regular RGB sensor. But the system, it turns out, doesn't even look at RGB data. Which means that with one straight-on infrared image of a target's face and one black frame, the researchers found that they could unlock the victim's Windows Hello-protected device.
By manipulating a USB webcam to deliver an attacker-chosen image, the researchers could trick Windows Hello into thinking the device owner's face was present and unlocking.
"We tried to find the weakest point in the facial recognition and what would be the most interesting from the attacker's perspective, the most approachable option," says Omer Tsarfati, a researcher at the security firm CyberArk. "We created a full map of the Windows Hello facial-recognition flow and saw that the most convenient for an attacker would be to pretend to be the camera, because the whole system is relying on this input."
Microsoft calls the finding a "Windows Hello security feature bypass vulnerability" and released patches on Tuesday to address the issue. In addition, the company suggests that users enable "Windows Hello enhanced sign-in security," which uses Microsoft's "virtualization-based security" to encrypt Windows Hello face data and process it in a protected area of memory where it can't be tampered with. The company did not respond to a request for comment from WIRED about the CyberArk findings.
Tsarfati, who will present the findings next month at the Black Hat security conference in Las Vegas, says that the CyberArk team chose to look at Windows Hello's facial-recognition authentication, in particular, because there has already been a lot of research industrywide into PIN cracking and fingerprint-sensor spoofing . He adds that the team was drawn by the sizable Windows Hello user base. In May 2020, Microsoft said that the service had more than 150 million users. In December, the company added that 84.7 percent of Windows 10 users sign in with Windows Hello.
While it sounds simple - show the system two photos and you're in - these Windows Hello bypasses wouldn't be easy to carry out in practice. The hack requires that attackers have a good-quality infrared image of the target's face and have physical access to their device. But the concept is significant as Microsoft continues to push Hello adoption with Windows 11. Hardware diversity among Windows devices and the sorry state of IoT security could combine to create other vulnerabilities in how Windows Hello accepts face data.
"A really motivated attacker could do those things," says Tsarfati. "Microsoft was great to work with and produced mitigations, but the deeper problem itself about trust between the computer and the camera stays there."
There are different ways to take and process images for facial recognition. Apple's FaceID, for example, only works with the company's proprietary TrueDepth camera arrays, an infrared camera combined with a number of other sensors. But Apple is in a position to control both hardware and software on its devices in a way that Microsoft is not for the Windows ecosystem. The Windows Hello Face setup information simply says "Sign-in with your PC's infrared camera or an external infrared camera."
Marc Rogers, a longtime biometric-sensor security researcher and vice president of cybersecurity at the digital identity management company Okta, says that Microsoft should make it very clear to users which third-party webcams are certified as offering robust protections for Windows Hello. Users can still decide whether they want to buy one of these products versus any old infrared webcam, but specific guidelines and recommendations would help people understand the options.
The CyberArk research fits into a broader category of hacks known as "downgrade attacks," in which a device is tricked into relying on a less secure mode - like a malicious cell phone tower that forces your phone to use 3G mobile data, with its weaker defenses, instead of 4G. An attack that gets Windows Hello to accept static, prerecorded face data uses the same premise, and researchers have defeated Windows Hello's facial recognition before getting the system to accept photos using different techniques. Rogers says it's surprising that Microsoft didn't anticipate the possibility of attacks against third-party cameras like the one CyberArk devised.
"Really, Microsoft should know better," he says. "This attack pathway in general is one that we have known for a long time. I'm a bit disappointed that they aren't more strict about what cameras they will trust."
This story first appeared on wired.com. |
|||
75 | Public Database of AI-Predicted Protein Structures Could Transform Biology | A team of researchers says it has used a new artificial intelligence (AI) algorithm to forecast the three-dimensional structures of 350,000 proteins from humans and 20 model organisms. The team at U.K.-based AI developer DeepMind (which is owned by Alphabet, the parent of Google) developed the AlphaFold computer model, which it says has generated structures for almost 44% of all human proteins, encompassing nearly 60% of the amino acids encoded by the human genome. Researchers at the European Molecular Biology Laboratory (EMBL) in Germany compiled a freely available public database of DeepMind's new protein predictions, which is likely to help biologists determine out how thousands of unknown proteins operate. EMBL's Edith Heard said, "We believe this will be transformative to understanding how life works." | [] | [] | [] | scitechnews | None | None | None | None | A team of researchers says it has used a new artificial intelligence (AI) algorithm to forecast the three-dimensional structures of 350,000 proteins from humans and 20 model organisms. The team at U.K.-based AI developer DeepMind (which is owned by Alphabet, the parent of Google) developed the AlphaFold computer model, which it says has generated structures for almost 44% of all human proteins, encompassing nearly 60% of the amino acids encoded by the human genome. Researchers at the European Molecular Biology Laboratory (EMBL) in Germany compiled a freely available public database of DeepMind's new protein predictions, which is likely to help biologists determine out how thousands of unknown proteins operate. EMBL's Edith Heard said, "We believe this will be transformative to understanding how life works."
|
||||
76 | Total Artificial Heart Successfully Transplanted in U.S. | Surgeons at the Duke University Hospital recently transplanted a total artificial heart (TAH) into a 39-year-old man who experienced sudden heart failure. Unlike conventional artificial hearts , this TAH mimics the human heart and provides the recipient more independence after the surgery, the university said in a press release.
The TAH has been developed by the French company, CARMAT, and consists of two ventricular chambers and four biological valves ensuring that the prosthetic not only resembles the human heart but also functions like one.
The heartbeat is created by an actuator fluid that the patient carries in a bag outside the body; micropumps drive the heart in response to the patient's needs, as determined by the sensors and microprocessors on the heart itself. Two outlets connect the artificial heart to the aorta, which is a major artery in the body, as well as to the pulmonary artery that carries blood to the lungs to be oxygenated.
The recipient, a resident of Shallotte, North Carolina, was diagnosed with sudden heart failure at the Duke Center and had to undergo bypass surgery. However, his condition deteriorated rapidly, leaving him too ill to qualify for a heart transplant. Luckily, the Center was one of the trial sites where CARMAT is testing its artificial heart after receiving primary approvals from the U.S. Food and Drug Administration (FDA).
The recipient is now stable and being monitored at the hospital; the heart will remain connected to the Hospital Care Console (HCC) so that its functioning can be monitored. To lead a near-normal life, the recipient will have to carry an almost nine-pound (four-kilogram) bag containing a controller and two rechargeable battery packs that last approximately four hours before requiring recharging.
The device has already been approved for use in Europe but is only intended as a bridge for patients who are diagnosed with end-stage biventricular heart failure and are likely to undergo a heart transplant in the next 180 days, the company states on its website. Last year, the Duke University Hospital began transplanting hearts from donors who had died of heart failure but reanimating them in recipient patients, STAT News reports . Having conducted over 50 such surgeries in the past year alone, the hospital was able to reduce the median heart transplant time to 82 days. As one of six large hospitals in the U.S. that provide heart transplant services, the hospital is already helping reduce wait times and the number of deaths that occur while waiting for heart transplants.
The hospital conducted a video press conference with the surgeons involved in the transplant and senior staff leading the transplant program. Participating in the conference, the patient's wife, who is a practicing nurse said ," As a nurse, I understand how important it is to bring these advancements forward." "Both [my husband and I are so grateful that we've been provided an opportunity to participate in something that has the potential to have an impact on so many lives. " | Duke University Hospital surgeons successfully transplanted a total artificial heart (TAH) developed by France's CARMAT into a 39-year-old patient who had suffered sudden heart failure. The TAH both resembles and functions like the human heart. Actuator fluid carried in a bag outside the body is responsible for the heartbeat, and sensors and microprocessors on the heart trigger its micropumps based on patient need. The TAH is connected to the aorta and the pulmonary artery through two outlets. To keep the heart powered, the patient will need to carry a nearly nine-pound bag containing a controller and two chargeable battery packs. The TAH has received primary approval for testing from the U.S. Food and Drug Administration, and was approved for use in Europe for patients expected to receive a heart transplant within 180 days. | [] | [] | [] | scitechnews | None | None | None | None | Duke University Hospital surgeons successfully transplanted a total artificial heart (TAH) developed by France's CARMAT into a 39-year-old patient who had suffered sudden heart failure. The TAH both resembles and functions like the human heart. Actuator fluid carried in a bag outside the body is responsible for the heartbeat, and sensors and microprocessors on the heart trigger its micropumps based on patient need. The TAH is connected to the aorta and the pulmonary artery through two outlets. To keep the heart powered, the patient will need to carry a nearly nine-pound bag containing a controller and two chargeable battery packs. The TAH has received primary approval for testing from the U.S. Food and Drug Administration, and was approved for use in Europe for patients expected to receive a heart transplant within 180 days.
Surgeons at the Duke University Hospital recently transplanted a total artificial heart (TAH) into a 39-year-old man who experienced sudden heart failure. Unlike conventional artificial hearts , this TAH mimics the human heart and provides the recipient more independence after the surgery, the university said in a press release.
The TAH has been developed by the French company, CARMAT, and consists of two ventricular chambers and four biological valves ensuring that the prosthetic not only resembles the human heart but also functions like one.
The heartbeat is created by an actuator fluid that the patient carries in a bag outside the body; micropumps drive the heart in response to the patient's needs, as determined by the sensors and microprocessors on the heart itself. Two outlets connect the artificial heart to the aorta, which is a major artery in the body, as well as to the pulmonary artery that carries blood to the lungs to be oxygenated.
The recipient, a resident of Shallotte, North Carolina, was diagnosed with sudden heart failure at the Duke Center and had to undergo bypass surgery. However, his condition deteriorated rapidly, leaving him too ill to qualify for a heart transplant. Luckily, the Center was one of the trial sites where CARMAT is testing its artificial heart after receiving primary approvals from the U.S. Food and Drug Administration (FDA).
The recipient is now stable and being monitored at the hospital; the heart will remain connected to the Hospital Care Console (HCC) so that its functioning can be monitored. To lead a near-normal life, the recipient will have to carry an almost nine-pound (four-kilogram) bag containing a controller and two rechargeable battery packs that last approximately four hours before requiring recharging.
The device has already been approved for use in Europe but is only intended as a bridge for patients who are diagnosed with end-stage biventricular heart failure and are likely to undergo a heart transplant in the next 180 days, the company states on its website. Last year, the Duke University Hospital began transplanting hearts from donors who had died of heart failure but reanimating them in recipient patients, STAT News reports . Having conducted over 50 such surgeries in the past year alone, the hospital was able to reduce the median heart transplant time to 82 days. As one of six large hospitals in the U.S. that provide heart transplant services, the hospital is already helping reduce wait times and the number of deaths that occur while waiting for heart transplants.
The hospital conducted a video press conference with the surgeons involved in the transplant and senior staff leading the transplant program. Participating in the conference, the patient's wife, who is a practicing nurse, said, "As a nurse, I understand how important it is to bring these advancements forward. Both [my husband] and I are so grateful that we've been provided an opportunity to participate in something that has the potential to have an impact on so many lives."
|||
77 | Kaseya Gets Master Decryption Key After July 4 Global Attack | Florida-based software supplier Kaseya has obtained a universal key that will decrypt all businesses and public organizations crippled in the July 4 global ransomware attack. The Russia-affiliated REvil syndicate released the malware, which exploited Kaseya's software and immobilized more than 1,000 targets. Kaseya spokesperson Dana Liedholm would only disclose that the key came from a "trusted third party," and that Kaseya was distributing it to all victims. Ransomware analysts suggested multiple possibilities for the master key's appearance, including Kaseya paying the ransom, or the Kremlin seizing the key and handing it over. | [] | [] | [] | scitechnews | None | None | None | None | Florida-based software supplier Kaseya has obtained a universal key that will decrypt all businesses and public organizations crippled in the July 4 global ransomware attack. The Russia-affiliated REvil syndicate released the malware, which exploited Kaseya's software and immobilized more than 1,000 targets. Kaseya spokesperson Dana Liedholm would only disclose that the key came from a "trusted third party," and that Kaseya was distributing it to all victims. Ransomware analysts suggested multiple possibilities for the master key's appearance, including Kaseya paying the ransom, or the Kremlin seizing the key and handing it over.
|
||||
79 | Algorithm Flies Drones Faster Than Human Pilots | To be useful, drones need to be quick. Because of their limited battery life they must complete whatever task they have - searching for survivors on a disaster site, inspecting a building, delivering cargo - in the shortest possible time. And they may have to do it by going through a series of waypoints like windows, rooms, or specific locations to inspect, adopting the best trajectory and the right acceleration or deceleration at each segment.
The best human drone pilots are very good at doing this and have so far always outperformed autonomous systems in drone racing. Now, a research group at the University of Zurich (UZH) has created an algorithm that can find the quickest trajectory to guide a quadrotor - a drone with four propellers - through a series of waypoints on a circuit. "Our drone beat the fastest lap of two world-class human pilots on an experimental race track," says Davide Scaramuzza, who heads the Robotics and Perception Group at UZH and the Rescue Robotics Grand Challenge of the NCCR Robotics, which funded the research.
"The novelty of the algorithm is that it is the first to generate time-optimal trajectories that fully consider the drones' limitations," says Scaramuzza. Previous works relied on simplifications of either the quadrotor system or the description of the flight path, and thus they were sub-optimal. "The key idea is, rather than assigning sections of the flight path to specific waypoints, that our algorithm just tells the drone to pass through all waypoints, but not how or when to do that," adds Philipp Foehn, PhD student and first author of the paper.
The researchers had the algorithm and two human pilots fly the same quadrotor through a race circuit. They employed external cameras to precisely capture the motion of the drones and - in the case of the autonomous drone - to give real-time information to the algorithm on where the drone was at any moment. To ensure a fair comparison, the human pilots were given the opportunity to train on the circuit before the race. But the algorithm won: all its laps were faster than the human ones, and the performance was more consistent. This is not surprising, because once the algorithm has found the best trajectory it can reproduce it faithfully many times, unlike human pilots.
Before commercial applications, the algorithm will need to become less computationally demanding, as it now takes up to an hour for the computer to calculate the time-optimal trajectory for the drone. Also, at the moment, the drone relies on external cameras to compute where it is at any given time. In future work, the scientists want to use onboard cameras. But the demonstration that an autonomous drone can in principle fly faster than human pilots is promising. "This algorithm can have huge applications in package delivery with drones, inspection, search and rescue, and more," says Scaramuzza.
Philipp Foehn, Angel Romero, Davide Scaramuzza. Time-Optimal Planning for Quadrotor Waypoint Flight. Science Robotics. July 21, 2021. DOI: 10.1126/scirobotics.abh1221 | An autonomously flying quadrotor drone has for the first time outraced human pilots, using a novel algorithm designed by researchers at Switzerland's University of Zurich (UZH). The algorithm calculates the fastest trajectories for the aircraft and guides it through a series of waypoints on a circuit. UZH's Davide Scaramuzza said the algorithm "is the first to generate time-optimal trajectories that fully consider the drones' limitations." He said the algorithm enabled the autonomous drone to beat two world-class human pilots on an experimental track. During the race, external cameras captured the drones' movement, and relayed real-time data to the algorithm on where the autonomous drone was at any moment. | [] | [] | [] | scitechnews | None | None | None | None | An autonomously flying quadrotor drone has for the first time outraced human pilots, using a novel algorithm designed by researchers at Switzerland's University of Zurich (UZH). The algorithm calculates the fastest trajectories for the aircraft and guides it through a series of waypoints on a circuit. UZH's Davide Scaramuzza said the algorithm "is the first to generate time-optimal trajectories that fully consider the drones' limitations." He said the algorithm enabled the autonomous drone to beat two world-class human pilots on an experimental track. During the race, external cameras captured the drones' movement, and relayed real-time data to the algorithm on where the autonomous drone was at any moment.
To be useful, drones need to be quick. Because of their limited battery life they must complete whatever task they have - searching for survivors on a disaster site, inspecting a building, delivering cargo - in the shortest possible time. And they may have to do it by going through a series of waypoints like windows, rooms, or specific locations to inspect, adopting the best trajectory and the right acceleration or deceleration at each segment.
The best human drone pilots are very good at doing this and have so far always outperformed autonomous systems in drone racing. Now, a research group at the University of Zurich (UZH) has created an algorithm that can find the quickest trajectory to guide a quadrotor - a drone with four propellers - through a series of waypoints on a circuit. "Our drone beat the fastest lap of two world-class human pilots on an experimental race track," says Davide Scaramuzza, who heads the Robotics and Perception Group at UZH and the Rescue Robotics Grand Challenge of the NCCR Robotics, which funded the research.
"The novelty of the algorithm is that it is the first to generate time-optimal trajectories that fully consider the drones' limitations," says Scaramuzza. Previous works relied on simplifications of either the quadrotor system or the description of the flight path, and thus they were sub-optimal. "The key idea is, rather than assigning sections of the flight path to specific waypoints, that our algorithm just tells the drone to pass through all waypoints, but not how or when to do that," adds Philipp Foehn, PhD student and first author of the paper.
The researchers had the algorithm and two human pilots fly the same quadrotor through a race circuit. They employed external cameras to precisely capture the motion of the drones and - in the case of the autonomous drone - to give real-time information to the algorithm on where the drone was at any moment. To ensure a fair comparison, the human pilots were given the opportunity to train on the circuit before the race. But the algorithm won: all its laps were faster than the human ones, and the performance was more consistent. This is not surprising, because once the algorithm has found the best trajectory it can reproduce it faithfully many times, unlike human pilots.
Before commercial applications, the algorithm will need to become less computationally demanding, as it now takes up to an hour for the computer to calculate the time-optimal trajectory for the drone. Also, at the moment, the drone relies on external cameras to compute where it is at any given time. In future work, the scientists want to use onboard cameras. But the demonstration that an autonomous drone can in principle fly faster than human pilots is promising. "This algorithm can have huge applications in package delivery with drones, inspection, search and rescue, and more," says Scaramuzza.
Philipp Foehn, Angel Romero, Davide Scaramuzza. Time-Optimal Planning for Quadrotor Waypoint Flight. Science Robotics. July 21, 2021. DOI: 10.1126/scirobotics.abh1221 |
|||
81 | U.K. Companies Lead Expansion in Quantum Computing | LONDON, July 20 (Reuters) - More than 80% of large companies in Britain are scaling up their quantum computing capabilities, making the country a leader in deploying the nascent technology to solve complex problems, according to research by Accenture (ACN.N) .
In the past couple of years the technology has started to move from the research realm to commercial applications as businesses seek to harness the potential exponential increase in computing power it offers.
Alphabet Inc's Google said in late-2019 it had used a quantum computer to solve in minutes a complex problem that would take supercomputers thousands of years to crack. Rivals including IBM Corp and Microsoft Corp are also developing the technology in their cloud businesses.
Rather than storing information in bits - or zeros and ones - quantum computing makes use of a property of sub-atomic particles in which they can exist simultaneously in different states, so a quantum bit can be one and zero at the same time.
They can then become 'entangled' - meaning they can influence each other's behaviour in an observable way - leading to exponential increases in computing power.
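For readers who want to see what those two ideas look like concretely, the short numpy sketch below builds a single-qubit superposition and a two-qubit entangled (Bell) state as plain state vectors. It is ordinary linear algebra, not code for any particular quantum computer or vendor toolkit.

```python
# Minimal numpy sketch of superposition and entanglement as state vectors.
import numpy as np

zero = np.array([1.0, 0.0])   # |0>
one = np.array([0.0, 1.0])    # |1>

# Superposition: equal parts |0> and |1>; a measurement gives each outcome
# with probability 0.5.
plus = (zero + one) / np.sqrt(2)
print("P(0), P(1) for one qubit:", np.round(plus**2, 2))

# Entanglement: a CNOT gate applied to |+>|0> yields the Bell state
# (|00> + |11>)/sqrt(2), so the two qubits' outcomes are perfectly correlated.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
bell = cnot @ np.kron(plus, zero)
print("P(00), P(01), P(10), P(11):", np.round(bell**2, 2))
```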
Britain has long been a leader in fundamental research in science and technology, but - with some notable exceptions - has struggled to harness the commercial opportunities that followed.
Maynard Williams, a managing director for Accenture Technology in the UK & Ireland, said COVID-19 had forced companies to adopt new technology faster and had increased their willingness to innovate.
Accenture's research showed British businesses were making a head start in experimenting with quantum computing.
Britain was outpacing the global average of 62% of large firms scaling quantum technology, according to the research, and was leading the United States, where the figure was 74%.
The majority of the companies expanding quantum computing in Britain - some 85% - said they would increase investment in the technology in the next three years.
Accenture did not quantify the scaling up or the investment plans.
"It's an exciting moment of focus around what these new technologies can do," Williams said, pointing to areas such as financial markets and supply chains as fertile ground for quantum computing.
"While the technology is still being tested to create new products and services, we expect quantum computing to bring huge advances in computing power and solve business problems that are too complex for classical computing systems."
Our Standards: The Thomson Reuters Trust Principles. | Multinational professional consultancy Accenture said, based on its research, that over 80% of large U.K. companies are expanding their quantum computing capabilities. Accenture Technology's Maynard Williams said the pandemic had forced companies to adopt technology more quickly and to be more willing to innovate. Accenture's research found Britain was outpacing the global average of 62% of large companies scaling quantum computing technologies, surpassing the U.S. average of 74%. Said Williams, "While the technology is still being tested to create new products and services, we expect quantum computing to bring huge advances in computing power and solve business problems that are too complex for classical computing systems." | [] | [] | [] | scitechnews | None | None | None | None | Multinational professional consultancy Accenture said, based on its research, that over 80% of large U.K. companies are expanding their quantum computing capabilities. Accenture Technology's Maynard Williams said the pandemic had forced companies to adopt technology more quickly and to be more willing to innovate. Accenture's research found Britain was outpacing the global average of 62% of large companies scaling quantum computing technologies, surpassing the U.S. average of 74%. Said Williams, "While the technology is still being tested to create new products and services, we expect quantum computing to bring huge advances in computing power and solve business problems that are too complex for classical computing systems."
LONDON, July 20 (Reuters) - More than 80% of large companies in Britain are scaling up their quantum computing capabilities, making the country a leader in deploying the nascent technology to solve complex problems, according to research by Accenture (ACN.N) .
In the past couple of years the technology has started to move from the research realm to commercial applications as businesses seek to harness the potential exponential increase in computing power it offers.
Alphabet Inc's Google said in late-2019 it had used a quantum computer to solve in minutes a complex problem that would take supercomputers thousands of years to crack. Rivals including IBM Corp and Microsoft Corp are also developing the technology in their cloud businesses.
Rather than storing information in bits - or zeros and ones - quantum computing makes use of a property of sub-atomic particles in which they can exist simultaneously in different states, so a quantum bit can be one and zero at the same time.
They can then become 'entangled' - meaning they can influence each other's behaviour in an observable way - leading to exponential increases in computing power.
Britain has long been a leader in fundamental research in science and technology, but - with some notable exceptions - has struggled to harness the commercial opportunities that followed.
Maynard Williams, a managing director for Accenture Technology in the UK & Ireland, said COVID-19 had forced companies to adopt new technology faster and had increased their willingness to innovate.
Accenture's research showed British businesses were getting a head start in experimenting with quantum computing.
According to the research, Britain's share of large firms scaling quantum technology outpaced the global average of 62% and was ahead of the United States, where the figure was 74%.
The majority of the companies expanding quantum computing in Britain - some 85% - said they would increase investment in the technology in the next three years.
Accenture did not quantify the scaling up or the investment plans.
"It's an exciting moment of focus around what these new technologies can do," Williams said, pointing to areas such as financial markets and supply chains as fertile ground for quantum computing.
"While the technology is still being tested to create new products and services, we expect quantum computing to bring huge advances in computing power and solve business problems that are too complex for classical computing systems."
|||
82 | Water-Powered Robotic Hand Can Play Super Mario Bros | By Matthew Sparkes
This 3D-printed robot hand can play Mario (Image: University of Maryland)
A 3D-printed robotic hand controlled by pressurised water can complete the first level of classic computer game Super Mario Bros in less than 90 seconds.
Ryan Sochol and his team at the University of Maryland 3D-printed the hand in a single operation using a machine that can deposit hard plastic, a rubber-like polymer and a water-soluble "sacrificial" material. This last material enables complicated shapes to be supported during construction before it is rinsed away in water when the printing process is complete.
This range ... | University of Maryland (UMD) researchers used three-dimensional (3D) printing to produce a water-controlled robotic hand capable of completing the first level of the computer game Super Mario Bros in less than 90 seconds. The hand is composed of hard plastic, a rubbery polymer, and a water-soluble "sacrificial" material that can support complex shapes during printing before being rinsed away. These constituents form a rigid skeleton, as well as fluidic circuits that translate streams of water from a hose into finger movements. By carefully controlling the pressure of water pulses routed through the hose, the UMD team could move each of the hand's three fingers and operate a controller with sufficient precision to play the game. | [] | [] | [] | scitechnews | None | None | None | None | University of Maryland (UMD) researchers used three-dimensional (3D) printing to produce a water-controlled robotic hand capable of completing the first level of the computer game Super Mario Bros in less than 90 seconds. The hand is composed of hard plastic, a rubbery polymer, and a water-soluble "sacrificial" material that can support complex shapes during printing before being rinsed away. These constituents form a rigid skeleton, as well as fluidic circuits that translate streams of water from a hose into finger movements. By carefully controlling the pressure of water pulses routed through the hose, the UMD team could move each of the hand's three fingers and operate a controller with sufficient precision to play the game.
By Matthew Sparkes
This 3D-printed robot hand can play Mario (Image: University of Maryland)
A 3D-printed robotic hand controlled by pressurised water can complete the first level of classic computer game Super Mario Bros in less than 90 seconds.
Ryan Sochol and his team at the University of Maryland 3D-printed the hand in a single operation using a machine that can deposit hard plastic, a rubber-like polymer and a water-soluble "sacrificial" material. This last material enables complicated shapes to be supported during construction before it is rinsed away in water when the printing process is complete.
This range ... |
|||
83 | Algorithm May Help Autonomous Vehicles Navigate Narrow, Crowded Streets | It is a scenario familiar to anyone who has driven down a crowded, narrow street. Parked cars line both sides, and there isn't enough space for vehicles traveling in both directions to pass each other. One has to duck into a gap in the parked cars or slow and pull over as far as possible for the other to squeeze by.
Drivers find a way to negotiate this, but not without close calls and frustration. Programming an autonomous vehicle (AV) to do the same - without a human behind the wheel or knowledge of what the other driver might do - presented a unique challenge for researchers at the Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research .
"It's the unwritten rules of the road, that's pretty much what we're dealing with here," said Christoph Killing, a former visiting research scholar in the School of Computer Science's Robotics Institute and now part of the Autonomous Aerial Systems Lab at the Technical University of Munich. "It's a difficult bit. You have to learn to negotiate this scenario without knowing if the other vehicle is going to stop or go."
While at CMU, Killing teamed up with research scientist John Dolan and Ph.D. student Adam Villaflor to crack this problem. The team presented its research, " Learning To Robustly Negotiate Bi-Directional Lane Usage in High-Conflict Driving Scenarios ," at the International Conference on Robotics and Automation .
The team believes their research is the first into this specific driving scenario. It requires drivers - human or not - to collaborate to make it past each other safely without knowing what the other is thinking. Drivers must balance aggression with cooperation. An overly aggressive driver, one that just goes without regard for other vehicles, could put itself and others at risk. An overly cooperative driver, one that always pulls over in the face of oncoming traffic, may never make it down the street.
"I have always found this to be an interesting and sometimes difficult aspect of driving in Pittsburgh," Dolan said.
Autonomous vehicles have been heralded as a potential solution to the last mile challenges of delivery and transportation. But for an AV to deliver a pizza, package or person to their destination, it has to be able to navigate tight spaces and unknown driver intentions.
The team developed a method to model different levels of driver cooperativeness - how likely a driver was to pull over to let the other driver pass - and used those models to train an algorithm that could assist an autonomous vehicle to safely and efficiently navigate this situation. The algorithm has only been used in simulation and not on a vehicle in the real world, but the results are promising. The team found that their algorithm performed better than current models.
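The paper's exact formulation is not reproduced in the article; the following is only a heavily simplified sketch of the general idea - oncoming drivers modeled by a "cooperativeness" probability and used to generate simulated encounters for training - with every name and number invented for illustration:

```python
import random

class OncomingDriver:
    """Toy oncoming driver: `cooperativeness` is the probability that the
    driver yields (pulls into a gap) at any given step."""
    def __init__(self, cooperativeness: float):
        self.cooperativeness = cooperativeness

    def act(self) -> str:
        return "yield" if random.random() < self.cooperativeness else "go"

def run_episode(av_policy, cooperativeness: float, max_steps: int = 50) -> float:
    """Simulate one narrow-street encounter and return a reward that punishes
    both over-aggression (conflicts) and over-cooperation (never advancing)."""
    other = OncomingDriver(cooperativeness)
    progress = 0
    for step in range(max_steps):
        av_action = av_policy(progress)        # "go" or "yield"
        other_action = other.act()
        if av_action == "go" and other_action == "go":
            return -100.0                      # both pushed ahead: unsafe conflict
        if av_action == "go":
            progress += 1                      # AV advances while the other yields
        if progress >= 10:
            return 100.0 - step                # made it through, minus a time cost
    return -10.0                               # never made it down the street

def cautious_policy(progress: int) -> str:
    """Placeholder policy; a learned policy would replace this."""
    return "go" if progress % 2 == 0 else "yield"

# Training would sample many cooperativeness levels so the learned policy
# stays robust to drivers whose intentions it cannot observe.
print([run_episode(cautious_policy, c) for c in (0.2, 0.5, 0.9)])
```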
Driving is full of complex scenarios like this one. As the autonomous driving researchers tackle them, they look for ways to make the algorithms and models developed for one scenario, say merging onto a highway, work for other scenarios, like changing lanes or making a left turn against traffic at an intersection.
"Extensive testing is bringing to light the last percent of touch cases," Dolan said. "We keep finding these corner cases and keep coming up with ways to handle them." | An algorithm developed by researchers at Carnegie Mellon University (CMU) could enable autonomous vehicles to navigate crowded, narrow streets where vehicles traveling in opposite directions do not have enough space to pass each other and there is no knowledge about what the other driver may do. Such a scenario requires collaboration among drivers, who must balance aggression with cooperation. The researchers modeled different levels of cooperation between drivers and used them to train the algorithm. In simulations, the algorithm was found to outperform current models; it has not yet been tested on real-world vehicles. | [] | [] | [] | scitechnews | None | None | None | None | An algorithm developed by researchers at Carnegie Mellon University (CMU) could enable autonomous vehicles to navigate crowded, narrow streets where vehicles traveling in opposite directions do not have enough space to pass each other and there is no knowledge about what the other driver may do. Such a scenario requires collaboration among drivers, who must balance aggression with cooperation. The researchers modeled different levels of cooperation between drivers and used them to train the algorithm. In simulations, the algorithm was found to outperform current models; it has not yet been tested on real-world vehicles.
It is a scenario familiar to anyone who has driven down a crowded, narrow street. Parked cars line both sides, and there isn't enough space for vehicles traveling in both directions to pass each other. One has to duck into a gap in the parked cars or slow and pull over as far as possible for the other to squeeze by.
Drivers find a way to negotiate this, but not without close calls and frustration. Programming an autonomous vehicle (AV) to do the same - without a human behind the wheel or knowledge of what the other driver might do - presented a unique challenge for researchers at the Carnegie Mellon University Argo AI Center for Autonomous Vehicle Research .
"It's the unwritten rules of the road, that's pretty much what we're dealing with here," said Christoph Killing, a former visiting research scholar in the School of Computer Science's Robotics Institute and now part of the Autonomous Aerial Systems Lab at the Technical University of Munich. "It's a difficult bit. You have to learn to negotiate this scenario without knowing if the other vehicle is going to stop or go."
While at CMU, Killing teamed up with research scientist John Dolan and Ph.D. student Adam Villaflor to crack this problem. The team presented its research, " Learning To Robustly Negotiate Bi-Directional Lane Usage in High-Conflict Driving Scenarios ," at the International Conference on Robotics and Automation .
The team believes their research is the first into this specific driving scenario. It requires drivers - human or not - to collaborate to make it past each other safely without knowing what the other is thinking. Drivers must balance aggression with cooperation. An overly aggressive driver, one that just goes without regard for other vehicles, could put itself and others at risk. An overly cooperative driver, one that always pulls over in the face of oncoming traffic, may never make it down the street.
"I have always found this to be an interesting and sometimes difficult aspect of driving in Pittsburgh," Dolan said.
Autonomous vehicles have been heralded as a potential solution to the last mile challenges of delivery and transportation. But for an AV to deliver a pizza, package or person to their destination, it has to be able to navigate tight spaces and unknown driver intentions.
The team developed a method to model different levels of driver cooperativeness - how likely a driver was to pull over to let the other driver pass - and used those models to train an algorithm that could assist an autonomous vehicle to safely and efficiently navigate this situation. The algorithm has only been used in simulation and not on a vehicle in the real world, but the results are promising. The team found that their algorithm performed better than current models.
Driving is full of complex scenarios like this one. As the autonomous driving researchers tackle them, they look for ways to make the algorithms and models developed for one scenario, say merging onto a highway, work for other scenarios, like changing lanes or making a left turn against traffic at an intersection.
"Extensive testing is bringing to light the last percent of touch cases," Dolan said. "We keep finding these corner cases and keep coming up with ways to handle them." |
|||
84 | Will AI Grade Your Next Test? | This spring, Philips Pham was among the more than 12,000 people in 148 countries who took an online class called Code in Place. Run by Stanford University, the course taught the fundamentals of computer programming.
Four weeks in, Mr. Pham, a 23-year-old student living at the southern tip of Sweden, typed his way through the first test, trying to write a program that could draw waves of tiny blue diamonds across a black-and-white grid. Several days later, he received a detailed critique of his code.
It applauded his work, but also pinpointed an error. "Seems like you have a small mistake," the critique noted. "Perhaps you are running into the wall after drawing the third wave."
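To make the critique concrete, here is a purely hypothetical sketch - not Mr. Pham's submission, and with invented names and grid sizes - of the kind of off-by-one mistake that phrase describes, where a loop runs one step too far and the drawing routine steps past the edge of the grid:

```python
GRID_WIDTH = 12   # illustrative number of columns in the grid
WAVE_WIDTH = 4    # columns taken up by one wave of diamonds

def draw_wave(column):
    """Stand-in for the real drawing routine."""
    print(f"drawing a wave starting at column {column}")

def draw_all_waves():
    column = 0
    # Bug: `<=` starts one wave too many, so after the third full wave the
    # loop tries to draw again at the far edge and runs "into the wall";
    # `<` (or checking that the whole wave fits) would stop in time.
    while column <= GRID_WIDTH:
        draw_wave(column)
        column += WAVE_WIDTH

draw_all_waves()
```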
The feedback was just what Mr. Pham needed. And it came from a machine.
During this online class, a new kind of artificial intelligence offered feedback to Mr. Pham and thousands of other students who took the same test. Built by a team of Stanford researchers, this automated system points to a new future for online education, which can so easily reach thousands of people but does not always provide the guidance that many students need and crave. | Stanford University researchers have developed an artificial intelligence (AI) system designed to provide automated feedback to students taking the online Code in Place course. The researchers trained a neural network to analyze computer code using examples from a decade's worth of midterm exams featuring programming exercises. After the system offered 16,000 pieces of feedback to students this spring, the researchers found students agreed with the AI feedback 97.9% of the time, and with feedback from human instructors 96.7% of the time. Stanford's Chris Piech stressed that the system is not intended to replace instructors, but to reach more students than they could on their own. | [] | [] | [] | scitechnews | None | None | None | None | Stanford University researchers have developed an artificial intelligence (AI) system designed to provide automated feedback to students taking the online Code in Place course. The researchers trained a neural network to analyze computer code using examples from a decade's worth of midterm exams featuring programming exercises. After the system offered 16,000 pieces of feedback to students this spring, the researchers found students agreed with the AI feedback 97.9% of the time, and with feedback from human instructors 96.7% of the time. Stanford's Chris Piech stressed that the system is not intended to replace instructors, but to reach more students than they could on their own.
This spring, Philips Pham was among the more than 12,000 people in 148 countries who took an online class called Code in Place. Run by Stanford University, the course taught the fundamentals of computer programming.
Four weeks in, Mr. Pham, a 23-year-old student living at the southern tip of Sweden, typed his way through the first test, trying to write a program that could draw waves of tiny blue diamonds across a black-and-white grid. Several days later, he received a detailed critique of his code.
It applauded his work, but also pinpointed an error. "Seems like you have a small mistake," the critique noted. "Perhaps you are running into the wall after drawing the third wave."
The feedback was just what Mr. Pham needed. And it came from a machine.
During this online class, a new kind of artificial intelligence offered feedback to Mr. Pham and thousands of other students who took the same test. Built by a team of Stanford researchers, this automated system points to a new future for online education, which can so easily reach thousands of people but does not always provide the guidance that many students need and crave. |
|||
85 | Crypto Experts in Demand as Countries Launch Digital Currencies | Demand for cryptocurrency consultants continues to grow as countries accelerate efforts to launch their own digital tenders. For example, Israeli crypto consultant Barak Ben-Ezer designed the SOV (sovereign), a bitcoin-like tradable cryptocurrency, for the Marshall Islands archipelago nation. China has jumpstarted other countries' eagerness to have their own digital currencies by indicating the launch of a digital yuan (the e-CNY) is approaching. Advisers say central banks often have teams modeling digitization schemes, although many are discreetly consulting with engineers with backgrounds in cryptocurrencies and blockchain. Having private advisers like Ben-Ezer directing such efforts raises concerns about potential conflicts of interest and liability; the Marshall Islands' crypto issuance has been delayed amid similar issues raised by the First Hawaiian Bank and the International Monetary Fund. | [] | [] | [] | scitechnews | None | None | None | None | Demand for cryptocurrency consultants continues to grow as countries accelerate efforts to launch their own digital tenders. For example, Israeli crypto consultant Barak Ben-Ezer designed the SOV (sovereign), a bitcoin-like tradable cryptocurrency, for the Marshall Islands archipelago nation. China has jumpstarted other countries' eagerness to have their own digital currencies by indicating the launch of a digital yuan (the e-CNY) is approaching. Advisers say central banks often have teams modeling digitization schemes, although many are discreetly consulting with engineers with backgrounds in cryptocurrencies and blockchain. Having private advisers like Ben-Ezer directing such efforts raises concerns about potential conflicts of interest and liability; the Marshall Islands' crypto issuance has been delayed amid similar issues raised by the First Hawaiian Bank and the International Monetary Fund.
|
||||
86 | iPhone Security No Match for NSO Spyware | Spyware made by Israeli surveillance company NSO has been used to hack Apple iPhones without users' knowledge. An international probe uncovered 23 Apple devices compromised by Pegasus spyware, which circumvented their security systems and installed malware. The hacked smartphones included an iPhone 12 with the latest Apple software updates, indicating even the newest iPhones are vulnerable, and undercutting Apple's long-hyped claims of superior security. An Amnesty International study found evidence that NSO's clients use commercial Internet service companies to send Pegasus malware to targeted devices. The international probe found the inability to block such smartphone hacking threatens democracy in many nations by weakening journalism, political activism, and campaigns against human rights abuses. | [] | [] | [] | scitechnews | None | None | None | None | Spyware made by Israeli surveillance company NSO has been used to hack Apple iPhones without users' knowledge. An international probe uncovered 23 Apple devices compromised by Pegasus spyware, which circumvented their security systems and installed malware. The hacked smartphones included an iPhone 12 with the latest Apple software updates, indicating even the newest iPhones are vulnerable, and undercutting Apple's long-hyped claims of superior security. An Amnesty International study found evidence that NSO's clients use commercial Internet service companies to send Pegasus malware to targeted devices. The international probe found the inability to block such smartphone hacking threatens democracy in many nations by weakening journalism, political activism, and campaigns against human rights abuses.
|
||||
87 | Researchers Pulling Movements from Microfilm with Digital History | Four years into World War II, 7,434 Black soldiers from 60 domestic units sat down to Survey 32. It was one of over 200 surveys administered by social and behavioral scientists assigned to gather feedback on morale and the efficiency of the Army, for the organization's research branch. But Survey 32 was focused primarily on race relations.
The soldiers anonymously ticked boxes and gave short-answer responses to its questions: Did the soldier feel he would have better or worse job prospects after the war? Did he foresee having more rights and privileges, or less? Did he feel he had a fair chance to support the U.S. in winning the war? | Virginia Polytechnic Institute and State University (Virginia Tech) historians and computer scientists are using digital technologies to bring archived historic content to life for public access. Their goal is to provide technology-enhanced experiences for users, to enable them to control interactive platforms that can make the study of history more accessible. One example is Immersive Space to Think, a three-dimensional workspace that history students can navigate using virtual reality goggles and handheld controllers. Users can explore transcriptions and other documents with Incite, an open source software plug-in; the system itself can learn from students' behavior to enhance the interactive experience using a machine learning algorithm that can suggest additional relevant documents to explore. | [] | [] | [] | scitechnews | None | None | None | None | Virginia Polytechnic Institute and State University (Virginia Tech) historians and computer scientists are using digital technologies to bring archived historic content to life for public access. Their goal is to provide technology-enhanced experiences for users, to enable them to control interactive platforms that can make the study of history more accessible. One example is Immersive Space to Think, a three-dimensional workspace that history students can navigate using virtual reality goggles and handheld controllers. Users can explore transcriptions and other documents with Incite, an open source software plug-in; the system itself can learn from students' behavior to enhance the interactive experience using a machine learning algorithm that can suggest additional relevant documents to explore.
Four years into World War II, 7,434 Black soldiers from 60 domestic units sat down to Survey 32. It was one of over 200 surveys administered by social and behavioral scientists assigned to gather feedback on morale and the efficiency of the Army, for the organization's research branch. But Survey 32 was focused primarily on race relations.
The soldiers anonymously ticked boxes and gave short-answer responses to its questions: Did the soldier feel he would have better or worse job prospects after the war? Did he foresee having more rights and privileges, or less? Did he feel he had a fair chance to support the U.S. in winning the war? |
|||
88 | Computer Science Professor Wins 'Test of Time' Award for Influential Paper | Shang-Hua Teng , a University Professor and Seely G. Mudd Professor of Computer Science and Mathematics, has been honored with a Symposium on Theory of Computing (STOC) Test of Time Award . Teng, with Daniel A. Spielman of Yale University, received the award from the ACM Special Interest Group on Algorithms and Computation Theory for a paper on smoothed analysis of algorithms originally presented at the STOC conference in 2001.
In the paradigm-shifting paper, " Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time ," Teng and Spielman use the concept of smoothed analysis to give a more realistic understanding of an algorithm's performance, such as its running time.
The concept helps to explain a long-debated phenomenon: why do some algorithms work better in practice than in theory? Teng and Spielman found that many algorithms, particularly the widely used simplex algorithm for linear programming, work as long as there is noise in the input, because there is usually noise in real-world data.
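In symbols - a compact sketch using the standard definitions from the smoothed-analysis literature, in generic notation rather than the paper's exact statement - smoothed complexity takes the worst case over inputs but averages over small random perturbations of each input:

```latex
% T(x) is the running time on input x; g is random noise (for example,
% Gaussian perturbations) scaled by a magnitude parameter sigma.
\[
  \mathrm{Worst}(n) = \max_{x}\, T(x), \qquad
  \mathrm{Avg}(n)   = \mathbb{E}_{x}\,[\,T(x)\,],
\]
\[
  \mathrm{Smoothed}_{\sigma}(n) = \max_{x}\; \mathbb{E}_{g}\,[\,T(x + \sigma g)\,].
\]
% For the simplex method, Spielman and Teng showed this smoothed measure is
% polynomial in the input size and 1/sigma, even though the worst case is
% exponential -- which is why noisy real-world inputs behave well.
```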
The study's findings have been applied to practical algorithms in countless applications, including faster internet communications, deep learning, data mining, differential privacy, game theory, and personalized recommendation systems.
An internationally renowned theoretical computer scientist, Teng's work has earned him numerous accolades throughout his career. For his work on smoothed analysis of algorithms, Teng previously received the Gödel Prize as well as the Fulkerson Prize, a prestigious honor awarded once every three years by the American Mathematical Society and the Mathematical Optimization Society.
For his work on nearly-linear-time Laplacian solvers, he was again awarded the Gödel Prize in 2015. A Simons Investigator, a Fellow of the Association for Computing Machinery and Society for Industrial and Applied Mathematics, and Alfred P. Sloan Fellow, Teng has been described by the Simons Foundation as "one of the most original theoretical scientists in the world."
We sat down with Teng to find out why this groundbreaking paper continues to make waves, and how he rediscovered "math for fun" during the pandemic. Answers have been edited for style and clarity.
What were the key findings of this paper? What problem were you trying to solve?
A long-standing challenge in computing, then and now, has been the following: there are many algorithms that work well in practice that do not work well in the worst-case scenario, as measured by the traditional theory of computation.
It has been commonly believed that practical inputs are usually more favorable than worst-case instances. So, Dan Spielman and I were aiming to develop a framework to capture this popular belief and real-world observation to move theory a step towards practice.
Smoothed analysis is our attempt to understand the practical behavior of algorithms. It captures the following: In the real world, inputs have some degree of randomness, noise, imprecision, or uncertainty. Our theory demonstrates that these properties can in fact be helpful to algorithms in practice, because under these conditions, worst-case scenarios are less likely to arise.
How has this area of research changed in the past 20 years? Why do you think it is still relevant today?
During the past 20 years, we entered the age of "big data," massive networks, and ubiquitous data-driven AI and machine learning techniques. Understanding practical algorithmic behaviors has become crucial in applications ranging from human-machine interactions and pandemic modeling, to drug design, financial planning, climate modeling and more.
Data and models from all these areas continue to have randomness, noise, imprecision and uncertainty, which is the topic of our research. I hope our work will continue to inspire new theoretical models and practical algorithms for vast data-driven applications.
How has this research on smoothed analysis impacted the "real world"?
In computing, algorithms are commonly used in practice before comprehensive theoretical analyses are conducted. In other words, practitioners are usually on the frontiers of methodology development. In this context, Dan and I were more like theoretical physicists, aiming to develop theory to explain and model practical observations.
For example, the first algorithm that we applied the smoothed analysis to - the simplex method for linear programming - was invented in the 1940s for military planning and economic modeling. The simplex method was widely used in industry for optimization, even though in the 1970s, worst-case examples were discovered by mathematicians suggesting that, in traditional computing theory, the simplex method could not be an efficient algorithm. This is the source of the gap between theory and practice in the world of computing.
Over the years, some researchers from operations research, network systems, data mining, and machine learning told me that they used methods inspired by smoothed analysis in their work. Of course, practical algorithmic behaviors are far more complex than what our theory can capture, which is why we and others are continuing to look for ways to develop better theories for practice.
How did you and Professor Spielman meet?
I first met Dan in 1990 when he - then a junior at Yale - gave a seminar at CMU (where I was a PhD student). I was his student host. We then reconnected and became lifelong friends in the MIT Math department in 1992, when he arrived as a PhD student and I joined as an instructor for the department.
When you were both working on this paper, did you have any idea it would have such an enormous and long-lasting impact?
Twenty years ago, like many in our field, Dan and I recognized the significance of the challenge that motivated our paper: closing the theory-practice gap for algorithms. The simplex method was often mentioned as an example where practical performance defies theoretical prediction. We believed that the theory-practice gap would continue to be a fundamental subject for computing.
We were also encouraged by the responses to our initial work from scientists and researchers, who were closer to practical algorithm design and optimization than we were. Their feedback encouraged us that our steps were meaningful towards capturing practical behaviors of algorithms.
As theoreticians, Dan and I enjoyed the conceptual formulation of smoothed analysis and the technical component of probability, high-dimensional geometry, and mathematical programming in our work. It is exciting to develop a theory that is relevant to some aspect of practice and a great honor indeed to have my work recognized by my peers.
Coming back to the present day, what have you been working on recently? Has the pandemic impacted your research?
During this historical moment, I did find one area of mathematics soothing: recreational mathematics. When I was a student, I used to read Scientific American , and always enjoyed the mathematical puzzles and games in the magazine. When I was teaching at Boston University, one of my PhD students, Kyle Burke, was super passionate and gifted in puzzles and games. He wrote a thesis in 2009 with a cool title: "Science for Fun: New Impartial Board Games."
Three years ago, he recommended a talented undergraduate, Matt Ferland, to be a PhD student in our department. During the Covid Zoom world, Matt, Kyle and I have been studying several fundamental problems in Combinatorial Game Theory (a more studious name for recreational mathematics), including board games incorporated with quantum-inspired elements.
We also designed new board games based on mathematical and computer science problems. In a recent paper, we solved two long-standing problems in this field that were open since the 1980s and 1990s. These results involve the mathematical extension of the word-chain game we used to play as kids. I have also started playing these games with my 8-year-old daughter. (One of Teng's games is playable here .) | ACM's Special Interest Group on Algorithms and Computation Theory named the University of Southern California Viterbi School of Engineering Seely G. Mudd Professor of Computer Science and Mathematics Shang-Hua Teng, and his collaborator, Yale University professor of applied mathematics and computer science Daniel A. Spielman, recipients of the Symposium on Theory of Computing Test of Time Award. Teng and Spielman authored a paper on smoothed analysis of algorithms, which offers a more realistic comprehension of algorithmic performance. The authors determined that algorithms, especially the simplex algorithm for linear programming, function as long as the input has noise, because real-world data typically contains noise. Said Teng, "Our theory demonstrates that these properties can in fact be helpful to algorithms in practice, because under these conditions, the worst-case scenarios are harder to arise." These findings have been applied in a wide range of practical algorithms, including faster Internet communications, deep learning, data mining, game theory, and personalized recommendation systems. | [] | [] | [] | scitechnews | None | None | None | None | ACM's Special Interest Group on Algorithms and Computation Theory named the University of Southern California Viterbi School of Engineering Seely G. Mudd Professor of Computer Science and Mathematics Shang-Hua Teng, and his collaborator, Yale University professor of applied mathematics and computer science Daniel A. Spielman, recipients of the Symposium on Theory of Computing Test of Time Award. Teng and Spielman authored a paper on smoothed analysis of algorithms, which offers a more realistic comprehension of algorithmic performance. The authors determined that algorithms, especially the simplex algorithm for linear programming, function as long as the input has noise, because real-world data typically contains noise. Said Teng, "Our theory demonstrates that these properties can in fact be helpful to algorithms in practice, because under these conditions, the worst-case scenarios are harder to arise." These findings have been applied in a wide range of practical algorithms, including faster Internet communications, deep learning, data mining, game theory, and personalized recommendation systems.
Shang-Hua Teng , a University Professor and Seely G. Mudd Professor of Computer Science and Mathematics, has been honored with a Symposium on Theory of Computing (STOC) Test of Time Award . Teng, with Daniel A. Spielman of Yale University, received the award from the ACM Special Interest Group on Algorithms and Computation Theory for a paper on smoothed analysis of algorithms originally presented at the STOC conference in 2001.
In the paradigm-shifting paper, " Smoothed analysis of algorithms: why the simplex algorithm usually takes polynomial time ," Teng and Spielman use the concept of smoothed analysis to give a more realistic understanding of an algorithm's performance, such as its running time.
The concept helps to explain a long-debated phenomenon: why do some algorithms work better in practice than in theory? Teng and Spielman found that many algorithms, particularly the widely used simplex algorithm for linear programming, work as long as there is noise in the input, because there is usually noise in real-world data.
The study's findings have been applied to practical algorithms in countless applications, including faster internet communications, deep learning, data mining, differential privacy, game theory, and personalized recommendation systems.
An internationally renowned theoretical computer scientist, Teng's work has earned him numerous accolades throughout his career. For his work on smoothed analysis of algorithms, Teng previously received the Gödel Prize as well as the Fulkerson Prize, a prestigious honor awarded once every three years by the American Mathematical Society and the Mathematical Optimization Society.
For his work on nearly-linear-time Laplacian solvers, he was again awarded the Gödel Prize in 2015. A Simons Investigator, a Fellow of the Association for Computing Machinery and Society for Industrial and Applied Mathematics, and Alfred P. Sloan Fellow, Teng has been described by the Simons Foundation as "one of the most original theoretical scientists in the world."
We sat down with Teng to find out why this groundbreaking paper continues to make waves, and how he rediscovered "math for fun" during the pandemic. Answers have been edited for style and clarity.
What were the key findings of this paper? What problem were you trying to solve?
A long-standing challenge in computing, then and now, has been the following: there are many algorithms that work well in practice that do not work well in the worst-case scenario, as measured by the traditional theory of computation.
It has been commonly believed that practical inputs are usually more favorable than worst-case instances. So, Dan Spielman and I were aiming to develop a framework to capture this popular belief and real-world observation to move theory a step towards practice.
Smoothed analysis is our attempt to understand the practical behavior of algorithms. It captures the following: In the real world, inputs have some degree of randomness, noise, imprecision, or uncertainty. Our theory demonstrates that these properties can in fact be helpful to algorithms in practice, because under these conditions, worst-case scenarios are less likely to arise.
How has this area of research changed in the past 20 years? Why do you think it is still relevant today?
During the past 20 years, we entered the age of "big data," massive networks, and ubiquitous data-driven AI and machine learning techniques. Understanding practical algorithmic behaviors has become crucial in applications ranging from human-machine interactions and pandemic modeling, to drug design, financial planning, climate modeling and more.
Data and models from all these areas continue to have randomness, noise, imprecision and uncertainty, which is the topic of our research. I hope our work will continue to inspire new theoretical models and practical algorithms for vast data-driven applications.
How has this research on smoothed analysis impacted the "real world"?
In computing, algorithms are commonly used in practice before comprehensive theoretical analyses are conducted. In other words, practitioners are usually on the frontiers of methodology development. In this context, Dan and I were more like theoretical physicists, aiming to develop theory to explain and model practical observations.
For example, the first algorithm that we applied the smoothed analysis to - the simplex method for linear programming - was invented in the 1940s for military planning and economic modeling. The simplex method was widely used in industry for optimization, even though in the 1970s, worst-case examples were discovered by mathematicians suggesting that, in traditional computing theory, the simplex method could not be an efficient algorithm. This is the source of the gap between theory and practice in the world of computing.
Over the years, some researchers from operations research, network systems, data mining, and machine learning told me that they used methods inspired by smoothed analysis in their work. Of course, practical algorithmic behaviors are far more complex than what our theory can capture, which is why we and others are continuing to look for ways to develop better theories for practice.
How did you and Professor Spielman meet?
I first met Dan in 1990 when he - then a junior at Yale - gave a seminar at CMU (where I was a PhD student). I was his student host. We then reconnected and became lifelong friends in the MIT Math department in 1992, when he arrived as a PhD student and I joined as an instructor for the department.
When you were both working on this paper, did you have any idea it would have such an enormous and long-lasting impact?
Twenty years ago, like many in our field, Dan and I recognized the significance of the challenge that motivated our paper: closing the theory-practice gap for algorithms. The simplex method was often mentioned as an example where practical performance defies theoretical prediction. We believed that the theory-practice gap would continue to be a fundamental subject for computing.
We were also encouraged by the responses to our initial work from scientists and researchers, who were closer to practical algorithm design and optimization than we were. Their feedback encouraged us that our steps were meaningful towards capturing practical behaviors of algorithms.
As theoreticians, Dan and I enjoyed the conceptual formulation of smoothed analysis and the technical component of probability, high-dimensional geometry, and mathematical programming in our work. It is exciting to develop a theory that is relevant to some aspect of practice and a great honor indeed to have my work recognized by my peers.
Coming back to the present day, what have you been working on recently? Has the pandemic impacted your research?
During this historical moment, I did find one area of mathematics soothing: recreational mathematics. When I was a student, I used to read Scientific American , and always enjoyed the mathematical puzzles and games in the magazine. When I was teaching at Boston University, one of my PhD students, Kyle Burke, was super passionate and gifted in puzzles and games. He wrote a thesis in 2009 with a cool title: "Science for Fun: New Impartial Board Games."
Three years ago, he recommended a talented undergraduate, Matt Ferland, to be a PhD student in our department. During the Covid Zoom world, Matt, Kyle and I have been studying several fundamental problems in Combinatorial Game Theory (a more studious name for recreational mathematics), including board games incorporated with quantum-inspired elements.
We also designed new board games based on mathematical and computer science problems. In a recent paper, we solved two long-standing problems in this field that were open since the 1980s and 1990s. These results involve the mathematical extension of the word-chain game we used to play as kids. I have also started playing these games with my 8-year-old daughter. (One of Teng's games is playable here .) |
|||
89 | China Spy Agency Blamed by U.S., Others of Using Contract Hackers | U.S. President Joe Biden said he expected to receive a report Tuesday detailing how China's Ministry of State Security has employed contract hackers to hold U.S. businesses hostage with ransomware. This follows the Biden administration's public accusation that Beijing is conducting unsanctioned cyber operations worldwide. An international coalition claims China launched a zero-day hack in March that impacted tens of thousands of organizations through Microsoft Exchange servers. In a jointly issued advisory Monday, the U.S. National Security Agency, Cybersecurity and Infrastructure Security Agency, and the Federal Bureau of Investigation said they "have observed increasingly sophisticated Chinese state-sponsored cyber activity targeting U.S. political, economic, military, educational, and CI (critical infrastructure)?personnel and organizations." | [] | [] | [] | scitechnews | None | None | None | None | U.S. President Joe Biden said he expected to receive a report Tuesday detailing how China's Ministry of State Security has employed contract hackers to hold U.S. businesses hostage with ransomware. This follows the Biden administration's public accusation that Beijing is conducting unsanctioned cyber operations worldwide. An international coalition claims China launched a zero-day hack in March that impacted tens of thousands of organizations through Microsoft Exchange servers. In a jointly issued advisory Monday, the U.S. National Security Agency, Cybersecurity and Infrastructure Security Agency, and the Federal Bureau of Investigation said they "have observed increasingly sophisticated Chinese state-sponsored cyber activity targeting U.S. political, economic, military, educational, and CI (critical infrastructure)?personnel and organizations."
|
||||
92 | Contact-Aware Robot Design | Adequate biomimicry in robotics necessitates a delicate balance between design and control, an integral part of making our machines more like us. Advanced dexterity in humans is wrapped up in a long evolutionary tale of how our fists of fury evolved to accomplish complex tasks. With machines, designing a new robotic manipulator could mean long, manual iteration cycles of designing, fabricating, and evaluating guided by human intuition.
Most robotic hands are designed for general purposes, as it's very tedious to make task-specific hands. Existing methods battle trade-offs between the design complexity critical for contact-rich tasks and the practical constraints of manufacturing and contact handling.
This led researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) to create a new method to computationally optimize the shape and control of a robotic manipulator for a specific task. Their system uses software to manipulate the design, simulate the robot doing a task, and then provide an optimization score to assess the design and control.
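The article does not show the CSAIL code; the loop below is only a schematic sketch of the design-simulate-score idea it describes, with every function, parameter, and number invented for illustration:

```python
import random

def perturb_design(design):
    """Hypothetical stand-in: nudge the manipulator's shape parameters."""
    return [p + random.uniform(-0.05, 0.05) for p in design]

def simulate_task(design):
    """Hypothetical stand-in for the simulator: return a score for how well
    this design completes the target task (higher is better). A real system
    would run a contact-rich physics simulation here."""
    target = [0.3, 0.7, 0.5]                      # invented "ideal" parameters
    return -sum((p - t) ** 2 for p, t in zip(design, target))

def optimize(initial_design, iterations=200):
    """Design-simulate-score loop: propose a change, simulate the task,
    keep the change only if the score improves."""
    best, best_score = initial_design, simulate_task(initial_design)
    for _ in range(iterations):
        candidate = perturb_design(best)
        score = simulate_task(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

print(optimize([0.5, 0.5, 0.5]))
```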
Such task-driven manipulator optimization has potential for a wide range of applications in manufacturing and warehouse robot systems, where each task needs to be performed repeatedly, but different manipulators would be suitable for individual tasks. | A system developed by Massachusetts Institute of Technology (MIT) researchers uses software to optimize the shape and control of a robotic manipulator for a specific task. After manipulating the design and simulating the robot manipulator performing a task, the system assigns an optimization score to assess its design and control. The researchers used "cage-based deformation" to change the geometry of a shape in real time, in order to create more involved manipulators. This involves putting a cage-like structure around a robotic finger, with the algorithm altering the cage dimensions automatically to create more sophisticated, natural shapes. MIT's Jie Xu said, "We not only find better solutions, but also find them faster. As a result, we can quickly score the design, thus significantly shortening the design cycle." | [] | [] | [] | scitechnews | None | None | None | None | A system developed by Massachusetts Institute of Technology (MIT) researchers uses software to optimize the shape and control of a robotic manipulator for a specific task. After manipulating the design and simulating the robot manipulator performing a task, the system assigns an optimization score to assess its design and control. The researchers used "cage-based deformation" to change the geometry of a shape in real time, in order to create more involved manipulators. This involves putting a cage-like structure around a robotic finger, with the algorithm altering the cage dimensions automatically to create more sophisticated, natural shapes. MIT's Jie Xu said, "We not only find better solutions, but also find them faster. As a result, we can quickly score the design, thus significantly shortening the design cycle."
Adequate biomimicry in robotics necessitates a delicate balance between design and control, an integral part of making our machines more like us. Advanced dexterity in humans is wrapped up in a long evolutionary tale of how our fists of fury evolved to accomplish complex tasks. With machines, designing a new robotic manipulator could mean long, manual iteration cycles of designing, fabricating, and evaluating guided by human intuition.
Most robotic hands are designed for general purposes, as it's very tedious to make task-specific hands. Existing methods battle trade-offs between the design complexity critical for contact-rich tasks and the practical constraints of manufacturing and contact handling.
This led researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) to create a new method to computationally optimize the shape and control of a robotic manipulator for a specific task. Their system uses software to manipulate the design, simulate the robot doing a task, and then provide an optimization score to assess the design and control.
Such task-driven manipulator optimization has potential for a wide range of applications in manufacturing and warehouse robot systems, where each task needs to be performed repeatedly, but different manipulators would be suitable for individual tasks. |
|||
93 | Algorithm May Help Scientists Demystify Complex Networks | UNIVERSITY PARK, Pa. - From biochemical reactions that produce cancers, to the latest memes virally spreading across social media, simple actions can generate complex behaviors. For researchers trying to understand these emergent behaviors, however, the complexity can tax current computational methods.
Now, a team of researchers has developed a new algorithm that serves as a more effective way to analyze models of biological systems, which in turn opens a new path to understanding the decision-making circuits that make up these systems. The researchers add that the algorithm will help scientists study how relatively simple actions lead to complex behaviors, such as cancer growth and voting patterns.
The modeling framework used consists of Boolean networks, which are a collection of nodes that are either on or off, said Jordan Rozum, doctoral candidate in physics at Penn State. For example, a Boolean network could be a network of interacting genes that are either turned on - expressed - or off in a cell.
"Boolean networks are a good way to capture the essence of a system," said Rozum. "It's interesting that these very rich behaviors can emerge out of just coupling little on and off switches together - one switch is toggled and then it toggles another switch and that can lead to a big cascade of effects that then feeds back into the original switch. And we can get really interesting complex behaviors out of just the simple couplings."
"Boolean models describe how information propagates through the network," said Réka Albert , distinguished professor of physics and biology in the Penn State Eberly College of Science and an affiliate of the Institute for Computational and Data Sciences . Eventually, the on/off states of the nodes fall into repeating patterns, called attractors, which correspond to the stable long-term behaviors of the system, according to the researchers, who report their findings in the current issue of Science Advances.
Even though these systems are based on simple actions, the complexity can scale up dramatically as nodes are added to the system, especially in the case when events in the system are not synchronous. A typical Boolean network model of a biological process with a few dozen nodes, for example, has tens of billions of states, according to the researchers. In the case of a genome, these models can have thousands of nodes, resulting in more states than there are atoms in the observable universe.
The researchers use two transformations - parity and time reversal - to make the analysis of Boolean networks more efficient. The parity transformation offers a mirror image of the network, switching nodes that are on to off and vice versa, which helps identify which subnetworks have combinations of on and off values that can sustain themselves over time. Time reversal runs the dynamics of the network backward, probing which states can precede an initial input state.
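Continuing the toy network above - and again only as an illustrative sketch, not the authors' implementation - the parity transformation flips ON and OFF on both the inputs and the output of every rule, while time reversal asks which states could have preceded a given one:

```python
from itertools import product

def update(state):
    # Same toy rules as the sketch above, repeated so this block runs alone.
    a, b, c = state
    return (b and not c, a, a or b)

def parity_update(state):
    """Parity-transformed network: the mirror image in which every node's
    value is negated before and after applying the original rule."""
    flipped = tuple(not x for x in state)
    return tuple(not x for x in update(flipped))

def predecessors(target):
    """Time reversal by brute force: keep every state whose successor is the
    given one (only feasible for tiny networks like this three-node toy)."""
    return [s for s in product([False, True], repeat=3) if update(s) == target]

print(parity_update((True, False, False)))
print(predecessors((False, False, False)))
```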
The team tested their methods on a collection of synthetic Boolean networks called random Boolean networks, which have been used for more than 50 years as a way to model how gene regulation determines the fate of a cell. The technique allowed the team to find the number of attractors in these networks at sizes of more than 16,000 genes - larger, according to the researchers, than ever before analyzed in such detail.
According to the team, the technique could help medical researchers.
"For example, you might want a cancer cell to undergo apoptosis (programmed cell death), and so you want to be able to make the system pick the decisions that lead towards that desired outcome," said Rozum. "So, by studying where in the network these decisions are made, you can figure out what you need to do to make the system choose those options."
Other possibilities exist for using the methods to study issues in the social sciences and information technology.
"The propagation of information would also make an interesting application," said Albert. "For example, there are models that describe a society in which people have binary opinions on a matter. In the model people interact with each other, forming a local consensus. Our methods could be used to map the repertoire of consensus groups that are possible, including a global consensus."
She added that uses could extend to any area where researchers are trying to find ways to eliminate pathological behaviors, or drive the system into more normal behaviors.
"To do this, the theory existed, methodologies existed, but the computational expense was a limiting factor," said Albert. "With this algorithm, that has to a large part been eliminated."
The researchers have developed a publicly available software library and the algorithms have already been used in studies carried out by her group, according to Albert.
Computations for the study were performed using Penn State's Roar supercomputer .
Albert and Rozum worked with Jorge Gómez Tejeda Zañudo, postdoctoral associate at Broad Institute and Dana-Farber Cancer Institute; Xiao Gan, postdoctoral researcher at the Center for Complex Network Research; and Dávid Deritei, graduate research fellow at Semmelweis University. | A new algorithm capable of analyzing models of biological systems can lead to greater understanding of their underlying decision-making mechanisms, with implications for studying how complex behaviors are rooted in relatively simple actions. Pennsylvania State University (Penn State) 's Jordan Rozum said the modeling framework includes Boolean networks. Said Penn State's Reka Albert, "Boolean models describe how information propagates through the network," and the nodes' on/off states eventually slip into repeating patterns that correspond to the system's stable long-term behaviors. Complexity can scale up dramatically as the system incorporates more nodes, particularly when events in the system are asynchronous. The researchers used parity and time-reversal transformations to boost the efficiency of the Boolean network analysis. | [] | [] | [] | scitechnews | None | None | None | None | A new algorithm capable of analyzing models of biological systems can lead to greater understanding of their underlying decision-making mechanisms, with implications for studying how complex behaviors are rooted in relatively simple actions. Pennsylvania State University (Penn State) 's Jordan Rozum said the modeling framework includes Boolean networks. Said Penn State's Reka Albert, "Boolean models describe how information propagates through the network," and the nodes' on/off states eventually slip into repeating patterns that correspond to the system's stable long-term behaviors. Complexity can scale up dramatically as the system incorporates more nodes, particularly when events in the system are asynchronous. The researchers used parity and time-reversal transformations to boost the efficiency of the Boolean network analysis.
UNIVERSITY PARK, Pa. - From biochemical reactions that produce cancers, to the latest memes virally spreading across social media, simple actions can generate complex behaviors. For researchers trying to understand these emergent behaviors, however, the complexity can tax current computational methods.
Now, a team of researchers has developed a new algorithm that serves as a more effective way to analyze models of biological systems, which in turn opens a new path to understanding the decision-making circuits that make up these systems. The researchers add that the algorithm will help scientists study how relatively simple actions lead to complex behaviors, such as cancer growth and voting patterns.
The modeling framework used consists of Boolean networks, which are a collection of nodes that are either on or off, said Jordan Rozum, doctoral candidate in physics at Penn State. For example, a Boolean network could be a network of interacting genes that are either turned on - expressed - or off in a cell.
"Boolean networks are a good way to capture the essence of a system," said Rozum. "It's interesting that these very rich behaviors can emerge out of just coupling little on and off switches together - one switch is toggled and then it toggles another switch and that can lead to a big cascade of effects that then feeds back into the original switch. And we can get really interesting complex behaviors out of just the simple couplings."
"Boolean models describe how information propagates through the network," said Réka Albert , distinguished professor of physics and biology in the Penn State Eberly College of Science and an affiliate of the Institute for Computational and Data Sciences . Eventually, the on/off states of the nodes fall into repeating patterns, called attractors, which correspond to the stable long-term behaviors of the system, according to the researchers, who report their findings in the current issue of Science Advances.
Even though these systems are based on simple actions, the complexity can scale up dramatically as nodes are added to the system, especially in the case when events in the system are not synchronous. A typical Boolean network model of a biological process with a few dozen nodes, for example, has tens of billions of states, according to the researchers. In the case of a genome, these models can have thousands of nodes, resulting in more states than there are atoms in the observable universe.
The researchers use two transformations - parity and time reversal - to make the analysis of Boolean networks more efficient. The parity transformation offers a mirror image of the network, switching nodes that are on to off and vice versa, which helps identify which subnetworks have combinations of on and off values that can sustain themselves over time. Time reversal runs the dynamics of the network backward, probing which states can precede an initial input state.
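To make the underlying ideas concrete, here is a minimal, purely illustrative Python sketch of a Boolean network: three on/off nodes with made-up update rules, stepped synchronously until a state repeats, with every initial condition checked by brute force to list the attractors. It is not the parity/time-reversal algorithm from the study; brute force is exactly the approach that stops scaling as networks grow.

from itertools import product

# Toy 3-node Boolean network: True = on, False = off.  The update rules are
# hypothetical, chosen only to illustrate how attractors arise; they are not
# taken from the paper.
rules = {
    "A": lambda s: s["B"] and not s["C"],
    "B": lambda s: s["A"] or s["C"],
    "C": lambda s: not s["A"],
}

def step(state):
    # Synchronous update: every node recomputes its value from the current state.
    return {node: rule(state) for node, rule in rules.items()}

def attractor_from(state):
    # Iterate until a state repeats; the repeating cycle is the attractor.
    seen = []
    while state not in seen:
        seen.append(state)
        state = step(state)
    return seen[seen.index(state):]

# Brute force over all 2^3 initial conditions -- feasible only for tiny
# networks, which is why smarter methods are needed at genome scale.
attractors = set()
for bits in product([False, True], repeat=len(rules)):
    cycle = attractor_from(dict(zip(rules, bits)))
    attractors.add(frozenset(tuple(sorted(s.items())) for s in cycle))

print(len(attractors), "attractor(s) found")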
The team tested their methods on a collection of synthetic Boolean networks called random Boolean networks, which have been used for more than 50 years as a way to model how gene regulation determines the fate of a cell. The technique allowed the team to find the number of attractors in networks of more than 16,000 genes, which, according to the researchers, are sizes larger than ever before analyzed in such detail.
According to the team, the technique could help medical researchers.
"For example, you might want a cancer cell to undergo apoptosis (programmed cell death), and so you want to be able to make the system pick the decisions that lead towards that desired outcome," said Rozum. "So, by studying where in the network these decisions are made, you can figure out what you need to do to make the system choose those options."
Other possibilities exist for using the methods to study issues in the social sciences and information technology.
"The propagation of information would also make an interesting application," said Albert. "For example, there are models that describe a society in which people have binary opinions on a matter. In the model people interact with each other, forming a local consensus. Our methods could be used to map the repertoire of consensus groups that are possible, including a global consensus."
She added that uses could extend to any area where researchers are trying to find ways to eliminate pathological behaviors, or drive the system into more normal behaviors.
"To do this, the theory existed, methodologies existed, but the computational expense was a limiting factor," said Albert. "With this algorithm, that has to a large part been eliminated."
The researchers have developed a publicly available software library and the algorithms have already been used in studies carried out by her group, according to Albert.
Computations for the study were performed using Penn State's Roar supercomputer .
Albert and Rozum worked with Jorge Gómez Tejeda Zañudo, postdoctoral associate at Broad Institute and Dana-Farber Cancer Institute; Xiao Gan, postdoctoral researcher at the Center for Complex Network Research; and Dávid Deritei, graduate research fellow at Semmelweis University. |
|||
94 | Platform Allows Autonomous Vehicles to Safely Drive at Small Distances | Vehicle automation has become an important topic in recent years. It is aimed at mitigating driver-induced traffic accidents, improving the road capacity of the existing infrastructure, and reducing fuel consumption. Two major classes of automated vehicles can be distinguished. The first is cooperative vehicles, which use vehicle-to-vehicle or vehicle-to-infrastructure (V2I) communication to exchange motion data, making it possible to follow the vehicle in front at very small distances while preventing the harmonica (accordion) effect that often results in traffic jams. However, this type of vehicle is typically capable of performing only a single task, limiting its application to, for example, following a preceding vehicle on the highway. The second class is autonomous vehicles, which use on-board sensors such as radar, LIDAR, and computer vision systems to identify the road, other traffic participants, and other relevant features or obstacles. The control algorithms on board these vehicles make use of explicit planning of a vehicle trajectory. By planning various trajectories, the vehicle can select the most suitable type of trajectory for the current situation, enabling it to handle a much wider class of traffic scenarios than cooperative vehicles can.
In his PhD research (part of the NWO-funded i-CAVE project), Robbin van Hoek aimed to integrate these two classes of automated vehicles into one single platform. This new vehicle benefits from the communicated motion data from other vehicles by following them at very close distances while preventing traffic jams, but maintains the versatility of the autonomous vehicle. For example, instead of simply following the preceding vehicle, it can also autonomously decide to overtake it if it is driving too slowly compared to the host vehicle.
Aside from the development of the mathematical methods, the framework was implemented in two Renault Twizys, which have been modified at TU/e to be able to drive autonomously. With the developed cooperative trajectory planning method, Van Hoek was able to safely follow a preceding vehicle at a time gap of 0.3 seconds.
This research is an important step towards autonomous vehicles that are capable of safely driving at small inter-vehicle distances while preventing the harmonica effect that is often seen with human-driven vehicles on the highway. This research will lead to increased mobility and safety in transportation.
Robbin van Hoek, Cooperative Trajectory Planning forAutomated Vehicles , supervisors: H. Nijmeijer, J. Ploeg. | A Ph.D. student at the Eindhoven University of Technology in the Netherlands, Robbin van Hoek, has integrated the benefits of cooperative and autonomous vehicles into a single platform. Cooperative vehicles exchange motion data via vehicle-to-vehicle or vehicle-to-infrastructure communication, but generally can perform only a single task, like following a preceding vehicle. Autonomous vehicles use radar, LiDAR, and computer vision systems to detect the road, other traffic participants, and other relevant features or obstacles and can plan a vehicle trajectory. Combining the two enables the vehicle to take advantage of motion data communicated by other vehicles by following them at close distances, while preventing traffic jams and autonomously making decisions like, for instance, overtaking a vehicle that is driving too slowly. | [] | [] | [] | scitechnews | None | None | None | None | A Ph.D. student at the Eindhoven University of Technology in the Netherlands, Robbin van Hoek, has integrated the benefits of cooperative and autonomous vehicles into a single platform. Cooperative vehicles exchange motion data via vehicle-to-vehicle or vehicle-to-infrastructure communication, but generally can perform only a single task, like following a preceding vehicle. Autonomous vehicles use radar, LiDAR, and computer vision systems to detect the road, other traffic participants, and other relevant features or obstacles and can plan a vehicle trajectory. Combining the two enables the vehicle to take advantage of motion data communicated by other vehicles by following them at close distances, while preventing traffic jams and autonomously making decisions like, for instance, overtaking a vehicle that is driving too slowly.
Vehicle automation has become an important topic in recent years. It is aimed at mitigating driver-induced traffic accidents, improving the road capacity of the existing infrastructure, and reducing fuel consumption. Two major classes of automated vehicles can be distinguished. The first is cooperative vehicles, which use vehicle-to-vehicle or vehicle-to-infrastructure (V2I) communication to exchange motion data, making it possible to follow the vehicle in front at very small distances while preventing the harmonica (accordion) effect that often results in traffic jams. However, this type of vehicle is typically capable of performing only a single task, limiting its application to, for example, following a preceding vehicle on the highway. The second class is autonomous vehicles, which use on-board sensors such as radar, LIDAR, and computer vision systems to identify the road, other traffic participants, and other relevant features or obstacles. The control algorithms on board these vehicles make use of explicit planning of a vehicle trajectory. By planning various trajectories, the vehicle can select the most suitable type of trajectory for the current situation, enabling it to handle a much wider class of traffic scenarios than cooperative vehicles can.
In his PhD research (part of the NWO-funded i-CAVE project), Robbin van Hoek aimed to integrate these two classes of automated vehicles into one single platform. This new vehicle benefits from the communicated motion data from other vehicles by following them at very close distances while preventing traffic jams, but maintains the versatility of the autonomous vehicle. For example, instead of simply following the preceding vehicle, it can also autonomously decide to overtake it if it is driving too slowly compared to the host vehicle.
Aside from the development of the mathematical methods, the framework was implemented in two Renault Twizys, which have been modified at TU/e to be able to drive autonomously. With the developed cooperative trajectory planning method, Van Hoek was able to safely follow a preceding vehicle at a time gap of 0.3 seconds.
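For a sense of what following at 0.3 seconds means in practice, the sketch below evaluates a constant time-gap spacing policy, a standard formulation in cooperative adaptive cruise control. The formula and the standstill distance are illustrative assumptions, not taken from the thesis itself.

def desired_gap(speed_mps: float, time_gap_s: float = 0.3,
                standstill_m: float = 2.0) -> float:
    """Constant time-gap spacing policy: d = r + h * v.

    speed_mps    -- host vehicle speed in m/s
    time_gap_s   -- desired time gap to the predecessor (0.3 s in the article)
    standstill_m -- distance kept at zero speed (illustrative value)
    """
    return standstill_m + time_gap_s * speed_mps

# At 100 km/h (about 27.8 m/s) a 0.3 s time gap corresponds to roughly a 10 m gap.
print(round(desired_gap(100 / 3.6), 1))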
This research is an important step towards autonomous vehicles that are capable of safely driving at small inter-vehicle distances while preventing the harmonica effect that is often seen with human-driven vehicles on the highway. This research will lead to increased mobility and safety in transportation.
Robbin van Hoek, Cooperative Trajectory Planning for Automated Vehicles, supervisors: H. Nijmeijer, J. Ploeg. |
|||
95 | Rats Took Over This Pacific Island. Now Drones Are Leading the Fightback | When the people who would become the first Polynesian islanders ventured out into the remote Pacific some 3,000 years ago, they took three main animals with them: pigs, chickens, and dogs. Expanding their territory over the next few thousand years, from New Zealand north to Hawaii and east to Easter Island, the Polynesians flourished.
Surpassing even their ability to prosper in some of the world's most remote locations was a fourth addition to their animal crew: rats. According to research by James Russell , a conservation biologist at the University of Auckland, Polynesians introduced R. exulans (the Polynesian rat) to Tetiaroa atoll - a six-kilometre-square island in French Polynesia - around 1,000 years ago, while European explorers brought the R. rattus (black rat) variety to the atoll in the 1970s. Which just happened to be around the same time Marlon Brando built a small village on the atoll after filming Mutiny on the Bounty near Tahiti.
The rats are now everywhere on Tetiaroa atoll. Sally Esposito, speaking on behalf of the Tetiaroa Society , a non-profit designated by the Brando Family Trust as environmental stewards of the islands, explains that the nearby island of Motu Reiono experienced 65-153 rats/hectare. Extrapolating that data to the surface area of Tetiaroa suggests that there are currently anywhere between 28,000 and 65,000 rats on the island.
Rats have a knack for taking over isolated parts of the world from the moment they arrive. Wherever they're found, these crafty stowaways rapaciously feast on the eggs and hatchlings of the islands' native bird and reptile species, knocking the pre-ordained flow of nature out of sync. Because most island species have evolved without the presence of mammal species (which, with the exception of bats, have historically struggled to make it to islands from the nearest mainland under their own power), most native species have evolved without an evolutionary response to these scurrying interlopers. With a reduced bird population, fewer outside nutrients are brought onto the island, creating a closed loop in which rats dominate all resources. This, in turn, reduces the amount of subsidy nutrients being washed out to support coral reefs around the islands.
But now the rats' reign over Tetiaroa may be about to come to an end. Beginning in August 2021, the conservation group Island Conservation will employ a novel approach in an attempt to clear the rat population from Tetiaroa Atoll (as well as two other islands in French Polynesia): drones. By using specially-engineered drones to blanket the islands with rat poison, the charity will implement the world's first scalable, heavy-lift drone operation to remove invasive rats.
The method was trialled on Seymour Norte in the Galapagos in 2019 where the swallow-tailed gull - the only nocturnal gull on Earth - was at risk of extinction due to the local rat population. Launched from boats and flying autonomously along predetermined routes, a drone was able to drop rodenticide with extreme precision while minimising the impact on non-rat species. Two years later, Seymour Norte was declared 100 per cent rat free - a resounding success for conservationists everywhere.
Founded in 1994, Island Conservation and its partners have so far successfully restored 65 islands worldwide, benefiting 1,218 populations of 504 species and subspecies. Protecting our islands is important because they play an outsized role in the planet's biodiversity. They make up only five per cent of our planet's land area, but are home to an estimated 20 per cent of all plant, reptile and bird species. Unfortunately, 75 per cent of all amphibian, bird and mammal extinctions occur on islands, with invasive species such as rats the primary cause. | The conservation group Island Conservation plans to use drones to try to eliminate rats on Tetiaroa atoll and two other islands in French Polynesia, beginning in August. Hexacopter (six-rotor) drones from New Zealand's Envico Technologies will be used to drop 30 tons of rat poison on the islands over a two-week period. A trial drone initiative undertaken in 2019 on Seymour Norte in the Galapagos resulted in that island being declared 100% rat-free two years later. Island Conservation's David Will said, "We've been watching drone technology for a number of years with the idea that it can dramatically reduce cost and also democratize island restoration by allowing local experts to be able to fly them using precision automating processes." | [] | [] | [] | scitechnews | None | None | None | None | The conservation group Island Conservation plans to use drones to try to eliminate rats on Tetiaroa atoll and two other islands in French Polynesia, beginning in August. Hexacopter (six-rotor) drones from New Zealand's Envico Technologies will be used to drop 30 tons of rat poison on the islands over a two-week period. A trial drone initiative undertaken in 2019 on Seymour Norte in the Galapagos resulted in that island being declared 100% rat-free two years later. Island Conservation's David Will said, "We've been watching drone technology for a number of years with the idea that it can dramatically reduce cost and also democratize island restoration by allowing local experts to be able to fly them using precision automating processes."
When the people who would become the first Polynesian islanders ventured out into the remote Pacific some 3,000 years ago, they took three main animals with them: pigs, chickens, and dogs. Expanding their territory over the next few thousand years, from New Zealand north to Hawaii and east to Easter Island, the Polynesians flourished.
Surpassing even their ability to prosper in some of the world's most remote locations was a fourth addition to their animal crew: rats. According to research by James Russell , a conservation biologist at the University of Auckland, Polynesians introduced R. exulans (the Polynesian rat) to Tetiaroa atoll - a six-kilometre-square island in French Polynesia - around 1,000 years ago, while European explorers brought the R. rattus (black rat) variety to the atoll in the 1970s. Which just happened to be around the same time Marlon Brando built a small village on the atoll after filming Mutiny on the Bounty near Tahiti.
The rats are now everywhere on Tetiaroa atoll. Sally Esposito, speaking on behalf of the Tetiaroa Society , a non-profit designated by the Brando Family Trust as environmental stewards of the islands, explains that the nearby island of Motu Reiono experienced 65-153 rats/hectare. Extrapolating that data to the surface area of Tetiaroa suggests that there are currently anywhere between 28,000 and 65,000 rats on the island.
Rats have a knack for taking over isolated parts of the world from the moment they arrive. Wherever they're found, these crafty stowaways rapaciously feast on the eggs and hatchlings of the islands' native bird and reptile species, knocking the pre-ordained flow of nature out of sync. Because most island species have evolved without the presence of mammal species (which, with the exception of bats, have historically struggled to make it to islands from the nearest mainland under their own power), most native species have evolved without an evolutionary response to these scurrying interlopers. With a reduced bird population, fewer outside nutrients are brought onto the island, creating a closed loop in which rats dominate all resources. This, in turn, reduces the amount of subsidy nutrients being washed out to support coral reefs around the islands.
But now the rats' reign over Tetiaroa may be about to come to an end. Beginning in August 2021, the conservation group Island Conservation will employ a novel approach in an attempt to clear the rat population from Tetiaroa Atoll (as well as two other islands in French Polynesia): drones. By using specially-engineered drones to blanket the islands with rat poison, the charity will implement the world's first scalable, heavy-lift drone operation to remove invasive rats.
The method was trialled on Seymour Norte in the Galapagos in 2019 where the swallow-tailed gull - the only nocturnal gull on Earth - was at risk of extinction due to the local rat population. Launched from boats and flying autonomously along predetermined routes, a drone was able to drop rodenticide with extreme precision while minimising the impact on non-rat species. Two years later, Seymour Norte was declared 100 per cent rat free - a resounding success for conservationists everywhere.
Founded in 1994, Island Conservation and its partners have so far successfully restored 65 islands worldwide, benefiting 1,218 populations of 504 species and subspecies. Protecting our islands is important because they play an outsized role in the planet's biodiversity. They make up only five per cent of our planet's land area, but are home to an estimated 20 per cent of all plant, reptile and bird species. Unfortunately, 75 per cent of all amphibian, bird and mammal extinctions occur on islands, with invasive species such as rats the primary cause. |
|||
96 | Tool to Explore Billions of Social Media Messages Could Predict Political, Financial Turmoil | Researchers at the University of Vermont (UVM), Charles River Analytics, and MassMutual Data Science (the data science unit of the Massachusetts Mutual Life Insurance Company) have developed an online tool that uncovers the stories within the billions of Twitter posts made since 2008. The Storywrangler, powered by UVM's supercomputer at the Vermont Advanced Computing Core, breaks tweets into one-, two-, and three-word phrases across 150 languages and determines the frequencies with which more than a trillion words, hashtags, handles, symbols, and emoji appear. The data can be used to analyze the rising and falling popularity of words, ideas, and stories around the globe. Said UVM's Peter Dodds, "This tool can enable new approaches in journalism, powerful ways to look at natural language processing, and the development of computational history." | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the University of Vermont (UVM), Charles River Analytics, and MassMutual Data Science (the data science unit of the Massachusetts Mutual Life Insurance Company) have developed an online tool that uncovers the stories within the billions of Twitter posts made since 2008. The Storywrangler, powered by UVM's supercomputer at the Vermont Advanced Computing Core, breaks tweets into one-, two-, and three-word phrases across 150 languages and determines the frequencies with which more than a trillion words, hashtags, handles, symbols, and emoji appear. The data can be used to analyze the rising and falling popularity of words, ideas, and stories around the globe. Said UVM's Peter Dodds, "This tool can enable new approaches in journalism, powerful ways to look at natural language processing, and the development of computational history."
|
||||
97 | GPUs Can Now Analyze a Billion Complex Vectors in Record Time | The complexity of a digital photo cannot be overstated.
Each pixel comprises many data points, and there can be millions of pixels in just a single photo. These many data points in relation to each other are referred to as "high-dimensional" data and can require immense computing power to analyze, say if you were searching for similar photos in a database. Computer programmers and AI experts refer to this as "the curse of high dimensionality."
In a study published July 1 in IEEE Transactions on Big Data , researchers at Facebook AI Research propose a novel solution that aims to ease the burden of this curse. But rather than the traditional means of a computer's central processing units (CPUs) to analyze high-dimensional media, they've harnessed Graphical Processing Units (GPUs). The advancement allows 4 GPUs to analyze more than 95 million high-dimensional images in just 35 minutes. This speed is 8.5 times faster than previous techniques that used GPUs to analyze high-dimensional data.
"The most straightforward technique for searching and indexing [high-dimensional data] is by brute-force comparison, whereby you need to check [each image] against every other image in the database," explains Jeff Johnson, a research engineer at Facebook AI Research who co-developed the new approach using GPUs. "This is impractical for collections containing billions of vectors."
CPUs, which have high memory storage and thus can handle large volumes of data, are capable of such a task. However, it takes a substantial amount of time for CPUs to transfer data among the various other supercomputer components, which causes an overall lag in computing time.
In contrast, GPUs offer more raw processing power. Therefore, Johnson and his team developed an algorithm that allows GPUs to both host and analyze a library of vectors. In this way, the data is managed by a small handful of GPUs that do all the work. Notably, GPUs typically have less overall memory storage than CPUs, but Johnson and his colleagues were able to overcome this pitfall using a technique that compresses vector databases and makes them more manageable for the GPUs to analyze.
"By keeping computations purely on a GPU, we can take advantage of the much faster memory available on the accelerator, instead of dealing with the slower memories of CPU servers and even slower machine-to-machine network interconnects within a traditional supercomputer cluster," explains Johnson.
The researchers tested their approach against a database with one billion vectors, comprising 384 gigabytes of raw data. Their approach reduced the number of vector combinations that need to be analyzed, which would normally be a quintillion (10^18), by at least 4 orders of magnitude.
"Both the improvement in speed and the decrease in database size allow for solving problems that would otherwise take hundreds of CPU machines, in effect democratizing large-scale indexing and search techniques using a much smaller amount of hardware," he says.
Their approach has been made freely available through the Facebook AI Similarity Search (Faiss) open source library. Johnson notes that the computing tech giant Nvidia has already begun building extensions using this approach, which were unveiled at the company's 2021 GPU Technology Conference . | Researchers at Facebook AI Research have developed an approach to leverage graphical processing units (GPUs) for the analysis of high-dimensional media. The researchers developed an algorithm that enables GPUs to host and analyze a library of vectors, and employed a technique that compresses vector databases so the GPUs can analyze them more easily. Facebook AI's Jeff Johnson said, "By keeping computations purely on a GPU, we can take advantage of the much faster memory available on the accelerator, instead of dealing with the slower memories of [central processing unit-based] servers and even slower machine-to-machine network interconnects within a traditional supercomputer cluster." With the new approach, four GPUs were able to analyze more than 95 million high-dimensional images in 35 minutes, 8.5 times faster than previous techniques. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at Facebook AI Research have developed an approach to leverage graphical processing units (GPUs) for the analysis of high-dimensional media. The researchers developed an algorithm that enables GPUs to host and analyze a library of vectors, and employed a technique that compresses vector databases so the GPUs can analyze them more easily. Facebook AI's Jeff Johnson said, "By keeping computations purely on a GPU, we can take advantage of the much faster memory available on the accelerator, instead of dealing with the slower memories of [central processing unit-based] servers and even slower machine-to-machine network interconnects within a traditional supercomputer cluster." With the new approach, four GPUs were able to analyze more than 95 million high-dimensional images in 35 minutes, 8.5 times faster than previous techniques.
The complexity of a digital photo cannot be overstated.
Each pixel comprises many data points, and there can be millions of pixels in just a single photo. These many data points in relation to each other are referred to as "high-dimensional" data and can require immense computing power to analyze, say if you were searching for similar photos in a database. Computer programmers and AI experts refer to this as "the curse of high dimensionality."
In a study published July 1 in IEEE Transactions on Big Data , researchers at Facebook AI Research propose a novel solution that aims to ease the burden of this curse. But rather than the traditional means of a computer's central processing units (CPUs) to analyze high-dimensional media, they've harnessed Graphical Processing Units (GPUs). The advancement allows 4 GPUs to analyze more than 95 million high-dimensional images in just 35 minutes. This speed is 8.5 times faster than previous techniques that used GPUs to analyze high-dimensional data.
"The most straightforward technique for searching and indexing [high-dimensional data] is by brute-force comparison, whereby you need to check [each image] against every other image in the database," explains Jeff Johnson, a research engineer at Facebook AI Research who co-developed the new approach using GPUs. "This is impractical for collections containing billions of vectors."
CPUs, which have high memory storage and thus can handle large volumes of data, are capable of such a task. However, it takes a substantial amount of time for CPUs to transfer data among the various other supercomputer components, which causes an overall lag in computing time.
In contrast, GPUs offer more raw processing power. Therefore, Johnson and his team developed an algorithm that allows GPUs to both host and analyze a library of vectors. In this way, the data is managed by a small handful of GPUs that do all the work. Notably, GPUs typically have less overall memory storage than CPUs, but Johnson and his colleagues were able to overcome this pitfall using a technique that compresses vector databases and makes them more manageable for the GPUs to analyze.
"By keeping computations purely on a GPU, we can take advantage of the much faster memory available on the accelerator, instead of dealing with the slower memories of CPU servers and even slower machine-to-machine network interconnects within a traditional supercomputer cluster," explains Johnson.
The researchers tested their approach against a database with one billion vectors, comprising 384 gigabytes of raw data. Their approach reduced the number of vector combinations that need to be analyzed, which would normally be a quintillion (10^18), by at least 4 orders of magnitude.
"Both the improvement in speed and the decrease in database size allow for solving problems that would otherwise take hundreds of CPU machines, in effect democratizing large-scale indexing and search techniques using a much smaller amount of hardware," he says.
Their approach has been made freely available through the Facebook AI Similarity Search (Faiss) open source library. Johnson notes that the computing tech giant Nvidia has already begun building extensions using this approach, which were unveiled at the company's 2021 GPU Technology Conference . |
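Faiss is the open source library named above. As a rough sketch of the general pattern it supports (a compressed IVF-PQ index moved onto a GPU for search), the following assumes the GPU build of Faiss and uses small, made-up data sizes rather than the billion-vector configuration described in the study.

import numpy as np
import faiss  # requires the faiss-gpu build for the GPU step below

d = 128                                   # vector dimensionality (illustrative)
xb = np.random.random((100_000, d)).astype("float32")   # database vectors
xq = np.random.random((5, d)).astype("float32")         # query vectors

# Compressed index: inverted lists (IVF) plus product quantization (PQ), the
# kind of compression that lets large databases fit in GPU memory.
nlist, m, nbits = 1024, 16, 8
quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)
index.train(xb)                           # learn the coarse and PQ quantizers
index.add(xb)                             # add the database vectors

# Move the index onto GPU 0 so search runs entirely on the accelerator.
res = faiss.StandardGpuResources()
gpu_index = faiss.index_cpu_to_gpu(res, 0, index)
gpu_index.nprobe = 32                     # how many inverted lists to visit

distances, ids = gpu_index.search(xq, 10) # 10 nearest neighbours per query
print(ids.shape)                          # (5, 10)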
|||
99 | Air-Powered Computer Memory Helps Soft Robot Control Movements | Engineers at UC Riverside have unveiled an air-powered computer memory that can be used to control soft robots. The innovation overcomes one of the biggest obstacles to advancing soft robotics: the fundamental mismatch between pneumatics and electronics. The work is published in the open-access journal, PLOS One.
Pneumatic soft robots use pressurized air to move soft, rubbery limbs and grippers and are superior to traditional rigid robots for performing delicate tasks. They are also safer for humans to be around. Baymax, the healthcare companion robot in the 2014 animated Disney film, Big Hero 6, is a pneumatic robot for good reason.
But existing systems for controlling pneumatic soft robots still use electronic valves and computers to maintain the position of the robot's moving parts. These electronic parts add considerable cost, size, and power demands to soft robots, limiting their feasibility.
To advance soft robotics toward the future, a team led by bioengineering doctoral student Shane Hoang, his advisor, bioengineering professor William Grover , computer science professor Philip Brisk , and mechanical engineering professor Konstantinos Karydis , looked back to the past.
"Pneumatic logic" predates electronic computers and once provided advanced levels of control in a variety of products, from thermostats and other components of climate control systems to player pianos in the early 1900s. In pneumatic logic, air, not electricity, flows through circuits or channels and air pressure is used to represent on/off or true/false. In modern computers, these logical states are represented by 1 and 0 in code to trigger or end electrical charges.
Pneumatic soft robots need a way to remember and maintain the positions of their moving parts. The researchers realized that if they could create a pneumatic logic "memory" for a soft robot, they could eliminate the electronic memory currently used for that purpose.
The researchers made their pneumatic random-access memory, or RAM, chip using microfluidic valves instead of electronic transistors. The microfluidic valves were originally designed to control the flow of liquids on microfluidic chips, but they can also control the flow of air. The valves remain sealed against a pressure differential even when disconnected from an air supply line, creating trapped pressure differentials that function as memories and maintain the states of a robot's actuators. Dense arrays of these valves can perform advanced operations and reduce the expensive, bulky, and power-consuming electronic hardware typically used to control pneumatic robots.
After modifying the microfluidic valves to handle larger air flow rates, the team produced an 8-bit pneumatic RAM chip able to control larger and faster-moving soft robots, and incorporated it into a pair of 3D-printed rubber hands. The pneumatic RAM uses atmospheric-pressure air to represent a "0" or FALSE value, and vacuum to represent a "1" or TRUE value. The soft robotic fingers are extended when connected to atmospheric pressure and contracted when connected to vacuum.
By varying the combinations of atmospheric pressure and vacuum within the channels on the RAM chip, the researchers were able to make the robot play notes, chords, and even a whole song - "Mary Had a Little Lamb" - on a piano.
In theory, this system could be used to operate other robots without any electronic hardware and only a battery-powered pump to create a vacuum. The researchers note that without positive pressure anywhere in the system - only normal atmospheric air pressure - there is no risk of accidental overpressurization and violent failure of the robot or its control system. Robots using this technology would be especially safe for delicate use on or around humans, such as wearable devices for infants with motor impairments.
The paper, "A pneumatic random-access memory for controlling soft robots," is available here . The research was supported by the National Science Foundation.
Header photo: A pneumatic gripper holds a UCR orange. (William Grover) | A new air-powered computer memory can be utilized to control soft robots, thanks to engineers at the University of California, Riverside (UC Riverside). The researchers designed an 8-bit pneumatic random-access memory (RAM) chip that substituted microfluidic valves for electronic transistors. The valves stay sealed against a pressure differential even when detached from an air supply line, generating trapped pressure differentials that serve as memories and maintain the states of a robot's actuators. Dense valve arrays can conduct sophisticated operations and streamline the bulky, power-intense hardware typical of pneumatic robot controls. The UC Riverside team incorporated the pneumatic RAM chip into a pair of three-dimensionally-printed rubber hands, and induced a robot to use them to play notes, chords, and an entire song on a piano by varying the mixture of atmospheric pressure and vacuum within the channels on the chip. | [] | [] | [] | scitechnews | None | None | None | None | A new air-powered computer memory can be utilized to control soft robots, thanks to engineers at the University of California, Riverside (UC Riverside). The researchers designed an 8-bit pneumatic random-access memory (RAM) chip that substituted microfluidic valves for electronic transistors. The valves stay sealed against a pressure differential even when detached from an air supply line, generating trapped pressure differentials that serve as memories and maintain the states of a robot's actuators. Dense valve arrays can conduct sophisticated operations and streamline the bulky, power-intense hardware typical of pneumatic robot controls. The UC Riverside team incorporated the pneumatic RAM chip into a pair of three-dimensionally-printed rubber hands, and induced a robot to use them to play notes, chords, and an entire song on a piano by varying the mixture of atmospheric pressure and vacuum within the channels on the chip.
Engineers at UC Riverside have unveiled an air-powered computer memory that can be used to control soft robots. The innovation overcomes one of the biggest obstacles to advancing soft robotics: the fundamental mismatch between pneumatics and electronics. The work is published in the open-access journal, PLOS One.
Pneumatic soft robots use pressurized air to move soft, rubbery limbs and grippers and are superior to traditional rigid robots for performing delicate tasks. They are also safer for humans to be around. Baymax, the healthcare companion robot in the 2014 animated Disney film, Big Hero 6, is a pneumatic robot for good reason.
But existing systems for controlling pneumatic soft robots still use electronic valves and computers to maintain the position of the robot's moving parts. These electronic parts add considerable cost, size, and power demands to soft robots, limiting their feasibility.
To advance soft robotics toward the future, a team led by bioengineering doctoral student Shane Hoang, his advisor, bioengineering professor William Grover , computer science professor Philip Brisk , and mechanical engineering professor Konstantinos Karydis , looked back to the past.
"Pneumatic logic" predates electronic computers and once provided advanced levels of control in a variety of products, from thermostats and other components of climate control systems to player pianos in the early 1900s. In pneumatic logic, air, not electricity, flows through circuits or channels and air pressure is used to represent on/off or true/false. In modern computers, these logical states are represented by 1 and 0 in code to trigger or end electrical charges.
Pneumatic soft robots need a way to remember and maintain the positions of their moving parts. The researchers realized that if they could create a pneumatic logic "memory" for a soft robot, they could eliminate the electronic memory currently used for that purpose.
The researchers made their pneumatic random-access memory, or RAM, chip using microfluidic valves instead of electronic transistors. The microfluidic valves were originally designed to control the flow of liquids on microfluidic chips, but they can also control the flow of air. The valves remain sealed against a pressure differential even when disconnected from an air supply line, creating trapped pressure differentials that function as memories and maintain the states of a robot's actuators. Dense arrays of these valves can perform advanced operations and reduce the expensive, bulky, and power-consuming electronic hardware typically used to control pneumatic robots.
After modifying the microfluidic valves to handle larger air flow rates, the team produced an 8-bit pneumatic RAM chip able to control larger and faster-moving soft robots, and incorporated it into a pair of 3D-printed rubber hands. The pneumatic RAM uses atmospheric-pressure air to represent a "0" or FALSE value, and vacuum to represent a "1" or TRUE value. The soft robotic fingers are extended when connected to atmospheric pressure and contracted when connected to vacuum.
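As a purely software toy model of the mapping just described (not the researchers' hardware or control code), an 8-bit word can be read as eight finger states, with atmospheric pressure as 0/extended and vacuum as 1/contracted:

# Toy model of the 8-bit pneumatic RAM described above: each bit drives one
# finger.  0 = atmospheric pressure = finger extended, 1 = vacuum = contracted.
# Purely illustrative; the real chip stores these states as trapped pressure
# differentials in microfluidic valves, not in software.

def finger_states(word: int) -> list[str]:
    assert 0 <= word < 256, "8-bit word expected"
    return ["contracted" if (word >> i) & 1 else "extended" for i in range(8)]

# Example: vacuum applied to fingers 0 and 3 only (hypothetical pattern).
print(finger_states(0b00001001))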
By varying the combinations of atmospheric pressure and vacuum within the channels on the RAM chip, the researchers were able to make the robot play notes, chords, and even a whole song - "Mary Had a Little Lamb" - on a piano.
In theory, this system could be used to operate other robots without any electronic hardware and only a battery-powered pump to create a vacuum. The researchers note that without positive pressure anywhere in the system - only normal atmospheric air pressure - there is no risk of accidental overpressurization and violent failure of the robot or its control system. Robots using this technology would be especially safe for delicate use on or around humans, such as wearable devices for infants with motor impairments.
The paper, "A pneumatic random-access memory for controlling soft robots," is available here . The research was supported by the National Science Foundation.
Header photo: A pneumatic gripper holds a UCR orange. (William Grover) |
|||
101 | As Spain's Beaches Fill Up, Seaside Resort Sends in Drones | Officials in the town of Sitges in northeastern Spain are using drones for real-time crowd monitoring along 18 km of beach as COVID-19 cases rise. Ricardo Monje of Annunzia, the company that developed the project, said, "We can take photos, pass them through some software and with the software we can count how many people are on the beach." Local official Guillem Escola said, "If we see the beach is very crowded, we can pass that information on to the beach monitors who will make checks and ensure people are keeping their distance. If people don't take notice, then we send in the police." Officials said the project complies with all data-protection laws, and images of people would remain anonymous. | [] | [] | [] | scitechnews | None | None | None | None | Officials in the town of Sitges in northeastern Spain are using drones for real-time crowd monitoring along 18 km of beach as COVID-19 cases rise. Ricardo Monje of Annunzia, the company that developed the project, said, "We can take photos, pass them through some software and with the software we can count how many people are on the beach." Local official Guillem Escola said, "If we see the beach is very crowded, we can pass that information on to the beach monitors who will make checks and ensure people are keeping their distance. If people don't take notice, then we send in the police." Officials said the project complies with all data-protection laws, and images of people would remain anonymous.
|
||||
103 | How Germany Hopes to Get the Edge in Driverless Technology | FRANKFURT - In Hamburg, a fleet of electric Volkswagen vans owned by a ride-hailing service roams the streets picking up and dropping off passengers. The vehicles steer themselves, but technicians working from a remote control center keep an eye on their progress with the help of video monitors. If anything goes wrong, they can take control of the vehicle and steer it out of trouble.
This futuristic vision, within reach of current technology, is about to become legal in Germany. The Parliament in Berlin approved a new law on autonomous driving in May, and it awaits the signature of Germany's president, a formality. The law opens a path for companies to start making money from autonomous driving services, which could also spur development.
With its requirement that autonomous vehicles be overseen by humans, the German law reflects a realization in the industry that researchers are still years away from cars that can safely allow the driver to disengage while the car does all the work. The law also requires that autonomous vehicles operate in a defined space approved by the authorities, an acknowledgment that the technology is not advanced enough to work safely in areas where traffic is chaotic and unpredictable.
So German companies that are pursuing the technology have adjusted their ambitions, focusing on moneymaking uses that don't require major breakthroughs. | A new law awaiting the signature of Germany's president would make driverless cars legal and allow companies to begin making money from autonomous driving services. However, the law requires humans to provide oversight for autonomous vehicles, and mandates these vehicles operate in approved, defined spaces. The law covers the entire nation, as opposed to U.S. policy, where the federal government has issued guidelines for autonomous driving but no overarching regulations. The law could give German automakers a competitive advantage; said German lawmaker Arno Klare, "Germany can be the first country in the world to bring vehicles without drivers from the laboratory into everyday use." | [] | [] | [] | scitechnews | None | None | None | None | A new law awaiting the signature of Germany's president would make driverless cars legal and allow companies to begin making money from autonomous driving services. However, the law requires humans to provide oversight for autonomous vehicles, and mandates these vehicles operate in approved, defined spaces. The law covers the entire nation, as opposed to U.S. policy, where the federal government has issued guidelines for autonomous driving but no overarching regulations. The law could give German automakers a competitive advantage; said German lawmaker Arno Klare, "Germany can be the first country in the world to bring vehicles without drivers from the laboratory into everyday use."
FRANKFURT - In Hamburg, a fleet of electric Volkswagen vans owned by a ride-hailing service roams the streets picking up and dropping off passengers. The vehicles steer themselves, but technicians working from a remote control center keep an eye on their progress with the help of video monitors. If anything goes wrong, they can take control of the vehicle and steer it out of trouble.
This futuristic vision, within reach of current technology, is about to become legal in Germany. The Parliament in Berlin approved a new law on autonomous driving in May, and it awaits the signature of Germany's president, a formality. The law opens a path for companies to start making money from autonomous driving services, which could also spur development.
With its requirement that autonomous vehicles be overseen by humans, the German law reflects a realization in the industry that researchers are still years away from cars that can safely allow the driver to disengage while the car does all the work. The law also requires that autonomous vehicles operate in a defined space approved by the authorities, an acknowledgment that the technology is not advanced enough to work safely in areas where traffic is chaotic and unpredictable.
So German companies that are pursuing the technology have adjusted their ambitions, focusing on moneymaking uses that don't require major breakthroughs. |
|||
104 | AI System Developed to Diagnose Heart Problems | Researchers at the Technion - Israel Institute of Technology have developed an artificial intelligence (AI) system that can diagnose cardiac issues based on hundreds of electrocardiograms (ECG). The AI system uses an augmented neural network trained on more than 1.5 million ECG tests on hundreds of patients worldwide. The system is more accurate in reading ECGs than humans, and can detect pathological conditions human cardiologists cannot. For instance, the system can identify patients at risk of arrhythmia, which can lead to heart attacks and strokes, even if the condition does not show up in the ECG. The AI explains its decisions using official cardiology terminology. | [] | [] | [] | scitechnews | None | None | None | None | Researchers at the Technion - Israel Institute of Technology have developed an artificial intelligence (AI) system that can diagnose cardiac issues based on hundreds of electrocardiograms (ECG). The AI system uses an augmented neural network trained on more than 1.5 million ECG tests on hundreds of patients worldwide. The system is more accurate in reading ECGs than humans, and can detect pathological conditions human cardiologists cannot. For instance, the system can identify patients at risk of arrhythmia, which can lead to heart attacks and strokes, even if the condition does not show up in the ECG. The AI explains its decisions using official cardiology terminology.
|
||||
105 | Google to Help Insurers Measure Slip-and-Fall Risks in Buildings | Cloud computing services platform Google Cloud and building instrumentation company BlueZoo have partnered to help small-business insurers more accurately quantify slip-and-fall and other accident risks in buildings. BlueZoo will deploy sensors in buildings that will listen for Wi-Fi probes emitted by mobile phones, and transmit the resulting occupancy data to Google Cloud for analysis. BlueZoo's Bill Evans said the cloud servers review the data to produce occupancy metrics with 90% accuracy. Google Cloud's Henna Karna said, "BlueZoo measures risk continuously, making it possible to more accurately price risk or work with building owners to mitigate risk." | [] | [] | [] | scitechnews | None | None | None | None | Cloud computing services platform Google Cloud and building instrumentation company BlueZoo have partnered to help small-business insurers more accurately quantify slip-and-fall and other accident risks in buildings. BlueZoo will deploy sensors in buildings that will listen for Wi-Fi probes emitted by mobile phones, and transmit the resulting occupancy data to Google Cloud for analysis. BlueZoo's Bill Evans said the cloud servers review the data to produce occupancy metrics with 90% accuracy. Google Cloud's Henna Karna said, "BlueZoo measures risk continuously, making it possible to more accurately price risk or work with building owners to mitigate risk."
|
||||
106 | First 3D-Printed Steel Bridge Opens in Amsterdam | By Matthew Sparkes
A 3D-printed bridge has been installed in Amsterdam (Photo: Adriaan De Groot)
The first ever 3D-printed steel bridge has opened in Amsterdam, the Netherlands. It was created by robotic arms using welding torches to deposit the structure of the bridge layer by layer, and is made of 4500 kilograms of stainless steel.
The 12-metre-long MX3D Bridge was built by four commercially available industrial robots and took six months to print. The structure was transported to its location over the Oudezijds Achterburgwal canal in central Amsterdam last week and is now open to pedestrians and cyclists.
More than a dozen sensors attached to the bridge after the printing was completed will monitor strain, movement, vibration and temperature across the structure as people pass over it and the weather changes. This data will be fed into a digital model of the bridge.
Engineers will use this model to study the properties of the unique material and will employ machine learning to spot any trends in the data that could indicate maintenance or modification is necessary. They also hope it will help designers understand how 3D-printed steel might be used for larger and more complex building projects.
Mark Girolami at the University of Cambridge, who is working on the digital model with a team at the Alan Turing Institute in London, says that investigations into bridge failures often reveal deterioration that was missed. Constant data feedback may have been able to prevent these failures by providing an early warning, he says.
Girolami says that early indications for the strength of 3D-printed steel are positive. "One of the things that we found is that the strength characteristics are dependent on the orientation of the printing. But what was in some sense surprising was that the baseline strength was what you would expect of just rolled steel, and it actually increased in some directions." | The world's first three-dimensionally (3D) -printed stainless steel bridge has opened to pedestrians and cyclists in Amsterdam. Deposited in layers by robot arms with welding torches over six months, the 12-meter (39-foot) -long MX3D Bridge spans the Oudezijds Achterburgwal canal. Sensors attached to the structure will monitor strain, movement, vibration, and temperature, and will input that data into a digital model which engineers at the U.K.'s University of Cambridge will use to watch how the 3D-printed steel reacts to its use in this application. They also will employ machine learning to identify any signs that maintenance or modification is required. Said Cambridge's Mark Girolami, "What was in some sense surprising was that the baseline strength was what you would expect of just rolled steel, and it actually increased in some directions." | [] | [] | [] | scitechnews | None | None | None | None | The world's first three-dimensionally (3D) -printed stainless steel bridge has opened to pedestrians and cyclists in Amsterdam. Deposited in layers by robot arms with welding torches over six months, the 12-meter (39-foot) -long MX3D Bridge spans the Oudezijds Achterburgwal canal. Sensors attached to the structure will monitor strain, movement, vibration, and temperature, and will input that data into a digital model which engineers at the U.K.'s University of Cambridge will use to watch how the 3D-printed steel reacts to its use in this application. They also will employ machine learning to identify any signs that maintenance or modification is required. Said Cambridge's Mark Girolami, "What was in some sense surprising was that the baseline strength was what you would expect of just rolled steel, and it actually increased in some directions."
By Matthew Sparkes
A 3D-printed bridge has been installed in Amsterdam (Photo: Adriaan De Groot)
The first ever 3D-printed steel bridge has opened in Amsterdam, the Netherlands. It was created by robotic arms using welding torches to deposit the structure of the bridge layer by layer, and is made of 4500 kilograms of stainless steel.
The 12-metre-long MX3D Bridge was built by four commercially available industrial robots and took six months to print. The structure was transported to its location over the Oudezijds Achterburgwal canal in central Amsterdam last week and is now open to pedestrians and cyclists.
More than a dozen sensors attached to the bridge after the printing was completed will monitor strain, movement, vibration and temperature across the structure as people pass over it and the weather changes. This data will be fed into a digital model of the bridge.
Engineers will use this model to study the properties of the unique material and will employ machine learning to spot any trends in the data that could indicate maintenance or modification is necessary. They also hope it will help designers understand how 3D-printed steel might be used for larger and more complex building projects.
Mark Girolami at the University of Cambridge, who is working on the digital model with a team at the Alan Turing Institute in London, says that investigations into bridge failures often reveal deterioration that was missed. Constant data feedback may have been able to prevent these failures by providing an early warning, he says.
Girolami says that early indications for the strength of 3D-printed steel are positive. "One of the things that we found is that the strength characteristics are dependent on the orientation of the printing. But what was in some sense surprising was that the baseline strength was what you would expect of just rolled steel, and it actually increased in some directions." |
|||
107 | iOS Zero-Day Let SolarWinds Hackers Compromise Fully Updated iPhones | The Russian state hackers who orchestrated the SolarWinds supply chain attack last year exploited an iOS zero-day as part of a separate malicious email campaign aimed at stealing Web authentication credentials from Western European governments, according to Google and Microsoft.
Attacks targeting CVE-2021-1879, as the zero-day is tracked, redirected users to domains that installed malicious payloads on fully updated iPhones. The attacks coincided with a campaign by the same hackers who delivered malware to Windows users, the researchers said.
The federal government has attributed last year's supply chain attack to hackers working for Russia's Foreign Intelligence Service (abbreviated as SVR). For more than a decade , the SVR has conducted malware campaigns targeting governments, political think tanks, and other organizations in countries like Germany, Uzbekistan, South Korea, and the US. Targets have included the US State Department and the White House in 2014. Other names used to identify the group include APT29, the Dukes, and Cozy Bear.
In an email, Shane Huntley, the head of Google's Threat Analysis Group, confirmed the connection between the attacks involving USAID and the iOS zero-day, which resided in the WebKit browser engine.
"These are two different campaigns, but based on our visibility, we consider the actors behind the WebKit 0-day and the USAID campaign to be the same group of actors," Huntley wrote. "It is important to note that everyone draws actor boundaries differently. In this particular case, we are aligned with the US and UK governments' assessment of APT 29."
Throughout the campaign, Microsoft said, Nobelium experimented with multiple attack variations. In one wave, a Nobelium-controlled web server profiled devices that visited it to determine what OS and hardware the devices ran on. If the targeted device was an iPhone or iPad, a server used an exploit for CVE-2021-1879, which allowed hackers to deliver a universal cross-site scripting attack. Apple patched the zero-day in late March.
Stone and Lecigne, the Google researchers, detailed the campaign in a post published Wednesday.
The iOS attacks are part of a recent explosion in the use of zero-days. In the first half of this year, Google's Project Zero vulnerability research group has recorded 33 zero-day exploits used in attacks - 11 more than the total number from 2020. The growth has several causes, including better detection by defenders and better software defenses that require multiple exploits to break through.
The other big driver is the increased supply of zero-days from private companies selling exploits.
"0-day capabilities used to be only the tools of select nation-states who had the technical expertise to find 0-day vulnerabilities, develop them into exploits, and then strategically operationalize their use," the Google researchers wrote. "In the mid-to-late 2010s, more private companies have joined the marketplace selling these 0-day capabilities. No longer do groups need to have the technical expertise; now they just need resources."
The iOS vulnerability was one of four in-the-wild zero-days Google detailed on Wednesday.
The four exploits were used in three different campaigns. Based on their analysis, the researchers assess that three of the exploits were developed by the same commercial surveillance company, which sold them to two different government-backed actors. The researchers didn't identify the surveillance company, the governments, or the specific three zero-days they were referring to.
Representatives from Apple didn't immediately respond to a request for comment.
Google and Microsoft researchers found that the Russian state hackers behind last year's SolarWinds supply chain hack also exploited a then-unknown iOS zero-day vulnerability in a separate malicious email campaign. The goal was to steal Web authentication credentials from Western European governments via messages to government officials through LinkedIn. Google's Shane Huntley confirmed that the attacks involving the iOS zero-day were connected to an attack reported by Microsoft in May, in which the SolarWinds hackers, known as Nobelium, compromised an account belonging to the U.S. foreign aid and development assistance agency USAID. Google's Project Zero vulnerability research group found 33 zero-day exploits used in attacks during the first half of 2021, 11 more than the total for all of last year.
108 | Japan Shatters Internet Speed Record | We're in for an information revolution.
Engineers in Japan just shattered the world record for the fastest internet speed, achieving a data transmission rate of 319 Terabits per second (Tb/s), according to a paper presented at the International Conference on Optical Fiber Communications in June. The new record was made on a line of fibers more than 1,864 miles (3,000 km) long. And, crucially, it is compatible with modern-day cable infrastructure.
This could literally change everything.
Note well: we can't stress enough how fast this transmission speed is. It's nearly double the previous record of 178 Tb/s, which was set in 2020. And it's seven times the speed of the earlier record of 44.2 Tb/s, set with an experimental photonic chip. NASA itself uses a comparatively primitive speed of 400 Gb/s, and the new record towers above anything ordinary consumers can get, with the fastest home internet connections maxing out at around 10 Gb/s.
Remarkably, the record was accomplished with fiber optic infrastructure that already exists (but with a few advanced add-ons). The research team used four "cores," which are glass tubes housed within the fibers that transmit the data, instead of the conventional single core. The signals are then broken down into several wavelengths sent at the same time, employing a technique known as wavelength-division multiplexing (WDM). To carry more data, the researchers also used a rarely employed third "band," and extended the transmission distance via several optical amplification technologies.
The new system begins its transmission process with a 552-channel comb laser fired at various wavelengths. The light is then put through dual-polarization modulation to generate multiple signal sequences, each of which is in turn directed into one of the four cores within the optical fiber. Data transmitted via this system moves through 43.5 miles (70 km) of optical fiber until it hits optical amplifiers that boost the signal for its long journey. But there's even more complexity: the signal runs through two novel kinds of fiber amplifiers, one doped with thulium and the other with erbium, and is boosted further along the way by a conventional process called Raman amplification.
After this, signal sequences are sent into another segment of optical fiber, and then the entire process repeats, enabling the researchers to send data over a staggering distance of 1,864.7 miles (3,001 km). Crucially, the novel four-core optical fiber has the same diameter as a conventional single-core fiber, including the protective cladding around it. In other words, integrating the new method into existing infrastructure will be far simpler than other technological overhauls to societal information systems.
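As a back-of-envelope check on the figures quoted above, the short sketch below divides the reported aggregate rate across the cores and comb wavelengths to estimate the implied per-channel rate, and converts the headline number into a more tangible transfer time. It assumes every one of the 552 wavelengths is carried in each of the four cores; the actual per-channel symbol rates and modulation format are in the conference paper, not derived here.

```rust
// Back-of-envelope arithmetic on the reported figures only (assumption: each
// of the 552 comb wavelengths is carried in all 4 cores).
fn main() {
    let aggregate_tbps = 319.0_f64; // reported aggregate rate, Tb/s
    let cores = 4.0_f64;
    let wavelengths = 552.0_f64;

    let per_channel_gbps = aggregate_tbps * 1_000.0 / (cores * wavelengths);
    println!("implied per-wavelength, per-core rate: ~{per_channel_gbps:.0} Gb/s");

    // How long would a petabyte take at the aggregate rate?
    let petabyte_bits = 8.0_f64 * 1e15;
    let seconds = petabyte_bits / (aggregate_tbps * 1e12);
    println!("1 PB transferred in ~{seconds:.0} s at 319 Tb/s");
}
```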
This is what makes the new data transfer speed record really shine. Not only have the researchers in Japan blown the 2020 record out of the proverbial water, but they've done so with a novel engineering method capable of integrating into modern-day fiber optic infrastructure with minimal effort. We're nearing an age where the internet of the twenty-teens and early 2020s will look barbaric by comparison, in terms of signal speed and data transfer. It's an exciting time to be alive.
Japanese engineers have broken the world Internet speed record with a 319 terabit-per-second (Tb/s) data transmission rate across more than 3,001 kilometers (1,864.7 miles) of existing fiber-optic infrastructure. The achievement nearly doubles the previous record of 178 Tb/s. The researchers used four cores, or glass tubes housed within data-transmission fibers, to send signals segmented into several wavelengths simultaneously via wavelength-division multiplexing, while a seldom-used third band extended the distance of the transmissions through optical amplification. The four-core optical fiber is the same diameter as conventional single-core fiber, so integrating the new method into existing infrastructure should be far simpler than other technological overhauls.
109 | NIST Evaluates Face Recognition Software's Accuracy for Flight Boarding | The most accurate face recognition algorithms have demonstrated the capability to confirm airline passenger identities while making very few errors, according to recent tests of the software conducted at the National Institute of Standards and Technology (NIST).
The findings, released today as Face Recognition Vendor Test (FRVT) Part 7: Identification for Paperless Travel and Immigration (NISTIR 8381), focus on face recognition (FR) algorithms' performance under a particular set of simulated circumstances: matching images of travelers to previously obtained photos of those travelers stored in a database. This use of FR is currently part of the boarding process for international flights, both to confirm a passenger's identity for the airline's flight roster and also to record the passenger's official immigration exit from the United States.
The results indicate that several of the FR algorithms NIST tested could perform the task using a single scan of a passenger's face with 99.5% accuracy or better - especially if the database contains several images of the passenger.
"We ran simulations to characterize a system that is doing two jobs: identifying passengers at the gate and recording their exit for immigration," said Patrick Grother, a NIST computer scientist and one of the report's authors. "We found that accuracy varies across algorithms, but that modern algorithms generally perform better. If airlines use the more accurate ones, passengers can board many flights with no errors."
Previous FRVT studies have focused on evaluating how algorithms perform one of two different tasks that are among FR's most common applications. The first task, confirming that a photo matches a different one of the same person, is known as "one-to-one" matching and is commonly used for verification work, such as unlocking a smartphone. The second, determining whether the person in the photo has a match in a large database, is known as "one-to-many" matching.
This latest test concerns a specific application of one-to-many matching in airport transit settings, where travelers' faces are matched against a database of individuals who are all expected to be present. In this scenario, only a few hundred passengers board a given flight. However, NIST also looked at whether the technology could be viable elsewhere in the airport, specifically in the security line where perhaps 100 times more people might be expected during a certain time window. (The database was built from images used in previous FRVT studies, but the subjects were not wearing face masks.)
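To see why per-passenger accuracy matters at flight scale, the short sketch below treats each boarding scan as an independent match that is correct with probability p and computes the chance an entire flight boards with no recognition errors. The passenger count and the independence assumption are illustrative only, not figures from the NIST report.

```rust
// Illustrative only: if each of n passengers is matched correctly with
// independent probability p, the whole flight boards error-free with
// probability p^n. Numbers below are assumptions, not NIST results.
fn main() {
    let passengers = 200_i32; // a large single-aisle flight (assumed)
    for p in [0.995_f64, 0.999, 0.9999] {
        let error_free_flight = p.powi(passengers);
        println!(
            "per-passenger accuracy {p}: probability of an error-free flight = {error_free_flight:.3}"
        );
    }
}
```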
As with previous studies, the team used software that developers voluntarily submitted to NIST for evaluation. This time, the team only looked at software that was designed to perform the one-to-many matching task, evaluating a total of 29 algorithms.
Grother said that the study does not address an important factor: the sort of camera that an FR system uses. Because airport environments differ, and because the cameras themselves operate in different ways, the report offers some guidance for tests that an airline or immigration authority could run to complement the NIST test results. Such tests would provide accuracy estimates that reflect the actual equipment and environment where it is used.
"We do not focus on cameras, which are an influential variable," he said. "We recommend that officials conduct the other tests we outline so as to refine their operations." | Tests of face recognition (FR) software at the U.S. National Institute of Standards and Technology (NIST) indicate the most accurate algorithms are able to confirm airline passenger identities at boarding while committing few errors. The NIST researchers assessed the algorithms' performance in matching images of travelers to previously acquired photos stored in a database. They found the seven top-performing FR algorithms could find a match based on a single scan of a passenger's face with 99.5% or higher accuracy, particularly if several passenger images were in the database. Explained NIST's Patrick Grother, "We found that accuracy varies across algorithms, but that modern algorithms generally perform better. If airlines use the more accurate ones, passengers can board many flights with no errors." | [] | [] | [] | scitechnews | None | None | None | None | Tests of face recognition (FR) software at the U.S. National Institute of Standards and Technology (NIST) indicate the most accurate algorithms are able to confirm airline passenger identities at boarding while committing few errors. The NIST researchers assessed the algorithms' performance in matching images of travelers to previously acquired photos stored in a database. They found the seven top-performing FR algorithms could find a match based on a single scan of a passenger's face with 99.5% or higher accuracy, particularly if several passenger images were in the database. Explained NIST's Patrick Grother, "We found that accuracy varies across algorithms, but that modern algorithms generally perform better. If airlines use the more accurate ones, passengers can board many flights with no errors."
110 | Seattle Leads Nation in 'Brain Gain,' Adds Tech Jobs Faster Than Any Other Big U.S. Market Over 5 Years | The Seattle region added more than 48,000 tech jobs from 2016 to 2020, an increase of more than 35% - growing at a faster rate than any other large U.S. tech market, according to a new analysis by the CBRE real estate firm.
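As a quick sanity check on those two headline numbers, the sketch below derives the approximate 2016 baseline and 2020 total implied by a 48,000-job gain that represents roughly a 35% increase. This is arithmetic on the reported figures only, not data taken from the CBRE report.

```rust
// Arithmetic implied by the headline figures only (not CBRE data): if 48,000
// added jobs correspond to roughly a 35% increase, what baseline and total
// does that imply?
fn main() {
    let jobs_added = 48_000.0_f64;
    let growth_rate = 0.35; // "more than 35%" is treated as exactly 35% here
    let baseline_2016 = jobs_added / growth_rate;
    let total_2020 = baseline_2016 + jobs_added;
    println!("implied 2016 tech workforce: ~{baseline_2016:.0}");
    println!("implied 2020 tech workforce: ~{total_2020:.0}");
}
```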
The report confirms the meteoric growth of the region's tech industry in the latter half of the past decade. The trend has been driven by the expansion of Silicon Valley engineering outposts in the Seattle area, the extraordinary growth of Amazon, the revival of Microsoft, and the emergence of heavily funded, homegrown startups, particularly in cloud computing and enterprise technology.
Only Toronto and Vancouver, B.C., grew at a faster rate, as the tech markets in Canadian cities benefitted from restrictive U.S. immigration policies.
Although the report refers to the market as Seattle, CBRE confirmed that the stats encompass the greater Seattle-Bellevue-Tacoma metropolitan area.
Tech leaders and economic development officials across the continent are closely watching these trends as the industry and the world emerge from the COVID-19 pandemic, and a new era of remote and hybrid work allows some tech workers to live further from company offices.
Overall, Seattle moved past Washington, D.C., to claim the No. 2 spot behind the San Francisco Bay Area in CBRE's overall scorecard assessing each region's "depth, vitality and attractiveness to companies seeking tech talent and to tech workers seeking employment."
The report also studied new tech-related college degrees in the context of overall tech jobs added in each region to determine whether a market was able to retain the talent produced by its universities. Seattle scored a high "brain gain" by this measure, second only to Toronto.
Top tech talent isn't cheap. The report puts Seattle second, behind the San Francisco Bay Area, in average wages; and third, behind the Bay Area and New York, in total cost of running a technology business.
The report also looked at the overall diversity of each tech market as part of its assessment. Seattle was in the middle of the pack, including among neither the most diverse nor the least diverse markets, as measured by employment of women and underrepresented racial and ethnic groups in the tech industry.
A report by real estate firm CBRE reveals that more than 48,000 technology jobs were added in the Seattle region from 2016 to 2020. Only the Canadian tech markets in Toronto and Vancouver posted faster growth rates in North America, the study found. Seattle ranked second, behind the San Francisco Bay Area and ahead of Washington, D.C., in the overall scorecard, which assessed each region's "depth, vitality and attractiveness to companies seeking tech talent and to tech workers seeking employment." Seattle came in second, behind Toronto, with regard to the number of new tech-related college degrees in comparison to overall tech jobs added in each region. In average wages, the San Francisco Bay Area ranked first and Seattle second, and for total costs of running a tech business, the Bay Area ranked first, New York second, and Seattle third. Seattle was in the middle of the pack for overall diversity.
113 | University of Illinois at Urbana-Champaign Graduate Receives ACM Doctoral Dissertation Award | Chuchu Fan is the recipient of the 2020 ACM Doctoral Dissertation Award for her dissertation, "Formal Methods for Safe Autonomy: Data-Driven Verification, Synthesis, and Applications." The dissertation makes foundational contributions to verification of embedded and cyber-physical systems, and demonstrates applicability of the developed verification technologies in industrial-scale systems.
Fan's dissertation also advances the theory for sensitivity analysis and symbolic reachability; develops verification algorithms and software tools (DryVR, RealSyn); and demonstrates applications in industrial-scale autonomous systems.
Key contributions of her dissertation include the first data-driven algorithms for bounded verification of nonlinear hybrid systems using sensitivity analysis. A groundbreaking demonstration of this work on an industrial-scale problem showed that verification can scale. Her sensitivity analysis technique was patented, and a startup based at the University of Illinois at Urbana-Champaign has been formed to commercialize this approach.
Fan also developed the first verification algorithm for "black box" systems with incomplete models combining probably approximately correct (PAC) learning with simulation relations and fixed point analyses. DryVR, a tool that resulted from this work, has been applied to dozens of systems, including advanced driver assist systems, neural network-based controllers, distributed robotics, and medical devices.
Additionally, Fan's algorithms for synthesizing controllers for nonlinear vehicle model systems have been demonstrated to be broadly applicable. The RealSyn approach presented in the dissertation outperforms existing tools and is paving the way for new real-time motion planning algorithms for autonomous vehicles.
Fan is the Wilson Assistant Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology, where she leads the Reliable Autonomous Systems Lab. Her group uses rigorous mathematics including formal methods, machine learning, and control theory for the design, analysis, and verification of safe autonomous systems. Fan received a BA in Automation from Tsinghua University. She earned her PhD in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign.
Honorable Mentions for the 2020 ACM Doctoral Dissertation Award go to Henry Corrigan-Gibbs and Ralf Jung.
Corrigan-Gibbs's dissertation, "Protecting Privacy by Splitting Trust," improved user privacy on the internet using techniques that combine theory and practice. Corrigan-Gibbs first develops a new type of probabilistically checkable proof (PCP), and then applies this technique to develop the Prio system, an elegant and scalable system that addresses a real industry need. Prio is being deployed at several large companies, including Mozilla, where it has been shipping in the nightly version of the Firefox browser since late 2019, the largest-ever deployment of PCPs.
Corrigan-Gibbs's dissertation studies how to robustly compute aggregate statistics about a user population without learning anything else about the users. For example, his dissertation introduces a tool enabling Mozilla to measure how many Firefox users encountered a particular web tracker without learning which users encountered that tracker or why. The thesis develops a new system of probabilistically checkable proofs that lets every browser send a short zero-knowledge proof that its encrypted contribution to the aggregate statistics is well formed. The key innovation is that verifying the proof is extremely fast.
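The core "split trust" trick can be shown with a toy additive secret-sharing example: each browser splits its private 0/1 answer into two shares that individually look random but sum to the answer modulo a prime, and each server only ever aggregates its own shares. The sketch below uses a stand-in pseudo-random generator and omits the well-formedness proof that Prio layers on top; it illustrates the idea, not the Prio protocol itself.

```rust
// Toy additive secret sharing in the spirit of Prio's "split trust" design.
// Each client splits a private 0/1 value into two shares that sum to the
// value modulo P; server A sees only the first shares, server B only the
// second. The zero-knowledge well-formedness proof is omitted.
const P: u64 = 2_147_483_647; // share modulus (an arbitrary prime, assumption)

// Stand-in pseudo-random share derivation (a real client would use a CSPRNG).
fn random_share(seed: u64) -> u64 {
    seed.wrapping_mul(6_364_136_223_846_793_005)
        .wrapping_add(1_442_695_040_888_963_407)
        % P
}

fn split(value: u64, seed: u64) -> (u64, u64) {
    let share_a = random_share(seed);
    let share_b = (value + P - share_a) % P; // share_a + share_b == value (mod P)
    (share_a, share_b)
}

fn main() {
    // Three users each answer privately: "did I encounter the tracker?" (1 = yes)
    let private_answers = [1_u64, 0, 1];
    let (mut sum_server_a, mut sum_server_b) = (0_u64, 0_u64);

    for (i, &answer) in private_answers.iter().enumerate() {
        let (a, b) = split(answer, 42 + i as u64);
        sum_server_a = (sum_server_a + a) % P; // server A never sees `answer`
        sum_server_b = (sum_server_b + b) % P; // neither does server B
    }

    // Only the combination of both servers' totals reveals the aggregate count.
    let aggregate = (sum_server_a + sum_server_b) % P;
    println!("aggregate count of users who saw the tracker: {aggregate}"); // prints 2
}
```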
Corrigan-Gibbs is an Assistant Professor in the Electrical Engineering and Computer Science Department at the Massachusetts Institute of Technology, where he is also a member of the Computer Science and Artificial Intelligence Lab. His research focuses on computer security, cryptography, and computer systems. Corrigan-Gibbs received his PhD in Computer Science from Stanford University.
Ralf Jung's dissertation, "Understanding and Evolving the Rust Programming Language," established the first formal foundations for safe systems programming in the innovative programming language Rust. In development at Mozilla since 2010, and increasingly popular throughout the industry, Rust addresses a longstanding problem in language design: how to balance safety and control. Like C++, Rust gives programmers low-level control over system resources. Unlike C++, Rust also employs a strong "ownership-based" system to statically ensure safety, so that security vulnerabilities like memory access errors and data races cannot occur. Prior to Jung's work, however, there had been no rigorous investigation of whether Rust's safety claims actually hold, and due to the extensive use of "unsafe escape hatches" in Rust libraries, these claims were difficult to assess.
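A few lines of ordinary Rust (not code from the dissertation) illustrate the ownership discipline being formalized: once a value is moved or mutably borrowed, the compiler statically rejects further use of the old handle, which is how use-after-free bugs and data races are ruled out without runtime checks.

```rust
// Ordinary Rust showing the ownership and borrowing rules the thesis
// formalizes; the commented-out lines are exactly the ones the compiler
// rejects at build time.
fn main() {
    let data = vec![1, 2, 3];
    let owner = data; // ownership of the heap buffer moves to `owner`
    // println!("{:?}", data); // compile error if uncommented: use of moved value `data`

    let mut numbers = owner;
    {
        let exclusive = &mut numbers; // a mutable borrow is exclusive
        // println!("{:?}", numbers); // compile error if uncommented: `numbers` is still
        //                            // mutably borrowed by `exclusive` below
        exclusive.push(4);
    } // the exclusive borrow ends here

    println!("{:?}", numbers); // safe again: prints [1, 2, 3, 4]
}
```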
In his dissertation, Jung tackles this challenge by developing semantic foundations for Rust that account directly for the interplay between safe and unsafe code. Building upon these foundations, Jung provides a proof of safety for a significant subset of Rust. Moreover, the proof is formalized within the automated proof assistant Coq and therefore its correctness is guaranteed. In addition, Jung provides a platform for formally verifying powerful type-based optimizations, even in the presence of unsafe code.
Through Jung's leadership and active engagement with the Rust Unsafe Code Guidelines working group, his work has already had profound impact on the design of Rust and laid essential foundations for its future.
Jung is a post-doctoral researcher at the Max Planck Institute for Software Systems and a research affiliate of the Parallel and Distributed Operating Systems Group at the Massachusetts Institute of Technology. His research interests include programming languages, verification, semantics, and type systems. He conducted his doctoral research at the Max Planck Institute for Software Systems, and received his PhD, Master's, and Bachelor's degrees in Computer Science from Saarland University.
ACM has named University of Illinois at Urbana-Champaign graduate Chuchu Fan the recipient of the 2020 ACM Doctoral Dissertation Award for contributing to the verification of embedded and cyber-physical systems. Fan's work showcases industrial-scale application of developed verification technologies; furthers the theory for sensitivity analysis and symbolic reachability; presents verification algorithms and software tools; and highlights the utility of industrial-scale autonomous systems. Fan's contributions in the dissertation include the first data-driven algorithms for verifying nonlinear hybrid systems through sensitivity analysis, which demonstrated scalability. Her nonlinear vehicle model system controller synthesis algorithms also demonstrated broad applications, with the RealSyn approach detailed in the dissertation outperforming current tools and facilitating real-time motion planning software for autonomous vehicles.
115 | After Backlash, Predictive Policing Adapts to a Changed World | Pushback against law enforcement's use of predictive software has forced reconsideration, with officials in Santa Cruz, CA, recently warning the technology contributes to racial profiling. Predictive-policing companies are beginning to deprioritize "forecasting" crime and concentrate more on tracking police, in order to ensure greater oversight and identify behaviors corresponding with reduced crime. The University of Texas at Austin's Sarah Brayne cited inconsistent evidence that predictive software can outperform human analysts in reducing crime, given a lack of data needed to make independent assessments. She and some software company executives expect police increasingly will employ global positioning system-based heat maps to track officers' movements and enhance accountability, as well as to predict crime hot spots.
117 | ECB Starts Work on Digital Version of the Euro | LONDON - The European Central Bank announced Wednesday that it's starting work toward creating a digital euro currency as more consumers ditch cash.
The project is expected to take two years and the idea is to design a digital version of the common currency, used in the 19 members of the euro zone. However, the actual implementation of the central bank-backed currency could take another two years on top of the design and investigation stage.
"It has been nine months since we published our report on a digital euro. In that time, we have carried out further analysis, sought input from citizens and professionals, and conducted some experiments, with encouraging results. All of this has led us to decide to move up a gear and start the digital euro project," ECB President Christine Lagarde said in a statement.
"Our work aims to ensure that in the digital age citizens and firms continue to have access to the safest form of money, central bank money," she added.
Lagarde had forecast in March a timeline of at least four years for full implementation. In an interview with Bloomberg News, Lagarde said that this was a technical endeavor and that "we need to make sure we do it right."
The European Central Bank (ECB) has launched an initiative to produce a digital euro currency. The ECB expects the design and investigation stage to take two years, while the currency's actual implementation could add two more years to the project. ECB's Fabio Panetta said, "Private solutions for digital and online payments bring important benefits such as convenience, speed, and efficiency. But they also pose risks in terms of privacy, safety, and accessibility. And they can be expensive for some users." The digital euro would let consumers make payments electronically, but also would "complement" the existing monetary system rather than supplanting physical cash and eliminating the commercial lending business.
118 | Chinese Phone Games Now Require Facial Scans to Play at Night | Tencent, the world's largest Chinese video game publisher, has taken an extreme step to comply with its nation's rules about limiting minors' access to video games. As of this week, the publisher has added a facial recognition system, dubbed "Midnight Patrol," to over 60 of its China-specific smartphone games , and it will disable gameplay in popular titles like Honor of Kings if users either decline the facial check or fail it.
In all affected games, once a gameplay session during the nation's official gaming curfew hours (10 pm to 8 am) exceeds an unspecified amount of time, the game in question will be interrupted by a prompt to scan the player's face. Should an adult fail the test for any reason, Tencent makes its "too bad, so sad" attitude clear in its announcement: users can try to play again the next day.
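As a rough sketch of that rule, the gating decision might look like the following. This is hypothetical logic, not Tencent's implementation, and the session threshold is an assumption, since Tencent has not specified one.

```rust
// Hypothetical sketch of the Midnight Patrol gating rule; not Tencent's code.
// During curfew hours, a session past some threshold keeps playing only if a
// face check was taken and passed.
fn may_keep_playing(hour: u32, session_minutes: u32, face_check: Option<bool>) -> bool {
    let in_curfew = hour >= 22 || hour < 8; // 10 pm to 8 am
    let threshold_minutes = 30; // unspecified by Tencent; assumed here
    if in_curfew && session_minutes > threshold_minutes {
        // A declined scan (None) or a failed scan (Some(false)) ends the session.
        return face_check == Some(true);
    }
    true // outside curfew, or under the threshold: no scan required
}

fn main() {
    println!("{}", may_keep_playing(23, 45, None));       // false: scan declined
    println!("{}", may_keep_playing(23, 45, Some(true))); // true: scan passed
    println!("{}", may_keep_playing(14, 120, None));      // true: daytime session
}
```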
Additionally, parents can now turn on a facial recognition system that checks specifically for approved parents' faces before allowing gameplay to unlock - though it's unclear why a parent would elect to do this instead of turning on something like a password or PIN system.
This system follows increased Chinese government scrutiny on childhood gaming addiction, and this includes rules with which game publishers must comply lest they face penalties as extreme as having their business licenses revoked. In addition to the aforementioned gaming curfew for minors, Chinese games must also include real-name registration systems and in-game spending caps for minors.
But this facial scanning requirement implies that most smartphone platforms' built-in parental controls, along with commonsense parental phone management, aren't stopping minors from accessing Internet-connected devices with "adult" credentials at night. In the case of Midnight Patrol's more generic "check your face for your age" system - as opposed to parent-specific scans - it's arguably a question of when, not if, savvy Chinese teen gamers will defeat the system with something like specially prepared photos.
Tencent did not provide a list of the "over 60" games affected by this week's update. The publisher has already pledged to add Midnight Patrol to more of its games over time, which will likely expand to Tencent-published smartphone games familiar to the West like PUBG Mobile and League of Legends.
If you're wondering how the staff at Ars Technica feels about draconian restrictions on childhood gaming hours, rewind to our 2019 staffsource feature on somehow graduating high school and college in spite of our own video and tabletop gaming addictions.
Chinese video game publisher Tencent has added a facial recognition system to more than 60 of its China-specific smartphone games, in order to comply with government rules to limit access to video games by minors. Users will receive a prompt to scan their face once a gameplay session exceeds an unspecified amount of time during the nation's official gaming curfew hours of 10 p.m. to 8 a.m. If adult users decline or fail the facial check, the "Midnight Patrol" system will disable gameplay. Game publishers that do not comply with the rules, which also require real-name registration systems and in-game spending caps for minors, could have their business licenses revoked.
120 | Meet the Open Source Software Powering NASA's Ingenuity Mars Helicopter | The open source F Prime software that drives the U.S. National Aeronautics and Space Administration (NASA)'s Ingenuity Mars Helicopter is also finding use at universities as a flight software option for university and student projects. JPL's Aadil Rizvi said F Prime is an out-of-the-box solution for several flight software services, including commanding, telemetry, parameters, and sequencing for spacecraft. A team at the Georgia Institute of Technology is using F Prime in its GT1 CubeSat, while a Carnegie Mellon University team chose to use the software to run its Iris Lunar Rover robot. JPL's Jeff Levison said university partnerships such as these benefit all parties, as his organization supplies flight systems expertise to young engineers, who could eventually bring their talents and experience with F Prime to a career at JPL.
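To make the "commanding, telemetry, parameters, and sequencing" framing concrete: F Prime itself is a C++ framework built around components connected by typed ports, and the sketch below is only a schematic Python illustration of that component pattern. None of the class or method names come from the real F Prime API; they are stand-ins meant to show the kind of services a flight software framework handles for a student project out of the box.

# Schematic illustration only: F Prime is a C++ framework, and none of the
# names below come from its real API. This sketches the component pattern
# described above -- a command handler, a telemetry channel, and a parameter.

class LedBlinkerComponent:
    """A toy flight-software component, loosely in the spirit of the kind of
    component a framework like F Prime would generate scaffolding for."""

    def __init__(self, blink_interval_s: float = 1.0):
        self.blink_interval_s = blink_interval_s  # a configurable parameter
        self.blink_count = 0                      # a telemetry channel
        self.enabled = False

    def cmd_set_enabled(self, enabled: bool) -> str:
        """Command handler: ground sends SET_ENABLED, component acknowledges."""
        self.enabled = enabled
        return "OK"

    def run_cycle(self) -> None:
        """Called by a scheduler each tick; real flight software would toggle an LED here."""
        if self.enabled:
            self.blink_count += 1

    def get_telemetry(self) -> dict:
        """Telemetry values downlinked to the ground system."""
        return {"blinkCount": self.blink_count, "enabled": self.enabled}

comp = LedBlinkerComponent(blink_interval_s=0.5)
comp.cmd_set_enabled(True)
for _ in range(3):
    comp.run_cycle()
print(comp.get_telemetry())  # {'blinkCount': 3, 'enabled': True}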
122 | Stumble-Proof Robot Adapts to Challenging Terrain in Real Time | Robots have a hard time improvising, and encountering an unusual surface or obstacle usually means an abrupt stop or hard fall. But researchers have created a new model for robotic locomotion that adapts in real time to any terrain it encounters, changing its gait on the fly to keep trucking when it hits sand, rocks, stairs and other sudden changes.
Although robotic movement can be versatile and exact, and robots can "learn" to climb steps, cross broken terrain and so on, these behaviors are more like individual trained skills that the robot switches between. And while robots like Spot famously can spring back from being pushed or kicked, that system is really just working to correct a physical anomaly while pursuing an unchanged walking policy. There are some adaptive movement models , but some are very specific (for instance this one based on real insect movements) and others take long enough to work that the robot will certainly have fallen by the time they take effect.
The team, from Facebook AI, UC Berkeley and Carnegie Mellon University, calls its new model Rapid Motor Adaptation (RMA). The idea came from the observation that humans and other animals are able to quickly, effectively and unconsciously change the way they walk to fit different circumstances.
"Say you learn to walk and for the first time you go to the beach. Your foot sinks in, and to pull it out you have to apply more force. It feels weird, but in a few steps you'll be walking naturally just as you do on hard ground. What's the secret there?" asked senior researcher Jitendra Malik, who is affiliated with Facebook AI and UC Berkeley.
Whether you're encountering a beach for the first time or the hundredth, you aren't entering some special "sand mode" that lets you walk on soft surfaces. The way you change your movement happens automatically and without any real understanding of the external environment.
Visualization of the simulation environment. Of course the robot would not perceive any of this visually. Image Credits: Berkeley AI Research, Facebook AI Research and CMU
"What's happening is your body responds to the differing physical conditions by sensing the differing consequences of those conditions on the body itself," Malik explained - and the RMA system works in similar fashion. "When we walk in new conditions, in a very short time, half a second or less, we have made enough measurements that we are estimating what these conditions are, and we modify the walking policy."
The system was trained entirely in simulation, in a virtual version of the real world where the robot's small brain (everything runs locally on the on-board limited compute unit) learned to maximize forward motion with minimum energy and avoid falling by immediately observing and responding to data coming in from its (virtual) joints, accelerometers and other physical sensors.
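The training objective described here (maximize forward motion with minimum energy while avoiding falls) lends itself to a simple reward-shaping sketch. The terms, weights, and function below are assumptions for illustration, not the reward actually used in the RMA paper.

import numpy as np

def step_reward(forward_velocity: float,
                joint_torques: np.ndarray,
                joint_velocities: np.ndarray,
                has_fallen: bool,
                energy_weight: float = 0.005,
                fall_penalty: float = 10.0) -> float:
    """Reward shaping of the kind described above: reward forward motion,
    penalize mechanical energy (|torque * joint velocity|), and penalize falls.
    The exact terms and weights are assumptions, not the paper's values."""
    energy = float(np.sum(np.abs(joint_torques * joint_velocities)))
    reward = forward_velocity - energy_weight * energy
    if has_fallen:
        reward -= fall_penalty
    return reward

# Example: one timestep moving forward at 0.5 m/s on a 12-joint robot, no fall.
torques = np.random.uniform(-5, 5, size=12)
joint_vels = np.random.uniform(-2, 2, size=12)
print(step_reward(0.5, torques, joint_vels, has_fallen=False))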
To underscore how fully internal the RMA approach is, Malik notes that the robot uses no visual input whatsoever. People and animals without vision can walk just fine, so why shouldn't a robot? And since it's impossible to estimate the "externalities" such as the exact friction coefficient of the sand or rocks it's walking on, the robot simply keeps a close eye on itself.
"We do not learn about sand, we learn about feet sinking," said co-author Ashish Kumar, also from Berkeley.
Ultimately the system ends up having two parts: a main, always-running algorithm actually controlling the robot's gait, and an adaptive algorithm running in parallel that monitors changes to the robot's internal readings. When significant changes are detected, it analyzes them - the legs should be doing this , but they're doing this , which means the situation is like this - and tells the main model how to adjust itself. From then on the robot only thinks in terms of how to move forward under these new conditions, effectively improvising a specialized gait.
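A schematic of that two-part structure might look like the following: an adaptation module distills the last half-second or so of the robot's own readings into a latent "extrinsics" estimate, and the always-running base policy conditions its gait on that estimate. The array shapes, the random linear stand-ins for the learned networks, and the variable names are all illustrative assumptions, not the architecture from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins for the two learned modules; in the real system
# both are neural networks trained entirely in simulation.
W_adapt = rng.standard_normal((8, 50 * 30))   # recent sensor history -> extrinsics
W_policy = rng.standard_normal((12, 30 + 8))  # current state + extrinsics -> joint targets

def adaptation_module(state_history: np.ndarray) -> np.ndarray:
    """Estimate a latent 'extrinsics' vector (e.g. how much the feet sink)
    purely from the robot's own recent readings -- no vision, no terrain labels."""
    return np.tanh(W_adapt @ state_history.reshape(-1))

def base_policy(state: np.ndarray, extrinsics: np.ndarray) -> np.ndarray:
    """Main gait controller, conditioned on the current extrinsics estimate."""
    return np.tanh(W_policy @ np.concatenate([state, extrinsics]))

# Control loop: the adaptation module watches the last ~0.5 s of internal
# readings (here, 50 timesteps of a 30-dim proprioceptive state) while the
# base policy keeps producing joint targets every step.
history = np.zeros((50, 30))
for t in range(100):
    state = rng.standard_normal(30)            # joints, IMU, etc. (simulated here)
    history = np.roll(history, -1, axis=0)
    history[-1] = state
    extrinsics = adaptation_module(history)    # updated on the fly
    joint_targets = base_policy(state, extrinsics)
print(joint_targets.shape)  # (12,) -- one target per actuated joint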
After training in simulation, the system succeeded handsomely in the real world, adapting on the fly to surfaces and obstacles it had never encountered, according to the news release; the team's videos show examples of many of these situations.
Malik gave a nod to the research of NYU professor Karen Adolph , whose work has shown how adaptable and free-form the human process of learning how to walk is. The team's instinct was that if you want a robot that can handle any situation, it has to learn adaptation from scratch, not have a variety of modes to choose from.
Just as you can't build a smarter computer-vision system by exhaustively labeling and documenting every object and interaction (there will always be more), you can't prepare a robot for a diverse and complex physical world with 10, 100, even thousands of special parameters for walking on gravel, mud, rubble, wet wood, etc. For that matter you may not even want to specify anything at all beyond the general idea of forward motion.
"We don't pre-program the idea that it has for legs, or anything about the morphology of the robot," said Kumar.
This means the basis of the system - not the fully trained one, which ultimately did mold itself to quadrupedal gaits - can potentially be applied not just to other legged robots, but to entirely different domains of AI and robotics.
"The legs of a robot are similar to the fingers of a hand; the way that legs interact with environments, fingers interact with objects," noted co-author Deepak Pathak, of Carnegie Mellon University. "The basic idea can be applied to any robot."
Even further, Malik suggested, the pairing of basic and adaptive algorithms could work for other intelligent systems. Smart homes and municipal systems tend to rely on preexisting policies, but what if they adapted on the fly instead?
For now the team is simply presenting its initial findings in a paper at the Robotics: Science and Systems conference and acknowledges that there is a great deal of follow-up research to do: for instance, building an internal library of the improvised gaits as a sort of "medium-term" memory, or using vision to predict the necessity of initiating a new style of locomotion. But RMA seems to be a promising new approach to an enduring challenge in robotics. | A new robotic locomotion model capable of real-time terrain adaptation has been developed by a multi-institutional research team. Engineers at Facebook AI, the University of California, Berkeley (UC Berkeley), and Carnegie Mellon University based Rapid Motor Adaptation (RMA) on the ability of humans and other animals to quickly and unconsciously adjust their locomotion to different conditions. The team trained the system in a virtual model of the real world, where the robot's brain learned to maximize forward motion with the least amount of energy, and to avoid falls by responding to incoming data from physical sensors. UC Berkeley's Jitendra Malik said the robot employs absolutely no visual input, instead closely monitoring itself. The RMA system uses a constantly running main gait-control algorithm and a parallel adaptive algorithm that watches internal readings and provides the main model adjustment data in response to terrain changes.
125 | 3D Printable Phase-Changing Composites Can Regulate Temperatures Inside Buildings | Changing climate patterns have left millions of people vulnerable to weather extremes. As temperature fluctuations become more commonplace around the world, conventional power-guzzling cooling and heating systems need a more innovative, energy-efficient alternative that, in turn, lessens the burden on already struggling power grids.
In a new study, researchers at Texas A&M University have created novel 3D printable phase-change material (PCM) composites that can regulate ambient temperatures inside buildings using a simpler and cost-effective manufacturing process. Furthermore, these composites can be added to building materials, like paint, or 3D printed as decorative home accents to seamlessly integrate into different indoor environments.
"The ability to integrate phase-change materials into building materials using a scalable method opens opportunities to produce more passive temperature regulation in both new builds and already existing structures," said Emily Pentzer, associate professor in the Department of Materials Science and Engineering and the Department of Chemistry.
This study was published in the June issue of the journal Matter .
Heating, ventilation and air conditioning (HVAC) systems are the most commonly used methods to regulate temperatures in residential and commercial establishments. However, these systems guzzle a lot of energy. Furthermore, they use greenhouse gases, called refrigerants, for generating cool, dry air. These ongoing issues with HVAC systems have triggered research into alternative materials and technologies that require less energy to function and can regulate temperature comparably to HVAC systems.
One class of materials that has gained a lot of interest for temperature regulation is phase-change materials. As the name suggests, these compounds change their physical state depending on the temperature of their environment: they melt from solid to liquid as they absorb heat and solidify again as they release it. Thus, unlike HVAC systems that rely solely on external power to heat and cool, these materials are passive components, requiring no external electricity to regulate temperature.
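A rough calculation shows why that phase change matters for thermal storage. The latent-heat and specific-heat figures below are typical textbook values for paraffin wax, and the panel mass is invented; none of the numbers come from the study.

# Back-of-the-envelope estimate of why a melting PCM stores so much heat.
# The paraffin properties (~200 kJ/kg latent heat, ~2 kJ/kg*K specific heat)
# and the panel mass are assumed values, not figures from the Texas A&M study.
latent_heat_kj_per_kg = 200.0      # typical paraffin wax, roughly
specific_heat_kj_per_kg_k = 2.0    # paraffin away from its melting point
panel_mass_kg = 10.0

heat_absorbed_melting_kj = panel_mass_kg * latent_heat_kj_per_kg
# The same heat, without a phase change, would raise the panel temperature by:
equivalent_temp_rise_k = heat_absorbed_melting_kj / (panel_mass_kg * specific_heat_kj_per_kg_k)

print(f"{heat_absorbed_melting_kj:.0f} kJ absorbed at a near-constant temperature")
print(f"equivalent to a {equivalent_temp_rise_k:.0f} K rise without the phase change")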
The traditional approach to manufacturing PCM building materials requires forming a separate shell around each PCM particle, like a cup to hold water, then adding these newly encased PCMs to building materials. However, finding building materials compatible with both the PCM and its shell has been a challenge. In addition, this conventional method also decreases the number of PCM particles that can be incorporated into building materials.
"Imagine filling a pot with eggs and water," said Ciera Cipriani, NASA Space Technology Graduate Research Fellow in the Department of Materials Science and Engineering. "If each egg has to be placed in an individual container to be hard-boiled, fewer eggs will fit in the pot. By removing the plastic containers, the veritable shell in our research, more eggs, or PCMs, can occupy a greater volume by packing closer together within the water/resin."
To overcome these challenges, past studies have shown that when using phase-changing paraffin wax mixed with liquid resin, the resin acts as both the shell and building material. This method locks the PCM particles inside their individual pockets, allowing them to safely undergo a phase change and manage thermal energy without leakage.
Similarly, Pentzer and her team first combined light-sensitive liquid resins with a phase-changing paraffin wax powder to create a new 3D printable ink composite, enhancing the production process for building materials containing PCMs and eliminating several steps, including encapsulation.
The resin/PCM mixture is soft, paste-like, and malleable, making it ideal for 3D printing but not for building structures. By using a light-sensitive resin, they cured it with an ultraviolet light to solidify the 3D printable paste, making it suitable for real-world applications.
Additionally, they found that the phase-changing wax embedded within the resin was not affected by the ultraviolet light and made up 70% of the printed structure. This is a higher percentage when compared to most currently available materials being used in industry.
Next, they tested the thermoregulation of their phase-changing composites by 3D printing a small-scale house-shaped model and measuring the temperature inside the house when it was placed in an oven. Their analysis showed that, for both heating and cooling thermal cycles, the temperature inside the model differed from the outside temperature by 40% when compared with models made from traditional materials.
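The oven test measures exactly the kind of damping a toy lumped-capacitance model can illustrate: an interior node heating up behind a wall, with and without a latent-heat reservoir in that wall. Every parameter below is invented for illustration, and the model is far simpler than the study's actual experiment.

# Toy lumped model of the oven test idea: one interior air node behind a wall,
# simulated with and without a phase-change layer. All parameters are invented
# for illustration; this is not the study's thermal model.

def simulate(latent_heat_j=0.0, steps=3600, dt=1.0):
    t_outside, t_inside = 80.0, 25.0          # deg C
    ua = 2.0                                   # W/K, wall conductance (assumed)
    c_interior = 5_000.0                       # J/K, interior thermal mass (assumed)
    melt_temp, stored = 28.0, 0.0              # PCM melts around 28 C (assumed)
    for _ in range(steps):
        q = ua * (t_outside - t_inside) * dt   # heat leaking in this step, J
        if latent_heat_j and t_inside >= melt_temp and stored < latent_heat_j:
            absorbed = min(q, latent_heat_j - stored)  # PCM soaks up heat while melting
            stored += absorbed
            q -= absorbed
        t_inside += q / c_interior
    return t_inside

print(f"no PCM:   {simulate(0.0):.1f} C after 1 h")
print(f"with PCM: {simulate(200_000.0):.1f} C after 1 h")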
In the future, the researchers will experiment with different phase-change materials apart from paraffin wax so that these composites can operate at broader temperature ranges and manage more thermal energy during a given cycle.
"We're excited about the potential of our material to keep buildings comfortable while reducing energy consumption," said Peiran Wei, research scientist in the Department of Materials Science and Engineering and the Soft Matter Facility. "We can combine multiple PCMs with different melting temperatures and precisely distribute them into various areas of a single printed object to function throughout all four seasons and across the globe."
This study was funded by the National Science Foundation's Division of Materials Research Career Award. | Texas A&M University (TAMU) scientists have engineered novel three-dimensional (3D) printable phase-change material (PCM) composites for regulating interior building temperatures cost-effectively. The method eliminates the plastic capsules that surround each PCM particle in conventional manufacturing by blending light-sensitive liquid resins with a phase-changing paraffin wax powder. The resin ensures the 3D-printable paste will solidify under ultraviolet light, which does not affect the embedded wax. The TAMU team 3D-printed a small-scale model and found its inside temperature differed by 40% compared to outside temperatures, versus models composed of traditional materials. Said TAMU's Emily Pentzer, "The ability to integrate phase-change materials into building materials using a scalable method opens opportunities to produce more passive temperature regulation in both new builds and already existing structures."