An Algorithm Summarizes Lengthy Text Surprisingly Well

Researchers at Salesforce have developed an algorithm that applies machine-learning techniques to condense lengthy textual documents accurately and coherently, a technology that could affect fields such as law, medicine, and scientific research. The algorithm blends several strategies: it is trained via supervised learning on example summaries, and it applies an attention mechanism to both the text it is reading and the text it is generating. The attention mechanism keeps the system from returning repetitive strings of text, which has been a problem for other summarization programs. In addition, the system experiments with producing its own summaries via reinforcement learning. Northwestern University professor Kristian Hammond lauds the Salesforce algorithm, but says it also illustrates the limits of relying solely on statistical machine learning. “We need a little bit of semantics and a little bit of syntactic knowledge in these systems in order for them to be fluid and fluent,” Hammond says.
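
The attention idea at the core of such systems is compact enough to sketch. The toy below is my own illustration, not Salesforce’s code: it computes a scaled dot-product attention step over source-token vectors, and shows one common way to discourage repetition, dividing each raw score by the attention that position has already accumulated so tokens the decoder has dwelt on are down-weighted.

```python
import numpy as np

def attend(query, keys, values, past_scores=None):
    """One attention step: weight each source token by its relevance
    to the current decoding step and return a context vector."""
    raw = np.exp(keys @ query / np.sqrt(len(query)))  # unnormalized relevance
    if past_scores is not None:
        raw = raw / (past_scores + 1e-9)   # penalize already-attended tokens
    weights = raw / raw.sum()              # normalized attention distribution
    context = weights @ values             # weighted sum of token vectors
    return context, raw

rng = np.random.default_rng(0)
keys = values = rng.normal(size=(6, 8))    # 6 source tokens, 8-dim embeddings
query = rng.normal(size=8)                 # current decoder state

past = None
for step in range(3):                      # three decoding steps
    context, scores = attend(query, keys, values, past)
    past = scores if past is None else past + scores
```

In a full system the context vector would feed the next word prediction, and reinforcement learning would tune the pipeline against a summary-quality reward rather than word-by-word supervision alone.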

More info here: Technology Review


Flexible tactile sensor lets robots feel

Researchers at the Korea Advanced Institute of Science and Technology (KAIST) have developed a tactile sensor composed of silicone and carbon materials that can serve as a skin for robots, absorb shocks, and differentiate between various forms of touch. The researchers combined silicone and carbon nanotubes to produce a composite, which they paired with a medical-imaging technique called electrical impedance tomography (EIT). The team says the new material can distinguish both the location and the size of a touch. In addition, it can withstand strong force and function as a three-dimensional computer interface as well as a tactile sensor. The researchers also note the sensor can be reused even after partial damage, by filling the damaged region with fresh composite and letting it harden. “This technology will contribute to the soft robot industry in the areas of robot skin and the field of wearable medical appliances,” says KAIST professor Jung Kim.
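
Real EIT reconstruction solves a nontrivial inverse problem, but the difference-imaging intuition is simple: a touch changes the local conductivity of the skin, which perturbs the voltages measured at electrodes on the boundary. The toy sketch below is my illustration with made-up numbers, not KAIST’s method; it localizes a touch as the perturbation-weighted centroid of the electrode positions.

```python
import numpy as np

N = 16                                       # electrodes on a circular skin
angles = 2 * np.pi * np.arange(N) / N
electrodes = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def estimate_touch(baseline_v, touched_v):
    """Weight each electrode by how much its voltage changed."""
    delta = np.abs(touched_v - baseline_v)   # conductivity-driven perturbation
    return (delta / delta.sum()) @ electrodes

# Simulate a touch at (0.5, 0.2): electrodes nearest the touch change most.
touch = np.array([0.5, 0.2])
dist = np.linalg.norm(electrodes - touch, axis=1)
baseline = np.ones(N)
touched = baseline - 0.3 * np.exp(-3 * dist)
print(estimate_touch(baseline, touched))     # points toward the touch site
```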

More info here: EE Times Asia


Computers Learn to Cooperate Better Than Humans

Computers have for the first time trained themselves to cooperate in games in which the goal is to achieve the best possible outcome for all players. Brigham Young University professor Jacob Crandall and colleagues brought humans and computers together to play digital versions of chicken, prisoner’s dilemma, and a third collaborative game called “alternator.” Teams consisted of two people, two computers, or one human and one computer. Twenty-five different machine-learning algorithms were tested, but no single algorithm was capable of collaborating on its own. The researchers then gave the computers the ability to communicate, adding 19 prewritten phrases to be sent back and forth between partners after each turn. Over time, the computers had to learn what the phrases meant in the context of the game. The S# algorithm learned to cooperate with its partner within a few turns, and the machine-only teams cooperated at a higher rate than the human teams by the end of the game.
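
For context, the prisoner’s dilemma the teams played has a standard payoff structure, sketched in code below (S# itself is far more elaborate, and this omits the cheap-talk phrases entirely). It illustrates why cooperation is hard to learn: defecting always pays more in a single round, yet mutual cooperation pays more over repeated play.

```python
# Payoffs (row player, column player): C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): (3, 3),    # mutual cooperation
    ("C", "D"): (0, 5),    # sucker's payoff vs. temptation to defect
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),    # mutual defection
}

def tit_for_tat(history):
    return "C" if not history else history[-1][1]  # copy partner's last move

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score = [], [], [0, 0]
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        pa, pb = PAYOFF[(a, b)]
        score[0] += pa
        score[1] += pb
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score

print(play(tit_for_tat, tit_for_tat))    # [30, 30]: cooperation sustained
print(play(tit_for_tat, always_defect))  # [9, 14]: cooperation collapses
```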

More info here: Science – Jackie Snow


A revolutionary atom-thin semiconductor for electronics

Researchers at the University of Bayreuth in Germany have developed hexagonal boron-carbon-nitrogen (h-BCN), a two-dimensional (2D) material that could revolutionize electronics. The researchers say the new material, which has semiconductor properties, could be better suited to high-tech applications than conventional materials such as graphene. “Our development can be the starting point for a new generation of electronic transistors, circuits, and sensors that are many times smaller and more flexible than previous electronic elements,” says Bayreuth professor Axel Enders. Although graphene is extremely stable and an excellent conductor of heat and electricity, electrons flow through it unhindered at any voltage, so it has no defined “on” and “off” states. The Bayreuth researchers solved this problem by replacing individual carbon atoms in the material with boron and nitrogen, in such a way that the atoms form a two-dimensional lattice with semiconductor properties.

More info here: University of Bayreuth, Christian Wissler


Trump likely to see the birth of an exascale system

An exascale supercomputer will likely be realized within the Trump administration’s first term, a development that could be a tipping point for U.S. competitiveness. Supercomputing is viewed as essential to national competitiveness because research and product development are increasingly conducted virtually. Europe has set an exascale delivery date of 2022, backed by a $749-million commitment, while China and Japan both aim to have systems ready by 2020. China is using its own microchips, while a European system in development uses ARM processors. The Obama administration initially set a 2023-2024 target date for exascale, but amended it in its final weeks to 2021, with a projected budget of $3.1 billion to $5.7 billion. Argonne National Laboratory’s Paul Messina says the U.S. Department of Energy’s Exascale Computing Project “is now a seven-year project, not a 10-year project, but it will cost more.” China currently has the world’s fastest supercomputer, which runs at about 125 petaflops. Although the U.S. exascale project’s goals include contributing to the country’s economic competitiveness and supporting national security, another objective is developing a software stack, in collaboration with vendors, that smaller systems in industry and academia can use.
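
For scale (my arithmetic, not the article’s): an exaflop is a quintillion floating-point operations per second, a thousand petaflops, so today’s 125-petaflop leader would need roughly an eightfold speedup to reach exascale.

```python
PETAFLOP = 10**15          # floating-point operations per second
EXAFLOP = 10**18           # exascale = 1,000 petaflops

taihulight = 125 * PETAFLOP   # current fastest system (per the article)
print(EXAFLOP / taihulight)   # 8.0 -> an 8x jump is still required
```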

More info here: Computerworld (01/20/17) Patrick Thibodeau


Japan aims for superefficient supercomputer by 2017

Japan’s National Institute of Advanced Industrial Science and Technology (AIST) plans to build a super-efficient supercomputer that could take the top ranking on the Top500 supercomputer list by the end of next year. The AI Bridging Cloud Infrastructure is intended for use by startups, existing industrial supercomputing users, and academia. The planned supercomputer would have a processing capacity of 130 petaflops, outperforming the current world leader, China’s Sunway TaihuLight, which delivers 93 petaflops. AIST also wants to make the new machine one of the most efficient in the world, aiming for power consumption of less than 3 megawatts; Japan’s most powerful supercomputer, Oakforest-PACS, consumes a similar amount of power to deliver just 13.6 petaflops. AIST further wants the new system to have a power usage effectiveness of less than 1.1, a value attained only by the world’s most efficient datacenters. The researchers plan to use liquid cooling to help meet these goals. Other countries have optimized their top supercomputers for calculations such as atmospheric modeling or nuclear weapon simulations, but AIST is focusing on machine-learning and deep-learning applications in artificial intelligence.
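
The efficiency targets are easy to put in perspective (my arithmetic from the article’s figures): 130 petaflops in under 3 megawatts works out to more than 43 gigaflops per watt, roughly ten times what the Oakforest-PACS numbers imply. Power usage effectiveness (PUE) is a separate metric, total facility power divided by IT equipment power, so a PUE below 1.1 leaves less than 10% overhead for cooling and power delivery.

```python
PETAFLOP = 10**15
MEGAWATT = 10**6
GIGAFLOP = 10**9

def gflops_per_watt(petaflops, megawatts):
    """Compute efficiency as floating-point throughput per watt."""
    return petaflops * PETAFLOP / (megawatts * MEGAWATT) / GIGAFLOP

print(gflops_per_watt(130, 3))   # ABCI target: ~43.3 GFLOPS/W
print(gflops_per_watt(13.6, 3))  # Oakforest-PACS at similar power: ~4.5 GFLOPS/W

# PUE = total facility power / IT equipment power; 1.0 would mean every
# watt goes to computation, so 1.1 is close to the practical limit.
```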

More info here: IDG News Service (11/25/16) Peter Sayer


Computer scientists find ‘inexact computing’ can improve answers

Computer scientists at Rice University, Argonne National Laboratory, and the University of Illinois at Urbana-Champaign say “inexact computing” can dramatically improve the quality of simulations run on supercomputers. Inexact computing focuses on saving energy wherever possible by paying only for the accuracy a given situation requires, says Krishna Palem, director of Rice’s Center for Computing at the Margins. Using the Newton-Raphson method, a standard tool of numerical analysis, the team demonstrated it is possible to leapfrog from one part of a computation to the next and reinvest the energy saved by inexact computation at each leap to increase the quality of the final answer while staying within the same energy budget. The researchers showed the solution’s quality could be improved by more than three orders of magnitude for a fixed energy cost when an inexact approach to calculation was applied instead of a traditional high-precision approach. Palem compares the approach to running a relay of sprints rather than a marathon. “A specific goal is to encourage the application of this approach as a way to advance the quality of weather and climate modeling by improving model resolution,” he says.
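
A minimal sketch of the “relay of sprints” idea, assuming mixed precision as a stand-in for inexact hardware (my construction, not the authors’ code): run the early Newton-Raphson iterations in cheap 32-bit arithmetic, then spend the remaining budget on a couple of 64-bit refinement steps. Here the method solves x² − a = 0, i.e., computes √a.

```python
import numpy as np

def newton_sqrt(a, coarse_steps=4, fine_steps=2):
    """Newton-Raphson for sqrt(a): cheap low-precision iterations first,
    then a few high-precision steps to polish the answer."""
    x = np.float32(a / 2)                        # inexact phase (float32)
    for _ in range(coarse_steps):
        x = (x + np.float32(a) / x) / np.float32(2)
    x = np.float64(x)                            # reinvest in precision (float64)
    for _ in range(fine_steps):
        x = (x + a / x) / 2.0
    return x

print(newton_sqrt(2.0))   # matches np.sqrt(2.0) to double precision
print(np.sqrt(2.0))
```

Because Newton-Raphson converges quadratically, the final high-precision steps recover full accuracy even though the early iterations were deliberately coarse, which is the sense in which the saved effort can be reinvested.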

More info here: Rice University (10/20/16) David Ruth; Jade Boyd
