U.S. universities are starting to offer ethics courses relating to computer science, with the hope of training next-generation technologists and policymakers to weigh the social and moral ramifications of innovations before they are commercialized. One factor driving this trend is the popularization of tools such as machine learning, which have the potential to significantly change human society. “We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” says New York University’s Laura Noren. “You can patch the software, but you can’t patch a person if you…damage someone’s reputation.” A joint Harvard University-Massachusetts Institute of Technology course concentrates on the ethical, policy, and legal implications of artificial intelligence. The course also covers the proliferation of algorithmic risk scores that use data to predict whether someone is likely to commit a crime.
More info here: The New York Times, Natasha Singer
The University of Southampton in the U.K. recently launched the Center for Machine Intelligence (CMI), bringing together researchers and practitioners in artificial intelligence, machine learning, and autonomous systems to develop a coherent approach to research and technology transfer. Discussions at the launch event focused on these various technologies’ application in large-scale Internet of Things systems and in the insurance and social care sectors. Research groups within the CMI will focus on the theoretical aspects of machine intelligence, including the Agents, Interaction, and Complexity group, and the Vision, Learning, and Control group. “The formation of the CMI is an important next step at a time of great advances in this field and we look forward to working with industry, policymakers and the general public as we address both national and global challenges,” says Southampton professor Sarvapali Ramchurn, who will head the CMI.
More info here: University of Southampton
Researchers at the U.S. National Institute of Standards and Technology (NIST) say they have constructed a superconducting “synapse” switch that “learns” in the manner of a biological system and could link processors and store memories in future computers that operate like the human brain. The team views the synapse, a compact metallic cylinder 10 micrometers in diameter, as a key ingredient for neuromorphic computers. The device processes incoming electrical spikes to tailor its spiking output signals, with processing based on a flexible internal design that can be tuned by experience or environment. The NIST synapse also fires 1 billion times a second, much faster than a human synapse, while using only about one ten-thousandth as much energy. The researchers note the synapse would be employed in neuromorphic systems built from superconducting components, which can transmit electricity without resistance, with data transmitted, processed, and stored in units of magnetic flux.
More info here: NIST News, Laura Ost
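The spike-processing behavior described above can be illustrated with a toy software analogue (a simple leaky integrate-and-fire unit, not a model of the NIST device itself): the unit integrates weighted incoming spikes and fires when an adjustable threshold is crossed, so changing the threshold or weight reshapes the output spike pattern, loosely analogous to the synapse’s experientially tunable internal design.

```python
def spiking_response(input_spikes, threshold=1.0, leak=0.9, weight=0.5):
    """Toy leaky integrate-and-fire unit: integrates weighted input spikes,
    leaks internal state each step, and emits an output spike when the
    state crosses a tunable threshold (the adjustable 'learning' knob)."""
    v, out = 0.0, []
    for s in input_spikes:
        v = leak * v + weight * s   # leak old charge, add weighted input
        if v >= threshold:
            out.append(1)
            v = 0.0                 # reset after firing
        else:
            out.append(0)
    return out
```

Lowering the threshold (or raising the weight) makes the same input train produce more output spikes, which is the kind of tunability the hardware synapse provides physically.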
Researchers at the universities of Toulouse and Paris-Saclay in France found they could differentiate between human and computer Go players by analyzing the statistical characteristics of thousands of games played by people and algorithms. The researchers built databases of 8,000 games played by amateur humans, 8,000 played by the software Gnugo, 8,000 played by the software Fuego, and 50 games played by the software AlphaGo. Their analysis found that the move networks generated by the software contain more “communities” (a sign the algorithms are creating varied and diverse strategies) than those generated by humans. The team also found that the statistical differences between the computer- and human-generated networks are much larger than the variability within each network, and suggests these differences could form the basis of a new type of Turing test. “We think our work indicates a path towards a better characterization and understanding of the differences between human and computer decision-making processes, which could be applied in many different areas,” says Paris-Saclay’s Olivier Giraud.
More info here: Phys.org, Lisa Zyga
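The analysis above starts by turning each set of games into a network of moves, on which community structure can then be measured. A minimal sketch of that first step (the move labels and toy games below are hypothetical, not the researchers’ actual data or pipeline):

```python
from collections import defaultdict

def transition_network(games):
    """Build a weighted, directed move-transition network: each node is a
    move (an abstraction of a board position) and each edge links two
    consecutive moves, weighted by how often that transition occurs."""
    edges = defaultdict(int)
    for game in games:
        for a, b in zip(game, game[1:]):
            edges[(a, b)] += 1
    return dict(edges)

# Hypothetical toy games sharing a common opening sequence
games = [["D4", "Q16", "Q4", "D16"], ["D4", "Q16", "Q4", "C16"]]
net = transition_network(games)
```

A community-detection algorithm would then be run on networks like this one; the reported finding is that networks built from software games split into more communities than those built from human games.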
Researchers at the Georgia Institute of Technology (Georgia Tech) have developed MERLIN, a computer-aided approach for streamlining the design process for origami-based structures. The researchers say MERLIN is a breakthrough that makes it easier for engineers and scientists to conceptualize new ideas graphically while generating the underlying mathematical data for building the structure. “With the new software, we can easily visualize and, most importantly, engineer the behavior of deployable, self-assembling, and adaptable origami systems,” says Georgia Tech professor Glaucio Paulino. The research involved building a computer model to simulate the interaction between the two facets of a folded sheet: how easily and how far the folds would bend, and how much the flat planes would deform during movement. MERLIN lets users simulate how origami structures respond to compression forces applied at different angles. “The software also allows us to see where the energy is stored in the structure and better understand and predict how the objects will bend, twist, and snap,” Paulino says.
More info here: Georgia Tech Research Horizons
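The kind of energy bookkeeping Paulino describes can be sketched with the common bar-and-hinge idealization of origami, in which creases are torsional springs and panel edges are linear springs; the stiffness values and geometry below are illustrative stand-ins, not MERLIN’s actual formulation.

```python
import math

def fold_energy(theta, theta0, k_fold):
    """Rotational-spring model of a crease: stored energy grows as the
    fold bends away from its rest angle theta0 (radians)."""
    return 0.5 * k_fold * (theta - theta0) ** 2

def bar_energy(length, rest_length, k_bar):
    """Linear-spring model of panel stretching along a bar element."""
    return 0.5 * k_bar * (length - rest_length) ** 2

# Toy structure: one crease bent 30 degrees past its 90-degree rest angle,
# plus one bar stretched 5% beyond its rest length
total = (fold_energy(math.pi / 3, math.pi / 2, k_fold=2.0)
         + bar_energy(1.05, 1.0, k_bar=100.0))
```

Summing such terms over every crease and bar of a folded sheet gives an energy landscape whose gradients predict where the structure will bend, twist, or snap.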
Researchers at the University of California, Los Angeles are constructing a device that the California NanoSystems Institute’s Adam Stieg says is “inspired by the brain to generate the properties that enable the brain to do what it does.” The device is a mesh of highly interconnected silver nanowires that self-assembles through random chemical and electrical processes. The network contains 1 billion artificial synapses per square centimeter, and experiments found it can execute simple learning and logic operations, as well as filter unwanted noise from received signals. Instead of relying on software, the researchers exploit the network’s ability to distort an input signal in different ways depending on where the output is measured, which suggests applications in voice and image recognition. The mesh could also support reservoir computing, in which users select or mix outputs so that the result is a desired computation of the inputs.
More info here: Quanta Magazine, Andreas von Bubnoff
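Reservoir computing, mentioned above, keeps a fixed nonlinear dynamical system (here, the nanowire mesh; in software, a random recurrent network) untrained and fits only a linear readout on its states. A minimal sketch, with a random echo state network standing in for the mesh and a delayed-recall task as the target computation (all sizes, scalings, and the task itself are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 100, 500

# Fixed random reservoir, rescaled for stable dynamics (spectral radius 0.9)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.normal(0, 0.1, N)   # small input weights keep dynamics memory-rich

u = rng.uniform(-1, 1, T)      # input signal
target = np.roll(u, 3)         # task: recall the input from 3 steps ago

# Drive the reservoir and record its states; the reservoir is never trained
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train only the linear readout (ridge regression), discarding a washout
washout = 50
X, y = states[washout:], target[washout:]
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

err = np.sqrt(np.mean((X @ W_out - y) ** 2))  # readout RMSE on the task
```

The design point is that all learning happens in the cheap linear readout, which is why a physical substrate with rich, fixed dynamics, like the silver nanowire mesh, could serve as the reservoir.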