Scientists at the University of Alberta (U of A) in Canada have taken a step toward smaller, faster, and more energy-efficient computers engineered at the atomic level via proton shuffling. The university’s Robert Wolkow and Moe Rashidi developed an algorithm to automate a time-intensive process, which involves employing an incredibly thin probe to sever and rearrange atomic bonds. “Every time you go to break the bond between atoms, so that you can pick up a target atom and put it somewhere else, you might unintentionally break a bond in your tools,” Wolkow says. Rashidi’s algorithm automatically spots and repairs probe damage as it occurs, making human monitoring unnecessary. Wolkow suggests building circuits at the atomic level would enable manufacturers to produce devices that circumvent the current energy and heat limitations of modern transistors.
More info here: The Star Edmonton, Hamdi Issawi
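The detect-and-repair cycle described above can be sketched as a simple control loop. This is a hypothetical illustration of the idea, not the U of A code; the function names and the quality threshold are invented for the sketch.

```python
# Hypothetical sketch: pause fabrication to recondition the probe tip
# whenever a scan suggests it has been damaged (e.g., picked up an atom).

def probe_is_sharp(image_quality: float, threshold: float = 0.8) -> bool:
    """Judge tip condition from a quality score of the latest scan."""
    return image_quality >= threshold

def fabricate(steps, scan, recondition):
    """Run patterning steps, repairing the tip whenever it dulls,
    so no human monitoring is needed."""
    completed = 0
    for step in steps:
        if not probe_is_sharp(scan()):
            recondition()  # re-sharpen the tip in place before continuing
        step()
        completed += 1
    return completed
```

The point of automating this check is that tip damage is detected the moment it occurs, rather than after a human notices degraded patterning.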
Ali Khademhosseini of the University of California, Los Angeles led a study describing a new three-dimensional (3D) printing method for constructing therapeutic biomaterials from multiple materials. His team employed stereolithography with a 3D printer of Khademhosseini's design, which pairs a custom-built microfluidic chip, whose inlets each supply a distinct material, with a digital micromirror array. The team used varying hydrogels that cohere into scaffolds for tissue, while the micromirrors steered light onto the printing surface to mark the outline of the object. The illumination also catalyzed the formation of molecular bonds in the materials, solidifying the hydrogels. The mirror array redirected the light pattern during printing to trace the contours of each new layer. The process was first used to produce simple shapes, and then applied to complex 3D structures emulating muscle tissue and muscle-skeleton connective tissue.
More info here: UCLA Samueli Newsroom
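The layer-by-layer loop described above (route one hydrogel through the chip, project that layer's light pattern with the micromirrors, cure it) can be sketched as follows. All function names here are hypothetical stand-ins, not UCLA's actual software.

```python
# Illustrative sketch of multi-material, layer-by-layer stereolithography.

def print_object(layers, select_material, project_pattern, cure):
    """Build a multi-material part one layer at a time.

    layers: sequence of (material_id, bitmask) pairs, where bitmask is
    the 2D light pattern the micromirror array steers onto the surface.
    """
    for material_id, bitmask in layers:
        select_material(material_id)   # route one hydrogel through the chip
        project_pattern(bitmask)       # micromirrors outline this layer
        cure()                         # light triggers bond formation,
                                       # solidifying the exposed hydrogel
    return len(layers)
```

Changing the material between layers is what lets a single print emulate, say, muscle tissue in one region and connective tissue in another.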
The International Criminal Police Organization (Interpol) in Lyon, France, is assessing software that matches speech samples taken from phone calls or social media posts against voice recordings of criminals in a vast law enforcement database. The SIIP (Speaker Identification Integrated Project) platform would use multiple speech analysis algorithms to filter voice samples by gender, age, language, and accent in an effort to increase voice data accuracy, reliability, and judicial admissibility. The development team successfully field-tested the system twice last year, and a project review is slated for this June in Brussels. The software annotates captured voice clips with new information, such as the speaker's age or accent. The SIIP platform runs newly recorded voice samples through a processing chain of developer-supplied algorithms built on an open source architecture. The software's video processing engine extracts the audio from an online video and formats it into uncompressed 16-kHz WAV files. Security groups in the Netherlands and the U.K. studied the ethical concerns associated with the project, but it has drawn criticism from civil rights activists.
More info here: IEEE Spectrum, Michael Dumiak
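The output format described for the extraction step (uncompressed 16 kHz mono WAV) can be produced with Python's standard library alone. This sketch covers only the final formatting stage, not SIIP's extraction or analysis algorithms, and the 440 Hz test tone is a stand-in for real extracted audio.

```python
# Write audio samples out as an uncompressed 16 kHz mono 16-bit WAV file,
# the format the SIIP processing chain is described as expecting.
import math
import struct
import wave

def write_wav_16khz(path, samples):
    """samples: iterable of floats in [-1.0, 1.0]."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)      # mono
        w.setsampwidth(2)      # 16-bit PCM (uncompressed)
        w.setframerate(16000)  # 16 kHz sample rate
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        w.writeframes(frames)

# One second of a 440 Hz test tone in place of real extracted speech.
tone = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
write_wav_16khz("sample.wav", tone)
```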
HyperTools is an open source software package developed by Dartmouth College researchers that leverages a suite of mathematical techniques to understand high-dimensional datasets via the underlying geometric structures they reflect. HyperTools can be used to convert data into visualizable shapes or animations, which can then be employed to compare different datasets; intuitively gain insights into underlying patterns; generalize across datasets; and develop and test theories relating to big data. “Our tool turns complex data into intuitive 3D [three-dimensional] shapes that can be visually examined and compared,” says Dartmouth’s Jeremy R. Manning. The researchers demonstrated their work with HyperTools visualizations of brain activity in response to movie frames, of changes in temperature measurement across the Earth between 1975 and 2013, and of the content of political tweets by Hillary Clinton and Donald Trump during the 2016 presidential campaign. HyperTools can also be used to guide the development of new machine learning algorithms.
More info here: Dartmouth College
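The core idea behind such visualizations, projecting high-dimensional observations down to three plottable coordinates, can be illustrated with a plain principal component analysis in NumPy. This is a generic sketch of the dimensionality-reduction step, not HyperTools' actual implementation, and the random point cloud stands in for real data such as brain recordings.

```python
# Generic PCA-to-3D embedding of the kind a tool like HyperTools would plot.
import numpy as np

def reduce_to_3d(data: np.ndarray) -> np.ndarray:
    """Project an (n_samples, n_features) matrix onto its top three
    principal components, yielding one 3D point per observation."""
    centered = data - data.mean(axis=0)
    # SVD of the centered data: rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T

rng = np.random.default_rng(0)
cloud = rng.normal(size=(100, 50))  # 100 observations, 50 features each
embedding = reduce_to_3d(cloud)
print(embedding.shape)              # (100, 3)
```

Each high-dimensional observation becomes a single 3D point, so trajectories through the original feature space become shapes that can be inspected and compared by eye.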
The University of Cambridge in the U.K. will get a new artificial intelligence (AI) supercomputer via a 10-million-pound ($13.5-million) collaborative alliance with the Engineering and Physical Sciences Research Council and the Science and Technology Facilities Council, with the goal of helping companies generate business value from advanced computing infrastructures. "Cambridge's supercomputer provides researchers with the fast and affordable supercomputing power they need for AI work," says Cambridge's Paul Calleja. AI projects currently underway at the university include medical imaging analysis and genomics, as well as an astronomy initiative for mapping exoplanets. The supercomputer is part of the U.K. government's AI Sector Deal, which aims to position the nation as a research hub by ensuring future innovators and technology entrepreneurs are based in the U.K. and by investing in the high-level post-graduate skills required to capitalize on AI's vast potential.
More info here: University of Cambridge, Sarah Collins
Researchers at Nanyang Technological University in Singapore have programmed a robot to create and carry out a plan to assemble an Ikea chair. The robot was built with custom software, a three-dimensional camera, two arms, grippers, and force detectors. The machine was fed a set of instructions on how the chair components fit together, and over about 20 minutes it performed the assembly in three stages. The robot first looked at the scattered components, photographing the scene and matching each part to one modeled in its manual. It then developed a plan to quickly assemble the chair without its arms colliding with each other or with the various parts, and finally executed that plan. The robot used its grippers to pick up wooden pins and, probing in a spiral pattern, relied on force sensors at its "wrists" to detect when each pin slid into its hole. The arms then worked together to press the sides of the chair frame together.
More info here: The New York Times, Niraj Chokshi
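The spiral peg-in-hole search is a standard compliant-assembly technique, and the version described can be sketched as follows. The geometry and the force threshold here are illustrative assumptions, not the NTU team's parameters.

```python
# Sketch of a spiral search for a hole: probe offsets along an Archimedean
# spiral around the estimated hole position until the resisting force drops,
# which signals that the pin has slipped in.
import math

def spiral_points(step=0.5, turns=3, points_per_turn=12):
    """Yield (x, y) offsets along an Archimedean spiral, starting at
    the estimated hole position and widening by `step` per turn."""
    for i in range(turns * points_per_turn):
        angle = 2 * math.pi * i / points_per_turn
        radius = step * angle / (2 * math.pi)
        yield (radius * math.cos(angle), radius * math.sin(angle))

def find_hole(measure_force, force_drop=5.0):
    """Probe spiral offsets; a wrist-force reading below force_drop means
    the pin found the hole. Returns the successful offset, or None."""
    for offset in spiral_points():
        if measure_force(offset) < force_drop:
            return offset
    return None
```

The appeal of the method is that it tolerates small perception errors: the camera only needs to localize the hole roughly, and the force-guided spiral closes the remaining gap.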
University of Washington researchers, working with colleagues at the Allen Institute for Artificial Intelligence (AI), have trained an AI system to respond like a dog using data from an actual animal. To capture that data, a real dog was initially equipped with several sensors, including a GoPro, a microphone, inertial sensors, and an Arduino unit. In total, the team collected 24,500 frames of video, which were synchronized with body movements and sound. The researchers then used 21,000 of those frames to train the AI system and the rest to test it. The researchers found the system outperformed baselines on tasks they deemed challenging. Although the AI system was not connected to a robotic dog, the team wants to take the research in that direction in the future.
More info here: Tech Xplore, Bob Yirka
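The 24,500-frame dataset splits into 21,000 training frames and 3,500 held-out test frames. A minimal sketch of that kind of split, assuming the frames are kept in recorded order so the synchronized sensor streams stay aligned:

```python
# Ordered train/test split of a synchronized frame sequence.

def split_frames(frames, n_train=21000):
    """Return (train, test) partitions, preserving recording order."""
    return frames[:n_train], frames[n_train:]

frames = list(range(24500))   # stand-in for the recorded, synchronized frames
train, test = split_frames(frames)
print(len(train), len(test))  # 21000 3500
```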