University of Washington researchers, working with colleagues at the Allen Institute for Artificial Intelligence (AI), have trained an AI system to respond like a dog using data from an actual animal. To capture that data, a real dog was equipped with several sensors, including a GoPro camera, a microphone, inertial sensors, and an Arduino unit. In total, the team collected 24,500 frames of video, which were synchronized with body movements and sound. The researchers then used 21,000 of those frames to train the AI system and the rest to test it. The researchers found the system outperformed baselines on tasks they deemed challenging. Although the AI system was not connected to a robotic dog, the team wants to take the research in that direction in the future.
More info here: Tech Xplore, Bob Yirka
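The data split described above (21,000 of 24,500 synchronized frames for training, the remainder for testing) can be sketched in a few lines. This is purely illustrative, not the researchers' code; the field names standing in for the synchronized sensor streams are hypothetical.

```python
# Illustrative sketch of the train/test split described in the article:
# 24,500 synchronized frames, 21,000 for training, the rest for testing.
def split_frames(frames, n_train):
    """Split a chronologically ordered list of samples into train and test."""
    return frames[:n_train], frames[n_train:]

# Stand-in records for the synchronized video/motion/audio frames.
frames = [{"frame_id": i, "imu": None, "audio": None} for i in range(24500)]
train, test = split_frames(frames, 21000)
print(len(train), len(test))  # 21000 3500
```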
Researchers at the University of Texas at San Antonio (UTSA) have created a new cloud-based learning platform for artificial intelligence (AI) designed to teach machines to learn like humans. UTSA professors Nicole Beebe and Paul Rad examined how education and comprehension have evolved over the past 500 years to obtain a better understanding of how computers could be taught to approach deductive reasoning. They also focused on how humans learn across their lifetimes. Rad thinks intelligent machines could be utilized in medical diagnoses, leading to more affordable healthcare, and in other fields in which precise deductive reasoning is a necessity. Throughout history, Rad said, “Humans have invented and used tools such as swords, calculators, and cars, and tools have changed human society and enable us to evolve. That’s what we’re doing here, but on a much more impactful scale.”
More info here: UTSA Today, Joanna Carver
Researchers at the University of Manchester in the U.K., who are developing jumping robot spiders and swarms of robotic bees, have trained a species of jumping spider to jump different distances and heights, recording every movement in extreme detail with high-resolution cameras. If robots can be developed that perfectly mimic the way biological spiders jump, they could be used in complex engineering and manufacturing, and deployed in unknown or dangerous environments, says University of Manchester professor Mostafa Nabawy. Nabawy also is developing flying robot bees, with the ultimate goal of creating a machine that can fly independently. He notes these technologies can “be used for many different applications, including improving the current aerodynamic performances of aircraft.”
More info here: University of Manchester
Neuroscientists at the University of Toronto Scarborough (U of T Scarborough) have for the first time digitally reconstructed images of what people perceive based on brain waves recorded via electroencephalogram (EEG). “When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing,” says U of T Scarborough’s Dan Nemrodov. “We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process.” EEG-linked test subjects were shown images of faces as their brain activity was recorded, and then machine-learning algorithms digitally recreated the images. “EEG captures activity at the millisecond scale,” Nemrodov notes. “So we can see with very fine detail how the percept of a face develops in our brain using EEG.” The team estimated that it takes the brain about 170 milliseconds to form a good representation of a face.
More info here: University of Toronto Scarborough, Don Campbell
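The pipeline above (record EEG while subjects view face images, then learn a mapping from brain activity back to images) can be illustrated with a generic linear decoding sketch. This is not the U of T Scarborough team's method; the data here are simulated, the channel and pixel counts are arbitrary, and a simple least-squares decoder stands in for their machine-learning algorithms.

```python
import numpy as np

# Illustrative only: learn a linear map from simulated EEG feature
# vectors to image pixel vectors, then decode held-out trials.
rng = np.random.default_rng(0)
n_trials, n_channels, n_pixels = 100, 64, 16  # arbitrary sizes

W_true = rng.normal(size=(n_channels, n_pixels))
eeg = rng.normal(size=(n_trials, n_channels))  # EEG features per trial
images = eeg @ W_true                          # noiseless "viewed images"

# Fit a least-squares decoder on 90 trials, reconstruct the last 10.
W, *_ = np.linalg.lstsq(eeg[:90], images[:90], rcond=None)
recon = eeg[90:] @ W
print(np.allclose(recon, images[90:], atol=1e-6))  # True
```

Because the simulated data are noiseless and the system is well-determined, the decoder recovers the held-out "images" exactly; real EEG decoding is far noisier and uses richer models.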
Researchers at the Massachusetts Institute of Technology, Harvard University, and elsewhere have demonstrated the successful interaction of photons, which could clear a path toward using light particles in quantum computing. In controlled experiments, when a weak laser beam was shone through a dense cloud of ultracold rubidium atoms, the photons bound together in pairs or triplets, suggesting an attraction occurring among them. The researchers also measured the photonic phase before and after traveling through the cloud; as the three-photon particles exited the atom cloud simultaneously, their phase was shifted compared with photons that did not interact at all, and was three times larger than the phase shift of two-photon molecules. The team says photons that have interacted with each other, in this case via an attraction between them, can be considered strongly correlated, or entangled, which is essential for any quantum computing bit.
More info here: MIT News, Jennifer Chu
U.S. universities are starting to offer ethics courses relating to computer science, with the hope of training next-generation technologists and policymakers to weigh the social and moral ramifications of innovations before they are commercialized. One factor driving this trend is the popularization of tools such as machine learning, which have the potential to significantly change human society. “We need to at least teach people that there’s a dark side to the idea that you should move fast and break things,” says New York University’s Laura Noren. “You can patch the software, but you can’t patch a person if you…damage someone’s reputation.” A joint Harvard University-Massachusetts Institute of Technology course concentrates on the ethical, policy, and legal implications of artificial intelligence. The course also covers the proliferation of algorithmic risk scores that use data to predict whether someone is likely to commit a crime.
More info here: The New York Times, Natasha Singer
The University of Southampton in the U.K. recently launched the Centre for Machine Intelligence (CMI), bringing together researchers and practitioners in artificial intelligence, machine learning, and autonomous systems to develop a coherent approach to research and technology transfer. Discussions at the launch event focused on these technologies’ application in large-scale Internet of Things systems and in the insurance and social care sectors. Research groups within the CMI, including the Agents, Interaction, and Complexity group and the Vision, Learning, and Control group, will focus on the theoretical aspects of machine intelligence. “The formation of the CMI is an important next step at a time of great advances in this field and we look forward to working with industry, policymakers and the general public as we address both national and global challenges,” says Southampton professor Sarvapali Ramchurn, who will head the CMI.
More info here: University of Southampton