The results show that dogs are more attuned to actions than to who or what is doing the action

Scientists have decoded visual images from a dog’s brain, offering the first look at how the canine mind reconstructs what it sees. The study, conducted at Emory University, was published in the Journal of Visualized Experiments.

The results show that dogs are more attuned to the movements around them than to who or what is moving.

The researchers recorded fMRI neural data from two awake, unrestrained dogs as they watched videos in three 30-minute sessions, for a total of 90 minutes. They then used a machine learning algorithm to analyze the patterns in the neural data.

“We’ve shown that we can monitor the activity in a dog’s brain while it is watching a video and reconstruct, at least to a limited extent, what it is watching,” says Emory psychology professor and corresponding author of the paper, Gregory Burns. “The fact that we were able to do this is remarkable.”

The project was inspired by recent advances in using machine learning and fMRI to decode visual stimuli from the human brain, work that has provided new insights into the nature of perception. Beyond humans, the technique has been applied to only a handful of other species, including some primates.

“Although our work is based on only two dogs, there is proof of concept that these techniques work in dogs,” says first author Erin Phillips, who did the work as a research scientist in Burns’ Canine Cognitive Neuroscience Lab. “I hope this paper helps lay the groundwork for other researchers to apply these techniques to dogs, as well as other species, so we can learn more and more about how the minds of different animals work.”

Originally from Scotland, Phillips came to Emory as a Bobby Jones Scholar, part of an exchange program between Emory and the University of St. Andrews. She is currently a graduate student in ecology and evolutionary biology at Princeton University.

Burns and colleagues have pioneered training techniques that teach dogs to walk into an fMRI scanner and remain completely still and unrestrained while their neural activity is measured. Ten years ago, his team published the first fMRI brain images of a fully awake, unrestrained dog. That work opened the door to what Burns calls The Dog Project, a series of experiments exploring the minds of the oldest domesticated species.

Over the years, his lab has published research on how the canine brain processes sight, words, smells, and rewards such as praise or receiving food.

Meanwhile, machine learning algorithms continued to improve, allowing scientists to decode some patterns of human brain activity. These algorithms “read minds” by detecting, within the patterns of brain data, the different objects or actions a person sees while watching a video.

“I started to wonder, ‘Can we apply similar techniques to dogs?'” recalls Burns.

The first challenge was to produce video content that a dog might find interesting enough to watch for an extended period. The Emory research team attached a video recorder to a gimbal and selfie stick, which allowed them to capture steady footage from a dog’s perspective, at about human waist height or slightly lower.

They used this device to create a half-hour video of scenes from a typical dog’s life. Activities included dogs being petted by and interacting with people. Other scenes showed dogs sniffing, playing, eating, or walking on a leash. Action scenes showed cars, bicycles, or scooters passing on a road; a cat walking through a house; a deer passing by; people sitting; people hugging or kissing; people offering a rubber bone or ball to the camera; and people eating.

The video data was segmented by time stamps into different classifiers, including object-based classifiers (such as dog, car, human, and cat) and action-based classifiers (such as sniffing, playing, or eating).
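To make that structure concrete, here is a minimal sketch of how such time-stamped annotations could be represented in Python; the segment times, objects, and actions below are illustrative placeholders, not the study’s actual labels.

```python
# Hypothetical time-stamped annotations for the half-hour video.
# Segment times, objects, and actions are illustrative only, not the study's labels.
annotations = [
    {"start_s": 0.0,  "end_s": 12.5, "objects": ["dog", "human"], "actions": ["petting"]},
    {"start_s": 12.5, "end_s": 30.0, "objects": ["dog"],          "actions": ["sniffing"]},
    {"start_s": 30.0, "end_s": 55.0, "objects": ["car"],          "actions": ["driving"]},
    {"start_s": 55.0, "end_s": 80.0, "objects": ["cat"],          "actions": ["walking"]},
]

def labels_at(t_seconds, annotations):
    """Return the object and action labels active at a given moment in the video."""
    for segment in annotations:
        if segment["start_s"] <= t_seconds < segment["end_s"]:
            return segment["objects"], segment["actions"]
    return [], []

print(labels_at(20.0, annotations))  # (['dog'], ['sniffing'])
```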

Only two of the dogs trained for fMRI experiments had the attention span and temperament to lie perfectly still and watch a 30-minute video without a break, and to do so in three sessions totaling 90 minutes. These two “superstar” dogs were Daisy, a mixed breed who may be part Boston terrier, and Bhubo, a mixed breed who may be part boxer.

“They didn’t even need food,” says Phillips, who watched the animals during fMRI sessions and tracked their eyes on video. “It was fun because it’s serious science and a lot of time and effort went into it, but it ended up with dogs watching videos of other dogs and people doing some kind of silly thing.”

Two people also underwent the same experiment, watching the same 30-minute video in three separate sessions while lying in the fMRI scanner.

The brain data could then be mapped onto the video classifiers using the time stamps.
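As a rough illustration of that mapping, the sketch below assigns each brain volume the action shown in the video at the moment that volume reflects; the repetition time, hemodynamic delay, and segment times are assumptions made for this example, not values reported in the paper.

```python
# Assign each fMRI volume an action label based on when it was acquired.
# The TR, hemodynamic delay, and segment times are illustrative assumptions.
TR_S = 2.0               # assumed repetition time: one brain volume every 2 seconds
HEMODYNAMIC_LAG_S = 4.0  # assumed delay between stimulus and peak BOLD response

# (start_s, end_s, action) segments, e.g. exported from the video annotations
segments = [(0.0, 12.5, "petting"), (12.5, 30.0, "sniffing"), (30.0, 55.0, "playing")]

def action_for_volume(volume_index):
    """Find the action shown in the video at the moment this volume reflects."""
    stimulus_time = volume_index * TR_S - HEMODYNAMIC_LAG_S
    for start, end, action in segments:
        if start <= stimulus_time < end:
            return action
    return None  # no annotated segment (e.g. before the video started)

labels = [action_for_volume(i) for i in range(12)]
print(labels)
```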

A neural network machine learning algorithm known as Ivis was applied to the data. A neural network is a machine learning method in which a computer learns by analyzing training examples. In this case, the neural network was trained to classify the content of the brain data.
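The authors’ Ivis pipeline is not reproduced here, but the general idea of training a neural network classifier on labeled brain-response features can be sketched with scikit-learn’s MLPClassifier as a stand-in; the random data and dimensions below are placeholders, so the printed accuracy will sit near chance rather than the figures reported in the study.

```python
# Rough stand-in for the decoding step: train a small neural network to predict
# the action label of each brain volume from its voxel features.
# Random data, sizes, and MLPClassifier are illustrative assumptions; the study
# applied the Ivis algorithm to real fMRI data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_volumes, n_voxels = 900, 500                      # assumed sizes, not the study's
X = rng.normal(size=(n_volumes, n_voxels))          # stand-in brain features
y = rng.choice(["sniffing", "playing", "eating"], size=n_volumes)  # action labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# With random placeholder data this accuracy will hover near chance (~33%).
print("decoding accuracy:", accuracy_score(y_test, model.predict(X_test)))
```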

For the two human subjects, the neural network model mapped the brain data onto both the object- and action-based classifiers with 99% accuracy.

For the dogs, the model did not work for the object classifiers, but it was 75% to 88% accurate at decoding the action classifications.

The results show major differences in how the brains of humans and dogs work.

“We humans are very object-oriented,” says Burns. “English has 10 times more nouns than verbs because we have a particular obsession with naming objects. Dogs seem to be less interested in who or what they are seeing and more interested in the action itself.”

Burns notes that there are also big differences in the visual systems of dogs and humans. Dogs see only in shades of blue and yellow, but have a slightly higher density of visual receptors designed to detect movement.

“It makes perfect sense that dogs’ brains are highly attuned to movement first and foremost,” he says. “Animals have to be very concerned about what’s going on around them to avoid being eaten or to track down animals they might want to hunt. Action and movement are paramount.”

For Phillips, understanding how different animals perceive the world is important to her current field research in Mozambique on how the reintroduction of predators can affect ecosystems. “Historically, computer science and ecology haven’t had a lot of overlap,” she says. “However, machine learning is a growing field that is beginning to find wider applications, including in ecology.”

Additional authors on the paper include Emory associate professor of psychology Daniel Dilks and Kirsten Gillette, who worked on the project as an Emory undergraduate majoring in neurobiology and behavioral biology. Gillette has since graduated and is now continuing her studies at the University of North Carolina.

Daisy belongs to Rebecca Beasley and Bhubo belongs to Ashwin Sakhardande. Human experiments in the study were supported by a grant from the National Eye Institute.

