New Artificial Agent Sees Like a Human

by Colleen Fleiss on May 16, 2019 at 11:16 PM

Computer scientists at The University of Texas at Austin have taught an artificial intelligence (AI) agent to do something that usually only humans can do: take a few quick glimpses around and infer its whole environment, a skill necessary for the development of effective search-and-rescue robots that could one day improve the effectiveness of dangerous missions. The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley), published their results today in the journal Science Robotics.

Most AI agents are trained to perform specific tasks in environments they have experienced before, but the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.

"We want an agent that's generally equipped to enter environments and be ready for new perception tasks as they arise," Grauman said. "It behaves in a way that's versatile and able to succeed at different tasks because it has learned useful patterns about the visual world."

The scientists used deep learning, a type of machine learning inspired by the brain's neural networks, to train their agent on thousands of 360-degree images of different environments.
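To make that training setup concrete, here is a minimal Python sketch of how such data might be prepared: each 360-degree (equirectangular) panorama is sliced into a grid of narrow views, so that one "glimpse" corresponds to one grid cell. The grid size, function name, and use of NumPy are illustrative assumptions, not details from the paper.

    # Hypothetical preprocessing: slice a panorama into candidate glimpses.
    import numpy as np

    def panorama_to_glimpses(pano, n_elev=4, n_azim=8):
        """Split an HxWxC panorama into an n_elev x n_azim grid of views."""
        h, w = pano.shape[0] // n_elev, pano.shape[1] // n_azim
        return np.array([[pano[i*h:(i+1)*h, j*w:(j+1)*w]
                          for j in range(n_azim)]
                         for i in range(n_elev)])

    views = panorama_to_glimpses(np.zeros((128, 256, 3)))  # dummy panorama
    print(views.shape)  # (4, 8, 32, 32, 3): 32 candidate view directions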

Now, when presented with a scene it has never seen before, the agent uses its experience to choose a few glimpses--like a tourist standing in the middle of a cathedral taking a few snapshots in different directions--that together add up to less than 20 percent of the full scene. What makes this system so effective is that it is not just taking pictures in random directions: after each glimpse, it chooses the next shot that it predicts will add the most new information about the whole scene.

It is much as if you were in a grocery store you had never visited before: seeing apples, you would expect to find oranges nearby, but to locate the milk, you might glance the other way. Based on its glimpses, the agent infers what it would have seen had it looked in all the other directions, reconstructing a full 360-degree image of its surroundings.
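As a loose illustration of that "most informative next glimpse" strategy, the toy Python sketch below treats a scene as a ring of 16 view directions and greedily glimpses wherever its current guess is weakest. The uncertainty proxy (distance to the nearest observed view) and the nearest-view reconstruction are simplified stand-ins for the learned model, not the authors' method.

    # Toy active-glimpse loop: look where the current estimate is least certain.
    import numpy as np

    def ring_dist(i, j, n=16):
        return min(abs(i - j), n - abs(i - j))  # circular distance on the ring

    rng = np.random.default_rng(0)
    scene = np.cumsum(rng.normal(size=16))   # the unknown 360-degree scene
    seen = {0: scene[0]}                     # first glimpse: direction 0

    for _ in range(3):                       # budget: 4 of 16 views (25%)
        # Uncertainty proxy: how far each direction is from anything observed.
        gaps = [min(ring_dist(i, j) for j in seen) for i in range(16)]
        nxt = int(np.argmax(gaps))           # glimpse where we know the least
        seen[nxt] = scene[nxt]

    # Reconstruct unseen directions by copying the nearest observed view.
    recon = np.array([seen[min(seen, key=lambda j: ring_dist(i, j))]
                      for i in range(16)])
    print("mean reconstruction error:", np.abs(recon - scene).mean())

In the real system a learned network predicts the unseen views; the greedy "look at the biggest gap" rule here only captures the flavor of choosing non-redundant glimpses.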

"Just as you bring in prior information about the regularities that exist in previously experienced environments--like all the grocery stores you have ever been to--this agent searches in a nonexhaustive way," Grauman said. "It learns to make intelligent guesses about where to gather visual information to succeed in perception tasks."

One of the main challenges the scientists set for themselves was to design an agent that can work under tight time constraints, as would be critical in a search-and-rescue application. For example, in a burning building, a robot would be called upon to quickly locate people, flames and hazardous materials and relay that information to firefighters.

For now, the new agent operates like a person standing in one spot, with the ability to point a camera in any direction but not able to move to a new position. Or, equivalently, the agent could gaze upon an object it is holding and decide how to turn the object to inspect another side of it. Next, the researchers are developing the system further to work in a fully mobile robot.

Using supercomputers at UT Austin's Texas Advanced Computing Center and Department of Computer Science, the researchers trained their agent in about a day using an artificial intelligence approach called reinforcement learning. With Ramakrishnan leading the effort, the team developed a method for speeding up the training: building a second agent, called a sidekick, to assist the primary agent.

"Using extra information that's present purely during training helps the [primary] agent learn faster," Ramakrishnan said.

Source: Eurekalert