1: Artificial intelligence can help categorize volcanic ash

Scientists have shown that an artificial intelligence program called a Convolutional Neural Network can be trained to categorize volcanic ash particle shapes. Because the shapes of volcanic particles are linked to the type of volcanic eruption, this categorization can help provide information on eruptions and aid volcanic hazard mitigation efforts.

A CNN, or Convolutional Neural Network, is an AI program that is helping scientists sort volcanic ash particles by shape and size. AI systems excel at identifying patterns, and that strength is well suited here: the shape of a volcanic particle carries information about the type of eruption that produced it, so classifying particles automatically presents a real opportunity for mitigating volcanic hazards. ScienceDaily
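To make the idea concrete, here is a minimal sketch of such a shape classifier in PyTorch. It is an illustration under assumptions, not the authors' code: the class names, image size, and layer sizes are invented for the example.

```python
# Minimal sketch (not the study's code): a small CNN that assigns each
# grayscale ash-particle image to one of a few shape classes.
import torch
import torch.nn as nn

SHAPE_CLASSES = ["blocky", "vesicular", "elongated", "rounded"]  # assumed labels

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32x32 -> 16x16
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
    nn.Linear(64, len(SHAPE_CLASSES)),                            # one logit per shape class
)

# Forward pass on a toy batch of 64x64 particle silhouettes:
images = torch.rand(8, 1, 64, 64)
probs = model(images).softmax(dim=1)  # probability of each shape class per particle
```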

2: Our thinking patterns identified in 800 million tweets

Scientists have used 800 million tweets to identify patterns in the way we think. The study showed that our mode of thinking changes at different times of day, following a 24-hour cycle. The researchers reached this conclusion by analyzing around seven billion words drawn from over 800 million tweets. Bristol

3: Beaver fever hits autonomous robots – no rough terrain is rough enough

New autonomous robots are being built to solve problems based on research into stigmergy.

Stigmergy is a biological phenomenon that helps explain the complex, sophisticated behavior patterns of termites, ants, and beavers. It has also been used to describe how the internet and the World Wide Web organized themselves into what they are today.

New research is modeling robots on the way beavers work their way through even the roughest terrain. Buffalo

4: Robot bloodhound tracks odors on the ground

Scientists have come up with a new way to make robotic "dogs" that can track scents over impressive distances. Everyone knows how good a dog's nose is, especially a bloodhound's. Researchers have now developed a modern-day bloodhound robot that can detect odors very quickly. The robot bloodhound can even read messages written with odors, much like a barcode. ACS

5: AI can now track objects in videos without any supervision

Google is working on tracking objects in videos without any supervision. They are doing this with a convolutional neural network that learns to colorize grayscale videos, and that colorization ability turns out to double as an object tracker. The network can follow multiple objects, track through occlusions, and keep up with objects as they deform, all without using any pre-labeled training data. AI.Googleblog
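The core trick, roughly, is that a network trained to colorize a grayscale frame by copying colors from an earlier reference frame learns a pointing mechanism that can later copy object labels instead of colors. The sketch below is a hedged toy version of that idea, not Google's implementation; the architecture, loss, and tensor sizes are simplifying assumptions.

```python
# Hedged toy sketch of "tracking by colorization": learn per-pixel embeddings so
# that each target-frame pixel softly copies values (colors during training,
# object labels at test time) from similar reference-frame pixels.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelEmbedder(nn.Module):
    """Tiny CNN mapping a grayscale frame (B,1,H,W) to per-pixel embeddings (B,D,H,W)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, dim, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def copy_from_reference(emb_ref, emb_tgt, values_ref, temperature=0.1):
    """Copy 'values_ref' (colors or labels) to each target pixel, weighted by
    embedding similarity between target and reference pixels."""
    B, D, H, W = emb_ref.shape
    ref, tgt = emb_ref.flatten(2), emb_tgt.flatten(2)            # (B, D, H*W)
    attn = torch.einsum("bdi,bdj->bij", tgt, ref) / temperature  # target pixels attend to reference pixels
    attn = attn.softmax(dim=-1)
    out = torch.einsum("bij,bcj->bci", attn, values_ref.flatten(2))
    return out.view(B, -1, H, W)

# Training step (sketch): predict the target frame's colors from the reference frame.
embedder = PixelEmbedder()
gray_ref, gray_tgt = torch.rand(2, 1, 1, 32, 32)     # two grayscale frames (toy size)
color_ref, color_tgt = torch.rand(2, 1, 3, 32, 32)   # their held-out colors
pred = copy_from_reference(embedder(gray_ref), embedder(gray_tgt), color_ref)
loss = F.mse_loss(pred, color_tgt)  # the real work quantizes colors; MSE keeps the sketch short
loss.backward()
```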

6: Artificial Intelligence 1, Sick Cardiac Cells 0

The health applications of AI are unlocking new opportunities. A new study shows that AI paired with machine learning can be used to distinguish sick cardiac cell cultures from healthy ones with high precision. UTA

7: A bloodless way to check your sugar levels

Forget having to prick a finger to check your blood sugar. A new method is on the rise: by combining radar and AI technology, scientists have been able to sense blood sugar levels without drawing any blood at all. Economictimes
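As a hedged illustration of the general recipe (radar features in, glucose estimate out) rather than the study's actual pipeline, a simple regression model might look like this; the feature count, model choice, and data are invented for the example.

```python
# Toy sketch: regress an estimated glucose level from features of the reflected
# radar signal. Features and data here are random placeholders, not real measurements.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

radar_features = np.random.rand(200, 8)          # e.g. amplitude/phase statistics per sweep (assumed)
glucose_mg_dl = 70 + 100 * np.random.rand(200)   # reference readings used only for training

model = RandomForestRegressor().fit(radar_features, glucose_mg_dl)
estimate = model.predict(radar_features[:1])     # non-invasive estimate for a new radar sweep
```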

8: AI enables a computer program to look five minutes into the future

Scientists have developed new software that can, in a sense, look minutes into the future. How is that possible? The program learns the expected sequence of actions in an activity such as cooking by watching video recordings. Given a new situation, it can then predict what the cook will do next and roughly when. Sciencedaily
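To sketch how such anticipation could be set up (a hedged toy, not the authors' model): a small recurrent network reads the actions observed so far and predicts the next action and how long it will last. The action vocabulary and layer sizes are assumptions.

```python
# Hedged sketch: anticipate the next action and its duration from the actions seen so far.
import torch
import torch.nn as nn

ACTIONS = ["crack_egg", "pour_milk", "stir", "fry", "serve"]  # hypothetical cooking actions

class ActionAnticipator(nn.Module):
    def __init__(self, num_actions, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_actions, 32)
        self.rnn = nn.GRU(32, hidden, batch_first=True)
        self.next_action = nn.Linear(hidden, num_actions)  # what happens next
        self.duration = nn.Linear(hidden, 1)               # how long it lasts (seconds)

    def forward(self, action_ids):
        h, _ = self.rnn(self.embed(action_ids))
        summary = h[:, -1]                                 # state after the last observed action
        return self.next_action(summary), self.duration(summary)

model = ActionAnticipator(len(ACTIONS))
observed = torch.tensor([[0, 1, 2]])                       # crack_egg, pour_milk, stir so far
next_logits, seconds = model(observed)
print(ACTIONS[next_logits.argmax(dim=1).item()], seconds.item())
```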

9: AI can sense your pose through a wall

Scientists are using wireless devices in combination with AI to sense people’s movements through a wall.

"RF-Pose" is one of the latest projects at MIT's CSAIL. It uses artificial intelligence (AI) to teach wireless devices to sense people's postures and movement. The surprising part: it can read your pose and movement through a wall!

Using a neural network that analyzes radio signals bouncing off people's bodies, the system creates a dynamic stick figure that mimics the movements of the person on the other side of the wall. News.MIT
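Here is a hedged sketch of how such a pipeline could be wired up (an assumption-laden toy, not MIT's implementation): a convolutional network maps radio-reflection heatmaps to per-keypoint confidence maps, with a camera-based pose estimator supplying the training targets so no one has to label radio data by hand.

```python
# Toy sketch of cross-modal supervision: radio heatmaps in, body-keypoint
# confidence maps out, trained against keypoints from a camera-based model.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_KEYPOINTS = 14  # head, shoulders, elbows, ... (assumed keypoint set)

rf_to_pose = nn.Sequential(
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),  # 2 channels: horizontal + vertical RF heatmaps
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, NUM_KEYPOINTS, 1),            # one confidence map per body keypoint
)

rf_heatmaps = torch.rand(1, 2, 64, 64)                   # radio reflections (toy resolution)
camera_keypoints = torch.rand(1, NUM_KEYPOINTS, 64, 64)  # teacher signal from a vision model

loss = F.mse_loss(torch.sigmoid(rf_to_pose(rf_heatmaps)), camera_keypoints)
loss.backward()  # at test time, only the radio signal is needed, even through a wall
```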

10: Researchers are using Deep Learning to Identify, Count and Describe Wild Animals

Scientists are using artificial intelligence to identify, count, and describe animals in their natural habitats. They have devised a system that can automate animal identification for up to 99.3 percent of the images it reads. Using photographs from motion-sensor cameras, the deep learning network automatically identifies the animals in the pictures, describes what they are doing, and counts them. UWYO
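As a hedged sketch of the general approach (not the project's code), one common recipe is to fine-tune a pretrained image classifier on labelled camera-trap photos; the species list below is a hypothetical example.

```python
# Toy sketch: adapt a pretrained classifier to recognise species in camera-trap images.
import torch.nn as nn
from torchvision import models

SPECIES = ["wildebeest", "zebra", "gazelle", "lion", "empty_frame"]  # assumed classes

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # ImageNet-pretrained features
backbone.fc = nn.Linear(backbone.fc.in_features, len(SPECIES))       # new species-classification head
# Fine-tune on labelled camera-trap photos, then run over new images to identify
# the species present; counting and behaviour description would need extra heads or models.
```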

11: People can now correct robots' mistakes with their minds

Researchers at MIT's CSAIL have devised a new system that lets people correct robots intuitively. The system monitors a person's brain activity and tries to detect, in real time, whether that person has noticed an error while the robot performs its task. News.MIT

12: Robots are getting better at grabbing things

Roboticists at QUT (Queensland University of Technology) have developed a new way for robots to pick up objects from a cluttered, changing environment. It is said to make the robots faster and more accurate. Such robots could prove valuable in both industrial and domestic settings. QUT

13: Another take on Artificial Intelligence fails desperately

The Guardian reviews Tau, an AI thriller, and finds it to be another sci-fi disaster. It stars Maika Monroe as a woman who gets kidnapped and Academy Award winner Gary Oldman as the voice of the AI 'Tau'.

With a rating of one star out of five, Tau lands on the ever-growing pile of failed AI sci-fi movies. TheGuardian

14: AI principles at Google – Launched

Google has revealed its principles for artificial intelligence. The company recognizes AI as a powerful technology and understands that it raises equally powerful questions about how AI could and should be used.

In light of that, Google announced seven principles to guide its AI work going forward. The principles are not just theoretical concepts but concrete standards intended to govern its research, product development, and business decisions. Blog.Google

15: Machine learning network offers personalized estimates of children’s behavior

Scientists can now estimate autistic children's behavior using machine learning at a personalized level. Researchers at the MIT Media Lab are using this technique with robots to estimate each child's engagement and interest during various interactions, gathering data specific to that child. That matters for autism, which is not a uniform condition: it differs from child to child.

This helps them progress toward their long-term goal of creating robots that therapists could use to personalize therapy for each case. News.MIT

16: A giant that moves $5 trillion for businesses is about to get even bigger with an AI-powered assistant

JP Morgan, which moves money for corporations from Honeywell to Facebook, is adding an AI-powered assistant to its core global commerce business. The virtual assistant will start by answering questions and will also offer recommendations. CNBC

17: Google takes a step back from helping the US Pentagon with AI

Google will not be renewing its contract with the US Pentagon for the AI initiative Project Maven, following strong opposition from employees to the company taking part in a government project of this nature. Many employees felt it could be a first step toward using AI for lethal purposes. Thousands of employees signed a petition against Google's participation in the project, and several resigned. BBC

18: IBM's arguing robot wins one debate in a public contest

IBM has created a robot that can answer humans back: an arguing robot! It is a roughly 2-metre-tall black panel called Project Debater. IBM tested it in a public debate competition against humans. The Debater put in an admirable performance and came away with a 1-1 score.

Not bad for a robot! NewScientist

19: Google's DeepMind can construct a 3D world from 2D images

GQN (Generative Query Network) from DeepMind is a neural network. What makes it different from similar programs is that GQN trains only on the data it gathers from its own observations, not on data labelled and fed in by humans.

This approach enables GQN to imagine scenes from different angles and generate three-dimensional renderings from 2D images. It can also identify and classify objects without pre-existing labels, and make inferences, using what it can see to figure out what it cannot. Engadget
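To give a feel for the shape of the architecture, here is a heavily simplified, hedged sketch: a representation network encodes each (image, viewpoint) observation into a scene code, the codes are summed, and a generator renders the scene from a new query viewpoint. The real GQN uses a recurrent latent-variable generator; the sizes and layers below are illustrative assumptions.

```python
# Heavily simplified GQN-style sketch: encode observations into a scene code,
# then render that scene from a previously unseen viewpoint.
import torch
import torch.nn as nn

class SceneEncoder(nn.Module):
    """Encodes one (image, camera viewpoint) observation into a scene representation."""
    def __init__(self, dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3 + 7, 32, 3, stride=2, padding=1), nn.ReLU(),  # image channels + broadcast viewpoint
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, image, viewpoint):
        B, _, H, W = image.shape
        vp = viewpoint.view(B, -1, 1, 1).expand(B, viewpoint.shape[1], H, W)
        return self.conv(torch.cat([image, vp], dim=1))

class Renderer(nn.Module):
    """Renders a (toy 16x16) image of the scene from a query viewpoint."""
    def __init__(self, dim=128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(dim + 7, 256), nn.ReLU(),
            nn.Linear(256, 3 * 16 * 16),
        )

    def forward(self, scene_code, query_viewpoint):
        x = self.fc(torch.cat([scene_code, query_viewpoint], dim=1))
        return x.view(-1, 3, 16, 16)

encoder, renderer = SceneEncoder(), Renderer()
images = torch.rand(3, 3, 16, 16)   # three observed views of one scene
viewpoints = torch.rand(3, 7)       # camera position/orientation for each view (7 numbers, as in GQN)
scene_code = encoder(images, viewpoints).sum(dim=0, keepdim=True)  # aggregate the observations
novel_view = renderer(scene_code, torch.rand(1, 7))                # "imagine" an unseen viewpoint
```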

20: Reddit and Twitter set to become anti-swearing zones

Researchers at IBM have developed an AI system that turns offensive comments into less offensive versions while keeping the context of those comments intact.

Their AI system has proved to be more accurate than other state-of-the-art text translation algorithms. Cacm.Acm