When you come back to school after summer break, it feels as though you’ve already forgotten everything you learned. But if you were to learn like an AI system does, your brain would take that as a cue to wipe the slate clean and start from scratch.
AI systems have a tendency to forget the things they've already learned. This is known as catastrophic forgetting, and it's highly problematic.
Because AI systems don’t actually understand the logic of what they do, teaching them something else, even something highly similar, means training them all over again. Once an algorithm is trained, it can’t simply be changed or updated.
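The overwriting problem can be seen even in a toy model. Below is a minimal, self-contained sketch (purely illustrative, not DeepMind's setup): a one-parameter linear model is trained on task A, then retrained on task B, and its knowledge of task A is wiped out in the process.

```python
# Illustrative sketch of catastrophic forgetting:
# a one-parameter model y = w * x, fit by gradient descent.

def train(w, data, lr=0.1, epochs=200):
    """Fit w to (x, y) pairs by minimizing squared error."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2 * (w * x - y) * x  # derivative of (w*x - y)^2
            w -= lr * grad
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # task A: y = 2x
task_b = [(1.0, -2.0), (2.0, -4.0)]  # task B: y = -2x

w = train(0.0, task_a)
print(round(w, 3))   # ~2.0: the model has learned task A

w = train(w, task_b)
print(round(w, 3))   # ~-2.0: training on task B erased task A
```

After the second round of training, nothing in the single weight remembers task A; asking the model about task A's data now gives the wrong answer, which is exactly the behavior researchers are trying to work around.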
Scientists have been trying for years to figure out how to work around this memory problem. If they manage to find a solution, AI systems will be able to learn from a new set of training data without overwriting most of what they already know.
Whatever fears we might have of robots rising up and conquering all life on Earth, catastrophic forgetting remains one of the major challenges scientists face. It's one of the main reasons experts don't expect to see human-level AI anytime soon.
However, Irina Higgins, a senior research scientist at Google DeepMind, recently announced that her team has started to crack the code.
She's developed an AI agent, similar to a video game character controlled by an AI algorithm, that can think more broadly and creatively than a standard algorithm. In fact, it can, to some extent, imagine how things it has already encountered would look in a different environment.
Of course, this isn't at the level of human imagination, where we can come up with entirely new mental images; the AI system can only imagine objects it has already seen in other places.