If there’s one thing Google does better than anyone else, it’s using software to make camera features better. Now, a beta feature is rolling out to a limited group of YouTube Stories users which lets creators swap out their background images with nothing more than a phone.

This new “video segmentation tool” doesn’t use any depth-sensing tech; it simply uses the ordinary image to determine where the foreground and background meet. As you might expect, Google managed to do this thanks in part to a neural network. That network uses thousands of labeled images to learn how to identify things like hair, faces, glasses, and shoulders. Once it does that, it applies the backgrounds users pick in real time (at least 30 times each second), as Google details in a blog post.
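The article doesn't describe Google's implementation beyond the segmentation idea, but the final compositing step is straightforward: once a model produces a per-pixel foreground mask, the chosen background is blended in wherever the mask says "not the subject". Below is a minimal, hypothetical sketch of that compositing step using NumPy and synthetic stand-in data; the function name and toy images are assumptions, not Google's code.

```python
import numpy as np

def swap_background(frame, mask, background):
    """Composite `frame` over `background` using `mask` (1.0 = foreground).

    `frame` and `background` are HxWx3 images; `mask` is an HxW float array
    such as one produced by a segmentation model (hypothetical here).
    """
    mask = mask[..., None]  # add a channel axis so it broadcasts over RGB
    return (mask * frame + (1.0 - mask) * background).astype(frame.dtype)

# Toy 2x2 RGB frames: the mask marks the left column as "subject".
frame = np.full((2, 2, 3), 200, dtype=np.uint8)       # bright subject pixels
background = np.zeros((2, 2, 3), dtype=np.uint8)      # black replacement bg
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])

out = swap_background(frame, mask, background)
# Left column keeps the subject's pixels (200); right column becomes the
# new background (0).
```

A real pipeline would run the segmentation model per frame (which is where the 30-times-a-second budget matters) and typically soften the mask edges to avoid a hard cutout around hair.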