Facebook and Instagram to label all fake AI images

Meta, the owner of Facebook and Instagram, plans to identify and label AI-generated images to tackle the rise of deepfakes

Meta will start labeling AI-generated images on Facebook, Instagram, and Threads in an effort to keep its platforms transparent.

The tech giant already applies a visible watermark to images created by its Imagine AI engine. Going forward, it will do something similar for images from third-party sources such as OpenAI, Google, and Midjourney, to name a few. It is unclear exactly what these labels will look like, but based on the announcement post, they may simply be the phrase "AI Info" next to the generated content. Meta notes that this design is not final, implying it may change by the time the update rolls out.

Meta plans to develop technology that can recognize and label images created by other companies' artificial intelligence (AI) tools.

It will be available across Facebook, Instagram, and Threads.

Meta already labels AI images created by its own tools. It says it expects the new technology, which it is still developing, to build "momentum" across the industry for tackling AI-generated fakery.

In addition to visible labels, the company says it is developing technology to "identify invisible markers" in images created by third-party generators. Imagine AI does this as well, embedding watermarks in its content's metadata. The goal is a distinct tag that cannot be stripped out by editing software. Meta says other platforms plan to do the same and need a mechanism for detecting this tagged metadata.
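To make the idea concrete, here is a minimal sketch, assuming Python with the Pillow library, of what a crude metadata check could look like: scan a file's EXIF "Software" field for strings that name known generators. This is not Meta's detector, and the AI_GENERATOR_HINTS list below is hypothetical; real provenance signals, such as IPTC's DigitalSourceType field or C2PA manifests, require dedicated parsers.

    # Minimal sketch, not Meta's detector: check a file's EXIF "Software"
    # field for strings that name known AI image generators.
    from PIL import Image
    from PIL.ExifTags import TAGS

    # Hypothetical generator names; the markers Meta actually reads are not public.
    AI_GENERATOR_HINTS = {"dall-e", "midjourney", "imagine", "stable diffusion"}

    def looks_ai_generated(path: str) -> bool:
        exif = Image.open(path).getexif()  # empty if the file carries no EXIF
        for tag_id, value in exif.items():
            if TAGS.get(tag_id) == "Software" and isinstance(value, str):
                return any(hint in value.lower() for hint in AI_GENERATOR_HINTS)
        return False

    print(looks_ai_generated("photo.jpg"))  # "photo.jpg" is a placeholder path

The weakness of this approach is exactly the one Meta flags: metadata like this can be rewritten or stripped by editing software, which is why the company wants markers that survive tampering.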

Audio and video labeling

So far, everything has focused on labeling images, but what about AI-generated audio and video? Google's Lumiere can create strikingly lifelike clips, and OpenAI is working on adding video generation to ChatGPT. Is there anything in place to detect these more advanced forms of AI content? Well, sort of.

Meta admits that it cannot currently recognize AI-generated audio and video at the same level as images; the technology simply isn't there yet, although the industry is moving "towards this capability". Until then, the company will rely on the honor system: it will require users to disclose whether a video clip or audio file they upload was created or altered with AI, and failure to do so will incur a "penalty". Furthermore, if a piece of media is realistic enough to risk misleading the public, Meta will attach "a more prominent label" with critical information.

Future Updates

Meta is also aiming to improve its own first-party tools. The company's AI research division, FAIR, is developing a new kind of watermarking technology called Stable Signature. Invisible markers stored in metadata can be stripped from AI-generated media; Stable Signature is meant to prevent this by embedding the watermark directly into the "image generation process". On top of that, Meta has begun training several large language models (LLMs) on its Community Standards so the AIs can assess whether a given piece of content violates policy.
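To illustrate why a watermark baked into the pixels is harder to remove than one stored in metadata, here is a toy sketch in Python with NumPy that hides a bit string in the least significant bit of each pixel. This LSB scheme is only the simplest pixel-domain analogue, not Meta's Stable Signature, which works inside the generative model itself.

    # Toy illustration, not Stable Signature: a mark written into pixel
    # values survives metadata stripping, because it travels with the image.
    import numpy as np

    def embed_bits(pixels: np.ndarray, bits: str) -> np.ndarray:
        flat = pixels.flatten()  # flatten() returns a copy, safe to modify
        for i, b in enumerate(bits):
            flat[i] = (flat[i] & 0xFE) | int(b)  # overwrite the lowest bit
        return flat.reshape(pixels.shape)

    def extract_bits(pixels: np.ndarray, n: int) -> str:
        return "".join(str(p & 1) for p in pixels.flatten()[:n])

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
    marked = embed_bits(img, "10110011")
    assert extract_bits(marked, 8) == "10110011"  # recoverable from pixels alone

Because the mark lives in the pixel values rather than in a header, wiping the file's metadata does not erase it; that is the property Stable Signature aims for in a far more robust form.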

Expect the labels to start appearing across Meta's platforms in the coming months. The timing of the announcement is unsurprising given that 2024 is a major election year for numerous countries, most notably the US, and Meta wants to keep disinformation from spreading on its platforms as much as possible.

We contacted the company to learn more about the penalties users may face if they fail to label their posts appropriately, and whether it intends to mark images from third-party sources with a visible watermark. This story will be updated later.
