Why is DALL·E 3 so restrictive? This is part of the tutorial for the Artificial Intelligence Images and Prompts FB Group, from the Group Admins.

This post shows a summary, made by Microsoft Copilot, of a PDF that OpenAI published about DALL·E 3, specifically the part of the document that explains why DALL·E 3 is so restrictive.

The original document name is: DALL_E_3_System_Card.pdf


"The algorithm shown in this part of the document is a mitigation stack for the DALL·E 3 system, which is a set of techniques and policies to reduce the potential harms and risks of using generative AI models". 


The algorithm is important because it is designed to:

  • Filter out explicit and inappropriate content from the training data and the generated images, such as graphic sexual and violent content, hate symbols, and personal information.
  • Reduce model bias and increase data diversity by adjusting the thresholds and specificity of the filters, especially for images of women, who are often over-represented in sexualized imagery.
  • Refuse prompts, or decline to generate images, that may violate the usage policy or content guidelines, such as those involving sensitive topics, public figures, or branded objects.
  • Rewrite the input text to make it more effective and compliant for generating images, for example by adding details, removing names, and using generic terms.

The algorithm aims to ensure that the DALL·E 3 system is safe, trustworthy, and responsible, and that it respects the privacy and dignity of the users and the subjects of the images. 🙏
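To make the idea more concrete for readers who also write code, here is a minimal, purely illustrative sketch of what a mitigation stack of this kind could look like in Python: check the prompt against a blocklist, rewrite it into more generic terms, generate, then run an output check. Every name, keyword list, and rule in the sketch is a made-up assumption for illustration only; it is not OpenAI's actual implementation.

BLOCKED_TERMS = {"graphic violence", "hate symbol"}   # hypothetical refusal list
PUBLIC_FIGURES = {"some celebrity"}                    # hypothetical rewrite list

def is_disallowed(prompt: str) -> bool:
    # Step 1: refuse prompts that contain disallowed content.
    text = prompt.lower()
    return any(term in text for term in BLOCKED_TERMS)

def rewrite_prompt(prompt: str) -> str:
    # Step 2: rewrite the prompt to be more compliant, e.g. replace
    # named public figures with a generic description.
    for name in PUBLIC_FIGURES:
        prompt = prompt.replace(name, "a person")
    return prompt

def generate_image(prompt: str) -> str:
    # Placeholder for the actual image-model call.
    return f"<image generated from: {prompt}>"

def output_is_safe(image: str) -> bool:
    # Step 3: placeholder for an output classifier that screens the image.
    return True

def mitigated_generate(prompt: str) -> str:
    if is_disallowed(prompt):
        return "Refused: the prompt violates the content guidelines."
    safe_prompt = rewrite_prompt(prompt)
    image = generate_image(safe_prompt)
    if not output_is_safe(image):
        return "Refused: the generated image failed the output filter."
    return image

print(mitigated_generate("A caricature of some celebrity painting a robot"))

In the real system each step is far more sophisticated (trained classifiers, filtered training data, and a language model that rewrites prompts), but the chain of checks is the same idea the bullet points above describe.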


Image created on MS Copilot. Prompter: RubenTainoAI. Prompt: Make an AI image of an Exaggerated Caricature-realistic of Eugenio Salvador Dalí Painting a Robot named DallE, add the name "DallE" on the Canvas, in the Style of Extreme Caricature.
