Using DALL-E 2 in the real world
DALL-E 2 isn’t available for public use yet, but once it is, how could you use it in the real world?
Photo editing could become much quicker and easier with DALL-E 2. It would cut down the time spent on editing tasks by seamlessly changing your imagery almost instantly. And it would enable non-technical users to get creative without the need for any specialist editing skills.
From a designer’s perspective, DALL-E 2 could be a great alternative to stock photography. Instead of trawling through image libraries to find the right picture, you could just ask for what you need and voila! This would also be useful for bloggers, or anyone with a website, looking for a quick way to generate images to sit alongside their content. Cosmopolitan magazine recently took this a step further by using DALL-E 2 to create the world’s first artificially intelligent magazine cover.
Been on holiday but come home to realise that some of your favourite photos are duds? Why not use DALL-E 2 to recreate those special moments? After all, the memories are there; you just don’t have an accurate reflection of them to share yet. That’s exactly what this guy did, mixing his real holiday snaps with DALL-E 2 ones in a Facebook album and seeing if anyone spotted the difference.
An exciting evolution of DALL-E 2 could lie in the gaming world. Creating virtual worlds is a laborious task - and we all know that time is money. Developers could use DALL-E 2 to speed up the creation of these worlds. Games would get to market quicker, with the potential to save a huge amount of money in development.
Seeing as we’re talking about virtual worlds, it’d be remiss not to mention the Metaverse. In the broadest of terms, the Metaverse is an immersive 3D version of the internet. An advanced version of DALL-E 2 could be developed to help everyday users create their own space in the Metaverse.
Ethical challenges of text-to-image generators
As impressive as it is, DALL-E 2 doesn’t come without its flaws. And the same can be said for other text-to-image generators like Google’s Imagen.
The core limitation of technologies like DALL-E 2 is bias in the images they generate.
Think about it. DALL-E 2 learned the link between images and their labels from millions of images scraped from the internet.
Negative stereotypes, social bias and racism have all been fed into the model. DALL-E’s outputs are limited by its inputs. And a poorly curated data set will clearly result in images that automate discrimination.
OpenAI is aware of the ethical issues with DALL-E 2. To combat them, it has put together a ‘red team’ - a group of external experts who look for limitations in DALL-E 2 before it’s made publicly available.
Initial ‘red team’ findings, however, aren’t promising. Early tests have shown that DALL-E 2 leans toward generating images of white men, reinforces racial stereotypes and over-sexualises images of women.
More needs to be done to tackle these biased outputs to ensure that images generated by DALL-E 2 do not have a negative societal impact.
Is art really art without a human touch?
Bias aside, there’s no doubt that the capabilities of DALL-E 2 are impressive.
Because it composes images in a way that makes sense to us, it feels like there’s real imagination and thought behind the process.
But can we call it art?
The reality is that there is no emotion behind DALL-E 2’s images.
There’s no deep and meaningful story behind the mad scientist teddy bears below. Or at least, not one that can be understood by the machine that created them. (Unless you’re of the school of thought that AI is becoming sentient.)