Stability AI has announced Stable Diffusion 3, the latest and most powerful version of the company’s image-generating AI model. Although details are scant, the announcement is clearly an effort to compete with the hype around OpenAI’s and Google’s recently announced models.
We’ll do a more technical analysis of all this soon, but for now you should know that Stable Diffusion 3 (SD3) is based on a new architecture and will work on a wide variety of hardware (although you’ll still need something robust). It’s not released yet, but you can sign up for the waiting list here.
SD3 uses an updated “diffusion transformer” architecture, a technique introduced in 2022, revised in 2023, and now reaching the point of scalability. Sora, OpenAI’s impressive video generator, apparently works on similar principles (the paper’s co-author, Will Peebles, went on to co-lead the Sora project). SD3 also employs “flow matching,” another new technique that improves quality without adding too much overhead.
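The announcement gives no technical detail, but the core idea behind flow matching (in its common rectified-flow form) can be sketched in a few lines. This is a toy illustration, not Stability’s implementation: the function names and the linear interpolation path are assumptions drawn from the published flow-matching literature, where a model is trained to predict the constant velocity along a straight path from noise to data.

```python
import numpy as np

def flow_matching_pair(x0, x1, t):
    """Linear path used in (rectified) flow matching:
    x_t = (1 - t) * x0 + t * x1, with target velocity u = x1 - x0.
    A model v_theta(x_t, t) is trained to regress this target."""
    x_t = (1.0 - t) * x0 + t * x1
    u = x1 - x0  # constant along the straight path
    return x_t, u

def flow_matching_loss(v_pred, u):
    """Mean squared error between predicted and target velocity."""
    return float(np.mean((v_pred - u) ** 2))

# Toy check: x0 plays the role of noise, x1 of a data sample.
rng = np.random.default_rng(0)
x0 = rng.standard_normal(4)
x1 = np.ones(4)
x_t, u = flow_matching_pair(x0, x1, t=0.5)

# A model that predicts the target velocity exactly has zero loss.
print(flow_matching_loss(u, u))  # 0.0
```

The appeal over classic diffusion training is that the regression target is simple and the sampling path is straight, so fewer inference steps are needed; the claimed quality-per-cost improvement in SD3 presumably comes from this property.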
The suite of models, intended to run on a variety of hardware, ranges from 800 million parameters (fewer than the commonly used SD 1.5) to 8 billion (more than SD XL). You’ll probably still want a serious GPU and a setup for machine learning work, but you’re not limited to APIs the way you typically are with OpenAI’s and Google’s models. (For its part, Anthropic hasn’t publicly focused on image or video generation, so it’s not really part of this conversation.)
Those capabilities are still theoretical, but it seems there are no technical barriers to including them in a future release.
Of course, it’s impossible to compare these models, as none have actually been released and we have only competing claims and cherry-picked examples to go by. But Stable Diffusion has a definite advantage: a presence in the zeitgeist as the model for producing any type of image, anywhere, with few intrinsic limitations in method or material. (In fact, SD3 will almost certainly usher in a new era of AI-generated porn once people get past its safety systems.)
It seems that Stability wants Stable Diffusion to be the white-label generative AI you can’t do without, rather than the boutique generative AI you’re not sure you need. To that end, the company is also upgrading its tooling to lower the barrier to entry, though like the rest of the announcement, those improvements are left to the imagination.
Interestingly, the company put safety front and center in its announcement, saying:
We have taken and continue to take appropriate steps to prevent the misuse of Stable Diffusion 3 by bad actors. Security starts when we start training our models and continues during testing, evaluation, and deployment. In preparation for this early preview, we have introduced several security measures. By continuously collaborating with researchers, experts, and our community, we sincerely hope to make more innovations as we approach the public release of the model.
What exactly are these safety measures? No doubt the preview will feature them to some extent, and the public release will be further refined, or censored, depending on your perspective on these things. We’ll know more soon, and in the meantime we’ll dig into the technical side of things to better understand the theory and methods behind this new generation of models.