Sora 2 Just Changed AI Video Forever
OpenAI adds sound + physics, and now it’s a reality engine. Plus: a no-code SaaS tutorial + fresh design inspo.

Happy Monday, creative family, and welcome to Logiaweb Weekly.
This week’s design intelligence briefing reveals:
🧪 What I'm Building: Behind the Scenes
🚨 Big News: OpenAI Announces Sora 2
🤖 Design Inspiration: Animated UI Design
🛠️ Tutorial of the Week: How to Build a Profitable SaaS Without Coding
💡 Prompt of the Week: Immersive Hyper-Cinematic 3D Visuals
🧪 What I'm Building: Behind the Scenes
I landed in Bali this week, and honestly, it feels great to be back. The environment here is perfect for working hard while keeping balance outside of work.
On the studio side, two new clients signed on: a full branding project and a website redesign. Both are exciting challenges that will keep us busy in the coming weeks.
I’m also working on the less glamorous (but crucial) part: setting up systems to scale. If the backend isn’t solid, growth can get messy fast. And I want to avoid that at all costs.
Next week I’ll share more about these systems and how they can help anyone building a creative business.

🚨 Big News: OpenAI Announces Sora 2
What's New in Sora 2
The two big upgrades: sounds and physics.
🔊 Sounds — Sora 2 now generates synced audio: dialogue, sound effects, background ambiance. It's no longer silent-film AI; it's got a full soundtrack.
🌍 Physics — Movement finally behaves like the real world. Balls bounce correctly, water flows naturally, characters interact without that weird floaty motion. OpenAI says it's "significantly better at simulating real world physics."
Together, these upgrades make Sora 2 feel less like a video generator and more like a simulated reality engine.

How the App Actually Works
The Sora app is basically TikTok, but for AI-generated content. Here's the flow:
Creating Your Cameo
Do a one-time video and audio recording
This becomes your "cameo" that can be dropped into AI-generated scenes
You control who gets to use your cameo and can revoke access anytime
Remixing Videos
Scroll through a TikTok-style feed of AI-generated clips
See a video you like? Tap to remix it with your own face
Add prompts to guide the style, tone, or scenario
Generate 10-second clips that look like you're actually in the scene
The Feed
Personalized recommendations based on your activity, location, and engagement
"Steerable ranking" system you can tell the algorithm in natural language what you want to see more of
Browse trending creations, remix them, or start from scratch

Safety Features
Every video includes a visible moving watermark
C2PA metadata embedded (an industry-standard, tamper-evident signature)
Blocks celebrity fakes and explicit content
You can see every video that includes your cameo, even if someone else made it
Parental controls for teens (content limits, profile restrictions, messaging blocks)
Beyond the App
This isn’t just an iOS app: OpenAI plans to expand access through sora.com (web) and via an API, so developers and creative teams can plug Sora 2 into their own workflows.
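The API isn’t publicly documented yet, so treat the snippet below as a rough sketch of what plugging Sora 2 into a pipeline might look like. The endpoint, model name, and parameters are assumptions modeled on OpenAI’s other generation APIs, not a confirmed interface.

```ts
// Hypothetical sketch only: the real Sora 2 API may differ in every detail.
// The /v1/videos endpoint, the "sora-2" model id, and the "seconds"
// parameter are all assumptions made for illustration.
async function generateClip(prompt: string): Promise<unknown> {
  const response = await fetch("https://api.openai.com/v1/videos", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sora-2", // assumed model identifier
      prompt,          // e.g. "10-second product demo, studio lighting, synced sound"
      seconds: 10,     // assumed clip-length parameter
    }),
  });
  if (!response.ok) throw new Error(`Sora request failed: ${response.status}`);
  return response.json(); // likely a job object you poll until the video is ready
}
```

Whatever the final interface looks like, the shape (prompt in, finished clip out) is what will make it easy to slot into ad, demo, and social-content pipelines.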

The Catch
Right now it's invite-only in the US and Canada. Each user gets four invites to share. It's also iPhone-only, with no Android support yet.
The Bigger Picture
Why This Matters
⚡ Creative acceleration → Ads, product demos, and social content all go from concept to full-motion video in minutes.
🎬 Social deepfakes → This isn't just a creation tool. It's a distribution platform. If people actually start remixing each other's faces into viral clips, this could reshape how we think about identity online.
📱 Accessibility → The barrier to entry just collapsed. Anyone with an iPhone can now generate Hollywood-level video effects.
My Take
This feels like Photoshop for video met TikTok and had a baby.
OpenAI is calling this the "ChatGPT moment for video." If they're right, we're about to see AI-generated clips flood every social feed. The cameo feature specifically is wild: imagine scrolling and seeing your face in videos you never made.
If I ran a creative team, I'd be testing this yesterday. Not because it's perfect, but because whoever figures out AI native video storytelling first will set the culture for the next decade.

🤖 Design Inspiration: Animated UI Design

Tool Used: Midjourney
Create a seamless loop animation of white bold sans-serif text reading “REACT BITS COMPONENTS” arranged in a perfect circle. The circle should slowly rotate clockwise in the center of a solid black background. Use smooth easing and no jitter, maintaining crisp vector-style typography. The composition should feel modern, clean, and geometric, evoking professional branding motion graphics.

Tool Used: AutoAE
Generate a motion graphic where the text “REACT BITS” appears in the center in a pixelated, retro video game style font. The animation should start with glitchy flicker effects or scanline distortion, then lock into place with a strong, blocky presence. Use a solid dark background, crisp white text, and subtle retro digital effects. The vibe should resemble a 1980s arcade title screen intro, clean and bold.

Tool Used: Kling
Animate the phrase “BE CREATIVE ✦ WITH REACT” in bold sans-serif white letters on a black background. The text should undulate smoothly in a sine-wave pattern, moving like a flowing ribbon across the screen. Each character should follow the wave path in perfect sync, creating an elegant kinetic typography effect. Keep the motion fluid and rhythmic, with a looping cycle that feels organic and creative. Minimalist, modern, and clean design.
🛠️ Tutorial of the Week: How to Build a Profitable SaaS Without Coding
Here’s your quick tutorial on launching a SaaS product with zero coding skills, all powered by AI. Want a video version? Check it out here!
Step 1: Generate Your Optimized Prompt
Open ChatGPT.
Write your app idea in plain English. ChatGPT turns it into an optimized prompt, ready for app generation.
Here’s an example (see below):
“I want to build a client portal app for freelancers to manage invoices and send updates.”
Pro Tip: The more detail you give (features, design style, user flow), the cleaner your first draft will be.
Step 2: Build with Rocket.New
Copy that optimized prompt.
Paste it into Rocket.
Instantly, Rocket codes the first version of your app.
No dev team. No waiting weeks. You’ve got a working MVP in minutes.

Step 3: Iterate with AI
Use Rocket’s AI to refine your app.
Example edits you can try:
“Switch design to dark mode.”
“Add a share button so clients can access the portal.”
Updates appear immediately, so you can shape your app as fast as you think.
Pro Tip: Save your favorite iterations as templates for future projects.

Step 4: Add Your Integrations
Make your SaaS powerful with plug-and-play tools:
Resend → Send client invites + automated emails.
Stripe → Collect payments and manage subscriptions.
Supabase → Securely store all app data.
These integrations turn your prototype into a real business app.
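You never have to touch this code yourself, but if you're curious what those integrations roughly do behind the scenes, here's a minimal sketch using the official Resend, Stripe, and Supabase JavaScript SDKs. The table name, price ID, URLs, and sender address are placeholders; the actual wiring Rocket generates for your app will differ.

```ts
// Illustrative only: placeholder table names, IDs, and URLs throughout.
import { Resend } from "resend";
import Stripe from "stripe";
import { createClient } from "@supabase/supabase-js";

const resend = new Resend(process.env.RESEND_API_KEY!);
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);
const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// Store a new client record (Supabase)
export async function addClient(name: string, email: string) {
  const { data, error } = await supabase
    .from("clients") // placeholder table
    .insert({ name, email })
    .select()
    .single();
  if (error) throw error;
  return data;
}

// Create a subscription checkout link (Stripe)
export async function createCheckoutUrl(priceId: string) {
  const session = await stripe.checkout.sessions.create({
    mode: "subscription",
    line_items: [{ price: priceId, quantity: 1 }],
    success_url: "https://yourapp.example/success", // placeholder URLs
    cancel_url: "https://yourapp.example/cancel",
  });
  return session.url;
}

// Send the client an invite email (Resend)
export async function sendInvite(email: string, portalUrl: string) {
  await resend.emails.send({
    from: "Portal <hello@yourapp.example>", // placeholder sender
    to: email,
    subject: "Your client portal is ready",
    html: `<p>Log in here: <a href="${portalUrl}">${portalUrl}</a></p>`,
  });
}
```

The point isn't to write this yourself; it's that these three services cover data, payments, and email, which is exactly what turns a prototype into a real business app.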

Step 5: Publish & Go Live
Hit Launch inside Rocket.
Your SaaS is live and ready for clients.
Share the link instantly or connect it to your custom domain.
Congrats 🎉 You just built, customized, and launched a SaaS without writing a single line of code.
Bonus Pro Tips
Start small: one app, one problem. Expand later.
Use integrations strategically; don’t overload v1.
Position yourself not as “just building apps” but as launching automated businesses powered by AI.
💡 Prompt of the Week: Immersive Hyper-Cinematic 3D Visuals

Graphic Tool Used: Midjourney
Create a hyper-realistic cinematic 3D render of an apple rotating in place. The apple is split into two distinct halves: one side is a natural, juicy red-and-yellow fruit with vivid surface texture, pores, and visible seeds inside a fresh cut; the other side is a seamless chrome mirror surface, perfectly polished and highly reflective, capturing detailed highlights, environment reflections, and subtle distortions like real metal. The apple rotates smoothly on its vertical axis in a continuous loop, placed on a glossy reflective surface that mirrors both the fruit and chrome textures. Use ultra-realistic studio lighting with high dynamic range (HDRI), casting soft shadows, subtle caustics, and precise light reflections. Depth of field should be shallow, with cinematic focus shifts that emphasize the transition between organic fruit and mirrored chrome. Render in photorealistic, ultra-detailed quality at 8K resolution, with ray tracing, global illumination, and physically based rendering (PBR) materials. The animation should feel like a product showcase shot, polished, elegant, seamless, with smooth motion, realistic reflections, and professional studio-grade aesthetics.
Wrapping Up
That’s a wrap for today. Could you do me a 30-second favor?
👉 [Survey] – What do you want me to cover next?
Catch you next Monday,
— Adrien

Adrien Ninet