Photography | Simcoemedia – Video, design and photography by Peter Simcoe (https://www.simcoe.co.uk)

5 Useful Photoshop Features For Designers (23 Apr 2026) – https://www.simcoe.co.uk/5-useful-photoshop-features-for-designers/

Adobe Creative Cloud is widely acknowledged as the industry-standard software suite for creative designers across the globe. It features design essentials such as Photoshop, Illustrator, InDesign and Dreamweaver, and now also gives users access to a range of generative AI technologies – including Adobe’s Firefly, Google’s Nano Banana and many more – enabling the creation of both images and generative AI video.

Photoshop is a crucial part of the designer’s toolkit and is often used to fine-tune colour in photography ready for publication, create web design mockups, assemble composites for a product launch or produce storyboards for film and video. Here are 5 (relatively!) new tools you may not have tried within Photoshop, designed to streamline workflows and open up new creative opportunities:

1. Generative Fill and Generative Expand

Why Designers Like This

The ability to quickly replace objects, expand an image vertically / horizontally or alter the colour of specific areas opens up new creative options and streamlines processes. It is possible to switch AI models depending on whether you need cleaner product-style results, more “illustrative” outputs, or different texture realism — all without leaving Photoshop.

Using Generative Fill And Expand

Select an area > Generative Fill in the Contextual Task Bar > pick an AI model. For Generative Expand, use the Crop tool > expand to the required proportions, then use the Contextual Task Bar.

Note: Some models do use credits to generate results.

2. Harmonize For Fast Compositing

Why Designers Like This

Harmonize auto-matches colour, lighting and shadows when adding objects to a scene (best used with transparency). It makes these adjustments in seconds to ensure the final composited products, buildings or vehicles look like they were part of the original scene.

Using Harmonize

Place the subject on a pixel layer > select Harmonize from the Contextual Task Bar > choose a variation from the available generations.

3. Generative Upscaling With Topaz Bloom

Why Designers Like This

Topaz have led the way in upscaling technologies for the last few years, and Adobe’s integration of Generative Upscale using Topaz upscaling algorithms enables users to enlarge images with ease. This feature is useful for rescuing small client assets (logos in a photo, old campaign images, cropped details) by enlarging them to a more usable resolution.

Using Generative Upscale

Go to Image > Generative Upscale.

Note: Upscaling is currently limited to a maximum of 6144px. It is possible to choose between Adobe’s Firefly upscaling and Topaz Bloom (generative AI) or Gigapixel (non-generative).

4. Object Selection And Remove Distractions

Why Designers Like This

Adobe improved the selection tools by allowing images to be processed in the cloud, producing cleaner selections that need less refinement work when masking. This is a real time saver for designers. They have also created a new Find Distractions tool that will auto-detect wires, cables or background people, allowing for fast scene cleanup.

Using Find Distractions

Use the Remove Tool > Top Menu Bar > Find Distractions

Menu Items in Photoshop

5. New Color & Vibrance Adjustment Layer

Why Designers Like This

This new feature provides a non-destructive way to handle temperature, tint, vibrance and saturation control as a layer adjustment. It retains flexibility during the editing process, where previously these changes had to be committed to the image.

Using Colour And Vibrance Adjustment Layer

Layers panel > New Adjustment Layer > Color and Vibrance

More About Generative AI

Why Every Freelancer Should Experiment With Generative AI
AI And The Future Of Media Production

Understanding Video and Image Prompts For Generative AI (18 Mar 2026) – https://www.simcoe.co.uk/understanding-video-image-prompts-for-generative-ai/

Writing Your Generative AI Prompt

Methods for writing generative AI prompts vary from platform to platform in terms of structure, style and tone. Different models respond more accurately to different kinds of input, levels of detail and different ways of presenting your requests. A prompt that works well for Firefly Image may not be the best way to approach Midjourney. In the same way, video tools such as Veo or Kling benefit from prompts that describe movement, camera behaviour, environment and mood rather than a static image alone. If you approach every model with the same sentence pattern, the results will vary wildly. Generative AI can be a little hit and miss at the best of times – so a solid input technique can improve efficiency and effectiveness dramatically.

Prompting Example

A straightforward example makes this clearer. Imagine you want to generate a moody futuristic scene of a man walking through a rain-soaked neon alley at night.

For Adobe Firefly Image, a prompt such as man walking through neon alley, rain, cinematic lighting, futuristic city, night fits a simple subject-led style.

For Midjourney, something shorter and more visually weighted may yield solid results, such as lone man, neon alley, rain, futuristic, cinematic, moody --ar 16:9.

For Veo, the same idea benefits from a more filmic structure: Cinematic live-action shot. A lone man in a long coat walks slowly through a rain-soaked neon alley at night. Low tracking camera, reflections on wet pavement, distant siren, soft electrical hum, tense atmosphere.

A Brief Guide To Prompting

Below is an outline of how to write the most effective prompts for the most popular consumer generative AI tools. As previously mentioned, the process is inherently a little random at the best of times, but knowing how the systems work can get you closer to the results you want.

From Guide To Practical Tool

To make this process even easier to use in practice I created a prompt construction tool called Prompt Workbench at prompt.simcoe.co.uk. It is designed to guide users through the process rather than presenting a static article. The written guide explains the logic behind prompting, while the online tool helps apply it in a more practical and structured way.

Generative AI Prompt Guide

Generative AI Images

Adobe Firefly

For Adobe Firefly, write prompts in simple, direct language built around a clear subject plus descriptors and keywords. Adobe advises using at least three words, avoiding filler verbs like “generate” or “create,” and being specific rather than vague. The system responds well to clean wording rather than long rambling instructions.

Subject + Descriptors + Keywords + Style / Medium + Setting

Nano Banana 2

Build prompts around style, subject, setting, action, and composition, then add production details such as aspect ratio, output format, or exact text in quotation marks when needed. Nano Banana is especially useful when you want accurate text rendering, grounded real-world knowledge, diagrams and localised visuals.

Style + Subject + Setting + Action + Composition + Text / Output Constraints

Midjourney

Short and simple prompts usually work best. Brief prompts let Midjourney’s style engine do more of the creative filling-in. Be precise with words and define subject, medium, environment, lighting, color, and mood. Focus on what you do want, not what you do not want, then use parameters at the end of the prompt for things like aspect ratio and other controls.

Subject + Medium / Style + Environment + Lighting / Colour + Mood + Parameters

Flux

The most important elements should come first: main subject, key action, critical style, then essential context. Medium-length prompts are often the sweet spot, with longer prompts reserved for complex scenes. FLUX does not use negative prompts in the usual way, so describe the desired result positively. For photorealism, specify cameras, lenses, film stocks, and lighting.

Subject + Action + Style + Context

GPT Image

For GPT Image, the best method is structured prompting. OpenAI’s current guidance is to keep a consistent order and to include the intended use, such as ad, UI mockup, infographic, poster, or product image. For more complex jobs, split the prompt into labeled sections rather than one dense paragraph. Be explicit about framing, angle, lighting, layout, and text placement.

Background / Scene + Subject + Key Details + Constraints + Text / Layout

Generative AI Video

Adobe Firefly

Describe the camera perspective and movement first, then the character, what they are doing, where they are, and finally the mood or visual treatment. Camera angles matter, and Adobe warns that too many subjects can confuse the model, so it is usually better to keep the scene focused.

Shot Type Description + Character + Action + Location + Aesthetic

Google Veo

Veo 3 can generate dialogue and respond to explicit sound design cues, so prompts can describe what is heard. The best prompts usually establish the visual style and tone early, then build the world with sensory detail and clear character actions. Treat the prompt like a miniature director’s brief, not just a visual idea.

Style / Tone + Subject / Character + Setting + Action + Camera Direction + Audio / Dialogue

Kling

Kling simplifies this even further into Subject + Movement: the still image already carries most of the visual information, so the prompt should focus mainly on motion. Good Kling prompts are concrete, cinematic, and observable: describe what is moving, how the camera behaves, what the environment is, and what kind of light defines the shot.

Subject + Movement + Scene + Camera Language + Lighting

Runway

Start simple, add detail strategically, and use positive, concrete language. Runway separates prompts into visual components and motion components. For text-to-video, describe what we see and how it behaves. For image-to-video, the prompt should focus mostly on motion, camera work, timing, direction, and temporal progression.

Camera / Shot + Subject + Motion / Action + Environment + Temporal Progression

Dream Machine

Prompt as if you are describing the shot naturally to another person. Dream Machine also encourages iterative refinement using built-in tools such as Modify, Styles, Character Reference, Visual Reference, Camera Motion, Extend, Keyframes, and Loop. A strong workflow is to begin with a broad idea, then make specific changes step by step instead of overloading one prompt.

Subject + Action + Setting + Style / Mood + Camera Motion + Refinement

Image Prompt Summary

Firefly Image: Subject + Descriptors + Keywords
Nano Banana 2: Style + Subject + Setting + Composition
Midjourney: Subject + Style + Environment + Parameters
FLUX: Subject + Action + Style + Context
GPT Image: Scene + Subject + Details + Constraints

Firefly Video: Shot Type + Character + Action + Location + Aesthetic
Veo: Style + Subject + Setting + Action + Camera + Audio
Kling: Subject + Movement + Scene + Camera + Lighting
Runway: Camera + Subject + Motion + Environment + Progression
Dream Machine: Subject + Action + Setting + Mood + Camera + Refinement
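These recipes are simple enough to automate. Below is a minimal sketch in Python of how the image templates above could be encoded as a reusable prompt builder – the IMAGE_TEMPLATES dictionary and build_prompt helper are hypothetical conveniences of mine, not part of any vendor API.

```python
# Minimal sketch: encode the image-prompt recipes above as ordered
# component lists, then join whatever components have been supplied.
IMAGE_TEMPLATES = {
    "firefly":     ["subject", "descriptors", "keywords"],
    "nano_banana": ["style", "subject", "setting", "composition"],
    "midjourney":  ["subject", "style", "environment"],
    "flux":        ["subject", "action", "style", "context"],
    "gpt_image":   ["scene", "subject", "details", "constraints"],
}

def build_prompt(model: str, parameters: str = "", **components: str) -> str:
    """Join the supplied components in the order a given model prefers."""
    ordered = [components[k] for k in IMAGE_TEMPLATES[model] if k in components]
    prompt = ", ".join(ordered)
    # Midjourney-style flags (e.g. --ar 16:9) go at the very end
    return f"{prompt} {parameters}".strip()

# The neon-alley example from earlier, rendered for Midjourney:
print(build_prompt(
    "midjourney",
    subject="lone man in a long coat",
    style="cinematic, moody",
    environment="rain-soaked neon alley at night",
    parameters="--ar 16:9",
))
```

The same pattern extends naturally to the video templates by adding slots for camera, motion and audio.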

Effective Use Of Generative AI In Creative Media Processes (8 Jan 2026) – https://www.simcoe.co.uk/generative-ai-in-creative-processes/

Until recently, creative media production services relied solely upon traditional tools of the trade such as high-end PCs, cameras, tripods, lighting rigs and audio capture equipment. Over the last 2-3 years, Artificial Intelligence (AI) has steadily been integrated into industry-standard software tools such as Adobe’s Creative Cloud suite, providing new opportunities for content generation and refinement, the streamlining of workflows, and creative experimentation across various media.

Here are a few ways AI can add value to your video, audio and graphic design work (note that many of the links provided require subscriptions or the purchase of credits).

1. Storyboarding and Creative Visual Exploration

If your work as a graphic designer, photographer or video producer includes concept development, environment design, animation or live action, AI has the potential to become a useful tool for producing concepts, exploring styles and testing motion sequences. Storyboarding can be manually intensive when conveying narrative through detailed drawings, animation and even live action video clips. With some fairly basic groundwork to maximise effective use of AI technologies, such as the use of clearly defined sketches and focused text prompts, it is possible to generate:

  • Atmospheres and mood boards illustrating a theme
  • Environmental concepts and detail
  • Lighting tests
  • Colour palettes
  • Character styles and clothing samples
  • Sophisticated animation or video sequences

Most importantly, AI lets you “audition” ideas at a pace that was previously not possible. This does not replace the need for traditional media production skills – it simply speeds up the process of discovery and refinement.

Harmonise in Photoshop
Adobe Firefly Moodboards
Luma Labs Modify features

2. Editing and Post-Production Assistance

Video and film editors understand how edits make or break the pacing, emotion and overall feel of a video production, including the use of sound, colour palette, special effects and camera angle. AI technologies will not replace human decision-making and intervention any time soon. Efforts to completely automate creative processes using AI tend to descend into cliché, lack nuance and show limited finesse – therefore human intervention and direction will remain for the foreseeable future. However, there are some useful AI editing tools added to software such as Adobe’s Premiere Pro that aim to streamline the editing process with time-saving features that enable AI to:

  • Detect scenes within footage
  • Search within footage by describing what you are looking for
  • Extend footage using generative AI
  • Create automatic captioning or transcription
  • Enhance audio to improve clarity and remove background noise
  • Colour correct footage with greater accuracy
  • Remix the length of music in a video to fit the entire clip

These features are not revolutionary but quietly save hours, especially if you are a one-person production studio juggling multiple roles or rescuing footage from visual or audio issues during the capture process.

Adobe Premiere Pro new AI features
DaVinci Resolve new AI features

3. Sound Effects, Atmospheric Ambience and Score Creation

Score creation using tools such as Suno.com or Udio.com provides some of the most interesting and accessible methods for creating AI generated media content – they are easy to try out but also, with some practice, useful for creating unique soundtracks for your documentary or film production without the need for expensive royalty agreements.

With AI tools you can easily generate:

  • Ambient drones and pads
  • Industrial textures
  • Lo-fi soundscapes
  • Tension-building atmospheres
  • Clean voiceovers (if you don’t have access to talent)

This is a huge leap forward in media production because, unlike traditional audio libraries, you are not limited by the content and style someone else has recorded. You can shape the sound to the style of your world through text prompts, by uploading a simple melody guide created with a single instrument or even a sound effect you simulate with your own voice.

Suno.com AI Music Creator
Udio.com AI Music Creator
Adobe Firefly’s Sound Effects feature
Adobe Podcast (cleaning voice audio)

Losing Your Creative Voice

Many designers are reluctant to use AI tools as part of their workflow because they feel it threatens their craft or devalues their skills. The real challenge and craft is keeping your voice at the centre of the work. Anyone can press a button and generate something using the tools described in this article, but remaining in control of the creative direction is where skill and judgement still matter.

The best use of generative AI is as reference material, a thinking partner, a way to test ideas or potentially as a filler when resources are thin – but always return to your own judgement, taste and aesthetic. AI should not provide a style in itself; it should amplify and focus the one you already have.

AI should become part of the creative foundation — not a novelty. A tool or an assistant when brainstorming concepts, experimenting visually, gathering atmospheric references, refining early sequences and developing the “feel” of a piece.

Ultimately, AI never completely replaces the need for human creativity.
It extends it.

Find out more about AI related topics on this site:
AI and The Future Of Media Production
Why Every Freelancer Should Experiment With Generative AI

YouTube Channels For Creative Professionals (21 Sep 2025) – https://www.simcoe.co.uk/youtube-channels-for-creative-professionals/

YouTube is a valuable learning platform for creative media producers. Whether you are starting a career in graphic design, an experienced motion graphics professional or a filmmaker looking to explore the potential of AI, YouTube offers tutorials, opinion and advice from industry professionals. One of the key advantages of YouTube is the rapid response of content creators to emerging technologies such as Generative AI, delivering the type of immediacy and detail that helps creatives stay ahead of the curve. Many professionals within the fields of design, video and photography freely share their expertise, offering tutorials, behind-the-scenes insights, and industry trends—all accessible from a single video platform. There is such a wealth of useful content on YouTube that compiling it would generate an almost endless list. This article provides a short selection, with examples, of the best YouTube channels that every creative media professional should consider in their development journey.

Graphic Design

Satori Graphics offers high-quality tutorials focused on graphic design basics, current industry trends and creative styles. With so many different subjects from logo design to colour theory to Adobe software hints and tips, there is something for all professionals in the industry. Other notable channels include The Futur and Will Paterson.

Motion Design

Ben Marriott produces high-quality Adobe After Effects tutorials for his YouTube channel. These are designed to help motion designers master animation techniques. He covers the fundamentals of motion design in an easy-to-understand manner as well as advanced techniques. Other useful channels to explore include Evan Abrams and School of Motion.

Generative AI

Curious Refuge presents the latest news and updates from the world of AI, predominantly the use of Generative AI in video production and photography, including the use of Midjourney, Runway, Kaiber and Luma Labs Dream Machine to name a few. For broader coverage of Generative AI see Matt Wolfe. Theoretically Media is also worth a look.

UI/UX and Web Design

Flux Academy is a resource for web designers and UX/UI designers. The channel covers software from Figma to Framer to Adobe Illustrator as well as design techniques, hints and tips and web design trends. DesignCourse covers similar topics whilst providing a unique take on learning software, developing techniques and understanding design trends.

Traditional Animation

Toniko Pantoja’s channel provides a wealth of knowledge and experience from the founder of Brushtail Works Studios. He provides guidance and assistance for animators looking to develop and establish their animation style whilst also highlighting common challenges. A useful companion to this channel is Draw Like A Sir which focusses upon drawing characters.

Film Making

Videos from the Standard Story Company are both entertaining in their delivery and informative. Topics include writing compelling stories, finding locations for your next shoot and producing short films within a specific genre. StudioBinder is another channel worth exploring, particularly for its ‘Advanced Filmmaking Techniques’ series.

Photography

Whilst it is possible to find videos on the basics of digital photography, many photography channels lean towards individual expression, style and advanced techniques. Mango Street provides a useful array of topics including editing in Adobe Lightroom, creating images with your iPhone and lighting techniques. The Photographic Eye is also worth a look.

Character Animator

Finally, let’s take a look at Adobe’s Character Animator software with Okay Samurai’s detailed guide. This channel provides a wealth of inspirational examples, guidance and tutorials on how to get the best from Adobe Character Animator, along with a few other Adobe software hints and tips. See Simcoemedia’s Character Animator music video.

Starting A Drone Photography and Video Business In The UK (14 Jun 2025) – https://www.simcoe.co.uk/starting-a-drone-photography-and-video-business-in-the-uk/

Drones are becoming increasingly sophisticated and relatively low cost with 4K resolution video capture as standard. Whilst this provides opportunities for video professionals and enthusiasts to capture exciting content there are rules and regulations restricting how these Unmanned Aerial Vehicles (UAVs) are operated.

This post is not intended to be a comprehensive legal guide or compliance checklist, but it does cover many of the key areas you should examine when considering using a drone for commercial purposes in the UK. The Flight Reel video highlights a few drone project examples from Simcoemedia.

1. Legal Requirements For Drones

It is crucial that you understand the guidelines and rules to ensure you remain within the law. If you decide to take your drone abroad, you must ensure you are compliant in those countries too. This may involve registering your drone with the local aviation authority, taking relevant drone tests and confirming that your insurance covers operation in the relevant location.

According to the UK’s Civil Aviation Authority (CAA), as of 2 April 2025, the basic guidelines are as follows:

If your drone has a camera (unless it is a toy) or weighs 250g or more then you need to register with the CAA. You need to renew this registration every year. This is a registration of you as the operator rather than the drone itself. Anyone flying a drone weighing 250g or more needs to pass a test and get a flyer ID from the CAA. This is free and online. Regardless of whether you legally need a flyer ID we strongly recommend that you do the learning and test as it gives you valuable information on flying your drone safely. If you already have a flyer ID that is still valid, you don’t need to re-do the test until it expires, although you are required to keep up to date with the new regulations. You can register, get your flyer ID and find more information at register-drones.caa.co.uk

There are however some other rules you must follow should you decide to purchase and fly a drone for business or pleasure:

  • Airspace & Permissions: Ensure that you do not fly into restricted areas and no-fly zones (e.g., airports, urban areas, military zones etc). A useful website highlighting restricted airspace for drones within the UK can be found on the NATS website.
  • Insurance: Public liability insurance is mandatory for commercial operations (Coverdrone and FPV are popular providers). According to the CAA’s drone code:

There is no distinction between flying commercially and flying for pleasure or recreation. This means that an approval just to operate commercially is not required. However, all commercial drone flights require valid insurance cover.

2. Developing A Business Strategy

  • Target Audience: Identify sectors with potential to generate revenue, including real estate, construction, surveying, weddings, events or tourism for example. Each sector poses unique challenges for a drone operator.
  • Pricing Strategy: As with any other business model, consider pricing based on hourly rates, project-based pricing or larger packages. Any cost analysis should include travel, recording and editing of the video. Remember that you also need to cover the costs of setting up your business in the first place – the drone, insurance and CAA fees (a rough pricing sketch follows this list).
  • Competitive Analysis: Research competitors and determine how to differentiate yourself in the marketplace. Creating a showreel of your best work including your own signature video movement and composition combined with striking photography will ensure you stand out.
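To make the cost analysis concrete, here is a rough quoting sketch in Python; every figure is a placeholder assumption rather than a recommended rate.

```python
# Rough quoting sketch for a drone shoot. All figures below are
# placeholder assumptions - substitute your own costs and rates.

def quote_job(hours_on_site: float, hours_editing: float, hourly_rate: float,
              travel_cost: float, annual_fixed_costs: float,
              jobs_per_year: int) -> float:
    """Price a job from labour, travel and a share of fixed costs."""
    labour = (hours_on_site + hours_editing) * hourly_rate
    # Spread the drone, insurance and CAA fees across the expected jobs
    fixed_share = annual_fixed_costs / jobs_per_year
    return labour + travel_cost + fixed_share

# Example: 3h on site, 4h editing, £40/h, £30 travel, £1,200/yr of
# drone, insurance and CAA costs spread over 24 jobs -> £360.00
print(f"£{quote_job(3, 4, 40, 30, 1200, 24):.2f}")
```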

3. Equipment And Technical Considerations

  • Drone Selection: Choose drones that meet your business needs and legal requirements. For example, drones in the sub-250g category, while compromising on quality to a degree, face significantly fewer restrictions than those over 250g.
  • Camera Capabilities: There are a variety of drones available, each with their own capabilities in terms of camera quality, automation (such as Point Of Interest and Precision Landing) and battery life. Ensure that you check out examples of video footage and photography via reviews from reputable sources on video platforms such as YouTube or Vimeo to gauge camera quality and ease of use.
  • Accessories: You will likely need accessories for your drone so invest in extra batteries, ND filters, SD cards, a landing pad, and a controller with a bright screen for use in direct sunlight where necessary.

4. Footage Post-Production

  • Editing Software: Software such as Adobe Premiere Pro, DaVinci Resolve, or Final Cut Pro are suitable for video; Lightroom or Photoshop are common editing tools for photographs. Some drones, such as the DJI Mini 4 Pro for example, are capable of creating High Dynamic Range images.
  • Stabilisation and Grading: High-end drone footage may require colour correction LUTs and stabilisation software prior to delivering the final product.

5. Scalability Of Drone Services

  • Additional Services: Drones are also capable of mapping, 3D modeling (photogrammetry), thermal imaging and cinematography for film/TV. You are likely to require an upgrade to your existing hardware and software to cater for these highly specialised services.

Useful Links

Simcoemedia Aerial Drone Footage

Tattenhall Marina

A collection of aerial footage created for Tattenhall Marina, a marina located near the city of Chester in the UK on the Shropshire Union Canal. See the marina at its finest in late Springtime.

Waverton Arms

Short drone video captures The Waverton Arms from interesting aerial angles and provides an overview of outside facilities including the garden, parking, proximity to the main road and other seating areas.

Final Comments

As mentioned in the first paragraph, this article is designed to provide an overview of the general rules and guidelines associated with owning a drone and operating it commercially. If you are considering adding aerial video and photography to your business then please ensure you follow the drone code.

AI Tutorials For Photographers, Designers And Video Producers (14 May 2025) – https://www.simcoe.co.uk/ai-tutorials-photo-designers-video-producers/

The Simcoemedia shop has been selling 360 images, tutorials, books and t-shirts since summer 2024, providing an outlet for graphic design, AI video experimentation and generative AI 360 image generation. Simcoemedia remains committed to the exploration, experimentation and analysis of AI tools and, with more resources in the pipeline, the shop aims to be a valuable resource for those looking to embrace AI as part of their creative work. This article focusses upon the tutorials written to assist creatives looking to explore these tools.

Tutorials

Applying Styles to 360 Photography Using Midjourney and Magnific

This tutorial examines how AI can transform 360-degree images by applying image styles using Midjourney and Magnific AI. Whether you are looking to enhance architectural shots, landscapes or abstract environments, this guide walks you step by step through the process of enhancing immersive photography using AI-driven tools.

Introduction to Creating AI-Generated Music Videos

AI is revolutionising the way music videos are produced, enabling artists and filmmakers to bring visual storytelling to life without the need for expensive production crews or complex computer graphics. This free tutorial provides a brief history of music videos, explores the potential of AI-generated visuals, and provides practical examples of how Runway Gen 3, Kaiber, and other AI platforms can be used to create unique and engaging music videos.

Creating 360 Images Using Midjourney and Magnific AI

For those interested in creating immersive 360-degree images, this tutorial provides a complete workflow using AI tools. From generating high-quality panoramic scenes to ensuring seamless stitching for a flawless 360 experience, this tutorial guides you through the techniques required to create visually stunning, AI-enhanced environments.

The Future of AI in Creative Media

The fusion of AI and creative media opens up a new world, offering fresh tools for artists and designers looking to streamline the production of creative work. As AI tools continue to evolve, they provide new methods for expression, allowing creatives to push the boundaries of storytelling, photography, and digital artistry. Check out the full range of tutorials at the Simcoemedia Shop.

AI And The Future Of Media Production (2 Mar 2025) – https://www.simcoe.co.uk/future-of-media-production-and-creative-industries/

The US Government has recently announced an investment of 500 billion dollars toward the development of AI technologies. Flanked by some of tech’s top movers and shakers, including Sam Altman of OpenAI and Facebook’s Mark Zuckerberg, Donald Trump announced it to the press in early 2025. Investment on this scale will inevitably provide fertile ground for advancing AI, and some scientists believe that Artificial General Intelligence (AGI) is achievable within the next 10 years. So how could this impact the creative professions? This article provides a brief overview of AI development in early 2025 and offers a few thoughts on how this may affect creative communities in the near future.

What Is Artificial General Intelligence?

Artificial General Intelligence (AGI) describes a machine that can learn, understand and complete any intellectual task on a similar level to a human. The Singularity is the moment when AI surpasses human intelligence and, in theory, these machines become capable of building even more effective and efficient machines. When The Singularity is achieved, AI is considered to be self-aware and capable of independent ‘thought’. I discussed The Singularity a couple of years ago in my article Artificial Intelligence And The Singularity.

Some scientists, such as Ray Kurzweil, believe that AGI is inevitable and we are only a few years away from The Singularity. Others are more sceptical as to whether computers are capable of becoming self-aware – in their view, AI will effectively become a sophisticated emulation of human behaviour, one that appears indistinguishable from a human being yet, on close examination, remains clearly identifiable as a machine. A film reference that comes to mind is the Voight-Kampff test conducted by Deckard (Harrison Ford) on Rachael (Sean Young) in Ridley Scott’s masterpiece Blade Runner (based upon the Philip K. Dick book ‘Do Androids Dream Of Electric Sheep?’). The test is designed to indicate whether the subject is human or android. Deckard is impressed with the level of examination required to confirm that Rachael is an android…perhaps intentionally a nod to the Turing Test, developed in the 1950s to establish the theoretical point at which a machine becomes indistinguishable from a human.

Blade Runner Voight-Kampff Android Test

How Might AI Affect Creative Industries?

Creative industries rely on the skills and experience of people from a variety of backgrounds, including graphic design, video production, 3D VFX, and film production, among others. Some content creators have been exploring AI technologies to streamline their workflows and gain a creative edge over their competition. AI has already started reshaping these fields, offering new ways to generate, edit, and enhance content with unprecedented speed and efficiency.

One of the most immediate impacts has been on image generation and digital art. Platforms like Midjourney, Magnific AI, and Adobe Firefly have provided artists with tools that can generate highly detailed illustrations, concept art, and even photorealistic imagery in seconds. Traditional methods that once took hours or even days—such as sketching, refining, and coloring—can now be automated, allowing artists to iterate rapidly. Photoshop’s Generative Fill, introduced in 2023, further revolutionized workflows by enabling users to manipulate images with simple text prompts. This has led to a democratisation of creative tools, allowing individuals with little to no formal training to create professional-looking visuals. However, it has also raised concerns about originality and the potential devaluation of artistic skills.

In video production, AI-assisted tools like Runway, Kaiber, and Luma Labs’ Dream Machine have begun blurring the lines between live-action footage and AI-generated sequences. Filmmakers and content creators can now generate complex animations, enhance video footage, and even automate tedious editing processes. For instance, Runway’s text-to-video feature allows users to create short film sequences without the need for expensive equipment or extensive VFX expertise. While this is a boon for indie filmmakers and small production teams, some professionals fear it could reduce demand for traditional post-production roles.

AI’s role in music production has also seen growth. Suno and Udio are among the leading AI-driven music generators that can create fully composed tracks from simple prompts. These tools can generate music that mimics various genres, from orchestral scores to Electronic Dance Music (EDM). This has opened doors for independent creators who lack access to professional musicians or studio space. It has also sparked debates around copyright, authenticity, and the ethical implications of AI-generated music competing with human composers.

Hollywood Is Dead?

Popular YouTube channels such as Matt Wolfe and Curious Refuge frequently discuss how generative AI video could signal the end of traditional filmmaking. Some argue that AI-generated content will make professional studios and production crews obsolete because film producers can now create high-quality clips from a simple text prompt. However, I’m not convinced Hollywood-level production is at risk. AI video tools like Runway Gen-3 have made impressive strides in generating short clips with minimal effort, but generative AI still struggles with consistency, coherence, and the ability to tell complex stories. While AI-generated content may prove disruptive in areas like advertising, social media content, or even indie filmmaking, the idea that AI alone could replace blockbuster films, nuanced performances, and the artistry of cinematography seems far-fetched—at least for now.

This situation mirrors the evolution of gaming and computer-generated imagery (CGI). As a design and technology student, I remember discussions about how gaming would be indistinguishable from reality around the year 2030. While modern graphics engines like Unreal Engine 5 have brought us photorealistic visuals, the human eye can still detect the difference between computer-generated environments and footage shot in the real world. The same applies to AI-generated video—despite its rapid improvements, it remains fundamentally different from real-world cinematography. Take the early days of CGI in Hollywood as a case in point. When Tron (1982) experimented with computer graphics, it was groundbreaking, but clearly recognisable as artificially generated. Over time, computer generated imagery evolved into a powerful filmmaking tool, enhancing films rather than replacing traditional production. AI-generated video is likely to follow a similar path: not as a complete replacement for Hollywood, but as a tool for filmmakers to augment their craft, streamline workflows, and explore new creative possibilities.

The real question is not whether AI will kill Hollywood, but how filmmakers will adapt. Just as green screens, motion capture, and CGI didn’t erase practical effects but reshaped them, AI will challenge traditional production methods while offering exciting new possibilities. The future of film will likely be a hybrid—where AI tools assist in everything from pre-visualisation to special effects, but the heart of storytelling remains human.

Matt Wolfe’s YouTube Channel

Tron (1982) Light Cycle Sequence

AI Slop

You may have heard the derogatory term AI Slop. AI Slop is the content creator’s equivalent of spam email from automated bots and refers to low-quality, low-effort, unwanted content appearing online, including images on social media, AI generated video content on YouTube and even entire websites. In a recent video, YouTuber PenguinZ0 / Charlie (renowned on the web for his sharp, insightful commentary) described the demise of a YouTuber known as Kwebbelkop as a result of using AI platforms designed to automatically generate content. The problem was that this content was perceived by his audience as lazily produced and lower quality…essentially AI Slop. As a result his audience began to lose interest, and his reputation and brand were permanently damaged, with negative consequences in terms of income and reach. Content producers be warned.

Charlie (PenguinZ0) Discusses AI Slop

Immersive Digital Media Part 1 – Definitions (28 Oct 2024) – https://www.simcoe.co.uk/immersive-digital-media-definitions/

I recently had a conversation with a client regarding the use of immersive digital media in engineering and design. We discussed how it can enhance research, product development and training and increase the overall impact. Over the last 8 years I have conducted a variety of experiments exploring immersive media such as recording 360 video, drawing with Google Tilt Brush and mixing ambisonic audio. This article, the first of two posts exploring immersive media, provides an overview of terminology.

Traditional vs Immersive Media

The term traditional media usually refers to television, radio, newspapers and cinema. Content is presented to the audience in a passive manner, meaning there is little or no control over presentation or narrative. In contrast, immersive media offers interactivity and an enhanced sensory experience using advanced hardware and software such as VR headsets or headphones designed to emulate spatial audio. Some technologies incorporate the simulation of touch and smell. Immersive experiences are designed to be consumed in a non-linear, participatory manner where choices and physical interaction affect narrative and environment.

Below is an example of a 360 video uploaded to YouTube with a resolution of 8K. The original video was recorded with a high resolution camera. It is important to note that only a portion of the 7680 x 3840 pixels recorded by the 360 camera will be visible to the viewer at any given time (depending upon the Field Of View), which reduces the displayed resolution to something approximating full HD (1920 x 1080). If viewed on a desktop PC in full screen, you can use the mouse to direct the point of view by clicking and dragging in the desired direction.
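A quick back-of-envelope calculation illustrates the point (the ~90 degree field of view is an assumption – headsets and players vary):

```python
# Back-of-envelope check: how much of an equirectangular frame is
# actually on screen at once for a given horizontal field of view.

def visible_width_px(pano_width_px: int, fov_degrees: float) -> float:
    """Horizontal pixels of a 360 panorama shown at a given FOV."""
    return pano_width_px * fov_degrees / 360

# An 8K (7680 x 3840) frame viewed with a typical ~90 degree FOV
print(visible_width_px(7680, 90))  # 1920.0 - roughly full-HD width
```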

Immersive experiences are designed to increase the sense of realism and there are many different formats available, each with their own characteristics and advantages. Common media formats are:

360 Video

360 video can be viewed in a Virtual Reality headset such as Meta Quest 3 and, when uploaded to platforms such as YouTube, is also available on a desktop or mobile device. The viewer interacts with the content within a VR headset by moving their head or on a desktop by ‘clicking and dragging’ to change the point of view using an input device such as a mouse. It is also possible to achieve similar interactions using the gyro technology on a mobile phone or a screen with touch capability. YouTube can display interactive 360 video in VR, on desktop and mobile.

Video is recorded with a camera utilising a series of wide angle lenses designed to capture the surrounding environment. It is stitched together using compatible software which may be provided by the manufacturer such as Insta360 Studio or by a third party such as Mistika VR.

360 video is usually recorded in the same equirectangular format as 360 photography. Current cameras record video of at least 6 – 8K, which results in Gigabytes of data per minute, with the Insta360 Titan recording 11K (10K in 3D). The challenges posed in producing 360 video, such as hiding microphones, lights and other equipment, have led to a decline in use during recent years in favour of 3D VR180 video. However, the format remains popular in real estate, tourism and journalism where a view of the entire environment is important.

VR180 Video

VR180 uses half the horizontal viewing angle of 360 video with just the front facing 180 degrees available. It is designed to be consumed within a VR headset, viewed on a screen with active glasses or converted to anaglyph for viewing with red / cyan glasses. Whilst there are only 180 degrees of recorded content, most VR headsets have a viewing angle of around 90 degrees which provides a realistic sense of immersion.

Content is typically recorded using 2 wide angle lenses covering a 180 degree viewing angle. Both of these lenses face the same direction, with the centres of the lenses separated by approximately the same distance as human eyes. When converted for use within a VR headset, the video provides realistic depth. HumanEyes Technologies released the Vuze XR in 2018, which had two 4K cameras that could be used in VR180 mode or 360 capture mode. A recent addition to the VR180 camera market is the CALF 3D VR180.

This format is used in vlogging and entertainment such as storytelling. However, as mentioned in the previous section, 360 video is still used when it is useful to see an environment in its entirety.

Virtual Reality (VR)

Virtual Reality experiences are designed to facilitate interaction where location, physicality and changes to the environment have meaningful consequences. They are usually viewed within a VR headset such as Vive XR Elite or Meta Quest 3 using controllers or hand tracking. However, platforms such as Spatial and Horizon Workrooms allow users access via a desktop environment as a ‘window’ to the virtual world. The user is able to shape the narrative and environment by their choices which may involve changing the state or position of physical objects within a space. Many VR applications are created with software such as Unity or Unreal Engine.

Examples of immersive VR applications range from as simple as the simulation of fairground games within Nvidia’s VR Funhouse, production of 3D art using Google Tilt Brush or involve the complexity associated with piloting an aircraft in Flight Simulator. Other examples may be found on Meta’s App Store.

The term ‘Virtual Reality’ was first used by American academic Jaron Lanier in the 1980s as a title for his research project. He is considered to be the ‘father of VR’ because of his groundbreaking work in the field.

Augmented Reality (AR)

Augmented Reality is technology that overlays visuals, data or audio onto the real world, enhancing the user’s perception of the environment. One example of this is Google Maps Live View, where the camera on a mobile phone is used to show a live view of the road ahead whilst superimposing directions and other visual guides. Another notable project is Glass, Google’s answer to Augmented Reality glasses. This project began in 2010, with the wearable tech available in 2014. It was later discontinued in 2015 due to safety and privacy concerns along with a lack of uptake in the healthcare sector – see this article for more information on the cancellation.

Mixed Reality (MR)

Mixed Reality is similar to Augmented Reality but allows the users to interact with the layers or objects superimposed upon the environment around the user. Meta Quest 3’s MR demo First Encounters is a great example of this. The surrounding environment is displayed on the headset in real time using front facing cameras whilst objects are overlaid onto the display to create game elements that can be interacted with.

Extended Reality (XR)

This term incorporates VR, MR and AR. XR refers to the technologies and experiences collectively.

Ambisonic Audio

Ambisonics is an audio technology that uses hardware and software capable of rendering spatial audio in Virtual Reality, Augmented Reality and Mixed Reality. As few as 4 audio channels can be used to represent sound within a virtual space. As the viewer’s head changes direction, or objects emitting sound move within a space, the audio is adjusted in a realistic manner to reflect the effect of these movements on the perceived sound. It is also possible to experience ambisonic audio in a limited manner when viewing 360 video on a desktop PC or mobile device by moving the point of view. The use of 4 audio channels to simulate spatial sound is referred to as First Order; it is possible to use more than 4 channels to enhance the effect, in a similar way to the improvement of 7.1 surround sound over 5.1.
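To illustrate how just four channels can describe a spatial scene, here is a minimal First Order encoding sketch. Channel ordering and weighting conventions vary between formats (FuMa vs AmbiX); this sketch uses the traditional B-format equations with the W channel attenuated by 1/√2.

```python
import numpy as np

# Minimal sketch of first-order (traditional B-format) encoding: a mono
# signal is panned to an azimuth/elevation as four channels W, X, Y, Z.

def encode_first_order(mono: np.ndarray, azimuth_deg: float,
                       elevation_deg: float) -> np.ndarray:
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    w = mono / np.sqrt(2)                # omnidirectional component
    x = mono * np.cos(az) * np.cos(el)   # front-back figure-of-eight
    y = mono * np.sin(az) * np.cos(el)   # left-right figure-of-eight
    z = mono * np.sin(el)                # up-down figure-of-eight
    return np.stack([w, x, y, z])

# Place a 1 kHz test tone 45 degrees to the listener's left
t = np.linspace(0, 1, 48000, endpoint=False)
b_format = encode_first_order(np.sin(2 * np.pi * 1000 * t), 45, 0)
print(b_format.shape)  # (4, 48000)
```

A decoder (in a headset or on YouTube) rotates these channels against head orientation before rendering to speakers or headphones, which is what makes the sound field track head movement.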

For more information on ambisonics, see this excellent summary of ambisonic audio from Waves.com

Olfactory

Olfaction, or the olfactory sense, is the sense of smell. There are devices capable of stimulating the olfactory sense as part of an immersive experience. One example is the Smell Engine, described as “a system for artificial odour synthesis in virtual environments”.

Gustatory

Gustatory perception refers to the sense of taste. It is possible to trick the human brain into thinking that food is being consumed using stimulation by computer-controlled plates placed upon the tongue. In 2013 a ‘digital lollipop’ was created by researchers at the National University of Singapore that stimulated sweet, sour, salty and bitter tastes.

Summary

Immersive digital media has the potential to elevate and enhance the process of storytelling, communicating research ideas, developing products and providing training. The last 10 years have seen rapid growth of hardware and software technologies at both professional and consumer levels, increasing both the number of creators and the volume of immersive content. Despite these advances, many challenges remain, including the size, weight, cost and uptake of VR headsets, the cost and quality issues associated with 360 and VR180 cameras, and the technical complexities of generating spatial audio. There are also positive signs – the release of the Apple Vision Pro, camera releases from manufacturers such as Insta360 and continued support for immersive content in Adobe’s Creative Cloud.

Creating AI Generated Immersive 360 Images – AI School (25 Apr 2024) – https://www.simcoe.co.uk/creating-ai-generated-360-images-using-midjourney-and-magnificai/

AI School

I’ve recently been exploring the use of Midjourney and Magnific.ai to create highly detailed 12K 360 images such as the example shown above (note this example has been enhanced with audio and lens flare using Pano2VR). After working out the most appropriate workflow through a series of experiments, I published a course on Eventbrite designed to impart this knowledge and experience to interested parties in the AI and virtual tour communities. In addition to Midjourney and Magnific, the process involves Photoshop and Pano2VR (or equivalent editing software tools) to edit the nadir and zenith (directly below and above the viewer) and to blend the vertical seam, ensuring objects and textures on the left side of the image fit seamlessly with the right when assembled in the final 360 form.
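The vertical seam check at the heart of that workflow can be sketched in a few lines: rolling the equirectangular image by half its width moves the left/right join into the centre of the frame, where any mismatch is easy to spot and retouch. The filename below is a placeholder.

```python
from PIL import Image
import numpy as np

# Sketch of the classic seam check: shift an equirectangular image by
# half its width so the left/right join lands in the middle of the frame.

img = np.array(Image.open("panorama.jpg"))          # H x W x 3 pixels
seam_centred = np.roll(img, img.shape[1] // 2, axis=1)
Image.fromarray(seam_centred).save("panorama_seam_centred.jpg")
```

Rolling the retouched image back by the same amount restores the original orientation; in Photoshop the equivalent is Filter > Other > Offset with Wrap Around enabled.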

You can see a variety of images produced using this method by visiting the 360 image section of the Simcoemedia Shop or on my Facebook feed.

AI Workshop Details

The workshops have been designed to cater for 360 enthusiasts, immersive media artists and virtual tour producers looking to enhance their 360 workflow using AI tools. The 90 minute session includes the following activities:

  • Creating equirectangular images in Midjourney by using appropriate commands, parameters and descriptive keywords
  • Ensuring the image does not warp in 360 view by aligning the horizon correctly
  • Resolving stitching problems and nadir issues using Photoshop and Pano2VR (or equivalent) to ensure the best interactive experience
  • Using Magnific to upscale and add detail to images whilst retaining a reasonable level of creative control
  • Resolving metadata issues + testing images to ensure they are ready for publication
  • Discussion of how these tools have developed and evolved in recent months

A reference page has been set up on this site at www.simcoe.co.uk/ai-school/ reminding participants of the key processes highlighted during demonstration and discussion. So far these sessions have been useful and enjoyable for those taking part, with some returning for the second in the series, which examines transferring image styles to existing 360 tour photography. For more information on upcoming events see Peter Simcoe’s Eventbrite page.

Related topics: Find out more about how Generative AI creates video and images by reading ‘What Is Generative AI and Is It Useful In Film Production?’

Google Earth Studio Experiments Part 2 (10 Dec 2023) – https://www.simcoe.co.uk/google-earth-studio-experiments-part-2/

Google kindly gave me access to their Google Earth Studio platform a few months back. I recently used it to create a variety of videos including a 2D 360 interactive video tour of my home city of Chester in England amongst other tours including London and Snowdonia National Park.

Familiar Tools And Techniques

Google Earth Studio is relatively easy to work with if you are familiar with video editing or animation software. Camera controls including field of view, location, tilt and roll all bear similarity to the type of keyframing you would find in Adobe software such as After Effects, Premiere Pro and Character Animator. It’s fairly easy to create animated sequences and produce some interesting videos using areas of the map where 3D data is available. Google have only recently released 3D data for Chester…as it’s a beautiful city and familiar territory, this made a great focal point for some further exploration of the tool. In addition, I added some ‘tilt shift’ effects to Birmingham and Barcelona sequences to create a ‘miniature’ effect.

Exporting Videos

Once you have established the camera path for your sequence using the keyframing tools, it is possible to export traditional video or equirectangular video suitable for upload to YouTube. This process is either completed in the cloud (via Google servers) or by exporting individual jpeg frames (which at 50fps is time consuming). Note that the limit per day for processing frames is around 15,000. Once uploaded, YouTube translates the data into a 360 video capable of being controlled with a mouse or TV remote.
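It is worth working out what that daily frame limit means in rendered footage:

```python
# What the ~15,000 frames/day processing limit means at common frame rates.
DAILY_FRAME_LIMIT = 15_000  # approximate figure quoted above

for fps in (25, 30, 50):
    minutes = DAILY_FRAME_LIMIT / fps / 60
    print(f"{fps} fps: {minutes:.1f} minutes of footage per day")
# 25 fps: 10.0 | 30 fps: 8.3 | 50 fps: 5.0
```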

Render Quality

Having uploaded 360 videos to YouTube, the first things you notice are clarity, resolution and aliasing issues. Essentially, the 3D render contains moiré patterns and flicker on buildings, with a lack of definition on hard edges. This reduced the quality of the viewing experience considerably. However, using Topaz’s video enhancement software it was possible to upscale the rendered video to 8K. When re-uploaded to YouTube, the conversion was much sharper, with improvements in antialiasing and fine detail.

Notes

  • Not all of Google’s coverage features 3D photogrammetry data. The major cities of the world and sites of interest are covered, but there are still many locations with 2D imagery only
  • If you’re interested in alternatives then maybe try GEOlayers, a plugin for After Effects. It allows more advanced visual techniques for animating maps using After Effects’ suite of effects and camera attributes such as depth of field.
  • See my previous Google Earth Studio experiments