
Tips for easy and affordable 3D scanning

Article / 25 December 2021

Goals of this article: getting you up to speed on the latest developments in 3D scanning so that you can reliably output the meshes you need for free or at low cost, no matter your hardware.

Level: Intermediate (I will assume some familiarity with Android/iOS and Windows/Linux/macOS computers, and that you've heard of 3D scanning before).

Whether you process the data in the cloud or locally, and whether you capture it with photos or with structured light (LIDAR, infrared FaceID), what you're acquiring is always points, LOTS AND LOTS OF POINTS. Let's help you capture the ideal dataset, whatever hardware you have.


Part 1: Android and iOS mobile photogrammetry

As we approach the end of 2021, Android phones and their main OS developer Google have not provided the same comprehensive object capture/ARKit framework as iOS and Apple. However, Apple likes everything to be "walled in", so you've also got to process iOS captures... on macOS. Meh.

Cloud 3D reconstruction changed the game: you can let someone else run energy-efficient M1 Mac minis or one large Mac Pro, and just send the data and rent the compute as a service. The introduction of Polycam Web allows Android users and quadcopter drone pilots to take advantage of this framework by using "photo mode" and uploading photos from outside the iOS ecosystem.

Best Apps for iOS: Polycam, Trnio.

Best apps for Android: OpenCamera (1-2 second interval HQ jpeg capture with all filtering disabled)

How to acquire the ideal dataset: Since photogrammetry works by tracking 2D features to create depth maps, you need some level of overlap between photos. This is exactly the same thing as "panorama mode" or panorama stitching, but in 3D this time. The other thing you want is sharpness, since 2D features cannot be tracked through motion blur or out-of-focus areas (which makes very small objects difficult to scan). Last but not least, you want coverage (seeing every side).

To achieve overlap, you want about 20% of the frame to remain similar while you move or rotate, and make sure your object has plenty of grain or texture to capture. You'll never be able to capture a chrome ball with today's algorithms, since reflective surfaces are view-dependent: they change optically depending on your own position in each photo. If overlap is not achieved, most 3D reconstruction software will fail to produce a continuous result and start making wild guesses about how far you physically traveled between each picture. The easiest method to maximize both coverage and overlap is a low+high two-pass orbit: first rotate around the object from your highest height, holding the phone up and centering the desired object in the frame while rotating around it, then do a second pass from a low angle to capture undersides. Optionally, you can then break from the orbit and gradually get closer to a detail of the object you want to make sure to capture correctly. Remember: if you don't have coverage, for the computer it's an unknown blob. Even with the latest machine learning, reality is so rich and surprisingly complex that you'll find computers don't make great guesses yet, or rather not in 3D (not yet).
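To get a feel for what that overlap requirement implies, here is a back-of-the-envelope sketch (my own simplification, not something the apps expose): it estimates the minimum number of photos a single orbit needs, given your lens's horizontal field of view and the overlap fraction, assuming the subject roughly fills the frame and you keep a constant distance. In practice you will shoot far more than this minimum, but it shows why a narrower lens or more overlap means more pictures.

```python
import math

def photos_per_orbit(hfov_deg: float, overlap: float) -> int:
    """Rough minimum number of photos for one 360-degree orbit.

    hfov_deg: horizontal field of view of the lens, in degrees (assumed value).
    overlap:  fraction of the frame shared between consecutive shots
              (e.g. 0.2 for the ~20% mentioned above).
    """
    step_deg = hfov_deg * (1.0 - overlap)  # how far you may rotate per shot
    return math.ceil(360.0 / step_deg)

# Example: a typical ~65-degree smartphone main camera
print(photos_per_orbit(65, 0.2))   # low overlap -> 7 photos minimum per pass
print(photos_per_orbit(65, 0.8))   # generous overlap -> 28 photos per pass
```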

Side note: It's as much a philosophical debate as it is about algorithm design; filling the void reveals biases in the training data.

Finally, let's expand a bit on why sharpness matters, and how that intersects with lighting, sensor size, and ISO noise (the sensor sensitivity). A smartphone achieves photographic quality with a small sensor by doing all sorts of fancy computational tricks to make the picture look good despite that smaller sensor (this is why cameras that make a physical shutter sound are bigger and heavier and capture raw photos instead). Unfortunately, the tricks that make pictures look good for social media also make the phone a less-than-ideal candidate for photogrammetry. Your aim is to get as close as possible to a "straight out of the sensor" JPEG from your smartphone, because when a smartphone removes the ISO noise using an algorithm, it introduces a 2D error, which will propagate into a 3D error. Now you can imagine that any other computational photography trick will also propagate errors.

Example: Many smartphones introduced "night mode", which is achieved by shooting a burst of high-ISO photos and recombining 10 of them to eliminate the noise. Any imperceptible alignment errors between those 10 photos introduce errors too.

So, to capture the ideal dataset, we want to avoid any 2D filtering in the smartphone and find a balance between shutter speed and ISO noise. Shutter speed is the measure of how long you let light in, and therefore a measure of motion blur. Since you're rotating around your object, if you don't pause and click for each photo (tedious), you will see blur happen in any low-light situation. The thing is, you're also dealing with a small sensor (which doesn't capture much light) AND you usually want to scan on overcast days to achieve an "unlit" look that can make your scan look good in any new ray-traced/game lighting conditions.

Shutter speed is not something smartphone users think about much, but it's an obsession for cinematographers and photographers alike; people move, you move, and 1/100 is usually a minimum to achieve sharp results. Due to the small sensor size, your smartphone will often drop as low as 1/30th of a second, so if you're doing the two-pass orbit I recommended above, you might see that the center object is sharp but the background has a rotation blur. That's bad. 2D feature tracking for 3D reconstruction happens when it can recognize features, but the parallax between your object to scan and the background is essential (parallax: how fast things are relatively moving, like when you're in a car or train and follow an object with your eyes while the background suddenly seems to move counter to your vehicle's direction). If the background is blurred, the algorithms will lack a ground plane/reference frame to guess your camera position in 3D space. So you have to boost ISO to achieve a 1/60 or higher shutter speed if you're scanning fast. Thankfully, on iOS, Polycam and Trnio do this automatically, taking pictures at the ideal moment when the blur is lowest; even at slow shutter speeds they will only take a picture when it's sharp.

Ideally, someone could make an Android app that does this for dataset capture too! You don't want to waste time deleting blurry pics; let the smartphone's CPU decide when to take the picture by estimating your momentum using the angular velocity data from the gyroscope (lowest = best).
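Until such an app exists, a quick way to cull blurry frames after the fact is to score each photo's sharpness on your computer before uploading. Here is a minimal sketch using OpenCV's variance-of-the-Laplacian trick (the folder name and threshold are my own placeholders, not a standard; tune the threshold per camera):

```python
import cv2                      # pip install opencv-python
from pathlib import Path

BLUR_THRESHOLD = 100.0          # assumption: below this, the photo is probably too soft

def sharpness(image_path: Path) -> float:
    gray = cv2.imread(str(image_path), cv2.IMREAD_GRAYSCALE)
    # High-frequency detail survives the Laplacian; blur flattens it out,
    # so a low variance means a soft or motion-blurred picture.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

for photo in sorted(Path("dataset").glob("*.jpg")):
    score = sharpness(photo)
    if score < BLUR_THRESHOLD:
        print(f"probably blurry, consider deleting: {photo.name} (score {score:.0f})")
```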

In the meantime, Android users can decide whether they prefer manually taking pictures or using the 1-2 second interval capture mode of the OpenCamera app.

Get a feel for it! There's a sort of rhythm to starting and stopping to capture sharp pictures while scanning fast on this kind of interval. Using this method and a two-pass high/low orbit, you can scan a medium-sized rock or a tree trunk in less than 3 minutes.

Smartphones with ARM CPUs are amazing for acquiring datasets because they're lightweight and always in your pocket, but since the small sensor can get in the way, let's explore other options for taking 100's of photos with ease before we talk about how to do cloud and local 3D reconstruction.

My pick: a used iPhone SE II with a cracked screen (you're not paying for anything extra; this is a work device for me, not a toy). Add tempered glass protection (like the previous owner should have done) and a silicone case, and you're good to go!


Part 2: DSLRs, micro 4/3, or the cheapest sharpest cameras

The larger the sensor, the more light goes in, and the higher the shutter speed you can achieve with lower noise. Noise prevents good 2D feature tracking and leads to 3D errors. So a bigger sensor and a wider aperture (a small f-number like f/1.7) will lead to fewer errors and faster, more confident scanning.

Problem: DSLRs are super expensive. Solution: because smartphones are now good enough for social media, plenty of people are selling used micro 4/3 cameras, which occupied a niche in sensor size in-between the heavyweight full-frame sensors (like a Canon 5D Mk II), APS-C, and the very small sensors of smartphones and point-and-shoot cameras (which are pretty useless now lol).

So that means you can get good used cameras with micro 4/3 sensors for decent prices. These cameras are extremely solid, and micro 4/3 cameras often have a "silent shutter" mode, aka electronic shutter, where the mechanical curtain does not physically move, extending the operational lifetime of the camera by many, many years (moving pieces = failure risk). Look for a camera with 10 Mpix or more of resolution and good ISO performance (measured in IQ; here's a good website to compare models). To achieve sharpness and avoid motion blur, a camera with in-body or lens stabilisation is ideal.

My pick: GX85 with the 12-35 kit lens (dual stabilisation and no AA filter).

JPEG is usually enough; don't bother with RAW because it will just clog up your RAM if working locally or your Wi-Fi if going cloud, unless you're using RAW to create an "unlit" texture look by lifting the shadows in something like Lightroom or Affinity Photo's batch mode, producing an "ideal HQ JPEG" from the RAW data.

Peak sharpness for most lenses is halfway between the highest and lowest f/aperture, usually around f/5.6 for small sensors because of optical diffraction effects. You want the highest f-number below f/8 that still gives you the desired shutter speed above 1/60th of a second and a tolerable amount of ISO noise (ISO 100 to 1600). Experiment with ISO and f-number; you'll find, for example, that for small objects it's worth boosting the ISO to get a higher f-number (smaller aperture) to ensure there is no depth-of-field blur (blurry background). This will ensure continuity. In other cases the complete opposite is true, such as capturing large objects in low light, where a wide aperture (small f-number) is ideal.
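If you want to sanity-check whether a given aperture/ISO combination still lands above 1/60th of a second, the standard exposure-value relation is enough. A small sketch (the EV 12 figure for an overcast scene is a ballpark assumption on my part):

```python
def shutter_for(ev100: float, f_number: float, iso: float) -> float:
    """Shutter time in seconds that keeps the same exposure.

    Uses the standard relation EV = log2(N^2 / t) - log2(ISO / 100),
    solved for t. ev100 is the scene brightness at ISO 100.
    """
    return (f_number ** 2) / (2 ** ev100 * (iso / 100.0))

# Overcast scene, assumed around EV 12:
print(1 / shutter_for(12, 5.6, 200))   # ~1/260 s  -> comfortably sharp
print(1 / shutter_for(12, 8.0, 200))   # ~1/128 s  -> still fine
print(1 / shutter_for(12, 8.0, 100))   # ~1/64 s   -> right at the limit
```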

Generally, the wider the lens the better, but any barrel distortion ("GoPro effect") will introduce optical errors into the reconstruction. I bought a 7.5mm fisheye lens and it proved to be a waste of money, since I don't have the patience for 2D undistortion, or for shooting in RAW and pre-processing. Wide-angle, wide-aperture lenses with no barrel distortion are very expensive because they are optically complex objects with precise construction needs (usually German or Japanese lenses). Investing in one of them could prove useful to scan faster if you're scanning for a business. Lenses are good investments, while camera bodies drop in value over time. A micro 4/3 mount like the one used by Panasonic is extremely popular and gives you a good lens selection. Lenses for full-frame sensors quickly get astronomically expensive and really heavy (it's a matter of geometry: you've got to have lots and lots of glass if you want a wide aperture ratio).


Part 3: LIDAR, FaceID, and other structured light hardware.

What if you somehow have access to something other than just capturing incoming photons? What if consumer hardware trying to make better AR happened to ship the same kind of structured light equipment as something that used to cost 30K USD a few decades ago?

The basic principle: a portable device shoots structured light at the scene and analyzes the way it bounces off the surface to generate the points, instead of 2D tracking. This has a few pros and cons compared to pure photogrammetry:

Pros: 

- Works at night (no texture though) since your capture device becomes a light!

- Reconstruction can happen locally and at extremely low CPU cost. All it has to do is merge the various 2.5D slices you're sending it. Most devices now do realtime reconstruction. This means that with a reliable capture method, you could capture hundreds of objects a day. Industrial-grade tech!

- Stability: it can capture people's faces even if they are moving slightly; since each 2.5D slice has lots of data, alignment noise is lower.

- Instant scanning of indoor walls! This is what it's designed for. Measure anything, AR-enhance your living room, etc.

Cons:

- Range: a major limitation is that the maximum range of structured light follows the inverse-square law of light, where intensity, and therefore precision, has a quadratic drop-off as you get farther away (visually similar to an "exponential" curve; see the short sketch after this list). This means scanning large/gigantic objects is out of the question, as the max range is about 3 m for IR and 10-20 m for LIDAR.

- Alignment errors: due to the range and precision limitations, you will often find you don't have enough background information to track your movement correctly, leading to wild reconstruction errors where the model duplicates itself instead of recognizing that it was you, the observer/camera, who moved.

- LIDAR isn't super high-res, especially on the geometry side; it's kind of blocky and disappointing. Most folks I talked to were disappointed by the promises of iPad LIDAR, for example.
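To make the range limitation concrete, here is a tiny illustration of that inverse-square drop-off (just the physics, not any particular device's specification):

```python
# Relative signal returned from a surface at distance d, normalized to 1 m.
for d in [0.5, 1, 2, 3, 5, 10]:
    relative_signal = 1.0 / d ** 2
    print(f"{d:>4} m -> {relative_signal:.3f}x the signal received at 1 m")
```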

My pick: any iPhone with FaceID, the Scandy Pro app, and the Lookout accessory.

The Lookout accessory allows you to mirror/flip the direction of the IR scanning, so instead of recognizing your face it can do full-body scans of friends who are standing still, as well as small objects. Pretty cool!

Note: Scanning to vertex color instead of texture gives you great flexibility to import the result into Nomad Sculpt, giving you an offline, on-device pipeline for scanning and creative recombination.

Part 4: Reconstruction energy/$cost considerations and trends

The Degrowth perspective is that we should all attempt to produce fewer devices and more energy-efficient ones, use the cloud less, and spend less time on energy-hungry desktops. How does this relate to everything I mentioned above? Let's imagine various scanning scenarios and make some assumptions:

Scenario 1: Phone scanning, cloud reconstruction (efficient cloud machines such as ARM CPUs consume around 30 watts per reconstruction node, but the network transfer is inefficient and contributes to ICT energy demand growth, which is all too often met with fossil fuels that should rather stay in the ground. Avoid 4G and 5G; prefer Wi-Fi to reduce your carbon footprint and phone bill).

Scenario 2: IR local scanning (most efficient, with around 5 watts for realtime iPad/Surface Pro reconstruction on an ARM CPU).

Scenario 3: DSLR and local reconstruction (cameras consume next to zero energy, but you are responsible for the energy efficiency of the reconstruction. If you're not careful you could end up using 250 watts for no good reason, reconstructing for hours and hours, unable to use your PC for anything else).
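For a rough sense of scale, here is a back-of-the-envelope comparison using the wattages above and an assumed one-hour reconstruction job (illustrative figures, not measurements, and the network overhead of the cloud case is not counted):

```python
# Energy used by each scenario for one assumed hour of reconstruction.
scenarios = {
    "Scenario 1 - cloud ARM node": 30,           # watts
    "Scenario 2 - on-device IR (iPad)": 5,       # watts
    "Scenario 3 - careless local desktop": 250,  # watts
}
hours = 1.0
for name, watts in scenarios.items():
    print(f"{name}: {watts * hours / 1000:.3f} kWh")
```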

Recommendation: consider using as few photos as possible for cloud reconstruction, using the most efficient ARM CPUs, or processing locally during the noon peak when output from solar panels and other renewables is highest. Processing the scans at night means some nuclear power plant, battery bank, or gas-fired thermal power plant has to run and use resources to power this use. The Degrowth perspective is that the excess midday generation of renewables is still very generous, and it's kind of SolarPunk to time your computing needs based on the Sun.

For large scans (over 100 photos), there are diminishing returns to resolution, and batch-resizing JPEGs from 4K and above down to 2048 px horizontal (using Affinity Photo for iPad, for example) ensures the fastest upload/reconstruction time while minimizing ICT footprint.
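If you would rather do that resize step on a laptop instead of the iPad, here is a minimal batch-resize sketch with the Pillow library (folder names and the JPEG quality value are my own choices, and it assumes Pillow 9.1 or newer):

```python
from pathlib import Path
from PIL import Image           # pip install Pillow

src, dst = Path("dataset"), Path("dataset_2048")
dst.mkdir(exist_ok=True)

for photo in sorted(src.glob("*.jpg")):
    with Image.open(photo) as img:
        if img.width > 2048:
            # Clamp the horizontal resolution to 2048 px, keep the aspect ratio.
            new_h = round(img.height * 2048 / img.width)
            img = img.resize((2048, new_h), Image.Resampling.LANCZOS)
        img.save(dst / photo.name, "JPEG", quality=92)
```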

Best Cloud option: Polycam (subscription, but you get 2 free scans per month as a trial).

Local option: RealityCapture PPI (pay per input: you license per megapixel of the dataset, as low as $0.30 per scan).

Open-source alternative: Meshroom is free 3D reconstruction software built on the AliceVision nodal framework (depth-map generation is super slow as of 2021 though, and unless you're running this on ARM it's a big waste of energy). The Degrowth point of view here is that we need to swap the depth-map generation for something much faster that still runs on ARM CPUs and isn't RAM-hungry. If the issue of temporal coherence can be resolved, something like the BoostingMonocularDepth algorithm could be ideal.

Parting thoughts: I hope this article helped. Some of you might be wondering why I think blogs are still worth investing in instead of making videos on YouTube; it's about accessibility, portability, and ownership. I have this article saved offline and can publish it from a self-hosted device like my Raspberry Pi; it's screen-reader friendly, so more inclusive than most YouTube videos, and it also uses less data, which is great for inclusion and degrowth. I prefer seeking the maximum amount of information in the least amount of data. Plus, it's easy to make a PDF guide out of it and send it to friends.

Happy scanning in 2022! Plan your path ahead, and watch out for slippery rocks.


Carbon Offsetting -and other accounting tricks

Article / 14 October 2022


The scariest thing about post-apocalyptic games such as The Last of Us II is not the zombies, not the senseless violence between humans, but rather the landscape. What is scary is how much closer we are to a world of hurricanes and floods that resembles a post-apocalyptic game. 


In this article, we will talk about accounting, time, and entropy as a way to explore what the future might look like, and paths we can take to shape it and avert the worst environmental catastrophes of our time.

Consider the dimension of time. When you open a 3D software package you can generate noise in 2D, 3D, or 4D. The fourth dimension is time, meaning it moves; it's based on pre-existing conditions and keeps evolving. Entropy means that it requires energy to reverse heat dissipation or the dissipation of gases; in simpler terms, if you throw a bag of sand or flour across your living room, it will take a lot of time to clean up the mess. We learn to voluntarily slow down when we carry things that have a potential for dissipation; water, sand, and flour are either packed with a lid, or we move slowly so as not to spill them. We spend time to avoid increasing the entropy; it took a lot of energy to gather all the grains and grind them into flour, so it would be a double waste to have it spilled on the floor.

There are, however, a few ways in which humans unleash the power of entropy and time upon themselves on a daily basis. I am of course talking about greenhouse gases, mainly CO2 (carbon dioxide) and CH4 (methane).

If oil and gas were stored for millions of years underground, that's great; it makes sense to put a tight lid on it.

The problem is that civilization developed by tapping little holes into these reservoirs, and then oil companies lied to the public repeatedly about the risks. Not only did they blow up the lid, it's now leaking methane from open wells and pipes all over our pale blue dot known as Earth.



Now, obviously, hardcore denial is no longer trendy, so the oil industry had to switch to delay tactics. Making your brand seem cleaner than it is (greenwashing) is how they went about it. The main sources of greenwashing are carbon capture and storage, and offsetting by afforestation. Now don't get me wrong, we DO need to plant a shit-ton of trees. However, it's extremely important to know why we're planting them, where, and what industrial behaviors are enabled when the concept of offsetting isn't challenged repeatedly.

The idea is simple (in theory): pollute a little bit, clean up an equal bit. Put a ton of carbon in the atmosphere, store a ton of carbon in soil or rock formations. The problem is that on so many levels we're comparing apples to oranges, or rather a fruit to a tree, which has a completely different embedded time.

Here's why: the ideal method of storage for that carbon is, by definition, right where it has been for the past millions of years, and any attempt to store it underground will be at best a downgrade from the lid that was there before. Promising carbon capture and storage (CCS) projects include, for example, converting it to solid waste so that the gas doesn't dissipate (it wants to, that's entropy), or making it into bricks combined with limestone. All of these storage methods will give us at best 300 years before they leak. Everything breaks down; the Earth's crust breaks down the slowest. So when we say a ton of carbon (global pollution) is "offset" by a carbon capture project, what we're really doing is passing the hot potato along to future generations, who in 300 years will have to deal with the problem again. If you take a bottle of butane and let some gas out, there's no way you will be able to compress it back into the bottle; it has dissipated and left your house through the windows (even if they're closed) within minutes, sometimes seconds. We're comparing a 300-year band-aid to high-quality, million-year storage. The answer is obvious; keep.it.in.the.ground!

The classic explanation of entropy is a pool of water, with clear water on one side and colorful, dense ink on the other. There is a divider in between, and when we slide that divider out we see that within a closed system the entropy will increase, meaning the two liquids will mix until they become impossible to separate. Oil spills and methane leaks interact with the atmosphere in the same way. When the oil industry, instead of plugging old wells, decides to open new ones, they're removing the divider between ink and water. The added CO2 will stay in the atmosphere for 3000 years, and we don't have storage methods guaranteed to work for that long.

So what do we do? Plant trees, right? Sure, but not trees that enable waste; that distinction is very important. Trees use the law of large numbers to reproduce: most nuts from a chestnut tree fail to grow, and only one out of 1,000 survives to become a mature tree. Similarly, only one out of many forests will still be there three millennia from now. The number of old-growth forests today is rapidly dwindling, and I don't know what makes you think this will somehow get better over time without unprecedented action. Yet even if humanity decides to get together and never again log an old-growth forest, so that the carbon sink can get deeper, there are still the increased hurricanes and wildfires that are threatening (for example) the sequoias in California. Existing CO2 and methane already get us to 1.1 degrees of warming at the time of writing this article in 2022 [1],[2]. The greenhouse effect means more wildfires, but also more energy in the system, which in a closed system (Earth's gravity well) means more rapid mixing of hot and cold zones, i.e. tornadoes and hurricane systems forming more often. The law of large numbers is not working in favor of forests, at all. To make matters worse, if the trees are planted in boreal regions, the albedo effect of the trees' dark foliage even contributes to an earlier melting of snow (which has an albedo of 0.9) compared to tundra. Oops.


Source: Norsk Polar institute.

Note: Afforestation as a climate solution is not a bad idea though; for example, expanding and restoring mangroves in tropical regions has many environmental and economic benefits. Mangroves grow fast and can tolerate sea water while providing heating and insulating material.


Art by Harry H. Johnston, 1906

To make matters worse, we're also double-counting in so many places because everyone wants to pat themselves on the back (me included). As an investor in sustainable shipping, for example, I could claim I decarbonized my airplane travel (which I should minimize, yup), but the ship operators will claim the same thing, and so will the clients, and maybe even the customers of the products in the end. The order of magnitude is way off, especially as we get down the logistics chain. Everyone wants to feel good and claim to have done the decarbonization work. But with half of my savings invested in regenerative agriculture and green shipping, I know I've still got a lifetime of work ahead of me. If Earth had an accountant, this double-counting of carbon offsets would get them fired.

All of this points to a social moment when we collectively decide that blowing up the lid of geological gas storage in the 2020s is a nonsensical idea. Some countries have already reached that point, while others seemingly compete at being the worst possible partners on this spaceship we call Earth. As more countries reach this point, there will be more supply chain shortages. If we're idealistic, this is a good thing; we should not be hauling things with gas-powered vehicles all across the globe when we can consume resources locally and use products made in our city, our region, etc. If we're materialists, however, it's a disaster, as millions if not billions of people are kept in a state of dependence on this supply chain. So the urgent need is to decarbonize and decouple standards of living from fossil fuels. This means giving the land back to the communities who can best steward it through the most troubled era, oftentimes Indigenous nations who have already survived an apocalypse or two.

"climate change will never affect me in my lifetime" A 20-something years old Lyft driver said to me in 2019.

And if you think "the obnoxious French dude" is referring to myself, I was rather thinking of Jancovici [ https://youtu.be/wGt4XwBbCvA ]. 


Additional sources: 

[1] https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature

[2] https://www.eea.europa.eu/ims/global-and-european-temperatures


Book review: Anatomy of Facial Expression

Tutorial / 29 September 2022

If you've spent any amount of time looking for references on Pinterest or ArtStation, chances are you've come across the work of Uldis Zarins and the Anatomy for Sculptors team. I thought, why not take a look at the books? Especially since I'm usually quite busy, I needed to find the best possible source of information.

A few anatomy books I purchased previously were disappointing or had unnecessary fluff; this is not the case for Anatomy of Facial Expression, just like the rest of the books from Anatomy for Sculptors.

An example of surprisingly well-explained anatomy was the way this book described dynamic wrinkles and the relationship between muscle direction and fold direction.


Print quality: Overall very good. Text is always sharp and readable, and black-and-white graphics are also sharp. On close inspection, the greyscale and color are rendered at around 300-600 DPI, plenty of resolution for zooming in and using scans of this book as a backdrop image in 3D software like Blender or ZBrush. If you need even more resolution though, anatomy4sculptors has a PDF e-book version available for purchase.

(zoomed in close up of print quality)

The book is divided into main sections and color-coded (yay!) for easy flipping through. It starts with the skull, then adds in muscles, fat, and skin, then transitions into a very useful section:

the Facial Action Coding System (FACS). I first learned about this system while reading the similarly excellent VES Handbook; the chapter on rigging does a deep dive on FACS and why it's useful for sculpting blendshapes, for acting and animation notes, etc.

The Anatomy of Facial Expression book did some clever organization: each page in this last section is a condensed representation of the 0-1 range of the affected muscles, with close-up views. For example, for "wink", 0 is neutral (no wink) and 1 represents the maximum activation of the orbicularis oculi, with FACS code AU46. Riggers and animators on high-end realistic games and film VFX will find this particularly useful. Nowadays in many games even monsters can emote with this level of subtlety (at least in cinematics).

With 212 pages densely packed with artist resources for facial details, this is a great guidebook for any sculptor or painter to have, especially if you're working in cinema, TV, or gaming.

You can find the book at https://anatomy4sculptors.com/product/anatomy-of-facial-expression-paperback/


2022 updates - New 3D model pack

News / 20 March 2022

Hey everyone, time for some updates!

  •  My first 3D scanned asset pack is out. Head over to my ArtStation store or Gumroad to buy this collection of lichen-covered rocks and old stone walls.


  • Don't need that many rocks, or just want to try out the quality first? Use BlenderKit and check out the natural elements section to see my rocks; I've included a few in the free plan. If you find BlenderKit useful, you can use my link to get 10% off a subscription: https://www.blenderkit.com/r/EnoraVFX

This gives you an idea of the value of the full pack, with 52+ models and 3 Blender scenes. I will keep uploading rocks and other models to BlenderKit, including free samples. I think it's a great add-on that has helped me on numerous occasions, and they have an option to donate part of the subscription price to the Blender Foundation (the main devs).

  •  I'm working on my first long-form recorded video tutorial; for those who have wanted concept art and Blender tips for a while, I hope it will deliver a great experience.


  • I've got a new job! I'm now Lead Environment Concept Artist. 

  • The Shipyard is still in progress, although slow progress (new job and all). Working on some concepts and 3D models. I'll have some stuff to show this summer :)



Podcast + The Shipyard

Work In Progress / 27 October 2021

I appeared on a podcast episode with Siddharta Valluri; you can listen to episode 41 of the Convergence podcast on SoundCloud and Spotify. A transcript in plain text is available here.

Episode description: "This week I got a chance to talk to concept artist Efflam Mercier about their passion for the Solarpunk art movement and how new organisational structure within companies can benefit upcoming artists."

On the podcast, I also had the opportunity to talk a little bit about a game I'd like to make.

Build in tandem with the ancient and the new, solar sails chart our escape from fossil fuels. The future listens to the wind - will you? 

The Shipyard - Beta test 2022


This game could never be made in a firm, and will only flourish in a worker-owned cooperative (🚂 get on the COOP hype train, and learn from your fellow workers!).

Hopefully, if I survive our violent world, I may be able to share more about my dreams of making this non-violent Shipyard game and eco-punk entertainment in my first *official* devblog article. Ideally I can release the 1.0 version sometime before the collapse of the energy supply chain that could sustain gaming. But if I don't? I'll still distribute my ARM-optimized games by sail, or by internet over avian carrier (pigeons strapped with micro-SD cards).

I find a funny kind of hope in the thought that no matter how low-tech things get, someone out there in 2050 will work out on a 100-watt rowing machine to charge their 2018 phone - for gaming.


Course, interview, AvE

General / 03 September 2020

Hey everyone,
This blog was long overdue for an update; it feels like the last blog post was from a different era entirely. I'm blowing the dust off these old manuscripts to tell you about five things:

I'm back to teaching!

I don't even know if I posted about the first time here, but I taught an 8-week course to 15 students over Zoom last year, and I'm doing it again via The Workshop Academy on October 9, 2020.

This is an 8-week advanced environment design class with a focus on concepting environments within gameplay, technical, and story constraints. These constraints are often unique to each game, but there are a few common principles whether we're making a single-player top-down tactical RPG or a survival FPS MMO.


Voyage LA interview.

A local magazine was interested in what I had to say about a few topics; here's an excerpt:

"The amazing new tools at artist’s disposal (2D,3D Animation, games) might look like they can change this top-down problem, with open-source lowering the barrier of entry, platforms to publish independent games, etc. All over the world, successful artists from fairly diverse backgrounds finally see their works picked up by a Kickstarter and elevated to a beautiful entertainment product. If this is happening, why are we not seeing more TV series and games that tell the emotional truth about climate change? Not the sugar-coated “we are all universally responsible, we are all in this together, we must do better, and in the end, technology will save us,” but the raw “This is the ugliest cover-up in history, the oil companies knew, and the innocents will suffer the most. Keep it in the ground, shut it all down and prepare equitably for the storm”. "

Like what you see? Wanna hear what I've been up to? Read more here.


Artists Vs Extinction.

During the height of the Amazon fires, a bunch of artists were talking: "What can we do, what can we do?". We thought it would be cool to have a place to hang out and craft ecological/social movement art, then publish it. The group has now grown to about 500 members between the AvE Discord and AvE FB. We use it to share knowledge, resources, and talent with the aim of producing ecological political art. The participants are from all over the place, and we even have some scientists onboard. I've been delighted to see the connections, networking, and ideas shared in this group. Moving forward, I'd love to figure out ways to partner with orgs doing sci-comm in the climate or ecological sphere, leveraging the combined resources of AvE and learning through a cooperative design challenge.

(Cover art by my friend Sean Bodley)


Editorial feature



I've been a big fan of Drilled (the podcast and news organization), and it's an honor to be featured as a cover for Dr Kate Marvel's excellent essay 
"I am a mad scientist".

When I set out to paint the resistance against fossil fuels, this is the type of thing I had in mind as low-hanging fruit: creating little visual seeds of resilience and non-cooperation with Ecocide. No matter what happens, one more image exists that depicts the struggle of humans against inhuman fossil fuel corporations (they don't even care about their own workers).
- - -
I have more to share but I'll keep it for the next blog post!


Post-collapse futurism : phase I

Article / 03 June 2019

Before I get started, I would like to thank you all for following me along on this journey. I know that what I write is not necessarily comfortable or nice to hear, so I really appreciate the time you take to read this. 

At the time of writing this post, there are now 20,000 of you, thanks for the support!


Around Christmas time last year, I tried formulating the problem by writing Your 2050 dystopia is weirdly optimistic, and invited artists to create art that better reflects the challenges of our time.

But before urging everyone to join this art movement, I had to try and define it better. It took 4 months of reading very depressing scientific papers about energy supply, solar radiation management side-effects, and the climate feedback loops before I could come up with any ideas worth painting.


One of the important steps in defining this project was to talk about what makes post-collapse art different from post-apocalypse art. While the difference between the future I imagine and the one sold to us by Silicon Valley billionaires is obvious, the difference between the collapse of civilization as we know it and a nuclear wasteland is harder to define.

I'm not going to attempt to create sharp razors for now; I think it's a fuzzy border (Waterworld could fit in, for example). What I can offer, however, is a list of checks:
Does your collapse lead to more than 90% of the world's population dying? If yes, it's probably post-apocalypse, not post-collapse. Is the problem the same everywhere? Or is each region affected by different problems and reacting differently? Etc.

This chart helped me a lot in defining what types of images I would or would not paint. I will not, for example, paint extreme sea level rise where the tops of buildings are underwater. I think it's been done really well before, and it's the middle-left of the futurism spectrum that is more interesting to me: future pathways where it's not the apocalypse, but humanity is definitely not handling the involuntary de-industrialization well.

In a nutshell, my working definition was:

Post-collapse futurism is an art movement focused on showing the policy and cultural failures of today manifested into the hardships of tomorrow.


It’s upon the completion of the third painting that the path forward cleared up a bit; with this type of artwork, I had the potential to start conversations that are usually uncomfortable or showcased as a partisan issue when explored by media outlets.
I think Cli-fi, as a definition of sci-fi focused on climate change, is a great start, but it looks like it's mostly confined to writer spheres at the moment. I also think it's a bit limited in its own brand, since the climate crisis is only one out of 10 different major challenges facing humanity this century. If the climate crisis barely gets any coverage, it's even worse for the others. For example, I could barely find just one good article about topsoil depletion outside of academia.

I realized that I also had the opportunity to plant the flag, to engage other artists in creating this kind of art, and lowering the barrier of entry between caring about climate change and having a finished piece.

Phase I is about planting that flag. I would say I'm just at the beginning of this phase, but it's already pretty promising. I'm lucky to have incredibly supportive friends and family who helped me through depression and at the earliest stages of this project.

Here’s some of the things in the works for phase I:

  • Creating 10-20 post-collapse pieces (in progress).
  • Making a website to feature post-collapse artists and their art (I was trying to learn HTML and do it myself, but an artist reached out to help. Thanks a lot, Maxi! The website could come up online in 3-4 months).
  • Once the website is running, inviting other artists to send artworks that fit this theme of post-collapse, to build an art movement.
  • Writing a statement to go alongside the art, as well as a biography tailored for people not familiar with my work, or even concept art in general (a very supportive friend who is also an Editor helped me write it!).
  • Defining post-collapse futurism in a press release (I can start, but honestly it will be people, from artists to viewers to journalists, who will define what it is).


Artist statement

We often think of disasters as isolated, one time events: brief trials to be overcome, that humanity will bounce back from, stronger than before. Rarely do we imagine disasters or states of crisis to be our new norm. In my work, I explore what that existence looks like. One where there’s no disaster relief team coming to the rescue, because the slow global collapse of civilization is the disaster itself. Like time travelers, this series lets us take a peek into a very likely version of our future. If we don’t like what we see, we need to change our present, now.


Due to the inertia of the fossil fuel infrastructure the world relies on, our planet is guaranteed to increase temperature by another 3°C. Unless governments change their priorities or economic growth grinds to a halt, this future is already history.


If humanity’s “Plan A” is to solve our current challenges of global structural failures and systemic problems through infinite growth of technology and energy with no compromises, the intent of my work is to highlight our total lack of a “Plan B.”


My hope is that exposing the fragility of industrialized countries and painting their future in a state of post-collapse will increase our empathy for those suffering from disasters today and help people visualize what life could look like, not so very far from now, if we don't strive for change together.

Time travel with me now, to the not so distant future, where we will visit our world as it’s currently on track to become.


Welcome to this historical retrospective of the late 21st century.


Short Biography

Efflam Mercier was born and raised along the coast of Brittany, France. After working on 3D animated films in Paris, Efflam moved to Los Angeles to design the fantastical, imaginary worlds of video games -- from dragons to shipwrecks, scavenger cultures to risen dynasties. Yet, however fun dreaming up the wild and thrilling landscapes of fictional, escapist worlds might be, Efflam’s heart has always been deeply concerned for this world. Our world. The world you can’t change with a brush or photoshop. The world that can’t be reimagined or rebooted if we don’t like what we see. The world we can only change if we all pull together. 


So he decided to paint a different kind of future -- not scifi, not fantasy -- but the very real future we all might face if things don’t change. This series is Efflam’s way of starting a conversation we all need to have now. There is much to fear about our future’s prospects. But there is much to take hope in, as well. That hope starts with each other. So let’s step into this version of our future together, take a look around and discuss if it’s the history we want to paint ourselves into.

Efflam usually paints digitally using open-source software; however, he recently switched to traditional mediums like oil and acrylic, fearing that digital art will not be archived if industrial civilization collapses.



Thank you for reading this post. I hope the statement and biography help explain why I'm doing this painting series.
If you started following my art for the dragons and the knights doing knight stuff, I'm sorry to disappoint, but I will probably not paint any more in my free time for as long as the urgency of the climate crisis supersedes my desire to draw fun escapist things.
I found my calling with this project, and I'll pursue it no matter the personal cost.

The way I rationalized staying in the entertainment industry is that it will allow me to make no compromises and pull no punches when I do my personal paintings. On the other hand, if I went and did those paintings full-time, I would probably be biased over time to make art that is safer and more decorative in order to sell, and I would end up hating it. This is why you can still expect me to post fantasy and sci-fi art from the games and movies I work on, so stick around!

PS: Did you know The royal baby Archie got more press coverage in a week than the climate crisis did in the entire year of 2018? Let me know your thoughts below!


Your 2050 dystopia is weirdly optimistic

Article / 10 December 2018

TLDR: I urge everyone to join the conversation on how we can create art that more accurately reflects the challenges of our time.

Picture this: A sprawling megalopolis covered in smog. The faint glow of neon signs and giant LED screens displaying the latest advert for the latest high-tech drug. Constant aerial traffic of flying cars and a spaceship is now boarding for the moons of Jupiter. Police wearing full body exo-skeleton armor patrol the crowded, lively streets. While underground networks of ultra-libertarian hackers fight for the rights of the digital commons, religious sects try to outlaw consciousness transfers.

Sound familiar? It's basically every other sci-fi world imagined after Blade Runner.

I'm allowed to talk crap about this image because I painted it.

Here's the thing: artists have a huge role in influencing the subconscious narrative of humanity.
We got into art because it was fun; for some of us, it's also how we make a living now.

Problem 1: It's rooted in the (mostly) outdated challenges and imaginary of the 80's

Good sci-fi is usually a projection of human challenges and moral/philosophical dilemmas projected into the future. But did you ever notice how most old sci-fi looks very dated?

This comes from the fact that human imagination is usually locked in by our surroundings. Take this electric scrubber, for example: mass-produced mechanical parts were the hot new thing in 1900, so it makes sense that a "cleaning machine" would extrapolate from this. What most people in 1900 couldn't imagine is that sucking air is way more efficient, but they couldn't think of it since there was not much pneumatic technology in the life of the average citizen.

So here's my problem with almost every concept artist (including me!) loving Blade Runner so much: it's reinforcing an 80's imaginary of the future. Meaning it's the future, but viewed from 1968-1980.

I will separate the imaginary from the challenges.
The imaginary is my personal analysis of science fiction from the 70's and 80's, while the challenges are historical accounts of the challenges the authors of 70's and 80's science fiction were facing at the time. Note: all challenges are sourced at the end of the article using the [source number] tag.

Imaginary: 

  • "We went to the moon, now it's time to colonize the solar system"
  • "We're going to colonise other planets once earth is overpopulated"
  • "Flying cars are just around the corner"
  • "The use of robots are going to raise ethics questions very soon"
  • "The USSR will live on forever. The cold war is here to stay."
  • "Japan's economy is going to surpass the USA"
  • "science and industry will keep making more and more powerful machines"
  • "Humanity is the center of the economic universe" 

Challenges: 

  • Pollution at the city level was a major concern in the 1980's as car traffic increased and photochemical air pollution was getting worse.
    At the time of the making of cyberpunk dystopias like Blade Runner, air pollution was actually at its peak in many cities. For example, air quality in Los Angeles slowly got better with the introduction of the Clean Air Act in 1970 and its amendments in 1990 [1]
  • In the 60's, the population growth rates of India and many other countries were absolutely out of control [2], leading to widespread fears of overpopulation from the scientific community. In 1968, Paul R. Ehrlich (a Stanford University biologist) published “The Population Bomb," an apocalyptic vision of an overpopulated earth and mass starvation.
    You can see that the peak of the growth rate matches with the birth of the fear of overpopulation. I highly recommend reading NYT's article titled "The Unrealized Horrors of Population Explosion". We can safely assume that most fiction written around that time was influenced by this challenge.

    While air pollution is still a concern in large cities, it is a mostly understood and reversible phenomenon, and the ethics of robotics is still very much a philosophical debate rather than a software engineering one.

    Now let me attempt to define the imaginary and challenges of 2019 onwards. This is no small task, and of course my list is going to be incomplete, inaccurate, etc. This is more meant as a conversation starter to move towards a more up-to-date vision of the future.

  • Imaginary: 
    • "Humanity is fucked"
    • "We'll be fine, we'll go to Mars haha"
    • "Technological singularity is coming soon"
    • "Nuclear Fusion is coming soon"
    • "The economy can keep growing forever"
    • "This is all going to crash soon"
    • "Developing countries are going to provide 2/3 of the GDP growth by 2040"
      (Note : taken from an actual sustainable development investment journal)
    • "Renewable energy, yay!"
    • "Renewable energy is a leftist conspiracy"
    • "The scientists are going to save us all with some breakthrough technology"
    • "Dude, where's my flying car?"

  • Challenges: 

I'm going to focus on Energy, because most other problems are a result of this.




Remember those memes about graphic design/art? Where you can't get it all at the same time?



We basically have the same challenge: most of the world's population thinks we can still have it all.

Look at the correlation between energy consumption and CO2 emissions:

According to a 2015 paper titled "Causality among Energy Consumption, CO2 Emission, Economic Growth and Trade" by P. Srinivasan et al., "the study detects one-way causation that exists from energy use to CO2 emission and trade" [3]

A one-way causation is a pretty big deal in science; it means one thing directly causes another.

To simplify: energy used = CO2 emitted.


Wait, what about renewables?


Well, first it's important to understand the difference between electricity and energy. Electricity is energy in the form of a flow of charged electrons. Oil is energy in the form of a high-density fluid fuel that can be ignited, releasing heat and pressure.

Usually when we talk about renewables, we are actually talking about biofuels and biogas (organic matter turned into fuel), or energy capture devices that transform other forms of energy into electricity: photons to electrons (solar PV), airflow to mechanical motion to electrons (wind).


So if you take a pie chart of electricity generation, it looks promising! Hydroelectric is at 17% for example. [4]

But as you take a step back, and you include all types of energy that are not in the form of electrons, you end up with this much more depressing chart:

And even worse, look at how renewable energy barely keeps up with the growth rate of fossil fuels:

To summarize: if we care about the planet, we are running out of energy; if we care about energy, we are running out of planet; and if we want carbon-free energy, we should have started 100 years ago.



Okay, but what does all of this have to do with our cool dystopian sci-fi stuff?

If Sci-fi is a way to explore pressing issues by projecting them into the future, I believe we are collectively under-utilizing the medium.

Based on what we know today, the dystopias and utopias that we draw should look much different.

Don't get me wrong, spaceships and sprawling polluted megacities are very cool to paint. But I think if we care about science fiction as an art form, we should try to understand the world a little bit better.


To me, science-fiction is a what-if? engine, and what makes it so great is that you try to portray everything normally past the first what-if?

Then you develop your story around it and make it look cool, and congratulations, you made a sci-fi film.


So here's why your 2050 dystopia is weirdly optimistic:
If your dystopia focuses on a repressive totalitarian government in a super-technological megacity, you are assuming that we are going to solve the dual energy/climate problem. That in itself is already science fiction! So now it's not "what if X", it's "what if X AND we solved the climate/energy problem". Same thing with interplanetary travel: you are assuming that there will be a sufficient civilization and investment to support such an industry.

Physics and engineering today tell us that your megacity in 2050 is either:

  • Powered by coal and divided over the issue of what to do with the climate refugees who camp outside the city's makeshift walls, constantly - under the watch of the super-armed state police.
  • Powered by coal and slowly sinking into the sea. Most of the poor live on platform boards or use bridges to cross between crumbling high-rises, while the rich live on the hills.
  • Powered by renewables, but only the main systems of the city are operational; hunger drove most of the population out of the city and back to the farmlands. Buildings are abandoned, cars are stripped of their engines to power agricultural machines running on rudimentary biofuels, and buildings are stripped of their copper to make motors for home-made wind generators. The few who stay in the city are organized in Organopónicos, a system of urban agriculture developed in Cuba during the fuel shortage that followed the fall of the Soviet Union.
  • Powered by nuclear, but it's actually one of the last operational cities on Earth. The population is growing concerned about the supply of Uranium and Thorium; some even say the government is hiding the fact that there are only 10 years of fuel left, as the rest of the world placed an embargo on Uranium.

All of these examples are world-building based on just ONE of the challenges of our time; there are many ethical, social, and technological challenges to explore. That said, it's clear that energy is the cornerstone of civilization, so maybe we should link our world-building to it.
• What are the ethics of climate accountability? Who is going to pay for the damages? Are hordes of hungry, displaced civilians going to lay siege to the fossil fuel billionaires' doomsday retreats and hunt down their yachts across the world's acidified oceans?
• What are the social implications of an energy descent? How would cooperation triumph over egoism? Would a low-energy world be more or less democratic? How would a woman living in France see her sister across the Atlantic Ocean once all fossil fuels are banned for civilian use? Would sailing make a comeback? In that case, wouldn't piracy also make a comeback?
• How would small communities share and access knowledge through technology? Will they repair phones and turn them into simplistic low-energy web servers? Will human-powered velomobiles deliver news from town to town on broken roads?

I've been researching the energy/climate problem extensively for the past few months, and let me tell you; there is no miracle solution.

I'm very concerned that our imaginative output as artists almost never reflects this impending energy descent.
I'm wondering if it's because few people are aware of the problem, or rather that we don't know how to portray it?
I think I'm in the latter category: I want to make art that reflects this post-carbon vision of the future, but I wanted to make sure to do my research first.

What do YOU imagine 2050 will look like?
Permaculture Utopia? Thermonuclear weapons aimed at the biggest Co2 emitting countries? Desperate measures like dropping sulfuric acid in the atmosphere [5] gone wrong?

Let me know in the comments below!



Sources:

[1] Arthur Davidson, Photochemical oxidant air pollution: A historical perspective (Studies in Environmental Science, Volume 72, 1998, Pages 393-405)

[2] Up to 2015: OurWorldInData series based on UN and HYDE; after: UN Population Division (2015) - Medium Variant projections 2015 to 2100

[3] Causality among Energy Consumption, CO2 Emission, Economic Growth and Trade, 2015

[4] The Shift Project Data Portal

[5] A Cheap and Easy Plan to Stop Global Warming


How I discovered my love for cinematography

Article / 16 July 2018

In 2013, my last year of high school, I was on a learning spree.

At the time, I wanted to become a 3D lighting artist. I was emailing industry professionals all the time, asking for advice, etc.
I didn't get replies all the time, but some artists really helped.

Benjamin Venancie is one of them. He is a lead lighting artist at DreamWorks, and I had just asked him something along the lines of:
"On an artistic level, how does one learn about lighting? Any books or methods to recommend?"

I'm going to try my best to paraphrase and translate the answer, in a way that can help others develop their taste for cinematography.


  1. Photography: 
    Being a good lighter is first of all about a global understanding of images: not just the light, but also the composition and everything that has to do with the camera. A good way to understand the basics is to practice and study photography. It enables you to "train" your eye and taste, meaning when it's time to make a call, you can make good choices for the lighting.

    Recommended readings: The Negative, The Camera, and The Print by Ansel Adams (note: very long and technical, but they cover the fundamentals of photography). Photographing Shadow and Light by Joey L. (behind-the-scenes and lighting position diagrams).
    Additional links: Guess the Lighting, a website that describes the lighting position diagrams of fashion and editorial photographs.
    LightFilmSchool channel on YouTube to know more about light placements for film.

  2. Films:
    Benjamin gave me a list of movies that impressed him from a cinematography standpoint.
    I will now list each of these movies, what I learned, and how my taste evolved from watching them.

    Barry Lyndon (1975) directed by Stanley Kubrick.

    What I learned:
    Practical vs Natural
    From a cinematography standpoint, the interesting part of this movie is that it was entirely shot using natural light.
    You see, usually when there's an interior shot, say with people gathered around a table, there is a lamp on the table.
    That lamp is called a "practical", but most of the time it's just there as a "motivator" (the reason why) that justifies the existence of a huge Fresnel lamp off camera, pointed at the actors' faces. The reason this is done is that practical lamps often don't generate enough light for the sensitivity of the camera.
    The result is that any interior shot before recent groundbreaking high-ISO sensors has been "faked" with varying levels of success.
    Compare two masters of their craft: Stanley Kubrick (with Larry Smith) and Roger Deakins.


    In this shot from the movie Skyfall, you can see Deakins uses the restaurant's table lamp as a practical to justify the lighting on the actress.
    A common practice is that the lamp should not be blown out to white, as that's considered ugly. From a natural-lighting perspective, though, this shot is unrealistic: the lamp would have to be blown out to light the actress that much. My guess would be that there is a diffuser hidden under the table, as the angle and softness are slightly off. The image looks pleasing, but you know you are looking at a movie.

    In this shot from Eyes Wide Shut, by contrast, the practical is the sole light source on the actor's face. It is blown out to white because of its intensity, and you can hardly see the actor... But doesn't it feel much closer to the feeling of being inside a busy restaurant with Christmas lights?

    Bravo for practical lighting! If you want more information on Kubrick's use of practical lighting, check out this excellent video.
    (I think both options are perfectly valid, but as artists, we should know when we are breaking the laws of physics, and what we are trying to achieve by doing it).
    I will now post more of my favorite shots from Barry Lyndon:

    [Barry Lyndon stills]

    One of the reasons I think the all-natural lighting works so well for a period film is that it feels familiar: classical painters had no tools other than the sun, the sky, windows, and candles to complete their masterworks.
    Bonus: it's kind of obvious, but ominous skies as a foreshadowing device are really effective.

    In the Mood for Love (2000) by director Wong Kar-wai


    What I learned: Poetry within Chaos
    Christopher Doyle has to work fast. The productions are low budget, most of them shot on real-world locations and in tiny, cramped apartments. The director has no script and decides what to shoot right on the spot. Whoever thrives in this environment has to acquire a taste, an eye that can detect beauty within urban concrete jungles and neon lights.

    I really did not expect to like this film.
    In the following shots, I want you to pay close attention to 1) the unexpected color choices, and 2) the use of frames, windows, mirrors, and pure black to create negative spaces.

    [In the Mood for Love stills]

    Now let's contrast this poetry by making a 180-degree turn to look at another movie with Christopher Doyle as DOP:

    Hero (2002), directed by Zhang Yimou.

    [Hero stills]

    What I learned:
    Simplicity is key in composition; central compositions and symmetrical designs make the visuals stronger.
    Go bold with color. If you stick with mostly earthy, desaturated tones, reintroducing a single color at a time makes for very bold images.

    Skyfall (2012) directed by Sam Mendes. Roger Deakins as DOP.

    What I learned:
    Silhouettes, silhouettes, silhouettes.
    Leading lines.
    Selectively lighting a part of an actor's face.

    [Skyfall stills]

    The Fall (2006) directed by Tarsem Singh. Colin Watkinson as DOP.

    What I learned:
    The Fairy Tale Aesthetic
    Transitions (as seen below, from a butterfly to an island)

    [The Fall stills: butterfly-to-island transition]

What's really interesting in this movie is the juxtaposition of seemingly disparate elements to highlight the creativity of the young girl, who wants her say in how the fairy tale unfolds. Beautiful movie; I highly recommend it.

On Benjamin's movie list was also Blade Runner, but I think we don't need any more Blade Runner-inspired concept art these days :D
That's why I refuse to show it, however cool it looks!

Overall, watching these movies and paying attention to the craft of cinematography set me on a watching spree, studying many other films and absorbing on-set DVDs about lighting. I think you need to have watched a lot of good films to know what your taste in films is.
The same goes for painting, drawing etc.

I think I also need to stop here because I'm not sure the ArtStation blog feature was meant to handle 100 images.
I will leave you with the short film The Bloody Olive; I think the exaggerated lighting effects make it a great case study.
As a final note, I would like to thank Benjamin, and all the people who take the time to reply to emails to help people out.

What's your favorite film or short from a cinematography standpoint? Share in the comments below!


The list of cool stuff for learning [updated]

General / 28 January 2018


I get asked pretty much every day about what resources to use, which videos to learn from, etc., so I decided to compile a list of the stuff that really taught me a lot:

BOOKS:

The Art and Science of Digital Compositing, 2nd edition (Ron Brinkmann)

A huge, complex but complete volume on everything digital compositing: the history of VFX, the math behind computer graphics, the nature of digital images and signals, all the way to production breakdowns of some of the most groundbreaking VFX shots of the last two decades.
I think I read it about 4 times. One of my friends borrowed it but I forget who, so I'm probably going to order it again, just to have it in my library :D

Framed Ink (Marcos Mateu-Mestre)

Practical examples of framing, lighting, and storytelling (comic-book oriented).
Cheap and good.

Framed Perspective (Marcos Mateu-Mestre)

Great book on perspective drawing; I also recommend Framed Perspective Vol. 2.

Alla Prima II (Richard Schmid)

A must-have for every wannabe painter; you can easily apply that stuff to digital painting.

The Color Correction Handbook (Alexis Van Hurkman)

This book goes really in depth on color grading for cinematic images, and it drops some little gems of knowledge about human perception of color, temperature, and contrast along the way.

The VES Handbook of Visual Effects (Visual Effects Society)

If you thought the other books I mentioned were in depth, here comes the winner!
This book covers the entire visual effects (VFX) production pipeline down to the last polygon. It's organized by chapter, each one covering a discipline/role in a VFX studio.
Prepare for an avalanche of information!

COURSES:

Schoolism:


They just recently started a new subscription model: for a hundred bucks you have access to all the courses ($1 to switch, hehe). It used to be around $500 per course when I started out, so you guys are spoiled! Here are a few that I recommend:

Lighting Fundamentals (Sam Nielson)

This is the course that transformed me into a lighting geeeeek. I just loved his scientific approach to a seemingly abstract topic; this is a really complete course.

Designing with Color and Light (Nathan Fowkes)

Listen to this guy's smooth voice as he explains lots of cool stuff about light and composition; great for beginners.

Painting with Light and Color (Dice Tsutsumi and Robert Kondo)

I was quite surprised by this one. I was expecting to be a bit bored since I had already learned a lot about lighting, but they have some interesting ways of explaining things, and the way they use digital life painting to explain is great.

EdX: (website)

EdX is an online learning platform hosting various MOOCs (massive open online courses).
I think platforms like these are the future of scalable, high-quality education.
You can get free courses from MIT, Harvard, Stanford, and Berkeley.

FXPhD: (website)

Same kind of subscription deal as Schoolism, y'all lucky!
The website is focused on technical VFX and 3D animation training.

I recommend you start out with:
History of Visual Effects; I promise you'll have 10,000 more questions by the end of the lectures, and this is the perfect place to answer them :)
I recommend any courses by Matt Leonard or Mike Seymour (especially on the math stuff); if you want to go deep into software training, they've got you covered!
They also have great breakdowns and blog posts about news and technology advancements.

YOUTUBE / FREE VIDEOS:

Free doesn't mean bad! So here's a list of good ones:

Cinematography Database is a YouTube channel dedicated to explaining cinematography and cinematic lighting techniques through 3D rendering, and how they translate to a real-world movie set. (Thanks to Marek Tamowicz for the suggestion.)


Illustration vs. Concept Art (Design Cinema EP 53)

Visual Library (Design Cinema EP 52)

CTRL+PAINT: this Matt Kohr dude put so much work into making this library of free videos, high five!
It can take you from total noob to beginner in digital painting.

FESTIVALS / WORKSHOPS

THU is a crazy awesome festival that happens in Portugal in September, and it's also 50 hours of kickass content for about a hundred bucks. The conferences are really packed with content, so it's definitely a good one if you're short on time! btw fuck you Andre <3

IW is a ton of cool workshops happening in London, also in September. Like THU, it's getting better every year, like a bottle of wine.

IFCC was very interesting too. It happens in Zagreb, Croatia.


I will keep adding things, as I probably forgot a lot of cool ones, but this is the list so far. I hope you are hungry for knowledge, because dinner is ready!
