Unreal Engine Open World Preview and Landscape Tools | Unreal Dev Days 2019 | Unreal Engine


>>Hello everybody.
Thank you all for coming. My name is Ryan Brucks
and I’m Principal Tech Artist at Epic Games,
and that means I get to kind of jump around
on different projects helping out on Engine features
and game teams and different things like that. I’m here today
to give you an overview of some of the progress
we’ve been making
on the Open World initiative that Nick mentioned
briefly during his keynote. We’re going to go take a closer
look at some of these features including a few
live Editor demonstrations that should help
give you guys an idea of what it’s like
to work with the tools and the type of content you might be able
to create with them and hopefully how they might
affect your workflows and how you might want
to start using them. But before we do that,
we’re going to kind of quickly go over
what are some of the challenges of making a game engine great
for Open World development. And the Open World topic
is pretty diverse and expansive and probably touches
most of Unreal Engine 4. To kind of help simplify that,
I think it helps to distill that down to a few
major categories that we’ll kind of go through and discuss the quick challenges
that we face there and what we’re probably
going to end up doing to address
those challenges. So for this discussion, those features are really high performance streaming, because you have to have a large world if you’re going to have an Open World, and you want to be able to stream in just the content that’s near your player, where you need the most resources and the most detail. And you need Editing Tools
that will work for any scale because as the size
of your world increases, the size of your edits
might have to increase as well. And if you’re doing large edits
to say a gigantic mountain or river in the same way that
you’re doing little hand edits like moving rocks
and moving bushes around, it’s going to be pretty time
consuming and costly to scale
those edits up to your project. And then lastly, quality rendering
at any level of detail, and that kind of
has implications for streaming. Just the quality of
the Materials in the scene and the lighting
and a lot of other things. And it’s also worth pointing out
that Unreal Engine 4 already has a lot of tools and features
that deal with these categories and does a pretty good job. But we always want to improve
the Engine whenever we find
an opportunity to do so. This might help give an insight into what we’re
thinking of doing there. Some of the challenges
for Open World streaming, again,
we have an Open World so presumably it’s bigger
than just a little courtyard or a slab
of background mountains that games used to have before Open World
became pretty popular. So, you need a way to separate
those different areas into chunks and you can think of that
as having a grid or having different cells
of the world. There’s a lot of different ways
to manage that. So obviously you don’t need
everything loaded at the same detail and you want to know
where your player is and manage loading that in. And that involves specifying
and handling when and where to load those different separate chunks
of your world. And there’s a lot of different ways to manage that, which we’ll go into. But there’s another kind
of hidden side of this which is keeping the content
in chunks, because, after you’ve initially
decided to split up some world into a bunch of little grid
pieces or separate levels, it can be challenging
to make sure that as you edit the world, it stays that way
and you’re not crossing lines and breaking boundaries. For Editing Tools, this is
really about letting your users work comfortably at any scale
including really large scales, because like I said before, you might have
an Open World project with big mountains or big rivers
or continents or islands where you need to change the coastline or move this river over here, and doing all that stuff by hand
on an individual level doesn’t always scale
and just takes too much time. And yeah, you can throw a lot
of people at these problems and try to have them all work on
different parts of the terrain, but then you have
another issue: it’s hard to have consistency, and you get contention over files and things like that. And this really touches on a lot
of areas that we’ll talk about. Mostly Procedural Tools
and Landscape Tools, and that involves both hand editing and procedural work, and then lighting leads us kind of into the challenges of Open World rendering. So, one of the things
we hear about a lot is that people really want to be able to do higher
quality terrain rendering and that includes higher quality
terrain performance as well. The big thing we’ll be talking
about for that today is Virtual Texturing, which there’s some pretty
exciting things you can do with it and
some interesting implications. And then another thing
that comes up again and again, which kind of seems like an obvious, really broad label: accurate representation of worlds. So that means,
even if you have the tools to create
this big gigantic world that you want,
how do you make sure that it feels big
and feels alive and feels plausible
and all these things. So, we’ll be talking a little
bit about the Atmospheric Sky Model that was mentioned during the keynote as well, giving a quick hint that we’re expanding that to a Water Model in the future, and how this kind of all works
with different types of streaming
and things like that. Jumping into streaming
a little bit more, of course there’s a lot
of requirements about streaming. We have to handle
the spatially separated areas; we have to let people
support editing at different levels
of granularity. So, let’s say you need to do, like I said before, a small little edit to one area, or a gigantic edit that touches most of your world. You want to have a sane option that makes sense for both of those cases and everything in between. And then this all has
to be linked to the level of detail
of your objects and things, and of course touches on HLOD, which I remember there was a question about; I’ll try to slip a little bit of an answer into one of the upcoming slides about one way that you could address that. The status quo for managing
the streaming in UE4 has been World Composition and there’s nothing new there,
of course, right now. That’s been in UE4
since roughly the early days when we took
UE4 to the launcher and that sort of thing. And it includes a lot of nice
features out of the box that can get you
started to a pretty good place. You can lay out your levels
in a streaming grid with a 2D viewport
showing all the terrain. You can import Tiled
Landscape Chunks, which is a pretty
common workflow that we see, people will use
external programs like Houdini or World Machine
or pretty much whatever [inaudible] to bring in a bunch of different
chunks for their Landscape. And then that forms a nice
starting point of a pre-split grid that you can then use as kind
of your world granularity. And then it gives you the option
to create Whole Level LODs, and of course, Unreal Engine 4
has the proxy LOD generation now to replace Simplygon. So, you can create
whole LODs without any separate licenses
or tool requirements. And usually this is managed
by people by going with either scripted
or Volume-based solutions. Probably Volume
is the most common where you’ll just have
level bounds for each object or level
in the World Composition. And then it’ll kind of try
to automatically stream those in based
on the level bounds. And you can of course override
and customize this. And a lot of people go
with custom solutions as well. There’s some challenges
to using World Composition even though you know, it does give a good foundation
to start with like I mentioned. We mentioned before,
keeping content in chunks. So that might sound kind of
like a silly quick thing, but there’s actually some
implications worth considering. So, let’s imagine ourselves
as either level designers or level artists
or environment artists assigned to pretty up this
little patch of our Open World, a little tiny level
called Level A. So, we think, okay,
there’s some trees but they look a little lonely. We need to add some more
happy trees. And so we’re going through
populating our level and starting to look
pretty good up until,
happy tree turns into sad tree because someone accidentally
placed it outside of the bounds
of the streaming area that they
were actually working on, which might not
be obvious at the time that they’re performing edit because it still looks like
it’s on the terrain. It’s just that it’s not
in the right level. And the side effects of this really depend on which streaming solution you use. On the one hand,
if you’re using level bounds and they’re automatically
calculated this little tree over here can suddenly
just dramatically increase the size of level bounds and really throw off some of the
performance and profiling data you might be getting
on your project because you really need to have consistent
repeatable test results when you’re optimizing
for a performance target. And if you have daily testing
where one day the build is broken because the bounds got all huge
and the data looked like it was really expensive,
it makes it hard to gauge the effectiveness
of your other content changes. And then in the other case
where you’re specifying the bounds manually, it can cause
an equally bad problem in the opposite direction, where those trees just vanish because you weren’t close enough to the level they were placed in, even though they should’ve been placed somewhere else. That can also cause falsely positive performance reports as well. Because, hey, what if half of
this level was placed over here and it’s not even visible
in this performance test? So, the solution of course
has been to manually maintain and fix that up.
You figure out which area does that belong in
and where does it go. We’ve identified this
as something we really want to improve upon
because even internally, we spend a lot of time
on these things. One of the solutions, and this was also
mentioned in the keynote, is that we’re exploring
an upcoming mode to manage sub-levels
based on a grid automatically. This is just an image
from a prototype right here, but hopefully,
it conveys the rough idea that we have a concept of a grid splitting up this terrain
or floor surface here. And as we drag objects
into the world from say, the Content Browser or the
little asset pallet there, it automatically is figuring out
where that object needs to go. And then that also takes effect
as you’re dragging the object around after it’s been placed.
So, obviously this is just debug text and debug prototype
stuff showing and that would be something you
would optionally turn on or off, but the point really being that
you wouldn’t really have to worry about that anymore. It would just be handled under
the hood automatically by the Engine.
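To make the idea concrete, assigning an Actor to a grid cell really just comes down to dividing its world position by the cell size. This is only a conceptual sketch of that assignment step, with made-up names, not the actual Engine code:

    // Conceptual sketch only -- not the Engine's implementation of the grid mode.
    // Maps an Actor's world position to the streaming grid cell it belongs in.
    #include <cmath>
    #include <cstdint>

    struct GridCell { int32_t X; int32_t Y; };

    GridCell CellForPosition(double WorldX, double WorldY, double CellSize)
    {
        // floor() keeps positions with negative coordinates in the correct cell.
        return { static_cast<int32_t>(std::floor(WorldX / CellSize)),
                 static_cast<int32_t>(std::floor(WorldY / CellSize)) };
    }

    // When an Actor is placed or dragged, the Editor can recompute its cell and,
    // if the cell changed, move the Actor into that cell's sub-level.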
So of course, there’s still some challenges when having levels in grids based on the different paradigms
in the Engine. One of the challenges
with World Composition is that you can’t really nest
multiple levels within other levels.
So that leads you to kind of have to face
some different choices. So, let’s say here
that we have a little house, and it might be ideal to say, hey, this house
should be its own separate level so that we can modify
this house level without having to check out
the whole terrain chunk or whatever
that happens to be there. And of course, some of this is
addressable by having multiple different
streaming grids in your world, but some of that
gets pretty advanced and complicated to set up.
So, we’re looking at ways to kind of ease
some of the tension. So what this kind of image
is trying to show here is that in the one case
we have a house level by itself and then the one that’s blue
being placed on the tile there, it’s kind of been placed as Actors directly
in that Landscape tile, which people also do
pretty often to avoid having tons and tons of different
persistent level streaming. So yeah, the two choices people
would often make to work around this is, either they’ll still have
their individual house levels, but they’ll have them
all as persistent levels in the same top hierarchy so that World Composition
can reference them or they’ll just manually
place them in the Level Browser
in the Editor. And both of those kind of
come with side effects. Either you have to then
check out the persistent level anytime you want to move a house
or adjust something or you place them
in the grid maps, like I said before, and suddenly you have
the problem of file contention. In this case, we have an example where that same patch of terrain now contains the trees and the terrain and the house, because in this example we decided to put the house into the terrain level. What happens when we have four
or five people trying to work
on some objects in this area? Whoever got it
first ends up winning. And there’s also a bunch
of other cases like someone well intentioned was about to check
in a really awesome fix, but then they had to run home
right at five o’clock because something happened
at home and then someone else was meaning to use that same
content right after them. So, then you have to involve IT and they’ll have to revert the
person’s checkout and make sure, hey, I hope
you didn’t lose too much work, otherwise you’re going to
have to redo it. So, trying to solve that kind of
gets us back to another point from the keynote,
which is File Per Actor. So right now, we have the whole
concept in Unreal of a level, which is one monolithic file containing
a bunch of individual assets like Actors and objects
and Volumes and all sorts
of different things. But that means if you want to
touch something in that level, you always have
to touch that file. The idea of separating Actors
into different files is that now, instead of having to
just touch that level, you’d have all these different files, each of which can be modified on its own. A tree or a rock would have
its own transform information in that file
so you could just check that out and move
where it is within the level without having
to worry about that. And then that means
you could still have attachments and things like that. So, in theory it would
make it better and easier to have a nested
hierarchy of different Actors. So now jumping
and changing gears a little bit to Editing Tools,
talking about Landscape. And Landscape is obviously
pretty important and at the core of many
Open World projects because you usually want
some sort of foundation of earth for your game unless it’s purely an
urban environment for example. And one of the important things
about Landscape, and it goes back to
one of the early requirements, is that as the project
scale increases, the tools must also scale
with the project. And that can refer
to a couple of different things. Right now, we’re going
to be talking about kind of two alternatives,
which is either making handcrafted edits
more powerful, you know, so going back again,
moving mountains and sculpting gigantic canyons and things like that
in the same way and with the same ease and speed
that you might sculpt a pothole or make a tiny adjustment
to a sidewalk. And then the other side
is procedurally created content. And that can be,
at the base level, it’s something as simple
as procedural noise to be the base of a terrain before you start really
sculpting it out by hand. Or it can refer to procedurally
scattering objects with Procedural Foliage or even using Blueprints
to do Spawn Actor with some custom logic,
which there’s honestly, there’s a lot you
can do right now, but we have some improvements that we’re excited
to show as well. So, the big thing for the
Landscape improvements right now is the new Layer System. 4.23 included this as an experimental feature, and it’s also worth mentioning that 4.24 is aiming to have this system go out of experimental and into beta. So, the Layer System
allows for a full stack of compositing layers
on your terrain and it’s kind of up to you
how you want to use them, but here’s just one example. You might want to have different
features of different types of editing on different layers. So, the way you can think of these is that each layer is another complete terrain dataset, meaning we have access to paint
on a new height map and a copy
of each new paintable layer where we can make custom edits. And these are all non-destructive
in that they don’t affect what is below them in the stack. You can easily hide
and isolate them. And another question that was
asked before was blending. You’ll notice that on the right, that’s an image of the Landscape
mode with the Layer System. And next to layer one is an opacity
or an alpha on the very right. So yeah, you can click
the eyeball to easily hide or debug these things to see,
hey, what’s going on, why is this layer
making this weird spike here or something,
or you can just type opacity to zero or 0.5
or whatever you want. And that’s pretty useful. And it’s also
worth pointing out here that the layers currently, they always have a fixed blend
mode of being kind of a relative add to the layer
below them in the stack. Meaning when you add
a new layer, they shouldn’t actually change
the terrain right away, but you then have the ability
to paint down or paint up or use flatten or noise
or different brushes like that. And they should be expected
to work as normal, although keeping in mind the experimental/beta nature
as well.
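If it helps to picture how the stack flattens, here is a rough mental-model sketch of the relative-add compositing with a per-layer opacity; the struct and function names are made up for illustration and this is not the Landscape code itself:

    // Mental-model sketch of the layer stack, not the actual Landscape code.
    // Each layer stores a relative height change and an opacity; the stack is
    // flattened from bottom to top as a series of relative adds.
    #include <vector>

    struct EditLayer
    {
        std::vector<float> HeightDelta; // per-vertex change relative to the layer below
        float Alpha = 1.0f;             // the per-layer opacity shown in the UI
        bool bVisible = true;           // the "eyeball" toggle
    };

    // Assumes BaseHeight and every HeightDelta array are the same size.
    std::vector<float> FlattenStack(const std::vector<float>& BaseHeight,
                                    const std::vector<EditLayer>& Layers)
    {
        std::vector<float> Result = BaseHeight;
        for (const EditLayer& Layer : Layers)
        {
            if (!Layer.bVisible) { continue; }
            for (size_t i = 0; i < Result.size(); ++i)
            {
                Result[i] += Layer.Alpha * Layer.HeightDelta[i];
            }
        }
        return Result;
    }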
This is pretty flexible and up to you how to use, but as an example
of one way to use it, let’s say that we have
a few different layers. We have our blockout layer,
where you might want to mass out kind of the basic foundation
of the terrain for your area. You could think of it as: what is the natural geology of this area of the world that’s not likely to change in the course of our game, and that’s not affected by man-made objects and that sort of thing. And then a spline layer
which could represent things like roads
or rivers or paths, trails. And then a building
foundation layer could be thought of as almost
a scratch layer for quick edits that help tie human made
structures into the Landscape. Buildings might need to have
a little flat area where they nest into a slope
for example, or roads or towns might have
a nice little clearing area that you want to be
able to tweak. And that comes with some
interesting benefits. Just a quick little animated
image showing how this works. Here we have the blockout layer
being shown as kind of an island shape, that’s the noise, the spline
layer being a Landscape spline and then the foundation layer
just kind of making some tiny little adjustments
on top of that and notice we can look at just
the foundation layer and see some little
modifications that it did. So, one of the things layers
can help with is to avoid redoing work
in cases where you need
to change your mind or move something around
or undo something. And this is an example where, say someone created
a nice procedural noise basis for the terrain
and then later went in and flattened out
some spots for buildings or for maybe a soccer field
or a stadium or something like that
and then later, you decided, hey, that shouldn’t be there, we need to delete it
or move it around. If you had done that edit
in a layer, you could actually just use
the erase brush and go ahead and erase
that little foundation splat and go back
to where you originally were. So, when we say
it’s non-destructive, that’s really
what we’re talking about and that it’s non-destructive to the data
within the other layers. So, it’s also worth pointing out
that the serializing/saving process
of the Layer System bakes
everything down to single layers and that means that
there’s Editor-only costs. So, in the Editor mode, you’ll
have some extra render targets and extra Textures
in memory for a time. But the cooking process
simplifies that all down. So even if you have four or ten
or a hundred layers, it’s going to export it down to a single Texture
for each layer and weight map as well,
in addition to whatever other packing the Engine
does for different platforms, such as, we pack weight maps
into the different channels of a Texture, for example,
just to reduce samplers. So of course, some of
the experimental risks which will be going away
a little bit when this goes to beta in 4.24. But of course, still in
development: features may change, and Editor optimizations
are still being implemented. And I’d also like to say
that some of the caveats that were mentioned in the keynote
also apply here to this talk and that
this is not all inclusive. There’s a lot of other
Open World work going on as well, et cetera.
We mentioned Spline Systems. We also have an improved
Spline System now that works with Layer System. And the big change here
is that the splines can non-destructively
carve the terrain. And that means that
as you move them around, they will, as soon as you
let go of the mouse, carve out the terrain with
whatever settings they have. And so here I’m just moving
around a couple of points but also adjusting some settings
like the falloff and blending of the edges.
So, you can change things like the width
of the blend at each point. And there’s also some
Texture-based effects that this image doesn’t show, because I didn’t have time
to get an example of that. But you can basically
break up the Texture. If you want to have asphalt
or gravel, you can kind of use a noise mask
to modulate that a little bit and make it look
more interesting. So, the next thing we’ll be talking about is Landscape Blueprint Custom Brushes. So, this is a system that was exposed
through the Layer System again. And what it does is it gives
control of the Landscape to Blueprints
or rather control to modify and of course
break the Landscape if the custom brush is broken, so you have to make sure
you hook them up correctly. The idea behind these
is that it gives you power to use the GPU
to modify the Landscape. So, this is in 4.23
via the Landscape Layer System. Although there’s not any example content or introduction to how to actually use this in 4.23, 4.24 will have a plugin and an example project which will make it easier to figure out and see how it works. So, the basic process here
is it starts from Landscape, which calls a render function
for our Blueprint. And then that Blueprint
dispatches Render Material Draw events to the GPU, which then come back
as just a series of Textures which then get reprocessed
and sent back to the Landscape for the compositing process
of the layers. The Blueprint Brushes work
with the Layer System. Whenever you add a brush,
it goes to one layer or another. Unlike splines, they don’t require the layer
to be reserved for that purpose. You can actually add them
to a layer and then still hand paint in
that layer although right now, I believe that the brush
is kind of in the higher priority of the two.
So, if you want to do something like have a brush
that does stuff and then edit with hand brushes
or hand sculpting on top of that, you would have
another layer above and then do the hand
edits in that above layer. And these brushes can affect either height maps
or paint layers. And that means
also multiple layers. You’re not restricted
to just choosing one. What this looks like in the most
simple form in Blueprints is using the render function. You just drag a custom brush
into the world in Landscape mode and it’ll assign it
to a layer automatically and then whenever that brush
is modified or the Landscape is modified in a way
that requires this to update, it’ll call render, it’ll pass in a reference
to the render target that is kind of like,
what is the value of the terrain before we apply this brush?
And then in the Blueprint, you basically take that
and pass it to a Material, access that in the Material
and do some changes and pass the result back on
the return of the render node. And that return render target is basically whatever will
end up on Landscape. The best way to get an example
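Roughly sketched out, the round trip looks something like the snippet below. The function name, signature, and the BrushMaterial and OutputRT members are assumptions for illustration only, not the exact engine entry point, but the shape of the flow is the same:

    // Illustrative sketch of a brush's render callback -- names and signature are
    // assumptions, not the exact engine API. BrushMaterial is assumed to be a
    // UMaterialInstanceDynamic* member and OutputRT a UTextureRenderTarget2D* member.
    #include "Kismet/KismetRenderingLibrary.h"

    UTextureRenderTarget2D* AMyCustomBrush::RenderBrush(UTextureRenderTarget2D* InCombinedResult)
    {
        // The Landscape passes in the combined result of everything below this brush.
        // Feed it to the brush Material as the "HeightRT" Texture parameter.
        BrushMaterial->SetTextureParameterValue(TEXT("HeightRT"), InCombinedResult);

        // Dispatch the Material draw on the GPU into our own render target.
        UKismetRenderingLibrary::DrawMaterialToRenderTarget(this, OutputRT, BrushMaterial);

        // Whatever we return here is what the Landscape composites into this layer.
        return OutputRT;
    }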
The best way to get an example of this is in 4.24, with the experimental plugin Landmass. We’ll be talking a little bit
about some of the custom brushes that come
with Landmass right now. And you can also get access
to that on GitHub, if you guys are set up for GitHub access and want to be really early adopters, which I encourage. So, what this looks like
inside the Material space is basically you’re reading
the HeightRT Texture there, which is a Texture parameter. So, you can assign it using
a Blueprint to read the value
of that Texture that came in and then doing
some modification to it, which in this case we have
some displacement Texture, that little noisy Voronoi looking thing
that we want to add to it. And then the thing to point out
is that Landscape stores values in a quantized
format internally. What they represent
is 16-bit integers, which means you have a value
from zero to 65 K. But it stores that using
an 8-bit Texture format with two channels RG8. There’s some functions with
Landmass to unpack and repack that data before you return it,
which you’ll need to do. Right now, with just 4.23, you’d actually kind of
have to figure that out yourself or look at the code for how
the Landscape does that packing. So, I’d recommend
looking at Landmass and getting these two functions
to get started as well. And basically the workflow
there is you sample as normal, you place the unpack function just on that render target
of the Landscape, then you can do regular
shader work because you have regular values
at that point and then just before you return the output
you repack it back to RG8.
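Just to make the packing concrete, here’s roughly what the unpack and repack boil down to, assuming a simple high-byte/low-byte split across the R and G channels; check the actual Landmass functions or the Landscape source for the exact convention:

    // Sketch of the idea behind the unpack/repack helpers -- assumes a simple
    // high-byte/low-byte split; verify against the actual Landmass functions.
    #include <algorithm>
    #include <cmath>

    // R and G come in as normalized channel values in [0, 1] from the RG8 target.
    float UnpackHeight(float R, float G)
    {
        // Reconstruct the 16-bit integer: high byte * 256 + low byte.
        return std::round(R * 255.0f) * 256.0f + std::round(G * 255.0f);
    }

    // Height is a value in [0, 65535]; split it back into two 8-bit channels.
    void PackHeight(float Height, float& OutR, float& OutG)
    {
        const float Clamped = std::clamp(Height, 0.0f, 65535.0f);
        const float HighByte = std::floor(Clamped / 256.0f);
        OutR = HighByte / 255.0f;
        OutG = (Clamped - HighByte * 256.0f) / 255.0f;
    }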
So, here’s an example of a custom brush that just does noise
on top of the terrain, which is a pretty fun place
to start. You can see by having different
amounts of octaves and persistence, you can have different levels
of terrain where it’s kind of starting
to get smooth with a little bit of noise versus
just very jagged, very quickly. So, you can try out
some different looks and just use
that as a way to start out with the foundation
of a terrain. And you can also
do more advanced effects such
as generating Distance Fields and using those Distance Fields
to carve out and effect the terrain
in pretty custom ways, really taking advantage of what
the Material Editor can do. So, the process to create these
Distance Field based brushes
is still all using Blueprints. And there’s kind of
a process involved starting with spline data. So, all these shapes
for these brushes that we’re going to show start
with splines that are just drawn out
like regular splines, but they’re
kind of always assumed to be a closed loop spline, you just don’t have to connect
the last two points. So, you can actually render
this out to a canvas mask using Blueprints by using a Draw
Material to Render Target, but instead using a slightly modified version with just a Canvas draw where you give it a list of triangles. And notice here
that it’s not actually rendering a triangulated Mesh
with proper triangulation. It’s just using a winding order
trick where for every segment, you draw a triangle back to the first vert, and then you flip the sign of the Material based on the winding order, which in UE4 is just the node
Two-Sided Sign in Materials. So that causes it to kind of cut
out the little chunks where the spline
went the other direction. And then you can get a result which is just
a little mask Texture.
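If you want to see that trick in isolation, here is a small CPU-side illustration of the same idea using the nonzero winding rule with a fan from the first point. The Two-Sided Sign node is what supplies the plus or minus one in the Material version; this snippet is just an illustration, not the actual canvas drawing code:

    // CPU illustration of the winding-order trick, not the actual canvas code.
    // Fan triangles from the first spline point; triangles wound the "wrong" way
    // contribute -1, so overlapping coverage cancels and you are left with the
    // filled interior of the closed spline.
    #include <vector>

    struct Vec2 { float X; float Y; };

    static float SignedArea(const Vec2& A, const Vec2& B, const Vec2& C)
    {
        return (B.X - A.X) * (C.Y - A.Y) - (B.Y - A.Y) * (C.X - A.X);
    }

    static bool PointInTriangle(const Vec2& P, const Vec2& A, const Vec2& B, const Vec2& C)
    {
        const float D0 = SignedArea(A, B, P);
        const float D1 = SignedArea(B, C, P);
        const float D2 = SignedArea(C, A, P);
        const bool HasNeg = (D0 < 0.0f) || (D1 < 0.0f) || (D2 < 0.0f);
        const bool HasPos = (D0 > 0.0f) || (D1 > 0.0f) || (D2 > 0.0f);
        return !(HasNeg && HasPos);
    }

    bool InsideClosedSpline(const Vec2& P, const std::vector<Vec2>& Points)
    {
        int Winding = 0;
        for (size_t i = 1; i + 1 < Points.size(); ++i)
        {
            const Vec2& A = Points[0];
            const Vec2& B = Points[i];
            const Vec2& C = Points[i + 1];
            if (PointInTriangle(P, A, B, C))
            {
                // +1 or -1 depending on the triangle's winding (the Two-Sided Sign idea).
                Winding += (SignedArea(A, B, C) > 0.0f) ? 1 : -1;
            }
        }
        return Winding != 0;
    }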
And then that mask starts as the input to the next phase, which is generating
the Distance Field. So that works with an algorithm
called jump flooding, which maybe some of you
are familiar with. But basically the idea
being that you first run an edge detection step
where you specify certain seeds and then the jump
flooding algorithm spreads those seeds out and always finds
the nearest seed for any point and returns what’s called
a Voronoi Diagram, which is basically a map of the closest seed to any given point. And then from that,
it’s pretty simple to extract a Signed Distance Field. And the Signed
in Signed Distance Field basically means it contains
information about both the inside
and outside of the shape. So, notice here for that little
jagged mountain outline that we have bright values
inside the shape where it peaks, which continue outside the shape
for the whole range of the Texture. In that case it’s biased so that
the boundary is actually at 0.5 and then anything 0.5
and above is positive and anything 0.5 and below
is negative, or flipped, depending on your interpretation of whether the inside is positive or negative.
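For anyone who hasn’t run into jump flooding before, here is a compact CPU version of the algorithm; the real brushes run this as Material passes on the GPU, so treat it purely as an illustration:

    // CPU illustration of jump flooding -- the brushes do this in GPU Material
    // passes, but the algorithm is the same. Seed the grid with the mask's edge
    // pixels; afterwards every cell holds its nearest seed (the Voronoi diagram),
    // and the distance to that seed gives the distance field. Running it for the
    // inside and the outside of the mask (or biasing around 0.5) gives the signed version.
    #include <algorithm>
    #include <vector>

    struct Seed { int X = -1; int Y = -1; };   // nearest seed found so far; -1 means none yet

    void JumpFlood(std::vector<Seed>& Grid, int Width, int Height)
    {
        auto Dist2 = [](int X0, int Y0, int X1, int Y1)
        {
            const int DX = X0 - X1, DY = Y0 - Y1;
            return DX * DX + DY * DY;
        };

        for (int Step = std::max(Width, Height) / 2; Step >= 1; Step /= 2)
        {
            std::vector<Seed> Next = Grid;
            for (int Y = 0; Y < Height; ++Y)
            {
                for (int X = 0; X < Width; ++X)
                {
                    for (int OY = -1; OY <= 1; ++OY)
                    {
                        for (int OX = -1; OX <= 1; ++OX)
                        {
                            const int NX = X + OX * Step, NY = Y + OY * Step;
                            if (NX < 0 || NY < 0 || NX >= Width || NY >= Height) { continue; }
                            const Seed& Candidate = Grid[NY * Width + NX];
                            if (Candidate.X < 0) { continue; }   // no seed known there yet
                            Seed& Best = Next[Y * Width + X];
                            if (Best.X < 0 ||
                                Dist2(X, Y, Candidate.X, Candidate.Y) < Dist2(X, Y, Best.X, Best.Y))
                            {
                                Best = Candidate;                // found a closer seed
                            }
                        }
                    }
                }
            }
            Grid = Next;
        }
    }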
So, one of the useful things about this Distance Field is, because it’s a continuous field that covers the entire region
of the Texture here, which also happens to be
the entire Landscape, that means you have
a continuous intersection that continues beyond the point where the brush contacts
the terrain. And that means you can start to
use operators like SmoothMin and SmoothMax
for advanced blending where those brushes
intersect the terrain. And you can separate that
by inner and outer edge as well, which is pretty interesting. And then you can kind of take
that a little bit further. I know going back
to the previous question; the previous keynote
about the blend modes, while the layers themselves do
not have different blend modes, you can pretty easily implement
your own blend mode in a Material
for a custom brush. And it kind of does that. The way that this works
under the hood is you can give it any result and when you pass it back
to the Landscape, it kind of makes it relative to what was underneath it
at that point. So that means
you’re pretty free. You’re just returning an
absolute value of the Landscape at the end of this. So, you can do for example here
alpha blend min or max or just an additive brush. And you can see these all have
pretty different effects on how it deforms the terrain. And in addition, there’s the
option to cap the shape or not. And that goes back
to the signed part of the signed Distance Field. The capped shape basically says,
we just write a zero or a 0.5 for the whole middle
of the shape. We don’t let the field
continue in and that just automatically caps
it, which is interesting. So, I mentioned you can do more
Texture-based effects with the Materials
and you can pretty much do whatever you could imagine
in the Material Editor and make it as expensive
as you want. So, here’s an example where
we start with the raw shape which you can see is triangulated
and pretty low poly. And then we add some Curl Noise, which, starting
with some large distortion into smaller distortion, gives us some more natural
earth-like features almost. And then the Distance Field
blending is the lower left there,
the edge smoothing, which gives it
a nice smooth transition down into the terrain. And then displacement
is basically just adding a Texture displacement result, like that example
of Material slide. And then terracing is an example of just creating some terraces
with the brush as well, they’re all things
that you can do. And these are effects
that are included with the Landmass brush as well. So, the other cool thing about
having brushes like this inside the layer stack is that you now have a list
of brushes in the Landscape tool and you can just select and drag
and drop them around to reorder the priority.
And that means if you have, this is an example
of two brushes, well, actually
it’s three brushes. We have one Material
only brush doing the noise, we have one brush for the canyon and then one brush
for the mountain. The only difference between
the canyon and mountain brush is the cap option;
the cap shape, and then changing the priority, you can see either
the mountain’s on top or the canyon’s on top,
but it makes it pretty fun and interesting
to try different combinations. Just, messing around
with gigantic shapes. Some of the experimental risks
with the Landmass and the custom brush system
right now is that performance
is not quite where we want yet. That’s in large part due to
the fact that each brush currently just renders
the entire Landscape and returns
the whole Landscape result. We have some early tests
of optimizing that down to just affected regions, but they’re still
a little bit too error prone to make the default behavior. And then another issue
that we’ve learned from actually using these tools
in-house on some of our game projects,
is the map dirtiness issue, which currently could cause you to get prompted
to check out levels, which you didn’t actually modify
just due to really subtle precision
changes in a few pixels. Most of the time it’s ignorable
and we can deal with it, but we hope to get
that fixed soon as well. Going to go actually and jump
ahead into the Editor real quick and demo some of that stuff. Here we have just kind of
a pretty blank level that has a Landscape. And it’s been set to use
the new Layer System, which we can find
when we click on the Landscape and just go down
to its Details Panel that has enable
Layer System checked, which is just a box when you
make a new Landscape as well. So, I’ll just go ahead —
So, if we do New Landscape, we can see we have Enable
Layer Support there, which already went ahead
and created one here. So, what does this look like? You can see
we have our Landscape. Most things behave as normal. We still have our Manage,
Sculpt, and Paint. But we also have
this new little layers area which shows our layer one and what are the targets
of the layer. And that includes,
so we go to paint mode, and we have all these targets
we can paint in our rocks and our scree and our grass and whatever
the Material has set up. We could go down here
and create a new layer. And now we have two layers. So I could kind of do a really quick version of creating that little example from before. And this is what I’m talking about when I talk about scaling edits: this isn’t usually the type of edit you want to do with a hand brush
if you’re making a continent. But just for the sake
of demonstration here, let’s say we have two
different brushes here now. We have one brush
affecting our continent and then one brush
affecting some little edits that we did afterwards. And notice, we can come down
here and set that to 0.1, slowly fade it in,
pretty much whatever you want. You can also reorder them, which in this case
isn’t really doing much because they’re just additive. But it will affect in some cases
depending on opacity and things. I’m going to go ahead and delete
one of the layers again. Oh yeah, also to show that
when you go over to paint and you have these two layers,
I mentioned before, but you actually have
kind of a unique copy of each of these layers
to paint in. So, I could come down here
and just paint in rocks. And that’s just in this layer.
So, if I hide that layer, now it’s gone, and likewise
I can just paint in this layer. And then you can go in
and erase them as well. And there’s also
subtractive blend mode, which can be interesting. If you want to have a layer
that’s higher up dedicated to just erasing some things
that were lower down, that’s also an option.
Alright, I’m going to go ahead and simplify
this down a little bit and talk about
the Blueprint Custom Brushes. Let me clear some of these
effects that we did real quick. There’s also some different
options for clearing now: when you right click a layer, you can clear all sculpt layers or all paint layers, which is useful for when you’re
just duplicating things around and you’re not sure
how you want to use them. The next thing to show
is the Blueprint Custom Brush. So, we have this new mode here
in both sculpt and paint; Blueprint Custom Brushes. And we also have
this little custom brush layer stack down here. So how that works is,
there’s the Landmass plugin which I’ve enabled to here,
which you’ve got to do, Show Engine Plugin
and Show Plugin Content. Then that’ll show up down
kind of under Engine. Under Landmass
Blueprint Brushes, there’s some simple
Blueprint examples in here. We’re going to use one that’s
a little bit more fully featured to show what can be done. This is the
CustomBrush_Landmass. And you can also
make children of it that have custom overrides.
So, I’m going to go ahead and drag in this
Landmass brush canyon, and notice it added it here
to the layer brush stack. It doesn’t look like it did
anything just because it’s coincidentally
sitting at the right Z plane, I hope, if it works. So, you can see
dragging it down, it’s affecting the Landscape and it’s kind of using
the shape of the spline. We can come in here
and start editing it or alt-dragging to modify it. The way to know
when you alt-drag, which part of the spline
the new points going to go to is if there’s
a highlighted orange line, the alt-dragged point
is going to go to that side unless you’re at
the very end spline and it’s going to make
a new point. So that’s kind of just how you
define the shape of the spline. And then getting into
how you modify it, that’s still done through
the Details Panel of the actual Brush Actor. So right now, we have this
kind of settings list, which is under Brush Settings. The type is basically
you can change to also use things
like spline Meshes or single Static Meshes
or Material only to bypass all the functionality of
the Distance Field generation. There’s different alpha blend, so min
wouldn’t change anything here because we’re only lowering max. I want to edit the actual brush. Max is only raising up
and additive won’t look like
it’s changing anything here because there’s nothing to see
from the base layer but it’s effectively
just additive. And then the main things that
you might want to change under here are the falloff. The rest is pretty
much overrides for when you have the Material
or the Mesh or the spline as well as how
you communicate to paint layers. So we’re going to go ahead
and show falloff, which is currently set to angle and that’s why when
I lower this down, the size grows as it needs to
to maintain that current angle, which I can kind of lower
that down to almost flat or make it really sharp, which you don’t really want to do (90 degree cliffs in Landscape); you want to give the system
something to work with. And then you of course
you can offset these shapes to shrink or expand them. Or you don’t have to use angle.
You can also use width based, which is different because now it’s not expanding
the intersection of the object. So, we can just, really quickly
going to make this look a little bit more interesting
since I’m using more of our time than I used in my rehearsals
apparently, which is okay. We’ve set this brush up. It kind of acts as like a vague
start of a canyon. We’re going to go down
under Effects to see some of those Material effects
that we talked about earlier. So, the interesting one here
is Curl Noise. And right now, there are two
octaves of Curl Noise exposed. So, we have kind of
a large octave and we can kind of slowly turn this thing up from zero,
kind of creeping in. And that’s going to —
it applies this distortion to the beginning of the
Distance Field generation stage. So, it kind of creates these
almost eroded looking slopes and then adding in some more
detailed values, for example. You can add
a little more detail, tweak how you want
this thing to blend. I’m going to go ahead
and set it back to angle because I think angle works
a little bit better. Go for a shallower angle.
There we go. So now we’ve got kind of a nice
little start for a canyon and then we can select it and go down
to other things like, we could add a bunch of Texture
displacement if we wanted to. We could do this smooth blending
operations I mentioned before, such as if we wanted
to softly blend on the inside or maybe a little bit more
of a crisp blend on the outside, but still there. So that’s basically a quick
overview of the height stamping ability of this. And of course, these are just
regular Actors and as long as you are inside
of the Landscape tool, you can duplicate them
and scale them and do whatever just like you
could with regular Actors. So, I just press control C
control V to duplicate this object. And now, notice I’m just kind of
moving it around like I would any other object and able to change
how it affects the Landscape. And I can then come in and modify
some of the blend modes. Like maybe we don’t want to cap
the shape on this one. Maybe we want this one
to be more of a mountaintop. So, I can kind of place it and move it around. And this’ll help demonstrate
about alpha blending is that it’s kind of
a combination min and max mode and that you’ll see it’s kind of
lowering the terrain up until it meets
the intersection point and it’s raising the terrain up
until the intersection point. So, it’s not truly like
an alpha blend mode when it’s set to angle,
but when it’s set to width mode, then it really does behave
like a traditional alpha blend. So, going back to angle mode
and uncapping the shape, you can go over here to our
layer stack and reorder this. So, if we put the canyon below or the mountain
below the canyon, now it’s kind of like the canyon
is going to cut out the mountain wherever we put this thing. Which we may want
or we may not want. And notice it’ll cut out as high
as it needs to go to maintain that angle, which sometimes you want
to use width mode to avoid that. So, for weight maps,
going to go down here real quick,
we only got seven minutes. We have some regular
paint layers down on Landscapes
such as rocks and scree. So, all we really have to do
is select our brush and add a paint layer
to this paint layers array. I’m going to say rocks,
it showed up and then it went away.
Hang on. Here we go. So, this is using the same
Distance Field blending. So, you have control to change
the width of this blend. You can set it to be pretty soft
and a big blend if you want. You can set it
to use Texture breakup to break up the influence.
And that’s all controllable by the Distance
Field Gradient as well. So, what you’re really doing
is kind of modulating that intersection
but say we probably don’t want that much texturing at this point. What we really want to do
is put rocks on the cliffs. We can go ahead and flip
the sign of this width and now it’s going
to go up on top and we can also restrict it
to not continue going. Say we want to give it a limited range: we can kind of find what value we need and then type that in. And these are also not limited
to single layer effects. We can go ahead
and add another one down here. We had scree if you remember.
So, I’m going to go ahead and add scree but of course
it did the same thing. It put it at the very bottom. We might want to use
edge offsetting and we might need to move it up
a little bit more. So, I think what’s happening
here is our rock is actually covering up
the scree when we shouldn’t be doing that.
So, we could actually solve that by shifting the rock up the hill
a little bit. And now let’s say
we want to take the scree and we’re going to
flip it as well. But then we might need to flip
what our bias is and then we can apply
a little bit of Texture to that with the same breakup effect, we might need
to give this gradient a little bit more width
to play with, because you can only modulate within the width of the gradient
that you set. And you can also play
with shifting around the effect by changing the mid-point
of that displacement Texture, which can be pretty interesting. I’ve kind of shifted it
a little bit too far. There we go. And then maybe
we need to adjust our rock shift a little bit as well so that
we’re not covering up all of it. We tighten that up
and move it down. So then, we have nice
little scree piles just at the bottom
of each of these mountains by playing with some of these
Distance Field settings there. Alright,
I’m going to jump over to some other things
again real quick and then we’ll jump back to
the Editor here if we have time. So next we’re talking about
Procedural Foliage improvements. We’ve had Procedural Foliage
tools in the Engine for a while and they’ve been able to read
Landscape in some sense. Previously, they could do what’s called inclusion layers. So, say you had a grass layer
that you wanted to always spawn these tree stumps on
or these moss pieces, but you also had this rock pebble layer where you didn’t want to do that. Right now, what you would kind of have to do is also erase out the grass where this rock is to avoid that. And we added exclusion layers, which means you can add a list of layers
to not spawn Foliage on. And what that really does
is makes a computed-just-in-time mask
for that Foliage so you can have more control over how it’s spawned. And this is on the Foliage Type Actor, which is shown there at the bottom left, along with how to create that from the Content Browser, under right-click, Foliage. And then this is also making
an Actor Foliage as well. So the Foliage tool can also paint or place Blueprints and Actors, just like regular Foliage,
which has a lot of advantages for basically scripting and
doing gameplay in Open Worlds because you can place
a lot of trees and those trees can now have
coconuts or scripted beehives or a bunch of gameplay
and effects attached. And it’s a nice, more flexible way to do that. The next thing I want to talk
about is with rendering. It’s Virtual Texturing. So, there’s two main Virtual
Texturing systems in UE4. There’s the Streaming VT
and the Runtime VT. We’re mostly going to talk
about the Runtime VT right now. But just to help with a quick
rundown of the difference. The main differences, the Streaming VT
is streamed from disk and its main idea
is to save memory but it costs some performance. The Runtime VT is generated
by the GPU on demand and in general, it costs memory but it can save
some performance. And that’s where we get a lot
of interesting blending effects in different features
we can do inside of Materials, is from the Runtime VT. And with the Streaming VT,
you’re really still dealing with individual
Textures and Materials. In fact, nothing should really
change much from a user perspective other than you check the VT
check box on the Textures, and then Materials, you have
Virtual Texture sampler types. For Runtime VT, it’s a little
bit more complicated because you have separate
Textures and Materials all contributing to one
combined Virtual Texture and then that Virtual Texture
is updated on demand. So that makes it more well
suited for complicated blends where you have lots of decals
and objects and terrain layers all making one really expensive
mega shader because you’re not always
rendering it unless you need to. So, for Streaming VT, the idea is that if you have
a really large Texture that’s high resolution
across a large area, you probably don’t need
to have that in the same resolution
everywhere. So that allows kind of
breaking up Textures into grids and only streaming it
in high detail where you actually need it, close to the camera. And shown there on the left
is a grid with a color for which level of detail
the Texture is using and you can see it
cascades back. With Runtime VT you can
do things like have objects combined into one single Texture and then the VT can be sampled
by other Materials, even those that are contributing
to the VT themselves. So, you can have roads
that are writing to the VT and then displaying the VT so that you don’t have
another shader to deal with, but you still get the Mesh’s ability to change the silhouette and make sure collision matches
and everything like that. And then as before, it’s ideal for expensive
multilayered Materials. You can do really cool things
with virtual texturing such
as automatically blend static Meshes
into the terrain seamlessly, which if you’ve ever tried
to use Photogrammetry Assets or rock piles with skirts, it can be frustrating because
you start placing these things and it covers up
the look of your terrain and it starts to look really
obvious from a bird’s eye view. These sort of things really help
you get the most out of these assets and make sure they always
feel like they’re making sense and they’re being used properly. Even if you really are
just scattering the same Mesh. So, what this looks like
on the content side is, there’s two different nodes
that are important. The first one is Runtime
Virtual Texture Output, which has to go
inside of the Material that is rendering into the VT and it kind of just defines
what you want to go there. And usually,
you just use Material attributes in your layers
and you pipe some stuff in. And then for sampling
the runtime Virtual Texture, it’s pretty much the same thing. There’s a Sample node,
you place it in the Material and then you sample it like
you would some standard layered Material
attributes layer. And for the height blend
that was previously shown, that was done using
an intersection blend using the Runtime
Virtual Texture World Height. And you can see we have a world
height value on the Sample node down there, although that uses an actual
separate Virtual Texture and a map.
And that’s done here. What we do is we take
the world position of the object that we’re rendering,
like the rock, and we subtract
from that the world position of the Virtual Texture
and then biasing and dividing. And that gives us
a nice gradient that starts at the point
of intersection, and we can blend it
to be whatever, modulate it with Textures like we were previously
showing a little bit.
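In shader terms, that gradient is basically the following; the parameter names here are made up just to show the math:

    // The gist of the height-based blend, with hypothetical parameter names.
    // VirtualTextureHeight is the world height sampled from the Runtime Virtual
    // Texture; PixelWorldZ is the world position of the rock's pixel being shaded.
    #include <algorithm>

    float HeightBlendAlpha(float PixelWorldZ, float VirtualTextureHeight,
                           float BlendBias, float BlendDistance)
    {
        // Subtract the terrain height from the pixel height, then bias and divide:
        // 0 right at the intersection, ramping to 1 over BlendDistance units up.
        const float Gradient = (PixelWorldZ - VirtualTextureHeight + BlendBias) / BlendDistance;
        return std::clamp(Gradient, 0.0f, 1.0f);
    }

    // The result can then be modulated with a noise Texture before it drives the
    // lerp between the rock's own Material and the sampled Virtual Texture.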
So, here’s an example of a scene based on what we were looking at before, with some little scree piles and rock piles and cliffs
that are all blending together with Virtual Texturing on both
the Static Meshes and the Landscape. So, another thing to talk about
is the new Sky Atmosphere. This new model is an accurate
multi-bounce Rayleigh Scattering Model, and it’s a great way to bring
realism and scale to scenes. Some of the most exciting things
about this new system are first that it scales from
really high-end PC to mobile. And we’re also starting to use
this in Fortnite pretty soon. It supports a lot
of advanced features such as planetary views, meaning
you can see those tiny planets that are literally made by
making your planet radius small in the fog settings
or the sky settings and then flying the
camera way back. You can do dual sun and moon for
space games and sci-fi. And most importantly
for content creators, a lot of view parameters and rendering features for the sky are set up with parameters
exposed for artistic tweaking. So when you use a Sky Atmosphere
in your scene and you place in a Skylight and you set it to capture
from the scene, you actually now get proper
sun/sky luminance and color, which is great because
it allows you to tweak the scene and get instant feedback, especially once you create
a Blueprint that forces the Skylight
to dynamically recapture as you’re editing things. That can be
a really important way to see a full dynamic response
from your scene. And this also does tie
to Height Fog, which was another question
from the last talk. Currently, if you have just
a regular Height Fog, what you want to do is turn off
all the directional and ambient scattering options and the sky will start to drive
the scattering values and it’ll just match the horizon of the atmosphere sky
automatically. You can use it
with Volumetric Fog, but the Volumetric system
is not fully linked with the colors
and things like that. So, you’ll break some of
the color for the Skylight because it’ll be rendered
as unshadowed into the Skylight at the moment.
But if you turn it on and keep it at kind of
like a small value, it can work without
too many problems, but that’ll hopefully
be updated soon. One of the really cool things
about the atmosphere sky stuff is that there’s a bunch
of new Material nodes that were added to help aid
in the creation of custom Skydome Materials.
So, we see here, these are basically
the different main components that are used inside
the actual rendering of the sky. Such as, what is the luminance
from the sun at some point in the sky or what is the luminance
from the distance scattering, which is all the blue Rayleigh bounce that’s coming
to a certain point, the sun disc
and aerial perspective. You don’t have to use
all of those. You can just use a couple and
make a quick attempt at color matching the sky for example. Which this slide
is an example of. This is actually
just the default Unreal Engine Skydome Texture that was updated
to use these colors and kind of a fake
lighting effect based off just shifting the Texture around
based on the light vector. But you can get
a pretty convincing and well-matched result
without a lot of work. This was just a kind
of a quick prototype to test that it’s working. And then you’re not also
not limited to using that node on simple
just Texture-based skies. So, unlike the last one, which was just
a regular Skydome Texture, this is applying those same values to a ray marched atmosphere, kind of using a custom ray march effect, which was an old prototype.
It’s really slow and it’s not been corrected
or tweaked for physical accuracy or anything close to that,
but it’s still pretty satisfying to see that you can just take
some custom ray march shader, plug in these
and nice lighting values and suddenly
you’ve got something that works pretty well. And it seems like
I might have time to do a quick live demo of that,
which is great. I’m going to go ahead and leave
that slide up real quick. Going to go ahead
and load a different map here. Okay, so I mentioned before, a Blueprint
to recapture the sky. We have one of those in here, which is just
a simple Blueprint. It has a reference to a Skylight
and really on the event graph, it’s calling this
Recapture Sky function, which allows us to do things
like we can maximize the screen and go into G mode. And then another really
cool feature that was added
by Sebastian Hillaire who worked on a lot of this Sky
Atmosphere stuff is, you can now hold control L
in the Editor and you have
a nice little gizmo there where you can
rotate the sun around. How it works is that left
and right movement is the horizontal rotation
around the Z axis. And then up and down movement is kind of
like how high it is in the sky. So, it makes it pretty easy whenever you’re
in a certain view, you don’t have to like bounce
back and forth to the Details Panel or try to find the rotation
gizmo or any of that. You’re just kind of like, how do I find the best lighting
response for this scene? Including, going from
complete blackness to sunrise, you get that nice
really bright red line from the Rayleigh scattering.
So jumping in real quick: we have kind of
an invisible Material applied
to the Skydome right now, but we could apply that
2D Texture based sky real quick, which once it’s in motion, we might be
a little bit more clear that it’s a Texture
based offset effect, but it still hopefully conveys the point of why you would want to tie a simple fake Skydome into your atmosphere, so that at least you don’t
have jarring floating sky cloud polygons on top of the fog
or something like that. You want something reasonable.
And then for the other clouds, we have that Volumetric example
here as well. These are a little bit more
heavy on the performance side, but you can see these are
actually fully Volumetric now. So, the way that
the shadow moves is a little bit more accurate and we can actually
kind of fly around, although the fog sorting
hasn’t been implemented yet to work inside of those. And these are just
a prototype example basically showing
what’s possible. But you can kind of change the
Volume dynamically and change the altitude of the clouds,
change the tiling of the clouds, and when you add a zero in there, there’s kind of a detail layer which is breaking things up, and you want to keep these detail layers correlated to each other with similar values. But
you can kind of hone in on some different sky
settings pretty quickly. You could have, what if I wanted to have
a completely clouded sky? You’re still benefiting
from the physically based sky because now you have
that nice red sunrise and everything like that. And this was just kind of
a quick prototype again. So, imagine what you can do
once you really sit down and calibrate everything and make it all physically
based and correct. So, jumping back over here. In summary, improved
streaming management and level editing tools
are on the horizon. And that includes things like
Grid Systems and File Per Actor. New ways to edit
Landscapes and worlds, we’re exploring those now. And that’s the Layer System,
Blueprint Brushes, Procedural Foliage improvements
and all those types of things. And a lot of the new
rendering tech that benefits all of this is
already here to play with now. And that includes things
like the sky, Virtual Texturing and a lot of the other
rendering features that Nick mentioned
in the keynote, like the Screen Space GI
even really helps rocks pop, although we don’t have examples
of that in this talk, unfortunately. So, I want to give a big thanks
to our Open World teams, our tools teams, rendering,
and tech art teams back at Epic who helped contribute
to so many of these features. And I also want to thank
all you guys for attending and your interest
in the Unreal Engine. So, thank you guys.
That’s it for me.
