Requesting Critiques on 3D Models: Dev Journal


DrGravitas

2 years of progress posts... gone. Oh well! 

We're Back!

[Three character thumbnails: Rederick, Gravitas, Blythe]

I am a 3D hobbyist looking for thoughtful critiques, suggestions, and learning resources!

My focus is generally on a base model that has grown to the trio of fox-like anthropomorphs above! Gravitas was the beginning and Rederick brought the retopology where the majority of improvements were made. Blythe is the current focus, adjusting the model for female features while incorporating further improvements. I use Maya 2015 and render with its Mental Ray plugin.

The old FAF thread was revolutionary for me, inspiring 5 pages' worth of improvement, including a complete model retopology. As I said in the old thread, good advice is like gold: hard to find, but totally worth it! (Oh, and sometimes they're both really hard to bear.) So of course I'm going to start a new one! Onward we march in progress! :D

 

Picking up around where I left off: a demonstration of the new PyMel script (which I've made available here), used to create a render-able view of the model's skeleton! Coupled with renderable wireframes, it makes for a fairly good overview of the model thus far:

http://www.furaffinity.net/full/17638414/ (nsfw for exposed nipples)

Looking forward, I've decided to take Blythe beyond topology work. Based on feedback from the last poster of the FAF thread, I'm going to have another go at fur effects and decide how to proceed with them. But all the fur tech (except XGen, sort of) requires good UVs, so first I need to rebuild those. From there, I think the first thing I've got to work out is whether to keep the tail's geometry big and fluffy as it is now and simply place a layer of fur over it, or reduce it to a thin whip-like shape and use fur to build the volume in a realistic fashion.

 

As always: Comments, suggestions, and critiques are highly appreciated!


Glad to hear you managed to read my last comment about fur before the forum change happened. Yeah, implementing fur in an effective way surely won't be easy, but it'll be exciting to see how it goes. If anything, making sure your UVs are good is something you'll have use for either way. As far as the tail goes, I'd say just try it out and see which works better/looks better. However, do consider whether you intend to use it for images or actual animations, as how it's perceived will surely depend on the amount of fur used.

It'll be interesting to see how you'll continue on from here.


2 years of progress posts... gone. Oh well!

Well, not entirely gone ....

Courtesy of the Internet Archive's Wayback Machine

I was following your progress in the old thread.  I'd strongly recommend that you [File menu --> Save] this before it disappears from the Archive, if you want to keep a permanent record.


Well, not entirely gone ....

Courtesy of the Internet Archive's Wayback Machine

I was following your progress in the old thread.  I'd strongly recommend that you [File menu --> Save] this before it disappears from the Archive, if you want to keep a permanent record.

Aaaactually, I was just being dramatic. I save my wall-o-text posts all the time (for offline reference), so I lost squat. :P

But thanks for being thoughtful!


You're welcome.  And at the very least, new visitors to the thread can use this to catch up on what has gone before.

 

 

From there, I think the first thing I've got to work out is whether to keep the tail's geometry big and fluffy as it is now and simply place a layer of fur over it, or reduce it to a thin whip-like shape and use fur to build the volume in a realistic fashion.


I'd recommend the latter option, myself, especially if you'll be placing fur on it anyway.  I'd tried "big-and-fluffy" tail geometry with fur on Krystal, Furrette, and Renamon, and the results weren't consistently pleasing.

I've decided to use the "whip" approach on Furrette 3, with "big-and-fluffy" morphs as a fallback option, although this will require the creation of multiple dynamic-fur presets (length, styling, etc.) to accommodate different tail types.


Before I start, I wanted to go a little more into how this thread will handle images. Some of the images may be NSFW.

There will be 4 categories, with varying alterations made to their presentation:

  1. Clean / "Barbie-doll anatomy" - Simple thumbnails (seen later in this post)
  2. Artistic Nudity - These will look like their respective images, but with censor bars.
  3. Mature Content - NSFW level 1, replacement thumbnail links to images which are suggestive of sexual activity and may contain nudity.
  4. Adults Only - NSFW level 2, replacement thumbnail links to images which explicitly contain sexual activity.

[Image: skeleton render (Category 2 - links to uncensored image)]

[Thumbnail: 'Mature Content' example (Category 3 - links to NSFW content)]     [Thumbnail: 'Adults Only' example (Category 4 - links to NSFW content)]

Hopefully that's clear for everybody! Oh, and by the way, those last two links are live; they really do link to my first experiments in sexual content! (Well, the first to see the light of day, anyways.) Any feedback or critiques would be appreciated!

 

You're welcome.  And at the very least, new visitors to the thread can use this to catch up on what has gone before.

 

I'd recommend the latter option, myself, especially if you'll be placing fur on it anyway.  I'd tried "big-and-fluffy" tail geometry with fur on Krystal, Furrette, and Renamon, and the results weren't consistently pleasing.

I've decided to use the "whip" approach on Furrette 3, with "big-and-fluffy" morphs as a fallback option, although this will require the creation of multiple dynamic-fur presets (length, styling, etc.) to accommodate different tail types.

Thanks for the input! I definitely think I'll have a lot of different fur presets to work out. I've been thinking that I might try mixing and matching multiple sets of fur to blend things together. One of my biggest issues with the fur I've done so far is that it doesn't feel like it fits the model right; it always feels like something tacked on. If there's one goal I have for the fur, whether it ends up being fur, nHair, XGen, polygons, something else, or a mix of things, it's to make it mesh well with the rest of the model.

 

But, I didn't get a chance to work on fur this week. Instead, I set up my UVs and then invested a ton of time into an insane scheme to make producing the same UVs on slightly altered topology simpler. I succeeded! To a degree, anyways.

[Image: UV layout scrap (hey, a Category 1 example!)]

The scrap above details the new UVs. But the new script is where the real work was. So how about a bit of an informal tutorial on PyMel and space partitioning algorithms while I go over the scripts?

 

PyMel is Maya's Python-based wrapper for MEL, its dedicated scripting language. It has virtually every feature of MEL (and even conventions for accessing Maya's API for plug-ins), but in Python style (e.g. it doesn't feel like a bad, hackneyed UNIX command-line scripting language, like MEL does).

For my purposes this time, I wanted to be able to quickly and easily recreate UVs between model changes. Because my workflow involves polygon mirroring, I can't rely on vertex/edge/face IDs to be the same every time. Technically, I could get around this by mirroring and then doing all my changes from there, but I like my workflow, and it would still be susceptible to topological changes. So, I devised a script that chooses edges based on the positions of their vertices in world space and how close they are to a stored set of positions.

 

First, I created a script for exporting a selected set of edges. Because I was rushing and lazy, I stored the data as a multi-dimensional list rather than a proper class, but, I did say informal tutorial! I won't go over all of it, but if it interests you, here is the relevant method that operates over the selection.

import pymel.core as pm

def buildPointsList(selected):
	output = []
	
	for edges in selected:
		# Selection may include lists of edges. Maya is funny like that.
		for edge in edges:
			# Type check: if any selected node is not actually an edge, throw an error.
			if not isinstance(edge, pm.MeshEdge):
				pm.error(str(edge) + ' is Type: ' + type(edge).__name__ + '. Not the required Type: ' + pm.MeshEdge.__name__)
			
			# Identify the edge by its index and length.
			edgeId = [ edge.index(), edge.getLength() ]
			
			# Record each connected vertex's index and world-space position.
			vertDef = []
			for vertex in edge.connectedVertices():
				vertDef.append( [ vertex.index(), vertex.getPosition(space='world') ] )
			
			output.append( [ edgeId, vertDef ] )
	
	return output

Simple, no? Iterate over every object selected, and then over each object as a list. Then, check whether it's an edge (throwing an error if not) and set up the data to be appended. The hard part turned out to be how to actually export it. I ended up resorting to Python's pickling protocols. I am not totally happy with the results, so I'm not going to go over that part. PyMel has its own capabilities for file IO that relate to the Maya application, too. In fact, just about everything you can do in Maya manually can be done by code with PyMel (or MEL, for that matter)!
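If you're curious, the pickling side might look something like this minimal sketch (file path illustrative; note that PyMel points may need flattening to plain tuples before they'll pickle cleanly):

import pickle

def exportPointsList(pointsList, filePath='uvBorderEdges.pkl'):
	# 'wb' because pickle writes a binary stream.
	with open(filePath, 'wb') as f:
		pickle.dump(pointsList, f)

def importPointsList(filePath='uvBorderEdges.pkl'):
	with open(filePath, 'rb') as f:
		return pickle.load(f)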

 

Once I had the stored file of UV border edges, a script was devised to read it (via unpickling) and then operate on a selected mesh to identify the closest matching edges, by looking for the vertices closest to the point definitions of each edge's vertices. One of the best (and sometimes worst) things about Python is its dynamic typing. It allows for some very powerful and flexible code. For example, the def I developed for identifying the nearest vertex to an "ideal" vertex position is written in such a way that it could be used for any number of different things: edges, vectors, vertices, points, even faces. I can even alter how I choose to compare these things, so if I want something more nuanced than "closest in position" I merely pass in a different method name parameter!

def findBestMatch(ideal, eligible, comparisonMethod):
	'''
	Given a list of eligible targets to select from and the ideal target, apply the given
	comparison method and find the eligible target most comparable (ex. closest to,
	most similar length, etc.) to the ideal.
	'''
	best = eligible[0]
	minDifference = comparisonMethod( ideal, best )
	
	for currentTarget in eligible[1:]:
		# An exact match can't be beaten; stop early.
		if minDifference == 0:
			break
		
		difference = comparisonMethod( ideal, currentTarget )
		if minDifference > difference:
			best = currentTarget
			minDifference = difference
	
	return best

def comparePreComputedVectorDistance(vector, meshVertDef):
	'''
	meshVertDef is a list consisting of a vector and its MeshVertex.
	'''
	return vector.distanceTo( meshVertDef[0] )

This version of the method has a check to return exact matches early, to speed up finding the best match. The comparePreComputedVectorDistance def is what gets passed to findBestMatch as the comparisonMethod parameter. As you can see, findBestMatch isn't actually working on points or lists of vertices; it's being passed a vector and a multi-dimensional array! It doesn't care what they are; it's all the same to it. The method call it makes is what cares about them and performs the actual distance calculation, but the comparison could be any number of other numerical measures.
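As a quick, hypothetical usage example (meshVertDefs here stands in for one of the [vector, vertex] lists built by the partitioning script below):

import pymel.core as pm

# Find the mesh vertex closest to a stored "ideal" world-space position.
idealPoint = pm.datatypes.Vector(1.0, 2.5, 0.0)

bestMatch = findBestMatch(idealPoint, meshVertDefs, comparePreComputedVectorDistance)
pm.select(bestMatch[1])  # bestMatch is [vector, MeshVertex]; select the vertex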

You can see it running in a special visualization I did up by inserting some selects and forced refreshes into the script. Some frames were clipped for time, but otherwise this is real time. It is quite a bit slower when it has to display all this, though.

[Animated .gif: the UV-recreation script running with selection visualization]

 

So why this list of "meshVertDef" lists? Well, it turns out that going over every single vertex, creating a vector for it, and calculating its distance from a point (for 1,400+ points!) is quite slow. In fact, it takes 1 hour and 30 minutes for my low-poly base model (which has ~15,000 vertices)! So, I had to come up with a way to speed it up! I created a very crude method of organizing vertices so I didn't have to look at all of them.

When you create a vector from a vertex, it basically describes a line from the origin (0,0,0) to that vertex's point in space. The length of that vector is how far from the center of the world that vertex is. By rounding this to a whole number, I divide up the mesh vertices into spherical layers. You can see them at the start of the .gif above, and a bit easier in the .gif below.

[Animated .gif: the spherical vertex layers, selected one at a time]

So, I'm organizing the vertices into a dictionary where the key is the whole-number vector length and the value is a list of all those vertices whose vector length rounds to that whole number! Since I have to iterate over every vertex and create a vector for it anyway, it makes sense to store that vector so I can use it again in the comparePreComputedVectorDistance method without recreating it. Hence, instead of being just a list of vertices, each value is a list of lists of a vertex and its corresponding vector (e.g. a list of those meshVertDefs).

This is that space partitioning algorithm!

def constructVertexMap(target):
	'''
	Construct a layer map of the vertices of the target (a mesh's shape node),
	grouped by the rounded vector length of each vertex.
	'''
	# rounded (or relative) Vector-length dictionary.
	rVLdict = {}
	
	for vertex in target.verts:
		vector = pm.datatypes.Vector(vertex.getPosition(space='world'))
		rVL = round(vector.length())
		
		if rVL in rVLdict:
			# Existing layer: bump its vertex count and append the new entry.
			rvLLevelDetail = rVLdict[rVL]
			rvLLevelDetail[0] = rvLLevelDetail[0] + 1
			rvLLevelDetail[1].append([vector,vertex])
		else:
			# New layer: a count of 1 and a list holding the first meshVertDef.
			rvLLevelDetail = [1,[[vector,vertex]]]
			rVLdict[rVL] = rvLLevelDetail
	
	return rVLdict
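A lookup against this map might then go something like the sketch below; checking the neighboring shells is my nod to the border problem I'll mention in a moment:

def candidatesForPoint(idealVector, rVLdict):
	# The shell the ideal point itself rounds into.
	rVL = round(idealVector.length())
	candidates = []
	
	# Also check the two neighboring shells, since a matching vertex can
	# round into an adjacent layer when it sits near a shell boundary.
	for layer in (rVL - 1, rVL, rVL + 1):
		if layer in rVLdict:
			candidates.extend(rVLdict[layer][1])  # [1] is the meshVertDef list
	
	return candidates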

I've taken to calling the layer spheres "furballs" because, at one point, I had planned to put vertices that met a certain threshold into multiple groups, to prevent problems where verts don't show up in precisely the same layer as their ideal point, thus making the spheres' borders fuzzy or furry :P

But I never got around to that, because it turned out not to be a problem and wasn't worth it. Why? Because this space partitioning method is terrible! In no way should you think that I think this is actually a good way to group this stuff. It was done purely because it was extremely quick and very easy to set up. It has tons of problems and doesn't work well for lots of cases, and it will be stripped out in the future as I reuse components of this script in future scripts (there are a whole lot of useful applications for identifying parts of the mesh for automated operations!). A plain old octree, or maybe a binary space partitioning tree, would be vastly better.

But, this works! It turned 1 hour 30 minutes into 2 minutes flat!

I won't be going over the rest of the script (you know, the parts that do the actual job of finding an edge and such), mostly because it still needs a lot of work. While it works fine for exactly the same topology and slightly different shapes, it does not do so well with significant shape differences, like the whip-like tail. While it was able to automatically select most of the edges, I basically had to manually select each of the edges of the tail. But even that was vastly faster than reworking the UVs myself manually, so I consider it a success!

 

So that was the week, I will begin new experiments with fur for the next one. I am also considering creating a blog here on Phoenix, instead of using this thread, but I am undecided. If you have any questions (or hell, if you want the scripts themselves!) or have any suggestions on fur experiments or anything else (even things like color, image composition) I'd love to hear from you!


Okay so the anatomy and everything is okay but the shading is a nightmare. My goodness, fur has no glossy shader. You should really work on it

 

To be honest, I actually like the color shading (I like shiny/latex-y kinds of things), but you're right that it doesn't really look right for fur. Moreover, it definitely doesn't play well with dynamic fur, as that turntable at the top demonstrates. Something else will have to be figured out for dynamic fur. In the meantime, what do you think of something more like the right-side figure in this (3480x960) image?

[Image: three-way shader comparison (3480x960)]

  • On the left is the baseline: a PhongE plugged into the diffuse parameter of a mia_x_passes shader. The mia shader provides the photon material capabilities for use by Mental Ray's Global Illumination and Caustics (although I don't really use caustics much right now). PhongE's whiteness and roughness parameters give it the majority of the glossy/shiny look.

  • The center image replaces the PhongE with a simple Lambert, completely removing the shine. The polygons have the Lambert applied to them directly, so there is no mia_x_passes in its shader network. This also means it doesn't really take advantage of photon mapping. You'll also notice the lighting works differently for it.

  • The right image is more like the left, with the mia_x_passes being the final shader applied to the figure. This allows it to use photon mapping, but also adds back quite a bit of the shine. Clearing up excess gloss won't be as easy as just eliminating the PhongE! Especially if we want to continue taking advantage of Global Illumination.

I'm at a bit of a loss on how to create an appealing look for it that doesn't have bland, uniform coloring. I like the rightmost image and all, but the baseline on the left is just so much more colorful! This is because the whiteness color and specular color parameters each have their own ambient occlusion shaders with additional colors.

Your gallery has some pieces with nice coloring without looking flat. I see some ambient occlusion on most of your models. It looks like you use Blender, but most shader types are based on shared underlying concepts so what sort of shading do you like to use? What renderer do you use?

 


Well, for these basic models I would just use a basic diffuse shader and not even use the glossy shader. If you were using Blender, you could of course create, for example, fur simulations with glossy materials, but for now I would only use glossy if your model is supposed to look like a balloon or whatever.


Poser itself uses a variant of Python.  Unfortunately I've never devoted any time to learning the syntax.  I can usually puzzle out some of the simpler scripts and make a few modifications to better suit my purposes, but something that advanced is beyond me.

Do all of those seams on your UVs cause difficulties when painting complex textures, or does Maya allow you to paint directly onto the model?


Poser itself uses a variant of Python.  Unfortunately I've never devoted any time to learning the syntax.  I can usually puzzle out some of the simpler scripts and make a few modifications to better suit my purposes, but something that advanced is beyond me.

Do all of those seams on your UVs cause difficulties when painting complex textures, or does Maya allow you to paint directly onto the model?

I believe Blender uses Python as well. Definitely worth the time investment once you get going, in my opinion. Honestly, I'm of the mind that everyone (not just 3D modelers or programmers) can benefit greatly from learning to code at least a little bit. It really changes the way you view the world! Python is by far the best language I've seen for non-programmers to get into coding.

I'll likely continue detailing more python scripts in future posts as that script improves and expands into new ones. Although, I'll try to do a better job of noting which lines are specific to Maya next time!

 

I don't believe Maya has on-model painting specifically for textures. But it does have Maya Artisan, which allows you to paint attributes for various functions onto your model, as well as paint weights. It also has some kind of paint-effects thing, but that's quite different and I haven't had the time to investigate it. Maya is very feature-rich! (It should be; they buy just about every company's tech and incorporate it into their blob.) It's seriously insane! Shame it's switching over to subscription-only at the end of this year. My perpetual license will last forever, but I'm not paying full price again for a one-year difference, even if it is their last perpetual license!

Err, where was I? Oh, right! UVs and texturing!

 

I use ZBrush to paint texture maps. Tools in ZBrush do a great job of dealing with the seams for me and on the rare occasion that I don't care to bring it back in, I can usually deal with what remains easily in Photoshop. But, I don't really paint complex color textures like you'd see for a game model. My method is very dynamic, using complex shader networks and simple texture maps. Speaking of which...

 

Since it was lost to the old forum, now might be a good time to detail my (increasingly decrepit, but still beloved) custom shader network! This is how I construct the look of the model! Well, of Rederick and Dr. Gravitas, at least. Up until now Blythe didn't have workable UVs to support it, so she had the shaders applied directly to the polygon-faces.

(More details in these links' descriptions, and more in the hundreds of my other FA scraps.)

[Fig. 1: The Custom Shader Network]     [Fig. 2: Greyscale Texture Maps]     [Fig. 3: AO-Based Coloring]

The Custom Shader Network:

You'd think I'd have come up with a snappy name for this by now... The custom shader network is a series of separate, UV-oriented greyscale texture maps which detail how each shader (detailed below) is applied to the model. It provides shapes for everything from eyebrows to fox socks 'n gloves! The green box encloses most of the ones I use. This image is actually quite old, from when I did Rederick's UVs. At the time there were 6 texture map files with associated nodes, plus two more in the red box; I think I added a couple more later on. At the time I used 8192x8192 texture maps, but I've since downsized to 2048x2048 because the extra resolution wasn't really making a difference.

The heart of the network is the Blend Colors utility node. It blends two colors based on an input parameter. By piping a greyscale texture into a luminance node and piping that into the blender parameter, I can dynamically assign color 1 and color 2 to the light and dark parts of the model, and even blend them together! This is extremely flexible, because I can define very exacting interactions of color and shape without committing to a given color; the color is assigned afterwards. This makes the turnaround time on new looks or experimental colors extremely short. By piping other shaders, like PhongE, into the color 1 and color 2 parameters, I get to utilize the special properties of other shaders, or even dynamic textures, to achieve the effect I want. I don't have to painstakingly repaint textures just because I don't like the color!
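In PyMel terms, the core of that pattern might be sketched like this (node choices and the map path are illustrative, not my exact network):

import pymel.core as pm

blend = pm.shadingNode('blendColors', asUtility=True)
lum = pm.shadingNode('luminance', asUtility=True)
mask = pm.shadingNode('file', asTexture=True)
mask.fileTextureName.set('maps/eyebrowMask.png')  # a greyscale map (illustrative path)

# Greyscale map -> luminance -> blender: the map's light/dark areas now
# decide how much of color1 vs. color2 appears on the model.
mask.outColor >> lum.value
lum.outValue >> blend.blender

# Anything can drive the color slots: flat colors, PhongE shaders, more blends.
blend.color1.set([1.0, 0.5, 0.0])
blend.color2.set([0.1, 0.1, 0.1])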

The custom shader network is designed to be extensible. Because I can plug just about anything into the color parameters of the blend utility, I can just keep plugging in more and more blend utilities. I can stack them on top of each other, basically layering the shaders. This is actually an awful idea: there are layering nodes that are much more efficient for this. MILA nodes are a new layering capability specific to Mental Ray that are also perfect for it. But I've never managed to get the hang of using them. This works and is easy to deal with, so I put up with the performance hit until I take the time to redesign the whole thing.

The great part is, no matter the topology or shape change, all I'd have to replace is the greyscale maps (which are simpler to make than painting a complete texture), plug them in, and poof! It all works for the new model and shares the same stylized look! I could take the same network used for Rederick, replace the maps, and apply it right to Blythe. Fig. 2 is an (outdated) example of what the maps for Rederick look like.

 

Ambient Occlusion Coloring:

This is the pattern of shader that gives the model its specific colors! Fig. 3 highlights the way this affects the model, by assigning magenta to each of the parts. Check its description for more info. It consists of a PhongE with ambient occlusion nodes plugged into its diffuse, whiteness, and specular colors. AO shaders take 2 colors to specify what the light and dark areas should have applied. Tinkering with these colors, the spread of the AO, and a number of other things allows me to subtly paint and interweave colors dynamically. A very colorful result! The AO shader plugged into the PhongE shader's diffuse color has a second AO shader plugged into its light color, for additional variation in color.
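A rough sketch of that pattern, assuming the Mental Ray plug-in is loaded (colors illustrative):

import pymel.core as pm

phong = pm.shadingNode('phongE', asShader=True)

# One AO texture per color slot; 'bright' and 'dark' are the two colors
# the occlusion blends between.
for slot in ('color', 'whiteness', 'specularColor'):
	ao = pm.shadingNode('mib_amb_occlusion', asTexture=True)
	ao.bright.set([1.0, 0.0, 1.0])  # open, lit areas (magenta here)
	ao.dark.set([0.2, 0.0, 0.2])    # occluded creases
	ao.outValue >> phong.attr(slot)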

In the custom shader network, these PhongE networks provide the body, light (face/chest), and dark (socks 'n gloves) marks. They pipe into the blend utilities associated with those maps, and that whole reverse-matryoshka-doll of blend utilities pipes into the final shader: a mia_x_passes shader. This provides the photon capabilities and other utility, as well as creating further interesting color effects. When I get into dynamic fur, the custom shader network won't really be as relevant, at least at first.

 

So, that is the basic overview of how the coloring on my models works! I'd be happy to answer any questions you might have, as well as take suggestions for improvement!


One more for the road:

[Thumbnail: links to NSFW content]

OK! Hopefully now that I've got that out of my system, I can focus on more meaningful progress again!

This week was focused primarily on working with Maya Fur. Specifically, rediscovering my hatred of Maya Fur. Maya Fur is a terrible system for a lot of reasons. But I won't let this post devolve into a rant.

 

[Two scrap thumbnails]

I'll leave the ranting to the scrap descriptions :P

The primary issues encountered were in dealing with fur attribute maps and the fur's reaction to the model's UVs. Maya Fur works through attribute maps: you create, either in Maya or in a separate program, greyscale textures mapped to the UVs representing the values of various fur parameters on different parts of the model. Although the painted length leads to the sparse distribution of fur on the chest, that's mostly because I gave up on it partway through in order to focus on painting direction.

Direction is where the worst problems were encountered. Direction defines the flow of the fur, making the hairs bend and point in the painted direction. Because it is extremely difficult to imagine direction as a shade of grey, I'm stuck using Maya's tools to paint the values so I can get visual feedback on the new orientation of the hairs. The fur feedback is the root of the issues I have with Maya Fur. Due to how it works, I'm stuck needing a heavily detailed feedback setting, which means I often have to wait 10-30 seconds after a brush stroke to see the results. Which, more often than not, aren't the results I want, because the way the brush works and responds is awful. Switching between detail levels is a no-go, as it wrecks the maps being painted behind the scenes and introduces patchy flows.

Further problems were had with the UVs, again due to the way the brush works. When you paint over a seam, it fails to account for the difference in the UVs, and the fur ends up going in the wrong directions. Various sources cite the state of the UVs as being very important to fur, but don't detail what the UVs should be like. So, I tried a handful of alternate UVs and compared the quality of the fur each produced without any painting. The UVs I developed last week proved to be the least-worst default.

Honestly, Maya Fur wouldn't be that bad if only I had a better way to deal with painting the direction. Direction is actually a parameter called polar; other similar attributes include inclination, roll, and curve. Besides those few parameters, I can mostly paint the fur parameters in ZBrush in greyscale without direct feedback (it's not hard to imagine fur length being white for short and black for long) and export them to a texture to plug in as a map to the Maya Fur. It's only those few that I can't paint in ZBrush, because of the need for less abstract visual feedback, that really become a problem. OK, maybe it would also be tedious to paint all those individual parameters, too. Also, the fur just plain looks ugly.

 

I've investigated a number of alternatives to Maya Fur. Previously, I've worked with XGen (hate it) and nHair (not really ideal for full-body fur). I looked into other fur plugins and found "Shave and a Haircut", which is apparently an industry standard. It does not have a version for Maya 2015, and I don't know if the current version would work with it. Yeti is another acclaimed solution, but it's not available in the US because of patents owned by the creator of "Shave and a Haircut".

So, while I sort out the possibilities of those plugins, I plan to try out a technique for creating fur in ZBrush using its fibermesh and exporting it as curves. Those curves can then be imported, and used to create nHair. I don't know how feasible it is, but here's hoping!


This week was focused primarily on working with Maya Fur. Specifically, rediscovering my hatred of Maya Fur. Maya Fur is a terrible system for a lot of reasons.

At least it isn't Poser fur ....

[Image: Poser dynamic hair example]

 

I've investigated a number of alternatives to Maya Fur. Previously, I've worked with XGen (hate it) and nHair (not really ideal for full-body fur). I looked into other fur plugins and found "Shave and a Haircut", which is apparently an industry standard. It does not have a version for Maya 2015, and I don't know if the current version would work with it. Yeti is another acclaimed solution, but it's not available in the US because of patents owned by the creator of "Shave and a Haircut".

Is Worley's Sasquatch plugin still around?  I don't know whether they ever made it available for anything other than Lightwave.

 


At least it isn't Poser fur ....

[TRUNCATED]

Is Worley's Sasquatch plugin still around?  I don't know whether they ever made it available for anything other than Lightwave.

 

Looks like it's only for Lightwave. I've posed the question of what fur solutions are available for Maya (beyond the ones I've mentioned) to another forum, but so far there have been no responses. Looks like slim pickins. From one random, really old post I found on another forum, it seems that most professional studios roll their own fur solutions, sometimes redesigning them for every film. So, these might be my only real options.

On a side note, I thought it would be interesting to point out that XGen is actually a creation of Disney that Autodesk licenses for Maya. Disney also had to come to a settlement with the owner of "Shave and a Haircut" over his patent (which is what chilled the Yeti makers on selling in the US). XGen is quite new, first integrated in Maya 2015. Maya is also known for being popular in 3D filmmaking. I wonder, then, if XGen serves as the basis of the tech behind the fur in Disney's upcoming Zootopia film.


Instead of looking for fur plugins, what about typical hair plugins? V-Ray 3.0 I think has great hair/fur potential.

I currently work with Mental Ray, and it seems like a lot of effort to invest in an entirely new renderer. I've heard a lot of good things about V-Ray but haven't looked into it in depth all that much. The only comparison between the two that I could find suggested that V-Ray was easier and better at physically accurate, realistic rendering, but Mental Ray was more configurable and better at non-realistic rendering. It was a bit of an old post, though.

I can't find any information about specific hair plug-ins for V-Ray. I thought V-Ray just had general support in the form of some manner of hair primitives and you had to pipe-in curves or something. Do you have any links where I can learn more about hair in V-Ray?


It supports maps, which is excellent. It has support for ramp textures which is superb. Not exactly clear on how I'd do dynamics with it, but Maya fur just connects with nHair if you want any dynamics out of it, so maybe there's something similar.

 

Also, I finally managed to find their feature-set listing, including specs on the fur node. Their licensing and pricing is... *hugs wallet tightly to chest* Well, considering it's only about $200 more than "Shave and a Haircut" and yet is a fully featured render technology with associated capabilities, instead of just a fur plugin, I guess it's a pretty good deal. It has a 90-day trial. Hmm... I'll have to look into this more.


I go on vacation during the last week of this month. Probably best not to start a limited-time trial of V-Ray until after I clear that; prepping is going to take some chunks out of my usual schedule. I've also read that there actually is a way for me to buy Yeti. I think I'll hold off on that until after trying out V-Ray later on, at least.

 

This week, I explored nHair again. It turns out it's incredibly easy to go from ZBrush's Fibermesh to Maya's nHair. It is still a bit difficult getting the kind of results I want, though. But, I think that will improve with time. At the very least, I like working with it more than Maya Fur. Plus, since nHair seems to be the only option for driving dynamics/animation of XGen, VRay Fur, and Maya Fur, I think it makes the most sense to focus on improving my skills with it for now.

[Image: Fibermesh-to-nHair tail fur test]

The results don't look half-bad! Certainly not in stills, anyways. In motion, they still leave something to be desired, but I think that will come with time. What it is not, currently, is fast. But I have some ideas for that, too. While this might work alright for tails, I don't think it's appropriate for the full body. I suspect a combination of shaders (not the placeholder metallic one used here) and well-placed fur tufts will give me the style I'm looking for. It's also possible I may need to combine it with another fur technology: nHair would serve the stylistic and dynamic elements, the model and its shaders would provide the overall shape and design, and the other technology would serve to blend them together in some fashion. Maybe. I think I need to experiment with it more.

As for the tail, however, using the whip-like tail model requires lots of floof. Rather than go the Fibermesh-to-curves-to-nHair route, I think it might be more productive to work out some programmatic approaches to curve placement and then proceed with nHair creation from there. This should give me better control over the way the fur looks and how many curves get created. A rough sketch of the idea is below.
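The gist might look something like this (mesh name and sampling step are placeholders):

import pymel.core as pm

# A first stab at programmatic curve placement: one short guide curve per
# sampled vertex, pointing out along the surface normal.
mesh = pm.PyNode('blytheTailShape')
for i, vertex in enumerate(mesh.verts):
	if i % 20:  # every 20th vertex, to thin the coverage
		continue
	p = vertex.getPosition(space='world')
	n = vertex.getNormal(space='world')
	# Two-point linear curve from the surface outward; nHair can be made from these.
	pm.curve(degree=1, point=[p, p + n * 0.5])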

There were a number of improvements to the nHair done here, compared to when last I tried it with Rederick. Most notably, an exploration of the difference between passive and dynamic follicles and ultimately the interpolation between these two types. The previous attempt with Red was done entirely with dynamic follicles and no interpolation. I've learned that this is responsible for the very spiky look to that fur. Interpolation (regardless of whether the follicles are passive or dynamic) helps create a smooth transition of fur between the individual follicles. So, I have 3 animations for comparison:

[Animations: Passive | Dynamic | Hybrid (SWF)]

Passive follicles are not dynamically simulated, meaning they follow the motion of the body they're attached to but don't deform based on gravity or wind or such. Dynamic follicles are simulated and can also be affected by collisions. Hybrid here refers to the use of both passive and dynamic follicles: about 1/3 of the follicles are passive and the rest are dynamic. This means the nHair will interpolate the passive follicles between the position/motion of the dynamic follicles around them. This creates a best-of-both-worlds result, where you get voluminous yet dynamic fur.

The animation used in these tests is the same automatic counter-rotation used in the Rederick image. It's really very poor; you can see the legs jostling a bit in most of the images. You can really see the defects in the hybrid-follicles animation, because I forgot to move the legs back into position after capturing the stills. But, now that it's in a simple Python script, it is so incredibly simple to set up that I might as well use it for tests. Maybe I can make some tweaks to it. Really, if I had the time, I should work on building better rigging controls.

 

As for next week, since I have a vacation interruption after that, I think I want to take some time to bring the Rederick (and with it, Gravitas) models up to par with Blythe. I don't think this will take too much work, but I've been wrong before. If there's any time left after that, I'll probably try out some more stuff with nHair or maybe work on a couple other PyMel script ideas. Of course, ideas and suggestions  are also welcome!

 


The first thing I think of as I see the tails is that the fur seems very thick. Not in density, but rather in the size of each individual strand. I feel the fur could be made much thinner in order to match the size of normal fur; right now I must admit it feels more like a mop, especially in the dynamic version. However, what I also see in the dynamic version is that the fur actually looks good on top of the tail, right by the base. If you shortened the fur to two centimeters or so, you could potentially make it look more alive without it necessarily hanging down like someone splashed it with water. Maybe even the passive version could be useful like this, if you wish for it to be more stiff and fluffy instead.

How are you doing the colour for the fur, by the way? I don't know what possibilities nHair offers you in such terms, though I've recently learned that Blender allows you to use a texture map to base your hair on. Not a texture on each individual strand, but rather one for the mesh area you've applied fur to. Each strand then looks at its UV position and is coloured based on where it's placed. This allows you to make subtle differences in your fur, making it seem more alive compared to limiting it to one unified colour.

I think this might interest you as well considering you're working on fur: How to Render Hair with Cycles. The first ten minutes is mostly about how much hair rendering has improved in the newer versions of Blender compared to how it used to be, but there should be a lot of things I believe will interest you after that. Like hair rendering modes, hair shape size and the way to use textures for hair. Who knows? Maybe you might find some more use of this than what I've mentioned here.


[...] as right now I must admit it feels more like a mop, especially in the dynamic version.

 

Hah! That was exactly the same thing I said when I first saw the dynamic version.

The individual hairs can be thinner, but not much. I used 0.003, and it can go down to 0.001. However, there are a number of other parameters that complicate things, including a hair width scale that allows me to adjust the base width over the length of the hair. Even more important to the appearance of width are the number of hairs per clump (set to 900 for my renders here) and the width of the clump. A sketch of the relevant attributes follows.
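In script terms, the relevant attributes might be set like so (shape node name is illustrative, and the width-scale ramp indexing reflects my understanding of the attribute layout):

import pymel.core as pm

hairSys = pm.PyNode('hairSystemShape1')  # illustrative node name
hairSys.hairWidth.set(0.003)
hairSys.hairsPerClump.set(900)

# hairWidthScale is a ramp over the hair's length: full width at the root,
# tapering to 20% at the tip.
hairSys.hairWidthScale[0].hairWidthScale_Position.set(0.0)
hairSys.hairWidthScale[0].hairWidthScale_FloatValue.set(1.0)
hairSys.hairWidthScale[1].hairWidthScale_Position.set(1.0)
hairSys.hairWidthScale[1].hairWidthScale_FloatValue.set(0.2)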

[Image: nHair width and clump settings]

There are plenty of settings to tinker with. With time, I hope to find a good balance.

The interpolation settings also affect this, but the majority of the thickness is due to the clump settings and the number of hairs per clump, which are also vital to getting good coverage. I have an example in a bit.

How are you doing the colour for the fur, by the way? I don't know what possibilities nHair offers you in such terms, though I've recently learned that Blender allows you to use a texture map to base your hair on. Not a texture on each individual strand, but rather one for the mesh area you've applied fur to. Each strand then looks at its UV position and is coloured based on where it's placed. This allows you to make subtle differences in your fur, making it seem more alive compared to limiting it to one unified colour.

nHair is a bit odd when it comes to coloring. You can set Hair color, specular color, and a Hair color scale (which I'm not too familiar with.) Hair coloring wasn't really my focus in my last post. But now, let's run some quick tests and we can explore more about how it works!

For speed and simplicity (and to illustrate its effect on apparent width), we'll keep hairs per clump set to 1 instead of 900. Hair width will remain at 0.003, and every other parameter (except for colors) will remain the same. I will use a 3D Crater texture. I assume Blender has a crater texture too, but if it doesn't, it looks sort of like this, but in 3D:

[Image: crater texture sample]

Results: (Click images for full size)

Hair Color set with Crater:

[Renders: Magenta Spec | MidGrey Spec | Green Spec]

Specular Color set with Crater:

[Renders: Magenta Hair | MidGrey Hair | Green Hair]

Hair color and specular color clearly work differently. Hair color appears to be constant across the hair, and seemingly across the positions of the hairs, too. Specular clearly varies across the length of the hair itself, rather than by the position of the hair. There may be ways around this, but it'd probably be simpler just to use other tech with better support. However, as stated earlier, nHair appears to be the primary means of giving dynamic properties to the other technologies. So, delving too far into color, or possibly even size and width, is probably not that important. Or maybe it is. I don't know. We'll have to see when I manage to get to the other tech after my vacation.

For the impact of hairs per clump and hair width, here's the tail with 300 and 0.001 settings:

 

[Renders: Spec Crater / Hair MidGrey | Spec MidGrey / Hair Crater]

And, 900 with 0.001

[Render: 900 hairs per clump at 0.001 width]

As a side note, applying textures to hair color is even weirder than I expected. The brown fur above was the result of reconnecting the crater to the Hair Color parameter after having rendered the switch. I don't know why it was brown this time; I didn't reposition or alter the crater texture. It just ended up brown when I connected it this time. I really don't think Hair Color supports textures. EDIT: It does, but it requires you to paint them in the application. That's stupid, but perhaps there is still a way to pipe in textures created externally.

 

I haven't looked at your video just yet, but I'll take a look at it soon. I work with Maya rather than Blender, so I don't know how much will apply. But, it will be interesting to see how Blender handles it too.


Very few hair plug-ins do decent dynamics, but you can sometimes make a good approximation using animated vertex maps to adjust the attitude, gravity, figure-hugging-amount and so on. It's a bit more work though.

One thing I've considered trying but never gotten around to is to bake shadowing into a special texture which causes darker colours to be interpreted as closer figure-hugging. This means that you could 'squash' fur down by shining a light from behind the item doing the squashing such that it shadows the fur to be compressed. It's a technique I've never gotten around to testing though.


Very few hair plug-ins do decent dynamics, but you can sometimes make a good approximation using animated vertex maps to adjust the attitude, gravity, figure-hugging-amount and so on. It's a bit more work though.

One thing I've considered trying but never gotten around to is to bake shadowing into a special texture which causes darker colours to be interpreted as closer figure-hugging. This means that you could 'squash' fur down by shining a light from behind the item doing the squashing such that it shadows the fur to be compressed. It's a technique I've never gotten around to testing though.

I've never heard of that shadow squashing trick before, sounds interesting! Might be a useful quick way of setting fur length, too. I expect nHair will serve as the dynamic portion of whatever fur/hair system I ultimately go to for looks. Out of curiosity, what programs do you work in?

 

 

As for this week's efforts, I guess trying to totally revamp Rederick and bring him up to Blythe's standard in a single week was a bit... overly ambitious.

As always, I struggled with the topology of the arm-shoulder area and the crotch-pelvis. I made huge strides forward, but there are still a few nagging issues. The pelvis especially feels like it has excess edge loops for the crotch.

[Image: Rederick topology update]

I think I finally caught on to what seems wrong about the crotch: it was too high. The crotch area was brought lower without adjusting the legs. The net result makes it look a bit bigger, too. Topological improvements inspired by the Blythe work also improved bulge shape retention during deformation (i.e., if he bends over to pick something up in a certain way, you can see his package :P). It's not quite right yet, but I suspect that has more to do with the skin weights than the perineum topology. Spreading the legs apart also causes some unfavorable deformation on the inner thighs, which is more likely a topological problem.

As for fur and nHair, a few further refinements were made, based off BerryBubbleBlast's thoughts and a few other experiments. In particular, I found nice alternative settings with 90 hairs per clump (in place of 900) and 6 sub-segments instead of 3; it makes them a bit smoother. I also tried slightly different coloring. The biggest test came from a per-follicle setting called "braid". It's supposed to do exactly what it sounds like: make the follicle into a braid. But something about my setup didn't do this. It did create a very different look, though, and helped break up the uniformity a bit more. But I don't know if I really like it or not. (A sketch of flipping the setting is below.)
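For reference, flipping it across every follicle is about a one-liner (sketch; selection filtering omitted):

import pymel.core as pm

# Toggle the per-follicle 'braid' attribute on all follicles in the scene.
for follicle in pm.ls(type='follicle'):
	follicle.braid.set(True)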

[Animations: Braid Set | No Braid Set]

I also played with another per-follicle attribute: color override. It allows you to override the Hair Color parameter individually on each follicle. Sadly, it doesn't provide a way to override the specular color or anything like that. This means that the white fur tip of the tail ends up with a light blue specular look, or the blue fur ends up with an ugly white/grey specular color. So that just about settles it: something else is going to have to provide the visuals of the fur, even if nHair drives it.

[Image: color-override tail test (no animation on this one)]

I also discovered, just today in fact, a lovely pair of Python modules: numpy and scipy. These include data structures and support for things like k-d trees and other spatial-processing goodies that should make it much simpler to develop and improve those nearest-vertex-based scripts I was tinkering with a while back! Bubbling in the back of my mind are all sorts of ideas for how to appropriately compensate for significant topological changes when identifying appropriate vertex choices. If I can get those ideas to work, I'll have some very powerful tools to vastly improve productivity, automation, and the repeatability of tests! I dream of a day when the machine will paint the skin weights for me, as an apprentice does, while I, the master, review and correct a few remaining issues to bring it to perfection! (A first sketch of the k-d tree idea is below.)
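Sketch (mesh name is a placeholder):

import numpy as np
from scipy.spatial import cKDTree
import pymel.core as pm

# Gather world-space vertex positions into an Nx3 array.
mesh = pm.PyNode('blytheMeshShape')  # placeholder name
positions = [v.getPosition(space='world') for v in mesh.verts]
points = np.array([[p.x, p.y, p.z] for p in positions])

# One-time tree build, then fast nearest-vertex queries instead of my
# hand-rolled spherical layers.
tree = cKDTree(points)
distance, index = tree.query([1.0, 2.5, 0.0])
closestVertex = mesh.verts[int(index)]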

But, that is a long way off, especially given my current plans...

 

I figure another week or so and I'll have Rederick's topology update nailed down. But, I have vacation at the Grand Canyon next week and after that I intend to start investigating VRay. I want to try to tackle VRay and Rederick in parallel. I think this will work out because rendering can leave huge gaps where I don't work directly on the scene. I can fill those gaps with Rederick, as I do with other experiments normally.

 

Oh yeah, and if you don't hear from me two weeks from now, I probably fell in the canyon :P


Hi,

 

From what I'm reading and seeing, you work hard.

What is it really that you are going for? A workflow to render out "characters" from a base already created and rigged? If so... you can reduce the work I'm seeing here by like 80%.

 

That is one goal, yes. My overarching goal is to produce high-quality and interesting content. I think that part of reaching that goal will require a degree of automation.

I'm interested in hearing how you think I might improve my productivity.

 

 


This week was not very productive; I didn't even get a chance to do more work on Rederick's topology. But it was never going to be a productive week, considering a vacation took a chunk out of it and I started investigating an entirely new render technology. Speaking of VRay...

[Images: the same render in the VRay Frame Buffer vs. exported to the Maya Frame Buffer]

VRay Demo is limited to 600x450 and is watermarked.

I haven't quite wrapped my head around how their lighting model works. I eventually figured out I can add attributes to spotlights and other Maya lights, instead of creating the VRay-specific sphere, rect, or mtl lights. But I haven't quite figured out how to make them not so... global. Everything is so evenly lit, and I like some dark spots and stuff.

VRay certainly seems to have near or complete feature parity, but the workflow is completely foreign to me. One of the weirder things is the separate VRay render window, called the VRay Frame Buffer. I love that it allows me to continue operating Maya while the scene renders, but everything that comes out of it looks completely different from how it looks when sent to Maya's traditional frame buffer. I suppose that's fine, but it's just an extra step. The look in the VRay Frame Buffer is on the left; the export to the Maya Frame Buffer is on the right. I'm probably just missing some kind of setting or setup I need to do in the VRay Frame Buffer.

Ugh, and I can't find any good written documentation for VRay beyond the company's comparatively bare-bones documentation. Mental Ray and Mental Ray for Maya have tons of great documentation! So, I'm picking it up a bit slower than expected. I haven't quite decided if everything is to my liking, but I'll probably have to take at least 2-3 weeks to get settled with it before I can decide if I want to go with it. I haven't even gotten around to trying its fur yet!

There doesn't seem to be a standalone Ambient Occlusion shader material for VRay, so it might not be possible to replicate my old "temp" Mental Ray shaders, which will make direct comparisons a bit less interesting. Maybe I'll find something in the VRay materials that will prove more intriguing than the old look, though!

 

Anybody have any good written resources for VRay? I can't even seem to get VRay RT to work in the viewport...


I think VRay may be starting to grow on me. But, I also found some major disappointments in it this week.

On the positive side, I found the equivalent of Mental Ray's ambient occlusion shader. In VRay it is called the dirt texture. I had expected that to be something for their car paint shader, but no, it is nearly identical to the AO shader, except that it's faster and has more options for control. I haven't worked out an exact duplicate of the old body shader for Rederick, but I'm pretty happy with it for now.

The only remaining annoyance on the non-fur front is that I can't export/import shader networks composed of its VRay materials. They come back as objects and import the model they were applied to as well. No idea why.

[Image: gamma and color-correction comparison grid]

I really like the top two images! But I have become acutely aware of something I hadn't messed with in Maya or Mental Ray yet: color correction and gamma settings. Maya/Mental Ray default to 2.2 gamma. If you enable both color correction and gamma in VRay, you get something like the lower right corner. Worse yet, Blythe looks best with a gamma of 2.2, while Rederick looks best without alteration. So, I've narrowed it down to a happy medium of 1.5, as in the lower left corner, so I can make sure they both look good when they end up together.
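For reference, the gamma curve in question is just a power function, which is why 1.5 splits the difference (values here are my own arithmetic, not render output):

def apply_gamma(linear_value, gamma):
	# Display value = linear value raised to 1/gamma.
	return linear_value ** (1.0 / gamma)

apply_gamma(0.5, 2.2)  # ~0.73: a strong midtone lift (Blythe's preference)
apply_gamma(0.5, 1.5)  # ~0.63: the happy-medium lift
apply_gamma(0.5, 1.0)  # 0.50: unaltered (Rederick's preference)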

There's still a lot of work to be done on the coloring front. Fur presents its own challenges, just as it did with Mental Ray. However, it's even easier to adjust this renderer to suit fur than Mental Ray (at least, once you get it to actually render; more on that in a minute).

I have a choice of secondary-bounce methods for Global Illumination. The best looking, lower left, is the photon map and, like using photons on fur in Mental Ray, it is incredibly slow: that one took 24 minutes. There are likely tweaks I can apply to improve this, but probably not enough to make it worthwhile. The Brute Force secondary-bounce method was second best and the fastest at 3 minutes. It is in the lower right, but I'm using a different gamma setting there, which makes it darker. That color setting is what you get when you don't do color correction or gamma on the model; like Rederick, it is a deep and dark shade, but it doesn't suit Blythe as well. Light cache doesn't look that good to me, but it is the recommended method (and 3rd fastest, with no secondary bounce obviously being the fastest), so maybe there are tweaks that can improve it.

[Image: GI secondary-bounce comparison]

But, fur and hair were the major disappointments of this week :(

I began with the most promising one: fur. For whatever reason, I couldn't get it to render at all when I applied it to the tail-wagging scene I set up prior to my vacation. I honestly have no idea why. I was finally able to get it to render by stepping back and applying it to a basic, unposed/unanimated rig I had saved just before that. While VRay Fur is easier to use and faster than Maya Fur, there is little in the way of grooming tools, and I've determined that there is no way to drive it with nHair, so it is stuck as static. It's not bad, but it's not as good as I'd hoped.

 

V-Ray-hair-shaded nHair, however, was the biggest disappointment. I can't figure out how to render nHair at all! I've tried everything I can think of: the tail-wagging scene, the one before that, a fresh scene with just a plane, and dozens of other variations within each (like with and without the VRay Hair shader). The only thing I managed to get working was a crude sample scene I found on the internet. Even then, any nHair I create in that scene fails to render, too; only the precreated nHair renders. Maybe it's some undocumented limitation of the demo?

18219952@200-1447302873.jpg

The other disappointing part of the nHair was that it doesn't seem to offer much that the normal rendering of nHair doesn't. It's literally just a plugged-in attribute on the hairShape. It offers slightly easier control in some ways, but I see no way to dynamically shade it based on where it sits on the mesh. Is it really too much to ask to have the best of both Fur and nHair together?!

It's kind of ugly, too, but then again I don't have much practice with it yet, so that might be why. The previous settings for Blythe's nHair are applied in the upper right, while the upper left is the same colors naively applied to the VRay hair shader parameters. I suspect the weird grey outline around the hairs is an effect of the transmission color. If I play with that, it might turn out better.

There is a glimmer of hope for the VRay hair shader, though. On the lower left you can see (from a 2D magenta-to-green ramp shader) that it clearly doesn't apply more than one color from the texture. But on the lower right, you can see something unusual going on where I used a magenta-and-green dirt texture (aka the AO shader): some of those hairs are clearly green, implying that 3D position is taken into account in some fashion. Maybe with a bit of work, I could get positional coloring after all! Or at least squeeze some neat effects out of it.

 

Regardless, there's clearly a lot of work left to do in evaluating VRay. Its hair and fur may not be the end-all-be-all, but I may still have a very good renderer worth the switch.

Link to comment
Share on other sites

I think perhaps I was a bit too harsh on V-Ray's fur last week. While it was a little disappointing, it wasn't nearly as disappointing as V-Ray's handling of hair; really, V-Ray Fur is a major step up from Maya Fur. I think I was more frustrated with the initial difficulty of getting it to work at all, coupled with the problems rendering nHair. Not being able to couple V-Ray Fur to nHair is a bit of a letdown, but frankly a smarter plan would be to use fur as a static supplement alongside a dynamic technology (now certain to be nHair); not everything about the hair/fur needs to be animated. I didn't invest a whole lot into getting it to look nice, either. So, I feel I owe V-Ray Fur a second chance.

I also explored one of V-Ray's more unique texture offerings: the Fresnel texture. There's no equivalent I can find in Mental Ray. The closest would probably be the edge parameter of the metallic shader, but even that is a bit different, and it comes wrapped with a whole host of other parameters along with certain limitations. With it, I created this:

Chernoblog.thumb.gif.1bfd43867194bbd3b6d

Originally, I wanted to create a banner for my Chernoblog here on Phoenix, with green fire licking off the model. Unfortunately, V-Ray doesn't support the cloud-type particle effects; it only supports spheres, and I wasn't really able to get that to work at all.

 

V-Ray rendering of nHair remains a bit of a pain.

After failing to get it to render at all, I turned to a general 3D forum. When that was silent, I tracked down Chaos Group's forum for V-Ray. You can't even view the subforums (let alone the topics or anything) without registering, so I did. Once in, I found that you aren't allowed to post anything on the forum. I sent an email to their admin asking about how I should inquire about a problem with the demo, but received no response. 6Tails was kind enough to go to bat for me on Twitter after I complained in this forum's 'Things I Hate' thread about their forum, but the group's response was that they prefer a "tight knit community forum". Not long afterwards, the admin wrote back stating that the forums were only for V-Ray customers.

Despite this, I managed to stumble upon a sub-forum of the "customers only" forum that actually does allow me to post. I posted my problem, and about a day later they responded stating that nHair seems to render fine with the demo build, and asking me to send my scene to their support team... Ah, no. I think not. I had considered recreating the sphere-based nHair test, but before doing so I figured out the problem: the demo is limited to rendering 200 objects, and the nHair output is not counted as a single object. Instead, every invisible follicle counts as an object, and even the default nHair settings create well over 200 follicles. I had spotted the 200-object limit early, but the way it counts nHair is all-or-nothing, and even objects hidden from rendering still count against the total. So, I was finally able to render nHair on the original tail-wagging test scene!

1119_HairballTail.thumb.png.b3ea40ba0bb6

The white streaks in the hair are part of an unrelated experiment with nHair's texture painting.

Ah... 200 is not exactly great for getting an idea of what the final product would look like, is it?
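In hindsight, a quick scene audit would have caught this weeks ago. A minimal sketch of one follows; 'follicle' and 'mesh' are standard Maya node types, but the idea that every follicle (hidden or not) counts against the cap is my own observation of the demo's behavior, not documented fact.

```python
# Roughly tally what the V-Ray demo seems to count against its 200-object cap.
# Counting every follicle individually, visible or hidden, reflects my
# observation of the demo, not documented behavior.
import maya.cmds as cmds

DEMO_OBJECT_LIMIT = 200

follicles = cmds.ls(type='follicle') or []
meshes = cmds.ls(type='mesh') or []
total = len(follicles) + len(meshes)

print('%d follicles + %d meshes = %d objects (demo cap: %d)'
      % (len(follicles), len(meshes), total, DEMO_OBJECT_LIMIT))
if total > DEMO_OBJECT_LIMIT:
    print('Over the cap: nHair will silently fail to render.')
```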

 

This week did see some improvements to Rederick's topology and shape. I fixed a triangle in the shoulder I had missed and, more importantly, I managed to fix the weirdly tall-looking pelvis area! Not only does it look much better, it deforms just as well as or better than it did before. It still requires a touch of skin-weight painting.

119_RederickPelvisQuick.thumb.png.9ceea9

 

Link to comment
Share on other sites

My sympathies regarding your continuing hair struggles.  I'm about to upgrade to Poser 11 Pro, and expect to face a bit of a learning curve myself, as my last version was 8.  The developers added a new rigging system and render engine during the interim, among other sundry tools and features, and it may take me a while to catch up.

Edited by Little_Dragon
  • Like 1
Link to comment
Share on other sites

  • 2 weeks later...

Well, I have to say that I'm more fond of V-Ray's fur (and even the nHair rendering) now that I've gotten past all the initial difficulties. It has certainly grown on me, even if it isn't an all-singing, all-dancing extravaganza. I wanted to explore combining fur and nHair a bit, because I expect this sort of combo will be the direction I choose to go (unless I go with stylized fur shapes built into the geometry). The face made for a good test case because its limited need for nHair fits well within the V-Ray demo's limitations.

738f277709ca10143ec0b7dcddeb9d152ec115a9

It's quite crude, but I was mostly interested in seeing whether I could get the two to blend together well, visually. Eh, not exactly successful: very bright looking. But I do like the way the fur is accentuated by the purple AO/dirt coloring of the underlying shader. I think it would help to have a better complementing shader on the model. That will require proper fur maps, though. The nHair still needs some playing around with. It sticks out too much from the fur body, making it very apparent that it's a separate element. I suspect having more than 6 NURBS curves per tuft will help. I struggled quite a bit with adding additional curves to nHair, but I'm starting to get the hang of that process. Constructing those curves and predicting their effect on the nHair's resulting paint effect is another matter entirely. I am also considering making my own paint effects, as I discovered they are perfectly compatible with Mental Ray. Maya's Paint Effects system is quite extensive and appears capable of some really beautiful things! But it is also liable to be a huge challenge and require me to develop skills more akin to drawing.

 

Aside from that (and some more work on Rederick's topology), the primary focus has been on comparing Mental Ray and V-Ray.

 

Now that I'm a bit more comfortable, I thought it'd be useful to look into the biggest differences. First among them: Phong E versus Ward. It's a bit lopsided in favor of V-Ray and its built-in Ward specular highlights, but Mental Ray pulled out a surprise when I discovered it has a legacy shader that implements Ward!

4dcf192681c776dad08e390349dabcc1afddd615

This linked submission's description has more details regarding the comparison

The biggest surprise didn't come from direct comparison, but rather from something I found in the V-Ray documentation's Known issues and Limitations page:

These are limitations, which due to the architecture of the V-Ray rendering system, cannot be implemented.

  • Materials cannot be used as textures (e.g. as an input to a texture or to another material). Although this can be implemented in principle, it may have unexpected results on the rendering or lead to various issues.
  • The Light Info and Surf. Luminance shading nodes cannot be supported. Although these can be implemented in principle, they may have unexpected results and lead to various issues.

These are two key capabilities that I exploit often in the creation of my custom shader networks.

In previous weeks, I was mildly perplexed that I wasn't seeing quite the look I expected from shader networks I recreated in V-Ray. I had just assumed some of my settings needed adjustment; since it wasn't a big deal, I hadn't looked into it further. Now that I've learned about these limitations, I decided to test it a bit further.

eaf57e7009b77fa19f3411905273590f8c947d95                                                                          fd7d2f4e6e16e79e4286880795f671532deca2b9         

Description further details the limitation's impact      This submission is a larger render of the new experimental shader network in Mental Ray

Despite V-Ray's fur tech growing on me, and despite V-Ray's fantastic speed and ease of use for most tasks, these limitations may very well kill it for me. I haven't decided. I'm still going to proceed with fur, regardless of which render technology I go with. But I want to be able to render more than just these characters floating in a void or over a bad Photoshop background; the capabilities of these renderers need to satisfy other creations, too.

I would love to hear people's thoughts on this, or at least what they think of the look of these shader networks!

Link to comment
Share on other sites

Let me try to structure this response for each shader you've tried out, starting on the image with the four different shaders. I'll just be giving my opinion of what the shaders feel like as I see them.

Top left, MR with the AO shaders: As you've mentioned yourself, it feels plastic, as the skin reflects too much light, or rather reflects it too brightly. Normal skin can reflect light, but most of it gets "trapped" in the skin and thus seems less reflective. If there's a way to allow some of the light to pass through, I believe you can get more of the feeling you're after. Alternatively, this might work with a "second skin" beneath the normal skin, where the normal skin lets some light through and the second skin reflects most of it back out through the normal skin. I've never really experimented with shaders or multi-layered reflections myself, so I don't know how feasible it is, or even whether it's possible at all. However, this is the first thing that came to mind, as using only one shader to mimic skin reflections might not do it justice, especially if the same shader throws off the shading in the rest of the room.

Top right, MR with the legacy shader: This one feels better, yet it still looks a bit off. It might have to do with the smoothed surface the model has, or rather the smoothed surface the shader makes it look like it has. The reflected light itself doesn't feel bad, except that it "dulls out" the details on the skin: for example, around the throat and eyebrow, the contours become less visible compared to the upper-left image, although comparing it with an image that has "overly defined" contours due to the reflected light might be unfair.

Lower left, MR with the legacy shader: Here this shader actually makes the image look better. It doesn't look like too much or too little, short of the feeling that there should be more details on the body (i.e. muscles and bones and such), although that's just me wishing to see this model improved even further. Actually, have you ever tried making custom textures with height, normal, specular, and such? I don't know how well it'd work with this specific shader, but one thing you could do is paint in areas where the skin reflects light either more or less, to give the illusion of details. That way you could get away with not having to model or sculpt certain things, as long as the viewer believes they can see them on the body. As a matter of fact, Substance Painter might be able to help you with that, as it's basically and literally meant to paint textures, heights, and more directly on the model. You can even get a free 30-day trial if you ever wish to try it out.

Lower right, V-Ray with the Ward specular highlights: This one feels very dark, as there's little to no reflection at all. It has the same feeling as the top-right one, but with less light. The skin seems to absorb light rather than reflect it, and the skin colour might make it look like it's absorbing even more. If it's possible, I'd like to see this one with a slightly more reflective surface; it could work for the better if the skin is actually allowed to reflect light at all.

 

Everything I've said now is based not on experience, but rather on what I "feel" when I see them, and thus is biased by my own idea of what's good or bad. I might just as well be wrong about something, so don't take my feedback as the truth, but rather as a different viewpoint. My suggestions for certain shaders (or any shader at all, for that matter) might or might not work, though it's up to you to decide whether you wish to try them out. I hope this can be of use to you nevertheless.

  • Like 1
Link to comment
Share on other sites

That really isn't bad Doc. Sure it's got room for improvement but I'd say it's an excellent start! Curious to see where you'll end up, far I hope. The potential is there

Link to comment
Share on other sites

14 hours ago, Amiir said:

That really isn't bad Doc. Sure it's got room for improvement but I'd say it's an excellent start! Curious to see where you'll end up, far I hope. The potential is there

Thank you for the kind words! They really mean a lot :D

16 hours ago, BerryBubbleBlast said:

Lower right, V-Ray with the Ward specular highlights: This one feels very dark, as there's little to no reflection at all. It has the same feeling as the top-right one, but with less light. The skin seems to absorb light rather than reflect it, and the skin colour might make it look like it's absorbing even more. If it's possible, I'd like to see this one with a slightly more reflective surface; it could work for the better if the skin is actually allowed to reflect light at all.

 

Everything I've said now is based not on experience, but rather on what I "feel" when I see them, and thus is biased by my own idea of what's good or bad. I might just as well be wrong about something, so don't take my feedback as the truth, but rather as a different viewpoint. My suggestions for certain shaders (or any shader at all, for that matter) might or might not work, though it's up to you to decide whether you wish to try them out. I hope this can be of use to you nevertheless.

Thanks for the reply! I really enjoyed reading your feedback.

I saw Substance Painter on Steam during the T-Day sale, but forgot about it. I think I'll try out that 30-day trial soon, in case it comes on sale again after Christmas.

As for the V-Ray Ward specular highlights: it looks like I didn't save that specific scene for V-Ray, but it was easy to recreate from the Mental Ray scene. There are a number of complicating factors in these V-Ray images, especially the gamma setting. I don't know what the gamma for the original test was, but here are new renders with varying gamma-correction settings:

11251208_Compare_VR_NoGamma.png.422bef69

No Gamma Correction (this seems close to the original result)

11251208_Compare_VR_1pt5Gamma.png.948276

1.5 Gamma (Previously identified as the happy middle setting for Rederick and Blythe)

11251208_Compare_VR_2pt2Gamma.png.21305e

2.2 Gamma (Standard Default Gamma Correction)

I also realized that I had used the wrong shade of blue for Blythe! She's supposed to be a lighter kind of baby-blue shade. This is the same material, with no gamma correction but with the correct shade of blue:

11251208_Compare_VROriginalBlue_NoGamma.

However, looking at the 1.5 and 2.2 (left and right) Gamma corrected versions, I'm starting to think there was a reason I originally used the alt-blue shade in V-Ray:

11251208_Compare_VROriginalBlue_1pt5Gamm11251208_Compare_VROriginalBlue_2pt2Gamm

 

Everything below this point is 1.5 gamma correction.

Anyways, regarding the reflectivity of the shader material: technically, I already have the reflection amount set to 1, which is the highest reflective value. Roughness, which does affect reflection, is set to 0; increasing it will dull the reflection and make it darker still. The highlight and reflection gloss values basically just spread out (or pull together) the reflection rather than intensifying it; I have 0.6 for both of these. A 1 is a perfect reflection, but because there isn't really anything to reflect in my scene, it wouldn't really show any highlight or gloss. So, 0.9 is a good view of that one end.

11251208_Compare_VRNewBlue_00HR.png.0a5f11251208_Compare_VRNewBlue_99HR.png.11bd

0.0 (left) and 0.9 (right) for Highlight/Reflection gloss on New Blue color

11251208_Compare_VROldBlue_00HR.png.733111251208_Compare_VROldBlue_99HR.png.f46b

0.0 (left) and 0.9 (right) for Highlight/Reflection gloss on Old Blue color

I found that above 0.6, it very quickly starts to look noticeably more highlight-y and plastic-y, especially around the forehead.

11251208_Compare_VROldBlue_77HR.png.ed8911251208_Compare_VRNewBlue_77HR.png.e8dc

Old blue and New blue at 0.7 Highlight/Reflection gloss.
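These comparisons were rendered one at a time, but a parameter sweep like the sketch below would automate them. The VRayMtl attribute names ('hilightGlossiness', 'reflectionGlossiness') are what I see on my install, and 'blytheBodyMtl' is a hypothetical material name; treat both as assumptions.

```python
# A hedged sketch of sweeping the highlight/reflection gloss and rendering
# each step. Material and attribute names are assumptions, not guaranteed
# to match other V-Ray versions.
import maya.cmds as cmds

MTL = 'blytheBodyMtl'  # hypothetical material name

for gloss in (0.0, 0.6, 0.7, 0.9):
    cmds.setAttr(MTL + '.hilightGlossiness', gloss)
    cmds.setAttr(MTL + '.reflectionGlossiness', gloss)
    image = cmds.render()  # renders the current view, returns the image path
    print('gloss %.1f -> %s' % (gloss, image))
```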

The other tricky bit is that I don't use basic white for the reflection color. I plug in a dirt texture to provide AO-based pink and kind of a desaturated reddish-strawberry thing. Admittedly, it can be difficult to tell the difference between the two (even when they're lined up next to each other), but the difference shows mostly in the areas where the reflection would be shaded a bit, as well as the really big reflection on the forehead where it starts to meet the eyebrow ridge.

11251208_Compare_VR_1pt5Gamma.png.94827611251208_Compare_VRNewBlue_WhiteReflect.

New Blue with Pink AO reflection color (left) and Plain White reflection color (right)

11251208_Compare_VROriginalBlue_1pt5Gamm11251208_Compare_VROldBlue_WhiteReflect.

Old Blue with Pink AO reflection color (left) and Plain White reflection color (right)

Now, I could choose to make the colors very different. Maybe contrast-y, or maybe a tad lighter. It might help to better illustrate where the two sets of colors are being applied by the AO reflection color, at least.

11251208_Compare_VROldBlue_GreenMagenta.11251208_Compare_VRNewBlue_GreenMagenta.

Old/New Blue with Green and Magenta AO

Oddly enough, when I reversed the two colors, I found a rather pleasant coloring. Well, not so much for Old blue.

11251208_Compare_VROldBlue_MagentaGreen.11251208_Compare_VRNewBlue_MagentaGreen.

Old/New Blue with Magenta and Green AO

 

So, that's about all I can squeeze out of variation in the color/material parameters that affect reflection or things that approximate reflection. Let me know if you have other variations you want to see or have any further feedback :D

Edited by DrGravitas
Resizing images, hiding attachment bug
Link to comment
Share on other sites

Seeing these makes me think of something I thought of yesterday after I made my post: What is your goal with these shaders? What specific end result are you after?

My question here ties back to the feedback I gave, which I personally based on how human skin reflects light. However, that might not be what you're intending with your shading, which in turn means I might've made suggestions for something you don't even need. Also, depending on how much fur you'll implement on your models, the skin might not matter as much anyway. Therefore, I'm curious in what way you intend to use the shaders, as well as how important they'll be if you're going to cover the body in fur anyway -- if you're actually going to completely cover it in fur, that is. It'll be easier to make more suggestions if I know what you're actually after. :)

Speaking of fur, I like the way you've implemented it in your last post. The brightness obviously throws it off a bit, but the overall feeling looks really nice. The one thing I can think of right now is whether it's possible to vary how dense and long the fur is in different areas. I'm thinking along the lines of very dense, short fur around the snout and face, and longer fur the further down the head and neck you get. Kinda like how the fur is on wild foxes and wolves. Although the same question I asked earlier applies here as well: how far do you intend to go with fur? Just specific parts, or maybe even the whole body? It'd be easier for us to help if we know the end goal you're aiming for. Or maybe you want us to suggest things out of the blue, hoping something new and unexpected might appear. That works too.

  • Like 1
Link to comment
Share on other sites

10 minutes ago, BerryBubbleBlast said:

Seeing these makes me think of something I thought of yesterday after I made my post: What is your goal with these shaders? What specific end result are you after?

My question here ties back to the feedback I gave, which I personally based on how human skin reflects light. However, that might not be what you're intending with your shading, which in turn means I might've made suggestions for something you don't even need. Also, depending on how much fur you'll implement on your models, the skin might not matter as much anyway. Therefore, I'm curious in what way you intend to use the shaders, as well as how important they'll be if you're going to cover the body in fur anyway -- if you're actually going to completely cover it in fur, that is. It'll be easier to make more suggestions if I know what you're actually after. :)

Speaking of fur, I like the way you've implemented it in your last post. The brightness obviously throws it off a bit, but the overall feeling looks really nice. The one thing I can think of right now is whether it's possible to vary how dense and long the fur is in different areas. I'm thinking along the lines of very dense, short fur around the snout and face, and longer fur the further down the head and neck you get. Kinda like how the fur is on wild foxes and wolves. Although the same question I asked earlier applies here as well: how far do you intend to go with fur? Just specific parts, or maybe even the whole body? It'd be easier for us to help if we know the end goal you're aiming for. Or maybe you want us to suggest things out of the blue, hoping something new and unexpected might appear. That works too.

I am open to any suggestions; new things out of the blue can lead to interesting new directions! As for my goal with these shaders: I want to make something that looks pretty... beautiful... sublime, regardless of whether or not it is strictly realistic.

I haven't completely made up my mind on fur, either. I'm still considering building the shape of the fur into the polygons and skipping fur tech entirely. But for the moment, the plan is a full-body combination of short, static fur with tufts of longer, dynamic nHair-based fur, both of varying lengths and densities.

With V-Ray Fur (and I suspect this will go for all fur technologies), I've determined I can get densities high enough to completely hide the shader below, so the look of those shaders still matters. That is part of why I focused on what I did this week, but more on that a bit later, once I finish some things up.

Link to comment
Share on other sites

And now for something completely different!

I was in the mood for programming, so this week I had another go at the UV cutting script. This script, which I went over earlier in the thread, takes a list of edges (in the form of vertex positions in worldspace), operates over the mesh to identify the vertices nearest to the specified positions, and gets the edge(s) between them. With that list of the mesh's edges, the UVs can be cut into a fairly well-designed UV map. The idea is that I can save a ton of time by automating UV unwrapping a bit more. If you'll recall from last time, this script didn't exactly work.

But now? It works! ...Well, not really, or at least not perfectly in every scenario.

The problem is considerably more complicated than it would seem, but this script works perfectly in what I call the base case, and its performance degrades more slowly than the old script's. Bullet point time! (I've made up the percentages :P )

  • Base Case: No vertex position changes.
    • Old Script: 99% successful, with a few rare cases where it won't properly identify a vertex and thus can't find an edge.
    • New Script: 100% successful!
  • Near Case: Nearly no position change (the vertices are close to where they were, or there are very minor topology changes).
    • Old Script: 60% successful; if the two nearest vertices are not directly connected, it cannot identify the edge. Some missed verts.
    • New Script: 99.9% successful! It can build edge paths to disconnected verts (recovering from minor topo changes), but not always in a manner reflecting the originally intended cuts.
  • Hard Case: Drastic position and topology changes (guessing edge cuts on Rederick based on stored definitions created from Blythe).
    • In effect, this case and beyond represent bonus cases; nice to do well, but not really needed.
    • Old Script: Ah hahaha~ No. Not successful. You might get lucky and catch a few verts that still have direct connections.
    • New Script: 72% successful. It turns out you need a lot more intelligence in picking out and pathing these things to get perfect results, but it still gets you more than halfway there, saving me a good chunk of time when combined with manual processes.
  • Insane Case: The same model the position definitions were generated from, except smoothed.
    • There is literally no reason to do this other than to see how the script would fare in extreme conditions.
    • Old Script: -1x10^6%. You will waste several hours of time and probably crash Maya.
    • New Script: 0% success. You will get a result, and it technically did everything correctly (well, maybe), but it'll be worthless as an unwrapped UV.

1209_RealTimeUVBuildingWithHeatMap.gif.d

Real-time run of the script on Rederick under the Base Case scenario, with resulting UVs (including a blue-compression, red-stretching, white-balanced UV review). ~18 seconds of actual script execution on 4,583 vertices; the old script would take around 2 minutes for very nearly the same result.

566a0dda14f12_1210SmoothUVUgly.thumb.png

View of the torso (Blythe) under the Insane Case, with the resulting UV map. Took 54 min 24 sec operating on 75,522 vertices. Still surprisingly recognizable, even if it's useless. Click images for full size.

566a0deccbced_1210_4SquareUVScriptProble

Closer examples of the pathing problems in the Insane Case (top 2 quads). Lower-left quad: the difficulties exhibited in the Hard Case, with crude vertex selection and possibly wonky pathing. Lower-right quad: skipping pathing, we get at least a good number of correct choices, with a few things that need to be chosen manually and some that need to be removed. Still a time-saver. With pathing: 2 minutes. Skipping pathing: 25 seconds.

The key differences in the new script (and any issues with them) are below; a stripped-down sketch of the tree and search follows the list:

  • K-D Tree: This space-partitioning data structure replaces the weird sphere thing I was doing and enables quick, efficient division of the mesh by vertex position, reducing/minimizing the number of vertices we need to evaluate to find the one closest to a given position.
    • Minor problem: I have to iterate over every vert in the model before I can construct the tree. It should be possible to get around that, but it still only takes a couple of seconds.
  • Nearest Neighbor Search on the K-D Tree: This greedy search algorithm traverses our K-D tree and identifies the nearest vert.
    • Problem: Because I need to find two verts that should be connected, I need to make sure I don't select the same one twice. I thought I could just ignore the previously identified vert (using a list of illegal points), but now I'm not sure. I might need to actually delete the element from the tree (thus requiring it to be reconstructed and making things a lot slower). Not sure about that. If so, I'd probably be better off creating an R* tree.
  • A* Path-Finding Algorithm: This uses PyMel functions to examine edges/verts and find the shortest path of edges between two given verts.
    • Problem: I think I screwed up the heuristic, because the Insane Case has weird loops and bizarre paths.
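Here's that sketch: the K-D tree build plus nearest-neighbor search with all the Maya plumbing stripped out. Points are (x, y, z) tuples; in the real script each node also carries its vertex index. This is an illustration of the technique, not the production code from the pastebins below.

```python
# A minimal K-D tree over 3D points with nearest-neighbor search. Stripped of
# Maya plumbing; purely illustrative.
import math

def build_kdtree(points, depth=0):
    """Recursively split the points on x, y, z in rotation."""
    if not points:
        return None
    axis = depth % 3
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        'point': points[mid],
        'left': build_kdtree(points[:mid], depth + 1),
        'right': build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    """Greedy descent, backtracking only when the far side could be closer."""
    if node is None:
        return best
    point = node['point']
    if best is None or _dist(point, target) < _dist(best, target):
        best = point
    axis = depth % 3
    diff = target[axis] - point[axis]
    near, far = (node['left'], node['right']) if diff < 0 else (node['right'], node['left'])
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane is closer than our best.
    if abs(diff) < _dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best

def _dist(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
```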

1208_KDTreeAnim01.gif.586fed4655528eeecb1208_KDTreeAnim02.gif.1104c2ac671761071d1208_KDTreeAnim04_LargeWireframe.gif.4eb

Various .gifs visualizing how the K-D tree divides up the model. Displayed are cells representing a tree node (vert) and each of the nodes branching off of it, down to leaf level. The first, small .gif shows every cell as if reading the tree from the root, going down left then right for each node. The other two .gifs show cells in order of largest to smallest, though I think they exclude the empty child nodes of the leaves that the small .gif shows. The larger of those two is just a wireframe view of the same middle .gif.

The edge selection process isn't very successful in the hard/insane cases, not just because of issues with the implementation of these components, but because such a task actually requires a fair bit more intelligence. The nearest vertices may not line up, and the shortest path isn't necessarily the best one for cutting UVs. To truly succeed, this script would need a lot more metrics to judge which edge loops to select, and would generally have to be a lot more intelligent. Not only would that be significantly slower, it'd be hella difficult to write!

Despite being crude and operating at a kind of stupid level of intelligence, and even though there are some potential issues with these implementations that I haven't quite figured out (or even verified are an issue), it has still proven useful. Moreover, while the script isn't perfect, the K-D tree, NNS, and A* are powerful tools that can serve as the basis for a number of other such scripts (many of which will be as successful, perhaps even more so, due to their simpler tasks). These other script ideas include:

  • Save/Restore Per-Vertex-Per-Joint Skin Weights: This could potentially reduce or eliminate manual skin-weight painting in Base Case and Near Case scenarios.
  • Save/Restore Per-Vertex Classic Linear to Dual Quaternion Blend Weights: A simpler task than either the UV unwrap or the skin weights, this could eliminate blend-weight painting that takes hours, as well as back-propagate minor tweaks made for individual posings to the saved rigged baseline.

Both of these have the potential to really speed up my workflow for topology and shape changes, taking what currently takes about a day and possibly reducing it to minutes. Coupled with the UV script (when only minor incremental changes are made), this could let me produce a fully rigged and weighted model in less than an hour, allowing me to do texture painting every week even when I go back and make topology changes!

 

You know what's even better? I'm putting the full scripts out there so you can check them out! Or even use/modify them; I don't care. Just don't expect quality Python outside the K-D tree, NNS, and A*, which are mostly copied and modified from Wikipedia and other free web resources.

  • UV Unwrap Script: http://pastebin.com/Epfi6DV8
  • Edge Definition Storing Script: http://pastebin.com/8jpAg6Ug (select the border edges of an existing UV and run the script to export; a sketch of the format follows this list.)
  • Bonus: http://pastebin.com/DNtX583s See a visualization of your mesh in a K-D tree, like the one used to create those .gifs. Just select your model and run the script. (Give it some time to start, and remember that it has to be run inside Maya.)
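As a taste of the edge-definition format: each stored edge is just a pair of worldspace vertex positions. A minimal sketch of the export side (not the pastebin script itself) might look like this, assuming the UV border edges are selected; the output path is hypothetical.

```python
# Export selected mesh edges as pairs of worldspace vertex positions.
# A simplified illustration of the stored edge-definition format.
import json
import maya.cmds as cmds

definitions = []
for edge in cmds.ls(selection=True, flatten=True):
    # Convert the edge component to its two endpoint vertices.
    verts = cmds.ls(cmds.polyListComponentConversion(
        edge, fromEdge=True, toVertex=True), flatten=True)
    pair = [cmds.pointPosition(v, world=True) for v in verts]
    definitions.append(pair)

with open('edge_defs.json', 'w') as f:  # hypothetical output path
    json.dump(definitions, f)
```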

My Python quality varies and I'm still using Java-style naming conventions, but if you really want to know more about the scripts, I'd be happy to answer questions. If you're programmatically inclined, I'd also love to hear suggestions!

Link to comment
Share on other sites

  • 2 weeks later...

Bunch of minor stuff over these past couple weeks.

  • Following the UV unwrap for Rederick, I prepared a full skin weight painting, DQWeight Blend, and face rigging for him.
  • Upon completion, I discovered that I had mistakenly bound something wrong at the start and had to do it all over again.
    • At least I didn't have to redo the UV unwrap by hand! Yay, scripts!
  • I painted up texture maps for Rederick. This requires about 6 texture maps at minimum. Some went through a couple of revisions, but usually only the white marks and black marks are at all difficult to make.
  • Made a holiday hat, a turntable, and some miscellaneous stuff.
  • Tested a VRay shader network to serve as an equivalent to my Mental Ray custom shader network.

The VRay shader network isn't so much a network as a blend node that takes in several materials, plus textures that serve as filters. It is much more efficient than the custom shader network because it only renders materials for the sections where they will be visible. There is an equivalent in Mental Ray for Maya, but I've never gotten around to using it.
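In script form the setup is tiny, which is part of the appeal. A hedged sketch follows: 'VRayBlendMtl' is the node type on my install, but the attribute names ('base_material', 'coat_material_0', 'blend_amount_0') and all the material/texture names are assumptions that may differ by version.

```python
# A hedged sketch of the blend setup: a base material plus a coat material
# masked by a texture map. Attribute names are assumptions from my install.
import maya.cmds as cmds

blend = cmds.shadingNode('VRayBlendMtl', asShader=True, name='rederickBlend')
base = 'redBaseMtl'         # hypothetical: the red base material
coat = 'whiteMarksMtl'      # hypothetical: the white-marks material
mask = 'whiteMarksMapFile'  # hypothetical: a file texture acting as the filter

cmds.connectAttr(base + '.outColor', blend + '.base_material', force=True)
cmds.connectAttr(coat + '.outColor', blend + '.coat_material_0', force=True)
# Where the map is white the coat shows; where black, the base shows through.
cmds.connectAttr(mask + '.outColor', blend + '.blend_amount_0', force=True)
```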

1219_Views_3Qrtrs.png.30d8344c4e11aea05d    1219_Views_3Qrtrs_1Color.png.4870d433f25

Something seems... off about the results. You can kind of see what I mean when you compare the blended material with the single-material shader. The red shader is the base material. I can tell by the nose that it's not just using the base material settings and applying the other materials as colors (the nose is a different type of shader). But it's like the deeper in the stack a material is, the lighter and less like itself it gets. I checked the texture maps, and they are definitely set to let all materials below them through 100% where they are not to be applied. Not sure what causes it to look like this.

Eyebrows are also really annoying to make sometimes. I am not too happy with how they look right now; very bland. I think Rederick might not have enough shape differentiating his face from Blythe's in these areas. But fixing that would require redoing all that weighting/rigging/painting...

1223_MR_CustomShaderNetwork.thumb.png.dd

This image uses Mental Ray and my good ol' custom shader network! The stretching of the eyelid texture is more apparent here as I haven't cleaned it up like I did in the others.

Really, there are a couple of problems right now. These are OK for the underlying shading, but the texture maps will need changes to work well as fur maps. Moreover, there's a lot of ugly stretching on the eyelids and lips. For the eyelids, I believe the problem is with the UVs. Fixing those would mean redoing all that weighting/rigging/painting...

 

OK. So, lots of things are pointing towards a do-over of all that stuff. So I decided to spend some time this week working on those other two scripts, which will help reduce or even eliminate the effort that goes into bringing the model from UVs to texture-map-ready! The new scripts:

  • Save/Restore Per-Vertex DQBlend Weights
    • I didn't get everything I wanted working, but I have successfully gotten it to copy over the blend weights.
    • It doesn't consider topology changes, and only small position changes can be properly tolerated, but it'll save plenty of time.
  • Save/Restore Painted Skin Weights
    • Incomplete, but coming along nicely!
    • This one will be much more complicated and more fragile than the other scripts. Each vertex has a series of joints that influence it, and a weight for how much each joint influences it.
    • We don't want to just copy those over directly because... reasons. It'll be more robust if we do it a different way, making it more useful for things I'm planning for the future, like mix-and-match painting. I also don't want to alter anything that wouldn't be changed by my own painting, to avoid messing things up if something small changes in an area I don't have to paint manually.
    • So, to do this, we can imagine that we have 4 states of skin binding:
      • A baseline of all the vertex-joint weights (VJWs) from when the original mesh is first bound to the skeleton, without any weight painting.
      • A finalized set of all those VJWs from after the original mesh has had its weights painted properly (likely by hand).
      • A baseline of the mesh we want to copy the painting actions to, from when it is first bound.
      • A finalization of the mesh we want to copy to, as we would ideally like it.
    • Getting from the new mesh's baseline to its finalization, given the original baseline and its finalization, is our goal! (A sketch of the idea follows this list.)
      • To do this, I believe I will need to take a sort of "difference" between the original mesh's baseline and finalization.
      • That difference could then be applied to the baseline of the new mesh to recreate the painting steps without straight-copying everything over.
    • No doubt there will be other considerations. But this leaves lots of good information and places in the code to insert intelligent actions and decision-making, so the script can make logical inferences that fit the results to what I want better than straight copying would.
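Here's that sketch of the difference idea, as plain Python over plain data (weights as {vertex: {joint: weight}} dicts). It's an illustration of the four-state framing above, not the actual script; matching vertices across meshes is the hard part it glosses over.

```python
# Sketch of the "difference" approach: record only the weight changes that
# hand painting introduced, then replay them on a fresh bind. Illustrative
# only; real meshes need smarter vertex matching.

def weight_delta(baseline, finalized):
    """Per-vertex change introduced by painting on the original mesh."""
    delta = {}
    for vert, joints in finalized.items():
        changes = {}
        for joint, w in joints.items():
            before = baseline.get(vert, {}).get(joint, 0.0)
            if abs(w - before) > 1e-6:  # only record actual paint actions
                changes[joint] = w - before
        if changes:
            delta[vert] = changes
    return delta

def apply_delta(new_baseline, delta):
    """Replay the paint actions onto the new mesh's fresh bind."""
    result = {v: dict(j) for v, j in new_baseline.items()}
    for vert, changes in delta.items():
        if vert not in result:
            continue  # vertex no longer exists; needs smarter matching
        for joint, dw in changes.items():
            result[vert][joint] = max(0.0, result[vert].get(joint, 0.0) + dw)
        # Re-normalize so each vertex's influences still sum to 1.
        total = sum(result[vert].values())
        if total > 0:
            result[vert] = {j: w / total for j, w in result[vert].items()}
    return result
```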

Of course, all of this is in the very early stages, but as far as allowing me to make those UV (and maybe head-shape) changes, these scripts should save me a ton of frustration. I'm quite certain this is only the start of needing to redo those parts; no doubt all three of these scripts will be very useful in the future.

1223_Fur_BasicTest03.png.c26d58d54778124

Just for the hell of it, I also tried out the texture maps with VRay fur for all 3 major portions, literally just a few minutes ago. The whole setup didn't take more than an hour. There's lots of tweaking to be done, but it has some interesting effects. I haven't quite figured out how to properly control the application of the fur on the body with textures, and length is really awkward in some places, like the eyes, fingers, nose, and other finer-detail areas. I kind of like the chest, though.

Edited by DrGravitas
Switching out an improved version of the fur test without fur covering the eyes
Link to comment
Share on other sites

  • 2 weeks later...

3D is a fickle medium, prone to sensations of great progress that suddenly fall apart at the end. UVs have been like that a lot lately.

The new scripts have driven an explosion of new UV experiments, just as expected. I've been moving so fast, I haven't even slowed down to document progress much. The last documented set of UVs (somewhere between 10 and 20 UV sets ago) had a few issues: unexpected stretching in some areas, and a few UVs that folded over themselves, leading to texture artifacts.

Seams were also abundant on the 12/23 UVs, and those were set as my first target.

0101_UVSeamsB.gif.121267a370696a95be7b11

Dealing with the seams was... somewhat frustrating.

Eventually, I settled upon UV shells that greatly reduce the number of components (and thus seams), and placed the cut edges in areas of the model where seams should be out of the way of textures, or at least less visible.

568efa4c899bf_0107_4SquareTxtrUVProblems

A selection of UVs. The upper 2 quads are UVs and a test image from 12/23. The lower-left quad is a smoothed version of the latest UV shells, designed around seam reduction/isolation, in their default size and arrangement. The lower-right quad is the same set with manual scale adjustments and an automated, rearranged layout.

One issue I'd forgotten from much earlier UV lessons: textures, being simple bitmaps, give a given area a resolution directly dependent on the size of that area on the UV map. That means the eyelids, which are very small polygons whose default UV size is accordingly tiny, have very few pixels available to them on the texture. This leads to ugly stretching, dithering, and highly blocky textures in those areas. To combat this, either the UV shells must be cut in sizes sufficient for the automatic unwrapper to resize them appropriately (not enough), or I have to manually resize the appropriate shells and rearrange their layout. This additional manual operation has an impact on some further future ideas, but otherwise is more of a nuisance than a problem. Automating this process is unlikely for now, as it would be even more complicated than the recent scripts and not quite as helpful, either.
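To put numbers on it, here's a trivial sketch. The shell area fractions are made up, but they're the right order of magnitude for tiny shells like the eyelids.

```python
# A UV shell's pixel budget is just (texture size)^2 x (fraction of UV space
# it occupies). Sample fractions below are illustrative, not measured.

def shell_pixels(texture_size, uv_area_fraction):
    """Approximate pixel budget for a UV shell covering that UV fraction."""
    return texture_size ** 2 * uv_area_fraction

# An eyelid shell at its tiny default size vs. scaled up 10x per axis:
for label, fraction in (('default eyelid', 0.0001), ('scaled eyelid', 0.01)):
    pixels = shell_pixels(2048, fraction)
    print('%s: ~%d px (~%dx%d)' % (label, pixels, pixels ** 0.5, pixels ** 0.5))

# default eyelid: ~419 px (~20x20)    -> blocky, stretched garbage
# scaled eyelid: ~41943 px (~204x204) -> enough for clean markings
```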

The biggest limiting factor to progress on UVs and texturing has been painting the textures themselves in ZBrush. It's entirely manual and has very few tools that are really all that helpful, and there's not much that can be done about that. I also have a terrible tendency to spend excessive amounts of time trying to perfect the lines and shapes I want to paint. To speed that up a little, I tend to run simple test textures that take less time. These tests have proven very misleading at times, leading to significant amounts of time wasted painting textures for UVs that ultimately turn out to have crippling issues, on the belief that I'd actually solved them this time.

In prioritizing the seams, the stretch/compression of the UVs suffered for a number of elements. This turned out to be a significant issue after one full texture test. ZBrush does something strange when it calculates how to create a UV texture from polypaint: somehow, stretching or compression affects certain areas, even fairly balanced areas surrounded by compression/stretching, and the polypaint won't apply as expected. I can't really tell until after I finish painting and run a test, either, making it quite time-consuming to deal with.

568efa38b497b_0107_4SquareTxtrUVProblems

As seen here, stretching/compression of the UVs makes the well-defined, clean paw pads of the polypaint in ZBrush devolve into terrible textures in Maya. This also affected the edges of the ears and the eyelids, initially.

Dealing with this issue is still a work in progress. Yesterday, initial tests seemed to indicate I had solved the texture-creation problem. It was only after a full texture test that I found the issue had cropped up in entirely different places (the paws), and had somehow remained in the eyelids despite my explicitly testing them and getting a green light. The issue is difficult to test and very elusive.

As an alternative to solving this issue in the manner chosen, I have finally downloaded the demo of Substance Painter. Despite initial reservations (and irritation with yet another unique camera-control scheme), I am starting to like it. Texture size is limited to 4096x in Substance, which may be a bit of a problem: I found that size resulted in a number of artifacts in ZBrush, and I had much more success creating 8192x textures and resizing them to 2048x in Photoshop. However, the way Substance paints is fundamentally different from ZBrush, in a manner that seems to avoid the compression/stretching texture-creation issues, and it has several other advantages. The primary disadvantage is that I don't get to see the butter-smooth, lickable shapes and edges I can see in ZBrush (because I'm actually painting the texture, which technically makes this an advantage to all but my aesthetic sensibilities :P)

 

I am further behind on the UVs than I wanted to be. I had hoped to have finished Rederick's UVs and textures and moved on to (or completed) Blythe's, so that I could get back to working on fur with good UVs and textures for both of them. If things suddenly start going right, I could still see that happening next week, but history indicates I will not get back to fur for at least two weeks, or possibly all of January. Here's hoping Substance Painter (or maybe figuring out my ZBrush issues) can prevent that!

Link to comment
Share on other sites

  • 2 weeks later...

Woo! The new texture maps are finally done for both Rederick and Blythe! As expected, they took a bit more than a week. I couldn't resist playing around with them, though. So, no progress towards fur.

On the bright side, I've finally revamped the custom shader network! It is now a proper layer shader, using Mental Ray's new MILA shader! It's much flatter, network-wise, but the MILA node is smart and knows how to instruct the renderer so that only visible shaders are called. That means instead of rendering every shader in the network for every pixel and then figuring out the color based on the network, it only renders the shaders that actually impact a given pixel's color value.

This makes it much faster. Maya's implementation of MILA also makes it easier to add and control layers in the network (once you figure out its obtuse nature). So, I played around with it!

                             60784e92c622f9cdd8a1e4161d54b0e7faaa076c                     

Using the Metallic-based Shaders                   Using the classic ThongE err, I mean PhongE-based Shaders

I created a marble-like look that I want to use with a piece inspired by my recent statuette piece.

RnBTest1.png          Test.png

It's still very much a work in progress. You can also see the fur-alternative bloated polygons being used for the tails.

It utilizes the metallic-derived shaders for coloring. Both Rederick and Blythe have these setups now.

0117_Blythe Stand post.png

Blythe's Metallic and Baseline PhongE shaders. Click for full-size.

15_0929_MatureContent.png

(Mildly NSFW for nudity and suggestive posing. Thumbnail is the link.)

I also created a thong for Rederick. I kind of feel like it's a bit more appropriate than simply having a white-fur-colored bulge where his genitals would be covered. I don't know whether I'll keep it, but at least it'll be easy to add back.

15_0929_MatureContent.png

(Mildly NSFW for crotch-bulge focus and vaguely suggestive posing. Nice try anyways, Rederick)

 

Fur is finally back in the crosshairs. With these texture maps, dividing up the spaces should be much easier. I will no doubt have to derive specialized maps for the fur, but these provide excellent starting points and ways of masking off areas where I don't want fur to appear (like the nose). That leaves the question of V-Ray, however: while I don't like V-Ray for rendering shaders, I have grown to like its fur implementation and the ability to add shaders to nHair.

Decisions, decisions...

 

Link to comment
Share on other sites

  • 2 weeks later...

Well, last week was terribly unproductive thanks to unfortunate circumstances. Worse yet, fur progress has been mired in disappointment.

56b3e5f8be5ae_0204_4SquareXGenTailspost.

I gave XGen yet another shot, and was once more disappointed. It's frustrating because it is so close to being an amazing solution to this problem. But because it's tied to the model topology, it just can't produce the level of fur I need without some intense effort. Even if I were to put in the kind of effort needed to make all the copies of XGen descriptions necessary to get the fur coverage I want, I don't know that I would be able to render them. Worse, there is very little information available and virtually no good tutorials. The inability to use UV maps to easily cordon off sections of the model for coverage of a given color (or better yet, to actually determine the color) is frustrating, and the whole thing is a completely alien workflow. I can't even get Mental Ray's AO shaders to ignore it!

The only easy-to-use and really nice part of XGen is that I can apply whatever shader I want to it and get exactly what I want. I used a simplified version of the model's metallic shader to color the fur in these tests, and it fits in just fine with the model's look (even if the metallic look isn't really right for fur). XGen is just a frustrating disappointment every time I use it.

I think I'm going to try some more exotic fur solutions next. No doubt most of them won't pan out, but we'll have to see.

Link to comment
Share on other sites

How exactly does XGen place fur based on the topology? Does it just take one polygon, place x number of hairs with length y, and then let them act however "they" want? Is this what's limiting the level you're after? I'm wondering, since if the topology is the main issue, would it be possible to have a "second skin" above the already existing tail, with a different topology, which you'd make sure follows exactly how the original tail moves? Since there'd be fur everywhere, you wouldn't even see the second skin, though I can't say for certain it'd be completely invisible. Or work at all, even. It all depends on how the fur actually behaves, of course.

  • Like 1
Link to comment
Share on other sites

19 hours ago, BerryBubbleBlast said:

How exactly does XGen place fur based on the topology? Does it just take one polygon, place x number of hairs with length y, and then let them act however "they" want? Is this what's limiting the level you're after? I'm wondering, since if the topology is the main issue, would it be possible to have a "second skin" above the already existing tail, with a different topology, which you'd make sure follows exactly how the original tail moves? Since there'd be fur everywhere, you wouldn't even see the second skin, though I can't say for certain it'd be completely invisible. Or work at all, even. It all depends on how the fur actually behaves, of course.

You are pretty much correct in your understanding of how XGen depends on topology. The XGen description controls the movement, length, and color. The images above actually do have a separate tail, flagged so the renderer won't render it at all, with many more polygons, bound in the same manner as the body mesh. There is a maximum number of hairs that a single XGen description (in this case, all of the tail is a single description) can have. It may be possible to get better coverage with multiple descriptions maxed out and applied in segments along the tail, but I have not yet attempted such tedium.

Link to comment
Share on other sites

I can understand it might take way too much time for your computer to calculate if you put even more fur on the tail. But what if you made the fur shorter as well, in order to lighten the burden and hopefully make it easier to control? Of course, with the tail mesh you're using right now you probably won't get the result you're after, considering how thick you modeled it before you began using fur. Then what if you used that thick model as a base and put fur on it instead? The model itself will make sure it's as thick as you'd want, while the fur gives it just enough of the tail-y feeling to make it look good. Granted, it won't be as flexible as if you'd used long and dense fur, and it will basically be a simplified solution to an obviously difficult challenge. The chinchilla from the Blender animation Big Buck Bunny would be a great example of an animal which has quite a lot of fur, but is simplified enough to still look fluffy with as little fur as possible. You could always complement it with some textures underneath to fill in where the fur doesn't cover completely.

The limitation with this is of course the fact that there really isn't much fur at all, which might or might not actually be an issue. Things like having a wind breeze through the fur, or just making it wet, will be much harder since there really isn't any fur to interact with. Despite this, it could be enough for what you're after ... depending on what you're actually after, that is. At least the option to do it this way is available no matter how you want to create your characters.

  • Like 1
Link to comment
Share on other sites

I will be responding to that post soon enough, but first something special I've been planning for Valentine's Day.

Description is available on its associated Weasyl post. Be sure to check it out in 1080p!

 

Ok, analysis time. This was not as successful as I would have liked. I am happy with the pose and the models themselves (mostly), but there were some severe technical issues towards the end. The turntable itself is too fast, but the render speed was so slow that I couldn't simply add frames to draw out the rotation without it taking far too long. It took 21 hours, 19 minutes, and 26 seconds to render as-is, and that was for 150 frames working out to a mere 4 seconds. I used YouTube's video editor to duplicate the 4 seconds several times over to build it up to a couple of minutes to accompany music. The music I wanted didn't fit well with the fast pace. I tried to slow it down, again through YouTube, to go with the music, but this introduced a number of visual glitches, especially around the background and raised arms. Very disappointing. So, I had to drop both the slow rotation and the music.
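For the curious, here is the arithmetic that ruled out simply rendering more frames. The run time and frame count come from the render above; the 30-second target is just an illustrative choice.

```python
# Per-frame cost times the frame count needed for a slower rotation.
# Times are from the actual 150-frame run; the target length is hypothetical.

render_seconds = 21 * 3600 + 19 * 60 + 26   # 21:19:26 for the 150-frame run
frames = 150
per_frame = render_seconds / float(frames)
print('%.1f sec/frame (~%.1f min)' % (per_frame, per_frame / 60))  # ~511.8 sec

# Stretching 4 seconds of rotation to, say, 30 seconds at the same frame rate:
slow_frames = frames * (30 / 4.0)
print('%.0f frames -> ~%.1f hours' % (slow_frames, slow_frames * per_frame / 3600))
# 1125 frames -> ~159.9 hours. No thanks.
```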

The background itself presented several unexpected problems, as I had a lot of trouble getting it uniformly lit. In the end, I had to settle for what I had. Most of those issues weren't evident until seen in motion. Fortunately, I had left myself lots of time for a second render run, should any problems only be visible in the animated result. Unfortunately, additional problems along the same lines cropped up in the second run. I had worked out a solution while the second run was still rendering, but accidentally overwrote a few of the render frames with tests. It was clear, however, that there were too many frames to replace, and that I would have to leave the dim background alone.

The hair presented a number of challenges, as usual, but I think I am finally getting better at making it look less crappy. Styling still needs work, but overall I think it is an improvement over previous head-hair attempts. It added significantly to the render time, however, though I managed to figure out a way to make the photon tracing skip it by configuring its object settings.

On the shader side of things, I remain fairly happy, though I admit the transition to the new metallic shaders has cannibalized all the performance gains I made by finally building a proper MILA-based network. The AO shaders are great. I love them. But they are just too costly to keep using, especially in complex scenes like this (with a whole two characters, oh wow :U ). Fur is just not going to render fast enough with these shaders; I need to trim them down and simplify. Honestly, these shaders have attracted much more attention than the old ones. I have noted interesting possibilities in mixing colors and partial alpha rather than simply layering colors. I think this will be the most fruitful direction. Perhaps somehow I could replace the 12 or so AO shaders with a single AO shader mixed over simple colors. Or something.

 

At any rate, it is over and done with. Fur remains the challenge ahead.

Edited by DrGravitas
Analysis added
Link to comment
Share on other sites

The exotic fur solutions failed. There were two I attempted:

  • Using SOuP's pfxToArray node
    • The idea here was to use this in combination with other nodes to find some way around nHair's built-in PFx coloring limitations.
    • I was unable to figure any of it out.
  • Applying a custom PFx to an nHair system
    • Any Paint Effects (PFx) brush can be applied to an nHair system.
    • Unfortunately, it just pipes into the same node and loses all its coloring options, remaining limited in the same fashion as the default. I suck at making good-looking PFx, too, so none of mine looked better than nHair's default.

[Image: 0215 PFx tail fail]

The meager results of the custom PFx applied to the tail.

 

That's not to say no progress was made. I returned to work on XGen and for once I actually made some improvements!

  • I discovered the Density slider is not limited to 100%, so I was not actually at the limit of visible splines like I thought. That means there's no need for tons of additional Descriptions applied to sections of the fur.
  • After much struggling, I managed to work around the brokenness of the XGen implementation and connect a texture map to influence the fur
    • This is a key success! With it, I finally have the UV-specific fur influence properties I've sought! XGen is very flexible, to the point of being difficult to figure out, and effects that are nearly automatic in other fur solutions are tedious to set up.
    • The key here is that XGen is NOT UV-dependent. It uses pTex files, which are defined without UVs. But Maya has the ability to bake UV-specific textures from its 3D Paint Tool into pTex files. By replacing the 3D Paint texture with my previously created fur maps, I can bake pTex files for the model that serve as XGen fur maps.
    • This is incredibly painful because of how broken the implementation is. I literally have to copy and paste files manually outside of Maya because it isn't creating them like it's supposed to. It has trouble linking them automatically, too, for some things. (A sketch of scripting that file shuffle follows this list.)
  • I've made progress in figuring out XGen's unique and awkward expression language. This has allowed for both improved control and variability on things like length, as well as mapping work.
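Since the manual copy-and-paste is so error-prone, here is a plain Python sketch of automating it. Both directory paths are hypothetical placeholders; point them at wherever your bake lands and wherever the XGen collection expects its pTex files:

import glob
import os
import shutil

# Hypothetical paths; adjust to your project layout.
bake_dir = 'D:/projects/blythe/3dPaintTextures'                  # where Maya bakes the .ptx files
xgen_dir = 'D:/projects/blythe/xgen/collections/furCollection'   # where XGen looks for them

for ptex in glob.glob(os.path.join(bake_dir, '*.ptx')):
    dest = os.path.join(xgen_dir, os.path.basename(ptex))
    shutil.copy2(ptex, dest)  # copy rather than move, keeping the bake as a backup
    print('copied', ptex, '->', dest)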

[Images: 0218 XGen fur body, 3-piece setup (close-up and full view)]

There are still lots of issues. Chiefly, I cannot find a way to vary the shader colors based on the UVs, even with the maps applied to the primitive color like I saw in tutorial videos. So, for now, I have to have multiple XGen Definitions, one for each different color section (3 total). This has its own problems. First, XGen's busted implementation means it can't auto-create the shader assignments, so I'm going to have to figure out how to create them myself in order to color the other fur patches.
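If I do end up scripting it, my guess is that a description can be force-added to a shading group like any other shape. A hedged PyMel sketch, with both node names as placeholders and no guarantee XGen honors the assignment:

import pymel.core as pm

# Hypothetical names: the description node XGen created, and the shading
# group of the fur shader I want assigned to it.
desc = pm.PyNode('fur_collection_bodyDescription')
sg = pm.PyNode('furColorSG')

# Assumption: assigning an XGen description works like assigning a mesh.
pm.sets(sg, edit=True, forceElement=desc)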

Second, the pTex resolution is insufficient (even at 1000, compared to the default of 5) to get a good, fine application. The result is gaps at the edges of the fur markings. This will require special versions of the fur maps with overlapping color markings so that I can blend between the two colored furs. I will likely need density maps that specifically affect the blend-space, too. I've also been experimenting with a map (and ways of adjusting it with expressions) that varies the length of the fur based on where on the model it appears.

This fur is going to require a lot of maps...

Edited by DrGravitas
Forgot some text
Link to comment
Share on other sites

FINALLY!

3 colors, 2 maps, and a single XGen Definition driving a unified, texture-based fur. I was starting to think that I'd never get here!

[Images: 0221 XGen unified fur, posed]

Yes, it's XGen. I know I complain about it a lot (and oh boy am I ever going to keep complaining about it!), but the only reason I've kept coming back to it is an awful, deep-seated feeling that it was going to be the way I would ultimately go. That's largely because it meets all the criteria I wanted in a fur tech, on paper anyway. The trick all this time has been getting it to work! That, and struggling with its awful implementation in Maya 2015. But at last I have succeeded in the struggle and constructed a solid foundation on which to build the fur.

XGen fur is composed of collections and definitions. Don't ask me what a collection means in a technical sense; it's still confusing to me. But definitions are basically the fur itself (or whatever XGen is applying; it does more than just hair/fur). Previously, I had major struggles with the definition, especially with regards to density. The coloring of the fur was the biggest challenge, however, and that is at last solved! I was able to produce custom shader parameters for choosing colors based on a pTex (XGen's UV-like tech), which in turn was baked from the texture maps devised to define the body markings for coloring in the shader. Naturally, all of this is covered in Maya's manual (well, OK, apparently not in the one I was using locally), but getting it to actually work was an incredible pain, and many times in the past it seemed it didn't actually work the way it was described.
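For what it's worth, the XGen Python API mirrors this structure, except that collections are called "palettes" there. A small sketch of poking at it (function names as I understand the xgenm module; verify against your install):

import xgenm as xg

# Collections are "palettes" in the API; each palette owns descriptions.
for palette in xg.palettes():
    print('collection:', palette)
    for desc in xg.descriptions(palette):
        print('  definition/description:', desc)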

These pictures also feature a crude means of directing the fur to "flow" in certain directions. It's not great, but at least it's not sticking out perpendicular to the surface of the model. In addition to the coloring of the fur, the length of the fur is varied according to specially drawn maps tied into a custom length Expression. A second map for …

Moving forward, goals for the fur are as follow:

  • Refine the length of the fur and experiment with tufts of fur on the chest, neck, and other areas.
  • Implement eyebrows.
  • Figure out a way to do the tail as a fluff of fur rather than a fur-coated polygon blob. This will probably require a dedicated XGen Definition and possibly an nHair simulation.
  • Work on the coloring, including root-to-tip variation.

Feel free to let me know what you think of the fur so far, and if you can think of any other goals or things I should work towards, fur-related or not!

Link to comment
Share on other sites

  • 2 weeks later...

In addition to working on fur, I've taken some time to develop an alternative Rederick model that sports a [CENSORED]. Yes, a [CENSORED]. It seems perfectly pragmatic to produce a [CENSORED] in this fandom, and I felt like trying something new. I don't generally create sexual works all that often, so they inevitably feel awkward to work on. I think exposing myself to producing these from time to time should help me get over the awkwardness and free me to follow wherever my creativity leads. Plus, developing this [CENSORED] (which requires topological changes) is an excellent chance to see how useful some of those scripts will be!

So, the [CENSORED] still has a few problems. As I posted a while back, I will be using the Mature/Adults Only thumbnails, which will link to NSFW materials.

[Mature Content thumbnail]

This NSFW link goes to WIP images of the [CENSORED], including some of the remaining problems.

The initial [CENSORED] seemed alright, but I felt the need to figure out whether it was well-proportioned. That's when I hit upon the idea of using the model's mouth! I simply moved it around and crammed Rederick's [CENSORED] into his own mouth for comparison. :D But since his jaw isn't articulated while building this, it wasn't much help. So, naturally, I brought in Blythe's model (with rigging) and found an excellent [CENSORED] measurement tool by having Blythe [CENSORED] Rederick. As you can see in the left column, the initial [CENSORED] was too thick. I did find some use for it, however, as a [CENSORED], and ended up creating several lewd renders with both Rederick and Blythe.

While I did get something a bit more proportionate, I don't know if the [CENSORED] is too long. It also still has some deformation issues and appears to taper a bit towards the end. I'm also not sure how best to connect the [CENSORED] with the [CENSORED], as well as to the pubic area. I'm fairly happy with the backside of those [CENSORED], though, but I failed to include that in the [CENSORED] WIP scrap. I'd love any feedback you can muster on the [CENSORED]. (No, you don't have to keep up this lame joke.)

 

I haven't forgotten about fur. Unfortunately, none of my recent experiments have yielded any advancements in its look. I even did one of the lewd renders with fur, but really wasn't happy with the results:

 

NSFW Links: Blythe Ride (Female Solo)

[Adults Only thumbnails]

With fur (FA)   |   Without fur (Weasyl)

As for the rest of the lewd renders, I'm very satisfied with them! Even though I wasn't able to get any with Rederick's [CENSORED], I did come up with a couple of ideas for using the [CENSORED] I derived from the overly-endowed version. Lighting ended up being the big problem for these, and I am really disappointed I wasn't able to fix the shadows.

[Adults Only thumbnail]

NSFW Link: Rederick toying around (Male Solo, also 2560x1600)

[Adults Only thumbnail]

NSFW Link: Rederick and Blythe (1920x1080)

 

 

 

 

 

... Penis

 

Link to comment
Share on other sites

OK! As per the new NSFW rules, from here on out artistic nudity will no longer have censor bars!

You have been warned! There is a reason this thread now sports an NSFW tag.

Regular NSFW sexual content will still have the Mature or Adults Only thumbnails.

 

 

Anyways, back to discussions of genitals. This cycle I did not actually make any progress on fur, but I did finish integrating genitals on Rederick, and I am fairly satisfied with the results. Further details on development are in this scrap:

[Mature Content thumbnail]

This links to NSFW content that may be interpreted as sexual (male solo)

I also had a chance to try some poses. I wanted something to demonstrate the genitals in a non-sexual context, and in that respect it seemed like a good opportunity to show that they can be flaccid rather than perma-erect:

 

I wanted this to be part of a series, but that'll have to continue in development for now.

 

I'm not sure about the color choice for the shaft, but for now this shade of red will have to do. As you can see in the development scrap, there were a handful of alternative shades. The fleshy one was the pick for a while, but I really wanted something more red for Rederick. Plus, I lost that shader mix to a crash.

Finally, I had an interesting opportunity to see how good I am at predicting poses. Back in November, I did a silly little pose implying masturbation under the surface of the water for my Black Hole Waterboarding thread.

[Mature Content thumbnail]   [Adults Only thumbnail]

Links to the old .gif on FA (NSFW, Male Solo)   |   Links to the new model on Weasyl (NSFW, Male Solo)

Surprisingly, the pose turned out to work almost unchanged once actual genitals had been integrated! A few small adjustments to the fingers to fix clipping, and it fit perfectly.

I will probably continue experimenting with poses and such with this new feature, unless I make some real headway on fur again.

Link to comment
Share on other sites

I spent most of the week doing fun pose work, but surprisingly the biggest advancement came together suddenly at the tail end of everything, and I finally have (mostly) well-groomed, fluffy fur!

[Image: rendered face fur scrap]

Scrap of rendered face fur. It took about 45 minutes, but I can probably cut that down with lower AA settings without sacrificing too much quality.
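If I script that, it might look something like this. I'm assuming the scene still uses mental ray's legacy sampling attributes on miDefaultOptions; the names are from memory, hence the guard:

import pymel.core as pm

opts = pm.PyNode('miDefaultOptions')  # mental ray's global options node

# Legacy sampling: samples per pixel scale as powers of 4, so dropping
# maxSamples from 2 to 1 cuts the worst case from 16 to 4 samples.
if opts.hasAttr('maxSamples'):
    opts.minSamples.set(-1)  # coarsest: one sample per 2x2 pixel block
    opts.maxSamples.set(1)   # finest: 4 samples per pixel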

The real big progress began during a very helpful discussion with @Kaizan, who helped me understand some of the mathematics behind how XGen appears to calculate the hairs. The key takeaway was that dPdu and dPdv (which are referenced but not defined in the documentation) appear to be partial derivatives describing how the surface P (of the model) changes in the U (or V) direction at a given value of u/v.

Understanding this gave me a better grasp of what the different parameters did and how they were affected. I eventually figured out that my entire direction was wrong and started the expressions over. Using two expressions from the documentation, I was able to devise an altered form that would bend, rather than point, the hairs towards the general direction I wanted! Coupled with a touch of randomness from the "evenly messy" expression setup in XGen, I was able to get fairly decent results in nearly every case!
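Spelled out in math terms (my reading of it, not an official definition): treating the model's surface as a parametric map P(u, v),

$$\mathrm{dPdu} = \frac{\partial P(u,v)}{\partial u}, \qquad \mathrm{dPdv} = \frac{\partial P(u,v)}{\partial v}, \qquad N \;\propto\; \mathrm{dPdu} \times \mathrm{dPdv}$$

So norm($dPdu) and norm($dPdv) are the unit tangents along the U and V parameter directions, and together with the normal $N they form the local frame the bend values below are expressed in. (The cross-product relation for N is the standard construction for parametric surfaces; I'm assuming XGen builds its normal the same way.)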

[Image: 0323 uniform body fur (censored thumbnail)]

This scrap likewise took quite a bit of time to render, despite the camera being further out.

The global expressions were as follows:

BendMagU float defined as:

-acosd(dot(abs(norm($N)),[0,1,0]))/180*(dot(norm($dPdu),[0,1,0]))

BendMagV float defined as:

-acosd(dot(abs(norm($N)),[0,1,0]))/180*(dot(norm($dPdv),[0,1,0]))

BendRand float defined as:

$size = 0.0090;#.005,1;

fit(voronoi($P*(1.005 - $size),3,.5,1,4,4,.4),.4,1,0,1)

The BendMag expressions are essentially the same as those in the documentation, but with different vectors. BendRand is essentially a default as well, just copied into a global expression. I then use these three in the expressions for the hairs' BendU[0] and BendV[0] parameters, multiplied by a multiplier value, to get the fur this way.
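To unpack BendMagU term by term (the annotations are my interpretation, not anything from the docs; BendMagV is identical with $dPdv swapped in):

# dot(abs(norm($N)),[0,1,0])  -> |N.y|: 1 where the surface faces straight
#                                up or down, 0 where it faces sideways
# -acosd(...)/180             -> that angle from vertical in degrees, scaled
#                                into a 0 to -0.5 weight: no bend on top of
#                                the model, maximum bend on the sides
# dot(norm($dPdu),[0,1,0])    -> vertical component of the U tangent, which
#                                signs and scales the bend so the fur lies
#                                down along the surface
-acosd(dot(abs(norm($N)),[0,1,0]))/180*(dot(norm($dPdu),[0,1,0]))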

 

There are still improvements that could be made to the fur bending. The inner ears look particularly bad. The direction is controlled almost entirely by the vector applied in the BendMag expressions. If I can come up with either a programmatic switch that chooses different vectors based on some position calculations, or a map, I could make different parts of the model's fur bend in different directions. A sketch of what that switch might look like follows.
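Purely hypothetical, but the positional switch could be as simple as this in the expression language (the height threshold is made up, and I haven't tested whether this plays nicely with the rest of the setup):

# Pick a different bend vector above a certain world-space height,
# e.g. to treat the head differently from the body.
$h = dot($P, [0,1,0]);                   # height of the hair root
$dir = $h > 10.0 ? [0,0,1] : [0,1,0];    # forward above, up elsewhere

That $dir would then replace the hard-coded [0,1,0] in the BendMag expressions.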

Since working on posing and then doing a touch of fur on the side was so successful, I think I'll continue with this mix. I'm currently working on an image series, so I'll hold off on posting any of the pose images for now.

 

Let me know if you have any suggested improvements to the fur! (I'm not sure why it's so shiny right now, though.)

Link to comment
Share on other sites
