How rendering should be

General discussion about 3D DCC and other topics
Maximus
Posts: 1105
Joined: 09 Jun 2009, 15:45

Re: How rendering should be

Post by Maximus » 31 Aug 2012, 13:06

People have complained about Houdini prices for quite a while, and SideFX cut the price by 50%.
Compared to Softimage there is now about a 1k difference in annual upgrade cost, which the software is well worth if you care about ICE. Not to mention the abysmal difference between the companies behind the software.

SideFX is way more serious than AD: they have one product, which they develop and still care about. I can't say the same about AD. I think that in a really short period a lot of people will take Houdini into consideration. Besides, you can skip the annual upgrade (which in any case is free for the first year, hello AD). I am not sure what happens if you want to upgrade after skipping a year, but I don't think you would pay half the price of the whole software to get back on subscription, the way AD does it.

When making comparisons you should also take into account the amount of free updates and bugfixes they release daily. Many updates you won't even pay for.
Softimage became a specialist application because they only develop ICE, so why not consider Houdini? And now that the price is friendlier... well, a colleague of mine already bought it, to be honest. And I won't be surprised if other people simply start using it.

The thing is really simple: if AD doesn't start seriously working on its software, things will just get worse and worse. But as usual, until you are sick you won't heal yourself; prevention is better than cure. AD fucked up badly lately and they don't seem to care. I can't wait to be proven wrong by the releases, but I already know it's not going to happen :)

We are at a point where there are no excuses for Autodesk anymore. No one forced AD to buy ten thousand programs it can't keep developing, only to then complain about software houses that have just one product to take care of.

Some of the disappointments are obvious: there are bugs and missing features that single users fix or implement by themselves, which it is insulting that AD never took care of. That is simply a sign of people not doing their job, while we users are paying money, and on top of that they keep raising prices. Seriously? It's a damn shame, and sooner or later AD will pay in full for the disaster they created.


@luceric - even if it's not a one-man army, it's still depressing that nothing of that caliber has come out of Autodesk, considering how big AD is and the tech AD has. So nothing really changes. AD seems unable to develop and deliver. There isn't a single year when AD surprises you with a stunning new feature. Pathetic.

Nizar
Posts: 725
Joined: 30 May 2010, 22:54

Re: How rendering should be

Post by Nizar » 31 Aug 2012, 15:07

Your expectation is surely partly true; in the CAD area AD is technologically behind every Dassault package, and they seem to be suffering a loss of market share (and a money haemorrhage) in that field.

AD loses pieces every day, so could they decide to sell Softimage again? (Dassault would, IMHO, be the best home for Softimage.)

luceric
Posts: 1251
Joined: 22 Jun 2009, 00:08

Re: How rendering should be

Post by luceric » 31 Aug 2012, 16:16

nixx wrote:But it has happened before, hasn't it? Remember back when people fled from the Lightwave camp en masse to come over to the Softimage side? People were really fed up with NewTek back then, and that, combined with the "3democracy" pricing, as well as the fact that XSI was the easiest-to-adapt-to package for an ex-Lightwaver, led to many people leaving and never looking back.
As far as I know, a "mass exodus" to XSI never happened. We all know people who moved from Lightwave to XSI, but that number is in the dozens. There was a mass exodus from Softimage to Maya earlier in the century, however; those things can happen around technology change.

Mathaeus
Posts: 1778
Joined: 08 Jun 2009, 21:11
Location: Zagreb, Croatia

Re: How rendering should be

Post by Mathaeus » 31 Aug 2012, 23:46

Ramon wrote: But it has happened before, hasn't it? Remember back when people fled from the Lightwave camp en masse to come over to the Softimage side? People were really fed up with NewTek back then, and that, combined with the "3democracy" pricing, as well as the fact that XSI was the easiest-to-adapt-to package for an ex-Lightwaver, led to many people leaving and never looking back.
Well, according to people I know where I live, actually no one "switched" from Lightwave to XSI. Many of them tried, but they didn't have the time or the willpower to learn a completely new 3d app. Anyway, there is an area of freelancers, small shops, occasional users, employed people who still want a 3d app for small gigs here and there... where LW and Cinema 4D are the tradition, and now Modo has become a player. And none of the three AD apps. OK, a few architects have the oldest possible Max and the newest possible V-Ray. Love fades when it comes to licensing schemes. Me too: I love Softimage and Max, but not enough to pay for the offense called AD subscription (it is NOT nearly the same as the AVID subscription when it comes to the final price). How wide that area is, is really hard to say.

Imho the only way to *really* put a new 3d app on the road is to find a job that uses it, at least partially. The old XSI Foundation had a lot of advantages over any 3d app of its time, for any user: fast subdivs, a fast fCurve editor, the ability to bake anything. A reliable text tool too. Back then, try launching the animation editor in Max with more than twenty fCurves, then wait a while for the crash. Baking with mental ray and Maya: almost impossible.

Today I really don't see *any* advantage of Houdini in everyday life. I'd say the cheap version makes sense only for learning, for people who want to become masters in big facilities one day. If some optimist wants to be competitive with it in the everyday tasks of a small shop, who cares; it can't even be used "partially". On the other side, Blender's simulation modules, each developed in its own way, could be a nightmare for huge pipelines, but for small tasks they are just asking to be used. And simulation in Blender is easy to learn, exactly because every module exposes all its available options; within a few hours you know whether you would use Blender or not.

And now that "when technology changes" point. That cloud computing thing, well, that's a total change: against habits, against current technology, against everything. By any logic the new era belongs to a 3d app written to live on the new technology, not to three old AD monsters. For archviz, product shots and the like, my bet is on some powered-up variant of Google SketchUp. They already have more than enough users, applied technology like V-Ray for SketchUp, and a rich owner. We will see, when the "mother of all technology battles" :) begins.

By the way, moving the SI team to Maya actually amounts to an "approval" of SI technology, ICE and so on. Otherwise the Maya and Max people would do anything to marginalize SI and everything related to it, just like they always have. Now SI is a player, well, in some strange way...

gfxman
Posts: 92
Joined: 28 Mar 2011, 15:14

Re: How rendering should be

Post by gfxman » 01 Sep 2012, 00:45

Can't remember: what was the most stunning XSI 2013 feature? :D

luceric
Posts: 1251
Joined: 22 Jun 2009, 00:08

Re: How rendering should be

Post by luceric » 01 Sep 2012, 14:41

Autodesk actually does have an interactive GPU raytracing renderer, but it's in Autodesk Showcase. You can do fun interactive stuff like drag things around to position lights and all of that. Maya's Viewport 2.0 and Max's Nitrous viewport share technology with Showcase, but not the raytracing.

You're not going to see any impressive raytracing development in M&E, because they don't do that. There are tons of renderers; the best thing to do would be to focus on the realtime viewports and the API, and kick mental ray to the curb. The rendering work at Autodesk is RapidRT and Showcase. Now, was anything impressive last year... that depends on your area of interest, I guess. I was impressed by the GPU cache demo with the city of Venice in Maya http://www.youtube.com/watch?v=7n6s-w63IRc, the DirectX 11 viewport, and the Mudbox gigatexel demo http://www.youtube.com/watch?v=exnEDYKqNyM. The ICE crowd stuff is certainly technically interesting, since it's basically open-source compounds in ICE, built with what's already in ICE, except for the GPU instancing.

Letterbox
Posts: 391
Joined: 17 Jun 2009, 14:49

Re: How rendering should be

Post by Letterbox » 02 Sep 2012, 01:20

It might also be fair to note that they are running a Tesla. Wouldn't a fairer comparison be to network in a separate PC to aid solely with the render region?

Kzin
Posts: 432
Joined: 09 Jun 2009, 11:36

Re: How rendering should be

Post by Kzin » 03 Sep 2012, 09:06

luceric wrote:Autodesk actually does have an interactive GPU raytracing renderer, but it's in Autodesk Showcase. You can do fun interactive stuff like drag things around to position lights and all of that. Maya's Viewport 2.0 and Max's Nitrous viewport share technology with Showcase, but not the raytracing.

You're not going to see any impressive raytracing development in M&E, because they don't do that. There are tons of renderers; the best thing to do would be to focus on the realtime viewports and the API, and kick mental ray to the curb. The rendering work at Autodesk is RapidRT and Showcase. Now, was anything impressive last year... that depends on your area of interest, I guess. I was impressed by the GPU cache demo with the city of Venice in Maya http://www.youtube.com/watch?v=7n6s-w63IRc, the DirectX 11 viewport, and the Mudbox gigatexel demo http://www.youtube.com/watch?v=exnEDYKqNyM. The ICE crowd stuff is certainly technically interesting, since it's basically open-source compounds in ICE, built with what's already in ICE, except for the GPU instancing.
You have access to the technology called mental ray and iray. It would be easy for AD to integrate this framework to get realtime feedback with full shading. You can also use the mia material for both, so you don't have to create separate shaders for realtime and offline rendering. It can run on GPU and CPU; the tech is there, you only need to integrate it (the city demo was using Alembic and was rendered with nvidia shaders, btw).
Using the available tech would also free up resources for other areas, for example the ICE stuff, which only gets slight updates from time to time without any really big steps forward, in terms of new modeling tools for example.

luceric
Posts: 1251
Joined: 22 Jun 2009, 00:08

Re: How rendering should be

Post by luceric » 03 Sep 2012, 10:23

As far as I can tell, you're replying specifically about Softimage now.
Kzin wrote:You have access to the technology called mental ray and iray. It would be easy for AD to integrate this framework to get realtime feedback with full shading. You can also use the mia material for both, so you don't have to create separate shaders for realtime and offline rendering. It can run on GPU and CPU; the tech is there, you only need to integrate it
As you know, Softimage doesn't implement iray, which is a separate renderer, but 2013 already uses mental ray's realtime stuff, including the mia material. That mental ray CPU/GPU tech is what you already have, and it's also used in Viewport 2.0 and the Nitrous viewport. It doesn't actually give you much for free; that's not how the GPU works. You still need to write your own code for lighting, environment maps, and generating shadows in multiple passes, and all sorts of things we take for granted, which is why these features are so much work to implement. You have to write a whole new OpenGL engine to get even basic lighting effects. In the Softimage 2013 viewport we used everything we could easily take from mental images, and added code for shadow generation and multisampling on top of our existing viewport. (iray doesn't do anything for your viewport; it's a separate renderer, like Octane.) In Maya and Max they wrote an entirely new viewport engine architecture that is very game-engine-like, and the mental ray material stuff is an optional component. In our case we did only the work we could afford, which was to finish up the mental ray MetaSL integration.
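The multi-pass shadow work luceric describes (first render depth from the light's point of view, then compare against it while shading) can be illustrated with a toy, CPU-only sketch. This is just the shadow-mapping concept with made-up names, not Softimage or OpenGL code:

```python
# Toy 2D shadow mapping: the light looks straight down, so the
# "depth map" stores, per x column, the height of the highest occluder.

def build_depth_map(occluders, width):
    """Pass 1: render occluder depth from the light's point of view."""
    depth = [None] * width
    for x, height in occluders:
        if depth[x] is None or height > depth[x]:
            depth[x] = height
    return depth

def is_lit(point, depth_map):
    """Pass 2: a point is shadowed if an occluder sits above it
    in the same column of the light-space depth map."""
    x, height = point
    blocker = depth_map[x]
    return blocker is None or height >= blocker

depth_map = build_depth_map([(2, 5.0), (2, 3.0), (4, 1.0)], width=6)
print(is_lit((2, 6.0), depth_map))  # above the blocker -> True (lit)
print(is_lit((2, 1.0), depth_map))  # below the blocker -> False (shadowed)
print(is_lit((0, 0.0), depth_map))  # empty column -> True (lit)
```

The real viewport work is doing these two passes on the GPU with render targets, projection math and bias handling, which is exactly the kind of engine code the post says has to be written by hand.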

Kzin
Posts: 432
Joined: 09 Jun 2009, 11:36

Re: How rendering should be

Post by Kzin » 03 Sep 2012, 10:45

luceric wrote:As far as I can tell, you're replying specifically about Softimage now.
Kzin wrote:You have access to the technology called mental ray and iray. It would be easy for AD to integrate this framework to get realtime feedback with full shading. You can also use the mia material for both, so you don't have to create separate shaders for realtime and offline rendering. It can run on GPU and CPU; the tech is there, you only need to integrate it
As you know, Softimage doesn't implement iray, which is a separate renderer, but 2013 already uses mental ray's realtime stuff, including the mia material. That mental ray CPU/GPU tech is what you already have, and it's also used in Viewport 2.0 and the Nitrous viewport. It doesn't actually give you much for free; that's not how the GPU works. You still need to write your own code for lighting, environment maps, and generating shadows in multiple passes, and all sorts of things we take for granted, which is why these features are so much work to implement. You have to write a whole new OpenGL engine to get even basic lighting effects. In the Softimage 2013 viewport we used everything we could easily take from mental images, and added code for shadow generation and multisampling on top of our existing viewport. (iray doesn't do anything for your viewport; it's a separate renderer, like Octane.) In Maya and Max they wrote an entirely new viewport engine architecture that is very game-engine-like, and the mental ray material stuff is an optional component. In our case we did only the work we could afford, which was to finish up the mental ray MetaSL integration.
I think that's the problem: users and AD are talking about different things when they talk about a realtime viewport. The users I talk to, when it comes to a realtime viewport (mostly lighting/shading), always, ALWAYS, mean fast realtime feedback of the rendered scene, NOT a new OpenGL thing that is completely disconnected from the offline renderer. Why not implement progressive rendering in the viewport, if you say iray will not be available? Why is AD coding a completely new thing instead of integrating current tech in the short term? I think it would be good if AD stated which direction they want to take with this, and which users they want to address.
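The progressive rendering Kzin asks for comes down to accumulating one noisy sample per frame and redisplaying the running average, so the viewport image refines toward the offline result instead of being a disconnected preview. A minimal, renderer-agnostic sketch (all names hypothetical):

```python
import random

def progressive_estimate(sample_fn, iterations, seed=0):
    """Accumulate one noisy sample per 'frame' and keep a running
    average, the way a progressive preview refines on screen."""
    rng = random.Random(seed)
    total = 0.0
    history = []
    for n in range(1, iterations + 1):
        total += sample_fn(rng)
        history.append(total / n)  # current refined estimate
    return history

# Toy "pixel": true value 0.5, observed through uniform noise.
noisy_pixel = lambda rng: 0.5 + rng.uniform(-0.5, 0.5)

history = progressive_estimate(noisy_pixel, 10000)
print(history[0])    # first frame: a single noisy sample
print(history[-1])   # after many frames: converges toward 0.5
```

The same loop structure is what makes a progressive viewport interruptible: any camera or scene change simply resets the accumulator and restarts the refinement.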

luceric
Posts: 1251
Joined: 22 Jun 2009, 00:08

Re: How rendering should be

Post by luceric » 03 Sep 2012, 11:06

Kzin wrote:I think that's the problem: users and AD are talking about different things when they talk about a realtime viewport. The users I talk to, when it comes to a realtime viewport (mostly lighting/shading), always, ALWAYS, mean fast realtime feedback of the rendered scene, NOT a new OpenGL thing that is completely disconnected from the offline renderer. Why not implement progressive rendering in the viewport, if you say iray will not be available? Why is AD coding a completely new thing instead of integrating current tech in the short term? I think it would be good if AD stated which direction they want to take with this, and which users they want to address.
Whether because they do animation, previz, or games, or because they use another renderer, a huge chunk of Autodesk customers don't use mental ray and never will, so providing a better mental ray preview is not an obviously good investment. nVidia's commitment to mental ray is very cloudy; it is definitely not obvious that we should add more dependency on mental ray in more products and push users toward it.

A great realtime viewport architecture needs to be done by Autodesk; a third party cannot implement that in the app. But third parties can implement plug-in offline renderers. Mental images should take up the work on the integration, just like every other third party does.

The demo Maximus posted above is a great third-party preview tech in Maya. Did you also see the OpenSubdiv demo from Pixar in the Maya viewport? To me this shows that Autodesk is doing the right thing: these third parties were able to implement these amazing things and get great performance with the Viewport 2.0 API. IMHO that's effort better spent than just chasing the stuff from mental images, trying to get it to work, and then being disappointed because it rarely does what people expect, or because they won't use it. Everyone, however, uses the OpenGL viewport!
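The split luceric argues for (the host application owns the viewport architecture, while offline renderers plug in through an API) can be sketched in miniature. Every name here is invented for illustration; this is not an actual Autodesk or Viewport 2.0 API:

```python
# Hypothetical host/plugin split: the host owns the viewport and
# exposes a small registration API; renderers plug in behind it.

class RendererPlugin:
    """Interface every third-party renderer implements."""
    name = "base"

    def render(self, scene):
        raise NotImplementedError

class HostApp:
    """The host: owns the registry (and, in real life, the viewport)."""
    def __init__(self):
        self._renderers = {}

    def register_renderer(self, plugin):
        self._renderers[plugin.name] = plugin

    def render_with(self, name, scene):
        return self._renderers[name].render(scene)

class ToyRaytracer(RendererPlugin):
    """A stand-in for a third-party offline renderer."""
    name = "toy_rt"

    def render(self, scene):
        return f"raytraced {len(scene)} objects"

host = HostApp()
host.register_renderer(ToyRaytracer())
print(host.render_with("toy_rt", ["sphere", "plane"]))  # raytraced 2 objects
```

The point of the design is that the host never special-cases any one renderer: mental ray, V-Ray or Arnold would all arrive through the same registration path.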

SreckoM
Posts: 187
Joined: 25 Jul 2010, 00:18
Skype: srecko.micic

Re: How rendering should be

Post by SreckoM » 03 Sep 2012, 11:17

This is exactly what I have been saying all along. Better integration of mental ray is not a top priority from AD's standpoint. For Maya this might not be a problem, as nVidia can easily create their own integration. I'm not sure how that could be done within SI; what would the options be, a geoshader? Not to mention that they will probably prioritize between the apps, and the way things are now I am afraid SI is not at the top of their list either.
- H -

Kzin
Posts: 432
Joined: 09 Jun 2009, 11:36

Re: How rendering should be

Post by Kzin » 03 Sep 2012, 13:25

luceric wrote:A great realtime viewport architecture needs to be done by Autodesk; a third party cannot implement that in the app. But third parties can implement plug-in offline renderers. Mental images should take up the work on the integration, just like every other third party does.
So can we expect AD to give up the mental images plugin development and open-source it, so that we can expect mr plugins from third parties? Currently AD has control over the mr plugin, but it's great to hear this will change. We are waiting for things like this to improve the whole situation. String options for XSI would be a start, so that it becomes possible to code new UIs for mr.

Maximus
Posts: 1105
Joined: 09 Jun 2009, 15:45

Re: How rendering should be

Post by Maximus » 03 Sep 2012, 14:26

Well, it's been quite clear for a while that mental ray is not being followed up on by AD or by its own developers.
Mental images always blamed Autodesk for the poor implementation in the applications, and fine; AD then decided to strip mental ray out of Maya and make it an external plugin with a loaded dll. That was done to make it possible for mental images to work on it.

Did you see any new features/upgrades/implementation/work from mental images on Maya's mental ray? None. And how many months have passed?
So again, I don't see much difference between having mental ray developed or implemented by mental images or by AD; they are both the same, they don't care.
I can perfectly understand AD wanting to move away from a dead product; what I can't understand is the almost nonexistent effort from mental images.

Just move on to another render engine. MR will not catch up anymore, and even if it does you will always be stuck in a limbo of not getting updates/features/bugfixes in a human timeframe.
It's just dead.

Kzin
Posts: 432
Joined: 09 Jun 2009, 11:36

Re: How rendering should be

Post by Kzin » 03 Sep 2012, 14:35

Maximus wrote:Well, it's been quite clear for a while that mental ray is not being followed up on by AD or by its own developers.
Mental images always blamed Autodesk for the poor implementation in the applications, and fine; AD then decided to strip mental ray out of Maya and make it an external plugin with a loaded dll. That was done to make it possible for mental images to work on it.

Did you see any new features/upgrades/implementation/work from mental images on Maya's mental ray? None. And how many months have passed?
So again, I don't see much difference between having mental ray developed or implemented by mental images or by AD; they are both the same, they don't care.
I can perfectly understand AD wanting to move away from a dead product; what I can't understand is the almost nonexistent effort from mental images.

Just move on to another render engine. MR will not catch up anymore, and even if it does you will always be stuck in a limbo of not getting updates/features/bugfixes in a human timeframe.
It's just dead.
I am really tired of answering people who don't know how things are going and write comments like this. It's a success of AD's marketing that mi gets blamed for all the problems. Also see luceric's last comment, which contains things that are not true. But because it's "hip" and "cool" to bash mr, AD must be telling the truth here; they can't be wrong. Ridiculous.

luceric
Posts: 1251
Joined: 22 Jun 2009, 00:08

Re: How rendering should be

Post by luceric » 03 Sep 2012, 15:13

I did not blame anything on mental images; read the post again. I just replied to your question about why we focus on the viewport rather than on a mental ray preview.

Btw, here is a line from the iray FAQ from mental images:
"iray is not intended as an interactive preview-mode for mental ray and it is not a real time ray tracer (RTRT)."

There is already a third-party mental ray plugin for Maya; it's called mentalCore:
http://core-cg.com/

There are enough third-party renderers for XSI, Maya and Max to prove that no one is waiting for anything from Autodesk in order to write a renderer plugin for mental ray or anything else.
Maximus wrote: Did you see any new features/upgrade/implementation/work from Mental Images into Maya mental ray? None. How many months passed
Only about five months since the release, I think? And there are still more changes required in Maya to allow not installing mental ray with Maya in the first place.
