Friday 10 November 2017

Year 2 - Unit 66,67 and 68: Understand theory and applications of 3D

 Task 1

Applications of 3D:
Fantastic Beasts And Where To Find Them
3D CGI/VFX
2016
Models represent a physical body in 3D. They use a collection of points in 3D space, connected by a variety of geometric entities such as triangles, curved surfaces and lines. They can be created by hand, generated algorithmically or scanned, and then given further definition using texture mapping. Before computers could render 3D models in real time, many computer games used pre-rendered images of them as sprites. Models are used in many fields, the main ones being the medical, film, video game, architectural and engineering industries, as well as the science and earth science communities.

Product design within the 3D world is very helpful for any company trying to come up with initial and final ideas. A company can work out how it would like a product to look without spending a lot of money on resources it doesn't necessarily need or isn't sure about, and it can develop its ideas flexibly with next to no restraints or losses.

3D animations consist of models that are built on a computer monitor; the 3D figures are then rigged with a virtual skeleton. The limbs, mouth, clothes, eyes etc. of the model are moved by the animator on key frames, and all frames are rendered once the modelling is fully completed. In most 3D computer animation systems an animator will create a much simpler representation of a character's anatomy, which generally looks like a skeleton or a stick figure.

3D in TV and film is often related to CGI, which stands for computer-generated imagery: the application of computer graphics to create realistic images. Much like computer animation, the term CGI covers both static scenes and dynamic images. It is used for creating scenes and special effects; the best way to explain this is to think of a TV show or film that shows something that couldn't possibly be filmed in real life (without animation).
Perhaps this is something mythical like a dragon or a fairy, but something like a rhino or an elephant could also qualify, as it is surely cheaper (and actually possible) to have a CGI elephant and rhino next to each other rather than the real ones, which most likely wouldn't get on well! This also crosses over into VFX. The use of 3D in games is very large, as you can imagine, with the majority of major games being in this dimension. I have talked more about how 3D is applied in games here:

Displaying 3D polygon animations:
In-depth Graphics Pipeline
Local Illumination vs. Global Illumination

APIs (Application Programming Interfaces) are basically messengers that tell systems what to do. Specialised versions have been created to ease all stages of computer graphics, as they have proved extremely important to graphics hardware manufacturers: they provide a way for programmers to access the hardware in an abstract way. Examples of graphics APIs are: Direct3D, a low-level 3D API that is part of DirectX and can render 3D graphics in applications such as games, using hardware acceleration if it is available on the graphics card; WebGL, a web-based API that renders 3D (and 2D) graphics in any compatible web browser without plug-ins, and which can be mixed with other HTML elements or combined with other parts of the page or its background; and OpenGL, a cross-platform API that renders 3D (and 2D) vector graphics and is most often used to interact with a GPU (Graphics Processing Unit). OpenGL was released in 1992, and around 17 versions have come out since. The graphics pipeline is a conceptual model in computer graphics that describes the steps a graphics system takes to render a 3D scene to a 2D screen.
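The heart of that pipeline, turning a 3D point into a 2D screen position, can be sketched in a few lines. This is a simplified perspective divide, ignoring the full matrix chain a real API uses; the function name `project` and the `focal_length` value are just illustrative:

```python
def project(x, y, z, focal_length=1.0):
    """Perspective-project a 3D camera-space point onto a 2D image plane.

    Points farther from the camera (larger z) land closer to the
    centre of the image, which is what gives a render its depth.
    """
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_length * x / z, focal_length * y / z)

# A point twice as far away projects to half the screen offset.
print(project(2.0, 1.0, 1.0))  # (2.0, 1.0)
print(project(2.0, 1.0, 2.0))  # (1.0, 0.5)
```

Real APIs such as Direct3D and OpenGL express this same divide through a projection matrix so it can be combined with other transformations.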
Some of the stages involved are: clipping, which removes parts of the image that aren't visible on the 2D screen, so only the primitives within the viewing volume need to be rasterised; lighting, where light sources can be placed around a scene to make the lighting of objects look much more realistic; projection, which transforms the viewing volume into a cube with corner point co-ordinates (occasionally other target volumes are used as well); rasterisation, in which all primitives are converted into grid points, called fragments for the sake of distinction, with one fragment corresponding to one pixel in the frame buffer and one pixel on the screen; and shading, where the most important shader units are pixel shaders, vertex shaders and geometry shaders.

Rendering techniques include ray tracing, which provides a more realistic simulation of lighting than other rendering methods. Effects like reflections and shadows, which are difficult to simulate using other algorithms, are natural results of the ray tracing algorithm; it models mirror reflections well, but diffuse reflection is only approximated. Radiosity is a second rendering technique: it models diffuse reflection accurately but ignores mirror reflections, and it attempts to simulate the way in which directly illuminated surfaces act as indirect light sources. Rendering engines convert 3D wireframe models into 2D images on a computer; Mental Ray supports ray tracing, and Arnold is based on ray tracing technology.

There are two major types of lighting. The first is indirect lighting (global illumination), which is all of the inter-reflected light in a scene. It is an approximation of real-world indirect light transmission; an example would be light spilling into a room through the gap at the bottom or side of a door. The second is local illumination (light sources), which is only the light provided directly by a light source.
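The basic operation inside a ray tracer is testing where a ray hits the scene. A minimal sketch of the classic ray-sphere intersection test is below; the function name and argument layout are made up for illustration, and a real renderer would fire one such ray per pixel and many more for reflections and shadows:

```python
import math

def ray_sphere_hit(origin, direction, centre, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - centre|^2 = radius^2, a quadratic
    in t. The direction vector is assumed to be normalised.
    """
    ox, oy, oz = (origin[i] - centre[i] for i in range(3))
    dx, dy, dz = direction
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                       # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0      # nearer of the two roots
    return t if t > 0 else None

# A ray fired down the z axis hits a unit sphere centred 4 units away.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 4), 1.0))  # 3.0
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (5, 0, 4), 1.0))  # None
```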
Examples of this are a spotlight on a stage or the sun shining directly on a solar panel. Applying texture is like applying wrapping paper to a present; it is done by assigning every vertex in a polygon a texture co-ordinate. Fogging is a technique used to give an impression of distance by imitating fog: objects that are further away are faded out, and objects even further away are not drawn at all, which can save processor power. Pixel shaders are components that can be programmed to work on a per-pixel basis, and they take care of things like lighting and bump mapping. Vertex shaders are programmed using a specific assembly-like language and are oriented towards the scene geometry.
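The fading that fogging performs can be sketched as a simple per-pixel blend. This uses the classic linear fog factor found in older fixed-function pipelines; the function name `fog_blend` and the colour values are illustrative only:

```python
def fog_blend(colour, fog_colour, distance, fog_start, fog_end):
    """Blend a pixel colour towards the fog colour based on distance.

    The linear fog factor is 1.0 at fog_start (no fog) and falls to
    0.0 at fog_end (fully fogged). Anything beyond fog_end takes the
    fog colour entirely, so it need not be drawn at all.
    """
    f = (fog_end - distance) / (fog_end - fog_start)
    f = max(0.0, min(1.0, f))  # clamp to [0, 1]
    return tuple(f * c + (1.0 - f) * g for c, g in zip(colour, fog_colour))

red = (1.0, 0.0, 0.0)
grey_fog = (0.5, 0.5, 0.5)
print(fog_blend(red, grey_fog, 10.0, 10.0, 110.0))   # (1.0, 0.0, 0.0) - no fog
print(fog_blend(red, grey_fog, 110.0, 10.0, 110.0))  # (0.5, 0.5, 0.5) - fully fogged
```

This is exactly the kind of per-pixel work a pixel shader is well suited to.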

Geometric theory:
The construction of a face that could make up
a polygon using vertices and edges.
A vertex is the basic object used in mesh modelling: a point in 3D space. This whole theory focuses on creating and editing 3D objects. Two vertices connected by a straight line of any length become an edge, so edges are the connections between two vertices, and a closed set of edges can make a face. Curves are often approximated with many short lines joining vertices that sit very close together. The simplest polygon is formed when three vertices are connected to each other by three edges, forming a triangle. More complex polygons are created out of many of these, or as a single object with more than three vertices. A group of polygons connected to each other by shared vertices is usually referred to as an element, and each of the polygons making it up is called a face. It is possible to create a mesh by manually specifying vertices and faces, but it is much more usual to create meshes using modelling tools. It is also possible for two faces to exist at the same location. In most real-time systems a face is stored as a triangle made up of three vertices, so polygons with more vertices are broken down into triangles; a face is basically the flat surface of a shape once its edges and vertices have been created. This is all needed to understand the mesh construction shown below:
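These vertex, edge and face relationships can be sketched in a few lines of code. This is a minimal illustration, not how any particular package stores meshes: faces are triples of vertex indices, and the edge set is derived from them, with shared edges counted once (the function name `mesh_edges` is made up):

```python
def mesh_edges(faces):
    """Derive the unique edge set of a triangle mesh from its faces.

    Each face is a triple of vertex indices; each edge is stored as a
    sorted pair so that an edge shared by two faces appears only once.
    """
    edges = set()
    for a, b, c in faces:
        edges.update({tuple(sorted(e)) for e in ((a, b), (b, c), (c, a))})
    return edges

# Two triangles sharing the edge (1, 2): five unique edges, not six.
vertices = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
faces = [(0, 1, 2), (1, 3, 2)]
print(len(mesh_edges(faces)))  # 5
```

The two triangles here form an element: a group of faces joined by shared vertices, exactly as described above.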


Mesh construction:
The elements of a mesh
The construction of a simple mesh
using a primitive cube shape
Mesh construction is a technique used in 3D modelling: the model is created by modifying primitive shapes to make a rough draft before creating the final model. Simply put, it is the process of making objects with polygon meshes; as shown on the right, the elements are vertices, edges, faces, polygons and surfaces, as with any 3D model. Box modelling uses two simple tools: the subdivide tool, which splits faces and edges into smaller pieces by adding new vertices; and the extrude tool, which duplicates vertices while keeping the new geometry connected to the original vertices. Extrusion modelling, often referred to as inflation modelling, is the other usual method: the user creates a 2D shape which traces the outline of an object, for example from a photograph or a drawing, so the model can then be made symmetrical. It is widely used by 3D artists because of how easy it is to use. Common primitives are probably the most basic polygon models that 3D software can make, which makes it easier for the user to create models by using them as a base. Some of the most common standard primitives we use in Maya are spheres, cubes, cylinders, cones, planes, prisms, pyramids and pipes.
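What the extrude tool does, duplicating vertices and joining the old and new geometry, can be sketched for the simplest case of pulling a flat outline straight up into a prism. This is an illustration of the idea only, not how Maya or 3ds Max implement it (the function name `extrude` is made up):

```python
def extrude(outline, height):
    """Extrude a flat 2D outline straight up into a simple prism.

    Duplicates every outline vertex at z=height (as the extrude tool
    duplicates vertices) and joins the old and new rings with one
    quad side face per outline edge.
    """
    n = len(outline)
    bottom = [(x, y, 0.0) for x, y in outline]
    top = [(x, y, height) for x, y in outline]
    vertices = bottom + top
    # Each side quad connects edge (i, i+1) on the bottom ring
    # to the matching edge on the top ring.
    sides = [(i, (i + 1) % n, n + (i + 1) % n, n + i) for i in range(n)]
    return vertices, sides

# Extruding a unit square gives 8 vertices and 4 quad side faces.
verts, sides = extrude([(0, 0), (1, 0), (1, 1), (0, 1)], 2.0)
print(len(verts), len(sides))  # 8 4
```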

3D development software:
Halo 4 - Energy Sword made in Maya
2012
Autodesk 3ds Max is a 3D computer graphics program used to make 3D animations, models, games and images. It has strong modelling capabilities and a flexible plugin architecture, and it is often used by video game developers, TV commercial studios and architectural studios. Maya is an animation, modelling, simulation and rendering package that provides an integrated toolset; it is used for animation, environments, motion graphics, VR and character creation, and has been used on the first Chronicles of Narnia film, the South Park series and the video game Halo 4. Mudbox is digital painting and sculpting software which gives 3D artists a tactile toolset for creating and modifying 3D geometry and textures. Some file formats currently used within the modelling industry are: .3ds, used by Autodesk's 3ds Max, which aims to retain only the essential geometry, lighting and texture data; .mb (Maya Binary), used by Autodesk's Maya, which contains 3D models, textures, lighting and animation data; .lwo, used by LightWave, whose files contain objects stored as meshes with polygons, points and surfaces; and .c4d, whose files hold 3D models created with Cinema 4D, containing a scene with one or more objects along with position, pivot point, mesh, rotation and animation information. Plug-ins are basically add-ons which can be applied to modelling software.

Constraints:
Lara Croft gaining in polygon count over time as
the systems become able to handle the higher quality.
The polygon count of a model can put a major strain on a project, as the number of polygons you should use depends heavily on the quality you require and the platform you are targeting. On mobile devices somewhere between 300 and 1,500 polygons per mesh gives the best results, and on desktop it should be anywhere between 1,500 and 4,000 depending on the power of the PC. You may have to reduce the polygon count per mesh if the game has a lot of characters on screen at once. File size is also a large constraint, because large models and textures create huge files; this increases uploading, downloading and general loading times as well as the amount of storage the user needs on their device. Lowering the polygon count is one way to reduce the file size. Rendering time is another heavy constraint: with pre-rendering, the 3D image or animation is rendered over a long period of time, which could be a few seconds, minutes or hours, and in bad cases many days. Real-time rendering typically uses a GPU and is most common in video games, where the 3D image is created on the fly.
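The link between polygon count and file size can be made concrete with a back-of-the-envelope estimate. The 32 bytes per vertex here is an assumed layout (position, normal and UV as 32-bit floats); real engines and file formats vary, so treat the figures as illustrative only:

```python
def mesh_size_bytes(triangle_count, bytes_per_vertex=32):
    """Rough memory estimate for an unindexed triangle mesh.

    Assumes 3 vertices per triangle and 32 bytes per vertex
    (position + normal + UV stored as 32-bit floats). Indexed
    meshes with shared vertices would be considerably smaller.
    """
    return triangle_count * 3 * bytes_per_vertex

# A 1,500-triangle mobile character vs a 4,000-triangle desktop one.
print(mesh_size_bytes(1500))  # 144000 bytes (~140 KB)
print(mesh_size_bytes(4000))  # 384000 bytes (~375 KB)
```

Even under these rough assumptions it is clear why lowering the polygon count shrinks the file: the size grows in direct proportion to the number of triangles.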
