All about OpenGL ES 2.x - (part 2/3) 2014.11.25. Translated by 이석우

Very welcome back, my friends! 돌아온 것을 매우 환영한다, 나의 친구들! Now we are aware of the basic concepts of the 3D world and OpenGL, so it's time to start the fun! Let's go deep into code and see some results on our screens. Here I'll show you how to construct an OpenGL application using the best practices. 이제 우리는 OpenGL과 3D 세상에 대한 기본 개념을 안다. 이제 재밌는 걸 할 시간이다. 코드로 깊게 들어가서 우리의 화면에 뭔가 결과를 보자. 여기서는 어떻게 OpenGL 어플리케이션을 만드는지 보여준다. If you missed the first one, you can check the parts of this series below. This series is composed of 3 parts: 만약 처음 것을 까먹었으면 이 시리즈를 다시 봐라. 이 시리즈는 3부작이다.

Part 1 - Basic concepts of 3D world and OpenGL (Beginners)
Part 2 - OpenGL ES 2.0 in-depth (Intermediate)
Part 3 - Jedi skills in OpenGL ES 2.0 and 2D graphics (Advanced)

Before starting, I want to say something: Thank You! The first tutorial of this series became much, much bigger than I could imagine. When I saw the news on the home page of the http://www.opengl.org website, I was speechless, stunned; that was really amazing!!! So I want to say it again: Thank you so much! Here is a little list of contents to orient your reading:

List of Contents to this Tutorial

Download the OpenGL ES 2.0 iPhone project
OpenGL data types and programmable pipeline
Primitives
o Meshes and Lines Optimization
Buffers
o Frame buffer
o Render buffer
o Buffer Object
Textures
Rasterize
o Face Culling
o Per-Fragment Operations
Shaders
o Shader and Program Creation
o Shader Language
o Vertex and Fragment Structures
o Setting the Attributes and Uniforms
o Using the Buffer Objects
Rendering
o Pre-Render
o Drawing
o Render
Conclusion
At a glance

As asked in the comments below, here is a PDF file for those of you who prefer to read this tutorial as a file instead of here on the blog. Download now PDF file with navigation links 1.3Mb

Remembering the first part of this serie, we've seen: 이시리즈의첫번째파트를기억해라. 우리가본것은 1. OpenGL s logic is composed by just 3 simple concepts: Primitives, Buffers and Rasterize. 2. OpenGL works with fixed or programmable pipeline. 3. Programmable pipeline is synonymous of Shaders: Vertex Shader and Fragment Shader. 1. OpenGL 은 3 가지컨셉으로구성되어있다. 기본도형, 버퍼, Rasterize 2. OpenGL 은 fixed 파이프라인또는 programmable 파이프라인으로작동함 3. Programmable 파이프라인은 shader 와같다. Vertex Shader, Fragment Shader Here I'll show code more based in C and Objective-C. In some parts I'll talk specifically about iphone and ios. But in general the code will be generic to any language or platform. As OpenGL ES is the most concise API of OpenGL, I'll focus on it. If you're using OpenGL or WebGL you could use all the codes and concepts here. 여기의코드는 C 와 Objective-C 를기반으로한다. 어떤부분은아이폰과관련있다. 그러나일반적인코드는어떤언어나플랫폼과도상관없다. OpenGL ES 는 OpenGL 의축소된 API 이다. 여기에초점을맞춘다. The code in this tutorial is just to illustrate the functions and concepts, not real code. In the link bellow you can get a Xcode project which uses all the concepts and code of this tutorial. I made the principal class (CubeExample.mm) using Objective-C++ just to make clearly to everybody how the OpenGL ES 2.0 works, even those which don't use Objective-C. This training project was made for ios, more specifically targeted for iphone. 여기의코드들은기능과컨셉을설명한다. 진짜코드가아니다. 아래의링크에서 xcode 프로젝트를얻을수있다. Download now Xcode project files to ios 4.2 or later 172kb

Here I'll use OpenGL functions following the syntax: gl + FunctionName. Almost all implementation of OpenGL use the prefix "gl" or "gl.". But if your programming language don't use this, just ignore this prefix in the following lines. Another important thing to say before we start is about OpenGL data types. As OpenGL is multiplatform and depends of the vendors implementation, many data type could change from one programming language to another. For example, a float in C++ could represent 32 bits exactly, but in JavaScript a float could be only 16 bits. To avoid these kind of conflict, OpenGL always works with it's own data types. The OpenGL's data type has the prefix "GL", like GLfloat or GLint. Here is a full list of OpenGL's data type: 시작하기전에다른중요한것은 OpenGL 데이터타입이다. OpenGL 은멀티플랫폼이고, 특정벤더의구현에독립적이기때문에많은데이터타입이하나의언어에서다른언어로바뀔수있다. 예를들어 C++ 에서 float 는정확히 32 비트이지만, 자바스크립트에서는 16 비트다. 이런혼란을피하기위해, OpenGL 은자신만의데이터타입으로동작한다. GLfloat, Glint 처럼 GL 로시작한다. 아래는모든데이터타입을보여준다. OpenGL's Data Type Same as C Description GLboolean (1 bits) unsigned char 0 to 1 GLbyte (8 bits) char -128 to 127 GLubyte (8 bits) unsigned char 0 to 255 GLchar (8 bits) char -128 to 127 GLshort (16 bits) short -32,768 to 32,767 GLushort (16 bits) unsigned short 0 to 65,353 GLint (32 bits) int -2,147,483,648 to 2,147,483,647 GLuint (32 bits) unsigned int 0 to 4,294,967,295 GLfixed (32 bits) int -2,147,483,648 to 2,147,483,647 GLsizei (32 bits) int -2,147,483,648 to 2,147,483,647 GLenum (32 bits) unsigned int 0 to 4,294,967,295 GLdouble (64 bits) double 9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 GLbitfield (32 bits) unsigned int 0 to 4,294,967,295 GLfloat (32 bits) float -2,147,483,648 to 2,147,483,647 GLclampx (32 bits) int Integer clamped to the range 0 to 1 GLclampf (32 bits) float Floating-point clamped to the range 0 to 1 GLclampd (64 bits) double Double clamped to the range 0 to 1 GLintptr int pointer * GLsizeiptr int pointer * GLvoid void Can represent any data type

A very important information about Data Types is that OpenGL ES does NOT support 64 bits data types, because embedded systems usually need performance and several devices don't support 64 bits processors. By using the OpenGL data types, you can easily and safely move your OpenGL application from C++ to JavaScript with less changes, for example. OpenGL ES 의데이터타입에서중요한점은 64 비트를지원하지않는다는것이다. 왜냐면, 임베디트시스템은성능이중요하고, 일부기기는 64 비트프로세서를지원하지않는다. OpenGL 데이터타입을사용함으로써, 쉽고안전하고 c++ 에서자바스크립트로변환할수있다. One last thing to introduce is the graphics pipeline. We'll use and talk about the Programmable Pipeline a lot, here is a visual illustration: 마지막으로소개할것은그래픽파이프라인이다. 우리는 programmable 파이프라인에대해이야기할것이다. 여기그림이있다. We'll talk deeply about each step in that diagram. The only thing I want to say now is about the "Frame Buffer" in the image above. The Frame Buffer is marked as optional because you have the choice of don't use it directly, but internally the OpenGL's core always will work with a Frame Buffer and a Color Render Buffer at least. 이그림의각단계를깊이있게이야기할것이다. 지금말하고싶은것하나는

Frame Buffer 다. Frame Buffer 는옵션으로표시되어있다. 이것은직접사용하지않을수도있기때문이다. 하지만 OpenGL 내부에서는결국 Frame Buffer 와 Color Buffer 를사용한다. Did you notice the EGL API in the image above? This is a very very important step to our OpenGL's application. Before start this tutorial we need to know at least the basic concept and setup about EGL API. But EGL is a dense subject and I can't place it here. So I've created an article to explain that. You can check it here: EGL and EAGL APIs. I really recommend you read that before to continue with this tutorial. 위의이미지에서 EGL API 를보았나? 이것은 OpenGL 어플리케이션에서매우중요하다. 우리는먼저 EGL API 의기본컨셉과설정법을알아야한다. 그러나 EGL 은큰주제라서여기에다쓸수없다. 그래서이것에대한새로운글을썼다. 이글을읽기전에저걸먼저읽기를권장한다. If you've read or already know about EGL, let's move following the order of the first part and start talking about the Primitives. 이미읽었거나, EGL 에대해알고있다면, 다음으로넘어가 Primitives ( 기본도형 ) 에대해이야기하자 Primitives top Do you remember from the first part, when I said that Primitives are Points, Lines and Triangles? All of them use one or more points in space to be constructed, also called vertex. A vertex has 3 informations, X position, Y position and Z position. A 3D point is constructed by one vertex, a 3D line is composed by two vertices and a triangle is formed by three vertices. As OpenGL always wants to boost the performance, all the informations should be a single dimensional array, more specifically an array of float values. Like this: 첫번째문서에서 Primitives 는 Points( 점 ), Lines( 선 ), Triangles( 삼각형 ) 이라고한것을기억하는가? 이것들은공간에서만들어질때 vertex( 꼭짓점 ) 이라고불리는것들로이루어져있다. Vertex 는 3 개의정보를가지고있다. x 위치, y 위치, z 위치. 3 차원 Point 는하나의꼭짓점으로, 3 차원 Line 은 2 개의꼭짓점으로, Triangle 은 3 개의꼭짓점으로만들어진다. OpenGL 은항상성능향상을원하기때문에, 모든정보는 float 형의 1 차원배열이어야한다. 다음과같이 GLfloat point3d = {1.0,0.0,0.5}; GLfloat line3d = {0.5,0.5,0.5,1.0,1.0,1.0}; GLfloat triangle3d = {0.0,0.0,0.0,0.5,1.0,0.0,1.0,0.0,0.0};

As you can see, the array of floats to OpenGL is in sequence without distinction between the vertices, OpenGL will automatically understand the first value as the X value, the second as Y value and the third as the Z value. OpenGL will loop this interpretation at every sequence of 3 values. All that you need is inform to OpenGL if you want to construct a point, a line or a triangle. An advanced information is which you can customize this order if you want and the OpenGL could work with a fourth value, but this is a subject to advanced topics. For now assume that the order always will be X,Y,Z. The coordinates above will construct something like this: 보다시피 OpenGL 에넘겨질 float 의배열은꼭짓점사이의구분없이쭉나열되어있다. OpenGL 은자동으로첫번째값은 x, 두번째는 y, 세번째는 z 라고인식한다. OpenGL 은이렇게 3 개의값들을끝까지읽는다. 너가해야할일은 OpenGL 에게이데이터로점을그릴지, 선을그릴지, 삼각형으로그릴지알려줘야한다. An advanced information is which you can customize this order if you want and the OpenGL could work with a fourth value, 이건지금은어려운주제다. 지금은항상 x,y,z 순서라고만보자.
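A quick aside before the picture: the primitive type is the thing you inform at draw time. This is only a sketch of the idea; the real drawing calls, and everything they require (buffers, shaders, attributes), are covered later in this tutorial.

// A minimal sketch: assuming the arrays above were already made available
// to the pipeline, the draw call tells OpenGL which primitive to build.

// Interpret 1 vertex, starting at index 0, as a point.
glDrawArrays(GL_POINTS, 0, 1);

// Interpret 2 vertices, starting at index 0, as a line.
glDrawArrays(GL_LINES, 0, 2);

// Interpret 3 vertices, starting at index 0, as a triangle.
glDrawArrays(GL_TRIANGLES, 0, 3);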

In this image, the dashed orange lines is just an indication to you see more clearly where the vertices are related to the floor. Until here seems very simple! But now a question comes up: "OK, so how could I transform my 3D models from 3DS Max or Maya into an OpenGL's array?" When I was learning OpenGL, I thought that could exist some 3D file formats we could import directly into OpenGL. "After all, the OpenGL is the most popular Graphics Library and is used by almost all 3D softwares! I'm sure it has some methods to import 3D files directly!" Well, I was wrong! Bad news. I've learned and need to say you: remember that OpenGL is focused on the most important and hard part of the 3D world construction. So it should not be responsible by fickle things, like 3D file formats. Exist so many 3D file formats,.obj,.3ds,.max,.ma,.fbx,.dae,.lxo... it's too much to the OpenGL and the Khronos worrying about. But the Collada format is from Khronos, right? So would I expect that, one day, OpenGL will be able to import Collada files directly? No! Don't do this. Accept this immutable truth, OpenGL does not deal

with 3D files! OK, so what we need to do to import 3D models from 3D softwares into our OpenGL's application? Well my friend, unfortunately I need to say you: you will need a 3D engine or a third party API. There's no easy way to do that. If you choose a 3D engine, like PowerVR, SIO2, Oolong, UDK, Ogre and many others, you'll be stuck inside their APIs and their implementation of OpenGL. If you choose a third party API just to load a 3D file, you will need to integrate the third party class to your own implementation of OpenGL. Another choice is to search a plugin to your 3D software to export your objects as a.h file. The.h is just a header file containing your 3D objects in the OpenGL array format. Unfortunately, until today I just saw 2 plugins to do this: One to Blender made with Phyton and another made with Pearl and both was horribles. I never seen plugins to Maya, 3DS Max, Cinema 4D, LightWave, XSI, ZBrush or Modo. I wanna give you another opportunity, buddy. Something called NinevehGL! I'll not talk about it here, but it's my new 3D engine to OpenGL ES 2.x made with pure Objective-C. I offer you the entire engine or just the parse API to some file formats as.obj and.dae. Whatever you prefer. You can check the NinevehGL's website here: http://nineveh.glwhat is the advantage of NinevehGL? Is to KEEP IT SIMPLE! The others 3D engines is too big and unnecessary expensive. NinevehGL is free! OK, let's move deeply into primitives. 이그림에서노란색점선은바닥과꼭짓점들과의관계를잘보이게하기위한거다. 여기까지는간단하다. 그러나여기서질문이나온다. 좋아그래서 3DS Max 나 Maya 에서 3D 모델을어떻게 OpenGL 의배열로바꾸지? 내가처음 OpenGL 을공부할때, 3D 파일포맷을 OpenGL 로바로가져올수있을거라고생각했다. 어쨌든, OpenGL 은가장유명한그래픽라이브러리고, 거의모든 3D 소프트웨어가사용하니까, 3D 파일을바로가져올수있는함수가있을거야 라고확신했다. 하지만틀렸다. OpenGL 은 3D 세상을만들때가장중요하고가장어려운부분에초점이맞추어져있다. 3D 파일포맷과같이변덕스러운것들은 OpenGL 이책임지지않는다. 3D 파일포맷은많다 obj,.3ds,.max,.ma,.fbx,.dae,.lxo... 등등 OpenGL 과 Khronos 가걱정하기에너무많다. 하지만 Collada 포맷은 Khronos 에서만들었다. 맞지? 그래서언젠가는 OpenGL 이 Collada 포맷을바로가져올수있을수도있다고기대한다. 아냐, 하지마. 이불변의진리를받아들여, OpenGL 은 3D 파일과타협하지않는다. 좋아. 그래서우리가필요한건 3D 소프트웨어에서 3D 모델을 OpenGL 어플리케이션으로가져오는것인가? 불행히도 3D 엔진을쓰거나, 써드파티 API 를써라. 그것을할수있는쉬운방법은없다. 만약 3D 엔진을선택했다면 (PowerVR, SIO2, Oolong, UDK, Ogre ) 그것들의 API 에갇히게됨. 3D 파일을로드하려고써드파티 API 를선택했다면, 그들의클래스를너의어플리케이션에통합해야한다. 다른선택은너가쓰는 3D 소프트웨어에서.h 파일로객체를내보낼수있는플러그인을찾는것이다..h 파일은 3D 객체를 OpenGL 의배열로가지고만있으면된다. 불행히도오늘까지나는 2 개의플러그인을봤지만둘다안좋다. ( 블랜더에서파이선과펄 ) 마야, 3DS Max, Cinema 4D, LightWave 에서는플러그인을볼수없었다.

여기서는다루지않지만 NinevehGL 이라는것도있다. 이것은순수 Objective-C 로만들어진 OpenGL ES 2 를위한나의새로운 3D 엔진이다. 너가원하면이엔진의전체또는 obj, dae 파일포맷을파싱하느 API 를살펴봐라. NinevehGL 를쓰면좋은점은간단하다. 다른엔진들을무겁고비싸다. NinevehGL 은꽁짜다. OK, Primitives 로깊게들어가자 Meshes and Lines Optimization top A 3D point has only one way to be draw by OpenGL, but a line and a triangle have three different ways: normal, strip and loop for the lines and normal, strip and fan for the triangles. Depending on the drawing mode, you can boost your render performance and save memory in your application. But at the right time we'll discuss this, later on this tutorial. For now, all that we need to know is that the most complex 3D mesh you could imagine will be made with a bunch of triangles. We call these triangles of "faces". So let's create a 3D cube using an array of vertices. 점은그리는방법이한가지다. 하지만선과삼각형은 3 가지방법이있다. 선은 normal, strip, loop 방법이있고, 삼각형은 normal, strip, fan 방법이있다. 그리는방법에따라, 속도를높이고메모리를절약할수있다. 이문서말고나중에이것에대해서논의할거다. 지금우리가알아야할것은여러개의삼각형으로이루어져있는복잡한 3 차원 mash( 다각형 ) 이다. 우리는이삼각형을 faces( 면 ) 라고부른다. 자꼭짓점들의배열을이용해서 3 차원큐브를만들어보자 // Array of vertices to a cube. GLfloat cube3d[] = { 0.50,-0.50,-0.50, // vertex 1 0.50,-0.50,0.50, // vertex 2-0.50,-0.50,0.50, // vertex 3-0.50,-0.50,-0.50, // vertex 4 0.50,0.50,-0.50, // vertex 5-0.50,0.50,-0.50, // vertex 6 0.50,0.50,0.50, // vertex 7-0.50,0.50,0.50 // vertex 8 } The precision of the float numbers really doesn't matter to OpenGL, but it can save a lot of memory and size into your files (precision of 2 is 0.00 precision of 5 is 0.00000). So I always prefer to use low precision, 2 is very good! I don't want to make you confused too soon, but has something you have to know. Normally meshes have three great informations: verticex, texture coordinates and normals. A good practice is to create one single array containing all these informations. This is called Array of Structures. A short example of it could be:

float 의정밀도는 opengl 에서문제되지않지만, 메모리를절약할수있고, 파일사이즈를줄일수있다 ( 정밀도 2 는 0.00 정밀도 5 는 0.00000) 그래서나는항상낮은정밀도를좋아한다. 2 면충분하다. 너를혼란스럽게하기는싫지만알아야할게있다. 일반적으로 mesh( 다각형 ) 는 3 가지중요한정보를가지고있다 ( 꼭짓점, 텍스쳐좌표, 법선 ). 하나의일차원배열에이모든정보를가지고있는게좋다. 이를구조배열이라고부른다. 간단한예를들어 // Array of vertices to a cube. GLfloat cube3d[] = { 0.50,-0.50,-0.50, // vertex 1 0.00,0.33, // texture coordinate 1 1.00,0.00,0.00 // normal 1 0.50,-0.50,0.50, // vertex 2 0.33,0.66, // texture coordinate 2 0.00,1.00,0.00 // normal 2... } You can use this construction technique for any kind of information you want to use as a per-vertex data. A question arises: Well, but at this way all my data must be of only one data type, GLfloat for example? Yes. But I'll show you later in this tutorial that this is not a problem, because to where your data goes, just accept floating-point values, so everything will be GLfloats. But don't worry with this now, in the right time you will understand. OK, now we have a 3D mesh, so let's start to configure our 3D application and store this mesh into an OpenGL's buffer. 각꼭짓점의데이터마다이런식으로정보를담아서생성할수있다. 질문이나올수있다. 이런식으로데이터를담으면모두동일한데이터타입이될텐데. Glfloat 로. 그렇다. 나중에알려주겠지만이것은문제되지않는다. 왜냐면이데이터가넘어가는곳은 float 만받는다. 모든것이 Glfloat 가될것이다자이제우리는 3 차원 mesh( 다각형 ) 를가지고있다. 3 차원어플리케이션을설정하고이 mesh 를 OpenGL 의버퍼에담아보자 Buffers top Do you remember from the first part when I said that OpenGL is a state machine working like a Port Crane? Now let's refine a little that illustration. OpenGL is like a Port Crane with severeal arms and hooks. So it can hold many containers at the same time.
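Before we get to the buffers, here is a hedged preview of where this interleaved data eventually goes (the Shaders section explains attributes properly). Only the glVertexAttribPointer and glEnableVertexAttribArray calls are real OpenGL; the wrapper function and attribute parameters are just illustrative placeholders.

// A sketch only: feeds the interleaved cube3d array to three vertex attributes.
// The attribute locations are assumed to have been queried beforehand with
// glGetAttribLocation, which is covered later in the Shaders section.
void setInterleavedPointers(GLuint attrPosition, GLuint attrTexCoord, GLuint attrNormal, const GLfloat *cube3d)
{
    // Each vertex holds 8 floats: 3 position + 2 texture coordinate + 3 normal.
    GLsizei stride = 8 * sizeof(GLfloat);

    // Position: 3 floats, starting at the first float of every vertex.
    glVertexAttribPointer(attrPosition, 3, GL_FLOAT, GL_FALSE, stride, cube3d);

    // Texture coordinate: 2 floats, right after the position.
    glVertexAttribPointer(attrTexCoord, 2, GL_FLOAT, GL_FALSE, stride, cube3d + 3);

    // Normal: 3 floats, after position + texture coordinate.
    glVertexAttribPointer(attrNormal, 3, GL_FLOAT, GL_FALSE, stride, cube3d + 5);

    // Each attribute also needs to be enabled before drawing.
    glEnableVertexAttribArray(attrPosition);
    glEnableVertexAttribArray(attrTexCoord);
    glEnableVertexAttribArray(attrNormal);
}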

Basically, there are four "arms": texture arm (which is a double arm), buffer object arm (which is a double arm), render buffer arm and frame buffer arm. Each arm can hold only one container at a time. This is very important, so I'll repeat this: Each arm can hold only ONE CONTAINER AT A TIME! The texture and buffer object arms are double arms because can hold two different kinds of texture and buffer objects, respectively, but also only ONE KIND OF CONTAINER AT A TIME! We need is instruct the OpenGL's crane to take a container from the port, we can do this by informing the name/id of the container. Backing to the code, the command to instruct OpenGL to "take a container" is: glbind*. So every time you see a glbindsomething you know, that is an instruction to OpenGL "take a container". Exist only one exception to this rule, but we'll discuss that later on. Great, before start binding something into OpenGL we need to create that thing. We use the glgen* function to generate a "container" name/id. 첫번째파트에서 OpenGL 은항구의크레인처럼동작하는상태머신이라고한것을기억하는가? 이제조금다르게재정의하자. OpenGL 은여러개의암 ( 어깨 ) 과훅 ( 고리 ) 을가지고있는항구의크레인처럼행동한다. ( 그림을봐라 ) 그래서, 동시에많은것을잡을수있다. 기본적으로 4 개의암이있다. Texture 암 (2 개 ), buffer object 암 (2 개 ), render buffer 암 (1 개 ), frame buffer 암 (1 개 ). 각암은한번에하나의물체만잡을수있다. 이것은중요하다. 다시반복한다. 각암은한번에하나의물체만잡을수있다. texture 와 buffer object 암은다른종류의 texture 와 buffer object 를잡을수있기때문에 2 개씩있다. 하지만한번에같은종류만잡을수있다.

우리는 OpenGL 에게무언가를잡으라고지시하면된다. 우리는잡을것의이름과 id 를알려주면된다. 코드로돌아가서, OpenGL 에게잡으라고알려주는명령어는 glbind* 이다. 그래서 glbind* 라는명령어를보면, 무엇을잡는것으로알면된다. 한가지예외가있는데나중에논의하자. OpenGL 이무언가를잡기전에그무언가를만들어야한다. glgen* 명령어를사용해서이름과 id 를주고무언가를만들수있다. Frame Buffers top A frame buffer is a temporary storage to our render output. Once our render is in a frame buffer we can choose present it into device's screen or save it as an image file or either use the output as a snapshot. This is the pair of functions related to frame buffers: frame buffer 는 render output 을위한임시저장소다. ( 그릴것을임시로저장한다는뜻같다 ) render( 그릴것 ) 이일단 frame buffer 에들어가면기기의화면에표시할수도있고, 이미지파일로저장할수도있다. 아래는 frame buffer 와관련된기능 2 가지이다. FrameBuffer Creation GLvoid glgenframebuffers (GLsizei n, GLuint* framebuffers) n: The number representing how many frame buffers's names/ids will be generated at once. framebuffers: A pointer to a variable to store the generated names/ids. If more than one name/id was generated, this pointer will point to the start of an array. n: 한번에몇개의 frame buffer 를만들것이지. framebuffers: 생성된 frame buffer 의번호를저장할변수. 하나이상을만들때는배열의시작점을넣음 GLvoid glbindframebuffer (GLenum target, GLuint framebuffer) target: The target always will be GL_FRAMEBUFFER, this is just an internal convention for OpenGL. framebuffers: The name/id of the frame buffer to be bound. target: 항상 GL_FRAMEBUFFER 이다. framebuffer: 연결할 frame buffer 의번호 In deeply the creation process of an OpenGL Object will be done automatically by the core when we bind that object at the first time. But this process doesn't generates a name/id to us. So is advisable always useglgen* to create buffer names/ids instead create your very own names/ids. Seems confused? OK, let's go to our first lines of code and you'll understand more clearly: Opengl 은우리가 bind 를할때실제로버퍼를생성하지만, 우리에게버퍼번호를주지는않는다. 그래서우리마음대로버퍼번호를사용하는대신 glgen 으로생성된버퍼번호를쓰는게좋다는뜻같다.

GLuint framebuffer;

// Creates a name/id to our framebuffer.
glGenFramebuffers(1, &framebuffer);

// The real Frame Buffer Object will be created here,
// at the first time we bind an unused name/id.
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// We can suppress the glGenFramebuffers.
// But in this case we'll need to manage the names/ids by ourselves.
// In this case, instead of the above code, we could write something like:
// glGenFramebuffers를 생략할 수 있다. 그러면 버퍼 번호를 우리가 스스로 관리해야 한다. 만약 그러려면 아래와 같이 작성할 수도 있다.
// GLint framebuffer = 1;
// glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

The above code creates an instance of the GLuint data type called framebuffer. Then we pass the memory location of the framebuffer variable to glGenFramebuffers and instruct this function to generate only 1 name/id (yes, we can generate multiple names/ids at once). Finally, we bind that generated framebuffer to OpenGL's core. 위의 코드는 framebuffer라고 불리는 GLuint 타입의 인스턴스를 만든다. 우리가 glGenFramebuffers에게 framebuffer의 메모리 위치를 알려주고, 1개의 프레임 버퍼 번호를 만들라고 지시한다 (우리는 한 번에 여러 개의 프레임 버퍼 번호를 만들 수도 있다). 마지막으로 framebuffer를 bind한다.

Render Buffers top

A render buffer is a temporary storage for images coming from an OpenGL render. This is the pair of functions related to render buffers: render buffer는 OpenGL의 render로부터 넘어오는 이미지의 임시 저장소다. 아래는 render buffer와 관련된 기능 2가지이다.

RenderBuffer Creation GLvoid glgenrenderbuffers (GLsizei n, GLuint* renderbuffers) n: The number representing how many render buffers's names/ids will be generated at once. renderbuffers: A pointer to a variable to store the generated names/ids. If more than one name/id was generated, this pointer will point to the start of an array. n: 한번에몇개의 render buffer 를만들것이지. renderbuffers: 생성된 frame buffer 의번호를저장할변수. 하나이상을만들때는배열의시작점을넣음 GLvoid glbindrenderbuffer (GLenum target, GLuint renderbuffer) target: The target always will be GL_RENDERBUFFER, this is just an internal convention for OpenGL. renderbuffer: The render buffer name/id to be bound. target: 항상 GL_RENDERBUFFER 이다. framebuffer: 연결할 frame buffer 의번호 OK, now, before we proceed, do you remember from the first part when I said that render buffer is a temporary storage and could be of 3 types? So we need to specify the kind of render buffer and some properties of that temporary image. We set the properties to a render buffer by using this function: 더나가기전에첫번째파트에서내가한말이기억나는가? render buffer 는임시저장소고 3 가지타입이될수있다는것. 그래서어떤종류의 renderbuffer 인지기술해야하고, 임시이미지의속성을몇개기술해야한다. 아래의함수를사용해서 render buffer 의속성을설정한다. RenderBuffer Properties GLvoid glrenderbufferstorage (GLenum target, GLenum internalformat, GLsizei width, GLsizei height) target: The target always will be GL_RENDERBUFFER, this is just an internal convention for OpenGL. internalformat: This specifies what kind of render buffer we want and what color format this temporary image will use. This parameter can be: o o o GL_RGBA4, GL_RGB5_A1 or GL_RGB56 to a render buffer with final colors; GL_DEPTH_COMPONENT16 to a render buffer with Z depth; GL_STENCIL_INDEX or GL_STENCIL_INDEX8 to a render buffer with stencil

informations.

width: The final width of a render buffer.
height: The final height of a render buffer.

target: 항상 GL_RENDERBUFFER이다.
internalformat: 어떤 종류의 render buffer인지, 그리고 임시 이미지에 어떤 색 포맷을 사용할 것인지.
width: render buffer의 최종 넓이.
height: render buffer의 최종 높이.

You could ask, "but I'll set these properties for which render buffer? How will OpenGL know which render buffer these properties are for?" Well, it's here that the great OpenGL state machine comes up! The properties will be set on the last render buffer bound! Very simple. Look at how we can set the 3 kinds of render buffer: 이건 어떤 render buffer를 위한 설정인가? 라는 질문이 있을 수 있다. OpenGL은 어떤 render buffer에 이 속성들을 설정할지 어떻게 알 수 있나? 이것은 OpenGL의 상태 머신 특징이다. 이 속성들은 마지막에 bind된 render buffer에 연결된다. 매우 간단하다. 어떻게 3개의 render buffer를 설정하는지 보자.

GLuint colorRenderbuffer;
GLuint depthRenderbuffer;
GLuint stencilRenderbuffer;
GLint sw = 320, sh = 480; // Screen width and height, respectively.

// Generates the name/id, creates and configures the Color Render Buffer.
glGenRenderbuffers(1, &colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, sw, sh);

// Generates the name/id, creates and configures the Depth Render Buffer.
glGenRenderbuffers(1, &depthRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, depthRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, sw, sh);

// Generates the name/id, creates and configures the Stencil Render Buffer.
glGenRenderbuffers(1, &stencilRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, stencilRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, sw, sh);

OK, but in our cube application we don't need a stencil buffer, so let's optimize the above code: 우리 큐브 어플리케이션은 stencil buffer가 필요없다. 아래처럼 최적화해 보자.

GLuint renderbuffers[2];
GLint sw = 320, sh = 480; // Screen width and height, respectively.

// Let's create multiple names/ids at once.
// To do this we declare our variable as an array of two names, renderbuffers.
glGenRenderbuffers(2, renderbuffers);

// The index 0 will be our color render buffer.
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffers[0]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4, sw, sh);

// The index 1 will be our depth render buffer.
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffers[1]);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, sw, sh);

At this point I need to make a digression. This step is a little bit different if you are in the Cocoa Framework. Apple doesn't allow us to put the OpenGL render directly onto the device's screen; we need to place the output into a color render buffer and ask the EAGL (the EGL implementation by Apple) to present that buffer on the device's screen. As the color render buffer in this case is always mandatory, to set its properties we need to call a different method, from the EAGLContext, called renderbufferStorage:fromDrawable:, and inform it of the CAEAGLLayer which we want to render onto. Seems confusing? So it's time for you to make a digression in your reading and go to this article: Apple's EAGL. In that article I explain what the EAGL is and how to use it. Once you know about EAGL, you use the following code to set the color render buffer's properties, instead of glRenderbufferStorage: 여담을 해야겠다. 이 단계는 Cocoa framework에 있는 것과 약간 다르다. 애플은 우리가 기기의 스크린에 직접 OpenGL render를 올리도록 허용하지 않음. 우리는 출력을 color render buffer에 넣고 EAGL(애플이 구현한 EGL)에 기기의 화면에 그 버퍼를 보이게 하라고 요청해야 한다. 이 경우 color render buffer는 필수 사항인데, EAGLContext에 있는 renderbufferStorage:fromDrawable: 함수를 호출할 때 CAEAGLLayer는 우리가 그리기 원하는 것이다.

좀헷갈리나? 이제너가약간옆길로새서, Apple s EAGL 이라는글을읽을때다. 이글에서 EAGL 이무엇인지, 어떻게사용하는지설명하고있다. glrenderbufferstorage 대신에다음과같은코드로 color render buffer 의속성을설정한다 ( 이석우추가 : 원래 opengl 에서는 glrenderbufferstorage 함수를쓰는게정석인데, 애플에서는이걸약간바꿔서 EAGL 의 renderbufferstorage 메소드를사용해야함. color render buffer 만이래야하고, depth render buffer, stencil render buffer 는원래대로해야함 ) RenderBuffer Properties in case of Cocoa Framework - (BOOL) renderbufferstorage:(nsuinteger)target fromdrawable:(id)drawable target: The target always will be GL_RENDERBUFFER, this is just an internal convention for OpenGL. fromdrawable: Your custom instance of CAEAGLLayer. // Suppose you previously set EAGLContext *_context // as I showed in my EAGL article. GLuint colorbuffer; glgenrenderbuffers(1, & colorbuffer); glbindrenderbuffer(gl_renderbuffer, colorbuffer); [_context renderbufferstorage:gl_renderbuffer fromdrawable:mycaeagllayer]; When you call renderbufferstorage:fromdrawable: informing a CAEAGLLayer, the EAGLContext will take all relevant properties from the layer and will properly set the bound color render buffer. Now is time to place our render buffers inside our previously created frame buffer. Each frame buffer can contain ONLY ONE render buffer of each type. So we can't have a frame buffer with 2 color render buffers, for example. To attach a render buffer into a frame buffer, we use this function: renderbufferstorage:fromdrawable: 을호출할때, EAGLContext 는레이어에서관련속성들을가져온다. 그리고 color render buffer 에적절한속성을할당한다. 전에만들었던 frame buffer 에 render buffer 를놓아보자하나의 frame buffer 는타입마다한개씩의 render buffer 만가질수있다. ( 이석우추가 : frame buffer 는 color render buffer 1 개, depth render buffer 1 개, stencil render buffer 1 개만가질수있다는뜻 ) 그래서 frame buffer 는 2 개의 color render buffer 를가질수없다. frame buffer 에 render buffer 를붙이려면아래의함수를사용한다. Attach RenderBuffers to a FrameBuffer

GLvoid glframebufferrenderbuffer (GLenum target, GLenum attachment, GLenum renderbuffertarget, GLuint renderbuffer) target: The target always will be GL_FRAMEBUFFER, this is just an internal convention for OpenGL. attachment: This specifies which kind of render buffer we want to attach inside a frame buffer, this parameter can be: o o o GL_COLOR_ATTACHMENT0: To attach a color render buffer; GL_DEPTH_ATTACHMENT: To attach a depth render buffer; GL_STENCIL_ATTACHMENT: To attach a stencil render buffer. renderbuffertarget: The renderbuffertarget always will be GL_RENDERBUFFER, this is just an internal convention for OpenGL. renderbuffer: The name/id of the render buffer we want to attach. target: 항상 GL_FRAMEBUFFER 이다. attachment:frame buffer 에붙일 render buffer 의종류를기술한다. renderbuffertarget: 항상 GL_RENDERBUFFER 이다. renderbuffer:render buffer 의번호
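Putting it together with the objects we created earlier, a short sketch of the attachment calls could look like this (as the next paragraph explains, the attachments go to the last frame buffer bound). The completeness check at the end is not discussed in the text above; it is just a common sanity check.

// Make sure the frame buffer created earlier is the one currently bound.
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);

// Attach the color and depth render buffers created earlier.
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffers[0]);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, renderbuffers[1]);

// Optional: asks OpenGL if the bound frame buffer is complete and usable.
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // Something is wrong with the attachments; handle the error here.
}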

The same question comes up: "How OpenGL will know for which frame buffer attach these render buffers?" Using the state machine! The last frame buffer bound will receive these attachments. OK, before move on, let's talk about the combination of Frame Buffer and Render Buffer. This is how they looks like: 같은질문이나온다. opengl 은어떤 frame buffer 에이 render buffer 들을붙일지어떻게알지? 상태머신을이용한다. Bind 된마지막 frame buffer 에 render buffer 들이붙는다. 계속나가기전에 frame buffer 와 render buffer 의조합에대해이야기하자. 아래그림을봐라 Internally OpenGL's always works with a frame buffer. This is called windowsystem-provided frame buffer and the frame buffer name/id 0 is reserved to it. The frame buffers which we control are know as application-created frame buffers. The depth and stencil render buffers are optionals. But the color buffer is always enabled and as OpenGL's core always uses a color render buffer too, the render buffer name/id 0 is reserved to it. To optimize all the optional states, OpenGL gives to us an way to turn on and turn off some states (understanding as state every optional OpenGL's feature). To do this, we use these function: opengl 은내부적으로항상 frame buffer 로작동한다. Frame buffer 0 번은윈도우시스템에서제공하는 frame buffer 로정해져있다. 우리가건드릴수있는것은어플리케이션에서만든 frame buffer 들이다. depth, stencil render buffer 는안써도되지만, color render buffer 는 opengl 내부에서항상사용된다. render buffer 0 번은이걸로정해져있다. 최적화를위해 opengl 은우리가상태 ( 특성 ) 를활성화, 비활성화할수있는방법을제공한다.

Turning ON/OFF the OpenGL' States GLvoid glenable(glenum capability) capability: The feature to be turned on. The values can be: o o o o o o o o o o GL_TEXTURE_2D GL_CULL_FACE GL_BLEND GL_DITHER GL_STENCIL_TEST GL_DEPTH_TEST GL_SCISSOR_TEST GL_POLYGON_OFFSET_FILL GL_SAMPLE_ALPHA_TO_COVERAGE GL_SAMPLE_COVERAGE Capability: 활성화할특징. GLvoid gldisable(glenum capability) capability: The feature to be turned off. The values can be the same as glenable. Once we turned on/off a feature, this instruction will affect the entire OpenGL machine. Some people prefer to turn on a feature just for a while to use it and then turn off, but this is not advisable. It's expensive. The best way is turn on once and turn off once. Or if you really need, minimize the turn on/off in your application. So, back to the depth and stencil buffer, if you need in your application to use one of them or both, try enable what you need once. As in our cube's example we just need a depth buffer, we could write: 일단상태를활성화 / 비활성화하면 opengl 머신전체에영향이미친다. 어떤사람은필요할때만상태를활성화시키고나서다시끄는것을좋아하는데, 권장할만한방법은아니다. 이것은시스템에부하를주기때문에. 젤좋은방법은한번활성화하고, 한번비활성화하는거다. 활성화 / 비활성화를적게해라. depth, stencil 가필요하면활성화해라. 여기서는 depth render buffer 만쓴다. // Doesn't matter if this is before or after // we create the depth render buffer. // The important thing is enable it before try // to render something which needs to use it. // depth render buffer 를만들기전이나, 만든후에이함수를호출해도 // 상관없음. 중요한건뭔가를그리기전에활성화해야한다는거임. glenable(gl_depth_test);

Later I'll talk deeply about what the depth and stencil tests make and their relations with fragment shaders. 나중에 depth, stencil 을테스트하고, fragment shader 와의관계를이야기할거임 Buffer Objects top The buffer objects are optimized storage for our primitive's arrays. Has two kind of buffer objects, the first is that we store the array of vertices, because it the buffer objects is also known as Vertex Buffer Object (VBO). After you've created the buffer object you can destruct the original data, because the Buffer Object (BO) made a copy from it. We are used to call it VBO, but this kind of buffer object could hold on any kind of array, like array of normals or an array of texture coordinates or even an array of structures. To adjust the name to fit the right idea, some people also call this kind of buffer object as Array Buffer Object (ABO). The other kind of buffer object is the Index Buffer Object (IBO). Do you remember the array of indices from the first part? (click here to remember). So the IBO is to store that kind of array. Usually the data type of array of indices is GLubyte or GLushort. Some devices have support up to GLuint, but this is like an extension, almost a plugin which vendors have to implement. The majority just support the default behavior (GLubyte or GLushort). So my advice is, always limit your array of indices to GLushort. OK, now to create these buffers the process is very similar to frame buffer and render buffer. First you create one or more names/ids, later you bind one buffer object, and then you define the properties and data into it. buffer object 는 primitive( 기본도형 ) 의배열을저장하는데최적화되어있다. 2 종류의 buffer object 는가지는데, VBO 라고알려진첫번째는꼭짓점의배열을저장한다. Buffer object 는생성한후에는원본데이타를지워도된다. 왜냐면 buffer object 가원본데이타의복사본을가지고있기때문이다. VBO 는어떤종류의배열이라도가지고있을수있다 ( 법선배열, 텍스쳐좌표배열, 구조체배열 ) 두번째 buffer object 는 IBO 이다. (IBO 는인덱스배열이고, 그인덱스는 VBO 의데이터를가르킨다 ) 일반적으로 IBO 의데이터타입은 Glubyte 또는 Glushort 이다. 이것들을만드는과정은 frame buffer 와 render buffer 를만드는것과비슷하다.

Buffer Objects Creation GLvoid glgenbuffers(glsizei n, GLuint* buffers) n: The number representing how many buffer objects's names/ids will be generated at once. buffers: A pointer to a variable to store the generated names/ids. If more than one name/id was generated, this pointer will point to the start of an array. GLvoid glbindbuffer(glenum target, GLuint buffer) target: The target will define what kind of buffer object will be, VBO or IBO. The values can be: o o GL_ARRAY_BUFFER: This will set a VBO (or ABO, whatever). GL_ELEMENT_ARRAY_BUFFER: This will set an IBO. buffer: The name/id of the frame buffer to be bound. Now is time to refine that illustration about the Port Crane's hooks. The BufferObject Hook is in reality a double hook. Because it can hold two buffer objects, one of each type: GL_ARRAY_BUFFER andgl_element_array_buffer. OK, once you have bound a buffer object is time to define its properties, or was better to say define its content. As the "BufferObject Hook" is a double one and you can have two buffer objects bound at same time, you need to instruct the OpenGL about the kind of buffer object you want to set the properties for. 아까항구의크레인을설명할때, buffer object 는 2 개의고리를가지고있다고했다. GL_ARRAY_BUFFER, GL_ELEMENT_ARRAY_BUFFER 다. buffer object 를 bind 했으면, 데이터를설정한다. Buffer Objects Properties GLvoid glbufferdata(glenum target, GLsizeiptr size, const GLvoid* data, GLenum usage) target: Indicates for what kind of buffer you want to set the properties for. This param can be GL_ARRAY_BUFFER orgl_element_array_buffer. size: The size of the buffer in the basic units (bytes). data: A pointer to the data. usage: The usage kind. This is like a tip to help the OpenGL to optimize the data. This can be of three kinds: o o GL_STATIC_DRAW: This denotes an immutable data. You set it once and use the buffer often. GL_DYNAMIC_DRAW: This denotes a mutable data. You set it once and update

its content several times using it often. GL_STREAM_DRAW: This denotes a temporary data. For who of you which is familiar with Objective-C, this is like an autorelease. You set it once and use few times. Later the OpenGL automatically will clean and destroy this buffer. 해석하기귀찮아서안함. GLvoid glbuffersubdata(glenum target, GLintptr offset, GLsizeiptr size, const GLvoid* data) target: Indicates for what kind of buffer you want to set the properties for. This param can be GL_ARRAY_BUFFER orgl_element_array_buffer. offset: This number represent the offset which you will start to make changes into the previously defined buffer object. This number is given is basic units (bytes). size: This number represent the size of the changes into the previously defined buffer object. This number is given is basic units (bytes). data: A pointer to the data. 해석하기귀찮아서안함 Now let's understand what these functions make. The first one, glbufferdata, you use this function to set the content for your buffer object and its properties. If you choose the usage type of GL_DYNAMIC_DRAW, it means you want to update that buffer object later and to do this you need to use the second one, glbuffersubdata. When you use the glbuffersubdata the size of your buffer object was previously defined, so you can't change it. But to optimize the updates, you can choose just a little portion of the whole buffer object to be updated. Personally, I don't like to use GL_DYNAMIC_DRAW, if you stop to think about it you will see that doesn't exist in the 3D world an effect of behavior which only can be done changing the original vertex data, normal data or texture coordinate data. By using the shaders you can change almost everything related to those data. Using GL_DYNAMIC_DRAW certainly will be much more expansive than using a shaders's approach. So, my advice here is: Avoid to use GL_DYNAMIC_DRAW as much as possible! Always prefer think in a way to achieve the same behavior using the Shaders features. Once the Buffer Object was properly created and configured, it's very very simple to use it. All we need to do is bind the desired Buffer Objects. Remember we can bind only one kind of Buffer Object at a time. While the Buffer Objects stay bound, all the drawing commands we make will use them. After the usage is a good idea unbind them. Now let's move to the textures.

이제이함수들이뭘만드는지알아보자. 첫번째 glbufferdata 함수는 buffer object 에컨텐츠와속성을설정한다. usage 변수에 GL_DYNAMIC_DRAW 를설정하면, 나중에데이타가바뀐다는뜻이고, glbuffersubdata 함수를통해서필요한데이터가업데이트된다. glbuffersubdata 를사용할때 buffer object 의크기는이미정해져있고, 바꿀수없다. 데이터수정을최적화하기위해서전체버퍼에서작은부분만바꿔라. 개인적으로 GL_DYNAMIC_DRAW 를안좋아한다. 3 차원세계에서는움직임효과, 노말 ( 법선 ) 데이터, 텍스쳐좌표데이타 처럼원본데이터를변화해야만하는것들이있다. shader 를사용해서이런데이터와관련된거의모든것을변경할수있다. GL_DYNAMIC_DRAW 는 shader 를쓰는방식보다부하가크다. 그래서나의충고는 GL_DYNAMIC_DRAW 를가능한쓰지마라. shader 의특징을사용해서같은효과를낼수있는지생각해라. 한번 buffer object 가생성되고설정되면, 사용하는건아주쉽다. 원하는 buffer object 를연결만하면된다. 한종류의 buffer object 는하나만 bind 된다는걸기억해라 ( 꼭짓점하나, 인덱스하나 ) 일단 buffer object 가 bind 되면모든그리기명령은그 buffer object 가 사용된다. 사용이끝나면 unbind 하는게좋다.
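To tie this together with the cube3d array from before, here is a sketch of creating one VBO and one IBO. The cubeIndices array is hypothetical (an array of GLushort indices describing the cube's triangles); it is not defined on this page.

GLuint buffers[2];

// Generate two buffer object names/ids at once.
glGenBuffers(2, buffers);

// Index 0: the Vertex Buffer Object (GL_ARRAY_BUFFER) holding cube3d.
glBindBuffer(GL_ARRAY_BUFFER, buffers[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(cube3d), cube3d, GL_STATIC_DRAW);

// Index 1: the Index Buffer Object (GL_ELEMENT_ARRAY_BUFFER) holding the
// hypothetical cubeIndices array of GLushort values.
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffers[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(cubeIndices), cubeIndices, GL_STATIC_DRAW);

// After the setup it's a good idea to unbind them.
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);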

Textures top Oh man, textures is very large topic in OpenGL. To don't increase the size of this tutorial more than it actually is, let's see the basic about the texture here. The advanced topics I'll let to the third part of this tutorial or an exclusive article. The first thing I need to say you is about the Power of Two (POT). OpenGL ONLY accept POT textures. What that means? That means all the textures must to have the width and height a power of two value. Like 2, 4, 8, 16, 32, 64, 128, 256, 512 or 1024 pixels. To a texture, 1024 is a bigger size and normally indicate the maximum possible size of a texture. So all texture which will be used in OpenGL must to have dimensions like: 64 x 128 or 256 x 32 or 512 x 512, for example. You can't use 200 x 200 or 256 x 100. This is a rule to optimize the internal OpenGL processing in the GPU. Another important thing to know about textures in OpenGL is the read pixel order. Usually image file formats store the pixel information starting at the upper left corner and moves through line by line to the lower right corner. File format like JPG, PNG, BMP, GIF, TIFF and others use this pixel order. But in the OpenGL this order is flipped upside down. The textures in OpenGL reads the pixels starting from the lower left corner and goes to the upper right corner. texture 는 opengl 에서큰부분이다. 이문서를간략히하기위해서 texture 에대해서미리공부해라. 나중에깊게설명하겠다. 처음말하고싶은건 POT 이다. opengl 은 POT 텍스쳐만받아들인다. 모든텍스쳐는가로, 세로길이가 2 의배수여야한다 (2,4,8,16,32,64 1024 픽셀 ) 1024 는충분히크고, 보통최대크기이다. 따라서 opengl 에서사용하는모든텍스쳐는다음과같은크기여야한다 (64*128, 256*32, 512*512) 200*200, 256*100 같은건쓸수없다. 이것은 GPU 에서실행되는 Opengl 의최적화를위한규칙이다. 또하나중요한규칙은픽셀을읽는순서다. 일반적으로이미지파일은좌상귀에서우하귀순서로픽셀정보를저장한다. jpg, png, bmp, gif 등거의모든파일이이런순서로저장한다. 하지만 opengl 은순서가위아래로뒤집힌다. 좌하귀에서우상귀순서다

So, to solve this little issue, we usually make a vertical flip on our image data before uploading it to the OpenGL core. If your programming language lets you re-scale images, this is equivalent to re-scaling the height by -100%. 이런 문제를 해결하기 위해, OpenGL에 이미지를 넘기기 전에 이미지를 위아래로 뒤집는다.
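If your image loader doesn't do this for you, a vertical flip is just a row swap. This helper is plain C, not an OpenGL call, and assumes a tightly packed RGBA image (4 bytes per pixel).

#include <stdlib.h>
#include <string.h>

// Flips a tightly packed RGBA image upside down, in place.
void flipImageVertically(unsigned char *pixels, int width, int height)
{
    int rowSize = width * 4;
    unsigned char *tempRow = (unsigned char *)malloc(rowSize);

    for (int y = 0; y < height / 2; ++y)
    {
        unsigned char *top = pixels + y * rowSize;
        unsigned char *bottom = pixels + (height - 1 - y) * rowSize;

        // Swap the top row with its matching bottom row.
        memcpy(tempRow, top, rowSize);
        memcpy(top, bottom, rowSize);
        memcpy(bottom, tempRow, rowSize);
    }

    free(tempRow);
}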

Now, shortly about the logic, the textures in OpenGL works at this way: You have an image file, so you must to extract the binary color informations from it, the hexadecimal value. You could extract the alpha information too, OpenGL supports RGB and RGBA format. In this case you'll need to extract the hexadecimal + alpha value from your image. Store everything into an array of pixels. With this array of pixel (also called texels, because will be used in a texture) you can construct an OpenGL's texture object. OpenGL will copy your array and store it in an optimized format to use in the GPU and in the frame buffer, if needed. Now is the complex part, some people has criticized OpenGL so much by this approach. Personally I think this could be better too, but is what we have today. OpenGL has something called "Texture Units", by default any OpenGL implementation by vendors must supports up to 32 Texture Units. These Units represent a temporary link between the stored array of pixels and the actual render processing. You'll use the Texture Units inside the shaders, more specifically inside the fragment shaders. By default each shader can use up to 8 textures, some vendors's implementation support up to 16 textures per shader. Further, OpenGL has a limit to the pair of shader, though each shader could use up to 8 texture units, the pair of shader (vertex and fragment) are limited to use up to 8 texture units together. Confused? Look, if you are using the texture units in only one shader you are able to use up to 8. But if you are

using texture units in both shader (different textures units), you can't use more than 8 texture units combined. Well, OpenGL could hold on up to 32 Texture Units, which we'll use inside the shaders, but the shader just support up to 8, this doesn't make sense, right? Well, the point is that you can set up to 32 Texture Units and use it throughout many shaders. But if you need a 33th Texture Unit you'll need reuse a slot from the firsts 32. Very confused! I know... Let's see if an visual explanation can clarify the point: opengl 에서텍스쳐가동작하는방식을간략히말하면이렇다. 이미지파일에서 16 진수로바이너리색정보를추출해야한다. 알파값도같이추출할수있으므로, opengl 은 RGB 와 RGBA 포맷을지원한다. 픽셀의모든색정보를배열에저장해라.( 이걸텍셀이라고부른다 ) 이배열을가지고, texture object 를만들수있다. Opengl 은이배열을따로저장하여 GPU 와 frame buffer 에최적화시킨다. 이제부터복잡한부분이나오는데어떤사람들이이런방식때문에 opengl 을비난한다. 개인적으로도이건개선할필요가있다고생각한다. Opengl 은 texture unit 이란걸가지고있다. 기본적으로 opengl 은 texture unit 을 32 개까지지원함. 이것들은저장된픽셀배열과실제 rendering 과정과의임시링크이다. texture unit 을 fragment shader 안에서사용할수있다. 기본적으로각 shader 는 8 개의텍스쳐를쓸수있지만, Opengl 은 2 개의 shader 를가지고있으므로, 2 개의 shader 에서사용하는텍스쳐의합이 8 을넘을수없다. 혼란스럽나? 만약너가하나의 shader 에서만텍스쳐를쓴다면그 shader 안에서 8 개의텍스쳐를쓸수있다. 너가 2 개의 shader 를쓴다고해도 (vertex, fragment) 사용하는텍스쳐의합이 8 개를넘을수없다. opengl 는 32 개의텍스쳐를가질수있고, 이것들은모두 shader 에서사용할수있지만, shader 는 8 개의텍스쳐만지원된다. 말이안되지? 중요한건 33 개의텍스쳐가필요하다면첫번째텍스쳐를다시사용해야된다는것이다. 되게헷갈린다. 나도안다. 다음그림을보자

As you saw in that image, one Texture Unit can be used many times by multiple shaders pairs. This approach is really confused, but let's understand it by the Khronos Eyes: "Shader are really great!", a Khronos developer said to the other, "They are processed very fast by the GPU. Right! But the textures... hmmm.. textures data still on the CPU, they are bigger and heavy informations! Hmmm.. So we need a fast way to let the shaders get access the textures, like a bridge, or something temporary. Hmmm... We could create an unit of the texture that could be processed directly in the GPU, just as the shaders. We could limit the number of current texture units running on the GPU. A cache in the GPU, it's fast, it's better. Right, to make the setup, the user bind a texture data to a texture unit and instruct his shaders to use that unit! Seems simple! Let's use this approach." Normally the texture units are used in the fragment shader, but the vertex shader can also performs look up into a texture. This is not common but could be useful in some situations. Two very important things to remember are: first, you must to activate the texture unit by using glactivetexture() and then you bind the texture name/id usingglbindtexture(). The second important thing is that even by default the OpenGL supports up to 32 texture units, you can't use a slot number higher than the maximum supported texture units in your vendor's implementation, so if your OpenGL implementation doesn't support more than 16 texture units, you just can use the texture units in range 0-15. Well, the OpenGL texture units approach could be better, of course, but

as I said, it's what we have for now! OK, again the code is very similar to the others above: You Generate a texture object, Bind this texture and set its properties. Here are the functions: 이그림을보면, 하나의 texture unit 은여러 shader 에서여러번사용될수있다. 이런접근방법은혼란스럽지만, 크로노스의입장에서이해해보자. 크로노스개발자의말이다 shader 는굉장하다. GPU 에의해서매우빠르게처리된다. 하지만크고무거운텍스쳐데이터는여전히 CPU 가처리한다. 우리는 shader 가 texture 에빠르게접근하는방법이필요하다. 그래서 shader 처럼 GPU 에서바로처리될수있는 texture unit 을만들었다. GPU 상에서동작하는 texture unit 의수를제한해야한다. GPU 안의캐쉬는빠르다. 사용자는 texture 데이터를 texture unit 에연결하고, texture unit 을사용할 shader 를알려준다. 간단하다. 이방법을써보자. 보통 texture unit 은 fragment shader 에서사용하지만, vertex shader 역시 texture 내부를볼수있다. 일반적이진않지만어떤경우엔유용하다. 기억해야할 2 가지중요한것은, 첫째 glactivetexture() 로 texture unit 을활성화해야하고, glbindtexture() 로바인딩해야한다. 둘째 opengl 은 32 개까지 texture unit 을지원하지만, vender 에서지원하는개수를넘을수없음. ( 이석우추가 : vender 는애플, 구글등 opengl 명세를구현한회사들 ) vender 가 16 개가지원하면 0~15 번까지쓸수있다. 코드로돌아가서 texture object 를만들고, bind 하고속성을설정하자. Texture Creation GLvoid glgentextures(glsizei n, GLuint* textures) n: The number representing how many textures' names/ids will be generated at once. textures: A pointer to a variable to store the generated names/ids. If more than one name/id was generated, this pointer will point to the start of an array. n: 생성할텍스쳐갯수 textures: 텍스쳐번호를저장할변수 GLvoid glbindtexture(glenum target, GLuint texture) target: The target will define what kind of texture will be, a 2D texture or a 3D texture. The values can be: o o GL_TEXTURE_2D: This will set a 2D texture. GL_TEXTURE_CUBE_MAP: This will set a 3D texture. texture: The name/id of the texture to be bound. Is so weirdest a "3D texture"? The first time I heard "3D texture" I thought:"wtf!". Well, because of this weirding, the OpenGL calls a 3D texture of a Cube Map. Sounds better! Anyway, the point is that represent a cube with one 2D texture in each face, so the 3D texture or cube map represents a

collection of six 2D textures. And how we can fetch the the texels? With a 3D vector placed in the center of the cube. This subject need much more attention, so I'll skip the 3D textures here and will let this discussion to the third part of this tutorial. Let's focus on 2D texture. Using only the GL_TEXTURE_2D. So, after we've created a 2D texture we need to set its properties. The Khronos group calls the OpenGL's core as "server", so when we define a texture data they say this is an "upload". To upload the texture data and set some properties, we use: 3D texture 라니이상하다. Opengl 은이걸 cube_map 이라고부른다. 어쨌든중요한건, 해석못함이주제는많은주의가필요하다. 그래서, 3d texture 는그냥넘어간다. 일단 GL_TEXTURE_2D 를써라. 생성후에속성을설정해야한다. 크로노스는 opengl 의내부를서버라고부른다. 그래서 texture data 를정의할때 upload 한다고한다. Texture data 를 upload 하고속성을설정하려면아래의함수를써라. Texture Properties GLvoid glteximage2d (GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const GLvoid* pixels) target: To a 2D texture this always will be GL_TEXTURE_2D. 항상 GL_TEXTURE_2D level: This parameter represents the mip map level. The base level is 0, for now let's use only 0. mip map 레벨을설정. 기본값 0 internalformat: This represents the color format of the pixels. This parameter can be: 색포맷 o GL_RGBA: For RGB + Alpha. o GL_RGB: For RGB only. o GL_LUMINANCE_ALPHA: For Red + Alpha only. In this case the red channel will represent the luminosity. o GL_LUMINANCE: For Red only. In this case the red channel will represent the luminosity. o GL_ALPHA: For Alpha only. width: The width of the image in pixels. 이미지넓이 height: The height of the image in pixels. 이미지높이 border: This parameter is ignored in OpenGL ES. Always use the value 0. This is just an internal constant to conserve the compatibly with the desktop versions. 이건 opengl es 에서안씀. 그냥 0 으로해라.

format: The format must to have the same value ofinternalformat. Again, this is just an internal OpenGL convention. internalformat 과같은값으로해라. type: This represent the data format of the pixels. This parameter can be: 픽셀의포맷이다. o GL_UNSIGNED_BYTE: This format represent 4 Bytes per pixel, so you can use 8 bits for red, 8 bits for green, 8 bits for blue and 8 bits for alpha channels, for example. This definition is used with all color formats. o GL_UNSIGNED_SHORT_4_4_4_4: This format represents 2 bytes per pixel, so you can use 4 bits for red, 4 bits for green, 4 bits for blue and 4 bits for alpha channels, for example. This definition is used with RGBA only. o GL_UNSIGNED_SHORT_5_5_5_1: This format represents 2 bytes per pixel, so you can use 5 bits for red, 5 bits for green, 5 bits for blue and 1 bit for alpha channels, for example. This definition is used with RGBA only. o GL_UNSIGNED_SHORT_5_6_5: This format represents 2 bytes per pixel, so you can use 5 bits for red, 6 bits for green and 5 bits for blue, for example. This definition is used with RGB only. pixels: The pointer to your array of pixels. Wow! A lot of parameters! OK, but is not hard to understand. First of all, same behavior of the others "Hooks", a call to glteximage2d will set the properties for the last texture bound. About the mip map, it is another OpenGL feature to optimize the render time. In few words, what it does is progressively create smaller copies of the original texture until an insignificant copy of 1x1 pixel. Later, during the rasterize process, OpenGL can choose the original or the one of the copies to use depending on the final size of the 3D object in relation to the view. For now, don't worry with this feature, probably I'll create an article only to talk about the texture with OpenGL. After the mip map, we set the color format, the size of the image, the format of our data and finally our pixel data. The 2 bytes per pixel optimized data formats is the best way to optimize your texture, use it always you can. Remember which the color you use in the OpenGL can't exceed the color range and format of your device and EGL context. OK, now we know how to construct a basic texture and how it works inside OpenGL. So now let's move to the Rasterize. 파라메터가많고, 이해하기어렵다. 다른것들처럼 glteximage2d 는맨나중에 bind 된 texture 의속성을설정한다. 밉맵은그리는시간을최적화하는 opengl 의특징이다. 간단히말해원본 texture 를순차적으로작게만든복사본을만드는것이다 (1*1 크기가될때까지 ) 나중에 rastrize 단계에서물체의보여지는크기에따라서원본이나작은복사본을선택해사용하는것임. 지금은이특징에대해서걱정하지말자. 나중에 texture 에대한글을쓸거다. 밉맵후에 color format, 이미지크기, 데이터포맷, 최종픽셀데이타를설정한다. The 2 byets use it always you can. 해석못함 Opengl 은기기와 egl context 의색범위, 포맷안에서만사용가능하단걸기억해라.
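Before we do, here is a compact sketch tying the texture calls together. The pixels array and the sw/sh dimensions (assumed to be power of two) are placeholders for data produced by your own image loading code; the glTexParameteri filter calls are not discussed above, but OpenGL ES 2.0 needs a non-mipmap minification filter (or real mip maps) for the texture to be usable.

GLuint texture;

// Generate a name/id and bind it, making it the current 2D texture.
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);

// Basic filtering setup (an assumption of this sketch, see the note above).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// Upload the array of pixels to the OpenGL "server", mip map level 0, RGBA.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sw, sh, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

// Later, at render time: activate a texture unit and bind the texture to it,
// so a shader can fetch texels from that unit.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);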

이제기본 texture 를만드는법과 opengl 안에서 texture 가작동하는법을알았다. Rasterize 로넘어가자 Rasterize top The Rasterize in the strict sense is only the process which the OpenGL takes a 3D object and convert its bunch of maths into a 2D image. Later, each fragment of this visible area will be processed by the fragment shader. Looking at that Programmable Pipeline illustration at the beginning of this tutorial, you can see the Rasterize is just a small step through the graphics pipeline. So why it is so important? I like to say which everything that comes after the Rasterize step is also a Rasterize process, because all that is done later on is also to construct a final 2D image from a 3D object. OK, anyway. The fact is the Rasterize is the process of creating an image from a 3D object. The Rasterize will occurs to each 3D object in the scene and will update the frame buffer. You can do interferences by many ways in the Rasterize process. Rasterize 는엄격한의미로 opengl 이 3 차원물체를 2 차원이미지로만드는수학적부분에서동작된다. 나중에눈에보이는영역의각조각은 fragment shader 에의해서처리된다. 이글의첫부분에 programmable pipeline 그림에서 rasterize 는그래픽파이프라인에서작은부분이란걸볼수있다. 근데이게왜중요하냐? 어쨌든 rasterize 는 3 차원물체에서이차원이미지를만드는과정이란것이다. rasterize 는 3 차원장면마다발생하고, frame buffer 를업데이트한다. rasterize 처리과정을여러방법으로간섭할수있다. Face Culling top Now is time to talk about the Culling, Front Facing and Back Facing. OpenGL works with methods to find and discard the not visible faces. Imagine a simple plane in your 3D application. Let's say you want this plane be visible just by one side (because it represent a wall or a floor, whatever). By default OpenGL will render the both sides of that plane. To solve this issue you can use the culling. Based on the order of vertices OpenGL can determine which is the front and the back face of your mesh (more precisely, it calculates the front and back face of each triangle) and using the culling you can instruct OpenGL to ignore one of these sides (or even both). Look at this picture: 이제 Culling, 앞면, 뒷면에대해서이야기해보자 opengl 은안보이는면을찾아서무시한다. 간단한면이있는 3 차원어플리케이션을상상해보자. 너가이면의한쪽면만보이길원한다 ( 왜냐면이것은벽, 바닥등 ) 기본적으로 opengl 은면의양쪽면을모두그린다. 이걸해결하기위해서 culling 을이용한다. Opengl 은꼭짓점의정렬순서를바탕으로앞면과뒷면을구분한다 ( 정확히각삼각형이앞면인지뒷면인지계산한다 ) culling 을이용하여 opengl 에게어떤면을무시할지지시할수있다. 그림봐라.

This feature, called culling, is completely flexible; you have at least three ways to do the same thing. That picture shows only one way, but the most important thing is to understand how it works. In the picture's case, a triangle is composed by vertex 1, vertex 2 and vertex 3. The triangle at the left is constructed using the order {1,2,3} and the one at the right is formed by the order {1,3,2}. By default the culling will treat triangles formed in counter-clockwise order as a Front Face, and those will not be culled. Following this same behavior, on the right side of the image, the triangle formed in clockwise order will be treated as a Back Face and it will be culled (ignored in the rasterization process). To use this feature you call the glEnable function with the parameter GL_CULL_FACE, and by doing this you get the default behavior explained above. But if you want to customize it, you can use these functions:

Cull Face properties

GLvoid glCullFace(GLenum mode)

mode: Indicates which face will be culled. This parameter can be:
GL_BACK: This will ignore the back faces. This is the default behavior.
GL_FRONT: This will ignore the front faces.
GL_FRONT_AND_BACK: This will ignore both front and back faces (don't ask me why someone would want to exclude both sides, knowing that this will produce no render at all. I'm still trying to figure out the reason for this silly setup).

GLvoid glFrontFace(GLenum mode)

mode: Indicates how OpenGL will define the front face (and obviously also the back face). This parameter can be:
GL_CCW: This will instruct OpenGL to treat triangles formed in counter-clockwise order as a Front Face. This is the default behavior.
GL_CW: This will instruct OpenGL to treat triangles formed in clockwise order as a Front Face.

As you can imagine, if you set glCullFace(GL_FRONT) and glFrontFace(GL_CW) you will achieve the same behavior as the default, because both parameters are reversed and cancel out (the defaults are glCullFace(GL_BACK) and glFrontFace(GL_CCW)). Another way to change the default behavior is by changing the order in which your 3D objects are constructed, but of course this is much more laborious, because you need to change your array of indices. The culling is the first thing to happen in the Rasterize step, so it can determine whether the Fragment Shader (the next step) will be processed or not.
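As a minimal sketch, the default culling behavior described above could be written out explicitly like this:

// Enables culling; with no further calls this already means
// "cull back faces, counter-clockwise triangles are front faces".
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);

// The reversed setup mentioned above produces the same visible result:
// glCullFace(GL_FRONT);
// glFrontFace(GL_CW);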

Per-Fragment Operations

Now let's refine a little our programmable pipeline diagram from the top of this tutorial, more specifically what happens after the fragment shader. Between the Fragment Shader and the Scissor Test there exists one little omitted step, something called the "Pixel Ownership Test". This is an internal step. It decides the ownership of a pixel between the OpenGL internal Frame Buffer and the current EGL context. This is an insignificant step to us; you can't use it for anything, I only mention it so you know what happens internally. To us, developers, this step is completely ignored. As you saw, the only step you don't have access to is the Logicop. The Logicop is an internal process that includes

things like clamping values to the 0.0 - 1.0 range, processing the final color into the frame buffer after all the per-fragment operations, additional multisampling and other kinds of internal work. You don't need to worry about that. We need to focus on the purple boxes. The purple boxes indicate processes which are disabled by default; you need to enable each of them using the glEnable function, if you want to use them, of course. You can look again at the glEnable parameters, but just to make this point clear, in short words the purple boxes in this image represent the following parameters and meanings:

Scissor Test: GL_SCISSOR_TEST - This can crop the image, so every fragment outside the scissor area will be ignored.
Stencil Test: GL_STENCIL_TEST - Works like a mask. The mask is defined by a black-and-white image where the white pixels represent the visible area, so every fragment placed on the black area will be ignored. This requires a stencil render buffer to work.
Depth Test: GL_DEPTH_TEST - This test compares the Z depth of the current 3D object against the other Z depths previously rendered. A fragment with a depth higher than another (that is, more distant from the viewer) will be ignored. This is done using a grey-scale image. This requires a depth render buffer to work.
Blend: GL_BLEND - This step can blend the new fragment with the fragment already in the color buffer.
Dither: GL_DITHER - This is a little OpenGL trick. On systems where the colors available to the frame buffer are limited, this step can optimize the color usage to appear to have more colors than it really has. The Dither has no configuration; you just choose to use it or not.

For each of them, OpenGL gives a few functions to set up the process, like glScissor, glBlendColor or glStencilFunc. There are more than 10 such functions and I'll not talk about them here, maybe in another article. The important thing to understand here is the process. I told you about the default behavior, like the black and white in the stencil buffer, but by using those functions you can customize the processing, for example changing the black and white behavior of the stencil buffer.
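As a minimal sketch of the enabling step described above (the blend function and the scissor rectangle here are just illustrative choices, not values prescribed by this tutorial):

// Depth test: fragments behind already rendered ones are discarded.
glEnable(GL_DEPTH_TEST);

// Blend: combines new fragments with the color buffer using the alpha channel.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Scissor test: only the 256 x 256 area at the lower-left corner is rendered.
glEnable(GL_SCISSOR_TEST);
glScissor(0, 0, 256, 256);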

Look again at the programmable pipeline at the top. Each time you render a 3D object, that entire pipeline runs, from the glDraw* call until the frame buffer, but it does not enter the EGL API. Imagine a complex scene, a game scene, like a Counter Strike scene. You could render tens, maybe hundreds of 3D objects to create one single static image. When you render the first 3D object, the frame buffer will begin to be filled. If the subsequent 3D objects have their fragments ignored by one or more of the Fragment Operations, then the ignored fragments will not be placed in the frame buffer, but remember that this will not exclude the fragments which are already in the frame buffer. The final Counter Strike scene is a single 2D image resulting from many shaders, lights, effects and 3D objects. So every 3D object will have its vertex shader processed, and maybe also its fragment shader, but this doesn't mean that its resulting image will really be visible. Well, now you understand why I said the rasterization process includes more than one single step in the diagram. Rasterize is everything between the vertex shader and the frame buffer steps. Now let's move on to the most important section, the shaders!

Shaders

Here we are! The greatest invention of the 3D world! If you've read the first part of this series of tutorials and read this part up to here, I think you now have a good idea of what the shaders are and what they do. Just to refresh our memories, let's remember a little:

Shaders use GLSL or GLSL ES, a compact version of the first one.
Shaders always work in pairs, a Vertex Shader (VSH) and a Fragment Shader (FSH). That pair of shaders will be processed every time you submit a render command, like glDrawArrays or glDrawElements.
The VSH is processed per-vertex; if your 3D object has 8 vertices, the vertex shader will be processed 8 times. The VSH is responsible for determining the final position of a vertex.
The FSH is processed for each visible fragment of your objects. Remember that the FSH is processed before the "Fragment Operations" in the graphics pipeline, so OpenGL doesn't know yet which object is in front of the others; I mean, even the fragments behind the others will be processed. The FSH is responsible for defining the final color of a fragment.
VSH and FSH must be compiled separately and linked together within a Program Object. You can reuse a compiled shader in multiple Program

Objects, but can link only one shader of each kind (one VSH and one FSH) to each Program Object.

Shader and Program Creation

OK, first let's talk about the process of creating a shader object, putting some source code into it and compiling it. As with any other OpenGL object, we first create a name/id for it and then set its properties. Compared to the other OpenGL objects, the additional step here is the compiling. Remember that the shaders will be processed by the GPU, and to optimize that processing OpenGL compiles your source code into a binary format. Optionally, if you have a previously compiled shader in a binary file, you could load it directly instead of loading the source and compiling it. But for now, let's focus on the compiling process. These are the functions related to the shader creation process:

Shader Object Creation

GLuint glCreateShader(GLenum type)

type: Indicates what kind of shader will be created. This parameter can be:
o GL_VERTEX_SHADER: To create a Vertex Shader.
o GL_FRAGMENT_SHADER: To create a Fragment Shader.
This function returns the name/id of the new shader object.

GLvoid glShaderSource(GLuint shader, GLsizei count, const GLchar** string, const GLint* length)

shader: The shader name/id generated by the glCreateShader function.
count: Indicates how many sources you are passing at once. If you are uploading only one shader source, this parameter must be 1.
string: The source of your shader(s). This parameter is a double pointer because you can pass an array of C strings, where each element represents one source. The pointed array should have the same length as the count parameter above.
length: A pointer to an array in which each element represents the number of chars in each C string of the above parameter. This array must have the same number of elements as specified in the count parameter above. This parameter can also be NULL; in this case, each element in the string parameter above must be null-terminated.

GLvoid glCompileShader(GLuint shader)

shader: The shader name/id generated by the glCreateShader function.

As you saw, this step is easy. You create a shader name/id, upload the source code to it and then compile it. If you upload source code into a shader which already had another source in it, the old source will be completely replaced. Once the shader has been compiled, you can't change the source code anymore using glShaderSource. Each Shader object has a GLboolean status to indicate whether it is compiled or not. This status will be set to TRUE if the shader was compiled with no errors. This status is useful in the debug mode of your application to check if the shaders are being compiled correctly. Jointly with this check, it's a good idea to query the info log which is provided. The functions are glGetShaderiv to retrieve the status and glGetShaderInfoLog to retrieve the status message. I'll not place the functions and parameters here, but I'll show them shortly in a code example. It is important to tell you that the OpenGL names/ids reserved for the shaders form one

single list. For example, if you generate a VSH which gets the name/id 1, this number will never be used again; if you now create an FSH, the new name/id will probably be 2, and so on. A VSH will never have the same name/id as an FSH, and vice versa. Once you have a pair of shaders correctly compiled, it is time to create a Program Object and place both shaders into it. The process of creating a program object is similar to the shader process. First you create a Program Object, then you upload something into it (in this case, you place the compiled shaders into it) and finally you compile the program (in this case we don't use the word "compile", we use "link"). Linked to what? The Program will link the shader pair together and link itself to the OpenGL core. This process is very important, because it is during linking that many verifications of your shaders occur. Just like the shaders, the Programs also have a link status and a link info log which you can use to check for errors. Once a Program has been linked with success, you can be sure: your shaders will work correctly. Here are the functions for the Program Object:

Program Object Creation

GLuint glCreateProgram(void)

This function requires no parameter. This is because only one kind of Program Object exists, unlike the shaders. Plus, instead of taking the memory location of a variable, this function returns the name/id directly; this different behavior is because you can't create more than one Program Object at once, so you don't need to inform a pointer.

GLvoid glAttachShader(GLuint program, GLuint shader)

program: The program name/id generated by the glCreateProgram function.
shader: The shader name/id generated by the glCreateShader function.

GLvoid glLinkProgram(GLuint program)

program: The program name/id generated by the glCreateProgram function.

In glAttachShader you don't have any parameter to identify whether the shader is a Vertex or a Fragment one. You remember that the shader names/ids form one single list, right? OpenGL will automatically identify the type of the shaders based on their unique names/ids. So the important part is that you call glAttachShader twice, once for the VSH and once for the FSH. If you attach two VSH or two FSH, the program will not be properly linked; likewise, if you attach more than two shaders, the program will fail to link. You can create many programs, but how will OpenGL know which program to use when you call a glDraw*? The OpenGL Crane doesn't have an arm and a hook for program objects, right? So how will OpenGL know? Well, the programs are our exception. OpenGL doesn't have a bind function for them, but works with programs in the same way as a hook. When you want to use a program you call this function:

Program Object Usage

GLvoid glUseProgram(GLuint program)

program: The program name/id generated by the glCreateProgram function.

After calling the function above, every subsequent call to the glDraw* functions will use the program which is currently in use. As with any other glBind* function, the name/id 0 is reserved by OpenGL, and if you call glUseProgram(0) this will unbind the current program. Now it is time to code; any OpenGL application which you create will have code like this:

// Includes assumed by this snippet (the GL header path may differ by
// platform, e.g. <OpenGLES/ES2/gl.h> on iOS).
#include <GLES2/gl2.h>
#include <stdio.h>
#include <stdlib.h>

GLuint _program;

GLuint createShader(GLenum type, const char **source)
{
    GLuint name;

    // Creates a Shader Object and returns its name/id.
    name = glCreateShader(type);

    // Uploads the source to the Shader Object.
    glShaderSource(name, 1, source, NULL);

    // Compiles the Shader Object.
    glCompileShader(name);

    // If you are running in debug mode, query for the info log.
    // DEBUG is a pre-processing macro defined to the compiler.
    // Some languages may not have something similar to it.
#if defined(DEBUG)
    GLint logLength;

    // Instead of GL_INFO_LOG_LENGTH we could use GL_COMPILE_STATUS.
    // I prefer to take the info log length, because it'll be 0 if the
    // shader was successfully compiled. If we use GL_COMPILE_STATUS
    // we will need to take the info log length in case of a fail anyway.
    glGetShaderiv(name, GL_INFO_LOG_LENGTH, &logLength);

    if (logLength > 0)
    {
        // Allocates the necessary memory to retrieve the message.
        GLchar *log = (GLchar *)malloc(logLength);

        // Gets the info log message.
        glGetShaderInfoLog(name, logLength, &logLength, log);

        // Shows the message in the console.
        printf("%s", log);

        // Frees the allocated memory.
        free(log);
    }
#endif

    return name;
}

GLuint createProgram(GLuint vertexShader, GLuint fragmentShader)
{
    GLuint name;

    // Creates the program name/id.
    name = glCreateProgram();

    // Attaches the vertex and fragment shaders to the program object.
    glAttachShader(name, vertexShader);
    glAttachShader(name, fragmentShader);

    // Links the program against OpenGL's core.
    glLinkProgram(name);

#if defined(DEBUG)
    GLint logLength;

    // This function is different than the shaders one.
    glGetProgramiv(name, GL_INFO_LOG_LENGTH, &logLength);

    if (logLength > 0)
    {
        GLchar *log = (GLchar *)malloc(logLength);

        // This function is different than the shaders one.
        glGetProgramInfoLog(name, logLength, &logLength, log);

        printf("%s", log);

        free(log);
    }
#endif

    return name;
}

void initProgramAndShaders()
{
    const char *vshSource = "... Vertex Shader source using SL ...";
    const char *fshSource = "... Fragment Shader source using SL ...";
    GLuint vsh, fsh;

    vsh = createShader(GL_VERTEX_SHADER, &vshSource);
    fsh = createShader(GL_FRAGMENT_SHADER, &fshSource);

    _program = createProgram(vsh, fsh);

    // Clears the shader objects.
    // In this case we can delete the shaders because we
    // will not use them anymore; once compiled and linked,
    // OpenGL stores a copy of them into the program object.
    glDeleteShader(vsh);
    glDeleteShader(fsh);

    // Later you can use the _program variable to use this program.
    // If you are using Object Oriented Programming it is better to make
    // the program variable an instance variable; otherwise it is better
    // to make it a static variable so you can reuse it in other functions.
    // glUseProgram(_program);
}

Here I've made a minimal elaboration to make it more reusable, separating the functions which create OpenGL objects. For example, instead of rewriting the shader creation code, we can simply call the function createShader and inform the kind of shader we want and its source. The same goes for programs. Of course, if you are using an OOP language you could elaborate it much more, creating separate classes for Program Objects and Shader Objects, for example. This is the basics of shader and program creation, but we have much more to see. Let's move on to the Shader Language (SL). I'll treat specifically GLSL ES, the compact version of the OpenGL Shader Language for Embedded Systems.

Shader Language

The shader language is very similar to the C standard. Variable declarations and function syntax are the same, the if-then-else and loops have

the same syntax too; the SL even accepts preprocessor macros, like #if, #ifdef, #define and others. The shader language was made to be as fast as possible, so be careful about the usage of loops and conditions, they are very expensive. Remember that the shaders will be processed by the GPU and the floating-point calculations are optimized. To explore this great improvement, SL has exclusive data types to work with the 3D world:

SL's Data Type (Same as C): Description
void (void): Can represent any data type.
float (float): The range depends on the precision.
bool (unsigned char): 0 to 1.
int (char/short/int): The range depends on the precision.
vec2: Array of 2 float. Accessors: {x, y}, {r, g}, {s, t}.
vec3: Array of 3 float. Accessors: {x, y, z}, {r, g, b}, {s, t, r}.
vec4: Array of 4 float. Accessors: {x, y, z, w}, {r, g, b, a}, {s, t, r, q}.
bvec2: Array of 2 bool. Accessors: {x, y}, {r, g}, {s, t}.
bvec3: Array of 3 bool. Accessors: {x, y, z}, {r, g, b}, {s, t, r}.
bvec4: Array of 4 bool. Accessors: {x, y, z, w}, {r, g, b, a}, {s, t, r, q}.
ivec2: Array of 2 int. Accessors: {x, y}, {r, g}, {s, t}.
ivec3: Array of 3 int. Accessors: {x, y, z}, {r, g, b}, {s, t, r}.
ivec4: Array of 4 int. Accessors: {x, y, z, w}, {r, g, b, a}, {s, t, r, q}.
mat2: Array of 4 float. Represents a 2x2 matrix.
mat3: Array of 9 float. Represents a 3x3 matrix.
mat4: Array of 16 float. Represents a 4x4 matrix.
sampler2D: Special type to access a 2D texture.
samplerCube: Special type to access a Cube texture (3D texture).

All the vector data types (vec*, bvec* and ivec*) can have their elements accessed either by using the "." syntax or by the array subscripting syntax "[x]". In the above table you saw the sequences {x, y, z, w}, {r, g, b, a}, {s, t, r, q}. They are the accessors for the vector elements. For example, .xyz could represent the first three elements of a vec4, but you can't use .xyz on a vec2, because .xyz on a vec2 is out of bounds; for a vec2 just .xy can be used.

You can also change the order to achieve your results; for example, .yzx on a vec4 means you are querying the second, third and first elements, respectively. The reason for three different sequences is that a vec data type can be used to represent vectors (x,y,z,w), colors (r,g,b,a) or even texture coordinates (s,t,r,q). The important thing is that you can't mix these sets; for example, you can't use .xrt. The following example can help:

vec4 myvec4 = vec4(0.0, 1.0, 2.0, 3.0);
vec3 myvec3;
vec2 myvec2;

myvec3 = myvec4.xyz; // myvec3 = {0.0, 1.0, 2.0};
myvec3 = myvec4.zzx; // myvec3 = {2.0, 2.0, 0.0};
myvec2 = myvec4.bg;  // myvec2 = {2.0, 1.0};

myvec4.xw = myvec2;  // myvec4 = {2.0, 1.0, 2.0, 1.0};
myvec4[1] = 5.0;     // myvec4 = {2.0, 5.0, 2.0, 1.0};

It is very simple. Now, about the conversions, you need to take care with some things. The SL uses something called Precision Qualifiers to define the minimum and maximum values of a data type. Precision Qualifiers are little instructions which you can place in front of any variable declaration. As with any data range, this depends on the hardware capacity, so the following table shows the minimum ranges required by the SL. Some vendors can increase these ranges:

Precision / Floating Point Range / Integer Range
lowp: -2.0 to 2.0 / -256 to 256
mediump: -16,384.0 to 16,384.0 / -1,024 to 1,024
highp: -4,611,686,018,427,387,904.0 to 4,611,686,018,427,387,904.0 / -65,536 to 65,536

Instead of declaring a qualifier for each variable, you can also define global qualifiers by using the keyword precision. The Precision Qualifiers can help when you need to convert between data types; this should be avoided, but if you really need it, use the Precision Qualifiers to help you. For example, to convert a float to an int you should use a mediump float and a lowp int. If you try to convert a lowp float (range -2.0 to 2.0) to a lowp int, all you will get are integers between -2 and 2. And to convert, you must use a built-in constructor for the desired data type. The following code can help:

precision mediump float;
precision lowp int;

vec4 myvec4 = vec4(0.0, 1.0, 2.0, 3.0);
ivec3 myivec3;
mediump ivec2 myivec2;

// This will fail, because the data types are not compatible.
//myivec3 = myvec4.zyx;

myivec3 = ivec3(myvec4.zyx); // This is OK.
myivec2.x = myivec3.y;       // This is OK.
myivec2.y = 1024;            // This is OK too.

// But here myivec3.x will assume its maximum value. Instead of 1024,
// it will be 256, because the precisions are not equivalent.
myivec3.x = myivec2.y;

One of the great advantages and performance gains of working directly on the GPU is the floating-point operations. You can do multiplications or other operations with floating-point values very easily. Matrix types, vector types and the float type are fully compatible, respecting their dimensions, of course. You can make complex calculations, like matrix multiplications, in a single line, just like these:

mat4 mymat4;
mat3 mymat3;
vec4 myvec4 = vec4(0.0, 1.0, 2.0, 3.0);
vec3 myvec3 = vec3(-1.0, -2.0, -3.0);
float myfloat = 2.0;

// A mat4 has 16 elements and can be constructed from 4 vec4.
mymat4 = mat4(myvec4, myvec4, myvec4, myvec4);

// A float will multiply each vector value.
myvec4 = myfloat * myvec4;

// A mat4 multiplying a vec4 results in a vec4.
myvec4 = mymat4 * myvec4;

// Using the accessors, we can multiply two vectors of different orders.
myvec4.xyz = myvec3 * myvec4.xyz;

// A mat3 constructed from a mat4 takes its upper-left 3x3 elements.
mymat3 = mat3(mymat4);

// A mat3 multiplying a vec3 results in a vec3.
myvec3 = mymat3 * myvec3;

You can also use arrays of any data type and even construct structs, just like in C. The SL defines that every shader must have one function void main(). The shader execution starts at this function, just like in C. Any shader which doesn't have this function will not compile. A function in SL works exactly as in C. Just remember that the SL requires functions to be written before they are called: if you've written a function before calling it, it's OK, otherwise the call will fail. So if you have more functions in your shader, remember that void main() must be the last one written. Now it is time to go deeper and understand exactly what the vertex and fragment shaders do.

Vertex and Fragment Structures

First of all, let's take a look at the Shaders Pipeline and then I'll introduce you to the Attributes, Uniforms, Varyings and Built-In Variables.

Your VSH should always have one or more Attributes, because the Attributes are used to construct the vertices of your 3D object; only the attributes can be defined per-vertex. To define the final vertex position you'll use the built-in variable gl_Position. If you are drawing a 3D point primitive you can also set gl_PointSize. Later on, you'll set the gl_FragColor built-in variable in the FSH. The Attributes, Uniforms and Varyings construct the bridge between the GPU processing and your application on the CPU. Before you make a render (call the glDraw* functions), you'll probably set some values for the Attributes in the VSH.

These values can be constant across all vertices or can be different at each vertex. By default, any implementation of OpenGL's programmable pipeline must support at least 8 Attributes. You can't set any variable directly in the FSH; what you need to do is set a Varying output in the VSH and prepare your FSH to receive that variable. This step is optional, as you saw in the image, but in reality it is very uncommon to construct an FSH which doesn't receive any Varying. By default, any implementation of OpenGL's programmable pipeline must support at least 8 Varyings. Another way to communicate with the shaders is by using the Uniforms, but as the name suggests, the Uniforms are constant throughout all the shader processing (all vertices and all fragments). A very common usage of uniforms is the samplers. You remember the sampler data types, right? They are used to hold our Texture Units. You remember the Texture Units too, right? Just to make this point clear, sampler data types behave like int data types, but they are a special kind reserved to work with textures. Just that. The minimum number of supported Uniforms is different for each shader type. The VSH supports at least 128 Uniforms, but the FSH supports only at least 16 Uniforms. Now, about the Built-In Variables, OpenGL defines a few variables which are mandatory in each shader. The VSH must define the final vertex position; this is done through the variable gl_Position. If the current drawing primitive is a 3D point, it is a good idea to set gl_PointSize too.

The gl_PointSize will instruct the FSH about how many fragments each point will affect or, in simple words, the size on the screen of a 3D point. This is very useful for particle effects, like fire. In the FSH the built-in output variable is gl_FragColor. For compatibility with the desktop versions of OpenGL, gl_FragData can also be used. gl_FragData is an array related to the drawable buffers, but as OpenGL ES has only one internal drawable buffer, this variable must always be used as gl_FragData[0]. My advice here is to forget it and focus on gl_FragColor. About the read-only built-in variables, the FSH has three of them: gl_FrontFacing, gl_FragCoord and gl_PointCoord. gl_FrontFacing is a bool which indicates whether the current fragment belongs to a front facing primitive or not. gl_FragCoord is of vec4 data type and indicates the fragment coordinate relative to the window (window here means the actual OpenGL view). gl_PointCoord is used when you are rendering 3D points; when you specify gl_PointSize you can use gl_PointCoord to retrieve the texture coordinate of the current fragment. For example, a point is always square and its size is given in pixels, so a size of 16 represents a point covering 16 x 16 pixels. gl_PointCoord is in the range 0.0 - 1.0, exactly like a texture coordinate. The most important thing about the built-in output variables is that what counts is their final values. You can change the value of gl_Position several times in a VSH; the final position will be the last value written. The same is true for gl_FragColor. The following table shows the built-in variables and their data types:

Built-In Variable / Precision / Data Type

Vertex Shader Built-In Variables
gl_Position: highp, vec4
gl_PointSize: mediump, float

Fragment Shader Built-In Variables
gl_FragColor: mediump, vec4
gl_FrontFacing: (no precision), bool
gl_FragCoord: mediump, vec4
gl_PointCoord: mediump, vec2

Now it is time to construct a real shader. The following code constructs a Vertex Shader and a Fragment Shader which use two texture maps. Let's start with the VSH.

precision mediump float;
precision lowp int;

uniform mat4 u_mvpMatrix;

attribute vec4 a_vertex;
attribute vec2 a_texture;

varying vec2 v_texture;

void main()
{
    // Passes the texture coordinate attribute to a varying.
    v_texture = a_texture;

    // Here we set the final position of this vertex.
    gl_Position = u_mvpMatrix * a_vertex;
}

And now the corresponding FSH:

precision mediump float;
precision lowp int;

uniform sampler2D u_maps[2];

varying vec2 v_texture;

void main()
{
    // Here we set the diffuse color of the fragment.
    gl_FragColor = texture2D(u_maps[0], v_texture);

    // Now we use the second texture to create an ambient color.
    // The ambient color doesn't affect the alpha channel and changes
    // less than half the natural color of the fragment.
    gl_FragColor.rgb += texture2D(u_maps[1], v_texture).rgb * 0.4;
}

Great, now it is time to go back to the OpenGL API and prepare our Attributes and Uniforms. Remember that we don't have direct control over the Varyings, so we must set an Attribute to be sent to a Varying during the VSH execution.

Setting the Attributes and Uniforms

To identify any variable inside the shaders, the Program Object defines locations for its variables (a location is just an index). Once you know the final location of an Attribute or Uniform, you can use that location to set its value. To set up a Uniform, OpenGL gives us only one way: after the linking, we retrieve the location of the desired Uniform based on its name inside the shaders. To set up the Attributes, OpenGL gives us two ways: we can retrieve the location after the program has been linked, or we can define the location before the program is linked. I'll show you both ways anyway, but setting the locations before the linking process is useless, and you'll understand why. So let's start with this useless method. Do you remember that exception to the rule of glBindSomething = "take a container"? OK, here it is. To set an attribute location before the program is linked, we use a function which starts with glBindSomething, but in reality the OpenGL Port Crane doesn't take any container at this time. Here the "bind" word is related to a process inside the Program Object, the process of making a connection between an attribute name and a location inside the program. So, the function is:

Setting Attribute Location before the linkage

GLvoid glBindAttribLocation(GLuint program, GLuint index, const GLchar* name)

program: The program name/id generated by the glCreateProgram function.
index: The location we want to set.
name: The name of the attribute inside the vertex shader.

The above method must be called after you create the Program Object, but before you link it. This is the first reason why I discourage doing this: it's an extra step in the middle of the Program Object creation. Obviously you can choose the best way for your application; I prefer the next one.
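Just as a minimal sketch of this before-linking approach, assuming vsh and fsh are the compiled shaders from the earlier example and "a_vertex" is the attribute from the vertex shader shown before:

GLuint program = glCreateProgram();

glAttachShader(program, vsh);
glAttachShader(program, fsh);

// The location must be bound before the link happens.
glBindAttribLocation(program, 0, "a_vertex");

glLinkProgram(program);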

Now let's see how to get the locations of Attributes and Uniforms after the linking process. Whichever way you choose, you must hold the location of each shader variable in your application, because you will need these locations to set their values later on. Here are the functions to use after the linking:

Getting Attribute and Uniform Location

GLint glGetAttribLocation(GLuint program, const GLchar* name)

program: The program name/id generated by the glCreateProgram function.
name: The attribute's name inside the vertex shader.

GLint glGetUniformLocation(GLuint program, const GLchar* name)

program: The program name/id generated by the glCreateProgram function.
name: The uniform's name inside the shaders.

Once we have the locations of our attributes and uniforms, we can use these locations to set the values we want. OpenGL gives us 28 different functions to set the values of our attributes and uniforms. Those functions are separated into groups which let you define constant values (uniforms or attributes) or dynamic values (attributes only). To use dynamic attributes you need to enable them for a while. You could ask what the difference is between the uniforms, which are always constant, and the constant attributes. Well, the answer is: good question! Just like the culling GL_FRONT_AND_BACK, this is one thing I can't understand why OpenGL keeps around. There is no real difference in the performance of uniforms and constant attributes, nor in memory size or that kind of impact. So my big advice here is: leave the attributes for dynamic values only! If you have a constant value, use the uniforms! Plus, there are two things which make the uniforms the best choice for constant values: uniforms can be used 128 times in the vertex shader while attributes can be used just 8 times, and the other reason is that attributes can't be arrays. I'll explain this fact later on. For now, although by default OpenGL does use the attributes as constants, they were not made for this purpose, they were made to be dynamic. Anyway, I'll show how to set the dynamic attributes, the uniforms and even the useless constant attributes. Uniforms can be used with any of the data types and can even be a structure or an array of any of those. Here are the functions to set the uniform values:

Defining the Uniforms Values

GLvoid glUniform{1234}{if}(GLint location, T value[N])

location: The uniform location retrieved by the glGetUniformLocation function.
value[N]: The value you want to set, typed according to the last letter of the function name, i = GLint, f = GLfloat. You must repeat this parameter N times, according to the number specified in the function name {1234}.

GLvoid glUniform{1234}{if}v(GLint location, GLsizei count, const T* value)

location: The uniform location retrieved by the glGetUniformLocation function.
count: The length of the array which you are setting. This will be 1 if you want to set only a single uniform. Values greater than 1 mean you want to set the values of an array.
value: A pointer to the data you want to set. If you are setting vector uniforms (vec3, for example), each set of 3 values will represent one vec3 in the shaders. The data type of the values must match the letter in the function name, i = GLint, f = GLfloat.

GLvoid glUniformMatrix{234}fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)

location: The uniform location retrieved by the glGetUniformLocation function.
count: The number of matrices which you are setting. This will be 1 if you want to set only one mat{234} in the shader. Values greater than 1 mean you want to set the values of an array of matrices; arrays are declared as mat{234}["count"] in the shaders.
transpose: This parameter must be GL_FALSE; it is an internal convention kept just for compatibility with the desktop version.
value: A pointer to your data.

Many questions, I know... Let me explain it step by step. The above table describes exactly 19 OpenGL functions. The notation {1234} means you have to write one of these numbers in the function name, followed by {if}, which means you have to choose one of those letters to write the function name, and the final "v" or "fv" means you have to write one of the two anyway. The [N] in the parameter means you have to repeat that parameter according to the number {1234} in the function name. Here is the complete list of the 19 functions:

glUniform1i(GLint location, GLint x)
glUniform1f(GLint location, GLfloat x)
glUniform2i(GLint location, GLint x, GLint y)
glUniform2f(GLint location, GLfloat x, GLfloat y)
glUniform3i(GLint location, GLint x, GLint y, GLint z)
glUniform3f(GLint location, GLfloat x, GLfloat y, GLfloat z)
glUniform4i(GLint location, GLint x, GLint y, GLint z, GLint w)
glUniform4f(GLint location, GLfloat x, GLfloat y, GLfloat z, GLfloat w)
glUniform1iv(GLint location, GLsizei count, const GLint* v)
glUniform1fv(GLint location, GLsizei count, const GLfloat* v)
glUniform2iv(GLint location, GLsizei count, const GLint* v)
glUniform2fv(GLint location, GLsizei count, const GLfloat* v)
glUniform3iv(GLint location, GLsizei count, const GLint* v)
glUniform3fv(GLint location, GLsizei count, const GLfloat* v)
glUniform4iv(GLint location, GLsizei count, const GLint* v)
glUniform4fv(GLint location, GLsizei count, const GLfloat* v)
glUniformMatrix2fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)
glUniformMatrix3fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)
glUniformMatrix4fv(GLint location, GLsizei count, GLboolean transpose, const GLfloat* value)

Wow!!! From this perspective it can seem like many functions to learn, but trust me, it's not! I prefer to look at that table. If I want to set a single uniform which is not of a matrix data type, I use glUniform{1234}{if} according to what I want: 1 = float/bool/int, 2 = vec2/bvec2/ivec2, 3 = vec3/bvec3/ivec3 and 4 = vec4/bvec4/ivec4. Very simple! If I want to set an array, I just place a "v" (of vector) at the end of my last reasoning, so I'll use glUniform{1234}{if}v. And finally, if what I want is to set a matrix data type, being an array or not, I surely will use glUniformMatrix{234}fv according to what I want: 2 = mat2, 3 = mat3 and 4 = mat4. To define an array, remember that the length of your array must be informed to one of the above functions through the parameter count. Seems simpler now, right?
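For example, here is a minimal sketch of that reasoning, assuming a hypothetical uniform declared as "uniform vec4 u_color;" in one of the shaders and a program already linked and in use:

GLint colorLocation = glGetUniformLocation(_program, "u_color");
GLfloat color[4] = {1.0f, 0.5f, 0.0f, 1.0f};

// A single vec4, so glUniform4fv with count = 1 (or glUniform4f directly).
glUniform4fv(colorLocation, 1, color);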

This is all about how to set a uniform in the shaders. Remember two important things: the same uniform can be used by both shaders; to do this, just declare it in both. And the second thing is the most important: the uniforms will be set on the program object currently in use, so you MUST start using a program before setting its uniform and attribute values. You remember how to use a program object, right? Just call glUseProgram informing the desired name/id. Now let's see how to set up the values of the attributes. Attributes can be used only with the data types float, vec2, vec3, vec4, mat2, mat3 and mat4. Attributes cannot be declared as arrays or structures. Following are the functions to define the attribute values.

Defining the Attributes Values

GLvoid glVertexAttrib{1234}f(GLuint index, GLfloat value[N])

index: The attribute's location retrieved by the glGetAttribLocation function or defined with glBindAttribLocation.
value[N]: The value you want to set. You must repeat this parameter N times, according to the number specified in the function name {1234}.

GLvoid glVertexAttrib{1234}fv(GLuint index, const GLfloat* values)

index: The attribute's location retrieved by the glGetAttribLocation function or defined with glBindAttribLocation.
values: A pointer to an array containing the values you want to set. Only the necessary elements in the array will be used; for example, when setting a vec3, if you inform an array of 4 elements, only the first three elements will be used. If the shader needs to fill values automatically, it will use the identity of vec4 (x = 0, y = 0, z = 0, w = 1); for example, when setting a vec3, if you inform an array of 2 elements, the third element will be filled with the value 0. For matrices, the auto fill will use the identity matrix.

GLvoid glvertexattribpointer(gluint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid* ptr) index: The attribute's location retrieved by theglgetattriblocation function or defined withglbindattriblocation. size: This is the size of each element. Here the values can be: o 1: to set up float in shader. o 2: to set up vec2 in shader. o 3: to set up vec3 in shader. o 4: to set up vec4 in shader. type: Specify the OpenGL data type used in the informed array. Valid values are: o GL_BYTE o GL_UNSIGNED_BYTE o GL_SHORT o GL_UNSIGNED_SHORT o GL_FIXED o GL_FLOAT normalized: If set to true (GL_TRUE) this will normalize the non-floating point data type. The normalize process will place the converted float number in the range 0.0-1.0. If this is set to false (GL_FALSE) the non-floating point data type will be converted directly to floating points. stride: It means the interval of elements in the informed array. If this is 0, then the array elements will be used sequentially. If this value is greater than 0, the elements in the array will be used respecting this stride. This values must be in the basic machine units (bytes). ptr: The pointer to an array containing your data. The above table has the same rules as the uniforms notations. This last table have 9 functions described, which 8 are to set constant values and only one function to set dynamic values. The function to set dynamic values is glvertexattribpointer. Here are the complete list of functions: uniform 때와같은규칙임. 9 개의함수다. 8 개는상수값을넣는함수고, 하나만 dynamic 값을넣는함수다. (glvertexattribpointer 만 dynamic 용이다 ) glvertexattrib1f(gluint index, GLfloat x) glvertexattrib2f(gluint index, GLfloat x, GLfloat y) glvertexattrib3f(gluint index, GLfloat x, GLfloat y, GLfloat z) glvertexattrib4f(gluint index, GLfloat x, GLfloat y, GLfloat z, GLfloat w) glvertexattrib1fv(gluint index, const GLfloat* values) glvertexattrib2fv(gluint index, const GLfloat* values) glvertexattrib3fv(gluint index, const GLfloat* values) glvertexattrib4fv(gluint index, const GLfloat* values) glvertexattribpointer(gluint index, GLint size, GLenum type, GLboolean normalized, GLsizei stride, const GLvoid* ptr) The annoying thing here is the constant value is the default behavior to the shaders, if you want to use dynamic values to attributes you will need to temporarily enable this feature. Dynamic values will be set as per-vertex. You must to use the following functions to enable and disable the dynamic values behavior:

Variable Values Feature

GLvoid glEnableVertexAttribArray(GLuint index)

index: The attribute's location retrieved by the glGetAttribLocation function or defined with glBindAttribLocation.

GLvoid glDisableVertexAttribArray(GLuint index)

index: The attribute's location retrieved by the glGetAttribLocation function or defined with glBindAttribLocation.

So, before using glVertexAttribPointer to define per-vertex values for the attributes, you must enable the location of the desired attribute to accept dynamic values by using glEnableVertexAttribArray. For the pair of VSH and FSH shown earlier, we could use the following code to set up their values:

// Assume that _program was defined earlier in another code example.
GLuint mvpLoc, mapsLoc, vertexLoc, textureLoc;

// Gets the locations of the uniforms.
mvpLoc = glGetUniformLocation(_program, "u_mvpMatrix");
mapsLoc = glGetUniformLocation(_program, "u_maps");

// Gets the locations of the attributes.
vertexLoc = glGetAttribLocation(_program, "a_vertex");
textureLoc = glGetAttribLocation(_program, "a_texture");

//...
// Later, at render time...
//...

// Remember: the uniforms and attributes below affect the program in use.
glUseProgram(_program);

// Sets the ModelViewProjection Matrix.
// Assume that the "matrix" variable is an array with
// 16 elements defined, matrix[16].
glUniformMatrix4fv(mvpLoc, 1, GL_FALSE, matrix);

// Assume that _texture1 and _texture2 are two texture names/ids.
// The order is very important: first you activate
// the texture unit and then you bind the texture name/id.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _texture1);

glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _texture2);

// The {0,1} correspond to the activated texture units.
GLint textureUnits[2] = {0, 1};

// Sets the texture units into a uniform (the sampler array u_maps[2]).
glUniform1iv(mapsLoc, 2, textureUnits);

// Enables the following attributes to use dynamic values.
glEnableVertexAttribArray(vertexLoc);
glEnableVertexAttribArray(textureLoc);

// Assume that the "vertexArray" variable is an array of vertices
// composed of several sequences of 3 elements (X,Y,Z).
// Something like {0.0,0.0,0.0, 1.0,2.0,1.0, -1.0,-2.0,-1.0, ...}
glVertexAttribPointer(vertexLoc, 3, GL_FLOAT, GL_FALSE, 0, vertexArray);

// Assume that "textureArray" is an array of texture coordinates
// composed of several sequences of 2 elements (S,T).
// Something like {0.0,0.0, 0.3,0.2, 0.6,0.3, 0.3,0.7, ...}
glVertexAttribPointer(textureLoc, 2, GL_FLOAT, GL_FALSE, 0, textureArray);

// Assume that "indexArray" is an array of indices.
// Something like {1,2,3, 1,3,4, 3,4,5, 3,5,6, ...}
glDrawElements(GL_TRIANGLES, 64, GL_UNSIGNED_SHORT, indexArray);

// Disables the vertex attributes.
glDisableVertexAttribArray(vertexLoc);
glDisableVertexAttribArray(textureLoc);

I enabled and disabled the dynamic values for the attributes just to show you how to do it. As I said before, enabling and disabling features in OpenGL are expensive tasks, so you may want to enable the dynamic values for the attributes only once, maybe at the time you get their locations, for example. I prefer to enable them once.

Using the Buffer Objects

Using the buffer objects is very simple! All that you need is to bind the buffer objects again. Do you remember that the buffer object hook is a double one? So you can bind a GL_ARRAY_BUFFER and a GL_ELEMENT_ARRAY_BUFFER at the same time. Then you call glDraw* informing the starting index inside the buffer object at which you want to begin. You'll need to inform the start index instead of an array of data, so the start number must be a pointer to void. The start index must be in basic machine units (bytes). For the above code of attributes and uniforms, you could do something like this:

GLuint arrayBuffer, indicesBuffer;

// Generates the names/ids for the buffers.
glGenBuffers(1, &arrayBuffer);
glGenBuffers(1, &indicesBuffer);

// Assume we are using the best practice of storing all information about
// the object in a single array: vertices and texture coordinates.
// So we would have an array of {x,y,z,s,t, x,y,z,s,t, ...}
// This will be the content of our "arrayBuffer".
// For the "indicesBuffer" we use a
// simple array {1,2,3, 1,3,4, ...}
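// (Sketch) The example assumes the buffers were already filled with data.
// One hedged illustration of how that upload could look, where
// interleavedArray and indexArray are hypothetical arrays holding the
// data described in the comments above:
//
// glBindBuffer(GL_ARRAY_BUFFER, arrayBuffer);
// glBufferData(GL_ARRAY_BUFFER, sizeof(interleavedArray), interleavedArray, GL_STATIC_DRAW);
//
// glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer);
// glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indexArray), indexArray, GL_STATIC_DRAW);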

//...
// Proceed with retrieving the attributes and uniforms locations.
//...

//...
// Later, at render time...
//...

//...
// Uniforms definitions
//...

glBindBuffer(GL_ARRAY_BUFFER, arrayBuffer);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesBuffer);

int fsize = sizeof(float);
GLsizei stride = 5 * fsize;
void *void0 = (void *) 0;
void *void3 = (void *) (3 * fsize);

glVertexAttribPointer(vertexLoc, 3, GL_FLOAT, GL_FALSE, stride, void0);
glVertexAttribPointer(textureLoc, 2, GL_FLOAT, GL_FALSE, stride, void3);

glDrawElements(GL_TRIANGLES, 64, GL_UNSIGNED_SHORT, void0);

If you are using an OOP language you could create elegant structures with the concepts of buffer objects and attributes/uniforms. OK, those are the basic concepts and instructions about the shaders and program objects. Now let's go to the last part (finally)! Let's see how to conclude the render using the EGL API.

Rendering

I'll show the basic kind of render, a render to the device's screen. As you noticed before in this series of tutorials, you could render to an off-screen surface like a frame buffer or a texture and then save it to a file, or create an image on the device's screen, whatever you want.

Pre-Render

I like to think of the rendering as two steps. The first is the Pre-Render; in this step you need to clean any vestige of the last render. This is important because the contents of the frame buffers are preserved. You remember what a frame buffer is, right? A collection of images from render buffers. So when you make a complete render, the images in the render buffers stay alive even after the final image has been presented to its desired surface. What the Pre-Render step does is just clean up all the render buffers, unless you want, for some reason, to reuse the previous image in the render buffers. Cleaning up the frame buffer is very simple. This is the function you will use:

Clearing the Render Buffers

GLvoid glClear(GLbitfield mask)

mask: The mask represents the buffers you want to clean. This parameter can be:
o GL_COLOR_BUFFER_BIT: To clean the Color Render Buffer.
o GL_DEPTH_BUFFER_BIT: To clean the Depth Render Buffer.
o GL_STENCIL_BUFFER_BIT: To clean the Stencil Render Buffer.

As you know well by now, every instruction related to one of the Port Crane Hooks will affect the last object bound. So before calling the function above, make sure you have bound the desired frame buffer. You can clean several buffers at once: as the mask parameter is a bit field, you can combine the values with the bitwise OR operator "|". Something like this:

glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

OpenGL also gives us other functions for the clean-up, but the function above is good enough for any case. The Pre-Render step should happen before any glDraw* calls. Once the render buffers are clean, it's time to draw your 3D objects. The next step is the drawing phase, but it is not one of the two render steps I mentioned before; it is just the drawing.

Drawing

I've shown it several times before in this tutorial, but now it's time to drill deep inside it. The triggers for drawing in OpenGL are two functions:

GLvoid glDrawArrays(GLenum mode, GLint first, GLsizei count)

mode: This parameter specifies which primitive will be rendered and how its structure is organized. This parameter can be:
 o GL_POINTS: Draw points. Points are composed of single sequences of 3 values (x,y,z).
 o GL_LINES: Draw lines. Lines are composed of two sequences of 3 values (x,y,z / x,y,z).
 o GL_LINE_STRIP: Draw lines forming a strip. Lines are composed of two sequences of 3 values (x,y,z / x,y,z).
 o GL_LINE_LOOP: Draw lines closing a loop. Lines are composed of two sequences of 3 values (x,y,z / x,y,z).
 o GL_TRIANGLES: Draw triangles. Triangles are composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).
 o GL_TRIANGLE_STRIP: Draw triangles forming a strip. Triangles are composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).
 o GL_TRIANGLE_FAN: Draw triangles forming a fan. Triangles are composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).

first: Specifies the starting index in the enabled vertex arrays.

count: Represents the number of vertices to be drawn. This is very important: it is the number of vertex elements, not the number of elements in the array of vertices, so take care not to confuse the two. For example, if you are drawing a single triangle this should be 3, because a triangle is formed by 3 vertices. But if you are drawing a square (composed of two triangles) this should be 6, because it is formed by two sequences of 3 vertices, a total of 6 elements, and so on.
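As a quick sketch of how first and count work together, assuming the interleaved array and the attribute pointers from the buffer object example are already set up:

// Draw one triangle: start at the first vertex (first = 0) and consume 3 vertices.
glDrawArrays(GL_TRIANGLES, 0, 3);

// Draw a square made of two triangles: still first = 0, but count = 6,
// because 6 vertex elements must pass through the vertex shader.
glDrawArrays(GL_TRIANGLES, 0, 6);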

GLvoid glDrawElements(GLenum mode, GLsizei count, GLenum type, const GLvoid* indices)

mode: This parameter specifies which primitive will be rendered and how its structure is organized. This parameter can be:
 o GL_POINTS: Draw points. Points are composed of single sequences of 3 values (x,y,z).
 o GL_LINES: Draw lines. Lines are composed of two sequences of 3 values (x,y,z / x,y,z).
 o GL_LINE_STRIP: Draw lines forming a strip. Lines are composed of two sequences of 3 values (x,y,z / x,y,z).
 o GL_LINE_LOOP: Draw lines closing a loop. Lines are composed of two sequences of 3 values (x,y,z / x,y,z).
 o GL_TRIANGLES: Draw triangles. Triangles are composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).
 o GL_TRIANGLE_STRIP: Draw triangles forming a strip. Triangles are composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).
 o GL_TRIANGLE_FAN: Draw triangles forming a fan. Triangles are composed of three sequences of 3 values (x,y,z / x,y,z / x,y,z).

count: Represents the number of vertices to be drawn. This is very important: it is the number of vertex elements, not the number of elements in the array of vertices, so take care not to confuse the two. For example, if you are drawing a single triangle this should be 3, because a triangle is formed by 3 vertices. But if you are drawing a square (composed of two triangles) this should be 6, because it is formed by two sequences of 3 vertices, a total of 6 elements, and so on.

type: Represents the OpenGL data type used in the array of indices. This parameter can be:
 o GL_UNSIGNED_BYTE: To indicate a GLubyte.
 o GL_UNSIGNED_SHORT: To indicate a GLushort.

indices: Specifies a pointer to the array of indices or, if a buffer object is bound to GL_ELEMENT_ARRAY_BUFFER, a byte offset into that buffer (the "pointer to void" starting index discussed earlier).

Many questions, I know. First let me introduce how these functions work. One of the most important things in the programmable pipeline is defined here: the number of times the VSH will be executed! This is done by the parameter count. So if you specify 128, the program currently in use will process its VSH 128 times. Of course, the GPU will optimize this process as much as possible, but in general words your VSH will be processed 128 times to process all your defined attributes and uniforms.

And why did I say to take care about the difference between the number of vertex elements and the number of elements in the array of vertices? It's simple: you could have an array of vertices with 200 elements, but for some reason you want to construct just a triangle at this time, so count will be 3 instead of 200. This is even more useful if you are using an array of indices. You could have 8 elements in the array of vertices while the array of indices specifies 24 elements; in this case the parameter count will be 24. In general words, it's the number of vertex elements you want to draw.

If you are using glDrawArrays, the first parameter works like an initial stride into your per-vertex attributes. So if you set it to 2, for example, the values in the vertex shader will start at index 2 of the array you specified in glVertexAttribPointer, instead of starting at 0 as by default. If you are using glDrawElements, the starting offset (the indices parameter) works like an initial stride into the array of indices, not directly into your per-vertex values.

The type identifies the data type of the indices; in reality it's an optimization hint. If your indices never go above 255, it's a very good idea to use GL_UNSIGNED_BYTE. Some implementations of OpenGL also support a third data type, GL_UNSIGNED_INT, but this is not very common.

OK, now let's talk about the construction modes, defined in the mode parameter. It's a hint to use in the construction of your meshes, but the modes are not equally useful for every kind of mesh. The following images could help you to understand:
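To make the 8-vertices/24-indices situation above concrete, here is a sketch of the corresponding draw call, assuming the array of indices is already bound to GL_ELEMENT_ARRAY_BUFFER as in the earlier example:

// The array of vertices has only 8 elements, but the array of indices
// references 24 of them, so count is 24. Every index value fits in one byte
// (0-7), so GL_UNSIGNED_BYTE is the cheapest choice for the type parameter.
glDrawElements(GL_TRIANGLES, 24, GL_UNSIGNED_BYTE, (void *) 0);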

The image above shows what happens when we draw using one of the line drawing modes. All the drawing in the image was made with the sequence {v0,v1,v2,v3,v4,v5} as the array of vertices, supposing that each vertex has unique x,y,z coordinates. As I told you before, the only mode compatible with any kind of line drawing is GL_LINES; the other modes are optimizations for specific situations. Optimization? Yes, look: using GL_LINES the number of drawn lines was 3, using GL_LINE_STRIP the number of drawn lines was 5, and with GL_LINE_LOOP it was 6, always using the same array of vertices and the same number of VSH loops. The drawing modes for triangles are similar, look:
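In code, the three results above come from the same vertex array and differ only in the mode parameter. A sketch, assuming the 6 vertices are already bound and pointed as before:

glDrawArrays(GL_LINES, 0, 6);      // 3 separate lines: v0-v1, v2-v3, v4-v5
glDrawArrays(GL_LINE_STRIP, 0, 6); // 5 connected lines: v0-v1, v1-v2, v2-v3, v3-v4, v4-v5
glDrawArrays(GL_LINE_LOOP, 0, 6);  // 6 lines: the strip above plus v5-v0, closing the loop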

The image above shows what happens when we draw using one of the triangle drawing modes. All the indicated drawings in the image were made with the sequence {v0,v1,v2,v3,v4,v5} as the array of vertices, supposing that each vertex has unique x,y,z coordinates. Here again it's the same thing: only the basic GL_TRIANGLES is useful for any kind of mesh; the other modes are optimizations for specific situations. Using GL_TRIANGLE_STRIP we need to reuse the last formed edge, so in the example above the triangles are formed as if we had drawn with an array of indices like {0,1,2, 0,2,3, 3,2,4, ...}. Using GL_TRIANGLE_FAN we must always return to the first vertex; in the image we could use {0,1,2, 0,2,3, 0,3,4, ...} as our array of indices. My advice here is to use GL_TRIANGLES and GL_LINES as much as possible. The optimization gain of STRIP, LOOP and FAN can be achieved by optimizing the OpenGL drawing in other areas, with other techniques, like reducing the number of polygons in your meshes or optimizing your shader processing.
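To close, a sketch comparing the two approaches for the fan arrangement described above (GL_UNSIGNED_BYTE is assumed for the index type, and the index values follow the image):

// Plain GL_TRIANGLES with an array of indices {0,1,2, 0,2,3, 0,3,4, 0,4,5}:
// 4 triangles, so count is 12.
glDrawElements(GL_TRIANGLES, 12, GL_UNSIGNED_BYTE, (void *) 0);

// GL_TRIANGLE_FAN needs no repeated indices: the 6 vertices are consumed in
// order and every triangle returns to v0, producing the same 4 triangles.
glDrawArrays(GL_TRIANGLE_FAN, 0, 6);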