I've looked all over, but I haven't been able to find any information on this. What I'm looking for is a general overview of what CEGUI is, what Falagard is, how skinning works, how widgets get drawn by a graphics engine (say, Ogre), and how all those parts relate to each other.
I find lots of very nice tutorials on installing everything and writing code that gets things drawn, but I'm looking for a more big picture outlook.
General overview
This is the best source of information for you:
http://www.cegui.org.uk/wiki/index.php/Tutorials
KungFooMasta
I'd also recommend http://www.cegui.org.uk/wiki/index.php/CodeSnippets
Sure, they are a bit more advanced, but they show how CEGUI can be used at a higher level. Widget Galore presents nearly every widget and shows how to get and set values within each. Then you'd need to learn more about events, i.e. how to know when a particular radio button is selected within a radio group.
Thanks, but as I mentioned before, it's not a lack of tutorials or code that's the problem. I'm trying to understand things like how the widgets listed in Widget Galore get translated into a picture that I display to the user, how the widgets interact, etc.
That is, I'm not looking for information on setting up a widget page, or sending information to it from the input system. I'm looking more for a general overview of how the input received is pushed to the various widgets, how the widgets as logical entities get translated to pictures for the user to see, etc. A big picture outlook.
Something like what Pro OGRE 3D Programming does for Ogre. Though obviously I don't expect to find something of that quality; I'm looking for something light on code and heavy on pictures.
Hope I've been clear. You're basically handing me a cookbook when I want a culinary science textbook, if you catch my drift.
Pompei2 wrote:I am not sure that such a document currently exists. There is one big PDF file about the Falagard system, but I can't remember where.
Maybe someone could give me a quick overview then? I prefer to have a big picture understanding of a library before I begin involving myself in the details.
Levia wrote:Falagard docs: http://www.cegui.org.uk/FalDocs/
Thanks Levia, this is closer to what I'm looking for, but it's still missing the mark a bit for me. Maybe if I start with what I know, people can fill in the gaps?
As I understand it, CEGUI strictly by itself is a collection of widget classes that can operate with themselves and each other to form a GUI. CEGUI (magically? Maybe voodoo? It's never clear) manages to get its widgets rendered in various graphics APIs. The user "injects" input into CEGUI, which sends that data to all the widgets. From what I gather, widgets are organized into groups called pages, which I guess are flipped between like in a book (using magic?)
Some (Wiccan?) plugin called Falagard allows XML specification of pages, and probably a bunch of other fun things.
I have no idea how events, in the classic GUI sense of the word, get propagated to members. I have no idea how widgets are built. For instance, in another GUI library I'm familiar with, FoxGUI, widgets are built up by inheriting from more basic classes, such as the FXWindow class.
While I'm on the subject of FoxGUI, this is exactly the kind of thing I'm looking for as an intro to CEGUI. I found that page quickly brought me up to speed on how FoxGUI operated.
I have no idea how different visual styles are interpreted to or from the core library (for instance, how buttons are spaced given that different visual styles may have different sized buttons). I have no idea how the widgets get converted from logical entity to physical construct on the user's screen. I have no idea what sort of art assets a GUI uses, or how they're used.
There's probably a whole host of other details I don't even know I don't know. If someone could take what I have so far, and fill in some of the details, I'd be closer to a proper understanding of what's going on.
Read every link on the Wiki's Main Page: FAQ, tutorials, etc. Of particular interest are the Layout Editor and Widget Galore. The latter is particularly useful as it shows how to load a .layout file, retrieve pointers to the various widgets, and perform the most basic functions: getting and setting values.
On to the concepts part now. Many graphics engines have a CEGUI renderer module; have a look at Window System Examples as well as Ogre. CEGUI itself provides DirectX and OpenGL rendering modules (which are used by the samples).
You inject mouse movement, mouse clicks, key presses, and time pulses (tooltips use those) into CEGUI. I'm not exactly certain how the rendering is actually started, but with Ogre's CEGUI renderer I'm guessing that it subscribes to Ogre's frameStarted event and draws the CEGUI content (it's on a low rendering priority/queue, which makes it draw near the end of the frame).
The GUI seems alive due to a cascade of events: injecting a mouse click could trigger a FrameWindow to gain focus, changing its Z-order value so it appears on top of other windows, or it could activate a pushbutton. You can subscribe to widget events and code an appropriate response, such as reacting to a mouse click on a pushbutton and opening/closing a window.
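To make that cascade concrete, here is a tiny self-contained sketch (invented types, not the real CEGUI API) of how a single injected click can both pick the topmost window under the cursor and bump it to the front of the draw order:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

struct Win {
    std::string name;
    int x, y, w, h;                        // screen-space rectangle
    bool contains(int px, int py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

struct Gui {
    // Windows stored back-to-front; the last element is drawn last,
    // so it appears on top.
    std::vector<Win> zOrder;

    // Returns the name of the window that was activated, or "" if the
    // click fell through to the 3D scene.
    std::string injectMouseClick(int px, int py) {
        // Hit-test from the top of the Z-order downwards.
        for (int i = (int)zOrder.size() - 1; i >= 0; --i) {
            if (zOrder[i].contains(px, py)) {
                // Bring the hit window to the top (end of the vector).
                std::rotate(zOrder.begin() + i, zOrder.begin() + i + 1,
                            zOrder.end());
                return zOrder.back().name;
            }
        }
        return "";
    }
};
```

With two overlapping windows `A` and `B`, clicking the overlap activates whichever is currently on top, and clicking a buried window raises it, which is the "gain focus / change Z-order" behaviour described above.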
Hrm, I'm not sure what else to write...time to zzz.
AFAIK, the events are just some kind of callback functions that are registered. Here are the things I noticed while debugging code; it's not guaranteed to be true!
You have your main loop, in which you inject actions like mouse movement, clicks, etc. These functions check what changes this produces in the GUI. If a mouse click happens on a button, your code enters the inject-mouse-click function, which does something and then calls your function associated with this event, while it is still inside the injection function (so there is no multithreading). I hope I was clear.
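A minimal sketch of that flow (assumed names, not real CEGUI code): the handler registered for a button fires *inside* the injection call, on the same thread and the same stack — no event queue, no threads.

```cpp
#include <cassert>
#include <functional>

struct Button {
    int x, y, w, h;
    std::function<void()> onClicked;      // the "registered callback"
    bool contains(int px, int py) const {
        return px >= x && px < x + w && py >= y && py < y + h;
    }
};

// Returns true if the click was consumed by the button. Note that the
// callback has already run and returned by the time this function
// returns -- exactly the single-threaded behaviour described above.
bool injectMouseButtonDown(Button& b, int px, int py) {
    if (b.contains(px, py)) {
        if (b.onClicked) b.onClicked();   // synchronous call, same stack
        return true;
    }
    return false;
}
```

So any work your handler does (opening a window, changing a value) is finished before your main loop moves on to the next injected input.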
And oh, the thing with the sheets/pages: don't imagine it like a book, but more like a desktop, where you have transparent sheets (I forgot the word) lying one on top of the other, each containing a bunch of widgets/windows.
As in many (maybe all) GUI libraries, all widgets are in fact windows.
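A toy sketch of that "everything is a window" idea (invented types, not CEGUI's actual classes): widgets form a tree, and each child stores its position relative to its parent, so moving a frame moves everything inside it.

```cpp
#include <cassert>
#include <memory>
#include <utility>
#include <vector>

struct Window {
    int relX = 0, relY = 0;                    // position relative to parent
    Window* parent = nullptr;
    std::vector<std::unique_ptr<Window>> children;

    Window* addChild(int x, int y) {
        auto c = std::make_unique<Window>();
        c->relX = x;
        c->relY = y;
        c->parent = this;
        children.push_back(std::move(c));
        return children.back().get();
    }

    // Walk up the tree to resolve the absolute screen position.
    std::pair<int, int> absolutePosition() const {
        int ax = relX, ay = relY;
        for (const Window* p = parent; p; p = p->parent) {
            ax += p->relX;
            ay += p->relY;
        }
        return {ax, ay};
    }
};
```

The root of the tree plays the role of the "sheet" from the previous paragraph: a full-screen invisible window whose children are your actual frames and buttons.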
How these widgets are drawn to the screen is defined using the LookNFeel files, but I don't know how these work.
Edit: You should also take a look at the Creating Skins wiki page.
Okay, now we're getting somewhere.
I think a lot of my conceptual problem is understanding where CEGUI ends and Falagard begins. Let's pretend for a moment I don't use Falagard: I strip CEGUI of Falagard and just use vanilla CEGUI.
There are all sorts of art assets that CEGUI uses to build the GUI's pictures. I'd like some information on how this works. Specifically, I'm interested in understanding how abstracted the process is.
My second question concerns positions. I can imagine two different "skins" for the widgets that would cause different absolute spacing. I can imagine stretching and shrinking the screen causing issues with spacing. How "smart" is CEGUI in handling these sorts of issues?
Last, other than allowing XML skinning, what other features does Falagard add to the core? In what ways does it change things?
Oh, I never looked into vanilla CEGUI without Falagard, so I can't tell you.
As I'm currently learning how to use/customise/pimp the LookNFeel files, I can now tell you that the "skin" does not in any way change the layout, size, or position of any widget. It just draws the widget differently, within the same area, no matter which "skin" you use. The dimensions you specify in your layout are always respected by CEGUI.
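On the earlier question about resizing: CEGUI's "unified" coordinate system lets a layout cope with different screen sizes. Roughly (this is a simplified sketch, not the real class), each dimension is a {scale, offset} pair resolved against the parent's pixel size, so you can mix relative placement with fixed pixel margins:

```cpp
#include <cassert>

struct UDim {
    float scale;   // fraction of the parent dimension (0.0 - 1.0)
    float offset;  // fixed pixel adjustment added on top

    // Resolve to absolute pixels for a given parent size.
    float resolve(float parentSize) const {
        return scale * parentSize + offset;
    }
};
```

For example, a width of {0.25, -10} means "a quarter of my parent, minus a 10-pixel margin": on an 800-pixel-wide parent that resolves to 190 pixels, and it shrinks or stretches automatically when the parent is resized.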