A full-blown manager might be overkill; all you really need is a single function that takes a CEGUI::Font pointer and a CEGUI::String as input. The function should get the currently available glyphs from the font, compare each codepoint in the string you're adding against what's currently available, append any codepoint not already in the set, and then submit the final set back to the font for creation:
Code:
void addFontGlyphs(CEGUI::Font* font, const CEGUI::String& new_glyphs)
{
    using namespace CEGUI;

    // get glyphs currently available.
    String glyphset(font->getAvailableGlyphs());

    // for each glyph in the given set which we may have to add
    for (String::size_type i = 0; i < new_glyphs.length(); ++i)
    {
        // see if the glyph is already defined for the font
        if (glyphset.find(new_glyphs[i]) == String::npos)
        {
            // glyph was not defined, so add it to the set which we
            // will submit back to the font.
            glyphset.append(new_glyphs[i]);
        }
    }

    // submit the new glyph set back to the font for creation.
    font->defineFontGlyphs(glyphset);
}
This is not the only approach you could take; it's just a simple example.
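For instance, you could wire it into wherever you set user-supplied text on a window. A minimal sketch (the setWindowText helper is hypothetical, and it assumes Window::getFont() returns the font the window is using):

Code:
void setWindowText(CEGUI::Window* wnd, const CEGUI::String& text)
{
    // hypothetical helper: make sure the font can render every
    // codepoint in 'text' before the window tries to draw it.
    addFontGlyphs(wnd->getFont(), text);
    wnd->setText(text);
}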
You did not say where you're getting your input from. If it is via Win32 messages, then the following needs to be considered.
Unicode in Win32 is not as wonderful as they'd have you believe, since you're mostly stuck with using a single UTF-16 code unit to represent any codepoint; this immediately chucks a whole load of supplementary-plane glyph sets straight out of the window.
If you want to use Unicode at all, then you must set up and compile your application as a Unicode application as opposed to an ANSI application. The difference is that with Unicode you'll get proper UTF-16 (or UTF-32, more on that in a minute) standard codepoint values, whereas with ANSI the values you get will come from some locale-determined code page and so will require translation into UTF-32 (which is beyond the scope of what I can go into here).
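In Visual C++ terms this usually just means making sure UNICODE (and _UNICODE) are defined before any Windows headers get included, or selecting the Unicode character set in the project settings; a minimal sketch:

Code:
// build as a Unicode app: define these before including <windows.h>
// (or set the equivalent character-set option in the project settings).
#define UNICODE
#define _UNICODE
#include <windows.h>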
Once you have the app set up properly, you have two options for getting these inputs: WM_CHAR and WM_UNICHAR.
WM_CHAR will give you a UTF-16 codepoint. I'm not entirely sure whether they use the low 16 bits only, or whether they use the full 32 bits, making use of surrogate pairs where required (the MS docs are particularly sketchy on this, so I assume you're stuck with a single 16-bit code unit). If surrogate pairs are used, though, then you'll need to detect and decode those (see http://www.unicode.org for info).
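If you do find surrogates arriving via WM_CHAR, decoding them is only a little arithmetic. A rough sketch for a window procedure (untested, and it assumes the high surrogate always arrives in its own WM_CHAR message first):

Code:
// surrogate ranges: high 0xD800-0xDBFF, low 0xDC00-0xDFFF.
static CEGUI::utf32 high_surrogate = 0;

case WM_CHAR:
    if (wParam >= 0xD800 && wParam <= 0xDBFF)
    {
        // first half of a pair; stash it until the low surrogate arrives.
        high_surrogate = static_cast<CEGUI::utf32>(wParam);
    }
    else
    {
        CEGUI::utf32 cp = static_cast<CEGUI::utf32>(wParam);

        if (wParam >= 0xDC00 && wParam <= 0xDFFF)
        {
            // second half; combine the pair into a single UTF-32 codepoint.
            cp = 0x10000 + ((high_surrogate - 0xD800) << 10) + (cp - 0xDC00);
        }

        CEGUI::System::getSingleton().injectChar(cp);
    }
    break;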
WM_UNICHAR, available from Windows XP, gives you a UTF-32 codepoint which can be passed directly to CEGUI.
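A minimal handler might look like this; note that Windows first probes with wParam set to UNICODE_NOCHAR to ask whether you handle the message, and you must return TRUE to that probe:

Code:
case WM_UNICHAR:
    // Windows sends UNICODE_NOCHAR first; returning TRUE says we
    // accept UTF-32 characters through this message.
    if (wParam == UNICODE_NOCHAR)
        return TRUE;

    // otherwise wParam is a full UTF-32 codepoint; hand it to CEGUI.
    CEGUI::System::getSingleton().injectChar(static_cast<CEGUI::utf32>(wParam));
    return 0;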
CE.