Wednesday, November 26, 2008

Shuttling Between UTF-8 and UTF-16

I stumbled upon a pair of very concise UTF-8/UTF-16 conversion functions in the 7-Zip LZMA SDK; they even skip the usual intermediate step of converting to Unicode code points. Unfortunately the SDK's own decompression facility could not satisfy the requirements of our game's loading system, because the 7-Zip archive format cannot extract an individual file from an archive with minimal resources.
The source code below is taken from the LZMA SDK, with my own extra error detection and comments added. Enjoy!


typedef unsigned char byte_t;

// Reference: http://en.wikipedia.org/wiki/Utf8
static const byte_t cUtf8Limits[] = {
0xC0, // Start of a 2-byte sequence
0xE0, // Start of a 3-byte sequence
0xF0, // Start of a 4-byte sequence
0xF8, // Start of a 5-byte sequence
0xFC, // Start of a 6-byte sequence
0xFE // Invalid: not defined by original UTF-8 specification
};

/*! Converting a string is usually a two-step process: first invoke utf8ToUtf16() with
dest equal to null so that it reports destLen (not including the null terminator),
then allocate a destination buffer of that size and call utf8ToUtf16() once
again to perform the actual conversion. You can skip the first call if you are sure
the destination buffer is large enough to hold the data.

\note Here we assume sizeof(wchar_t) == 2
\ref Modified from the 7-Zip LZMA SDK
*/
bool utf8ToUtf16(wchar_t* dest, size_t& destLen, const char* src, size_t maxSrcLen)
{
size_t destPos = 0, srcPos = 0;

while(true)
{
byte_t c; // Note that byte_t should be unsigned
size_t numAdds;

if(srcPos == maxSrcLen || src[srcPos] == '\0') {
if(dest && destLen != destPos) {
assert(false && "The provided destLen should equal what we calculated here");
return false;
}

destLen = destPos;
return true;
}

c = src[srcPos++];

if(c < 0x80) { // 0-127, US-ASCII (single byte)
if(dest)
dest[destPos] = (wchar_t)c;
++destPos;
continue;
}

if(c < 0xC0) // 0x80-0xBF are continuation octets; invalid as a leading byte
break;

for(numAdds = 1; numAdds < 5; ++numAdds)
if(c < cUtf8Limits[numAdds])
break;
uint32_t value = c - cUtf8Limits[numAdds - 1];

do {
byte_t c2;
if(srcPos == maxSrcLen || src[srcPos] == '\0')
break;
c2 = src[srcPos++];
if(c2 < 0x80 || c2 >= 0xC0)
break;
value <<= 6;
value |= (c2 - 0x80);
} while(--numAdds != 0);

if(value < 0x10000) {
if(dest)
dest[destPos] = (wchar_t)value;
++destPos;
}
else {
value -= 0x10000;
if(value >= 0x100000)
break;
if(dest) {
dest[destPos + 0] = (wchar_t)(0xD800 + (value >> 10));
dest[destPos + 1] = (wchar_t)(0xDC00 + (value & 0x3FF));
}
destPos += 2;
}
}

destLen = destPos;
return false;
}

bool utf8ToWStr(const char* utf8Str, size_t maxCount, std::wstring& wideStr)
{
size_t destLen = 0;

// Get the length of the wide string
if(!utf8ToUtf16(nullptr, destLen, utf8Str, maxCount))
return false;

wideStr.resize(destLen);
if(wideStr.size() != destLen)
return false;

// Writing through const_cast<wchar_t*>(c_str()) is not guaranteed safe; use &wideStr[0]
return utf8ToUtf16(wideStr.empty() ? nullptr : &wideStr[0], destLen, utf8Str, maxCount);
}

bool utf8ToWStr(const std::string& utf8Str, std::wstring& wideStr)
{
return utf8ToWStr(utf8Str.c_str(), utf8Str.size(), wideStr);
}

//! See the documentation for utf8ToUtf16()
bool utf16ToUtf8(char* dest, size_t& destLen, const wchar_t* src, size_t maxSrcLen)
{
size_t destPos = 0, srcPos = 0;

while(true)
{
uint32_t value;
size_t numAdds;

if(srcPos == maxSrcLen || src[srcPos] == L'\0') {
if(dest && destLen != destPos) {
assert(false && "The provided destLen should equal what we calculated here");
return false;
}
destLen = destPos;
return true;
}

value = src[srcPos++];

if(value < 0x80) { // 0-127, US-ASCII (single byte)
if(dest)
dest[destPos] = char(value);
++destPos;
continue;
}

if(value >= 0xD800 && value < 0xE000) {
if(value >= 0xDC00 || srcPos == maxSrcLen)
break;
uint32_t c2 = src[srcPos++];
if(c2 < 0xDC00 || c2 >= 0xE000)
break;
value = ((value - 0xD800) << 10) | (c2 - 0xDC00);
}

for(numAdds = 1; numAdds < 5; ++numAdds)
if(value < (uint32_t(1) << (numAdds * 5 + 6)))
break;

if(dest)
dest[destPos] = char(cUtf8Limits[numAdds - 1] + (value >> (6 * numAdds)));
++destPos;

do {
--numAdds;
if(dest)
dest[destPos] = char(0x80 + ((value >> (6 * numAdds)) & 0x3F));
++destPos;
} while(numAdds != 0);
}

destLen = destPos;
return false;
}

bool wStrToUtf8(const wchar_t* wideStr, size_t maxCount, std::string& utf8Str)
{
size_t destLen = 0;

// Get the length of the utf-8 string
if(!utf16ToUtf8(nullptr, destLen, wideStr, maxCount))
return false;

utf8Str.resize(destLen);
if(utf8Str.size() != destLen)
return false;

// Writing through const_cast<char*>(c_str()) is not guaranteed safe; use &utf8Str[0]
return utf16ToUtf8(utf8Str.empty() ? nullptr : &utf8Str[0], destLen, wideStr, maxCount);
}

bool wStrToUtf8(const std::wstring& wideStr, std::string& utf8Str)
{
return wStrToUtf8(wideStr.c_str(), wideStr.size(), utf8Str);
}

Thursday, November 20, 2008

The Myth of Programming Acrobatics

Microsoft will soon release the next Visual Studio, 2010, and its support for C++0x is what I look forward to most.
Although C++0x compilers are neither mature nor widespread yet, engineers are already toying with the new syntax and inventing dazzling tricks.
I myself am very fond of playing syntactic games too, but I also know what damage they can cause.
The following text is quoted from one of the replies to those tricks; it says exactly what is on my mind:
Interesting acrobatics, but I am a KISS fan.

I prefer not to mandate a C++ black belt (with several Dans on occasion) on coworkers who try to understand and modify my code, so thanks but I'll pass.

Is there anything in the above code that cannot be done in plain C in a way that 90% of the dev population can understand and 80% can modify/extend without a mistake?

Why do architects feel so compelled to save the world by providing infrastructure and plumbing for everything conceivable under the sun?

What about memoization? If I am in such a corner case where caching the results of a function call will *actually* improve performance, what makes you think I would opt for an obscure and totally incomprehensible generic template that I cannot understand or debug, rather than a custom-tailored, totally non-reusable, top-performing, totally understandable and debuggable solution?

Don't get me wrong, I am not an anti-STL, do-it-yourself (CMyHashTable, CMyDynamicArray, CMyOS) gung-ho. I am just a KISS fan (including the rock band). If something can be done in a way that is simpler, easier to understand, debug and extend, then I prefer the simpler way.

I just get so frustrated when people do all this acrobatic stuff in production code just because (a) they can do it (b) it's cool to do it, without thinking back a little bit or actually having mastered the 'tools' they are using.

A similar example is 'patternitis'. I have seen countless C++ freshmen reading the GangOf4 Design Patterns book and then creating a total mess in everything, like deciding to implement the Visitor pattern on a problem that required Composite and ended up coding a third pattern altogether from the same book, still naming the classes CVisitorXYZ (probably they opened the book on the wrong page at some point).

I have met exactly 1 guy (I called him the "Professor") who knew C++ well enough and had the knowledge to apply the patterns where they ought to be applied. His code was a masterpiece, it worked like a breeze, but when he left, no one else in the house could figure things out.

So what's the point with these Lambda stuff really? Increase the expression of the language? Are we doing poetry or software? Why should we turn simple code that everyone understands into more and more elegant and concise code that only few can understand and make it work?

I have been coding in C (drivers) and C++ for 15 years and not once was I trapped because I was missing lambda expressions or similar syntactic gizmos.

So what's the point really? Please enlighten me. I don't say that *I* am right and *YOU* are wrong. I am saying that I don't see, I don't understand the positive value that these things bring in that far outweighs the problems they cause by complicating the language.
Of course, both the trendy/artistic camp and the pragmatic camp have their place; otherwise the programming world would be either a complete mess or stagnant.

Thursday, November 6, 2008

A Harmless Bonjour


By chance I noticed an extra entry among the Windows services, running as the process mDNSResponder.exe. It looked suspicious at first sight; I even thought it was a trojan.

It turns out to be neither a virus nor malware: it is a service named Bonjour, a product of Apple. It usually appears after installing Adobe CS3 and is used to automatically discover printers and other devices on the local network. It is generally of little use, and uninstalling it does not affect other software. Below is the removal procedure published on the Adobe website:
  1. Run C:\Program Files\Bonjour\mDNSResponder.exe -remove
  2. Rename C:\Program Files\Bonjour\mdnsNSP.dll to mdnsNSP.old
  3. Restart the computer
  4. Delete the C:\Program Files\Bonjour directory
[Note] Bonjour means "hello" in French.