Revision history for HandlingUTF8


Revision [23487]

Last edited on 2016-05-20 07:38:48 by BrianKoontz [Replaces old-style internal links with new pipe-split links.]
Additions:
~-List of [[WikkaSites | sites powered by Wikka]] in 35 languages.
~-Current [[CategoryDevelopmentI18n | i18n/l10n]] development pages.
~-Test page for [[WikkaMultilanguageTestPage | multilanguage support]]
Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage | here]].
++First, I don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php | utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php | utf8-encode]] functions to handle the conversion itself (but maybe there is a reason I didn't think about)++ (''the correct functions would have been html_entity_decode($string, ENT_QUOTES, 'UTF-8') and htmlentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multibyte chars yet. The [[http://de.php.net/manual/de/ref.mbstring.php | mb-string-lib]] might give a more straightforward and performant solution. Andrea's sample code should be valuable for understanding what happens, but I am still looking for a variant that doesn't "contaminate" the code too much and keeps it maintainable. A good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff, including mysql_escape_string and such, and to maintain the conversion stuff in one central place in future steps'')
- what to do with the wikiword recognition, which is designed for the Latin alphabet. I think at least the ""[[forced | links]]"" should work in every language.
There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net | UniWakka]] -:).
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B | here]].
[[http://www.joelonsoftware.com/printerFriendly/articles/Unicode.html | The Absolute Minimum Every Software Developer Must Know About Unicode and Character Sets]]
Deletions:
~-List of [[WikkaSites sites powered by Wikka]] in 35 languages.
~-Current [[CategoryDevelopmentI18n i18n/l10n]] development pages.
~-Test page for [[WikkaMultilanguageTestPage multilanguage support]]
Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage here]].
++First, I don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason I didn't think about)++ (''the correct functions would have been html_entity_decode($string, ENT_QUOTES, 'UTF-8') and htmlentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multibyte chars yet. The [[http://de.php.net/manual/de/ref.mbstring.php mb-string-lib]] might give a more straightforward and performant solution. Andrea's sample code should be valuable for understanding what happens, but I am still looking for a variant that doesn't "contaminate" the code too much and keeps it maintainable. A good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff, including mysql_escape_string and such, and to maintain the conversion stuff in one central place in future steps'')
- what to do with the wikiword recognition, which is designed for the Latin alphabet. I think at least the ""[[forced links]]"" should work in every language.
There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net UniWakka]] -:).
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B here]].
[[http://www.joelonsoftware.com/printerFriendly/articles/Unicode.html The Absolute Minimum Every Software Developer Must Know About Unicode and Character Sets]]


Revision [21486]

Edited on 2011-05-01 12:01:23 by BrianKoontz [updated utf-8 release]
Additions:
** Please note that Wikka 1.4 will be released with full UTF-8 support, and is currently available for testing at http://wush.net/svn/wikka/trunk **
Deletions:
** Please note that Wikka 1.3 will be released with full UTF-8 support, and is currently available for testing at http://wush.net/svn/wikka/branches/1.3 **


Revision [21281]

Edited on 2010-11-14 22:51:40 by BrianKoontz [Added note]
Additions:
** Please note that Wikka 1.3 will be released with full UTF-8 support, and is currently available for testing at http://wush.net/svn/wikka/branches/1.3 **
Deletions:
**Please note that Wikka 1.3 will be released with full UTF-8 support, and is currently available for testing at http://wush.net/svn/wikka/branches/1.3**


Revision [21280]

Edited on 2010-11-14 22:51:07 by BrianKoontz [Added note]
Additions:
**Please note that Wikka 1.3 will be released with full UTF-8 support, and is currently available for testing at http://wush.net/svn/wikka/branches/1.3**


Revision [19160]

Edited on 2008-01-28 00:14:04 by TonZijlstra [Modified links pointing to docs server]

No Differences

Revision [16914]

Edited on 2007-05-31 23:27:12 by TonZijlstra [Reverted]
Additions:
Unfortunately the ascii and iso8859 output is not compatible with htmlspecialchars. This is the reason for the valid_xml function: it has the same scope as htmlspecialchars, but will correctly handle &.
How to use these functions? For instance,
- in formatters/wakka.php you should use:
- print($this->str2ascii($text));
- in wikka.php, function SavePage you should use:
- "body = '".mysql_escape_string(trim($this->str2iso8859($body)))."'");
- in handlers/page/edit.php you should use:
- "<textarea rows=\"40\" cols=\"60\" onkeydown=\"fKeyDown()\" name=\"body\" style=\"width: 100%; height: 400px\">".$this->valid_xml($this->str2utf8($body))."</textarea><br />\n"
And so on....
**Update** ''I changed the functions that do the conversion to improve speed and reduce memory usage'' 2004-08-14
--AndreaRossato
Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage here]].
The bits:
%%(php)
<?php
//Multilanguage support. We will use: utf-8 for user input, iso8859-1 + unicode for database storage and ascii + unicode for printing
function utf8_to_unicode($str) {
$unicode = array();
$values = array();
$lookingFor = 1;
for ($i = 0; $i < strlen($str); $i++ ) {
$thisValue = ord( $str[$i] );
if ( $thisValue < 128 ) $unicode[] = $thisValue;
else {
if ( count( $values ) == 0 ) $lookingFor = ( $thisValue < 224 ) ? 2 : 3;
$values[] = $thisValue;
if ( count( $values ) == $lookingFor ) {
$number = ( $lookingFor == 3 ) ?
( ( $values[0] % 16 ) * 4096 ) + ( ( $values[1] % 64 ) * 64 ) + ( $values[2] % 64 ):
( ( $values[0] % 32 ) * 64 ) + ( $values[1] % 64 );
$unicode[] = $number;
$values = array();
$lookingFor = 1;
}
}
}
return $unicode;
}
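// deCP1252: numeric entities in the &#128; - &#159; range are windows-1252 leftovers;
// map them back onto the real characters so they display correctly in the output.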
function deCP1252 ($str) {
$str = str_replace("&#128", "€", $str);
$str = str_replace("&#129", "", $str);
$str = str_replace("‚", "‚", $str);
$str = str_replace("ƒ", "ƒ", $str);
$str = str_replace("„", "„", $str);
$str = str_replace("…", "…", $str);
$str = str_replace("†", "†", $str);
$str = str_replace("‡", "‡", $str);
$str = str_replace("ˆ", "ˆ", $str);
$str = str_replace("‰", "‰", $str);
$str = str_replace("Š", "Š", $str);
$str = str_replace("‹", "‹", $str);
$str = str_replace("Œ", "Œ", $str);
$str = str_replace("‘", "‘", $str);
$str = str_replace("’", "’", $str);
$str = str_replace("“", "“", $str);
$str = str_replace("”", "”", $str);
$str = str_replace("•", "•", $str);
$str = str_replace("–", "–", $str);
$str = str_replace("—", "—", $str);
$str = str_replace("˜", "˜", $str);
$str = str_replace("™", "™", $str);
$str = str_replace("š", "š", $str);
$str = str_replace("›", "›", $str);
$str = str_replace("œ", "œ", $str);
$str = str_replace("Ÿ", "Ÿ", $str);
return $str;
}
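// code2utf: turn a single unicode code point into its utf-8 byte sequence
// (1 byte below 128, 2 bytes below 2048, 3 bytes below 65536, 4 bytes below 2097152).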
function code2utf($num){
if($num<128)return chr($num);
if($num<2048)return chr(($num>>6)+192).chr(($num&63)+128);
if($num<65536)return chr(($num>>12)+224).chr((($num>>6)&63)+128).chr(($num&63)+128);
if($num<2097152)return chr(($num>>18)+240).chr((($num>>12)&63)+128).chr((($num>>6)&63)+128). chr(($num&63)+128);
return '';
}
//to print in a form
function str2utf8($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
if (mb_detect_encoding($str) == "UTF-8") {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}
//to print html
function str2ascii ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
return $this->deCP1252($str);
break;
}
}
//for database storage
function str2iso8859 ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 127 && $value <= 160 )
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}return $this->deCP1252($str);
break;
}
}
function valid_xml ($str) {
$str = str_replace("\"", """, $str);
$str = str_replace("<", "<", $str);
$str = str_replace(">", ">", $str);
$str = preg_replace("/&(?![a-zA-Z0-9#]+?;)/", "&", $str);
return $str;
}
?>
%%
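To make the data flow concrete, here is a minimal round-trip sketch. It assumes the conversion functions above have been pasted into the Wakka class as methods and that $wakka is the running Wakka object (both are assumptions for illustration, not part of the code above); the sample string is arbitrary.
%%(php)
<?php
// Sketch only: assumes the converters above are methods of the Wakka class and
// $wakka is the running Wakka object.
$input = "Über das Wiki: 涛";   // utf-8, as it arrives from the edit form

// saving (wikka.php, SavePage): iso-8859-1 plus numeric entities for the database
$stored = mysql_escape_string(trim($wakka->str2iso8859($input)));

// rendering (formatters/wakka.php): ascii plus numeric entities for the xhtml output
print $wakka->str2ascii($wakka->str2iso8859($input));   // "&#220;ber das Wiki: &#28059;"

// editing (handlers/page/edit.php): back to utf-8, escaped for the textarea
print $wakka->valid_xml($wakka->str2utf8($wakka->str2iso8859($input)));
?>
%%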
--AndreaRossato
----
Hmm... I may have the solution, but I need to understand the problem ;)
++First, I don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason I didn't think about)++ (''the correct functions would have been html_entity_decode($string, ENT_QUOTES, 'UTF-8') and htmlentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multibyte chars yet. The [[http://de.php.net/manual/de/ref.mbstring.php mb-string-lib]] might give a more straightforward and performant solution. Andrea's sample code should be valuable for understanding what happens, but I am still looking for a variant that doesn't "contaminate" the code too much and keeps it maintainable. A good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff, including mysql_escape_string and such, and to maintain the conversion stuff in one central place in future steps'')
Second, it's not perfectly clear to me how to treat clients that don't accept utf-8 encoding. I haven't had much time to get into the stuff, but so far I think the following tasks have to be managed:
- determine the most convenient charset (that's easy, just have a look at $HTTP_ACCEPT_CHARSET; a rough sketch follows below)
- set the appropriate http-header in **header.php** and - if needed - set a flag $this->config["use_utf8"] = true;
- do the conversions on form data if use_utf8 is set (this sounds like a busy task)
- convert the $_POST data back to iso-8859-1 (the charset we'll internally work with)
- leave the formatter untouched; it should be fed with iso data (and entities), if I have no fault in the points above. Instead, use the buffered output stored in the variable $output at $wakka->includebuffered and convert it all at once, namely to utf-8, which is what the client expects if it sends utf-8 form data.
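A rough sketch of the first two points above (hypothetical fragment for the top of **header.php**, not existing Wikka code; $_SERVER is used instead of the old $HTTP_ACCEPT_CHARSET global, and use_utf8 is just the flag name suggested above):
%%(php)
// hypothetical sketch, not existing Wikka code: send utf-8 only if the client
// accepts it (an empty Accept-Charset header means "anything is fine")
$accept = isset($_SERVER['HTTP_ACCEPT_CHARSET']) ? $_SERVER['HTTP_ACCEPT_CHARSET'] : '';
if ($accept == '' || stristr($accept, 'utf-8') || strstr($accept, '*'))
{
	$this->config['use_utf8'] = true;
	header('Content-Type: text/html; charset=utf-8');
}
else
{
	$this->config['use_utf8'] = false;
	header('Content-Type: text/html; charset=iso-8859-1');
}
%%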
What I don't understand yet is:
- what to do with the wikiword recognition, which is designed for the Latin alphabet. I think at least the ""[[forced links]]"" should work in every language.
- will the diff engine work (not worse than now) when it's fed with html entities and nothing but entities? (This //will// happen with a page that only stores the quotation of an Aramaic bible text.)
- how will the full-text search behave?
- and of course, am I right with the task list above? Is something missing? Is something wrong?
Btw: what Wakka forks already exist that are redesigned for the needs of a foreign charset? Isn't WackoWiki a Russian spin-off? Do we have some Cyrillic-speaking Wikka fans out there? ;)
-- [[dreckfehler]]
----
There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net UniWakka]] -:).

I'll try to clarify the problem, as far as I can ;)
The problem with character encoding is that UTF-8 is a multi-byte encoding. ASCII and UTF-8 are actually the same stuff, since the first 128 characters of UTF-8 are plain single bytes. The problem is the remaining characters, which are encoded with more than 1 byte...
Now, there are two different approaches:
1. You can use an 8-bit encoding (iso-8859-*). That is to say: if you have Cyrillic characters you can use iso-8859-5 (or cp-1251, as far as I remember). ASCII characters are the same, but above chr(128) you have Cyrillic chars. In this case you can use Cyrillic but not, for instance, French accented letters (these are not included in iso-8859-5).
This approach lets you use charset meta tags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support: you can only use a very limited set of languages at a time. Period.
This is the Wacko approach.
2. If you want to have Cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not be able to handle strings with multi-byte encodings: preg_match and preg_replace will not work.
You need to convert those strings into single-byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.
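A concrete example of that conversion (assuming the functions from the code block above are available as Wakka methods, with $wakka standing for the Wakka object): the CJK character 涛 is U+6D9B, i.e. the three UTF-8 bytes 0xE6 0xB6 0x9B.
%%(php)
<?php
// 涛 = U+6D9B = utf-8 bytes E6 B6 9B, which is code point 28059 in decimal
print_r($wakka->utf8_to_unicode("涛"));   // array with one code point: 28059
print $wakka->str2iso8859("涛");           // "&#28059;" - single-byte safe for the database
print $wakka->str2ascii("涛");             // "&#28059;" - single-byte safe for the xhtml output
print $wakka->str2utf8("&#28059;");        // back to the raw utf-8 bytes for the edit form
?>
%%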
WikiWords must be plain ascii, as must every URI.
I did not study the WikkaWiki diff engine, but there shouldn't be any problem as long as you use unicode entities for everything above the ascii (or iso-8859-1) range.
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B here]].
html_entity_decode and htmlentities work only with single-byte characters, as do all PHP functions. As you said, for multi-byte you need to use the mb-string lib. But if you want to use the lib you are going to rewrite every wakka-derived wiki, and you cannot use Perl regular expressions. And this is not going to avoid "contamination" of the code.
Moreover, I would like to ask you to indicate some user agents that do not support UTF-8. IE, Gecko-derived browsers, Konqueror and Opera do support it. As far as I know, Google pages are utf-8 encoded.
--AndreaRossato
''"Modern" user agents support UTF-8 - but as far as I know only the graphical ones (i.e., not Lynx or Links - or maybe they do on Unix, but certainly not on Windows); IE at least as far back as 5.01 - don't know about 4.0 (yes there are people that use this); Netscape 4.x has I think only limited support (if at all), and as you say the Gecko-based browsers are OK, as is Opera (6+ at least, not sure about 5).
-- JavaWoman''
----
==The "We don't like mbstring" version of the code==
%%(php)
<?php
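// is_utf8: pure-php check that a string is well-formed utf-8 (a correct lead byte
// followed by the right number of 10xxxxxx continuation bytes); used here instead
// of mb_detect_encoding() so the mbstring extension is not needed.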
function is_utf8($Str) {
for ($i=0; $i<strlen($Str); $i++) {
if (ord($Str[$i]) < 0x80) continue;
elseif ((ord($Str[$i]) & 0xE0) == 0xC0) $n=1;
elseif ((ord($Str[$i]) & 0xF0) == 0xE0) $n=2;
elseif ((ord($Str[$i]) & 0xF8) == 0xF0) $n=3;
elseif ((ord($Str[$i]) & 0xFC) == 0xF8) $n=4;
elseif ((ord($Str[$i]) & 0xFE) == 0xFC) $n=5;
else return false;
for ($j=0; $j<$n; $j++) {
if ((++$i == strlen($Str)) || ((ord($Str[$i]) & 0xC0) != 0x80))
return false;
}
}
return true;
}
//to print in a form
function str2utf8($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}
//ascii for xhtml
function str2ascii ($str) {
if ($this->is_utf8($str)) {

preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);

}
}
//iso8859 for database storage (so we do not need mysql 4.1)
function str2iso8859 ($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}
return $this->deCP1252($str);

}
}
%%
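The only functional difference from the mbstring version above is that is_utf8() takes the place of mb_detect_encoding(): a string that scans as well-formed utf-8 goes through the utf-8 branch, everything else is treated as single-byte input. A quick check of the detection (a small test sketch, assuming is_utf8() from above is loaded as a plain function):
%%(php)
<?php
var_dump(is_utf8("plain ascii"));      // true  - ascii is also well-formed utf-8
var_dump(is_utf8("涛"));               // true  - E6 B6 9B is a valid 3-byte sequence
var_dump(is_utf8("\xE9\xE0"));         // false - two bare iso-8859-1 bytes (é, à)
?>
%%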
--AndreaRossato
----
==Links to information on other sites:==
[[http://www.joelonsoftware.com/printerFriendly/articles/Unicode.html The Absolute Minimum Every Software Developer Must Know About Unicode and Character Sets]]
----
CategoryDevelopmentI18n
Deletions:
Unfortunately the ascii and iso8859 output is not compatible with htmlspecialchars. This is the reason for the valid_xml function: it has the same scope as htmlspecialchars, but will correctly handle


Revision [16713]

Edited on 2007-05-31 10:39:17 by SpoZga [Reverted]
Additions:
Unfortunately the ascii and iso8859 output is not compatible with htmlspecialchars. This is the reason for the valid_xml function: it has the same scope as htmlspecialchars, but will correctly handle
Deletions:
Unfortunately the ascii and iso8859 output is not compatible with htmlspecialchars. This is the reason for the valid_xml function: it has the same scope as htmlspecialchars, but will correctly handle &.
How to use these functions? For instance,
- in formatters/wakka.php you should use:
- print($this->str2ascii($text));
- in wikka.php, function SavePage you should use:
- "body = '".mysql_escape_string(trim($this->str2iso8859($body)))."'");
- in handlers/page/edit.php you should use:
- "<textarea rows=\"40\" cols=\"60\" onkeydown=\"fKeyDown()\" name=\"body\" style=\"width: 100%; height: 400px\">".$this->valid_xml($this->str2utf8($body))."</textarea><br />\n"
And so on....
**Update** ''I changed the functions that do the conversion to improve speed and reduce memory usage'' 2004-08-14
--AndreaRossato
Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage here]].
The bits:
%%(php)
<?php
//Multilanguage support. We will use: utf-8 for user input, iso8859-1 + unicode for database storage and ascii + unicode for printing
function utf8_to_unicode($str) {
$unicode = array();
$values = array();
$lookingFor = 1;
for ($i = 0; $i < strlen($str); $i++ ) {
$thisValue = ord( $str[$i] );
if ( $thisValue < 128 ) $unicode[] = $thisValue;
else {
if ( count( $values ) == 0 ) $lookingFor = ( $thisValue < 224 ) ? 2 : 3;
$values[] = $thisValue;
if ( count( $values ) == $lookingFor ) {
$number = ( $lookingFor == 3 ) ?
( ( $values[0] % 16 ) * 4096 ) + ( ( $values[1] % 64 ) * 64 ) + ( $values[2] % 64 ):
( ( $values[0] % 32 ) * 64 ) + ( $values[1] % 64 );
$unicode[] = $number;
$values = array();
$lookingFor = 1;
}
}
}
return $unicode;
}
function deCP1252 ($str) {
$str = str_replace("&#128", "€", $str);
$str = str_replace("&#129", "", $str);
$str = str_replace("‚", "‚", $str);
$str = str_replace("ƒ", "ƒ", $str);
$str = str_replace("„", "„", $str);
$str = str_replace("…", "…", $str);
$str = str_replace("†", "†", $str);
$str = str_replace("‡", "‡", $str);
$str = str_replace("ˆ", "ˆ", $str);
$str = str_replace("‰", "‰", $str);
$str = str_replace("Š", "Š", $str);
$str = str_replace("‹", "‹", $str);
$str = str_replace("Œ", "Œ", $str);
$str = str_replace("‘", "‘", $str);
$str = str_replace("’", "’", $str);
$str = str_replace("“", "“", $str);
$str = str_replace("”", "”", $str);
$str = str_replace("•", "•", $str);
$str = str_replace("–", "–", $str);
$str = str_replace("—", "—", $str);
$str = str_replace("˜", "˜", $str);
$str = str_replace("™", "™", $str);
$str = str_replace("š", "š", $str);
$str = str_replace("›", "›", $str);
$str = str_replace("œ", "œ", $str);
$str = str_replace("Ÿ", "Ÿ", $str);
return $str;
}
function code2utf($num){
if($num<128)return chr($num);
if($num<2048)return chr(($num>>6)+192).chr(($num&63)+128);
if($num<65536)return chr(($num>>12)+224).chr((($num>>6)&63)+128).chr(($num&63)+128);
if($num<2097152)return chr(($num>>18)+240).chr((($num>>12)&63)+128).chr((($num>>6)&63)+128). chr(($num&63)+128);
return '';
}
//to print in a form
function str2utf8($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
if (mb_detect_encoding($str) == "UTF-8") {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}
//to print html
function str2ascii ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
return $this->deCP1252($str);
break;
}
}
//for database storage
function str2iso8859 ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 127 && $value <= 160 )
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}return $this->deCP1252($str);
break;
}
}
function valid_xml ($str) {
$str = str_replace("\"", """, $str);
$str = str_replace("<", "<", $str);
$str = str_replace(">", ">", $str);
$str = preg_replace("/&(?![a-zA-Z0-9#]+?;)/", "&", $str);
return $str;
}
?>
%%
--AndreaRossato
----
Hmm... I may have the solution, but I need to understand the problem ;)
++First, I don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason I didn't think about)++ (''the correct functions would have been html_entity_decode($string, ENT_QUOTES, 'UTF-8') and htmlentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multibyte chars yet. The [[http://de.php.net/manual/de/ref.mbstring.php mb-string-lib]] might give a more straightforward and performant solution. Andrea's sample code should be valuable for understanding what happens, but I am still looking for a variant that doesn't "contaminate" the code too much and keeps it maintainable. A good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff, including mysql_escape_string and such, and to maintain the conversion stuff in one central place in future steps'')
Second, it's not perfectly clear to me how to treat clients that don't accept utf-8 encoding. I haven't had much time to get into the stuff, but so far I think the following tasks have to be managed:
- determine the most convenient charset (that's easy, just have a look at $HTTP_ACCEPT_CHARSET)
- set the appropriate http-header in **header.php** and - if needed - set a flag $this->config["use_utf8"] = true;
- do the conversions on form data if use_utf8 is set (this sounds like a busy task)
- convert the $_POST data back to iso-8859-1 (the charset we'll internally work with)
- leave the formatter untouched; it should be fed with iso data (and entities), if I have no fault in the points above. Instead, use the buffered output stored in the variable $output at $wakka->includebuffered and convert it all at once, namely to utf-8, which is what the client expects if it sends utf-8 form data.
What I don't understand yet is:
- what to do with the wikiword recognition, which is designed for the Latin alphabet. I think at least the ""[[forced links]]"" should work in every language.
- will the diff engine work (not worse than now) when it's fed with html entities and nothing but entities? (This //will// happen with a page that only stores the quotation of an Aramaic bible text.)
- how will the full-text search behave?
- and of course, am I right with the task list above? Is something missing? Is something wrong?
Btw: what Wakka forks already exist that are redesigned for the needs of a foreign charset? Isn't WackoWiki a Russian spin-off? Do we have some Cyrillic-speaking Wikka fans out there? ;)
-- [[dreckfehler]]
----
There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net UniWakka]] -:).

I'll try to clarify the problem, as far as I can ;)
The problem with character encoding is that UTF-8 is a multi-byte encoding. ASCII and UTF-8 are actually the same stuff, since the first 128 characters of UTF-8 are plain single bytes. The problem is the remaining characters, which are encoded with more than 1 byte...
Now, there are two different approaches:
1. You can use an 8-bit encoding (iso-8859-*). That is to say: if you have Cyrillic characters you can use iso-8859-5 (or cp-1251, as far as I remember). ASCII characters are the same, but above chr(128) you have Cyrillic chars. In this case you can use Cyrillic but not, for instance, French accented letters (these are not included in iso-8859-5).
This approach lets you use charset meta tags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support: you can only use a very limited set of languages at a time. Period.
This is the Wacko approach.
2. If you want to have Cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not be able to handle strings with multi-byte encodings: preg_match and preg_replace will not work.
You need to convert those strings into single-byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.
WikiWords must be plain ascii, as must every URI.
I did not study the WikkaWiki diff engine, but there shouldn't be any problem as long as you use unicode entities for everything above the ascii (or iso-8859-1) range.
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B here]].
html_entity_decode and htmlentities work only with single-byte characters, as do all PHP functions. As you said, for multi-byte you need to use the mb-string lib. But if you want to use the lib you are going to rewrite every wakka-derived wiki, and you cannot use Perl regular expressions. And this is not going to avoid "contamination" of the code.
Moreover, I would like to ask you to indicate some user agents that do not support UTF-8. IE, Gecko-derived browsers, Konqueror and Opera do support it. As far as I know, Google pages are utf-8 encoded.
--AndreaRossato
''"Modern" user agents support UTF-8 - but as far as I know only the graphical ones (i.e., not Lynx or Links - or maybe they do on Unix, but certainly not on Windows); IE at least as far back as 5.01 - don't know about 4.0 (yes there are people that use this); Netscape 4.x has I think only limited support (if at all), and as you say the Gecko-based browsers are OK, as is Opera (6+ at least, not sure about 5).
-- JavaWoman''
----
==The "We don't like mbstring" version of the code==
%%(php)
<?php
function is_utf8($Str) {
for ($i=0; $i<strlen($Str); $i++) {
if (ord($Str[$i]) < 0x80) continue;
elseif ((ord($Str[$i]) & 0xE0) == 0xC0) $n=1;
elseif ((ord($Str[$i]) & 0xF0) == 0xE0) $n=2;
elseif ((ord($Str[$i]) & 0xF8) == 0xF0) $n=3;
elseif ((ord($Str[$i]) & 0xFC) == 0xF8) $n=4;
elseif ((ord($Str[$i]) & 0xFE) == 0xFC) $n=5;
else return false;
for ($j=0; $j<$n; $j++) {
if ((++$i == strlen($Str)) || ((ord($Str[$i]) & 0xC0) != 0x80))
return false;
}
}
return true;
}
//to print in a form
function str2utf8($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}
//ascii for xhtml
function str2ascii ($str) {
if ($this->is_utf8($str)) {

preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);

}
}
//iso8859 for database storage (so we do not need mysql 4.1)
function str2iso8859 ($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}
return $this->deCP1252($str);

}
}
%%
--AndreaRossato
----
==Links to information on other sites:==
[[http://www.joelonsoftware.com/printerFriendly/articles/Unicode.html The Absolute Minimum Every Software Developer Must Know About Unicode and Character Sets]]
----
CategoryDevelopmentI18n


Revision [12096]

Edited on 2005-12-05 19:48:48 by TonZijlstra [corrected typo: wakka where wikka was meant]
Additions:
- in wikka.php, function SavePage you should use:
Deletions:
- in wakka.php, function SavePage you should use:


Revision [10648]

Edited on 2005-08-12 12:16:38 by DarTar [adding see also box]
Additions:
>>**See also:**
~-WikkaLocalization
~-List of [[WikkaSites sites powered by Wikka]] in 35 languages.
~-Current [[CategoryDevelopmentI18n i18n/l10n]] development pages.
~-Test page for [[WikkaMultilanguageTestPage multilanguage support]]
>>::c::
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
Deletions:
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );


Revision [8615]

Edited on 2005-05-28 17:51:26 by JavaWoman [move to subcategory]
Additions:
===Real Multilanguage Support===

Here's some code to provide real multilanguage support.
The first 3 functions are used within the functions that do the real encoding conversions.
str2utf8, str2ascii and str2iso8859 can take any encoded string and convert it into the desired encoding: ascii plus unicode entities for html output, iso8859-1 plus unicode entities for database storage, and utf8 for forms.
Unfortunately the ascii and iso8859 output is not compatible with htmlspecialchars. This is the reason for the valid_xml function: it has the same scope as htmlspecialchars, but will correctly handle &.
How to use these functions? For instance,
- in formatters/wakka.php you should use:
- print($this->str2ascii($text));
- in wakka.php, function SavePage you should use:
- "body = '".mysql_escape_string(trim($this->str2iso8859($body)))."'");
- in handlers/page/edit.php you should use:
- "<textarea rows=\"40\" cols=\"60\" onkeydown=\"fKeyDown()\" name=\"body\" style=\"width: 100%; height: 400px\">".$this->valid_xml($this->str2utf8($body))."</textarea><br />\n"


And so on....

**Update** ''I changed the functions that do the conversion to improve speed and reduce memory usage'' 2004-08-14
--AndreaRossato

Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage here]].

The bits:
%%(php)
<?php
//Multilanguage support. We will use: utf-8 for user input, iso8859-1 + unicode for database storage and ascii + unicode for printing
function utf8_to_unicode($str) {
$unicode = array();
$values = array();
$lookingFor = 1;
for ($i = 0; $i < strlen($str); $i++ ) {
$thisValue = ord( $str[$i] );
if ( $thisValue < 128 ) $unicode[] = $thisValue;
else {
if ( count( $values ) == 0 ) $lookingFor = ( $thisValue < 224 ) ? 2 : 3;
$values[] = $thisValue;
if ( count( $values ) == $lookingFor ) {
$number = ( $lookingFor == 3 ) ?
( ( $values[0] % 16 ) * 4096 ) + ( ( $values[1] % 64 ) * 64 ) + ( $values[2] % 64 ):
( ( $values[0] % 32 ) * 64 ) + ( $values[1] % 64 );
$unicode[] = $number;
$values = array();
$lookingFor = 1;
}
}
}
return $unicode;
}
function deCP1252 ($str) {
$str = str_replace("&#128", "€", $str);
$str = str_replace("&#129", "", $str);
$str = str_replace("‚", "‚", $str);
$str = str_replace("ƒ", "ƒ", $str);
$str = str_replace("„", "„", $str);
$str = str_replace("…", "…", $str);
$str = str_replace("†", "†", $str);
$str = str_replace("‡", "‡", $str);
$str = str_replace("ˆ", "ˆ", $str);
$str = str_replace("‰", "‰", $str);
$str = str_replace("Š", "Š", $str);
$str = str_replace("‹", "‹", $str);
$str = str_replace("Œ", "Œ", $str);
$str = str_replace("‘", "‘", $str);
$str = str_replace("’", "’", $str);
$str = str_replace("“", "“", $str);
$str = str_replace("”", "”", $str);
$str = str_replace("•", "•", $str);
$str = str_replace("–", "–", $str);
$str = str_replace("—", "—", $str);
$str = str_replace("˜", "˜", $str);
$str = str_replace("™", "™", $str);
$str = str_replace("š", "š", $str);
$str = str_replace("›", "›", $str);
$str = str_replace("œ", "œ", $str);
$str = str_replace("Ÿ", "Ÿ", $str);
return $str;
}
function code2utf($num){
if($num<128)return chr($num);
if($num<2048)return chr(($num>>6)+192).chr(($num&63)+128);
if($num<65536)return chr(($num>>12)+224).chr((($num>>6)&63)+128).chr(($num&63)+128);
if($num<2097152)return chr(($num>>18)+240).chr((($num>>12)&63)+128).chr((($num>>6)&63)+128). chr(($num&63)+128);
return '';
}
//to print in a form
function str2utf8($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
if (mb_detect_encoding($str) == "UTF-8") {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}

//to print html
function str2ascii ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
return $this->deCP1252($str);
break;
}
}

//for database storage
function str2iso8859 ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 127 && $value <= 160 )
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}return $this->deCP1252($str);
break;
}
}


function valid_xml ($str) {
$str = str_replace("\"", """, $str);
$str = str_replace("<", "<", $str);
$str = str_replace(">", ">", $str);
$str = preg_replace("/&(?![a-zA-Z0-9#]+?;)/", "&", $str);
return $str;
}
?>
%%
--AndreaRossato
----
Hmm... I may have the solution, but I need to understand the problem ;)

++First, I don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason I didn't think about)++ (''the correct functions would have been html_entity_decode($string, ENT_QUOTES, 'UTF-8') and htmlentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multibyte chars yet. The [[http://de.php.net/manual/de/ref.mbstring.php mb-string-lib]] might give a more straightforward and performant solution. Andrea's sample code should be valuable for understanding what happens, but I am still looking for a variant that doesn't "contaminate" the code too much and keeps it maintainable. A good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff, including mysql_escape_string and such, and to maintain the conversion stuff in one central place in future steps'')

Second, it's not perfectly clear to me how to treat clients that don't accept utf-8 encoding. I haven't had much time to get into the stuff, but so far I think the following tasks have to be managed:

- determine the most convenient charset (that's easy, just have a look at $HTTP_ACCEPT_CHARSET)
- set the appropriate http-header in **header.php** and - if needed - set a flag $this->config["use_utf8"] = true;
- do the conversions on form data if use_utf8 is set (this sounds like a busy task)
- convert the $_POST data back to iso-8859-1 (the charset we'll internally work with)
- leave the formatter untouched; it should be fed with iso data (and entities), if I have no fault in the points above. Instead, use the buffered output stored in the variable $output at $wakka->includebuffered and convert it all at once, namely to utf-8, which is what the client expects if it sends utf-8 form data.

What I don't understand yet is:

- what to do with the wikiword recognition, which is designed for the Latin alphabet. I think at least the ""[[forced links]]"" should work in every language.
- will the diff engine work (not worse than now) when it's fed with html entities and nothing but entities? (This //will// happen with a page that only stores the quotation of an Aramaic bible text.)
- how will the full-text search behave?
- and of course, am I right with the task list above? Is something missing? Is something wrong?

Btw: what Wakka forks already exist that are redesigned for the needs of a foreign charset? Isn't WackoWiki a Russian spin-off? Do we have some Cyrillic-speaking Wikka fans out there? ;)

-- [[dreckfehler]]
----

There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net UniWakka]] -:).

I'll try to clarify the problem, as far as I can ;)
The problem with character encoding is that UTF-8 is a multi-byte encoding. ASCII and UTF-8 are actually the same stuff, since the first 128 characters of UTF-8 are plain single bytes. The problem is the remaining characters, which are encoded with more than 1 byte...

Now, there are two different approaches:
1. You can use an 8-bit encoding (iso-8859-*). That is to say: if you have Cyrillic characters you can use iso-8859-5 (or cp-1251, as far as I remember). ASCII characters are the same, but above chr(128) you have Cyrillic chars. In this case you can use Cyrillic but not, for instance, French accented letters (these are not included in iso-8859-5).
This approach lets you use charset meta tags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support: you can only use a very limited set of languages at a time. Period.
This is the Wacko approach.

2. If you want to have Cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not be able to handle strings with multi-byte encodings: preg_match and preg_replace will not work.
You need to convert those strings into single-byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.

WikiWords must be plain ascii, as must every URI.
I did not study the WikkaWiki diff engine, but there shouldn't be any problem as long as you use unicode entities for everything above the ascii (or iso-8859-1) range.
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B here]].
html_entity_decode and htmlentities work only with single-byte characters, as do all PHP functions. As you said, for multi-byte you need to use the mb-string lib. But if you want to use the lib you are going to rewrite every wakka-derived wiki, and you cannot use Perl regular expressions. And this is not going to avoid "contamination" of the code.

Moreover, I would like to ask you to indicate some user agents that do not support UTF-8. IE, Gecko-derived browsers, Konqueror and Opera do support it. As far as I know, Google pages are utf-8 encoded.

--AndreaRossato

''"Modern" user agents support UTF-8 - but as far as I know only the graphical ones (i.e., not Lynx or Links - or maybe they do on Unix, but certainly not on Windows); IE at least as far back as 5.01 - don't know about 4.0 (yes there are people that use this); Netscape 4.x has I think only limited support (if at all), and as you say the Gecko-based browsers are OK, as is Opera (6+ at least, not sure about 5).
-- JavaWoman''

----
==The "We don't like mbstring" version of the code==

%%(php)
<?php
function is_utf8($Str) {
for ($i=0; $i<strlen($Str); $i++) {
if (ord($Str[$i]) < 0x80) continue;
elseif ((ord($Str[$i]) & 0xE0) == 0xC0) $n=1;
elseif ((ord($Str[$i]) & 0xF0) == 0xE0) $n=2;
elseif ((ord($Str[$i]) & 0xF8) == 0xF0) $n=3;
elseif ((ord($Str[$i]) & 0xFC) == 0xF8) $n=4;
elseif ((ord($Str[$i]) & 0xFE) == 0xFC) $n=5;
else return false;
for ($j=0; $j<$n; $j++) {
if ((++$i == strlen($Str)) || ((ord($Str[$i]) & 0xC0) != 0x80))
return false;
}
}
return true;
}

//to print in a form
function str2utf8($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}

//ascii for xhtml
function str2ascii ($str) {
if ($this->is_utf8($str)) {

preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);

}
}
//iso8859 for database storage (so we do not need mysql 4.1)
function str2iso8859 ($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}
return $this->deCP1252($str);

}
}
%%
--AndreaRossato

----
==Links to information on other sites:==

[[http://www.joelonsoftware.com/printerFriendly/articles/Unicode.html The Absolute Minimum Every Software Developer Must Know About Unicode and Character Sets]]


----
CategoryDevelopmentI18n
Deletions:
===Real Multilanguage Support===

Here's some code to provide real multilanguage support.
The first 3 functions are used within the functions that do the real encoding conversions.
str2utf8, str2ascii and str2iso8859 can take any encoded string and convert it into the desired encoding: ascii plus unicode entities for html output, iso8859-1 plus unicode entities for database storage, and utf8 for forms.
Unfortunately the ascii and iso8859 output is not compatible with htmlspecialchars. This is the reason for the valid_xml function: it has the same scope as htmlspecialchars, but will correctly handle &.
How to use these functions? For instance,
- in formatters/wakka.php you should use:
- print($this->str2ascii($text));
- in wakka.php, function SavePage you should use:
- "body = '".mysql_escape_string(trim($this->str2iso8859($body)))."'");
- in handlers/page/edit.php you should use:
- "<textarea rows=\"40\" cols=\"60\" onkeydown=\"fKeyDown()\" name=\"body\" style=\"width: 100%; height: 400px\">".$this->valid_xml($this->str2utf8($body))."</textarea><br />\n"


And so on....

**Update** ''I changed the functions that do the conversion to improve speed and reduce memory usage'' 2004-08-14
--AndreaRossato

Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage here]].

The bits:
%%(php)
<?php
//Multilanguage support. We will use: utf-8 for user input, iso8859-1 + unicode for database storage and ascii + unicode for printing
function utf8_to_unicode($str) {
$unicode = array();
$values = array();
$lookingFor = 1;
for ($i = 0; $i < strlen($str); $i++ ) {
$thisValue = ord( $str[$i] );
if ( $thisValue < 128 ) $unicode[] = $thisValue;
else {
if ( count( $values ) == 0 ) $lookingFor = ( $thisValue < 224 ) ? 2 : 3;
$values[] = $thisValue;
if ( count( $values ) == $lookingFor ) {
$number = ( $lookingFor == 3 ) ?
( ( $values[0] % 16 ) * 4096 ) + ( ( $values[1] % 64 ) * 64 ) + ( $values[2] % 64 ):
( ( $values[0] % 32 ) * 64 ) + ( $values[1] % 64 );
$unicode[] = $number;
$values = array();
$lookingFor = 1;
}
}
}
return $unicode;
}
function deCP1252 ($str) {
$str = str_replace("&#128", "€", $str);
$str = str_replace("&#129", "", $str);
$str = str_replace("‚", "‚", $str);
$str = str_replace("ƒ", "ƒ", $str);
$str = str_replace("„", "„", $str);
$str = str_replace("…", "…", $str);
$str = str_replace("†", "†", $str);
$str = str_replace("‡", "‡", $str);
$str = str_replace("ˆ", "ˆ", $str);
$str = str_replace("‰", "‰", $str);
$str = str_replace("Š", "Š", $str);
$str = str_replace("‹", "‹", $str);
$str = str_replace("Œ", "Œ", $str);
$str = str_replace("‘", "‘", $str);
$str = str_replace("’", "’", $str);
$str = str_replace("“", "“", $str);
$str = str_replace("”", "”", $str);
$str = str_replace("•", "•", $str);
$str = str_replace("–", "–", $str);
$str = str_replace("—", "—", $str);
$str = str_replace("˜", "˜", $str);
$str = str_replace("™", "™", $str);
$str = str_replace("š", "š", $str);
$str = str_replace("›", "›", $str);
$str = str_replace("œ", "œ", $str);
$str = str_replace("Ÿ", "Ÿ", $str);
return $str;
}
function code2utf($num){
if($num<128)return chr($num);
if($num<2048)return chr(($num>>6)+192).chr(($num&63)+128);
if($num<65536)return chr(($num>>12)+224).chr((($num>>6)&63)+128).chr(($num&63)+128);
if($num<2097152)return chr(($num>>18)+240).chr((($num>>12)&63)+128).chr((($num>>6)&63)+128). chr(($num&63)+128);
return '';
}
//to print in a form
function str2utf8($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
if (mb_detect_encoding($str) == "UTF-8") {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}

//to print html
function str2ascii ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
return $this->deCP1252($str);
break;
}
}

//for database storage
function str2iso8859 ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 127 && $value <= 160 )
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}return $this->deCP1252($str);
break;
}
}


function valid_xml ($str) {
$str = str_replace("\"", """, $str);
$str = str_replace("<", "<", $str);
$str = str_replace(">", ">", $str);
$str = preg_replace("/&(?![a-zA-Z0-9#]+?;)/", "&", $str);
return $str;
}
?>
%%
--AndreaRossato
----
hmm... i may have the solution but i need to understand the problem ;)

++first i don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason i didn't think about)++ (''the correct functions would have been html_entity_decode($string, ENT_QUOTES, 'UTF-8') and htmlentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multibyte chars yet. the [[http://de.php.net/manual/de/ref.mbstring.php mb-string-lib]] might give a more straightforward and performant solution. andrea's sample code should be valuable for understanding what happens, but i am still looking for a variant that doesn't "contaminate" the code too much and keeps it maintainable. a good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff including mysql_escape_string and such, and to maintain the conversion stuff in one central place in future steps'')
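
(a rough sketch of what such a pair of central wrapper functions could look like; just an illustration, assuming the str2utf8() and str2iso8859() helpers from andrea's code above are available and using mysql_escape_string the way SavePage already does:)

%%(php)
<?php
// sketch only: one central place for all form/database conversion stuff,
// meant to live in the wakka class next to the str2* methods
function Formstring($str) {
    // everything that goes out to a form or textarea: convert to utf-8 for the browser
    return $this->str2utf8($str);
}
function DBstring($str) {
    // everything that goes into the database: iso-8859-1 + entities, escaped for mysql
    return mysql_escape_string(trim($this->str2iso8859($str)));
}
?>
%%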

second, it's not perfectly clear to me how to treat clients that don't accept utf-8 encoding. i haven't had much time to get into the stuff, but so far i think the following tasks have to be managed (a rough sketch of the first two points follows the list):

- determine the most convenient charset (that's easy, just have a look at $HTTP_ACCEPT_CHARSET)
- set the appropriate http-header in **header.php** and - if needed - set a flag $this->config["use_utf8"] = true;
- do the conversions on form-data if use_utf8 is set (this sounds like a busy task)
- convert the $_POST data back to iso-8859-1 (the charset we'll internally work with)
- leave the formatter untouched, which should be fed with iso-data (and entities), if i have no fault in the points above. instead use the buffered output which is stored in the variable $output at $wakka->includebuffered and convert it all at once, namely to utf-8, which is what the client expects if it sends utf-8 form data.
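
(a rough sketch of the first two points, just to make them concrete; it assumes the Accept-Charset request header is exposed via $_SERVER and that an absent header means the client takes anything:)

%%(php)
<?php
// sketch for header.php: pick the charset and remember the decision
$accept = isset($_SERVER['HTTP_ACCEPT_CHARSET']) ? $_SERVER['HTTP_ACCEPT_CHARSET'] : '';
// no Accept-Charset header, a wildcard, or an explicit utf-8 entry means the client can take utf-8
$use_utf8 = ($accept == '') || (stristr($accept, 'utf-8') !== false) || (strpos($accept, '*') !== false);
$this->config["use_utf8"] = $use_utf8;
header('Content-Type: text/html; charset=' . ($use_utf8 ? 'utf-8' : 'iso-8859-1'));
?>
%%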

what i don't understand yet is:

- what to do with the wikiword-recognition, which is designed for the latin alphabet. i think at least the ""[[forced links]]"" should work in every language.
- will the diff-engine work (not worse than now), when it's fed with html-entities and nothing but entities (this //will// happen with a page that only stores the quotation of an aramean bible-text)
- how will the fulltext-search behave
- and of course am i right with the tasklist above. something's missing? something's wrong?

btw: what wakka-forks already exist, that are redesigned for the needs of a foreign charset? isn't wackowiki a russian spin off? do we have some cyrillic speaking wikka-fans out there? ;)

-- [[dreckfehler]]
----

There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net UniWakka]] -:).

I'll try to clarify the problem, as far as I can ;)
The problem with character encoding is that UTF-8 is a multi-byte encoding. Ascii and UTF-8 are actually the same stuff, since the first 128 characters in UTF-8 are encoded as single bytes, exactly like ascii. The problem is the remaining characters, which are encoded with more than 1 byte...
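
(a quick illustration of what "more than 1 byte" means for php, which counts bytes rather than characters:)

%%(php)
<?php
// "é" is one byte (0xE9) in iso-8859-1 but two bytes (0xC3 0xA9) in utf-8
$iso  = "\xE9";
$utf8 = "\xC3\xA9";
echo strlen($iso);   // 1
echo strlen($utf8);  // 2 (strlen, preg_match and friends see two separate bytes here)
?>
%%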

Now, there are two different approaches:
1. you can use an 8-bit encoding (iso-8859-*). That is to say: if you have cyrillic characters you can use iso-8859-5 (or cp-1251, as far as I remember). Ascii characters are the same, but above chr(128) you have cyrillic chars. In this case you can use cyrillic but not, for instance, french accented letters (these are not included in iso-8859-5).
This approach lets you use charset metatags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support: you can only use a very limited set of languages at a time. Period.
This is the Wacko approach.

2. If you want to have cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not be able to handle strings with multi-byte encodings: preg_match, preg_replace will not work.
You need to convert those strings into single-byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.
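
(for example, this is roughly what the str2iso8859/str2ascii functions on this page are meant to produce for a string that mixes a latin and a cyrillic letter; a sketch of the intended result, not output copied from a running wiki:)

%%(php)
<?php
// utf-8 input mixing "è" (U+00E8) and "Ж" (U+0416)
$utf8 = "\xC3\xA8 \xD0\x96";
// database storage: iso-8859-1 characters stay single bytes, everything else becomes an entity
// str2iso8859($utf8)  should give  "\xE8 &#1046;"
// html output: everything above ascii becomes a numeric entity
// str2ascii($utf8)    should give  "&#232; &#1046;"
?>
%%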

WikiWords must be plain ascii, like every URI.
I did not study the WikkaWiki diff engine. But there shouldn't be any problem as long as you use unicode entities for everything above the ascii (or iso-8859-1) range.
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B here]].
html_entity_decode and htmlentities work only with single-byte characters, like every php function. As you said, for multi-byte you need to use the mb-string lib. But if you want to use the lib you are going to rewrite every wakka-derived wiki, and you cannot use perl regular expressions. And this is not going to avoid "contamination" of the code.
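
(for comparison: where the mbstring extension //is// available, the two conversions done by hand on this page can each be written as one call; just to show what the mb-string-lib route would look like, assuming $utf8_text holds the raw utf-8 input:)

%%(php)
<?php
// utf-8 text -> ascii + numeric entities (what str2ascii produces)
$entities = mb_convert_encoding($utf8_text, 'HTML-ENTITIES', 'UTF-8');
// and back: entities -> utf-8 (roughly what str2utf8 does for utf-8 capable clients)
$utf8_text = mb_convert_encoding($entities, 'UTF-8', 'HTML-ENTITIES');
?>
%%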

Moreover, I would like to ask you to indicate some user-agents that do not support UTF-8. IE, gecko-derived browsers, Konqueror and Opera do support it. As far as I know Google pages are utf-8 encoded.

--AndreaRossato

''"Modern" user agents support UTF-8 - but as far as I know only the graphical ones (i.e., not Lynx or Links - or maybe they do on Unix, but certainly not on Windows); IE at least as far back as 5.01 - don't know about 4.0 (yes there are people that use this); Netscape 4.x has I think only limited support (if at all), and as you say the Gecko-based browsers are OK, as is Opera (6+ at least, not sure about 5).
-- JavaWoman''

----
==The "We don't like mbstring" version of the code==

%%(php)
<?php
// check whether a string is already valid utf-8 by walking its byte sequences
function is_utf8($Str) {
for ($i=0; $i<strlen($Str); $i++) {
if (ord($Str[$i]) < 0x80) continue; // single-byte ascii
elseif ((ord($Str[$i]) & 0xE0) == 0xC0) $n=1; // 110xxxxx: one continuation byte follows
elseif ((ord($Str[$i]) & 0xF0) == 0xE0) $n=2; // 1110xxxx: two continuation bytes
elseif ((ord($Str[$i]) & 0xF8) == 0xF0) $n=3; // 11110xxx: three continuation bytes
elseif ((ord($Str[$i]) & 0xFC) == 0xF8) $n=4; // 111110xx: four continuation bytes
elseif ((ord($Str[$i]) & 0xFE) == 0xFC) $n=5; // 1111110x: five continuation bytes
else return false;
for ($j=0; $j<$n; $j++) { // every continuation byte must look like 10xxxxxx
if ((++$i == strlen($Str)) || ((ord($Str[$i]) & 0xC0) != 0x80))
return false;
}
}
return true;
}

//to print in a form
function str2utf8($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}

//ascii for xhtml
function str2ascii ($str) {
if ($this->is_utf8($str)) {

preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);

}
}
//iso8859 for database storage (so we do not need mysql 4.1)
function str2iso8859 ($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
} else {
$constr = '';
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}
return $this->deCP1252($constr); // return the converted string, not the untouched input

}
}
%%
--AndreaRossato

----
==Links to information on other sites:==

[[http://www.joelonsoftware.com/printerFriendly/articles/Unicode.html The Absolute Minimum Every Software Developer Must Know About Unicode and Character Sets]]



CategoryDevelopment


Revision [2837]

Edited on 2004-12-04 14:26:41 by JsnX [adding link]
Additions:
==Links to information on other sites:==
[[http://www.joelonsoftware.com/printerFriendly/articles/Unicode.html The Absolute Minimum Every Software Developer Must Know About Unicode and Character Sets]]


Revision [2162]

Edited on 2004-11-11 20:13:49 by JavaWoman [user agents and UTF-8]
Additions:
''"Modern" user agents support UTF-8 - but as far as I know only the graphical ones (i.e., not Lynx or Links - or maybe they do on Unix, but certainly not on Windows); IE at least as far back as 5.01 - don't know about 4.0 (yes there are people that use this); Netscape 4.x has I think only limited support (if at all), and as you say the Gecko-based browsers are OK, as is Opera (6+ at least, not sure about 5).
-- JavaWoman''


Revision [1410]

Edited on 2004-09-26 16:15:35 by NilsLindenberg [user agents and UTF-8]
Additions:
CategoryDevelopment
Deletions:
--AndreaRossato


Revision [1196]

Edited on 2004-09-14 11:31:53 by AndreaRossato [getting rid of mbstring]
Additions:
===Real Multilanguage Support===

Here's some code to provide real multilanguage support.
The first 3 functions are used within the functions that do the real encoding conversions.
str2utf8, str2ascii and str2iso8859 can take any encoded string and convert it into the desired encoding: ascii plus unicode entities for html output, iso8859-1 plus unicode entities for database storage and utf8 for forms.
Unfortunately the ascii and iso8859 output is not compatible with htmlspecialchars. This is the reason for the valid_xml function. It has the same purpose as htmlspecialchars, but will correctly handle & (it does not re-encode entities that are already there; a small illustration follows the usage list below).
How to use these functions? For instance,
- in formatters/wakka.php you should use:
- print($this->str2ascii($text));
- in wakka.php, function SavePage you should use:
- "body = '".mysql_escape_string(trim($this->str2iso8859($body)))."'");
- in handlers/page/edit.php you should use:
- "<textarea rows=\"40\" cols=\"60\" onkeydown=\"fKeyDown()\" name=\"body\" style=\"width: 100%; height: 400px\">".$this->valid_xml($this->str2utf8($body))."</textarea><br />\n"


And so on....
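
(a small illustration of the valid_xml vs. htmlspecialchars point above; the expected values are a sketch that follows from the negative lookahead in valid_xml further down:)

%%(php)
<?php
// a string that already contains a numeric entity plus a bare ampersand
$s = "Tom &#38; Jerry & friends";
// htmlspecialchars($s)   gives  "Tom &amp;#38; Jerry &amp; friends"   (the existing entity gets broken)
// $this->valid_xml($s)   gives  "Tom &#38; Jerry &amp; friends"       (only the bare & is encoded)
?>
%%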

**Update** ''I changed the functions that do the conversion to improve speed and reduce memory usage'' 2004-08-14
--AndreaRossato

Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage here]].

The bits:
%%(php)
<?php
//Multilanguage support. We will use: utf-8 for user input, iso8859-1 + unicode for database storage and ascii + unicode for printing
function utf8_to_unicode($str) {
$unicode = array();
$values = array();
$lookingFor = 1;
for ($i = 0; $i < strlen($str); $i++ ) {
$thisValue = ord( $str[$i] );
if ( $thisValue < 128 ) $unicode[] = $thisValue;
else {
if ( count( $values ) == 0 ) $lookingFor = ( $thisValue < 224 ) ? 2 : 3;
$values[] = $thisValue;
if ( count( $values ) == $lookingFor ) {
$number = ( $lookingFor == 3 ) ?
( ( $values[0] % 16 ) * 4096 ) + ( ( $values[1] % 64 ) * 64 ) + ( $values[2] % 64 ):
( ( $values[0] % 32 ) * 64 ) + ( $values[1] % 64 );
$unicode[] = $number;
$values = array();
$lookingFor = 1;
}
}
}
return $unicode;
}
function deCP1252 ($str) {
$str = str_replace("&#128", "€", $str);
$str = str_replace("&#129", "", $str);
$str = str_replace("&#130", "&#8218", $str);
$str = str_replace("&#131", "&#402", $str);
$str = str_replace("&#132", "&#8222", $str);
$str = str_replace("&#133", "&#8230", $str);
$str = str_replace("&#134", "&#8224", $str);
$str = str_replace("&#135", "&#8225", $str);
$str = str_replace("&#136", "&#710", $str);
$str = str_replace("&#137", "&#8240", $str);
$str = str_replace("&#138", "&#352", $str);
$str = str_replace("&#139", "&#8249", $str);
$str = str_replace("&#140", "&#338", $str);
$str = str_replace("&#145", "&#8216", $str);
$str = str_replace("&#146", "&#8217", $str);
$str = str_replace("&#147", "&#8220", $str);
$str = str_replace("&#148", "&#8221", $str);
$str = str_replace("&#149", "&#8226", $str);
$str = str_replace("&#150", "&#8211", $str);
$str = str_replace("&#151", "&#8212", $str);
$str = str_replace("&#152", "&#732", $str);
$str = str_replace("&#153", "&#8482", $str);
$str = str_replace("&#154", "&#353", $str);
$str = str_replace("&#155", "&#8250", $str);
$str = str_replace("&#156", "&#339", $str);
$str = str_replace("&#159", "&#376", $str);
return $str;
}
function code2utf($num){
if($num<128)return chr($num);
if($num<2048)return chr(($num>>6)+192).chr(($num&63)+128);
if($num<65536)return chr(($num>>12)+224).chr((($num>>6)&63)+128).chr(($num&63)+128);
if($num<2097152)return chr(($num>>18)+240).chr((($num>>12)&63)+128).chr((($num>>6)&63)+128). chr(($num&63)+128);
return '';
}
//to print in a form
function str2utf8($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
if (mb_detect_encoding($str) == "UTF-8") {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}

//to print html
function str2ascii ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
return $this->deCP1252($str);
break;
}
}

//for database storage
function str2iso8859 ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 127 && $value <= 160 )
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}return $this->deCP1252($str);
break;
}
}


function valid_xml ($str) {
$str = str_replace("\"", """, $str);
$str = str_replace("<", "<", $str);
$str = str_replace(">", ">", $str);
$str = preg_replace("/&(?![a-zA-Z0-9#]+?;)/", "&", $str);
return $str;
}
?>
%%
--AndreaRossato
----
hmm... i may have the solution but i need to understand the problem ;)

++first i don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason i didn't think about)++ (''the correct functions would have been http_entity_decode($string, ENT_QUOTES, 'UTF-8') and httpentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multybyte-chars yet. the [[http://de.php.net/manual/de/ref.mbstring.php mb-string-lib]] might give a more straight and performant solution. andrea's sample code should be valuable to understand what happens but i am still looking for a variant that don't "contaminate" the code too much and keeps it maintainable. a good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff including mysql_escape_string and such and to maintain the conversion stuff in one central place in future steps'')

second it's not perfectly clear to me, how to treat clients that don't accept utf-8 encoding. i haven't had much time to get into the stuff, but so far i think the following tasks have to be managed:

- determine the most convinient charset (that's easy, just have a look at $HTTP_ACCEPT_CHARSET)
- set the apropriate http-header in **header.php** and - if needed - set a flag $this->config["use_utf8"] = true;
- do the conversions on form-data if use_utf8 is set (this sounds like a busy task)
- convert the $_POST data back to iso-8859-1 (the charset we'll internally work with)
- leave the formatter untouched, which should be fed with iso-data (and entities), if i have no fault in the points above. instead use the buffered output which is stored in the variable $output at $wakka->includebuffered to convert it at once, namely to utf-8 what is expected by the client if it sends utf-8-formdata.

what i don't understand yet is:

- what to do with the wikiword-recognition, which is designed for the latin alphabet. i think at least the ""[[forced links]]"" should work in every language.
- will the diff-engine work (not worse than now), when it's fed with html-entities and nothing but entities (this //will// happen with a page that only stores the quotation of an aramean bible-text)
- how will the fulltext-search behave
- and of course am i right with the tasklist above. something's missing? something's wrong?

btw: what wakka-forks already exist, that are redesigned for the needs of a foreign charset? isn't wackowiki a russian spin off? do we have some cyrillic speaking wikka-fans out there? ;)

-- [[dreckfehler]]
----

There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net UniWakka]] -:).

I'll try to clarify the problem, as far as I can ;)
The problem with character encoding is that UTF-8 is a multi-byte encoding. Ascii and UTF-8 are actually the same stuff, since the first 128 character in UTF-8 are plain 8-bit. The problem is the remaining characters that are encoded with more than 1 byte...

Now, there are two different approaches:
1. you can use 8-bit encoding (iso-8859-*). That is to say: if you have cyrillic characters you can use iso-8859-5 (or cp-1252, as far as I remember). Ascii characters are the same, bur above chr(128) you have cyrillic chars. In this case you can use cyrillic but not, for instance, french accented letters (these are not included in iso-8859-5).
This approach lets you use charset metatags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support: you can only use a very limited set of languages at a time. Period.
This is the Wacko approach.

2. If you want to have cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not able to handle strings with multi-byte encodings: preg_match, preg_replace will not work.
You need to convert those strings into single-byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.

WikiWords must be plain ascii, as every URI.
I did not study WikkaWiki diff engine. But there shouldn't be any problem as far as you use unicode entities above ascii (or iso-8859-1) characters.
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B here]].
http_entity_decode and httpentities work only with single byte characters, as every php functions. As you said, for multi-byte you need to use mb-string-lib. But if you want to use the lib you are going to rewrite every wakka-derived wiki, and you cannot use perl regular expressions. And this is not going to avoid "contamination" of the code.

Moreover, I would like to ask you to indicate some user-agent that do not support UTF-8. IE, gecko derived browser, Konqueror, Opera do support it. As far as I know Google pages are utf-8 encoded.

--AndreaRossato

----
==The "We don't like mbstring" version of the code==

%%(php)
<?php
function is_utf8($Str) {
for ($i=0; $i<strlen($Str); $i++) {
if (ord($Str[$i]) < 0x80) continue;
elseif ((ord($Str[$i]) & 0xE0) == 0xC0) $n=1;
elseif ((ord($Str[$i]) & 0xF0) == 0xE0) $n=2;
elseif ((ord($Str[$i]) & 0xF8) == 0xF0) $n=3;
elseif ((ord($Str[$i]) & 0xFC) == 0xF8) $n=4;
elseif ((ord($Str[$i]) & 0xFE) == 0xFC) $n=5;
else return false;
for ($j=0; $j<$n; $j++) {
if ((++$i == strlen($Str)) || ((ord($Str[$i]) & 0xC0) != 0x80))
return false;
}
}
return true;
}

//to print in a form
function str2utf8($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}

//ascii for xhtml
function str2ascii ($str) {
if ($this->is_utf8($str)) {

preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);

}
}
//iso8859 for database storage (so we do not need mysql 4.1)
function str2iso8859 ($str) {
if ($this->is_utf8($str)) {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
} else {
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}
return $this->deCP1252($str);

}
}
%%
Deletions:
===Real Multilanguage Support===

Here's some code to provide real multilanguage support.
The first 3 functions are used within the functions that do the real enconding conversions.
str2utf8, str2ascii and str2iso8859 can take any encodend string and convert it into the desired encoding: ascii plus unicode entities for html output, iso8859-1 plus unicode entities for database storage and utf8 for forms.
Unfortunately the ascii and iso8859 output is not compatible with htmlspecialchars. This is the reason of a valid_xml function. It has the same scope of htmlspecialchars , but will correctly handle &.
How to use this function? For istance,
- in formatters/wakka.php you should use:
- print($this->str2ascii($text));
- in wakka.php, function SavePage you should use:
- "body = '".mysql_escape_string(trim($this->str2iso8859($body)))."'");
- in handlers/page/edit.php you should use:
- "<textarea rows=\"40\" cols=\"60\" onkeydown=\"fKeyDown()\" name=\"body\" style=\"width: 100%; height: 400px\">".$this->valid_xml($this->str2utf8($body))."</textarea><br />\n"


And so on....

**Update** ''I changed the functions that do the conversion to improve speed and reduce memory usage'' 2004-08-14

Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage here]].

The bits:
%%(php)
<?php
//Multilanguage support. We will use: utf-8 for user input, iso8859-1 + unicode for database storage and ascii + unicode for printing
function utf8_to_unicode($str) {
$unicode = array();
$values = array();
$lookingFor = 1;
for ($i = 0; $i < strlen($str); $i++ ) {
$thisValue = ord( $str[$i] );
if ( $thisValue < 128 ) $unicode[] = $thisValue;
else {
if ( count( $values ) == 0 ) $lookingFor = ( $thisValue < 224 ) ? 2 : 3;
$values[] = $thisValue;
if ( count( $values ) == $lookingFor ) {
$number = ( $lookingFor == 3 ) ?
( ( $values[0] % 16 ) * 4096 ) + ( ( $values[1] % 64 ) * 64 ) + ( $values[2] % 64 ):
( ( $values[0] % 32 ) * 64 ) + ( $values[1] % 64 );
$unicode[] = $number;
$values = array();
$lookingFor = 1;
}
}
}
return $unicode;
}
function deCP1252 ($str) {
$str = str_replace("&#128", "€", $str);
$str = str_replace("&#129", "", $str);
$str = str_replace("&#130", "&#8218", $str);
$str = str_replace("&#131", "&#402", $str);
$str = str_replace("&#132", "&#8222", $str);
$str = str_replace("&#133", "&#8230", $str);
$str = str_replace("&#134", "&#8224", $str);
$str = str_replace("&#135", "&#8225", $str);
$str = str_replace("&#136", "&#710", $str);
$str = str_replace("&#137", "&#8240", $str);
$str = str_replace("&#138", "&#352", $str);
$str = str_replace("&#139", "&#8249", $str);
$str = str_replace("&#140", "&#338", $str);
$str = str_replace("&#145", "&#8216", $str);
$str = str_replace("&#146", "&#8217", $str);
$str = str_replace("&#147", "&#8220", $str);
$str = str_replace("&#148", "&#8221", $str);
$str = str_replace("&#149", "&#8226", $str);
$str = str_replace("&#150", "&#8211", $str);
$str = str_replace("&#151", "&#8212", $str);
$str = str_replace("&#152", "&#732", $str);
$str = str_replace("&#153", "&#8482", $str);
$str = str_replace("&#154", "&#353", $str);
$str = str_replace("&#155", "&#8250", $str);
$str = str_replace("&#156", "&#339", $str);
$str = str_replace("&#159", "&#376", $str);
return $str;
}
function code2utf($num){
if($num<128)return chr($num);
if($num<2048)return chr(($num>>6)+192).chr(($num&63)+128);
if($num<65536)return chr(($num>>12)+224).chr((($num>>6)&63)+128).chr(($num&63)+128);
if($num<2097152)return chr(($num>>18)+240).chr((($num>>12)&63)+128).chr((($num>>6)&63)+128). chr(($num&63)+128);
return '';
}
//to print in a form
function str2utf8($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
if (mb_detect_encoding($str) == "UTF-8") {
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
return $str;
} else {
$mystr = $str;
$str = "";
for ($i = 0; $i < strlen($mystr); $i++ ) {
$code = ord( $mystr[$i] );
if ($code >= 128 && $code < 160) {
$str .= "&#".$code.";";
} else {
$str .= $this->code2utf($code);
}
}
$str = $this->deCP1252($str);
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}

return $str;
}
}

//to print html
function str2ascii ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
return $this->deCP1252($str);
break;
}
}

//for database storage
function str2iso8859 ($str) {
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {

case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
}
$unicode = $this->utf8_to_unicode($str);
$entities = '';
foreach( $unicode as $value ) {
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
}
return $this->deCP1252($entities);
break;

case "ISO-8859-1":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 127 && $value <= 160 )
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}return $this->deCP1252($str);
break;
}
}


function valid_xml ($str) {
$str = str_replace("\"", """, $str);
$str = str_replace("<", "<", $str);
$str = str_replace(">", ">", $str);
$str = preg_replace("/&(?![a-zA-Z0-9#]+?;)/", "&", $str);
return $str;
}
?>
%%
----
hmm... i may have the solution but i need to understand the problem ;)

++first i don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason i didn't think about)++ (''the correct functions would have been http_entity_decode($string, ENT_QUOTES, 'UTF-8') and httpentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multybyte-chars yet. the [[http://de.php.net/manual/de/ref.mbstring.php mb-string-lib]] might give a more straight and performant solution. andrea's sample code should be valuable to understand what happens but i am still looking for a variant that don't "contaminate" the code too much and keeps it maintainable. a good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff including mysql_escape_string and such and to maintain the conversion stuff in one central place in future steps'')

second it's not perfectly clear to me, how to treat clients that don't accept utf-8 encoding. i haven't had much time to get into the stuff, but so far i think the following tasks have to be managed:

- determine the most convinient charset (that's easy, just have a look at $HTTP_ACCEPT_CHARSET)
- set the apropriate http-header in **header.php** and - if needed - set a flag $this->config["use_utf8"] = true;
- do the conversions on form-data if use_utf8 is set (this sounds like a busy task)
- convert the $_POST data back to iso-8859-1 (the charset we'll internally work with)
- leave the formatter untouched, which should be fed with iso-data (and entities), if i have no fault in the points above. instead use the buffered output which is stored in the variable $output at $wakka->includebuffered to convert it at once, namely to utf-8 what is expected by the client if it sends utf-8-formdata.

what i don't understand yet is:

- what to do with the wikiword-recognition, which is designed for the latin alphabet. i think at least the ""[[forced links]]"" should work in every language.
- will the diff-engine work (not worse than now), when it's fed with html-entities and nothing but entities (this //will// happen with a page that only stores the quotation of an aramean bible-text)
- how will the fulltext-search behave
- and of course am i right with the tasklist above. something's missing? something's wrong?

btw: what wakka-forks already exist, that are redesigned for the needs of a foreign charset? isn't wackowiki a russian spin off? do we have some cyrillic speaking wikka-fans out there? ;)

-- [[dreckfehler]]
----

There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net UniWakka]] -:).

I'll try to clarify the problem, as far as I can ;)
The problem with character encoding is that UTF-8 is a multi-byte encoding. Ascii and UTF-8 are actually the same stuff, since the first 128 character in UTF-8 are plain 8-bit. The problem is the remaining characters that are encoded with more than 1 byte...

Now, there are two different approaches:
1. you can use 8-bit encoding (iso-8859-*). That is to say: if you have cyrillic characters you can use iso-8859-5 (or cp-1252, as far as I remember). Ascii characters are the same, bur above chr(128) you have cyrillic chars. In this case you can use cyrillic but not, for instance, french accented letters (these are not included in iso-8859-5).
This approach lets you use charset metatags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support: you can only use a very limited set of languages at a time. Period.
This is the Wacko approach.

2. If you want to have cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not able to handle strings with multi-byte encodings: preg_match, preg_replace will not work.
You need to convert those strings into single-byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.

WikiWords must be plain ascii, as every URI.
I did not study WikkaWiki diff engine. But there shouldn't be any problem as far as you use unicode entities above ascii (or iso-8859-1) characters.
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B here]].
http_entity_decode and httpentities work only with single byte characters, as every php functions. As you said, for multi-byte you need to use mb-string-lib. But if you want to use the lib you are going to rewrite every wakka-derived wiki, and you cannot use perl regular expressions. And this is not going to avoid "contamination" of the code.

Moreover, I would like to ask you to indicate some user-agent that do not support UTF-8. IE, gecko derived browser, Konqueror, Opera do support it. As far as I know Google pages are utf-8 encoded.


Revision [978]

Edited on 2004-08-14 05:49:26 by AndreaRossato [code cleanup to reduce redundancy and memory usage.]
Additions:
**Update** ''I changed the functions that do the conversion to improve speed and reduce memory usage'' 2004-08-14
Check it out [[http://www.istitutocolli.org/uniwakka/MultiLanguage here]].
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {
case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
break;

case "ISO-8859-1":
$value = ord( $str{$i} );
if ($value <= 127)
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
return $this->deCP1252($str);
break;
}
mb_detect_order("ASCII, UTF-8, ISO-8859-1");
$encoding = mb_detect_encoding($str);
switch ($encoding) {
case "UTF-8":
preg_match_all("/&#([0-9]*?);/", $str, $unicode);
foreach( $unicode[0] as $key => $value) {
$str = preg_replace("/".$value."/", $this->code2utf($unicode[1][$key]), $str);
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
break;

case "ISO-8859-1":
$value = ord( $str{$i} );
if ($value > 127 && $value <= 160 )
$constr .= chr( $value );
else $constr .= '&#' . $value . ';';
}//for

return $this->deCP1252($constr);
break;
case "ASCII":
for ($i = 0; $i < strlen($str); $i++ ) {
$value = ord( $str{$i} );
if ($value > 159 && $value <= 255 )
$constr .= chr( $value );
elseif ($value > 127 && $value <= 160 )
$constr .= '&#' . $value . ';';
else $constr .= chr( $value );
}return $this->deCP1252($str);
break;
}
}
Deletions:
Check it out [[http://gipc49.jus.unitn.it:8080/wakka/MultiLanguage here]].
$str = $this->str2utf8($str);
$entities .= ( $value > 127 ) ? '&#' . $value . ';' : chr( $value );
} //foreach
$str = $this->str2utf8($str);
$entities = "";
if ($value <= 127)
$entities .= chr( $value );
elseif ($value > 159 && $value <= 255 )
$entities .= chr( $value );
else $entities .= '&#' . $value . ';';
} //foreach


Revision [837]

Edited on 2004-08-01 14:58:12 by AndreaRossato [corrected some typos and links.]
Additions:
There's a Wakka fork redesigned to support multi-language: [[http://uniwakka.sourceforge.net UniWakka]] -:).
The problem with character encoding is that UTF-8 is a multi-byte encoding. Ascii and UTF-8 are actually the same stuff, since the first 128 character in UTF-8 are plain 8-bit. The problem is the remaining characters that are encoded with more than 1 byte...
This approach lets you use charset metatags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support: you can only use a very limited set of languages at a time. Period.
This is the Wacko approach.
2. If you want to have cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not able to handle strings with multi-byte encodings: preg_match, preg_replace will not work.
You need to convert those strings into single-byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.
WikiWords must be plain ascii, as every URI.
Moreover, I would like to ask you to indicate some user-agent that do not support UTF-8. IE, gecko derived browser, Konqueror, Opera do support it. As far as I know Google pages are utf-8 encoded.
Deletions:
There's a wakka fork redesigned to support multi-language: UniWakka -:). I'm going to release it tomorrow, but you can browse the cvs source code here: http://cvs.sourceforge.net/viewcvs.py/uniwakka/
The problem with character encoding is that UTF-8 is a multi byte encoding. Ascii and UTF-8 are actually the same stuff, since the first 128 character in UTF-8 are plain 8-bit. The problem is the remaining characters that are encoded with more than 1 byte...
This approach lets you use charset metatags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support. You can use a limited set of language at a time. Period.
This is the wacko approach.
2. If you want to have cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not able to handle strings with multi byte encodings. preg_match, preg_replace will not work.
You need to convert those strings into single byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.
WikiWords must be plain ascii, like URI.


Revision [833]

Edited on 2004-07-31 16:52:42 by AndreaRossato [trying to make it clear ;)]
Additions:
There's a wakka fork redesigned to support multi-language: UniWakka -:). I'm going to release it tomorrow, but you can browse the cvs source code here: http://cvs.sourceforge.net/viewcvs.py/uniwakka/

I'll try to clarify the problem, as far as I can ;)
The problem with character encoding is that UTF-8 is a multi byte encoding. Ascii and UTF-8 are actually the same stuff, since the first 128 character in UTF-8 are plain 8-bit. The problem is the remaining characters that are encoded with more than 1 byte...
Now, there are two different approaches:
1. you can use 8-bit encoding (iso-8859-*). That is to say: if you have cyrillic characters you can use iso-8859-5 (or cp-1252, as far as I remember). Ascii characters are the same, bur above chr(128) you have cyrillic chars. In this case you can use cyrillic but not, for instance, french accented letters (these are not included in iso-8859-5).
This approach lets you use charset metatags to define the encoding. PHP will be able to handle it, since the characters are plain 8-bit. This cannot be called multi-language support. You can use a limited set of language at a time. Period.
This is the wacko approach.
2. If you want to have cyrillic letters __and__ Italian (or French) accented letters in the same wiki, then you need UTF-8, that is to say, multi-byte characters. PHP will not able to handle strings with multi byte encodings. preg_match, preg_replace will not work.
You need to convert those strings into single byte characters. The only way I was able to find to manipulate those strings is to use iso-8859-1 plus unicode entities.
WikiWords must be plain ascii, like URI.
I did not study WikkaWiki diff engine. But there shouldn't be any problem as far as you use unicode entities above ascii (or iso-8859-1) characters.
The same applies to full-text search. The string to be searched is converted into iso-8859-1 plus unicode entities. And unicode entities can be searched. Have a try [[http://gipc49.jus.unitn.it:8080/wakka/TextSearch?phrase=%E6%B6%9B here]].
http_entity_decode and httpentities work only with single byte characters, as every php functions. As you said, for multi-byte you need to use mb-string-lib. But if you want to use the lib you are going to rewrite every wakka-derived wiki, and you cannot use perl regular expressions. And this is not going to avoid "contamination" of the code.


Revision [832]

Edited on 2004-07-31 15:10:23 by DreckFehler [just understood a bit more ;)]
Additions:
++first i don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason i didn't think about)++ (''the correct functions would have been http_entity_decode($string, ENT_QUOTES, 'UTF-8') and httpentities($string, ENT_QUOTES, 'UTF-8'), but these functions aren't able to handle multybyte-chars yet. the [[http://de.php.net/manual/de/ref.mbstring.php mb-string-lib]] might give a more straight and performant solution. andrea's sample code should be valuable to understand what happens but i am still looking for a variant that don't "contaminate" the code too much and keeps it maintainable. a good start might be to introduce two functions "Formstring()" and "DBstring()" which do //all// conversion stuff including mysql_escape_string and such and to maintain the conversion stuff in one central place in future steps'')
second it's not perfectly clear to me, how to treat clients that don't accept utf-8 encoding. i haven't had much time to get into the stuff, but so far i think the following tasks have to be managed:
- leave the formatter untouched, which should be fed with iso-data (and entities), if i have no fault in the points above. instead use the buffered output which is stored in the variable $output at $wakka->includebuffered to convert it at once, namely to utf-8 what is expected by the client if it sends utf-8-formdata.
-- [[dreckfehler]]
Deletions:
first i don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason i didn't think about). second it's not perfectly clear to me, how to treat clients that don't accept utf-8 encoding. i haven't had much time to get into the stuff, but so far i think the following tasks have to be managed:
- leave the formatter untouched, which should be fed with iso-data (and entities), if i have no fault in the points above. instead use the buffered output which is stored in the variable $output at $wakka->includebuffered and convert it at once, namely to utf-8 what is expected by the client if it sends utf-8-formdata.
-- dreckfehler


Revision [796]

Edited on 2004-07-29 12:44:44 by DreckFehler [tried to name the problems]
Additions:
hmm... i may have the solution but i need to understand the problem ;)
first i don't know why not to take the [[http://www.php.net/manual/de/function.utf8-decode.php utf8-decode]] and [[http://www.php.net/manual/de/function.utf8-encode.php utf8-encode]] functions to handle the conversion itself (but maybe there is a reason i didn't think about). second it's not perfectly clear to me, how to treat clients that don't accept utf-8 encoding. i haven't had much time to get into the stuff, but so far i think the following tasks have to be managed:
- determine the most convinient charset (that's easy, just have a look at $HTTP_ACCEPT_CHARSET)
- set the apropriate http-header in **header.php** and - if needed - set a flag $this->config["use_utf8"] = true;
- do the conversions on form-data if use_utf8 is set (this sounds like a busy task)
- convert the $_POST data back to iso-8859-1 (the charset we'll internally work with)
- leave the formatter untouched, which should be fed with iso-data (and entities), if i have no fault in the points above. instead use the buffered output which is stored in the variable $output at $wakka->includebuffered and convert it at once, namely to utf-8 what is expected by the client if it sends utf-8-formdata.
what i don't understand yet is:
- what to do with the wikiword-recognition, which is designed for the latin alphabet. i think at least the ""[[forced links]]"" should work in every language.
- will the diff-engine work (not worse than now), when it's fed with html-entities and nothing but entities (this //will// happen with a page that only stores the quotation of an aramean bible-text)
- how will the fulltext-search behave
- and of course am i right with the tasklist above. something's missing? something's wrong?
btw: what wakka-forks already exist, that are redesigned for the needs of a foreign charset? isn't wackowiki a russian spin off? do we have some cyrillic speaking wikka-fans out there? ;)
-- dreckfehler


Revision [790]

The oldest known version of this page was created on 2004-07-28 09:55:20 by AndreaRossato [tried to name the problems]