Locale::Maketext::Gettext(3)  User Contributed Perl Documentation  Locale::Maketext::Gettext(3)

NAME
       Locale::Maketext::Gettext - Joins the gettext and Maketext frameworks

SYNOPSIS
       In your localization class:

	 package MyPackage::L10N;
	 use base qw(Locale::Maketext::Gettext);
	 return 1;

       In your application:

	 use MyPackage::L10N;
	 $LH = MyPackage::L10N->get_handle or die "What language?";
	 $LH->bindtextdomain("mypackage", "/home/user/locale");
	 $LH->textdomain("mypackage");
	 $LH->maketext("Hello, world!!");

       If you want more control over the details:

	 # Change the output encoding
	 $LH->encoding("UTF-8");
	 # Stick with the Maketext behavior on lookup failures
	 $LH->die_for_lookup_failures(1);
	 # Flush the MO file cache and re-read your updated MO files
	 $LH->reload_text;
	 # Set the encoding of your maketext keys, if not in English
	 $LH->key_encoding("Big5");
	 # Set the action when encode fails
	 $LH->encode_failure(Encode::FB_HTMLCREF);

       Use Locale::Maketext::Gettext to read and parse the MO file:

	 use Locale::Maketext::Gettext;
	 %Lexicon = read_mo($MOfile);

DESCRIPTION
       Locale::Maketext::Gettext joins the GNU gettext and Maketext
       frameworks.  It is a subclass of Locale::Maketext(3) that follows the
       way GNU gettext works.  It works seamlessly, both in the sense of GNU
       gettext and Maketext.  As a result, you enjoy both their advantages,
       and get rid of both their problems, too.

       You start as with a usual GNU gettext localization project:  Work on
       PO files with the help of translators, reviewers and Emacs.  Turn
       them into MO files with msgfmt.  Copy them into the appropriate
       locale directory, such as /usr/share/locale/de/LC_MESSAGES/myapp.mo.

       Then, build your Maketext localization class, with your base class
       changed from Locale::Maketext(3) to Locale::Maketext::Gettext.  That is
       all.

METHODS
       $LH->bindtextdomain(DOMAIN, LOCALEDIR)
           Register a text domain with a locale directory.  Returns
           "LOCALEDIR" itself.  If "LOCALEDIR" is omitted, the registered
           locale directory of "DOMAIN" is returned.  This method always
           succeeds.

       $LH->textdomain(DOMAIN)
           Set the current text domain.  Returns the "DOMAIN" itself.  If
           "DOMAIN" is omitted, the current text domain is returned.  This
           method always succeeds.
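
           For example, a minimal sketch (the domain name "myapp" and the
           locale directory below are only illustrative):

             # Register "myapp" under a locale directory and select it
             $LH->bindtextdomain("myapp", "/usr/share/locale");
             $LH->textdomain("myapp");
             # Call either method without the optional argument to query it
             my $dir    = $LH->bindtextdomain("myapp");  # "/usr/share/locale"
             my $domain = $LH->textdomain;               # "myapp"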

       $text = $LH->maketext($key, @param...)
           Look up the $key in the current lexicon and return a translated
           message in the language of the user.  This is the same method as
           in Locale::Maketext(3), with a wrapper that returns the text
           message "encode"d according to the current "encoding".  Refer to
           Locale::Maketext(3) for the maketext plural notation.
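
           For example, with a hypothetical message key that uses the
           Maketext plural notation:

             my $n = 3;   # an illustrative count
             # "[quant,_1,file,files]" expands according to the number in $n
             print $LH->maketext("Found [quant,_1,file,files].", $n), "\n";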

       $text = $LH->pmaketext($ctxt, $key, @param...)
           Look up the $key in a particular context in the current lexicon
           and return a translated message in the language of the user.  Use
           "--keyword=pmaketext:1c,2" for the xgettext utility.

       $LH->language_tag
           Retrieve the language tag.  This is the same method as in
           Locale::Maketext(3).  It is read-only.

       $LH->encoding(ENCODING)
           Set or retrieve the output encoding.  The default is the same
           encoding as the gettext MO file.  You can specify "undef" to
           return the result in unencoded UTF-8.
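
           For example:

             # Output Big5 bytes, regardless of the MO file encoding
             $LH->encoding("Big5");
             # Return unencoded UTF-8 text instead of encoded bytes
             $LH->encoding(undef);
             # Query the current setting
             my $enc = $LH->encoding;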

       $LH->key_encoding(ENCODING)
           Specify the encoding used in your original text.  The "maketext"
           method itself is not multibyte-safe with the _AUTO lexicon.  If
           you are using your native non-English language as your original
           text and you run into trouble like:

           Unterminated bracket group, in:

           then set "key_encoding" to the encoding of your original text.
           Returns the current setting.

           WARNING: You should always use US-ASCII text keys.  Using non-US-
           ASCII keys is always discouraged and is not guaranteed to work.

       $LH->encode_failure(CHECK)
           Set the action to take when encode fails.  This happens when the
           output text is out of the scope of your output encoding, for
           example, outputting Chinese into US-ASCII.  Refer to Encode(3)
           for the possible values of this "CHECK".  The default is
           "FB_DEFAULT", which is a safe choice that never fails, but part
           of your text may be lost, since that is what "FB_DEFAULT" does.
           Returns the current setting.
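
           For example, to replace characters that cannot be represented in
           the output encoding with HTML numeric character references
           instead of losing them:

             use Encode;
             $LH->encoding("US-ASCII");
             $LH->encode_failure(Encode::FB_HTMLCREF);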

       $LH->die_for_lookup_failures(SHOULD_I_DIE)
           Maketext dies for lookup failures, but GNU gettext never fails.
           By default Locale::Maketext::Gettext follows the GNU gettext
           behavior.  But if you are Maketext-styled, or if you need better
           control over the failures (like me :p), set this to 1.  Returns
           the current setting.

           Note that a lookup failure handler you registered with
           fail_with() only works when die_for_lookup_failures() is enabled.
           If you disable die_for_lookup_failures(), maketext() never fails
           and the lookup failure handler will be ignored.
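
           For example, a minimal Maketext-styled sketch (the message key is
           only illustrative):

             $LH->die_for_lookup_failures(1);
             my $text = eval { $LH->maketext("Some missing key") };
             if ($@) {
                 # Handle the lookup failure yourself
                 $text = "(translation missing)";
             }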

       $LH->reload_text
           Purge the MO text cache.  It purges the MO text cache from the
           base class Locale::Maketext::Gettext.  The next time "maketext"
           is called, the MO file will be read and parsed from the disk
           again.  This is used when your MO file is updated, but you cannot
           shut down and restart the application.  For example, when you are
           co-hosting on a mod_perl-enabled Apache, or when your
           mod_perl-enabled Apache is too vital to be restarted for every
           update of your MO file, or if you are running a vital daemon,
           such as an X display server.
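
           For example, in a long-running process you might re-read updated
           MO files on a signal (the choice of SIGHUP is only an
           illustration):

             $SIG{HUP} = sub { $LH->reload_text };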

FUNCTIONS
       %Lexicon = read_mo($MOfile);
           Read and parse the MO file.  Returns the resulting %Lexicon.  The
           returned lexicon is in its original encoding.

           If you need the meta information of your MO file, parse the entry
           $Lexicon{""}.  For example:

             $Lexicon{""} =~ /^Content-Type: text\/plain; charset=(.*)$/im;
             $encoding = $1;

	   "read_mo()" is exported by default, but you need to "use
	   Locale::Maketext::Gettext" in order to use it.  It is not exported
	   from your localization class, but from the
	   Locale::Maketext::Gettext package.
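
           For example, a sketch that reads an MO file (the path below is
           only illustrative) and decodes the lexicon into Perl text:

             use Encode qw(decode);
             use Locale::Maketext::Gettext;
             my %Lexicon = read_mo("/usr/share/locale/de/LC_MESSAGES/myapp.mo");
             $Lexicon{""} =~ /^Content-Type: text\/plain; charset=(.*)$/im;
             my $encoding = $1;
             $Lexicon{$_} = decode($encoding, $Lexicon{$_})
               foreach keys %Lexicon;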

NOTES
       WARNING: Do not try to put any lexicon in your language subclass.
       When the "textdomain" method is called, the current lexicon is
       replaced, not appended.  This is to accommodate the way "textdomain"
       works.  Messages from the previous text domain should not stay in the
       current text domain.

       An essential benefit of Locale::Maketext::Gettext over the original
       Locale::Maketext(3) is that GNU gettext is multibyte-safe, but Perl
       source is not.  GNU gettext is safe with Big5 characters like
       \xa5\x5c (Gong1).  But if you follow the current Locale::Maketext(3)
       documentation and put your lexicon as a hash in the source of a
       localization subclass, you have to escape bytes like \x5c, \x40,
       \x5b, etc., in the middle of some natural multibyte characters.  This
       breaks these characters in half.  Your non-technical translators and
       reviewers will be presented with an unreadable mess, "Luan4Ma3".
       Sorry to say this, but it is weird for a localization framework not
       to be multibyte-safe.  But, well, here comes Locale::Maketext::Gettext
       to the rescue.  With Locale::Maketext::Gettext, you can sit back and
       relax now, leaving all this mess to the excellent GNU gettext
       framework.

       The idea of Locale::Maketext::Gettext came from
       Locale::Maketext::Lexicon(3), a great work by Autrijus.  But it had
       several problems at that time (version 0.16).  I first tried to write
       a wrapper to fix it, but finally I dropped it and decided to make a
       solution towards Locale::Maketext(3) itself.
       Locale::Maketext::Lexicon(3) should be fine now if you obtain a
       version newer than 0.16.

       Locale::Maketext::Gettext also solves the problem of the lack of
       encoding handling in Locale::Maketext(3).  I implemented this since
       this is what GNU gettext does.  When %Lexicon is read from MO files
       by "read_mo()", the encoding tagged in the gettext MO files is used
       to "decode" the text into the internal encoding of Perl.  Then, when
       extracted by "maketext", it is "encode"d by the current "encoding"
       value.  The "encoding" can be set at run time, so that you can run a
       daemon and output in different encodings according to the language
       settings of individual users, without having to restart the
       application.  This is an improvement over Locale::Maketext(3), and is
       essential to daemons and "mod_perl" applications.
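
       For example, a daemon may switch the output encoding per request (the
       request list and its fields here are only illustrative):

         # A real daemon would read these requests from its clients
         my @requests = ({ encoding => "UTF-8" }, { encoding => "Big5" });
         foreach my $request (@requests) {
             $LH->encoding($request->{encoding});
             print $LH->maketext("Hello, world!!"), "\n";
         }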

       You should trust the encoding of your gettext MO file.  GNU gettext
       "msgfmt" checks for illegal characters for you when you compile your
       MO file from your PO file.  The encoding from your MO files is always
       good.  If you try to output to a wrong encoding, part of your text
       may be lost, as "FB_DEFAULT" does.  If you do not like this
       "FB_DEFAULT", change the failure behavior with the method
       "encode_failure".

       If you need the behavior of automatic Traditional Chinese/Simplified
       Chinese conversion, as GNU gettext smartly does, do it yourself with
       Encode::HanExtra(3), too.  There may be a solution for this in the
       future, but not now.

       If you set "textdomain" to a domain that is not "bindtextdomain" to
       specific a locale directory yet, it will try search system locale
       directories.  The current system locale directory search order is:
       /usr/share/locale, /usr/lib/locale, /usr/local/share/locale,
       /usr/local/lib/locale.  Suggestions for this search order are welcome.

       NOTICE: MyPackage::L10N::en->maketext(...) is not available anymore,
       as the "maketext" method is no longer static.  That is an inevitable
       result, as %Lexicon is imported from foreign sources dynamically, not
       statically hardcoded in the Perl sources.  But the documentation of
       Locale::Maketext(3) does not say that you can use it as a static
       method anyway.  Maybe you were practicing this before; you had better
       check your existing code for this.  If you try to invoke it
       statically, it returns "undef".

       "dgettext" and "dcgettext" in GNU gettext are not implemented.  It is
       not possible to temporarily change the current text domain in the
       current design of Locale::Maketext::Gettext.  Besides, it is
       meaningless.  Locale::Maketext is object-oriented.  You can always
       raise a new language handle for another text domain.  This is different
       from the situation of GNU gettext.  Also, the category is always
       "LC_MESSAGES".  Of course it is.	 We are gettext and Maketext.

       Avoid creating different language handles with different textdomains
       on the same localization subclass.  This currently works, but it
       violates the basic design of Locale::Maketext(3).  In
       Locale::Maketext(3), %Lexicon is saved as a class variable, in order
       for the lexicon inheritance system to work.  So, multiple language
       handles of the same localization subclass share the same lexicon
       space, and their lexicon spaces clash.  I tried to avoid this problem
       by saving a copy of the current lexicon as an instance variable, and
       replacing the class lexicon with the current instance lexicon
       whenever it is changed by another language handle instance.  But this
       involves large-scale memory copying, which affects performance
       seriously.  This is discouraged.  You are advised to use a single
       textdomain for a single localization class.
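
       For example, a sketch of the recommended layout, with one
       localization class per text domain (the package and domain names are
       only illustrative, and language subclasses such as MyApp::L10N::en
       are assumed to exist):

         # MyApp/L10N.pm -- for the "myapp" text domain
         package MyApp::L10N;
         use base qw(Locale::Maketext::Gettext);
         return 1;

         # MyLib/L10N.pm -- for the "mylib" text domain
         package MyLib::L10N;
         use base qw(Locale::Maketext::Gettext);
         return 1;

         # In the application, each text domain gets its own class and handle
         my $app_lh = MyApp::L10N->get_handle or die "What language?";
         $app_lh->bindtextdomain("myapp", "/usr/share/locale");
         $app_lh->textdomain("myapp");
         my $lib_lh = MyLib::L10N->get_handle or die "What language?";
         $lib_lh->bindtextdomain("mylib", "/usr/share/locale");
         $lib_lh->textdomain("mylib");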

       The "key_encoding" is a workaround, not a solution.  There is no
       solution to this problem yet.  You should avoid using non-English
       language as your original text.	You will get yourself into trouble if
       you mix several original text encodings, for example, joining several
       pieces of code from programmers all around the world, with their
       messages written in their own language and encodings.  Solution
       suggestions are welcome.

       "pgettext" in GNU gettext is implemented as "pmaketext", in order to
       look up the text message translation in a particular context.  Thanks
       to the suggestion from Chris Travers.

BUGS
       GNU gettext never fails.  I try to achieve this as far as possible.
       The only reason that maketext may die unexpectedly now is
       "Unterminated bracket group".  I cannot find a better solution to it
       currently.  Suggestions are welcome.

       You are welcome to fix my English.  I have done my best with this
       documentation, but I am not a native English speaker after all. ^^;

SEE ALSO
       Locale::Maketext(3), Locale::Maketext::TPJ13(3),
       Locale::Maketext::Lexicon(3), Encode(3), bindtextdomain(3),
       textdomain(3).  Also, please refer to the official GNU gettext manual
       at <http://www.gnu.org/software/gettext/manual/>.

AUTHOR
       imacat <imacat@mail.imacat.idv.tw>

COPYRIGHT
       Copyright (c) 2003-2008 imacat. All rights reserved. This program is
       free software; you can redistribute it and/or modify it under the same
       terms as Perl itself.

perl v5.14.1			  2011-06-20	  Locale::Maketext::Gettext(3)