Search Results

Search found 1474 results on 59 pages for 'unicode'.

Page 12/59

  • DRM Tallyrand - The New User Interface

    - by russ.bishop
    I received word recently that the Tallyrand (11.1.2.0) build is out of our hands. I'm not sure when it will hit eDelivery, but if it hasn't already, it should happen soon. For this post, I want to quickly show the new user interface. The login screen: When you log in, you are browsing versions and hierarchies. Note that Unicode is fully supported. The UI attempts to provide context-sensitive links where possible; notice here that an unloaded version is selected, so the UI shows a link. Clicking the link automatically brings up the Load Version dialog. The same thing applies elsewhere in the UI when you attempt to perform an action with an unloaded version. Here is browsing a hierarchy, with the property grid and context menu displayed (though you can hide the property grid anytime you like to provide more room). Worried about drag and drop? Don't be! We support it even though this is a browser app. Also notice the Relationships feature on the right displaying a node's ancestors. Where possible, we try to present the available options, rather than just throwing up an "OK/Cancel" dialog (which most users never read anyway). Context-sensitive shortcuts automatically fill in the context based on the currently selected node. For example, if you want to run a query using the selected node as the root, you can just click that query in the Shortcuts tab. In this screenshot, clicking Model After would model the selected node.

    This is just for starters. There is much more to cover, on both the client and server. For example, all communication channels are now configurable (no more DCOM). You can pick the ports, the encoding (binary or XML), and the transport mechanism (TCP, TCP over SSL, or SOAP over HTTP). All the relevant WS-* standards are also supported, e.g. WS-Security. There are also new features beyond the web client and Unicode support. I hope to cover as many of these things as I can in the coming months. If you have specific requests, comment on this post and I'll try to cover them.

    Read the article

  • What is the optimum length for an HTML title tag in Unicode format?

    - by user1501256
    I have a website that generates its title tags dynamically, and the title tags are in Unicode format. Each title tag is limited to 65 characters, but sometimes Google doesn't show the title tag completely in the SERP. I'd like to know the optimum length of a title tag, in terms of SEO, for Unicode titles, and whether there is any difference between a Unicode and a non-Unicode title tag. And what about other search engines: Bing, Yahoo, and so on?

    Read the article

  • Good bitmap fonts with big sizes and unicode support

    - by bitonic
    I really like bitmap fonts for programming/terminal use. As far as I know there are two bitmap fonts with good Unicode support: Unifont and Fixed. The problem is that I have a really high-resolution screen, and they're both too small. Fixed does include a large size (10x20), but it looks really bad (it's basically always bold, and bold is a different face). Are there any other bitmap fonts with Unicode support and large sizes? Terminus is the only font with a decent size, but it doesn't have good Unicode support. Having good coverage for mathematical symbols would be enough, since that's what I need.

    Read the article

  • How can I copy files with names containing spaces and Unicode when using a shell script?

    - by LOlliffe
    I have a list of files that I'm trying to copy and move (using cp and mv) in a bash shell script. The problem I'm running into is that I can't get either command to recognize a huge number of files, seemingly because the filenames contain spaces and/or Unicode characters. I couldn't find any switches to decode/re-encode these characters. Instead, for example, if I copy "file name.xml", I get "*.xml" and a script error that the file wasn't found. Does anyone know settings or commands that will deal with these files?
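    For what it's worth, the usual fix here is quoting rather than re-encoding: an unquoted variable expansion is split on whitespace, which is how "file name.xml" degenerates into a failed glob. A minimal sketch, with placeholder paths:

        #!/bin/bash
        # Quoting "$f" keeps spaces and non-ASCII characters in one argument.
        src="/path/to/source"    # placeholder
        dest="/path/to/dest"     # placeholder
        for f in "$src"/*.xml; do
            cp -- "$f" "$dest/"
        done

    The -- guards against filenames that begin with a dash.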

    Read the article

  • How to remove a few lines from a Unicode registry file using batch commands in Windows?

    - by Cosmin
    Hi. I have a program that generates some data in the registry. I save it with "reg export HKCU\Software\ProgramName\Data data.reg" (Unicode format). I need to take it to another computer and import it there so the program on that computer can use the data. But first I have to remove some text lines from data.reg. The lines are easy to find because they contain certain strings. Right now I'm doing this manually (using WordPad) every few days, but maybe there is another way... Oh, and I can't install other programs on these computers (access is restricted), so I have to use batch/cmd files. What I've tried so far: redirecting the export to "con", but that is visual only, not captured in a variable; and using "for /F ...", but that works only with ANSI and removes blank lines. Can somebody please help me? Thank you.
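    One possible workaround (a sketch, not tested on every configuration): cmd's TYPE can read a UTF-16 file and, when piped, emits text in the console codepage, which FINDSTR /V can then filter. The marker string "SomeString" and file names below are placeholders; characters not representable in the ANSI/OEM codepage will be mangled, so this only helps if the values you keep are plain ASCII.

        @echo off
        rem Export, strip the lines containing the marker string, re-import.
        reg export HKCU\Software\ProgramName\Data data.reg /y
        type data.reg | findstr /v /c:"SomeString" > filtered.reg
        reg import filtered.reg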

    Read the article

  • How to ensure that no non-ASCII Unicode characters are entered?

    - by Jacques René Mesrine
    Given a java.lang.String instance, I want to verify that it doesn't contain any Unicode characters that are not ASCII alphanumerics; e.g. the string should be limited to [A-Za-z0-9.]. What I'm doing now is something very inefficient:

        import org.apache.commons.lang.CharUtils;

        String s = ...;
        char[] ch = s.toCharArray();
        for (int i = 0; i < ch.length; i++) {
            if (!CharUtils.isAsciiAlphanumeric(ch[i]))
                throw new InvalidInput(ch[i] + " is invalid");
        }

    Is there a better way to solve this?
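    One simpler approach is to let a single precompiled regex do the whole check. A sketch: IllegalArgumentException stands in for the asker's own InvalidInput type, and note that '.' must be listed explicitly, since isAsciiAlphanumeric would have rejected it.

        import java.util.regex.Pattern;

        class AsciiCheck {
            // '.' is literal inside a character class; matches() anchors the whole string
            private static final Pattern ALLOWED = Pattern.compile("[A-Za-z0-9.]*");

            static void validate(String s) {
                if (!ALLOWED.matcher(s).matches()) {
                    throw new IllegalArgumentException(s + " contains invalid characters");
                }
            }
        }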

    Read the article

  • What's the fastest way to strip and replace high Unicode characters in a document using Python?

    - by Rhubarb
    I am looking to replace all high Unicode characters in a large document, such as accented Es and left and right quotes, with "normal" counterparts in the low range, such as a regular E and straight quotes. I need to perform this on a very large document rather often. I see an example of this in what I think might be Perl here: http://www.designmeme.com/mtplugins/lowdown.txt Is there a fast way of doing this in Python without using s.replace(...).replace(...).replace(...)...? I've tried this with just a few characters to replace, and the document stripping became really slow.
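    A sketch of the usual single-pass alternative: build one mapping table and hand it to unicode.translate (Python 2 here, matching the question; the table entries are a tiny illustrative subset, not the full list from the linked script):

        # One pass over the text; the dict maps code points to replacement strings.
        TABLE = {
            0x2018: u"'", 0x2019: u"'",   # curly single quotes -> straight
            0x201C: u'"', 0x201D: u'"',   # curly double quotes -> straight
            0x00C9: u"E", 0x00E9: u"e",   # E/e with acute accent -> plain E/e
        }

        def lowdown(text):
            return text.translate(TABLE)

        print lowdown(u"\u2018Caf\u00e9\u2019")  # 'Cafe'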

    Read the article

  • Java: How to get Unicode name of a character (or its type category)?

    - by java.is.for.desktop
    Hello, everyone! The Character class in Java defines methods that check a given char argument for equality with certain Unicode characters, or for membership in some type category. These characters and type categories are named. As stated in the javadoc, examples of named characters are HORIZONTAL TABULATION, FORM FEED, ...; examples of named type categories are SPACE_SEPARATOR, PARAGRAPH_SEPARATOR, ... However, being byte or int values instead of enums, the names of these types are "hidden" at runtime. So, is there a possibility to get a character's and/or type category's name at runtime?
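    For what it's worth, Java 7 later added exactly this: Character.getName(int) returns the official Unicode name of a code point at runtime. A minimal sketch:

        public class UnicodeNames {
            public static void main(String[] args) {
                int cp = 0x00E9; // é
                System.out.println(Character.getName(cp)); // LATIN SMALL LETTER E WITH ACUTE
                // getType still returns an int; compare it against the named constants
                System.out.println(Character.getType(cp) == Character.LOWERCASE_LETTER); // true
            }
        }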

    Read the article

  • Where can I find an array of the unassigned Unicode code points for a particular block?

    - by gitparade
    At the moment, I'm writing these arrays by hand. For example, the Miscellaneous Mathematical Symbols-A block has an entry in a hash like this:

        my %symbols = (
            ...
            miscellaneous_mathematical_symbols_a =>
                [(0x27C0..0x27CA), 0x27CC, (0x27D0..0x27EF)],
            ...
        );

    The simpler, 'continuous' array

        miscellaneous_mathematical_symbols_a => [0x27C0..0x27EF]

    doesn't work because Unicode blocks have holes in them. For example, there's nothing at 0x27CB. Take a look at the code chart [PDF]. Writing these arrays by hand is tedious, error-prone and not much fun. And I get the feeling that someone has already tackled this in Perl!
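    One way to avoid writing the holes by hand (a sketch; the exact result depends on the Unicode version your Perl ships with) is to test each candidate code point against the Assigned property:

        use strict;
        use warnings;

        # Keep only the code points the current Unicode tables mark as assigned.
        my @assigned = grep { chr($_) =~ /\A\p{Assigned}\z/ } 0x27C0 .. 0x27EF;
        printf "U+%04X\n", $_ for @assigned;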

    Read the article

  • What is the universal way to use a file I/O API with Unicode filenames?

    - by dma_k
    In Windows there is a common problem: filenames should be converted to the local codepage before they are passed to open(). Of course, there is the possibility of using Win32::API for that, but I don't want my script to be platform-dependent. At the moment I have to write something like:

        open IN, "<", encode("cp1251", $filename) or die $!;

    but is there any library that hides these details? I think the local codepage can be detected automatically, so I just want to pass a Unicode filename and forget about the details. Why is this still not in the box?
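    One candidate for the "detect it for me" part is the CPAN module Encode::Locale, which probes the environment and registers a locale_fs encoding alias for the filesystem codepage. A sketch, assuming the module is installed (the Cyrillic filename is just an example):

        use strict;
        use warnings;
        use Encode qw(encode);
        use Encode::Locale;    # registers the "locale" / "locale_fs" aliases

        my $filename = "\x{0444}\x{0430}\x{0439}\x{043B}.txt";    # example name
        open my $in, '<', encode('locale_fs', $filename) or die $!;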

    Read the article

  • What new Unicode functions are there in C++0x?

    - by luiscubal
    It has been mentioned in several sources that C++0x will include better language-level support for Unicode (including types and literals). If the language is going to add these new features, it's only natural to assume that the standard library will as well. However, I am currently unable to find any references to the new standard library. I expected to find answers to these questions: Does the new library provide standard methods to convert UTF-8 to UTF-16, etc.? Does the new library allow writing UTF-8 to files and to the console (or reading it from files and the console)? If so, can we use cout, or will we need something else? Does the new library include "basic" functionality such as discovering the byte count and length of a UTF-8 string, or converting to upper/lower case (and does this take locales into account)? Finally, are any of these functions available in any popular compilers such as GCC or Visual Studio? I have tried to look for information, but I can't seem to find anything. I am actually starting to think that maybe these things aren't even decided yet (I am aware that C++0x is a work in progress).
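    For the record, a sketch of what eventually shipped in C++11: new character types and string literals (u8, u, U prefixes), plus <codecvt> conversions via std::wstring_convert (which was later deprecated in C++17 without a direct replacement):

        #include <codecvt>
        #include <iostream>
        #include <locale>
        #include <string>

        int main() {
            std::u16string utf16 = u"caf\u00E9";    // char16_t literal (C++11)
            std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> conv;
            std::string utf8 = conv.to_bytes(utf16);    // UTF-16 -> UTF-8
            std::cout << utf8 << "\n";    // prints "café" on a UTF-8 terminal
        }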

    Read the article

  • How can I use io.StringIO() with the csv module?

    - by Tim Pietzcker
    I tried to backport a Python 3 program to 2.7, and I'm stuck with a strange problem:

        >>> import io
        >>> import csv
        >>> output = io.StringIO()
        >>> output.write("Hello!")    # Fails: io.StringIO expects Unicode
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'
        >>> output.write(u"Hello!")   # This works as expected.
        6L
        >>> writer = csv.writer(output)        # Now let's try this with the csv module:
        >>> csvdata = [u"Hello", u"Goodbye"]   # Look ma, all Unicode! (?)
        >>> writer.writerow(csvdata)           # Sadly, no.
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        TypeError: unicode argument expected, got 'str'

    According to the docs, io.StringIO() returns an in-memory stream for Unicode text. It works correctly when I feed it a Unicode string manually. Why does it fail in conjunction with the csv module, even if all the strings being written are Unicode strings? Where does the str that causes the exception come from? (I do know that I can use StringIO.StringIO() instead, but I'm wondering what's wrong with io.StringIO() in this scenario.)
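    The str in the traceback comes from the csv module itself: in Python 2 it works on byte strings and writes str to the underlying stream, which io.StringIO rejects. A sketch of the usual workaround, handing csv a byte stream and decoding at the boundary:

        import csv
        import io

        output = io.BytesIO()    # byte stream: what the py2 csv module expects
        writer = csv.writer(output)
        writer.writerow([u"Hello".encode("utf-8"), u"Goodbye".encode("utf-8")])
        print output.getvalue().decode("utf-8")    # back to unicode afterwards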

    Read the article

  • Unicode paragraph-break/line-break and breaking/non-breaking space aware text editor

    - by martinr
    I want an editor like that to write my blog articles with. I'm tired of manually converting breaks in rough notes to either paragraphs or line breaks for release as HTML, and tired of converting spaces to breaking or non-breaking ones. There are standard Unicode code points for these distinctions. What editor lets me write almost plain ASCII text, but with built-in support and understanding for Unicode paragraph and non-breaking space characters? And ideally it will let me save straight to either plain UTF-8 text or to a file of plain HTML paragraphs.

    Read the article

  • Dreamweaver: disable "language for non-Unicode programs" detection

    - by YuriKolovsky
    Dreamweaver CS4 auto-detects the Windows "language for non-Unicode programs" setting (in my case, Russian) and conveniently sets the default encoding to Western European instead of the much-preferred UTF-8; it also changes several bits of text in DW into Russian. How do I disable this detection and keep Dreamweaver fully in English (without having to change the language for non-Unicode programs in Windows)?

    Read the article

  • How to show a Telugu font correctly in the Android emulator

    - by raman
    I am developing an application in which I fetch Unicode text using JSON, and the Unicode shows correctly when I inspect it in debug mode. The problem is how to display it in the emulator: I am using UTF-8 for rendering, but it doesn't show. When I set the font with setTypeface, the Telugu text renders even in a simple program, but not correctly. I am using Pothana2000.ttf to render the Telugu Unicode as Telugu script. Suggestions welcome; I need a reply urgently.
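    For the setTypeface route, a sketch, assuming the font is bundled under assets/fonts/ and that the layout and view id below (main, text) are placeholders. Note that correct Telugu output also depends on the platform's complex-script shaping, so a custom glyph font may still draw conjuncts wrong on older emulator images:

        import android.app.Activity;
        import android.graphics.Typeface;
        import android.os.Bundle;
        import android.widget.TextView;

        public class TeluguActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.main);    // placeholder layout
                Typeface tf = Typeface.createFromAsset(getAssets(), "fonts/Pothana2000.ttf");
                TextView tv = (TextView) findViewById(R.id.text);    // placeholder id
                tv.setTypeface(tf);
                tv.setText("\u0C24\u0C46\u0C32\u0C41\u0C17\u0C41");  // "Telugu" in Telugu script
            }
        }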

    Read the article

  • Finding Those Pesky Unicode Characters in Visual Studio

    - by fallen888
    Sometimes I'm handed HTML that I need to wire up, and I find these characters. Usually there are only a couple on the page and, while annoying to find, it's not a big deal. Recently I found dozens and dozens of these guys on a page and wasn't very happy at the prospect of having to manually search them all out and remove or replace them. That is, until I did some research and found this very helpful article by Aaron Jensen - Finding Non-ASCII Characters with Visual Studio. Aaron's wonderful solution: search your code with the regular expression [^\x00-\x7f]. Open any of Visual Studio's find windows and enter the expression above into the "Find what:" text box. Click the "Find Options" plus sign to expand the list of options. Check the last box, "Use:", and choose "Regular expressions" from the drop-down menu. Easy and efficient. Thanks, Aaron!

    Read the article

  • Shakespeare and storing Unicode characters

    - by John Paul Cook
    This post is about the political issues involved with using multiple languages in a global organization and how to troubleshoot the technical details. The CHAR and VARCHAR data types are NOT suitable for global data. Some people still cling to CHAR and VARCHAR justifying their use by truthfully saying that they only take up half the space of NCHAR and NVARCHAR data types. But you’ll never be able to store Chinese, Korean, Greek, Japanese, Arabic, or many other languages unless you use NCHAR and NVARCHAR...(read more)

    Read the article

  • What is adding frog characters to my URLs?

    - by Jacob Hume
    While browsing the "Crawl Errors" section of Google Webmaster Tools, I discovered a set of very strange 500 errors in reference to my site. I was able to track down what these characters are, and apparently they are the first two characters in the Unicode Private Use Area. My font just happened to map them to a frog wearing a tiny crown and a symbol that resembles the numeral 7. These symbols only appear in the addresses of non-HTML files (office documents, PDFs, etc.), but they do not just appear in the file name. Where are these symbols coming from, and is there any way I can get rid of them so Google can properly crawl my site? Some background information: the web server runs Windows Server 2003 with IIS 6 and PHP 5.3.8; the site encoding is UTF-8; and these symbols don't appear on the page or in the source.

    Read the article

  • Django model: Reference foreign key table in __unicode__ function for admin

    - by pa
    Example models:

        from django.db import models

        class Parent(models.Model):
            name = models.CharField()

            def __unicode__(self):
                return self.name

        class Child(models.Model):
            parent = models.ForeignKey(Parent)

            def __unicode__(self):
                return self.parent.name  # Would reference name above

    I want Child.__unicode__ to refer to Parent.name, mostly for the admin section, so I don't end up with "Child object" or similar; I'd prefer to display it more like "Child of <parent's name>". Is this possible? Most of what I've tried hasn't worked, unfortunately.
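    Since __unicode__ may return any unicode string, it can freely dereference the foreign key. A sketch of the "Child of ..." formatting, reusing the Parent model above:

        class Child(models.Model):
            parent = models.ForeignKey(Parent)

            def __unicode__(self):
                # Shows up as e.g. "Child of Alice" in the admin change list
                return u"Child of %s" % self.parent.name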

    Read the article

  • Parse an XML file that contains Unicode characters on iPhone

    - by Jim
    Hi, I am trying to parse an XML file that contains some Unicode characters. I tried to parse the file using NSXMLParser, but I am unable to parse the XML: the parser stops when it encounters any Unicode characters. Is there any other good solution for parsing an XML file with Unicode letters? Please suggest. Thanks, Jim.

    Read the article

  • Separating a null-byte-separated Unicode C string

    - by Ramblingwood
    First off, this is NOT a duplicate of http://stackoverflow.com/questions/1911053/turn-a-c-string-with-null-bytes-into-a-char-array, because the given answer doesn't work when the char *'s are Unicode. I think the problem is that because I am trying to use Unicode, and thus wchar_t instead of char, the length of each character is different, so this doesn't work (it does with non-Unicode strings):

        char *Buffer;   // your null-separated strings
        char *Current;  // pointer to the current string
        // [...]
        for (Current = Buffer; *Current; Current += strlen(Current) + 1)
            printf("GetOpenFileName returned: %s\n", Current);

    Does anyone have a similar solution that works on Unicode strings? I have been banging my head on this for over 4 hours now. C doesn't agree with me.
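    A sketch of the wide-character version: the element type must be wchar_t throughout, with wcslen advancing by characters rather than bytes:

        #include <stdio.h>
        #include <wchar.h>

        /* buffer holds NUL-separated wide strings; an empty string marks the end */
        void print_strings(const wchar_t *buffer) {
            for (const wchar_t *cur = buffer; *cur; cur += wcslen(cur) + 1)
                wprintf(L"GetOpenFileName returned: %ls\n", cur);
        }

        int main(void) {
            print_strings(L"first.txt\0second.txt\0"); /* literal's own NUL ends the list */
            return 0;
        }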

    Read the article
