Search Results

Search found 5581 results on 224 pages for 'alignment character'.


  • How can I build a generic dataset-handling Perl library?

    - by Pep.
    Hello, I want to build a generic Perl module for handling and analysing biomedical character-separated datasets, which can, most certainly, be used on any kind of dataset that contains a mixture of categorical (A,B,C,..), continuous (1.2,3,881..) and identifier (XXX1,XXX2...) variables. The plan is to have people initialize the module and then use some arguments to point to the data file(s), the place where the analysis reports should be placed, and the structure of the data. By structure of the data I mean which variable is in which place and its name/type. And this is where I need some enlightenment. I am baffled as to how to do this in a clean way. Obviously, having people create a simple schema file, be it XML or some other format, would be the cleanest, but maybe not all people enjoy doing something like this. The solutions I can think of are: create a configuration file in XML or a similar prespecified format; pass the information during initialization of the module; or use the first row of the data as headers and try to guess types (ouch). Surely there must be a "canonical" way of doing this that is also usable and efficient. Thanks p.
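
    The "pass the structure during initialization" option is easy to sketch. The following Python outline (not the asker's Perl; all names here are hypothetical) shows a column schema handed to the constructor as a list of (name, type) pairs, which avoids both a separate schema file and header guessing:

        import csv

        # Hypothetical sketch: column types are declared at construction time.
        # Assumes the file has no header row.
        TYPE_PARSERS = {
            "identifier": str,
            "categorical": str,
            "continuous": float,
        }

        class Dataset:
            def __init__(self, path, schema, delimiter=","):
                # schema is a list of (column_name, column_kind) pairs, e.g.
                # [("id", "identifier"), ("group", "categorical"), ("dose", "continuous")]
                self.schema = schema
                self.rows = []
                with open(path, newline="") as handle:
                    for raw in csv.reader(handle, delimiter=delimiter):
                        row = {name: TYPE_PARSERS[kind](value)
                               for (name, kind), value in zip(schema, raw)}
                        self.rows.append(row)

            def column(self, name):
                return [row[name] for row in self.rows]

    An equivalent Perl module could accept the same kind of schema as a hash or array reference passed to new(), with an optional XML/config file as a second way of supplying it for people who prefer that.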

  • How can I get an NPC to move randomly in XNA?

    - by Fishwaffles
    I basically want a character to walk in one direction for a while, stop, then go in another random direction. Right now my sprites look but don't move, randomly very quickly in all directions then wait and have another seizure. I will post the code I have so far in case that is useful. class NPC: Mover { int movementTimer = 0; public override Vector2 direction { get { Random rand = new Random(); int randDirection = rand.Next(8); Vector2 inputDirection = Vector2.Zero; if (movementTimer >= 50) { if (randDirection == 4) { inputDirection.X -= 1; movingLeft = true; } else movingLeft = false; if (randDirection == 1) { inputDirection.X += 1; movingRight = true; } else movingRight = false; if (randDirection == 2) { inputDirection.Y -= 1; movingUp = true; } else movingUp = false; if (randDirection == 3) { inputDirection.Y += 25; movingDown = true; } else movingDown = false; if (movementTimer >= 100) { movementTimer = 0; } } return inputDirection * speed; } } public NPC(Texture2D textureImage, Vector2 position, Point frameSize, int collisionOffset, Point currentFrame, Point sheetSize, Vector2 speed) : base(textureImage, position, frameSize, collisionOffset, currentFrame, sheetSize, speed) { } public NPC(Texture2D textureImage, Vector2 position, Point frameSize, int collisionOffset, Point currentFrame, Point sheetSize, Vector2 speed, int millisecondsPerframe) : base(textureImage, position, frameSize, collisionOffset, currentFrame, sheetSize, speed, millisecondsPerframe) { } public override void Update(GameTime gameTime, Rectangle clientBounds) { movementTimer++; position += direction; if (position.X < 0) position.X = 0; if (position.Y < 0) position.Y = 0; if (position.X > clientBounds.Width - frameSize.X) position.X = clientBounds.Width - frameSize.X; if (position.Y > clientBounds.Height - frameSize.Y) position.Y = clientBounds.Height - frameSize.Y; base.Update(gameTime, clientBounds); } }
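
    One thing that often produces exactly this jitter is recomputing a random direction every frame (and constructing a new Random inside the getter) instead of picking a direction once when a timer expires and holding it until the next pick. A minimal Python sketch of that pattern (hypothetical names, not XNA code):

        import random

        class Wanderer:
            """Picks a random direction, walks it for a while, then picks again."""
            DIRECTIONS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]  # idle, left, right, up, down

            def __init__(self, speed=2.0, frames_between_turns=50):
                self.rng = random.Random()        # created once, not every frame
                self.speed = speed
                self.frames_between_turns = frames_between_turns
                self.timer = 0
                self.direction = (0, 0)
                self.x, self.y = 0.0, 0.0

            def update(self):
                self.timer += 1
                if self.timer >= self.frames_between_turns:
                    self.timer = 0
                    self.direction = self.rng.choice(self.DIRECTIONS)
                dx, dy = self.direction
                self.x += dx * self.speed
                self.y += dy * self.speed

        npc = Wanderer()
        for _ in range(200):
            npc.update()
        print(npc.x, npc.y)

    The same shape in the C# class would mean one shared Random instance, a stored direction field, and a direction change only when the movement timer rolls over.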

  • How to implement a collection (list, map?) of complicated strings in Java?

    - by Alex Cheng
    Hi all. I'm new here. Problem -- I have something like the following entries, 1000 of them: args1=msg args2=flow args3=content args4=depth args6=within ==> args5=content args1=msg args2=flow args3=content args4=depth args6=within args7=distance ==> args5=content args1=msg args2=flow args3=content args6=within ==> args5=content args1=msg args2=flow args3=content args6=within args7=distance ==> args5=content args1=msg args2=flow args3=flow ==> args4=flowbits args1=msg args2=flow args3=flow args5=content ==> args4=flowbits args1=msg args2=flow args3=flow args6=depth ==> args4=flowbits args1=msg args2=flow args3=flow args6=depth ==> args5=content args1=msg args2=flow args4=depth ==> args3=content args1=msg args2=flow args4=depth args5=content ==> args3=content args1=msg args2=flow args4=depth args5=content args6=within ==> args3=content args1=msg args2=flow args4=depth args5=content args6=within args7=distance ==> args3=content I'm doing some sort of suggestion method. Say, args1=msg args2=flow args3=flow == args4=flowbits If the sentence contains msg, flow, and another flow, then I should return the suggestion of flowbits. How can I go around doing it? I know I should scan (whenever a character is pressed on the textarea) a list or array for a match and return the result, but, 1000 entries, how should I implement it? I'm thinking of HashMap, but can I do something like this? <"msg,flow,flow","flowbits" Also, in a sentence the arguments might not be in order, so assuming that it's flow,flow,msg then I can't match anything in the HashMap as the key is "msg,flow,flow". What should I do in this case? Please help. Thanks a million!
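
    One way to make the lookup independent of argument order (a sketch, not tested against the real rule set) is to key a map on a canonical form of the left-hand side, for example a sorted tuple, which keeps duplicates such as the second "flow" while ignoring order:

        rules = {}

        def canonical(args):
            # Sorted tuple keeps duplicates ("flow" twice) but ignores order.
            return tuple(sorted(args))

        def add_rule(lhs_values, suggestion):
            rules.setdefault(canonical(lhs_values), []).append(suggestion)

        def suggest(typed_values):
            return rules.get(canonical(typed_values), [])

        add_rule(["msg", "flow", "flow"], "flowbits")
        add_rule(["msg", "flow", "content", "depth", "within"], "content")

        print(suggest(["flow", "msg", "flow"]))   # ['flowbits'], order no longer matters

    The same idea in Java would be a HashMap<String, List<String>> keyed on the sorted, comma-joined argument values; 1000 entries is tiny for a hash lookup, so scanning on every keypress should be fine.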

  • How to transform a production into an LL(1) grammar for a list separated by semicolons?

    - by Subb
    Hi, I'm reading this introductory book on parsing (which is pretty good btw) and one of the exercises is to "build a parser for your favorite language." Since I don't want to die today, I thought I could do a parser for something relatively simple, i.e. a simplified CSS. Note: this book teaches you how to write an LL(1) parser using the recursive-descent algorithm. So, as a sub-exercise, I am building the grammar from what I know of CSS. But I'm stuck on a production that I can't transform into LL(1): //EBNF block = "{", declaration, {";", declaration}, [";"], "}" //BNF <block> =:: "{" <declaration> "}" <declaration> =:: <single-declaration> <opt-end> | <single-declaration> ";" <declaration> <opt-end> =:: "" | ";" This describes a CSS block. Valid blocks can have the form: { property : value } { property : value; } { property : value; property : value } { property : value; property : value; } ... The problem is with the optional ";" at the end, because it overlaps with the starting character of {";", declaration}, so when my parser meets a semicolon in this context, it doesn't know what to do. The book talks about this problem, but in its example the semicolon is obligatory, so the rule can be modified like this: block = "{", declaration, ";", {declaration, ";"}, "}" So, is it possible to achieve what I'm trying to do using an LL(1) parser?
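
    One standard rewrite left-factors the production so the decision is made after the ';': block = "{", declaration, tail, "}"; tail = ";", opt | empty; opt = declaration, tail | empty. With one token of lookahead, a ';' followed by something that starts a declaration means "another declaration", and a ';' followed by '}' means it was the optional trailing one. A recursive-descent sketch of that in Python (hypothetical token list instead of a real CSS lexer):

        def parse_block(tokens):
            """block -> '{' declaration (';' declaration)* ';'? '}'
            tokens is a flat list of strings, a trivial stand-in for a lexer."""
            pos = 0

            def peek():
                return tokens[pos] if pos < len(tokens) else None

            def expect(tok):
                nonlocal pos
                if peek() != tok:
                    raise SyntaxError(f"expected {tok!r}, got {peek()!r}")
                pos += 1

            def parse_declaration():
                nonlocal pos
                prop = tokens[pos]; pos += 1
                expect(":")
                value = tokens[pos]; pos += 1
                return (prop, value)

            declarations = []
            expect("{")
            declarations.append(parse_declaration())
            while peek() == ";":
                expect(";")
                if peek() == "}":          # the ';' was the optional trailing one
                    break
                declarations.append(parse_declaration())
            expect("}")
            return declarations

        print(parse_block(["{", "color", ":", "red", ";", "width", ":", "10px", ";", "}"]))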

  • Python + Expat: Error on &#0; entities

    - by clacke
    I have written a small function, which uses ElementTree and xpath to extract the text contents of certain elements in an xml file: #!/usr/bin/env python2.5 import doctest from xml.etree import ElementTree from StringIO import StringIO def parse_xml_etree(sin, xpath): """ Takes as input a stream containing XML and an XPath expression. Applies the XPath expression to the XML and returns a generator yielding the text contents of each element returned. >>> parse_xml_etree( ... StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'), ... '//elem1').next() 'one' >>> parse_xml_etree( ... StringIO('<test><elem1>one</elem1><elem2>two</elem2></test>'), ... '//elem2').next() 'two' >>> parse_xml_etree( ... StringIO('<test><null>&#0;</null><elem3>three</elem3></test>'), ... '//elem2').next() 'three' """ tree = ElementTree.parse(sin) for element in tree.findall(xpath): yield element.text if __name__ == '__main__': doctest.testmod(verbose=True) The third test fails with the following exception: ExpatError: reference to invalid character number: line 1, column 13 Is the &#0; entity illegal XML? Regardless of whether it is or not, the files I want to parse contain it, and I need some way to parse them. Any suggestions for another parser than Expat, or settings for Expat, that would allow me to do that?
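
    &#0; is indeed not legal in XML 1.0 (U+0000 is excluded from the Char production, even as a character reference), so Expat is within its rights to refuse it. A common workaround is to scrub such references out of the raw text before handing it to the parser; a hedged sketch below, where the regular expression is an assumption about which references you want to drop and the function name is made up, not part of ElementTree:

        import re
        from io import StringIO
        from xml.etree import ElementTree

        # Remove numeric character references to code points XML 1.0 forbids
        # (here just NUL; extend the pattern if the files contain other control chars).
        NULL_REF = re.compile(r"&#(?:0+|x0+);")

        def parse_scrubbed(stream, xpath):
            cleaned = NULL_REF.sub("", stream.read())
            tree = ElementTree.parse(StringIO(cleaned))
            for element in tree.findall(xpath):
                yield element.text

        gen = parse_scrubbed(StringIO("<test><null>&#0;</null><elem3>three</elem3></test>"), ".//elem3")
        print(next(gen))   # three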

  • ViewGroup{TextView,...}.getMeasuredHeight gives a wrong value that is smaller than the real height.

    - by Dewr
    ViewGroup(the ViewGroup is containing TextViews having long text except line-feed-character).getMeasuredHeight returns wrong value... that is smaller than real height. how to get rid of this problem? here is the java code: ;public static void setListViewHeightBasedOnChildren(ListView listView) { ListAdapter listAdapter = listView.getAdapter(); if (listAdapter == null) { // pre-condition return; } int totalHeight = 0; int count = listAdapter.getCount(); for (int i = 0; i < count; i++) { View listItem = listAdapter.getView(i, null, listView); listItem.measure(View.MeasureSpec.AT_MOST, View.MeasureSpec.UNSPECIFIED); totalHeight += listItem.getMeasuredHeight(); } ViewGroup.LayoutParams params = listView.getLayoutParams(); params.height = totalHeight + (listView.getDividerHeight() * (listAdapter.getCount() - 1)); listView.setLayoutParams(params); } ; and here is the list_item_comments.xml:; {RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" android:layout_width="fill_parent" android:layout_height="fill_parent" } {RelativeLayout android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_centerVertical="true" android:layout_marginLeft="8dip" android:layout_marginRight="8dip" } {TextView android:id="@+id/list_item_comments_nick" android:layout_width="wrap_content" android:layout_height="wrap_content" android:textSize="16sp" style="@style/WhiteText" /} {TextView android:id="@+id/list_item_comments_time" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentRight="true" android:textSize="16sp" style="@style/WhiteText" /} {TextView android:id="@+id/list_item_comments_content" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_below="@id/list_item_comments_nick" android:autoLink="all" android:textSize="24sp" style="@style/WhiteText" /} {/RelativeLayout} {/RelativeLayout} ;

  • Unexpected results when looking at ASCII codes in C++.

    - by Columbo
    Hello, The bit of code below is extracting ASCII codes from characters. When I convert characters in the normal ASCII region I get the value I expect. When I convert £ and € from the extended region I get a load of 1's padding the INT that I'm storing the character in. e.g. the output of the below is: 45 (ascii E as expected) FFFFFF80 (extended ascii € as expected but padded with ones) It's not causing me an issue but I'm just wondering why this happens. Here's the code... unsigned int asciichar[3]; string cTextToEncode = "E€"; for (unsigned int i = 0; i < cTextToEncode.length(); i++) { asciichar[i] = (unsigned int)cTextToEncode[i]; cout << hex << asciichar[i] << "\n"; } Can anyone explain why this is? Thanks
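
    The FFFFFF80 looks like sign extension: if plain char is signed (as it usually is on x86) and the euro sign is stored as a byte >= 0x80 in the source encoding (it is 0x80 in Windows-1252, for example), the char holds a negative value, and converting a negative char to unsigned int wraps it to 0xFFFFFF80. Going through unsigned char first, e.g. (unsigned int)(unsigned char)cTextToEncode[i], gives the plain byte value. A small Python illustration of the arithmetic only (Python has no char type, so the byte values are spelled out):

        def as_signed_char(byte):
            """Interpret an 8-bit value the way a signed char stores it."""
            return byte - 256 if byte >= 0x80 else byte

        def to_unsigned_int32(value):
            """Convert to a 32-bit unsigned int, as (unsigned int)c does in C++."""
            return value & 0xFFFFFFFF

        for byte in (0x45, 0x80):              # 'E', then the 0x80 byte in question
            c = as_signed_char(byte)
            print(hex(byte), "->", hex(to_unsigned_int32(c)))
        # 0x45 -> 0x45
        # 0x80 -> 0xffffff80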

  • Android game logic problem

    - by semajhan
    I'm currently creating a game and have a problem which I think I know why it is occurring but not entirely sure and even if I knew, don't know how to solve. I have a 2D array 10 x 10 and have a "player" class that takes up a tile. Now, I have created 2 instances of the player and move them around via swiping. Around the edges I have put "walls" that the player cannot walk through and everything works fine, until I remove a wall. Once I remove a wall and move the character/player to the edge of the screen, the player cannot go any further. The problem occurs here, where the second instance of the player is not at the edge of the screen but say 2 tiles from the first instance of "player" who is at the edge. If I try moving them further into the direction of the edge, I understand that the first instance of player wouldn't move or do anything but the second instance of player should still move, but it won't. This is the code that executed when the user swipes: if (player.getArrayX() - 1 != player2.getArrayX()) { player.moveLeft(); } else if (player.getArrayX() - 1 == player2.getArrayX() && player.getArrayY() != player2.getArrayY()) { player.moveLeft(); } if (player2.getArrayX() - 1 != player.getArrayX()) { player2.moveLeft(); } else if (player2.getArrayX() - 1 == player.getArrayX() && player2.getArrayY() != player.getArrayY()) { player2.moveLeft(); } In the player class I have: public void moveLeft() { if (alive) { switch (levelMaster.getLevel1(getArrayX() - 1, getArrayY())) { case 0: break; case 1: subX(); // basically moves player left setArrayX(getArrayX() - 1); // shifts x coord of player 1 within tilemap Log.d("semajhan", "x: " + getArrayX()); break; case 9: subX(); setArrayX(getArrayX() - 1); setAlive(false); break; } } } Any help on the matter or further insight would be greatly appreciated, thanks.
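
    One way to avoid cross-checking the two player objects against each other in the swipe handler is to resolve movement per player: compute the target tile, then move only if that tile is inside the map, not a wall, and not currently occupied. A language-neutral sketch in Python (the level layout and names here are made up, not the asker's classes):

        WALL, FLOOR = 1, 0

        level = [
            [1, 1, 1, 1, 1],
            [1, 0, 0, 0, 1],
            [1, 0, 0, 0, 0],   # opening in the right-hand wall, like the removed wall
            [1, 1, 1, 1, 1],
        ]

        players = [{"x": 1, "y": 1}, {"x": 3, "y": 1}]

        def can_enter(x, y, mover_index):
            inside = 0 <= y < len(level) and 0 <= x < len(level[0])
            if not inside or level[y][x] == WALL:
                return False
            # Blocked if any *other* player currently stands on the target tile.
            return all(i == mover_index or (p["x"], p["y"]) != (x, y)
                       for i, p in enumerate(players))

        def move_all(dx, dy):
            # Each player is tested independently; one being stuck at a wall
            # does not stop the other from moving.
            for i, p in enumerate(players):
                if can_enter(p["x"] + dx, p["y"] + dy, i):
                    p["x"] += dx
                    p["y"] += dy

        move_all(-1, 0)
        print(players)   # player 0 hits the wall and stays; player 1 still moves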

  • Do I need to Salt and Hash a randomly generated token?

    - by wag2639
    I'm using Adam Griffiths's Authentication Library for CodeIgniter and I'm tweaking the usermodel. I came across a generate function that he uses to generate tokens. His preferred approach is to reference a value from random.org but I considered that superfluous. I'm using his fallback approach of randomly generating a 20 character long string: $length = 20; $characters = '0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'; $token = ''; for ($i = 0; $i < $length; $i++) { $token .= $characters[mt_rand(0, strlen($characters)-1)]; } He then hashes this token using a salt (I'm combining code from different functions) sha1($this->CI->config->item('encryption_key').$str); I was wondering if there's any reason to run the token through the salted hash? I've read that simply randomly generating strings was a naive way of making random passwords but is the sha1 hash and salt necessary? Note: I got my encryption_key from https://www.grc.com/passwords.htm (63 random alpha-numeric)
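
    If the goal is simply an unguessable token, the quality of the random source matters more than the extra hash; mt_rand() is a Mersenne Twister and not a cryptographically secure generator. A hedged Python sketch of the two separable pieces, a token from the OS CSPRNG plus an optional keyed hash if you also want the stored value tied to a server secret (the key name below is an assumption):

        import hmac
        import hashlib
        import secrets
        import string

        ALPHABET = string.ascii_letters + string.digits

        def random_token(length=20):
            # secrets uses the OS CSPRNG, unlike a plain Mersenne Twister.
            return "".join(secrets.choice(ALPHABET) for _ in range(length))

        def keyed_digest(token, server_key):
            # Optional second step: store a keyed hash instead of the raw token,
            # so a leaked database copy is useless without the server key.
            return hmac.new(server_key.encode(), token.encode(), hashlib.sha256).hexdigest()

        token = random_token()
        print(token)
        print(keyed_digest(token, "encryption_key_from_config"))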

  • Modify columns in a data frame in R more cleanly - maybe using with() or apply()?

    - by Mittenchops
    I understand the answer in R to repetitive things is usually "apply()" rather than loop. Is there a better R-design pattern for a nasty bit of code I create frequently? So, pulling tabular data from HTML, I usually need to change the data type, and end up running something like this, to convert the first column to date format (from decimal), and columns 2-4 from character strings with comma thousand separators like "2,400,000" to numeric "2400000." X[,1] <- decYY2YY(as.numeric(X[,1])) X[,2] <- as.numeric(gsub(",", "", X[,2])) X[,3] <- as.numeric(gsub(",", "", X[,3])) X[,4] <- as.numeric(gsub(",", "", X[,4])) I don't like that I have X[,number] repeated on both the left and ride sides here, or that I have basically the same statement repeated for 2-4. Is there a very R-style way of making X[,2] less repetitive but still loop-free? Something that sort of says "apply this to columns 2,3,4---a function that reassigns the current column to a modified version in place?" I don't want to create a whole, repeatable cleaning function, really, just a quick anonymous function that does this with less repetition.
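
    Not an answer in R itself, but the shape of the pattern being asked for, shown as a pandas analogy (hedged as an analogy only, with made-up column names): select the group of columns once, map one cleaning function over them, and assign back in a single statement. In R the analogous move on a data.frame is something like X[cols] <- lapply(X[cols], clean_fun).

        import pandas as pd

        df = pd.DataFrame({
            "year": [10.5, 11.25],                  # decimal years, stand-in data
            "a": ["2,400,000", "1,000"],
            "b": ["3,100", "250"],
            "c": ["7", "8,900"],
        })

        def comma_number(series):
            # Strip thousands separators, then convert to numeric.
            return pd.to_numeric(series.str.replace(",", "", regex=False))

        money_cols = ["a", "b", "c"]
        df[money_cols] = df[money_cols].apply(comma_number)   # one assignment, no repetition
        print(df.dtypes)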

  • What are salesforce.com and Apex like as an application development platform?

    - by mhollers
    I have recently discovered that salesforce.com is much more than an online CRM after coming across a Morrison's Case Study in which they develop a works management application. I've been trying it out with a view to recreating our own Works Management system on the platform. My background is in Microsoft and .Net, and the obvious 1st choice would be asp.net. However, there's only really myself with .net experience and my manager with a more legacy Synergy programming background, and I am self taught and am looking at evaluating other RAD options (eg Ironspeed). the nature of the business is in the main 2-5 concurrent construction type contracts that run for 3-5 yrs each, each requiring 15-50 system users. Traditionally we have used our character based Works Mangement system for everything and tweaked it for each contract. The Salesforce licensing model on the face of it suits this sort of flexibilty, but I'm worried about the development flexibilty/learning curve and all the issues that surround lock-in. There doesn't seem to be much neutral sober analysis of the platform on the web that isn't salesforce's own material/blogs Has anyone any experience of developing an application on salesforce as compared to the more 'traditional' .Net route?

  • User will input some filter criteria -- how can I turn it into a regular expression for String.match

    - by envinyater
    I have a program where the user will enter a string such as PropertyA = "abc_*" and I need to have the asterisk match any character. In my code, I'm grabbing the property value and replacing PropertyA with the actual value. For instance, it could be abc_123. I also pull out the equality symbol into a variable. It should be able to cover this type of criteria PropertyB = 'cba' PropertyC != '*-this' valueFromHeader is the lefthand side and value is the righthand side. if (equality.equals("=")) { result = valueFromHeader.matches(value); } else if (equality.equals("!=")) { result = !valueFromHeader.matches(value); } EDIT: The existing code had this type of replacement for regular expressions final String ESC = "\\$1"; final String NON_ALPHA = "([^A-Za-z0-9@])"; final String WILD = "*"; final String WILD_RE_TEMP = "@"; final String WILD_RE = ".*"; value = value.replace(WILD, WILD_RE_TEMP); value = value.replaceAll(NON_ALPHA,ESC); value = value.replace(WILD_RE_TEMP, WILD_RE); It doesn't like the underscore here... abcSite_123 != abcSite_123 (evaluates to true) abcSite_123$1.matches("abcSite$1123") It doesn't like the underscore...
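
    The usual approach is to regex-escape the user's raw text first and only then turn the escaped wildcard back into ".*"; the "\\$1" replacement template appears to be inserting a literal $1 (that is what the abcSite$1123 output shows), which is what breaks the match. A sketch of the order of operations in Python, where re.escape plays the role Pattern.quote or a proper backslash-escaping replaceAll would play in the Java code:

        import re

        def wildcard_to_regex(pattern):
            # Escape everything the user typed, then re-enable '*' as "any run of characters".
            return "^" + re.escape(pattern).replace(r"\*", ".*") + "$"

        def matches(value, user_pattern):
            return re.match(wildcard_to_regex(user_pattern), value) is not None

        print(matches("abc_123", "abc_*"))            # True
        print(matches("abcSite_123", "abcSite_123"))  # True, underscore left untouched
        print(matches("xyz-this", "*-this"))          # True
        print(matches("abc-this", "ab?"))             # False, '?' is treated literally here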

  • What is wrong with this Fortran '77 snippet?

    - by notJim
    I've been tasked with maintaining some legacy Fortran code, and I'm having trouble getting it to compile with gfortran. I've written a fair amount of Fortran 95, but this is my first experience with Fortran 77. This snippet of code is the problematic one: CHARACTER*22 IFILE, OFILE IFILE='TEST.IN' OFILE='TEST.OUT' OPEN(5,FILE=IFILE,STATUS='NEW') OPEN(6,FILE=OFILE,STATUS='NEW') common/pabcde/nfghi When I compile with gfortran file.FOR, all lines starting with the common statement are errors (e.g. Error: Unexpected COMMON statement at (1) for each following line until it hits the 25 error limit). I compiled with -Wall -pedantic, but fixing the warnings did not fix this problem. The crazy thing is that if I comment out all 4 lines starting with IFILE='TEST.IN', the program compiles and works as expected, but I must comment out all of them. Leaving any of them uncommented gives me the same errors starting with the common statement. If I comment out the common statement, I get the same errors, just starting on the following line. I am on OS X Leopard (not Snow Leopard) using gfortran. I've used this very system with gfortran extensively to write Fortran 95 programs, so in theory the compiler itself is sane. What the hell is going on with this code?

  • RewriteRule to store thousands of files in subdirectories

    - by Brandon
    I have a website that will have millions of pages in a directory. I'd like to store those files on-disk in a bunch of subdirectories based on the first characters of the page name. For example http://mysite.com/hugedir/somefile.html would be stored in /var/www/html/hugedir/s/o/m/e/f/ile.html That is fairly trivial to do with a RewriteRule like so: RewriteRule ^hugedir/(.)(.)(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}/{$5}/$6.html RewriteRule ^hugedir/(.)(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}/{$5}.html RewriteRule ^hugedir/(.)(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}/{$4}.html RewriteRule ^hugedir/(.)(.)(.*).html /hugedir/{$1}/{$2}/{$3}.html RewriteRule ^hugedir/(.)(.*).html /hugedir/{$1}/{$2}.html RewriteRule ^hugedir/(.*).html /hugedir/{$1}.html However, the file name may contain hyphens or other non-standard characters and I'd really like to avoid having a directory named with a strange character. Ideally, I'd like to have a list of 'approved' characters and either eliminate or transform the unapproved characters to an underscore. Can anybody think of a way to do that? Or something else equivalent? Part of the requirement is that these be physical files on disk and it not be parsed with a scripting language.
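
    If the sanitising gets more elaborate than what RewriteRule character classes can comfortably express, Apache's external RewriteMap program is one option: mod_rewrite pipes each requested name to a long-running script and substitutes whatever path the script prints back, roughly via RewriteMap hugedir "prg:/path/to/map-script" and a rule that references ${hugedir:$1}. A hedged Python sketch of such a map program; the whitelist, depth, and "index" fallback are assumptions, not requirements:

        #!/usr/bin/env python3
        """Hypothetical RewriteMap prg: helper: reads one page name per line on
        stdin and prints the nested on-disk path for it, one line per request."""
        import sys

        APPROVED = set("abcdefghijklmnopqrstuvwxyz0123456789")
        DEPTH = 5   # how many single-character directory levels to create

        def to_path(name):
            cleaned = "".join(c if c in APPROVED else "_" for c in name.lower())
            head, tail = cleaned[:DEPTH], cleaned[DEPTH:] or "index"
            return "/".join(head) + "/" + tail

        for line in sys.stdin:
            sys.stdout.write(to_path(line.strip()) + "\n")
            sys.stdout.flush()          # prg: maps must stay unbuffered

    For "somefile" this prints s/o/m/e/f/ile, matching the layout in the question, with every unapproved character turned into an underscore before it can become a directory name.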

  • PHP using a passed value to open a dir, not working

    - by HadoukenGr
    This is my code <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Gallery</title> <script src="js/jquery-1.7.2.min.js"></script> <script src="js/lightbox.js"></script> <link href="css/lightbox.css" rel="stylesheet" /> </head> <body> <?php $value=$_GET["value"]; $handle = opendir("content/$value/gallery/"); while($file = readdir($handle)) { if($file !== '.' && $file !== '..') { do_something; } } ?> </body> </html> Passing value "?a??????", gives Warning: opendir(content/?a??????/gallery/): failed to open dir: No such file or directory. But if i do this : $handle = opendir("content/?a??????/gallery/"); it works fine. Something to do with character encoding? How could i solve this? Thank you.

  • What is WordPress doing for content encoding in its MySQL database?

    - by qbxk
    For some convoluted reasons best left behind us, I require direct access to the contents of a wordpress database. I'm using mysql 5.0.70-r1 on gentoo with wordpress 2.6, and perl 5.8.8 ftr. So, sometimes we get high-order characters in the blog, we have quite a few authors contributing too, for the most part these characters end up in wp's database in wp_posts.post_content or wp_postmeta.meta_value, Wordpress is displaying these correctly on its site, but the database stores it using single byte encoding that I can't figure out how to convert to the correct string. Today's example: the blog shows this, and doesn't even seem to escape any chars in the html, Hãhãhães but the database, when viewed via the mysql prompt, has, HÃ£hÃ£hÃ£es So clearly this is some kind of double-byte encoding issue, but I don't know how I can correct it. I need to be able to pull that second string from the database (b/c that's what it gives me) and convert it to the first one, and I need to do so using perl. Also, just to help unmuddy any waters, I took these strings and printed out the ascii codes for each character using perl's ord() function. Here is the output of the "wrong" string: H = 72 Ã = 195 £ = 163 h = 104 Ã = 195 £ = 163 h = 104 Ã = 195 £ = 163 e = 101 s = 115 This is the correct string, that I need to produce in my script: H = 72 ã = 227 h = 104 ã = 227 h = 104 ã = 227 e = 101 s = 115
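
    The pattern in the ord() output is classic double encoding: the UTF-8 bytes of ã (0xC3 0xA3) are being stored and read back as two Latin-1 characters, Ã and £. The reversal is to encode the string back to Latin-1 bytes and reinterpret those bytes as UTF-8; in Perl that is roughly decode('utf8', encode('latin1', $s)) with the Encode module. A Python sketch of the same round trip, not WordPress-specific:

        # Undo one layer of double encoding.
        stored = "H\u00c3\u00a3h\u00c3\u00a3h\u00c3\u00a3es"   # what the mysql prompt shows: HÃ£hÃ£hÃ£es

        repaired = stored.encode("latin-1").decode("utf-8")

        print(repaired)                     # Hãhãhães
        print([ord(c) for c in stored])     # 72, 195, 163, ...
        print([ord(c) for c in repaired])   # 72, 227, 104, ...

    The longer-term fix is usually to make the client connection itself talk utf8 to MySQL (as WordPress does via its DB_CHARSET setting), so the conversion stops being necessary for new reads; the round trip above is for recovering what is already stored.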

  • F# ref-mutable vars vs object fields

    - by rwallace
    I'm writing a parser in F#, and it needs to be as fast as possible (I'm hoping to parse a 100 MB file in less than a minute). As normal, it uses mutable variables to store the next available character and the next available token (i.e. both the lexer and the parser proper use one unit of lookahead). My current partial implementation uses local variables for these. Since closure variables can't be mutable (anyone know the reason for this?) I've declared them as ref: let rec read file includepath = let c = ref ' ' let k = ref NONE let sb = new StringBuilder() use stream = File.OpenText file let readc() = c := stream.Read() |> char // etc I assume this has some overhead (not much, I know, but I'm trying for maximum speed here), and it's a little inelegant. The most obvious alternative would be to create a parser class object and have the mutable variables be fields in it. Does anyone know which is likely to be faster? Is there any consensus on which is considered better/more idiomatic style? Is there another option I'm missing?

  • Can't store UTF-8 in RDS despite setting up new Parameter Group using Rails on Heroku

    - by Lail
    I'm setting up a new instance of a Rails(2.3.5) app on Heroku using Amazon RDS as the database. I'd like to use UTF-8 for everything. Since RDS isn't UTF-8 by default, I set up a new Parameter Group and switched the database to use that one, basically per this. Seems to have worked: SHOW VARIABLES LIKE '%character%'; character_set_client utf8 character_set_connection utf8 character_set_database utf8 character_set_filesystem binary character_set_results utf8 character_set_server utf8 character_set_system utf8 character_sets_dir /rdsdbbin/mysql-5.1.50.R3/share/mysql/charsets/ Furthermore, I've successfully setup Heroku to use the RDS database. After rake db:migrate, everything looks good: CREATE TABLE `comments` ( `id` int(11) NOT NULL AUTO_INCREMENT, `commentable_id` int(11) DEFAULT NULL, `parent_id` int(11) DEFAULT NULL, `content` text COLLATE utf8_unicode_ci, `child_count` int(11) DEFAULT '0', `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, PRIMARY KEY (`id`), KEY `commentable_id` (`commentable_id`), KEY `index_comments_on_community_id` (`community_id`), KEY `parent_id` (`parent_id`) ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci; In the markup, I've included: <meta http-equiv="Content-Type" content="text/html; charset=utf-8" /> Also, I've set: production: encoding: utf8 collation: utf8_general_ci ...in the database.yml, though I'm not very confident that anything is being done to honor any of those settings in this case, as Heroku seems to be doing its own config when connecting to RDS. Now, I enter a comment through the form in the app: "Úbe® ƒåiL", but in the database I've got "Úbe® Æ’Ã¥iL" It looks fine when Rails loads it back out of the database and it is rendered to the page, so whatever it is doing one way, it's undoing the other way. If I look at the RDS database in Sequel Pro, it looks fine if I set the encoding to "UTF-8 Unicode via Latin 1". So it seems Latin-1 is sneaking in there somewhere. Somebody must have done this before, right? What am I missing?

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose--it stores my text as I write fiction. (I know, I know...geeky.) I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use straight word count as it ignores editing and compression, both vital parts of writing. I think I want to track: (words added)+(words removed) which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable). Is there a flag to git-diff to give diffs on a word-by-word basis? Alternately, is there a solution using standard command-line tools to compute the metric above?
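
    git can already diff at word granularity: `git diff --word-diff=porcelain` emits added runs prefixed with + and removed runs prefixed with -, one change per line, which makes the (words added)+(words removed) metric a few lines of scripting. A sketch below; splitting on \w+ is an assumption about what counts as a word, and --word-diff-regex lets git itself use a different definition:

        import re
        import subprocess
        import sys

        WORD = re.compile(r"\w+")

        def words_changed(rev_a, rev_b, path=None):
            cmd = ["git", "diff", "--word-diff=porcelain", rev_a, rev_b]
            if path:
                cmd += ["--", path]
            out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
            added = removed = 0
            for line in out.splitlines():
                if line.startswith("+++") or line.startswith("---"):
                    continue                      # file headers, not content
                if line.startswith("+"):
                    added += len(WORD.findall(line[1:]))
                elif line.startswith("-"):
                    removed += len(WORD.findall(line[1:]))
            return added + removed

        if __name__ == "__main__":
            print(words_changed(sys.argv[1], sys.argv[2]))

    Run from inside the repository as, for example, python countwords.py HEAD~1 HEAD.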

  • Keeping User-Input UITextView Content Constrained to Its Own Frame

    - by siglesias
    Trying to create a large textbox of fixed size. This problem is very similar to the 140 character constraint problem, but instead of stopping typing at 140 characters, I want to stop typing when the edge of the textView's frame is reached, instead of extending below into the abyss. Here is what I've got for the delegate method. Seems to always be off by a little bit. Any thoughts? - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)text { BOOL edgeBump = NO; CGSize constraint = textView.frame.size; CGSize size = [[textView.text stringByAppendingString:text] sizeWithFont:textView.font constrainedToSize:constraint lineBreakMode:UILineBreakModeWordWrap]; CGFloat height = size.height; if (height > textView.frame.size.height) { edgeBump = YES; } if([text isEqualToString:@"\b"]){ return YES; } else if(edgeBump){ NSLog(@"EDGEBUMP!"); return NO; } return YES; } EDIT: As per Max's suggestion below, here is the code that works: - (BOOL)textView:(UITextView *)textView shouldChangeTextInRange:(NSRange)range replacementText:(NSString *)text { CGSize constraint = textView.frame.size; NSString *whatWasThereBefore = textView.text; textView.text = [textView.text stringByReplacingCharactersInRange:range withString:text]; if (textView.contentSize.height >= constraint.height) { textView.text = whatWasThereBefore; } return NO; }

  • How to Avoid Server Error 414 for Very Long QueryString Values

    - by Registered User
    I had a project that required posting a 2.5 million character QueryString to a web page. The server itself only parsed URI's that were 5,400 characters or less. After trying several different sets of code for WebRequest/WebResponse, WebClient, and Sockets, I finally found the following code that solved my problem: HttpWebRequest webReq; HttpWebResponse webResp = null; string Response = ""; Stream reqStream = null; webReq = (HttpWebRequest)WebRequest.Create(strURL); Byte[] bytes = Encoding.UTF8.GetBytes("xml_doc=" + HttpUtility.UrlEncode(strQueryString)); webReq.ContentType = "application/x-www-form-urlencoded"; webReq.Method = "POST"; webReq.ContentLength = bytes.Length; reqStream = webReq.GetRequestStream(); reqStream.Write(bytes, 0, bytes.Length); reqStream.Close(); webResp = (HttpWebResponse)webReq.GetResponse(); if (webResp.StatusCode == HttpStatusCode.OK) { StreamReader loResponseStream = new StreamReader(webResp.GetResponseStream(), Encoding.UTF8); Response = loResponseStream.ReadToEnd(); } webResp.Close(); reqStream = null; webResp = null; webReq = null;

  • Using list() to extract a data.table inside of a function

    - by Nathan VanHoudnos
    I must admit that the data.table J syntax confuses me. I am attempting to use list() to extract a subset of a data.table as a data.table object as described in Section 1.4 of the data.table FAQ, but I can't get this behavior to work inside of a function. An example: require(data.table) ## Setup some test data set.seed(1) test.data <- data.table( X = rnorm(10), Y = rnorm(10), Z = rnorm(10) ) setkey(test.data, X) ## Notice that I can subset the data table easily with literal names test.data[, list(X,Y)] ## X Y ## 1: -0.8356286 -0.62124058 ## 2: -0.8204684 -0.04493361 ## 3: -0.6264538 1.51178117 ## 4: -0.3053884 0.59390132 ## 5: 0.1836433 0.38984324 ## 6: 0.3295078 1.12493092 ## 7: 0.4874291 -0.01619026 ## 8: 0.5757814 0.82122120 ## 9: 0.7383247 0.94383621 ## 10: 1.5952808 -2.21469989 I can even write a function that will return a column of the data.table as a vector when passed the name of a column as a character vector: get.a.vector <- function( my.dt, my.column ) { ## Step 1: Convert my.column to an expression column.exp <- parse(text=my.column) ## Step 2: Return the vector return( my.dt[, eval(column.exp)] ) } get.a.vector( test.data, 'X') ## [1] -0.8356286 -0.8204684 -0.6264538 -0.3053884 0.1836433 0.3295078 ## [7] 0.4874291 0.5757814 0.7383247 1.5952808 But I cannot pull a similar trick for list(). The inline comments are the output from the interactive browser() session. get.a.dt <- function( my.dt, my.column ) { ## Step 1: Convert my.column to an expression column.exp <- parse(text=my.column) ## Step 2: Enter the browser to play around browser() ## Step 3: Verity that a literal X works: my.dt[, list(X)] ## << not shown >> ## Step 4: Attempt to evaluate the parsed experssion my.dt[, list( eval(column.exp)] ## Error in `rownames<-`(`*tmp*`, value = paste(format(rn, right = TRUE), (from data.table.example.R@1032mCJ#7) : ## length of 'dimnames' [1] not equal to array extent return( my.dt[, list(eval(column.exp))] ) } get.a.dt( test.data, "X" ) What am I missing? Update: Due to some confusion as to why I would want to do this I wanted to clarify. My use case is when I need to access a data.table column when when I generate the name. Something like this: set.seed(2) test.data[, X.1 := rnorm(10)] which.column <- 'X' new.column <- paste(which.column, '.1', sep="") get.a.dt( test.data, new.column ) Hopefully that helps.

  • How to do "map chunks", like terraria or minecraft maps?

    - by O'poil
    Due to performance issues, I have to cut my maps into chunks. I manage the maps in this way: listMap[x][y] = new Tile (x,y); I tried in vain to cut this list into several "chunks" to avoid loading the whole map, because the fps are not very high with a large map. And yet, when I update or Draw I only do it over a small tile range. Here is how I proceed: foreach (List<Tile> list in listMap) { foreach (Tile leTile in list) { if ((leTile.Position.X < screenWidth + hero.Pos.X) && (leTile.Position.X > hero.Pos.X - tileSize) && (leTile.Position.Y < screenHeight + hero.Pos.Y) && (leTile.Position.Y > hero.Pos.Y - tileSize) ) { leTile.Draw(spriteBatch, gameTime); } } } (and the same thing for the Update method). So I am trying to learn from games like minecraft or terraria, which manage maps much larger than mine without the slightest drop in fps. And apparently, they load "chunks". What I would like to understand is how to cut my list into chunks, and how to display them depending on the position of my character. I have tried many things without success. Thank you in advance for putting me on the right track! Ps : Again, sorry for my English :'( Pps : I'm not an experienced developer ;)
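
    The core of chunking is the indexing, not the drawing: tiles live in fixed-size chunks keyed by (tile_x // CHUNK_SIZE, tile_y // CHUNK_SIZE), and each frame you touch only the handful of chunks overlapping the camera rectangle instead of iterating the whole tile list. A minimal Python sketch of the indexing (numbers and names are made up, not XNA code):

        CHUNK_SIZE = 16            # tiles per chunk side
        TILE_PIXELS = 32

        # chunks[(cx, cy)] is a CHUNK_SIZE x CHUNK_SIZE block of tile ids.
        chunks = {}

        def tile_at(x, y):
            key = (x // CHUNK_SIZE, y // CHUNK_SIZE)
            chunk = chunks.get(key)
            if chunk is None:
                return None                       # chunk not generated or not loaded yet
            return chunk[y % CHUNK_SIZE][x % CHUNK_SIZE]

        def visible_chunks(cam_x, cam_y, screen_w, screen_h):
            """Chunk keys overlapping the camera rectangle (pixel coordinates)."""
            span = CHUNK_SIZE * TILE_PIXELS
            for cy in range(cam_y // span, (cam_y + screen_h) // span + 1):
                for cx in range(cam_x // span, (cam_x + screen_w) // span + 1):
                    yield (cx, cy)

        chunks[(0, 0)] = [[0] * CHUNK_SIZE for _ in range(CHUNK_SIZE)]
        print(tile_at(5, 7), list(visible_chunks(0, 0, 800, 480)))

    Each chunk can also cache its tiles into one pre-built sprite batch or render target, so per-frame cost scales with the number of visible chunks rather than with map size.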

  • What are these stray zero-byte files extracted from tarball? (OSX)

    - by Scott M
    I'm extracting a folder from a tarball, and I see these zero-byte files showing up in the result (where they are not in the source.) Setup (all on OS X): On machine one, I have a directory /My/Stuff/Goes/Here/ containing several hundred files. I build it like this tar -cZf mystuff.tgz /My/Stuff/Goes/Here/ On machine two, I scp the tgz file to my local directory, then unpack it. tar -xZf mystuff.tgz It creates ~scott/My/Stuff/Goes/, but then under Goes, I see two files: Here/ - a directory, Here.bGd - a zero byte file. The "Here.bGd" zero-byte file has a random 3-character suffix, mixed upper and lower-case characters. It has the same name as the lowest-level directory mentioned in the tar-creation command. It only appears at the lowest level directory named. Anybody know where these come from, and how I can adjust my tar creation to get rid of them? Update: I checked the table of contents on the files using tar tZvf: toc does not list the zero-byte files, so I'm leaning toward the suggestion that the uncompress machine is at fault. OS X is version 10.5.5 on the unzip machine (not sure how to check the filesystem type). Tar is GNU tar 1.15.1, and it came with the machine.

  • Function calculating the probability of a letter in a sentence

    - by Mike
    I have a function that is supposed to calculate the number of times a letter occurs in a sentence, and based on that, calculate the probability of it occurring in the sentence. To accomplish this, I have a sentence: The Washington Metropolitan Area is the most educated and affluent metropolitan area in the United States. An array of structures, containing the letter, the number of times it occurs, and the probability of it occurring, with one structure for each letter character and an additional structure for punctuation and spaces: struct letters { char letter; int occur; double prob; }box[53]; This is the function itself: void probability(letters box[53], int sum) { cout<<sum<<endl<<endl; for(int c8=0;c8<26;c8++) { box[c8].prob = (box[c8].occur/sum); cout<<box[c8].letter<<endl; cout<<box[c8].occur<<endl; cout<<box[c8].prob<<endl<<endl; } } It correctly identifies that there are 90 letters in the sentence in the first line, prints out the uppercase letter as per the structure in the second line of the for loop, and prints out the number of times it occurs. It continually prints 0 for the probability. What am I doing wrong?
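
    The always-zero output is consistent with integer division: occur and sum are both int, so box[c8].occur/sum truncates to 0 before it is ever assigned to the double, and casting one operand first (for example static_cast<double>(box[c8].occur)/sum) avoids that. A one-line illustration of the difference, using Python only because its // operator mimics what int/int does in C++:

        occurrences, total = 12, 90

        print(occurrences // total)   # 0          -- what int/int produces in C++
        print(occurrences / total)    # 0.1333...  -- what the cast to double gives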
