Search Results

Search found 19563 results on 783 pages for 'binary search'.


  • How to install software packages on a Mac? (MacPorts, Fink, anything better?)

    - by Ben Alpert
    On my Mac OS X machine, how would you recommend I install command line software and other packages? I've been using MacPorts and it always seems quite slow, presumably because it has to compile the packages on-the-fly. I'd much prefer a package management system that has binary packages, saving me the need to compile things every time I want to download something new. I think Fink has binaries for some of the packages, but I usually see MacPorts recommended as the system to use. Which do you think works better and why? (Or is there another system that I haven't heard of?)


  • Problem with a generic list and extension method (C# 3.0)

    - by Newbie
    I am writing a generic extension class for a collection, like this:

        public static class ListExtensions
        {
            public static ICollection<T> Search<T>(this ICollection<T> collection, string stringToSearch)
            {
                ICollection<T> t1 = null;
                foreach (T t in collection)
                {
                    Type k = t.GetType();
                    PropertyInfo pi = k.GetProperty("Name");
                    if (pi.GetValue(t, null).Equals(stringToSearch))
                    {
                        t1.Add(t);
                    }
                }
                return t1;
            }
        }

    But I cannot add items to t1, as it is declared null. Error: "Object reference not set to an instance of an object." I am calling the method like this:

        List<TestClass> listTC = new List<TestClass>();
        listTC.Add(new TestClass { Name = "Ishu", Age = 21 });
        listTC.Add(new TestClass { Name = "Vivek", Age = 40 });
        listTC.Add(new TestClass { Name = "some one else", Age = 12 });
        listTC.Search("Ishu");

    And the test class is:

        public class TestClass
        {
            public string Name { get; set; }
            public int Age { get; set; }
        }

    Using C# 3.0 and .NET Framework 3.5. Thanks.
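
    A minimal sketch of a fix, assuming the only problem is the uninitialized result collection: instantiate a List<T> before the loop, and guard against elements whose type has no Name property:

        using System.Collections.Generic;
        using System.Reflection;

        public static class ListExtensions
        {
            public static ICollection<T> Search<T>(this ICollection<T> collection, string stringToSearch)
            {
                // Instantiate the result list instead of leaving it null.
                ICollection<T> results = new List<T>();
                foreach (T item in collection)
                {
                    PropertyInfo pi = item.GetType().GetProperty("Name");
                    // Skip elements that have no "Name" property; comparing this way
                    // also avoids a NullReferenceException when the value is null.
                    if (pi != null && stringToSearch.Equals(pi.GetValue(item, null)))
                    {
                        results.Add(item);
                    }
                }
                return results;
            }
        }

    Note that the sample call also discards the return value; assign it (e.g. var found = listTC.Search("Ishu");) to use the results.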


  • CentOS Can't connect to FTP

    - by Steven
    I'm having trouble connecting to my FTP server. Here's what the client says:

        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 257 "/home/sxxxn"
        Command:  TYPE I
        Response: 200 Switching to Binary mode.
        Command:  PASV
        Error:    Connection timed out
        Error:    Failed to retrieve directory listing

    My vsftpd.conf file:

        local_enable=YES
        write_enable=YES
        local_umask=022
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        ftpd_banner=Welcome to xxxx.com
        xferlog_std_format=NO
        chroot_local_user=NO
        chroot_list_enable=NO
        chroot_list_file=/etc/vsftpd/chroot_list
        listen=YES
        pasv_enable=YES
        pasv_min_port=3000
        pasv_max_port=3050
        pasv_address=64.xx.xx.xxx
        pam_service_name=vsftpd
        userlist_enable=YES
        userlist_deny=NO
        userlist_file=/etc/vsftpd/vsftpd.userlist

    And I've got these two rules in my iptables:

        -A INPUT -p tcp -m tcp --dport 21 -j ACCEPT
        -A INPUT -p tcp -m tcp --dport 3000:3050 -j ACCEPT

    I've also disabled SELinux.


  • $query returns results but not the ones I want: $query looks good to me :S

    - by Toni Michel Caubet
    I'll start again. Let's say my data is:

        Table element (id, name, ...)
        1, name element 1, ...
        2, name element 2, ...
        3, name element 3, ...

        Table tags (id, name, id_element, ...)
        1, happy,   1
        2, result,  1
        3, very,    1
        4, element, 2
        5, another, 3
        6, element, 1
        7, happy,   2

    So if the search is 'very, happy, element, result', the results I would like are:
    1) the element with id = 1, because it has all the tags
    2) the element with id = 2, because it has the tag 'element' and the tag 'happy' (only two tags fewer)
    3) ... (only three tags fewer)

    And if the search is 'happy, element', the results I would like are:
    1) the element with id = 2, because it has all the tags (and no more)
    2) the element with id = 1, because it has the tag 'element' and the tag 'happy' (and two more tags)
    3) ... and three more tags

    This is an echo of my query (it doesn't meet all the requirements I wrote above, but it's a first test at finding elements with matched tags):

        SELECT element.id as id_deseada, tagg.*
        FROM element, tagg
        WHERE tagg.id_element = element.id
        AND tagg.nombre IN ('happy','tagg','result')
        GROUP BY tagg.id_element
        ORDER BY element.votos

    This returns 10 duplicated elements :S and they don't even have all the tags (and the database does contain tags matching 'happy'). If it helps, this is how I get the elements for a single tag, by name:

        $query = "SELECT element.id
                  FROM element, tagg
                  WHERE tagg.nombre = '$nombre_tagg'
                  AND tagg.id_element = element.id
                  AND lan = '$lan'
                  GROUP BY tagg.id_element";

    I hope it's a bit easier to understand now; excuse my English. :) Thanks a lot for any help!
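
    A sketch of the kind of query that ranks elements by how many of the searched tags they carry (table and column names taken from the question; untested):

        SELECT element.id, COUNT(DISTINCT tagg.nombre) AS matched
        FROM element
        JOIN tagg ON tagg.id_element = element.id
        WHERE tagg.nombre IN ('very', 'happy', 'element', 'result')
        GROUP BY element.id
        ORDER BY matched DESC, element.votos;

    Ordering by the match count first puts elements carrying all the searched tags on top; ties fall back to votos.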


  • Database layout for an application with geocoding features using geokit

    - by vooD
    I'm developing a real estate web catalogue and want to geocode every ad using the geokit gem. My question is: what would be the best database layout, from a performance point of view, if I want to search by country, by city of the selected country, and by administrative area or nearest metro station of the selected city? The available countries, cities, administrative areas and metro stations should be defined by the administrator of the catalogue and must be validated by geocoding. I came up with a single table:

        create_table "geo_locations", :force => true do |t|
          t.integer "geo_location_id"          # parent geo location (e.g. a country is the parent geo location of a city)
          t.string  "country", :null => false  # necessary for any geo location
          t.string  "city"                     # not null for a city geo location and its children
          t.string  "administrative_area"      # not null for an administrative area geo location and its children
          t.string  "thoroughfare_name"        # not null for a metro station or street name geo location and its children
          t.string  "premise_number"           # house number
          t.float   "lng", :null => false
          t.float   "lat", :null => false
          t.float   "bound_sw_lat", :null => false
          t.float   "bound_sw_lng", :null => false
          t.float   "bound_ne_lat", :null => false
          t.float   "bound_ne_lng", :null => false
          t.integer "mappable_id"
          t.string  "mappable_type"
          t.string  "type"                     # country, city, administrative area, metro station or address
        end

    The final geo location is the address: it contains all the information necessary to put a marker for the real estate ad on the map. But I'm still stuck on the search functionality. Any help would be highly appreciated.


  • How to handle empty selection in a JFace-bound combobox?

    - by guido
    I am developing a search dialog in my Eclipse RCP application. In the search dialog I have a combo box, as follows:

        comboImp = new CCombo(grpColSpet, SWT.BORDER | SWT.READ_ONLY);
        comboImp.setBounds(556, 46, 184, 27);
        comboImpViewer = new ComboViewer(comboImp);
        comboImpViewer.setContentProvider(new ArrayContentProvider());
        comboImpViewer.setInput(ImpContentProvider.getInstance().getImps());
        comboImpViewer.setLabelProvider(new LabelProvider() {
            @Override
            public String getText(Object element) {
                return ((Imp) element).getImpName();
            }
        });

    Imp is a database entity, ManyToOne to the main entity which is searched, and ImpContentProvider is the model class which talks to an embedded SQLite database via JPA/Hibernate. This combo box is supposed to contain all instances of Imp, but also to allow an empty selection; its value is bound to a service bean as follows:

        IObservableValue comboImpSelectionObserveWidget = ViewersObservables.observeSingleSelection(comboImpViewer);
        IObservableValue filterByImpObserveValue = BeansObservables.observeValue(searchPrep, "imp");
        bindingContext.bindValue(comboImpSelectionObserveWidget, filterByImpObserveValue, null, null);

    As soon as the user clicks on the combo, a selection (the first element) is made: I can see the call to a selection listener I added on the viewer. My question is: after a selection has been made, how do I let the user change his mind and return to an empty selection in the combo box? Should I add a "fake" empty instance of Imp to the list returned by ImpContentProvider, or should I implement an alternative to ArrayContentProvider? And one additional, related question: why does calling deselectAll() and clearSelection() on the combo NOT set a null value on the bound bean?
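
    The "fake empty instance" idea from the question can be sketched like this (names and the Imp constructor are assumptions, untested against this code):

        // Prepend a sentinel Imp whose label is empty; selecting it stands for "no filter".
        List<Imp> input = new ArrayList<Imp>(ImpContentProvider.getInstance().getImps());
        Imp emptyImp = new Imp();      // hypothetical placeholder instance
        emptyImp.setImpName("");
        input.add(0, emptyImp);
        comboImpViewer.setInput(input);

    The binding then always has a real element to select, and the bean side can translate the sentinel back to null with an UpdateValueStrategy converter on the bindValue call.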


  • Barnyard Service - MySQL Error

    - by SLYN
    I installed barnyard2 and saved it as a service. When I run service barnyard2 start, barnyard2 fails. After running tail -100 /var/log/messages, I see an error like this:

        ERROR database: 'mysql' support is not compiled into this build of snort
        Aug 22 11:52:06 barnyard2[25771]: FATAL ERROR: If this build of barnyard2 was obtained as a binary distribution (e.g., rpm,
        or Windows), then check for alternate builds that contains the necessary
        'mysql' support.

        If this build of barnyard2 was compiled by you, then re-run the
        the ./configure script using the '--with-mysql' switch.
        For non-standard installations of a database, the '--with-mysql=DIR'
        syntax may need to be used to specify the base directory of the DB install.

        See the database documentation for cursory details (doc/README.database).
        and the URL to the most recent database plugin documentation.
        Aug 22 11:52:06 barnyard2[25771]: Barnyard2 exiting

    What should I do to solve this problem? When I installed barnyard2, I used these commands:

        ./configure --with-mysql --with-mysql-libraries=/usr/lib64/mysql
        make ; make install

    (My system is CentOS 6.5 x86_64.)
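
    Since the message is exactly what a barnyard2 built without MySQL prints, one thing worth checking (an assumption, not a confirmed diagnosis) is whether the init script is launching an older packaged binary rather than the freshly compiled one:

        # Which barnyard2 does the shell find, and which one does the service script call?
        which barnyard2
        grep -i barnyard /etc/init.d/barnyard2 | head

        # /usr/local/bin is the default install prefix for a source build;
        # an rpm-installed copy typically lives in /usr/bin.
        ls -l /usr/local/bin/barnyard2 /usr/bin/barnyard2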


  • Encrypt backups with GPG to multiple tapes

    - by Dan
    Currently, I use tar to write my backups (ntbackup files) to a tape drive fed by an autoloader, e.g.:

        tar -F /root/advancetape -cvf /dev/st0 *.bkf

    (/root/advancetape just has the logic to advance to the next tape if one is available, or to notify me to swap the tapes out.) I was recently handed the requirement to encrypt our tape backups. I can easily encrypt the data with no problems using GPG. The problem I'm having is: how do I write this to multiple tapes with the same logic that tar uses to advance the tapes once the current one is filled? I cannot write the encrypted file to disk first (2+ TB). As far as I can tell, tar will not accept binary input from stdin (it's looking for file names). Any ideas? :(


  • String comparison and counting the key in target [closed]

    - by mesun
    Suppose we want to count the number of times that a key string appears in a target string. We are going to create two different functions to accomplish this task: one iterative, and one recursive. For both functions, you can rely on Python's find function - you should read up on its specifications to see how to provide optional arguments to start the search for a match at a location other than the beginning of the string. For example:

        find("atgacatgcacaagtatgcat", "atgc")     # returns the value 5

    while

        find("atgacatgcacaagtatgcat", "atgc", 6)  # returns the value 15

    meaning that by starting the search at index 6, the next match is found at location 15. For the recursive version, you will want to think about how to use your function on a smaller version of the same problem (e.g., on a smaller target string) and then how to combine the result of that computation to solve the original problem. For example, given that you can find the first instance of a key string in a target string, how would you combine that result with an invocation of the same function on a smaller target string? You may find the string slicing operation useful for getting substrings of a string.
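
    A sketch of both versions, written against the str.find method rather than a standalone find function, but with the same logic the problem describes:

        def count_iterative(target, key):
            count = 0
            pos = target.find(key)
            while pos != -1:
                count += 1
                pos = target.find(key, pos + 1)  # resume just past the last match
            return count

        def count_recursive(target, key):
            pos = target.find(key)
            if pos == -1:
                return 0                         # base case: no match left
            # count this match, then recurse on the remainder of the string
            return 1 + count_recursive(target[pos + 1:], key)

        print(count_iterative("atgacatgcacaagtatgcat", "atgc"))  # 2
        print(count_recursive("atgacatgcacaagtatgcat", "atgc"))  # 2

    Both versions restart one character past each match, so they count overlapping occurrences consistently.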


  • Hosting an Access DB

    - by Mitciv
    I'm inexperienced in hosting DBs and I've always had the luxury of someone else setting the DB up. I was going to help a friend get a web page set up; I've got experience in ASP.NET MVC, so I'm going with that. They want a search page that queries a DB and displays the results. My question is about getting the DB set up and hosted. They currently just have the Access DB on a local computer, and there is basically only one table that would need to be queried for the search. What is the best approach to making this table/DB accessible? They would like to keep the main copy of the DB on the local machine, so copying the entire DB over to the hosted site would be time-consuming. Could the lone table that's needed be copied to the host by itself? Should I try to convince them to make changes on the hosted DB and just make copies of that for their local machines? Any suggestions are welcome; again, I'm a total noob when it comes to hosting databases. Thanks.


  • Where are the SQL Server jobs stored?

    - by Saiyine
    I'd like to know what process a SQL Server job is executing, but all I can find is that it calls DTSRun with an encrypted string. After decoding the string, the result is just the name of the job with the user and the password. How can I find out what this job is really calling? Edit: I've found a candidate - they could be in msdb..sysdtspackages - but again, I can't read them, as SQL Server says the data is binary. How can I read them to confirm they are the jobs?
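
    A quick way to see what is stored there (a sketch against the SQL Server 2000-era msdb layout): list the packages first, since the packagedata column itself is an image blob that is meant to be opened in the DTS designer, not read as text.

        -- enumerate the stored DTS packages and their versions
        SELECT name, id, versionid, createdate
        FROM msdb.dbo.sysdtspackages
        ORDER BY name, createdate DESC;

    Matching a package name here to the name in the decoded DTSRun string identifies which package the job runs; opening that package in Enterprise Manager's DTS designer then shows its actual steps.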


  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying a device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally my question - would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!
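
    As a rough illustration of the cache-first idea (purely a sketch: the type and member names are invented, and a real version would persist to SQL Server CE rather than an in-memory dictionary):

        using System;
        using System.Collections.Generic;

        // Cache device logs by minute so repeat requests skip the slow UART link.
        class LogCache
        {
            private readonly SortedDictionary<DateTime, string> cache =
                new SortedDictionary<DateTime, string>();
            private readonly Func<DateTime, string> readFromDevice; // slow path, ~30 logs/s

            public LogCache(Func<DateTime, string> readFromDevice)
            {
                this.readFromDevice = readFromDevice;
            }

            public IEnumerable<string> GetLogs(DateTime from, DateTime to)
            {
                for (DateTime t = from; t <= to; t = t.AddMinutes(1))
                {
                    string log;
                    if (!cache.TryGetValue(t, out log))
                    {
                        log = readFromDevice(t); // only uncached minutes hit the device
                        cache[t] = log;
                    }
                    yield return log;
                }
            }
        }

    The one-log-per-minute key works because the question says a new log is created every minute; a range request then only pays the UART cost for the gaps.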


  • Best way of showing more results with JavaScript/CSS

    - by Ricardo Neves
    I'm developing a website and I'm having trouble showing search results to the user the way I want. Basically, after the user searches, the page makes a couple of AJAX requests, and as soon as a response arrives it appends the info to a specific element on my page. Each result is shown as a line. The problem is that in most cases there are going to be more than 1000 results, and this would give the page a very long scroll. My idea was to show only the first 15 results and, when the user clicks "show more", expand the element to show the next 15 results, and so on. This would be easier to do if the website weren't responsive, but because it is, I can't find a proper way of implementing what I want without lowering the website's performance. I have two ideas:

    The first is to use something like #element .div:nth-child(-n+15) in my CSS and figure out a way of changing the "15" to however many results I want to show. I don't know if this can be done. Is it possible to call CSS rules with parameters? Maybe with Less?

    The second option is probably a bad one if I don't want to lower the website's performance: using JavaScript, I would check whether a specific CSS class exists (like .show-15, .show-30, .show-45), add that class to my element, and if it doesn't exist, create it somehow. Any help would be appreciated.
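
    Plain CSS selectors can't take runtime parameters, but a small script can do the batching without generating new rules. A sketch (the element ids and class names are assumptions):

        <style>
          /* hide every result row beyond the first 15 by default */
          #results .row { display: none; }
          #results .row:nth-child(-n+15) { display: block; }
        </style>
        <script>
          // reveal the next batch of 15 rows on each click; inline styles
          // override the stylesheet's display:none
          var shown = 15;
          document.getElementById('show-more').onclick = function () {
            var rows = document.querySelectorAll('#results .row');
            for (var i = shown; i < shown + 15 && i < rows.length; i++) {
              rows[i].style.display = 'block';
            }
            shown += 15;
          };
        </script>

    Only the newly revealed rows are touched per click, so the cost stays constant no matter how many of the 1000+ results have already been appended.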


  • MySQL Replication fix after server shutdown/start

    - by Jagbir
    Server1 is the master. Server2 is the slave. Both are in our AWS testing environment, and we stop them once we are done with our work. When they start again, the master rotates/creates a new binary log file, but the slave keeps looking for the same/existing one, and replication stops. Right now, I'm manually repairing it on the slave with:

        stop slave;
        CHANGE MASTER TO MASTER_HOST='xx', MASTER_USER='xxx', MASTER_PASSWORD='xxx',
            MASTER_LOG_FILE='new-mysql-bin.00000x', MASTER_LOG_POS=107;
        start slave;
        show slave status\G

    and the slave becomes good again. MySQL is 5.5.x on Ubuntu 12.04. I will appreciate any help in automating this.
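
    The manual repair can be scripted by reading the master's current coordinates and feeding them to CHANGE MASTER. An untested sketch (hosts and credentials are placeholders; it assumes no writes happened between the old and new positions, since any such writes would be silently skipped):

        #!/bin/bash
        # Re-point the slave at the master's current binlog coordinates.
        read FILE POS <<< "$(mysql -h master-host -uroot -pxxx -N \
            -e 'SHOW MASTER STATUS' | awk '{print $1, $2}')"

        mysql -uroot -pxxx -e "STOP SLAVE;
            CHANGE MASTER TO MASTER_HOST='master-host', MASTER_USER='xxx',
                MASTER_PASSWORD='xxx', MASTER_LOG_FILE='$FILE', MASTER_LOG_POS=$POS;
            START SLAVE;"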


  • Which CMS or script for a social network wiki?

    - by Jason
    Hi, I am building a social network. I need a CMS that will allow users to contribute content. Each content item will need to have a Google map, a list of features, ratings, comments, etc., and the content must be editable by other users, with revision control. Also, each user should have a profile with their bookmarked content items, contributed items, comments, etc. It's very important that I can create a template for the wiki/content entries so that each item looks uniform (and, as a kick in the teeth, I would like to be able to search for wiki items using a radial search or a map). Joomla was my first choice, as I've used it for many projects, but the wiki functionality is not there. I was also setting up a grou.ps site, but the wiki is so-so: not feature-rich, and it really doesn't have the options I need. Additionally, I know someone out there will mention Drupal. I may consider it if I can see it put to use without an overabundance of custom programming (I don't mind initial coding, but Drupal requires constant coding and recoding - with this site, I don't have that time commitment). I also thought about using MediaWiki with BuddyPress, but I'm not sure if that's the way to go. Thoughts?


  • Test for `point` within an attachment in `mail-mode`

    - by lawlist
    I'm looking for a better test to determine when point is within a hidden attachment in mail-mode (which is used by wl-draft-mode). The attachments are mostly hidden and look like this:

        --[[application/xls
        Content-Disposition: attachment; filename="hello-world.xls"][base64]]

    A test with invisible-p yields nil. I am currently using the following test, but it seems rather poor:

        (save-excursion
          (goto-char (point-max))
          (goto-char (previous-char-property-change (point)))
          (goto-char (previous-char-property-change (point)))
          (re-search-backward "]]" (point-at-bol) t))

    Any suggestions would be greatly appreciated. Here is the full snippet:

        (goto-char (point-max))
        (cond
         ((= (save-excursion (abs (skip-chars-backward "\n\t"))) 0)
          (insert "\n\n"))
         ((and (= (save-excursion (abs (skip-chars-backward "\n\t"))) 1)
               (not (save-excursion
                      (goto-char (previous-char-property-change (point)))
                      (goto-char (previous-char-property-change (point)))
                      (re-search-backward "]]" (point-at-bol) t))))
          (insert "\n")))

    GOAL: If there are no attachments and no newlines at the end of the buffer, insert \n\n and then insert the attachment thereafter. If there is just one newline at the end of the buffer, insert \n and then insert the attachment thereafter. If there is an attachment at the end of the buffer, do not insert any newlines.
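
    One simpler test to consider - a sketch, assuming the attachment tag is always the last non-blank text in the buffer when an attachment ends it:

        (defun my-attachment-at-eob-p ()
          "Non-nil when the buffer ends in a mime-edit style attachment tag."
          (save-excursion
            (goto-char (point-max))
            (skip-chars-backward " \t\n")
            ;; the hidden attachment block closes with \"]]\"
            (and (>= (point) (+ (point-min) 2))
                 (string= (buffer-substring-no-properties (- (point) 2) (point))
                          "]]"))))

    This avoids the double previous-char-property-change hops entirely by looking only at the trailing characters.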


  • Escape characters in MySQL, in Ruby

    - by Swards
    I have a couple of escaped characters in user-entered fields that I can't figure out. I know they are the "smart" single and double quotes, but I don't know how to search for them in MySQL. When output from Ruby, the characters look like \222, \223, \224, etc.:

        irb> "\222".length
        => 1

    So - do you know how to search for these in MySQL? When I look in MySQL, they look like '?'. I'd like to find all records that have this character in the text field. I tried:

        mysql> select id from table where field LIKE '%\222%'

    but that did not work. Some more information: after doing a mysqldump, this is how one of the characters is represented: '\xE2\x80\x99'. It's the smart single quote. Ultimately, I'm building an RTF file and the characters are coming out completely wrong, so I'm trying to replace them with 'dumb' quotes for now. I was able to do a gsub(/\222/, "'"). Thanks.
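
    Since the dump shows the bytes E2 80 99 (the UTF-8 encoding of the right single quote), one way is to match the raw bytes directly. A sketch - the table and column names are placeholders:

        -- find rows containing the smart single quote
        SELECT id
        FROM my_table
        WHERE field LIKE CONCAT('%', UNHEX('E28099'), '%');

        -- and the in-place replacement with a plain apostrophe
        UPDATE my_table
        SET field = REPLACE(field, UNHEX('E28099'), '''');

    The same pattern works for the double quotes by swapping in their byte sequences (E2809C and E2809D for the left and right double quotes).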


  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines, times 7 loggers, times 5 years - a lot of data. Some of the data is organized as bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on. Currently, I do statistical analysis on the data using a Python/NumPy/Matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer. I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that will take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side, and with Python on the offline side. Users should be allowed to create their own histograms and data analyses. For example, a user can search for all items that are blue and were shipped between week x and week y, while another user can sort the weight distribution of all items by hour for the whole year. I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries. That seemed inefficient. I'm looking forward to hearing your ideas. Thanks!
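
    For scale, the kind of query such a tool would run boils down to grouped aggregates, which Postgres handles well when the filter columns are indexed. A sketch, with table and column names following the question's examples:

        -- weekly counts of blue items collected between two dates
        SELECT date_trunc('week', date_collected) AS week, COUNT(*) AS items
        FROM logged_items
        WHERE item_color = 'blue'
          AND date_collected BETWEEN '2009-01-05' AND '2009-03-29'
        GROUP BY week
        ORDER BY week;

    A PHP front end can build queries of this shape from user-selected filters and feed the rows straight into a charting library, which keeps the heavy lifting inside the database.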


  • MySQL running on an EC2 m1.small instance has high load but low memory usage, possible resolutions?

    - by Tosh
    I have a MySQL server (5.0.75, Ubuntu) on an m1.small instance running on Amazon's EC2 as part of an application. During peak usage the server load rises very high, while the memory usage stays low, and the application server is no longer responsive, since it's waiting for query results. The application server has only 5-8 Apache processes running (mod_perl processes). The data directory uses only 140MB of data, so the MyISAM tables aren't very big. The queries are pretty complicated, with some big joins being performed, and the application makes a lot of queries. mysqltuner reports everything OK except "Maximum possible memory usage: 1.7G (99% of installed RAM)", but I'm nowhere close to using that. My question is: where should I be looking to fix this? Is this something that can be tuned away, or do I just need a larger instance/server? Googling indicates either of those, and also upgrading the MySQL server. Any pointers in the right direction would be greatly appreciated, thanks! EDIT: I just discovered this in my slow query log:

        # Time: 101116 11:17:00
        # User@Host: user[pass] @ [host]
        # Query_time: 4063  Lock_time: 1035  Rows_sent: 0  Rows_examined: 19960174
        SELECT * FROM contacts
        WHERE contacts.contact_id IN
            (SELECT external_id FROM contact_relations
             WHERE external_table = 'contacts'
             AND contact_id IN
                (SELECT contact_id FROM contacts
                 WHERE (company_name like '%butan%' OR country like '%butan%'
                        OR city like '%butan%' OR email1 like '%butan%')
                 AND (company_name is not null and company_name != '')));

    Which actually brings up a different but related question. If I have a contacts table containing:

        John Smith,The Fun Factory,555-1212,[email protected]

    what's the best way to search for that record using "factory" as a search key? Fulltext rarely seems to find items in the middle of a word; for example, "actor" should bring up "Factory".
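
    MySQL 5.0 is known to often execute IN (SELECT ...) as a dependent subquery re-run per outer row, which fits the 19 million rows examined here. A join rewrite to try - a sketch, untested against this schema:

        -- same semantics as the nested IN subqueries, expressed as joins
        SELECT DISTINCT c.*
        FROM contacts c
        JOIN contact_relations r
          ON r.external_id = c.contact_id AND r.external_table = 'contacts'
        JOIN contacts c2
          ON c2.contact_id = r.contact_id
        WHERE (c2.company_name LIKE '%butan%' OR c2.country LIKE '%butan%'
               OR c2.city LIKE '%butan%' OR c2.email1 LIKE '%butan%')
          AND c2.company_name IS NOT NULL AND c2.company_name != '';

    Note that the leading-wildcard LIKE '%butan%' patterns still force a scan of contacts for the inner match regardless of indexes, which also answers the "factory" question: neither B-tree indexes nor MyISAM fulltext handle infix matches like "actor" inside "Factory".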


  • Is there a word processor similar to MS Word which saves files as readable txt files?

    - by zenbomb
    I'm writing a paper together with my supervisor and would like to have more sophisticated version control than *_291112_NEW_NEW_revised1.doc files. My supervisor is a non-computer person, will never ever use LaTeX or git, and loves MS Word. I'm therefore looking for an alternative to Word (I need commenting on text passages!) which stores its files as clean text (markup for formatting is fine), so that I'm able to put them under version control on my side. I'm aware that git can also handle binary files, but I'd prefer the cleaner option of looking at the contents directly. If there's a way to automatically extract the text from Word files, I'm fine with that too for now.
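
    On the extraction route, a sketch of keeping a plain-text shadow copy alongside each Word round-trip (this assumes antiword, which handles legacy .doc files; .docx would need a different tool such as docx2txt):

        # keep a diffable text copy of the manuscript next to the .doc
        antiword paper_from_supervisor.doc > paper_from_supervisor.txt
        git add paper_from_supervisor.doc paper_from_supervisor.txt
        git commit -m "supervisor round-trip, with text shadow for diffs"

    The supervisor keeps working in Word, while git diff on the .txt shadow shows what actually changed between rounds.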


  • can't run binaries or shell scripts

    - by hyperboreean
    I am running Debian testing and I am not able to run any binary or shell script. I keep getting "No such file or directory". The umask is the default one and I haven't fooled around with the paths. Also, I am aware of this question, but it doesn't apply to me: I compiled my code on this machine and I'm trying to run it on the same machine. Also, all of my shell scripts have the correct shebang. Any advice?
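
    "No such file or directory" for a file that plainly exists usually refers to the interpreter or dynamic loader the file asks for, not the file itself. Two quick checks with standard tools (the file names are placeholders):

        # Does the loader/interpreter the binary names actually exist?
        file ./myprogram   # shows the ELF interpreter, e.g. /lib/ld-linux.so.2
        ldd ./myprogram    # "not a dynamic executable" can also mean a missing loader

        # For scripts: a carriage return after the shebang breaks the lookup
        head -1 ./myscript.sh | od -c | head -1

    If od shows \r\n at the end of the shebang line, the kernel is looking for an interpreter whose name literally ends in a carriage return.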


  • Using LINQ to SQL, I'm getting some weird behavior doing text searches with the .Contains method

    - by Nate Bross
    I have a table where I need to do a case-insensitive search on a text field. If I run this query in LINQPad directly against my database, it works as expected:

        Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase"))
        // also, adding in the same constraints I'm using in my repository works in LINQPad:
        // Table.Where(tbl => tbl.Title.Contains("StringWithAnyCase") && tbl.IsActive == true)

    In my application, I've got a repository which exposes IQueryable objects and does some initial filtering. It looks like this:

        var dc = new MyDataContext();

        public IQueryable<Table> GetAllTables()
        {
            var ret = dc.Tables.Where(t => t.IsActive == true);
            return ret;
        }

    In the controller (it's an MVC app), I use code like this in an attempt to mimic the LINQPad query:

        var rpo = new RepositoryOfTable();
        var tables = rpo.GetAllTables();

        // for some reason, this does a CASE SENSITIVE search, which is NOT what I want.
        tables = tables.Where(tbl => tbl.Title.Contains("StringWithAnyCase"));
        return View(tables);

    The column is defined as an nvarchar(50) in SQL Server 2008. Any help or guidance is greatly appreciated!
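
    Whether Contains ends up case-sensitive here is decided by the column's SQL Server collation rather than by LINQ, so one thing to compare is whether LINQPad and the app point at databases with different collations. As a workaround, a sketch that forces one case on both sides (LINQ to SQL translates ToLower into LOWER() in the generated SQL):

        var term = "StringWithAnyCase".ToLower();
        tables = tables.Where(tbl => tbl.Title.ToLower().Contains(term));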


  • How to set up Apache 2.2 (prefork) with mod_fcgid to test a C++ application?

    - by skyeagle
    I have written my first FastCGI application (C/C++), and I need to test it to ensure that it is behaving the way I expect. I have searched for examples on setting up Apache 2.2 with mod_fcgid, but all of the tutorials etc. I have seen relate to PHP, Python, Perl, etc. Is anyone aware of a resource that shows how I may set up Apache to use mod_fcgid (NOT mod_fastcgi) to test my binary? If no online resource is available (I'd be surprised), then could someone please point out the steps required to do the testing?
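
    A minimal mod_fcgid setup for a compiled binary looks roughly like this - the paths are assumptions, while the directives are the standard mod_fcgid ones:

        # httpd.conf fragment (Apache 2.2)
        LoadModule fcgid_module modules/mod_fcgid.so

        ScriptAlias /fcgi-bin/ "/var/www/fcgi-bin/"
        <Directory "/var/www/fcgi-bin">
            Options +ExecCGI
            SetHandler fcgid-script
            Order allow,deny
            Allow from all
        </Directory>

    Copying the binary into /var/www/fcgi-bin/ and requesting http://localhost/fcgi-bin/myapp should then have mod_fcgid spawn and manage the process, just as it would for a scripted FastCGI app.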


  • Indexing a table with duplicates in MySQL/MSSQL with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:

        ID  Store_ID  Feature_ID  Order_ID  Viewed_Date  Deal_ID  IsTrial

    The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the data is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID for this discussion. There are millions of rows in the table, and we have a reporting backend that needs to count the views in a certain period (or overall) where IsTrial is 0, for a particular store id and a particular feature. The query takes the form:

        select count(viewed_date) from theTable
        where viewed_date between '2009-12-01' and '2010-12-31'
        and store_id = '2' and feature_id = '12' and Istrial = 0

    In MSSQL you can have a filtered index to use for IsTrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search time, I need more improvement than this. Right now I have more than 4 million rows. To answer a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows. P.S. I forgot to include the viewed_date filter in the query originally; it's there now.
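
    MySQL has no filtered indexes, but a composite index that puts the equality columns first and the range column last lets the query above be answered almost entirely from the index. A sketch, untested on this data:

        -- equality filters (store_id, feature_id, istrial) first,
        -- the BETWEEN range column (viewed_date) last
        ALTER TABLE theTable
          ADD INDEX idx_report (store_id, feature_id, istrial, viewed_date);

    With istrial inside the index, the low-selectivity flag costs nothing extra, and EXPLAIN should show the range scan confined to the matching (store_id, feature_id, istrial) slice rather than 3.5 million rows.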


  • Help with a loop to return UIImage from possible matches

    - by Canada Dev
    I am parsing a list of locations and would like to return a UIImage with a flag based on these locations. I have a string with the location. This can be many different locations, and I would like to search this string for possible matches in an NSArray; when there's a match, it should find the appropriate filename in an NSDictionary. Here's an example of the NSDictionary and NSArray:

        NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
                              @"franceFlag", @"france",
                              @"greeceFlag", @"greece",
                              @"spainFlag", @"spain",
                              @"norwayFlag", @"norway",
                              nil];

        NSArray *array = [NSArray arrayWithObjects:
                          @"france"
                          @"greece"
                          @"spain"
                          @"portugal"
                          @"ireland"
                          @"norway", nil];

    Obviously I'll have a lot more countries and flags in both. Here's what I've got so far:

        - (UIImage *)flagFromOrigin:(NSString *)locationString
        {
            NSRange range;
            for (NSString *arrayString in countryArray) {
                range = [locationString rangeOfString:arrayString];
                if (range.location != NSNotFound) {
                    return [UIImage imageWithContentsOfFile:
                            [[NSBundle mainBundle] pathForResource:[dictionary objectForKey:arrayString]
                                                            ofType:@"png"]];
                }
            }
            return nil;
        }

    Now, the above doesn't actually work; I am missing something (and perhaps not even doing it right in the first place). The issue is that locationString could hold several descriptions of the same country, something like "Barcelona, Spain", "Madrid, Spain", "North Spain", etc., but I just want to retrieve "Spain" in this case (also, notice the capitals for each country). Basically, I want to search the locationString I pass into the method for a possible match with one of the countries listed in the NSArray. If/when one is found, it should continue into the NSDictionary and grab the appropriate flag based on the matched string from the array. I believe the best way would be to take the string from the array, as this would be a stripped-down version of the location. Any help pointing me in the right direction for the last bit is greatly appreciated.
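
    Two things stand out (observations on the posted code rather than a guaranteed fix): the arrayWithObjects: list is missing commas, so the adjacent string literals compile into one long element, and rangeOfString: is case-sensitive, so lowercase "spain" never matches "Spain". A sketch with both addressed:

        // corrected array literal (the missing commas made the strings
        // concatenate into a single element "francegreecespain...")
        countryArray = [NSArray arrayWithObjects:
                        @"france", @"greece", @"spain",
                        @"portugal", @"ireland", @"norway", nil];

        - (UIImage *)flagFromOrigin:(NSString *)locationString
        {
            for (NSString *country in countryArray) {
                // case-insensitive, so "North Spain" still matches "spain"
                NSRange range = [locationString rangeOfString:country
                                                      options:NSCaseInsensitiveSearch];
                if (range.location != NSNotFound) {
                    NSString *flagName = [dictionary objectForKey:country];
                    return [UIImage imageWithContentsOfFile:
                            [[NSBundle mainBundle] pathForResource:flagName
                                                            ofType:@"png"]];
                }
            }
            return nil;
        }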

