Search Results

Search found 22900 results on 916 pages for 'pascal case'.

Page 697/916

  • jQuery Autocomplete JSON AJAX cross-browser issue with Google Search Appliance

    - by skyfoot
    I am implementing a jQuery autocomplete on a search form and am getting the suggestions from the Google Search Appliance Autocomplete suggestions service, which returns a result set in JSON. What I am trying to do is go off to the GSA to get suggestions when the user types something in the search box. The URL to get the JSON suggestions is as follows: http://gsaurl/suggest?q=<query>&max=10&site=default_site&client=default_frontend&access=p&format=rich The JSON which is returned is as follows: { "query":"re", "results": [ {"name":"red", "type":"suggest"}, {"name":"read", "type":"suggest"}] } The jQuery autocomplete code is as follows: $('#q').autocomplete(searchUrl, { width: 320, dataType: 'json', highlight: false, scroll: true, scrollHeight: 300, parse: function(data) { var array = new Array(); for(var i=0;i<data.results.length;i++) { array[i] = { data: data.results[i], value: data.results[i].name, result: data.results[i].name }; } return array; }, formatItem: function(row) { return row.name; } }); This works in IE but fails in Firefox, as the data returned in the parse function is null. Any ideas why this would be the case? Workaround: I created an aspx page to call the GSA suggest service and to return the JSON from the suggest service. Using this page as a proxy and setting it as the URL in the jQuery autocomplete worked in both IE and Firefox.

    Read the article

  • Different versions of JBoss on the same host

    - by Vladimir Bezugliy
    I have JBoss 4 installed on my PC in directory C:\JBoss4, and the environment variable JBOSS_HOME is set to this directory: JBOSS_HOME=C:\JBoss4 I need to install JBoss 5.1 on the same PC. I installed it into C:\JBoss51 In order to start JBoss 5.1 on the same host where JBoss 4 was already started, I need to redefine the properties jboss.home.dir, jboss.home.url and jboss.service.binding.set: C:\JBoss51\bin\run.sh -Djboss.home.dir=C:/JBoss51 \ -Djboss.home.url=file:/C:/JBoss51 \ -Djboss.service.binding.set=ports-01 But in C:\JBoss51\bin\run.sh I can see the following code: … if [ "x$JBOSS_HOME" = "x" ]; then # get the full path (without any relative bits) JBOSS_HOME=`cd $DIRNAME/..; pwd` fi export JBOSS_HOME … runjar="$JBOSS_HOME/bin/run.jar" JBOSS_BOOT_CLASSPATH="$runjar" And this code does not depend on either jboss.home.dir or jboss.home.url. So when I start JBoss 5.1, will the script use the jar files from JBoss 4? Is that correct? Should I redefine the environment variable JBOSS_HOME when I start JBoss 5.1? In that case the script would use the correct jar files. Or, if I redefine the properties jboss.home.dir and jboss.home.url, will JBoss not use any variables set in run.sh? How does it work?

    Read the article

  • Webservice returning 403 error

    - by user48408
    I'm wondering whether I'm receiving the 403 errors because too many attempted connections are being made to the webservice. If this is the case, how do I get around it? I've tried creating a new instance of InternalWebService each time and disposing of the old one, but I get the same problem. I've disabled the firewall, and the webservice is located locally at the moment. I'm beginning to think it may be a problem with the credentials, but the control tree is populated via the webservice at some stage. If I browse to the web methods in my browser I can run them all. I return an instance of the webservice from my login handler loginsession.cs: static LoginSession() { ... g_NavigatorWebService = new InternalWebService(); g_NavigatorWebService.Credentials = System.Net.CredentialCache.DefaultCredentials; ... } public static InternalWebService NavigatorWebService { get { return g_NavigatorWebService; } } I have a tree view control which uses the webservice to populate itself. IncidentTreeViewControl.cs: public IncidentTreeView() { InitializeComponent(); m_WebService = LoginSession.NavigatorWebService; ... } public void Populate() { m_WebService.BeginGetIncidentSummaryByCompany(new AsyncCallback(IncidentSummaryByClientComplete), null); m_WebService.BeginGetIncidentSummaryByDepartment(new AsyncCallback(IncidentSummaryByDepartmentComplete), null); ... } private void IncidentSummaryByClientComplete(IAsyncResult ar) { MyTypedDataSet data = m_WebService.EndGetIncidentSummaryByCompany(ar); //403 ..cont... } I'm getting the 403 on the last line.

    Read the article

  • Big-O of PHP functions?

    - by Kendall Hopkins
    After using PHP for a while now, I've noticed that not all PHP built-in functions are as fast as expected. Consider the two possible implementations below of a function that finds whether a number is prime using a cached array of primes. //very slow for large $prime_array $prime_array = array( 2, 3, 5, 7, 11, 13, .... 104729, ... ); $result_array = array(); foreach( $array_of_number as $number ) { $result_array[$number] = in_array( $number, $large_prime_array ); } //still decent performance for large $prime_array $prime_array = array( 2 => NULL, 3 => NULL, 5 => NULL, 7 => NULL, 11 => NULL, 13 => NULL, .... 104729 => NULL, ... ); foreach( $array_of_number as $number ) { $result_array[$number] = array_key_exists( $number, $large_prime_array ); } This is because in_array is implemented with a linear search, O(n), which slows down linearly as $prime_array grows, whereas array_key_exists is implemented with a hash lookup, O(1), which does not slow down unless the hash table gets extremely populated (in which case it's only O(log n)). So far I've had to discover the big-Os via trial and error, and occasionally by looking at the source code. Now for the question... I was wondering whether there is a list of the theoretical (or practical) big-O times for all* the PHP built-in functions. *or at least the interesting ones For example, I find it very hard to predict the big-O of the functions listed below because the possible implementation depends on unknown core data structures of PHP: array_merge, array_merge_recursive, array_reverse, array_intersect, array_combine, str_replace (with array inputs), etc.
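    The same contrast can be sketched outside PHP; here is a rough Python illustration of why a linear scan (the in_array-style lookup) falls behind a hash lookup (the array_key_exists-style) as the collection grows - the data and iteration counts are placeholders, not a benchmark of PHP itself:

        import timeit

        # stand-in data: any large collection will do, these are not actual primes
        values_list = list(range(2, 100000))
        values_set = set(values_list)

        # membership in a list is a linear scan: O(n)
        print(timeit.timeit('99991 in values_list', globals=globals(), number=1000))
        # membership in a set/dict is a hash lookup: amortised O(1)
        print(timeit.timeit('99991 in values_set', globals=globals(), number=1000))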

    Read the article

  • AI: Determining what tests to run to get the most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com I have a system (see the about page on the site for details) where: I need to output a ranked list, with confidences, of categories that match a particular feature vector; the binary feature vectors are a list of site IDs & whether this session detected a hit; feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit); categories are a large, non-closed set (user IDs); my total feature space is approximately 50 million items (URLs); for any given test, I can only query approx. 0.2% of that space; I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.); getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries; I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID). I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (test based on prior test data) and exploration (test stuff that's not been tested enough to find out how it performs). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI - having a cheap heuristic for what expensive queries to make - but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
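    One standard way to frame the exploit/explore trade-off is a bandit-style score: rank each candidate site by how informative it has looked so far plus an uncertainty bonus that shrinks the more it has been tested. The sketch below is only illustrative - the site_stats shape, the informativeness proxy, and the bonus constant are all assumptions, not anything taken from cssfingerprint.com:

        import math

        def choose_sites(site_stats, k, total_tests):
            """Pick k sites to probe next using a crude UCB-style score.
            site_stats maps site_id -> (informative_hits, times_tested); both are
            hypothetical counters the caller would maintain from prior tests."""
            def score(stats):
                hits, tested = stats
                if tested == 0:
                    return float('inf')              # always try untested sites first
                exploit = hits / tested              # rough proxy for discriminative power
                explore = math.sqrt(2 * math.log(total_tests + 1) / tested)
                return exploit + explore
            ranked = sorted(site_stats, key=lambda s: score(site_stats[s]), reverse=True)
            return ranked[:k]

        # e.g. choose_sites({101: (8, 10), 102: (1, 10), 103: (0, 0)}, k=2, total_tests=20)
        # favours the untested site 103 and the discriminating site 101.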

    Read the article

  • Problems with jQuery load and getJSON only when using Chrome

    - by leftend
    I'm having an issue with two jQuery calls. The first is a "load" that retrieves HTML and displays it on the page (it does include some JavaScript and CSS in the code that is returned). The second is a "getJSON" that returns JSON - the JSON returned is valid. Everything works fine in every other browser I've tried - except Chrome on either Windows or Mac. The page in question is here: http://urbanistguide.com/category/Contemporary.aspx When you click on a restaurant name in IE/FF, you should see that item expand with more info - and a map displayed to the right. However, if you do this in Chrome all you get is the JSON data printed to the screen. The first problem spot is when the "load" function is called here: var fulllisting = top.find(".listingfull"); fulllisting.load(href2, function() { fulllisting.append("<div style=\"width:99%;margin-top:10px;text-align:right;\"><a href=\"#\" class=\"" + obj.attr("id") + "\">X</a>"); itemId = fulllisting.find("a.listinglink").attr("id"); ... In the above code, the callback function doesn't seem to get invoked. The second problem spot is when the "getJSON" function is called: $.getJSON(href, function(data) { if (data.error.length > 0) { //display error message } else { ... } In this case it just seems to follow the link instead of performing the callback... and yes, I am doing a "return false;" at the end of all of this to prevent the link from executing. All of the rest of the code is inline on that page if you want to view the source code. Any ideas? Thanks

    Read the article

  • How to temporarily replace one primitive type with another when compiling to different targets in C#

    - by Keith
    How can I easily/quickly replace floats with doubles (for example) when compiling to two different targets using these two particular choices of primitive types? Discussion: I have a large amount of C# code under development that I need to compile to alternatively use float, double or decimal depending on the use case of the target assembly. Using something like “class MYNumber : Double” so that it is only necessary to change one line of code does not work, as Double is sealed, and obviously there is no #define in C#. Peppering the code with #if #else statements is also not an option; there is just too much supporting math operator/related code using these particular primitive types. I am at a loss on how to do this apparently simple task, thanks! Edit: Just a quick comment in relation to boxing mentioned in Kyle's reply: Unfortunately I need to avoid boxing, mainly since floats are chosen when maximum speed is required, and decimals when maximum accuracy is the priority (and taking the 20x+ performance hit is acceptable). Boxing would probably rule out decimals as a valid choice and defeat the purpose somewhat. Edit 2: For reference, those suggesting generics as a possible answer to this question should note that there are many issues which count generics out (at least for our needs). For an overview and further references see Using generics for calculations

    Read the article

  • Does GC guarantee that cleared References are enqueued to ReferenceQueue in topological order?

    - by Dimitris Andreou
    Say there are two objects, A and B, there is a pointer A.x --> B, and we create, say, WeakReferences to both A and B, with an associated ReferenceQueue. Assume that both A and B become unreachable. Intuitively B cannot be considered unreachable before A is. In such a case, do we somehow get a guarantee that the respective references will be enqueued in the intuitive (topological, when there are no cycles) order in the ReferenceQueue? I.e. ref(A) before ref(B). I don't know - what if the GC marked a bunch of objects as unreachable, and then enqueued them in no particular order? I was reviewing Finalizer.java of Guava and saw this snippet: private void cleanUp(Reference<?> reference) throws ShutDown { ... if (reference == frqReference) { /* * The client no longer has a reference to the * FinalizableReferenceQueue. We can stop. */ throw new ShutDown(); } frqReference is a PhantomReference to the used ReferenceQueue, so if this is GC'ed, no Finalizable{Weak, Soft, Phantom}References can be alive, since they reference the queue. So they have to be GC'ed before the queue itself can be GC'ed - but still, do we get the guarantee that these references will be enqueued to the ReferenceQueue in the order they get "garbage collected" (as if they get GC'ed one by one)? The code implies that there is some kind of guarantee, otherwise unprocessed references could theoretically remain in the queue. Thanks

    Read the article

  • C# - Alternative to System.Timers.Timer for calling a function at a specific time

    - by Fábio Antunes
    Hello everybody. I want to call a specific function in my C# application at a specific time. At first I thought about using a Timer (System.Timers.Timer), but that soon became impossible to use. Why? Simple. The Timer class requires an Interval in milliseconds, but considering that I might want the function to be executed, let's say, in a week, that would mean: 7 days = 168 hours; 168 hours = 10,080 minutes; 10,080 minutes = 6,048,000 seconds; 6,048,000 seconds = 6,048,000,000 milliseconds. So the Interval would be 6,048,000,000. Now let's remember that the Interval's accepted data type is int, and as we know the int range goes from -2,147,483,648 to 2,147,483,647. That makes Timer useless in this case, since we cannot set an Interval bigger than 2,147,483,647 milliseconds. So I need a solution where I can specify when the function should be called. Something like this: solution.ExecuteAt = "30-04-2010 15:10:00"; solution.Function = "functionName"; solution.Start(); So when the system time reaches "30-04-2010 15:10:00" the function would be executed in the application. How can this problem be solved? Thanks just for taking the time to read my question. But if you could provide me with some help I would be most grateful. Additional info: What will these functions do? Getting climate information and, based on that info: starting / shutting down other applications (most of them console based); sending custom commands to those console applications; powering down, rebooting, sleeping or hibernating the computer; and if possible scheduling the BIOS to power up the computer.
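    A common workaround for interval limits like this is not to set one huge timer but to re-arm a short timer repeatedly until the target moment arrives. A language-agnostic sketch of that idea in Python (the one-hour chunk size is an arbitrary choice, and the chaining pattern carries over to timers in other runtimes):

        import datetime
        import threading

        def schedule_at(run_at, func, max_chunk=3600):
            """Run func at the datetime run_at by chaining timers of at most max_chunk seconds."""
            def tick():
                remaining = (run_at - datetime.datetime.now()).total_seconds()
                if remaining <= 0:
                    func()
                else:
                    threading.Timer(min(remaining, max_chunk), tick).start()
            tick()

        # e.g. schedule_at(datetime.datetime(2010, 4, 30, 15, 10), some_function)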

    Read the article

  • Cross-platform game development: ease of development vs security

    - by alcuadrado
    Hi, I'm a member of and contributor to the Argentum Online (AO) community, the first MMORPG from Argentina, which is free software; although it's not 3D, it's really addictive and has some tens of thousands of users. Unluckily, AO was developed in Visual Basic (yes, you can laugh) by the former community, so as you can imagine the code not only sucks, it has zero portability. I'm planning, with some friends, to rewrite the client, and as a GNU/Linux fanatic I want to do it cross-platform. Some other people are doing the same with the server in Java. So my biggest problem is that we would like to use a rapid development language (like Java, Ruby or Python) but the client would be pretty insecure. A Ruby/Python version would have all its code available, and the Java one would be easily decompilable (yes, we have some crackers in the community). We have considered implementing the security module in C/C++ as a dynamic library, but it can be replaced with a custom one, so it's not really secure. We are also considering doing the core application in C++ and the GUI in Ruby/Python, but we haven't analysed all its implications yet. We really don't want to code the entire game in C/C++, as it doesn't need that much performance (the game is played at 18fps on average) and we want to develop it as fast as possible. So what would you choose in my case? Thank you!

    Read the article

  • Find the flaws in the concept...

    - by Trindaz
    A web-based web browser. Sounds silly, right? Here's a use case. All comments about what could go wrong, and from anyone who has tried and failed at this, are very much wanted. User goes to www.theBrowser.com and logs in with credentials specific to theBrowser.com. User tells theBrowser what their username and password for various sites are. User goes to theBrowser.com/?uri=somesite.com theBrowser sends off the HTTP request with User's login details, then sends the HTTP response back to User. This lets theBrowser do weird and wonderful things like apply colours / style sheets / etc. to every site that gets passed through it. From a technical standpoint, storing usernames and passwords and passing them along is not a challenge for one user, but if there were a few, I'd have to use some kind of server-based browser software to store a session per user logged in at theBrowser.com. How could I do that? Will I have to start from scratch? Obviously privacy and security are issues. Would theBrowser.com be too great a risk, even if users are fully warned? Cheers, Dave

    Read the article

  • Python metaclass for enforcing immutability of custom types

    - by Mark Lehmacher
    Having searched for a way to enforce immutability of custom types and not having found a satisfactory answer, I came up with my own shot at a solution in the form of a metaclass: class ImmutableTypeException( Exception ): pass class Immutable( type ): ''' Enforce some aspects of the immutability contract for new-style classes: - attributes must not be created, modified or deleted after object construction - immutable types must implement __eq__ and __hash__ ''' def __new__( meta, classname, bases, classDict ): instance = type.__new__( meta, classname, bases, classDict ) # Make sure __eq__ and __hash__ have been implemented by the immutable type. # In the case of __hash__ also make sure the object default implementation has been overridden. # TODO: the check for eq and hash functions could probably be done more directly and thus more efficiently # (hasattr does not seem to traverse the type hierarchy) if not '__eq__' in dir( instance ): raise ImmutableTypeException( 'Immutable types must implement __eq__.' ) if not '__hash__' in dir( instance ): raise ImmutableTypeException( 'Immutable types must implement __hash__.' ) if _methodFromObjectType( instance.__hash__ ): raise ImmutableTypeException( 'Immutable types must override object.__hash__.' ) instance.__setattr__ = _setattr instance.__delattr__ = _delattr return instance def __call__( self, *args, **kwargs ): obj = type.__call__( self, *args, **kwargs ) obj.__immutable__ = True return obj def _setattr( self, attr, value ): if '__immutable__' in self.__dict__ and self.__immutable__: raise AttributeError( "'%s' must not be modified because '%s' is immutable" % ( attr, self ) ) object.__setattr__( self, attr, value ) def _delattr( self, attr ): raise AttributeError( "'%s' must not be deleted because '%s' is immutable" % ( attr, self ) ) def _methodFromObjectType( method ): ''' Return True if the given method has been defined by object, False otherwise. ''' try: # TODO: Are we exploiting an implementation detail here? Find better solution! return isinstance( method.__objclass__, object ) except: return False However, while the general approach seems to be working rather well, there are still some iffy implementation details (also see the TODO comments in the code): How do I check if a particular method has been implemented anywhere in the type hierarchy? How do I check which type is the origin of a method declaration (i.e. as part of which type a method has been defined)?
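    For the two TODO questions at the end, one possible approach (a sketch, not part of the original post) is to walk the class's method resolution order and inspect each class's own __dict__; that tells you both whether a method is defined anywhere in the hierarchy and which class defines it:

        def defining_class(cls, name):
            """Return the first class in cls's MRO whose own __dict__ defines `name`,
            or None if no class in the hierarchy defines it."""
            for base in cls.__mro__:
                if name in base.__dict__:
                    return base
            return None

        # not overridden anywhere: defining_class(SomeType, '__hash__') is object
        # origin of a declaration: defining_class(SomeType, '__eq__') names the declaring class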

    Read the article

  • Variant datatype library for C

    - by Joey Adams
    Is there a decent open-source C library for storing and manipulating dynamically-typed variables (a.k.a. variants)? I'm primarily interested in atomic values (int8, int16, int32, uint, strings, blobs, etc.), while JSON-style arrays and objects as well as custom objects would also be nice. A major case where such a library would be useful is in working with SQL databases. The most obvious feature of such a library would be a single type for all supported values, e.g.: struct Variant { enum Type type; union { int8_t int8_; int16_t int16_; // ... }; }; Other features might include converting Variant objects to/from C structures (using a binding table), converting values to/from strings, and integration with an existing database library such as SQLite. Note: I do not believe this question is a duplicate of http://stackoverflow.com/questions/649649/any-library-for-generic-datatypes-in-c , which refers to "queues, trees, maps, lists". What I'm talking about focuses more on making working with SQL databases roughly as smooth as working with them in interpreted languages.

    Read the article

  • Need some help with Abraham's twitteroauth class

    - by diEcho
    Hello all, I'm learning how to use Twitter from the Twitter developer documentation. On the "Authenticating Requests with OAuth" page I was debugging my code against the given procedure. In the "Sending the user to authorization" section it is written that if you are using the callback flow, your oauth_callback should have received back your oauth_token (the same one that you sent, your "request token") and a field called the oauth_verifier. You'll need that for the next step. Here's the response I received: oauth_token=8ldIZyxQeVrFZXFOZH5tAwj6vzJYuLQpl0WUEYtWc&oauth_verifier=pDNg57prOHapMbhv25RNf75lVRd6JDsni1AJJIDYoTY My original code is require_once('twitteroauth/twitteroauth.php'); require_once('config.php'); /* Build TwitterOAuth object with client credentials. */ $connection = new TwitterOAuth(CONSUMER_KEY, CONSUMER_SECRET); /* Get temporary credentials. */ $request_token = $connection->getRequestToken(OAUTH_CALLBACK); /* Save temporary credentials to session. */ $_SESSION['oauth_token'] = $token = $request_token['oauth_token']; $_SESSION['oauth_token_secret'] = $request_token['oauth_token_secret']; /* If last connection failed don't display authorization link. */ switch ($connection->http_code) { case 200: /* Build authorize URL and redirect user to Twitter. */ echo "<br/>Authorize URL:".$url = $connection->getAuthorizeURL($token); //header('Location: ' . $url); break; default: /* Show notification if something went wrong. */ echo 'Could not connect to Twitter. Refresh the page or try again later.'; } and I am getting Authorize URL: https://twitter.com/oauth/authenticate?oauth_token=BHqbrTjsPcyvaAsfDwfU149aAcZjtw45nhLBeG1c I am not getting the above URL with an oauth_verifier. Please tell me where I should see/debug that URL.

    Read the article

  • Multiselect Form Field in PDF

    - by Jason R. Coombs
    Using PDF, is it possible to create a single form element with multiple fields of which several can be selected? For example, in HTML, one can create a set of checkboxes associated with the same field name: <div>Select one for Member of the School Board</div> <input type="checkbox" name="field(school)" value="vote1"> <span class="label">Libby T. Garvey</span><br/> <input type="checkbox" name="field(school)" value="vote2"> <span class="label">Emma N. Violand-Sanchez</span><br/> In this case, the field name is "field(school)", and when the form is submitted, "field(school)" can be supplied 0, 1, or 2 times. Is there an equivalent construct in PDF where a single field can have multiple values? So far in my investigation, it appears that if fields are assigned the same name, it is only possible to select one field. If it is possible to implement this in PDF, what is this construct called and how can it be implemented? Edit: To clarify, I am aware that a PDF can contain multiple form fields with different field names, and those can be selected independently, but then the grouping is implicit and not explicit as with the HTML form. I would like to use a construct that makes the grouping of options explicit, and preferably allows for restrictions (e.g. at least one required, no more than 2 allowed, etc.).

    Read the article

  • Ruby gem not found although it is installed

    - by Eimantas
    I found some similar problems here on SO, but none seem to match my case (sorry if I overlooked one). Here's my problem: I installed the oauth-plugin gem into the Ruby gems dir, but trying to use it in a Rails app tells me that it's not being found. Here's the output of the relevant commands: Installation % s gem install oauth-plugin Successfully installed oauth-plugin-0.3.14 1 gem installed Installing ri documentation for oauth-plugin-0.3.14... Installing RDoc documentation for oauth-plugin-0.3.14... gem which oauth-plugin output: % gem which oauth-plugin /usr/lib/ruby/gems/1.8/gems/oauth-plugin-0.3.14/lib/oauth-plugin.rb gem env output: % gem env RubyGems Environment: - RUBYGEMS VERSION: 1.3.6 - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [i686-darwin10.2.0] - INSTALLATION DIRECTORY: /usr/lib/ruby/gems/1.8 - RUBY EXECUTABLE: /usr/bin/ruby - EXECUTABLE DIRECTORY: /usr/bin - RUBYGEMS PLATFORMS: - ruby - x86-darwin-10 - GEM PATHS: - /usr/lib/ruby/gems/1.8 - /Users/eimantas/.gem/ruby/1.8 - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => true - :bulk_threshold => 1000 - :gem => ["--no-ri", "--no-rdoc"] - :sources => ["http://gems.ruby.lt/", "http://rubygems.org/"] - REMOTE SOURCES: - http://gems.ruby.lt/ - http://rubygems.org/ Doing ls -l /usr/lib/ruby shows this: % ls -l /usr/lib/ruby lrwxr-xr-x 1 root wheel 76 Aug 14 2009 /usr/lib/ruby -> ../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby And the gem in question is in the intended location. oauth-plugin is not the only gem that is not being found by RubyGems despite being located where it should be. Any guidance towards the solution is much appreciated.

    Read the article

  • Using Word COM objects in .NET, InlineShapes not copied from template to document

    - by Keith
    Using .NET and the Word Interop I am programmatically creating a new Word doc from a template (.dot) file. There are a few ways to do this but I've chosen to use the AttachedTemplate property, as such: Dim oWord As New Word.Application() oWord.Visible = False Dim oDocuments As Word.Documents = oWord.Documents Dim oDoc As Word.Document = oDocuments.Add() oDoc.AttachedTemplate = sTemplatePath oDoc.UpdateStyles() (I'm choosing the AttachedTemplate means of doing this over the Documents.Add() method because of a memory leak issue I discovered when using Documents.Add() to open from templates.) This works fine EXCEPT when there is an image (represented as an InlineShape) in the template footer. In that case the image does not appear in the resulting document. Specifically the image should appear in the oDoc.Sections.Item(1).Footers.Item(WdHeaderFooterIndex.wdHeaderFooterPrimary).Range.InlineShapes collection but it does not. This is not a problem when using Documents.Add(), however as I said that method is not an option for me. Is there an extra step I have to take to get the images from the template? I already discovered that when using AttachedTemplate I have to explicitly call UpdateStyles() (as you can see in my code snippet) to apply the template styles to the document, whereas that is done automatically when using Documents.Add(). Or maybe there's some crazy workaround? Your help is much appreciated! :)

    Read the article

  • Not all TFS Build type files are getting copied

    - by k4k4sh1
    Because I have several builds sharing some assemblies containing common build tasks, I have one TFSBuild.proj for all builds and import different targets depending on the build, like the following: <Project DefaultTargets="DesktopBuild" xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="3.5"> <Import Project="Build_1.targets" Condition="'$(BuildDefinition)'=='Build_1'" /> <Import Project="Build_2.targets" Condition="'$(BuildDefinition)'=='Build_2'" /> <Import Project="Build_3.targets" Condition="'$(BuildDefinition)'=='Build_3'" /> </Project> Each target for a particular build has your usual content for a build type file, but in my case I also reference some tasks inside assemblies checked into the same folder as TFSBuild.proj in source control. I wanted to add folders to contain some test build targets, since my folder was getting a bit full and cluttered. The following illustrates what I mean. $(TFS project)\build\ TFSBuild.proj Build_1.targets ... Assembly1.dll Assembly2.dll ... Folder\ Test_target_1.targets .... When I started my build, however, I found that Test_target_1.targets and the other files in Folder were not being copied to the build directory, while TFSBuild.proj and the other files at the root level, as it were, of the build type folder were being copied. This left my test build unable to reference files inside Folder, causing the build to fail immediately. I realize the simplest workaround would be to get rid of Folder and move all of its contents up to the build folder, but I would really like to have Folder if at all possible. Thanks for your help in advance.

    Read the article

  • Fluent NHibernate Automap does not treat IList<T> collections as indexed

    - by Francisco Lozano
    I am using automap to map a domain model (simplified version): public class AppUser : Entity { [Required] public virtual string NickName { get; set; } [Required] [DataType(DataType.Password)] public virtual string PassKey { get; set; } [Required] [DataType(DataType.EmailAddress)] public virtual string EmailAddress { get; set; } public virtual IList<PreferencesDescription> PreferencesDescriptions { get; set; } } public class PreferencesDescription : Entity { public virtual AppUser AppUser { get; set; } public virtual string Content{ get; set; } } The PreferencesDescriptions collection is mapped as an IList, so it is an indexed collection (when I require standard unindexed collections I use ICollection). The fact is that Fluent NHibernate's automap facilities map my domain model as an unindexed collection (so there's no "position" property in the DDL generated by SchemaExport). How can I fix this without having to override this particular case - I mean, how can I make Fluent NHibernate's automap always produce indexed collections for IList but not for ICollection?

    Read the article

  • Rails, MySQL charsets & encoding: binary

    - by Benjamin Vetter
    Hi, I've got a Rails app that runs using UTF-8. It uses a MySQL database, with all tables using MySQL's default charset and collation (i.e. latin1). Therefore the latin1 tables contain UTF-8 data. Sure, that's not nice, but I'm not really bothered by it. Everything works fine, because the connection encoding is latin1 as well and therefore MySQL does not convert between charsets. Only one problem: I need a UTF-8 fulltext index for one table: mysql> show create table autocompletephrases; ... AUTO_INCREMENT=310095 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci But: I don't want to convert between charsets in my Rails app. Therefore I would like to know if I could just set config/database.yml production: adapter: mysql >>>> encoding: binary ... which just calls SET NAMES 'binary' when connecting to MySQL. It looks like it works for my case, because I guess it forces MySQL to -not- convert between charsets (MySQL docs). Does anyone know about problems with doing this? Any side effects? Or do you have any other suggestions? I'd like to avoid converting my whole database to UTF-8. Many thanks! Benjamin

    Read the article

  • Automatically generating a regex from a set of strings residing in a DB in C#

    - by Muhammad Adeel Zahid
    Hello everyone, I have about 100,000 strings in a database and I want to know if there is a way to automatically generate a regex pattern from these strings. All of them are alphabetic strings that use a subset of the English alphabet (X, W and V are not used, for example). Is there any function or library that can help me achieve this in C#? Example strings are KHTK and RAZ; given these two strings, my target is to generate a regex that allows patterns like (k, kh, kht, khtk, r, ra, raz) - case-insensitive, of course. I have downloaded and used some C# applications that help in generating regexes, but they are not useful in my scenario because I want a process in which I sequentially read strings from the DB and add rules to the regex, so that the regex can be reused later in the application or saved to disk. I'm new to regex patterns and don't know whether what I'm asking is even possible. If it is not possible, please suggest an alternative approach. Any help and suggestions are highly appreciated. Regards, Adeel Zahid
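    One way to read the requirement is "match any leading prefix of any stored string, case-insensitively". Under that assumption, a pattern can be built incrementally by collecting every prefix of every string read from the DB and joining them into one alternation, longest first. A rough Python sketch of the idea - a C# version would lean on Regex.Escape and string.Join the same way:

        import re

        def prefix_pattern(words):
            """Compile a case-insensitive regex matching any prefix of any input word."""
            prefixes = {w[:i].lower() for w in words for i in range(1, len(w) + 1)}
            # longest alternatives first, so the regex prefers the longest match
            alternation = "|".join(re.escape(p) for p in sorted(prefixes, key=len, reverse=True))
            return re.compile(r"^(?:%s)$" % alternation, re.IGNORECASE)

        pattern = prefix_pattern(["KHTK", "RAZ"])
        print(bool(pattern.match("kht")))   # True
        print(bool(pattern.match("xyz")))   # False

    For 100,000 strings the flat alternation gets large; a trie-shaped pattern that shares common prefixes, or a plain prefix lookup table, may scale better than a single regex.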

    Read the article

  • Ways std::stringstream can set fail/bad bit?

    - by Evan Teran
    A common piece of code I use for simple string splitting looks like this: inline std::vector<std::string> split(const std::string &s, char delim) { std::vector<std::string> elems; std::stringstream ss(s); std::string item; while(std::getline(ss, item, delim)) { elems.push_back(item); } return elems; } Someone mentioned that this will silently "swallow" errors occurring in std::getline. And of course I agree that's the case. But it occurred to me: what could possibly go wrong here in practice that I would need to worry about? Basically it all boils down to this: inline std::vector<std::string> split(const std::string &s, char delim) { std::vector<std::string> elems; std::stringstream ss(s); std::string item; while(std::getline(ss, item, delim)) { elems.push_back(item); } if(ss.fail()) { // *** How did we get here!? *** } return elems; } A stringstream is backed by a string, so we don't have to worry about any of the issues associated with reading from a file. There is no type conversion going on here, since getline simply reads until it sees the delimiter or EOF. So we can't get any of the errors that something like boost::lexical_cast has to worry about. I simply can't think of anything besides failing to allocate enough memory that could go wrong, but that would just throw a std::bad_alloc well before the std::getline even takes place. What am I missing?

    Read the article

  • Interface Casting vs. Class Casting

    - by Legatou
    I've been led to believe that casting can, in certain circumstances, become a measurable hindrance to performance. This may be more so the case when we start dealing with incoherent webs of nasty exception throwing/catching. Given that I wish to develop more correct heuristics when it comes to programming, I've been prompted to ask the .NET gurus out there this question: Is interface casting faster than class casting? To give a code example, let's say this exists: public interface IEntity { IParent DaddyMommy { get; } } public interface IParent : IEntity { } public class Parent : Entity, IParent { } public class Entity : IEntity { public IParent DaddyMommy { get; protected set; } public IParent AdamEve_Interfaces { get { IEntity e = this; while (e.DaddyMommy != null) e = e.DaddyMommy as IEntity; return e as IParent; } } public Parent AdamEve_Classes { get { Entity e = this; while (e.DaddyMommy != null) e = e.DaddyMommy as Entity; return e as Parent; } } } So, is AdamEve_Interfaces faster than AdamEve_Classes? If so, by how much? And, if you know the answer, why?

    Read the article

  • Subversion - Do I need to reintegrate if I don't merge from trunk

    - by user314584
    Hi, I have read quite a bit about the need to reintegrate when you merge from a branch back to the trunk in SVN (this article was really helpful: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html). The problem seems to come from the fact that people are regularly updating the branch from the trunk, which means that the final merge back is reflective. In my use case, we want to create a release branch which will live for as long as it takes to stabilise the branch and fix any bugs. To maintain stability we don't want to merge up from the trunk, but we do want to regularly merge fixes down from the release branch so that trunk gets all the bug fixes for free. We also don't want to wait until the end of QA to merge back to trunk. We therefore want to: 1.) Create the branch 2.) Make regular changes to the branch (and trunk) 3.) Merge back to trunk regularly (daily perhaps) Since we will never merge up from trunk, I don't think we need to worry about the problems that reintegrating is designed to fix. Can anyone see a problem with this approach? Cheers, Matt

    Read the article

  • merge() multiple data frames (do.call ?)

    - by Vincent
    Hi everyone, here's my very simple question: merge() only takes two data frames as input. I need to merge a series of data frames from a list, using the same keys for every merge operation. Given a list named "test", I want to do something like: do.call("merge", test). I could write some kind of loop, but I'm wondering if there's a standard or built-in way to do this more efficiently. Any advice is appreciated. Thanks! Here's a subset of the dataset in dput format (note that merging on country is trivial in this case, but that there are more countries in the original data): test <- list(structure(list(country = c("United States", "United States", "United States", "United States", "United States"), NY.GNS.ICTR.GN.ZS = c(13.5054687, 14.7608697, 14.1115876, 13.3389063, 12.9048351), year = c(2007, 2006, 2005, 2004, 2003)), .Names = c("country", "NY.GNS.ICTR.GN.ZS", "year"), row.names = c(NA, 5L), class = "data.frame"), structure(list( country = c("United States", "United States", "United States", "United States", "United States"), NE.TRD.GNFS.ZS = c(29.3459277, 28.352838, 26.9861939, 25.6231246, 23.6615328), year = c(2007, 2006, 2005, 2004, 2003)), .Names = c("country", "NE.TRD.GNFS.ZS", "year"), row.names = c(NA, 5L), class = "data.frame"), structure(list( country = c("United States", "United States", "United States", "United States", "United States"), NY.GDP.MKTP.CD = c(1.37416e+13, 1.31165e+13, 1.23641e+13, 1.16309e+13, 1.0908e+13), year = c(2007, 2006, 2005, 2004, 2003)), .Names = c("country", "NY.GDP.MKTP.CD", "year"), row.names = c(NA, 5L), class = "data.frame"))
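    Since merge() joins only two frames at a time, the usual pattern is to fold the pairwise merge over the whole list (in R, Reduce() plays that role rather than do.call()). The same fold idea, sketched here in Python with pandas purely as an illustration - the column names mirror the dput sample above but the values are made up:

        from functools import reduce
        import pandas as pd

        frames = [
            pd.DataFrame({"country": ["United States"] * 2, "year": [2006, 2007], "a": [1, 2]}),
            pd.DataFrame({"country": ["United States"] * 2, "year": [2006, 2007], "b": [3, 4]}),
            pd.DataFrame({"country": ["United States"] * 2, "year": [2006, 2007], "c": [5, 6]}),
        ]
        # fold the two-way merge across the list, joining on the same keys every time
        merged = reduce(lambda left, right: left.merge(right, on=["country", "year"]), frames)
        print(merged)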

    Read the article
