Search Results

Search found 22641 results on 906 pages for 'use case'.

  • jQuery Autocomplete Json Ajax cross browser issue with Google Search Appliance

    - by skyfoot
    I am implementing a jQuery autocomplete on a search form and am getting the suggestions from the Google Search Appliance autocomplete suggestion service, which returns a result set in JSON. What I am trying to do is go off to the GSA for suggestions when the user types something in the search box. The URL to get the JSON suggestions is as follows:

        http://gsaurl/suggest?q=<query>&max=10&site=default_site&client=default_frontend&access=p&format=rich

    The JSON which is returned is as follows:

        {
          "query": "re",
          "results": [
            {"name": "red",  "type": "suggest"},
            {"name": "read", "type": "suggest"}
          ]
        }

    The jQuery autocomplete code is as follows:

        $('#q').autocomplete(searchUrl, {
            width: 320,
            dataType: 'json',
            highlight: false,
            scroll: true,
            scrollHeight: 300,
            parse: function(data) {
                var array = new Array();
                for (var i = 0; i < data.results.length; i++) {
                    array[i] = {
                        data: data.results[i],
                        value: data.results[i].name,
                        result: data.results[i].name
                    };
                }
                return array;
            },
            formatItem: function(row) {
                return row.name;
            }
        });

    This works in IE but fails in Firefox, as the data passed to the parse function is null. Any ideas why this would be the case?

    Workaround: I created an .aspx page to call the GSA suggest service and return its JSON. Using this page as a proxy, and setting it as the url in the jQuery autocomplete, worked in both IE and Firefox.
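
    A same-origin proxy sidesteps cross-domain XMLHttpRequest restrictions, which would also explain the browser-dependent behaviour. Below is a minimal sketch of such a proxy in .NET, assuming a hypothetical SuggestProxy.aspx page and the gsaurl host from the question:

        // SuggestProxy.aspx.cs -- relays the GSA suggest call from the app's own origin
        using System;
        using System.Net;
        using System.Web;

        public partial class SuggestProxy : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string q = HttpUtility.UrlEncode(Request.QueryString["q"] ?? "");
                string url = "http://gsaurl/suggest?q=" + q +
                             "&max=10&site=default_site&client=default_frontend&access=p&format=rich";
                using (var client = new WebClient())
                {
                    Response.ContentType = "application/json";
                    Response.Write(client.DownloadString(url)); // pass the JSON through unchanged
                    Response.End();
                }
            }
        }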

  • Webservice returning 403 error

    - by user48408
    I'm wondering whether I'm receiving the 403 errors because too many simultaneous connections are being made to the webservice. If this is the case, how do I get around it? I've tried creating a new instance of InternalWebService each time and disposing of the old one, but I get the same problem. I've disabled the firewall, and the webservice is located locally at the moment. I'm beginning to think it may be a problem with the credentials, yet the control tree is populated via the webservice at some stage. If I browse to the webmethods in my browser I can run them all.

    I return an instance of the webservice from my login handler, loginsession.cs:

        static LoginSession()
        {
            ...
            g_NavigatorWebService = new InternalWebService();
            g_NavigatorWebService.Credentials = System.Net.CredentialCache.DefaultCredentials;
            ...
        }

        public static InternalWebService NavigatorWebService
        {
            get { return g_NavigatorWebService; }
        }

    I have a tree view control which uses the webservice to populate itself, IncidentTreeViewControl.cs:

        public IncidentTreeView()
        {
            InitializeComponent();
            m_WebService = LoginSession.NavigatorWebService;
            ...
        }

        public void Populate()
        {
            m_WebService.BeginGetIncidentSummaryByCompany(new AsyncCallback(IncidentSummaryByClientComplete), null);
            m_WebService.BeginGetIncidentSummaryByDepartment(new AsyncCallback(IncidentSummaryByDepartmentComplete), null);
            ...
        }

        private void IncidentSummaryByClientComplete(IAsyncResult ar)
        {
            MyTypedDataSet data = m_WebService.EndGetIncidentSummaryByCompany(ar); // 403
            // ..cont...
        }

    I'm getting the 403 on the last line.
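
    If the credentials theory holds, one cheap experiment is to pass an explicit NetworkCredential instead of the ambient identity; the async callbacks run on worker threads that may not carry the same security context as the interactive browser test. A sketch with placeholder values:

        g_NavigatorWebService = new InternalWebService();
        // Placeholder user/password/domain -- substitute an account known
        // to have rights on the webservice.
        g_NavigatorWebService.Credentials =
            new System.Net.NetworkCredential("user", "password", "DOMAIN");
        g_NavigatorWebService.PreAuthenticate = true; // send the auth header up front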

  • segmentation fault on Unix - possible stack corruption

    - by bob
    Hello, I'm looking at a core from a process running on Unix. Usually I can work my way around and root into the backtrace to try to identify a memory issue. In this case, I'm not sure how to proceed. Firstly, the backtrace only gives 3 frames where I would expect a lot more. For those frames, all the function parameters presented appear to be completely invalid; they are not what I would expect. Some pointer parameters have the following associated with them:

        Cannot access memory at address

    Would this suggest some kind of complete stack corruption? I ran the process with libumem and all the buffers were reported as being clean; umem_status reported nothing either. So basically I'm stumped. What are the likely causes? What should I look for in the code, since libumem appears to have reported no errors? Any suggestions on how I can debug further? Any extra features in mdb I should consider? Thank you.
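
    One note: libumem guards the heap, not the stack, so a clean umem report is entirely consistent with a stack-smashing overflow, which would also explain the truncated, garbage-parameter backtrace. Since mdb is mentioned: assuming a Solaris-style mdb with the libumem dcmds loaded, a few commands worth trying on the core (a sketch only; verify the dcmd names against your mdb version):

        $ mdb core
        > $C               # backtrace with frame pointers; garbage pointers here
                           # usually mean the stack itself was overwritten
        > ::umem_verify    # re-check the umem caches for corruption in the core
        > ::umem_log       # recent alloc/free events, if UMEM_LOGGING was enabled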

  • Problem Activating Sharepoint Timer Job

    - by Ben Robinson
    I have created a very simple SharePoint timer job. All I want it to do is iterate through a list and update each list item so that it triggers an existing workflow that works fine. In other words, all I am trying to do is work around the limitation that workflows cannot be triggered on a scheduled basis. I have written a class that inherits from SPJobDefinition that does the work, and a class that inherits from SPFeatureReceiver to install and activate it. I created the feature using SPVisualdev, which my colleagues have used in the past for other SP development. My job class is below:

        public class DriverSafetyCheckTrigger : SPJobDefinition
        {
            private string pi_SiteUrl;

            public DriverSafetyCheckTrigger(string SiteURL, SPWebApplication WebApp)
                : base("DriverSafetyCheckTrigger", WebApp, null, SPJobLockType.Job)
            {
                this.Title = "DriverSafetyCheckTrigger";
                pi_SiteUrl = SiteURL;
            }

            public override void Execute(Guid targetInstanceId)
            {
                using (SPSite siteCollection = new SPSite(pi_SiteUrl))
                {
                    using (SPWeb site = siteCollection.RootWeb)
                    {
                        SPList taskList = site.Lists["Driver Safety Check"];
                        foreach (SPListItem item in taskList.Items)
                        {
                            item.Update();
                        }
                    }
                }
            }
        }

    The only thing in the feature receiver class is the overridden FeatureActivated method below:

        public override void FeatureActivated(SPFeatureReceiverProperties Properties)
        {
            SPSite site = Properties.Feature.Parent as SPSite;

            // Make sure the job isn't already registered.
            foreach (SPJobDefinition job in site.WebApplication.JobDefinitions)
            {
                if (job.Name == "DriverSafetyCheckTrigger")
                    job.Delete();
            }

            // Install the job.
            DriverSafetyCheckTrigger oDriverSafetyCheckTrigger = new DriverSafetyCheckTrigger(site.Url, site.WebApplication);
            SPDailySchedule oSchedule = new SPDailySchedule();
            oSchedule.BeginHour = 1;
            oDriverSafetyCheckTrigger.Schedule = oSchedule;
            oDriverSafetyCheckTrigger.Update();
        }

    The problem I have is that when I try to activate the feature, it throws a NullReferenceException on the line oDriverSafetyCheckTrigger.Update(). I am not sure what is null in this case; the example I followed is this tutorial. I am not sure what I am doing wrong.
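
    One thing worth ruling out (an educated guess, not something the post confirms): the timer infrastructure rehydrates SPJobDefinition subclasses by reflection, so they need a public parameterless constructor in addition to the one used at registration time. A sketch of the addition:

        public class DriverSafetyCheckTrigger : SPJobDefinition
        {
            // Required so the timer service can deserialize the job definition.
            public DriverSafetyCheckTrigger() : base() { }

            public DriverSafetyCheckTrigger(string SiteURL, SPWebApplication WebApp)
                : base("DriverSafetyCheckTrigger", WebApp, null, SPJobLockType.Job)
            {
                this.Title = "DriverSafetyCheckTrigger";
                pi_SiteUrl = SiteURL;
            }

            // ... Execute() as before ...
        }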

  • Parsing a string, Grammar file.

    - by defn
    How would I separate the below string into its parts? I need to separate each <Word>, including the angle brackets, from the rest of the string. So in the below case I would end up with several strings:

        1. "I have to break up with you because "
        2. "<reason>"
        3. " . But let's still "
        4. "<disclaimer>"
        5. " ."

    The input string is:

        I have to break up with you because <reason> . But let's still <disclaimer> .

    Below is what I currently have (it's ugly...):

        boolean complete = false;
        int begin = 0;
        int end = 0;
        while (complete == false) {
            if (s.charAt(end) == '<') {
                stack.add(new Terminal(s.substring(begin, end)));
                begin = end;
            } else if (s.charAt(end) == '>') {
                stack.add(new NonTerminal(s.substring(begin, end)));
                begin = end;
                end++;
            } else if (end == s.length()) {
                if (isTerminal(getSubstring(s, begin, end))) {
                    stack.add(new Terminal(s.substring(begin, end)));
                } else {
                    stack.add(new NonTerminal(s.substring(begin, end)));
                }
                complete = true;
            }
            end++;
        }
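
    For comparison, a regex-based tokenizer keeps this much shorter. A sketch, assuming tags never nest; mapping tokens onto the question's Terminal/NonTerminal classes is left as in the original:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        public class GrammarTokenizer {
            // Either a <tag> (brackets kept) or a run of anything that isn't '<'.
            private static final Pattern TOKEN = Pattern.compile("<[^>]*>|[^<]+");

            public static List<String> tokenize(String s) {
                List<String> tokens = new ArrayList<String>();
                Matcher m = TOKEN.matcher(s);
                while (m.find()) {
                    tokens.add(m.group());
                }
                return tokens;
            }
        }

    On the sample sentence this yields the five pieces listed above; tokens starting with '<' would become NonTerminals, everything else Terminals.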

  • PostgreSQL: Rolling back a transaction within a plpgsql function?

    - by jamieb
    Coming from the MS SQL world, I tend to make heavy use of stored procedures. I'm currently writing an application that uses a lot of PostgreSQL plpgsql functions. What I'd like to do is roll back all INSERTs/UPDATEs contained within a particular function if I get an exception at any point within it. I was originally under the impression that each function is wrapped in its own transaction and that an exception would automatically roll back everything. However, that doesn't seem to be the case. I'm wondering if I ought to be using savepoints in combination with exception handling instead? But I don't really understand the difference between a transaction and a savepoint well enough to know if this is the best approach. Any advice, please?

        CREATE OR REPLACE FUNCTION do_something(
            _an_input_var int
        )
        RETURNS bool AS $$
        DECLARE
            _a_variable int;
        BEGIN
            INSERT INTO tableA (col1, col2, col3)
            VALUES (0, 1, 2);

            INSERT INTO tableB (col1, col2, col3)
            VALUES (0, 1, 'whoops! not an integer');

            -- The exception will cause the function to bomb, but the values
            -- inserted into "tableA" are not rolled back.
            RETURN True;
        END;
        $$ LANGUAGE plpgsql;
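
    For what it's worth, plpgsql has this behaviour built in: a BEGIN ... EXCEPTION block runs inside an implicit savepoint, and entering the handler rolls back everything the block changed. A sketch along those lines (note the PostgreSQL docs warn that a block with an EXCEPTION clause is noticeably more expensive, so it's worth adding only where needed):

        CREATE OR REPLACE FUNCTION do_something(
            _an_input_var int
        )
        RETURNS bool AS $$
        BEGIN
            INSERT INTO tableA (col1, col2, col3) VALUES (0, 1, 2);
            INSERT INTO tableB (col1, col2, col3) VALUES (0, 1, 'whoops! not an integer');
            RETURN True;
        EXCEPTION
            WHEN others THEN
                -- Reaching the handler undoes both INSERTs above:
                -- the block's implicit savepoint is rolled back.
                RETURN False;
        END;
        $$ LANGUAGE plpgsql;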

  • rails, mysql charsets & encoding: binary

    - by Benjamin Vetter
    Hi, I have a Rails app that runs using UTF-8. It uses a MySQL database; all tables have MySQL's default charset and collation (i.e. latin1), so the latin1 tables contain UTF-8 data. Sure, that's not nice, but I'm not really interested in fixing it. Everything works fine, because the connection encoding is latin1 as well and therefore MySQL does not convert between charsets. Only one problem: I need a UTF-8 fulltext index for one table:

        mysql> show create table autocompletephrases;
        ...
        AUTO_INCREMENT=310095 DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci

    But I don't want to convert between charsets in my Rails app. Therefore I would like to know if I could just set this in config/database.yml:

        production:
          adapter: mysql
          encoding: binary
          ...

    which just calls SET NAMES 'binary' when connecting to MySQL. It looks like it works for my case, because I guess it forces MySQL to not convert between charsets (see the MySQL docs). Does anyone know about problems with doing this? Any side effects? Or do you have any other suggestions? I'd like to avoid converting my whole database to UTF-8. Many thanks! Benjamin

  • Hover effect for parent unordered list inherited by child list

    - by elvista
    I have a simple menu:

        <ul id="menu">
          <li class="leaf"><a href="#">Menu Item 1</a></li>
          <li class="leaf"><a href="#">Menu Item 2</a></li>
          <li class="expanded"><a href="#">Menu Item 3</a>
            <ul>
              <li class="leaf"><a href="#">Menu Item a</a></li>
              <li class="leaf"><a href="#">Menu Item b</a></li>
              <li class="leaf"><a href="#">Menu Item c</a></li>
            </ul>
          </li>
          <li class="leaf"><a href="#">Menu Item 4</a></li>
        </ul>

    and this rule:

        ul#menu li a:hover { font-weight: bold; }

    The problem I am facing is that when I hover over a ul li li, the parent as well as all its siblings get the hover effect. I only want the list item I hovered over to get the effect. I tried ul#menu li.leaf a:hover {..} and ul#menu li.expanded a:hover {..}, but even in that case, when I hover over li.expanded, its child inherits the style. How do I fix this?
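
    One pattern that would produce exactly this symptom is a rule of the form ul#menu li:hover a, which bolds every anchor inside the hovered li -- including the whole nested list once the pointer is anywhere inside the expanded item. Scoping each level with the child combinator is the usual fix; a sketch (note that IE6 ignores >, if that still matters):

        /* Bold only the anchor belonging to the hovered item itself */
        ul#menu > li > a:hover { font-weight: bold; }

        /* The nested level gets its own rule, styled independently */
        ul#menu > li > ul > li > a:hover { font-weight: bold; }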

  • Graph limitations - Should I use Decorator?

    - by Nick Wiggill
    I have a functional AdjacencyListGraph class that adheres to a defined interface GraphStructure. In order to layer limitations on this (e.g. acyclic, non-null, unique vertex data, etc.), I can see two possible routes, each making use of the GraphStructure interface:

    1. Create a single class ("ControlledGraph") that has a set of bit flags specifying various possible limitations. Handle all limitations in this class. Update the class if new limitation requirements become apparent.

    2. Use the Decorator pattern (DI, essentially) to create a separate class implementation for each individual limitation that a client class may wish to use. The benefit here is that we are adhering to the Single Responsibility Principle.

    I would lean toward the latter, but by Jove!, I hate the Decorator pattern. It is the epitome of clutter, IMO. Truthfully it all depends on how many decorators might be applied in the worst case -- in mine so far, the count is seven (the number of discrete limitations I've recognised at this stage). The other problem with Decorator is that I'm going to have to do interface method wrapping in every... single... decorator class. Bah. Which would you go for, if either? Or, if you can suggest some more elegant solution, that would be welcome.

    EDIT: It occurs to me that using the proposed ControlledGraph class with the Strategy pattern may help here... some sort of template method / functors setup, with individual bits applying separate controls in the various graph-canonical interface methods. Or am I losing the plot?
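
    For reference, the method-wrapping boilerplate can be paid once in an abstract forwarding base class, so each concrete decorator only overrides the operations it constrains. A sketch, with a hypothetical two-method GraphStructure (the real interface will differ):

        // Hypothetical minimal interface standing in for the real GraphStructure.
        interface GraphStructure<V> {
            void addVertex(V v);
            void addEdge(V from, V to);
        }

        // Pays the forwarding cost exactly once.
        abstract class GraphDecorator<V> implements GraphStructure<V> {
            protected final GraphStructure<V> inner;
            protected GraphDecorator(GraphStructure<V> inner) { this.inner = inner; }
            public void addVertex(V v) { inner.addVertex(v); }
            public void addEdge(V from, V to) { inner.addEdge(from, to); }
        }

        // Each limitation overrides only the operations it cares about.
        class NonNullGraph<V> extends GraphDecorator<V> {
            NonNullGraph(GraphStructure<V> inner) { super(inner); }
            @Override public void addVertex(V v) {
                if (v == null) throw new IllegalArgumentException("null vertex");
                super.addVertex(v);
            }
        }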

  • How do I extract HTML content using Regex in PHP

    - by gAMBOOKa
    I know, I know... regex is not the best way to extract HTML text. But I need to extract article text from a lot of pages, and I can store regexes in the database for each website. I'm not sure how XML parsers would work with multiple websites; you'd need a separate function for each website. In any case, I don't know much about regexes, so bear with me. I've got an HTML page in a format similar to this:

        <html>
        <head>...</head>
        <body>
          <div class=nav>...</div><p id="someshit" />
          <div class=body>....</div>
          <div class=footer>...</div>
        </body>

    I need to extract the contents of the body class container. I tried this:

        $pattern = "/<div class=\"body\">\(.*?\)<\/div>/sui";
        $text = $htmlPageAsIs;
        if (preg_match($pattern, $text, $matches))
            echo "MATCHED!";
        else
            echo "Sorry gambooka, but your text is in another castle.";

    What am I doing wrong? My text ends up in another castle.
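
    Two things stand out in that pattern. In PCRE, \( and \) match literal parenthesis characters; a capturing group uses bare parentheses. And the sample page writes the attribute without quotes (class=body), while the pattern requires class="body". A corrected sketch:

        <?php
        // Bare parentheses form the capturing group; ["']? tolerates
        // both class=body and class="body".
        $pattern = '/<div class=["\']?body["\']?>(.*?)<\/div>/sui';

        if (preg_match($pattern, $htmlPageAsIs, $matches)) {
            echo $matches[1]; // the captured contents of the div
        } else {
            echo "Sorry gambooka, but your text is in another castle.";
        }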

  • Counting the number of objects that meets a certain criteria

    - by Candy Chiu
    The title doesn't tell the complete story, so please read on. I have two objects: Adult and Child. Child has a boolean field isMale and a reference to Adult; Adult doesn't reference Child.

        public class Adult {
            long id;
        }

        public class Child {
            long id;
            boolean isMale;
            Adult parent;
        }

    I want to create a query that lists the number of sons each adult has, including adults who don't have any sons. I tried:

    Query 1:

        SELECT adult, COUNT(child)
        FROM Child child
        RIGHT OUTER JOIN child.parent as adult
        WHERE child.isMale = 'true'
        GROUP BY adult

    which translates to SQL:

        select adult.id as col_0_0_, count(child.id) as col_1_0_, ... {omit properties}
        from Child child
        right outer join Adult adult on child.parentId = adult.id
        where child.isMale = 'true'
        group by adult.id

    Query 1 doesn't pick up adults that don't have any sons.

    Query 2:

        SELECT adult, COUNT(child.isMale)
        FROM Child child
        RIGHT OUTER JOIN child.parent as adult
        GROUP BY adult

    which translates to SQL:

        select adult.id as col_0_0_, count(child.id) as col_1_0_, ... {omit properties}
        from Child child
        right outer join Adult adult on child.parentId = adult.id
        group by adult.id

    Query 2 doesn't have the right count of sons; COUNT doesn't evaluate isMale. The WHERE clause in Query 1 filtered out the adults with no sons. How do I build an HQL or a Criteria query for this use case? Thanks.
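
    The underlying problem is that the isMale test belongs in the join condition rather than the WHERE clause; filtering the outer-joined side in WHERE quietly turns the outer join into an inner one. If pushing the predicate into the join is awkward in HQL, a correlated subquery sidesteps the join entirely -- a sketch:

        SELECT adult,
               (SELECT COUNT(child)
                FROM Child child
                WHERE child.parent = adult
                  AND child.isMale = true)
        FROM Adult adult

    Adults with no sons then come back with a count of 0 instead of dropping out of the result.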

  • Not all TFS Build type files are getting copied

    - by k4k4sh1
    Because I have several builds sharing some assemblies containing common build tasks, I have one TFSBuild.proj for all builds and import different targets depending on the build, like the following:

        <Project DefaultTargets="DesktopBuild"
                 xmlns="http://schemas.microsoft.com/developer/msbuild/2003"
                 ToolsVersion="3.5">
          <Import Project="Build_1.targets" Condition="'$(BuildDefinition)'=='Build_1'" />
          <Import Project="Build_2.targets" Condition="'$(BuildDefinition)'=='Build_2'" />
          <Import Project="Build_3.targets" Condition="'$(BuildDefinition)'=='Build_3'" />
        </Project>

    Each target for a particular build has your usual content for a build type file, but in my case I also reference some tasks inside assemblies checked into the same folder as TFSBuild.proj in source control. I wanted to add folders to contain some test build targets, since my folder was getting a bit full and cluttered. The following illustrates what I mean:

        $(TFS project)\build\
            TFSBuild.proj
            Build_1.targets
            ...
            Assembly1.dll
            Assembly2.dll
            ...
            Folder\
                Test_target_1.targets
                ...

    When I started my build, however, I found that Test_target_1.targets and the other files in Folder were not being copied to the build directory, while TFSBuild.proj and the other files at the root level, as it were, of the build type folder were being copied. This meant my test build could not reference the files inside Folder, causing the build to immediately fail. I realize the simplest workaround would be to get rid of Folder and move all of its contents up to the build folder, but I would really like to keep Folder if at all possible. Thanks for your help in advance.

  • How to route all subdomains to a single host using mDNS?

    - by John Mee
    I have a development webserver hosting as "myhost.local", which is found using Bonjour/mDNS; the server is running avahi-daemon. The webserver also wants to handle any subdomains of itself, e.g. "cat.myhost.local", "dog.myhost.local" and "guppy.myhost.local". Given that myhost.local is on a dynamic IP address from DHCP, is there still a way to route all requests for the subdomains to myhost.local? I'm starting to think it is not currently possible. From http://marc.info/?l=freedesktop-avahi&m=119561596630960&w=2:

        > You can do this with the /etc/avahi/hosts file. Alternatively you
        > can use avahi-publish-host-name.

        No, he cannot. Since he wants to define an alias, not a new hostname.
        I.e. he only wants to register an A RR, no reverse PTR RR. But if you
        stick something into /etc/avahi/hosts then it registers both, and
        detects a collision if the PTR RR is non-unique, which would be the
        case for an alias.
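
    If a handful of fixed names is enough (mDNS offers no true wildcard), publishing one address record per subdomain may work. A sketch -- the flags are worth double-checking against your avahi version, and with a DHCP address this would have to be re-run whenever the lease changes:

        #!/bin/sh
        # Look up the current address of myhost.local, then publish an A record
        # for each subdomain pointing at it (hypothetical subdomain names).
        # -R (--no-reverse) skips the reverse PTR record, avoiding the
        # collision described in the quoted mail above.
        ADDR=$(avahi-resolve-host-name -4 myhost.local | awk '{print $2}')
        for sub in cat dog guppy; do
            avahi-publish -a -R "$sub.myhost.local" "$ADDR" &
        done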

  • Problems with jQuery load and getJSON only when using Chrome

    - by leftend
    I'm having an issue with two jQuery calls. The first is a "load" that retrieves HTML and displays it on the page (the code returned does include some JavaScript and CSS). The second is a "getJSON" that returns JSON; the JSON returned is valid. Everything works fine in every other browser I've tried, except Chrome on either Windows or Mac. The page in question is here: http://urbanistguide.com/category/Contemporary.aspx. When you click on a restaurant name in IE/FF, you should see that item expand with more info and a map displayed to the right. However, if you do this in Chrome, all you get is the JSON data printed to the screen. The first problem spot is where the "load" function is called:

        var fulllisting = top.find(".listingfull");
        fulllisting.load(href2, function() {
            fulllisting.append("<div style=\"width:99%;margin-top:10px;text-align:right;\"><a href=\"#\" class=\"" + obj.attr("id") + "\">X</a>");
            itemId = fulllisting.find("a.listinglink").attr("id");
            ...

    In the above code, the callback function doesn't seem to get invoked. The second problem spot is where the "getJSON" function is called:

        $.getJSON(href, function(data) {
            if (data.error.length > 0) {
                // display error message
            } else {
                ...
            }

    In this case it just seems to follow the link instead of performing the callback -- and yes, I am doing a "return false;" at the end of all of this to prevent the link from executing. All of the rest of the code is inline on that page if you want to view the source. Any ideas? Thanks

  • Does GC guarantee that cleared References are enqueued to ReferenceQueue in topological order?

    - by Dimitris Andreou
    Say there are two objects, A and B, there is a pointer A.x --> B, and we create, say, WeakReferences to both A and B, with an associated ReferenceQueue. Assume that both A and B become unreachable. Intuitively, B cannot be considered unreachable before A is. In such a case, do we somehow get a guarantee that the respective references will be enqueued in the intuitive (topological, when there are no cycles) order in the ReferenceQueue, i.e. ref(A) before ref(B)? I don't know -- what if the GC marked a bunch of objects as unreachable, and then enqueued them in no particular order? I was reviewing Finalizer.java of Guava and saw this snippet:

        private void cleanUp(Reference<?> reference) throws ShutDown {
            ...
            if (reference == frqReference) {
                /*
                 * The client no longer has a reference to the
                 * FinalizableReferenceQueue. We can stop.
                 */
                throw new ShutDown();
            }

    frqReference is a PhantomReference to the used ReferenceQueue, so if this is GC'ed, no Finalizable{Weak, Soft, Phantom}References can be alive, since they reference the queue. So they have to be GC'ed before the queue itself can be GC'ed -- but still, do we get a guarantee that these references will be enqueued to the ReferenceQueue in the order they get garbage collected (as if they were GC'ed one by one)? The code implies that there is some kind of guarantee; otherwise unprocessed references could theoretically remain in the queue. Thanks

  • Which combining css technique?

    - by DotnetShadow
    Hi there, which of the following would you say is the best way to go when combining files for CSS? Say I have a master.css file that is used across all pages on my website (page1.aspx, page2.aspx):

    - Page1.aspx: a specific page with some unique CSS that is only ever used on that page, so I create page1.css; the page also uses another stylesheet, grids.css.
    - Page2.aspx: another specific page, different from all other pages on the site (including page1.aspx). I make a page2.css for it; this page doesn't use grids.css.

    So would you combine the stylesheets as follows?

    Option 1: combine per page.

        csshandler.axd?d=master.css,page1.css,grids.css   when visiting page1
        csshandler.axd?d=master.css,page2.css             when visiting page2

    Benefits: page specific; rendering is quicker since only selectors for that page need to be matched; no unused selectors. Drawback: multiple combinations of master.css + page-specific files, so master.css has to be downloaded anew for each page.

    Option 2: combine all stylesheets whether a page needs them or not.

        csshandler.axd?d=master.css,page1.css,page2.css,grids.css   (master, page1 and page2)

    That way the combination gets cached as one file. The problem is that rendering may be slower, since the browser will have to try to match EVERY selector in the CSS against the page, even the missing ones; in the case of page2.aspx, which doesn't use grids.css, the selectors in grids.css still need to be parsed to see if they apply. Benefits: one file is downloaded and cached no matter what page you visit. Drawback: unused selectors have to be parsed by the browser, slowing rendering.

    Option 3: leave the master file on its own and only combine the other stylesheets. The benefit is that because master.css is used across all pages, there is a good chance it is already cached, so it doesn't need to be downloaded again.

        csshandler.axd?d=master.css
        csshandler.axd?d=page1.css,grids.css

    Benefits: master.css can be cached no matter what page you visit, and there are few unused selectors since page-specific CSS is applied. Drawback: a minimum of 2 HTTP requests initially.

    What do you guys think? Cheers, DotnetShadow

  • Subversion - Do I need to reintegrate if I don't merge from trunk

    - by user314584
    Hi, I have read quite a bit about the need to reintegrate when you merge from a branch back to the trunk in SVN (this article was really helpful: http://blogs.open.collab.net/svn/2008/07/subversion-merg.html). The problem seems to come from the fact that people regularly update the branch from the trunk, which makes the final merge back reflective. In my use case, we want to create a release branch which will live for as long as it takes to stabilise the branch and fix any bugs. To maintain stability we don't want to merge up from the trunk, but we do want to regularly merge fixes down from the release branch so that trunk gets all the bug fixes for free. We also don't want to wait until the end of QA to merge back to trunk. We therefore want to:

    1. Create the branch.
    2. Make regular changes to the branch (and trunk).
    3. Merge back to trunk regularly (daily, perhaps), as sketched below.

    Since we will never merge up from trunk, I don't think we need to worry about the problems that reintegration is designed to fix. Can anyone see a problem with this approach? Cheers, Matt
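
    For step 3, repeated cherry-pick merges from the branch into a trunk working copy avoid --reintegrate entirely, and Subversion 1.5+ merge tracking remembers what has already been merged. A sketch with placeholder revision numbers and paths:

        # From an up-to-date trunk working copy: pull a specific fix down
        # (^/ repository-relative URLs need svn 1.6+; spell out the full URL otherwise)
        svn merge -c 1234 ^/branches/release-1.0 .
        svn commit -m "Merged r1234 (bug fix) from release-1.0 into trunk"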

  • ai: Determining what tests to run to get most useful data

    - by Sai Emrys
    This is for http://cssfingerprint.com. I have a system (see the about page on the site for details) where:

    - I need to output a ranked list, with confidences, of categories that match a particular feature vector.
    - The binary feature vectors are a list of site IDs and whether this session detected a hit.
    - Feature vectors are, for a given categorization, somewhat noisy (sites will decay out of history, and people will visit sites they don't normally visit).
    - Categories are a large, non-closed set (user IDs).
    - My total feature space is approximately 50 million items (URLs).
    - For any given test, I can only query approx. 0.2% of that space.
    - I can only make the decision of what to query, based on results so far, ~10-30 times, and must do so in <~100ms (though I can take much longer to do post-processing, relevant aggregation, etc.).
    - Getting the AI's probability ranking of categories based on results so far is mildly expensive; ideally the decision will depend mostly on a few cheap SQL queries.
    - I have training data that can say authoritatively that any two feature vectors are the same category, but not that they are different (people sometimes forget their codes and use new ones, thereby making a new user ID).

    I need an algorithm to determine what features (sites) are most likely to have a high ROI to query (i.e. to better discriminate between plausible-so-far categories [users], and to increase certainty that it's any given one). This needs to balance exploitation (testing based on prior test data) and exploration (testing things that haven't been tested enough to find out how they perform). There's another question that deals with a priori ranking; this one is specifically about a posteriori ranking based on results gathered so far. Right now, I have little enough data that I can just always test everything that anyone else has ever gotten a hit for, but eventually that won't be the case, at which point this problem will need to be solved. I imagine that this is a fairly standard problem in AI -- having a cheap heuristic for what expensive queries to make -- but it wasn't covered in my AI class, so I don't actually know whether there's a standard answer. So, relevant reading that's not too math-heavy would be helpful, as well as suggestions for particular algorithms. What's a good way to approach this problem?
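
    One standard handle on the exploration/exploitation part is a bandit-style score such as UCB1, which can be computed from exactly the kind of cheap per-site aggregates a SQL query can return. A sketch, with hypothetical hit/trial counts standing in for whatever "this site discriminated well" statistic the system records:

        import math

        def ucb1_score(hits, trials, total_trials, c=1.41):
            """Upper-confidence-bound score for one site (feature).

            hits/trials  -- how often this site paid off when queried
            total_trials -- queries made overall; the sqrt term boosts
                            under-tested sites (exploration)
            """
            if trials == 0:
                return float('inf')  # never tested: always worth one look
            return hits / trials + c * math.sqrt(math.log(total_trials) / trials)

        def pick_sites(stats, budget):
            """stats: {site_id: (hits, trials)}; returns the top `budget` sites."""
            total = sum(t for _, t in stats.values()) or 1
            ranked = sorted(stats, key=lambda s: ucb1_score(*stats[s], total),
                            reverse=True)
            return ranked[:budget]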

  • Using Word COM objects in .NET, InlineShapes not copied from template to document

    - by Keith
    Using .NET and the Word interop, I am programmatically creating a new Word doc from a template (.dot) file. There are a few ways to do this, but I've chosen to use the AttachedTemplate property, as such:

        Dim oWord As New Word.Application()
        oWord.Visible = False
        Dim oDocuments As Word.Documents = oWord.Documents
        Dim oDoc As Word.Document = oDocuments.Add()
        oDoc.AttachedTemplate = sTemplatePath
        oDoc.UpdateStyles()

    (I'm choosing the AttachedTemplate means of doing this over the Documents.Add() method because of a memory leak issue I discovered when using Documents.Add() to open from templates.) This works fine EXCEPT when there is an image (represented as an InlineShape) in the template footer. In that case the image does not appear in the resulting document. Specifically, the image should appear in the oDoc.Sections.Item(1).Footers.Item(WdHeaderFooterIndex.wdHeaderFooterPrimary).Range.InlineShapes collection, but it does not. This is not a problem when using Documents.Add(); however, as I said, that method is not an option for me. Is there an extra step I have to take to get the images from the template? I already discovered that when using AttachedTemplate I have to explicitly call UpdateStyles() (as you can see in my code snippet) to apply the template styles to the document, whereas that is done automatically when using Documents.Add(). Or maybe there's some crazy workaround? Your help is much appreciated! :)

  • ruby gem not found although it is installed

    - by Eimantas
    I found some similar problems here on SO, but none seem to match my case (sorry if I overlooked one). Here's my problem: I installed the oauth-plugin gem to the Ruby gems dir, but trying to use it in a Rails app tells me that it's not being found. Here's the output of the relevant commands.

    Installation:

        % s gem install oauth-plugin
        Successfully installed oauth-plugin-0.3.14
        1 gem installed
        Installing ri documentation for oauth-plugin-0.3.14...
        Installing RDoc documentation for oauth-plugin-0.3.14...

    gem which oauth-plugin output:

        % gem which oauth-plugin
        /usr/lib/ruby/gems/1.8/gems/oauth-plugin-0.3.14/lib/oauth-plugin.rb

    gem env output:

        % gem env
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.6
          - RUBY VERSION: 1.8.7 (2009-12-24 patchlevel 248) [i686-darwin10.2.0]
          - INSTALLATION DIRECTORY: /usr/lib/ruby/gems/1.8
          - RUBY EXECUTABLE: /usr/bin/ruby
          - EXECUTABLE DIRECTORY: /usr/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86-darwin-10
          - GEM PATHS:
            - /usr/lib/ruby/gems/1.8
            - /Users/eimantas/.gem/ruby/1.8
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => true
            - :bulk_threshold => 1000
            - :gem => ["--no-ri", "--no-rdoc"]
            - :sources => ["http://gems.ruby.lt/", "http://rubygems.org/"]
          - REMOTE SOURCES:
            - http://gems.ruby.lt/
            - http://rubygems.org/

    Doing ls -l /usr/lib/ruby shows this:

        % ls -l /usr/lib/ruby
        lrwxr-xr-x 1 root wheel 76 Aug 14 2009 /usr/lib/ruby -> ../../System/Library/Frameworks/Ruby.framework/Versions/Current/usr/lib/ruby

    And the gem in question is in the intended location. This is not the only gem that is not being found by RubyGems (although each is located where it should be). Any guidance towards a solution is much appreciated.

  • Can't operator == be applied to generic types in C#?

    - by Hosam Aly
    According to the documentation of the == operator in MSDN:

        For predefined value types, the equality operator (==) returns true if
        the values of its operands are equal, false otherwise. For reference
        types other than string, == returns true if its two operands refer to
        the same object. For the string type, == compares the values of the
        strings. User-defined value types can overload the == operator (see
        operator). So can user-defined reference types, although by default ==
        behaves as described above for both predefined and user-defined
        reference types.

    So why does this code snippet fail to compile?

        bool Compare<T>(T x, T y)
        {
            return x == y;
        }

    I get the error Operator '==' cannot be applied to operands of type 'T' and 'T'. I wonder why, since as far as I understand the == operator is predefined for all types?

    Edit: Thanks, everybody. I didn't notice at first that the statement was about reference types only. I also thought that bit-by-bit comparison is provided for all value types, which I now know is not correct. But, in case I'm using a reference type, would the == operator use the predefined reference comparison, or would it use the overloaded version of the operator if a type defined one?

    Edit 2: Through trial and error, we learned that the == operator will use the predefined reference comparison when using an unrestricted generic type. Actually, the compiler will use the best method it can find for the restricted type argument, but will look no further. For example, the code below will always print true, even when Test.test<B>(new B(), new B()) is called:

        class A
        {
            public static bool operator ==(A x, A y) { return true; }
            public static bool operator !=(A x, A y) { return false; } // == requires a matching !=
        }

        class B : A
        {
            public static bool operator ==(B x, B y) { return false; }
            public static bool operator !=(B x, B y) { return true; }
        }

        class Test
        {
            static void test<T>(T a, T b) where T : A
            {
                Console.WriteLine(a == b);
            }
        }
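
    For completeness, the usual way to compare unconstrained generic values without knowing whether T overloads anything is EqualityComparer<T>.Default, which dispatches to IEquatable<T> or Object.Equals as appropriate -- a sketch:

        using System.Collections.Generic;

        static class GenericComparisons
        {
            public static bool AreEqual<T>(T x, T y)
            {
                // Uses IEquatable<T>.Equals when T implements it (avoiding
                // boxing for such value types), else falls back to Object.Equals.
                return EqualityComparer<T>.Default.Equals(x, y);
            }
        }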

  • Best way to "un-promote" files in Accurev?

    - by Luke Rinard
    My company uses Accurev for source control, and for all its benefits, there's one simple action that I just can't figure out how to accomplish. Often someone accidentally promotes a file up too far in our stream structure -- from the "Development" stream to the "Release" stream, for example. What is the best way to "un-promote" this file? That is to say, to get the old version of the file back into the "Release" stream while keeping the new version of the file in the "Development" stream, where it belongs? Just doing a "Revert to Backed" or another revert action on the file in the Release stream will either cause an old version of the file to propagate down into Development, or will make the file disappear entirely. In the above case, the developer has to jump through hoops, setting basis times on streams or using the command line tool to check out an old transaction, to get the file back. Sometimes the people in question are non-technical, so this is not a good solution. I have also considered moving the files to a "higher ground" stream, reverting, and then cross-promoting them to the lower stream again; this seems really kludgy. It seems like Accurev is obscure enough that Google is no help, so I turn to the good folks of StackOverflow: has anybody figured out the "Accurevy" way to accomplish this?

  • Positioning / Scrolling problem with Flex popup.

    - by user284163
    Hi all, I'm trying to work out a specific problem I'm having with positioning in Flex using the PopUpManager. Basically, I want to create a popup which will scroll with the parent container. This is necessary because the parent container is large, and if the user's browser window isn't large enough (which will be the case the majority of the time), they will have to use the container's scrollbar to scroll down. The problem is that the popup is positioned relative to another component, and it needs to stay by that component.

        <?xml version="1.0" encoding="utf-8"?>
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
            <mx:Script>
                <![CDATA[
                    import mx.core.UITextField;
                    import mx.containers.TitleWindow;
                    import mx.managers.PopUpManager;

                    private function clickeroo(event:MouseEvent):void
                    {
                        var popup:TitleWindow = new TitleWindow();
                        popup.width = 250;
                        popup.height = 300;
                        popup.title = "Example";

                        var tf:UITextField = new UITextField();
                        tf.wordWrap = true;
                        tf.width = popup.width - 30;
                        tf.text = "This window stays put and doesn't scroll when the hbox is scrolled (even with using the hbox as parent in the addPopUp method), I need the popup to be local to the HBox.";
                        popup.addChild(tf);

                        PopUpManager.addPopUp(popup, hbox, false);
                        PopUpManager.centerPopUp(popup);
                    }
                ]]>
            </mx:Script>

            <mx:HBox width="100%" height="2000" id="hbox">
                <mx:Button label="Click Me" click="clickeroo(event)"/>
            </mx:HBox>
        </mx:Application>

    Could anyone give me any pointers in the right direction? Thanks.

  • merge() multiple data frames (do.call ?)

    - by Vincent
    Hi everyone, here's my very simple question: merge() only takes two data frames as input. I need to merge a series of data frames from a list, using the same keys for every merge operation. Given a list named "test", I want to do something like do.call("merge", test). I could write some kind of loop, but I'm wondering if there's a standard or built-in way to do this more efficiently. Any advice is appreciated. Thanks! Here's a subset of the dataset in dput format (note that merging on country is trivial in this case, but there are more countries in the original data):

        test <- list(
            structure(list(
                country = c("United States", "United States", "United States", "United States", "United States"),
                NY.GNS.ICTR.GN.ZS = c(13.5054687, 14.7608697, 14.1115876, 13.3389063, 12.9048351),
                year = c(2007, 2006, 2005, 2004, 2003)),
                .Names = c("country", "NY.GNS.ICTR.GN.ZS", "year"),
                row.names = c(NA, 5L), class = "data.frame"),
            structure(list(
                country = c("United States", "United States", "United States", "United States", "United States"),
                NE.TRD.GNFS.ZS = c(29.3459277, 28.352838, 26.9861939, 25.6231246, 23.6615328),
                year = c(2007, 2006, 2005, 2004, 2003)),
                .Names = c("country", "NE.TRD.GNFS.ZS", "year"),
                row.names = c(NA, 5L), class = "data.frame"),
            structure(list(
                country = c("United States", "United States", "United States", "United States", "United States"),
                NY.GDP.MKTP.CD = c(1.37416e+13, 1.31165e+13, 1.23641e+13, 1.16309e+13, 1.0908e+13),
                year = c(2007, 2006, 2005, 2004, 2003)),
                .Names = c("country", "NY.GDP.MKTP.CD", "year"),
                row.names = c(NA, 5L), class = "data.frame"))
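
    For what it's worth, do.call("merge", test) won't do the right thing here, because it passes all three data frames as arguments to a single merge() call. Base R's Reduce() folds a binary function over a list, which is exactly the shape of this problem. A sketch, assuming country and year are the join keys throughout:

        # Fold merge() pairwise across the list, joining on the same keys each time
        merged <- Reduce(function(x, y) merge(x, y, by = c("country", "year")), test)

        # With all = TRUE, rows absent from some frames are kept (a full outer join):
        # merged <- Reduce(function(x, y) merge(x, y, by = c("country", "year"), all = TRUE), test)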

  • Fluent NHibernate Automap does not take into account IList<T> collections as indexed

    - by Francisco Lozano
    I am using automapping to map a domain model (simplified version):

        public class AppUser : Entity
        {
            [Required]
            public virtual string NickName { get; set; }

            [Required]
            [DataType(DataType.Password)]
            public virtual string PassKey { get; set; }

            [Required]
            [DataType(DataType.EmailAddress)]
            public virtual string EmailAddress { get; set; }

            public virtual IList<PreferencesDescription> PreferencesDescriptions { get; set; }
        }

        public class PreferencesDescription : Entity
        {
            public virtual AppUser AppUser { get; set; }
            public virtual string Content { get; set; }
        }

    The PreferencesDescriptions collection is declared as an IList<T>, i.e. an indexed collection (when I require standard unindexed collections I use ICollection<T>). The problem is that Fluent NHibernate's automapping facilities map my domain model as an unindexed collection, so there is no "position" column in the DDL generated by SchemaExport. How can I make this work without having to override this one case -- that is, how can I make Fluent NHibernate's automapping always produce indexed collections for IList, but not for ICollection?
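
    In the meantime, a per-class override is the standard escape hatch, even if a blanket convention is the real goal. A sketch using Fluent NHibernate's override mechanism (API names worth verifying against the version in use):

        public class AppUserMappingOverride : IAutoMappingOverride<AppUser>
        {
            public void Override(AutoMapping<AppUser> mapping)
            {
                // AsList() maps the collection as an indexed <list>,
                // producing the missing index/"position" column.
                mapping.HasMany(x => x.PreferencesDescriptions).AsList();
            }
        }

    registered with something like .UseOverridesFromAssemblyOf<AppUserMappingOverride>() on the AutoPersistenceModel.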
