Search Results

Search found 16899 results on 676 pages for 'local'.


  • git crlf configuration in mixed environment

    - by Jonas Byström
    I'm running a mixed environment, and keep a central, bare repository where I pull and push most of my stuff. This centralized repository runs on Linux, and I check out to Windows XP/7, Mac and Linux. In all repositories I put the following in my .git/config:

        [core]
            autocrlf = true

    I don't have the flag safecrlf=true anywhere. The first time I modify stuff on my one Windows machine (XP) there is no problem, and when I look at the diff, it looks fine. But when I do the same on the other Windows machine (7), all lines are shown as changed, even though the local line endings are \r\n as expected (checked in a hex editor). The same applies to the Mac OS X machine. Sometimes I get the feeling that the different systems wrestle over line endings, but I can't be sure (I'm losing track of all the times I change specific files). I didn't always have autocrlf set, but I set the flag many months back. Could that be causing my current problems? Do I need to clone everything again to lose some old baggage? Or are there other things that need configuring too? I tried git checkout -- . about a million times, but with no success.
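    One commonly suggested alternative (not part of the original question) is to pin the line-ending policy in the repository itself with a .gitattributes file, so every clone normalizes text files the same way regardless of each machine's autocrlf setting. A minimal sketch, with the per-extension rules being assumptions rather than anything from the post:

        # .gitattributes at the repository root (sketch)
        *       text=auto      # let git normalize whatever it detects as text
        *.sh    text eol=lf    # force LF for shell scripts
        *.bat   text eol=crlf  # force CRLF for Windows batch files

    After adding it, files that were committed with mixed endings usually need a one-time re-normalization (for example by removing and re-checking-out the working tree) before the spurious whole-file diffs go away.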

    Read the article

  • handle large Parcelable ArrayList in Android

    - by Gal Ben-Haim
    I'm developing an Android app that is a client to a JSON webservice API. I have classes of resource objects (some are nested), and I pass results from an IntentService that accesses the webservice using the Parcelable interface for all the resource classes. The webservice returns arrays of results that can be potentially large (because of the nesting; for example, a post object also contains a comments array, and each comment also contains a user object). Currently I'm either inserting the results into a SQLite database or displaying them in a ListView (my relevant methods accept ArrayList<resourceClass> as arguments; some data needs to be stored persistently and some should not). Since I don't know what size of lists I can handle this way without reaching the memory limits, is this a good practice? Is it a better idea to save the parsed JSON to a local file immediately and pass the file path to the ResultReceiver, then either insert into the database from that file or display the data? Is there a better way to handle this? By the way, I'm parsing the JSON as a stream with Gson's Reader, so there shouldn't be memory issues at that stage.
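    For illustration only (the helper class, the "result_path" key and the result code below are invented, not taken from the post), the file-path approach described above might look roughly like this on the IntentService side:

        import android.content.Context;
        import android.os.Bundle;
        import android.os.ResultReceiver;

        import java.io.File;
        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.InputStream;

        // Hypothetical sketch: stream the webservice response straight into a cache
        // file and hand only the file path back through the ResultReceiver, instead
        // of parceling a large ArrayList of nested resource objects.
        final class ResultFileHelper {
            static final int RESULT_OK_CODE = 1; // app-defined result code (assumption)

            static void sendAsFile(Context context, InputStream json, ResultReceiver receiver)
                    throws IOException {
                File out = new File(context.getCacheDir(), "result.json");
                try (FileOutputStream fos = new FileOutputStream(out)) {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = json.read(buf)) != -1) {
                        fos.write(buf, 0, n);
                    }
                }
                Bundle bundle = new Bundle();
                bundle.putString("result_path", out.getAbsolutePath());
                receiver.send(RESULT_OK_CODE, bundle);
            }
        }

    The receiving activity would then open the file and either bulk-insert it into SQLite or feed it to the ListView adapter, so only a short string ever crosses the Binder.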

    Read the article

  • How can I run an app's source code that I got from the Android source code?

    - by Wesley
    For all of you Android devs out there that have the Android simulator running on your computer, you know that there are a few built-in apps that are already 'installed' on your phone. I had an idea for an app that would utilize a function that is already being done in the Spare Parts app. I went onto the Android developer site, dug through the source code files, found the Spare Parts app, and am now trying to set it up so that running it from Eclipse on my machine actually runs the app in the simulator. In other words, I want to be able to make changes to and adjust some of the things in that app for my own needs. But it won't compile, because of a number of different errors. How do I get that source code running on my local machine? Is there some special trick that I just don't know about? I thought that if I could get the source code then the rest would be easy, but it isn't being that easy. Any help here would be appreciated!!

    Read the article

  • Get the equivalent time between "dynamic" time zones

    - by doctore
    I have a table providers that has three columns (it contains more columns, but they are not important in this case):

        starttime - start time at which you can contact the provider.
        endtime   - final hour at which you can contact the provider.
        region_id - region where the provider resides. In the USA: California, Texas, etc. In the UK: England, Scotland, etc.

    starttime and endtime are time without time zone columns but, indirectly, their values carry the time zone of the region in which the provider resides. For example:

        starttime | endtime  | region_id (time zone of region) | "real" st | "real" et
        ----------|----------|---------------------------------|-----------|-----------
        03:00:00  | 17:00:00 | 1 (EGT => -1)                   | 02:00:00  | 16:00:00

    Often I need to get the list of providers whose time range contains the current server time (taking into account the time zone conversion). The problem is that the time zones aren't constant, i.e. they may change during summer time. However, this change is very specific to the region and is not always carried out at the same time: EGT <=> EGST, ART <=> ARST, etc. The questions are:

        1. Is it necessary to use a web service to update the time zones of the regions every so often? Does anyone know of a web service that could serve this purpose?
        2. Is there a better approach to solve this problem?

    Thanks in advance.

    UPDATE: I will give an example to clarify what I'm trying to get. In the table providers I have these records:

        idproviders | starttime | endtime  | region_id
        ------------|-----------|----------|-----------
        1           | 03:00:00  | 17:00:00 | 23 (Texas)
        2           | 04:00:00  | 18:00:00 | 23 (Texas)

    If I execute the query in January, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +1 hour
        Server time = 02:00:00

    I should get the following results: idproviders = 1.

    If I execute the query in June, with this information:

        Server time (UTC offset) = 0 hours
        Texas providers (UTC offset) = +2 hours (their local time has not changed, but their time zone offset has changed)
        Server time = 02:00:00

    I should get the following results: idproviders = 1 and 2.
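    One approach that is often suggested, sketched here purely as an illustration (the regions table and its tz_name column are assumptions, and the syntax shown is PostgreSQL's): store an IANA zone name per region and let the database apply the DST rules when comparing against the current server time, instead of tracking offsets by hand.

        -- Sketch: regions carries an IANA zone name such as 'America/Chicago'
        SELECT p.idproviders
        FROM   providers p
        JOIN   regions   r ON r.id = p.region_id
        WHERE  (now() AT TIME ZONE r.tz_name)::time
               BETWEEN p.starttime AND p.endtime;

    With this layout the zone database shipped with the DBMS (or the OS) handles the EGT/EGST-style switches, so no external web service is needed to keep offsets current.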

    Read the article

  • Why can't I make an http request to the ASP.NET development server on localhost?

    - by Chris Farmer
    I have an ASP.NET project (VS2008 on Windows 7 with either WebForms, MVC1, or MVC2 -- all give the same result for me) which is just the File > New hello-world web project. It's using the default ASP.NET development server, and when I start the server with F5, the browser never connects and I get a timeout. I tried to debug this by telnetting to the development server's port while it was running, and I got the same result:

        C:\Users\farmercs>telnet localhost 54752
        Connecting To localhost...Could not open connection to the host, on port 54752: Connect failed

    I can see in the system tray that the server thinks it's running, and a netstat -s -n command shows that there is indeed an active TCP listener on that port. This worked in the not-too-distant past, and I could work on web projects using the development server. One thing that has changed since then is that I installed the Microsoft Loopback Adapter to accommodate a local development Oracle installation. I'm not sure this is the problem, but it seems a likely culprit. So, what could be blocking me from connecting? And if it's the loopback adapter, then what is a good way for me to retain my ability to connect to my development Oracle server while still being able to use the ASP.NET development server?

    Read the article

  • Error in Getting Youtube Video Title, Description and thumbnail

    - by Muhammad Ayyaz Zafar
    I was getting the YouTube title and description from the same code, but now it's not working. I am getting the following errors:

        Warning: DOMDocument::load() [domdocument.load]: http:// wrapper is disabled in the server configuration by allow_url_fopen=0 in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16

        Warning: DOMDocument::load(http://gdata.youtube.com/feeds/api/videos/Y7G-tYRzwYY) [domdocument.load]: failed to open stream: no suitable wrapper could be found in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16

        Warning: DOMDocument::load() [domdocument.load]: I/O warning : failed to load external entity "http://gdata.youtube.com/feeds/api/videos/Y7G-tYRzwYY" in /home/colorsfo/public_html/zaroorat/admin/pages/addSongProcess.php on line 16

        ...

    The following code is used to get the YouTube video data:

        $url = "http://gdata.youtube.com/feeds/api/videos/".$embedCodeParts2[0];
        $doc = new DOMDocument;
        @$doc->load($url);
        $title = $doc->getElementsByTagName("title")->item(0)->nodeValue;
        $videoDescription = $doc->getElementsByTagName("description")->item(0)->nodeValue;

    It was working before (the same code works fine on my local server, but on the internet host it does not). Please guide me on how to fix this error. Thanks for your time.
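    One possible workaround, shown only as a sketch and not taken from the original post: when allow_url_fopen is disabled on the host, fetch the feed with cURL and parse the downloaded string, instead of pointing DOMDocument::load() at the URL directly.

        $url = "http://gdata.youtube.com/feeds/api/videos/" . $embedCodeParts2[0];
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);  // return the body as a string
        $xml = curl_exec($ch);
        curl_close($ch);

        $doc = new DOMDocument;
        if ($xml !== false && $doc->loadXML($xml)) {
            $title = $doc->getElementsByTagName("title")->item(0)->nodeValue;
            $videoDescription = $doc->getElementsByTagName("description")->item(0)->nodeValue;
        }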

    Read the article

  • Programming methods design phase assignment

    - by Shakir
    Hey, I have an assignment (NCC) which deals with the design phase. The scenario is that you have four soccer divisions (divisions 1, 2, 3 and 4) which consist of 22 teams each, and hence each team plays 42 games (home and away). The concept is similar to the Barclays Premier League, whereby ranking is based on points, or else goal difference, or else goals scored by the team. The difference is that the top 2 teams are promoted and the bottom 2 are relegated, and this includes Div 1 and Div 4: the top 2 of Div 1 are promoted to the national league, which is above the Division 1 regional league, and the bottom 2 of Div 4 are relegated to the local league below the Division 4 regional league. Hence there are 3 leagues in total, and 4 divisions in the regional league (each with 22 teams). Now the referee has to add the result of a match, and thus automatic tables have to be generated. There are two reports:

        1. League tables for the 4 divisions
        2. A list of all results for any chosen team during the season, by the date it was played on

    There are a couple of things to be done... I know it's going to be terrible to make everything, but at least explain to me how I should go about drawing these and what things I should include (generally):

        - Logical Data Structure Diagram (DSD) for each report
        - Preliminary Program Structure (PSD) for each report
        - Detailed Program Specification for each report
        - Flowchart for each report

    There are other things, but I think our teacher will give us clear "clues" for them. Thanks a lot.

    Update - Project so far: Data Structure Diagram, Preliminary Program Structure

    Read the article

  • Creating A Single Threaded Server with AnyEvent (Perl)

    - by David Williams
    I'm working on creating a local service to listen on localhost and provide a basic call and response type interface. What I'd like to start with is a baby server that you can connect to over telnet and echoes what it receives. I've heard AnyEvent is great for this, but the documentation for AnyEvent::Socket does not give a very good example of how to do this. I'd like to build this with AnyEvent, AnyEvent::Socket and AnyEvent::Handle. Right now the little server code looks like this:

        #!/usr/bin/env perl
        use AnyEvent;
        use AnyEvent::Handle;
        use AnyEvent::Socket;

        my $cv = AnyEvent->condvar;
        my $host = '127.0.0.1';
        my $port = 44244;

        tcp_server($host, $port, sub {
            my ($fh) = @_;
            my $cv = AnyEvent->condvar;
            my $handle;
            $handle = AnyEvent::Handle->new(
                fh => $fh,
                poll => "r",
                on_read => sub {
                    my ($self) = @_;
                    print "Received: " . $self->rbuf . "\n";
                    $cv->send;
                }
            );
            $cv->recv;
        });

        print "Listening on $host\n";
        $cv->wait;

    This doesn't work, and also if I telnet to localhost:44244 I get this:

        EV: error in callback (ignoring): AnyEvent::CondVar: recursive blocking wait attempted at server.pl line 29.

    I think if I understand how to make a mini, single-threaded server that simply prints out whatever it's given and then waits for more input, I could take it a lot further from there. Any ideas?
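    A minimal echo-server sketch (one possible shape, not the poster's code) that avoids the recursive blocking wait: keep a single top-level condvar and never call recv inside the accept callback; AnyEvent::Handle's callbacks do the rest.

        #!/usr/bin/env perl
        use strict;
        use warnings;
        use AnyEvent;
        use AnyEvent::Socket;
        use AnyEvent::Handle;

        my $cv = AnyEvent->condvar;

        tcp_server '127.0.0.1', 44244, sub {
            my ($fh, $host, $port) = @_;
            my $handle;
            $handle = AnyEvent::Handle->new(
                fh       => $fh,
                on_error => sub { $_[0]->destroy },
                on_eof   => sub { $handle->destroy; undef $handle },
                on_read  => sub {
                    my ($h) = @_;
                    $h->push_read(line => sub {
                        my ($h, $line) = @_;
                        $h->push_write("Received: $line\n");   # echo back
                    });
                },
            );
        };

        print "Listening on 127.0.0.1:44244\n";
        $cv->recv;   # the only blocking wait in the program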

    Read the article

  • Accessing web.config from a SharePoint web part

    - by philj
    I have a VS 2008 web parts project, and in this project is a web.config file, something like this: ……. In my web part I am trying to access values in the appSettings section. I've tried all of the code below and each returns null:

        string Owner = ConfigurationManager.AppSettings.Get("MFOwner");
        string stuff1 = ConfigurationManager.AppSettings["MFOwner"];
        string stuff3 = WebConfigurationManager.AppSettings["MFOwner"];
        string stuff4 = WebConfigurationManager.AppSettings.Get("MFOwner");
        string stuff2 = ConfigurationManager.AppSettings["MFowner".ToString()];

    I've also tried this code I found:

        NameValueCollection sAll;
        sAll = ConfigurationManager.AppSettings;
        string a;
        string b;
        foreach (string s in sAll.AllKeys)
        {
            a = s;
            b = sAll.Get(s);
        }

    and stepped through it in debug mode - it is getting keys like FeedCacheTime, FeedPageURL, FeedXsl1 and ReportViewerMessages, which are NOT coming from anything in my web.config file... maybe from a config file in SharePoint itself? How do I access a web.config (or any other kind of config file!) local to my web part??? thanks, Phil J

    Read the article

  • XSL transformation generating output from other nodes

    - by Abel Morelos
    I have the following XSL template:

        <xsl:template match="SOAP-ENV:Body/*[local-name()='Publisher']">
            <html>
                <xsl:call-template name="body" />
            </html>
        </xsl:template>

    The previous template generates the output I want; it's generating the tags containing the output generated by the "body" template. The issue I'm having is that before the opening tag I'm getting text output from a previous node. Not sure why this is happening since I'm not selecting these other nodes. For example:

        <SOAP-ENV:Header>
            <!-- Many child nodes here-->
        </SOAP-ENV:Header>
        <SOAP-ENV:Body>
            <publishParty:Publisher>
                <!-- Many child nodes here-->
            </publishParty:Publisher>
        </SOAP-ENV:Body>

    Given the previous sample XML fragment, my output would contain what I would expect from formatting the Publisher element, but I'm also getting the text nodes of the children of the SOAP-ENV:Header node. Any ideas? Thanks!
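    For what it's worth, this is the classic symptom of XSLT's built-in template rules: nodes with no matching template get the default behaviour, which for text nodes is to copy them to the output, so the header's descendant text leaks through. A common remedy (a sketch, not taken from the original post) is an empty template that swallows the header subtree:

        <xsl:template match="SOAP-ENV:Header"/>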

    Read the article

  • Java try finally variations

    - by Petr Gladkikh
    This question has nagged me for a while, but I have not found a complete answer to it yet (e.g. this one is for C#: http://stackoverflow.com/questions/463029/initializing-disposable-resources-outside-or-inside-try-finally). Consider the two following Java code fragments:

        Closeable in = new FileInputStream("data.txt");
        try {
            doSomething(in);
        } finally {
            in.close();
        }

    and the second variation:

        Closeable in = null;
        try {
            in = new FileInputStream("data.txt");
            doSomething(in);
        } finally {
            if (null != in) in.close();
        }

    The part that worries me is that the thread might be somehow interrupted between the moment the resource is acquired (e.g. the file is opened) and the moment the resulting value is assigned to the respective local variable. Is there any other scenario in which the thread might be interrupted at the point above, other than:

        1. InterruptedException (e.g. via Thread#interrupt()) or OutOfMemoryError exception is thrown
        2. JVM exits (e.g. via kill, System.exit())
        3. Hardware failure (or a bug in the JVM, for a complete list :)

    I have read that the second approach is somewhat more "idiomatic", but IMO in the scenario above there's no difference, and in all other scenarios they are equal. So the questions are: What are the differences between the two? Which should I prefer if I am concerned about freeing resources (especially in heavily multi-threaded applications)? Why? I would appreciate it if anyone points me to the parts of the Java/JVM specs that support the answers.
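    For comparison only (this is not part of the original question, and it assumes Java 7 or later), the same pattern is nowadays usually written with try-with-resources, which performs the assignment and registers the close() in a single construct:

        import java.io.Closeable;
        import java.io.FileInputStream;
        import java.io.IOException;

        public class TryWithResourcesExample {
            public static void main(String[] args) throws IOException {
                try (Closeable in = new FileInputStream("data.txt")) {
                    doSomething(in);
                } // in.close() runs automatically, even if doSomething throws
            }

            static void doSomething(Closeable in) {
                // placeholder for the work done on the stream
            }
        }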

    Read the article

  • Why does System.Net.Mail work in one part of my c#.net web app, but not in another?

    - by Marc
    I have a web application that is running on IIS within my company's domain and is being accessed via the intranet. I have this application sending out email based on some user actions. For example, it's a scheduling application in part, so if a task is completed, an email is sent out notifying other users of that. The problem is, the email works flawlessly in some cases, and not at all in others. I have a login.aspx page which sends out report emails when the page is loaded (it's loaded once a day via the Windows task scheduler) - this always seems to work perfectly. I have an update page which is supposed to send email when text is entered and the "Update" button is clicked - this operation will fail most of the time. Both of these tasks use the same static overloaded method I wrote to send email using System.Net.Mail. I have tried using Gmail as my SMTP server (instead of our internal one), and get the same results. I investigated whether having the local SMTP Service running makes any difference, and it doesn't seem to. Besides, since C# is server-side code, shouldn't it only matter what's running on the server, and not the client? Please help me figure out what's wrong! Where should I look? What can I try?

    Read the article

  • How to export Oracle statistics

    - by A_M
    Hi, I am writing some new SQL queries and want to check the query plans that the Oracle query optimiser would come up with in production. My development database doesn't have anything like the data volumes of the production database. How can I export database statistics from a production database and re-import them into a development database? I don't have access to the production database, so I can't simply generate explain plans on production without going through a third-party hosting organisation. This is painful. So I want a local database which is in some way representative of production, on which I can try out different things. Also, this is for a legacy application. I'd like to "improve" the schema by adding appropriate indexes, constraints, etc. I need to do this in my development database first, before rolling out to test and production. If I add an index and re-generate statistics in development, then the statistics will be generated around the development data volumes, which makes it difficult to assess the impact of my changes on production. Does anyone have any tips on how to deal with this? Or is it just a case of fixing unexpected behaviour once we've discovered it on production? I do have a staging database with production volumes, but again I have to go through a third party to run queries against this, which is painful. So I'm looking for ways to cut out the middle man as much as possible. All this is using Oracle 9i. Thanks.
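    For reference, and hedged as a sketch rather than a verified recipe (the schema name and statistics table name below are placeholders): DBMS_STATS can round-trip optimizer statistics through an ordinary table, which the hosting organisation could run on production and hand back as an export.

        -- On production: stage the schema statistics into a regular table
        BEGIN
          DBMS_STATS.CREATE_STAT_TABLE(ownname => 'APP_OWNER', stattab => 'STATS_TAB');
          DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'STATS_TAB');
        END;
        /

        -- Move STATS_TAB across with exp/imp, then on development:
        BEGIN
          DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'APP_OWNER', stattab => 'STATS_TAB');
        END;
        /

    With production statistics imported, explain plans generated on the small development database should reflect the plans the production optimiser would choose.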

    Read the article

  • Highlighting correctly in an emacs major mode

    - by Paul Nathan
    Hi, I am developing an emacs major mode for a language (aka mydsl). However, using the techniques on xahlee's site doesn't seem to be working for some reason (possibly older emacs dialect..) The key issues I am fighting with are (1) highlighting comments is not working and (2) the use of regexp-opt lines is not working. I've reviewed the GNU manual and looked over cc-mode and elisp mode... those are significantly more complicated than I need.

        ;;;Standard # to newline comment
        ;;;Eventually should also have %% to %% multiline block comments
        (defun mydsl-comment-dwim (arg)
          "comment or uncomment"
          (interactive "*P")
          (require 'newcomment)
          (let ((deactivate-mark nil) (comment-start "#") (comment-end "")
                comment-dwim arg)))

        (defvar mydsl-events '("reservedword1" "reservedword2"))

        (defvar mydsl-keywords '("other-keyword" "another-keyword"))

        ;;Highlight various elements
        (setq mydsl-hilite
              '(
                ; stuff between "
                ("\"\\.\\*\\?" . font-lock-string-face)
                ; : , ; { } => @ $ = are all special elements
                (":\\|,\\|;\\|{\\|}\\|=>\\|@\\|$\\|=" . font-lock-keyword-face)
                ( ,(regexp-opt mydsl-keywords 'words) . font-lock-builtin-face)
                ( ,(regexp-opt mydsl-events 'words) . font-lock-constant-face)
                ))

        (defvar mydsl-tab-width nil "Width of a tab for MYDSL mode")

        (define-derived-mode mydsl-mode fundamental-mode
          "MYDSL mode is a major mode for editing MYDSL files"
          ;Recommended by manual
          (kill-all-local-variables)
          (setq mode-name "MYDSL script")
          (setq font-lock-defaults '((mydsl-hilite)))
          (if (null mydsl-tab-width)
              (setq tab-width mydsl-tab-width)
            (setq tab-width default-tab-width)
            )
          ;Comment definitions
          (define-key mydsl-mode-map [remap comment-dwim] 'mydsl-comment-dwim)
          (modify-syntax-entry ?# "< b" mydsl-mode-syntax-table)
          (modify-syntax-entry ?\n "> b" mydsl-mode-syntax-table)
          ;;A gnu-correct program will have some sort of hook call here.
          )

        (provide 'mydsl-mode)
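    For what it's worth, a minimal sketch (reusing the poster's names for illustration, not their code) of the two pieces that usually trip this up: the keyword list has to be built with a backquote so the ,(regexp-opt ...) forms are actually evaluated, and the '#' comment syntax has to be in the mode's syntax table so font-lock can fontify comments syntactically.

        ;; Backquote (`) instead of quote (') so the ,(regexp-opt ...) calls run:
        (defvar mydsl-font-lock-keywords
          `((,(regexp-opt '("other-keyword" "another-keyword") 'words)
             . font-lock-builtin-face)
            (,(regexp-opt '("reservedword1" "reservedword2") 'words)
             . font-lock-constant-face)))

        (define-derived-mode mydsl-mode fundamental-mode "MyDSL"
          "Major mode for editing MyDSL files."
          ;; '#' starts a comment, newline ends it; font-lock then highlights
          ;; comments and strings from the syntax table.
          (modify-syntax-entry ?# "<" mydsl-mode-syntax-table)
          (modify-syntax-entry ?\n ">" mydsl-mode-syntax-table)
          (set (make-local-variable 'comment-start) "#")
          (setq font-lock-defaults '(mydsl-font-lock-keywords)))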

    Read the article

  • URI scheme is not "file"

    - by Ankur
    I get the exception "URI scheme is not file". The URL I am playing with is the following... and it very much is a file:

        http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/domefisheye/ladybug/fish4.jpg

    What I am doing is trying to get the name of a file and then save that file (from another server) onto my computer/server from within a servlet. I have a String called "url"; from thereon, here is my code:

        url = Streams.asString(stream); //gets the URL from a form on a webpage
        System.out.println("This is the URL: " + url);
        URI fileUri = new URI(url);
        File fileFromUri = new File(fileUri);
        onlyFile = fileFromUri.getName();

        URL fileUrl = new URL(url);
        InputStream imageStream = fileUrl.openStream();

        String fileLoc2 = getServletContext().getRealPath("pics/" + onlyFile);
        File newFolder = new File(getServletContext().getRealPath("pics"));
        if (!newFolder.exists()) {
            newFolder.mkdir();
        }
        IOUtils.copy(imageStream, new FileOutputStream("pics/" + onlyFile));
        }

    The line causing the error is this one:

        File fileFromUri = new File(fileUri);

    I have added the rest of the code so you can see what I am trying to do.
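    A possible sketch of the underlying point (not the original poster's code): an http:// URL is not a file: URI, so java.io.File cannot be built from it; the file name can instead be taken from the URL's path component.

        import java.net.URL;

        public class RemoteFileName {
            public static void main(String[] args) throws Exception {
                URL fileUrl = new URL("http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/domefisheye/ladybug/fish4.jpg");
                String path = fileUrl.getPath();
                String onlyFile = path.substring(path.lastIndexOf('/') + 1);
                System.out.println(onlyFile); // prints: fish4.jpg
            }
        }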

    Read the article

  • php import to mysql hosted on godaddy

    - by julio
    Yeah, I know! It's not my choice. I am doing a large data import using a PHP script into a mysql DB hosted on godaddy. It seems their mysql connection gets killed every few hours regardless of what work it's doing. Their tech support is useless, and I've exhausted myself writing attempted workarounds. Right now, I'm trying to do a mysql_ping every few minutes, and if the ping returns false, I attempt to open up a new db connection. My script (which takes many hours to complete), keeps failing with the very unhelpful message of "mysql server has gone away". I understand mysql trying to close a connection that's been open too long, but the connection is not idle-- it's busy basically the whole time, and with the pings I've written in, it should not be idle longer than 5 minutes at most at any time. (These same scripts work with no errors on Amazon AWS servers, my local servers, etc.) Any help most appreciated! I'm about to give up.

    Read the article

  • How to approach parallel processing of messages?

    - by Dan
    I am redesigning the messaging system for my app to use Intel Threading Building Blocks and am stumped trying to decide between two possible approaches. Basically, I have a sequence of message objects and, for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type. The sequential version would be something like this (pseudocode):

        for each message in message_sequence                    <- SEQUENTIAL
            for each handler in (handler_table for message.type)
                apply handler to message                         <- SEQUENTIAL

    The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.

    Pros:
        - predictable ordering of messages (i.e., we are guaranteed a FIFO processing order)
        - (potentially) lower latency of processing each message

    Cons:
        - more processing resources available than handlers for a single message type (bad parallelization)
        - bad use of processor cache, since message objects need to be copied for each handler to use
        - large overhead for small handlers

    The pseudocode of this approach would be as follows:

        for each message in message_sequence                    <- SEQUENTIAL
            parallel_for each handler in (handler_table for message.type)
                apply handler to message                         <- PARALLEL

    The second approach is to process the messages in parallel and apply the handlers to each message sequentially.

    Pros:
        - better use of processor cache (keeps the message object local to all handlers which will use it)
        - small handlers don't impose as much overhead (as long as there are other handlers also to be run)
        - more messages are expected than there are handlers, so the potential for parallelism is greater

    Cons:
        - unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)

    The pseudocode is as follows:

        parallel_for each message in message_sequence           <- PARALLEL
            for each handler in (handler_table for message.type)
                apply handler to message                         <- SEQUENTIAL

    The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage. Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)? Thanks!
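    Purely as an illustration of the second approach (the types and names here are invented, not from the post), the outer loop maps naturally onto tbb::parallel_for_each while each message's handler chain stays sequential:

        #include <functional>
        #include <map>
        #include <string>
        #include <vector>
        #include <tbb/parallel_for_each.h>

        struct Message {
            std::string type;
            std::string payload;
        };

        using Handler      = std::function<void(const Message&)>;
        using HandlerTable = std::map<std::string, std::vector<Handler>>;

        void dispatch_all(const std::vector<Message>& messages, const HandlerTable& table) {
            tbb::parallel_for_each(messages.begin(), messages.end(),
                [&table](const Message& msg) {                    // messages in parallel
                    auto it = table.find(msg.type);
                    if (it == table.end()) return;
                    for (const Handler& handler : it->second)     // handlers sequentially
                        handler(msg);
                });
        }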

    Read the article

  • How to convert Unicode strings (\u00e2, etc) into NSString for display?

    - by karlbecker_com
    I am trying to support arbitrary unicode from a variety of international users. They have already put a bunch of data into sqlite databases on their iPhones, and now I want to capture the data into a database, then send it back to their device. Right now I am using a php page that is sending data back to from an internet mysql database. The data is saved in the mysql database properly, but when it's sent back it comes out as unicode text, such as Frank\u00e2\u0080\u0099s iPad instead of just Frank's iPad where the apostrophe should really be a curly apostrophe. The answer posted to another question indicates that there is no built-in Cocoa methods to convert the "\u00e2\u0080\u0099" portion of the unicode string from the webserver to an NSString object. Is this correct? That seems really surprising (and scarily disappointing), since Cocoa definitely allows input from many different Unicode characters, and I need to support any arbitrary language that I have never heard of, and all of the possible characters. I save them to and from the local sqlite database just fine now, but once I send it to a web server, then perhaps pull down different data, I want to ensure the data pulled from the web server is correctly formatted.

    Read the article

  • Store the cache data locally

    - by Lu Lu
    Hello, I am developing a C# WinForms application; it is a client that connects to a web service to get data. The data returned by the web service is a DataTable, and the client displays it in a DataGridView. My problem is that the client takes a long time to get all the data from the server (the web service is not local to the client), so I have to use a thread to get the data. This is my model: the client creates a thread to get the data -> the thread completes and sends an event to the client -> the client displays the data in a DataGridView on a form. However, when the user closes the form and opens it again later, the client must fetch the data all over again, which makes the client slow. So I am thinking about cached data: Client <--- get/add/edit/delete ---> Cached Data <--- get/add/edit/delete ---> Server (web service). Please give me some suggestions. For example: should the cache be developed in another application running on the same host as the client, or should the cache live inside the client itself? Please suggest some techniques to implement this, and examples if you have any. Thanks.
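    A very small illustration of the second option (all names here are invented): keep an in-process cache keyed by query, so re-opening the form reuses the DataTable that the background thread already fetched.

        using System;
        using System.Collections.Generic;
        using System.Data;

        static class DataCache
        {
            static readonly object sync = new object();
            static readonly Dictionary<string, DataTable> cache = new Dictionary<string, DataTable>();

            // load() is the existing web service call, invoked from the worker thread.
            public static DataTable GetOrLoad(string key, Func<DataTable> load)
            {
                lock (sync)
                {
                    DataTable table;
                    if (!cache.TryGetValue(key, out table))
                    {
                        table = load();
                        cache[key] = table;
                    }
                    return table;
                }
            }
        }

    A production version would want finer-grained locking, an expiry policy and write-through for add/edit/delete; this only shows where the cache sits between the form and the web service.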

    Read the article

  • Several Objective-C objects become Invalid for no reason, sometimes.

    - by farnsworth
    - (void)loadLocations {
            NSString *url = @"<URL to a text file>";
            NSStringEncoding enc = NSUTF8StringEncoding;
            NSString *locationString = [[NSString alloc] initWithContentsOfURL:[NSURL URLWithString:url]
                                                                   usedEncoding:&enc
                                                                          error:nil];
            NSArray *lines = [locationString componentsSeparatedByString:@"\n"];
            for (int i = 0; i < [lines count]; i++) {
                NSString *line = [lines objectAtIndex:i];
                NSArray *components = [line componentsSeparatedByString:@", "];
                Restaurant *res = [byID objectForKey:[components objectAtIndex:0]];
                if (res) {
                    NSString *resAddress = [components objectAtIndex:3];
                    NSArray *loc = [NSArray arrayWithObjects:[components objectAtIndex:1],
                                                             [components objectAtIndex:2]];
                    [res.locationCoords setObject:loc forKey:resAddress];
                } else {
                    NSLog([[components objectAtIndex:0] stringByAppendingString:@" res id not found."]);
                }
            }
        }

    There are a few weird things happening here. First, at the two lines where the NSArray lines is used, this message is printed to the console:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[NSCFDictionary count]: method sent to an uninitialized mutable dictionary object'

    which is strange since lines is definitely not an NSMutableDictionary, definitely is initialized, and because the app doesn't crash. Also, at random points in the loop, all of the variables that the debugger can see will become Invalid. Local variables, property variables, everything. Then after a couple of lines they will go back to their original values. setObject:forKey: never has an effect on res.locationCoords, which is an NSMutableDictionary. I'm sure that res, res.locationCoords, and byID are initialized. I also tried adding a retain or copy to lines, same thing. I'm sure there's a basic memory management principle I'm missing here, but I'm at a loss.

    Read the article

  • iphone: cross platform references and referencing external framework resources

    - by dan
    hi there working on an iphone app and separate framework. the separate framework is for an API that i'm building for use in multiple future apps. this api now needs to reference resources (images). what i would like to do is keep the resources WITH the API framework as local set of resources. i followed the instructions from http://www.clintharris.net/2009/iphone-app-shared-libraries/ to setup my app's project to use the headers from the separate API framework. what i can't seem to figure out is how to automatically load the framework's resources into the app's xcode environment so they can be linked in at app compile time. sure, i can drag the resources across from the framework into the main app's set of resources. but that seems kinda ugly and another step that possibly can be automated (??) anyone know of a better way? it would be great if any changes from the framework would be automatically available in the main app (due to the project 'link-age'). thanks for any help/tips/suggestions...

    Read the article

  • Postgres error with Sinatra/Haml/DataMapper on Heroku

    - by sevennineteen
    I'm trying to move a simple Sinatra app over to Heroku. Migration of the Ruby app code and existing MySQL database using Taps went smoothly, but I'm getting the following Postgres error:

        PostgresError - ERROR: operator does not exist: text = integer
        LINE 1: ...d_at", "post_id" FROM "comments" WHERE ("post_id" IN (4, 17,...
                                                           ^
        HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.

    It's evident that the problem is related to a type mismatch in the query, but this is being issued from a Haml template by the DataMapper ORM at a very high level of abstraction, so I'm not sure how I'd go about controlling this... Specifically, this seems to be thrown on a call of p.comments from my Haml template, where p represents a given post. The DataMapper models are related as follows:

        class Post
          property :id, Serial
          ...
          has n, :comments
        end

        class Comment
          property :id, Serial
          ...
          belongs_to :post
        end

    This works fine on my local and current hosted environment using MySQL, but Postgres is clearly more strict. There must be hundreds of DataMapper & Haml apps running on Postgres DBs, and this model relationship is super-conventional, so hopefully someone has seen (and determined how to fix) this. Thanks!
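    If the Taps import really did bring post_id across as text (worth confirming with \d comments in psql), one possible repair, offered as a guess rather than a confirmed fix, is to cast the column back to an integer type so DataMapper's integer keys compare cleanly:

        ALTER TABLE comments
          ALTER COLUMN post_id TYPE integer USING post_id::integer;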

    Read the article

  • How do I include a frameset under CGI.pm?

    - by neversaint
    I want to have a cgi-script that does two things: take the input from a form, and generate results based on the input values in a frame. I also want the frame to exist only after the result is generated/printed. Below is the simplified code of what I want to do, but somehow it doesn't work. What's the right way to do it?

        #!/usr/local/bin/perl
        use CGI ':standard';

        print header;
        print start_html('A Simple Example'),
            h1('A Simple Example'),
            start_form,
            "What's your name? ", textfield('name'),
            p,
            "What's the combination?",
            p,
            checkbox_group(-name=>'words',
                           -values=>['eenie','meenie','minie','moe'],
                           -defaults=>['eenie','minie']),
            p,
            "What's your favorite color? ",
            popup_menu(-name=>'color',
                       -values=>['red','green','blue','chartreuse']),
            p,
            submit,
            end_form,
            hr;

        if (param()) {
            # begin create the frame
            print <<EOF;
        <html><head><title>$TITLE</title></head>
        <frameset rows="10,90">
        <frame src="$script_name/query" name="query">
        <frame src="$script_name/response" name="response">
        </frameset>
        EOF
            # Finish creating frame
            print "Your name is: ", em(param('name')),
                p,
                "The keywords are: ", em(join(", ", param('words'))),
                p,
                "Your favorite color is: ", em(param('color')),
                hr;
        }
        print end_html;

    Read the article

  • Top navigation link in Magento that I can't get rid of

    - by Chris Baily
    I'm working on a new theme for an existing Magento installation, and I've got a rogue link. The last guy apparently decided to hard-code a link to the AW Blog extension he was using in the top navigation. See here: derm2go.com - the link is "articles". I'm getting rid of AW Blog in favor of an integrated WordPress install, but when I uninstall AW Blog, the site breaks (everything after the nav disappears) and I get this error in my logs:

        2011-11-19T08:56:19+00:00 ERR (3): Warning: include() [function.include]: Failed opening 'Mage/Blog/Helper/Data.php' for inclusion (include_path='/chroot/home/dermtwog/derm2go.com/html/app/code/local:/chroot/home/dermtwog/derm2go.com/html/app/code/community:/chroot/home/dermtwog/derm2go.com/html/app/code/core:/chroot/home/dermtwog/derm2go.com/html/lib:.:/usr/share/pear') in /chroot/home/dermtwog/derm2go.com/html/lib/Varien/Autoload.php on line 93

    I've searched everywhere I can think of that might affect the nav menu, and I don't know where the link is coming from - it's not in the CMS/static blocks, it's not in any of the default template files on the server (I deleted and reinstalled all of them), and it's showing up even when I change templates, so it's probably not in the sub themes. Does anyone out there know of other files it could be hiding in? I'm assuming the last guy did a quick and dirty hack somehow - and maybe messed with core files? Would really rather not have to do a full reinstall.

    Read the article

  • Load SQL query result data into cache in advance

    - by Marc
    I have the following situation:

        - .NET 3.5 WinForms client app accessing SQL Server 2008
        - Some queries returning a relatively big amount of data are used quite often by a form
        - Users are using local SQL Express and restarting their machines at least daily
        - Other users are working remotely over slow network connections

    The problem is that after a restart, the first time users open this form the queries are extremely slow and take more or less 15s on a fast machine to execute. Afterwards the same queries take only 3s. Of course this comes from the fact that no data is cached and must be loaded from disk first. My question: would it be possible to force the loading of the required data into the SQL Server cache in advance?

    Note: my first idea was to execute the queries in a background worker when the application starts, so that when the user opens the form the queries will already be cached and execute fast directly. However, I don't want to pull the result of the queries over to the client, as some users are working remotely or otherwise have slow networks. So I thought of just executing the queries from a stored procedure and putting the results into temporary tables so that nothing would be returned. It turned out that some of the result sets use dynamic columns, so I couldn't create the corresponding temp tables, and thus this isn't a solution. Do you happen to have any other idea?
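    One hedged idea for the warm-up (the table and column names below are placeholders, not from the post): run the same joins and filters inside the stored procedure but collapse the result into a single scalar, so the relevant pages are read into the buffer cache while nothing is streamed back to the client and no temp table has to match the dynamic columns.

        -- Warm-up sketch: touches the same data as the real query, returns nothing useful
        DECLARE @dummy int;
        SELECT @dummy = COUNT(*)
        FROM   dbo.SomeBigTable t                -- placeholder for the real FROM/JOINs
        JOIN   dbo.OtherTable   o ON o.id = t.other_id
        WHERE  t.active = 1;                     -- placeholder for the real WHERE clause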

    Read the article
