Search Results

Search found 20852 results on 835 pages for 'local seo'.


  • JavaScript Exception/Error Handling Not Working

    - by Seán Hayes
    This might be a little hard to follow. I've got a function inside an object:

        f_openFRHandler: function(input) {
            console.debug('f_openFRHandler');
            try {
                //throw 'foo';
                DragDrop.FileChanged(input);
                //foxyface.window.close();
            }
            catch(e) {
                console.error(e);
                jQuery('#foxyface_open_errors').append('<div>Max local storage limit reached, unable to store new images in your browser. Please remove some images and try again.</div>');
            }
        },

    Inside the try block it calls:

        this.FileChanged = function(input) {
            // FileUploadManager.addFileInput(input);
            console.debug(input);
            var files = input.files;
            for (var i = 0; i < files.length; i++) {
                var file = files[i];
                if (!file.type.match(/image.*/)) continue;
                var reader = new FileReader();
                reader.onload = (function(f, isLast) {
                    return function(e) {
                        if (files.length == 1) {
                            LocalStorageManager.addImage(f.name, e.target.result, false, true);
                            LocalStorageManager.loadCurrentImage();
                            //foxyface.window.close();
                        } else {
                            FileUploadManager.addFileData(f, e.target.result); // add multiple files to list
                            if (isLast) setTimeout(function() { LocalStorageManager.loadCurrentImage() }, 100);
                        }
                    };
                })(file, i == files.length - 1);
                reader.readAsDataURL(file);
            }
            return true;
        };

    LocalStorageManager.addImage calls:

        this.setItem = function(data) {
            localStorage.setItem('ImageStore', $.json_encode(data));
        };

    localStorage.setItem throws an error if too much local storage has been used. I want to catch that error in f_openFRHandler (first code sample), but it's being sent to the error console instead of the catch block. I tried the following code in my Firebug console to make sure I'm not crazy, and it works as expected despite many levels of function nesting:

        try {
            (function(){ (function(){ throw 'foo' })() })()
        } catch(e) {
            console.debug(e)
        }

    Any ideas?
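
    A likely cause, not stated in the original post: reader.onload runs asynchronously, after f_openFRHandler has already returned, so an exception thrown from localStorage.setItem never passes through that outer try/catch. A minimal sketch of one way to surface the quota error, reusing the names from the snippets above:

        reader.onload = (function(f, isLast) {
            return function(e) {
                try {
                    // original per-file logic
                    LocalStorageManager.addImage(f.name, e.target.result, false, true);
                    LocalStorageManager.loadCurrentImage();
                } catch (err) {
                    // this catch lives inside the async callback, so the storage error lands here
                    console.error(err);
                    jQuery('#foxyface_open_errors').append('<div>Max local storage limit reached.</div>');
                }
            };
        })(file, i == files.length - 1);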

    Read the article

  • Pros/cons of reading connection string from physical file vs Application object (ASP.NET)?

    - by HaterTot
    My ASP.NET application reads an XML file to determine which environment it's currently in (e.g. local, development, production). It checks this file every single time it opens a connection to the database, in order to know which connection string to grab from the Application Settings. I'm entering a phase of development where efficiency is becoming a concern. I don't think it's a good idea to have to read a file on a physical disk every single time I wish to access the database (very often). I was considering storing the connection string in Application["ConnectionString"], so the code would be:

        public static string GetConnectionString {
            if (Application["ConnectionString"] == null) {
                XmlDocument doc = new XmlDocument();
                doc.Load(HttpContext.Current.Request.PhysicalApplicationPath + "bin/ServerEnvironment.xml");
                XmlElement xe = (XmlElement) xnl[0];
                switch (xe.InnerText.ToString().ToLower()) {
                    case "local":
                        connString = Settings.Default.ConnectionStringLocal;
                        break;
                    case "development":
                        connString = Settings.Default.ConnectionStringDevelopment;
                        break;
                    case "production":
                        connString = Settings.Default.ConnectionStringProduction;
                        break;
                    default:
                        throw new Exception("no connection string defined");
                }
                Application["ConnectionString"] = connString;
            }
            return Application["ConnectionString"].ToString();
        }

    I didn't design the application, so I figure there must have been a reason for reading the XML file every time (to change settings while the application runs?). I have very little concept of the inner workings here. What are the pros and cons? Do you think I'd see a small performance gain by implementing the function above? Thanks.
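
    For comparison, a common alternative (an assumption about how the project could be restructured, not something from the post) is to keep one named connection string per environment in the <connectionStrings> section of web.config and read it through ConfigurationManager, which the framework keeps cached in memory:

        using System.Configuration;

        // minimal sketch: "Local", "Development" and "Production" are hypothetical
        // connection string names defined in web.config
        public static string GetConnectionString(string environment)
        {
            var entry = ConfigurationManager.ConnectionStrings[environment];
            if (entry == null)
                throw new InvalidOperationException("no connection string defined");
            return entry.ConnectionString;
        }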

    Read the article

  • Can I version dotfiles within a project without merging their history into the main line?

    - by istrasci
    I'm sure this title is fairly obscure. I'm wondering if there is some way in git to tell it that a certain file should have different contents when moving between branches, but overall be .gitignored from the repository. Here's my scenario: I've got a Flash Builder project (for a Flex app) that I control with git. Flex apps in Flash Builder projects create three files: .actionScriptProperties, .flexProperties, and .project. These files contain lots of local file system references (source folders, output folders, etc.), so naturally we .gitignore them from our repo. Today, I wanted to use a new library in my project, so I made a separate git branch called lib, removed the old version of the library and put in the new one. Unfortunately, this Flex library information gets stored in one of those three dot files (not sure which offhand). So when I had to switch back to the first branch (master) earlier, I was getting compile errors because master was now linked to the new library (which basically negated why I made lib in the first place). So I'm wondering if there's any way for me to continue to .gitignore these files (so my other developers don't get them), but tell git that I want it to use some kind of local "branch version" so I can locally use different versions of the files for different branches.
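
    One low-tech pattern sometimes used for this (a suggestion, not part of the original question; the .template suffix is just a convention introduced here): keep the real dot files ignored, but commit template copies that can differ per branch, and have each developer copy them into place once per checkout:

        # committed per branch, so `lib` and `master` can carry different library info
        git add .actionScriptProperties.template .flexProperties.template .project.template

        # ignored, local-only working copies, refreshed after switching branches
        cp .actionScriptProperties.template .actionScriptProperties
        cp .flexProperties.template .flexProperties
        cp .project.template .project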

    Read the article

  • User-friendly URLs in Joomla 1.5

    - by Aruna
    Hi, I am working with Joomla 1.5 and I am unsure how to set user-friendly URLs for the site. In the SEO configuration I have set both "Use Apache mod_rewrite" and "Search Engine Friendly URLs" to yes. The URL changes to http://localhost/joomla/Joomla_1.5.7/publicationsform but it shows me a 404 error; it only works when I use http://localhost/joomla/Joomla_1.5.7/index.php/publicationsform. How do I resolve this? Also, even when I use http://localhost/joomla/Joomla_1.5.7/index.php/publicationsform my CSS is not getting loaded.
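
    A likely cause, offered as a guess rather than something from the post: with "Use Apache mod_rewrite" enabled, Joomla needs the rewrite rules it ships with, and Apache must be allowed to apply them. The missing CSS is often a separate relative-path issue that templates using $this->baseurl avoid. A minimal sketch of the usual server-side setup:

        # in the Joomla root: activate the shipped rewrite rules
        mv htaccess.txt .htaccess

        # in the Apache config for that directory (hypothetical paths):
        #   <Directory /var/www/joomla>
        #       AllowOverride All
        #   </Directory>
        # and make sure mod_rewrite is enabled, e.g. on Debian/Ubuntu:
        a2enmod rewrite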

    Read the article

  • Can't get syntax on to work in my gvim.

    - by user198553
    (I'm new to Linux and Vim, and I'm trying to learn Vim, but I'm having some issues with it that I can't seem to fix.) I'm on a Linux installation (Ubuntu 8.04) that I can't update, using Vim 7.1.138. My Vim installation is in /usr/share/vim/vim71/. My .vimrc file is in /home/user/.vimrc, as follows:

        fun! MySys()
            return "linux"
        endfun

        set runtimepath=~/.vim,$VIMRUTNTIME
        source ~/.vim/.vimrc

    And then, in my /home/user/.vim/.vimrc:

        " =============== GENERAL CONFIG ==============
        set nocompatible
        syntax on

        " =============== ENCODING AND FILE TYPES =====
        set encoding=utf8
        set ffs=unix,dos,mac

        " =============== INDENTING ===================
        set ai " Automatically set the indent of a new line (local to buffer)
        set si " smartindent (local to buffer)

        " =============== FONT ========================
        " Set font according to system
        if MySys() == "mac"
            set gfn=Bitstream\ Vera\ Sans\ Mono:h13
            set shell=/bin/bash
        elseif MySys() == "windows"
            set gfn=Bitstream\ Vera\ Sans\ Mono:h10
        elseif MySys() == "linux"
            set gfn=Inconsolata\ 14
            set shell=/bin/bash
        endif

        " =============== COLORS ======================
        colorscheme molokai

        " ============== PLUGINS ======================
        " -------------- NERDTree ---------------------
        :noremap ,n :NERDTreeToggle<CR>

        " =============== DIRECTORIES =================
        set backupdir=~/.backup/vim
        set directory=~/.swap/vim

    The fact is, the command syntax on is not working, in either vim or gvim. And the strange thing is: if I try to set the syntax using the gvim toolbar, it works. Then, in normal mode in gvim, after activating it via the toolbar, :syntax off works, but right after doing that, :syntax on doesn't work! What's going on? Am I missing something?
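
    One thing worth checking, flagged here as an observation rather than part of the original post: the runtimepath line spells the variable $VIMRUTNTIME instead of $VIMRUNTIME, which drops Vim's system runtime directory (where the syntax scripts live) from the search path. A corrected line would be:

        " keep the personal ~/.vim directory plus Vim's own runtime files
        set runtimepath=~/.vim,$VIMRUNTIME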

    Read the article

  • ASP.Net Problem with Event Handlers and Control Creation Timing

    - by Oliver Weichhold
    What I am trying to achieve here is to display a number of LinkButtons in a RadGrid column. The buttons are generated from a collection property member of the bound grid row item. The CollectionLinkButton control is nothing more than an asp:Panel derived control that populates its child controls from "DataItem.SomeCollection", and this is working fine. The problem I am facing is with this part:

        Collection='<%# DataBinder.Eval(Container, "DataItem.SomeCollection") %>'

    This is because the databound Collection property is populated so late in the lifecycle of the page that the LinkButton controls the CollectionLinkButton class creates from the collection are not available yet during postback, when the Click event handler is supposed to fire, and I currently have no idea how to solve this problem.

        <radG:RadGrid ID="grid" runat="server" DataSourceID="ds_AB">
            <MasterTableView>
                <Columns>
                    <radG:GridTemplateColumn>
                        <ItemTemplate>
                            <local:CollectionLinkButton ID="LinkButton1" runat="server"
                                CssClass="EntityLinkButton"
                                Collection='<%# DataBinder.Eval(Container, "DataItem.SomeCollection") %>'
                                CollectionProperty="Id"
                                CollectionDisplayProperty="Name"
                                Text='<%# DataBinder.Eval(Container, "DataItem.Name") %>'>
                            </local:CollectionLinkButton>
                        </ItemTemplate>
                    </radG:GridTemplateColumn>
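
    A general pattern that usually applies here, offered as a sketch rather than the poster's solution: dynamically created child controls only raise postback events if they are recreated, with the same IDs, before the event-handling stage, so persisting the bound values (for example in ViewState) lets the control rebuild its LinkButtons on every request. Assuming CollectionLinkButton stores the display texts under a hypothetical "Items" ViewState key at data-binding time:

        // minimal sketch inside the CollectionLinkButton control
        protected override void CreateChildControls()
        {
            Controls.Clear();
            var items = (string[])(ViewState["Items"] ?? new string[0]);
            foreach (var name in items)
            {
                var lb = new LinkButton { Text = name };
                lb.Click += LinkButton_Click;   // hypothetical click handler
                Controls.Add(lb);
            }
        }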

    Read the article

  • Running an executable on a network share from a CustomAction with WiX?

    - by martin
    Hello, I have created an MSI package which compresses some XML files into a zip file during installation. I have created a CustomAction for this purpose:

        <CustomAction Id="CompressMy" BinaryKey="zipEXE"
                      ExeCommand="a -tzip &quot;[TEMPLATE_DIR]my.zip&quot; &quot;[TempSourceFolder]data.xml&quot;"
                      Return="check" HideTarget="no" Impersonate="no" Execute="deferred" />

    The installation works fine if I install to a local drive, but recently a customer wanted to install [TEMPLATE_DIR] to a network drive on Windows Vista. The CustomAction fails, because the elevated install user hasn't mapped the network drive, even though the user who launched the installer has. This also happens if I try to install to a UNC path. I use 7-Zip for compressing and have added it to my MSI package. I have tried to set Impersonate="yes", but then the installation fails if my TEMPLATE_DIR is, for example, the ProgramData directory. Do you have any idea what I can do? I thought about checking whether TEMPLATE_DIR is a network path, but I don't know how to check this. Or do you have any other ideas how I can support both a local and a network installation while using this custom action? It would be great if there is any advice. Greetings, Martin

    Read the article

  • Sharing storage between servers

    - by El Yobo
    I have a PHP based web application which is currently only using one webserver but will shortly be scaling up to another. In most regards this is pretty straightforward, but the application also stores a lot of files on the filesystem. It seems that there are many approaches to sharing the files between the two servers, from the very simple to the reasonably complex. These are the options that I'm aware of:

    Simple network storage: NFS, SMB/CIFS
    Clustered filesystems: Lustre, GFS/GFS2, GlusterFS, Hadoop DFS, MogileFS

    What I want is for a file uploaded via one webserver to be immediately available if accessed through the other. The data is extremely important and absolutely cannot be lost, so whatever is implemented needs to a) never lose data and b) have very high availability (as good as, or better than, a local filesystem). It seems like the clustered filesystems will also provide faster data access than local storage (for large files), but that isn't of vital importance at the moment. What would you recommend? Do you have any suggestions to add, or anything specifically to look out for with the above options? Any suggestions on how to manage backup of data on the clustered filesystems?

    Read the article

  • Value gets changed even though I'm not using a reference

    - by atch
    In code:

        struct Rep {
            const char* my_data_;
            Rep* my_left_;
            Rep* my_right_;
            Rep(const char*);
        };

        typedef Rep& list;

        ostream& operator<<(ostream& out, const list& a_list)
        {
            int count = 0;
            list tmp = a_list; //----->HERE I'M CREATING A LOCAL COPY
            for (; tmp.my_right_; tmp = *tmp.my_right_)
            {
                out << "Object no: " << ++count << " has name: " << tmp.my_data_;
                //tmp = *tmp.my_right_;
            }
            return out; //------>HERE a_list is changed
        }

    I thought that if I create a local copy of the a_list object I'll be operating on a completely separate object. Why isn't that so? Thanks.
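
    A note on what is probably going on, added here as an explanation rather than part of the original question: because of the typedef, list is Rep&, so list tmp = a_list declares a reference bound to the caller's object, not a copy, and every assignment to tmp in the loop writes through to it. A minimal self-contained sketch of the distinction, using a trimmed-down Rep:

        #include <iostream>

        struct Rep {
            const char* my_data_;
            Rep* my_right_;
        };

        typedef Rep& list;

        int main() {
            Rep node = { "head", 0 };
            list alias = node;   // "list" is Rep&, so alias is just another name for node
            Rep  copy  = node;   // a genuine copy

            alias.my_data_ = "changed via alias";
            copy.my_data_  = "changed via copy";

            std::cout << node.my_data_ << "\n";  // prints "changed via alias"
        }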

    Read the article

  • git clone fails with "index-pack" failed?

    - by gct
    So I created a remote repo that's not bare (because I need Redmine to be able to read it), and it's set to be shared with the group (so git init --shared=group). I was able to push to the remote repo and now I'm trying to clone it. If I clone it over the net I get this:

        remote: Counting objects: 4648, done.
        remote: Compressing objects: 100% (2837/2837), done.
        error: git-upload-pack: git-pack-objects died with error.B/s
        fatal: git-upload-pack: aborting due to possible repository corruption on the remote side.
        remote: aborting due to possible repository corruption on the remote side.
        fatal: early EOF
        fatal: index-pack failed

    I'm able to clone it locally without a problem, and I ran "git fsck", which only reports some dangling trees/blobs, which I understand aren't a problem. What could be causing this? I'm still able to pull from it, just not clone. I should note the remote git version is 1.5.6.5 while the local one is 1.6.0.4. I tried cloning my local copy of the repo, stripping out the .git folder and pushing to a new repo, then cloning the new repo, and I get the same error, which leads me to believe it may be a file in the repo that's causing git-upload-pack to fail...

    Edit: I have a number of Windows binaries in the repo, because I just built the Python modules and then stuck them in there so everyone else didn't have to build them as well. If I remove the Windows binaries and push to a new repo, I can clone again, so perhaps that gives a clue. Trying to narrow down exactly what file is causing the problem now.

    Read the article

  • Using unset member variables within a class or struct

    - by Doug Kavendek
    It's pretty nice to catch some really obvious errors when using unset local variables or when accessing a class or struct's members directly prior to initializing them. In Visual Studio 2008 you get an "uninitialized local variable used" warning at compile time and a run-time check failure at the point of access when debugging. However, if you access an uninitialized struct's member variable through one of its functions, you don't get any warnings or assertions. Obviously the easiest solution is don't do that, but nobody's perfect. For example:

        struct Test {
            float GetMember() const { return member; }
            float member;
        };

        Test test;
        float f1 = test.member;      // Raises warning, asserts in VS debugger at runtime
        float f2 = test.GetMember(); // No problem, just keeps on going

    This surprised me, but it makes some sense -- the compiler can't assume calling a function on an unused struct is an error, or how else would you initialize or construct it? And anything fancier quickly brings up so many other complications that it makes sense that it wouldn't bother classifying which functions are OK to call and when, especially just as a debugging help. I know I can set up my own assertions or error checking within the class itself, but that can complicate some simpler structs. Still, it would seem like within the context of the function call, wouldn't it know inside GetMember() that member wasn't initialized yet? I'm assuming it's not only relying on static compile-time deduction, given the Run-Time Check Failure #3 it raises during execution, so based on my current understanding it would seem reasonable for the same checks to apply. Is this just a limitation of this specific compiler/debugger (Visual Studio 2008), or is it more tied to how C++ works?

    Read the article

  • Problem configuring JBoss to work with JNDI

    - by Spiderman
    I am trying to bind a connection to the DB using JNDI in my application that runs on JBoss. I did the following: I created the datasource file oracle-ds.xml and filled it with the relevant XML elements:

        <datasources>
            <local-tx-datasource>
                <jndi-name>bilby</jndi-name>
                ...
            </local-tx-datasource>
        </datasources>

    I put it in the folder \server\default\deploy and added the relevant Oracle jar file. Then in my application I did:

        JndiObjectFactoryBean factory = new JndiObjectFactoryBean();
        factory.setJndiName("bilby");
        try {
            factory.afterPropertiesSet();
            dataSource = factory.getObject();
        } catch (NamingException ne) {
            ne.printStackTrace();
        }

    and this causes the error:

        javax.naming.NameNotFoundException: bilby not bound

    Then in the output after this error occurred I saw the line:

        18:37:56,560 INFO  [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=bilby' to JNDI name 'java:bilby'

    So what is my configuration problem? I think it may be that JBoss first loads and runs the .war file of my application and only then loads the oracle-ds.xml that contains my data-source definition. The problem is that they are both located in the same folder. Is there a way to define the priority of loading them, or maybe this is not the problem at all. Any idea?
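
    One detail in the log that may already contain the answer (an observation, not something from the original post): the datasource is bound as 'java:bilby', i.e. under the java: namespace that local-tx datasources get by default, while the lookup asks for plain 'bilby'. A minimal sketch of the adjusted lookup, mirroring the snippet above:

        JndiObjectFactoryBean factory = new JndiObjectFactoryBean();
        // match the name JBoss reported: Bound ... to JNDI name 'java:bilby'
        factory.setJndiName("java:bilby");
        try {
            factory.afterPropertiesSet();
            dataSource = factory.getObject();
        } catch (NamingException ne) {
            ne.printStackTrace();
        }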

    Read the article

  • Write to a text file using JavaScript

    - by karikari
    Under Firefox, I want to do something like this: I have a .htm file that has a button on it. When I click this button, the action should write text into a local .txt file. By the way, my .htm file is run locally too. I have tried multiple times using this code, but still can't make my .htm file write to my text file:

        function save() {
            try {
                netscape.security.PrivilegeManager.enablePrivilege("UniversalXPConnect");
            } catch (e) {
                alert("Permission to save file was denied.");
            }
            var file = Components.classes["@mozilla.org/file/local;1"]
                .createInstance(Components.interfaces.nsILocalFile);
            file.initWithPath( savefile );
            if ( file.exists() == false ) {
                alert( "Creating file... " );
                file.create( Components.interfaces.nsIFile.NORMAL_FILE_TYPE, 420 );
            }
            var outputStream = Components.classes["@mozilla.org/network/file-output-stream;1"]
                .createInstance( Components.interfaces.nsIFileOutputStream );
            outputStream.init( file, 0x04 | 0x08 | 0x20, 420, 0 );
            var output = 'test test test test';
            var result = outputStream.write( output, output.length );
            outputStream.close();
        }

    This part is for the button:

        <input type="button" value="write to file2" onClick="save();">
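
    One thing that stands out in the snippet (an observation, not from the original post): savefile is never defined, so initWithPath receives undefined and the call fails. A minimal sketch with a hypothetical hard-coded path:

        // hypothetical absolute path; initWithPath needs a full native path
        var savefile = "C:\\temp\\test.txt";   // e.g. "/tmp/test.txt" on Linux
        file.initWithPath(savefile);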

    Read the article

  • jQuery rewrite href

    - by user317188
    I have an onclick function and I want to rewrite the href value to an anchor. I do not want to change the URLs themselves, because I need the site to still function for people without JavaScript and for SEO purposes. So here is what I have tried (among other things):

        jQuery('a[rel=ajax]').click(function () {
            jQuery(this).attr('href', '/#' + jQuery('a'));
        });

    An original link looks like this: http://www.mysite.com/PopularTags/ and the rewritten URL should look like this, so that AJAX will work: http://www.mysite.com/#PopularTags. I was able to get some URLs to work by setting a link's name value to the same as the href, but it did not work for links with sub-sections, e.g. http://www.mysite.com/Artist/BandName/. So I'm not really sure. Thanks for the help.
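
    A guess at the intent, sketched here rather than taken from the post: build the hash from each link's own path instead of concatenating a jQuery object (which is what '/#' + jQuery('a') ends up doing):

        jQuery('a[rel=ajax]').each(function () {
            var path = jQuery(this).attr('href');                    // e.g. "/PopularTags/" or "/Artist/BandName/"
            var hash = path.replace(/^\/+/, '').replace(/\/+$/, ''); // "PopularTags", "Artist/BandName"
            jQuery(this).attr('href', '/#' + hash);
        });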

    Read the article

  • Running migrations on the server when deploying with Capistrano

    - by Pandafox
    Hi, I'm trying to deploy my Rails application with Capistrano, but I'm having some trouble running my migrations. In my development environment I just use SQLite as my database, but on my production server I use MySQL. The problem is that I want the migrations to run from my server and not my local machine, as I am not able to connect to my database from a remote location. My server setup: a Debian box running nginx, Passenger, MySQL and a git repository. What is the easiest way to do this?

    Update: here's my deploy script:

        set :application, "example.com"
        set :domain, "example.com"
        set :scm, :git
        set :repository, "[email protected]:project.git"
        set :use_sudo, false
        set :deploy_to, "/var/www/example.com"

        role :web, domain
        role :app, domain
        role :db, "localhost", :primary => true

        after "deploy", "deploy:migrate"

    When I run cap deploy, everything works fine until it tries to run the migration. Here's the error I'm getting:

        ** [deploy:update_code] exception while rolling back: Capistrano::ConnectionError, connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))
        connection failed for: localhost (Errno::ECONNREFUSED: Connection refused - connect(2))

    This is why I need to run the migration from the server and not from my local machine. Any ideas?
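
    A likely fix, offered as a sketch rather than something from the post: Capistrano runs deploy:migrate on the hosts listed under the :db role, and "localhost" makes it try to open an SSH connection to the workstation running cap. Pointing the :db role at the same host as the app means the migration is executed on the server, where MySQL is local:

        role :web, domain
        role :app, domain
        role :db,  domain, :primary => true   # run migrations on the server itself

        after "deploy", "deploy:migrate"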

    Read the article

  • Check-for-modifications failure in continuous integration using VisualSVN Server and CruiseControl.NET

    - by harun123
    I am using CruiseControl.NET for continuous integration. I've created a repository for my project using VisualSVN Server (which uses Windows authentication). Both servers are hosted on the same system (OS: Microsoft Windows Server 2003 SP2). When I force a build of the project using CruiseControl.NET, "Failed task(s): Svn: CheckForModifications" is shown as the message. When I checked the build report, it says the following:

        BUILD EXCEPTION
        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed:
        svn: OPTIONS of 'https://sp-ci.sbsnetwork.local:8443/svn/IntranetPortal/Source':
        Server certificate verification failed: issuer is not trusted (https://sp-ci.sbsnetwork.local:8443).
        Process command: C:\Program Files\VisualSVN Server\bin\svn.exe log sameUrlAbove
            -r "{2010-04-29T08:35:26Z}:{2010-04-29T09:04:02Z}" --verbose --xml
            --username ccnetadmin --password cruise --non-interactive --no-auth-cache
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.ProcessSourceControl.Execute(ProcessInfo processInfo)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.Svn.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.Sourcecontrol.QuietPeriod.GetModifications(ISourceControl sourceControl, IIntegrationResult lastBuild, IIntegrationResult thisBuild)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.GetModifications(IIntegrationResult from, IIntegrationResult to)
        at ThoughtWorks.CruiseControl.Core.IntegrationRunner.Integrate(IntegrationRequest request)

    My sourcecontrol node in ccnet.config is as shown below:

        <sourcecontrol type="svn">
            <executable>C:\Program Files\VisualSVN Server\bin\svn.exe</executable>
            <trunkUrl> check out url </trunkUrl>
            <workingDirectory> C:\ProjectWorkingDirectories\IntranetPortal\Source </workingDirectory>
            <username> ccnetadmin </username>
            <password> cruise </password>
        </sourcecontrol>

    Can anyone suggest how to avoid this error?
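
    A possible remedy, offered as a suggestion rather than something from the post: "Server certificate verification failed: issuer is not trusted" means the svn client running under the CruiseControl.NET service account does not trust the certificate VisualSVN Server generated. Two usual approaches are to run an svn command once interactively as that account and accept the certificate permanently, or to point the Subversion client config at the exported server certificate:

        # %APPDATA%\Subversion\servers for the account running the CCNet service (path is an assumption)
        [global]
        # certificate (or issuing CA) exported from VisualSVN Server
        ssl-authority-files = C:\certs\visualsvn-server.cer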

    Read the article

  • Global resources can't be resolved after publishing Website in VS2008

    - by Scoregraphic
    Hi there. I have a web project running in VS 2008. We have some global resource files (*.resx) in the App_GlobalResources folder for internationalisation. All this works like a charm on my local IIS installation out of VS. But when I publish my web project to the local filesystem and/or another server, none of the resources can be found any more. So I guess the pre-compilation is somehow corrupting stuff. When I call the pre-compiled web, I get an error that the resource object with key xyz cannot be found, although it could be found before. I checked with .NET Reflector whether the resource stuff made it into the DLLs. All those identifiers are there (bin/Web.dll, bin/<culture>/Web.resources.dll). The identifiers are loaded like this:

        <asp:MenuItem NavigateUrl="~/OrderNew.aspx"
                      Text="<%$ Resources:MyProject, MenuNewOrder %>"
                      Value="NewOrder">

    The resource files are called MyProject.resx and MyProject.<culture>.resx, where <culture> corresponds to the specific culture (i.e. MyProject.de-DE.resx). Any ideas how to solve this? I really appreciate any help. Thanks.

    Edit: if I copy the App_GlobalResources folder manually to the output, the resources are loaded normally. So I really wonder what this pre-compilation is all about. I'm still interested in solving the issue "the right way".

    Read the article

  • How to maintain base files for development environment central while allowing people to change their

    - by Ittai
    Hi, what I'd like to do is have files in a central location so that when I add people to my development team they can see the base version of these files, while the rest of the team has the ability to work with their own local versions. I know I can just put the files in source control (we use TortoiseSVN) and have the team change their local versions, but I'd rather not, as the exclamation mark signalling that the file has been changed and needs to be committed, quite frankly, irritates me greatly. I'll give two examples of what I mean:

    1. We use quite a few build.xml files which relate to a single properties file that contains many definitions. Some of them can differ between team members (mainly temporary working directories), and I'd like a new team member to be able to get the properties file with the base config but change it if they wish.

    2. Have the Eclipse settings file in SVN so that when a new team member joins they can just retrieve the files from the server and have a base system running. If they wish, they will be able to change some of these settings.

    Thanks, Ittai

    Read the article

  • How to hide the ssh user/password in an Expect script

    - by raindrop18
    In my Perl CGI setup I have the user/password in clear text and want to hide it, or have the user enter the credentials interactively. Is that possible? Here is my code. Please, any help! I am very new to this.

        #!/usr/local/bin/expect
        #######################################################################################################
        # Input: It will handle two arguments -> a device and a show command.
        #######################################################################################################
        # ######### Start of Script ######################
        #
        #### Set up Timeouts - Debugging Variables
        log_user 0
        set timeout 10
        set userid "USER"
        set password "PASS"

        # ############## Get two arguments - (1) Device (2) Command to be executed
        set device  [lindex $argv 0]
        set command [lindex $argv 1]

        spawn /usr/local/bin/ssh -l $userid $device
        match_max [expr 32 * 1024]

        expect {
            -re "RSA key fingerprint" {send "yes\r"}
            timeout {puts "Host is known"}
        }

        expect {
            -re "username: " {send "$userid\r"}
            -re "(P|p)assword: " {send "$password\r"}
            -re "Warning:" {send "$password\r"}
            -re "Connection refused" {puts "Host error -> $expect_out(buffer)";exit}
            -re "Connection closed" {puts "Host error -> $expect_out(buffer)";exit}
            -re "no address.*" {puts "Host error -> $expect_out(buffer)";exit}
            timeout {puts "Timeout error. Is device down or unreachable?? ssh_expect";exit}
        }

        expect {
            -re "\[#>]$" {send "term len 0\r"}
            timeout {puts "Error reading prompt -> $expect_out(buffer)";exit}
        }

        expect {
            -re "\[#>]$" {send "$command\r"}
            timeout {puts "Error reading prompt -> $expect_out(buffer)";exit}
        }

        expect -re "\[#>]$"
        set output $expect_out(buffer)
        send "exit\r"
        puts "$output\r\n"
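
    One way to take the literals out of the script (a sketch, not from the original post) is to prompt for the credentials at run time with terminal echo turned off, which Expect supports directly:

        # replaces: set userid "USER" / set password "PASS"
        send_user "Username: "
        expect_user -re "(.*)\n"
        set userid $expect_out(1,string)

        stty -echo
        send_user "Password: "
        expect_user -re "(.*)\n"
        set password $expect_out(1,string)
        stty echo
        send_user "\n"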

    Read the article

  • Moving from SVN to HG : branching and backup

    - by rorycl
    My company runs svn right now and we are very familiar with it. However, because we do a lot of concurrent development, merging can become very complicated. We've been playing with hg and we really like the ability to make fast and effective clones on a per-feature basis. We've got two main issues we'd like to resolve before we move to hg:

    1. Branches for erstwhile svn users. I'm familiar with the "4 ways to branch in Mercurial" as set out in Steve Losh's article. We think we should "materialise" the branches because I think the dev team will find this the most straightforward way of migrating from svn. Consequently I guess we should follow the "branching with clones" model, which means that separate clones for branches are made on the server. While this means that every clone/branch needs to be made on the server and published separately, this isn't too much of an issue for us, as we are used to checking out svn branches which come down as separate copies. I'm worried, however, that merging changes and following history may become difficult between branches in this model.

    2. Backup. If programmers in our team make local clones of a branch, how do they back up the local clone? We're used to seeing svn commit messages like this on a feature branch: "Interim commit: db function not yet working". I can't see a way of doing this easily in hg.

    Advice gratefully received. Rory

    Read the article

  • Need help running Python app as service in Ubuntu with Upstart

    - by GreeenGuru
    I have written a logging application in Python that is meant to start at boot, but I've been unable to start the app with Ubuntu's Upstart init daemon. When run from the terminal with sudo /usr/local/greeenlog/main.pyw, the application works perfectly. Here is what I've tried for the Upstart job, in /etc/init/greeenlog.conf:

        # greeenlog

        description "I log stuff."

        start on startup
        stop on shutdown

        script
            exec /usr/local/greeenlog/main.pyw
        end script

    My application starts one child thread, in case that is important. I've tried the job with the expect fork stanza without any change in the results. I've also tried this with sudo and without the script statements (just a lone exec statement). In all cases, after boot, running status greeenlog returns greeenlog stop/waiting, and running start greeenlog returns:

        start: Rejected send message, 1 matched rules; type="method_call", sender=":1.61" (uid=1000 pid=2496 comm="start) interface="com.ubuntu.Upstart0_6.Job" member="Start" error name="(unset)" requested_reply=0 destination="com.ubuntu.Upstart" (uid=0 pid=1 comm="/sbin/init"))

    Can anyone see what I'm doing wrong? I appreciate any help you can give. Thanks.
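
    Two observations, offered as guesses rather than anything from the post: the "Rejected send message ... uid=1000" error is D-Bus refusing the request because start was run as a regular user, so it has to be sudo start greeenlog; and "start on startup" fires very early, before local filesystems are available, so a later trigger is usually safer. A sketch of an adjusted job (the runlevel events assume a stock Ubuntu Upstart setup):

        # /etc/init/greeenlog.conf
        description "I log stuff."

        start on runlevel [2345]
        stop on runlevel [016]

        exec /usr/local/greeenlog/main.pyw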

    Read the article

  • Debugging a Google Web Toolkit application that has an error when deployed on Google App Engine

    - by gerdemb
    I have a Google Web Toolkit application that I am deploying to Google App Engine. In the deployed application I am getting a JavaScript error: Uncaught TypeError: Cannot read property 'f' of null. This sounds like the JavaScript equivalent of a Java NullPointerException. The problem is that the GWT JavaScript is obfuscated, so it's impossible to debug in the browser, and I can't reproduce the same problem in hosted mode where I could use the Java debugger. I think the reason I'm only seeing the error on the deployed application is that the database I'm using on the GAE server is triggering something differently than the test database I'm using during testing and development. So, any ideas about the best way to proceed? I've thought of the following things:

    1. Deploy a non-obfuscated version of my application. Despite a lot of Googling, I can't figure out how to do this using the automatic deploy script provided with the Google Eclipse plugin. Does anyone know?
    2. Download and copy my GAE data to the local server.
    3. Somehow point my development code to use the GAE server for data instead of the local test database. This seems like the best idea... Can anyone suggest how to proceed here?

    Finally, is there a way to catch these JavaScript errors on the production server and log them somewhere? Without logging, I won't have any way to know if my users are having errors that don't occur on the server. The GWT.log() function is automatically stripped out of the production code... (one possible logging approach is sketched below).
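
    Two sketches that may help, based on general GWT facilities rather than anything in the post: the GWT compiler's -style flag (e.g. -style PRETTY or DETAILED, settable as a compiler argument in the Eclipse project's GWT settings) produces readable JavaScript instead of the obfuscated build, and GWT.setUncaughtExceptionHandler lets the client surface errors that only appear in production; forwarding them to your own logging servlet is an assumption about your setup:

        import com.google.gwt.core.client.GWT;
        import com.google.gwt.user.client.Window;

        // in the EntryPoint's onModuleLoad(), before other work runs
        GWT.setUncaughtExceptionHandler(new GWT.UncaughtExceptionHandler() {
            public void onUncaughtException(Throwable e) {
                // simplest possible surfacing; a real app would POST this to its own logging endpoint
                Window.alert("Client error: " + e);
            }
        });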

    Read the article

  • Get JSON and output to text file undecoded

    - by Gary
    Hi, I want to fetch a JSON script and write it to a .txt file undecoded, exactly how it was originally. I have a script that I use which I am modifying, but I'm unsure which commands to use. This script decodes, which is what I want to avoid:

        //Get Age
        list($bstat,$bage,$bdata) = explode("\t",check_file('./advise/roadsnow.txt',60*2+15));

        //Test Age
        if ( $bage > $CacheMaxAge ) {
            //echo "The if statement evaluated to true so get new file and reset $bage";
            $bage="0";
            $file = file_get_contents('http://somesite.jsontxt');
            $out = (json_decode($file));
            $report = wordwrap($out->mainText, 100, "\n");
            //$valid = $out->validTo;

            //write the data to a text file called roadsnow.txt
            $myFile = "./advise/roadsnow.txt";
            $fh = fopen($myFile, 'w') or die("can't open file");
            $stringData = $report;
            fwrite($fh, $stringData);
        } else {
            //echo the test evaluated to false; file is not stale so read local cache
            //print "we are at the read local cache";
            $stringData = file_get_contents("./advise/roadsnow.txt");
        }
        // if/else is done; carry on with processing

        //Format file
        $data = $stringData
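
    If the goal is to keep the response byte-for-byte as it arrived (my reading of the question), the fix is simply to skip json_decode when writing the cache and store the raw string; a minimal sketch using the same paths as above:

        $file = file_get_contents('http://somesite.jsontxt');

        // store the JSON exactly as fetched, no decoding or re-wrapping
        file_put_contents('./advise/roadsnow.txt', $file);

        // decode only when the data is actually needed for processing
        $out = json_decode($file);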

    Read the article

  • When I run rake db:create, I get: rake aborted! uninitialized constant Cucumber

    - by Big Bang Theory
    Hi, I am trying to experiment with an open source application. When I run rake db:create, the following is the stack trace:

        rake aborted!
        uninitialized constant Cucumber
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:443:in `load_missing_constant'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:80:in `const_missing'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:92:in `const_missing'
        /home/BigBangTheory/Desktop/spot-us/lib/tasks/cucumber.rake:13
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1882:in `in_namespace'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:910:in `namespace'
        /home/BigBangTheory/Desktop/spot-us/lib/tasks/cucumber.rake:12
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:521:in `new_constants_in'
        /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/dependencies.rb:145:in `load'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8:in `each'
        /usr/lib/ruby/gems/1.8/gems/rails-2.3.2/lib/tasks/rails.rb:8
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require'
        /usr/local/lib/site_ruby/1.8/rubygems/custom_require.rb:31:in `require'
        /home/BigBangTheory/Desktop/spot-us/Rakefile:9
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `load'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2383:in `raw_load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2017:in `load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2016:in `load_rakefile'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2000:in `run'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
        /usr/lib/ruby/gems/1.8/gems/rake-0.8.7/bin/rake:31
        /usr/bin/rake:19:in `load'
        /usr/bin/rake:19

    Any help?
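
    A reading of the trace, offered as a suggestion rather than something from the post: lib/tasks/cucumber.rake (lines 12-13) references the Cucumber constant, every rake invocation loads all .rake files first, and the cucumber gem apparently isn't installed in this environment. Installing the gem (gem install cucumber), or guarding the task file so it degrades gracefully, avoids the failure. A sketch of such a guard:

        # lib/tasks/cucumber.rake -- sketch of a guard, an assumption about how the file could be rewritten
        begin
          require 'cucumber/rake/task'
          Cucumber::Rake::Task.new(:features)
        rescue LoadError
          desc 'cucumber rake task not available (cucumber gem not installed)'
          task :features do
            abort 'Cucumber rake task is not available. Install the cucumber gem first.'
          end
        end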

    Read the article

  • First I could access the repository, next I cannot

    - by Banani
    Hi! I have configured svnserve (1.6.5, plain, without Apache) on Fedora 12. Right after configuration I could run the following svn subcommands, which access the repository: commit, update, checkout, list. But the next time (after stopping svnserve with Ctrl-C and then starting it again) I tried the above commands, I could not access the repository. This happens both from the local machine and from a remote machine. I ran svn and svnserve as below:

        svn commit svn://127.0.0.1/myrepository/          (from a local client)
        svnserve -d --foregorund --listen-port=3690 -r /path-to-repository/mypository/

    To understand the problem better, I created another repository and found similar behaviour: first I could access the repository and next I could not. I tried running strace on svnserve, but I don't understand much of it. Below is the partial output:

        accept(3, {sa_family=AF_INET, sin_port=htons(54425), sin_addr=inet_addr("127.0.0.1")}, [16]) = 4
        fcntl64(4, F_GETFD)                     = 0
        fcntl64(4, F_SETFD, FD_CLOEXEC)         = 0
        waitpid(-1, 0xbfcdf31c, WNOHANG|WSTOPPED) = -1 ECHILD (No child processes)
        clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb7743758) = 9737
        close(4)                                = 0
        accept(3, 0xbfcdf2bc, [128])            = ? ERESTARTSYS (To be restarted)
        --- SIGCHLD (Child exited) @ 0 (0) ---
        sigreturn()                             = ? (mask now [])
        waitpid(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WNOHANG|WSTOPPED) = 9737
        waitpid(-1, 0xbfcdf31c, WNOHANG|WSTOPPED) = -1 ECHILD (No child processes)

    My questions: why are users no longer able to access the repository? What information does the strace output give about this problem? Any help is much appreciated. Thanks, Banani

    Read the article
