Search Results

Search found 24094 results on 964 pages for 'console log'.

Page 337/964

  • Executing python subprocess via git hook

    - by aljesco
    I'm running Gitolite over a Git repository and I have a post-receive hook there written in Python. I need to execute a "git" command in the Git repository directory. These are the relevant lines of code:

        proc = subprocess.Popen(['git', 'log', '-n1'],
                                cwd='/home/git/repos/testing.git',
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        proc.communicate()

    After I make a new commit and push to the repository, the script executes and says:

        fatal: Not a git repository: '.'

    If I run

        proc = subprocess.Popen(['pwd'],
                                cwd='/home/git/repos/testing.git',
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)

    it prints, as expected, the correct path to the Git repository (/home/git/repos/testing.git). If I run this script manually from bash, it works correctly and shows the correct output of "git log". What am I doing wrong?
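
    A plausible explanation (an assumption, not stated in the question): git exports GIT_DIR to its hooks, and an inherited GIT_DIR that no longer resolves against the child's working directory produces exactly this "Not a git repository" error, regardless of the cwd passed to Popen. A minimal sketch that strips the variable before spawning git:

        import os
        import subprocess

        # Copy the hook's environment but drop GIT_DIR so the child git
        # discovers the repository from cwd instead of the inherited value.
        env = os.environ.copy()
        env.pop('GIT_DIR', None)

        proc = subprocess.Popen(['git', 'log', '-n1'],
                                cwd='/home/git/repos/testing.git',
                                env=env,
                                stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        out, err = proc.communicate()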

    Read the article

  • How to determine what user and group a Python script is running as?

    - by Chirael
    I have a CGI script that is getting an "IOError: [Errno 13] Permission denied" error in the stack trace in the web server's error log. As part of debugging this, I'd like to add a little bit of code to the script that prints the user and (especially) the group the script is running as into the error log (presumably STDERR). I know I can just print the values to sys.stderr, but how do I figure out which user and group the script is running as? (I'm particularly interested in the group, so the $USER environment variable won't help; the CGI script has the setgid bit set, so it should be running as group "list" instead of the web server's "www-data" -- but I need code to check whether that's actually happening.)
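
    A minimal sketch using the standard library (os, pwd, grp); the effective IDs are the ones that matter for the setgid check:

        import os
        import sys
        import pwd
        import grp

        # Real and effective user/group IDs, resolved to names for readability.
        ruid, euid = os.getuid(), os.geteuid()
        rgid, egid = os.getgid(), os.getegid()

        sys.stderr.write(
            "user: %s (uid=%d, euid=%d)  group: %s (gid=%d, egid=%d)\n" % (
                pwd.getpwuid(euid).pw_name, ruid, euid,
                grp.getgrgid(egid).gr_name, rgid, egid))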

    Read the article

  • Force NCover 1.5.8 to use v4 framework like testdriven.net does?

    - by Sam Holder
    I want to run coverage from the command line, but can't seem to get NCover 1.5.8 to instrument the code. It must be possible, because when I run coverage tests with TestDriven.Net it works. The difference seems to be that TD.NET gets NCover to use the .NET 4.0 framework (the log shows MESSAGE: v4.0.30319), but from the command line I can't make it do so (the log shows MESSAGE: v2.0.50727). So how can I make NCover play nicely with NUnit from the command line, like it does with TD.NET?
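
    The MESSAGE: v2.0.50727 line suggests the process being profiled is loading the v2 CLR. A common way to force an older .NET executable onto the v4 runtime (a sketch, not verified against NCover 1.5.8 specifically) is a .config file next to that executable -- e.g. nunit-console.exe.config, and likewise NCover.Console.exe.config if needed -- declaring v4.0 as a supported runtime:

        <?xml version="1.0" encoding="utf-8"?>
        <configuration>
          <startup>
            <!-- ask the loader for the v4 CLR instead of v2 -->
            <supportedRuntime version="v4.0.30319" />
          </startup>
        </configuration>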

    Read the article

  • Set environment variable in Ubuntu

    - by Junho Park
    In Ubuntu, I'd like to switch my JAVA_HOME environment variable back and forth between Java 5 and 6. I open a terminal and type the following to set the JAVA_HOME environment variable:

        export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun

    In that same terminal window, I type the following to check that the environment variable has been updated:

        echo $JAVA_HOME

    and I see /usr/lib/jvm/java-1.5.0-sun, which is what I'm expecting. In addition, I modify ~/.profile and set the JAVA_HOME environment variable to /usr/lib/jvm/java-1.5.0-sun. And now for the problem: when I open a new terminal window and check my JAVA_HOME environment variable with echo $JAVA_HOME, I see that it has reverted back to Java 6. When I reboot my machine (or log out and back in, I suppose), the JAVA_HOME environment variable is set to Java 5 (presumably because of the modification I made in my ~/.profile). Is there a way around this, so that I can change my JAVA_HOME environment without having to log out and back in (AND make that change stick in all new terminal windows)?
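
    This matches how the shell startup files work: ~/.profile is only read at login, while new terminal windows typically start non-login interactive shells that read ~/.bashrc. A small sketch of the usual workaround (assuming bash is the shell):

        # make every new terminal pick the value up, without logging out:
        echo 'export JAVA_HOME=/usr/lib/jvm/java-1.5.0-sun' >> ~/.bashrc

        # for a terminal that is already open, re-read the file by hand:
        source ~/.profile    # or: source ~/.bashrc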

    Read the article

  • PHP efficiency question: database call vs. file write vs. calling a C++ executable

    - by JP19
    Hi, what I wish to achieve is to log all information about each and every visit to every page of my website (IP address, browser, referring page, etc.). That part is easy to do. What I am interested in is doing it in a way that causes minimum overhead (runtime) in the PHP scripts. Which approach is best, efficiency-wise?

    1) Log all information to a database table
    2) Write to a file (from PHP directly)
    3) Call a C++ executable that writes this info to a file in parallel, so the script can continue execution without waiting for the file write to occur (is this even possible?)

    I may be trying to optimize unnecessarily/prematurely, but still -- any thoughts/ideas on this would be appreciated. (I think the efficiency of file writes/logging can really be a concern if I have, say, 100 visits per minute...) Thanks & Regards, JP
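
    For option 2, a minimal sketch of an append-only access log in PHP (the log path and field list are placeholders); FILE_APPEND with LOCK_EX keeps concurrent requests from interleaving their lines:

        <?php
        // One tab-separated line per visit, appended under an exclusive lock.
        $logFile = '/var/log/myapp/visits.log';   // placeholder path
        $line = implode("\t", array(
            date('c'),
            $_SERVER['REMOTE_ADDR'],
            isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '-',
            isset($_SERVER['HTTP_REFERER'])    ? $_SERVER['HTTP_REFERER']    : '-',
        )) . "\n";
        file_put_contents($logFile, $line, FILE_APPEND | LOCK_EX);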

    Read the article

  • make: invoke command for multiple targets of multiple files?

    - by marvin2k
    Hi, I'm looking to optimize an existing Makefile. It's used to create multiple plots (using Octave) for every logfile in a given directory, with one script file per plot that takes a logfile name as an argument. At the moment I use one single rule for every kind of plot available, with a hand-written call to Octave giving the specific script file/logfile as an argument. It would be nice if every plot had "its" Octave script as a dependency (plus the logfile, of course), so only one plot is regenerated when its script changes. Since I don't want to type that much, I wonder how I can simplify this by using only one general rule to build "a" plot. To make it clearer:

        Logfile:    $(LOGNAME).log
        Scriptfile: plot$(PLOTNAME).m
        creates:    $(LOGNAME)_$(PLOTNAME).png

    The first thing I had in mind:

        %1_%2.png: %1.log
            $(OCTAVE) --eval "plot$<2('$<1')"

    But this does not seem to be allowed. Could someone give me a hint?
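
    GNU make pattern rules only support a single '%' stem, which is why the two-stem attempt is rejected. A common workaround (a sketch, assuming GNU make; LOGNAMES, PLOTNAMES and OCTAVE are placeholders you would define or derive) is to generate one explicit rule per logfile/plot pair with foreach/eval/call:

        # Derive the lists from the files on disk (placeholder logic).
        LOGNAMES  := $(basename $(wildcard *.log))
        PLOTNAMES := $(basename $(patsubst plot%,%,$(wildcard plot*.m)))

        # One rule per (logfile, plot) pair; the recipe line must start with a TAB.
        define PLOT_RULE
        $(1)_$(2).png: $(1).log plot$(2).m
        	$$(OCTAVE) --eval "plot$(2)('$(1)')"
        endef

        $(foreach L,$(LOGNAMES),$(foreach P,$(PLOTNAMES),$(eval $(call PLOT_RULE,$(L),$(P)))))

        # Convenience target that builds every plot for every logfile.
        all-plots: $(foreach L,$(LOGNAMES),$(foreach P,$(PLOTNAMES),$(L)_$(P).png))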

    Read the article

  • SQL Get Latest Unique Rows

    - by Simpleton
    I have a log table where each row represents an object logging its state. Each object has a unique, unchanging GUID. There are multiple objects logging their states, so there will be thousands of entries, with objects continually inserting new logs. Every time an object checks in, it is via an INSERT. I have the PrimaryKey, GUID, ObjectState and LogDate columns in tblObjects. I want to select the latest (by datetime) log entry for each unique GUID from tblObjects -- in effect a 'snapshot' of all the objects. How can this be accomplished?
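
    One common way to express this (a sketch against the column names given; it assumes a GUID never has two rows with exactly the same LogDate -- if it can, break the tie on PrimaryKey) is to join each row back to the per-GUID maximum LogDate:

        -- newest row per GUID: the derived table finds each GUID's latest LogDate,
        -- and the join pulls back the full matching row.
        SELECT t.PrimaryKey, t.GUID, t.ObjectState, t.LogDate
        FROM tblObjects AS t
        INNER JOIN (
            SELECT GUID, MAX(LogDate) AS LatestDate
            FROM tblObjects
            GROUP BY GUID
        ) AS latest
            ON latest.GUID = t.GUID
           AND latest.LatestDate = t.LogDate;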

    Read the article

  • Storing millions of URLs in a database for fast pattern matching

    - by Paras Chopra
    I am developing a web-analytics kind of system which needs to log the referring URL, landing-page URL and search keywords for every visitor to the website. What I want to do with this collected data is allow the end user to query it, such as "Show me all visitors who came from Bing.com searching for a phrase that contains 'red shoes'" or "Show me all visitors who landed on a URL that contained 'campaign=twitter_ad'", etc. Because this system will be used on many big websites, the amount of data that needs to be logged will grow really, really fast. So, my questions: a) what would be the best strategy for logging, so that scaling the system doesn't become a pain; b) how do I use that architecture for rapid querying of arbitrary requests? Is there a special method of storing URLs so that querying them gets faster? In addition to the MySQL database that I use, I am exploring (and open to) other alternatives better suited for this task.

    Read the article

  • C#: My callback function gets called twice for every Sent Request

    - by Madi D.
    I've got a program that uploads/downloads files to/from an online server and has a callback to report progress and log it into a text file. The program is built with the following structure:

        public void Upload(string source, string destination)
        {
            // Object containing Source and destination to pass to the threaded function
            KeyValuePair<string, string> file = new KeyValuePair<string, string>(source, destination);
            // Threading to make sure no blocking happens after calling the upload function
            Thread t = new Thread(new ParameterizedThreadStart(amazonHandler.TUpload));
            t.Start(file);
        }

        private void TUpload(object fileInfo)
        {
            KeyValuePair<string, string> file = (KeyValuePair<string, string>)fileInfo;
            /* Some Magic goes here: checking the file and authorizing the upload */
            var ftiObject = new FtiObject()
            {
                FileNameOnHDD = file.Key,
                DestinationPath = file.Value,
                // Has more data used for calculations.
            };
            // Threading to make sure the progress callback gets called.
            Thread t = new Thread(new ParameterizedThreadStart(amazonHandler.UploadOP));
            t.Start(ftiObject);
            // Signal used to stop progress until uploadCompleted is called.
            uploadChunkDoneSignal.WaitOne();
            /* Some Extra Code */
        }

        private void UploadOP(object ftiSentObject)
        {
            FtiObject ftiObject = (FtiObject)ftiSentObject;
            /* Some useless code to create the uri and prepare the ftiObject. */
            // webClient.UploadFileAsync will open a thread that will upload the file
            // and report progress/completion using the registered callback functions.
            webClient.UploadFileAsync(uri, "PUT", ftiObject.FileNameOnHDD, ftiObject);
        }

    I have a callback registered to the WebClient's UploadProgressChanged event; however, it is getting called twice per sent request:

        void UploadProgressCallback(object sender, UploadProgressChangedEventArgs e)
        {
            FtiObject ftiObject = (FtiObject)e.UserState;
            Logger.log(ftiObject.FileNameOnHDD, (double)e.BytesSent, e.TotalBytesToSend);
        }

    Log output:

        Filename: C:\Text1.txt Uploaded:1024 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded:1024 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded:2048 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded:2048 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded:3072 TotalFileSize: 665241
        Filename: C:\Text1.txt Uploaded:3072 TotalFileSize: 665241
        Etc...

    I am watching the network traffic with a watcher, and only one request is being sent. Somehow I can't figure out why the callback is being called twice. My suspicion was that the callback is getting fired by each opened thread (the main Upload and TUpload), but I don't know how to test whether that's the cause. Note: the reason behind the many /**/ comments is to indicate that the functions do more than just opening threads, and threading is being used to make sure no blocking occurs (there are a couple of "Signal.WaitOne()" calls around the code for synchronization).

    Read the article

  • Other SecurityManager implementations available?

    - by mhaller
    Is there any other implementation (e.g. in an OSS project) of a Java SecurityManager available which has more features than the one in the JDK? I'm looking for features like:

    - configurable at runtime
    - policies updateable at runtime, read from other data sources than a security.policy file
    - thread-aware, e.g. different policies per thread
    - higher-level policies, e.g. "Disable network functions, but allow JDBC traffic"
    - common predefined policies, e.g. "Allow read-access to usual system properties like file.encoding or line.separator, but disallow read-access to user.home"
    - monitoring and audit trace logging, e.g. "Log all file access, log all network access NOT going to knownhost.example.org"
    - blocking jobs "requesting" a permission until an administrator grants the permission, letting the thread/job continue
    - ...

    I'm pretty sure that application servers (at least the commercial ones) have their own SecurityManager implementation or at least their own policy configuration. I'm wondering if there is any free project with similar requirements.

    Read the article

  • PowerShell copy fails without warning

    - by boink
    Howdy, I am trying to copy a file from the IE cache to somewhere else. This works on Windows 7, but not on Vista Ultimate. In short:

        copy-item $f -Destination "$targetDir" -force

    (I also tried $f.FullName.) The full script:

        $targetDir = "C:\temp"
        $ieCache = (get-itemproperty "hkcu:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders").cache
        $minSize = 5mb
        Write-Host "minSize:" $minSize
        Add-Content -Encoding Unicode -Path $targetDir"\log.txt" -Value (get-Date)
        Set-Location $ieCache
        # \Low\Content.IE5 for protected mode
        # \content.ie5 for unprotected
        $a = Get-Location
        foreach ($f in (get-childitem -Recurse -Force -Exclude *.dat, *.tmp | where {$_.length -gt $minSize}))
        {
            Write-Host (get-Date) $f.Name $f.length
            Add-Content -Encoding Unicode -Path $targetDir"\log.txt" -Value $f.name, $f.length
            copy-item $f -Destination "$targetDir" -force
        }

    End of wisdom. Please help!
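
    One thing worth ruling out (an assumption -- the question shows no error output): IE cache file names usually contain square brackets (e.g. page[1].htm), which Copy-Item's -Path parameter treats as wildcard characters, a classic cause of copies quietly misbehaving. A small debugging sketch that sidesteps that and surfaces any failure:

        # Use -LiteralPath so [] in cache file names are not treated as wildcards,
        # and -ErrorAction Stop so a failure is reported instead of ignored.
        Copy-Item -LiteralPath $f.FullName -Destination $targetDir -Force -ErrorAction Stop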

    Read the article

  • php session not working (as with cookies disabled), but cookies are enabled for the domain. What oth

    - by SWilk
    Hi, I have a problem with a client who cannot log in to our system. We have a PHP-based B2B system which uses cookies to store the session id. The client cannot log in and is redirected to the login page without any error message. He claims he has cookies enabled in his Firefox. Also, if he had cookies disabled, my JavaScript would detect this and would show him a red, very descriptive error message; he does not see anything like this. What else could have the same effect on sessions as disabled cookies in the browser? Are there any proxies which filter cookies? Any AV software, etc.? What should I look for? Our login form works for any other user without problems.

    Read the article

  • Ending tail -f started in a shell script

    - by rangalo
    I have the following: a Java process writing logs to stdout, a shell script starting the Java process, and another shell script which executes the previous one and redirects the log. I check the log file with the tail -f command for the success message. Even though I have exit 0 in the code, I cannot end the tail -f process, which doesn't let my script finish. Is there any other way of doing this in Bash? The code looks like the following:

        function startServer() {
            touch logfile
            startJavaprocess > logfile &
            tail -f logfile | while read line
            do
                if echo $line | grep -q 'Started'; then
                    echo 'Server Started'
                    exit 0
                fi
            done
        }
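
    The exit 0 only leaves the subshell the pipeline's while loop runs in, so tail -f keeps running and the function never returns. One common workaround (a sketch, assuming bash and that a line containing 'Started' marks success) is to let grep -q do the waiting instead of a loop:

        # grep -q exits on the first match; tail then receives SIGPIPE the next
        # time it writes a line and terminates on its own.
        startServer() {
            : > logfile
            startJavaprocess > logfile &
            tail -n +1 -f logfile | grep -q 'Started'
            echo 'Server Started'
        }

    If the server might go quiet right after the marker line, kill the tail explicitly (e.g. by PID) instead of relying on SIGPIPE.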

    Read the article

  • Application not releasing database connection Spring.net + NHibernate

    - by anupam3m
    Even after a successful transaction, the application's connection with the database persists. The NHibernate log shows:

        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - executing flush
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null) - registering flush begin
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null) - registering flush end
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - post flush
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - before transaction completion
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.ConnectionManager [(null)] <(null) - aggressively releasing database connection
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Connection.ConnectionProvider [(null)] <(null) - Closing connection
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - transaction completion
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Transaction.AdoTransaction [(null)] <(null) - running AdoTransaction.Dispose()
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.SessionImpl [(null)] <(null) - closing session
        2010-05-21 14:45:08,428 [Worker] [0] DEBUG NHibernate.Impl.BatcherImpl [(null)] <(null) - running BatcherImpl.Dispose(true)

    Underneath is my data configuration file -- Risco.Rsp.Ac.RMAC.Mapping, Risco.Rsp.Ac.Logging.Appenders -- Please help me out with this issue. Thanks.

    Read the article

  • Best practice -- Content Tracking Remote Data (cURL, file_get_contents, cron, et. al)?

    - by user322787
    I am attempting to build a script that will log data that changes every second. My initial thought was "just run a PHP file that does a cURL every second from cron" -- but I have a very strong feeling that this isn't the right way to go about it. Here are my specifications:

    - There are currently 10 sites I need to gather data from and log to a database -- this number will invariably increase over time, so the solution needs to be scalable.
    - Each site spits its data out to a URL every second, but only keeps 10 lines on the page, and it can sometimes spit out up to 10 lines each time, so I need to pick up that data every second to ensure I get all of it.
    - As I will also be writing this data to my own DB, there's going to be I/O every second of every day for a considerably long time.

    Barring magic, what is the most efficient way to achieve this? It might help to know that the data I am getting every second is very small, under 500 bytes.
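
    Given the one-second cadence, spawning a new process per fetch (cron + cURL) is mostly startup overhead; a single long-running poller is the usual alternative. A minimal sketch of that approach in Python (the URL list and save_rows() are hypothetical placeholders, and the 'requests' library is assumed):

        import time
        import requests

        URLS = [
            "http://example.com/feed-1",   # placeholder endpoints
            "http://example.com/feed-2",
        ]

        def save_rows(url, body):
            """Placeholder: parse the ~10 lines and insert the new ones into the DB."""
            pass

        while True:
            started = time.time()
            for url in URLS:
                try:
                    resp = requests.get(url, timeout=0.5)
                    save_rows(url, resp.text)
                except requests.RequestException:
                    pass  # log the failure and carry on; don't break the loop
            # keep roughly a one-second cycle regardless of how long the fetches took
            time.sleep(max(0.0, 1.0 - (time.time() - started)))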

    Read the article

  • How do I prevent WIX CAQuietExec from logging the command line?

    - by Noel Abrahams
    In order to prevent command windows from popping up during installation I am using the WiX built-in custom action CAQuietExec. First I define the command line:

        <CustomAction Id="A01" Property="QtExecCmdLine"
                      Value="&quot;MyExe.exe&quot; /password [PASSWORD]" />

    NB: The PASSWORD property is defined as Hidden. This prevents the Windows Installer from writing the property value to the log. Then I call into the embedded WiX extension:

        <CustomAction Id="A02" BinaryKey="WixCA" DllEntry="CAQuietExec"
                      Execute="immediate" Return="ignore" />

    This works fine. However, when I go to the temp folder and open up the MSI log I see the following entry:

        CAQuietExec: "C:\Program Files\MyExe.exe" /password INCLEARTEXT

    I.e. the password is displayed in clear text and not hidden. How do I prevent CAQuietExec from logging the password in clear text?

    Read the article

  • Help debugging Apache, Passenger and Rails problem

    - by Matt Dressel
    We have an environment running Apache, Passenger and Rails. The system handles most requests normally, yet certain requests do not make it to the Rails application. For instance, a request to /books is successful, but /books/1 hits Apache and Passenger and does not even make it to Rails. We set the Apache log level to debug and the Passenger log level to 3 so that we could monitor all incoming requests. We can see each request coming through, and even the /books/1 request is being handled by Passenger, but it never gets to Rails. Is there any way to determine where the request goes between Passenger and Rails, or where debugging information might live? Has anyone ever seen problems with Passenger spawning or queuing? We have spawning set to conservative. Also, we have had some permission/ownership problems in the past, so I am not ruling those out yet. Thanks in advance

    Read the article

  • Using a general class for execution with try/catch/finally?

    - by antirysm
    I find myself having a lot of this in different methods in my code:

        try
        {
            runABunchOfMethods();
        }
        catch (Exception ex)
        {
            logger.Log(ex);
        }

    What about creating this:

        public static class Executor
        {
            private static ILogger logger;

            public delegate void ExecuteThis();

            static Executor()
            {
                // logger = ...GetLoggerFromIoC();
            }

            public static void Execute(ExecuteThis executeThis)
            {
                try
                {
                    executeThis();
                }
                catch (Exception ex)
                {
                    logger.Log(ex);
                }
            }
        }

    And just using it like this:

        private void RunSomething()
        {
            Method1(someClassVar);
            Method2(someOtherClassVar);
        }

        ...

        Executor.Execute(RunSomething);

    Are there any downsides to this approach? (You could add Executor methods and delegates when you want a finally, and use generics for the type of Exception you want to catch...)

    Read the article

  • What system does before launching iPhone app's main() function?

    - by Eonil
    My app takes too much time to load, so I put an NSLog in the main() function to measure the loading time from the very start:

        int main(int argc, char *argv[]) {
            NSLog(@"main");
            NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
            int retVal = UIApplicationMain(argc, argv, nil, nil);
            [pool release];
            return retVal;
        }

    But the log is displayed really late: Default.png is shown for about 5 seconds, and the whole loading process completes within 1-2 seconds after the log appears. What is happening before the main() function executes in an iPhone app?

    Read the article

  • vba excel: do something every time a certain variable is changed

    - by every_answer_gets_a_point
    I'm doing a bunch of stuff to the variable St:

        For i = 1 To 30000
            Randomize
            e1 = Rnd
            e2 = Rnd
            z1 = Sqr(-2 * Log(e1)) * Cos(2 * 3.14 * e2)
            z2 = Sqr(-2 * Log(e1)) * Sin(2 * 3.14 * e2)
            St = So * Exp((r - (sigma ^ 2) / 2) * T + sigma * Sqr(T) * z1)
            C = C + Application.WorksheetFunction.Max(St - K, 0)
            St = So * Exp((r - (sigma ^ 2) / 2) * T - sigma * Sqr(T) * z1)
            C = C + Application.WorksheetFunction.Max(St - K, 0)
            St = So * Exp((r - (sigma ^ 2) / 2) * T + sigma * Sqr(T) * z2)
            C = C + Application.WorksheetFunction.Max(St - K, 0)
            St = So * Exp((r - (sigma ^ 2) / 2) * T - sigma * Sqr(T) * z2)
            C = C + Application.WorksheetFunction.Max(St - K, 0)
        Next i

    How do I get notified every time the variable changes?
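
    One common way to get a hook on every assignment (a sketch with hypothetical names, not tied to this workbook) is to move St behind a Property Let in a class module, so each write passes through one procedure you can log from or set a breakpoint in:

        ' Class module named "TrackedValue" (hypothetical).
        Private mValue As Double

        Public Property Let Value(ByVal newValue As Double)
            mValue = newValue
            Debug.Print "St changed to "; newValue   ' or write to a log sheet here
        End Property

        Public Property Get Value() As Double
            Value = mValue
        End Property

    In the loop you would then declare Dim St As New TrackedValue and assign with St.Value = ... instead of St = ....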

    Read the article

  • Generator speed in python 3

    - by Will
    Hello all, I am going through a link about generators that someone posted. In the beginning he compares the two functions below; on his setup he showed a speed increase of 5% with the generator. I'm running Windows XP, Python 3.1.1, and cannot seem to duplicate the results. I keep finding the "old way" (logs1) to be slightly faster when tested with the provided logs and up to 1 GB of duplicated data. Can someone help me understand what's happening differently? Thanks!

        def logs1():
            wwwlog = open("big-access-log")
            total = 0
            for line in wwwlog:
                bytestr = line.rsplit(None,1)[1]
                if bytestr != '-':
                    total += int(bytestr)
            return total

        def logs2():
            wwwlog = open("big-access-log")
            bytecolumn = (line.rsplit(None,1)[1] for line in wwwlog)
            getbytes = (int(x) for x in bytecolumn if x != '-')
            return sum(getbytes)

    Read the article

  • can Yahoo and Hotmail contacts api be used without leaving the site?

    - by Dr.Dredel
    I might be missing something, but I'm trying to implement a contacts-retrieval mechanism for Yahoo and Hotmail akin to the one Google offers. Both APIs seem to require the user to actually go to those sites to log in, and the documentation is really convoluted for both. I was hoping someone has done this and can point me to a simple way (if there is one) to allow the user to log in directly in my app, and then for me to go and fetch their contacts (preferably in XML, but JSON would also do nicely). I currently have a Perl script that fetches the Gmail contacts and works very nicely. I was (maybe wildly optimistically) hoping that Yahoo and Microsoft would have similarly useful mechanisms.

    Read the article

  • Error on installing xdebug via MacPorts

    - by Nareille
    I wanted to install Xdebug via MacPorts, trying the terminal command

        sudo port install php5-xdebug

    but after a while the installation breaks, giving me this error:

        --->  Configuring php5
        Error: org.macports.configure for port php5 returned: configure failure: command execution failed
        Error: Failed to install php5
        Please see the log file for port php5 for details:
            /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_tarballs_ports_lang_php5/php5/main.log
        Error: The following dependencies were not installed: php5
        To report a bug, follow the instructions in the guide:
            http://guide.macports.org/#project.tickets
        Error: Processing of port php5-xdebug failed

    I checked with phpinfo() that my version of PHP is 5.3.1. What am I missing? Thanks! (I'm on Mac OS X Lion, running Apache with XAMPP; I installed PEAR and PHPUnit successfully.)

    Read the article

  • sql server replication algorithm.

    - by reggie
    Does anyone know how the underlying replication model in SQL Server works? Does it essentially depend on UTC datetime values to determine whether something is new, or does it keep a table of all the changes (like a table of tableID + rowID pairs that have changed)? I am building my own "replication" system and was planning on using the dates to know what to replicate. Then I started wondering what would happen if the date got off on the computer for some reason. The obvious choice is to keep a log of the changes as you go and, once you replicate those changes, remove them from the log. But that's a lot of extra work compared to just checking dates. I figure if SQL Server replication works by just checking the dates, then that should be good enough for me. Any wisdom here? Thanks

    Read the article
