Search Results

Search found 23555 results on 943 pages for 'command timeout'.

Page 302 of 943

  • Issue using GAE appcfg.py

    - by JustSmith
    I get nothing out of appcfg.py besides the default output. I'm trying to upload some data to my development project with no luck at all. From the instructions on the Google App Engine page the steps are as follows: edit app.yaml, update with appcfg.py, make the upload script, upload with appcfg.py. After step one I try to run the update and it never shows any success. The following commands produce the same output: appcfg.py, appcfg.py update appDir, appcfg.py update appDir/, appcfg.py update /appDir. If I try to follow the instructions from the appcfg.py output ("help <action>") and type help upload, I get a response from the system, "This command is not supported by the help utility. Try "update /?".", because I'm calling the system help command. If I use the command appcfg.py help upload I get the same result as just typing appcfg.py. Can someone show me examples of the syntax to update the dev site, upload data to it, and get appcfg.py to actually give help on its commands? Also, I'm just assuming that the upload script and the .csv file being uploaded are in the myApp directory. Appreciate any help,

    Read the article
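
    A minimal sketch of the invocations the question above is after, assuming the application directory is myApp (containing app.yaml) and that the data is loaded with the Python SDK bulkloader; the config file, kind and URL are placeholders and the exact flags vary by SDK version:

    ```sh
    # Per-action help comes from appcfg.py itself, not the OS "help" command
    appcfg.py help update
    appcfg.py help upload_data

    # Deploy: the argument is the directory that contains app.yaml
    appcfg.py update myApp/

    # Upload CSV data via the bulkloader (requires remote_api in app.yaml)
    appcfg.py upload_data --config_file=bulkloader.yaml --filename=data.csv \
        --kind=MyKind --url=http://myapp.appspot.com/_ah/remote_api myApp/
    ```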

  • Java process is not terminating after starting an external process

    - by tangens
    On Windows I've started a program "async.cmd" with a ProcessBuilder like this: ProcessBuilder processBuilder = new ProcessBuilder( "async.cmd" ); processBuilder.redirectErrorStream( true ); processBuilder.start(); Then I read the output of the process in a separate thread like this: byte[] buffer = new byte[ 8192 ]; while( !interrupted() ) { int available = m_inputStream.available(); if( available == 0 ) { Thread.sleep( 100 ); continue; } int len = Math.min( buffer.length, available ); len = m_inputStream.read( buffer, 0, len ); if( len == -1 ) { throw new CX_InternalError(); } String outString = new String( buffer, 0, len ); m_output.append( outString ); } Now it happened that the content of the file "async.cmd" was this: REM start a command window start cmd /k The process that started this external program terminated (process.waitFor() returned the exit value). Then I sent a readerThread.interrupt() to the reader thread and the thread terminated, too. But there was still a thread running that wasn't terminating. This thread kept my Java application running even after it exited its main method. With the debugger (Eclipse) I wasn't able to suspend this thread. After I quit the opened command window, my Java program exited, too. Question: How can I quit my Java program while the command window stays open?

    Read the article
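
    The lingering thread is most likely tied to the pipe handles that the spawned cmd window inherited, so the question above has no clean in-JVM answer; a hedged workaround (both the stream-closing and the explicit exit are assumptions to test, not a guaranteed fix) is to release the child's streams once waitFor() returns and then end the JVM explicitly:

    ```java
    import java.io.IOException;

    public class AsyncLauncher {
        public static void main(String[] args) throws IOException, InterruptedException {
            Process process = new ProcessBuilder("async.cmd")
                    .redirectErrorStream(true)
                    .start();

            // ... pump process.getInputStream() in a reader thread, as in the question ...

            int exitCode = process.waitFor();

            // Release the pipe handles the detached cmd window may still hold open.
            process.getOutputStream().close();
            process.getInputStream().close();
            process.getErrorStream().close();

            // If a stray non-daemon thread still pins the JVM, end it explicitly.
            System.exit(exitCode);
        }
    }
    ```

    The key design point is not to wait for the reader thread to reach end-of-file, since the inherited handle may never be closed while the window stays open.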

  • Compiling and using NTL c++ library for Windows

    - by Martin Lauridsen
    Hi there, I have compiled the NTL infinite-precision integer arithmetic library for C++, using Microsoft Visual Studio 2008. I did as explained on this site, using the Visual Studio interface rather than the command prompt. Actually I would rather do it from the command prompt, but I was not sure how to. Anyhow, I got the library compiled, and I now want to compile a program using the library, from the command prompt. The program I am trying to compile has been tested on a Linux system, where I compile it with the following: c++ -I/appl/htopopt/Linux_x86_64/NTL-5.4.2/include mpqs.cpp main.cpp -o main -L/appl/htopopt/Linux_x86_64/NTL-5.4.2/lib -lntl -L/appl/htopopt/Linux_x86_64/gmp-4.2.1/lib -lgmp -lm Never mind the gmp stuff; I don't have that installed on Windows. It is purely an optional thing that will make NTL run faster. Anyhow, this works fine on Linux. Now on Windows I write the following: cl /EHsc /I D:\Downloads\WinNTL-5_5_2\include mpqs.cpp main.cpp /link /LIBPATH:"D:\Documents\Visual Studio 2008\Projects\ntl\Debug" But this results in the following errors: mpqs.cpp mpqs.cpp(38) : error C2039: 'find_smooth_vals' : is not a member of 'QS' d:\desktop\qs\mpqs.h(12) : see declaration of 'QS' mpqs.cpp(41) : error C2065: 'M' : undeclared identifier mpqs.cpp(41) : error C2065: 'n' : undeclared identifier mpqs.cpp(42) : error C2065: 'sieve_table' : undeclared identifier mpqs.cpp(42) : error C2228: left of '.size' must have class/struct/union type is ''unknown-type'' mpqs.cpp(43) : error C2065: 'sieve_table' : undeclared identifier mpqs.cpp(44) : error C2065: 'qx_table' : undeclared identifier mpqs.cpp(44) : error C3861: 'test_smoothness': identifier not found mpqs.cpp(45) : error C2065: 'smooth_indices' : undeclared identifier mpqs.cpp(45) : error C2228: left of '.push_back' must have class/struct/union type is ''unknown-type'' main.cpp Generating Code... It is as if my mpqs.h file is not included in the compilation process. Also, I don't understand why it complains about .push_back() for a vector type. Help is much appreciated!

    Read the article
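
    One detail worth noticing in the error output above: the compiler reports the QS declaration from d:\desktop\qs\mpqs.h, not from the directory being compiled, which suggests a stale or different header is being picked up. A hedged way to confirm which headers cl actually opens is /showIncludes (a standard MSVC flag); the ntl.lib name is an assumption and depends on how the Visual Studio project named its output:

    ```bat
    rem List every header the compiler resolves, to spot a stray mpqs.h
    cl /EHsc /showIncludes /I. /I D:\Downloads\WinNTL-5_5_2\include mpqs.cpp main.cpp ^
       /link /LIBPATH:"D:\Documents\Visual Studio 2008\Projects\ntl\Debug" ntl.lib
    ```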

  • HTTP Content-type header for cached files

    - by Brian
    Hello, Using Apache with mod_rewrite, when I load a .css or .js file and view the HTTP headers, the Content-type is only set correctly the first time I load it - subsequent refreshes are missing Content-type altogether and it's creating some problems for me. Specifically, gzip is not compressing these files. I can get around this by appending a random query string value to the end of each filename, eg. http://www.site.com/script.js?12345 However, I don't want to have to do that, since caching is good and all I want is for the Content-type to be present. I've tried using a RewriteRule to force the type but still didn't solve the problem. Any ideas? Thanks, Brian More Details: HTTP headers WITHOUT random query string value: http://localhost/script.js GET /script.js HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://localhost/ Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5; If-Modified-Since: Thu, 29 Apr 2010 15:49:56 GMT If-None-Match: "3440e9-119ed-485621404f100" Cache-Control: max-age=0 HTTP/1.1 304 Not Modified Date: Thu, 29 Apr 2010 20:19:44 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Connection: Keep-Alive Keep-Alive: timeout=5, max=100 Etag: "3440e9-119ed-485621404f100" Vary: Accept-Encoding X-Pad: avoid browser bug HTTP headers WITH random query string value: http://localhost/script.js?c947344de8278053f6edbb4365550b25 GET /script.js?c947344de8278053f6edbb4365550b25 HTTP/1.1 Host: localhost User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.6; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 Accept: */* Accept-Language: en-us,en;q=0.5 Accept-Encoding: gzip,deflate Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7 Keep-Alive: 115 Connection: keep-alive Referer: http://localhost/ Cookie: PHPSESSID=ke3p35v5qbus24che765p9jni5; HTTP/1.1 200 OK Date: Thu, 29 Apr 2010 20:14:40 GMT Server: Apache/2.2.14 (Unix) mod_ssl/2.2.14 OpenSSL/0.9.8l DAV/2 PHP/5.3.1 Last-Modified: Thu, 29 Apr 2010 15:49:56 GMT Etag: "3440e9-119ed-485621404f100" Accept-Ranges: bytes Vary: Accept-Encoding Content-Encoding: gzip Content-Length: 24605 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: application/javascript

    Read the article

  • Can I get a reference to a pending transaction from a SqlConnection object?

    - by Rune
    Hey, Suppose someone (other than me) writes the following code and compiles it into an assembly: using (SqlConnection conn = new SqlConnection(connString)) { conn.Open(); using (var transaction = conn.BeginTransaction()) { /* Update something in the database */ /* Then call any registered OnUpdate handlers */ InvokeOnUpdate(conn); transaction.Commit(); } } The call to InvokeOnUpdate(IDbConnection conn) calls out to an event handler that I can implement and register. Thus, in this handler I will have a reference to the IDbConnection object, but I won't have a reference to the pending transaction. Is there any way in which I can get a hold of the transaction? In my OnUpdate handler I want to execute something similar to the following: private void MyOnUpdateHandler(IDbConnection conn) { var cmd = conn.CreateCommand(); cmd.CommandText = someSQLString; cmd.CommandType = CommandType.Text; cmd.ExecuteNonQuery(); } However, the call to cmd.ExecuteNonQuery() throws an InvalidOperationException complaining that "ExecuteNonQuery requires the command to have a transaction when the connection assigned to the command is in a pending local transaction. The Transaction property of the command has not been initialized". Can I in any way enlist my SqlCommand cmd with the pending transaction? Can I retrieve a reference to the pending transaction from the IDbConnection object (I'd be happy to use reflection if necessary)?

    Read the article
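
    There is no public way to pull the SqlTransaction back out of a SqlConnection, so one hedged alternative for the situation above (it does assume the outer code can be changed) is to move the outer code to System.Transactions: commands on an enlisted connection join the ambient transaction automatically, so a handler that only receives the connection no longer needs its Transaction property set. A sketch:

    ```csharp
    using System.Data.SqlClient;
    using System.Transactions;

    class AmbientTransactionSketch
    {
        static void Update(string connString, string someSQLString)
        {
            // Everything opened inside the scope auto-enlists in one transaction.
            using (var scope = new TransactionScope())
            using (var conn = new SqlConnection(connString))
            {
                conn.Open();

                /* Update something in the database, then call handlers: */
                var cmd = conn.CreateCommand();
                cmd.CommandText = someSQLString;
                cmd.ExecuteNonQuery();   // no cmd.Transaction assignment required

                scope.Complete();        // commit; omitting this rolls back
            }
        }
    }
    ```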

  • HTTP caching headers: how should must-revalidate work?

    - by Bobby Jack
    Using trac, I'm getting a response with the following header: Cache-control: must-revalidate Moreover, no 'Expires' header is being sent. Our local proxy, however, is caching these responses, so when an edit is made, pages need to be 'hard refreshed' to update. Is the proxy misbehaving? Other headers that might be relevant: Connection Keep-Alive Proxy-Connection Keep-Alive Keep-Alive timeout=15, max=100

    Read the article
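
    For what it's worth, must-revalidate on its own only governs what a cache does once an entry is stale; with no max-age and no Expires, the proxy may apply its own freshness heuristic, so it is not necessarily misbehaving. A hedged Apache snippet (assuming mod_headers sits in front of trac, and the /trac path is a placeholder) that forces revalidation on every request:

    ```apache
    <LocationMatch "^/trac">
        # max-age=0 makes every cached copy stale immediately, so must-revalidate applies
        Header set Cache-Control "max-age=0, must-revalidate"
    </LocationMatch>
    ```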

  • How to override the default init.tcl

    - by Sean Murphy
    I'm working on a project where I want to make use of Tcl as the command interpreter. I have a working C library object which I can load from within the Tcl shell, but my problem is finding a way to do this automatically while starting a tclsh. Essentially my ultimate goal is to be able to run a script and have it load my library and run some initial startup Tcl code before dropping me back to the tclsh command prompt in interactive mode. e.g. tclsh -f myscript.tcl --then-switch-to-interactive or EXPORT TCLINIT=myscript.tcl tclsh The basic goal is to avoid having to distribute tclsh but rather rely on local user installations of Tcl. All I would like to distribute is my library, a startup script and a shell command to launch the tclsh with the library preloaded. I've tried using the environment variables TCLINIT and TCL_LIBRARY but they seem to have no effect. The only workable solutions I've found so far are to add "source myscript.tcl" to either the end of /usr/share/tcltk/tcl8.5.init.tcl or ~/.tclshrc However both of these "solutions" are imperfect as they require modification of the default user's workspace. It strikes me that there must be a way to handle this in Tcl, but my research so far hasn't yielded anything. Does anyone have any suggestions?

    Read the article
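
    One hedged workaround that needs no changes to init.tcl or ~/.tclshrc: make the distributed startup script load the library and then run its own small read-eval-print loop, so tclsh myscript.tcl ends at an interactive-looking prompt. A minimal sketch (the library name is a placeholder, and this loop does not handle multi-line commands):

    ```tcl
    # myscript.tcl -- load the extension, run startup code, then offer a prompt
    load ./mylib.so            ;# placeholder for the C library object
    # ... other startup code here ...

    while {![eof stdin]} {
        puts -nonewline "% "
        flush stdout
        if {[gets stdin line] < 0} { break }
        if {[catch {uplevel #0 $line} result]} {
            puts stderr $result
        } elseif {$result ne ""} {
            puts $result
        }
    }
    ```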

  • Qt socket blocking functions must run in the QThread where the socket was created. Any way past this?

    - by Alexander Kondratskiy
    The title is very cryptic, so here goes! I am writing a client that behaves in a very synchronous manner. Due to the design of the protocol and the server, everything has to happen sequentially (send request, wait for reply, service reply etc.), so I am using blocking sockets. Here is where Qt comes in. In my application I have a GUI thread, a command processing thread and a scripting engine thread. I create the QTcpSocket in the command processing thread, as part of my Client class. The Client class has various methods that boil down to writing to the socket, reading back a specific number of bytes, and returning a result. The problem comes when I try to directly call Client methods from the scripting engine thread. The Qt sockets randomly time out and when using a debug build of Qt, I get these warnings: QSocketNotifier: socket notifiers cannot be enabled from another thread QSocketNotifier: socket notifiers cannot be disabled from another thread Anytime I call these methods from the command processing thread (where Client was created), I do not get these problems. To simply phrase the situation: Calling blocking functions of QAbstractSocket, like waitForReadyRead(), from a thread other than the one where the socket was created (dynamically allocated), causes random behaviour and debug asserts/warnings. Anyone else experienced this? Ways around it? Thanks in advance.

    Read the article
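
    A common way around the constraint above is to keep every socket call in the thread that owns the QTcpSocket and let other threads call into it through a queued, blocking invocation. A hedged sketch, assuming Client is a QObject living in the command-processing thread and sendRequest is a slot (or Q_INVOKABLE) that does the blocking socket I/O; note that Qt::BlockingQueuedConnection deadlocks if caller and target share a thread:

    ```cpp
    #include <QObject>
    #include <QByteArray>
    #include <QMetaObject>

    // Called from the scripting-engine thread; runs in the Client's own thread.
    QByteArray callClient(QObject *client, const QByteArray &request)
    {
        QByteArray reply;
        QMetaObject::invokeMethod(client, "sendRequest",
                                  Qt::BlockingQueuedConnection,  // wait for the owning thread
                                  Q_RETURN_ARG(QByteArray, reply),
                                  Q_ARG(QByteArray, request));
        return reply;
    }
    ```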

  • Build number increment not reflected in AssemblyVersion

    - by awshepard
    I've browsed through some of the discussion on auto-incrementing build numbers, but in the impatience of youth decided to roll my own and re-invent the wheel. I know there are probably better ways to go about this (which I'm definitely going to investigate), but my question centers more around the Assembly and/or Version classes. My approach was to write a separate exe (BuildIncrementer) that takes a command line parameter for the file name, does a regex match on the contents to grab the [assembly: AssemblyVersion...] string, makes the modifications that I want (increment the build number, etc.), then writes the contents back to the file. This approach works as-is. The next thing I did was, in the project that I wanted to use this on, set up a pre-build command line that simply executes that BuildIncrementer.exe on the project's AssemblyInfo.cs file. This too works, updating the assembly info as desired. The problem comes when I run the project: it sends an email containing the current version, obtained with Assembly.GetExecutingAssembly().GetName().Version.ToString(). BUT, the version showing up is the previous version. When my AssemblyInfo.cs says [assembly: AssemblyVersion("1.0.2.49667")], I get sent 1.0.1.45660, which was the previous build. Anyone have any ideas why that might be?

    Read the article
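
    Not a direct answer to why the pre-build step lags one version behind, but worth noting as a design alternative: if the exact numbering scheme is not important, the C# compiler can generate the build and revision components itself with the documented wildcard form, which removes the external incrementer entirely:

    ```csharp
    // AssemblyInfo.cs -- the compiler fills in build (days since 2000-01-01)
    // and revision (seconds since midnight / 2) on every compile.
    [assembly: System.Reflection.AssemblyVersion("1.0.*")]
    ```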

  • How do I exclude data from local table schema_migrations from being pushed to Heroku DB?

    - by Thierry Lam
    I was able to push my Ruby on Rails app with MySQL (local dev) to the Heroku server, along with migrating my model with the command heroku rake db:migrate. I have also read the documentation on Database Import/Export. Is that doc referring to pushing actual data from my local dev DB to whichever database Heroku uses? Do I need to modify anything in the file database.yml to make it happen? I ran the following command: heroku db:push and I am getting the error: Sending data 2 tables, 3 records !!! Caught Server Exception | ETA: --:--:-- Taps Server Error: PGError ERROR: duplicate key value violates unique constraint "unique_schema_migrations" I have 2 tables, one I created for my app and the other schema_migrations. The total number of entries among the 2 tables is 3. I'm also printing the number of entries I have in the table I created, and it's showing 0. Any ideas what I might be missing or what I am doing wrong? EDIT: I figured out the above; Heroku's DB already has schema_migrations the moment I run migrate. New question: does anyone know how I can exclude data from a specific table from being pushed to the Heroku DB? The table to exclude in this case will be schema_migrations. Not so good solution: I googled around and someone else was having the same issue. He suggested renaming the schema_migrations table to zschema_migrations. In this way data from the other tables will be pushed properly until it fails on the last table. It's a pretty bad solution, but it will do for the time being. A better solution would be to use an existing Rails command which can reset a specific table in a database. I don't think Rake can do that.

    Read the article

  • sudo taking long time

    - by Sam
    On an Ubuntu 9 64-bit Linux machine, sudo takes a long time to start. "sudo echo hi" takes 2-3 minutes. strace on sudo shows that poll("/etc/pam.d/system-auth", POLLIN) times out after 5 seconds, and there are multiple calls (maybe a loop) to the same system call, which causes the 2-3 minute delay. Any idea why sudo has to wait for /etc/pam.d/system-auth? Any tunable to make sudo time out faster? Thanks Samuel

    Read the article

  • Reducing apache VIRT and RES memory usage

    - by lisa
    On a quad-core server with 8GB of RAM I have Apache processes that use up to 2.3GB RES memory and 2.6GB VIRT memory. Here is a copy of the top -c command: http://imgur.com/x8Lq9.png Is there a way to reduce the memory usage of these Apache processes? These are my httpd.conf settings: Timeout 160 TraceEnable Off ServerSignature Off ServerTokens ProductOnly FileETag None StartServers 6 <IfModule prefork.c> MinSpareServers 4 MaxSpareServers 16 </IfModule> ServerLimit 400 MaxClients 320 MaxRequestsPerChild 10000 KeepAlive On KeepAliveTimeout 4 MaxKeepAliveRequests 80

    Read the article
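
    With mod_php under the prefork MPM, each child keeps the peak memory of the heaviest request it has served, so the usual levers are recycling children sooner (a lower MaxRequestsPerChild) and capping MaxClients to what the installed RAM can actually hold. A hedged variation on the settings quoted above; the numbers are illustrative, not tuned for this particular workload:

    ```apache
    <IfModule prefork.c>
        StartServers         6
        MinSpareServers      4
        MaxSpareServers     16
        ServerLimit        150
        MaxClients         150
        # Recycle children more often so a bloated process returns memory sooner
        MaxRequestsPerChild 1000
    </IfModule>
    ```

    A rough sizing rule is MaxClients ≈ (RAM available to Apache) / (average per-child resident size).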

  • Error connecting to remote Oracle XE database using ASP.NET

    - by imsatasia
    Hello, I have installed Oracle XE on my development machine and it is working fine. Then I installed the Oracle XE client on my test machine, which is also working fine, and I can access the development PC's database from the browser. Now I want to create an ASP.NET application which can access that Oracle XE database. I tried it too, but it always shows me an error on my test machine when connecting to the database on the development machine using ASP.NET. Here is my code for the ASP.NET application: protected void Page_Load(object sender, EventArgs e) { string connectionString = GetConnectionString(); OracleConnection connection = new OracleConnection(connectionString); connection.Open(); Label1.Text = "State: " + connection.State; Label1.Text = "ConnectionString: " + connection.ConnectionString; OracleCommand command = connection.CreateCommand(); string sql = "SELECT * FROM Users"; command.CommandText = sql; OracleDataReader reader = command.ExecuteReader(); while (reader.Read()) { string myField = (string)reader["nID"]; Console.WriteLine(myField); } } static private string GetConnectionString() { // To avoid storing the connection string in your code, // you can retrieve it from a configuration file. return "User Id=System;Password=admin;Data Source=(DESCRIPTION=" + "(ADDRESS=(PROTOCOL=TCP)(HOST=myServerAddress)(PORT=1521))" + "(CONNECT_DATA=(SERVICE_NAME=)));"; }

    Read the article
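
    One thing that stands out in the code above is the empty SERVICE_NAME in the connect descriptor; on Oracle Express the default service name is XE. A hedged version of GetConnectionString (the host is still a placeholder, and this keeps the question's System account and provider as-is):

    ```csharp
    static private string GetConnectionString()
    {
        // SERVICE_NAME=XE is the Oracle XE default; replace the host with the
        // development machine's name or IP address.
        return "User Id=System;Password=admin;Data Source=(DESCRIPTION=" +
               "(ADDRESS=(PROTOCOL=TCP)(HOST=dev-machine)(PORT=1521))" +
               "(CONNECT_DATA=(SERVICE_NAME=XE)));";
    }
    ```

    Separately, the two consecutive Label1.Text assignments overwrite each other, so only the connection string is ever displayed.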

  • How to check the backtrace of a "USER process" in the Linux Kernel Crash Dump

    - by Biswajit
    I was trying to debug a user process in a Linux kernel crash dump. The normal steps to get to the crash dump are: Go to the path where the dump is located. Use the command crash kernel_link dump.201104181135, where kernel_link is a soft link I have created to the vmlinux image. Now you will be at the crash prompt. If you run the command foreach <PID of the process> bt, e.g.: crash> foreach 6920 bt PID: 6920 TASK: ffff88013caaa800 CPU: 1 COMMAND: "climmon" #0 [ffff88012d2cd9c8] schedule at ffffffff8130b76a #1 [ffff88012d2cdab0] schedule_timeout at ffffffff8130bbe7 #2 [ffff88012d2cdb50] schedule_timeout_uninterruptible at ffffffff8130bc2a #3 [ffff88012d2cdb60] __alloc_pages_nodemask at ffffffff810b9e45 #4 [ffff88012d2cdc60] alloc_pages_current at ffffffff810e1c8c #5 [ffff88012d2cdc90] __page_cache_alloc at ffffffff810b395a #6 [ffff88012d2cdcb0] __do_page_cache_readahead at ffffffff810bb592 #7 [ffff88012d2cdd30] ra_submit at ffffffff810bb6ba #8 [ffff88012d2cdd40] filemap_fault at ffffffff810b3e4e #9 [ffff88012d2cdda0] __do_fault at ffffffff810caa5f #10 [ffff88012d2cde50] handle_mm_fault at ffffffff810cce69 #11 [ffff88012d2cdf00] do_page_fault at ffffffff8130f560 #12 [ffff88012d2cdf50] page_fault at ffffffff8130d3f5 RIP: 00007fd02b7e9071 RSP: 0000000040e86ea0 RFLAGS: 00010202 RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fd02b7e9071 RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000040e86ec0 RBP: 0000000040e87140 R8: 0000000000000800 R9: 0000000000000000 R10: 0000000000000000 R11: 0000000000000202 R12: 00007fff16ec43d0 R13: 00007fd02bcadf00 R14: 0000000040e87950 R15: 0000000000001000 ORIG_RAX: ffffffffffffffff CS: 0033 SS: 002b If you check the above backtrace, it shows the kernel functions used for scheduling/handling the page fault but not the functions that were executed in the user process (here e.g. climmon). So I am not able to debug this process, as I am not able to see the functions executed in that process. Can anyone help me with this case?

    Read the article

  • How to put large text data (~20MB) into a SQL CE 3.5 database?

    - by Anindya Chatterjee
    I am using the following query to insert some large text data: internal static string InsertStorageItem = "insert into Storage(FolderName, MessageId, MessageDate, StorageData) values ('{0}', '{1}', '{2}', @StorageData)"; and the code I am using to execute this query is as follows: string content = "very very large data"; string query = string.Format(InsertStorageItem, "Inbox", "AXOGTRR1445/DSDS587444WEE", "4/19/2010 11:11:03 AM"); var command = new SqlCeCommand(query, _sqlConnection); var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText); paramData.Value = content; paramData.SourceColumn = "StorageData"; command.ExecuteNonQuery(); But at the last line I get the following error: System.Data.SqlServerCe.SqlCeException was unhandled by user code Message=The data was truncated while converting from one data type to another. [ Name of function(if known) = ] Source=SQL Server Compact ADO.NET Data Provider HResult=-2147467259 NativeError=25920 StackTrace: at System.Data.SqlServerCe.SqlCeCommand.ProcessResults(Int32 hr) at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommandText(IntPtr& pCursor, Boolean& isBaseTableCursor) at System.Data.SqlServerCe.SqlCeCommand.ExecuteCommand(CommandBehavior behavior, String method, ResultSetOptions options) at System.Data.SqlServerCe.SqlCeCommand.ExecuteNonQuery() at Chithi.Client.Exchange.ExchangeClient.SaveItem(Item item, Folder parentFolder) at Chithi.Client.Exchange.ExchangeClient.DownloadNewMails(Folder folder) at Chithi.Client.Exchange.ExchangeClient.SynchronizeParentChildFolder(WellKnownFolder wellknownFolder, Folder parentFolder) at Chithi.Client.Exchange.ExchangeClient.SynchronizeFolders() at Chithi.Client.Exchange.ExchangeClient.WorkerThreadDoWork(Object sender, DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.OnDoWork(DoWorkEventArgs e) at System.ComponentModel.BackgroundWorker.WorkerThreadStart(Object argument) InnerException: Now my question is: how am I supposed to insert such large data into a SQL CE database? Regards, Anindya Chatterjee http://abstractclass.org

    Read the article
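
    A hedged thing to try for the truncation error above: give the ntext parameter an explicit Size so the provider does not bind the value as a short character type. Whether that is the actual cause here is an assumption, but it costs one line to test:

    ```csharp
    var command = new SqlCeCommand(query, _sqlConnection);

    // Bind as ntext with an explicit size; without a size the provider may
    // infer a limited nvarchar and truncate the large string.
    var paramData = command.Parameters.Add("@StorageData", System.Data.SqlDbType.NText);
    paramData.Size = content.Length;
    paramData.Value = content;

    command.ExecuteNonQuery();
    ```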

  • How To Find Reasons Why a Site Goes Online/Offline

    - by HollerTrain
    It seems a website I manage has been going online and offline throughout the entire day today. I have no idea what is causing the issue, so I am seeking guidance on where to start. It is a Wordpress based site. So here is what I DO know: I use a program that pings the server every minute, and when the server is not responding it emails me, so I know exactly when the site is online and offline. The site was going up and down between 8pm and 12pm on 12.28, and around the 1am hour early in the morning on 12.29 (New York City timezone; all times below are in the same timezone). At the time of the ups/downs I see a lot of strain on the memory usage. Look at the load average when the site is going online/offline (http://screencast.com/t/BRlfXkqrbJII). Then I ran this command to restart http (http://screencast.com/t/usVtYWZ2Qi) and the memory usage then goes down to this (http://screencast.com/t/VdTIy3bgZiQB). An hour after I restarted http, the site went offline/online again, so restarting http didn't help much. When the site was going offline/online, I ran the top command and got this (http://screencast.com/t/zEwr7YQj3). Here is a top command when the site is at its lowest (http://screencast.com/t/eaMfha9lbT - so this would be dubbed "normal"). Here is a bandwidth report (http://screencast.com/t/AS0h2CH1Gypq). The traffic doesn't seem to be that much (http://screencast.com/t/s7hrWNNic1K), but looking at the times the site is going up/down, this may be one of the reasons? I have the dvp Nitro package at Media Temple (http://mediatemple.net/webhosting/nitro/). So at this point I would request some help in trying to figure out what the cause of this is, and how I can go about pinpointing this issue. ANY HELP is greatly appreciated.

    Read the article

  • Apache/2.2.20 (Ubuntu 11.10) gzip compression won't work on php pages, content is chunked

    - by FamousInteractive
    I'm running into a problem with a new production server to which I'm transferring projects. The HTML output of the PHP applications isn't compressed by the Apache mod_deflate module. Other resources, such as stylesheet and javascript files, and even static html pages served with the same Content-Type (text/html) as the PHP output, are compressed! The projects use the following rules (from HTML5 Boilerplate) in the .htaccess: <IfModule mod_deflate.c> # Force deflate for mangled headers developer.yahoo.com/blogs/ydn/posts/2010/12/pushing-beyond-gzipping/ <IfModule mod_setenvif.c> <IfModule mod_headers.c> SetEnvIfNoCase ^(Accept-EncodXng|X-cept-Encoding|X{15}|~{15}|-{15})$ ^((gzip|deflate)\s*,?\s*)+|[X~-]{4,13}$ HAVE_Accept-Encoding RequestHeader append Accept-Encoding "gzip,deflate" env=HAVE_Accept-Encoding </IfModule> </IfModule> # HTML, TXT, CSS, JavaScript, JSON, XML, HTC: <IfModule filter_module> FilterDeclare COMPRESS FilterProvider COMPRESS DEFLATE resp=Content-Type $text/html FilterProvider COMPRESS DEFLATE resp=Content-Type $text/css FilterProvider COMPRESS DEFLATE resp=Content-Type $text/plain FilterProvider COMPRESS DEFLATE resp=Content-Type $text/xml FilterProvider COMPRESS DEFLATE resp=Content-Type $text/x-component FilterProvider COMPRESS DEFLATE resp=Content-Type $application/javascript FilterProvider COMPRESS DEFLATE resp=Content-Type $application/json FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/xhtml+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/rss+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/atom+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $application/vnd.ms-fontobject FilterProvider COMPRESS DEFLATE resp=Content-Type $image/svg+xml FilterProvider COMPRESS DEFLATE resp=Content-Type $image/x-icon FilterProvider COMPRESS DEFLATE resp=Content-Type $application/x-font-ttf FilterProvider COMPRESS DEFLATE resp=Content-Type $font/opentype FilterChain COMPRESS FilterProtocol COMPRESS DEFLATE change=yes;byteranges=no </IfModule> </IfModule> We have a testing machine that runs the same Apache, OS and PHP version. On that machine the compression works just fine on the PHP output. I've checked and compared the Apache and PHP config files; they are all the same as far as I can tell. I've tried several ways of outputting the content from PHP, using output buffering or just plain echoing the content. Same thing, no compression. Example response headers of the PHP output: HTTP/1.1 200 OK Date: Wed, 25 Apr 2012 23:30:59 GMT Server: Apache Accept-Ranges: bytes Expires: Thu, 19 Nov 1981 08:52:00 GMT Cache-Control: public Pragma: no-cache Vary: User-Agent Keep-Alive: timeout=5, max=98 Connection: Keep-Alive Transfer-Encoding: chunked Content-Type: text/html; charset=utf-8 Example response headers of a css file: HTTP/1.1 200 OK Date: Wed, 25 Apr 2012 23:30:59 GMT Server: Apache Last-Modified: Mon, 04 Jul 2011 19:12:36 GMT Vary: Accept-Encoding,User-Agent Content-Encoding: gzip Cache-Control: public Expires: Fri, 25 May 2012 23:30:59 GMT Content-Length: 714 Keep-Alive: timeout=5, max=100 Connection: Keep-Alive Content-Type: text/css; charset=utf-8 Does anyone have a clue, or has anyone experienced the same "problem"? Thanks!

    Read the article
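
    A hedged simplification to try for the problem above: bypass the mod_filter COMPRESS chain for dynamic output and let mod_deflate key off the response Content-Type directly, which removes the FilterProtocol/chunked-response interaction from the picture. The directives are standard Apache 2.2; whether this resolves this particular server's behaviour is the assumption to test:

    ```apache
    <IfModule mod_deflate.c>
        AddOutputFilterByType DEFLATE text/html text/plain text/xml text/css application/javascript application/json
    </IfModule>
    ```

    If that compresses the PHP output, the difference between the two servers is somewhere in the mod_filter configuration rather than in PHP.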

  • memcached and PHP ... massive lag with sessions

    - by Ben Dauphinee
    I'm working on a new server built with Ubuntu 10.04, running php-fastcgi, nginx, and memcached. A phpinfo() script loads and works great, same as a test memcached script. For any script using sessions, page load time rockets through the roof. --- memcached.ini --- extension=memcached.so memcache.hash_strategy = "consistent" memcache.max_failover_attempts = 100 memcache.allow_failover = 1 session.save_handler = memcached session.save_path = "tcp://127.0.0.1:11211?persistent=1&weight=1&timeout=1&retry_interval=15" Let me know if you need to see any other configs.

    Read the article
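
    One likely culprit in the ini above: it mixes the two PHP memcache extensions. session.save_handler = memcached (the libmemcached-based extension) expects a plain host:port save_path, while the tcp://...?persistent=... form (and the memcache.* settings) belong to the older memcache handler, so the session handler may be failing to connect and sitting in timeouts. A hedged correction, assuming the memcached extension is the one actually loaded:

    ```ini
    ; memcached.ini
    extension = memcached.so

    session.save_handler = memcached
    session.save_path    = "127.0.0.1:11211"
    ```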

  • Unknown user 'app' with capistrano

    - by trobrock
    This is my first time trying to set up Capistrano to deploy a Rails application. I am deploying from my local machine to my remote server, which has the repo, web, app, and mysql servers all on the same machine. I am following this walk-through: http://www.capify.org/index.php/From_The_Beginning I get to the command cap deploy:start Then I get this error: *** [err :: example.com] sudo: unknown user: app command finished failed: "sh -c 'cd /var/www/example/current && sudo -p '\\''sudo password: '\\'' -u app nohup script/spin'" on example.com Am I supposed to add an 'app' user, or is there a way of changing what user the command runs as? This is my deploy.rb: set :application, "example" set :repository, "[email protected]:example.git" set :user, "trobrock" set :branch, 'master' set :deploy_to, "/var/www/example" set :scm, :git # Or: `accurev`, `bzr`, `cvs`, `darcs`, `git`, `mercurial`, `perforce`, `subversion` or `none` role :web, "example.com" # Your HTTP server, Apache/etc role :app, "example.com" # This may be the same as your `Web` server role :db, "example.com", :primary => true # This is where Rails migrations will run And obviously everywhere it says example.com is my server's hostname, and everywhere it just says example is the app name.

    Read the article
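
    The "unknown user: app" in the error above comes from Capistrano 2's deploy:start/stop/restart tasks, which run commands as the :runner user; its default is "app". A hedged addition to deploy.rb that keeps the rest of the recipe as quoted:

    ```ruby
    # deploy.rb
    set :user,   "trobrock"
    set :runner, "trobrock"   # deploy:start does `sudo -u #{runner} ...`; the default runner is "app"
    # or skip sudo entirely if the login user owns the app:
    # set :use_sudo, false
    ```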

  • Does Java 6 open a default port for JMX remote connections?

    - by Bob Cross
    My specific question has to do with JMX as used in JDK 1.6: if I am running a Java process using JRE 1.6 with com.sun.management.jmxremote in the command line, does Java pick a default port for remote JMX connections? Backstory: I am currently trying to develop a procedure to give to a customer that will enable them to connect to one of our processes via JMX from a remote machine. The goal is to facilitate their remote debugging of a situation occurring on a real-time display console. Because of their service level agreement, they are strongly motivated to capture as much data as possible and, if the situation looks too complicated to fix quickly, to restart the display console and allow it to reconnect to the server-side. I am aware that I could run jconsole on JDK 1.6 processes and jvisualvm on post-JDK 1.6.7 processes given physical access to the console. However, because of the operational requirements and people problems involved, we are strongly motivated to grab the data that we need remotely and get them up and running again. EDIT: I am aware of the command line port property com.sun.management.jmxremote.port=portNum The question that I am trying to answer is: if you do not set that property at the command line, does Java pick another port for remote monitoring? If so, how could you determine what it might be?

    Read the article
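
    To the direct question above: com.sun.management.jmxremote on its own only enables the local (same-machine, same-user) attach connector; no fixed TCP port is opened for remote use, so for the customer procedure the port has to be stated explicitly. A typical hedged command line (the jar name is a placeholder, and authentication/SSL are disabled here purely for illustration):

    ```sh
    java -Dcom.sun.management.jmxremote \
         -Dcom.sun.management.jmxremote.port=9010 \
         -Dcom.sun.management.jmxremote.authenticate=false \
         -Dcom.sun.management.jmxremote.ssl=false \
         -jar display-console.jar
    ```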

  • Setting Connection Parameters via ADO for SQL Server

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code. Set all variables before running Recordset.OpenForward Connection->Execute("SET @GetStartDate = ..."); Connection->Execute("SET @GetEndDate = ..."); // Repeat for all parameters Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement? Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL // Command has previously been created ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate"); ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate"); // Set values and attach etc... What I would like to know is if there is something like: Connection->SetParameter("GetStartDate", "20090101"); Connection->SetParameter("GetEndDate", "20100101"); And these will persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.

    Read the article
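
    One documented per-connection slot in SQL Server that behaves like the SetParameter idea above is CONTEXT_INFO: set once on the connection, it stays visible to every later statement on that same connection until changed. A hedged T-SQL sketch; packing two datetimes into the 128-byte value is this example's own convention, not anything built in:

    ```sql
    -- Run once after opening the connection (e.g. via Connection->Execute)
    DECLARE @ctx varbinary(128);
    SET @ctx = CONVERT(varbinary(8), CONVERT(datetime, '20090101'))
             + CONVERT(varbinary(8), CONVERT(datetime, '20100101'));
    SET CONTEXT_INFO @ctx;

    -- Any later query on the same connection can read the values back
    SELECT CONVERT(datetime, SUBSTRING(CONTEXT_INFO(), 1, 8)) AS StartDate,
           CONVERT(datetime, SUBSTRING(CONTEXT_INFO(), 9, 8)) AS EndDate;
    ```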

  • Setting Connection Parameters via ADO for MSSQL

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries. We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL. I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code. Set all variables before running Recordset.OpenForward Connection->Execute("SET @GetStartDate = ..."); Connection->Execute("SET @GetEndDate = ..."); // Repeat for all parameters Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement? Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL // Command has previously been created ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate"); ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate"); // Set values and attach etc... What I would like to know is if there is something like: Connection->SetParameter("GetStartDate", "20090101"); Connection->SetParameter("GetEndDate", "20100101"); And these will persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.

    Read the article

  • Cancel page forward/back hotkeys in Firefox with Greasemonkey

    - by Stimulating Pixels
    First the background: In Firefox 3.6.3 on Mac OS X 10.5.8, when entering text into a standard text field the hotkey combination of Command+LeftArrow and Command+RightArrow jumps the cursor to the start/end of the current line, respectively. However, when using CKEditor, FCKEditor and YUI Editor, Firefox does not seem to completely recognize that it's a text area. Instead, it drops back to the default function for those hotkeys, which is to move back/forward in the browser history. After this occurs, the text in the editor is also cleared when you return to the page, making it very easy to lose whatever is being worked on. I'm attempting to write a Greasemonkey script that I can use to capture the events and prevent the page forward/back jumps from being executed. So far, I've been able to see the events with the following used as a .user.js script in Greasemonkey: document.addEventListener('keypress', function (evt) { // grab the meta key var isCmd = evt.metaKey; // check to see if it is pressed if(isCmd) { // if so, grab the key code; var kCode = evt.keyCode; if(kCode == 37 || kCode == 39) { alert(kCode); } } }, false ); When installed/enabled, pressing command+left|right arrow key pops an alert with the respective code, but as soon as the dialog box is closed, the browser executes the page forward/back move. I tried setting a new code with evt.keyCode = 0, but that didn't work. So, the question is, can this Greasemonkey script be updated so that it prevents the back/forward page moves? (NOTE: I'm open to other solutions as well. Doesn't have to be Greasemonkey, that's just the direction I've tried. The real goal is to be able to disable the forward/back hotkey functionality.)

    Read the article
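
    The usual way to stop the default navigation is to cancel the event rather than rewrite its keyCode: call preventDefault()/stopPropagation(), and listen on keydown, since some browser shortcuts have already been acted on by the time keypress fires. A hedged revision of the script body (whether Firefox honours the cancellation for this particular shortcut is the thing to test):

    ```javascript
    // Cancel Cmd+Left / Cmd+Right so they never reach the browser's history handler.
    document.addEventListener('keydown', function (evt) {
        if (evt.metaKey && (evt.keyCode === 37 || evt.keyCode === 39)) {
            evt.preventDefault();
            evt.stopPropagation();
        }
    }, true);  // capture phase, so this runs before the page's own handlers
    ```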

  • Apache won't serve images larger than ~2K

    - by dtbaker
    Hello, Just upgraded an old box to Ubuntu to 10.04.2 LTS. Apache will not display images to a browser that are over about 2K. Small images seem to display fine. Static HTML and PHP continues to works fine as well. Installed: apache2 2.2.14-5ubuntu8.4 apache2-mpm-prefork 2.2.14-5ubuntu8.4 apache2-utils 2.2.14-5ubuntu8.4 apache2.2-bin 2.2.14-5ubuntu8.4 apache2.2-common 2.2.14-5ubuntu8.4 here is an ngrep of an image that doesn't display fine in the browser: T 192.168.0.4:32907 - 192.168.0.54:80 [AP] GET /path/path/logo.png HTTP/1.1..Host: 192.1 68.0.54..Connection: keep-alive..Accept: application/xml,application/xhtml+ xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5..User-Ag ent: Mozilla/5.0 (X11; U; Linux x86_64; en-US) AppleWebKit/534.13 (KHTML, like Gecko) Chrome/9.0.597.98 Safari/534.13..Accept-Enco ding: gzip,deflate,sdch..Accept-Language: en-US,en;q=0.8..Accept- Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3.... T 192.168.0.54:80 - 192.168.0.4:32907 [A] HTTP/1.1 200 OK..Date: Wed, 09 Mar 2011 05:28:38 GMT..Server: Apa che/2.2.14 (Ubuntu)..Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT ..ETag: "17b6f4-15fe-491dd63eb2f40"..Accept-Ranges: bytes..Conten t-Length: 5630..Keep-Alive: timeout=15, max=100..Connection: Keep -Alive..Content-Type: image/png.....PNG........IHDR...!...v...... .%.....sRGB.........bKGD..............pHYs.................tIME.. etc... This looks ok to me! I have tried firefox and chrome, both display small images fine but when a large image is requested the browser prompts to download the file. When the image file is saved to the local computer it is corrupt, it also takes a long time to save which makes me think the browser cannot see the content-length header sent from apache. Also when I look at the saved image file it includes the headers from apache, along with a bit of garbage at the top, like so: vi logo.png: ^@^UÅd^@$^]V^S^H^@E^@^Q,n!@^@@^F^@^@À¨^@6À¨^@^D^@P^Y¬rÇŹéw^P^@Ú^@^@^A^A^H ^@^GÝ^]^@pbSHTTP/1.1 200 OK^M Date: Wed, 09 Mar 2011 04:47:04 GMT^M Server: Apache/2.2.14 (Ubuntu)^M Last-Modified: Tue, 05 Oct 2010 11:59:17 GMT^M ETag: "17b6ff-157c-491dd63eb2f40"^M Accept-Ranges: bytes^M Content-Length: 5500^M Keep-Alive: timeout=15, max=94^M Connection: Keep-Alive^M Content-Type: image/png^M ^M PNG^M etc... Any ideas? It's driving me nuts. There is nothing in apache error logs, and permissions are fine (because the image data is there, it's just somewhat corrupt). There's no proxy or iptables on this ubuntu box either. Thanks heaps!! Dave ps: just tried on IE from a different computer, same problem :( pps: rebooted server, no help.

    Read the article
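
    The tell-tale detail in the report above is that the saved "image" starts with raw packet bytes in front of the HTTP headers; that pattern usually points at a broken sendfile()/checksum-offload path (common in VMs or with certain NIC drivers) rather than an Apache misconfiguration, and it would also explain why only responses larger than a couple of kilobytes are affected. A hedged first step is to stop Apache using sendfile and mmap for static files:

    ```apache
    # apache2.conf, or inside the relevant <Directory> block
    EnableSendfile Off
    EnableMMAP Off
    ```

    If that cures it, disabling checksum offload on the NIC (for example with ethtool) is the other avenue to investigate.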
