Search Results

Search found 5140 results on 206 pages for 'crazy bash'.


  • win 7 something is causing excessive disk usage, maybe chrome?

    - by camcam
    On my Win 7 computer, this happens:
    - I work for several hours without problems, also using Chrome.
    - Suddenly, just after refreshing a page in Chrome, the disk starts being used excessively (at this point I can close Chrome or not, no matter: the disk won't stop).
    - The disk LED is on all the time and I hear the drive running like crazy; the whole computer is slowed down a little. It lasts 5-10 minutes.
    - In the meantime, I go to Windows Task Manager, observe which processes are using the disk, and turn them off one by one, but with no success in stopping the excessive disk usage.
    - After approximately 10 minutes everything stops. I go back to Chrome (or re-open it) and refresh the page, with mixed results: sometimes the whole process repeats immediately, sometimes not.
    Basically, it is almost always Chrome refreshing a random page that starts the excessive disk usage, but killing the Chrome process does not stop the disk. Going to the same page in Firefox causes no problems. Windows Search is turned off. I would like to know what is really happening. Perhaps there is a utility which would let me see which process is really using the disk, so that I can disable the responsible service (not Chrome, because killing Chrome does not change anything)? Or, even better, perhaps there is a way to fix it?
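
    For the "which process is really using the disk" part, a hedged pointer: Windows 7 ships with Resource Monitor, whose Disk tab lists per-process disk I/O live, and Sysinternals Process Monitor gives an even finer-grained view.

        resmon    REM launches Resource Monitor; open the Disk tab and sort by total B/s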

    Read the article

  • How can I implement incremental (find-as-you-type) search on command line?

    - by florianbw
    I'd like to write small scripts which feature incremental search (find-as-you-type) on the command line. Use case: I have my mobile phone connected via USB; using gammu --sendsms TEXT I can write text messages. I have the phonebook as CSV, and want to search-as-I-type on that. What's the easiest/best way to do it? It might be in bash/zsh/Perl/Python or any other scripting language. Edit: Solution: modifying Term::Complete (http://search.cpan.org/~jesse/perl-5.12.0/lib/Term/Complete.pm) did what I want. See below for the answer.
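
    For what it's worth, a present-day sketch: tools like fzf (which postdates this question) provide exactly this find-as-you-type picker from the shell. The CSV file name and "name,number" column order below are assumptions.

        #!/bin/bash
        # Sketch only: assumes phonebook.csv holds "name,number" lines and fzf is installed.
        entry=$(fzf --prompt="contact> " < phonebook.csv) || exit 1   # interactive find-as-you-type picker
        number=${entry##*,}                 # keep the field after the last comma
        gammu --sendsms TEXT "$number"      # message text is then typed on stdin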

    Read the article

  • How can I keep gnu screen from becoming unresponsive after losing my SSH connection?

    - by Mikey
    I use a VPN tunnel to connect to my work network and then SSH to connect to my work PC running Cygwin. Once logged in, I can attach to a screen session and everything works great. Now, after a while, I walk away from my computer and sooner or later the VPN tunnel times out. The SSH connection on each end eventually times out, and then I eventually come back to my computer to do some work. Theoretically, this should be a simple matter of restarting the VPN, reconnecting via SSH, and then running "screen -r -d". However, apparently when the sshd daemon times out on the Cygwin PC, it leaves the screen session in some kind of hung state. I can reproduce a similar hung state by clicking the close box on a Cygwin bash shell window while it's running a screen session. Is there any way to get the screen session to recover once this has happened, so that I don't lose anything?
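
    A sketch of the usual two-part workaround: keep the idle SSH session from silently timing out in the first place, and power-detach the stuck session when it does happen (the host alias is illustrative).

        # Keep the tunnel alive: client pings every 30s, gives up after 3 missed replies.
        cat >> ~/.ssh/config <<'EOF'
        Host work-pc
            ServerAliveInterval 30
            ServerAliveCountMax 3
        EOF
        # If a session is already hung, force-detach it before reattaching:
        screen -D -r    # power-detach the old attachment, then attach here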

    Read the article

  • Windows Not Honoring DHCP Scope

    - by jerhinesmith
    Please bear with me as I'm not a networking person by trade. Our current configuration at work includes two Windows servers acting as DHCP/Active Directory servers (if that makes sense), one replicating from the other. On both machines, DNS resolution is set up as:
    - Main Windows box (10.* address)
    - Public IP address (for Verizon)
    - Public IP address (secondary Verizon)
    - Secondary Windows box (10.* address)
    Assuming our domain is foo.com, we maintain the foo.com website on a hosted VPS with its own IP address. The problem is that even though bar.foo.com is an internal server and is defined in DNS on the primary Windows machine, when I ping bar or even bar.foo.com it resolves to the hosted IP address instead of the 10.* address. I tried taking both of the public IP addresses out of the DHCP scope, and that seemed to work, but it completely slowed down access to any external sites, so that wasn't acceptable. I also tried adding the two Windows machines as the DNS servers on my desktop. That too worked, but I'd rather not have everyone enter their DNS servers by hand, as the above setup should theoretically be working. Is there anything I could check to see why pinging bar.foo.com isn't resolving to the DNS entry on the Windows machines? Here's a summary of the ping results, if they help:
    - Pinging from servers with a static IP: bar.foo.com resolves with the correct IP address
    - Pinging from Linux machines not joined to the domain: bar.foo.com resolves with the correct IP address
    - Pinging from users' desktop machines, joined to the domain but with a dynamic IP: bar.foo.com resolves with the incorrect IP address
    This is driving me crazy!
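
    One way to narrow this down (a sketch; the 10.* addresses are placeholders) is to query each DNS server directly from an affected desktop and see which one hands back the hosted address:

        nslookup bar.foo.com 10.0.0.10    # ask the primary Windows DC directly
        nslookup bar.foo.com 10.0.0.11    # ask the secondary DC
        nslookup bar.foo.com              # ask whatever DHCP handed out
        ipconfig /all                     # confirm which DNS servers DHCP actually assigned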

    Read the article

  • Get CruiseControl to talk to github with the correct public key.

    - by Danny Lister
    Hi all, has anybody installed Git and CruiseControl and got CruiseControl to pull from GitHub on a Windows 2003 server? I keep getting public key errors (access denied), which is good I suppose, as it confirms Git is talking to GitHub. However, what is not good is that I don't know where to install the RSA keys so they will be picked up by the running process (Git in the context of CC.NET). Any help would save me a lot of hair! I have tried installing the keys into C:\Program Files\Git\.ssh, whereby running Git Bash and cd ~ takes me to C:\Program Files\Git. The current error from CC.NET is:

        Error Message: ThoughtWorks.CruiseControl.Core.CruiseControlException: Source control operation failed: Permission denied (publickey). fatal: The remote end hung up unexpectedly. Process command: C:\Program Files\Git\bin\git.exe fetch origin

    Thanks in advance
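
    A hedged sketch of the usual check from Git Bash; the catch with services like CC.NET is that git looks for keys under the HOME of the account the service runs as, not under the Git install directory, so HOME for that account must point at a directory containing .ssh (the paths below are assumptions):

        ssh-keygen -t rsa -f ~/.ssh/id_rsa    # generate a key, add the .pub to GitHub
        ssh -T git@github.com                 # should greet you instead of "Permission denied"
        # For the service account, set HOME so 'cd ~' lands where .ssh actually lives:
        export HOME="/c/Documents and Settings/ccnet-user"    # hypothetical service account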

    Read the article

  • Puzzling TCP performance over 3G / UMTS

    - by lemonsqueeze
    I'm using 3G as my primary internet connection, and TCP over this thing is getting more puzzling every day. For example:
    1) Downloading from kernel.org is crazy fast: $ wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.6.8.tar.bz2 increases to ~500kB/s after a few secs!
    2) Some servers are incredibly slow, for instance www.graphic-pc.com: same thing, downloading a big file with wget, it starts at ~30kB/s for a split second, then collapses to 5-10k or even worse.
    3) Web browsing is decent but somewhat unreliable. Randomly, a page will take really long to load or even fail to load, but a reload can succeed almost immediately.
    Now, by chance I started playing with OpenVPN over UDP on top of the 3G connection, and OMG, suddenly everything's extremely fast! The same www.graphic-pc.com now shoots at 100-200kB/s! What's going on here??? How come it is so much better with the VPN than without?? And why does graphic-pc.com crawl when kernel.org flies? Something to do with my TCP stack (or the server's), or some buggy router in between?
    Notes:
    - Setup is a laptop running Ubuntu Lucid and a Huawei 3G dongle (so a direct pppd connection).
    - I can reproduce this pretty much any time during the day and I'm not moving, so it's clearly not the cell environment or internet congestion (although kernel.org without VPN sometimes does worse in the evening, 60kB/s or so, but still 500kB/s with VPN!).
    - For 2), Wireshark shows retransmitted packets, dup ACKs, even out-of-order packets sometimes.
    - I've tried playing with different /proc/sys/net/ipv4 parameters (tcp_rmem, window_scaling, tcp_congestion...); it doesn't seem to make a difference.
    Update: tried under Windows 7 (no VPN) with some interesting results:

        tcp settings      default    tcp_optimizer
        kernel.org        10 kB/s    20 kB/s
        graphic-pc.com     8 kB/s    70 kB/s (!)

    tcp_optimizer turned on CTCP, among other things. I have to check what OS graphic-pc.com is running; my bet is that Linux's tcp_westwood and MS CTCP don't mix well here...
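
    If you want to experiment further in that direction, a sketch of swapping the local congestion-control algorithm on Linux (this mainly shapes the upload/ACK side; download behaviour is governed by the server's stack):

        sysctl net.ipv4.tcp_congestion_control                  # show the current algorithm (often cubic)
        cat /proc/sys/net/ipv4/tcp_available_congestion_control # what is already loaded
        sudo modprobe tcp_westwood                              # load an alternative, if built as a module
        sudo sysctl -w net.ipv4.tcp_congestion_control=westwood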

    Read the article

  • User permissions linux. (proftpd / nginx)

    - by user55745
    I've been having a complete nightmare trying to configure proftpd. I've got the ProFTPD server working with an SQL database. However, I want any uploaded files to be viewable by the web server running on the same box. The folders get created in /var/tmp/ as:

        drwx------ 2 ftpuser ftpgroup 4096 Oct 8 20:35 50730c4346512
        drwx------ 2 ftpuser ftpgroup 4096 Oct 8 20:38 50730f3a811ca

    I've tried adding www-data to the group with usermod -g www-data ftpuser, but this doesn't allow the web server access. In proftpd.conf I have the following umask set: Umask 0022. It doesn't seem to make a difference what I set that value to.
    /etc/group (I'm sure I've messed up one of these two, but I'm getting desperate):

        ftpgroup:x:2001:www-data
        www-data:x:33:ftpgroup

    /etc/passwd:

        www-data:x:33:33:www-data:/var/www:/bin/sh
        proftpd:x:108:65534::/var/run/proftpd:/bin/false
        ftp:x:109:65534::/srv/ftp:/bin/false
        ftpuser:x:2001:33:proftpd user www-data:/bin/null:/bin/false

    The ftpuser table in the database has uid/gid set to 2001 for both. I'm going absolutely crazy trying to solve this; any help would be greatly appreciated. P.S. If I manually connect to the FTP server I can upload files via FileZilla, but this isn't working for the web camera, although there is talky-talky going on between the server and the camera.
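
    A minimal sketch of the more common arrangement, assuming the goal is read access for the web server: append www-data to ftpgroup (usermod -g, as used above, replaces the primary group instead of adding one), and make sure the created directories end up group-readable:

        usermod -aG ftpgroup www-data    # supplementary group; primary group untouched
        # In proftpd.conf a two-value umask covers files and directories:
        #   Umask 022 022
        chmod -R g+rX /var/tmp           # open up what was already uploaded (sketch)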

    Read the article

  • External Storage for 2TB of backups and 4TB of data RAID level? HW vs Software?

    - by Jerry Mayers
    I have a Mac Mini set up as a media center/file server. Currently I just have a hodgepodge mess of external drives for storage. I'm maxed out, and I have some new laptops on the way with much larger drives, so I need to work out a good storage solution for backing them up as well as storing media on the server. I need around 2 TB of storage for the Time Machine backups from my various systems and around 2 TB more for media. I would like to build this to handle around 6 TB total so I have some growing room. Since I'm using a Mac Mini as the server, I need to use external enclosure(s) that support USB 2, FireWire 800 (preferred), or gigabit Ethernet. Performance of the system isn't a huge concern, since the majority of the access from other computers is done over 802.11n. I plan on using 2 TB drives for the final version, but initially I'll try to use my existing two 1 TB drives plus some new 2 TB drives, and swap the 1 TB ones out as I fill up. As to the actual questions:
    1. Should I use hardware RAID in some enclosure? Because if the enclosure dies, I have to find an identical one to get to my data, right? Wouldn't software RAID be better, as I can use any method of connecting the drives to the system? Remember, OS X Server is my OS. What if I had to reinstall OS X; can I restore the software RAID easily?
    2. What RAID level should I use? For the 2 TB used for the Time Machine disk I don't see why I need RAID; just a single 2 TB drive, since it's already the backup. But the remaining 4 TB would be the only copy of the data, so I should build in some redundancy. I had a RAID 5 setup using a cheap RAID PCI card years ago, running RAID 5 in a 2 TB array, and when a drive died it wanted 48 hours to rebuild. Is this crazy slow for a setup of this size, or is this to be expected?
    3. Any suggestions as to drive enclosures?
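
    On the software-RAID question, a hedged note: OS X's built-in AppleRAID keeps its metadata on the member disks themselves, so a reinstalled system should recognize the set again. A mirrored pair might be created like this (device names are placeholders):

        diskutil appleRAID create mirror MediaMirror JHFS+ disk2 disk3
        diskutil appleRAID list    # verify the set and its members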

    Read the article

  • how can I send messages to/from a Websphere Message Broker from an embedded C client (no JVM)?

    - by queBurro
    What are my options for pub/sub (or point-to-point, but pub/sub is better) messaging to and from an IBM message broker, from an embedded headless C/C++ Linux client that doesn't have a JVM? Ideally we want:
    - large file transfer (2 GB, once per day, off of the client)
    - encryption (SSL)
    - reliable ('assured' delivery / QoS 2; maybe QoS 1 would do)
    The client in question currently only has exes and some bash scripts. I've been playing with MQTTv3 and RSMB, but for that I'd have to chomp the large files up (and reassemble them back home), and I don't want to get into that if there's a transport that will do it for me. I've looked at MQTTv5 (but our client's got no JVM), JMS (no JVM), and XMS, which again looks like it gives me a C API but then needs the JVM installed on the client (or am I wrong?). Any clues or hints would be appreciated, cheers
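
    For prototyping the pub/sub shape without any JVM, the mosquitto command-line clients speak MQTT v3 from plain shell (the broker name is a placeholder, and a 2 GB payload would still need chunking, since MQTT brokers aren't built for files that size):

        mosquitto_pub -h broker.example.com -t site/daily -q 1 -f chunk-0001.bin   # publish a file's bytes
        mosquitto_sub -h broker.example.com -t site/daily -q 1 > received.bin      # subscriber writes payloads to stdout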

    Read the article

  • rails - Redirecting console output to a file

    - by egarcia
    On a bash console, if I do this:

        cd mydir
        ls -l > mydir.txt

    the > operator captures the standard output and redirects it to a file, so I get the listing of files in mydir.txt instead of on the standard output. Is there any way to do something similar on the Rails console? I've got a Ruby statement that generates lots of prints (~8k lines) and I'd like to be able to see it completely, but the console only "remembers" the last 1024 lines or so. So I thought about redirecting to a file. If anyone knows a better option, I'm all ears.
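
    One shell-side sketch for Rails 2.x: run the statement through script/runner instead of the console, so the ordinary > redirect applies (MyModel is a stand-in for whatever generates the output):

        script/runner 'puts MyModel.all.inspect' > mymodel.txt
        # or feed the console itself and capture everything it prints:
        echo 'puts MyModel.count' | script/console > console-output.txt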

    Read the article

  • Varnish does not start properly (crashes after startup) with no error messages

    - by Matthew Savage
    I am running Varnish (2.0.4 from the Ubuntu unstable apt repository, though I have also used the standard repository) in a test environment (virtual machines) on Ubuntu 9.10, soon to be 10.04. When I have a working configuration and the server starts successfully, everything seems fine; however if, for whatever reason, I stop and then restart the varnish daemon, it doesn't always start up properly, and there are no errors going into syslog or messages to indicate what might be wrong. If I run varnish in debug mode (-d) and issue start when prompted, then 7 times out of ten it will run, but occasionally it will just shut down 'silently'. My startup command is (the $1 allows me to pass -d to the script this lives in):

        varnishd -a :80 $1 \
          -T 127.0.0.1:6082 \
          -s malloc,1GB \
          -f /home/deploy/mysite.vcl \
          -u deploy \
          -g deploy \
          -p obj_workspace=4096 \
          -p sess_workspace=262144 \
          -p listen_depth=2048 \
          -p overflow_max=2000 \
          -p ping_interval=2 \
          -p log_hashstring=off \
          -h classic,5000009 \
          -p thread_pool_max=1000 \
          -p lru_interval=60 \
          -p esi_syntax=0x00000003 \
          -p sess_timeout=10 \
          -p thread_pools=1 \
          -p thread_pool_min=100 \
          -p shm_workspace=32768 \
          -p thread_pool_add_delay=1

    and the VCL looks like this:

        # nginx/passenger server, HTTP:81
        backend default {
          .host = "127.0.0.1";
          .port = "81";
        }

        sub vcl_recv {
          # Don't cache the /useradmin or /admin path
          if (req.url ~ "^/(useradmin|admin|session|sessions|login|members|logout|forgot_password)") {
            pipe;
          }
          # If cache is 'regenerating' then allow for old cache to be served
          set req.grace = 2m;
          # Forward to cache lookup
          lookup;
        }

        # This should be obvious
        sub vcl_hit {
          deliver;
        }

        sub vcl_fetch {
          # See link #16, allow for old cache serving
          set obj.grace = 2m;
          if (req.url ~ "\.(png|gif|jpg|swf|css|js)$") {
            deliver;
          }
          remove obj.http.Set-Cookie;
          remove obj.http.Etag;
          set obj.http.Cache-Control = "no-cache";
          set obj.ttl = 7d;
          deliver;
        }

    Any suggestions would be greatly appreciated; this is driving me absolutely crazy, especially because it's such inconsistent behaviour.

    Read the article

  • Perl : how to interrupt/resume loop by user hitting a key?

    - by Michael Mao
    Hi all: This is for debugging purposes. I've got a for loop that generates some output on Cygwin bash's CLI. I know that I can redirect output to text files and use Perl or even just a normal text editor to inspect the values printed out for a particular iteration; that just feels a bit painful. What I am now thinking is to place a special subroutine inside the for loop, so that it would be "interrupted" at the beginning of each iteration, and the Perl script would only resume running after the user/programmer hits a key (the Enter key on the keyboard?). This way I can directly inspect the values printed out during each iteration. Is there a simple way to do this, without using additional libraries/CPAN? Many thanks for any hints/suggestions in advance.
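
    For illustration, here is the pattern in a bash loop; in Perl the equivalent is just as small, since reading a line from STDIN inside the loop blocks until Enter is hit, with no CPAN modules needed:

        for i in 1 2 3; do
            compute_and_print "$i"    # hypothetical per-iteration work
            read -r -p "iteration $i done, press Enter to continue... "
        done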

    Read the article

  • Active Directory: how to be SURE users can change their own passwords?

    - by Latro
    Working on a project where a tool we have has to authenticate against AD, connecting via LDAPS, and perform password changes if required or requested. IN THEORY, the tool does that, and we have seen it work in other projects. IN PRACTICE, against this particular directory, it fails. It's been driving me crazy. The particulars of the situation:
    - Windows 2003 AD
    - We defined a "technical user" for the LDAP connection with rights to change users' passwords
    - When a password change is required (in this case, because pwdLastSet is 0) the tool uses the technical account to go, bind to the controller, and change the user's password
    - If a password change is not required but the user requests it, then the bind is done with the user's own account
    That last case is the one that doesn't work. With the technical user the password change is possible, but with the user itself it isn't. We get an error like this:

        LDAP access failed: javax.naming.directory.InvalidAttributeValueException: [LDAP: error code 19 - 0000052D: AtrErr: DSID-03190F00, #1: 0: 0000052D: DSID-03190F00, problem 1005 (CONSTRAINT_ATT_TYPE), data 0, Att 9005a (unicodePwd)

    I have no idea what DSID-03190F00 means, because it doesn't seem to be anywhere on Google :-/ I've been looking at several MS documentation pages and frankly, I'm not understanding one bit of it. There is some "control access right" called User-Change-Password that may, or may not, control which objects have the right to change their own password, which may, or may not, have to do with ACEs and ACLs... There is GPO. There is maybe the password policy, but it is only set to ask for passwords of 6 chars or more... Can anybody explain to me, in easy-to-check steps, what I can tell the AD admin guy (who is as lost as me) to do, to ensure that users in the AD directory (objectClass top, person, organizationalPerson and user) are able to change their own passwords by themselves? Thanks in advance
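
    Two hedged leads: 0x52D is the Win32 code ERROR_PASSWORD_RESTRICTION, which usually points at password policy (minimum password age, history, or complexity) rather than permissions; and AD only treats a self-service change as a change when it arrives as a delete of the old unicodePwd plus an add of the new one, both UTF-16LE-encoded quoted strings over LDAPS (a single replace is a reset and needs the Reset Password right). A sketch of that form with ldapmodify; the DN and passwords are placeholders:

        old=$(printf '"OldPass1"' | iconv -t UTF-16LE | base64)
        new=$(printf '"NewPass2"' | iconv -t UTF-16LE | base64)
        ldapmodify -H ldaps://dc.example.com -D 'CN=Jane Doe,CN=Users,DC=example,DC=com' -W <<EOF
        dn: CN=Jane Doe,CN=Users,DC=example,DC=com
        changetype: modify
        delete: unicodePwd
        unicodePwd:: $old
        -
        add: unicodePwd
        unicodePwd:: $new
        EOF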

    Read the article

  • Creating standalone, console (shell) for domain-specific operations

    - by mr.b
    Say that I have a system service, and I want to offer low-level maintenance access to it. For that purpose, I'd like to create a standalone console application that somehow connects to the server process and lets the user type in commands, with auto-completion and auto-suggestion on single/double TAB press (just like the Linux bash shell, the mysql CLI, cmd.exe, and countless others), command-line editing capabilities (history, cursor keys to move around the text...), etc. Now, it's not that much of a problem to create something like that by rolling my own from scratch, handling user input, scanning pressed keys, and doing the correct actions. But why reinvent the wheel? Is there some library/framework that helps with this kind of problem, just like the readline library that offers improved command-line editing capabilities under Linux? Of course, this new "shell" would respond only to valid, domain-specific commands, and would suggest valid arguments, options, switches... Any ideas? Thanks!
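
    As a toy of the interaction model, bash's own read -e already provides readline editing and history; a real implementation would link against libreadline (or a clone such as editline/linenoise) and plug in a domain-specific completer. The commands below are placeholders:

        while read -e -r -p "maint> " line; do
            history -s "$line"    # makes up-arrow recall of previous commands work
            case "$line" in
                quit)   break ;;
                status) echo "service is up" ;;
                *)      echo "unknown command: $line" ;;
            esac
        done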

    Read the article

  • How to get the number of rows deleted from mysql in a shell script

    - by simonlord
    Hi all, I can't work out how to get the mysql client to return the number of rows deleted to the shell when running a delete. Does anyone know what option will enable this, or a way around it? Here's what I'm trying, but I get no output:

        #!/bin/bash
        deleted=`mysql mydb -e "delete from mytable where insertedtime < '2010-04-01 00:00:00'"|tail -n 1`

    I was expecting something like this as the output from mysql:

        deleted
        999999

    which is why I have the tail -n 1, so I only pick up the count and not the column name. Any help would be most appreciated.
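
    A sketch of one way around it: MySQL's ROW_COUNT() reports the rows affected by the previous statement, so running both statements in the same invocation (with -N to suppress the column header) leaves just the number:

        #!/bin/bash
        deleted=$(mysql mydb -N -e "DELETE FROM mytable WHERE insertedtime < '2010-04-01 00:00:00'; SELECT ROW_COUNT();")
        echo "deleted $deleted rows"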

    Read the article

  • Use matching value of a RegExp to name the output file.

    - by fx42
    I have this file, file.txt, which I want to split into many smaller ones. Each line of the file has an id field which looks like "id:1" for a line belonging to id 1. For each id in the file, I'd like to create a file named id<id>.txt (id1.txt, id2.txt, ...) and put all the lines that belong to that id in it. My brute-force bash script solution reads as follows:

        count=1
        while [ $count -lt 19945 ]
        do
            cat file.txt | grep "id:$count " >> ./sets/id$count.txt
            count=`expr $count + 1`
        done

    Now this is very inefficient, as I have to read through the file about 20,000 times. Is there a way to do the same operation with only one pass through the file? What I'm probably asking for is a way to use the value that matches a regular expression to name the associated output file.
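
    A single-pass sketch with awk, which can lift the digits after "id:" and use them directly in the output file name (gawk manages the many open output files for you; stricter awks may need an explicit close()):

        awk '{
            if (match($0, /id:[0-9]+/)) {
                id = substr($0, RSTART + 3, RLENGTH - 3)   # the digits after "id:"
                print > ("./sets/id" id ".txt")
            }
        }' file.txt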

    Read the article

  • Why does a document in Word 2007 stop recognizing the mouse after the document loses focus?

    - by alt234
    When I open a document in Word 2007, everything works fine; I can edit, highlight text, etc. However, the instant Word loses focus, when I focus back the document doesn't recognize anything the mouse does. The tabbed menu at the top seems to recognize the mouse, but the document itself does not. I can scroll through via the scroll wheel and I can type. However, typing just shows up where the mouse cursor last was before focus was taken away. I've tried clearing some Word data registry keys. I've also found that some Word add-ins can cause problems; LaserFiche is one I see mentioned a lot. As far as I can tell I have no add-ins, though. Any ideas? It's crazy-annoying.
    UPDATE:
    - Word is the only program that has this problem.
    - Typically I have Toad (Oracle DB management app), an XP virtual machine with various apps running on it, Skype, Google Talk, and maybe a handful of other programs open at any given time: Windows Media Player, Outlook...
    - Yes, this happens even if nothing else is running, and from a fresh restart as well.
    - I'm running Vista 64 with SP1.
    - According to Windows Update, I have the latest of everything. This has been happening for a couple of months now; I just never took the time to look into it because I usually never have to use Word.

    Read the article

  • View plain text files with different background colors in Mac OS X, for different programming languages

    - by Werner
    Hi, I work with Mac OS X Leopard. I usually have 5 or 10 text files opened at the same time, in different programming languages: one for a bash script, another for a Python one, etc. When I use Exposé, all of them look the same, so it is difficult to select the right one. I wonder how I could work with plain text files in OS X so that when they are opened in an editor the background color changes, or something else is different, so that when using Exposé it is clear to me which window belongs to which language. I thought about inserting some kind of info in the last line of each document, and then creating some AppleScript that converts it to RTF or some other text format which includes a background color, so it can then be opened with TextMate or some other app. Do you know a better approach for this? Thanks

    Read the article

  • Why does my program occasionally segfault when out of memory rather than throwing std::bad_alloc?

    - by Bradford Larsen
    I have a program that implements several heuristic search algorithms and several domains, designed to experimentally evaluate the various algorithms. The program is written in C++, built using the GNU toolchain, and run on a 64-bit Ubuntu system. When I run my experiments, I use bash's ulimit command to limit the amount of virtual memory the process can use, so that my test system does not start swapping. Certain algorithm/test instance combinations hit the memory limit I have defined. Most of the time, the program throws an std::bad_alloc exception, which is printed by the default handler, at which point the program terminates. Occasionally, rather than this happening, the program simply segfaults. Why does my program occasionally segfault when out of memory, rather than reporting an unhandled std::bad_alloc and terminating?

    Read the article

  • Master File Table Corrupt, any way to save data?

    - by domen
    Hi. I've used search, but none of the results match my problem, so I have to ask a separate question. I installed Windows 7 RTM recently, and since then the partitions located on one of my HDDs have gone "crazy". They used to "freeze" and not open in Explorer for some time (a minute or two, usually); sometimes all partitions of the drive wouldn't show up until reboot; and finally, one of those partitions started showing the "disk structure is corrupted and unreadable" warning, appeared in the Disk Management window as RAW, and chkdsk showed "mft corrupt". There was no important data on that partition, and I didn't have enough time to analyze the problem at the moment, so I just reformatted it and ran an antivirus scan on the system. After that the problem settled for some time, but yesterday the problematic HDD vanished again from the system. After reboot, chkdsk identified the MFT of four partitions as corrupt, and now they are all in the same condition as the one mentioned above. But the difference is that the files stored on them are extremely important. Just for info: I upgraded from Win7 build 7077, but had some performance issues, so I reformatted the system drive and installed a fresh Win7 RTM on it. I've downloaded TestDisk and it shows all the partitions marked as NTFS (not RAW), but my knowledge of the program wasn't sufficient to obtain any other info from it :-)
    Images that could help describe the problem (sorry, I'm not allowed to post images and more than one hyperlink):
    http://img22.imageshack.us/img22/5909/chkdskz.jpg
    http://img198.imageshack.us/img198/5576/computeray.jpg
    I'm interested: is there a way to restore the MFT, or just access the files so I can back them up before reformatting the drive? Thanks for your time. :)
    P.S. My reformatted drive is showing no problems; could there be a problem with Windows 7 itself? I googled, but with no results.
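
    Since TestDisk is already on hand, a hedged pointer: NTFS keeps a mirror of the first MFT records, and TestDisk can attempt a repair from it (take an image of the disk first; the device name is a placeholder):

        testdisk /dev/sdb    # then: [Advanced] > select the NTFS partition > [Boot] > [Repair MFT]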

    Read the article

  • GitHub on windows :|

    - by Sonic Soul
    I've been experimenting with GitHub as my personal code repo, and it has been a bit of a disaster on Windows. I've used Subversion, CVS, and Perforce in the past; none were as annoying to use as GitHub. I've figured out the PGP part, although my workstation no longer lets me check in, and after searching around it turns out that Git Bash is using PuTTY, which is not that reliable and should be configured with something else. I was not able to configure it with a Windows shell extension for a nice visual of what is part of the repository, what is modified, and easy check-ins and pushes. Has anyone successfully configured some kind of Windows shell client that can efficiently and quickly synchronize various machines? It just seems to be more pain than it is worth...
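
    A hedged pointer: msysgit can drive either plink (PuTTY) or its bundled OpenSSH client, selected via the GIT_SSH environment variable, so switching to the latter is a common fix for flaky key handling (paths are illustrative):

        echo "$GIT_SSH"             # if this names plink.exe, PuTTY is doing the SSH
        export GIT_SSH=/bin/ssh     # use the OpenSSH client that ships with Git Bash
        ssh -T git@github.com       # confirm the key works before the next push/pull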

    Read the article

  • Git doesn't sync files until committed, even if checked out in a different branch

    - by DertWaiter
    Okay, I have Git 1.7.11.1 on Windows, and I have a local test repository with two branches. One is master, with index.php and help.php. I then create another branch called slave :). From Git Bash I run rm help.php and it disappears from the folder, but I don't stage anything. I switch to checkout the master branch, and it is supposed to restore the file help.php, because it is not modified in the master branch, isn't it? But it does not do it. When I go back to the slave branch, commit, and then switch to checkout master, help.php appears. Is that the way it is supposed to work? Why?
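
    That is the expected behaviour: the rm is an uncommitted working-tree change, and checkout carries such changes across branches rather than overwriting them. A sketch of getting the file back without committing:

        git status                 # shows "deleted: help.php" as an unstaged change
        git checkout -- help.php   # restore the committed copy into the working tree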

    Read the article

  • Relinking a deleted file

    - by mbac32768
    Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof). I can't find any Linux tools to do this, at least with cursory Googling. What do you got, serverfault?
    EDIT 1: The reason catting the file from /proc/<pid>/fd/N isn't awesome enough is that the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace. What I want is a way of re-creating the reference.
    EDIT 2: 'debugfs ln' works, but the risk is too high since it frobs raw filesystem data. The recovered file is also crazily inconsistent: the link count is zero and I can't add links to it. I'm worse off this way, since I can just use /proc/<pid>/fd/N to access the data without corrupting my fs.

    Read the article

  • Whitespaces in CLASSPATH

    - by Shyam
    I am working on a Windows PC and have Cygwin on it! I have organized all my jars under a directory, within a few subdirectories! I am writing a bash script to set the CLASSPATH by iterating through the directory that is passed as a parameter, as follows:

        for JAR_FILE in `ls *.jar`
        do
            CLASSPATH="$DIRECTORY_TO_LOOK_FOR_JARS"/$JAR_FILE:$CLASSPATH
        done

    Whenever there are spaces in the directory that is passed, like /cygdrive/c/Documents and Settings/user/My Jars, and I run java -cp $CLASSPATH somepackage.someclass, it throws an error stating that the class "and" is not found, because the CLASSPATH variable is getting split after /cygdrive/c/Documents. Can someone help me solve this issue?
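
    A sketch of the usual repair: let the shell glob the jars instead of parsing ls, and quote every expansion so embedded spaces survive word splitting:

        CLASSPATH=""
        for JAR_FILE in "$DIRECTORY_TO_LOOK_FOR_JARS"/*.jar; do
            CLASSPATH="$JAR_FILE:$CLASSPATH"
        done
        java -cp "$CLASSPATH" somepackage.someclass    # the quotes here matter just as much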

    Read the article

  • Magento cache wrong read permissions?

    - by Lucasmus
    There seems to be a problem with Magento's reading of the var/cache directory. I've disabled full-page caching for testing. When I execute the bash command chmod -R 777 var/cache/ before loading the page, it loads ~3 seconds quicker (the time it takes before 'mage::dispatch::routers_match' is reached in the Profiler is reduced from ~4 seconds to ~1 second). This speed-up remains for a while, but then is lost until the chmod is called again. I'm guessing this has to do with write permissions somehow? The odd thing is, the cache contents are, as far as I know, owned by the process that is executing Magento (the web user). Does anyone have any clue what the problem could be, or what could be changed to prevent it?
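
    A hedged guess at the mechanism, with the usual fix: if cron jobs or CLI scripts run as a different account, they create cache entries the web user cannot read until the next chmod. Re-owning the tree (user/group names assumed) avoids the recurring slowdown:

        chown -R www-data:www-data var/cache
        find var/cache -type d -exec chmod 775 {} \;
        find var/cache -type f -exec chmod 664 {} \;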

    Read the article
