Search Results

Search found 24623 results on 985 pages for 'linux'.


  • Untar with date filter

    - by Don
    Is there any way to untar and only extract files that are newer than a certain date, including the directory structure? I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including the directory structure) based on a date filter, if possible.
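
    One possible approach, sketched below, is to extract the archive to a staging directory and then copy across only the files modified after a cutoff date. The paths and the date are placeholders, and it assumes GNU find and cpio are available.

      # Extract everything to a staging area first (paths are examples only)
      mkdir -p /tmp/restore-staging /var/www/restore-target
      tar -xf backup.tar -C /tmp/restore-staging

      # Copy only files modified after the cutoff date into the real target,
      # preserving the directory structure and modification times
      cd /tmp/restore-staging
      find . -type f -newermt "2012-06-01" -print0 | cpio -0 -p -d -m /var/www/restore-target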

  • Need advice in setting up server. fastCGI, suExec, speed, security, etc.

    - by lewisqic
    I am running my own dedicated server with CentOS 5 and WHM/cPanel, and I would like to configure it to meet my needs, but I need a little help. Only my own websites will run on this server. I'm still a little green when it comes to server administration, so please forgive my ignorance.
    What I would like to have:
    - Some public directories need to be writable (for user image uploads and the like), but I don't want those directories to have 777 permissions.
    - Individual accounts need the ability to set custom PHP settings for their own account without affecting other accounts, whether through a php.ini file, .htaccess, or any other method.
    - Things should run as fast as possible, whether that means using a PHP optimizer or cacher such as eAccelerator or XCache, or anything else.
    - Things should be as secure as possible.
    Here are my questions:
    - What should I use for my PHP handler? DSO? CGI? FastCGI? suPHP? Other?
    - Should I be using suEXEC? What are the benefits or downsides of this?
    - Which PHP optimizer/cacher is best to use?
    - Are there any other security tips I need to know about all of this?
    I'd appreciate any advice or direction that can be offered. Thanks!

  • Copying a large directory tree locally? cp or rsync?

    - by Rory
    I have to copy a large directory tree, about 1.8 TB. It's all local. Out of habit I'd use rsync, but I wonder if there's much point and whether I should rather use cp. I'm worried about permissions and uid/gid, since they have to be preserved in the copy (I know rsync does this), as well as things like symlinks. The destination is empty, so I don't have to worry about conditionally updating some files. It's all local disk access, so I don't have to worry about ssh or the network. The reason I'd be tempted away from rsync is that rsync might do more than I need: rsync checksums files, I don't need that, and I'm concerned it might take longer than cp. So what do you reckon, rsync or cp?
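
    For reference, a rough sketch of the two invocations being compared (the source and destination paths are placeholders). rsync's -a preserves permissions, ownership, times and symlinks much like cp -a, and -H additionally preserves hard links:

      # cp: archive mode preserves ownership, permissions, timestamps and symlinks
      cp -a /src/tree/. /dst/tree/

      # rsync: -a is roughly equivalent, -H also keeps hard links, -P shows progress
      rsync -aHP /src/tree/ /dst/tree/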

  • Setting up Web server so it is easy to migrate

    - by Nyxynyx
    I am about to move my site from a VPS to another host's dedicated server. One of my concerns is scaling the site in the future in a way that involves a change of server. Since I am starting the dedicated server from scratch with only the OS, I need to install the web server stack, including Apache and its mods, PHP, MySQL, PostgreSQL, Tomcat, Solr and a few other pieces of software like ImageMagick and git. Question: Is there a way for me to set up this new dedicated server such that I can easily migrate the entire site, both the technology stack and the code, to a newer server (an upgrade from this new dedicated server) without reinstalling and reconfiguring everything? The code for the website is handled by git and GitHub, so that's not a problem; I'm more concerned about the rest of the required software. Side question: the current VPS uses CentOS with cPanel, and it seems that many packages are outdated on yum and that cPanel interferes with the installation of many packages. Which OS should I go with for the new server? Ubuntu?
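
    One common way to keep a stack portable, sketched below, is to keep the whole installation as a version-controlled shell script (or a configuration-management recipe) that can be replayed on a fresh machine. The package names are Ubuntu/Debian examples and would differ on another distribution; the repository URL is hypothetical.

      #!/bin/bash
      # provision.sh - replayable stack install (example package names only)
      set -e
      apt-get update
      apt-get install -y apache2 libapache2-mod-php5 php5-mysql mysql-server \
          postgresql tomcat6 imagemagick git
      # Solr and site-specific configuration would be pulled in here, e.g.:
      # git clone https://github.com/example/server-config.git /opt/server-config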

  • why won't php 5.3.3 compile libphp5.so on redhat ent

    - by spatel
    I'm trying to upgrade to PHP 5.3.3 from PHP 5.2.13. However, the Apache module, libphp5.so, will not be compiled. Below is the output I got along with the configure options I used. The configure statement is a reduced version of what I normally use.
      './configure' '--disable-debug' '--disable-rpath' '--with-apxs2=/usr/local/apache2/bin/apxs' ...
      *** Warning: inter-library dependencies are not known to be supported.
      *** All declared inter-library dependencies are being dropped.
      *** Warning: libtool could not satisfy all declared inter-library
      *** dependencies of module libphp5. Therefore, libtool will create
      *** a static module, that should work as long as the dlopening
      *** application is linked with the -dlopen flag.
      copying selected object files to avoid basename conflicts...
      Generating phar.php
      Generating phar.phar
      PEAR package PHP_Archive not installed: generated phar will require PHP's phar extension be enabled.
      clicommand.inc
      pharcommand.inc
      directorytreeiterator.inc
      directorygraphiterator.inc
      invertedregexiterator.inc
      phar.inc
      Build complete.
      Don't forget to run 'make test'.
    PHP 5.2.13 recompiles just fine, so something is up with 5.3.3. Any help would be greatly appreciated!
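
    A stale build tree left over from the earlier 5.2.13 compile can sometimes leave libtool in this state, so one thing worth trying, sketched below, is a fully clean rebuild (the configure options are abbreviated to the ones shown above):

      # Start from a pristine source tree so no cached libtool/configure state is reused
      make distclean || true
      rm -f config.cache

      ./configure --disable-debug --disable-rpath \
          --with-apxs2=/usr/local/apache2/bin/apxs
      make
      # If the build succeeds, libphp5.so normally ends up under libs/ and is
      # installed into Apache by "make install"
      make install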

  • Sniff packets using tcpdump

    - by denisk
    I have a completely noob question. I want to see all packets that come to my computer from a particular site (google.com). So I start tcpdump with
      sudo tcpdump -i eth0 host google.com
    then enter google.com in a browser and hit enter, and nothing gets captured. I can't figure out why that happens. What am I doing wrong? Edit: It turned out I was listening on the wrong interface. I changed eth0 to any and it worked; it was ppp1 that needed listening on. Thanks for your answers!
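
    For reference, a sketch of the commands involved (interface names are examples): -D lists the interfaces tcpdump can see, and -i any captures on all of them at once.

      # List the interfaces tcpdump knows about
      sudo tcpdump -D

      # Capture on all interfaces, or on the PPP interface specifically
      sudo tcpdump -i any host google.com
      sudo tcpdump -i ppp1 host google.com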

  • how to correctly download tomcat 6 on centos 5.5

    - by user582862
    Hi guys, I am a bit confused about how to install Tomcat 6 on CentOS 5.5 final. This is what I am trying to do:
      # cd /etc/yum.repos.d/
      # wget http://jpackage.org/jpackage50.repo
      # yum install tomcat6 tomcat6-webapps tomcat6-admin-webapps
    But when I run the wget command, this is what I get:
      Resolving www.jpackage.org... failed: Temporary failure in name resolution.
      wget: unable to resolve host address `www.jpackage.org'
    Could anyone kindly show me the right way, please? Really in trouble at the moment with this. Thanks in advance.
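
    The wget error is a DNS resolution failure rather than a yum or Tomcat problem, so a first step, sketched below, is to check that the machine can resolve names at all (the nameserver address is only an example):

      # Does any name resolve from this machine?
      host www.jpackage.org
      cat /etc/resolv.conf

      # If resolv.conf is empty or wrong, point it at a working resolver, e.g.:
      # echo "nameserver 8.8.8.8" >> /etc/resolv.conf
      # then retry the wget and yum commands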

  • How to secure Postfix so it can find out whether emails really come from the sender?

    - by codeworxx
    Is it possible to secure Postfix in such a way that incoming emails are checked on whether they really come from the claimed sender? It is possible to write a PHP script and choose an arbitrary sender address so that the mail looks as if it really came from that sender, so what options does Postfix have to detect that such a mail is not actually coming from the real sender? What I have found out and activated so far are the options
      smtpd_sender_restrictions = reject_unknown_sender_domain
      unknown_address_reject_code = 554
      smtpd_client_restrictions = reject_unknown_client
      unknown_client_reject_code = 554
    Please mention whether I have missed any points!
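
    As a sketch of how such restrictions are usually inspected and extended (the exact policy depends on the setup, and checks such as SPF or DKIM need an external policy service or milter), postconf can be used like this; reject_sender_login_mismatch assumes smtpd_sender_login_maps is configured for authenticated submission:

      # Show what is currently configured
      postconf smtpd_sender_restrictions smtpd_client_restrictions

      # Reject senders whose domain does not exist and, for authenticated clients,
      # senders whose MAIL FROM does not match their SASL login
      postconf -e 'smtpd_sender_restrictions = reject_unknown_sender_domain, reject_sender_login_mismatch'
      postfix reload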

  • free -m output: should I be concerned about this server's low memory?

    - by Michael
    This is the output of free -m on a production database machine (running MySQL). 83 MB free looks pretty bad, but I assume the buffers/cache will be used before swap?
      [admin@db1 www]$ free -m
                   total       used       free     shared    buffers     cached
      Mem:         16053      15970         83          0        122       5343
      -/+ buffers/cache:      10504       5549
      Swap:         2047          0       2047
    top output sorted by memory:
      top - 10:51:35 up 140 days, 7:58, 1 user, load average: 2.01, 1.47, 1.23
      Tasks: 129 total, 1 running, 128 sleeping, 0 stopped, 0 zombie
      Cpu(s): 6.5%us, 1.2%sy, 0.0%ni, 60.2%id, 31.5%wa, 0.2%hi, 0.5%si, 0.0%st
      Mem: 16439060k total, 16353940k used, 85120k free, 122056k buffers
      Swap: 2096472k total, 104k used, 2096368k free, 5461160k cached
        PID USER   PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
      20757 mysql  15   0 10.2g 9.7g 5440 S 29.0 61.6 28588:24 mysqld
      16610 root   15   0  184m  18m 4340 S  0.0  0.1  0:32.89 sysshepd
       9394 root   15   0  154m 8336 4244 S  0.0  0.1  0:12.20 snmpd
      17481 ntp    15   0 23416 5044 3916 S  0.0  0.0  0:02.32 ntpd
       2000 root    5 -10 12652 4464 3184 S  0.0  0.0  0:00.00 iscsid
       8768 root   15   0 90164 3376 2644 S  0.0  0.0  0:00.01 sshd
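
    As a quick sketch of how that output is usually read: the -/+ buffers/cache line already subtracts buffers and page cache, so the figure that matters here is the 5549 MB free on that line rather than the 83 MB at the top.

      # Memory still available to applications once buffers and cache are excluded
      free -m | awk '/buffers\/cache/ {print $NF " MB available to applications"}'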

  • umask is being ignored on Gentoo while creating new files

    - by drcelus
    I have a server running Gentoo that hosts a Drupal installation. Whenever a Drupal update is executed, the directory permissions of the updated module change from 755 to 744, preventing the application from accessing the files. The umask is defined as 022 under /etc/profile, and the Apache server is running as user and group nobody. I believe this has nothing to do with the Drupal installation, because if I create a directory as root the same thing happens: it is created with 744 permissions. Since the umask is 022, shouldn't it be created as 755? Why is the umask being ignored, and how do I tell the server to create directories with permission 755?
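
    A quick sketch of how to check this: directories are created with mode 0777 minus the umask, so 744 suggests the effective umask of the process doing the creating is 033 rather than 022 (the value in /etc/profile only applies to login shells, not necessarily to Apache or to whatever runs the Drupal update).

      # What a given umask actually produces for new directories
      ( umask 022; mkdir /tmp/umask-test-022; ls -ld /tmp/umask-test-022 )   # drwxr-xr-x (755)
      ( umask 033; mkdir /tmp/umask-test-033; ls -ld /tmp/umask-test-033 )   # drwxr--r-- (744)

      # Check the umask in the shell or environment that actually creates the files
      umask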

  • Ubuntu 12.04 3.2 Kernel showing incorrect "cache size" in cpuinfo?

    - by Tom G
    2x Xeon E5620, 16 cores altogether. /proc/cpuinfo shows the cache as only 4096 KB. According to Intel this CPU should have 12 MB of "smart cache". Searching for E5620 cpuinfo output shows the correct number:
      cache size : 12288 KB
    However mine shows this:
      processor       : 15
      vendor_id       : GenuineIntel
      cpu family      : 6
      model           : 44
      model name      : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
      stepping        : 2
      microcode       : 0x1
      cpu MHz         : 2400.104
      cache size      : 4096 KB
      fpu             : yes
      fpu_exception   : yes
      cpuid level     : 11
      wp              : yes
      flags           : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx
      bogomips        : 4800.20
      clflush size    : 64
      cache_alignment : 64
      address sizes   : 40 bits physical, 48 bits virtual
    Any idea why it looks like 8 MB of CPU cache is missing?
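
    As a cross-check, the per-level cache sizes are also exposed through sysfs and lscpu, which report L1/L2/L3 separately instead of the single figure in /proc/cpuinfo; a sketch:

      # Per-level cache sizes for one core (L1d, L1i, L2, L3)
      for d in /sys/devices/system/cpu/cpu0/cache/index*; do
          echo "$(cat $d/level) $(cat $d/type): $(cat $d/size)"
      done

      # Summary view, if lscpu from util-linux is available on this system
      lscpu | grep -i cache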

  • Best Practice: Migrating Email Boxes (maildir format)

    - by GruffTech
    So here's the situation. I've got about 20,000 Maildir email accounts chewing up several hundred GB of space on our email server. Maildir by nature keeps thousands of tiny little files instead of one .mbox file or the like, so I need to migrate several million files from one server to the other, for both space and life-cycle reasons. The conventional methods I would use all work just fine; rsync is the option that comes immediately to mind, but I wanted to see if there are any other, better options out there. Rsync not handling multi-threaded transfers in this situation hurts because it never actually gets up to speed and saturates my network connection, so the transfer from one server to another will take hours upon hours when it shouldn't really take more than one or two. I know this is highly opinionated and subjective and will therefore be marked community wiki.
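
    One way people work around single-stream rsync, sketched below, is to split the mail store by top-level account directory and run several rsync processes in parallel; the paths, host name and degree of parallelism are placeholders.

      # Run up to 8 rsync streams at once, one per account directory
      cd /var/mail/store
      ls -d */ | xargs -P8 -I{} rsync -a --relative "./{}" newserver:/var/mail/store/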

  • Apache configuration FollowSymLinks - does it apply to PHP scripts?

    - by Josh
    I have Options set to None for my webroot directory. I also have a symlink /var/coderoot -> /var/webroot/coderoot. In a PHP script I can do include("/var/coderoot/file"); and it works fine, regardless of the Options setting (yes, I save and restart Apache). Does FollowSymLinks only apply to symlinks used in a certain way? Is there a performance loss when using include() with a symlink?

  • Why doesn't rsync use delta-transfer for local files?

    - by o_O Tync
    I have a big ISO image which is currently being downloaded by a torrent client with space-reservation turned on: that means the file size is not changing, while some chunks in it (4 MiB each) are constantly changing because of the download. At 90% downloaded I do an initial rsync to save time later:
      $ rsync -Ph DVD.iso /some/target/
      sending incremental file list
      DVD.iso
            2.60G 100%   40.23MB/s    0:01:01 (xfer#1, to-check=0/1)
      sent 2.60G bytes  received 73 bytes  34.59M bytes/sec
      total size is 2.60G  speedup is 1.00
    Then, when the file is fully downloaded, I rsync again:
      total size is 2.60G  speedup is 1.00
    Speedup=1 says the delta-transfer algorithm was not used, although 90% of the file has not changed. Why?
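
    For what it's worth, rsync defaults to --whole-file when both paths are local, since the delta algorithm is designed to save network bandwidth rather than local disk I/O; a sketch of forcing the delta algorithm for comparison:

      # Force the delta-transfer algorithm even for a local copy
      rsync -Ph --no-whole-file DVD.iso /some/target/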

  • How can I measure TCP timeout limit on NAT firewall for setting keepalive interval?

    - by jmanning2k
    A new (NAT) firewall appliance was recently installed at $WORK. Since then, I'm getting many network timeouts and interruptions, especially for operations that require the server to think for a bit without sending a response (svn update, rsync, etc.). Inbound SSH sessions over VPN also time out frequently. That clearly suggests I need to adjust the TCP (and SSH) keepalive time on the servers in question in order to reduce these errors. But what is the appropriate value I should use? Assuming I have machines on both sides of the firewall between which I can make a connection, is there a way to measure what the idle time limit on TCP connections might be for this firewall? In theory, I would send a packet with gradually increasing intervals until the connection is lost. Any tools that might help (free or open source would be best, but I'm open to other suggestions)? The appliance is not under my control, so I can't just read the value off it, though I am attempting to ask what it currently is and whether I can get it increased.
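
    A rough sketch of that measurement idea, assuming netcat is available on both sides of the firewall (option syntax varies between netcat flavours; traditional netcat wants -l -p 5000), and with host.far.example and port 5000 as placeholders:

      # On a machine on the far side: accept connections and print whatever arrives
      nc -lk 5000

      # On a machine on the near side: open a connection, stay idle for N seconds,
      # then send a probe. Probes that never appear on the far side mean the
      # firewall's idle timeout has been exceeded.
      for idle in 60 300 600 1200 1800 3600; do
          ( sleep "$idle"; echo "still alive after ${idle}s idle" ) | nc host.far.example 5000
      done

    If the limit turns out to be, say, 300 seconds, the servers' keepalive can be brought under it with something like sysctl -w net.ipv4.tcp_keepalive_time=240.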

  • Apache Virtual Hosts behind Cisco Router

    - by Theo
    I'm setting up an Apache 2.2 Ubuntu web server for internal services that is also supposed to be accessed from outside our LAN. Our LAN has a single external IP, which is the external IP of our RV042 Cisco router. We have set up several A records on our external DNS server that point to this IP. Our internal DNS server resolves the same records to the internal IP of our web server, so computers inside the network can access them using the same address as if they were outside. We forwarded the router's external port 80 to our web server's port 80. I have set up one virtual host for each domain name in our list, and my httpd.conf is something like this:
      ServerName web.domain.com
      NameVirtualHost *:80

      <VirtualHost *:80>
          ServerName alfresco.domain.com
          <Proxy *>
              Order deny,allow
              Allow from all
          </Proxy>
          ProxyPass /alfresco http://localhost:8080/alfresco
          ProxyPassReverse /alfresco http://localhost:8080/alfresco
          ProxyPass /share http://localhost:8080/share
          ProxyPassReverse /share http://localhost:8080/share
      </VirtualHost>

      <VirtualHost *:80>
          ServerName crm.domain.com
          DocumentRoot /var/www/sugarcrm
      </VirtualHost>
    Now, this works if we are in our LAN. However, if we are outside our LAN we reach our web server's default page saying:
      It Works! This is the default web page for this server.
    But we can't reach the virtual hosts, as if the domain name is not being preserved when the router forwards the packets to the web server. Am I doing something wrong? How can I check what is going on? What should the settings be to make this work from outside?
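
    One way to check whether the Host header survives the trip from outside, sketched below, is to request the external IP directly with an explicit Host header and compare the responses (the IP is a documentation placeholder):

      # From outside the LAN: hit the external IP but present each vhost name
      curl -v -H "Host: crm.domain.com" http://203.0.113.10/
      curl -v -H "Host: alfresco.domain.com" http://203.0.113.10/alfresco/

      # For comparison, the response when no vhost name is presented at all
      curl -v http://203.0.113.10/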

  • Routing with VPN and asymmetric communication

    - by Louis
    I'm stumbling on a problem that requires your advice. Keywords: networking, routing, OpenVPN. Problem: I have a local network with several physical servers and VMs. These machines have IPs in the 10.10.x.x range. I can access these machines from the Internet with the help of OpenVPN. These machines can:
    - access each other within the local 10.10.x.x subnet
    - access the Internet via the VPN
    - themselves be accessed (via SSH) from the Internet via the VPN.
    There is one machine, however, that behaves strangely and I don't know why. I can SSH into this machine from anywhere and I can also ping it from anywhere (including the Internet). However, from this machine (i.e. when logged into it) I cannot access the Internet or ping machines outside the local network. In other words it will not go beyond the VPN. My question is why? Here are some technical details.
    The machine's network config (running Debian 6.0.3):
      allow-hotplug eth0
      iface eth0 inet static
          address 10.10.10.200
          netmask 255.255.0.0
          network 10.10.10.0
          broadcast 10.10.10.255
          gateway 10.10.10.200
    The machine's routing:
      Destination     Gateway         Genmask         Flags MSS Window irtt Iface
      127.0.0.1       0.0.0.0         255.255.255.255 UH    0   0      0    lo
      10.10.0.0       10.10.10.250    255.255.0.0     UG    0   0      0    eth0
      10.10.0.0       0.0.0.0         255.255.0.0     U     0   0      0    eth0
      0.0.0.0         10.10.10.250    0.0.0.0         UG    0   0      0    eth0
      0.0.0.0         10.10.10.200    0.0.0.0         UG    0   0      0    eth0
    The VPN server's network config (running Debian 6.0.3):
      # This is the local network interface
      auto eth1
      allow-hotplug eth1
      iface eth1 inet static
          address 10.10.10.250
          netmask 255.255.0.0
          broadcast 10.10.10.255
          gateway 10.10.10.250
    The VPN server's routing table:
      Destination     Gateway         Genmask         Flags MSS Window irtt Iface
      10.10.0.0       0.0.0.0         255.255.255.0   U     0   0      0    tun0
      private         0.0.0.0         255.255.255.0   U     0   0      0    eth0
      10.10.0.0       0.0.0.0         255.255.0.0     U     0   0      0    eth1
      0.0.0.0         10.10.10.250    0.0.0.0         UG    0   0      0    eth1
      0.0.0.0         private         0.0.0.0         UG    0   0      0    eth0
    net.ipv4.ip_forward = 1 on both machines. There are no iptables rules set anywhere. Thanks in advance for any feedback.
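
    A couple of checks worth running from the misbehaving machine, sketched below. Its interfaces file sets gateway 10.10.10.200 (the machine's own address), which is what adds the second default route in its routing table, so confirming which default route actually wins is a reasonable first step; the fix shown at the end is only a hypothesis.

      # Which route is actually used for an outside address?
      ip route get 8.8.8.8

      # Where do outbound packets stop?
      traceroute -n 8.8.8.8

      # If the self-pointing default route is the culprit, a test would be:
      # route del default gw 10.10.10.200
      # (and changing "gateway" in /etc/network/interfaces to 10.10.10.250)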

  • how to figure out why sites on my server aren't loading

    - by Derek
    I seem to randomly receive "page cannot load, cannot connect to server" errors for sites on one of my servers. When this happens, it seems to affect only certain IPs or IP ranges at a time. I say this because while I'll get the error from my home laptop, I'll be able to access the site fine from my work computer or from an offsite VPS. DNS records should already be fully propagated, as these records were updated months ago. I have no idea how to diagnose what's going on. Is there a tool in cPanel, or outside on the web, that can help me figure out what's happening?
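
    While the problem is happening, a sketch of the data worth collecting from an affected client (the domain is a placeholder) to separate DNS, routing and web-server issues:

      # Is DNS returning the right address from this client?
      dig +short www.example-site.com

      # Does the route reach the server at all?
      traceroute -n www.example-site.com

      # Does a TCP connection to port 80 open, and what does the server answer?
      curl -v --max-time 15 http://www.example-site.com/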

  • Lots of files being used by blank web page. What are they?

    - by byronyasgur
    I am trying to optimise a website, and I was using the network waterfall facility in Google Chrome. When I looked at the results there were lots of files which I didn't recognise. I first thought they might be something to do with Google Chrome itself, so I put a blank HTML file on my desktop and checked, but there was nothing in the waterfall except the file itself. So I put a blank file on my server and I got the output below. What are all these files, are they all necessary, is this normal, and do I need to be in any way concerned? My hosting provider has always been excellent in every regard that I'm aware of. The hosting is shared, uses cPanel, and is based on a LAMP server. I also note that a couple of those files have problems, but I have no idea how to fault-find that or whether it's a concern. EDIT: I have cleared the cache, so I don't think it's a browser cache issue.
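
    One way to take the browser and its extensions out of the picture, sketched here, is to fetch the same blank page with curl and look at the headers and body the server actually returns (the URL is a placeholder):

      # Show response headers and body for the blank page, bypassing the browser
      curl -v http://www.example-site.com/blank.html

      # Headers only, following any redirects the server issues
      curl -s -D - -L -o /dev/null http://www.example-site.com/blank.html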

  • Checkpoint - Routing into the tunnel

    - by Fake4d
    I have a simple question about my Check Point infrastructure. Do I have to add a route for a network that I want to access over an already configured firewall VPN tunnel? Explanation: I have two firewalls connected over a VPN, with several networks behind each of them. I need to access a new network behind the other firewall and have put it in its encryption domain. Now here is the question: do I also have to add a route for it in the operating system (SecurePlatform)? Thanks!

  • Are PHP session files ever deleted?

    - by GetFree
    I see there are thousands of files in my /tmp directory (a CentOS machine), and almost all of them are PHP session files. I'm worried about the possible impact this might have on my system. Are those files ever deleted, either by the OS, Apache, or PHP? Or do I have to take care of it myself?
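
    As a sketch of what governs this: PHP's own session garbage collector runs probabilistically according to a few ini settings, and stale session files can also be swept out-of-band with a cron job (the 24-hour age below is an example):

      # How the garbage collector is currently configured
      php -i | grep -E 'session\.gc_(probability|divisor|maxlifetime)'

      # Manual or cron cleanup of session files idle for more than 24 hours
      find /tmp -name 'sess_*' -type f -mmin +1440 -delete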

  • Why doesn't SSHFS let me look into a mounted directory?

    - by Jan
    I use SSHFS to mount a directory on a remote server. There is a user xxx on both client and server, and the UID and GID are identical on both boxes. I use
      sshfs -o kernel_cache -o auto_cache -o reconnect -o compression=no \
            -o cache_timeout=600 -o ServerAliveInterval=15 \
            [email protected]:/mnt/content /home/xxx/path_to/content
    to mount the directory on the remote server. When I log in as xxx on the client I have no problems: I can cd into /home/xxx/path_to/content. But when I log in on the client as another user zzz and then run
      $ ls -l /home/xxx/path_to
    I get this:
      d????????? ? ? ? ? ? content
    and on
      $ ls -l /home/xxx/path_to/content
    I get
      ls: cannot access content: Permission denied
    When I do
      $ ls -l /mnt
    on the remote server I get
      drwxr-xr-x 6 xxx xxx 4096 2011-07-25 12:51 content
    What am I doing wrong? The permissions seem correct to me. Am I wrong?
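
    For reference, FUSE filesystems are by default only accessible to the user who mounted them, whatever the ordinary Unix permissions say; a sketch of opening the mount to other local users (mounting as a non-root user additionally needs user_allow_other enabled in /etc/fuse.conf):

      # Allow users other than the mounting user to access the mount point
      sshfs -o allow_other -o reconnect -o ServerAliveInterval=15 \
            [email protected]:/mnt/content /home/xxx/path_to/content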

  • Debian 5.0.8 won't detect my ethernet adapter

    - by PaulH
    I'm installing Debian 5.0.8 on a Dell Latitude E6410 using the network-install x86 CD. Unfortunately, it won't detect my Ethernet adapter. Specifically, I get the error:
      No Ethernet card was detected. If you know the name of the driver needed by your Ethernet card, you can select it from the list.
    lspci shows:
      00:19.0 Ethernet controller: Intel Corporation Device 10ea (rev 05)
    That should correspond to the e1000e driver, but if I select it from the list, the install CD still fails to detect the card. Ubuntu 10.10 and Windows 7 are able to use this device with no problem. Does anybody know what I can do to make Debian use my Ethernet adapter? Thanks, PaulH
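
    A sketch of how to check whether a given kernel's e1000e driver even knows this PCI ID (run, for example, from the Ubuntu system that does see the card, and again against the Debian lenny kernel); if the ID is missing there, the installer kernel is simply too old for this NIC and a newer kernel or a backported driver would be needed:

      # Show the numeric vendor:device ID (expected 8086:10ea here)
      lspci -nn | grep -i ethernet

      # Does the e1000e module list that ID among its supported devices?
      modinfo e1000e | grep -i 10ea

      # Try loading it by hand and watch the kernel log
      modprobe e1000e
      dmesg | tail -n 20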
