Search Results

Search found 25088 results on 1004 pages for 'dsl linux'.


  • CSF Unresolved issue

    - by josephmarhee
    I began receiving service failures for CSF/LFD once the iptables rule limit was reached, which prevented the firewall from working properly. I flushed all iptables rules and rebuilt my rules using CIDR blocks rather than the individual IPs that were listed, but the issue persists. Error:
        The VPS iptables rule limit (numiptent) is too low (1527/1536) - stopping firewall to prevent iptables blocking all connections, at line 1459
    This is after restarting CSF, which gave me: "You have an unresolved error when starting csf. You need to restart csf successfully to remove this warning." CSF still seems to be trying to enforce rules that no longer exist (it lists entire chains when asked to restart, only to fail with that error). Any idea what's going on?
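
    A minimal sketch of how to compare the container limit against the live rule count on an OpenVZ VPS (not from the original post; the commands assume root inside the container):

        # held / maxheld / barrier / limit for iptables entries on an OpenVZ container
        grep numiptent /proc/user_beancounters
        # how many rules the consolidated CIDR ruleset actually loads
        iptables -S | wc -l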


  • Finding the current user authenticated by basic auth (Apache)

    - by jtd
    When you log in through a basic auth page, is the username you authenticated as stored anywhere (on the server or client machine), maybe in an environment variable? Background: I have a common web administration page for an e-mail server and I'd like to know who is doing what. When a user successfully logs in via basic auth, I want to be able to identify them and log their actions, so that each time a request is submitted I can write to a log file. The basic format would be "$username ran a $function against $useraccount", so if a user changed someone's permissions, e.g.: "Admin-Bob ran a permission change against User-Scott". Then if errors occur, I can easily trace back in the log file which actions led to the cause. I tried checking the %ENV hash to no avail; any ideas? I don't really want to get into PHP-like sessions, because that would mean scrapping my basic auth, which already gives me a fine degree of control. If I had to code something with sessions, I'd need to implement a system to block users after a maximum number of tries and so on, which I don't really want to write. I think this is better geared towards serverfault because it pertains to Apache more than to any programming language; sessions can be done in a myriad of languages.
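
    For requests that pass basic auth, Apache exposes the authenticated name to CGI programs as the REMOTE_USER environment variable and to the access log via the %u format field. A minimal logging sketch (the log path and wrapper are assumptions, not part of the original setup):

        #!/bin/bash
        # CGI sketch: record who ran what before doing the real work;
        # REMOTE_USER is only set once Apache has authenticated the request.
        echo "$(date -Is) ${REMOTE_USER:-unknown} ran ${SCRIPT_NAME}?${QUERY_STRING}" \
            >> /var/log/admin-actions.log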


  • Multiple user directories on EC2

    - by Joseph
    I'm trying to set up multiple user directories on EC2 running Ubuntu, but I'm not sure how to configure it correctly so that I can serve files in the following format: http://<ec2 ip address>/user_1/public_html/file1.html and http://<ec2 ip address>/user_2/public_html/file3.html, and so on for every user that I add. I tried looking for the httpd.conf file but couldn't find it; I only found apache2.conf. Thank you.
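
    On Ubuntu the Apache configuration is split across /etc/apache2/apache2.conf and per-site files under /etc/apache2/sites-available/, which is why there is no httpd.conf. A rough sketch of one way to expose per-user trees under the default DocumentRoot (the paths are assumptions; the DocumentRoot may be /var/www or /var/www/html depending on the release):

        # confirm which site files exist and which are enabled
        ls /etc/apache2/sites-available/ /etc/apache2/sites-enabled/
        # simplest approach: symlink each user's tree into the DocumentRoot
        sudo ln -s /home/user_1 /var/www/user_1
        sudo ln -s /home/user_2 /var/www/user_2
        # Options FollowSymLinks must be allowed for the DocumentRoot for this to work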


  • mod_rewrite filename from mod_pagespeed back to normal files

    - by British Sea Turtle
    I am hoping someone can help me with this problem. I am moving to a new server and will not be using mod_pagespeed any more. That in itself is not an issue, but we have lots of external links to images on our site that use the rewritten mod_pagespeed filenames, and we do not want them to turn into 404 errors. So I have lots of links like the following:
        http://www.domain.com/images/150x150xlink.png.pagespeed.ic.pPXw45HSQm.png
        http://www.domain.com/images/paris_01.gif.pagespeed.ce.vfrkuKUaj0.gif
        http://www.doamin.com/images/1st2.gif.pagespeed.ce.OUg38q6VbZ.gif
    How can I redirect them to:
        http://www.domain.com/images/150x150xlink.png
        http://www.domain.com/images/paris_01.gif
        http://www.doamin.com/images/1st2.gif
    There are thousands of files like this, so I am hoping for a simple solution with mod_rewrite. I tried the following, but it does not work, so any help would be appreciated.
        RewriteCond %{REQUEST_URI} \.gif\.pagespeed\. [NC]
        RewriteRule ^(.*?\.gif)\..*\.gif$ $1 [NC,L]
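
    A quick way to sanity-check the intended mapping outside Apache (a sketch only, not the rewrite rule itself; the same pattern would then be translated into a RewriteRule covering .png and .jpg as well as .gif):

        echo "/images/paris_01.gif.pagespeed.ce.vfrkuKUaj0.gif" \
            | sed -E 's/(\.(gif|png|jpe?g))\.pagespeed\..*$/\1/'
        # prints: /images/paris_01.gif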


  • rsync --link-dest behaviour when run as sudo

    - by fotNelton
    In order to create regular backups, I'm using rsync together with --link-dest so as to create hard links for unchanged files. For example:
        rsync -ax \
            --partial --delete --delete-excluded --inplace \
            --exclude-from=/tmp/temp_excludes \
            --link-dest=/Volumes/Backup/current \
            /Users /Volumes/Backup/2012-06-25
    This works very well as long as I start the process from my normal user account. But as soon as I start it using sudo it behaves erratically: rsync copies all the unchanged files instead of hard-linking them. Since sudo modifies the environment, I've also tried sudo -E in conjunction with making sure that my sudoers file has the corresponding option set. That didn't work either. So, the question is: how can I run rsync using sudo? Whereas the above example only shows a backup of the Users directory, I also need to back up some system files that I can only access as root.
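
    A small sketch for checking whether a given run actually reused hard links against --link-dest (paths follow the post; the sample file name is an assumption):

        # files with a link count above 1 were hard-linked rather than copied
        find /Volumes/Backup/2012-06-25 -type f -links +1 | wc -l
        # compare inode numbers of one unchanged file across the two snapshots
        ls -li /Volumes/Backup/current/Users/example.txt \
               /Volumes/Backup/2012-06-25/Users/example.txt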


  • How to execute a command whenever a file changes?

    - by Denilson Sá
    I want a quick and simple way to execute a command whenever a file changes. I want something very simple, something I will leave running on a terminal and close it whenever I'm finished working with that file. Currently, I'm using this:
        while read; do ./myfile.py ; done
    and then I need to go to that terminal and press Enter whenever I save that file in my editor. What I want is something like this:
        while sleep_until_file_has_changed myfile.py ; do ./myfile.py ; done
    or any other solution as easy as that. BTW: I'm using Vim, and I know I can add an autocommand to run something on BufWrite, but this is not the kind of solution I want now.

    Update: I want something simple, discardable if possible. What's more, I want something to run in a terminal because I want to see the program output (I want to see error messages).

    About the answers: Thanks for all your answers! All of them are very good, and each one takes a very different approach from the others. Since I need to accept only one, I'm accepting the one that I've actually used (it was simple, quick and easy-to-remember), even though I know it is not the most elegant.
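
    The usual tool-based answer is a file-watching loop; a minimal sketch (inotifywait comes from the inotify-tools package and entr is a separate utility, so either would need to be installed first):

        # re-run the script every time the file is written out by the editor
        while inotifywait -qq -e close_write myfile.py; do ./myfile.py; done
        # or, with entr:
        ls myfile.py | entr ./myfile.py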


  • VirtualBox to use dual monitors

    - by fnord_ix
    I am running Kubuntu Hardy Heron with a dual monitor setup, and have VirtualBox on it running Windows XP in seamless mode. My problem is that I can't get VirtualBox to extend to the second monitor. Has anyone been able to achieve this, or does anyone know if it can be achieved?


  • What software does this log file come from? [closed]

    - by mickula
    Which software produces this log file? Please specify the full name.
        Internal IP Threshold FlowsDiff 40 flows/s, Diff: 73 flows/s
        Sum 26.962 flows/300s (89 flows/s), 32.162.000 packets/300s (107.206 packets/s), 1,198 GByte/300s (32 MBit/s)
        External 87.98.238.221, 26.958 flows/300s (89 flows/s), 32.156.000 packets/300s (107.186 packets/s), 1,198 GByte/300s (32 MBit/s)
        External 89.230.69.49, 2 flows/300s (0 flows/s), 2.000 packets/300s (6 packets/s), 0,000 GByte/300s (0 MBit/s)
        External 89.231.190.149, 1 flows/300s (0 flows/s), 3.000 packets/300s (10 packets/s), 0,000 GByte/300s (0 MBit/s)
        External 89.239.101.20, 1 flows/300s (0 flows/s), 1.000 packets/300s (3 packets/s), 0,000 GByte/300s (0 MBit/s)


  • Setting up a second monitor in CentOS

    - by Rob
    I have CentOS installed on my laptop. I hooked up my TV via VGA and it works, just not as I'd like it to. The left side of the image (on the TV) is cut off, as if the picture is justified too far to the left. I want it centered, but I also want to use a different resolution: I'm on a netbook, so my laptop screen is tiny, and some things can't fit in a window without scrolling. I want my TV to fix that for me.
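
    A sketch of the xrandr calls that usually control position and resolution for an external output, on releases with RandR 1.2 support (the output names and mode below are assumptions; `xrandr -q` shows the real ones):

        xrandr -q                                            # list outputs and supported modes
        xrandr --output VGA-0 --mode 1360x768 --right-of LVDS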


  • How do you delete an iFolder from the iFolder admin interface

    - by cheshirekow
    There are only two buttons at the bottom of the screen "enable" and "disable". When I check the box next to an iFolder one of them is lit (depending on what the state of the folder is)... but there is no button to delete the folder (as it seems there should be from the documentation). There is a delete button in the "orphaned" tab but how do you "orphan" an iFolder? I'm logged in to the admin interface as admin, who is currently the owner of the folder I wish to delete.


  • Primary/secondary ethernet interfaces in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three ethernet interfaces: eth0, eth1 and eth2. eth2 is connected to a private network. eth0 and eth1 are connected to two different LANs; either one will provide access to the internet. All three networks have DHCP servers. Using Ubuntu's default settings (and GNOME), when I boot up all the interfaces are active and my system gets three IP addresses. However, any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1). I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown. How can I configure eth0 as my primary interface, so all traffic goes out by default on that interface, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on the LAN of eth1 can still connect to myHostname.local...
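
    A rough sketch of forcing the default route onto eth0 while leaving all three interfaces up (the gateway address is an assumption, and these changes do not persist across reboots):

        # drop any default routes installed by the other DHCP leases
        sudo ip route del default dev eth1 2>/dev/null
        sudo ip route del default dev eth2 2>/dev/null
        # make eth0 the only default route
        sudo ip route replace default via 192.168.1.1 dev eth0
        ip route show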


  • Traffic shaping L2TP/IPsec VPN (via accounts not connection)

    - by Cromulent
    I need to be able to control the amount of bandwidth a specific user account can use on a VPN connection. One account I want to be able to use the VPN with no restrictions and another account I want to limit to a reasonable amount of bandwidth (say 10GB or so a month). I'm aware that you can traffic shape individual connections but that does not quite solve the problem as the limited account can just disconnect and reconnect to get a new connection. I need to be able to limit bandwidth on a login basis for a given period of time (monthly limit). I'm really not that familiar with traffic shaping in general so any advice would be appreciated. Thank you.


  • Unified inbox shows twice on Thunderbird

    - by That Help Vampire Guy
    I'm using Thunderbird 24. If I show folders in Unified mode, my inbox folder shows up twice; if I choose the "All" folders mode, I see only one inbox. The issue started when I was using Ubuntu 12.04, but now I'm on Fedora 19 (I have migrated the folders in /home). I do remember it not being duplicated, but then it started while I was still on Ubuntu. I noticed it when using the Conversations plugin, but I had previously used the plugin without it happening, and I have since disabled the plugin and the issue persists. What I have tried: if I close Thunderbird and rename the .thunderbird folder in my /home to something else, a new config profile is created, I have to set up everything again, and then it works as expected (screenshots omitted: "Before resetting: Unified vs All Folders" and "After resetting: Unified vs All Folders"). I'm trying to avoid resetting the profile and creating a fresh new one, because the server -- MS Exchange -- doesn't support IMAP labels, so I'd lose all the tags on my messages, and I have organized things by tags instead of folders.


  • Apache virtual host subdomains point to the same directory

    - by Jakobud
    I have set up subdomains using Apache before and have never really run into any big problems. But with this (I believe CentOS) server that belongs to one of my clients, I don't understand what I'm doing wrong. Here is the .conf that Apache is loading:
        Listen 80
        NameVirtualHost *:80
        <VirtualHost *:80>
            ServerName www.thedomain.com
            DocumentRoot /u1/thedomain.com/public
            RailsEnv production
        </VirtualHost>
        <VirtualHost *:80>
            ServerName subdomain.thedomain.com
            DocumentRoot /u1/subdomain.thedomain.com/public_html
        </VirtualHost>
    When I access either the primary or the subdomain address, they both serve the primary www.thedomain.com content. Any thoughts? UPDATE: Yes, I did a configtest and a graceful restart after making the changes.
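
    A quick way to see which vhost Apache actually selects for each ServerName is to dump the parsed virtual-host table (a sketch; on CentOS the control command may be httpd rather than apache2):

        # prints the default vhost, each name-based vhost, and the file/line it came from
        apachectl -S        # equivalent to: httpd -t -D DUMP_VHOSTS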


  • PHP unable to allocate memory.

    - by AlReece45
    On my way to the office this morning, every website on our shared VPS started giving the same error (several times, and not the typical fatal memory_limit error):
        Warning: Unknown: Unable to allocate memory for pool. in Unknown on line 0
    The shared server is a 64-bit OpenVZ container running cPanel. There are only ~6 VPSes on the host; this is the largest one at only 4GB, and the host itself has 24GB RAM. As the graphs showed, memory usage on both the host and the VPS was rather low, and CPU usage, disk and the host all seemed normal. RlimitMem was set to 583653034, yet the memory usage was about the same as it usually is. We're on Apache 2.2 with PHP 5.2 (mod_php). Restarting Apache has corrected the problem for now. However, I'd like to prevent it from happening again, and I'm not sure what was limiting the memory. There seems to be plenty of memory, so what caused this error? (Graphs omitted: VPS memory usage, host memory usage.) APC information:
        apc.ttl=0
        apc.shm_size=0
        apc.mmap_file_mask=(blank)
        1 Segment(s) with 32.0 MBytes (mmap memory, pthread mutex locking)
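
    For what it's worth, "Unable to allocate memory for pool" is the warning APC emits when its shared-memory segment fills up. A sketch of checking the effective settings and raising the cache size (the ini path is an assumption and varies by distro; note the CLI may load a different ini than mod_php):

        # confirm what APC is actually running with
        php -i | grep -E 'apc\.(shm_size|ttl|num_files_hint)'
        # then raise the segment size in the APC ini file and restart Apache, e.g.:
        # echo 'apc.shm_size=64' >> /etc/php.d/apc.ini   # value is in MB on older APC releases
        # service httpd restart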


  • How to understand cpu family/model/stepping fields in /proc/cpuinfo [closed]

    - by Victor Sorokin
    I have the following in cpuinfo:
        processor  : 0
        vendor_id  : AuthenticAMD
        cpu family : 15
        model      : 107
        model name : AMD Athlon(tm) 64 X2 Dual Core Processor 5600+
        stepping   : 2
    According to the Wikipedia page there are two kinds of 5600+: one built on the 90nm process, the other on 65nm. How can I tell which one I have? There seems to be no direct correspondence between the contents of cpuinfo and the info on the Wikipedia page, and AMD's site seems to use yet another naming scheme for its processors. How can I map the family, model and stepping values from cpuinfo to the data available on Wikipedia/AMD?
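
    One small step that usually helps: AMD's revision guides and most CPU databases quote the CPUID family/model in hexadecimal, while /proc/cpuinfo prints them in decimal, so converting first makes the lookup easier (a sketch):

        # cpu family 15, model 107, stepping 2 from /proc/cpuinfo, shown the way AMD documents them
        printf 'family 0x%X, model 0x%X, stepping 0x%X\n' 15 107 2
        # -> family 0xF, model 0x6B, stepping 0x2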


  • haproxy backend default location

    - by magd1
    If you go to www.company.com, I want the request to be rewritten internally to /something/something on my server, while the URL still shows www.company.com. Is this possible in haproxy?
        backend new_marketing_server
            *** set default URL to /something/something ***
            mode http
            balance roundrobin
            timeout server 10m
            option httpclose
            server server1 10.86.151.142:80 minconn 32000 maxconn 3200 check port 80 inter 2000
            server server2 10.122.13.189:80 minconn 32000 maxconn 3200 check port 80 inter 2000


  • Creating a link to a directory whose name changes

    - by groove1534
    I have Ubuntu 12.04 installed using Wubi alongside Windows 7. I'm trying to create a link to the "My Documents" directory, which is located on my C: drive at C:\Users\Myuser\My Documents\. Since Ubuntu is installed on D:\, which is the "host", my C: drive is accessible via /media/some_changing_hex, and this hex changes each time I restart the machine. So I need, somehow, to create a link that uses a pattern OR a link that somehow picks up the first (in this case, only) subdirectory in /media (something like all_subdirectories[0]). How do I do that?
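
    A small sketch of the "first subdirectory in /media" idea (the link name is an assumption; because the mount point changes at each boot, the command would need to be re-run at startup, or the partition mounted at a fixed path via /etc/fstab instead):

        # point a symlink at the Windows documents folder inside whatever is mounted under /media
        win_c=$(find /media -mindepth 1 -maxdepth 1 -type d | head -n 1)
        ln -sfn "$win_c/Users/Myuser/My Documents" ~/windows-documents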


  • What's a good tool for collecting statistics on filesystem usage?

    - by Kamil Kisiel
    We have a number of filesystems for our computational cluster, with a lot of users that store a lot of really large files. We'd like to monitor the filesystems, help optimize their usage, and plan for expansion. In order to do this, we need some way to monitor how the filesystems are used. Essentially I'd like to know all sorts of statistics about the files: age, frequency of access, last accessed times, types, and sizes. Ideally this information would be available in aggregate form for any directory, so that we could monitor it per project or per user. Short of writing something up myself in Python, I haven't been able to find any tools capable of performing these duties. Any recommendations?
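
    A minimal sketch of the kind of per-directory aggregate described above, using only GNU find and du (the /data path is an assumption; a real tool would also track access frequency, which plain filesystem metadata does not record):

        # total size and newest modification date for each top-level directory
        find /data -mindepth 1 -maxdepth 1 -type d | while read -r d; do
            printf '%s\t%s\t%s\n' "$d" "$(du -sh "$d" | cut -f1)" \
                "$(find "$d" -type f -printf '%TY-%Tm-%Td\n' | sort -r | head -n 1)"
        done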


  • Cloning to a smaller hard drive with DDRescue

    - by krebshack
    I am currently working with a 700 GB Seagate hard drive that's beginning to fail; I'll call this "SDB" from now on. I'd like to clone it while I'm still able to. However, the only hard drive I have available is a 500 GB WD hard drive; I'll call this "SDC" from now on. The partition scheme on SDB is as follows: 9.77 GB is allocated to a recovery partition and the remaining 688.87 GB is allocated to a Windows partition; both are formatted using NTFS. There is no partition scheme on SDC. I know how to clone one hard drive to another using DDRescue, but I've only done it using hard drives of the same size. For your reference, I'd normally use the command "ddrescue -v -r 3 /dev/sdb /dev/sdc example.log". I'd like to know if it's possible to do this with DDRescue. I've read the manual from GNU (http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html) and I haven't seen anything indicating that it is possible, so I'm just looking for confirmation that this impression is correct. If it's not possible, it would be helpful if any of y'all could suggest a workaround, but please don't feel obligated to; I don't want my one thread bogged down with too many questions.
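
    For what it's worth, ddrescue copies block for block and has no notion of shrinking a filesystem, so a whole-disk rescue of a 700 GB drive cannot fit on a 500 GB target. Rescuing only the partitions that do fit is one common workaround; a sketch (partition numbers are assumptions, and the target partition would have to be created first):

        # rescue just the 9.77 GB recovery partition onto a matching partition on the 500 GB drive
        ddrescue -v -r 3 /dev/sdb1 /dev/sdc1 recovery-partition.log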


  • I have a perl script that is supposed to run indefinitely. It's being killed... how do I determine who or what kills it?

    - by John O
    I run the perl script in screen (I can log in and check debug output). Nothing in the logic of the script should be capable of killing it quite this dead. I'm one of only two people with access to the server, and the other guy swears that it isn't him (and we both have quite a bit of money riding on it continuing to run without a hitch). I have no reason to believe that some hacker has managed to get a shell or anything like that. I have very little reason to suspect the admins of the host operation (bandwidth/cpu-wise, this script is pretty lightweight). Screen continues to run, but at the end of the output of the perl script I see "Killed" and it has dropped back to a prompt. How do I go about testing what is whacking the damn thing? I've checked crontab, nothing in there that would kill random/non-random processes. Nothing in any of the log files gives any hint. It will run from 2 to 8 hours, it would seem (and on my mac at home, it will run well over 24 hours without a problem). The server is running Ubuntu version something or other, I can look that up if it matters.
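
    One likely suspect worth ruling out first is the kernel OOM killer, whose decisions are recorded in the kernel log (a sketch; the log file names vary between Ubuntu releases):

        dmesg | grep -iE 'out of memory|killed process'
        grep -iE 'oom|killed process' /var/log/syslog /var/log/kern.log 2>/dev/null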


  • Untar with date filter

    - by Don
    Is there any way to untar an archive and extract only the files newer than a certain date, preserving the directory structure? I restored a backup on a play server, but it was a few days old. However, I have a tar archive of the entire structure that is more up to date and healthy, so now I want to extract all files (including the directory structure) filtered by the files' dates, if possible.
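
    A sketch of one way to do it with GNU tar alone: list the archive with timestamps, filter on the date column, then extract only the matching paths (the archive name and cut-off date are assumptions; file names containing spaces would need more careful handling):

        # column 4 of `tar -tvf` output is the mtime date in YYYY-MM-DD form
        tar -tvf backup.tar | awk '$4 >= "2012-06-20" {print $NF}' > newer.list
        tar -xvf backup.tar --files-from=newer.list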

