Search Results

Search found 41561 results on 1663 pages for 'linux command'.


  • udev rule not being executed

    - by jyavenard
    I have the following device, which udevadm lists as:

        looking at device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0/ttyUSB0/tty/ttyUSB0':
            KERNEL=="ttyUSB0"
            SUBSYSTEM=="tty"
            DRIVER==""

        looking at parent device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0/ttyUSB0':
            KERNELS=="ttyUSB0"
            SUBSYSTEMS=="usb-serial"
            DRIVERS=="pl2303"
            ATTRS{port_number}=="0"

        looking at parent device '/devices/pci0000:00/0000:00:1c.7/0000:09:00.0/usb6/6-2/6-2:1.0':
            KERNELS=="6-2:1.0"
            SUBSYSTEMS=="usb"
            DRIVERS=="pl2303"
            ATTRS{bInterfaceNumber}=="00"
            ATTRS{bAlternateSetting}==" 0"
            ATTRS{bNumEndpoints}=="03"
            ATTRS{bInterfaceClass}=="ff"
            ATTRS{bInterfaceSubClass}=="00"
            ATTRS{bInterfaceProtocol}=="00"
            ATTRS{supports_autosuspend}=="1"

    So I created this rule:

        KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", DRIVERS=="pl2303", KERNELS=="6-2:1.0", SYMLINK+="cc128serial"

    This doesn't work. However, if I use:

        KERNEL=="ttyUSB0", SUBSYSTEM=="tty", SUBSYSTEMS=="usb-serial", DRIVERS=="pl2303", SYMLINK+="cc128serial"

    then it works. I also tried KERNELS=="6*" and similar patterns, to no avail. Any ideas? Thanks.
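
    A likely explanation, per udev(7), is that all parent keys in a single rule (KERNELS, SUBSYSTEMS, DRIVERS, ATTRS) must match one and the same parent device, so SUBSYSTEMS=="usb-serial" (from the usb-serial parent) can never be satisfied together with KERNELS=="6-2:1.0" (from the usb interface parent). The parent chain and a dry run of the rules can be inspected with udevadm; the device node is the one from the question:

        # list every parent device and the keys a rule may match against it
        udevadm info -a -n /dev/ttyUSB0

        # simulate udev processing for this device and show which rules fired
        udevadm test $(udevadm info -q path -n /dev/ttyUSB0) 2>&1 | less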

  • Crontab -- scheduling my backups

    - by Garfonzo
    I want to do a backup every Friday night (no, this is not the whole backup routine, just part of it). Each Friday night's backup will not be overwritten until 4 weeks later. So, essentially, I have four revolving backups: week1, week2, week3, and week4. Now, I need the week1 backup script to run every 4 weeks, but I also want week2's script to run every four weeks. I know that I can tell crontab to execute something every X weeks/days/hours/whatever. However, how do I set it up so that each of these four scripts actually runs on a different week? How do I avoid all 4 scripts running on the same night, dutifully waiting for weeks, and then all running again together? Thanks.
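
    Cron has no native "every N weeks", so one common trick is to fire every Friday and gate each script on the current week number modulo 4, derived from the epoch time. A minimal sketch (the paths are placeholders, and which script lands on which Friday is arbitrary; the point is only that the four rotate on a 4-week cycle):

        # m h dom mon dow  command   (every Friday at 23:30; % is escaped because cron treats it specially)
        30 23 * * 5  [ $(( $(date +\%s) / 604800 \% 4 )) -eq 0 ] && /path/to/backup_week1.sh
        30 23 * * 5  [ $(( $(date +\%s) / 604800 \% 4 )) -eq 1 ] && /path/to/backup_week2.sh
        30 23 * * 5  [ $(( $(date +\%s) / 604800 \% 4 )) -eq 2 ] && /path/to/backup_week3.sh
        30 23 * * 5  [ $(( $(date +\%s) / 604800 \% 4 )) -eq 3 ] && /path/to/backup_week4.sh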

  • Server load increases with a lot of httpd requests sharing the same PID

    - by user3740955
    I can see that my server load increases to the 200-300 range. A week ago the maximum load was around 20-25. In top and ps -ef I can see a lot of httpd processes, and the PPID of most of the httpd processes is the same PID; when I checked, that parent process belongs to root. Please let me know how I can reduce the server load. I have searched a lot for this but have not been able to find a proper solution. Please see below a part of the ps -ef output.

        apache 29698  2062  1 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29700  2062  3 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29701  2062 10 16:54 ? 00:00:02 /usr/sbin/httpd
        apache 29702  2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29703  2062  1 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29705  2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29706  2062  3 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29707  2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29708  2062  1 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29709  2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29710  2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29711  2062  0 16:54 ? 00:00:00 /usr/sbin/httpd
        apache 29712  2062  0 16:54 ? 00:00:00 /usr/sbin/httpd

    Server version: Apache/2.2.3
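
    Many apache-owned workers whose PPID is one root-owned master process is normal for Apache's process model; what matters is how many workers are busy and why. A few hedged diagnostics (the server-status check assumes mod_status is enabled, which it may not be):

        # how many httpd workers exist, grouped by parent PID
        ps -ef | grep '[h]ttpd' | awk '{print $3}' | sort | uniq -c | sort -rn

        # how many established connections the web server is handling right now
        netstat -tan | grep ':80 ' | grep -c ESTABLISHED

        # if mod_status is enabled (an assumption), see what each worker is doing
        curl -s http://localhost/server-status?auto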

  • HAProxy overload protection

    - by user2050516
    Using HAProxy, would it be possible to configure overload protection that limits the number of requests sent to the backing HTTP server(s) to a given rate (e.g. 100 requests per second)? If the threshold is exceeded, requests should be answered with a default response. I am interested in requests per second, not connections per second, as a connection can carry many requests. And yes, improving the servers is not an option here. If this is possible, a configuration example would be excellent. Thank you in advance.
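
    A rough sketch of the usual building blocks, with the caveat that exact directive support varies by HAProxy version and that the names below (web, app_servers) are placeholders: a stick-table tracks the HTTP request rate per client IP, and requests above 100/s are denied. A single global cap across all clients, as the question asks, would instead track one constant key rather than src.

        frontend web
            bind *:80
            stick-table type ip size 100k expire 30s store http_req_rate(1s)
            http-request track-sc0 src
            http-request deny if { sc_http_req_rate(0) gt 100 }
            default_backend app_servers

    The denied response is a 403 by default; it can be replaced with a custom body via an errorfile directive if a specific default answer is required.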

  • Website and file/directory permissions

    - by mathiass
    I've been given a task to fix this one website. One of its issues is that on one page the images have broken links - the images are not showing, and clicking on an image (i.e. the direct link to the image file) results in a 403 (Forbidden) error. I am looking for some feedback on what the possible cause could be. The directory where the images are stored has the following permissions (I had to hide the names):

        drwxrws--- www "group" 10240 Aug 2008 "image directory name"

    I checked the page source code, and everything seems to be in place. The rest of the site, and other images outside that image directory, are showing fine. I was told that recently there have been some changes to the server. I'm assuming that there is no fault in the source code and that the permissions are - or used to be - correct (since the site was working before, and no recent changes to the site itself have been made). My only thoughts at the moment are that either: a) the directory permissions should be drwxrws--x (executable for other users), or b) there is a change in the server settings that I don't know of. Is there anything else I should check?
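
    Since a 403 on static files usually means the web server user cannot traverse or read some component of the path, one quick check (a sketch; the path below is a placeholder) is to walk the permissions of every directory leading to a broken image and confirm which user and group the server runs as:

        # show owner/permissions of every directory component leading to one broken image
        namei -l /var/www/site/images/example.jpg

        # which user/group the web server workers actually run as
        ps -eo user,group,comm | grep -E 'apache|httpd|nginx' | sort -u

        # what the server itself logged for the 403 (paths are the common defaults)
        tail -n 50 /var/log/httpd/error_log /var/log/apache2/error.log 2>/dev/null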

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as me, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for "sudo rsync local root@remote:" nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems.

    EDIT: Using files that belong to root is just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is to copy my configured /root environment into a freshly-installed system. The two systems are actually two VMs under a single host, so it's not a big concern for me to copy root-owned files between them.

    EDIT 2: If I only want to copy my configured /root environment into the freshly-installed system, I can use tar:

        sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C /

    But I do need rsync to update from time to time. Any easy way to make it happen?

    EDIT 3: To formulate the question properly: it all began with the question of how to rsync files that belong to root between two systems, as a normal unprivileged user, without specifying a password, under these conditions:

    - The root account is locked on both systems, i.e. there are no root passwords; the only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo).
    - I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either.
    - The normal unprivileged user has entered their ssh passphrase into the ssh agent.

    Thanks
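
    One common pattern, offered as a sketch rather than a drop-in answer: it is not fully passwordless sudo, only passwordless execution of a single binary. The unprivileged account ('me' here is an assumption) is allowed to run rsync as root on the remote side, and rsync is pointed at it with --rsync-path; the local side reads root-owned files through sudo as well.

        # on the remote host, added via visudo: 'me' may run only rsync as root, without a password
        me ALL=(root) NOPASSWD: /usr/bin/rsync

        # on the local host: read local root-owned files via sudo and invoke the remote rsync under sudo;
        # SSH_AUTH_SOCK is preserved so ssh run as root can still reach the user's agent (sudo >= 1.8.21 syntax)
        sudo --preserve-env=SSH_AUTH_SOCK rsync -aH -e ssh --rsync-path="sudo rsync" /root/ me@remote:/root/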

  • How to find the reason for a weekly downtime on an Ubuntu web server hosted by AWS?

    - by IceSheep
    We started monitoring our web server using Pingdom and found out that we have a downtime of a few minutes every Sunday at 0:00 UTC. The test runs every minute and checks if a successful HTTP response (code 200) is returned on port 80. The test fails due to a timeout (no response after 30 seconds). Here's what we've already checked – without success:

    - Since we run our webserver behind a load balancer, I've set the Pingdom test on the load balancer's public DNS and the webserver's public DNS in order to find out if there's a problem with the AWS load balancer – both tests return the same result.
    - We set up Munin on our webserver. Everything looked fine even after the failure. Since the last failure lasted only 2 minutes, I suppose Munin couldn't capture a potential problem (it only checks every 5 minutes).
    - I have checked /var/log/apache2/error.log and /var/log/syslog for suspicious entries.
    - I have checked /etc/cron.weekly and /etc/crontab for suspicious entries.
    - I have searched for files created or last modified between 0:00 and 0:15 using this method (nothing found):

        touch -t 201209020000 start
        touch -t 201209020015 end
        find / -newer start -and ! -newer end

    Has anybody experienced a similar problem? Any proposals on how to find the reason for this behavior? It's Ubuntu 10.04 LTS running on an AWS m1.large instance. Thanks!
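
    Given the precise weekly timing, one more thing worth checking (a hedged suggestion, not a diagnosis) is what cron launches around Sunday 00:00 UTC and whether the kernel logged anything in that window; the date below matches the touch timestamps used in the question:

        # everything cron started in the failure window
        grep CRON /var/log/syslog | grep 'Sep  2 00:0'

        # any out-of-memory kills or other kernel complaints at that time
        grep -iE 'oom|killed process' /var/log/kern.log /var/log/syslog

        # when the weekly jobs (logrotate etc.) are actually scheduled
        cat /etc/crontab
        ls /etc/cron.weekly/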

  • How to configure OpenVPN server to use custom default gateway?

    - by Arenim
    I have a VPN server at address 10.1.0.2, and the server also has another IP on the subnet where 10.0.0.2 lives (10.0.0.2 is a tun2socks router). The server's default gateway is NOT 10.0.0.2 (and that's ok) but another, external IP. I want all the clients' traffic to be forwarded through this address, 10.0.0.2. Here is part of my server's config:

        dev tap0
        server-bridge 10.1.0.1 255.255.255.0 10.1.0.50 10.1.0.100
        push "route 10.0.0.0 255.255.255.0"   ; now clients can ping 10.0.0.2
        push "redirect-gateway def1 bypass-dhcp"
        push "dhcp-option DNS 10.1.0.1"
        push "dhcp-option WINS 10.1.0.1"

    In fact I want something like:

        push "redirect-gateway 10.0.0.2"

    How can I achieve this?
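
    redirect-gateway cannot point clients at an arbitrary next hop, and 10.0.0.2 is outside the clients' 10.1.0.x subnet anyway, so the redirection has to happen on the server: keep pushing redirect-gateway so client traffic lands there, then policy-route that traffic via the tun2socks box. A sketch with assumptions (the bridge interface is called br0, routing table 200 is unused, and 10.0.0.2 may additionally expect NATed sources):

        sysctl -w net.ipv4.ip_forward=1

        # packets arriving from the VPN bridge use their own routing table,
        # whose default route is the tun2socks router instead of the server's normal gateway
        ip rule add iif br0 table 200
        ip route add default via 10.0.0.2 table 200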

  • Apache2 WebServer not allowing me to view website/files in /var/www

    - by CitadelCSAlum
    I used to be able to access websites/files that were stored in the directory /var/www. I have not used this for a while, but now I need to store media in this directory or in /var/www/images. I noticed that my Apache web server wasn't running correctly, so I did a complete package removal and then reinstalled, but I am still unable to access a test page at /var/www/index.html by going to http://myipaddresshere/index.html. Is there some initial configuration I need to do to allow me to store HTML and media files in this directory and be able to access them from the browser? I don't remember having to do anything before.
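
    A few quick checks for a stock Debian/Ubuntu-style Apache 2 install (a sketch; the service name, paths and log locations are the usual defaults and may differ):

        sudo service apache2 status           # is the server actually running?
        apache2ctl -S                         # which virtual hosts / DocumentRoot are configured
        ls -l /var/www/index.html             # does the file exist and is it world-readable?
        sudo tail -n 20 /var/log/apache2/error.log
        curl -I http://localhost/index.html   # what status code does Apache itself return?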

  • How do I reduce RAM usage on my server?

    - by Abs
    I have recently launched a site that is very popular, but I am having trouble with scalability. My site makes heavy use of FFmpeg, and at peak times RAM usage quickly hits the 2 GB mark and the swap file starts getting used. CPU usage starts rising too. Users complain that the site is slow; this is because all the FFmpeg instances run very slowly due to the number running at the same time. Users make use of FFmpeg on my server in real time. Is there anything I can consider or do to keep the server's RAM usage from shooting up? Maybe there is something better than FFmpeg(!). Is the only solution "throwing some cash" at a more powerful server? I have given little information; please ask for more so this problem can be solved.
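
    Short of more hardware, the usual levers are lowering each transcode's priority and capping how many run at once so they queue instead of fighting over RAM. A rough sketch (the ffmpeg arguments, the cap of 4 and the variables are placeholders; sem is part of GNU parallel and would need to be installed):

        # run each job at low CPU/IO priority
        nice -n 15 ionice -c2 -n7 ffmpeg -i "$input" -c:v libx264 -preset veryfast "$output"

        # allow at most 4 concurrent transcodes; extra jobs queue up instead of competing for memory
        sem --id ffmpeg-queue -j 4 nice -n 15 ffmpeg -i "$input" -c:v libx264 -preset veryfast "$output"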

  • ZFS & Deduplicating FLAC Data

    - by jasongullickson
    I'm experimenting with using ZFS to deduplicate a large library of FLAC files. The purpose of this is twofold: Reduce storage utilization Reduce bandwidth needed to sync the library with cloud storage Many of these files are of the same music tracks but from different physical media. This means that for the most part they are the same and usually close to the same size, which makes me think that they should benefit from block-level deduplication. However in my testing I'm not seeing good results. When I create a pool and add three of these tracks (identical songs from different source media) zpool list reports 1.00 dedupe. If I copy all of the files (make exact duplicates of the three) dedupe climbs, so I know that it is enabled and functioning, but it's not finding any duplication in the original collection of files. My first thought was that perhaps some of the variable header data (metadata tags, etc.) might be mis-aligning the bulk of the data in these files (the audio frames) but even making the header data consistent across the three files doesn't seem to have any impact on deduplication. I'm considering taking alternate routes (testing other dedupe filesystems as well as some custom code) but since we're already using ZFS and I like the ZFS replication options, I'd prefer to use ZFS dedupe for this project; but perhaps it's simply not capable of working well with this sort of data. Any feedback regarding tuning that might improve dedupe performance for this sort of dataset, or confirmation that ZFS dedupe is not the right tool for this job are appreciated.
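
    The header-alignment hunch is very likely the whole story: ZFS dedup works on whole records (recordsize, 128K by default), so if two FLAC files' headers differ in length at all, every audio byte after them sits at a different offset relative to the record boundaries and no two records hash the same; making the tags consistent only helps if the headers end up byte-for-byte the same length. Two checks worth running ('tank' is a placeholder pool name):

        # simulate dedup across the existing data and print the would-be dedup ratio (DDT histogram)
        zdb -S tank

        # confirm the dataset's recordsize and that dedup is actually enabled
        zfs get recordsize,dedup tank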

  • How do I minimize Evolution to the system tray in Ubuntu?

    - by Jephir
    In Ubuntu some applications can be set to minimize instead of exit on close. For example, Empathy minimizes to the system tray (mail icon) when the close button is pressed in the application window. How do I make Evolution do this as well? Essentially I would like to have Evolution hidden in the system tray instead of having to re-launch it every ten minutes to check for new messages (or leave it open and clutter the taskbar).
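
    As far as I know, the Evolution builds Ubuntu shipped at the time have no built-in minimize-to-tray option, so the generic workaround is a docking wrapper; one hedged example is the third-party alltray utility (assuming it is available in the repositories):

        sudo apt-get install alltray
        alltray evolution    # starts Evolution; closing the window now docks it to the notification area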

  • Proper way to configure ~/.Xsession with a standalone window manager to gracefully end a session

    - by cYrus
    I'm using xdm and my ~/.Xsession looks like this:

        # <initialization stuff here>
        exec openbox

    It works, but I've noticed that when I log out, Openbox doesn't gracefully kill all the applications. In particular, Google Chrome complains about that. How can I make sure to wait for all processes to exit (just like other setups: GNOME, KDE, Windows...)? The only (ugly) solution that I've found involves sleep and kill in ~/.Xsession.
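
    One workable pattern (a sketch, assuming wmctrl is installed and that logout happens through an Openbox menu entry or key binding pointed at this script): close the clients gracefully while the window manager is still running, and only then tell Openbox to exit, at which point the exec'd openbox in ~/.Xsession returns and xdm ends the session.

        #!/bin/sh
        # ~/bin/logout.sh (hypothetical path) - bound to the Openbox "Exit" menu entry / key binding
        for id in $(wmctrl -l | awk '{print $1}'); do
            wmctrl -i -c "$id"      # politely ask each window to close, like clicking its close button
        done
        sleep 5                     # grace period so clients such as Chrome can save their state
        openbox --exit              # end the Openbox instance; the session then terminates normally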

  • Screen a running process

    - by LiraNuna
    Sometimes I forget to run a program under a screen session and can't stop it in the middle, and I know it's going to take long. Is there a way to screen an already running process without restarting it?
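
    screen itself cannot adopt a process that is already running, but the reptyr tool can re-attach a running process to the current terminal from inside a new screen session. A hedged sketch (the PID is a placeholder, and on kernels with Yama enabled reptyr may need ptrace_scope relaxed):

        screen                      # open a screen session first
        reptyr 12345                # inside it, grab the running process by PID

        # only if reptyr reports a ptrace permission error:
        echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope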

  • How to format a pendrive from FAT32 to ext3 in Windows 7

    - by newb
    I am trying to make a live USB of Ophcrack and tried to boot from a FAT32 pendrive. But after making the live USB and booting from it, Ophcrack didn't work. After searching for a while I came to understand that Ophcrack will not work from a FAT32 pendrive and that it has to be converted to ext3. But I am having a hard time finding a method or software that can be used to convert a FAT32 pendrive to ext3 in Windows 7. Can you suggest any method or software for this purpose?

  • URL rewrite rule

    - by vvr
    How do I redirect a page from show.php?id=(15charstring) to show/(15charstring)? I tried the following, but it is doing the reverse: it redirects /show/(15chars) to show.php?id=(15chars).

        RewriteEngine on
        RewriteRule ^/show/([a-zA-Z0-9]{15})$ http://site.com/show.php?id=$1

    The second case is that I have to redirect to another page if &m=true is added to the URL: show.php?id=(15chars)&m=true should go to html/show.php?id=(15chars).
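
    A common way to get the direction the question asks for (a sketch for an .htaccess in the document root; everything beyond show.php, the 15-character id and the m=true flag from the question is an assumption) is to externally redirect the old show.php URLs to the pretty form, matching on THE_REQUEST so the companion internal rewrite doesn't cause a loop:

        RewriteEngine On

        # a request for /show.php?id=XXXXXXXXXXXXXXX is redirected to the pretty URL /show/XXXXXXXXXXXXXXX
        RewriteCond %{THE_REQUEST} \s/show\.php\?id=([a-zA-Z0-9]{15})\s
        RewriteRule ^show\.php$ /show/%1? [R=301,L]

        # the pretty URL is mapped back onto the real script internally (no visible redirect)
        RewriteRule ^show/([a-zA-Z0-9]{15})$ show.php?id=$1 [L,QSA]

    The &m=true case can be handled the same way by putting RewriteCond %{QUERY_STRING} (^|&)m=true(&|$) in front of a second rule whose target is html/show.php instead of show.php.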

  • iptables, forward traffic for ip not active on the host itself

    - by gucki
    I have a KVM guest whose network card is connected to the host using a tap device. The tap device is part of a bridge on the host, together with eth0, so the guest can access the public network. So far everything works: the guest can access the public network and can be accessed from it. Now, the KVM process on the host provides a VNC server for the guest which listens on 127.0.0.1:5901 on the host. Is there any way to make this VNC server accessible via the IP address the guest is using (e.g. 192.168.0.249), without interrupting the guest's use of the same IP (port 5901 is not used by the guest)? It should also work when the guest is not using any IP address at all. So basically I just want to fake that IP xx is on the host and only answer/forward traffic to port 5901 to the host itself. I tried using this NAT rule on the host, but it doesn't work (IP forwarding is enabled on the host):

        iptables -t nat -A PREROUTING -p tcp --dst 192.168.0.249 --dport 5901 -j DNAT --to-destination 127.0.0.1:5901

    I assume this is because the IP 192.168.0.249 is not bound to any interface, so ARP requests for it don't get answered and no packets for this IP arrive at the host. How can I make it work? :)
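
    The ARP guess is almost certainly right, and there are really two obstacles: something must answer ARP for the address (which collides with the guest whenever the guest uses it too), and DNAT to 127.0.0.1 only works once route_localnet is enabled. A hedged sketch that sidesteps the ARP problem by exposing the VNC socket on the host's own bridge address instead (br0 and 192.168.0.10 are placeholder names):

        # allow DNAT targets in 127.0.0.0/8 for traffic arriving on the bridge (kernel >= 3.6)
        sysctl -w net.ipv4.conf.br0.route_localnet=1

        # forward connections to the host's bridge IP, port 5901, to the loopback-only VNC server
        iptables -t nat -A PREROUTING -i br0 -p tcp -d 192.168.0.10 --dport 5901 \
                 -j DNAT --to-destination 127.0.0.1:5901

    If it really has to be the guest's address, the host would additionally have to answer ARP for 192.168.0.249 (e.g. via proxy ARP), which conflicts with the guest's own use of that IP.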

  • Permissions issue on Fedora with separate home partition

    - by Tres
    I am running Fedora 12 and I've set up a partition, separate from my root partition, to keep shared files and home directories. Now I've been having permission issues where it says the user cannot chdir into their home directory (/files/home/*). I fixed this originally by chmodding / to 0755 and the home directories also to 0755, and yes, each user is the owner:group of their home directory. Now get this: I didn't change a thing, rebooted, and everything still worked. Great, right? I boot the server up a day later, and now it's the same old issue. This is a home server that wasn't on at all at any point between the working state and the non-working state. Also, nothing else was modified. Any ideas? Thanks!
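
    On Fedora, home directories that live outside /home are a classic SELinux tripwire: /files/home doesn't carry the labels /home gets, and a relabel or boot-time policy change can flip things from working to broken without any file being touched. A hedged check and fix, assuming SELinux is enforcing and the policycoreutils tools are installed ('someuser' is a placeholder):

        getenforce                                    # Enforcing?
        ls -Zd /files/home /files/home/someuser       # compare labels with those under /home

        # tell SELinux that /files/home should be labelled like /home, then relabel it
        semanage fcontext -a -e /home /files/home
        restorecon -Rv /files/home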

  • How can I redirect HTTP(S) traffic to another gateway?

    - by PsyStyle
    I have a network like 192.168.0.0/15 with the default gateway set to 192.168.0.1. All the workstations on the network use this gateway for all kinds of access to the Internet. Now I am testing a new Internet connection with another provider, and for this I am using a second gateway on the same subnet with IP address 192.168.0.2. I want to redirect only HTTP and HTTPS traffic to this second gateway without touching the default gateway address configured on every workstation. How can I accomplish this? What do I have to change in the first gateway's firewall configuration or routes? I tried a DNAT like

        DNAT loc:192.168.0.1 loc:192.168.0.2 tcp 80

    but nothing worked. I use Shorewall for simplicity of configuration, but I can understand even theoretical answers, which I will try to adapt to my case.
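
    DNAT rewrites destination addresses, which isn't what's needed here; the usual tool is policy routing on the first gateway: mark port 80/443 traffic coming in from the LAN and give marked packets their own routing table whose default route is 192.168.0.2. A sketch in plain iptables/iproute2 terms (the LAN interface name and table number are assumptions; Shorewall can express the same idea through its tcrules/mangle and providers files):

        # mark web traffic arriving from the LAN side
        iptables -t mangle -A PREROUTING -i eth1 -p tcp -m multiport --dports 80,443 -j MARK --set-mark 1

        # route marked packets through a dedicated table whose gateway is the second provider
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.0.2 table 100
        ip route flush cache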

  • Kernel compiling with -j2+ parameter ends prematurely with no error message or output bzImage

    - by Minix
    I noticed quite a while ago that compiling a kernel with the -j parameter set to 2 or more doesn't produce a bzImage. Instead, the build ends prematurely without any notice. I have reproduced the same behavior on both my netbook and my home server. As far as I can tell, the point where the compilation stops is random: compiling twice with the same parameters will probably stop at different files. However, when I run make with no -j* parameter, the compilation finishes just fine and outputs a working bzImage. Both machines run Intel Atom CPUs (N270 on the netbook and 330 on the server) and I've compiled for these processors. If I recall correctly, I've tried compiling both with Atom and with generic x86_64 options. The kernel version I'm building is 2.6.34.1. I've always compiled fine with those options on my Core2Duo and Pentium Dual Core machines. Has anyone experienced this issue? Any ideas why this happens? Is there a fix or workaround?
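
    A build that dies silently under -jN but completes serially usually points at the machine rather than the tree, most often the OOM killer reaping a compiler process, or the real error scrolling past unnoticed. Some hedged diagnostics (the -j2 value and the log name are placeholders; PIPESTATUS assumes bash):

        # keep the full build output and make's exit status
        make -j2 bzImage modules 2>&1 | tee build.log
        echo "make exited with ${PIPESTATUS[0]}"

        # look for the first real error, and for evidence of the OOM killer
        grep -n -m5 -i 'error' build.log
        dmesg | grep -iE 'out of memory|killed process'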

  • I don't understand why [closed]

    - by gcc
    I don't understand why they voted down my question; I couldn't understand them. They know nothing and don't try to solve it, but they still vote it down. If you know nothing, don't mess things up or try to ruin it.
