Search Results

Search found 26947 results on 1078 pages for 'util linux'.

Page 407/1078 | < Previous Page | 403 404 405 406 407 408 409 410 411 412 413 414  | Next Page >

  • Extract part of an image from a big image

    - by rajat
    I have 6 images, and each image has a certain section that I want to save as a separate image. It has to be accurate, because I am doing some animation using the sub-images, so they have to line up exactly. I want to extract exactly the same region from each of the 6 images. I can't do it in an image editor where I have to drag out the bounding box myself, because that won't be accurate. Is there any program that lets me do this by defining the box with numerical values? PS: I don't want to write a MATLAB or OpenCV program for this.
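
    ImageMagick's convert can crop by exact pixel geometry from the command line. A minimal sketch, assuming the wanted region is 200x100 pixels with its top-left corner at (50,75); the geometry and file names are placeholders:

        for i in 1 2 3 4 5 6; do
            convert "image$i.png" -crop 200x100+50+75 +repage "sub$i.png"
        done

    The +repage strips the crop offset from the result, so the sub-images align cleanly when used as animation frames.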

    Read the article

  • Apache worker is crashing after 3,000 users

    - by user1618606
    I activated the Apache worker MPM on my VPS and I'm having problems: the website crashes when 3,000 users are accessing it. I'm using http://whos.amung.us/stats/2jzwlvbhvpft/ as a counter. My worker configuration:

        KeepAlive On
        MaxKeepAliveRequests 0
        KeepAliveTimeout 1
        <IfModule mpm_worker_module>
            ServerLimit          20000
            StartServers         8000
            MinSpareThreads      10400
            MaxSpareThreads      14200
            ThreadLimit          5
            ThreadsPerChild      5
            MaxClients           20000
            MaxRequestsPerChild  0
        </IfModule>

    The VPS runs 64-bit Debian with a LAMP stack and has 14 GB of memory and 24 GHz of CPU. What could I do to get better performance?
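
    For scale: worker's MaxClients can be at most ServerLimit x ThreadsPerChild, and StartServers 8000 asks Apache to fork 8,000 children at startup, which alone can exhaust 14 GB. A more conventional worker layout for a few thousand concurrent connections might look like the sketch below; the values are assumptions to tune against available RAM, not drop-in settings:

        <IfModule mpm_worker_module>
            ServerLimit          64
            StartServers         4
            MinSpareThreads      64
            MaxSpareThreads      192
            ThreadLimit          64
            ThreadsPerChild      64
            # MaxClients <= ServerLimit * ThreadsPerChild (64 * 64 = 4096)
            MaxClients           4096
            MaxRequestsPerChild  10000
        </IfModule>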

    Read the article

  • Find a kernel config option in menuconfig

    - by puchu
    I upgraded my gentoo-sources today to 3.3.8, and now I am looking at the diff between the old kernel's defconfig and the old kernel's .config: there are about 20 changes. I want to apply these changes manually in the new kernel's menuconfig. Where can I find a tool like this:

        menuconfig-find -v 3.3.8-gentoo CONFIG_KVM_AMD
        >> Virtualization
        >>   -> Kernel-based Virtual Machine (KVM) support
        >>   -> KVM for AMD processors support
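
    menuconfig can already answer the "where does this option live" question: pressing "/" inside make menuconfig and typing the symbol name (KVM_AMD, without the CONFIG_ prefix) prints its location in the menu tree along with its dependencies. For applying the changes non-interactively, the kernel source also ships a small helper; a sketch:

        cd /usr/src/linux-3.3.8-gentoo
        ./scripts/config --enable KVM_AMD    # writes CONFIG_KVM_AMD=y into .config
        make oldconfig                       # let kconfig resolve any dependencies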

    Read the article

  • 2 Printers 1 Queue

    - by Shazburg
    My issue: when an order is processed, the same document needs to be printed on two printers.
    My proposed solution: create a single queue in CUPS with a backend script that hands the job to the two real printer queues.
    My problem: documentation. Maybe I'm looking at every ring around the bullseye, but I can't find anything that lays out the rules for writing a CUPS backend script. In the end, I have several questions:
    1. Is there already an option to do this in CUPS that I've missed?
    2. The line I use to add my queue is "lpadmin -p MultiPass -E -v multipass -P Generic PostScript Printer", but the DeviceURI is rejected unless I specify a directory, like "-v multipass:/tmp". Why is this?
    3. For testing, my script does nothing but capture ARGV and write it out to a text file, one line per argument. The problem is, I'm getting nothing: the logs show the job as successful, but I'm pretty sure my meager attempt at a backend isn't even being run.
    I've tried to keep this question brief, so please ask for more info, as I'm sure I've left out the most important part in all this. Honestly, I'm just done chasing my own tail. Thank you for your time.
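
    For reference, the backend contract is small: CUPS runs the executable from /usr/lib/cups/backend/ with the arguments job-id user title copies options [filename], passing the document on stdin when no filename is given, and invokes it with no arguments for device discovery. Note also that CUPS refuses to execute a backend that is not owned by root or is group/world writable, and it does so quietly, which matches the "never actually runs" symptom. A minimal fan-out sketch, assuming the two real queues are named printer1 and printer2:

        #!/bin/sh
        # /usr/lib/cups/backend/multipass - send each job to two real queues.
        if [ $# -eq 0 ]; then
            # No arguments: CUPS is asking for device discovery.
            echo "direct multipass \"Unknown\" \"Fan-out to two queues\""
            exit 0
        fi
        # $6, if present, is the spool file; otherwise the job is on stdin.
        job=$(mktemp /tmp/multipass.XXXXXX)
        if [ -n "$6" ]; then cat "$6" > "$job"; else cat > "$job"; fi
        lp -d printer1 "$job"
        lp -d printer2 "$job"
        rm -f "$job"
        exit 0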

    Read the article

  • Does anyone know why rsync would keep sending the files over and over again?

    - by beagleguy
    I'm trying to use rsync to back up some files, about half a TB. It's now in a state where it keeps sending the same files every time it runs. For example:

        rsync -av /data/source/* user@host:/data/dest
        sending incremental file list
        source/file1.txt
        source/file2.txt

    I then verify those files were copied over... and the next time it runs, it does the same thing:

        rsync -av /data/source/* user@host:/data/dest
        sending incremental file list
        source/file1.txt
        source/file2.txt

    Any idea why it's getting stuck on these files? I've tried wiping the whole dest directory and starting over, but no luck. Thanks.
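
    rsync can report why it considers each file changed, which usually pinpoints the cause (often timestamps or ownership that the destination cannot store). A diagnostic sketch using the same paths:

        # -i (itemize) prints a flags string per file: t = mtime differs,
        # s = size, o = owner, g = group; -n makes it a dry run.
        rsync -avin /data/source/* user@host:/data/dest
        # If the destination filesystem cannot store the timestamps exactly
        # (FAT, some NAS exports), a 1-second tolerance often stops the resends:
        rsync -av --modify-window=1 /data/source/* user@host:/data/dest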

    Read the article

  • CentOS PHP sessions not working even though the PHP info page says they are

    - by Blake
    I have PHP installed properly from the Remi repo on CentOS 6 (64-bit). The PHP information page shows sessions as installed and working, yet I get this error:

        Fatal error: Call to undefined function session_create() in /var/www/lighttpd/index.php on line 1

    I've tried multiple reinstalls and different PHP RPMs, yet nothing will get sessions going. How can I get PHP sessions working?
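
    Worth checking before another reinstall: session_create() is not part of PHP's session API at all; the standard call is session_start(), so this exact fatal error appears even when the extension is perfectly healthy. A quick check from the shell, as a sketch:

        php -r 'var_dump(function_exists("session_create"), function_exists("session_start"));'
        # Expected on a working install: bool(false) bool(true)
        # i.e. the script should be calling session_start() instead.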

    Read the article

  • Changing PATH to the basedir of MySQL

    - by shantanuo
    Whenever I need to start mysql from the command line, I have to cd to the base directory and then run the mysql command, as shown below:

        # cd /home/ec2-user/percona-5.5.30-tokudb-7.0.1-fedora-x86_64/
        # ./bin/mysql
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 3
        mysql>

    How do I start mysql simply by typing "mysql" at the command prompt? I tried to export the path, but it did not work:

        export path=$PATH:/home/ec2-user/percona-5.5.30-tokudb-7.0.1-fedora-x86_64/bin/
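
    Shell variable names are case-sensitive, so exporting "path" has no effect on command lookup; only PATH does. A sketch of the fix, persisted for future logins:

        export PATH=$PATH:/home/ec2-user/percona-5.5.30-tokudb-7.0.1-fedora-x86_64/bin
        echo 'export PATH=$PATH:/home/ec2-user/percona-5.5.30-tokudb-7.0.1-fedora-x86_64/bin' >> ~/.bashrc
        hash -r    # drop bash's cached command locations, then try: mysql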

    Read the article

  • Initrd and Initramfs

    - by nitins
    I have read about the differences between the two on Stack Overflow, but I am still finding it difficult to understand tmpfs and the real advantages of initramfs over initrd. On RHEL 5 and Ubuntu 12.04 I have only initrd files in /boot, while RHEL 6 has both initrd and initramfs files. Does that mean only RHEL 6 has implemented initramfs, and that an initrd image is still there as well?
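
    The file name and the on-disk format are separate questions: on 2.6+ kernels the boot image is normally already an initramfs (a gzipped cpio archive) even when it is named initrd-*.img; RHEL 6 largely just renamed the file to match reality. A quick check, as a sketch:

        file /boot/initrd-$(uname -r).img 2>/dev/null
        file /boot/initramfs-$(uname -r).img 2>/dev/null
        # Unlike an initrd disk image, an initramfs needs no loop mount:
        zcat /boot/initramfs-$(uname -r).img | cpio -t | head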

    Read the article

  • How do I change the .bash_history file location?

    - by Brian Graham
    I'm running CentOS 6.x and want to move .bash_history to a different location. Because I run a VPS, the home directories of my users are in /var/www/vhost/<domain>.<tld>, which is FTP-accessible (and it should be). For that reason I have already moved the AuthorizedKeysFile for SSH connections out of the normal ~/.ssh/authorized_keys, since FTP connections would easily be able to locate the keys. In the same way, I want to move the .bash_history file to /home/%u/.bash_history, where %u is the current user.
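
    bash takes the history location from the HISTFILE variable, so a system-wide profile snippet can redirect it per user. A sketch, assuming login shells source /etc/profile.d/*.sh (standard on CentOS); $USER plays the role of the %u placeholder:

        # /etc/profile.d/histfile.sh
        export HISTFILE="/home/$USER/.bash_history"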

    Read the article

  • How can I set deadline as the I/O scheduler for USB Flash devices by using udev rules?

    - by ????
    I have set CFQ as the default I/O scheduler, and I often get bad performance when I write data to a flash device. The problem goes away if I use deadline as the I/O scheduler for USB flash devices, but I can't always change the scheduler manually, right? I think writing udev rules is a good idea. Can someone please write the rules for me? What I want:
    1. When I plug in a USB device, detect the type of the device.
    2. If it is a portable USB hard disk, do nothing (I think if a device has more than one partition, it is always a portable hard disk).
    3. If it is a USB flash device, set deadline as its scheduler.
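
    A sketch of such a rule, untested; note that instead of the partition-count heuristic above it keys on the kernel's "removable" attribute, since flash sticks normally report removable=1 while most portable hard disks report removable=0:

        # /etc/udev/rules.d/60-usb-flash-scheduler.rules
        ACTION=="add|change", KERNEL=="sd[a-z]", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", ATTR{removable}=="1", ATTR{queue/scheduler}="deadline"

    After "udevadm control --reload-rules" and replugging the device, "cat /sys/block/sdX/queue/scheduler" should show deadline in brackets.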

    Read the article

  • VLC RTP Streaming in FC12

    - by Matt D
    I'm trying to get VLC to stream RTP audio/video over my office network; the goal is multicast A/V streaming. In all test cases we are streaming from VLC to VLC. I am able to stream from Windows to Windows and from Fedora to Windows, but not from Windows to Fedora. Additionally, I am unable to receive a local stream from one instance of VLC to another within Fedora, and I don't see any reason why that would be. The buffer indicator (where the elapsed/total time is normally displayed) never shows any connectivity, so it would appear to be a network problem, but since I am able to stream from Fedora to Windows (same IP, same port) I thought it would be something else. Does anyone know of a solution to this issue?
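
    One pattern that fits "sending works, receiving doesn't" is Fedora's default firewall silently dropping inbound UDP. A quick test, as a sketch that assumes the stream arrives on RTP's conventional port 5004:

        iptables -I INPUT -p udp --dport 5004 -j ACCEPT   # temporary, for testing
        # If reception starts working, persist the rule via
        # system-config-firewall or /etc/sysconfig/iptables.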

    Read the article

  • s3fs changing s3 permissions?

    - by magd1
    My developer believes that s3fs is changing my bucket's permissions. Is this possible? I want my bucket to be public, but it keeps reverting to private. Here's my fstab entry:

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000 0 0

    My developer mentioned the '-o default_acl (default="private")' option. The documentation refers to a "canned ACL", but I don't understand what that is.
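
    A "canned ACL" is one of S3's predefined access policies (private, public-read, public-read-write, authenticated-read, and so on). s3fs applies the ACL named by default_acl to every object it writes, and that defaults to private, which would make files "revert" each time they are rewritten through the mount. A sketch of the fstab line with a public-read default:

        s3fs#production /mnt/production fuse use_cache=/tmp,use_rrs=1,allow_other,uid=1000,gid=1000,default_acl=public-read 0 0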

    Read the article

  • dnsmasq Client TTL

    - by user548971
    I have a situation where my hosts file is constantly changing, so I don't want clients to cache IP addresses resolved from it. Here is the command that starts dnsmasq for me:

        /usr/sbin/dnsmasq -K -R -y -Z -b -E -S 8.8.8.8 -l /tmp/dhcp.leases -r /tmp/resolv.conf.auto --stop-dns-rebind --rebind-localhost-ok --dhcp-range=lan,192.168.2.2,192.168.2.249,255.255.255.0,12h -2 eth0

    The manual at http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq-man.html describes the -T option as follows:

        -T, --local-ttl=<time>
        When replying with information from /etc/hosts or the DHCP leases
        file, dnsmasq by default sets the time-to-live field to zero, meaning
        that the requester should not itself cache the information. This is
        the correct thing to do in almost all situations. This option allows
        a time-to-live (in seconds) to be given for these replies. This will
        reduce the load on the server at the expense of clients using stale
        data under some circumstances.

    My command doesn't use the -T option. Do I need it, or does dnsmasq default the TTL to zero without it?
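
    Going by that man page text, zero already is the default and -T exists only to raise it, so the command needs no change. A way to confirm from a client, as a sketch (assumes the router answers DNS on 192.168.2.1 and "myhost" is in its hosts file):

        dig @192.168.2.1 myhost +noall +answer
        # The second field of the answer line is the TTL; it should print 0.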

    Read the article

  • Lost Page Write I/O Errors on CentOS LVM setup

    - by Gregg Leventhal
    I have a CentOS 6 box with an LVM setup in which one of the PVs is a USB disk (I know). It is getting this error:

        Oct 30 10:57:07 alpha01 kernel: lost page write due to I/O error on dm-3
        Oct 30 10:57:07 alpha01 kernel: Buffer I/O error on device dm-3, logical block 4

    which is causing problems with all of the LVs on it. pvs shows the PV as "unknown device". I can ls into the logical volumes and they show up in lvdisplay, but first I get a bunch of I/O errors. I made sure the cables between the USB drive and the box are secure. What should I do to get this back up and running for the meanwhile? Should I unmount each LV and run fsck.ext4 on each one, like this?

        fsck.ext4 -y /dev/vg1/lv_logvolname
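
    A cautious order of operations, as a sketch (it assumes the volume group is named vg1, matching the command above): stop writes through the failing path first, make sure the kernel sees the PV again, then check each filesystem before remounting.

        umount /dev/vg1/lv_logvolname     # repeat for each mounted LV
        vgchange -an vg1                  # deactivate the volume group
        pvscan                            # rescan so the PV is no longer "unknown device"
        vgchange -ay vg1                  # reactivate
        fsck.ext4 -fy /dev/vg1/lv_logvolname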

    Read the article

  • How to set which IP to use for a HTTP request?

    - by GetFree
    This is probably a silly question. I'm making some HTTP requests with wget from the command line, and I want those connections to be made through one specific IP of the 4 IPs my server has. The requests go to one specific range of IPs, so only those should be routed differently. The 4 interfaces on my server are eth0, eth0:0, eth0:1 and eth0:2. I tried the following command:

        route add -net 192.164.10.0/24 dev eth0:0

    But the routing table then says:

        Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
        192.164.10.0    0.0.0.0         255.255.255.0   U         0 0          0 eth0

    The interface is set to eth0, not eth0:0 as my command says. What am I doing wrong?
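
    eth0:0 is only an address alias on eth0, so the kernel records the route against the real device; what actually needs choosing is the source address. Two possibilities, as a sketch (10.0.0.2 stands in for the alias's address, which isn't given above):

        # Pin the preferred source address for that destination range:
        ip route replace 192.164.10.0/24 dev eth0 src 10.0.0.2
        # Or have wget itself bind to the desired local address:
        wget --bind-address=10.0.0.2 http://192.164.10.5/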

    Read the article

  • Fedora Core 11 won't boot without a monitor

    - by feihtthief
    I have a P4 system on which I installed Fedora 11. It will not boot without a monitor attached. The monitor can be off (it doesn't even need power), but it must be attached. Without a monitor, the hard disk thrashes around a bit as if it's starting services, but the machine never gets to the point where I can ssh into it. I have set the default runlevel to 3 and removed the rhgb entry from grub. Any suggestions welcome. Edit: I have already set the runlevel to 3. The machine boots up fine with the monitor plugged in, to the point where I can SSH into it. As soon as I unplug the monitor and reboot, it will not boot to that point.

    Read the article

  • How do I forward root's email to an external email address?

    - by ErebusBat
    I have a small server (Ubuntu 10.04) at my house, and I would like to forward root's email to my Gmail-hosted domain to get security notifications and whatnot. I ripped everything out, started from scratch, and ran into some other issues. I now have sendmail working in the sense that I can mail [email protected] and receive the mail. However, adding an address to /root/.forward does not actually forward the message. I get the following in my logs:

        Dec 22 14:04:37 batcave sendmail[4695]: oBML4bAT004695: to=<root@batcave>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30075, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBML4bJ9004696 Message accepted for delivery)
        Dec 22 14:04:39 batcave sm-mta[4698]: STARTTLS=client, relay=[69.145.248.18], version=TLSv1/SSLv3, verify=FAIL, cipher=DES-CBC3-SHA, bits=168/168
        Dec 22 14:04:40 batcave sm-mta[4698]: oBML4bJ9004696: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:00:03, xdelay=00:00:03, mailer=relay, pri=120336, relay=[69.145.248.18] [69.145.248.18], dsn=2.0.0, stat=Sent (OK 01/D4-00853-216621D4)

    You can see where my local sendmail instance accepts it and then hands it off to my ISP, but with the wrong address ([email protected]).
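
    For root's mail, the conventional mechanism is the system alias table rather than /root/.forward; sendmail is picky about .forward files (ownership and permissions of both the file and the home directory), while /etc/aliases avoids all of that. A sketch, with a placeholder standing in for the real Gmail-hosted address:

        echo 'root: admin@example.com' >> /etc/aliases
        newaliases                                # rebuild the alias database
        echo test | mail -s "alias test" root     # should arrive at the alias target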

    Read the article

  • LPR command won't recognize CUPS printer

    - by Datapimp23
    I have a CUPS server with one shared printer configured on it, and it prints test pages without problems:

        printername (Idle, Accepting Jobs, Shared)
        Description: desc
        Location:
        Driver: Zebra ZPL Label Printer (grayscale, 2-sided printing)
        Connection: socket://172.20.50.26
        Defaults: job-sheets=none, none media=oe_w288h432_4x6in sides=one-sided

    This is the output from lpstat -t; it shows that the printer is idle and accepting requests:

        admin@SERVER:~$ lpstat -t
        scheduler is running
        no system default destination
        device for printername: socket://172.20.50.26
        printername accepting requests since Thu 26 Jan 2012 01:29:35 PM CET
        printer printername is idle.  enabled since Thu 26 Jan 2012 01:29:35 PM CET

    But when I send a print job to it via an lpr command, it won't recognize the printer:

        /usr/bin/lpr -P printername test.pdf

    Result:

        lpr: ttn_seg_zebra1: unknown printer

    What am I missing here?
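
    One detail stands out: the error names ttn_seg_zebra1, not the queue passed with -P, which suggests this lpr binary is not the CUPS one, or that a different default is configured somewhere. A few checks, as a sketch:

        which lpr                     # is this CUPS's lpr or another spooler's (e.g. LPRng)?
        lpstat -v printername         # can this client see the queue at all?
        cat /etc/cups/client.conf 2>/dev/null   # is ServerName pointing at the right host?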

    Read the article

  • How to automatically set default quota limits for users on an XFS filesystem when a new account is created

    - by acidburn2k
    I guess the title explains the problem pretty well. Do you have an idea for a mechanism that will automatically assign default quota values to every new account created (sort of like the skel scheme, but for quotas)? I am looking for a generic, clean solution, not ugly cron-based scripts or wrapper scripts around user creation. I would also like to avoid unmaintained external stuff (like forgotten PAM modules). Anything that leads to overhead and extra work in the future isn't really a solution, nor is checking for new accounts every minute.
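
    XFS itself has a feature that fits: its quota subsystem can store a default limit that applies to every user without an explicit quota, so new accounts are covered with no per-account step, no cron, and no wrapper. A sketch, assuming /home is the XFS filesystem and is mounted with the usrquota option; the limit values are placeholders:

        xfs_quota -x -c 'limit -u bsoft=4g bhard=5g -d' /home    # set the per-user default
        xfs_quota -x -c 'report -u -h' /home                     # verify what users inherit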

    Read the article

  • Cannot boot from Yumi multiboot USB stick

    - by Amator
    I've just created a multiboot USB stick using YUMI. I tried to boot my notebook (an Asus K70IO) from it, but all I see is a black screen with a blinking underscore, even after waiting for minutes. If I remove the USB stick during this time, I get the message "Operating system load error". How do I properly boot my YUMI USB stick and use it? I tried formatting the stick as FAT32 using YUMI's format checkbox, but it didn't help. I then tried SARDU 2.0.5 and hit the same problem: a black screen and blinking underscore, and if I remove the stick I see "Operating system load error" and my OS starts to boot. At the same time, a bootable USB stick created from an ISO with UltraISO boots smoothly.

    Read the article

  • After installing Ubuntu, how do I get rid of Unity and go back to GNOME?

    - by aseq
    After installing the newest Ubuntu LTS release (12.04, still in beta), I am greeted with an unfamiliar and hard-to-use desktop environment; I believe it is called Unity. I have used GNOME for a decade and a half, and I would rather not move to this new and (for me) unusable environment. What is a quick and easy way to remove most of Unity and bring back GNOME, and to configure my display manager to load GNOME by default, as close as possible to the way it was before?
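
    On 12.04 the classic GNOME desktop lives in its own package; a sketch:

        sudo apt-get install gnome-session-fallback   # the "GNOME Classic" session
        # or, for the full GNOME 3 shell instead:
        sudo apt-get install gnome-shell

    After installation, the session can be picked from the gear icon on the LightDM login screen, and the last session chosen becomes that user's default.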

    Read the article

< Previous Page | 403 404 405 406 407 408 409 410 411 412 413 414  | Next Page >