Search Results

Search found 7595 results on 304 pages for 'dev jadeja'.


  • unable to install new versions of linux on my PC

    - by iamrohitbanga
    I have an MSI-RC410 motherboard with an onboard ATI Radeon Xpress 200 series graphics card. I used OpenSUSE 11.0 on this PC for over a year, but when I tried to upgrade to OpenSUSE 11.1, I managed to install it only to find that several features were missing; for example, /dev/cdrom was gone. The Fedora 11 install DVD did not work either, and when I boot the Ubuntu 9.10 live CD I get dropped to the initramfs command prompt. I am still able to install older versions of these OSes. What is the reason for this behavior? Does it have anything to do with newer Linux kernels not being compatible with my hardware? What should I do to install newer versions of Linux?

    Read the article

  • Differences in memory consumption between two identical D7 sites?

    - by aendrew
    I'm running Drupal on a news site that has a number of different Views blocks on the front page (~5 total, all cached). In trying to reduce the memory footprint of the site, I've checked out the source from SVN to a local development install to try to convert some of those blocks into more optimized code. Here's the weird thing: the Devel module lists memory consumption at 50 MB on the production site (running Nginx, PHP 5.2.17, XCache and Zend Optimizer) but only 14 MB on my development site (running Apache 2, PHP 5.2.13 and XCache). These are nearly identical versions of the same site; frankly, the production site should use even less memory, since I've disabled some of the modules that run on the dev site. Any idea why this might be the case?

    Read the article

  • Streaming video from a point-and-shoot camera that doesn't support it

    - by egasimus
    I have a Canon IXUS 120is (PowerShot SD940), a nice digital camera that's a couple of years old. It records fairly decent video but, alas, can't function as a webcam, and I need to stream video over the web. I've installed CHDK on it, and while it's quite flexible, it doesn't seem to provide a solution to my problem. I suppose the video footage is written to the SD card in real time; is there a hack that would let me monitor the file as it is written and broadcast its contents over the Internet? Perhaps connecting the camera's card slot to my laptop's card reader via SDIO? I'm running Windows, but I'm roughly familiar with Linux; another question has suggested a file-to-/dev/video driver. Do such tools exist?

    Read the article

  • ffmpeg open webcam using YUYV but i want MJPEG

    - by Pavel
    I need ffmpeg to open my webcam (a Logitech C910) in MJPEG mode, because the webcam can deliver ~24 fps using the MJPEG "protocol" but only ~10 fps using YUYV. Can I choose between them on the ffmpeg command line?

        xx@(none) ~ $ v4l2-ctl --list-formats
        ioctl: VIDIOC_ENUM_FMT
            Index       : 0
            Type        : Video Capture
            Pixel Format: 'YUYV'
            Name        : YUV 4:2:2 (YUYV)

            Index       : 1
            Type        : Video Capture
            Pixel Format: 'MJPG' (compressed)
            Name        : MJPEG

    My current command line:

        ffmpeg -y -f alsa -i hw:3,0 -f video4linux2 -r 20 -s 1280x720 -i /dev/video0 -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi

    ffmpeg produces a corrupted H.264 stream when I record from the webcam, but a normal H.264 stream when I record from x11grab. Other codecs (mjpeg, mpeg4) work well with the webcam, but that is another story.
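    A sketch of how the compressed format can be requested directly, assuming an ffmpeg build whose video4linux2 demuxer supports the input_format option (worth verifying against the installed build's documentation):

        ffmpeg -y -f alsa -i hw:3,0 \
            -f video4linux2 -input_format mjpeg -r 20 -s 1280x720 -i /dev/video0 \
            -acodec libfaac -ab 128k -vcodec libx264 /tmp/web.avi

    With -input_format mjpeg, the v4l2 device is opened in MJPG mode, which is what allows the higher frame rate; whether this also fixes the corrupted H.264 output is a separate question.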

    Read the article

  • Why does this package (ppa:ondrej/php5, it's PHP 5.5) break the apache2 installation?

    - by Panique
    Problem: This package (ppa:ondrej/php5) is currently quite popular for installing the latest version of PHP 5.5. I've worked with it quite a lot, and everything ran smoothly on several (dev) servers. But as of today (?) it breaks the apache2 installation: it empties the /etc/apache2/sites-available/default file. This is reproducible.

    Way to reproduce (on a clean 64-bit Ubuntu 12.04 LTS):

        # basic installs
        sudo apt-get update
        sudo apt-get install apache2
        sudo apt-get install php5

    Apache is fine; /etc/apache2/sites-available/default has valid content at this point.

        # getting PHP 5.5.x
        sudo apt-get install python-software-properties    # for add-apt-repository
        sudo add-apt-repository ppa:ondrej/php5
        sudo apt-get update
        sudo apt-get install php5
        # php -v now shows a successful install of PHP 5.5.x

    Apache is broken; /etc/apache2/sites-available/default is now empty.

    Question: Why does this happen? According to https://launchpad.net/~ondrej/+archive/php5 there were no changes in the last few days.
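    Not a root-cause answer, but a defensive sketch that makes the breakage recoverable, assuming the only damage really is the emptied vhost file:

        # snapshot the Apache config before touching the PPA...
        sudo cp -a /etc/apache2 /root/apache2-backup
        sudo add-apt-repository ppa:ondrej/php5
        sudo apt-get update && sudo apt-get install php5
        # ...and put the emptied file back afterwards if needed
        sudo cp /root/apache2-backup/sites-available/default /etc/apache2/sites-available/default
        sudo service apache2 restart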

    Read the article

  • setting visual bell to flash in iTerm

    - by blackwing
    I am using iTerm on OS X (Leopard) to SSH into a Linux machine, and I run screen on the dev machine to save my work between sessions. I am not a big fan of the audio bell, and I don't like screen's default 'Wuff Wuff' bell (or any other little message shown at the bottom of the page). What I would like instead is a flash (foreground and background colors swapped for a fraction of a second) as my visual bell. I used to use PuTTY, where this is as simple as ticking a checkbox, but I can't find such an option in iTerm. My question is: how can I set my visual bell to flash? The ideal answer would work with iTerm on the local computer, with iTerm SSHed into a Linux server, and with iTerm SSHed into a Linux server running screen.
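    The screen half of this can be sketched in ~/.screenrc, assuming a screen version with visual-bell support (whether the flash actually renders then depends on iTerm honoring the terminal's visual bell):

        # ~/.screenrc: turn bells into the terminal's visual bell
        vbell on
        vbell_msg ""      # suppress screen's own "Wuff Wuff" text

    With this in place, screen stops printing its bell message and instead asks the terminal to flash.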

    Read the article

  • Strategies for very fast delivery of webpages

    - by Cherian
    I run a website, Cucumbertown, with an initial payload of nearly 9 KB zipped. All my JS is delay-loaded with RequireJS; Modernizr is the only exception. All my web pages are cached by Nginx, with only 10-15% of hits going to the backend proxy, and the cache is bypassed for logged-in users via proxy_cache_bypass, so for an anonymous user it's nearly always a cache hit. I have some basic OS tuning in place:

        default via <ip> dev eth0 initcwnd 15
        net.ipv4.tcp_slow_start_after_idle 0

    Despite everything being cached and the large initcwnd, my pages still take 2.5-3 seconds. (My YSlow and Page Speed scores were attached as images.) Are there strategies that can help deliver web pages even faster than this? Deliver pages in a second or so for a 10 KB payload? Note: my servers run out of a fairly good Linode data center in Fremont.

    Read the article

  • test of ICMP block

    - by Marcos
    In my bash scripts I have been using something like:

        until fping -u google.com; do
            echo "$0[$$] Network/DNS down?? $(date)" 1>&2 && sleep $(($RANDOM%(1 + ++trynum * 1) +1)).222
        done

    to test for online connectivity. It halts in place, sleeping for growing random intervals, until it can ping google.com again. Problem: at some sites ICMP pings are blocked altogether while web pages are still reachable. What's a short way to test for this general case? Based on that test I will switch over to an HTTP-based test, like the exit status of curl -s google.com >/dev/null, if that is a good one.
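    A minimal sketch of the HTTP-based variant proposed above, assuming curl is installed (--head keeps the transfer tiny, and the exit status still reflects whether the request went through):

        # retry with growing random sleeps until an HTTP request succeeds
        until curl -s --head --max-time 10 http://google.com >/dev/null; do
            echo "$0[$$] Network down?? $(date)" 1>&2
            sleep $(($RANDOM % (1 + ++trynum) + 1))
        done

    curl exits non-zero on connection failures and timeouts (but not on HTTP error codes unless -f is added), so the loop behaves like the fping version even where ICMP is filtered.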

    Read the article

  • KVM Slow performance on XP Guest

    - by Gregg Leventhal
    The system is very slow to do anything, even browsing a local folder, and the CPU frequently sits at 100%. The guest is XP 32-bit; the host is Scientific Linux 6.2 with libvirt 0.10. The guest XP OS shows the ACPI Multiprocessor HAL, and virtIO drivers for the NIC and SCSI are installed. cpuinfo on the host:

        processor       : 0
        vendor_id       : GenuineIntel
        cpu family      : 6
        model           : 42
        model name      : Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz
        stepping        : 7
        cpu MHz         : 3200.000
        cache size      : 8192 KB
        physical id     : 0
        siblings        : 8
        core id         : 0
        cpu cores       : 4
        apicid          : 0
        initial apicid  : 0
        fpu             : yes
        fpu_exception   : yes
        cpuid level     : 13
        wp              : yes
        flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dts tpr_shadow vnmi flexpriority ept vpid
        bogomips        : 6784.93
        clflush size    : 64
        cache_alignment : 64
        address sizes   : 36 bits physical, 48 bits virtual
        power management:

    Relevant parts of the domain XML (truncated in the original):

        <memory unit='KiB'>4194304</memory>
        <currentMemory unit='KiB'>4194304</currentMemory>
        <vcpu placement='static' cpuset='0'>1</vcpu>
        <os>
          <type arch='x86_64' machine='rhel6.3.0'>hvm</type>
          <boot dev='hd'/>
        </os>
        <features>
          <acpi/>
          <apic/>
          <pae/>
        </features>
        <cpu mode='custom' match='exact'>
          <model fallback='allow'>SandyBridge</model>
          <vendor>Intel</vendor>
          <feature policy='require' name='vme'/>
          <feature policy='require' name='tm2'/>
          <feature policy='require' name='est'/>
          <feature policy='require' name='vmx'/>
          <feature policy='require' name='osxsave'/>
          <feature policy='require' name='smx'/>
          <feature policy='require' name='ss'/>
          <feature policy='require' name='ds'/>
          <feature policy='require' name='tsc-deadline'/>
          <feature policy='require' name='dtes64'/>
          <feature policy='require' name='ht'/>
          <feature policy='require' name='pbe'/>
          <feature policy='require' name='tm'/>
          <feature policy='require' name='pdcm'/>
          <feature policy='require' name='ds_cpl'/>
          <feature policy='require' name='xtpr'/>
          <feature policy='require' name='acpi'/>
          <feature policy='require' name='monitor'/>
          <feature policy='force' name='sse'/>
          <feature policy='force' name='sse2'/>
          <feature policy='force' name='sse4.1'/>
          <feature policy='force' name='sse4.2'/>
          <feature policy='force' name='ssse3'/>
          <feature policy='force' name='x2apic'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='rtc' tickpolicy='catchup'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/libexec/qemu-kvm</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='none'/>
            <source file='/var/lib/libvirt/images/Server-10-9-13.qcow2'/>
            <target dev='vda' bus='virtio'/>
            <alias name='virtio-disk0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </disk>

    Read the article

  • Installing Ubuntu to a USB drive

    - by Carl Smotricz
    I'm having a rough time getting Ubuntu to run from a 250 GB USB hard drive. I booted Ubuntu 9.10 from a CD and ran the regular install onto the attached USB drive. I used the "advanced" option on the drive-partitioning question to put the boot loader on /dev/sdb (the USB disk), but when I boot the machine it doesn't recognize that there's a boot loader on the USB drive (it offers to boot from two other devices, but not the USB disk). I also tried booting from the Ubuntu CD and using usb-creator-gtk to set up the USB drive, but that seems to be meant for flash drives: I got a bootable USB disk, but it looked and worked like the CD, i.e. it gave me the options of "live CD" operation, installing, memtest, etc. That's not the way I want to run the system. Some help installing Ubuntu so that it boots into a "full" running system from my USB drive would be appreciated.

    Read the article

  • Tutorial for Quick Look Generator for Mac

    - by vgm64
    I've checked out Apple's "Quick Look Programming Guide: Introduction to Quick Look" page in the Mac Dev Center, but as more of a science programmer than an Apple programmer I find it a little over my head (though I could probably get through it in a weekend if I bashed my head against it long enough). Does anyone know of a good basic Quick Look generator tutorial that is simple enough for someone with only very modest Xcode experience? For those who are curious: I have a file type called .evt that consists of an XML header followed by binary data, and I'm trying to write a generator that displays the XML header. There's no application bundle that the file type belongs to. Thanks!

    Read the article

  • Compiling PHP 5.3.3 on Ubuntu 8.04: Could not find libevent

    - by Nick
    When attempting to ./configure PHP 5.3.3 on Ubuntu 8.04, I get the error:

        checking for libevent >= 1.4.11 install prefix... configure: error: Could not find libevent >= 1.4.11 in /usr/local/

    I tried installing the libevent-dev and libevent1 packages, but got the same error. I then removed the packages and downloaded and compiled libevent from source. Same error. locate shows that libevent was installed to /usr/local/lib/libevent.so, with all its friends in /usr/local/lib/. I tried configuring with the option:

        --with-libevent-dir=/usr/local/lib/

    Basically the same error:

        checking for libevent >= 1.4.11 install prefix... configure: error: Could not find libevent >= 1.4.11 in /usr/local/lib/

    Any suggestions?

    Read the article

  • How to access an SD card from a virtual machine?

    - by Punit Soni
    I want to format an SD card from my Linux virtual machine. I have a built-in SD card reader in my laptop. I tried both VirtualBox and VMware Player with an Ubuntu 10.04 guest, but neither of them shows the SD card reader as a device, even though I can access the SD card from the Windows host. I am not interested in solutions using shared folders, as I want to access the SD card as hardware (it should show up in /dev). I basically want to set up the SD card for a BeagleBoard, but I don't want to install Ubuntu natively on my PC.
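    One approach sometimes suggested for VirtualBox on a Windows host is to attach the reader as a raw disk, so the guest sees it as a real block device. A sketch, where PhysicalDrive2 is an assumption; verify the actual number first (e.g. in diskmgmt.msc), because pointing this at the wrong drive can destroy data:

        REM run from an elevated prompt
        VBoxManage internalcommands createrawvmdk -filename C:\vm\sdcard.vmdk -rawdisk \\.\PhysicalDrive2

    The resulting sdcard.vmdk can then be attached to the VM like any other disk, and the card should appear in the guest as /dev/sdX. This only works if Windows exposes the reader as a disk drive.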

    Read the article

  • How to copy directories using debugfs?

    - by tjbp
    The debugfs manpage gives the impression that the command rdump . . will recursively copy all files found on the specified filesystem from the debugfs working directory to the native filesystem's current directory. Instead I seem to get a syntax error, and no copy is initiated. These are the commands I run:

        cd /path/to/transfer/destination
        debugfs /dev/sda1 -R rdump . .

    My task is to copy the entire contents of a clean yet unmountable USB storage device to its host machine's hard drive. The host machine does not support the inode size used by the USB device's filesystem (256) and its software is not upgradeable, so my intention was to use debugfs to transfer the files. If anyone has any other suggestions for this task, I'd be grateful.
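    One thing worth checking, as an assumption about the syntax error rather than a confirmed fix: debugfs's -R flag takes a single request argument, so a request containing spaces likely needs quoting (and -R conventionally comes before the device); otherwise the shell splits rdump . . into separate arguments and debugfs sees a malformed request:

        cd /path/to/transfer/destination
        debugfs -R "rdump . ." /dev/sda1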

    Read the article

  • Debian server doesn't free memory after backup

    - by stan31337
    I have a production server running Debian 6.0.6 Squeeze:

        # uname -a
        Linux debsrv 2.6.32-5-xen-amd64 #1 SMP Sun Sep 23 13:49:30 UTC 2012 x86_64 GNU/Linux

    Every day cron executes a backup script as root:

        # crontab -e
        0 5 * * * /root/sites_backup.sh > /dev/null 2>&1

        # /root/sites_backup.sh
        #!/bin/bash
        str=`date +%Y-%m-%d-%H-%M-%S`
        tar pzcf /home/backups/sites/mysite-$str.tar.gz /var/sites/mysite/public_html/www
        mysqldump -u mysite -pmypass mysite | gzip -9 > /home/backups/sites/mysite-$str.sql.gz
        cd /home/backups/sites/
        sha512sum mysite-$str* > /home/backups/sites/mysite-$str.tar.gz.DIGESTS
        cd ~

    Everything works perfectly, but I notice that Munin's memory graph shows an increase in cache and buffers after the backup. Then I just download the backup files and delete them, and after deletion Munin's memory graph returns cache and buffers to their pre-backup state. Unfortunately I don't have enough rep to add the image here, so here's a link:
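    What the graph describes is most likely just the Linux page cache at work: reading the site files and writing a large tar.gz leaves those pages cached, and deleting the backup files discards their cached pages again, which matches the rise and fall exactly. Cached memory is reclaimed automatically under pressure, so nothing needs to be freed by hand. A sketch for confirming this, assuming standard Debian tools:

        # the number to watch is the "-/+ buffers/cache" line, not the first "free" column
        free -m

        # to prove the point only (not recommended as a routine fix): drop clean caches
        sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

    If the cache churn is genuinely unwanted, running the backup through a tool like nocache(1) is a gentler option than dropping caches.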

    Read the article

  • Unable to enable InnoDB in MySQL on Ubuntu 10.04

    - by spowers
    I am trying to enable InnoDB in MySQL on my Linux server. I installed Ubuntu 10.04 JeOS on an ESX server and then installed MySQL and Tomcat using aptitude. However, when I run SHOW ENGINES; in MySQL, InnoDB does not appear to be available. I then tried following the directions in the documentation at http://dev.mysql.com/doc/refman/5.1/en/innodb.html, but I get the following when trying to enable the plugin:

        ERROR 1123 (HY000): Can't initialize function 'InnoDB'; Plugin initialization function failed.

    I would appreciate some advice on how to approach this problem.
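    Two things commonly checked in this situation, as a sketch (paths assume the stock Ubuntu 10.04 MySQL 5.1 packages): whether InnoDB is disabled in the config, and what the server itself logged when the plugin failed to initialize:

        # 1. look for a skip-innodb line or innodb_* overrides
        grep -n -i innodb /etc/mysql/my.cnf /etc/mysql/conf.d/*.cnf

        # 2. the real reason is usually in the error log (or syslog on some setups)
        sudo tail -n 50 /var/log/mysql/error.log

        # after commenting out skip-innodb (if present):
        sudo service mysql restart

    These are assumptions about the likely cause, not a confirmed diagnosis.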

    Read the article

  • Monitoring mongrel with monit

    - by matnagel
    I wrote a monit.d file for mongrels, which works in this version:

        check process redmine with pidfile /home/redmine/service/redmine.pid
          group webservice
          start program = "/usr/bin/mongrel_rails start -p 41328 -e production -d --pid /home/redmine/service/redmine.pid --user redmine --group redmine -a 127.0.0.1 -c /home/redmine/app"
          stop program = "/usr/bin/mongrel_rails stop --pid /home/redmine/service/redmine.pid -c /home/redmine/app && rm /home/redmine/service/redmine.pid > /dev/null 2>&1
          if cpu greater 50% for 2 cycles then alert
          if cpu greater 80% for 3 cycles then restart
          if totalmem greater 60.0 MB for 5 cycles then restart
          if loadavg (5min) greater 4 for 8 cycles then restart
          if 3 restarts within 5 cycles then timeout

        $ Checking monit control file syntax...
        $ Control file syntax OK

    I want to also monitor the HTTP response, so I add this line at the end:

        if failed port 41328 protocol http with timeout 10 seconds then restart

    Now monit complains:

        $ Checking monit control file syntax...
        $ /etc/monit.d/redmine:16: Error: exceeded maximum number of program arguments 'http'
        $ ERROR: CHECK MONIT CONFIG FILE SYNTAX

    How do I correctly monitor the port?
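    One thing stands out in the config as quoted, as an observation rather than a confirmed fix: the stop program string is opened with a double quote but never closed, so monit plausibly keeps consuming the following lines, including the new http test, as program arguments, which would match the "exceeded maximum number of program arguments" error. A sketch of that line with the quote balanced:

        stop program = "/usr/bin/mongrel_rails stop --pid /home/redmine/service/redmine.pid -c /home/redmine/app && rm /home/redmine/service/redmine.pid > /dev/null 2>&1"

    With the string terminated, the if failed port 41328 protocol http with timeout 10 seconds then restart line should parse as its own test.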

    Read the article

  • Unable to write into character device file in Ubuntu

    - by Surjya Narayana Padhi
    I have just written a Linux character driver and created a character device file named X; I can see the file in the /dev folder. Now I want to do some read/write operations on this file. I opened the file in the vi editor, wrote some text into it, used :wq and exited, and it didn't show any error. But now when I cat that same file, I am not able to see any content. I have tried this several times with the same result. Please let me know if I am doing something wrong.
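    A likely pitfall worth checking, as an assumption based on how editors save files rather than a confirmed diagnosis: vi typically writes a new file and renames it over the old one, which silently replaces the device node with a regular file; many character drivers also simply don't store what is written to them. Plain redirection keeps the node intact and exercises the driver's handlers directly:

        # confirm X is still a character device ('c' as the first mode letter)
        ls -l /dev/X

        # write through the driver, then read back
        echo "hello" | sudo tee /dev/X
        sudo cat /dev/X

    If cat still returns nothing, the driver's read() may simply not return the data its write() received; that has to be implemented in the driver itself.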

    Read the article

  • mdadm auto grow raid

    - by johannes
    I have a RAID 0/1 array on LVM logical volumes. I resized the logical volumes, and now I want to resize the RAID to use the complete logical volumes. This can be done with:

        mdadm /dev/md? --grow -z newsize

    But somehow I can't figure out how to calculate the newsize argument. Is there a way to tell mdadm to grow to the biggest possible size? If not, how do I calculate the biggest possible size of the RAID to use for the newsize argument?
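    mdadm does accept a keyword for this: the --size (-z) option takes max, meaning "use all available space on the component devices" (worth confirming in man mdadm for the installed version). A sketch, with /dev/md0 standing in for the actual array:

        # grow the array to the largest size the components allow
        mdadm --grow /dev/md0 --size=max

        # then verify the new size
        mdadm --detail /dev/md0

    Note that --size applies to levels with redundancy such as RAID 1; it is not meaningful for a plain RAID 0.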

    Read the article

  • Piping stream into tar on FreeBSD

    - by Casey Jordan
    I am trying to pipe a tar/gzip archive into tar to decompress it. The script is part of a self-extracting installer, where my archive is appended to the script. This works fine on Linux, where the script looks like this:

        export TMPDIR=`mktemp -d /tmp/selfextract.XXXXXX`
        echo "TEMP: $TMPDIR"
        ARCHIVE=`awk '/^__ARCHIVE_BELOW__/ {print NR + 1; exit 0; }' $0`
        tail -n+$ARCHIVE $0 | tar xz -C $TMPDIR
        exit 0
        __ARCHIVE_BELOW__

    The tar archive as a string follows __ARCHIVE_BELOW__, but I have omitted it here since it's huge. However, when I do this on FreeBSD I get the following error:

        tar: Failed to open '/dev/sa0'

    I read that this is because FreeBSD's tar reads from that device by default, and that you can tell it to read from stdin by passing -f -, like so:

        tail -n+$ARCHIVE $0 | tar zxf - -C $TMPDIR

    However, when I do this I just get the error:

        tar: Damaged tar archive
        tar: Retrying...

    Can anyone point out what I am doing wrong here? I need to do it this way (via piping) for efficiency reasons. Thanks.
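    A debugging sketch to narrow this down, assuming nothing about the cause: extract the payload to a file first and test it on its own, which separates "the embedded bytes are damaged" from "the pipe invocation is wrong":

        # dump everything after the marker to a file and inspect it
        tail -n+$ARCHIVE $0 > /tmp/payload.bin
        file /tmp/payload.bin          # should report gzip compressed data
        tar tzf /tmp/payload.bin       # should list the archive contents

    If the file check fails, the payload itself is being corrupted, e.g. by a transfer that translated line endings or by an off-by-one in the awk marker calculation, rather than by FreeBSD's tar.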

    Read the article

  • Why does the partition tool GParted read 190 GB of data twice when shrinking a 250 GB partition to 190 GB?

    - by Jian Lin
    When using GParted to shrink a 250 GB partition to 190 GB, I thought it would move the 60 GB of data back into the 190 GB region and be done. Instead it reads the 190 GB of data twice, the first pass taking about 1 hour and the second about 2 hours. My questions: 1) why does it touch 190 GB of data instead of 60 GB? 2) why does it read it twice? Update: I suspect this is the cause: it says "moving /dev/sdb1 to the right and then shrink it to 190GB". So is that the reason? It first shrinks the partition to 190 GB and then moves it to the right, i.e. it is not moving to the right and then shrinking, but shrinking first and then moving (it cannot move first, because the original 250 GB is the whole hard drive). Also, why move it to the right at all?

    Read the article

  • Bash backup script backs up its own backup file

    - by Jay LaCroix
    I have the following bash script that creates a tar.gz of my filesystem on a Kubuntu PC. The problem is that it also backs up the tar.gz backup file itself, even though I am storing the backup in /tmp and omitting /tmp from the backup. I am wondering why it backs up the file in /tmp even though I told it not to.

        #!/bin/bash
        # init
        DATE=$(date +20%y%m%d)
        sudo tar -cvpzf /tmp/`hostname`_$DATE.tar.gz \
            --exclude=/proc \
            --exclude=/lost+found \
            --exclude=/sys \
            --exclude=/mnt \
            --exclude=/media \
            --exclude=/dev \
            --exclude=/tmp \
            --exclude=/home/jlacroix/Desktop \
            --exclude=/home/jlacroix/Documents \
            --exclude=/home/jlacroix/Music \
            --exclude=/home/jlacroix/Pictures \
            --exclude=/home/jlacroix/Projects \
            --exclude=/home/jlacroix/Roms \
            --exclude=/home/jlacroix/Videos \
            --exclude=/home/jlacroix/.VirtualBox\ VMs \
            --exclude=/home/jlacroix/.SpiderOak \
            /
        scp /tmp/`hostname`_$DATE.tar.gz jlacroix@Pluto:/share/Recovery/Snapshots
        sudo rm /tmp/`hostname`_$DATE.tar.gz
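    A sketch for checking what is actually happening, plus a hedged guess at the cause: GNU tar strips the leading "/" from member names, and some tar versions match --exclude patterns against the stripped names, so an absolute exclude can behave differently from a relative one. Both steps below are things to verify, not a confirmed diagnosis:

        DATE=$(date +20%y%m%d)
        ARCHIVE=/tmp/$(hostname)_${DATE}.tar.gz

        # 1. confirm the archive really lists itself as a member
        tar -tzf "$ARCHIVE" | grep "$(hostname)_${DATE}"

        # 2. belt and braces: exclude the archive by name, with and without
        #    the leading slash, in addition to --exclude=/tmp
        sudo tar -cvpzf "$ARCHIVE" \
            --exclude=/tmp \
            --exclude="$ARCHIVE" \
            --exclude="${ARCHIVE#/}" \
            /    # (other --exclude lines unchanged)

    If step 1 shows no match, tar may only be mentioning the file in its verbose output without actually archiving it, which is harmless.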

    Read the article

  • NFS-Root not working when booting over PXE

    - by Randy
    I am desperately trying to get a diskless client running via PXE boot, using an NFS share as the root file system. I did this successfully some years ago, but for some reason I have been stuck on it for days. The TFTP server itself is running fine, and booting a net installer works fine as well. The kernel and initrd are loaded too, but the boot process stops with a kernel panic (screenshot). I'm using the standard Squeeze i386 kernel, and I have prepared the initrd with this config:

        MODULES=most
        BUSYBOX=y
        KEYMAP=n
        COMPRESS=gzip
        BOOT=nfs
        DEVICE=
        NFSROOT=auto

    I also tried MODULES=netboot, with the same outcome. My PXE configuration looks like this:

        LABEL linux
          KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
          APPEND root=/dev/nfs initrd=diskless/debian-default/vmlinuz-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw

    Furthermore, I have captured the client's network traffic with tcpdump and learned that the client isn't even trying to connect to the NFS share. Does anybody have an idea what is going wrong here?
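    One detail stands out in the configuration as quoted, as an observation rather than a confirmed diagnosis: the APPEND line passes the kernel image itself as the initrd (initrd=...vmlinuz-2.6.32-5-686), so the real initramfs, the part containing the NFS-mount logic, would never run; that would match both the kernel panic and the complete absence of NFS traffic. A sketch with initrd pointing at the initrd image instead:

        LABEL linux
          KERNEL diskless/debian-default/vmlinuz-2.6.32-5-686
          APPEND root=/dev/nfs initrd=diskless/debian-default/initrd.img-2.6.32-5-686 nfsroot=192.168.140.2:/storage/nfs-boot-images/default-squeeze ip=dhcp rw

    The initrd.img-2.6.32-5-686 filename is an assumption based on Debian's naming convention; use whatever the generated initrd is actually called.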

    Read the article
