Search Results

Search found 36619 results on 1465 pages for 'damn small linux'.

  • Full HD video playback acceleration with mplayer on Ubuntu Lucid

    - by pts
    I know that for an NVidia card I can run

        sudo apt-get install nvidia-current mplayer

    reboot, and then use

        mplayer -vo vdpau -vc ffmpeg12vdpau,ffwmv3vdpau,ffvc1vdpau,ffh264vdpau FILE.mkv

    to get accelerated video playback of H.264 and other codecs, so even full HD videos can be played back with only little CPU. (And there are many other options, e.g. XBMC also supports VDPAU.) But how do I get accelerated video playback if I have a recent ATI or Intel video card on Ubuntu Lucid? How do I figure out if my video card has acceleration built in? The solution has to work with mplayer or mplayer2. It's OK for me to recompile mplayer(2), but I'd prefer installing both the kernel and the X.org X server from a binary package repository.
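
    Not an answer from the source, but a hedged starting point: on Intel (and some ATI) hardware the usual route at the time was VA-API rather than VDPAU, and the first step is checking what the card advertises. The package and tool names are assumptions:

        # Install the VA-API query tool (package name may differ on Lucid)
        sudo apt-get install vainfo
        # List supported decode profiles; H.264 entries suggest hardware decode
        vainfo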

  • How do I reference the value of a constructed environment variable in a loop?

    - by Rob Spieldenner
    What I'm trying to do is loop over environment variables. I have a number of installs that change, and each install has 3 IPs to push files to and run scripts on. I want to automate this as much as possible, so that I only have to modify a file that I'll source with the environment variables. The following is a simplified version; once I figure it out, I can solve my problem. Given my.props:

        COUNT=2
        A_0=foo
        B_0=bar
        A_1=fizz
        B_1=buzz

    I want to fill in the for loop in the following script:

        #!/bin/bash
        . <path>/my.props
        for ((i=0; i < COUNT; i++))
        do
            <script here>
        done

    so that I can get the values from the environment variables, like the following (but in a form that actually works):

        echo $A_$i $B_$i

    or

        A=A_$i
        B=B_$i
        echo $A $B

    which should return foo bar, then fizz buzz.
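
    A minimal sketch of one way to do this with bash indirect expansion; the props file path is an assumption:

        #!/bin/bash
        . ./my.props   # adjust the path as needed
        for ((i=0; i < COUNT; i++))
        do
            a="A_$i"
            b="B_$i"
            # ${!name} expands to the value of the variable whose name is stored in 'name'
            echo "${!a} ${!b}"
        done

    With the my.props above, this prints foo bar, then fizz buzz.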

  • Install grub on 2nd hard drive

    - by jldupont
    I have 2 HDs in my machine: drive 1 with GRUB and my Windows XP OS, and drive 2 with only Ubuntu 9.04. I would like to be able to boot directly from drive 2, but GRUB is missing on drive 2. How do I add it? EDIT: I ended up reinstalling the whole OS.
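
    A hedged sketch, assuming drive 2 is /dev/sdb and the GRUB legacy tools that ship with Ubuntu 9.04 (verify the device name with sudo fdisk -l before writing to an MBR):

        sudo grub-install /dev/sdb
        sudo update-grub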

  • Make puppet agent restart itself

    - by SamKrieg
    I've got a file that notifies the puppet agent. In the network module, the proxy settings are included in the .gemrc file like this:

        file { "/root/.gemrc":
            content => "http_proxy: $http_proxy\n",
            notify  => Service['puppet'],
        }

    The problem is that puppet stops and does not restart:

        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: (/Stage[main]/Network/File[/root/.gemrc]/content) content changed '{md5}2b00042f7481c7b056c4b410d28f33cf' to '{md5}60b725f10c9c85c70d97880dfe8191b3'
        Aug 31 12:05:13 snch7log01 puppet-agent[1117]: Caught TERM; calling stop

    I assume the code does something like /etc/init.d/puppet stop && /etc/init.d/puppet start; since puppet is not running at that point, it cannot start itself, which kind of makes sense. How do I make puppet restart itself when this file changes? Note that the file may not exist yet, either.
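
    One hedged workaround, not from the question: instead of notifying Service['puppet'] directly, have the change trigger a helper that queues a detached restart via at(1), so the restart runs outside the agent's own process tree. The helper path is hypothetical:

        #!/bin/sh
        # /usr/local/bin/puppet-delayed-restart (hypothetical helper)
        # Queue the restart so it survives the agent run that triggered it
        echo '/etc/init.d/puppet restart' | at now + 1 minute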

  • Run a script after killing lxsession (xorg)

    - by user284194
    I am trying to run a program automatically from a bash script after killing the LXDE session. My script consists of:

        #!/bin/sh
        pkill lxsession
        sh /home/pi/RetroPie/EmulationStation/emulationstation

    My aim is to log out of the LXDE session and run EmulationStation on my Raspberry Pi from a bash script. I'm using pkill lxsession to bypass lxsession's logout confirmation dialog. As it stands, this script just gets me to the command line from a working LXDE desktop. Thanks for reading.
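
    A hedged variation (an assumption, not a confirmed fix): detach the follow-up command from the dying session with setsid, so it is not killed along with lxsession:

        #!/bin/sh
        pkill lxsession
        sleep 2   # give the session a moment to tear down; the delay is a guess
        exec setsid sh /home/pi/RetroPie/EmulationStation/emulationstation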

  • Fluxbox startup file not working

    - by Jack
    I am placing apps into my fluxbox startup file as per the instructions, however nothing starts up except fluxbox. It doesn't matter what app I try, so it isn't an app problem. Here is my startup file:

        #!/bin/sh
        #
        # fluxbox startup-script:
        #
        # Lines starting with a '#' are ignored.

        # Change your keymap:
        xmodmap "/home/josh/.Xmodmap"

        # Applications you want to run with fluxbox.
        # MAKE SURE THAT APPS THAT KEEP RUNNING HAVE AN ''&'' AT THE END.
        tint2 &
        tilda &

        # And last but not least we start fluxbox.
        # Because it is the last app you have to run it with ''exec'' before it.
        exec fluxbox
        # or if you want to keep a log:
        # exec fluxbox -log "/home/josh/.fluxbox/log"

    I have also tried tests such as "touch ~/testwoked" and such; nothing works. It makes no difference whether the file is executable or not.
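
    One plausible cause, offered as an assumption: ~/.fluxbox/startup is only executed when the session is launched via startfluxbox, not when fluxbox is exec'd directly, so the session launcher (display manager entry or ~/.xinitrc) should end with:

        # ~/.xinitrc (sketch)
        exec startfluxbox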

  • How can I make monodevelop render text in KDE?

    - by Spikolynn
    MonoDevelop built from git under KDE 4.10.2 does not render text in the code edit tabs. I tried with Xfce and text is rendered OK there. I tried disabling compositing with Alt+Shift+F12 and restarting the X server, but it was no better. I also tried disabling font smoothing in MonoDevelop's options, disabling plugins, and temporarily deleting my KDE profile. This is a dual-screen setup on NVIDIA with nouveau. The OS is slackware64-current.

  • ubuntu input/output error

    - by rplevy
    I'm having a problem with Ubuntu that I'm finding hard to troubleshoot, for reasons that will become clear:

        # reboot
        -bash: /sbin/reboot: Input/output error
        # dmesg
        -bash: /bin/dmesg: Input/output error
        # ps -e
        ps: error while loading shared libraries: /lib/libproc-3.2.8.so: cannot read file data: Input/output error
        # lsof
        -bash: /usr/bin/lsof: Input/output error
        # fsck
        -bash: /sbin/fsck: Input/output error
        # badblocks
        -bash: /sbin/badblocks: Input/output error

    So I can't see what is going on, and I can't remotely reboot. What can I do to get to the bottom of this? Interestingly:

        # init 0
        Segmentation fault

    I can cat /var/log/syslog but not /var/log/messages or several other important files. less and more don't work; neither do tail, head, etc.
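
    Since bash builtins don't need to load fresh binaries from the failing disk, a hedged sketch of what may still work (it assumes magic SysRq is enabled in the kernel, and that losing unsynced data is acceptable):

        # Read a log with builtins only, no /usr/bin/less or /usr/bin/tail required
        while read -r line; do echo "$line"; done < /var/log/kern.log

        # Force an immediate reboot via SysRq; this skips a clean shutdown entirely
        echo 1 > /proc/sys/kernel/sysrq
        echo b > /proc/sysrq-trigger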

  • Centos Server/MySQL server problem

    - by Jake
    Hello all, I currently run a website that gets about 15,000-20,000 hits a day. We run a very active forum hosted with vBulletin software: 4.5 million posts, 80,000 threads, and about 11,000 members, of which just under a third are active all the time. I am running an Intel Xeon quad core (2.13GHz) with 4GB of RAM, CentOS 5.5, and DirectAdmin on the box to manage it, plus the current stable versions of Apache, MySQL, and PHP. This is the only site hosted on this machine. At random times of day, sometimes when it gets busy, the server load can reach 20, but this can also happen when we only have about 200 active users. I don't understand what is causing these problems. Sometimes pages generate in 0.2 seconds; other times they take 5-8 seconds. I have customized the my.cnf file and that has not helped at all. I didn't know where else to turn, so if anyone has any suggestions please let me know. Thank you in advance.
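
    A hedged first diagnostic: enable the slow query log and see whether particular queries line up with the load spikes. The option names below are the MySQL 5.0-era spellings likely on CentOS 5.5, and the path and threshold are assumptions:

        # my.cnf sketch, [mysqld] section
        log-slow-queries = /var/log/mysql-slow.log
        long_query_time  = 2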

  • "pull" process/job into the background

    - by Mustafa Ismail Mustafa
    I know about terminating a command with &, moving a job into the background by pressing Ctrl-Z and then running bg [jobspec], and I also know of nohup. But say you started a process that turned out to take much longer than expected: is there a way of pulling, so to speak, this process from another terminal screen into the background, so that even if I log off from the server the process will continue?
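
    From the shell that originally started the process, a minimal sketch with bash builtins (press Ctrl-Z first to stop the foreground job):

        bg %1         # resume the stopped job in the background
        disown -h %1  # mark it so the shell won't send it SIGHUP at logout

    Grabbing a process from a different terminal requires a re-parenting tool such as reptyr (run reptyr PID from inside a screen or tmux session); whether it is installed here is an assumption.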

  • Understanding Red Hat's recommended tuned profiles

    - by espenfjo
    We are going to roll out tuned (and numad) on ~1000 servers, the majority of them being VMware guests on either NetApp or 3Par storage. According to Red Hat's documentation we should choose the virtual-guest profile; what it does can be seen in tuned.conf. We are changing the IO scheduler to NOOP, as both VMware and the NetApp/3Par should do sufficient scheduling for us. However, after investigating a bit, I am not sure why they increase vm.dirty_ratio and kernel.sched_min_granularity_ns. As far as I understand, increasing vm.dirty_ratio to 40% means that for a server with 20GB of RAM, 8GB can be dirty at any given time unless vm.dirty_writeback_centisecs is hit first. And while these 8GB are being flushed, all IO for the application will be blocked until the dirty pages are freed. Increasing the dirty_ratio probably means higher write performance at peaks, as we now have a larger cache, but then again, when the cache fills, IO will be blocked for a considerably longer time (several seconds). The other question is why they increase sched_min_granularity_ns. If I understand it correctly, increasing this value decreases the number of time slices per epoch (sched_latency_ns), meaning that running tasks get more time to finish their work. I can understand this being a very good thing for applications with very few threads, but for e.g. Apache or other processes with a lot of threads, would this not be counter-productive?
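
    A small sketch for recording the values before and after applying the profile; the parameter list is just the ones discussed above, not everything the profile touches:

        sysctl vm.dirty_ratio vm.dirty_writeback_centisecs \
               kernel.sched_min_granularity_ns kernel.sched_latency_ns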

  • How to determine the Kerberos realm from an LDAP directory?

    - by tstm
    I have two Kerberos realms I can authenticate against. One of them I control; the other is external from my point of view. I also have an internal user database in LDAP. Let's say the realms are INTERNAL.COM and EXTERNAL.COM. In LDAP I have user entries like this:

        uid=testuser,ou=People,dc=tml,dc=hut,dc=fi
        shadowFlag: 0
        shadowMin: -1
        loginShell: /bin/bash
        shadowInactive: -1
        displayName: User Test
        objectClass: top
        objectClass: account
        objectClass: posixAccount
        objectClass: shadowAccount
        objectClass: person
        objectClass: organizationalPerson
        objectClass: inetOrgPerson
        uidNumber: 1059
        shadowWarning: 14
        uid: testuser
        shadowMax: 99999
        gidNumber: 1024
        gecos: User Test
        sn: Test
        homeDirectory: /home/testuser
        mail: [email protected]
        givenName: User
        shadowLastChange: 15504
        shadowExpire: 15522
        cn: User.Test
        userPassword: {SASL}[email protected]

    What I would like to do, somehow, is to specify on a per-user basis which authentication server / realm the user is authenticated against. Configuring Kerberos to handle multiple realms is easy, but how do I configure other components, like PAM, to handle the fact that some users are from INTERNAL.COM and some from EXTERNAL.COM? There needs to be an LDAP lookup of some kind from which the realm and the authentication name are fetched, and then the actual authentication itself. Is there a standardized way to add this information to LDAP, or to look it up? Are there some other workarounds for a multi-realm user base? I might be OK with a single-realm solution, too, as long as I can specify the user name - realm combination for each user separately.
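
    A hedged sketch for reading the per-user realm back out of the existing {SASL} userPassword mapping (it assumes the attribute is readable by the querying identity):

        # The part after '@' in the returned value is the user's realm
        ldapsearch -x -b 'ou=People,dc=tml,dc=hut,dc=fi' '(uid=testuser)' userPassword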

  • Revo 3610 not doing hdmi handshake

    - by DoomStone
    I am having a problem with my Revo 3610, which is connected to my TV via HDMI. For some reason it will not do the HDMI handshake with the TV, so the TV does not think there is anything in the HDMI port. I have tested the TV and it works fine with my laptop and DVD player. It does work sometimes, but this time it has failed for 2 days in a row, and I have tried rebooting, turning the TV off and on, and so on; nothing helps. I can trick the TV into listening to the HDMI input by connecting my laptop and then switching the HDMI cable back to the Revo. This gets the image through nicely, but there is a big fat "Check signal cable." message on the screen. I have also tried changing the resolution on the Revo, but this does not help either. Has anyone had this problem before, and if so, how did you fix it? Example: http://i.imgur.com/gguZ4.jpg
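
    One hedged thing to try after tricking the TV into listening (an assumption, not a known fix for this hardware): force the driver to re-probe the output. The connector name varies by driver:

        xrandr                          # list connector names (e.g. HDMI-1)
        xrandr --output HDMI-1 --auto   # re-detect and drive the preferred mode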

  • Why is scp not overwriting my destination file?

    - by Noli
    I'm trying to back up a file via the command:

        scp /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz

    When I run it, the scp progress bar shows up and it looks like it's transferring the file. However, when I log into the destination server to check the file, the timestamp and filesize haven't changed from the older version, so it looks like scp didn't overwrite the old file at all. It only seems to work when I manually delete the file from the destination server. I'm running Ubuntu, and this is happening on two servers: one Cygwin SSH, and one Fedora Core 3. Does anyone have any idea why this is happening? I thought scp would simply overwrite existing files. Thanks
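
    A hedged workaround sketch: rsync writes the transfer to a temporary file and renames it over the destination, which sidesteps whatever is blocking scp's in-place overwrite:

        rsync -av /tmp/backup.tar.gz hostname:/home/user/backup.tar.gz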

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

        /dev/sda1 - Windows 8 (ntfs)
        /dev/sda2 - Ubuntu / (ext4)
        /dev/sda3 - Ubuntu home (ext4)
        /dev/sda5 - swap
        /dev/sda6 - Shared data partition (exfat)

    (First off, yes I do have exfat libraries installed on Ubuntu.) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them, replacing the ones on the shared partition. When I boot into Windows, the files appear unchanged, exactly as they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.

    Edit -- command output. lsblk:

        NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda      8:0    0 465.8G  0 disk
        +-sda1   8:1    0 165.1G  0 part
        +-sda2   8:2    0  21.3G  0 part /
        +-sda3   8:3    0  98.9G  0 part /home
        +-sda4   8:4    0     1K  0 part
        +-sda5   8:5    0   7.8G  0 part [SWAP]
        +-sda6   8:6    0 172.7G  0 part /mnt/shared_data

    /etc/fstab:

        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # /dev/sda2
        UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
        # /dev/sda3
        UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
        # /dev/sda5
        UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
        # /dev/sda6
        UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

    /etc/mtab:

        /dev/sda2 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /home ext4 rw 0 0
        /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0
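
    A hedged hypothesis, not confirmed by the question: Windows 8's Fast Startup saves volume state in a hibernation image at shutdown, so changes made from Ubuntu in between can be masked by that cached state. Disabling it from an elevated Windows command prompt:

        powercfg /h off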

  • strange behaviour - dhclient needs to be run twice in order to connect to wireless

    - by splicer
    I am trying to connect to my WLAN without using NetworkManager. I run the following commands after boot:

        iwconfig wlan0 enc <WEP passwd> mode managed essid <name> channel 6
        ifconfig wlan0 up
        dhclient wlan0

    At this point, dhclient stalls for ages (perhaps 2 minutes), then returns with:

        PING 192.168.1.254 (192.168.1.254) from 192.168.1.65 wlan0: 56(84) bytes of data.
        --- 192.168.1.254 ping statistics ---
        3 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3000ms
        pipe 3

    The strange thing is that when I run pkill dhclient; dhclient wlan0 right after this, it connects in under 3 seconds. Any idea what could be the cause of this problem? Edit: oh, and I did try using the -timeout flag on dhclient, but that didn't seem to make any difference (it still stalled for ages).
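
    A hedged sketch built on one assumption about the cause: the first dhclient starts before the WEP association has completed, so waiting for association may remove the stall:

        ifconfig wlan0 up
        iwconfig wlan0 enc <WEP passwd> mode managed essid <name> channel 6
        # Poll until the driver reports an associated access point
        while iwconfig wlan0 | grep -q 'Not-Associated'; do sleep 1; done
        dhclient wlan0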

  • How can I check whether a volume is mounted where it is supposed to be using Python?

    - by Ben Hymers
    I've got a backup script written in Python which creates the destination directory before copying the source directory to it. I've configured it to use /external-backup as the destination, which is where I mount an external hard drive. I just ran the script without the hard drive being turned on (or being mounted) and found that it was working as normal, albeit making a backup on the internal hard drive, which has nowhere near enough space to back itself up. My question is: how can I check whether the volume is mounted in the right place before writing to it? If I can detect that /external-backup isn't mounted, I can prevent writing to it. The bonus question is why was this allowed, when the OS knows that directory is supposed to live on another device, and what would happen to the data (on the internal hard drive) should I later mount that device (the external hard drive)? Clearly there can't be two copies on different devices at the same path! Thanks in advance!
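
    In Python itself, os.path.ismount('/external-backup') performs exactly this check. The shell equivalent, as a hedged sketch for a wrapper around the backup script:

        # Abort unless the destination is really a mount point
        if ! mountpoint -q /external-backup; then
            echo "backup volume not mounted, refusing to write" >&2
            exit 1
        fi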

  • Script execution flow stopped?

    - by vijay.shad
    Hi all, my script is now able to start the server, but I still have a problem: when the start-server command is executed, control never passes that line, so nothing after it runs. Please tell me what the problem is and how I can get my script to execute all the way through.
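
    A hedged guess, since the script itself isn't shown: a server started in the foreground blocks until it exits, so control never passes that line. A sketch that detaches it; the command name is hypothetical:

        nohup ./start-server.sh > /tmp/server.log 2>&1 &   # placeholder command
        echo "server started in background with PID $!"
        # execution continues here immediately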

  • Disabling networkmanager for a specific interface

    - by bdonlan
    I'd like to do some experimentation with hostap without disabling my primary wireless interface. How do I tell NetworkManager to keep its hands off a specific interface or interfaces while allowing it to continue managing all other interfaces normally? I'm using Ubuntu 9.04. (Wasn't sure if this should go on Super User or Server Fault, as NetworkManager isn't much of a 'server' tool - if it belongs on Server Fault please feel free to move it.) Edit: I've tried adding this to /etc/network/interfaces:

        allow-hotplug wlan2
        iface wlan2 inet static
            address 192.168.49.1
            netmask 255.255.255.0

    But this has no apparent effect, even after restarting NetworkManager. Here's my /etc/NetworkManager/nm-system-settings.conf:

        [main]
        plugins=ifupdown,keyfile

        [ifupdown]
        managed=false

    Edit[2]: Looks like I needed to restart nm-system-settings, then NetworkManager.
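
    For reference, a hedged sketch of the keyfile plugin's per-device ignore list (the MAC address is a placeholder, and support for this key in a 9.04-era NetworkManager is an assumption):

        # /etc/NetworkManager/nm-system-settings.conf
        [keyfile]
        unmanaged-devices=mac:00:11:22:33:44:55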
