Search Results

Search found 24624 results on 985 pages for 'linux rrt'.


  • Is it possible to have DisplayLink USB display hotplugging with Xorg 1.13 on kernel 3.4?

    - by lkraav
    keithp seems to be the only one on the interwebs to have written anything about the subject and he worked with 3.5_rc. I don't want to go above 3.4 at the moment for various stability reasons and am trying to see whether I can get this to work. Xorg 1.13 recognizes the display on connection, "udl" module is loaded, xorg-video-modesetting driver also loads, display lights up. So everything seems to be good. I emerged xrandr-9999 (not many changes on top of 1.3.5):

        $ xrandr --listproviders
        Providers: number : 2
        Provider 0: id: 69 cap: 0x0 crtcs: 2 outputs: 4 associated providers: 0 name:Intel
        Provider 1: id: 338 cap: 0x0 crtcs: 1 outputs: 1 associated providers: 0 name:modesetting

    But I can't get any further, just like this guy:

        $ xrandr --setprovideroutputsource 338 69
        X Error of failed request:  BadValue (integer parameter out of range for operation)
          Major opcode of failed request:  139 (RANDR)
          Minor opcode of failed request:  35 ()
          Value in failed request:  0x152
          Serial number of failed request:  11
          Current serial number in output stream:  12

        $ xrandr --setprovideroutputsource 1 0
        X Error of failed request:  148
          Major opcode of failed request:  139 (RANDR)
          Minor opcode of failed request:  35 ()
          Serial number of failed request:  11
          Current serial number in output stream:  12

    Any thoughts?
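
    Editorial note, purely a hedged suggestion and not from the post: xrandr also accepts provider names rather than ids or indices, so one more form worth trying, using the names shown by --listproviders above, would be:

        xrandr --setprovideroutputsource modesetting Intel
        xrandr --auto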

    Read the article

  • How to close all background processes in unix?

    - by Gabi Purcaru
    I have something like:

        cd project && python manage.py runserver &
        cd utilities && ./coffee_auto_compiler.py

    and I want both of them to close on Ctrl-C (or some other command). How can I accomplish that? EDIT: I tried using jobs -x kill and kill `jobs -p`, but it doesn't seem to kill what I need. Here is what I mean:

        moon  8119  0.0  0.0   7556  3008 pts/0  S  13:17  0:00 /bin/bash
        moon  8120  6.8  0.4  24568 18928 pts/0  S  13:17  0:00 python manage.py runserver

    jobs -p gives me just process 8119, but I also need to close 8120, since it's what the first command opened. If it helps, the commands are actually in a Makefile, and I want it to run two daemons at the same time (and somehow close them at the same time). And yes, I'm using Ubuntu, with bash.
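
    One possible wrapper, sketched under the assumption that both commands are started from a single bash script (the paths and coffee_auto_compiler.py are taken from the question): exec'ing inside the subshells makes jobs -p report the real daemon PIDs, and a trap forwards Ctrl-C to both.

        #!/bin/bash
        # minimal sketch: start both daemons, kill them together on Ctrl-C or TERM
        trap 'kill $(jobs -p) 2>/dev/null' INT TERM

        (cd project   && exec python manage.py runserver) &
        (cd utilities && exec ./coffee_auto_compiler.py) &

        wait   # block until both children exit

    From a Makefile this could be invoked as one target so both daemons share a single controlling script.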

    Read the article

  • Fluxbox startup file not working

    - by Jack
    I am placing apps into my fluxbox startup file as per the instructions, however nothing starts up except fluxbox. It doesn't matter what app I try, so it isn't an app problem. Here is my startup file:

        #!/bin/sh
        #
        # fluxbox startup-script:
        #
        # Lines starting with a '#' are ignored.

        # Change your keymap:
        xmodmap "/home/josh/.Xmodmap"

        # Applications you want to run with fluxbox.
        # MAKE SURE THAT APPS THAT KEEP RUNNING HAVE AN ''&'' AT THE END.
        tint2 &
        tilda &

        # And last but not least we start fluxbox.
        # Because it is the last app you have to run it with ''exec'' before it.
        exec fluxbox
        # or if you want to keep a log:
        # exec fluxbox -log "/home/josh/.fluxbox/log"

    I have also tried tests such as "touch ~/testwoked" and such, nothing works. It makes no difference if the file is executable or not.
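
    A point worth checking (editorial note, not from the post): ~/.fluxbox/startup is only executed when the session is launched through the startfluxbox wrapper; if the session runs fluxbox directly, the file is never read. A minimal ~/.xinitrc (or display-manager session command), assuming the default file locations:

        #!/bin/sh
        # run the wrapper, which sources ~/.fluxbox/startup and then execs fluxbox
        exec startfluxbox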

    Read the article

  • Replacing compiz/metacity with openbox reduces workspaces to 1

    - by Brian
    I like to use the GNOME desktop, but I prefer to replace its window manager with openbox, with 4 workspaces. However, when I run openbox --replace, the number of workspaces available drops to 1. If I go into obconf, workspaces is still configured to be 4 (~/.config/openbox/rc.xml shows the same). I can get the workspaces to reappear by changing the value in obconf to anything else, and then back to 4. I have just been dealing with this problem since Ubuntu 9.04 (now up to 10.10) since I don't reboot very often. But it's really annoying to have to reset my workspaces whenever I do have to reboot. Changing the value in rc.xml and running openbox --reconfigure does not seem to have any effect. So what is obconf doing that I'm not (sends a dbus message perhaps [EDIT: watching with dbus-monitor I see no messages when changing the workspaces value in obconf])? I was hoping there would be a cleaner way to change the window manager than just running openbox --replace at login. So my questions are: Is there a better way to specify an alternate window manager (i.e. a way that doesn't cause the workspaces to break)? If not, how can I automatically set the number of workspaces back to 4? Update: I finally got around to trying what I commented on MrShunz's answer (adding WINDOW_MANAGER=/usr/bin/openbox to ~/.gnomerc). But the effect is the same as openbox --replace.
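
    For the second question, one low-tech workaround (an editorial sketch, not from the thread): the EWMH desktop count can be forced after the window manager starts, for instance from a GNOME autostart entry, using wmctrl or xprop:

        # ask the running window manager for 4 desktops (wmctrl must be installed)
        wmctrl -n 4

        # equivalent via xprop, setting the EWMH root-window property directly
        xprop -root -f _NET_NUMBER_OF_DESKTOPS 32c -set _NET_NUMBER_OF_DESKTOPS 4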

    Read the article

  • How to download all file content from a folder using wget and http

    - by user1526912
    I am trying to use wget and http to download all contents from folderAA below to directory /root/sstest:

        wget -r --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/

    When I submit the above command nothing is downloaded. If I submit a wget request for a specific file from folderAA the file is actually downloaded to /root/sstest:

        wget -r --directory-prefix="/root/sstest" -o /root/sstest2.log http://site.com/folder1/folder2/folderAA/file.txt

    Can someone tell me why I cannot download all file content from folderAA at once using the first wget request?
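
    Recursive wget can only follow links it finds in an HTML page, so the usual suspects are a missing or empty directory index for folderAA, or a robots.txt that forbids crawling. A hedged variant to try, assuming the server does publish an index page for the folder:

        wget -r -np -nH --cut-dirs=2 -e robots=off \
             --directory-prefix="/root/sstest" -o /root/sstest2.log \
             http://site.com/folder1/folder2/folderAA/

    -np keeps wget from wandering up into parent folders, -nH and --cut-dirs trim the host and leading path components from the saved files, and -e robots=off ignores a robots.txt exclusion (use only where that is acceptable).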

    Read the article

  • 284 GiB of data, 217.4 GiB of space

    - by Malfist
    I want to reinstall my OS, but I don't have the hard drive space to backup any more (I have a RAID 1 array, so I haven't done it for a while). In my /home I have 284.8 GiB of data, and I have a spare 250 GB (or 217.4 GiB) hard drive that I've been using for backup. What type of compression algorithm (if any) is capable of this type of compression? I don't care about the time, I have a quad core though, so something that utilizes all 4 cores would be great. I have tried 7zip with no success. Ran on one core for two days and failed because of lack of space. Any ideas?
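
    Whether this fits depends entirely on what the 284 GiB contains (already-compressed media will barely shrink), but for a multi-core attempt a sketch like the following keeps all four cores busy; the paths are placeholders:

        # xz with -T0 uses every available core (xz >= 5.2)
        tar -cf - /home | xz -T0 -6 > /media/backup/home.tar.xz

        # or, if zstd is available, it is much faster at broadly similar ratios
        tar -cf - /home | zstd -T0 -19 -o /media/backup/home.tar.zst

    Streaming straight onto the spare drive avoids needing scratch space on the full filesystem, which is what tripped up the 7zip run.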

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one lvm physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help.

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid  Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.  Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.  Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap lv doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?
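
    Saving a raw image before any repair attempt is cheap insurance; a minimal sketch with GNU ddrescue, assuming the decrypted LVs sit at the device-mapper paths shown by lvdisplay (the names here are illustrative):

        # copy the damaged root LV to an image, recording progress in a map file
        ddrescue /dev/mapper/vg-lithe_root /mnt/usb/lithe_root.img /mnt/usb/lithe_root.map

    fsck, debugfs or strings can then be pointed at the image instead of the possibly failing SSD.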

    Read the article

  • p2v v2v v2p tool from open source?

    - by neolix
    We have CentOS, Fedora, and Ubuntu servers and desktops, and we are looking for a good open source tool for P2V, V2V, and V2P. We are not using VMware here; we only use Xen or KVM. Some of the servers are being shifted to new hardware, and some of the servers will run on Xen or KVM. Can someone help me?
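
    For the V2V leg specifically, one candidate (an editorial suggestion, not from the question) is virt-v2v from the libguestfs project; a hedged example converting a raw disk image for use under KVM/libvirt:

        # convert a guest image and register it with the local libvirt instance
        virt-v2v -i disk /var/tmp/guest.img -o libvirt -os default

    P2V of a running physical box is usually done by booting the virt-p2v ISO on it and pointing it at a conversion host, or simply by imaging the disks with dd and importing the image the same way.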

    Read the article

  • Different files on shared partition?

    - by Matt Robertson
    I am dual-booting Windows 8 and Ubuntu 12.04. My partition scheme looks like this:

        /dev/sda1 - Windows 8 (ntfs)
        /dev/sda2 - Ubuntu / (ext4)
        /dev/sda3 - Ubuntu home (ext4)
        /dev/sda5 - swap
        /dev/sda6 - Shared data partition (exfat)

    (First off, yes I do have the exfat libraries installed on Ubuntu.) I created some PNG images in Windows and saved them on my shared partition. From Ubuntu, I edited the images in GIMP and saved them (replacing the ones on the shared partition). When I boot into Windows, the files appear unchanged - exactly like they did before I edited them from Ubuntu. I even added a folder and deleted some other files, but none of these changes exist in Windows. When I boot into Ubuntu, all of the changes are still there. It is as if Windows is caching the old file structure... How is this possible? Thanks in advance.

    Edit -- commands output:

        ~~ lsblk
        NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
        sda       8:0    0 465.8G  0 disk
        +-sda1    8:1    0 165.1G  0 part
        +-sda2    8:2    0  21.3G  0 part /
        +-sda3    8:3    0  98.9G  0 part /home
        +-sda4    8:4    0     1K  0 part
        +-sda5    8:5    0   7.8G  0 part [SWAP]
        +-sda6    8:6    0 172.7G  0 part /mnt/shared_data

        ~~ /etc/fstab
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # /dev/sda2
        UUID=8f700f65-b5c7-4afc-a6fb-8f9271e0fb5e / ext4 errors=remount-ro 0 1
        # /dev/sda3
        UUID=f0d688b7-22bd-4fa7-bc1b-a594af2933fa /home ext4 defaults 0 2
        # /dev/sda5
        UUID=3bc2399b-5deb-4f04-924b-d4fc77491997 none swap sw 0 0
        # /dev/sda6
        UUID=F2DE-BC47 /mnt/shared_data exfat defaults 0 3

        ~~ /etc/mtab
        /dev/sda2 / ext4 rw,errors=remount-ro 0 0
        proc /proc proc rw,noexec,nosuid,nodev 0 0
        sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
        none /sys/fs/fuse/connections fusectl rw 0 0
        none /sys/kernel/debug debugfs rw 0 0
        none /sys/kernel/security securityfs rw 0 0
        udev /dev devtmpfs rw,mode=0755 0 0
        devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=0620 0 0
        tmpfs /run tmpfs rw,noexec,nosuid,size=10%,mode=0755 0 0
        none /run/lock tmpfs rw,noexec,nosuid,nodev,size=5242880 0 0
        none /run/shm tmpfs rw,nosuid,nodev 0 0
        /dev/sda3 /home ext4 rw 0 0
        /dev/sda6 /mnt/shared_data fuseblk rw,nosuid,nodev,allow_other,blksize=4096 0 0
        binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,noexec,nosuid,nodev 0 0
        gvfs-fuse-daemon /home/matt/.gvfs fuse.gvfs-fuse-daemon rw,nosuid,nodev,user=matt 0 0
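
    One explanation that fits the symptoms (an editorial note, not something established in the question): Windows 8's Fast Startup feature hibernates the kernel instead of doing a full shutdown, so at the next boot Windows restores the cached filesystem state it had at "shutdown" and ignores changes made from Ubuntu in between. If that is the cause, disabling hibernation/fast startup from an elevated Windows command prompt avoids the stale view:

        powercfg /h off

    Alternatively, fast startup can be unticked in the Windows power options ("Choose what the power buttons do").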

    Read the article

  • How to run KDM or GDM over ssh

    - by Xolve
    I have a computer on the LAN running ssh. I can normally tunnel a single GUI application using:

        ssh computer-name -X program-name

    But I want my full desktop to be running on the remote computer over ssh, so that I can use that computer remotely like a local desktop. For this I think I will need to run KDM (or GDM) remotely - what configuration do I need to make this happen?
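
    Strictly speaking the display manager isn't required for this; two common routes, sketched here as assumptions rather than a full recipe:

        # 1. Start a whole session over SSH X forwarding (no KDM/GDM involved)
        ssh -X computer-name startkde        # or gnome-session

        # 2. Use XDMCP: enable it in the remote kdmrc/gdm config, then run a
        #    nested X server locally that asks the remote KDM/GDM for a login screen
        Xephyr :1 -query computer-name -screen 1280x800

    Over an untrusted network the XDMCP route should be tunnelled or avoided, since the protocol itself is unencrypted.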

    Read the article

  • How do I burn Xubuntu Live CD

    - by Julian
    I downloaded the 600+ MB Xubuntu ISO and burnt it to a DVD using Nero Burning ROM, as a bootable DVD. My boot sequence won't detect Xubuntu and still only detects Windows on my hard drive, even after I set my BIOS to boot from the CD-ROM first. How do I burn the Live CD with Nero? I'm thinking maybe I should extract the contents and then burn the folder as data to my DVD. P.S.: I only have DVDs lying around.
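
    Two quick checks worth making (editorial note, not part of the original question): the ISO has to be written with Nero's image-burning function (usually labelled "Burn Image"), not added as a file to a data compilation, and the download itself can be verified before burning, for example on a Linux machine or with a Windows MD5 tool:

        md5sum xubuntu-*.iso   # compare against the checksums published on the release page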

    Read the article

  • Disabling networkmanager for a specific interface

    - by bdonlan
    I'd like to do some experimentation with hostap without disabling my primary wireless interface. How do I tell networkmanager to keep its hands off a specific interface or interfaces while allowing it to continue managing all other interfaces normally? I'm using Ubuntu 9.04. (Wasn't sure if this should go on superuser or serverfault, as networkmanager isn't much of a 'server' tool - if it belongs on serverfault please feel free to move it.)

    Edit: I've tried adding this to /etc/network/interfaces:

        allow-hotplug wlan2
        iface wlan2 inet static
            address 192.168.49.1
            netmask 255.255.255.0

    But this has no apparent effect, even after restarting NetworkManager. Here's my /etc/NetworkManager/nm-system-settings.conf:

        [main]
        plugins=ifupdown,keyfile

        [ifupdown]
        managed=false

    Edit[2]: Looks like I needed to restart nm-system-settings, then NetworkManager.
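
    On later NetworkManager versions the same thing can be done without the ifupdown plugin at all; a sketch (the MAC address is a placeholder) using the keyfile plugin's unmanaged-devices option:

        # /etc/NetworkManager/NetworkManager.conf (nm-system-settings.conf on older releases)
        [keyfile]
        unmanaged-devices=mac:00:11:22:33:44:55

    followed by a restart of the NetworkManager service.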

    Read the article

  • DNS propagation delay or bad configuration?

    - by Javier Martinez
    I have been waiting for DNS propagation for almost 24 hours. I'm not impatient, but I want to know whether I configured my zone correctly or there is an error in it. I think it is good, because if I use my server's DNS as my secondary DNS I can resolve and look up hosts fine.

        ;
        ; BIND data file for mydomain.net
        ;
        $TTL    86400
        @       IN      SOA     mydomain.net. mydomain.net. (
                             20120629         ; Serial
                                10800         ; Refresh 3 hours
                                 3600         ; Retry 1 hour
                               604800         ; Expire 1 week
                                86400 )       ; Negative Cache TTL
        ;
        @       IN      NS      ns1
        @       IN      NS      ns2
                IN      MX      10 mail
        ns1     IN      A       5.39.X.Y
        ns2     IN      A       5.39.X.Z

    There are no errors about the bind daemon in /var/log/syslog. Is everything correct? Do I only need to wait up to 48 hours for the DNS propagation? Here is my nslookup from a remote machine, using the bind host as the nameserver:

        $ nslookup mydomain.net
        Server:   bind-host-ip
        Address:  bind-host-ip#53

        Name:     mydomain.net
        Address:  domain-ip
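
    The zone itself can be checked independently of propagation with a couple of dig queries run from an outside host (mydomain.net and the ns1/ns2 names stand in for the real ones):

        # does the authoritative server answer for the zone?
        dig @ns1.mydomain.net mydomain.net SOA +norecurse

        # does the parent (.net) delegation actually point at ns1/ns2 yet?
        dig @a.gtld-servers.net mydomain.net NS

    If the second query does not return the expected NS records, the delay is at the registrar/parent zone rather than in the BIND configuration.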

    Read the article

  • MySQL - allow connection from remote machine as root user

    - by Senthil Kumar
    Hi all, When I installed MySQL server in Windows, there was an option "Allow root connection from remote machine". I checked that option and I had no probs when using it. I installed MySQL server in Ubuntu 9.04 using apt-get install. I can connect to the sql server from the same machine but when I try to connect from a virtual machine, it doesn't work. My guess is that I should allow root connection from remote machine. How to do that?
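
    For MySQL on Ubuntu two things usually have to change; a hedged sketch (the password and address range are placeholders, and opening root to remote hosts is a security trade-off):

        # 1. In /etc/mysql/my.cnf, let mysqld listen on more than 127.0.0.1,
        #    e.g. bind-address = 0.0.0.0, then restart MySQL.

        # 2. Authorize the remote account from a local mysql session:
        mysql -u root -p -e "GRANT ALL PRIVILEGES ON *.* TO 'root'@'192.168.%' IDENTIFIED BY 'secret' WITH GRANT OPTION; FLUSH PRIVILEGES;"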

    Read the article

  • High level command line program for burning CDs and DVDs?

    - by stickmangumby
    I'm sick of screwing around trying to script a clean solution to burn multiple files and folders to CDs and DVDs with wodim, growisofs and genisoimage. I'm looking for a high level command line program that uses sensible defaults and takes arguments something like this: [program-name] [cd|dvd] /path/to/dir1/ /path/to/dir2/ /path/to/file ... It should then do all the low level copying and ISO generation transparently and just burn the damn disk! Does anyone have any suggestions? I've looked at several programs but it seems there are too many choices to trawl through and not enough information about them online. Thanks :)
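
    I don't know of a single tool with exactly that interface, but the desired behaviour is small enough to wrap yourself; a rough, untested sketch around genisoimage/wodim/growisofs (the device /dev/sr0 and the option choices are assumptions):

        #!/bin/sh
        # usage: burn.sh [cd|dvd] /path/dir1 /path/dir2 /path/file ...
        media="$1"; shift
        iso="$(mktemp --suffix=.iso)"

        # build a Joliet/Rock Ridge image from all the given paths
        genisoimage -r -J -o "$iso" "$@" || exit 1

        case "$media" in
            cd)  wodim -v dev=/dev/sr0 "$iso" ;;
            dvd) growisofs -dvd-compat -Z /dev/sr0="$iso" ;;
            *)   echo "usage: $0 [cd|dvd] paths..." >&2; exit 2 ;;
        esac

        rm -f "$iso"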

    Read the article

  • Basic OpenVPN setup not working

    - by WalterJ89
    I am attempting to connect 2 Win7 (x64 + x32) computers (there will be 4 in total) using OpenVPN. Right now they are on the same network, but the intention is to be able to access the client remotely regardless of its location. The problem I am having is that I am unable to ping or tracert between the two computers. They seem to be on different subnets even though I have the mask set to 255.255.255.0. The server ends up as 10.8.0.1 255.255.255.252 and the client 10.8.0.6 255.255.255.252, and a third ends up as 10.8.0.10. I don't know if this is a Windows 7 problem or something I have wrong in my config. It's a very simple setup; I'm not connecting two LANs.

    This is the server config (removed all the extra lines because it was too ugly):

        port 1194
        proto udp
        dev tun
        ca keys/ca.crt
        cert keys/server.crt
        key keys/server.key  # This file should be kept secret
        dh keys/dh1024.pem
        server 10.8.0.0 255.255.255.0
        ifconfig-pool-persist ipp.txt
        client-to-client
        duplicate-cn
        keepalive 10 120
        comp-lzo
        persist-key
        persist-tun
        status openvpn-status.log
        verb 6

    This is the client config:

        client
        dev tun
        proto udp
        remote thisdomainis.random.com 1194
        resolv-retry infinite
        nobind
        persist-key
        persist-tun
        ca keys/ca.crt
        cert keys/client.crt
        key keys/client.key
        ns-cert-type server
        comp-lzo
        verb 6

    Is there anything I missed in this? The keys are all correct and the VPNs connect fine; it's just the subnet or route issue. Thank you.

    EDIT: it seems the openvpn-status.log on the SERVER has the routes for the client:

        OpenVPN CLIENT LIST
        Updated,Wed May 19 18:26:32 2010
        Common Name,Real Address,Bytes Received,Bytes Sent,Connected Since
        client,192.168.10.102:50517,19157,20208,Wed May 19 17:38:25 2010
        ROUTING TABLE
        Virtual Address,Common Name,Real Address,Last Ref
        10.8.0.6,client,192.168.10.102:50517,Wed May 19 17:38:56 2010
        GLOBAL STATS
        Max bcast/mcast queue length,0
        END

    Also, this is from the client.log file, which seems to be correct:

        C:\WINDOWS\system32\route.exe ADD 10.8.0.0 MASK 255.255.255.0 10.8.0.5

    Another EDIT: 'route print' on the server shows the route:

        Destination    Mask             Gateway     Interface
        10.8.0.0       255.255.255.0    10.8.0.2    10.8.0.1

    The same on the client shows:

        10.8.0.0       255.255.255.0    10.8.0.5    10.8.0.6

    So the routes are there. What can the problem be? Is there anything wrong with my configs? Why would OpenVPN be having problems communicating?
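
    Two hedged editorial notes: the /30 addresses (255.255.255.252 per client) are OpenVPN's default net30 topology for older Windows TAP drivers rather than a misconfiguration by itself, and Windows 7 typically classifies the TAP adapter as an "Unidentified network" and blocks inbound ICMP by default. A sketch of the server-side change that puts every client in one shared subnet:

        # server.conf - one flat subnet instead of per-client /30s
        topology subnet
        server 10.8.0.0 255.255.255.0
        client-to-client

    (topology subnet needs OpenVPN 2.1 or newer on both ends.) If pings still fail, temporarily allowing ICMP through the Windows firewall on both machines, or disabling it for a test, shows whether the tunnel itself is fine.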

    Read the article

  • Out Of Memory Error - Magento

    - by robobobobo
    OK, normally I understand when my server is giving me out of memory errors, but this one has me stumped! I'm running a Magento based site, with one or two plugins in it, and the rest is pretty basic. The site runs and loads fine with no issues. However, in the backend under Configuration - Payment Methods it gives me the following out of memory error:

        Fatal error: Out of memory (allocated 39059456) (tried to allocate 85 bytes) in ########/Varien/Simplexml/Element.php on line 84

    Now this is where I'm confused: it's allocated more than it tried to allocate? Am I correct there? So how is it running out of memory? My server has 6GB RAM, an SSD and 2 CPUs running WHM, with a few other low-traffic sites on it. I set my PHP memory limit to 100MB, 1000MB and finally unlimited, but all to no avail! I'm completely lost here; would really appreciate some expertise on this. Cheers
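
    A hedged observation: PHP reports "Out of memory" (rather than "Allowed memory size of N bytes exhausted") when the allocation fails at the OS or process level, so memory_limit is probably not what is being hit; per-process caps such as Apache's RLimitMEM, suPHP/CGI limits, or a shell ulimit imposed by the control panel are the usual candidates. Quick checks (paths and the web-server user are assumptions; on a WHM box the Apache config usually lives under /usr/local/apache/conf):

        # is a per-process memory cap configured anywhere in the Apache tree?
        grep -Ri 'RLimitMEM' /etc/apache2 /usr/local/apache/conf 2>/dev/null

        # what limits does the web server's user actually run under?
        su - nobody -s /bin/bash -c 'ulimit -a'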

    Read the article

  • VirtualBox - multiple guests, each with a single bridged adapter?

    - by Martin
    I am running a dedicated server (located at Hetzner, Germany) that runs VirtualBox in order to virtualize several services across multiple virtual guests. Those guests are supposed to:

        (1) communicate with each other (for instance, a virtual web server has to access a virtual database server);
        (2) be reachable from the dedicated server (for instance, SSH access); and
        (3) access the Internet via the dedicated server (for instance, to download security updates).

    Currently, this is achieved by having host-only adapter vboxnet0 on the dedicated server and two virtual interfaces on each guest. There, virtual adapter eth0 is attached to vboxnet0 (to achieve (1) and (2)), and virtual adapter eth1 is attached to VirtualBox' NAT (to achieve (3)). Via eth0, the guests have access to a DHCP and a DNS server, both running on the dedicated server (there, bound to vboxnet0). This allows me to assign custom IP addresses and names. Via eth1, VirtualBox pushes a proper route that enables each guest to access the Internet (via eth0 on the dedicated server).

    This setup with two virtual adapters frequently leads to problems and at least complicates many things. For instance, on the dedicated server there is OpenVPN, which allows access to the virtual machines via the Internet; furthermore, there is Shorewall, which controls the incoming and outgoing network traffic between the Internet, the dedicated server, and the individual virtual machines. Not to mention automatic installation of servers via PXE... Therefore, I would prefer to have only one single virtual adapter on each guest which would be used for both incoming and outgoing connections.

    As far as I understand, one would basically use a bridged interface for that very purpose. Now the question arises: which interface on the dedicated server would the bridge use? eth0 on the host server is not an option, as this is prohibited by the provider. A virtual interface eth0:0 would not make any sense, as a bridge always uses a physical interface (eth0 in this case). Would it be possible to create a bridged interface in each virtual machine that would "dangle in the air", thus without a complement on the dedicated server? How would I have to set up the routing on the host server? Please note that the host / dedicated server has only one network adapter (eth0), which is connected to the provider's network.

    Regards, Martin
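
    One way to get down to a single guest adapter without bridging eth0 (an editorial sketch under the stated Hetzner constraint; the addresses are illustrative): keep every guest on the host-only network vboxnet0 and let the host route and NAT for them, so the same interface carries (1), (2) and (3).

        # on the dedicated server
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A POSTROUTING -s 192.168.56.0/24 -o eth0 -j MASQUERADE

    Each guest then uses the host's vboxnet0 address as its default gateway. With Shorewall already in place, the equivalent masquerading and policy entries would go into its configuration instead of raw iptables.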

    Read the article

  • My new Intel X25-M G2 and the alignment thingy

    - by Oli
    I just bought a new SSD for my laptop, which is going to be a server running ArchLinux with grub2, GPT and btrfs. My layout should look like this:

        (grub-partition?)
        /boot   ext2    75MB
        /       btrfs   15GB
        /home   btrfs   remaining

    What do I need to do to create these partitions in a correctly aligned fashion using parted? Do I need to consider alignment when formatting each partition with the desired file system?
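
    A minimal parted sketch (the device name and sizes follow the question; the BIOS-boot partition size is an assumption): starting every partition on a MiB boundary keeps it aligned for common SSD erase-block sizes, and mkfs needs no extra alignment flags if the partitions themselves are aligned.

        parted -s -a optimal /dev/sda -- \
            mklabel gpt \
            mkpart grub ext2 1MiB 3MiB \
            set 1 bios_grub on \
            mkpart boot ext2 3MiB 80MiB \
            mkpart root btrfs 80MiB 15GiB \
            mkpart home btrfs 15GiB 100%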

    Read the article

  • Amarok 2.1.1 Does not go to the next song

    - by nigative
    Hi, I just updated Amarok from the KDE3 version and it looks a bit weird and different. But my problem is that after I updated my music collection and started to play it (it is all loaded into my playlist), Amarok doesn't start the next song after the current song is finished. I have repeat (repeat playlist) and random (random tracks) options enabled. Thanks.

    Read the article

  • Postfix how to triggering my script when outgoing email status is sent?

    - by Laszlo Malina
    I want to run a program when Postfix has successfully sent out a mail (local or remote). I would like to pass the headers to the program and, if possible, also the destination IP or address (excluding spam-filter delivery). One idea I have is Delivery Status Notification processing via a unique transport program, but I'd prefer the above. My goal is to record the lifetime (events) of each email: when it came in, when it went out (from, to, subject, datetime, message id, and message status: bounced or sent). I only need the state of the outgoing mail, because the incoming and bounce handling already works. Is it possible to trigger a program (similar to a transport pipe/spawn), or should I stay with the DSN "cheat"? Thanks in advance for any reply!
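
    Postfix has no built-in "message delivered" hook, so the usual approaches are either a dedicated pipe transport or watching the queue log for status=sent records. A log-watching sketch (the log path, field layout and the recorder script are assumptions that vary by distribution):

        #!/bin/bash
        # react to every successfully delivered message as it is logged
        tail -F /var/log/mail.log | grep --line-buffered 'status=sent' | \
        while read -r line; do
            queue_id=$(echo "$line" | sed -n 's/.*postfix\/[a-z]*\[[0-9]*\]: \([A-F0-9]*\):.*/\1/p')
            /usr/local/bin/record-delivery "$queue_id" "$line"   # hypothetical recorder script
        done

    The queue id can then be matched against the headers captured at submission time (for example via header_checks or a milter) to stitch the full lifetime of the message together.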

    Read the article
