Search Results

Search found 26263 results on 1051 pages for 'linux guest'.


  • freeradius maximum session time problem

    - by haw3d
    Hello, I'm using OpenVPN with FreeRADIUS to control user accounts. To limit a user's maximum session time, FreeRADIUS provides sqlcounter.conf, but the counter is only applied after a connection has ended; it cannot tear down a session that is already established. To control account time dynamically I need a separate script, and it would have to run every time a connection is established. Is there any way to fire a custom trigger or script when a connection is established, or any other way to control session time dynamically?
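
    One possible approach (an assumption on my part, not something FreeRADIUS itself provides): OpenVPN can run a hook script every time a client connects or disconnects, and that hook could drive the dynamic accounting. A minimal sketch, with hypothetical script paths:

        # /etc/openvpn/server.conf
        script-security 2
        client-connect    /etc/openvpn/on-connect.sh      # runs whenever a client connects
        client-disconnect /etc/openvpn/on-disconnect.sh   # runs when the client disconnects

        #!/bin/sh
        # /etc/openvpn/on-connect.sh - OpenVPN exports the client's name as $common_name
        logger "VPN session started for ${common_name}"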

    Read the article

  • OpenSSL 0.9.8k or higher on CentOS 5?

    - by davr
    I need to upgrade OpenSSL on my CentOS server to 0.9.8k or higher, however the latest version in the official CentOS repositories is 0.9.8e, which is much too old. Is there a 3rd-party repository I can use that has newer versions of the OpenSSL libraries? If not, can someone provide a quick walkthrough of compiling a newer version of OpenSSL for CentOS? I need it to replace the built-in version, so the walkthrough would have to explain how to create a CentOS-compatible RPM.
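
    If no repository turns up, a from-source build is the fallback; turning that into a CentOS-compatible RPM additionally needs a spec file (adapting the spec from the distribution's openssl source RPM is one route). A minimal source-build sketch, with the version and paths as examples only:

        wget http://www.openssl.org/source/openssl-0.9.8k.tar.gz
        tar xzf openssl-0.9.8k.tar.gz && cd openssl-0.9.8k
        ./config --prefix=/usr --openssldir=/etc/pki/tls shared   # mirror CentOS's layout
        make && make test
        sudo make install

    Note that overwriting the distribution's 0.9.8e this way can break packages built against it, which is exactly why the RPM route is worth the extra effort.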

    Read the article

  • Amazon EC2 tools for Debian?

    - by Jonik
    What is the recommended way of getting command-line Amazon EC2 tools on Debian? So, basically the same as this question, but for EC2 instead of S3. Ubuntu has ec2-ami-tools and ec2-api-tools, but I couldn't find equivalent packages for Debian. A blog post titled "Install EC2 AMI & API tools in Debian" talks about installing Amazon's packages outside package management, but that seems a little clumsy.
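
    Failing a proper Debian package, the common workaround is Amazon's own zip distribution of the API tools unpacked under /usr/local and wired up with environment variables; the paths and version below are examples only:

        sudo apt-get install unzip default-jre       # the tools are Java based
        sudo unzip ec2-api-tools.zip -d /usr/local/
        export EC2_HOME=/usr/local/ec2-api-tools-1.3.x
        export PATH=$PATH:$EC2_HOME/bin
        export JAVA_HOME=/usr/lib/jvm/default-java
        export EC2_PRIVATE_KEY=~/.ec2/pk-XXXXXXXX.pem
        export EC2_CERT=~/.ec2/cert-XXXXXXXX.pem
        ec2-describe-regions                          # quick smoke test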

    Read the article

  • Install grub on 2nd hard drive

    - by jldupont
    I have 2 HDs in my machine: drive 1 has GRUB and my Windows XP OS, drive 2 has only Ubuntu 9.04. I would like to be able to boot directly from drive 2, but GRUB is missing on drive 2; how do I add it? EDIT: I ended up reinstalling the whole OS.
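
    For the record, a reinstall shouldn't be necessary: Ubuntu 9.04 uses GRUB legacy, and it can be written to the second drive's MBR from the running system. A sketch, assuming the second disk shows up as /dev/sdb and Ubuntu's /boot lives on it:

        sudo grub-install /dev/sdb     # write GRUB legacy to the second drive's MBR
        sudo update-grub               # regenerate /boot/grub/menu.lst

    If grub-install maps the disks differently than the BIOS does, check /boot/grub/device.map first.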

    Read the article

  • Does Ubuntu Server have any sort of cron job to automatically clear /tmp?

    - by DWilliams
    I know it clears out /tmp on reboots, but I haven't been able to find any sort of cron job on my server that clears /tmp. I recently set up a script that writes lots of files to /tmp and my server usually goes several months between reboots so I'm concerned about it being cluttered. I've seen several other distros that have a tmpwatch script installed by default. Ubuntu's repository seems to have replaced tmpwatch with tmpreaper. Is there any mechanism in place on Ubuntu (8.04 currently, soon to be upgraded to 10.04 when I get around to it) to clean up temp files on a server that doesn't regularly reboot or do I need to install tmpreaper?
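
    As far as I can tell a stock Ubuntu server install has no such cron job; /tmp is only cleaned at boot (the TMPTIME setting in /etc/default/rcS controls how old a file must be to be removed then). For a box that rarely reboots, tmpreaper is the usual answer; a sketch of the setup, with the retention period chosen arbitrarily:

        sudo apt-get install tmpreaper

        # /etc/tmpreaper.conf (read by the daily cron job the package installs)
        TMPREAPER_TIME=7d        # remove files not accessed for 7 days
        TMPREAPER_DIRS='/tmp/.'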

    Read the article

  • smtpd_helo_restrictions = ..., reject_unknown_helo_hostname occasionally rejects mail I care about, how to handle?

    - by lkraav
    I have configured my postfix as follows: smtpd_helo_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unknown_helo_hostname This is working well because most spambots don't seem to have correct reverse lookups. But every once in a while I run into mail I care about getting rejected, because the admin of the sending server doesn't care about configuring it correctly. For example, here the server introduces itself as "srv1.xbmc.org", which has no DNS record and so fails my basic check.

    Jan 6 04:42:36 mail postfix/smtpd[660]: connect from xbmc.org[205.251.128.242]
    Jan 6 04:42:37 mail postfix/smtpd[660]: NOQUEUE: reject: RCPT from xbmc.org[205.251.128.242]: 450 4.7.1 <srv1.xbmc.org>: Helo command rejected: Host not found; from=<[email protected]> to=<[email protected]> proto=ESMTP helo=<srv1.xbmc.org>

    I have tried to contact the server admin several times, but there is no response. What is the optimal way to handle this from my side? Is adding these "special" hosts to mynetworks = my only option? Is perhaps my whole smtpd_helo_restrictions setup wrong in some significant way?
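
    A standard Postfix middle ground, rather than widening mynetworks =, is a check_helo_access whitelist evaluated before the reject rule, so only the known-broken hosts get a pass. A sketch:

        # main.cf
        smtpd_helo_restrictions =
            permit_sasl_authenticated,
            permit_mynetworks,
            check_helo_access hash:/etc/postfix/helo_access,
            reject_unknown_helo_hostname

        # /etc/postfix/helo_access
        srv1.xbmc.org   OK

        # activate the map
        postmap /etc/postfix/helo_access && postfix reload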

    Read the article

  • Kernel Memory Leak in Ubuntu 9.10?

    - by kayahr
    After some days of work (using suspend-to-RAM during the night) I notice I lose more and more available memory. Even when I close all applications the situation doesn't improve. I even went down to the command line and closed ALL running processes except the init process and the bash I'm working in. I unmounted all the RAM disks Ubuntu uses, and I even unloaded all modules that could be unloaded. But still "free" tells me that 1 GB of RAM is used (without buffers/cache). In "top" there is no visible process which occupies all this memory. The only way to free the memory is restarting the machine. How can I find out where I lose all this memory? Is there a known "suspect" that can cause a problem like this? I'm using Ubuntu 9.10 64 bit on a Dell Latitude E6500 (4 GB RAM) with the latest closed-source nvidia driver and Gnome with Compiz. The applications I use most of the time are Firefox and Eclipse. Any hints on how I can find the problem? I'm not a kernel hacker, so if the solution is patching the kernel or something like that then I might be out of the game...
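
    Two generic checks that can narrow this down, since top only shows process memory and not what the kernel itself holds: confirm the memory isn't just sitting in caches, and if it is kernel memory, see which slab cache owns it.

        free -m                          # the "-/+ buffers/cache" line is the real usage
        grep -i slab /proc/meminfo       # total kernel slab usage
        sudo slabtop -s c                # slab caches sorted by size
        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches   # drop clean caches, then re-check free

    If slab usage stays high after dropping caches, the name of the growing cache in slabtop usually points at the responsible driver or subsystem (the closed-source nvidia driver would be a natural first suspect here).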

    Read the article

  • System information shown when booting Debian

    - by WebDevHobo
    When booting Debian, you'll see it printing a lot of information about the system, variables and such. I don't really need to see all that, so I'd like to modify some scripts to make sure that on boot it just does what it has to do, without printing it all on the screen. Just something I fancy. Of course, still seeing errors would be nice, but that long stream of text I could do without. I've tried looking it up, but I can't find documentation on this specific thing anywhere.
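
    Two separate knobs are involved on a Debian system of this era (a sketch; the kernel version string is only an example): the kernel's own messages and the init-script output.

        # /boot/grub/menu.lst - append "quiet" to the kernel line to silence kernel messages
        kernel /boot/vmlinuz-2.6.26-2-686 root=/dev/sda1 ro quiet

        # /etc/default/rcS - reduce the init scripts' chatter
        VERBOSE=no

    Errors and warnings still reach the console with these settings; only the routine startup output is suppressed.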

    Read the article

  • Triple monitor setting in Linux with USB-HDMI adapter

    - by Oscar Carballal
    I'm trying to set up a triple-monitor desktop at my office using Fedora 17, but it seems impossible. Let me explain the setup:

    Laptop ASUS K53SD with 2 graphics cards, Intel and nVidia (laptop screen controlled by the Intel card)
    24" Full HD monitor connected to the HDMI output (controlled by the Intel card)
    23" Full HD monitor connected to a USB-HDMI adapter (via framebuffer in /dev/fb2, apparently)
    VGA output (not used) controlled by the nVidia card

    First of all, the USB-HDMI adapter works perfectly: it gives me a green screen (which means the communication is OK) and I can make it work if I set up a single-monitor configuration via framebuffer in Xorg. Here is the page where I got the instructions: http://plugable.com/2011/12/23/usb-graphics-and-linux

    Now I'm trying to set up the two main monitors (laptop and 24") with the intel driver and the 23" with the framebuffer, but the most successful configuration I get is the two main monitors working and the third disconnected. Do you have any idea what I can do to make this work? Here is my xrandr output and my Xorg conf:

    -> xrandr
    Screen 0: minimum 320 x 200, current 3286 x 1080, maximum 8192 x 8192
    LVDS1 connected 1366x768+0+0 (normal left inverted right x axis y axis) 344mm x 193mm
       1366x768       60.0*+
       1024x768       60.0
       800x600        60.3     56.2
       640x480        59.9
    VGA2 disconnected (normal left inverted right x axis y axis)
    HDMI1 connected 1920x1080+1366+0 (normal left inverted right x axis y axis) 531mm x 299mm
       1920x1080      60.0*+   50.0     25.0     30.0
       1680x1050      59.9
       1680x945       60.0
       1400x1050      74.9     59.9
       1600x900       60.0
       1280x1024      75.0     60.0
       1440x900       75.0     59.9
       1280x960       60.0
       1366x768       60.0
       1360x768       60.0
       1280x800       74.9     59.9
       1152x864       75.0
       1280x768       74.9     60.0
       1280x720       50.0     60.0
       1440x576       25.0
       1024x768       75.1     70.1     60.0
       1440x480       30.0
       1024x576       60.0
       832x624        74.6
       800x600        72.2     75.0     60.3     56.2
       720x576        50.0
       848x480        60.0
       720x480        59.9
       640x480        72.8     75.0     66.7     60.0     59.9
       720x400        70.1
    DP1 disconnected (normal left inverted right x axis y axis)
       1920x1080_60.00   60.0

    The Xorg file:

    # Xorg configuration file for using a tri-head display
    Section "ServerLayout"
        Identifier  "Layout0"
        Screen      0 "HDMI" 0 0
        Screen      1 "USB" RightOf "HDMI"
        Option      "Xinerama" "on"
    EndSection

    ########### MONITORS ################
    Section "Monitor"
        Identifier  "USB1"
        VendorName  "Unknown"
        ModelName   "Acer 24as"
        Option      "DPMS"
    EndSection

    Section "Monitor"
        Identifier  "HDMI1"
        VendorName  "Unknown"
        ModelName   "Acer 23SH"
        Option      "DPMS"
    EndSection

    ########### DEVICES ##################
    Section "Device"
        Identifier  "Device 0"
        Driver      "intel"
        BoardName   "GeForce"
        BusID       "PCI:0:02:0"
        Screen      0
    EndSection

    Section "Device"
        Identifier  "USB Device 0"
        Driver      "fbdev"
        Option      "fbdev" "/dev/fb2"
        Option      "ShadowFB" "off"
    EndSection

    ############## SCREENS ######################
    Section "Screen"
        Identifier  "HDMI"
        Device      "Device 0"
        Monitor     "HDMI1"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Section "Screen"
        Identifier  "USB"
        Device      "USB Device 0"
        Monitor     "USB1"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
        EndSubSection
    EndSection

    Read the article

  • Unattended Kickstart Install

    - by Eric
    I've looked around quite a bit and have seen similar setups and questions, but none seem to work for me. I'm using the following command to create a custom ISO:

    /usr/bin/livecd-creator --config=/usr/share/livecd-tools/test.ks --fslabel=TestAppliance --cache=/var/cache/live

    This works great and it creates the ISO with all of the packages and configs I want on it. My issue is that I want the install to be unattended. However, every time I start the CD, it asks for all of the info such as keyboard, time zone, root password, etc. These are the basic settings I have in my kickstart script prior to the packages section:

    cdrom
    install
    autopart
    autostep
    xconfig --startxonboot
    rootpw testpassword
    lang en_US.UTF-8
    keyboard us
    timezone --utc America/New_York
    auth --useshadow --enablemd5
    selinux --disabled
    services --enabled=iptables,rsyslog,sshd,ntpd,NetworkManager,network --disabled=sendmail,cups,firstboot,ip6tables
    clearpart --all

    So after looking around, I was told that I need to modify my isolinux.cfg file to use either "ks=http://X.X.X.X/location/to/test.ks" or "ks=cdrom:/test.ks". I've tried both methods and it still forces me to go through the install process. When I tail the apache logs on the server, I see that the ISO never even tries to fetch the file. Below is the exact syntax I'm trying in my isolinux.cfg file:

    label http
      menu label HTTP
      kernel vmlinuz0
      append initrd=initrd0.img ks=http://192.168.56.101/files/test.ks ksdevice=eth0
    label localks
      menu label LocalKS
      kernel vmlinuz0
      append initrd=initrd0.img ks=cdrom:/test.ks
    label install0
      menu label Install
      kernel vmlinuz0
      append initrd=initrd0.img root=live:CDLABEL=PerimeterAppliance rootfstype=auto ro liveimg liveinst noswap rd_NO_LUKS rd_NO_MD rd_NO_DM
      menu default
    EOF_boot_menu

    The first 2 give me a "dracut: fatal: no or empty root=" error until I give them a root= option, and then they just skip the kickstart completely. The last one is my default option that works fine, but it requires a lot of user input. Any help would be greatly appreciated.

    Read the article

  • How do I burn Xubuntu Live CD

    - by Julian
    I downloaded the 600+ MB Xubuntu ISO and burnt it to a DVD using Nero Burning ROM as a bootable DVD. My boot sequence won't detect Xubuntu and still only detects Windows on my hard drive, even after I set my BIOS to boot from the CD-ROM first. How do I burn the Live CD with Nero? I'm thinking maybe I should extract the contents and then burn the folder as data to my DVD. P.S.: I only have DVDs lying around.

    Read the article

  • MySQL Master - Master Broken

    - by Recc
    I've inherited a MySQL master-master system, and I've noticed the second master (let's call it "slave" from now on, as it's running on a slave machine) stopped getting its databases updated. I saw that:

    Master:
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes

    Slave: (with an error I truncated)
    Slave_IO_Running: Yes
    Slave_SQL_Running: No
    Last_Errno: 1062
    Last_Error: Error 'Duplicate entry '3' for key 'PRIMARY'' on [...]

    I don't know what caused it, considering we can't get a duplicate there. What's important is to resume normal operations. Right now I've run stop slave; on the master and stop slave; on the slave, because I saw that if I change records on the slave the changes do get propagated to the master, which is in active use. How do I: force-sync EVERYTHING from master to slave without affecting data on master, and then hopefully have the slave pick up replication as usual?

    UPDATE: OK, I tried deleting all tables on the slave, and then it complained in that error section that the table doesn't exist. So I made a no-data dump of the master and made sure I have only empty tables on the secondary (slave). I ran start slave; on the slave, BUT now it's complaining about ALTER TABLE statements, for instance:

    Last_Errno: 1060
    Last_Error: Error 'Duplicate column name [...] Query: 'ALTER TABLE [...]

    How do I skip these ALTER statements? I just want to replicate the data and be done with it; my tables already have the latest changes, and now it's complaining about changes made after the replication stopped weeks ago. How do I reset the log or something?

    OUTSTANDING: Why would this start happening? The "secondary" is propagating to the "primary", but the "primary" is not propagating to the "secondary", and any fixes I tried left it in the same Yes-Yes / Yes-No state with the same Last_Error. I think around that time the server was taken off the network; could that confuse MySQL in some way?
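
    Two standard MySQL mechanisms cover this situation (sketches only, not tuned to this schema). To step past individual offending statements on the slave:

        STOP SLAVE;
        SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;   -- skip the statement behind Last_Errno
        START SLAVE;

    And to force-sync everything, re-seed the slave from a dump that records the master's binlog coordinates, so replication restarts from a clean position:

        # on the master (--single-transaction assumes InnoDB tables)
        mysqldump -u root -p --all-databases --master-data=1 --single-transaction > dump.sql
        # on the slave
        mysql -u root -p < dump.sql
        mysql -u root -p -e "START SLAVE;"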

    Read the article

  • Permissions nightmare - tried all I know

    - by Ben
    Working on a new client's dev site, which is a WordPress install on a Plesk box. I have SSH root access, and FTP access through a separate account. What I've done so far: initially I couldn't make any changes to any files at all. The permissions on all the template files looked a little screwy (644), so I figured I'd change them to allow group access and add myself to the group: chmod recursively on the theme folder to set everything to 664; I quickly realised I'd broken it, so I set the folders back to 755 and kept the files at 664. Ownership on all files is a mixture of root:root and 500:500 (there is no user nor group with the ID of 500 on the server). I added myself to the group 'root' so I could modify the files too.

    The problem: this worked OK in terms of being able to edit the existing files, so I began working. However, I can't upload to the directory, even having run chown -R root:root templatefolder/ and being in the root group. I feel like I must be missing something obvious, and it's doing my head in.

    Questions: Files in the install are owned by user 500 with group 500, and I've looked in /etc/group and /etc/passwd and there is no user nor group with this ID. Is that left over from another developer's setup or the previous server (they moved recently)? Is being in the 'root' group enough, or do I need to own the theme folder as 'myftpuser' in order to upload and create new files? Like I say, I have edit access, so I got myself this far. I'm now questioning what to do next!
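
    On the group question: membership is enough to edit files that are group-writable, but creating files also requires group write permission on the directory itself. One common arrangement (a sketch; 'webdev' is a made-up shared group containing the FTP user) is group write plus the setgid bit so anything new inherits that group:

        sudo groupadd webdev
        sudo usermod -a -G webdev myftpuser
        sudo chown -R root:webdev templatefolder/
        sudo chmod -R g+rwX templatefolder/
        sudo find templatefolder/ -type d -exec chmod g+s {} \;   # new files inherit the group

    The orphaned 500:500 ownership is almost certainly a leftover UID/GID from the previous server's users; a chown like the above replaces it.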

    Read the article

  • Application (was Firefox) crash on first load on Ubuntu Linux on older Dell Laptop

    - by Ira Baxter
    I've had a Dell Latitude laptop since about 2000 without managing to destroy it. A month ago the Windows 2000 system on it did something stupid to its file system and Windows was completely lost. There was no point in reinstalling Windows 2000, so I installed Ubuntu Linux on the laptop. Everything seems normal (installed, rebooted, I can log in, run GnuChess, poke about). ... but ... when I attempt to launch Firefox from the top bar menu icon, I get a bunch of disk activity, the whirling cursor icon goes round a bit and then (WAS: everything stops: icon, mouse. Literally nothing happens for 5 minutes. Ubuntu is dead, as far as I can tell. EDIT: on further investigation, the spinning icon and the mouse operated by the touchpad freeze. There's apparently a little disk activity occurring about every 5 seconds. I wait 5-10 minutes, and the behavior doesn't change.) A reboot, and I can repeat this reliably. So on the face of it, everything works but Firefox. That seems really strange. The only odd thing about this system when Firefox is starting is that while it has an Ethernet port (which worked fine under Windows), it isn't actually plugged into an Ethernet. As this is the first Firefox start since the Ubuntu install, maybe Firefox mishandles Internet access? Why would that crash Ubuntu? (I need to go try the obvious experiment of plugging it in.) EDIT: I tried to run the Disk Manager tool, not that I cared what it was, just a menu-available application. It started up like Firefox, I got a little tag in the lower left saying Disk P*** something had started, and then the same behavior as Firefox. At this point, I don't think it's the Ethernet. Is it possible that the Ubuntu disk driver can't handle the disk controller in this older laptop? The install seemed to go fine.

    Read the article

  • How to mount remote samba share from local host with multiple groups?

    - by Dragos
    I am using mount.cifs to mount a remote samba share (both client and server are Ubuntu server 8.04) like this:

    mount.cifs //sambaserver/samba /mountpath -o credentials=/path/.credentials,uid=someuser,gid=1000

    $ cat .credentials
    username=user
    password=password

    I mount it as a local user via mount.cifs with a username and password, but the problem is that the user is part of multiple groups on the remote system, and with mount.cifs I can only specify one gid. Is there a way to specify all the gids that the remote user has? In other words, is there a way to:

    1) Mount the remote samba share with multiple groups on the local system?
    2) Browse the mount from 1) in the terminal, since I want to pass some files from samba as arguments to local programs?

    Other solutions would be nautilus sftp://, which runs through gvfs, but newer GNOME no longer writes ~/.gvfs to disk, so I can't browse it in a terminal. The last solution would be NFS, but that means I would have to synchronize the uids and gids on the local system with the ones from the server.
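
    One thing worth knowing about CIFS: the uid/gid mount options only affect how ownership is presented locally; access is actually enforced by the server using the account in the credentials file. So a workaround (assuming a reasonably recent mount.cifs) is to skip the client-side permission check entirely and let the server decide:

        mount.cifs //sambaserver/samba /mountpath \
            -o credentials=/path/.credentials,uid=someuser,gid=1000,noperm
        # noperm: the client skips permission checks; the server still enforces its own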

    Read the article

  • Default gateway is in different subnet. How to configure in RHEL6.2

    - by Dmytro Leonenko
    I have two subnets routed to my server from my ISP, but I have only one gateway IP, and the gateway is on the same VLAN as my IP address. For example, network 1 is 1.0.0.0/24 and network 2 is 2.0.0.0/24. Both are routed to eth0 by my ISP. The gateway is 1.0.0.1 and my host IP is 2.0.0.1/24 (eth0). So I can configure the default gateway manually with:

    ip route add default dev eth0
    ip route add default via 1.0.0.1

    and then the internet connection works properly. How do I configure this in /etc/sysconfig/network-scripts/ifcfg-eth0? I tried to set GATEWAY=1.0.0.1 but it doesn't work. I also tried to set GATEWAY and GATEWAYDEV in /etc/sysconfig/network, and that only does what the first command from the listing above does.
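
    On RHEL 6 the persistent equivalent of those two commands is a static-route file for the interface: first a link-scope route to the gateway, then the default route through it (a sketch using the addresses above):

        # /etc/sysconfig/network-scripts/route-eth0
        1.0.0.1/32 dev eth0
        default via 1.0.0.1 dev eth0

    Then restart networking (service network restart) and drop the GATEWAY= line from ifcfg-eth0 so the two don't conflict.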

    Read the article

  • Slight delay when switching modes in vim using tmux or screen

    - by Ton van den Heuvel
    Switching to and from insert mode in Vim is no longer instantaneous since I started using tmux. After pressing Esc in insert mode, it takes a noticeable amount of time to actually get out of insert mode. After pressing Esc and any other key afterwards, the switch is immediate and the command for the key pressed after Esc is executed. Any idea what might cause this? The Vim configuration is not the problem, as the delay does not occur when I run Vim outside tmux, so this is probably related to tmux somehow. I use gnome-terminal, by the way. Also worth noting: it seems I cannot define key bindings in tmux for Esc; my plan was to bind Esc to: bind Escape send-keys ^[ Alas, it seems binding anything to Esc in tmux does not work. The same problem occurs in screen as well.
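
    The delay is almost certainly the escape-time behaviour: tmux (and screen) wait briefly after Esc to see whether it is the start of an escape sequence before passing it through. Shortening that interval removes the lag; a sketch of both settings:

        # ~/.tmux.conf
        set -sg escape-time 10      # milliseconds; 0 also works

        # ~/.screenrc
        maptimeout 5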

    Read the article

  • Maximizing TCP connections on HAProxy load balancer

    - by imaginative
    I am currently using HAProxy in order to load balance tcp connections from clients to my Erlang app server. The connection is persistent, which means I'm limited to roughly 64K clients on an optimized server (I'm currently running HAProxy on an m1.large EC2 instance). My app server is designed to horizontally scale based on the number of TCP connections. What's worrying me though is I'll need an equal number of HAProxy servers as app servers since it's a 1:1 connection. Is there currently a way to "proxy" the tcp connection to the app server so that once HAProxy sends the client off to my Erlang server, it can free up the connection, ready to serve another client? Are there any papers, existing solutions out there I can read so that I only have to worry about the 64K limit on my app servers, and not on the load balancing servers themselves?
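
    One detail that may help: the ~64K ceiling is ephemeral ports per source-address/destination pair, not a hard per-machine limit, so a single HAProxy instance can be stretched by giving the backend extra source IPs bound to the same machine. A hedged sketch (addresses and port are placeholders):

        backend erlang_app
            mode tcp
            server app1a 10.0.0.10:5222 source 10.0.1.1
            server app1b 10.0.0.10:5222 source 10.0.1.2   # same app server, second source IP

    This doesn't remove the need to scale HAProxy eventually, but it does decouple the proxy count from a strict 1:1 ratio.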

    Read the article

  • How to run KDM or GDM over ssh

    - by Xolve
    I have a computer on the LAN running ssh. I can normally tunnel a GUI application using ssh computer-name -X program-name, but I want my full desktop to be running on the remote computer over ssh, so that I can use that computer remotely like a local desktop. For this I think I will need to run KDM (or GDM) remotely; what configuration do I need to make this happen?
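
    Running the display manager itself isn't strictly required for this; a simpler way to get a full remote desktop through ssh is to tunnel a VNC session (a sketch; display number and geometry are examples):

        # on the remote machine
        vncserver :1 -geometry 1280x800

        # on the local machine: forward the VNC port and connect
        ssh -L 5901:localhost:5901 user@remote-host
        vncviewer localhost:1

    If it really has to be KDM/GDM, the traditional route is enabling XDMCP in the display manager's config and connecting with a nested X server such as Xephyr, but note that XDMCP itself is not carried over ssh.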

    Read the article

  • Downloading multiple files with wget and handling parameters

    - by coure2011
    How can I download multiple files using wget? I also want to rename the files. Here are the commands I'm running one by one (copy/paste on terminal):

    wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720774/PS11.rar -O part11.rar
    wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812721094/PS12.rar -O part12.rar
    wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720804/PS13.rar -O part13.rar
    wget -c --load-cookies cookies.txt http://www.filesonic.com/file/812720854/PS14.rar -O part14.rar
    ........ and so on..

    What can I do to download all these files one by one?
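
    A sketch of one way to script it: keep each URL and the desired output name side by side in a plain text file and loop over it, so adding more parts only means adding lines (list.txt is a made-up name):

        # list.txt holds one "<url> <output-name>" pair per line
        while read url name; do
            wget -c --load-cookies cookies.txt "$url" -O "$name"
        done < list.txt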

    Read the article

  • How to grant read/write to specific user in any existent or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie in system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my URL is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything that's under /path/to/git, and I'm attempting to give read/write access to john for everything under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in the new directory. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think that I'm going the wrong way, because this seems too complicated for a very basic purpose.

    Q: How should I set up the rights to give rw access to john to anything under /path/to/git/myrepo and make it resilient to changes in the tree structure?
    Q2: If I should take a step back and change the general approach, please tell me.
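
    For what it's worth, setfacl does cover the "future subdirectories" case: in addition to the access ACL it can attach a default ACL to directories, and new files and subdirectories created inside them inherit it automatically. A sketch (run as root):

        setfacl -R    -m u:john:rwX /path/to/git/myrepo   # existing files and directories
        setfacl -R -d -m u:john:rwX /path/to/git/myrepo   # default ACL, inherited by new entries
        getfacl /path/to/git/myrepo                       # verify

    The capital X grants execute only on directories (and on files that are already executable), which is what traversal needs.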

    Read the article
