Search Results

Search found 25088 results on 1004 pages for 'dsl linux'.

  • VNC on Xen failure

    - by BCable
    The following config works and creates a good VM in Xen: # Kernel Setup kernel = "/boot/vmlinuz-2.6.18.8-xenU" # Memory memory = "256" # Disk disk = [ "file:/opt/xen/domains/110/sda1.img,sda1,w", "file:/opt/xen/domains/110/swap.img,sda2,w" ] # container name name = "110" hostname = "boo" # Networking vif = ["type=ieomu, bridge=xenbr0"] # VNC vnc = 1 #vfb = [ 'type=vnc,vncdisplay=2,vnclisten=0.0.0.0,vncpasswd=110' ] # Behavior Settings root = "/dev/sda1" extra = "fastboot" But when I uncomment the VFB line, I get the following error after it hangs for at least 30 seconds: [root@customer 110]# xm create boo.cfg Using config file "./boo.cfg". Error: Device 0 (vkbd) could not be connected. Hotplug scripts not working. Any ideas? Part two of this question: Sometimes it actually works, and a port is opened. When this happens, nmap shows the VNC ports open and I can connect via the VNC client, but it just hangs at "Connection established." and no VNC display shows up. I've tried multiple VNC clients (TightVNC, TightVNC Java Console, RealVNC), but they all fail to connect. Does VNC through Xen require X to be started in order to function? I was under the impression that it would show the console screen, so I'm confused as to why all these issues are occurring. Thanks!
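
    A hedged sketch of a PV-guest framebuffer stanza (parameter names follow the xm domain config format; the display number and password are simply the values from the commented line above): for a paravirtual domU the vfb line is what creates the VNC console, while the bare vnc = 1 toggle is an HVM-style option, so keeping only the vfb form is worth trying.

      # VNC framebuffer for a PV guest - requires the xen vfb/vkbd backends on dom0
      vfb = [ 'type=vnc,vnclisten=0.0.0.0,vncdisplay=2,vncpasswd=110' ]

    The "Device 0 (vkbd) could not be connected. Hotplug scripts not working" message usually points at dom0's udev/xen hotplug scripts rather than at the guest config, so checking that xend and the xen backend udev rules are installed and running on dom0 is a reasonable first step. And no, the PV VNC console does not need X in the guest; it shows the guest's console screen.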

    Read the article

  • Apache access.log interpretation

    - by Pantelis Sopasakis
    In the Apache log file (access.log) I find entries like the following: 10.20.30.40 - - [18/Mar/2011:02:12:44 +0200] "GET /index.php HTTP/1.1" 404 505 "-" "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.7.62 Version/11.01" Its meaning is clear: the client with IP 10.20.30.40 applied a GET HTTP method to /index.php (that is to say, http://mysite.org/index.php), receiving a status code 404, using Opera as the client/browser. What I don't understand is entries like the following: 174.34.231.19 - - [18/Mar/2011:02:24:56 +0200] "GET http://www.siasatema.com HTTP/1.1" 200 469 "-" "Python-urllib/2.4" So here what I see is that someone (the client with IP 174.34.231.19) accessed http://www.siasatema.com and got a 200 HTTP status code(?). It doesn't make sense to me... the only interpretation I can think of is that my Apache server acts like a proxy! Here are some other requests that don't have my site as their destination... 187.35.50.61 - - [18/Mar/2011:01:28:20 +0200] "POST http://72.26.198.222:80/log/normal/ HTTP/1.0" 404 491 "-" "Octoshape-sua/1010120" 87.117.203.177 - - [18/Mar/2011:01:29:59 +0200] "CONNECT 64.12.244.203:80 HTTP/1.0" 405 556 "-" "-" 87.117.203.177 - - [18/Mar/2011:01:29:59 +0200] "open 64.12.244.203 80" 400 506 "-" "-" 87.117.203.177 - - [18/Mar/2011:01:30:04 +0200] "telnet 64.12.244.203 80" 400 506 "-" "-" 87.117.203.177 - - [18/Mar/2011:01:30:09 +0200] "64.12.244.203 80" 400 301 "-" "-" I believe that all these are related to some kind of attack on or abuse of the server. Could someone explain to me what is going on and how to cope with this situation? Update 1: I disabled mod_proxy to make sure that I don't have an open proxy: # a2dismod proxy which gave me the message: Module proxy already disabled I made sure that there is no file proxy.conf under $APACHE/mods-enabled. Finally, I set my own IP as a proxy in my browser (Mozilla) and tried to access http://google.com. I was not redirected to google.com; instead my own web page appeared. The same happened when trying to access http://a.b (!). So my server does not really work as a proxy, since it does not forward the requests... But I think it would be better if I could somehow configure it to return a status code 403. Here is my Apache configuration file: <VirtualHost *:80> ServerName mysite.org ServerAdmin webmaster@localhost DocumentRoot /var/www/ <Directory /> Options FollowSymLinks AllowOverride None </Directory> <Directory /var/www/> Options Indexes FollowSymLinks MultiViews AllowOverride None Order allow,deny allow from all </Directory> ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/ <Directory "/usr/lib/cgi-bin"> AllowOverride None Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch Order allow,deny Allow from all </Directory> ErrorLog /var/log/apache2/error.log LogLevel warn CustomLog /var/log/apache2/access.log combined Alias /doc/ "/usr/share/doc/" <Directory "/usr/share/doc/"> Options Indexes MultiViews FollowSymLinks AllowOverride None Order deny,allow Deny from all Allow from 127.0.0.0/255.0.0.0 ::1/128 </Directory> </VirtualHost> Update 2: Using a <Limit> block, I restrict the use of methods other than GET and POST... <Limit POST PUT CONNECT HEAD OPTIONS DELETE PATCH PROPFIND PROPPATCH MKCOL COPY MOVE LOCK UNLOCK> Order deny,allow Deny from all </Limit> <LimitExcept GET> Order deny,allow Deny from all </LimitExcept> Now methods other than GET are forbidden (403). My only question now is whether there is some trick to boot out those who try to use my server as a proxy...
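
    One hedged way to refuse these proxy-style requests outright (a sketch, not taken from the post): requests whose request line carries an absolute URL end up with a Host that is not your own, so a mod_rewrite rule keyed on the Host header will answer them with 403. This assumes mod_rewrite is enabled and that mysite.org stands in for the real ServerName:

      # inside the <VirtualHost *:80> block
      RewriteEngine On
      # if the requested host is not one we serve, refuse with 403 Forbidden
      RewriteCond %{HTTP_HOST} !^(www\.)?mysite\.org$ [NC]
      RewriteRule ^ - [F]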

    Read the article

  • Start multiple instances of Firefox; Xephyr rootless mode

    - by Vi
    How can I have multiple independent instances of Mozilla Firefox 3.5 on the same X server, but started from different user accounts (and consequently different profiles)? I've only had limited success with Xephyr :1, DISPLAY=:1 /usr/local/bin/firefox, but Xephyr has no equivalent of Cygwin/X's "rootless" mode, so it's not comfortable. The idea is to have one Firefox instance for various "Serious Business" things and another for regular browsing with dozens of add-ons, securely isolated from each other.
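
    A hedged sketch of one way to do this without Xephyr, by letting a second account onto the existing display (the account name "work" and display :0 are assumptions, not from the post):

      xhost +SI:localuser:work                    # grant that local user access to the current X display
      sudo -u work -H env DISPLAY=:0 firefox -no-remote -P work-profile

    The -no-remote flag keeps the new instance from handing the URL over to the already running Firefox, and -P selects a separate profile for it.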

    Read the article

  • Uninstall Git completely on Ubuntu?

    - by Millisami
    I installed Git on Ubuntu Lucid (latest) manually as follows: cd ~/tmp wget http://kernel.org/pub/software/scm/git/git-1.7.0.6.tar.gz tar -xzvf git-1.7.0.6.tar.gz cd git-1.7.0.6 ./configure sudo make sudo make install Now, how can I completely uninstall it?
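
    A hedged sketch of cleaning this up (it assumes the default ./configure prefix of /usr/local and that nothing else there starts with "git"):

      cd ~/tmp/git-1.7.0.6
      sudo make uninstall   # only if this Makefile actually provides an uninstall target
      # otherwise remove the installed files by hand:
      sudo rm -rf /usr/local/bin/git* /usr/local/libexec/git-core /usr/local/share/git-core
      sudo rm -rf /usr/local/share/man/man1/git* /usr/local/share/man/man5/git* /usr/local/share/man/man7/git*

    For future source builds, something like checkinstall produces a .deb you can later remove through the package manager.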

    Read the article

  • Configuration Help for Sendmail Required

    - by Vinayak Mahadevan
    Hi, I need some help with my sendmail configuration. The basic problem is that I have some employees working from other places and they need access to their mail. What I have done right now is that whatever mail is meant for them, generated from within the company and collected by my internal mail server, is bounced to an external mail server from where the employees access it. This is done through an email ID on a different domain. This was working fine until I restricted external mailing access for certain users using rulesets in sendmail.cf. Once I had put that in place, only people who had external mailing rights could send mail to people outside the office. What I would like to know is: is there any way I can expose sendmail on two different IPs, and thereby configure everybody's email ID to point to the same internal mail server using two different IPs - one IP when inside the company and one IP outside the company? Is it possible to have one static IP configured for both internal and external access, or is there any other way it can be done with sendmail? Can anybody help me? Sorry for the long post. Regards, Vinayak
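
    A hedged sketch of listening on two addresses with sendmail (sendmail.mc syntax; the addresses are placeholders, not taken from the post):

      dnl one daemon bound to the internal interface, one to the external one
      DAEMON_OPTIONS(`Port=smtp, Addr=192.168.1.10, Name=MTA-internal')dnl
      DAEMON_OPTIONS(`Port=smtp, Addr=203.0.113.10, Name=MTA-external')dnl

    Rebuild sendmail.cf from the .mc file afterwards (with m4, or the distribution's make target in /etc/mail) and restart sendmail; rulesets can then treat the two entry points differently by checking the daemon name.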

    Read the article

  • collectd: Monitoring server not showing clients

    - by Quintin Par
    I have set up a monitoring server with the following configuration: <Plugin network> Listen "0.0.0.0" "25826" </Plugin> Now my clients are sending data to the monitoring server (verified through tcpdump). Even the collection folder shows that the data is being dumped into /var/lib/collectd/rrd: [ec2-user@x rrd]$ ll total 4 drwxr-xr-x 11 root root 4096 Nov 20 17:53 x-web-1.y.com [ec2-user@x rrd]$ I have also verified with find . -mmin 1 that it is being constantly updated: [ec2-user@x rrd]$ find . -mmin 1 ./x-web-1.y.com/interface-eth0/if_errors.rrd ./x-web-1.y.com/interface-eth0/if_packets.rrd ./x-web-1.y.com/interface-eth0/if_octets.rrd ./x-web-1.y.com/disk-xvda1/disk_time.rrd ./x-web-1.y.com/disk-xvda1/disk_ops.rrd ./x-web-1.y.com/disk-xvda1/disk_octets.rrd ./x-web-1.y.com/disk-xvda1/disk_merged.rrd But when I look it up through collectd-web, I don't see the clients. What might be wrong in my setup?
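
    A hedged checklist for the front-end side (paths are the common defaults and may differ on this install): collectd-web reads the rrd files itself, so its configured data directory has to match where the network plugin's data is written, and the web server user needs read access to it.

      # server-side collectd.conf - make sure the receiving daemon also writes rrds
      LoadPlugin network
      LoadPlugin rrdtool
      <Plugin network>
        Listen "0.0.0.0" "25826"
      </Plugin>
      <Plugin rrdtool>
        DataDir "/var/lib/collectd/rrd"
      </Plugin>

    Then point collectd-web's own configuration (collection.conf or the CGI settings, depending on the version) at the same /var/lib/collectd/rrd directory.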

    Read the article

  • Ubuntu: Unsupported Sources

    - by Scott
    What kind of software is available in that source tree? Also, what is the best way to see what kind of software is available in a repository? I have always been reluctant to enable it because I didn't want to wind up with an unstable system.
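
    A hedged way to answer the second part from the command line (the component name "multiverse" is just an example, and the index file names vary by mirror and release): after enabling the source and running apt-get update, the package indexes under /var/lib/apt/lists can be listed directly.

      awk '/^Package:/ {print $2}' /var/lib/apt/lists/*_multiverse_*Packages | sort -u | less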

    Read the article

  • How to make Shared Keys .ssh/authorized_keys and sudo work together?

    - by farinspace
    I've set up .ssh/authorized_keys and am able to log in as the new "user" using the pub/private key ... I have also added "user" to the sudoers list ... The problem I have now is that when I try to execute a sudo command, something simple like: $ sudo cd /root it will prompt me for my password, which I enter, but it doesn't work (I am using the private key passphrase I set). Also, I've disabled the user's password using $ passwd -l user What am I missing? Somewhere my initial remarks are being misunderstood ... I am trying to harden my system ... The ultimate goal is to use pub/private keys for logins rather than simple password authentication. I've figured out how to set all that up via the authorized_keys file. Additionally, I will ultimately prevent server logins through the root account. But before I do that I need sudo to work for a second user (the user I will be logging into the system with all the time). For this second user I want to prevent regular password logins and force only pub/private key logins; if I don't lock the user via passwd -l user ... then if I don't use a key, I can still get into the server with a regular password. But more importantly, I need to get sudo to work with a pub/private key setup for a user whose password has been disabled. Edit: OK, I think I've got it (the solution): 1) I've adjusted /etc/ssh/sshd_config and set PasswordAuthentication no This will prevent ssh password logins (be sure to have a working public/private key setup prior to doing this). 2) I've adjusted the sudoers list via visudo and added root ALL=(ALL) ALL dimas ALL=(ALL) NOPASSWD: ALL 3) root is the only user account that will have a password; I am testing with two user accounts "dimas" and "sherry" which do not have a password set (passwords are blank, passwd -d user). The above essentially prevents everyone from logging into the system with passwords (a public/private key must be set up). Additionally, users in the sudoers list have admin abilities. They can also su to different accounts. So basically "dimas" can sudo su sherry, however "dimas" can NOT do su sherry. Similarly, any user NOT in the sudoers list can NOT do su user or sudo su user. NOTE: The above works but is considered poor security. Any script that is able to run code as the "dimas" or "sherry" users will be able to execute sudo to gain root access. A bug in ssh that allows remote users to log in despite the settings, a remote code execution in something like Firefox, or any other flaw that allows unwanted code to run as the user will now be able to run as root. Sudo should always require a password, or you may as well log in as root instead of some other user.

    Read the article

  • ssh connection operation timed out using rsync

    - by Mark Molina
    I use rsync to back up my remote server to my local device, but when I combine it with a cron job my ssh times out. Just to be clear, the data is stored on a remote server and I want it stored on my local server. The backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in the terminal like this: rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP but when I combine it with a cron job like this: 10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP the ssh connection times out. When the cron job executes it sends a mail to the root user with output like this: From local.xx.xx.xx Tue Jul 2 11:20:17 2013 X-Original-To: username Delivered-To: [email protected] From: [email protected] (Cron Daemon) To: [email protected] Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP X-Cron-Env: <SHELL=/bin/sh> X-Cron-Env: <PATH=/usr/bin:/bin> X-Cron-Env: <LOGNAME=username> X-Cron-Env: <USER=username> X-Cron-Env: <HOME=/Users/username> Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST) ssh: connect to host IP_ADDRESS port XX: Operation timed out rsync: connection unexpectedly closed (0 bytes received so far) [receiver] rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9] So the rsync command works when typed in the terminal but not when run from a cron job. Can anybody explain this?
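
    A hedged sketch of a crontab entry that rules out the usual cron differences (the key path, the PATH line, and the port are assumptions, not from the post): cron runs with a minimal environment and no ssh-agent, so spell out the binary path, the identity file, and a timeout, and capture the output somewhere readable.

      PATH=/usr/local/bin:/usr/bin:/bin
      10 11 * * * /usr/bin/rsync -chavzP --stats -e "ssh -i /Users/username/.ssh/id_rsa -p 22 -o ConnectTimeout=30" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP >> /Users/username/rsync-backup.log 2>&1

    If the interactive command relies on a passphrase-protected key held by an agent, cron won't have access to it; a dedicated, unencrypted backup key (or one loaded via the OS X keychain) is the usual workaround.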

    Read the article

  • sudoer scheme for another web developer that retains my future control of a virtual server?

    - by Tchalvak
    Background: Virtual Private Server I have a virtual private server that I'm looking to host multiple websites on, and provide access to another web developer. I don't care about putting too many constraints on him, though I wouldn't mind isolating the site that he'll be developing from other sites on the server that I will develop. The problem: retain control Mainly what I want is to make sure that I retain control over the server in the future. I want to reserve the ability to create/promote/demote users and perform other administrative functions that don't deal with web software. If I make him an admin, he can sudo su - and become root and remove root control from me, for example. I need him not to be able to: take away other admin permissions change the root password have control over other security/administrative functions I would like him to still be able to: install software (through apt-get) restart apache access mysql configure mysql/apache reboot edit web-development-related configuration files in /etc/ Other standard setups would be happily considered. I've never really set up a good sudoers file, so simple example setups would be very useful, even if they're only somewhat similar to the settings that I'm hoping for above. Edit: I have not yet finalized permissions; standard, useful sudo setups are certainly an option. The lists above are more what I'm hoping I can do; I don't know whether that setup can be done.
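
    A hedged example of a constrained sudoers entry along those lines (the account name "devuser" and the command paths are assumptions; adjust for the actual distribution, and edit only with visudo):

      Cmnd_Alias WEBDEV = /usr/bin/apt-get, /usr/sbin/service apache2 *, \
                          /usr/sbin/service mysql *, /sbin/reboot, \
                          /usr/bin/vim /etc/apache2/*, /usr/bin/vim /etc/mysql/*
      devuser ALL=(root) WEBDEV

    Two caveats worth noting: an editor run through sudo allows shell escapes (sudoedit is tighter), and anyone who can install packages or rewrite Apache/MySQL config can usually be leveraged into root eventually, so a setup like this limits accidents more than it limits a determined admin.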

    Read the article

  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:- #!/bin/sh # cherrypy_server.sh PROCESSES=10 THREADS=1 # threads per process BASE_PORT=3035 # the first port used # you need to make the PIDFILE dir and insure it has the right permissions PIDFILE="/var/run/cherrypy/myproject.pid" WORKDIR=`dirname "$0"` cd "$WORKDIR" cp_start_proc() { N=$1 P=$(( $BASE_PORT + $N - 1 )) ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" threads=$THREADS request_queue_size=0 verbose=0 } cp_start() { for N in `seq 1 $PROCESSES`; do cp_start_proc $N done } cp_stop_proc() { N=$1 #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"` [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop rm -f "$PIDFILE-$N" } cp_stop() { for N in `seq 1 $PROCESSES`; do cp_stop_proc $N done } cp_restart_proc() { N=$1 cp_stop_proc $N #sleep 1 cp_start_proc $N } cp_restart() { for N in `seq 1 $PROCESSES`; do cp_restart_proc $N done } case "$1" in "start") cp_start ;; "stop") cp_stop ;; "restart") cp_restart ;; *) "$@" ;; esac From the bash script, we can essentially do 3 things: start the cherrypy server by calling ./cherrypy_server.sh start stop the cherrypy server by calling ./cherrypy_server.sh stop restart the cherrypy server by calling ./cherrypy_server.sh restart How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the cherrypy server when a machine has been rebooted)? Reference systemd service file example here - https://wiki.archlinux.org/index.php/Systemd#Using_service_file
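
    A hedged sketch of a matching unit file (say /etc/systemd/system/cherrypy.service; the script path /opt/myproject/cherrypy_server.sh is an assumption, not from the post). Type=oneshot with RemainAfterExit=yes is the usual way to wrap a start/stop script like this; systemd then tracks the "started" state but does not supervise the individual CherryPy processes.

      [Unit]
      Description=CherryPy server pool
      After=network.target

      [Service]
      Type=oneshot
      RemainAfterExit=yes
      ExecStart=/opt/myproject/cherrypy_server.sh start
      ExecStop=/opt/myproject/cherrypy_server.sh stop
      ExecReload=/opt/myproject/cherrypy_server.sh restart

      [Install]
      WantedBy=multi-user.target

    Enable it at boot with systemctl enable cherrypy.service and drive it with systemctl start/stop/reload cherrypy. A longer-term cleanup would be to drop the script and run the CherryPy processes directly as a templated service (cherrypy@.service), one instance per port, so systemd can restart them individually.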

    Read the article

  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard Debian package with sudo apt-get install motion Assume my IP cam's DNS name is webcam, user: admin, password: password. /etc/motion/motion.conf Below are my configuration file settings: netcam_url http://webcam/videostream.cgi netcam_userpass admin:password target_dir /media/videos/log/motion # The mini-http server listens to this port for requests (default: 0 = disabled) webcam_port 1234 ffmpeg_cap_new on ffmpeg_video_codec mpeg4 output_motion off snapshot_interval 0 # Quality of the jpeg (in percent) images produced (default: 50) webcam_quality 50 # Output frames at 1 fps when no motion is detected and increase to the # rate given by webcam_maxrate when motion is detected (default: off) webcam_motion on # Maximum framerate for webcam streams (default: 1) webcam_maxrate 15 # Restrict webcam connections to localhost only (default: on) webcam_localhost on # Limits the number of images per connection (default: 0 = unlimited) # Number can be defined by multiplying actual webcam rate by desired number of seconds # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate webcam_limit 0 control_port 8080 control_authentication admin:password Issue #1: when I try to display http://localhost:1234 the browser looks frozen; no HTTP 404 is received, but it still seems to be waiting for data. Issue #2: in the output directory motion writes a lot of jpeg snapshots, although I just want a few sequenced video files. Log: I run motion in interactive mode in a terminal, here is the output: root@mercure:/etc/motion# motion -c motion-1.0.conf [0] Processing thread 0 - config file motion-1.0.conf [0] Motion 3.2.12 Started [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785 [0] Thread 1 is from motion-1.0.conf [0] motion-httpd/3.2.12 running, accepting connections [0] motion-httpd: waiting for data on port TCP 8080 [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Operation now in progress [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Resource temporarily unavailable [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 [1] avcodec_open - could not open codec: Connection reset by peer [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg [1] Thread exiting [1] Calling vid_close() from motion_cleanup [1] vid_close: calling netcam_cleanup [0] httpd - Finishing [0] httpd Closing [0] httpd thread exit [1] netcam camera handler: finish set, exiting [0] Motion thread 1 restart [1] Thread 1 started [1] Resizing pre_capture buffer to 1 items [1] Started stream webcam server in port 1234 It doesn't find the codec ... avcodec_open - could not open codec: Operation now in progress I've changed ffmpeg_video_codec from mpeg4 to swf and the result is the same. When using the flv format motion writes a lot of .jpg images but I can't see anything at http://localhost:1234: [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg Any idea how to just get the video stream recorded on my local disk?
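
    A hedged set of motion.conf tweaks that address the two symptoms (option names follow the motion 3.2.x documentation; verify against the motion man page on this install):

      output_normal off          # stop the per-frame jpeg snapshots; output_motion off alone does not
      ffmpeg_video_codec msmpeg4 # try a codec this ffmpeg/libav build can actually open
      webcam_localhost on        # fine while testing from the same host; set off to view from another machine

    The repeated "could not open codec" lines suggest the installed libav build lacks the encoder motion is asking for, so cycling through the supported ffmpeg_video_codec values (msmpeg4, swf, flv, mpeg4) until one opens cleanly is a reasonable next step; the stream on port 1234 serves MJPEG and is unrelated to the codec used for the .avi files.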

    Read the article

  • GRUB error: unknown filesystem

    - by Ali
    I replaced my old laptop drive, which was a Win7 and Ubuntu dual boot, with an SSD. Now I have connected the old drive through a USB adapter and I want to boot from it. But this comes up: unknown filesystem grub rescue> As I need the programs from the old drive I have to boot from it from time to time, and I don't want to install that software on the new drive. It takes some time to exchange the drives, so I want to boot from USB. How can I fix this?

    Read the article

  • Find a process by name and kill it

    - by Fabiano PS
    So, I want to send a kill to a process whose name I know: ps -ef | grep '_rails master' root 2388 1 0 19:46 ? 00:00:04 unicorn_rails master -c /web/hero/config/unicorn.rb -E production -D root 2582 2172 0 20:28 pts/0 00:00:00 grep --color=auto _rails master It is unicorn_rails master [..] - how do I kill it? So far I have tried sed and expr, but I can't pass the result as a param to kill.
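
    A hedged sketch using pgrep/pkill, which match against the full command line with -f and skip the grep-parsing step entirely (the QUIT signal for a graceful unicorn shutdown is an assumption about the desired behaviour):

      pkill -f 'unicorn_rails master'                       # default SIGTERM to every match
      kill -QUIT "$(pgrep -o -f 'unicorn_rails master')"    # -o picks the oldest match, i.e. the master

    The older ps-based equivalent is kill $(ps -ef | grep '[u]nicorn_rails master' | awk '{print $2}') - the [u] bracket trick keeps the grep process itself out of the match.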

    Read the article

  • Virtual machine on ubuntu

    - by MITHIYA MOIZ
    I have configured a virtual machine on Ubuntu with the help of the article below: https://help.ubuntu.com/9.04/serverguide/C/libvirt.html I managed to finish everything except the major part: getting the virtual host to talk to the real network, which I guess should be done only via a bridge interface. In Virtual Machine Manager, whichever interface I try to choose it tells me the interface is not bridged. When I try to bridge the interface eth0 as below: auto br0 iface br0 inet static address 192.168.0.223 network 192.168.0.0 netmask 255.255.255.0 broadcast 192.168.0.255 gateway 192.168.0.1 bridge_ports eth0 bridge_fd 9 bridge_hello 2 bridge_maxage 12 bridge_stp off I cannot communicate with the network through this interface; the host server loses all communication with the network. But when I remove the bridge interface from /etc/network/interfaces and configure eth0 as below, it works fine: # The primary network interface auto eth0 iface eth0 inet static address 192.168.0.223 netmask 255.255.255.0 network 192.168.0.0 broadcast 192.168.0.255 dns-nameservers 62.215.6.51 gateway 192.168.0.1 How can I set up the bridge interface correctly, and what should my /etc/network/interfaces file look like?
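
    A hedged sketch of a complete /etc/network/interfaces for this setup (addresses are the ones from the post; the dns-nameservers line is carried over from the working eth0 stanza, and the bridge-utils package must be installed):

      auto lo
      iface lo inet loopback

      # the physical NIC carries no address of its own; it is enslaved to the bridge
      auto eth0
      iface eth0 inet manual

      auto br0
      iface br0 inet static
          address 192.168.0.223
          netmask 255.255.255.0
          network 192.168.0.0
          broadcast 192.168.0.255
          gateway 192.168.0.1
          dns-nameservers 62.215.6.51
          bridge_ports eth0
          bridge_stp off
          bridge_fd 0

    A common cause of the host losing the network is eth0 keeping its own static stanza while also being listed in bridge_ports; with eth0 set to manual as above, restart networking (or reboot) and point the guest's NIC at br0 in Virtual Machine Manager.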

    Read the article

  • I keep getting a 403 Forbidden permission error on my Fedora server through my local network

    - by kdavis8
    Trying to view a javascript file on my home server I get the following error: Forbidden You don't have permission to access /jquery-1.8.2.js on this server. Apache/2.2.22 (Fedora) Server at 192.168.1.3 Port 80 I have given all users access to the file like this: sudo chmod -R 777 /var/www/html/jquery-1.8.2.js I have even gone as far as changing the user & group properties in the httpd.conf file.
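
    A hedged guess for Fedora: when the mode bits are already wide open, the blocker is often the SELinux context or the permissions on a parent directory rather than the file itself.

      ls -Z /var/www/html/jquery-1.8.2.js     # show the current SELinux context
      sudo restorecon -Rv /var/www/html       # restore the default httpd content context
      ls -ld /var /var/www /var/www/html      # every parent dir needs o+x for apache to traverse
      sudo tail /var/log/audit/audit.log      # SELinux AVC denials, if any, land here
      sudo tail /var/log/httpd/error_log      # apache's own explanation of the 403

    If restorecon fixes it, undoing the blanket chmod 777 afterwards would be wise; 644 on the file is enough for Apache to serve it.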

    Read the article

  • When I restart my LXC environment, the container does not re-bind to the IP address

    - by RoboTamer
    The IP no longer responds to a remote ping. By restart I mean: lxc-stop -n vm3 lxc-start -n vm3 -f /etc/lxc/vm3.conf -d -- /etc/network/interfaces auto lo iface lo inet loopback up route add -net 127.0.0.0 netmask 255.0.0.0 dev lo down route add -net 127.0.0.0 netmask 255.0.0.0 dev lo # device: eth0 auto eth0 iface eth0 inet manual auto br0 iface br0 inet static address 192.22.189.58 netmask 255.255.255.248 gateway 192.22.189.57 broadcast 192.22.189.63 bridge_ports eth0 bridge_fd 0 bridge_hello 2 bridge_maxage 12 bridge_stp off post-up ip route add 192.22.189.59 dev br0 post-up ip route add 192.22.189.60 dev br0 post-up ip route add 192.22.189.61 dev br0 post-up ip route add 192.22.189.62 dev br0 -- /etc/lxc/vm3.conf lxc.utsname = vm3 lxc.rootfs = /var/lib/lxc/vm3/rootfs lxc.tty = 4 #lxc.pts = 1024 # pseudo tty instance for strict isolation lxc.network.type = veth lxc.network.flags = up lxc.network.link = br0 lxc.network.name = eth0 lxc.network.mtu = 1500 #lxc.cgroup.cpuset.cpus = 0 # security parameter lxc.cgroup.devices.deny = a # Deny all access to devices lxc.cgroup.devices.allow = c 1:3 rwm # dev/null lxc.cgroup.devices.allow = c 1:5 rwm # dev/zero lxc.cgroup.devices.allow = c 5:1 rwm # dev/console lxc.cgroup.devices.allow = c 5:0 rwm # dev/tty lxc.cgroup.devices.allow = c 4:0 rwm # dev/tty0 lxc.cgroup.devices.allow = c 4:1 rwm # dev/tty1 lxc.cgroup.devices.allow = c 4:2 rwm # dev/tty2 lxc.cgroup.devices.allow = c 1:9 rwm # dev/urandon lxc.cgroup.devices.allow = c 1:8 rwm # dev/random lxc.cgroup.devices.allow = c 136:* rwm # dev/pts/* lxc.cgroup.devices.allow = c 5:2 rwm # dev/pts/ptmx lxc.cgroup.devices.allow = c 254:0 rwm # rtc # mounts point lxc.mount.entry=proc /var/lib/lxc/vm3/rootfs/proc proc nodev,noexec,nosuid 0 0 lxc.mount.entry=devpts /var/lib/lxc/vm3/rootfs/dev/pts devpts defaults 0 0 lxc.mount.entry=sysfs /var/lib/lxc/vm3/rootfs/sys sysfs defaults 0 0
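
    A hedged set of things to check right after lxc-start, since the config itself looks sane (names follow the post; lxc-attach needs a reasonably recent kernel, lxc-console works otherwise):

      brctl show br0                       # is the container's veth device re-attached to the bridge?
      ip addr show br0                     # does the host side still look right?
      lxc-console -n vm3                   # log in and check the guest's own /etc/network/interfaces
      ip route show | grep 192.22.189      # are the post-up /32 routes for .59-.62 still present?

    The host-side post-up routes only run when br0 itself is (re)configured, not when a container restarts, and the container's address lives in the guest's rootfs network config rather than in vm3.conf, so the guest may simply be coming up without bringing its own interface up; checking from the console usually narrows it down.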

    Read the article

  • Connect to SVN through Eclipse on Ubuntu

    - by Gene R
    We have a Subversion server running on an internal machine. I'm trying to connect to it through Subclipse (Eclipse) on Ubuntu. When I enter the URL svn://servername/site/trunk, as I do from Windows, I get the following error: Error validating location: "org.tigris.subversion.javahl.ClientException: svn: Malformed network data" Anybody got any ideas?

    Read the article

  • How to enable an external USB hard drive with Ubuntu

    - by LarsOn
    Hello, I'm trying to install a new LaCie Hard Disk (Design by Neil Poulton, 1TB, USB 2.0). GParted reports: /dev/sda1 (with exclamation mark and key sign) ntfs 1 KiB unallocated 320 MiB /dev/sda2 hfs+ 2.84 MiB unallocated 931.2 GiB When trying to create a partition with Disk Utility it says "Daemon is inhibited". It seems I can't create the partition that way. Can you recommend how I can proceed? Thank you

    Read the article

  • Setting up dante socks server

    - by skerit
    I want to tunnel all my internet traffic through my VPS, so I'm trying to install a proxy server. However, I can't seem to browse the internet through Dante; I get the ERR_EMPTY_RESPONSE error. This is my config: logoutput: stderr /home/user/dantelog internal: eth1 port=1080 external: eth1 method: username pam user.privileged: proxy user.notprivileged: nobody user.libwrap: nobody client pass { from: 10.0.0.0/8 port 1-65535 to: 0.0.0.0/0 } Do I really have to run two proxy servers, one for HTTP and one for SOCKS? Or is there something else I can do?
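
    A hedged reading of the config: a client pass rule only admits the TCP connection to the proxy; Dante also needs a socks-level rule before it will relay anything. A sketch of the missing block (older Dante versions spell the keyword plain "pass"; with method: username pam a matching socksmethod may also be needed):

      socks pass {
          from: 10.0.0.0/8 to: 0.0.0.0/0
          command: bind connect udpassociate
          log: error connect disconnect
      }

    One SOCKS server is enough for browsing as long as the browser is pointed at it as a SOCKS proxy rather than an HTTP proxy; ERR_EMPTY_RESPONSE is also what a browser shows when it speaks HTTP-proxy protocol to a SOCKS port.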

    Read the article

  • SFTP ChRoot result in broken pipe

    - by Patrick Pruneau
    I have a website to which I want to add some restricted access in a sub-folder. For this, I've decided to use CHROOT with SFTP (I mostly followed this link: http://shapeshed.com/chroot_sftp_users_on_ubuntu_intrepid/) For now, I've created a user (sio2104) and a group (magento). After following the guide, my folder listing looks like this: -rw-r--r-- 1 root root 27 2012-02-01 14:23 index.html -rw-r--r-- 1 root root 21 2012-02-01 14:24 info.php drwx------ 15 root root 4096 2012-02-25 00:31 magento As you can see, I've chown'd the magento folder to root:root; I wanted to jail the user in, and ...everything else, by the way. Also, in the magento folder I chown'd everything to sio2104:magento so they can access what they want. Finally, I've added this to the sshd_config file: #Subsystem sftp /usr/lib/openssh/sftp-server Subsystem sftp internal-sftp Match Group magento ChrootDirectory /usr/share/nginx/www/magento ForceCommand internal-sftp AllowTCPForwarding no X11Forwarding no PasswordAuthentication yes #UsePAM yes And the result is... well, I can enter my login and password and it all finishes with a "broken pipe" error. $ sftp [email protected] [....some debug....] [email protected]'s password: debug1: Authentication succeeded (password). Authenticated to 10.20.0.50 ([10.20.0.50]:22). debug1: channel 0: new [client-session] debug1: Requesting [email protected] debug1: Entering interactive session. Write failed: Broken pipe Connection closed Verbose mode gives nothing helpful. Anyone have an idea of what I've done wrong? If I try to log in with ssh or sftp with my personal user, everything works fine.
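
    A hedged checklist for a broken pipe right after authentication with ChrootDirectory (paths follow the post): sshd insists that the chroot directory and every directory above it are owned by root and not group- or world-writable, the jailed user still needs execute permission to enter it (the drwx------ above gives it to root only), and the Match block should sit at the end of sshd_config.

      ls -ld /usr /usr/share /usr/share/nginx /usr/share/nginx/www /usr/share/nginx/www/magento
      sudo chown root:root /usr/share/nginx/www/magento
      sudo chmod 755 /usr/share/nginx/www/magento
      # give the user a writable area inside the jail instead of the jail root itself:
      sudo mkdir -p /usr/share/nginx/www/magento/upload
      sudo chown sio2104:magento /usr/share/nginx/www/magento/upload
      # then watch both ends while retrying:
      sftp -vvv sio2104@10.20.0.50
      sudo tail -f /var/log/auth.log

    The server-side log line (typically "fatal: bad ownership or modes for chroot directory ...") says exactly which directory sshd objects to.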

    Read the article

  • Setup git repository on gentoo server using gitosis & ssh

    - by ikso
    I installed git and gitosis as described in this guide. Here are the steps I took: Server: Gentoo Client: MAC OS X 1) git install emerge dev-util/git 2) gitosis install cd ~/src git clone git://eagain.net/gitosis.git cd gitosis python setup.py install 3) added git user adduser --system --shell /bin/sh --comment 'git version control' --no-user-group --home-dir /home/git git In /etc/shadow now: git:!:14665:::::: 4) On local computer (Mac OS X) (local login is ipx, server login is expert) ssh-keygen -t dsa got 2 files: ~/.ssh/id_dsa.pub ~/.ssh/id_dsa 5) Copied id_dsa.pub onto server ~/.ssh/id_dsa.pub Added content from file ~/.ssh/id_dsa.pub into file ~/.ssh/authorized_keys cp ~/.ssh/id_dsa.pub /tmp/id_dsa.pub sudo -H -u git gitosis-init < /tmp/id_rsa.pub sudo chmod 755 /home/git/repositories/gitosis-admin.git/hooks/post-update 6) Added 2 params to /etc/ssh/sshd_config RSAAuthentication yes PubkeyAuthentication yes Full sshd_config: Protocol 2 RSAAuthentication yes PubkeyAuthentication yes PasswordAuthentication no UsePAM yes PrintMotd no PrintLastLog no Subsystem sftp /usr/lib64/misc/sftp-server 7) Local settings in file ~/.ssh/config: Host myserver.com.ua User expert Port 22 IdentityFile ~/.ssh/id_dsa 8) Tested: ssh [email protected] Done! 9) Next step. This is where I have a problem: git clone [email protected]:gitosis-admin.git cd gitosis-admin SSH asked for a password for user git. Why should ssh allow me to log in as user git? The git user doesn't have a password. The ssh key I created is for the user expert. How should this work? Do I have to add some params to sshd_config?

    Read the article

  • Sharing an HP Deskjet F380 using CUPS via HTTP: driver issues with XP client

    - by ageis23
    Hi, the problem is XP doesn't have built-in drivers for my printer but Vista does. On Vista it works perfectly without any issues. However, when I try using XP, it insists that I select a driver from the selection XP offers by default. The drivers I've downloaded from HP don't support networking; HP has stated they're not networkable. Is there anything I can do about this? Any help is greatly appreciated and would save me getting earache!

    Read the article
