Search Results

Search found 33182 results on 1328 pages for 'linux port'.

Page 368/1328 | < Previous Page | 364 365 366 367 368 369 370 371 372 373 374 375  | Next Page >

  • Ubuntu 9.10 Samba NT_STATUS_CONNECTION_REFUSED errors on remote machines.

    - by user40730
    I'm running Samba on Ubuntu 9.10 on a MacBook Pro using Parallels. When I run the smbtree command, I get the following errors:

        peterv@MBP17U<2005$: sudo smbtree
        Enter root's password:
        session request to 192.168.1.156 failed (Called name not present)
        HADEN
        \SERVER2
        cli_start_connection: failed to connect to SERVER2<20 (0.0.0.0). Error NT_STATUS_CONNECTION_REFUSED
        \MBP17WIN    MBP17win
        cli_start_connection: failed to connect to MBP17WIN<20 (0.0.0.0). Error NT_STATUS_CONNECTION_REFUSED
        \MBP17U
        \MBP17U\IPC$    IPC Service ()
        \MBP17U\Perl
        \MBP17U\Home
        \MBP17U\print$
        \MBP17    MBP17
        cli_start_connection: failed to connect to MBP17<20 (0.0.0.0). Error NT_STATUS_CONNECTION_REFUSED
        Fri Apr 16 05:24:47 EDT 2010

    The MBP17 failure is an OS X system; the SERVER2 failure is a Windows XP system. Running testparm shows no errors. Can someone please help me out?
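
    A quick sanity check one could run from the Ubuntu guest, since NT_STATUS_CONNECTION_REFUSED usually means nothing is listening on the SMB ports or a firewall is rejecting the connection. This is only a sketch; the IP address is the one from the log above and may not be the right host for SERVER2:

        # resolve the NetBIOS name and probe the NetBIOS/SMB ports directly
        nmblookup SERVER2
        nc -zv 192.168.1.156 139
        nc -zv 192.168.1.156 445
        # if the ports answer, try listing shares without authentication
        smbclient -L //192.168.1.156 -N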

    Read the article

  • Mail troubleshooting

    - by Jason Swett
    I'm just trying to send myself an e-mail on Ubuntu using sendmail. For some reason, it doesn't work. Here's the command I'm running and what shows up when I run it:

        jason@ve:~$ echo "Subject: test" | /usr/lib/sendmail -v [email protected]
        [email protected]... Connecting to [127.0.0.1] via relay...
        220 ve.5wrvhfxg.vesrv.com ESMTP Sendmail 8.14.3/8.14.3/Debian-9.1ubuntu1; Wed, 29 Dec 2010 13:51:49 -0800; (No UCE/UBE) logging access from: localhost.localdomain(OK)-localhost.localdomain [127.0.0.1]
        >>> EHLO ve.5wrvhfxg.vesrv.com
        250-ve.5wrvhfxg.vesrv.com Hello localhost.localdomain [127.0.0.1], pleased to meet you
        250-ENHANCEDSTATUSCODES
        250-PIPELINING
        250-EXPN
        250-VERB
        250-8BITMIME
        250-SIZE
        250-DSN
        250-ETRN
        250-DELIVERBY
        250 HELP
        >>> VERB
        250 2.0.0 Verbose mode
        >>> MAIL From:<[email protected]> SIZE=14
        250 2.1.0 <[email protected]>... Sender ok
        >>> RCPT To:<[email protected]>
        >>> DATA
        250 2.1.5 <[email protected]>... Recipient ok
        354 Enter mail, end with "." on a line by itself
        >>> .
        050 <[email protected]>... Connecting to 205.186.165.157. via esmtp...
        050 <[email protected]>... Deferred: Connection refused by 205.186.165.157.
        250 2.0.0 oBTLpnEj012261 Message accepted for delivery
        [email protected]... Sent (oBTLpnEj012261 Message accepted for delivery)
        Closing connection to [127.0.0.1]
        >>> QUIT
        221 2.0.0 ve.5wrvhfxg.vesrv.com closing connection

    It seems to me that the "Connection refused by 205.186.165.157" part is where things are going wrong, but I have no idea where or how to begin troubleshooting. Any advice?
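
    Since the local Sendmail accepts the message and only the relay to 205.186.165.157 is refused, one hedged first step is to confirm whether that host accepts SMTP connections at all and to watch the local queue while it retries (standard Debian/Ubuntu tools; the log path is the usual default and may differ):

        # probe port 25 on the relay that refused the connection
        nc -zv 205.186.165.157 25
        # inspect the deferred message in the queue and follow the mail log
        mailq
        tail -f /var/log/mail.log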

    Read the article

  • HP DL380 reboot problems

    - by dvoina
    I have recently installed RHEL 5.3 on an HP DL380 G5, then installed HP's PSP (ProLiant Support Pack). Since then I cannot reboot the system anymore. The system just stays at "Broadcast message from root (tty0). The system is going for reboot NOW". Neither halt, poweroff, reboot nor init 6 works.
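
    When a machine wedges at that broadcast message, one last-resort sketch (assuming the magic SysRq facility is enabled in the RHEL kernel, which it normally is) is to trigger an emergency sync and reboot from another console; note this skips a clean shutdown:

        echo 1 > /proc/sys/kernel/sysrq     # make sure SysRq is enabled
        echo s > /proc/sysrq-trigger        # emergency sync of all filesystems
        echo b > /proc/sysrq-trigger        # immediate reboot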

    Read the article

  • how to resolve error in ubuntu

    - by Bipul
    I'm using Ubuntu 9.04. When I run any command like sudo apt-get update, I get the following error message:

        E: Type 'l.com/ubuntu' is not known on line 45 in source list /etc/apt/sources.list
        E: The list of sources could not be read.

    Due to this problem I'm not able to download anything. Please help.
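
    The error points at a malformed entry on line 45 of /etc/apt/sources.list (the "Type 'l.com/ubuntu'" text suggests a mangled deb line). A hedged way to inspect and temporarily disable just that line before re-running apt-get:

        sed -n '45p' /etc/apt/sources.list               # show the offending line
        sudo sed -i '45s/^/# /' /etc/apt/sources.list    # comment it out
        sudo apt-get update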

    Read the article

  • SSL certificates work fine from the command line but fail in a script

    - by jrallison
    I'm trying to set up email notifications for my continuous integration server. I have a script which uses nail to send the email when the build works:

        #!/bin/bash
        echo "Build Worked!" | nail -A myisp -s 'Build Success' [email protected]

    When I run this from the command line with sh build-worked, it works and I receive the email. However, when I start the continuous integration server, which executes the same script, I get the following error:

        nail: /opt/bitnami/common/lib/libssl.so.0.9.8: no version information available (required by nail)
        nail: /opt/bitnami/common/lib/libcrypto.so.0.9.8: no version information available (required by nail)
        Error with certificate at depth: 0
         issuer  = /C=ZA/ST=Western Cape/L=Cape Town/O=Thawte Consulting cc/OU=Certification Services Division/CN=Thawte Premium Server CA/[email protected]
         subject = /C=US/ST=California/L=Mountain View/O=Google Inc/CN=smtp.gmail.com
         err 20: unable to get local issuer certificate
        Continue (y/n)? could not initiate SSL/TLS connection: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
        . . . message not sent.

    I must be missing some configuration - any ideas?
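
    The CI server's environment apparently lacks the CA material that the interactive shell has, so the Thawte chain for smtp.gmail.com can't be verified. A hedged sketch, assuming this nail build honors the heirloom-mailx style ssl-* variables and that the Debian CA bundle path applies, is to point the CI user's ~/.mailrc at the system CA bundle:

        cat >> ~/.mailrc <<'EOF'
        set ssl-ca-file=/etc/ssl/certs/ca-certificates.crt
        EOF
        # (setting ssl-verify=ignore would also silence the prompt, but skips verification entirely)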

    Read the article

  • Advantages of Ubuntu LTS versions over regular Ubuntu?

    - by Adam Matan
    Do the LTS versions of Ubuntu have any advantages for non-paying customers (who don't get any support anyway)? Judging from the tech specs alone, these versions seem outdated in many respects - mainly drivers and installed software versions. For instance, my previous (bounty!) problem regarding the AGN 5100 drivers would have been solved under Ubuntu 9.04.

    Read the article

  • authbind, privbind or iptables REDIRECT (port 80 to 8080)?

    - by chris_l
    Hi, I'd like to run Glassfish v3 as a non-privileged user on Linux (Debian), but make it available on port 80. I'm currently doing this with iptables:

        iptables -t nat -I PREROUTING -p tcp -d x.x.x.x --dport 80 -j REDIRECT --to-port 8080

    This works, but I wonder:

    - whether this has any significant performance impact compared to binding directly to port 80;
    - whether I could make a similar setup also work for HTTPS (or if that must run on 443);
    - whether there's a way to prevent other users from binding to port 8080 (in case my server crashes) - maybe block that port permanently for other users somehow?

    ...or whether I should use authbind/privbind instead? Problem: I couldn't make it work with authbind or privbind so far. For authbind, I edited asadmin's last line to:

        exec authbind --deep "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    For privbind:

        exec privbind -u glassfish "$JAVA" -Djava.net.preferIPv4Stack=true -jar ...

    (Only) with these settings, I can successfully perform a create-domain --domainport 80. This proves that authbind and privbind actually work (the authbind version of the script is called by the glassfish user; the privbind version is called by root, of course). However, in both cases I get the following exception when starting the domain (start-domain):

        [#|2010-03-20T13:25:21.925+0100|SEVERE|glassfishv3.0|javax.enterprise.system.core.com.sun.enterprise.v3.server|_ThreadID=11;_ThreadName=FelixStartLevel;|Shutting down v3 due to startup exception : Permission denied: 80=com.sun.enterprise.v3.services.impl.monitor.MonitorableSelectorHandler@1fc25e5|#]

    I haven't found a solution for that yet (after searching the web, it seems that this isn't so easy?). But maybe the solution with iptables is good enough - what do you think? Thanks, Chris
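
    As an alternative worth weighing against the iptables REDIRECT (not something from the question itself), Linux file capabilities can let the JVM bind low ports directly. This is only a sketch; the JVM path is a placeholder, and granting the capability to the java binary may also require library-path tweaks on some systems:

        sudo setcap 'cap_net_bind_service=+ep' /path/to/jdk/bin/java
        getcap /path/to/jdk/bin/java        # verify the capability is set

    The iptables rule itself only adds a NAT table lookup per new connection, which in practice is usually negligible compared with application latency.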

    Read the article

  • CentOS Backup BASH Script

    - by user1062058
    I just wrote this script for backing up everything into a tar.gz file. Does it look okay? How can I get the tar file to transfer itself over to another server after executing? FTP from itself? I'm going to put this script into a weekly cron.

        #!/bin/bash
        rm ~/backup.tar.gz                    # removes old backup
        BACKUP_DIRS=$HOME                     # $HOME is builtin, it goes to /home/ and all child dirs
        tar -cvzf backup.tar.gz $BACKUP_DIRS
        # run tar -zxvf to extract backup.tar.gz

    Thanks.
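
    One hedged way to ship the archive after it is created, using SSH instead of FTP (the hostname, user and destination directory below are placeholders; key-based authentication would be needed for an unattended cron job):

        scp ~/backup.tar.gz backupuser@backup.example.com:/var/backups/
        # or, resumable and bandwidth-friendly:
        rsync -avz ~/backup.tar.gz backupuser@backup.example.com:/var/backups/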

    Read the article

  • Xenserver 5.5 doesn't see RAID volume

    - by Roy Chan
    Hi gurus, I am trying to install XenServer on a Dell Precision 490 workstation. After booting into the install wizard and clicking Next a few times, the disk selection step only shows the physical hard drives, not the RAID-10 volume I set up on the Dell RAID controller. Is there a special option that I have to set at boot, or do I need a special driver for this? Please advise. Thanks

    Read the article

  • Reset KDE System Monitor (KSysGuard)

    - by Deltik
    Something went wrong while I was attempting to restore a backup, and KDE System Guard ceased to display properly. When run as root (kdesudo ksysguard) the display is correct; when run as my own user (ksysguard) the menu bar is missing and the "Process Table" tab is unclickable. I have already tried removing the directory ~/.kde/share/apps/ksysguard/, but to no avail. My question: how do I restore KSysGuard to factory defaults/normal functionality?
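
    Since removing ~/.kde/share/apps/ksysguard/ alone didn't help, a hedged next step (KDE 4 paths; move the file aside rather than deleting it) is to also clear the per-user config file and restart the application:

        mv ~/.kde/share/config/ksysguardrc ~/.kde/share/config/ksysguardrc.bak
        rm -rf ~/.kde/share/apps/ksysguard/
        ksysguard &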

    Read the article

  • Using PAM and vsftpd without root access

    - by Zizzencs
    I'm trying to set up a few vsftpd instances on a machine that I have no root access to. The authentication should be done through PAM with pam_listfile, like this:

        pam_listfile.so item=group sense=allow file=/path/filename onerr=fail

    I can ask the administrator to set up a PAM service for me and include that line, but he is not willing to create 6 PAM services for my 6 vsftpd instances, and I really need a different /path/filename for each vsftpd server. Is there a way to solve this problem? Can I somehow avoid using an absolute path as the parameter? (I know the correct solution would be to use one vsftpd instance and set up virtual users properly. Unfortunately, however, I have to work with what I have, and the users already exist in an Active Directory and are authenticated to the system using another PAM service.)

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    Hi all, I have a heavy application running on a CentOS server and I'm seeing strange memory behavior. Here is a snapshot of a munin graph: as you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is memory that has been freed but not yet cleaned up by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks. Some extra info:

    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).

    2) cat /proc/meminfo (at a later stage than the image) gives:

        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
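
    A couple of hedged checks that may help separate tmpfs usage from genuinely reclaimable page cache. Note that pages backing files in a tmpfs count toward Cached/Inactive but cannot simply be dropped (only swapped out) until the files are deleted, so drop_caches will not free them:

        df -h /tmp && du -sh /tmp                    # how much memory the tmpfs is pinning
        sync && echo 3 > /proc/sys/vm/drop_caches    # drop clean page cache, dentries and inodes (as root)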

    Read the article

  • cset as non-root to set cpu affinity for running processes

    - by RaveTheTadpole
    I've been playing with cset to set cpu affinity for running processes. I'm recreating the built-in "shield" function manually with set and proc, to add some subsets for specific threads of my application. I have a bash script that is calling cset to create the sets, and move the correct threads to the correct sets. It works when run with sudo. Now I'd like to make this script executable by another user, who does not have sudo powers. I trust this user enough to be responsible with cset, but don't want to open up the wide powers of root. I thought that CAP_SYS_NICE -- which is needed for sched_setaffinity, which I just assume cset must use -- on the script would be sufficient, but that didn't work. I tried extending CAP_SYS_NICE to the cset program (which is a thin python wrapper for the cset python library). No dice. The output of cap_to_text on my CAP_SYS_NICE'd scripts is "=cap_ipc_lock,cap_sys_nice,cap_sys_resource+eip" (it has ipc_lock and sys_resource for other reasons; I think only sys_nice is relevant). Any ideas?
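
    Two hedged checks before going further, since file capabilities (like setuid) are honored on ELF binaries but silently ignored on interpreter scripts, and cset ultimately also needs write access to the cpuset pseudo-filesystem, not just CAP_SYS_NICE. Paths below are illustrative:

        getcap /usr/bin/python /usr/local/bin/my-cset-wrapper.sh   # see what actually carries the capability
        ls -ld /cpusets /sys/fs/cgroup/cpuset 2>/dev/null          # cset needs write access here

    If that turns out to be the blocker, a narrowly scoped sudoers rule allowing just this one script may be the simpler compromise.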

    Read the article

  • ssh-agent on ubuntu rapidly restarts

    - by Santa Claus
    I am attempting to use ssh-agent on Ubuntu 13.10 so that I will not have to enter my passphrase to unlock a key every time I want to use ssh or git. As you can see below, ssh-agent appears to be restarting for some reason. These commands were executed within a period of less than 5 seconds:

        andrew@zaphod:~$ ssh-agent
        SSH_AUTH_SOCK=/tmp/ssh-pqm5J0s70NxG/agent.2820; export SSH_AUTH_SOCK;
        SSH_AGENT_PID=2821; export SSH_AGENT_PID;
        echo Agent pid 2821;
        andrew@zaphod:~$ ssh-agent
        SSH_AUTH_SOCK=/tmp/ssh-VpkOH2WKjT1M/agent.2822; export SSH_AUTH_SOCK;
        SSH_AGENT_PID=2823; export SSH_AGENT_PID;
        echo Agent pid 2823;
        andrew@zaphod:~$ ssh-agent
        SSH_AUTH_SOCK=/tmp/ssh-EQ6X9JHNiBOO/agent.2824; export SSH_AUTH_SOCK;
        SSH_AGENT_PID=2825; export SSH_AGENT_PID;
        echo Agent pid 2825;
        andrew@zaphod:~$ ssh-agent
        SSH_AUTH_SOCK=/tmp/ssh-8Iij8kFkaapz/agent.2826; export SSH_AUTH_SOCK;
        SSH_AGENT_PID=2827; export SSH_AGENT_PID;
        echo Agent pid 2827;
        andrew@zaphod:~$

    My guess is that ssh-agent is crashing, but how would I know? What log file would it log to?
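
    For what it's worth, each bare ssh-agent invocation starts a new agent and merely prints the variables for the caller to load, which would explain the changing PIDs. A hedged sketch of the usual pattern (default key path assumed):

        eval "$(ssh-agent -s)"      # start one agent and import its variables into this shell
        ssh-add ~/.ssh/id_rsa       # unlock the key once per session
        ssh-add -l                  # confirm the agent is reachable and the key is loaded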

    Read the article

  • problem when view the super block in ext3 file system

    - by xuczhang
    I tried to view the superblock with the "dd" command on an ext3 file system:

        dd if=/dev/sda3 bs=4096 skip=1 count=1 of=superblock

    But the result in the superblock file is not correct (I compared the inodes-count value against what I got from dumpe2fs). Doesn't the device file /dev/sda3 start with the boot block, followed by the superblock of group 0? And another question: are the boot block and the superblock both one BLOCKSIZE (here 4096 bytes) in size? (I think the on-disk formats of ext2 and ext3 are the same.)
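
    For comparison, a hedged variant of the same experiment: the primary ext2/ext3 superblock starts at byte offset 1024 from the beginning of the partition regardless of the filesystem block size, so reading it with a 1 KiB block size and checking against dumpe2fs may be easier:

        dd if=/dev/sda3 bs=1024 skip=1 count=1 of=superblock
        dumpe2fs -h /dev/sda3 | grep -i 'inode count'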

    Read the article

  • How to remotely open gedit with SFTP URL in Gnome through SSH?

    - by Álvaro Justen
    My setup is weird and I can't change it now. I have two machines:

    - local-machine: my desktop, running Ubuntu with Gnome
    - remote-machine: a virtual machine, also running Ubuntu but without X

    On both machines I have my private and public SSH keys. I need to run SSH from remote-machine to local-machine and run gedit (on local-machine, under the default $DISPLAY) but opening a file on remote-machine through SFTP. Something like this:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 gedit sftp://remote-machine/some/file"

    The command above doesn't work. gedit shows this message:

        Could not open the file sftp://remote-machine/some/file.
        gedit cannot handle sftp: locations.

    Note that:

    - /some/file exists on remote-machine.
    - I can SSH normally from remote-machine to local-machine using my SSH key without any problems!
    - I can run the command DISPLAY=:0.0 gedit sftp://remote-machine/some/file in a terminal on local-machine and gedit opens the file on remote-machine without any problems - but the terminal in which I executed the command is running on DISPLAY :0 (really, it's gnome-terminal).
    - I also tried the -t option of the SSH client (to force pseudo-tty allocation) but it didn't work.
    - If I try to run DISPLAY=:0.0 gedit sftp://remote-machine/some/file on local-machine but under a tty (for example in tty1, by pressing <Ctrl>+<Alt>+<F1>) it does not work - I get the same error as when running from remote-machine.

    I found that if I pass the environment variable DBUS_SESSION_BUS_ADDRESS with a correct value, it works! So, if I do something like this:

        myuser@local-machine:~$ env | grep DBUS_SESSION_BUS_ADDRESS > env.txt
        myuser@local-machine:~$ scp env.txt remote-machine:

    and then:

        myuser@remote-machine:~$ ssh local-machine "DISPLAY=:0.0 $(cat env.txt) gedit sftp://remote-machine/some/file"

    it works! The problem is that I'm not on local-machine, so I can't get the correct value for this env variable. Is there any other way to make this work?
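
    One hedged workaround is to pull DBUS_SESSION_BUS_ADDRESS out of a running session process on local-machine at call time instead of copying env.txt around. This assumes a gnome-session process owned by myuser is attached to display :0, and the pgrep/tr pipeline is only a sketch:

        ssh local-machine '
          pid=$(pgrep -u myuser -n gnome-session)
          export $(tr "\0" "\n" < /proc/$pid/environ | grep ^DBUS_SESSION_BUS_ADDRESS)
          DISPLAY=:0.0 gedit sftp://remote-machine/some/file
        '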

    Read the article

  • using squid for apache?

    - by ajsie
    So I have set up Apache serving my PHP pages. I have read about Squid but don't understand why/how I should use it to speed up my web server. From what I've learned, Squid sits in the same network (or another one) and caches content requested by web browsers; when another browser wants the same page, Squid returns the locally cached copy, so the request never reaches the Apache server (faster response time for the client, and reduced load for the server). So it seems that Squid is for the client side (web browser) and has nothing to do with the server side (Apache). But then some people tell others how they have sped up Apache using Squid, so I'm confused. Could Squid be used on the server side too? And how would it work?
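
    For reference, Squid can also sit on the server side as a reverse proxy ("accelerator") in front of Apache, caching responses so repeated requests never reach Apache/PHP. A hedged squid.conf sketch (the hostname is a placeholder; directive details vary slightly between Squid 2.7 and 3.x):

        http_port 80 accel defaultsite=www.example.com
        cache_peer 127.0.0.1 parent 8080 0 no-query originserver name=apache
        acl our_site dstdomain www.example.com
        http_access allow our_site
        cache_peer_access apache allow our_site

    Apache would then listen on 127.0.0.1:8080 while Squid answers on port 80.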

    Read the article

  • OpenWrt logging: how to find out "wifi deauthentication"

    - by user62367
    If someone starts to use the wifi, I can see that with logread:

        Jan 23 21:04:47 router daemon.info hostapd: wlan0: STA XX:XX:XX:XX:XX:XX IEEE 802.11: authenticated

    But how can I see that they are disconnecting? There is no "... deauthenticated ..." line in logread, or anything else that indicates someone got disconnected. I searched the wiki (http://wiki.openwrt.org/doc/uci/system) but it doesn't say anything about the log level. Can anyone help me find out how to tell that someone has disconnected their wifi from the router? logread doesn't even write a line when someone disconnects. Please help, it's important! Thank you!
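
    One hedged way to surface disconnects even when syslog stays quiet: hostapd emits AP-STA-CONNECTED / AP-STA-DISCONNECTED events on its control interface, and hostapd_cli can run a script on each event. The script path is illustrative, and the argument order follows hostapd_cli's usual action-script convention, so it may need checking on the installed version:

        cat > /root/sta-events.sh <<'EOF'
        #!/bin/sh
        # called by hostapd_cli with: interface, event name, station MAC
        logger -t wifi "$2 $3"
        EOF
        chmod +x /root/sta-events.sh
        hostapd_cli -i wlan0 -a /root/sta-events.sh &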

    Read the article

  • Autologin on Ubuntu Server

    - by hekevintran
    I have a machine running Ubuntu Server. It has only a command-line interface. How can I make the system log in with a specific user automatically (I don't want to type the username/password)? I know that this is insecure and I don't care.
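
    A hedged sketch for an Upstart-era Ubuntu Server: change the exec line in /etc/init/tty1.conf so getty logs a chosen user in automatically on the first console ("myuser" is a placeholder; systemd-based releases use a getty@tty1 drop-in with agetty --autologin instead):

        exec /sbin/getty -8 38400 tty1 -a myuser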

    Read the article

  • Setting a custom timeout to nmblookup

    - by C2H5OH
    As part of a batch script, I have the following command:

        hostname=$(nmblookup -A $ip_address | awk '$2 == "<20>" {print $1}')

    This works fine from a functionality perspective, even for unresolved hosts. The problem is that when the IP address is not reachable or the remote machine does not respond to the SMB request, the command takes about ten seconds to complete. Therefore, the question is simple: is there a way to lower the elapsed time in such cases? Or, in other words, is there a way to set a custom timeout for the nmblookup command? NOTE: I'm interested in solutions that do not make use of SIGALRM or similar mechanisms, if they exist. The nmblookup version is 3.6.3 from Ubuntu 12.04 LTS.
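
    For what it's worth, nmblookup itself doesn't appear to expose a timeout switch, so the usual fallback is the coreutils timeout wrapper around the whole pipeline - admittedly close to the signal-based mechanisms the question would rather avoid, so treat this as a hedged stopgap:

        hostname=$(timeout 2 nmblookup -A "$ip_address" | awk '$2 == "<20>" {print $1}')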

    Read the article

  • Searching for a specific option in a man page

    - by mitch_feaster
    I often find myself man'ing a command just to learn about one specific option. Most of the time I can search to the option just fine, unless it's something like ffmpeg or gcc where I have to step through about 40 matches until I get to the actual description of the option... Sometimes I can get lucky and search for the word "options" to get close and then refine it from there, but it would be nice if I could reliably jump straight to the option in question. It would be cool if there was a tool that could parse out the options and build a database on which you could do searches, but after looking at the groff markup for a few pages I've determined it would only be a best-guess effort due to the lack of meta-information in groff markup... In my ideal world woman mode in emacs would support searching for specific options... :) Any tips for jumping straight to a specific option in a man page?
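
    Until something smarter exists, one hedged trick is to hand less a start-up search pattern through man's -P option, so the page opens at the first line that begins with the option in question (the pattern assumes the page indents its option list, which most do):

        man -P 'less -p "^ +-Wall"' gcc
        # interactively, the same idea: inside the man page type  /^ +-Wall  and press Enter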

    Read the article
