Search Results

Search found 33882 results on 1356 pages for 'command window'.


  • Pip installs on Archlinux fails to build egg

    - by stmfunk
    I was trying to install nltk on my Arch Linux server, but it repeatedly fails with the following error output:

        /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'entry_points'
          warnings.warn(msg)
        /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'zip_safe'
          warnings.warn(msg)
        /usr/lib/python3.3/distutils/dist.py:257: UserWarning: Unknown distribution option: 'test_suite'
          warnings.warn(msg)
        usage: setup.py [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
           or: setup.py --help [cmd1 cmd2 ...]
           or: setup.py --help-commands
           or: setup.py cmd --help
        error: invalid command 'bdist_egg'
        /tmp/pip_build_root/nltk/distribute-0.6.21-py3.3.egg
        Traceback (most recent call last):
          File "./distribute_setup.py", line 143, in use_setuptools
            raise ImportError
        ImportError

        During handling of the above exception, another exception occurred:

        Traceback (most recent call last):
          File "<string>", line 16, in <module>
          File "/tmp/pip_build_root/nltk/setup.py", line 23, in <module>
            distribute_setup.use_setuptools()
          File "./distribute_setup.py", line 145, in use_setuptools
            return _do_download(version, download_base, to_dir, download_delay)
          File "./distribute_setup.py", line 125, in _do_download
            _build_egg(egg, tarball, to_dir)
          File "./distribute_setup.py", line 116, in _build_egg
            raise IOError('Could not build the egg.')
        OSError: Could not build the egg.
        ----------------------------------------
        Cleaning up...
        Command python setup.py egg_info failed with error code 1 in /tmp/pip_build_root/nltk
        Storing complete log in /root/.pip/pip.log

    This error also occurs for matplotlib, but that's the only other library I have found it to fail on so far; pyyaml installs fine. The install works perfectly under virtualenv on my Mac, which uses Python 2.7, but the server uses Python 3.3. Any help is appreciated.
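    A hedged guess at a workaround, since the traceback dies inside the bundled distribute 0.6.21 bootstrap rather than in nltk's own code: "invalid command 'bdist_egg'" usually means setup.py fell back to plain distutils, so giving the build a current setuptools first may get past it (the commands are real, but whether that nltk release supports Python 3.3 at all is a separate question):

        # distribute 0.6.21 predates solid Python 3 support; a newer
        # setuptools provides the bdist_egg/egg_info commands setup.py wants
        pip install --upgrade setuptools
        pip install nltk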


  • Mutual piping on linux

    - by user21919
    I would like the output of A to be the input of B and, at the same time, the output of B to be the input of A. Is that possible? I tried the naïve thing: creating named pipes for A (pipeA) and B (pipeB) and then:

        pipeB | A | pipeA &
        pipeA | B | pipeB &

    But that does not work (pipeB is empty, and switching the order does not help either). Any help would be appreciated.

    Example: command A could be the compiled form of this C program:

        #include <stdio.h>

        int main() {
            printf("0\n");
            int x = 0;
            while (scanf("%d", &x) != EOF) {
                printf("%d\n", x + 1);
            }
            return 0;
        }

    Command B could be the compiled form of this C program:

        #include <stdio.h>

        int main() {
            int x = 0;
            while (scanf("%d", &x) != EOF) {
                printf("%d\n", x + x);
            }
            return 0;
        }
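    For what it's worth, a minimal sketch of the usual fix, assuming the two binaries are ./A and ./B: a named pipe is a file, so it has to be wired in with redirection rather than placed in the pipeline like a command. One FIFO plus one anonymous pipe closes the loop:

        # A reads B's output from the FIFO; A's stdout pipes into B;
        # B writes back into the FIFO, completing the cycle
        mkfifo loop
        ./A < loop | ./B > loop
        rm loop

    A has to speak first (it prints the initial 0 before reading), otherwise both sides block waiting for input. If nothing flows even then, stdio buffering is the likely culprit: stdout is block-buffered when it is a pipe, so each program may need an fflush(stdout) after its printf.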


  • Unable to run cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use on an unencumbered player. The curl aspect is working, as I can run a browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    But the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is it ignoring my --sout input?
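    The error line gives the clue away: cvlc received the single quotes as literal characters (note the dst="'#standard{...}'" in the stream chain). When the shell expands $CMD it performs word splitting but not quote removal, so quotes stored inside a variable are passed through verbatim. A minimal sketch of two ways around that, assuming bash ($URL stands in for the mms address):

        # either let the shell re-parse the string, quotes and all...
        eval "$CMD"

        # ...or build the command as an array, so no re-quoting is needed
        CMD=(cvlc --sout "#standard{access=http,mux=asf,dst=0.0.0.0:58194}" "$URL")
        exec "${CMD[@]}"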


  • Is execution of sync(8) still required before shutting down linux?

    - by Amos Shapira
    I still see people recommend "sync; sync; sync; sleep 30; halt" incantations when talking about shutting down or rebooting Linux. I've been running Linux since its inception, and although this was the recommended procedure in the BSD 4.2/4.3 and SunOS 4 days, I can't recall having had to do that for at least the last ten years, during which I have probably been through shutdowns/reboots of Linux maybe thousands of times.

    I suspect this is an anachronism from the days when the kernel couldn't unmount and sync the root filesystem and other critical filesystems required even during single-user mode (e.g. /tmp), so it was necessary to tell it explicitly to flush as much data as it could to disk. These days, without having found the relevant code in the kernel source yet (digging through http://lxr.linux.no and Google), I suspect that the kernel is smart enough to cleanly unmount even the root filesystem, and that the filesystem is smart enough to effectively do a sync(2) before unmounting itself during a normal "shutdown"/"reboot"/"poweroff". The "sync; sync; sync" should only be necessary in extreme cases where the filesystem won't unmount cleanly (e.g. physical disk failure), or when the system is in a state that only forcing a direct reboot(8) will get it out of its freeze (e.g. the load is too high to let it schedule the shutdown command).

    I also never do the "sync" procedure before unmounting removable devices, and have never hit a problem. Another example: Xen allows a DomU to be sent a "shutdown" command from the Dom0; this is considered a "clean shutdown" without anyone having to log in and type the magical "sync; sync; sync" first. Am I right, or have I been lucky for a few thousand system shutdowns?


  • How to take screenshots of WPF applications in correct size and content

    - by Thomas W.
    I usually take screenshots of single windows via the built-in key combination Alt+Print. Unfortunately this does not work well for more and more applications - all of them WPF applications. Usually the screenshots have at least one of the following properties:

    - the screenshot is larger than expected and contains parts of the screen around the actual window
    - the screenshot has the correct size but includes parts of other windows, e.g. the Windows taskbar. Of course the taskbar might be in front of the window, but taking screenshots of "normal" programs works fine.

    How do I take screenshots of WPF applications which are correct in size and content? I'd like to avoid the extra effort of checking all the screenshots for correctness, reproducing the situation and taking them again in case of issues, or repairing/faking them manually in a pixel-manipulation program (e.g. Paint.NET).

    I observe this on Windows 7 x64 SP1 with all official updates installed, but it might apply to other Windows versions as well (not tested yet). .NET 4.5 is installed; the application itself might only need the built-in .NET 3.5.1. It's reproducible on a virtual machine with the same settings.

    Examples:

    - Screenshot of an application running in maximized mode: the screenshot includes parts of the taskbar.
    - Screenshot of a progress dialog which is behind the taskbar: the screenshot also includes the taskbar, while it doesn't for non-WPF applications.


  • Creating a pseudoterminal to make sudo happy

    - by larsks
    I need to automate the provisioning of a cloud instance (running Fedora 17) for which the following initial facts are true:

    - I have ssh-key based access to a remote user (cloud)
    - That user has password-free root access via sudo.

    Manual configuration is as simple as logging in and running sudo su - and having at it, but I would like to fully automate this process. The trick is that the system defaults to having the requiretty option enabled for sudo, which means that an attempt to do something like this:

        ssh remotehost sudo yum -y install puppet

    will fail:

        sudo: sorry, you must have a tty to run sudo

    I am working around this right now by first pushing over a small Python script that will run a command on a pseudoterminal:

        import os
        import sys
        import errno
        import subprocess

        pid, master_fd = os.forkpty()
        if pid == 0:
            # child process: now that we're attached to a
            # pty, run the given command.
            os.execvp(sys.argv[1], sys.argv[1:])
        else:
            while True:
                try:
                    data = os.read(master_fd, 1024)
                except OSError, detail:
                    if detail.errno == errno.EIO:
                        break
                if not data:
                    break
                sys.stdout.write(data)
            os.wait()

    Assuming that this is named pty, I can then run:

        ssh remotehost ./pty sudo yum -y install puppet

    This works fine, but I'm wondering if there are solutions already available that I haven't considered. I would normally think of expect, but it's not installed by default on this system. screen can do this in a pinch, but the best I came up with was:

        screen -dmS sudo somecommand

    ...which does work, but eats the output. Are there any other generally available tools that will allocate a pseudoterminal for me?
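    One solution that may already be at hand, since the question is really about getting sudo a tty: ssh can allocate the pseudoterminal itself. A minimal sketch:

        # -t asks for a remote pty along with the command; -tt forces
        # one even when ssh's own stdin is not a terminal (cron,
        # scripts), which is the usual case during provisioning
        ssh -tt remotehost sudo yum -y install puppet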


  • big cpu load on vmware server / linux

    - by dezfafara
    I am currently using VMware Server 2.x hosting 4 virtual machines on a Linux system. Today, on my physical server, I saw an enormous load average. This is the "top" of the server, showing my 4 virtual guests:

        top - 11:02:02 up 194 days, 23:09, 5 users, load average: 18.78, 12.05, 13.55
        Tasks: 113 total, 4 running, 109 sleeping, 0 stopped, 0 zombie
        Cpu0 : 71.6%us, 19.0%sy, 0.0%ni, 8.8%id, 0.0%wa, 0.3%hi, 0.3%si, 0.0%st
        Cpu1 : 74.3%us, 10.4%sy, 0.0%ni, 15.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu2 : 72.5%us, 17.6%sy, 0.0%ni, 9.8%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Cpu3 : 79.5%us, 4.6%sy, 0.0%ni, 16.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 8178884k total, 8129980k used, 48904k free, 134904k buffers
        Swap: 10490436k total, 148k used, 10490288k free, 6129728k cached

          PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
         7312 root   6 -10 1149m 921m 559m R   97 11.5 107947:09 vmware-vmx
         6995 root   6 -10  779m 687m 317m R   92  8.6 107374:31 vmware-vmx
         6693 root   6 -10  880m 659m 409m S   85  8.3  76947:33 vmware-vmx
        12937 root   6 -10  960m 719m 523m S   75  9.0  67219:49 vmware-vmx

    The four vmware-vmx processes above are my 4 virtual guests. These guests run Linux, and their processes are usually at 5% - 15% CPU; I don't understand why I have had this big problem for the last few days. This is the "top" on a virtual guest whose host-side process is at 95% CPU:

        top - 11:23:15 up 194 days, 23:13, 4 users, load average: 0.25, 0.47, 0.59
        Tasks: 92 total, 2 running, 90 sleeping, 0 stopped, 0 zombie
        Cpu(s): 1.4%us, 7.7%sy, 0.0%ni, 90.5%id, 0.5%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 382296k total, 369732k used, 12564k free, 145156k buffers
        Swap: 979924k total, 13956k used, 965968k free, 86988k cached

          PID USER  PR NI  VIRT  RES SHR S %CPU %MEM    TIME+ COMMAND
         3691 root  20  0 23948 1148 960 S 13.0  0.3 15339:23 vmware-guestd
         3840 root  20  0 19880  584 512 S  7.7  0.2  1729:17 hald-addon-stor

    This virtual guest's state looks fine. If anyone has any ideas... Thanks


  • How can I switch from a custom linux network namespace back to the default one?

    - by Martin
    With ip netns exec you can execute a command in a custom network namespace - but is there also a way to execute a command in the default namespace? For example, after executing these two commands:

        sudo ip netns add test_ns
        sudo ip netns exec test_ns bash

    how can the newly created bash execute programs in the default network namespace? There is no ip netns exec default or anything similar, as far as I've found.

    My scenario is: I want to run an SSH server in a separate network namespace (to keep the rest of the system unaware of the network connection, as the system is used for network testing), but I want to be able to execute programs in the default network namespace via the SSH connection.

    What I've found out so far:

    - Created network namespaces are listed as files under /var/run/netns (but there is no file for the default namespace).
    - The ip netns exec code can be found here: http://git.kernel.org/cgit/linux/kernel/git/shemminger/iproute2.git/tree/ip/ipnetns.c#n132 - I haven't grasped everything it is doing yet, but it doesn't look very promising.
    - ip netns identify $$, as suggested by "Howto query and change network namespace on linux?", returns nothing when in the default network namespace.
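    A minimal sketch of one way out, assuming util-linux's nsenter is available: PID 1 was started before any custom namespace existed, so its /proc entry is a usable handle on the default network namespace even though /var/run/netns has no file for it:

        # from inside test_ns, join init's network namespace; only the
        # network namespace is switched, everything else stays as-is
        sudo nsenter --net=/proc/1/ns/net bash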


  • Debian x86_64 + Nginx + PHP5-FPM optimization

    - by Olal'a
    I used to have a VPS (512MB) from Linode, and I was running nginx + php5-fpm (which comes with php5.3.3) on Debian Lenny (i686). The total memory usage was about 90-100MB. Now I have another VPS (different hosting company), and I also run nginx + php5-fpm on Debian Lenny (x86_64). The system is 64-bit, so the memory usage is higher now - about 210-230MB - which I think is too much. Here is my php5-fpm.conf:

        pm = dynamic
        pm.max_children = 5
        pm.start_servers = 2
        pm.min_spare_servers = 2
        pm.max_spare_servers = 5
        pm.max_requests = 300

    This is what the top command tells me:

        top - 15:36:58 up 3 days, 16:05, 1 user, load average: 0.00, 0.00, 0.00
        Tasks: 209 total, 1 running, 208 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 99.9%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 532288k total, 469628k used, 62660k free, 28760k buffers
        Swap: 1048568k total, 408k used, 1048160k free, 210060k cached

          PID USER     PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+ COMMAND
        22806 www-data 20  0  178m  67m  31m S    1 13.1 0:05.02 php5-fpm
         8980 mysql    20  0  241m  55m 7384 S    0 10.6 2:42.42 mysqld
        22807 www-data 20  0  162m  43m  22m S    0  8.3 0:04.84 php5-fpm
        22808 www-data 20  0  160m  41m  23m S    0  8.0 0:04.68 php5-fpm
        25102 www-data 20  0  151m  30m  21m S    0  5.9 0:00.80 php5-fpm
        10849 root     20  0 44100 8352 1808 S    0  1.6 0:03.16 munin-node
        22805 root     20  0  145m 4712 1472 S    0  0.9 0:00.16 php5-fpm
        21859 root     20  0 66168 3248 2540 S    1  0.6 0:00.02 sshd
        21863 root     20  0 66028 3188 2548 S    0  0.6 0:00.06 sshd
         3956 www-data 20  0 31756 3052  928 S    0  0.6 0:06.42 nginx
         3954 www-data 20  0 31712 3036  928 S    0  0.6 0:06.74 nginx
         3951 www-data 20  0 31712 3008  928 S    0  0.6 0:06.42 nginx
         3957 www-data 20  0 31688 2992  928 S    0  0.6 0:06.56 nginx
         3950 www-data 20  0 31676 2980  928 S    0  0.6 0:06.72 nginx
         3955 www-data 20  0 31552 2896  928 S    0  0.5 0:06.56 nginx
         3953 www-data 20  0 31552 2888  928 S    0  0.5 0:06.42 nginx
         3952 www-data 20  0 31544 2880  928 S    0  0.5 0:06.60 nginx

    So, the question is: is there any way to use less memory? By the way, I have 16 cores and it would be nice to make use of them...
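    A hedged sketch of the usual knobs, with the caveat that the ondemand process manager only exists in php-fpm 5.3.9 and later (Lenny's 5.3.3 would instead need pm = dynamic with smaller spare counts):

        ; php5-fpm.conf - fork workers only under load, reap them when idle,
        ; instead of keeping spare workers resident
        pm = ondemand
        pm.max_children = 5
        pm.process_idle_timeout = 10s

        # nginx.conf - one worker per core is the usual answer to "make
        # use of the 16 cores"; each nginx worker's RES is tiny anyway
        worker_processes 16;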


  • wget crawling search results of news website

    - by kiltek
    I am trying to crawl the search results of a news website using wget. The name of the website is www.voanews.com. After typing in my search keyword and clicking search, it proceeds to the results. Then I can specify a "to" and a "from" date and hit search again. After this, the URL becomes:

        http://www.voanews.com/search/?st=article&k=mykeyword&df=10%2F01%2F2013&dt=09%2F20%2F2013&ob=dt#article

    and the actual content of the results is what I want to download. To achieve this I created the following wget command:

        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt#article

    Unfortunately, the crawler doesn't download the search results. It only gets into the upper link bar, which contains the "Home, USA, Africa, Asia, ..." links, and saves the articles they link to. It seems like the crawler doesn't check the search result links at all. What am I doing wrong, and how can I modify the wget command to download only the links in the results list (and of course the sites they link to)?
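    One thing worth checking before anything wget-specific, offered as a hedged guess: unquoted, every & in that URL is a shell metacharacter, so the shell backgrounds wget right after ?st=article and the keyword and date parameters never reach it (the #article fragment is also client-side only and is never sent to the server). A minimal sketch:

        # quote the whole URL so wget sees all the query parameters
        wget --reject=js,txt,gif,jpeg,jpg \
             --accept=html \
             --user-agent=My-Browser \
             --recursive --level=2 \
             "http://www.voanews.com/search/?st=article&k=germany&df=08%2F21%2F2013&dt=09%2F20%2F2013&ob=dt"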


  • SendMail not working in CentOs 6.4

    - by Kane
    I am trying to send e-mails from my CentOS 6.4, but it does not work. My knowledge about servers is quite limited, so I hope someone can help me. Here is what I did.

    First I tried to send an email using the "mail" command, but it was not in the OS, so I installed it:

        # yum install mailx

    After that, I tried sending an email using the "mail" command, but it did not send anything. I checked on the internet and realized I needed an e-mail server like sendmail, so I installed it:

        # yum install sendmail sendmail-cf sendmail-doc sendmail-devel

    After that, I configured it following some tutorials. First, the sendmail.mc file:

        # vi /etc/mail/sendmail.mc

    Commented out the next line:

        BEFORE:  DAEMON_OPTIONS(`Port=smtp, Name=MTA') dnl
        AFTER:   dnl DAEMON_OPTIONS(`Port=smtp, Name=MTA') dnl

    Checked that the next lines are correct:

        FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
        ...
        FEATURE(use_cw_file)dnl
        ...
        FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl

    Updated sendmail.cf:

        # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf

    Opened port 25 by adding the proper line in the iptables file:

        # vi /etc/sysconfig/iptables
        -A INPUT -m state --state NEW -m tcp --dport 25 -j ACCEPT

    Restarted iptables and sendmail:

        # service iptables restart
        # service sendmail restart

    So I thought that would be OK, but when I tried:

        # mail '[email protected]'
        # Subject: test subject
        # test content
        # .

    I checked the mail log:

        # vi /var/log/maillog

    And this is what I found:

        Aug 14 17:36:24 dev-admin-test sendmail[20682]: r7D8RItS019578: to=<[email protected]>,
        ctladdr=<[email protected]> (0/0), delay=1+00:09:06, xdelay=00:00:00, mailer=esmtp,
        pri=2460500, relay=alt4.gmail-smtp-in.l.google.com., dsn=4.0.0,
        stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com.

    I do not understand why there is a connection timeout. Am I missing something? Can anyone help me, please? Thank you.
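    A hedged place to start, since dsn=4.0.0 with "Connection timed out" means the TCP connection to Google never opened at all: many hosting providers and ISPs block outbound port 25 to fight spam, and no sendmail.mc change can fix that. A quick test:

        # if this also hangs, outbound port 25 is filtered upstream; the
        # usual workaround is relaying via the provider's smarthost or
        # an authenticated submission port (587)
        telnet alt4.gmail-smtp-in.l.google.com 25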


  • Extract a section of a tgz file

    - by TRiG
    I have a 28.5 GB .tgz file which was created on the command line of a Linux computer, compressing one folder and all its many, many subfolders. I now want to extract a single sub-subfolder from that .tgz file, using 7-Zip on Windows Vista, and I can't see a way to do it. Opening the .tgz file in 7-Zip just shows the .tar file inside it; there doesn't seem to be any way to browse that .tar file and extract the section I want. I assume there is a way to do this, but I can't see it.

    Simply double-clicking on the .tar file brings up a progress bar which runs slowly until my computer complains it's running out of space; I imagine it's trying to extract the whole thing. Searching for "extract section of tgz" and "extract tgz subfolder" and similar found me a way to do it on the Linux command line, but no obvious way to do it on Windows. (Most results found were about extracting into a subfolder, not extracting a subfolder out of the archive.)
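    A minimal sketch using 7-Zip's command-line tool (7z.exe), assuming the folder's path inside the archive is known; the first 7z streams the inner .tar to stdout, so the 28.5 GB .tar never has to land on disk:

        rem decompress to stdout, then extract just one path from the tar stream
        7z x archive.tgz -so | 7z x -si -ttar "path/to/sub-sub-folder/*" -oC:\output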


  • VLAN Tagging Traffic on Cisco Switch

    - by David W
    I have a situation where I'm setting up multiple VLANs on a pfSense firewall on the same physical interface for a client. So in pfSense I now have VLAN 100 (employees) and VLAN 200 (students - the student computer lab). Downstream from pfSense I have a Cisco SG200 switch, and coming off of the SG200 is the student lab, running on a Catalyst 2950. (Yes, that's old, but it works, and this is a poor nonprofit we're talking about.)

    What I'd like to do is tag everything on the network as VLAN 100, except for the student computer lab. Earlier today, when I was on-site with the client, I went into the old Catalyst 2950 and assigned all of its ports to access VLAN 200 (switchport access vlan 200) without setting up a trunk on the Catalyst or on the SG200. Looking back on it, I now understand why internet access in the lab broke. I reverted the lab back to the default VLAN 1 (we're still running on a different firewall - we haven't deployed pfSense yet - and the traffic is still separated physically).

    So my question is: what do I need to do in order to properly deploy this scenario? I believe the correct answer is:

    1. Ensure VLANs 100 and 200 are set up in pfSense, and that DHCP is operating correctly (on separate subnets).
    2. Set up a trunk port that allows both VLAN 100 and 200 traffic, and plug that port directly into pfSense.
    3. Set up a VLAN 200 trunk port on the SG200 (it's not running IOS, but if it were, the command would be switchport trunk native vlan 200), which will then plug into the Catalyst 2950.
    4. Set up a VLAN 200 trunk port on the Catalyst 2950 (the port plugged into the SG200's VLAN 200 port, with the same command - switchport trunk native vlan 200).
    5. Set up the rest of the ports on the old Catalyst 2950 in the lab as access ports on VLAN 200.

    Is there anything I'm missing, or do I need to tweak any of these steps, in order to properly segment the network traffic?
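    For step 4 above, a minimal sketch of the 2950 uplink configuration, hedged since the right interface name and whether the SG200 side sends VLAN 200 tagged or untagged (native) depend on your setup:

        ! Catalyst 2950 port facing the SG200 (2950s only speak 802.1Q)
        interface FastEthernet0/1
         switchport mode trunk
         switchport trunk native vlan 200
        ! if the lab carries only VLAN 200, a plain access port on both
        ! ends ("switchport access vlan 200") also works and is simpler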


  • How can I pause console output in rxvt?

    - by Javid Jamae
    I'm running rxvt in Cygwin on a Windows box. This is how I invoke it:

        rxvt -sr -sl 2500 -sb -geometry 90x30 -tn rxvt -fn "Lucida Console-14" -e /usr/bin/bash --login -i

    Does anyone know how to pause the console output in rxvt? I can use Ctrl-S / Ctrl-Q to pause / un-pause, but this won't work if a script is already running and spewing output to stdout. Highlighting the terminal window with the mouse doesn't seem to work like it does in other consoles, such as the standard Cygwin console or the Windows command prompt console. Some sort of scroll lock would be nice, but I can't seem to find any way to do this.

    I know I could just pipe my output to a file, but I want a way to pause the output of something I didn't expect to explode with console output. Basically, I want to scroll back while it's running, without it constantly moving me to the bottom of the output buffer as it writes more data to stdout. I don't particularly care whether the solution actually pauses the script (like when you highlight with the mouse in the Windows command window) or just scroll-locks and lets me scroll while the underlying script keeps running, though I'd like to know how to do both if possible.
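    A hedged workaround, since it sidesteps rxvt rather than configuring it: run the shell inside GNU screen (available as a Cygwin package), whose copy mode freezes the view and lets you scroll while the program keeps producing output underneath:

        # start a session; later, Ctrl-a [ enters copy/scrollback mode
        # (Esc leaves it) while the job keeps running in the background
        screen -S work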


  • Migrate openldap users and groups

    - by user53864
    I have an OpenLDAP server running on one of my Ubuntu 8.10 servers. I used the command line only for the OpenLDAP installation and some basic configuration; everything else I configure with the Webmin GUI tool. I'm trying to migrate to Ubuntu 10.04, and I was able to migrate all other services, applications and databases, but not the LDAP. I'm an LDAP beginner.

    I installed the OpenLDAP server and client on Ubuntu 10.04 following a guide, and used the following commands to export and import the LDAP users and groups.

    To export from the 8.10 server:

        slapcat > ldap.ldif

    To import to the 10.04 server (with LDAP stopped, then started again afterwards):

        slapadd -l ldap.ldif

    Then I accessed Webmin, checked "LDAP users and groups", and I could see all the users and groups of my old LDAP server. Whenever I create an LDAP user from Webmin (in 8.10 or 10.04), a Unix user is also created with a home directory under /home. But the users imported into 10.04 from 8.10 are not present as Unix users (/etc/passwd). How can I make the LDAP users available as Unix users? Is there a proper way to export and import?

    I also wanted to check from the terminal that the passwords were exported properly, but I don't know how to access LDAP users which are not available as Unix users. On 8.10 I just use su - ldapuser, and that does not work on 10.04, as Unix users were not created for the exported LDAP users. If everything works fine, then CVS works too, as it uses LDAP authentication. Can anybody help me?
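    Two hedged checks that may help here, assuming a standard suffix (swap in your own base DN and user): LDAP entries only show up as Unix accounts once NSS is told to consult LDAP (the libnss-ldap/nss_ldap route on 10.04), so inspect the directory and the NSS view separately:

        # does the account exist in the directory, with its password hash?
        ldapsearch -x -b "dc=example,dc=com" "(uid=someuser)"

        # does NSS resolve it? this only succeeds once /etc/nsswitch.conf
        # lists ldap for the passwd and group databases
        getent passwd someuser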


  • Postfix SMTP-relay server against Gmail on CentOS 6.4

    - by Alex
    I'm currently trying to set up an SMTP relay server to Gmail with Postfix on a CentOS 6.4 machine, so I can send e-mails from my PHP scripts. I followed this tutorial, but I get this error output when trying to do a sendmail [email protected]:

        tail -f /var/log/maillog
        Apr 16 01:25:54 ext-server-dev01 postfix/cleanup[3646]: 86C2D3C05B0: message-id=<[email protected]>
        Apr 16 01:25:54 ext-server-dev01 postfix/qmgr[3643]: 86C2D3C05B0: from=<[email protected]>, size=297, nrcpt=1 (queue active)
        Apr 16 01:25:56 ext-server-dev01 postfix/smtp[3648]: 86C2D3C05B0: to=<[email protected]>, relay=smtp.gmail.com[173.194.79.108]:587, delay=4.8, delays=3.1/0.04/1.5/0.23, dsn=5.5.1, status=bounced (host smtp.gmail.com[173.194.79.108] said: 530-5.5.1 Authentication Required. Learn more at 530 5.5.1 http://support.google.com/mail/bin/answer.py?answer=14257 qh4sm3305629pac.8 - gsmtp (in reply to MAIL FROM command))

    Here is my main.cf configuration; I tried a number of different options but nothing seems to work:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        html_directory = no
        inet_interfaces = localhost
        inet_protocols = ipv4
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = host.local.domain
        myorigin = $myhostname
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        relayhost = [smtp.gmail.com]:587
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous
        smtp_sasl_tls_security_options = noanonymous
        smtp_sasl_type = cyrus
        smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
        smtp_use_tls = yes
        smtpd_sasl_path = smtpd
        unknown_local_recipient_reject_code = 550

    In the /etc/postfix/sasl_passwd files (sasl_passwd & sasl_passwd.db) I have the following (with the real password replaced by "password"):

        [smtp.google.com]:587 [email protected]:password

    I created the sasl_passwd.db file by running this command:

        postmap hash:/etc/postfix/sasl_passwd

    Does anybody have an idea why I can't seem to send an e-mail from the server?

    Kind Regards
    Alex
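    One concrete thing to check, visible in the question itself: the lookup key in sasl_passwd ([smtp.google.com]:587) does not match relayhost ([smtp.gmail.com]:587), so Postfix finds no credentials for the relay and never authenticates - exactly what Gmail's 530 5.5.1 complains about. A minimal sketch of the fix:

        # /etc/postfix/sasl_passwd - the key must match relayhost exactly
        [smtp.gmail.com]:587 [email protected]:password

        # then rebuild the hash map and reload
        postmap hash:/etc/postfix/sasl_passwd
        service postfix reload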


  • Mystery 0xc0000142 error on starting java from a service, as a different user.

    - by cpf
    This is a very convoluted setup, but effectively this is what goes down: a manager service (which I don't have control over) running as admin user X starts my executable, which then starts Java as user Y using the standard C# StartInfo.Username/Password controls.

    Now, from a basic (not elevated or anything, just admin) command prompt I can run that executable, and Java pops up and works fine, running perfectly under the user it should be. When the service runs the same executable, however, Java silently fails. The only hint I see is this series of events in the event viewer:

    1. Service starts.
    2. "Application popup: java.exe - Application Error : The application was unable to start correctly (0xc0000142). Click OK to close the application." (Googling this reveals a lot of scam sites telling me to use their "free antivirus to fix 0xc0000142 errors easy!"... sigh.)
    3. Service stops (the Java shutdown propagated, which is supposed to happen).

    Process Explorer, meanwhile, shows everything as a success for the failed launch. Now, I think this might have something to do with permissions (the user java.exe runs under has traverse permission for the entire drive and full permissions to directory A, which is where the .jar is), but I just can't fathom how something that works fine from the command line (and, this being an upgrade, the previous system without the user-switching aspect works fine from the service) can fail with such a cryptic message and so little showing up in the logs.


  • vmware esxi 5, cant create snapshots and consolidate fails, how to delete old or consolidate redo logs?

    - by Scott Szretter
    I have a VM that seems to be working OK, but when VMware DR (or I) tries to create a snapshot, it fails, and the VM's summary page shows a warning at the top saying that the disks need to be consolidated. So I go to Snapshot Manager for the VM and choose Consolidate (in Snapshot Manager, no snapshots are actually listed, by the way). It fails with this error:

        This virtual machine has 255 or more redo logs in a single branch of its snapshot tree.
        The maximum supported limit has been reached, creating new snapshots will not be allowed.
        To create new snapshots, please delete old snapshots or consolidate the redo logs.

    If I browse the datastore (which has plenty of free space: 2 TB, and this VM is under 40 GB), in the VM's folder I do in fact see a bunch of files, numbered all the way to 0255:

        myvm-000255-ctk.vmdk
        myvm-000255-delta.vmdk
        myvm-000255.vmdk

    How can I clean all this up? Is there an SSH command-line command, or can I delete some of the files safely? Thanks!
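    A hedged sketch of one recovery route via the ESXi shell, assuming the VM can be powered off and the datastore paths below are adjusted to the real ones: vmkfstools can clone from the newest snapshot disk, which collapses the whole redo-log chain into one self-contained disk that can then be attached in place of the chain (keep the originals until the clone boots):

        # clone from the tip of the chain; the result has no redo logs
        vmkfstools -i /vmfs/volumes/datastore1/myvm/myvm-000255.vmdk \
                      /vmfs/volumes/datastore1/myvm/myvm-consolidated.vmdk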


  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant and required. I need to be able to perform periodic maintenance without a maintenance window: say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade.

    Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET).

    I've been thinking along the lines of using the warm spare (or a third server) to do the maintenance, and then "swapping" it into production:

    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections finish within a minute or so.
    4. The spare server is now the production server.

    The remaining problem is that the new production server is now out of date by however long it took to perform the maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?
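    For the index-rebuild case specifically, a hedged aside: SQL Server 2008 Enterprise Edition can rebuild indexes online, which may remove that task from the swap dance entirely (the server, database, table and schedule below are placeholders):

        rem run from a scheduled job or an admin shell
        sqlcmd -S myserver -d mydb -Q "ALTER INDEX ALL ON dbo.BigTable REBUILD WITH (ONLINE = ON)"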


  • set service dependency on internet connection

    - by nccsbim071
    Hi, I have created a Windows service and set some dependencies, like on MSMQ and MSSQLSERVER, and everything is working nicely. But I need to set another dependency for my service: on the internet connection. My service is responsible for sending emails. As soon as my server starts, my service starts too and checks whether there is anything to send; if there is, it starts to send email, but if it cannot connect to the internet during sending, the email cannot go out. So I guess I should set my service's dependency on the internet connection too.

    I already set my Windows service's dependencies on Microsoft SQL Server and Microsoft Message Queuing by editing the registry: adding a new multi-string value named "DependOnService", of type "REG_MULTI_SZ", containing the names of the services that my service depends upon. For Microsoft SQL Server I set the value to "MSSQLSERVER", but I don't know the name of the internet service that I need to set a dependency upon. How can I do this? Any help please. Thanks
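    A hedged note and sketch: Windows has no single "internet connection" service to depend on - and a dependency only orders service startup, it cannot guarantee actual connectivity - so the closest approximations are the network-stack services, with a retry loop in the sending code as the real safety net. The service names below are the standard ones (verify with sc query); MyEmailService stands in for your service's name:

        rem dependencies are separated by / in sc's syntax, and the
        rem space after depend= is required
        sc config MyEmailService depend= MSMQ/MSSQLSERVER/Tcpip/Dhcp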


  • How do I change the Dropbox directory on a headless GNU/Linux server?

    - by DrTwox
    I have installed Dropbox 2.0.0 via the command line on my home server (Ubuntu Server 12.04) to use for off-site automated backups, but I can't change the directory that the Dropbox daemon keeps synced. I've tried the following:

    - The official docs say to use the desktop application, which is not applicable in my situation. However, I installed the desktop app on my desktop machine and changed the default folder location, but I can't find where this change is stored in the ~/.dropbox/ directory, so I can't make the same change on the server.
    - This page (and several others) recommends a Python script to do the job. Looking at the script, it opens a SQLite database called ~/.dropbox/dropbox.db, which does not exist in my Dropbox install, leading me to believe the script is out of date.
    - This forum thread suggests manually inserting the required row in the config.db database, which I did, but it made no difference. I checked the same database file on my desktop machine, and it does not have the dropbox_path key, so I presume the information in that thread is also out of date for version 2.0.
    - I have tried to launch the Dropbox GUI configuration wizard over SSH with X11 forwarding, as suggested in one of the answers, but the binary must detect the absence of a local X11 install, and it starts a command-line daemon instead, which provides no means to change the option I need.

    I am currently using a symlink, as suggested in an answer, but this is a kludge. I would like to know the correct way to make the change. How do I change the Dropbox directory on a headless GNU/Linux server?

    Update: I've ditched Dropbox and started using Copy. Their Linux tools and support are far superior to Dropbox's. I leave this question here in case someone, someday, can answer it.


  • Terminate child processes on ctrl-c

    - by jackweirdy
    In Tiny Core Linux, I have the following script:

        #!/bin/sh
        # ~/.X.d/freerdp.sh

        rdp() {
            while true
            do
                xfreerdp -f [IP Address]
            done
        }

        rdp &

    It's pretty simple: when X starts up and checks the .X.d directory (as it does in Tiny Core), it finds and executes this script. The script starts up xfreerdp and keeps a connection open to the server by restarting it whenever it closes. As you can see from the rdp & line, the function is run in the background to allow X to continue its startup routine.

    The problem is that whenever I cancel X with Ctrl-Alt-Backspace, the rdp process doesn't die. I'm looking for a way to kill the process as soon as X finishes, either through:

    A) a script, executed when X closes, which kills the process, or
    B) modifying the script to check the return value of the xfreerdp command.

    NB - if the solution does check the return value, it must only end if the command fails to open the X display. For that reason, if you could point me to a reference for xfreerdp return values, I'd be grateful.
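    A hedged sketch of option B that sidesteps xfreerdp's return values entirely by probing the display directly: xset q exits non-zero when it cannot reach $DISPLAY, so the loop stops retrying once X is gone. This assumes xset is present in the Tiny Core install; if not, any cheap command that opens the display would serve the same purpose:

        rdp() {
            # keep reconnecting only while the X display still answers
            while xset q >/dev/null 2>&1
            do
                xfreerdp -f [IP Address]
            done
        }
        rdp &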


  • Recovering a mdadm+lvm+ext4 partition with read error

    - by bitwelder
    One of the disks in my NAS has failed. The NAS runs Linux, and it uses mdadm + LVM for its filesystems. I have backups for most of the contents, but not for the very latest changes, and if possible I'd like to recover them from this failing disk. The disk (a 'green drive' WD10EARS, 1TB in size) throws this kind of error:

        Oct  3 12:00:41 kernel: [ 3625.620000] ata5.00: read unc at 9453282
        Oct  3 12:00:41 kernel: [ 3625.620000] lba 9453282 start 9453280 end 1953511007
        Oct  3 12:00:41 kernel: [ 3625.620000] sde5 auto_remap 0
        Oct  3 12:00:41 kernel: [ 3625.630000] ata5.00: exception Emask 0x0 SAct 0x1 SErr 0x0 action 0x6
        Oct  3 12:00:41 kernel: [ 3625.630000] ata5.00: edma_err_cause=00000084 pp_flags=00000003, dev error, EDMA self-disable
        Oct  3 12:00:41 kernel: [ 3625.640000] ata5.00: failed command: READ FPDMA QUEUED
        Oct  3 12:00:41 kernel: [ 3625.650000] ata5.00: cmd 60/40:00:e0:3e:90/00:00:00:00:00/40 tag 0 ncq 32768 in
        Oct  3 12:00:41 kernel: [ 3625.650000]          res 41/40:00:e2:3e:90/12:00:00:00:00/40 Emask 0x409 (media error) <F>
        Oct  3 12:00:41 kernel: [ 3625.660000] ata5.00: status: { DRDY ERR }

    However, while testing with dd, I noticed that if I skip the first 4kB, the read seems to be OK, i.e. a command like

        dd if=/dev/sde5 of=/dev/null bs=4k count=1000 skip=1

    doesn't return any read error. Supposing there is no other read failure in the rest of the disk, would I be able to recover this 900 GB partition (as mentioned before, it's a 'linux raid autodetect' partition that contains an LVM2 volume that contains an ext4 filesystem) if I copy-clone the partition somewhere else, all but the first 4kB?
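    A hedged sketch of the usual first move here, assuming GNU ddrescue is available and a healthy disk with 900+ GB of room is mounted at /mnt/spare: image the failing partition rather than working the dying drive, then point the mdadm/LVM stack at the copy:

        # ddrescue skips unreadable sectors, records them in the map
        # file, and can be re-run later to retry just the bad spots
        ddrescue /dev/sde5 /mnt/spare/sde5.img /mnt/spare/sde5.map

        # the image can then be attached via a loop device so the
        # mdadm -> LVM -> ext4 stack can be assembled read-only on top
        losetup --find --show /mnt/spare/sde5.img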


  • Unable to Use Bluetooth Mighty Mouse or Wireless Keyboard with Boot Camp

    - by Kristopher Johnson
    I have Windows 7 64-bit running on a MacBook Pro in a Boot Camp partition. I am trying to pair with my Bluetooth Mighty Mouse and Apple wireless keyboard under Windows, but whenever I try to do so, here's what happens:

    1. While on the "Add a device" window, I turn on the mouse or press a key on the keyboard, and the mouse or keyboard shows up in the list of available devices.
    2. I click the device and then the Next button, and the window displays "Connecting to device...".
    3. Time passes. Eventually, I get this error message:

        Adding this device to this computer failed.
        Adding the device failed resulting in an unknown error.
        The reported error code is 0x80070015.
        Contact your device manufacturer for assistance.

    I've run Windows Update and Apple Software Update, and I've also tried reinstalling the drivers from the Snow Leopard DVD. The mouse and keyboard both work fine when I boot into Mac OS X.

    FWIW, after many, many repeated tries, I eventually got it to work; I don't know why. So while my problem is solved, I'd still like an "answer" as to why trial and error seems to be the only approach. The keyboard, in particular, was hard to set up: a few times, Windows would apparently recognize it and prompt me to enter the pairing code, but then time out after a couple of seconds (not long enough to enter the code). Grrrr.


  • error: unexplained error (code 130) at rsync.c(541) [sender=3.0.7]

    - by brazorf
    This error:

        unexplained error (code 130) at rsync.c(541) [sender=3.0.7]

    started happening after I changed routers. Actually, I found out that this error only appears on a Ctrl-C, so it may not be representative of the problem itself. The command I run is very basic:

        rsync -avz --delete /local/path/ username@host:/path/to/remote/directory

    Basically, rsync just gets stuck and nothing happens until I press Ctrl-C. After interrupting the process, I get the error in the subject. I paste the whole thing here:

        rsync -avvvvz --delete /source/path/ username@host:/path/to/direectory
        cmd=<NULL> machine=HOSTNAME user=username path=/path/to/direectory
        cmd[0]=ssh cmd[1]=-l cmd[2]=username cmd[3]=HOSTNAME cmd[4]=rsync cmd[5]=--server cmd[6]=-vvvvlogDtprze.iLsf cmd[7]=--delete cmd[8]=. cmd[9]=/path/to/direectory
        opening connection using: ssh -l username HOSTNAME rsync --server -vvvvlogDtprze.iLsf --delete . /path/to/direectory
        note: iconv_open("UTF-8", "UTF-8") succeeded.
        ^C[sender] _exit_cleanup(code=20, file=rsync.c, line=541): entered
        rsync error: unexplained error (code 130) at rsync.c(541) [sender=3.0.7]
        [sender] _exit_cleanup(code=20, file=rsync.c, line=541): about to call exit(130)

    Authentication runs over SSH with an RSA key. I tried basic troubleshooting, such as:

    - pinging the remote host
    - ssh -l username remote.host
    - checking the software firewall logs

    I asked the remote host's sysadmin to check the logs: when I run that command, an SSH connection is actually established, so I can state there is no communication/authentication/name-resolution issue here. Rolling back to the old router makes this work again. Both client and server run Ubuntu 10.04. I tried to take a look at my router's configuration, where I'm not experienced at all, but I didn't see anything suspect (what I was looking for was the firewall blocking something). The router itself is a D-Link DVA-G3670B. Any suggestions? Thank you. F.
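    Since exit code 130 is just 128 + SIGINT (the Ctrl-C itself), the real symptom is the silent stall, and a router change triggering it makes path MTU trouble a plausible hedged guess: small packets (the SSH handshake) get through while bulk data doesn't. Two quick probes:

        # watch the transport: if ssh's own verbose output stalls right
        # after authentication, suspect the path rather than rsync
        rsync -avz --delete -e 'ssh -v' /local/path/ username@host:/path/to/remote/directory

        # can a full-size unfragmented packet cross the new router?
        # (1472 data + 28 header = 1500; shrink -s until it succeeds)
        ping -M do -s 1472 host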

