Search Results

Search found 43847 results on 1754 pages for 'command line arguments'.


  • Automatically generated /etc/hosts is wrong

    - by Niels Basjes
    I've created a kickstart script to install CentOS 5.5 (32-bit) in a fully automated way. The DNS/DHCP setup correctly gives the system the right hostname in both the forward and reverse lookups:

        dig node4.mydomain.com. +short
        10.10.10.64
        dig -x 10.10.10.64 +short
        node4.mydomain.com.

    The state of the installed system right after the installation completes is as follows:

        cat /etc/sysconfig/network
        NETWORKING=yes
        NETWORKING_IPV6=yes
        GATEWAY=10.10.10.1
        HOSTNAME=node4.mydomain.com

        echo ${HOSTNAME}
        node4.mydomain.com

        cat /etc/hosts
        # Do not remove the following line, or various programs
        # that require network functionality will fail.
        127.0.0.1       localhost.localdomain localhost
        ::1             localhost6.localdomain6 localhost6
        10.10.10.64     node4

    My problem is that this automatically generated hosts file is slightly different from the way I want it (or better: the way Hadoop wants it). The last line should look like this:

        10.10.10.64     node4.mydomain.com node4

    What do I modify, and where, to fix this? Thanks.
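    One way to adjust the generated file is a %post snippet in the kickstart itself. This is a sketch, not tested against CentOS 5.5; it assumes /etc/sysconfig/network already carries the FQDN by the time %post runs:

        %post
        # Rewrite the generated line so the FQDN comes before the short name.
        . /etc/sysconfig/network
        SHORT=${HOSTNAME%%.*}
        sed -i "s/\([[:space:]]\)${SHORT}\$/\1${HOSTNAME} ${SHORT}/" /etc/hosts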

    Read the article

  • Wget save cookies not working

    - by TrymBeast
    I've been trying to log in to pyLoad through the web API, but wget is not saving the cookies and I don't understand why. I'm using the following command:

        wget --delete-after --keep-session-cookies --save-cookies=my_cookies.txt \
             --post-data="username=USERNAME&password=PASSWORD" http://localhost:8000/api/login

    But the content of my_cookies.txt is:

        # HTTP cookie file.
        # Generated by Wget on 2012-06-23 22:31:33.
        # Edit at your own risk.

    When I run the same command in debug mode, I get the following output, which includes the Set-Cookie in the response header:

        DEBUG output created by Wget 1.10.2 (Red Hat modified) on linux-gnueabi.
        --22:31:11--  http://localhost:8000/api/login
        Resolving localhost... 127.0.0.1
        Caching localhost => 127.0.0.1
        Connecting to localhost|127.0.0.1|:8000... connected.
        Created socket 3.
        Releasing 0x000504d0 (new refcount 1).
        ---request begin---
        POST /api/login HTTP/1.0
        User-Agent: Wget/1.10.2 (Red Hat modified)
        Accept: */*
        Host: localhost:8000
        Connection: Keep-Alive
        Content-Type: application/x-www-form-urlencoded
        Content-Length: 32
        ---request end---
        [POST data: username=USERNAME&password=PASSWORD]
        HTTP request sent, awaiting response...
        ---response begin---
        HTTP/1.1 200 OK
        Content-Length: 34
        Content-Type: application/json
        Cache-Control: no-cache, must-revalidate
        Set-cookie: beaker.session.id=405390ddc809efed54820638c95d7997; expires=Tue, 19-Jan-2038 04:14:07 GMT; Path=/
        Connection: Keep-Alive
        Date: Sat, 23 Jun 2012 21:31:11 GMT
        Server: CherryPy/3.1.2 WSGI Server
        ---response end---
        200 OK
        hs->local_file is: login (not existing)
        Registered socket 3 for persistent reuse.
        TEXTHTML is on.
        Length: 34 [application/json]
        Saving to: `login'
        100%[=======================================>] 34  --.-K/s  in 0s
        22:31:11 (1.28 MB/s) - `login' saved [34/34]
        Removing file due to --delete-after in main(): Removing login.
        Saving cookies to my_cookies.txt.
        Done saving cookies.

    Can anyone tell me what I am doing wrong? Thanks in advance!
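    As a cross-check (not a fix for wget itself), curl's cookie jar stores both session and persistent cookies with no extra flags, which would at least confirm the server side is behaving; the second endpoint below is illustrative:

        curl --cookie-jar my_cookies.txt \
             --data "username=USERNAME&password=PASSWORD" \
             http://localhost:8000/api/login
        # Reuse the saved session on a later call:
        curl --cookie my_cookies.txt http://localhost:8000/api/getServerVersion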

    Read the article

  • Ubuntu 10.04 Lucid Server minimal install: slow terminal scrolling

    - by noname
    I have a minimal install of Ubuntu 10.04 Server for testing and learning purposes. There is a very annoying occurrence: whenever I run man dpkg or any command that loads a few screens' worth of text (e.g. ls -al), the redraw speed of the console is just way too slow. I can see how each new line causes the whole screen to redraw. Note that this doesn't happen inside X; no GUI is installed. I have experimented with adding vesafb to the grub line as some guides suggested, but no speedup happened. You might be able to reproduce this behaviour on your Linux system by switching to a terminal using CTRL+ALT+F1. Is there any way to speed scrolling up?
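    Two things worth trying while the cause is unconfirmed (the framebuffer guess below is an assumption, not a diagnosis):

        # Page long output instead of scrolling the console at all:
        ls -al | less
        # If a framebuffer console is the culprit, drop any vga=/splash
        # options from GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
        sudo update-grub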

    Read the article

  • How to configure a shortcut for an SSH connection through an SSH tunnel

    - by Simone Carletti
    My company's production servers (FOO, BAR, ...) are located behind two gateway servers (A, B). In order to connect to server FOO, I have to open an SSH connection with server A or B with my username JOHNDOE; then from A (or B) I can access any production server by opening an SSH connection with a standard username (let's call it WEBBY). So, each time, I have to do something like:

        ssh johndoe@a
        ...
        ssh webby@foo
        ...
        # now I can work on the server

    As you can imagine, this is a hassle when I need to use scp or when I need to quickly open multiple connections. I have configured an SSH key, and I'm also using .ssh/config for some shortcuts. I was wondering if I can create some kind of SSH configuration so that I can type ssh foo and let SSH open/forward all the connections for me. Is it possible?

    Edit: womble's answer is exactly what I was looking for, but it seems I can't use netcat right now because it's not installed on the gateway server:

        weppos:~ weppos$ ssh foo -vv
        OpenSSH_5.1p1, OpenSSL 0.9.7l 28 Sep 2006
        debug1: Reading configuration data /Users/xyz/.ssh/config
        debug1: Applying options for foo
        debug1: Reading configuration data /etc/ssh_config
        debug2: ssh_connect: needpriv 0
        debug1: Executing proxy command: exec ssh a nc -w 3 foo 22
        debug1: permanently_drop_suid: 501
        debug1: identity file /Users/xyz/.ssh/identity type -1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_rsa type 1
        debug2: key_type_from_name: unknown key type '-----BEGIN'
        debug2: key_type_from_name: unknown key type 'Proc-Type:'
        debug2: key_type_from_name: unknown key type 'DEK-Info:'
        debug2: key_type_from_name: unknown key type '-----END'
        debug1: identity file /Users/xyz/.ssh/id_dsa type 2
        bash: nc: command not found
        ssh_exchange_identification: Connection closed by remote host
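    Without netcat on the gateway, one alternative is OpenSSH's built-in forwarding (ssh -W), which needs OpenSSH 5.4 or newer on the client, i.e. newer than the 5.1 shown above. A sketch of ~/.ssh/config, with the host aliases as assumptions:

        Host a
            User johndoe
        Host foo
            User webby
            ProxyCommand ssh a -W %h:%p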

    Read the article

  • SSH connection 'Operation timed out' when using rsync from cron

    - by Mark Molina
    I use rsync to back up my remote server on my local device, but when I combine it with a cron job my SSH connection times out. Just to be clear: the data is stored on a remote server and I want it stored on my local server, so the backup request must be sent from my local server to the remote server. The command for backing up the data works when I just type it in a terminal, like this:

        rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    but when I combine it with a cron job like this:

        10 11 * * * rsync -chavzP --stats USERNAME@IP_ADDRESS: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP

    the SSH connection times out. When the cron job executes, it sends a mail to the root user with output like this:

        From local.xx.xx.xx Tue Jul 2 11:20:17 2013
        X-Original-To: username
        Delivered-To: [email protected]
        From: [email protected] (Cron Daemon)
        To: [email protected]
        Subject: Cron <username@server> rsync -chavzP --stats USERNAME@IPADDRES: PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/bin:/bin>
        X-Cron-Env: <LOGNAME=username>
        X-Cron-Env: <USER=username>
        X-Cron-Env: <HOME=/Users/username>
        Date: Tue, 2 Jul 2013 11:20:17 +0200 (CEST)

        ssh: connect to host IP_ADDRESS port XX: Operation timed out
        rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
        rsync error: unexplained error (code 255) at /SourceCache/rsync/rsync-42/rsync/io.c(452) [receiver=2.6.9]

    So the rsync command works when typed in a terminal, but not when run by a cron job. Can anybody explain this?
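    One hedged lead: the mail shows a non-default port ("port XX"), which suggests the interactive shell picks up something (an ssh_config entry, an agent, or a VPN only active in the session) that cron does not. Making the SSH settings explicit in the crontab removes one of those differences; the port and key path below are placeholders:

        10 11 * * * rsync -chavzP --stats -e "ssh -p 2222 -i /Users/username/.ssh/id_rsa" USERNAME@IP_ADDRESS:PATH_TO_BACKUP LOCAL_PATH_TO_BACKUP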

    Read the article

  • LibreOffice Writer PDF export: white borders appear between lines even after setting borders to none

    - by Yttric
    My document prints fine (exactly as it appears on screen in LibreOffice), but when I export to PDF and view the PDF on screen, there are white borders around each text or picture object. Here's a sample snapshot from PDF/Preview: http://imgur.com/TWip5 I've tried selecting a paragraph and changing the border property to None as described in the LibreOffice help (http://help.libreoffice.org/Common/Borders), setting the "Line arrangement Default" to "Set no borders". But the borders set by the Format dialog don't correspond to the borders I see in PDF/Preview: in PDF/Preview the border appears on line boundaries, whereas borders set in Format appear around each picture, for example. What am I doing wrong?

    Read the article

  • Mac OS X Disk Encryption - Automation

    - by jfm429
    I want to set up a Mac Mini server with an external drive that is encrypted. In Finder, I can use the full-disk encryption option; however, for multiple users this could become tricky. What I want to do is encrypt the external volume, then set things up so that when the machine boots, the disk is unlocked so that all users can access it. Of course permissions need to be maintained, but that goes without saying. What I'm thinking of doing is setting up a root-level launchd script that runs once on boot and unlocks the disk. The encryption keys would probably be stored in root's keychain. So here's my list of concerns:

    1. If I store the encryption keys in the system keychain, then the file in /private/var/db/SystemKey could be used to unlock the keychain if an attacker ever gained physical access to the server. This is bad.
    2. If I store the encryption keys in my user keychain, I have to manually run the command with my password. This is undesirable.
    3. If I run a launchd script with my user credentials, it will run under my user account but won't have access to the keychain, defeating the purpose.
    4. If root has a keychain (does it?), then how would it be decrypted? Would it remain locked until the password was entered (like the user keychain), or would it have the same problem as the system keychain, with keys stored on the drive and accessible with physical access?

    Assuming all of the above works, I've found diskutil coreStorage unlockVolume, which seems to be the appropriate command, but where to store the encryption key is the biggest problem. If the system keychain is not secure enough, and user keychains require a password, what's the best option?
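    For the unlock step itself, a minimal sketch of the boot-time script, run by a root LaunchDaemon; the UUID and key-file path are placeholders, and it assumes keeping a root-only key file (chmod 600, root:wheel) on the boot volume is an acceptable trade-off:

        #!/bin/sh
        # /usr/local/sbin/unlock-external.sh
        LV_UUID="11111111-2222-3333-4444-555555555555"   # placeholder LV UUID
        # Feed the passphrase over stdin so it never appears in the process list:
        diskutil coreStorage unlockVolume "$LV_UUID" -stdinpassphrase < /var/root/.diskkey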

    Read the article

  • SSH client not showing prompt after successful login

    - by user431949
    I'm having problems with my SSH client on Ubuntu 10.10. When I switch on my computer, open a terminal, and execute ssh user@host, it gives me a password prompt; after I enter the right password, I get a prompt to execute my commands on the remote computer. Now the problem is, after a little while (probably around 10 minutes), the terminal window stops accepting commands (no matter what I type, nothing shows). Once this happens, I close the terminal window and try to start all over again by opening another terminal window. But this time, after entering the right password, I don't get a welcome message or prompt; the cursor just keeps blinking on a new line. I ran the ssh command with the -v parameter, and the message I get after a successful login is:

        debug1: Authentication succeeded (password).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        debug1: Sending environment.
        debug1: Sending env LANG = en_GB.utf8

    Still the cursor keeps blinking on a new line without a prompt. However, the PuTTY SSH client works perfectly on the same machine. Thank you very much for your time; your help would be greatly appreciated.
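    A few diagnostics that can narrow this down (hedged; they assume the server runs a POSIX shell):

        # If this prints, the SSH transport is fine and the hang is in the shell:
        ssh user@host 'echo remote ok'
        # Bypass ~/.bashrc and friends:
        ssh -t user@host /bin/sh
        # The ~10-minute freeze pattern often points at an idle NAT/firewall
        # timeout; keepalives are a cheap test:
        ssh -o ServerAliveInterval=60 user@host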

    Read the article

  • Rebuild Fedora 19 ISO adding Kickstart for USB install

    - by dooffas
    I am attempting to edit a Fedora 19 DVD ISO to add a kickstart file. I then need this ISO burnt to a USB stick for installation. The error I get when booting is:

        Warning: Could not boot.
        Warning: /dev/root does not exist

    To try to determine which part of the process is failing, I have broken the process down into separate stages.

    Step 1: Burn the original ISO "Fedora-19-x86_64-DVD.iso" (available here) to a pen drive and see if that will install:

        dd if=/path/to/iso of=/dev/sdc

    Burning this image was successful, and it installed without issue.

    Step 2: Extract the ISO, repackage it, and burn it to a pen drive and see if that will install. PLEASE NOTE: the final command in this section has been broken over multiple lines for ease of reading; in fact it was run as a single command on one line.

        mkdir -p /mnt/linux
        mount -o loop /tmp/linux-install.iso /mnt/linux
        cd /mnt/
        tar -cvf - linux | (cd /var/tmp/ && tar -xf - )
        cd /var/tmp/linux
        xorriso -as mkisofs -R -J -V "NewFedoraImage" -o ouput/file.iso
            -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot
            -boot-load-size 4 -boot-info-table
            -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .

    This ISO was then burnt to the pen drive as before:

        dd if=/path/to/iso of=/dev/sdc

    This ISO burnt to the pen drive with no problem and will boot; I then see the Fedora options screen. After choosing either "Install Fedora 19" or "Test this media & install Fedora 19", I receive the errors highlighted above. This means the kickstart file is not to blame, but rather the repackaging of the ISO. Is there something I am missing in the repackaging process? Any input would be great! NOTE: If it is of any help, I attempted Step 2 with an Ubuntu Server ISO and the process was successful.
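    One hedged explanation that fits these symptoms: the Fedora DVD's boot entries locate the installer root by ISO volume label (inst.stage2=hd:LABEL=...), so repackaging with -V "NewFedoraImage" breaks the lookup and dracut never finds /dev/root. The label below is indicative; the authoritative value is whatever the original ISO's isolinux/isolinux.cfg says:

        # Check what label the boot config expects (spaces appear as \x20):
        grep stage2 /mnt/linux/isolinux/isolinux.cfg
        # Rebuild with the matching label:
        xorriso -as mkisofs -R -J -V "Fedora 19 x86_64" -o output/file.iso \
            -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
            -boot-load-size 4 -boot-info-table \
            -isohybrid-mbr /usr/share/syslinux/isohdpfx.bin .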

    Read the article

  • How do I set "relay_hosts_only" setting using sendmail / m4

    - by Dave
    We're using CentOS and sendmail's m4 configuration. How do I set the domains where email should be delivered? I only want two domains, and would like email to all other domains blocked. I tried this in my /etc/mail/sendmail.mc file:

        FEATURE(`relay_hosts_only,mydomain1.com,mydomain2.com')dnl

    But then I got this error trying to generate the sendmail.cf file:

        [dalvarado@mymachine ~]$ sudo m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        m4:/etc/mail/sendmail.mc:156: Warning: excess arguments to builtin `include' ignored
        m4:/etc/mail/sendmail.mc:156: cannot open `/usr/share/sendmail-cf/feature/relay_hosts_only': No such file or directory

    Thanks for your advice, - Dave
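    For what it's worth, the m4 error comes from passing the domain list inside the feature name: FEATURE() expects just the feature here, and the hosts/domains allowed to relay live in a separate file. A sketch (note this feature controls relaying, which may not be the same thing as blocking outbound destinations):

        dnl In /etc/mail/sendmail.mc:
        FEATURE(`relay_hosts_only')dnl

        # /etc/mail/relay-domains -- one entry per line:
        mydomain1.com
        mydomain2.com

        # Regenerate and restart:
        m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
        /etc/init.d/sendmail restart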

    Read the article

  • Help me exorcise my demon possessed logon script

    - by Detritus Maximus
    I have a user logon script that copies a file over to a subfolder of the current user's profile path. Script (only showing the line that isn't working):

        copy /Y c:\records\javasettings_Windows_x86.xml "%USERPROFILE%\Application Data\OpenOffice.org\3\user\config">>c:\records\OOo3%USERNAME%.txt 2>&1

    To diagnose why it wasn't working, I enabled a log file on the Group Policy script and found that the above command translates to this:

        C:\WINDOWS>copy /Y c:\records\javasettings_Windows_x86.xml "C:\Documents and Settings\test2\Application Data\OpenOffice.org\3\user\config" 1>>c:\records\OOo3test2.txt 2>&1

    So the question is, how do I get rid of (exorcise) the " 1" in that line? Update 1: The reason the script wasn't working was that the creator didn't have any permissions on the directory. I fixed the permissions, and now the file copies, but! I still have the " 1" showing in all the logs and would like to know why.
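    A note on that " 1": it is not an extra argument at all. cmd.exe numbers its standard handles (0 = stdin, 1 = stdout, 2 = stderr), and >> with no handle number is shorthand for 1>>, so the logged line is just cmd's canonical spelling of the same redirection:

        :: These two lines are equivalent in cmd.exe; the logged form
        :: simply makes the stdout handle explicit.
        copy /Y src.xml "dest" >>log.txt 2>&1
        copy /Y src.xml "dest" 1>>log.txt 2>&1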

    Read the article

  • In gnuplot, how to plot with lines but skip missing data points?

    - by Anna
    I've got a value associated with each day, as such:

        120530 70.1
        120531 69.0
        120601 69.2
        120602 69.5
        # and so on for 200 lines

    When plotting this data in gnuplot with lines, the data points are nicely connected. Unfortunately, in places over a week of data points can be missing, and gnuplot draws long lines over these intervals. How can I make gnuplot only connect points on consecutive days? Solutions that require preprocessing of the data are fine, as I already smooth it with a script. Here is what I use:

        set xdata time
        set timefmt "%y%m%d"
        plot "vikt_ma.txt" using 1:2 with lines title "first line", \
             "" using 1:3 with lines title "second line"
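    Since preprocessing is acceptable: gnuplot starts a new line segment whenever it hits a blank record in a data file, so a small gawk pass that inserts a blank line at every gap larger than a day would do it. A sketch, assuming gawk (for mktime) and that all two-digit years are 20xx; the 1.5-day threshold avoids false breaks on DST transitions:

        gawk '{
          t = mktime(sprintf("20%s %s %s 0 0 0",
                             substr($1,1,2), substr($1,3,2), substr($1,5,2)))
          if (NR > 1 && t - prev > 129600) print ""  # gap -> break the line
          prev = t
          print
        }' vikt_ma.txt > vikt_gaps.txt

    Then plot "vikt_gaps.txt" instead of "vikt_ma.txt".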

    Read the article

  • Break all hardlinks within a folder

    - by Georges Dupéron
    I have a folder which contains a certain number of files which have hard links (in the same folder or somewhere else), and I want to de-hardlink these files so they become independent: changes to their contents won't affect any other file (their link count becomes 1). Below, I give a solution which basically copies each hard link to another location, then moves it back in place. However, this method seems rather crude and error-prone, so I'd like to know if there is some command which will de-hardlink a file for me.

    Crude answer: find files which have hard links (Edit: to also find sockets, etc. that have hard links, use find -not -type d -links +1):

        find -type f -links +1

    A crude method to de-hardlink a file (copy it to another location, and move it back). Edit: as Celada said, it's best to do a cp -p below, to avoid losing timestamps and permissions. Edit: create a temporary directory and copy to a file under it, instead of overwriting a temp file; this minimizes the risk of overwriting some data, though the mv command is still risky (thanks @Tobu).

        # This is unhardlink.sh
        set -e
        for i in "$@"; do
          temp="$(mktemp -d ./hardlnk-XXXXXXXX)"
          [ -e "$temp" ] && cp -ip "$i" "$temp/tempcopy" && mv "$temp/tempcopy" "$i" && rmdir "$temp"
        done

    So, to un-hardlink all hard links (Edit: changed -type f to -not -type d, see above):

        find -not -type d -links +1 -print0 | xargs -0 unhardlink.sh

    Read the article

  • Is there an SSL equivalent to an ssh agent?

    - by Matthew J Morrison
    Here is my situation: There are a number of developers who all need to have access to be able to install ruby gems and python eggs from a remote source. Currently, we have a server inside our firewall that hosts the gems and eggs. We now want the ability to be able to install things hosted on that server outside of our firewall. Since some of the gems and eggs that we host are proprietary I would like to somewhat lock access to that machine down, as unobtrusively as possible to the developers. My first thought was using something like ssh keys. So, I spent some time looking at SSL mutual authentication. I was able to get everything set up and working correctly, testing with curl, but the unfortunate thing was that I had to pass extra arguments to curl so it knows about the certificate, key and certificate authority. I was wondering if there is anything like the ssh agent that I can set up to provide that information automatically so that I can push the certificates and keys to the developer's machines so the developers don't have to log in or provide keys each time they try to install something. Another thing that I want to avoid is having to modify the 'gem' command and the 'pip' command to provide keys when they make the http connection. Any other suggestions that may solve this problem (not related to ssl mutual auth) are also welcome. EDIT: I've been continuing to research this and I came across stunnel. I think this may be what I'm looking for, any feedback regarding stunnel would also be great!
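    Since stunnel came up: a minimal client-side sketch of that idea, with every path and hostname a placeholder. Developers would point gem/pip at the local port while stunnel supplies the client certificate:

        ; stunnel.conf (client mode) -- illustrative values only
        [gems]
        client = yes
        accept = 127.0.0.1:8808
        connect = gems.internal.example.com:443
        cert = /etc/stunnel/dev-client.pem
        CAfile = /etc/stunnel/internal-ca.pem

    Then, for example, gem sources --add http://127.0.0.1:8808/ keeps the gem command itself unmodified.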

    Read the article

  • Wrapping text in an opened file in vim

    - by TK
    I want to soft-wrap text in Vim at 90 columns per line. I want a soft wrap so that it doesn't affect the actual text by adding line-break characters. Here is what I tried (opened a file with lots of text and ran the following):

        set wrap
        set tw=90
        set linebreak

    Running the commands doesn't change anything about the view at all; it still soft-wraps at the edge of the window. I have used "Soft Wrap" in TextMate (Command-Option-W) to get the same effect, and want to know how to get it to work in Vim.
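    A hedged note on why nothing changed: 'textwidth' only governs hard wrapping while typing, and Vim's soft wrap ('wrap' plus 'linebreak') always breaks at the window edge, never at an arbitrary column. The usual workaround is to make the visible text area 90 columns wide:

        " soft wrap at word boundaries, no text modification
        set wrap linebreak
        " make the window itself ~90 columns (GUI or resizable terminal assumed)
        set columns=90
        " alternatively (Vim 7.3+), keep the window wide but mark column 90:
        set colorcolumn=90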

    Read the article

  • Problem with kickstart

    - by user70523
    Hi, I created a kickstart file, ks.cfg, put it on the bootable disk (Ubuntu 10.04), and then added the following line to isolinux.cfg:

        linux ks=ks.cfg

    I have not removed any other lines from the isolinux.cfg file, but when installing, the installation is not automated; it still asks for language and everything else. If I remove "include menu.cfg" or any other line from isolinux.cfg, I get a boot error. What should I do now to automate the installation? Where should I add the boot parameters so that the installation will start from ks.cfg? Thanks and regards, Ravi Kumar
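    A hedged sketch of where the option usually goes on Ubuntu 10.04 media: ks= is a kernel parameter, so it belongs on the append line of an existing boot entry (isolinux/txt.cfg on the server CD), and it needs a location prefix the installer can resolve:

        label install
          menu label ^Install Ubuntu Server
          kernel /install/vmlinuz
          append file=/cdrom/preseed/ubuntu-server.seed initrd=/install/initrd.gz ks=cdrom:/ks.cfg --

    If the installer's kickstart support rejects the cdrom: form, serving the file over HTTP (ks=http://SERVER/ks.cfg) is a common fallback.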

    Read the article

  • Oracle 10.2.0.1 --> 10.2.0.4 patchset errors on Advanced Queuing tables. Serious or not?

    - by hurfdurf
    We're running Oracle on RHEL 5.4 64-bit. We recently did an upgrade from 10.2.0.1 to 10.2.0.4. Many errors were generated during the upgrade (sample listed below from trace.log), but during application testing afterward everything seemed fine (clean EXP, inserts, updates, deletes, etc.). The errors look like they are all related to Advanced Queuing tables and views. We are not using replication at all; this is a simple single-instance DB.

        ORA-24002: QUEUE_TABLE SYS.AQ_EVENT_TABLE does not exist
        ORA-24032: object AQ$_AQ_SRVNTFN_TABLE_T exists, index could not be created
        ORA-24032: object AQ$_ALERT_QT_S exists, index could not be created for queue
        ORA-06512: at "SYS.DBMS_AQADM_SYSCALLS", line 117
        ORA-06512: at "SYS.DBMS_AQADM_SYS", line 5116

    Is this worth worrying about, and if so, how do I go about cleaning up/recreating the corrupted and/or missing objects?
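    A hedged first step while deciding whether to worry: inventory what the patchset left invalid, then recompile; genuinely missing AQ objects would be a question for Oracle Support rather than a DIY rebuild:

        -- run as SYS: what did the upgrade leave invalid?
        SELECT owner, object_name, object_type, status
          FROM dba_objects
         WHERE status <> 'VALID'
           AND (object_name LIKE 'AQ%' OR object_name LIKE 'DBMS_AQ%');
        -- recompile everything recompilable:
        @?/rdbms/admin/utlrp.sql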

    Read the article

  • Linux commands show different results

    - by ClydeFrog
    I'm really having a hard time processing these results on my Ubuntu server. I have a major problem with my JBoss server where I get FileNotFoundExceptions along with "No space left on device" errors. I thought "maybe I'm out of disk space", and used the df command to figure out how much I have left (output translated from Swedish):

        root@ubuntu1:/# df -h
        Filesystem                 Size  Used Avail Use% Mounted on
        /dev/mapper/ubuntu1-root    36G   13G   21G  38% /
        none                       2,0G  192K  2,0G   1% /dev
        none                       2,0G     0  2,0G   0% /dev/shm
        none                       2,0G   64K  2,0G   1% /var/run
        none                       2,0G     0  2,0G   0% /var/lock
        /dev/sda1                  228M   23M  193M  11% /boot
        /dev/mapper/vgdata-lvdata   79G  9,2G   66G  13% /data

    As you can see, I have plenty of space left. I also checked whether I'm out of inodes:

        root@ubuntu1:/# df -i
        Filesystem                 Inodes   IUsed    IFree IUse% Mounted on
        /dev/mapper/ubuntu1-root  2346512   61992  2284520    3% /
        none                       505380     773   504607    1% /dev
        none                       507383       1   507382    1% /dev/shm
        none                       507383      30   507353    1% /var/run
        none                       507383       2   507381    1% /var/lock
        /dev/sda1                  124496     230   124266    1% /boot
        /dev/mapper/vgdata-lvdata 10486784  233945 10252839    3% /data

    But then I used du:

        root@ubuntu1:/# du -s -h /*
        7,5M    /bin
        23M     /boot
        19G     /data
        192K    /dev
        11G     /eniro
        5,3M    /etc
        112K    /home
        0       /initrd.img
        183M    /lib
        0       /lib64
        16K     /lost+found
        12K     /media
        4,0K    /mnt
        4,0K    /opt
        du: cannot access "/proc/20452/task/20452/fd/3": No such file or directory
        du: cannot access "/proc/20452/task/20452/fdinfo/3": No such file or directory
        du: cannot access "/proc/20452/fd/3": No such file or directory
        du: cannot access "/proc/20452/fdinfo/3": No such file or directory
        0       /proc
        18M     /root
        8,2M    /sbin
        4,0K    /selinux
        8,0K    /srv
        0       /sys
        40K     /tmp
        691M    /usr
        1,2G    /var
        0       /vmlinuz

    Notice that /data and /eniro are 30G combined! How is that possible? Do I have a leak somewhere? Or is it something else?

    ----- EDIT 1 ----- Ok, I figured out that /data has its own mount, so it's not possible to combine /data and /eniro because they aren't on the same filesystem. But how come df says 9,2G used when du says 19G for the directory /data?
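    Two hedged checks that cover both directions of a df/du mismatch; neither is a confirmed diagnosis here:

        # Keep du on a single filesystem so nested or bind mounts under /data
        # are not double-counted (one possible reason du > df):
        du -shx /data
        # The reverse mismatch (df > du) is usually deleted-but-still-open
        # files; given the JBoss "No space left on device" errors, worth a look:
        lsof +L1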

    Read the article

  • How do I recover from a grub renaming issue?

    - by Justin Ardini
    I am currently dual-booting Ubuntu 10.04 and Windows 7, with GRUB 1.98 as the bootloader. I recently decided to edit the GRUB menu to remove the old Ubuntu kernels, so I followed the instructions of this guide, added the line list='version_find_latest $list' to the 10_linux file, and then ran sudo update-grub. Apparently I made a mistake, because upon restart I had these bootloader entries:

        Ubuntu, with Linux $list
        Ubuntu, with Linux $list (recovery mode)
        Ubuntu, with Linux version_find_latest
        Ubuntu, with Linux version_find_latest (recovery mode)

    When I try to load any of these, I get a GRUB error along the lines of:

        Error: not a normal file

    Now I can't start any version of Ubuntu to remove the line I added. What's the best course of action to be able to use Ubuntu again? Thanks.
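    A recovery sketch from a live CD/USB; the device name is an assumption, so adjust /dev/sda1 to the real Ubuntu root partition. It undoes the edit in /etc/grub.d/10_linux and regenerates grub.cfg from a chroot:

        sudo mount /dev/sda1 /mnt
        sudo sed -i '/version_find_latest/d' /mnt/etc/grub.d/10_linux
        for fs in dev proc sys; do sudo mount --bind /$fs /mnt/$fs; done
        sudo chroot /mnt update-grub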

    Read the article

  • Cisco Router - Add a missing MIB file

    - by Jonathan Rioux
    I have a Cisco 881W, and I would like to set up NBAR in my NetFlow Analyzer, but it says my router is missing the MIB that would let NFA poll the router over SNMP for NBAR information. The FAQ page of the NetFlow Analyzer website addresses my error:

        Q. I am able to issue the command "ip nbar protocol-discovery" on the router and
        see the results. But NFA says my router does not support NBAR. Why?
        A. Earlier versions of IOS support NBAR discovery only on the router, so you can
        very well execute the command "ip nbar protocol-discovery" on the router and see
        the results. But NBAR Protocol Discovery MIB (CISCO-NBAR-PROTOCOL-DISCOVERY-MIB)
        support came only in later releases. This is needed for collecting data via SNMP.
        Please verify whether your router's IOS supports CISCO-NBAR-PROTOCOL-DISCOVERY-MIB.

    The missing MIB is CISCO-NBAR-PROTOCOL-DISCOVERY-MIB. I found it here: ftp://ftp.cisco.com/pub/mibs/v2/CISCO-NBAR-PROTOCOL-DISCOVERY-MIB.my But how can I add this MIB to the router? The IOS of my router is: c880data-universalk9-mz.151-3.T1.bin
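    Worth noting (hedged): a .my MIB file is loaded into the management station, not onto the router; the router either implements the objects in its IOS image or it doesn't. One way to test the image directly is to walk the MIB's OID subtree (1.3.6.1.4.1.9.9.244 for CISCO-NBAR-PROTOCOL-DISCOVERY-MIB, if memory serves):

        # after enabling protocol discovery on an interface:
        #   interface FastEthernet0
        #     ip nbar protocol-discovery
        snmpwalk -v2c -c COMMUNITY ROUTER_IP 1.3.6.1.4.1.9.9.244
        # any returned rows mean the IOS image implements the MIB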

    Read the article

  • Unable to receive emails from Ubuntu postfix mail server

    - by Paddington
    I am unable to receive emails on an Ubuntu 11.04 server running Postfix with the Plesk control panel; I can't see the mails even in webmail. I am able to send emails, and I'm not getting any error messages in the email client when I try to receive. Here is the output of the logs:

        tail -f /usr/local/psa/var/log/maillog
        Aug 29 10:38:31 cp9 postfix/tlsmgr[3811]: fatal: open database /var/lib/postfix/smtpd_scache.db: Invalid argument
        Aug 29 10:38:32 cp9 postfix/master[27738]: warning: process /usr/lib/postfix/tlsmgr pid 3811 exit status 1
        Aug 29 10:38:32 cp9 postfix/master[27738]: warning: /usr/lib/postfix/tlsmgr: bad command startup -- throttling
        Aug 29 10:38:36 cp9 pop3d: Connection, ip=[::ffff:196.201.7.158]
        Aug 29 10:38:36 cp9 pop3d: IMAP connect from @ [::ffff:196.201.7.158]INFO: LOGIN, [email protected], ip=[::ffff:196.201.7.158]
        Aug 29 10:38:37 cp9 pop3d: 1346229517.874008 LOGOUT, [email protected], ip=[::ffff:196.201.7.158], top=0, retr=0, time=1, rcvd=24, sent=1716, maildir=/var/qmail/mailnames/essentialhuku.co.za/earle/Maildir
        Aug 29 10:14:05 cp9 postfix/tlsmgr[1133]: fatal: open database /var/lib/postfix/smtpd_scache.db: Invalid argument
        Aug 29 10:14:06 cp9 postfix/master[27738]: warning: process /usr/lib/postfix/tlsmgr pid 1133 exit status 1
        Aug 29 10:14:06 cp9 postfix/master[27738]: warning: /usr/lib/postfix/tlsmgr: bad command startup -- throttling
        Aug 29 10:14:08 cp9 pop3d: Connection, ip=[::ffff:196.201.7.158
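    One lead visible in that log (hedged, since it may not be the whole story): tlsmgr keeps dying on a corrupt TLS session-cache database, which throttles Postfix. Removing the stale cache is a low-risk first step; Postfix recreates the files on startup:

        /etc/init.d/postfix stop
        rm -f /var/lib/postfix/smtpd_scache.db /var/lib/postfix/smtp_scache.db
        /etc/init.d/postfix start
        # then watch whether delivery resumes:
        tail -f /usr/local/psa/var/log/maillog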

    Read the article

  • Automate creation of Windows startup script?

    - by Niten
    Is there a good way to automate installing local startup (rather than login) scripts in Windows XP and Windows 7, via the command line, WMI, or otherwise (even COM or Win32 if it comes to that)? I need to set up a local startup script on a large number of computers, and unfortunately, Active Directory is absolutely not an option. I would like to write a script or small program that I can run on each computer to perform the startup script installation, in order to save myself a lot of error-prone point-and-click manual labor. I see that when one uses gpedit.msc to create a local startup script, information about the script gets stored in the registry here:

        HKLM\Software\Policies\Microsoft\Windows\System\Scripts\Startup

    However, if you create such a script and then delete its registry key, the script will remain listed in the local Group Policy editor; as is so often the case in Windows, apparently there is more going on there than meets the eye. This leads me to question whether it's safe to manually add subkeys for new startup scripts here (I wouldn't want my script to be overwritten by later changes made using the local Group Policy editor, for instance). Another option that's occurred to me is to create an item in the Task Scheduler configured to run at system startup. However, my concerns there are twofold:

    1. Can this be automated any more easily? For instance, the at command doesn't appear to let you schedule a task for system startup, and WMI's Win32_ScheduledJob interface looks unreliable (it fails to show any of my currently scheduled tasks, for one thing).
    2. Would I be able to prevent users from logging in until the scheduled startup task is completed, as can be done with "normal" Windows startup scripts?

    Thanks in advance for any suggestions; I've been banging my head against this one for a bit.
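    On the Task Scheduler branch of this: schtasks (present on both XP and 7) can register a boot-time task from the command line, which at least answers the automation half of concern 1; the task name and path below are placeholders, and it does not by itself hold logon until the task finishes:

        schtasks /Create /TN "MyStartupScript" /TR "C:\scripts\startup.cmd" ^
                 /SC ONSTART /RU SYSTEM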

    Read the article

  • How to debug a kernel created using ubuntu-vm-builder?

    - by user265592
    Aim: to perform a code walkthrough of which functions get called for sending and receiving packets over the network. I am building a kernel and using gdb for debugging/tracing purposes. I have built a VM using the following command:

        time sudo ubuntu-vm-builder qemu precise --arch 'amd64' --mem '1024' --rootsize '4096' \
             --swapsize '1024' --kernel-flavour 'generic' --hostname 'ubuntu' --components 'main' \
             --name 'Bob' --user 'ubuntu' --pass 'ubuntu' --bridge 'br0' --libvirt 'qemu:///system'

    And I can run the VM successfully in QEMU using the following command:

        qemu-system-x86_64 -smp 1 -drive file=tmpGgEOzK.qcow2 "$@" -net nic -net user \
             -serial stdio -redir tcp:2222::22

    Now I want to debug the kernel using gdb. For this I need an executable with debug symbols (vmlinux), which apparently I don't have, as the vm-builder never asked for any such options and simply created a .qcow2 file. Question 1: Am I taking the correct approach to solve the problem, and is there an easier way to do it? Question 2: Is there a way to debug this kernel using GDB? P.S.: I don't have hardware support for KVM. Please correct me if I am wrong. Thanks.
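    A hedged sketch of the usual route: the .qcow2 only carries the stripped /boot kernel, so the symbols have to come from elsewhere (a matching dbgsym package, or a kernel built from source), and QEMU's built-in gdb stub does the attaching. The breakpoint below is just an example of a real networking entry point:

        # Boot frozen, with a gdb stub on tcp:1234 (-s) and the CPU halted (-S):
        qemu-system-x86_64 -s -S -smp 1 -drive file=tmpGgEOzK.qcow2 \
            -net nic -net user -serial stdio
        # In another terminal, with a vmlinux that matches the guest kernel:
        gdb vmlinux -ex 'target remote :1234' \
            -ex 'break icmp_rcv' -ex 'continue'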

    Read the article

  • Supervisord doesn't stop nginx process

    - by Lennart Regebro
    I'm using Supervisor a lot, and in this project I have an nginx process managed by supervisord. The relevant parts of the configuration are:

        [supervisord]
        logfile=/home/projects/eceee-web/prod/var/log/supervisord.log
        logfile_maxbytes=5MB
        logfile_backups=10
        loglevel=info
        pidfile=/home/projects/eceee-web/prod/var/supervisord.pid
        ; childlogdir=/home/projects/eceee-web/prod/var/log
        nodaemon=false   ; (start in foreground if true; default false)
        minfds=1024      ; (min. avail startup file descriptors; default 1024)
        minprocs=200     ; (min. avail process descriptors; default 200)
        directory=/home/projects/eceee-web/prod

        [program:nginx]
        command = /home/projects/eceee-web/prod/bin/nginx
        redirect_stderr = true
        autostart = true
        autorestart = true
        directory = /home/projects/eceee-web/prod
        stdout_logfile = /home/projects/eceee-web/prod/var/log/nginx-stdout.log
        stderr_logfile = /home/projects/eceee-web/prod/var/log/nginx-stderr.log

    The /home/projects/eceee-web/prod/bin/nginx command will start nginx in the foreground; it does not daemonize itself. Still, stopping it fails:

        supervisorctl stop nginx

    It will not give any answer, and the process will continue. Any idea why? This is on OS X Darwin, with Supervisor 3.0a9 and nginx 0.7.65.
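    Two hedged things to check in a setup like this: supervisord can only signal the process it spawned, so if bin/nginx is a wrapper script it must exec the real binary rather than run it as a child; and forcing foreground mode on the nginx side removes any doubt about what nginx.conf says. A sketch (the binary path is an assumption):

        [program:nginx]
        ; "-g daemon off;" forces foreground regardless of nginx.conf
        command = /usr/local/sbin/nginx -g "daemon off;"
        stopsignal = QUIT
        ; if a wrapper script is kept, its last line should be:
        ;   exec /usr/local/sbin/nginx -g "daemon off;"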

    Read the article

  • pgpool-regclass gives error when installing

    - by user119720
    I have a problem when installing pgpool-regclass. When I run make, it shows me this kind of error:

        p,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64
        -mtune=generic -I/usr/include/et -Wall -Wmissing-prototypes -Wpointer-arith
        -Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute
        -Wformat-security -fno-strict-aliasing -fwrapv -fpic -I. -I.
        -I/usr/pgsql-9.2/include/server -I/usr/pgsql-9.2/include/internal -I/usr/include/et
        -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include -c -o pgpool-regclass.o pgpool-regclass.c
        pgpool-regclass.c:99:37: error: macro "RangeVarGetRelid" requires 3 arguments, but only 2 given
        pgpool-regclass.c: In function 'pgpool_regclass':
        pgpool-regclass.c:99: error: 'RangeVarGetRelid' undeclared (first use in this function)
        pgpool-regclass.c:99: error: (Each undeclared identifier is reported only once
        pgpool-regclass.c:99: error: for each function it appears in.)
        make: *** [pgpool-regclass.o] Error 1

    Can anyone help me sort this out? I'd really appreciate it. Thanks.
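    For context (hedged): RangeVarGetRelid() gained a third argument in the PostgreSQL 9.2 server headers, so this pgpool-regclass source simply predates the 9.2 headers it is being compiled against. Checking versions on both sides usually confirms it:

        # Which server headers is the build actually using?
        /usr/pgsql-9.2/bin/pg_config --version
        /usr/pgsql-9.2/bin/pg_config --includedir-server
        # A pgpool-II release recent enough to know about 9.2 should build
        # cleanly; its pgpool-regclass subdirectory is the usual source.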

    Read the article
