Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.


  • Sendmail Tuning For Batch Mail Jobs

    - by Kyle Brandt
    I have webservers that send out emails to a sendmail relay server as a batch job. The emails need to be accepted by the relay sendmail server as fast as possible; however, they do not need to go out (be relayed) very quickly. I am seeing a couple of timeouts once in a while from the webservers trying to connect to the relay server. The load currently is about 30 emails a second for a couple of minutes. There are quite a few tuning options for sendmail in the sendmail tuning guide. What I am focusing on now is the Delivery Mode:

        Delivery Mode
        There are a number of delivery modes that sendmail can operate in, set by the
        DeliveryMode (d) configuration option. These modes specify how quickly mail
        will be delivered. Legal modes are:

            i   deliver interactively (synchronously)
            b   deliver in background (asynchronously)
            q   queue only (don't deliver)
            d   defer delivery attempts (don't deliver)

        There are tradeoffs. Mode i gives the sender the quickest feedback, but may
        slow down some mailers and is hardly ever necessary. Mode b delivers promptly
        but can cause large numbers of processes if you have a mailer that takes a
        long time to deliver a message. Mode q minimizes the load on your machine,
        but means that delivery may be delayed for up to the queue interval. Mode d
        is identical to mode q except that it also prevents lookups in maps including
        the -D flag from working during the initial queue phase; it is intended for
        ``dial on demand'' sites where DNS lookups might cost real money. Some simple
        error messages (e.g., host unknown during the SMTP protocol) will be delayed
        using this mode. Mode b is the usual default.

        If you run in mode q (queue only), d (defer), or b (deliver in background)
        sendmail will not expand aliases and follow .forward files upon initial
        receipt of the mail. This speeds up the response to RCPT commands. Mode i
        should not be used by the SMTP server.

    I currently have the CentOS default modes:

        sendmail.cf: DeliveryMode=background
        submit.cf:   DeliveryMode=i

    Is sendmail.cf/mc for outgoing email from the relay (to the intertubes) and submit.cf/mc for incoming email (from my webservers)? Would it make sense to change the outgoing delivery mode to queue? If I did, how would the outbound email flow behave? If this is the right thing to do, can anyone show me example mc configurations for this change? If it isn't, what recommendations are there for these constraints?
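    For reference, a minimal sketch of what the queue-only change would look like in the relay's sendmail.mc, assuming the stock CentOS m4 layout. Note that with DeliveryMode=q the queue run interval controls how fast mail actually leaves, and on CentOS that interval is set via QUEUE= in /etc/sysconfig/sendmail rather than in the .mc file:

        dnl sendmail.mc -- accept mail immediately, deliver only on queue runs
        define(`confDELIVERY_MODE', `q')dnl

        # rebuild sendmail.cf from the .mc and restart (stock CentOS makefile)
        make -C /etc/mail
        service sendmail restart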

    Read the article

  • OpenVPN Keeps Crashing

    - by Frank Thornton
    Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28523 [vpntest] Peer Connection Initiated with [AF_INET]<MY_IP>:28523
    Oct 20 21:00:44 sb1 openvpn[2082]: vpntest/<MY_IP>:28523 MULTI_sva: pool returned IPv4=10.8.0.6, IPv6=(Not enabled)
    Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28522 WARNING: 'link-mtu' is used inconsistently, local='link-mtu 1576', remote='link-mtu 1376'
    Oct 20 21:00:44 sb1 openvpn[2082]: <MY_IP>:28522 WARNING: 'tun-mtu' is used inconsistently, local='tun-mtu 1532', remote='tun-mtu 1332'
    Oct 20 21:00:45 sb1 openvpn[2082]: <MY_IP>:28522 [vpntest2] Peer Connection Initiated with [AF_INET]<MY_IP>:28522
    Oct 20 21:00:45 sb1 openvpn[2082]: vpntest2/<MY_IP>:28522 MULTI_sva: pool returned IPv4=10.8.0.10, IPv6=(Not enabled)
    Oct 20 21:00:46 sb1 openvpn[2082]: vpntest/<MY_IP>:28523 send_push_reply(): safe_cap=940

    Client file:

        client
        dev tun
        proto tcp
        remote <IP> 443
        resolv-retry infinite
        nobind
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1410
        persist-key
        persist-tun
        auth-user-pass
        comp-lzo

    Server:

        port 443   #- port
        proto tcp  #- protocol
        dev tun
        tun-mtu 1500
        tun-mtu-extra 32
        reneg-sec 0
        #mtu-disc yes
        mssfix 1410
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        plugin /etc/openvpn/openvpn-auth-pam.so /etc/pam.d/login
        #plugin /usr/share/openvpn/plugin/lib/openvpn-auth-pam.so /etc/pam.d/login
        #- Comment this line if you are using FreeRADIUS
        #plugin /etc/openvpn/radiusplugin.so /etc/openvpn/radiusplugin.cnf
        #- Uncomment this line if you are using FreeRADIUS
        client-to-client
        client-cert-not-required
        username-as-common-name
        server 10.8.0.0 255.255.255.0
        push "redirect-gateway def1"
        push "dhcp-option DNS 8.8.8.8"
        push "dhcp-option DNS 8.8.4.4"
        keepalive 3 30
        comp-lzo
        persist-key
        persist-tun

    What is causing the VPN to keep dropping the connection and then reconnecting?
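    (A side note on the log: the WARNING lines come from OpenVPN's options-consistency check, and the two peers differ by exactly 200 bytes in their derived MTUs, which suggests the peer on <MY_IP>:28522 is running different MTU settings from the client file shown above. A hedged sketch of the directives that must agree on both ends:)

        # These must match on client and server, or OpenVPN logs
        # "'link-mtu' is used inconsistently" and sessions can drop/renegotiate:
        tun-mtu 1500
        tun-mtu-extra 32
        mssfix 1410
        comp-lzo    # compression changes the advertised link-mtu, so it must
                    # be enabled on both ends or on neither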

    Read the article

  • Recommended Patches For R12.1.3 Procurement Contracts, Contract Terms Library or Repository Contracts

    - by Oracle_EBS
    If you are implementing or upgrading to R12.1.3 Procurement Contracts, Contract Terms Library or Repository Contracts, please review the following note for a list of recommended patches to apply on top of 12.1.3: Note 1349213.1, Recommended Patches For R12.1.3 Procurement Contracts and Contracts Core. Note that the methods given in Note 1400757.1, How to Find E-Business Suite Recommended Patches, may not currently give the same patch listing as Note 1349213.1.

    Read the article

  • For enabling SSL for a single domain on a server with multiple vhosts, will this configuration work?

    - by user1322092
    I just purchased an SSL certificate to secure/enable only ONE domain on a server with multiple vhosts. I plan on configuring as shown below (non-SNI). In addition, I still want to access phpMyAdmin, securely, via my server's IP address. Will the below configuration work? I have only one shot to get this working in production. Are there any redundant settings?

    ---apache ssl.conf file---

        Listen 443
        SSLCertificateFile /home/web/certs/domain1.public.crt
        SSLCertificateKeyFile /home/web/certs/domain1.private.key
        SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt

    ---apache httpd.conf file----

        ...
        DocumentRoot "/var/www/html"   #currently exists
        ...
        NameVirtualHost *:443   #new - is this really needed if "Listen 443" is in ssl.conf???
        ...
        #below vhost currently exists (the domain I wish to enable SSL for)
        <VirtualHost *:80>
            ServerAdmin [email protected]
            ServerName domain1.com
            ServerAlias 173.XXX.XXX.XXX
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

        #below vhost currently exists.
        <VirtualHost *:80>
            ServerName domain2.com
            ServerAlias www.domain2.com
            DocumentRoot /home/web/public_html/domain2.com/public
        </VirtualHost>

        #new - I plan on adding this vhost block to enable ssl for domain1.com!
        <VirtualHost *:443>
            ServerAdmin [email protected]
            ServerName www.domain1.com
            ServerAlias 173.203.127.20
            SSLEngine on
            SSLProtocol all
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCACertificateFile /home/web/certs/domain1.intermediate.crt
            DocumentRoot /home/web/public_html/domain1.com/public
        </VirtualHost>

    As previously mentioned, I want to be able to access phpMyAdmin via "https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin", which is stored under "/var/www/html/hiddenfolder".
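    One thing the configuration above leaves open: with only domain1's *:443 vhost defined, a request for https://173.XXX.XXX.XXX/... will be answered by that vhost and served from domain1's DocumentRoot, not /var/www/html. A hedged sketch of an extra catch-all vhost for the IP (Apache treats the first *:443 vhost as the default for unmatched names; the browser will still warn, since the certificate is issued for domain1.com, not the IP):

        # Place first among the *:443 vhosts so it becomes the default for
        # requests by IP, e.g. https://173.XXX.XXX.XXX/hiddenfolder/phpmyadmin
        <VirtualHost *:443>
            ServerName 173.XXX.XXX.XXX
            DocumentRoot /var/www/html
            SSLEngine on
            SSLCertificateFile /home/web/certs/domain1.public.crt
            SSLCertificateKeyFile /home/web/certs/domain1.private.key
            SSLCertificateChainFile /home/web/certs/domain1.intermediate.crt
        </VirtualHost>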

    Read the article

  • Monit network availability checking

    - by viraptor
    Hi, I'd like to start a service with monit, but only when the host has the correct IP bound. Can this be done somehow with the normal config? For example, I want to start a process xxx with pidfile xxx.pid, but only if the host currently has 10.0.0.1 bound to some interface.
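    Monit's config grammar has no built-in "only if this IP is bound" condition, so one common workaround is to push the test into the start program itself. A hedged sketch (stanza layout is standard monit syntax; the wrapper name and the path to xxx are hypothetical):

        # /etc/monit.d/xxx -- monit just calls a wrapper that does the check
        check process xxx with pidfile /var/run/xxx.pid
            start program = "/usr/local/bin/start-xxx-if-bound.sh"
            stop program  = "/usr/bin/pkill -F /var/run/xxx.pid"

        #!/bin/sh
        # /usr/local/bin/start-xxx-if-bound.sh -- only launch xxx when
        # 10.0.0.1 is bound to some interface
        if ip addr show | grep -q 'inet 10\.0\.0\.1/'; then
            exec /usr/sbin/xxx
        fi
        exit 1   # non-zero, so monit records the start attempt as failed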

    Read the article

  • drivers/rtc/hctosys.c: unable to open rtc device (rtc0) after recompile on boot

    - by squareone
    After recompiling a new kernel on CentOS 6.3, using the same kernel I have been using on several other machines, I am getting a kernel panic on two machines. I get the following when trying to boot:

        drivers/rtc/hctosys.c: unable to open rtc device (rtc0)

    (it flashes this before displaying the panic below)

        not syncing: Attempted to kill init! exitcode=0x00000100
        Pid: 1, comm: init Not tainted
        etc...

    I have been trying to figure out what is going on, am having trouble doing so, and feel I have exhausted all of my options here. My grub.conf:

        default=0
        timeout=5
        splashimage=(hd0,0)/grub/splash.xpm.gz
        hiddenmenu
        title CentOS (3.4.18-rt29)
                root (hd0,0)
                kernel /vmlinuz-3.4.18-rt29 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet panic=5
                initrd /initramfs-3.4.18-rt29.img
        title CentOS (2.6.32-279.14.1.el6.x86_64)
                root (hd0,0)
                kernel /vmlinuz-2.6.32-279.14.1.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_NO_LUKS LANG=en_US.UTF-8 rd_NO_MD rd_LVM_LV=VolGroup/lv_swap SYSFONT=latarcyrheb-sun16 crashkernel=auto rd_LVM_LV=VolGroup/lv_root KEYBOARDTYPE=pc KEYTABLE=us rd_NO_DM rhgb quiet panic=5
                initrd /initramfs-2.6.32-279.14.1.el6.x86_64.img

    Any help or guidance would be greatly appreciated.
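    (For what it's worth, the hctosys message by itself usually just means the kernel was built with CONFIG_RTC_HCTOSYS but without a built-in RTC driver; the panic that follows, "Attempted to kill init", is more often a missing root-filesystem or LVM driver in the kernel/initramfs than an RTC problem. A hedged sketch of the usual .config fix for the RTC message:)

        # .config -- build the CMOS RTC driver in (=y, not =m) so rtc0
        # exists when hctosys runs at boot, before modules can load
        CONFIG_RTC_CLASS=y
        CONFIG_RTC_HCTOSYS=y
        CONFIG_RTC_HCTOSYS_DEVICE="rtc0"
        CONFIG_RTC_DRV_CMOS=y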

    Read the article

  • How do I reference the value of a constructed environment variable in a loop?

    - by Rob Spieldenner
    What I'm trying to do is loop over environment variables. I have a number of installs that change, and each install has 3 IPs to push files to and run scripts on, so I want to automate this as much as possible (so that I only have to modify a file that I'll source with the environment variables). The following is a simplified version that, once I figure it out, will let me solve my problem. So, given my.props:

        COUNT=2
        A_0=foo
        B_0=bar
        A_1=fizz
        B_1=buzz

    I want to fill in the for loop in the following script:

        #!/bin/bash
        . <path>/my.props

        for ((i=0; i < COUNT; i++))
        do
            <script here>
        done

    so that I can get the values from the environment variables. Like the following (but that actually works):

        echo $A_$i $B_$i

    or

        A=A_$i
        B=B_$i
        echo $A $B

    returning "foo bar", then "fizz buzz".
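    A hedged sketch of the standard bash answer to this, using indirect expansion (${!name} expands to the value of the variable whose name is stored in name; the props path is a placeholder):

        #!/bin/bash
        # source the file that defines COUNT, A_0, B_0, A_1, B_1, ...
        . /path/to/my.props

        for ((i = 0; i < COUNT; i++)); do
            a="A_$i"            # build the *name* of the variable
            b="B_$i"
            echo "${!a} ${!b}"  # indirect expansion: value of $A_0, $B_0, ...
        done
        # prints: foo bar
        #         fizz buzz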

    Read the article

  • Centos Server/MySQL server problem

    - by Jake
    Hello all, I currently run a website that gets about 15,000-20,000 hits a day. We run a very active forum, hosted using vBulletin software: 4.5 million posts, 80,000 threads, and about 11,000 members, of which just under a third are active all the time. I am running an Intel Xeon quad core (2.13 GHz) with 4 GB of RAM, CentOS 5.5, and DirectAdmin on the box to manage it. I also run the current stable versions of Apache, MySQL, and PHP. This is the only site hosted on this machine. During random times of day, sometimes when it gets busy, the server load can reach 20, but this can also happen when we only have about 200 users active. I don't understand what is causing these problems. Sometimes pages generate in 0.2 seconds; other times they take 5-8 seconds. I have customized the my.cnf file, and that has not helped at all. I didn't know where else to turn, so if anyone has any suggestions please let me know. Thank you in advance.
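    Given the shape of the symptoms (load spikes that don't track user count), a hedged first diagnostic step is MySQL's slow-query log; a sketch for my.cnf, using the 5.0/5.1-era option names that match CentOS 5.5, with an illustrative threshold:

        [mysqld]
        # log any statement slower than 2 seconds, then inspect the log
        # (newer MySQL versions spell this slow_query_log instead)
        log-slow-queries = /var/log/mysql-slow.log
        long_query_time  = 2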

    Read the article

  • Where to install JDBC drivers in Hyperic Server

    - by Svish
    I have installed Hyperic Server 4.4.0 and I want to use an SQL plugin that connects to an Oracle database. To make this work on the agent I had to download a JDBC driver for Oracle and put it in [agent-dir]/bundles/[bundle-dir]/pdk/lib. I can now run my plugin on the agent using java -jar hq-products.jar .... Now I want to add it so that it shows up in the server HQ. I put the plugin in the appropriate directory and I can add it as a platform service. However, when I try to configure the plugin I get the following error:

        No suitable driver found for jdbc:oracle:thin:@blah.blah:blah:blah

    This is the same error I got on the client before I added the Oracle JDBC driver, so I assume that's the problem here too. But where do I put the JDBC drivers on the server?

    Read the article

  • How to free up block device that is mounted to an inaccessible place?

    - by Vi
    root@vi-notebook:~# cat /proc/mounts | grep raidy
    /dev/md0 /root/e/i/wpc2/boot/mnt/raidy reiserfs ro,nosuid,nodev,noexec,noatime 0 0
    root@vi-notebook:~# umount -n /root/e/i/wpc2/boot/mnt/raidy
    umount: /root/e/i/wpc2/boot/mnt/raidy: Transport endpoint is not connected
    root@vi-notebook:~# mount /dev/md/raidy /mnt/raidy/ -t reiserfs -o nodev,nosuid,noexec,acl,noatime
    mount: /dev/md0 already mounted or /mnt/raidy/ busy

    The only workaround I found is:

        root@vi-notebook:~# losetup /dev/loop0 /dev/md/raidy
        root@vi-notebook:~# mount /dev/loop0 /mnt/raidy/ -t reiserfs -o nodev,nosuid,noexec,acl,noatime
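    (For reference, the usual escape hatch in this situation is a lazy/detached unmount, which releases the block device even when the mountpoint path itself is unreachable; a short sketch:)

        # lazy unmount: detach the filesystem now, clean up references later
        umount -l /root/e/i/wpc2/boot/mnt/raidy
        # combined with -n if you also want to skip writing /etc/mtab
        umount -n -l /root/e/i/wpc2/boot/mnt/raidy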

    Read the article

  • Upgrade to ubuntu 13.04 from 12.04 with iso image

    - by Digvijay Yadav
    I have Ubuntu 12.04 installed on my system and I want to upgrade it to Ubuntu 13.04 using an ISO image of 13.04. I tried this solution, but it didn't work for me: after running the commands below I didn't get any alerts about upgrading. I also don't understand the gksu part of the solution. Here are the steps I tried:

        sudo mount -t iso9660 -o loop PATH/TO/ISO /cdrom
        sudo /cdrom/cdromupgrade

    (Read more: http://linuxpoison.blogspot.tw/2011/06/how-to-upgrade-ubuntu-using-alternate.html)

    I also wanted to know whether I can do this using a networked computer, by which I mean the ISO file is on some other computer. Thank you.
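    A hedged sketch of one way to use an ISO that lives on another machine: make it visible locally over SSH with sshfs, then loop-mount it exactly as with a local file (hostname, directory, and ISO filename below are illustrative):

        # sshfs is in the Ubuntu archive
        sudo apt-get install sshfs
        mkdir -p ~/remote
        sshfs otherhost:/path/to/isos ~/remote

        # then proceed as with a local ISO
        sudo mount -t iso9660 -o loop ~/remote/ubuntu-13.04.iso /cdrom
        sudo /cdrom/cdromupgrade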

    Read the article

  • Disabling networkmanager for a specific interface

    - by bdonlan
    I'd like to do some experimentation with hostap without disabling my primary wireless interface. How do I tell NetworkManager to keep its hands off a specific interface or interfaces while allowing it to continue managing all other interfaces normally? I'm using Ubuntu 9.04. (Wasn't sure if this should go on superuser or serverfault, as NetworkManager isn't much of a 'server' tool - if it belongs on serverfault please feel free to move it)

    Edit: I've tried adding this to /etc/network/interfaces:

        allow-hotplug wlan2
        iface wlan2 inet static
            address 192.168.49.1
            netmask 255.255.255.0

    But this has no apparent effect, even after restarting NetworkManager. Here's my /etc/NetworkManager/nm-system-settings.conf:

        [main]
        plugins=ifupdown,keyfile

        [ifupdown]
        managed=false

    Edit[2]: Looks like I needed to restart nm-system-settings, then NetworkManager.

    Read the article

  • How to get ~/foo from /home/user1/foo?

    - by Claudius
    The Bash prompt supports the \w escape sequence, documented as:

        \w   the current working directory, with $HOME abbreviated with a tilde
             (uses the value of the PROMPT_DIRTRIM variable)

    Is there any way to get a similar abbreviation for an arbitrary string? That is, is there a general command that does something like the following, provided that HOME=/home/user1?

        /home/user1      →  ~
        /home/user1/a/1  →  ~/a/1
        /home/user2/b/2  →  ~user2/b/2
        /root            →  ~root

    Sure, I could try something ugly with sed, but that is unlikely to give me the result I want in any case. :-) The motivation behind this is that I would like to keep the titles in the tabs of my terminals as short as possible, hence abbreviate working directories where possible.
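    A hedged bash sketch of such an abbreviation, scanning account home directories via getent (the function name is illustrative; bash-only syntax):

        # abbreviate_home PATH -- print PATH with a home directory replaced
        # by ~ or ~user
        abbreviate_home() {
            local p=$1 user home
            case $p in
                "$HOME"|"$HOME"/*)
                    printf '%s\n' "~${p#"$HOME"}"
                    return
                    ;;
            esac
            # scan all accounts for a home directory that prefixes the path
            while IFS=: read -r user _ _ _ _ home _; do
                case $p in
                    "$home"|"$home"/*)
                        if [ -n "$home" ] && [ "$home" != "/" ]; then
                            printf '%s\n' "~$user${p#"$home"}"
                            return
                        fi
                        ;;
                esac
            done < <(getent passwd)
            printf '%s\n' "$p"   # no match: print unchanged
        }

        # example: abbreviate_home /home/user2/b/2   ->   ~user2/b/2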

    Read the article

  • How can I make monodevelop render text in KDE?

    - by Spikolynn
    MonoDevelop from git in KDE 4.10.2 does not render text in code edit tabs. I tried with Xfce, and text is rendered OK there. I tried disabling compositing with Alt+Shift+F12 and restarting the X server, but it was no better. I also tried disabling font smoothing in the MonoDevelop options and disabling plugins, and temporarily deleting my KDE profile. This is a dual-screen setup on Nvidia with nouveau. The OS is slackware64-current.

    Read the article

  • Why Does Adding a UDF or Code Truncate the # of Resources in a List?

    - by Jeffrey McDaniel
    Go to the Primavera - Resource Assignment History subject area. Under Resources, General, add the fields Resource Id, Resource Name and Current Flag. Because this is a historical subject area with Type II slowly changing dimensions for Resources, you may get multiple rows for each resource if there have been any changes on the resource. You may see a few records with current flag = 0, and you will see a row with current flag = 1 for all resources. Current flag = 1 means this is the most up-to-date row for the resource. In this query the OBI server is only querying the W_RESOURCE_HD dimension (query from the nqquery log):

        select distinct 0 as c1,
             D1.c1 as c2,
             D1.c2 as c3,
             D1.c3 as c4
        from
             (select distinct T10745.CURRENT_FLAG as c1,
                     T10745.RESOURCE_ID as c2,
                     T10745.RESOURCE_NAME as c3
              from
                     W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */
              where ( T10745.LAST_RUN_PER_DAY_FLAG = 1 )
             ) D1

    If you add a resource code to the query, it now forces the OBI server to include data from W_RESOURCE_HD and W_CODES_RESOURCE_HD, as well as W_ASSIGNMENT_SPREAD_HF. Because the Resource and Resource Codes are in different dimensions, they must be joined through a common fact table; any time you pull data from different dimensions it will ALWAYS pass through the fact table in that subject area. One rule is that if there is no fact value related to that dimensional data, nothing will show. So if you have a list of 100 resources when you query just Resource Id, Resource Name and Current Flag, but the list drops to 60 when you add a Resource Code, it could be because those resources exist at a dictionary level but are not assigned to any activities and therefore have no facts. As discussed in a previous blog, it's all about the facts.

    Here is the query returned from the OBI server when querying Resource Id, Resource Name, Current Flag and a Resource Code. You'll see that it includes an actual fact (AT_COMPLETION_UNITS) even though that fact is never returned when viewing the data through the Analysis.

        select distinct 0 as c1,
             D1.c2 as c2,
             D1.c3 as c3,
             D1.c4 as c4,
             D1.c5 as c5,
             D1.c1 as c6
        from
             (select sum(T10754.AT_COMPLETION_UNITS) as c1,
                     T10706.CODE_VALUE_02 as c2,
                     T10745.CURRENT_FLAG as c3,
                     T10745.RESOURCE_ID as c4,
                     T10745.RESOURCE_NAME as c5
              from
                     W_RESOURCE_HD T10745 /* Dim_W_RESOURCE_HD_Resource */ ,
                     W_CODES_RESOURCE_HD T10706 /* Dim_W_CODES_RESOURCE_HD_Resource_Codes_HD */ ,
                     W_ASSIGNMENT_SPREAD_HF T10754 /* Fact_W_ASSIGNMENT_SPREAD_HF_Assignment_Spread */
              where ( T10706.RESOURCE_OBJECT_ID = T10754.RESOURCE_OBJECT_ID
                      and T10706.LAST_RUN_PER_DAY_FLAG = 1
                      and T10745.ROW_WID = T10754.RESOURCE_WID
                      and T10745.LAST_RUN_PER_DAY_FLAG = 1
                      and T10754.LAST_RUN_PER_DAY_FLAG = 1 )
              group by T10706.CODE_VALUE_02, T10745.RESOURCE_ID, T10745.RESOURCE_NAME, T10745.CURRENT_FLAG
             ) D1
        order by c4, c5, c3, c2

    When querying in any subject area and crossing different dimensions, especially Type II slowly changing dimensions, if the result set appears to be short, the first place to look is whether the object has associated facts.

    Read the article

  • "pull" process/job into the background

    - by Mustafa Ismail Mustafa
    I know of terminating a command with & and then moving it into the background by pressing Ctrl-Z and then bg [pid], and I also know of nohup. But say you started a process that turned out to take much longer than expected: is there a way of pulling, so to speak, this process from another terminal screen into the background, so that even if I log off from the server the process would continue?
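    If the original shell is still open, the usual job-control sequence is enough (sketch below); actually grabbing a process from a different terminal needs a reparenting tool such as reptyr, if it is available for the distribution in question.

        # in the terminal where the long-running job is in the foreground:
        # press Ctrl-Z to suspend it, then:
        bg %1          # resume it in the background
        disown -h %1   # tell the shell not to SIGHUP it at logout
        # now logging off will not kill the process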

    Read the article

  • Fluxbox startup file not working

    - by Jack
    I am placing apps into my fluxbox startup file as per the instructions, however nothing starts up except fluxbox. It doesn't matter what app I try, so it isn't an app problem. Here is my startup file:

        #!/bin/sh
        #
        # fluxbox startup-script:
        #
        # Lines starting with a '#' are ignored.

        # Change your keymap:
        xmodmap "/home/josh/.Xmodmap"

        # Applications you want to run with fluxbox.
        # MAKE SURE THAT APPS THAT KEEP RUNNING HAVE AN ''&'' AT THE END.
        tint2 &
        tilda &

        # And last but not least we start fluxbox.
        # Because it is the last app you have to run it with ''exec'' before it.
        exec fluxbox
        # or if you want to keep a log:
        # exec fluxbox -log "/home/josh/.fluxbox/log"

    I have also tried tests such as "touch ~/testwoked" and such, nothing works. It makes no difference if the file is executable or not.
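    (One detail worth checking with this symptom: ~/.fluxbox/startup is only sourced when the session goes through the startfluxbox wrapper; launching the bare fluxbox binary skips it entirely. A hedged ~/.xinitrc sketch:)

        # ~/.xinitrc -- startfluxbox is what runs ~/.fluxbox/startup;
        # "exec fluxbox" here would bypass the startup file
        exec startfluxbox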

    Read the article

  • Script execution flow stopped?

    - by vijay.shad
    Hi all, my script is now able to start the server, but I still have a problem: when the start-server command is executed, control never passes beyond that line, so the rest of the script does not run. Please tell me what the problem is and how I can get my script to execute all the way through.
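    A hedged guess at the usual cause: the start command runs the server in the foreground, so the script blocks on it. A sketch of the standard fix (the server script name is hypothetical):

        # start the server without blocking the rest of the script
        nohup ./start-server.sh > server.log 2>&1 &
        echo "server started with PID $!"
        # ...the script continues here immediately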

    Read the article

  • Ubuntu most menu items dark-on-dark

    - by krzysz00
    Since the Ubuntu 10.04 upgrade, most of my drop-down menus have dark-on-dark text, which becomes readable (the background changes) when selected. I don't know what's causing this, but it's a problem on both Ambiance and Radiance. Any hints?

    Read the article

  • Recommended way to restrict Apache users

    - by Dor
    Following on from why we should restrict Apache users, two more questions arise:

    1. What is the recommended method of restricting the places Apache users can traverse and read in the file system?
    2. What to do against fork bombs and other shell-scripting problems? (bash scripting is allowed)

    My possible solutions (I'd prefer to know which solution you would choose, and why):

    - chroot OR mod_chroot
    - disable bash OR use Restricted BASH

    Please offer other solutions if you find them appropriate (perhaps SELinux?).

    Current status:

    - Users are allowed to execute bash scripts (via PHP, for example)
    - suexec is active
    - Apache requests are served with FastCGI for PHP
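    For the fork-bomb half of the question, the standard mitigation is per-user resource limits via pam_limits; a hedged sketch for /etc/security/limits.conf (the user name and values are illustrative, tune for the workload):

        # /etc/security/limits.conf -- cap what the apache user may consume
        # format: <domain> <type> <item> <value>
        # nproc = max simultaneous processes, as = address space in KB
        apache   hard   nproc   100
        apache   hard   as      524288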

    Read the article

  • Escaping query strings with wget --mirror

    - by Jeremy Banks
    I'm using wget --mirror --html-extension --convert-links to mirror a site, but I end up with lots of filenames in the format post.php?id=#.html. When I try to view these in a browser it fails, because the browser ignores the query string when loading the file. Is there any way to replace the ? character in the filenames with something else?

    The answer of --restrict-file-names=windows worked correctly. In conjunction with the flags --convert-links and --adjust-extension/-E (formerly named --html-extension, which also works but is deprecated) it produces a mirror that behaves as expected:

        wget --mirror --adjust-extension --convert-links --restrict-file-names=windows http://www.example

    Read the article

  • Causes of sudden massive filesystem damage? ("root inode is not a directory")

    - by poolie
    I have a laptop running Maverick (very happily until yesterday), with a Patriot Torx SSD; LUKS encryption of the whole partition; one LVM physical volume on top of that; then home and root in ext4 logical volumes on top of that. When I tried to boot it yesterday, it complained that it couldn't mount the root filesystem. Running fsck, basically every inode seems to be wrong. Both home and root filesystems show similar problems. Checking a backup superblock doesn't help.

        e2fsck 1.41.12 (17-May-2010)
        lithe_root was not cleanly unmounted, check forced.
        Resize inode not valid.  Recreate? no
        Pass 1: Checking inodes, blocks, and sizes
        Root inode is not a directory.  Clear? no
        Root inode has dtime set (probably due to old mke2fs).  Fix? no
        Inode 2 is in use, but has dtime set.  Fix? no
        Inode 2 has a extra size (4730) which is invalid
        Fix? no
        Inode 2 has compression flag set on filesystem without compression support.  Clear? no
        Inode 2 has INDEX_FL flag set but is not a directory.
        Clear HTree index? no
        HTREE directory inode 2 has an invalid root node.
        Clear HTree index? no
        Inode 2, i_size is 9581392125871137995, should be 0.  Fix? no
        Inode 2, i_blocks is 40456527802719, should be 0.  Fix? no
        Reserved inode 3 (<The ACL index inode>) has invalid mode.  Clear? no
        Inode 3 has compression flag set on filesystem without compression support.  Clear? no
        Inode 3 has INDEX_FL flag set but is not a directory.
        Clear HTree index? no
        ....

    Running strings across the filesystems, I can see there are what look like filenames and user data there. I do have sufficiently good backups (touch wood) that it's not worth grovelling around to pull back individual files, though I might save an image of the unencrypted disk before I rebuild, just in case. smartctl doesn't show any errors, neither does the kernel log. Running a write-mode badblocks across the swap LV doesn't find problems either. So the disk may be failing, but not in an obvious way. At this point I'm basically, as they say, fscked? Back to reinstalling, perhaps running badblocks over the disk, then restoring from backup? There doesn't even seem to be enough data to file a meaningful bug... I don't recall that this machine crashed last time I used it. At this point I suspect a bug or memory corruption caused it to write garbage across the disks when it was last running, or some kind of subtle failure mode for the SSD. What do you think would have caused this? Is there anything else you'd try?

    Read the article
