Search Results

Search found 40722 results on 1629 pages for 'oracle enterprise linux'.


  • Attempting to update Amazon Route53 using a script, but domain is not being updated

    - by ks78
    I have several Amazon EC2 instances, running Ubuntu 10.04, with which I'd like to use Amazon's Route53. I set up a script as described in Shlomo Swidler's article, but I'm still missing something. When the script runs, it doesn't return any output, which I initially assumed meant it ran correctly. However, when I check the DNS records using MyR53DNS, there are no entries for my instances. Here's my script:

        #!/bin/tcsh -f
        set root=`dirname $0`
        setenv EC2_HOME /usr/lib/ec2-api-tools
        setenv EC2_CERT /etc/cron.route53/ec2_x509_cert.pem
        setenv EC2_PRIVATE_KEY /etc/cron.route53/ec2_x509_private.pem
        setenv AWS_ACCESS_KEY_ID myaccesskeyid
        setenv AWS_SECRET_ACCESS_KEY mysecretaccesskey
        /user/bin/ec2-describe-instances | \
        perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
        and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
        and print "$1 $dns\n"' | \
        perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
        xargs -n 4 $/etc/cron.route53/cli53/cli53.py \
        rrcreate -x 60 mydomain.com

    Does anyone see a problem with this script? If it's not the script, what else could be preventing my Route53 domain from being updated? I am using Security Groups to IP-restrict the instances. I've tried opening port 53, but that didn't seem to have an effect. Is there another port that Route53 uses? I'd appreciate any help or guidance the ServerFault community can offer. Let me know if you need any further info.
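    Two details in the script look like plain typos rather than Route53 problems, so here is a hedged guess at a fix: "/user/bin" is presumably meant to be "/usr/bin", and the stray "$" in front of the cli53 path would expand an empty variable. Note also that cli53 updates records through the Route53 HTTPS API, not DNS port 53, so a security group rule for port 53 would not be relevant. A corrected tail of the pipeline, under those assumptions:

        #!/bin/sh
        # same pipeline with the two suspected typos fixed (paths from the script above)
        /usr/bin/ec2-describe-instances | \
        perl -ne '/^INSTANCE\s+(i-\S+).*?(\S+\.amazonaws\.com)/ \
        and do { $dns = $2; print "$1 $dns\n" }; /^TAG.+\sShortName\s+(\S+)/ \
        and print "$1 $dns\n"' | \
        perl -ane 'print "$F[0] CNAME $F[1] --replace\n"' | \
        xargs -n 4 /etc/cron.route53/cli53/cli53.py \
        rrcreate -x 60 mydomain.com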

    Read the article

  • Don’t miss the Procurement AME New Features and Setup for Purchase Orders Webcast on December 6th and Follow up Live Chat

    - by MargaretW
    Webcast

    This one-hour session on December 6th is recommended for technical and functional users who are interested in learning more about the new 12.1.3 features for Procurement with Approval Management Engine (AME). Topics will include:

      • Scope and limitations of AME functionality for purchase orders
      • Setup and use of AME for purchase orders
      • PO Review and PO E-Sign new features

    Demonstrations will be included. See Doc ID 1456150.1 to sign up now!

    Live Chat

    There will be a live chat in the Procurement Community on December 13th for follow-up questions and answers. Join us to share and gain knowledge!

    Read the article

  • Advantages of Ubuntu LTS versions over regular Ubuntu?

    - by Adam Matan
    Do the LTS versions of Ubuntu have any advantages for non-paying customers (who don't get any support)? Judging from the tech specs alone, these versions seem outdated in many respects - mainly drivers and installed software versions. For instance, my previous (bountied!) problem regarding the AGN 5100 drivers would have been solved under Ubuntu 9.04.

    Read the article

  • How to generate customized sudoers files in puppet depending on the environment they're deployed to?

    - by gozu
    The sysadmins are present in the sudoers files of all environments, but other sudoers are not. Different environments all have slightly different sudoers files. Most of the time, 90% of the users are the same and 10% vary, so we cannot have a single sudoers file for everything. Right now, we are using Puppet with 10 different files with names like sudoers.production1, sudoers.production2, sudoers.production3, sudoers.testing1, sudoers.staging1 and so forth. Puppet then picks the file to deploy based on the server's $domain (e.g. dbserver.staging1.acme.com) or $hardwaremodel. It works fine, but it's a nightmare to maintain so many files. I'd like to autogenerate the sudoers files based on the server's domain and have only one big file with all the sudoers permissions for all users and all environments. Something that looks like:

        User_Alias ADMINS = abe, bob, carol, dave
        case $domain {
          "staging1.acme.com" {
            #add dev1,dev2,tester1,tester2 to sudoers file
          }
          "testing2.acme.com" {
            #add tester1, tester3, tester4 to sudoers file
          }
        }

    What's the best way to go about this? Suggestions for alternatives are welcome; I'd appreciate any tips.

    Update 1: For security reasons, we'd rather not concatenate a bunch of files from a folder located on a Puppet client, in case someone puts a file in there (maliciously or not) and either breaks the combined file or inserts something into it. Most importantly, for usability, we'd like to keep the number of sudoers-related files (fragment or complete) on the Puppet server to either 3 (prod/stage/test) or, preferably, 1 file. This file would (somehow) generate the sudoers files on the Puppet server and send one customized file to each Puppet client. The point is to be able to search for a username in a single file and remove it more quickly than doing so across 11 files. When adding a user to a bunch of environments it won't be as quick, but only one file would need to be opened and looked at, greatly reducing the chances of an omission. Our sudo version is 1.6.9p8, so we can't use a sudoers.d folder, only a single sudoers file.
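    In Puppet itself the usual route is an ERB template keyed on $domain, but as a language-neutral illustration, here is a minimal shell sketch of the generator idea. The master-file format and the paths are invented for this example, not taken from the question: one shared block for the admins, one line per "domain:sudoers-line" pair, and a visudo syntax check so a broken file never ships.

        #!/bin/sh
        # usage: gen-sudoers.sh staging1.acme.com
        DOMAIN="$1"
        MASTER=/etc/puppet/files/sudoers.master          # hypothetical master file
        OUT="/etc/puppet/files/generated/sudoers.$DOMAIN"

        # shared admin block first, then whatever is tagged for this domain
        cat /etc/puppet/files/sudoers.common > "$OUT"
        awk -F: -v d="$DOMAIN" '$1 == d { print $2 }' "$MASTER" >> "$OUT"

        # never deploy a sudoers file that visudo rejects
        visudo -c -f "$OUT" || { rm -f "$OUT"; exit 1; }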

    Read the article

  • Command to execute another command while replaying the command on STDOUT

    - by hakre
    It's not easy to formulate the question properly; maybe it helps if I describe what I'd like to do. I want to execute a command and pipe its output into a tool called pastebinit, which uploads the STDOUT output to pastebin. That works very well; however, I would like to send the command itself on top of it, without typing it a second time. Is there some command I can launch "my command" with that will (1) print "my command" on STDOUT and (2) execute "my command"? I have the feeling that something like that exists, but as hard as it is to formulate such a question properly, I have not been able to dig it up with Google so far.
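    One way to get both behaviours is a small shell wrapper (the function name here is arbitrary): print the command line to STDOUT first, then run it, so the command text and its output travel down the same pipe.

        #!/bin/sh
        # echo the command, then execute it with the same arguments
        run() { printf '$ %s\n' "$*"; "$@"; }

        # example: the uploaded paste starts with "$ uname -a", followed by the output
        run uname -a | pastebinit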

    Read the article

  • Ubuntu NBR karmic boot freezes at fsck from util-linux-ng 2.16

    - by BlueBill
    Hi all, I have a netbook (eMachines e250 - equivalent to an Acer Aspire One) with Ubuntu NBR 9.10 installed on it. Every other cold boot freezes at the following error message:

        fsck from util-linux-ng 2.16

    There is no disk activity, no activity whatsoever. I have left the machine sitting for over an hour and nothing. It takes a couple of hard resets to be able to boot properly. Once it boots, everything works great (wireless, suspend/resume, etc.)! I have spent the last couple of weeks researching the problem, and the only thing that seems to work is setting nolapic in the boot string in GRUB - then it boots every time. Unfortunately, nolapic disables the second core and causes problems with suspend/resume. At first I thought it was an fsck problem with the first partition on the hard disk, as it is a hidden NTFS partition containing the Windows XP recovery information, so in /etc/fstab I set that partition to be ignored by fsck. This didn't seem to do anything. I have these partitions:

        /dev/sda1 - an NTFS recovery partition
        /dev/sda2 - /boot
        /dev/sda3 - swap
        /dev/sda5 - /
        /dev/sda6 - /home

    I am running kernel version 2.6.31-19-generic and have all the patches (as indicated by Update Manager). I also have no splash screen, so I can see the boot progress. I have only been using NBR since January; I have been using Ubuntu on my desktop since last June (2009-06). What logs should I be looking at? Is there a log for failed boots? Thanks, Troy
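    A few places that may hold evidence after a failed boot, offered as hedged suggestions only (with a freeze this early, the filesystems may not yet be writable, so the logs can legitimately be empty):

        # console output from the previous boot, if bootlogd is enabled
        less /var/log/boot.log
        # reboot/shutdown records, to line boots up with the hard resets
        last -x | head
        # kernel messages mentioning fsck or the disk after a good boot
        dmesg | grep -iE 'fsck|sda'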

    Read the article

  • Using PAM and vsftpd without root access

    - by Zizzencs
    I'm trying to set up a few vsftpd instances on a machine that I have no root access to. The authentication should be done through PAM with pam_listfile, like this:

        pam_listfile.so item=group sense=allow file=/path/filename onerr=fail

    I can ask the administrator to set up a PAM service for me and include that line, but he is not willing to create 6 PAM services for my 6 vsftpd instances, and I really need a different /path/filename for each vsftpd server. Is there a way to solve this problem? Can I somehow avoid using an absolute path as the parameter? (I know the correct solution would be to use one vsftpd instance and set up virtual users properly. Unfortunately, I have to work with what I have, and the users already exist in an Active Directory and are authenticated to the system using another PAM service.)

    Read the article

  • Why isn't MediaWiki loading?

    - by E L
    I recently set up MediaWiki on an Apache server with PostgreSQL. It installed successfully. However, when I try to access the website, I get a blank page. The error log reports the following:

        [error] PHP Fatal error: require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Warning: require_once(/var/www/mediawiki-1.19.2/LocalSettings.php): failed to open stream: Permission denied in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134
        [error] PHP Fatal error: require_once(): Failed opening required '/var/www/mediawiki-1.19.2/LocalSettings.php' (include_path='.:/usr/share/pear:/usr/share/php') in /var/www/mediawiki-1.19.2/includes/WebStart.php on line 134

    I've seen other people with similar problems, and the solutions have involved using chmod on LocalSettings.php to set it to 644, or in other cases 755. Others have said to use chown to make LocalSettings.php match the Apache user, which is just 'apache' in my case. None of these solutions have worked for me. Does anyone have other suggestions, or did I miss something?
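    Since the file opens fine for root but not for the web server, two quick checks often pin this down (both are hedged guesses, not a diagnosis): directory permissions along the path, and SELinux.

        # Apache needs execute (+x) on every parent directory, not just read
        # permission on the file itself; namei shows the whole chain at a glance
        namei -l /var/www/mediawiki-1.19.2/LocalSettings.php

        # on SELinux-enabled systems, a wrong file context also produces
        # "Permission denied" even when the mode bits look correct
        ls -Z /var/www/mediawiki-1.19.2/LocalSettings.php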

    Read the article

  • GDM Won't Automatically Start After Boot

    - by Mikey.B
    Ubuntu 10.04

    Hi guys, I'm not sure why, but quite recently my Ubuntu desktop started bringing up the command-prompt login screen after booting. I want GDM to start up by default. I can manually bring up GDM by typing:

        sudo service gdm start

    or

        sudo /usr/sbin/gdm

    but I want something more permanent. The contents of /etc/X11/default-display-manager point to /usr/sbin/gdm. The chkconfig status for GDM shows that it's off on all runlevels... could that be it? I tried running:

        sudo chkconfig --add gdm

    and just got errors... Any ideas? How can I get GDM to automatically come up again? Any/all help is appreciated! -M
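    One hedged observation: on Ubuntu 10.04, gdm is started by an Upstart job rather than a SysV init script, so chkconfig showing it off on all runlevels (and chkconfig --add failing) is expected and probably a red herring. Two things worth checking instead:

        # the Upstart job should exist; if it does, Upstart (not runlevels) starts gdm
        ls -l /etc/init/gdm.conf

        # a kernel command line containing "text" suppresses the display manager
        cat /proc/cmdline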

    Read the article

  • GlassFish will not start when SNMP is enabled

    - by edarc
    I have a GlassFish v3 app server running on 64-bit Debian Lenny. Everything is running fine, except I would like to monitor GF's JVM instance with SNMP. However, every time I try to enable it by adding the following <jvm-options> in domain.xml:

        -Dcom.sun.management.snmp.port=10161
        -Dcom.sun.management.snmp.acl.file=/path/to/snmp.acl
        -Dcom.sun.management.snmp.interface=127.0.0.1

    GlassFish refuses to start:

        $ asadmin start-domain
        Waiting for DAS to start .Error starting domain: default.
        The server exited prematurely with exit code 1.
        Command start-domain failed.
        $

    There is also nothing illuminating (well, really nothing at all) in jvm.log or server.log. The snmp.acl file contains:

        acl = {
          {
            communities = public
            access = read-only
            managers = localhost
          }
        }

    and is chmod 600. (I know this is not the problem, because it will actually fail with an error about the permissions if it is set to anything other than 600.)

        $ java -version
        java version "1.6.0_0"
        OpenJDK Runtime Environment (build 1.6.0_0-b11)
        OpenJDK 64-Bit Server VM (build 1.6.0_0-b11, mixed mode)
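    Two hedged ways to make the failure visible, since the domain logs are empty: run the domain in the foreground so the JVM prints its exit reason, and rule out a port clash on the SNMP port.

        # verbose mode keeps the JVM attached to the console, so fatal errors
        # from the SNMP agent are printed instead of being swallowed
        asadmin start-domain --verbose

        # make sure nothing else is already bound to 10161
        netstat -ln | grep 10161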

    Read the article

  • MySQL and PostgreSQL on the same hardware

    - by Kamil Kisiel
    We recently bought some new hardware for a database server, which we were intending to dedicate to PostgreSQL. However, we now also have a requirement to run MySQL, as some software we want to use only supports that database. Since the storage on this machine is the most suitable for hosting a DB, and we don't currently have the budget for more hardware, we're thinking of running both of them on the same server. Are there any caveats or best practices we should be aware of?
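    The most common caveat is memory contention: both engines are typically tuned on the assumption that they own the machine. A hypothetical split for illustration only (the values are invented; adjust them to the actual RAM and workload):

        ## /etc/mysql/my.cnf - cap InnoDB so it leaves room for PostgreSQL
        [mysqld]
        innodb_buffer_pool_size = 4G

        ## postgresql.conf - cap shared memory, and tell the planner how much
        ## OS page cache it can realistically expect to share
        shared_buffers = 4GB
        effective_cache_size = 8GB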

    Read the article

  • Will my system fsck when I reboot?

    - by Tom Newton
    ...and how do I find out? Say I am about to reboot a server. I would like to minimize downtime, so I'm thinking about wrapping reboot in an alias that says "hang on buddy, you're going to hit an fsck on boot". Next question: what's the best way to say "let's do it next time"? Set the last-check date? I know tune2fs can set a bunch of parameters, but how would I get them?
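    A hedged sketch of both halves with tune2fs (the device name is an example):

        # read the counters fsck consults at boot: mount count, maximum mount
        # count, last check time and check interval
        tune2fs -l /dev/sda1 | grep -iE 'mount count|check'

        # postpone the check: pretend the filesystem was checked just now...
        tune2fs -T now /dev/sda1
        # ...or disable mount-count-triggered checks altogether
        tune2fs -c 0 /dev/sda1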

    Read the article

  • MaxStartups and MaxSessions configurations parameter for ssh connections?

    - by Webby
    I am copying files from machineB and machineC onto machineA, where I run the shell script below. If a file is not on machineB, it should definitely be on machineC, so I try copying from machineB first and fall back to machineC. I am copying the files in parallel using GNU Parallel, and it is working fine. Currently I am copying 10 files in parallel. Below is my shell script:

        #!/bin/bash
        export PRIMARY=/test01/primary
        export SECONDARY=/test02/secondary
        readonly FILERS_LOCATION=(machineB machineC)
        export FILERS_LOCATION_1=${FILERS_LOCATION[0]}
        export FILERS_LOCATION_2=${FILERS_LOCATION[1]}
        PRIMARY_PARTITION=(550 274 2 546 278) # this will have more file numbers
        SECONDARY_PARTITION=(1643 1103 1372 1096 1369 1568) # this will have more file numbers
        export dir3=/testing/snapshot/20140103

        find "$PRIMARY" -mindepth 1 -delete
        find "$SECONDARY" -mindepth 1 -delete

        do_Copy() {
          el=$1
          PRIMSEC=$2
          scp david@$FILERS_LOCATION_1:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/. \
            || scp david@$FILERS_LOCATION_2:$dir3/new_weekly_2014_"$el"_200003_5.data $PRIMSEC/.
        }
        export -f do_Copy

        parallel --retries 10 -j 10 do_Copy {} $PRIMARY ::: "${PRIMARY_PARTITION[@]}" &
        parallel --retries 10 -j 10 do_Copy {} $SECONDARY ::: "${SECONDARY_PARTITION[@]}" &
        wait

        echo "All files copied."

    Problem statement: at some point the script starts failing with this exception:

        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host
        ssh_exchange_identification: Connection closed by remote host

    I guess the error is typically caused by too many ssh/scp connections starting at the same time, which leads me to believe MaxStartups and MaxSessions in /etc/ssh/sshd_config are set too low. But my question is: on which server are they too low - machineB and machineC, or machineA? And on which machines do I need to increase the numbers? On machineA this is what I can find:

        root@machineA:/home/david# grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10:30:60
        root@machineA:/home/david# grep MaxSessions /etc/ssh/sshd_config

    And on machineB and machineC this is what I can find:

        [root@machineB ~]$ grep MaxStartups /etc/ssh/sshd_config
        #MaxStartups 10
        [root@machineB ~]$ grep MaxSessions /etc/ssh/sshd_config
        #MaxSessions 10
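    A hedged pointer: ssh_exchange_identification errors are reported by the client when the server's sshd drops the connection, so the limits that matter here are on the machines being connected to - machineB and machineC (machineA's sshd plays no part in outbound scp). A sketch of the change on those two hosts (the values are illustrative, and the service name may differ per distro):

        # raise the unauthenticated-connection and per-connection session limits
        sudo sed -i -e 's/^#\?MaxStartups.*/MaxStartups 30:30:100/' \
                    -e 's/^#\?MaxSessions.*/MaxSessions 30/' /etc/ssh/sshd_config
        sudo /etc/init.d/sshd reload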

    Read the article

• Problem when viewing the superblock in an ext3 file system

    - by xuczhang
    I tried to view the superblock with the "dd" command on an ext3 file system:

        dd if=/dev/sda3 bs=4096 skip=1 count=1 of=superblock

    But the result in the superblock file is not correct (I compared the Inodes count value against what I got from dumpe2fs). Does the device file /dev/sda3 start with the boot block, followed by the superblock of group 0? And another question: are the boot block and the superblock both BLOCKSIZE (here 4096) in size? (The question originally included a diagram of the ext2/ext3 on-disk layout, which I think is the same for both.)
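    A hedged explanation of the mismatch: the primary ext2/ext3 superblock starts at byte offset 1024 from the beginning of the partition (right after the 1024-byte boot block), regardless of the filesystem block size. With bs=4096 skip=1, the read starts at byte 4096 and lands past it. For example:

        # read the primary superblock: 1024 bytes in, 1024 bytes long
        dd if=/dev/sda3 bs=1024 skip=1 count=1 of=superblock

        # cross-check the raw dump against the decoded fields
        dumpe2fs -h /dev/sda3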

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    Hi all, I have a heavy application running on a CentOS server and I'm seeing strange memory behavior. Here is a snapshot of a munin graph. As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is actually memory freed up but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of cleanup, but I may be wrong. Can you give some tips to find the cause of the problem and/or cause CentOS to reclaim the inactive memory? Thanks. Some extra info:

    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).

    2) cat /proc/meminfo (at a later stage than the image) gives:

        MemTotal:     14371428 kB
        MemFree:       1207108 kB
        Buffers:         35440 kB
        Cached:        4276628 kB
        SwapCached:     785316 kB
        Active:        9038924 kB
        Inactive:      3902876 kB
        HighTotal:           0 kB
        HighFree:            0 kB
        LowTotal:     14371428 kB
        LowFree:       1207108 kB
        SwapTotal:    10223608 kB
        SwapFree:      6438320 kB
        Dirty:          627792 kB
        Writeback:           0 kB
        AnonPages:     7844560 kB
        Mapped:          49304 kB
        Slab:           146676 kB
        PageTables:      27480 kB
        NFS_Unstable:        0 kB
        Bounce:              0 kB
        CommitLimit:  17409320 kB
        Committed_AS: 16471488 kB
        VmallocTotal: 34359738367 kB
        VmallocUsed:    275852 kB
        VmallocChunk: 34359462007 kB
        HugePages_Total:     0
        HugePages_Free:      0
        HugePages_Rsvd:      0
        Hugepagesize:     2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.
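    One hedged correction to the model above: on Linux, "inactive" memory is mostly reclaimable page cache rather than memory awaiting cleanup, and tmpfs files live in the page cache too, which would fit the growing /tmp. A quick experiment to see how much is genuinely reclaimable:

        # flush dirty pages, then drop clean page cache, dentries and inodes;
        # note that tmpfs pages are NOT freed this way - they only go away when
        # the files are deleted (or get pushed out to swap)
        sync
        echo 3 > /proc/sys/vm/drop_caches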

    Read the article


  • Setting a custom timeout to nmblookup

    - by C2H5OH
    As part of a batch script, I have the following command:

        hostname=$(nmblookup -A $ip_address | awk '$2 == "<20>" {print $1}')

    which works fine from a functionality perspective, even for unresolved hosts. The problem is that when the IP address is not reachable, or the remote machine does not respond to the SMB request, the command takes about ten seconds to complete. Therefore, the question is simple: is there a way to lower the elapsed time in such cases? Or, in other words, is there a way to set a custom timeout for the nmblookup command? NOTE: I'm interested in solutions that do not make use of SIGALRM or similar mechanisms, if such solutions exist. The nmblookup version is 3.6.3 from Ubuntu 12.04 LTS.

    Read the article

  • Glibc importance of error ...

    - by Oz123
    Hi everyone, I am following LFS 6.7, and I have reached the point where I compile glibc-2.12.1. I mounted the LFS partition with the atime option; here is what I think confirms that:

        /dev/sdb1 on /mnt/lfs type ext4 (rw)

    I get the following errors when running make check, and I have no clue whether I should try to resolve them or just ignore them and go on:

        rpc/types.h sunrpc/rpc/svc_auth.h sunrpc/rpcsvc/bootparam.h sysvipc/sys/ipc.h \
        sysvipc/sys/msg.h sysvipc/sys/sem.h sysvipc/sys/shm.h termios/termios.h \
        termios/sys/termios.h termios/sys/ttychars.h time/time.h time/sys/time.h \
        time/sys/timeb.h wcsmbs/wchar.h wctype/wctype.h > \
        /sources/glibc-build/begin-end-check.out
        make[1]: Target `check' not remade because of errors.
        make[1]: Leaving directory `/sources/glibc-2.12.1'
        make: *** [check] Error 2

        root:/sources/glibc-build# grep Error glibc-check-log
        make[2]: *** [/sources/glibc-build/math/test-float.out] Error 1
        make[2]: *** [/sources/glibc-build/math/test-ifloat.out] Error 1
        make[1]: *** [math/tests] Error 2
        make[2]: [/sources/glibc-build/posix/annexc.out] Error 1 (ignored)
        make: *** [check] Error 2

    Thanks in advance, Oz

    Read the article

  • Configure IPv6 routing

    - by godlark
    I've got IPv6 addresses from SixXS. My host is connected to the SixXS network over an AICCU tunnel (the "sixxs" interface). My host's address is 2001:::2; the host on the other end has address 2001:::1. On my host, IPv6 is fully accessible. My problem is configuring the IPv6 network on VMs. I use VirtualBox, and the VM (Ubuntu) uses tap1 (bridged on the host by br0):

        #!/bin/sh
        PATH=/sbin:/usr/bin:/bin:/usr/bin:/usr/sbin
        # create a tap
        tunctl -t tap1
        ip link set up dev tap1
        # create the bridge
        brctl addbr br0
        brctl addif br0 tap1
        # set the IP address and routing
        ip link set up dev br0
        ip -6 route del 2001:6a0:200:172::/64 dev sixxs
        ip -6 route add 2001:6a0:200:172::1 dev sixxs
        ip -6 addr add 2001:6a0:200:172::2/64 dev br0
        ip -6 route add 2001:6a0:200:172::2/64 dev br0

    Host routing table:

        2001:6a0:200:172::1 dev sixxs metric 1024
        2001:6a0:200:172::/64 dev br0 proto kernel metric 256
        2001:6a0:200:172::/64 dev br0 metric 1024
        2000::/3 dev sixxs metric 1024
        fe80::/64 dev eth0 proto kernel metric 256
        fe80::/64 dev sixxs proto kernel metric 256
        fe80::/64 dev br0 proto kernel metric 256
        fe80::/64 dev tap1 proto kernel metric 256
        default via 2001:6a0:200:172::1 dev sixxs metric 1024

    Guest interface eth1 (connected to tap1):

        auto eth1
        iface eth1 inet6 static
            address 2001:6a0:200:172::3
            netmask 64
            gateway 2001:6a0:200:172::2

    Guest routing table:

        2001:6a0:200:172::/64 dev eth1 proto kernel metric 256
        fe80::/64 dev eth0 proto kernel metric 256
        fe80::/64 dev eth1 proto kernel metric 256
        default via 2001:6a0:200:172::2 dev eth1 metric 1024

    The guest pings the host, the host pings the guest, and the host pings 2001:6a0:200:172::1, but the guest cannot ping 2001:6a0:200:172::1. When the guest tries to ping, I can capture its packets on the host (with tcpdump), but the host doesn't forward them to 2001:6a0:200:172::1. What have I missed in the configuration?
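    A hedged guess at the missing piece: nothing in the setup script enables IPv6 forwarding, and without it the host will happily receive the guest's packets on br0 (which is why tcpdump sees them) but never relay them onto the tunnel:

        # allow the host to route IPv6 between br0 and the sixxs tunnel
        sysctl -w net.ipv6.conf.all.forwarding=1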

    Read the article

  • Silent Partner

    - by [email protected]
    The Team Behind the Man Behind the Mask As a continuing sponsor of the blockbuster Iron Man franchise, Oracle has been quietly preparing for the explosive sequel blasting its way into theaters this May. Through a series of advertising campaigns, immersive online experiences, and contests, Oracle plans to highlight its backstage efforts to help Marvel Entertainment hone its newfound superpowers. By driving the performance of critical systems, Oracle technologies are helping Marvel transform itself from mild-mannered comic book publisher to film industry power broker. You can learn more about this dynamic duo, and get free movie memorabilia, by visiting our Iron Man 2 showcase site.

    Read the article

  • proftpd on debian - authuserfile

    - by dirknibleck
    I have installed proftpd on my Debian 4.0 server. I have modified the proftpd.conf file so that there is a statement for AuthUserFile, which points to a valid file. The file is configured per the proftpd specs; however, the user that I have placed in this file is not able to log in to the server. What could I be doing wrong? The AuthUserFile entry is of the format:

        username:passwd:999:1002:www:/var/www:/bin/bash
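    One hedged guess: the second field must be a crypt(3)-style password hash that proftpd can verify, not a literal password, and hand-built entries often get this wrong. The ftpasswd utility shipped with proftpd writes entries in the expected format:

        # create (or append to) the AuthUserFile with a properly hashed password;
        # the values mirror the sample entry above
        ftpasswd --passwd --name=username --uid=999 --gid=1002 \
                 --home=/var/www --shell=/bin/bash --file=/path/filename

    If the entry itself is fine, RequireValidShell is worth a look too: the listed shell must appear in /etc/shells unless "RequireValidShell off" is set in proftpd.conf.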

    Read the article

  • Proftp error message Fatal: unknown configuration directive 'DisplayFirstChdir' on line 22 of '/etc/proftpd/proftpd.conf'

    - by LedZeppelin
    Sorry for the newb factor, but I'm trying to set up a server using this guide: http://www.intac.net/build-your-own-server/ I'm at the end of step 5, and when I try to restart proftpd I get the following error message:

        me@me-desktop:~$ sudo service proftpd restart
         * Stopping ftp server proftpd          [ OK ]
         * Starting ftp server proftpd
        Fatal: unknown configuration directive 'DisplayFirstChdir' on line 22 of '/etc/proftpd/proftpd.conf'
        [fail]

    Any clues on how to change line 22?
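    A hedged pointer: DisplayFirstChdir was renamed to DisplayChdir in ProFTPD 1.3.x, so a guide written for an older release will trip exactly this error. Swapping the directive name on that line usually resolves it:

        # keep a backup, then replace the pre-1.3 directive with its new name
        sudo cp /etc/proftpd/proftpd.conf /etc/proftpd/proftpd.conf.bak
        sudo sed -i 's/DisplayFirstChdir/DisplayChdir/' /etc/proftpd/proftpd.conf
        sudo service proftpd restart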

    Read the article

  • VirtualBox guest responds to ping but all ports closed in nmap

    - by jeremyjjbrown
    I want to set up a test database on a VM for development purposes, but I cannot connect to the server via the network. I've got an Ubuntu 12.04 VM installed on a 12.04 host in VirtualBox 4.2.4, set to:

        - Bridged network mode
        - Promiscuous: Allow All

    When I try to ping the virtual guest from any network client, I get the expected result:

        PING 192.168.1.209 (192.168.1.209) 56(84) bytes of data.
        64 bytes from 192.168.1.209: icmp_req=1 ttl=64 time=0.427 ms
        ...

    Internet access inside the VM is normal. But when I nmap it, I get nothing!

        jeremy@bangkok:~$ nmap -sV -p 1-65535 192.168.1.209

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:39 CST
        Nmap scan report for jeremy (192.168.1.209)
        Host is up (0.0032s latency).
        All 65535 scanned ports on jeremy (192.168.1.209) are closed
        Service detection performed. Please report any incorrect results at http://nmap.org/submit/
        Nmap done: 1 IP address (1 host up) scanned in 0.88 seconds

    ufw and iptables on the VM:

        jeremy@jeremy:~$ sudo service ufw stop
        [sudo] password for jeremy:
        ufw stop/waiting
        jeremy@jeremy:~$ sudo iptables -L
        Chain INPUT (policy ACCEPT)
        target prot opt source destination
        Chain FORWARD (policy ACCEPT)
        target prot opt source destination
        Chain OUTPUT (policy ACCEPT)
        target prot opt source destination

    I have scanned around and have no reason to believe that my router is blocking internal ports:

        jeremy@bangkok:~$ nmap -v 192.168.1.2

        Starting Nmap 5.21 ( http://nmap.org ) at 2012-11-15 18:44 CST
        Initiating Ping Scan at 18:44
        Scanning 192.168.1.2 [2 ports]
        Completed Ping Scan at 18:44, 0.00s elapsed (1 total hosts)
        Initiating Parallel DNS resolution of 1 host. at 18:44
        Completed Parallel DNS resolution of 1 host. at 18:44, 0.03s elapsed
        Initiating Connect Scan at 18:44
        Scanning 192.168.1.2 [1000 ports]
        Discovered open port 445/tcp on 192.168.1.2
        Discovered open port 139/tcp on 192.168.1.2
        Discovered open port 3306/tcp on 192.168.1.2
        Discovered open port 80/tcp on 192.168.1.2
        Discovered open port 111/tcp on 192.168.1.2
        Discovered open port 53/tcp on 192.168.1.2
        Discovered open port 5902/tcp on 192.168.1.2
        Discovered open port 8090/tcp on 192.168.1.2
        Discovered open port 6881/tcp on 192.168.1.2
        Completed Connect Scan at 18:44, 0.02s elapsed (1000 total ports)
        Nmap scan report for 192.168.1.2
        Host is up (0.0017s latency).
        Not shown: 991 closed ports
        PORT     STATE SERVICE
        53/tcp   open  domain
        80/tcp   open  http
        111/tcp  open  rpcbind
        139/tcp  open  netbios-ssn
        445/tcp  open  microsoft-ds
        3306/tcp open  mysql
        5902/tcp open  vnc-2
        6881/tcp open  bittorrent-tracker
        8090/tcp open  unknown
        Read data files from: /usr/share/nmap
        Nmap done: 1 IP address (1 host up) scanned in 0.08 seconds

    Answer: it turns out all of the ports were in fact open to the network - nothing was listening on them. I installed OpenSSH and confirmed it, then edited my DB config to listen on external IPs, and all was well.

    Read the article

  • CRMIT’s HIGH VALUE CRM++ PLUGINS FOR CRM On DEMAND

    - by Soumo Das
    Customer satisfaction and experience being the two most important factors, businesses these days are on the lookout for automation tools that are world class, agile and keep quality at their core. CRMIT has developed such tools using cutting-edge technologies, abstracting industry best practices and R&D.

    Self Service Portal

    With customers being so meticulous about regular updates and reliable access to their data, administrators just cannot afford to walk a thin line. Surviving without a resource that tracks customer requirements for services, available 24x7, can severely affect productivity. In such a scenario, CRMIT's Self Service Portal (SSP) is the best solution. It not only tracks the required customer data, but also allows companies to stay in tune with their employees, vendors and stakeholders. One can directly sign up to become a CRMOD contact and SSP user. One need not use the database, as operations and interactions are handled at run time. This is a fully configurable solution that tracks results periodically, making it easy for end users. It also offers better security and data visibility, enabling users to progress smoothly.

    Quote and Order Management

    When dealing with quotes, contracts and orders becomes complicated, only Quote & Order Management can work as a one-stop solution. CRMIT offers this tool for managing all this information and for taking care of customer orders and service requirements. This CRM On Demand plug-in allows one to create a new quote or copy an existing one. Products can be added directly from the CRMOD product list, and the pricing is calculated automatically. A quote can be generated and mailed to external users in PDF, HTML and XLS formats. This not only allows quotes to be managed in an enhanced manner, but also supports various billing and tax calculation features that make the work effortless.

    Report Scheduler

    When it comes to analyzing and providing statistics on the various business processes running in an organization, one cannot depend on manual updates, which may be inaccurate or delayed. CRMIT provides a SaaS-based solution - Report Scheduler - that allows CRM users to schedule reports at chosen frequencies and receive them as email attachments at the scheduled time. With this tool, administrators can control the report scheduler to assign specific reports to specific users. Users can then log in and schedule any assigned report for delivery at particular intervals on a monthly, weekly or daily basis. Additionally, users can copy external users on the mail and choose the preferred format. The best part is that sharing business data with third parties becomes easy, and users need not log into their CRMOD account to view reports.

    CRM On Demand Offline Solution

    CRM On Demand Offline is another CRM++ extension that allows one to work in both online and offline modes. Synchronizing the two modes is absolutely easy, and working offline also serves as a backup in most cases. It works as an automation tool that improves efficiency, is readily available as a Windows application installer, and requires users to be online only while validating and synchronizing.

    Read the article
