Search Results

Search found 27238 results on 1090 pages for 'local variable'.


  • Installing Zenoss on Ubuntu raises a "No valid ZENHOME" error

    - by bxshi
    I've added a user named zenoss and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME shows /usr/local/zenoss. To install Zenoss, I switched to the zenoss user and ran install.sh under zenoss-4.2.0/inst. When it tries to run the tests, this error occurs:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running org.zenoss.utils.ZenPacksTest
        Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE!
        Running org.zenoss.utils.ZenossTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec

        Results :

        Tests in error:
          testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.

        Tests run: 6, Failures: 0, Errors: 3, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [INFO] Reactor Summary:
        [INFO]
        [INFO] Zenoss Core ....................................... SUCCESS [27.643s]
        [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s]
        [INFO] Zenoss Jython Distribution ........................ SKIPPED
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 40.586s
        [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012
        [INFO] Final Memory: 16M/60M
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures.
        [ERROR]
        [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.
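
    A hedged guess at where to look: ~/.bashrc is typically only read by interactive shells, so the Maven test run may simply not see the variable. A minimal sketch (paths as described above, not confirmed for this install) that exports ZENHOME in the installer's own shell and makes sure the directory exists would be:

        # run as the zenoss user -- assumes the layout described in the post
        export ZENHOME=/usr/local/zenoss
        sudo mkdir -p "$ZENHOME" && sudo chown zenoss:zenoss "$ZENHOME"
        cd ~/zenoss-4.2.0/inst && ./install.sh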

  • hosts.deny not blocking ip addresses

    - by Jamie
    I have the following in my /etc/hosts.deny file:

        #
        # hosts.deny    This file describes the names of the hosts which are
        #               *not* allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #
        # The portmap line is redundant, but it is left to remind you that
        # the new secure portmap uses hosts.deny and hosts.allow. In particular
        # you should know that NFS uses portmap!
        ALL:ALL

    and this in /etc/hosts.allow:

        #
        # hosts.allow   This file describes the names of the hosts which are
        #               allowed to use the local INET services, as decided
        #               by the '/usr/sbin/tcpd' server.
        #
        ALL:xx.xx.xx.xx , xx.xx.xxx.xx , xx.xx.xxx.xxx , xx.x.xxx.xxx , xx.xxx.xxx.xxx

    but I am still getting lots of these emails:

        Time:     Thu Feb 10 13:39:55 2011 +0000
        IP:       202.119.208.220 (CN/China/-)
        Failures: 5 (sshd)
        Interval: 300 seconds
        Blocked:  Permanent Block

        Log entries:
        Feb 10 13:39:52 ds-103 sshd[12566]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12567]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12568]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:52 ds-103 sshd[12571]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root
        Feb 10 13:39:53 ds-103 sshd[12575]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=202.119.208.220 user=root

    What's worse is that csf is trying to auto-block these IPs when they attempt to get in, but although it does put the IPs in the csf.deny file they do not get blocked either. So I am trying to block all IPs with /etc/hosts.deny and allow only the IPs I use with /etc/hosts.allow, but so far it doesn't seem to work. Right now I'm having to manually block each one with iptables; I would rather it automatically block the hackers in case I am away from a PC or asleep.
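
    For what it's worth, tcp_wrappers only affects daemons that are linked against libwrap or started via tcpd, so a hedged first check is whether this sshd honours the files at all, and whether a per-daemon entry behaves differently (addresses below are placeholders, not from the post):

        # /etc/hosts.allow -- trusted clients first (syntax: daemon : client list; a trailing dot matches the whole subnet)
        sshd : 203.0.113.10 198.51.100.

        # /etc/hosts.deny -- then refuse everyone else
        sshd : ALL

        # does this sshd actually link against tcp_wrappers? (assumes a standard path and ldd)
        ldd /usr/sbin/sshd | grep libwrap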

  • Using custom variables in Google Analytics funnels?

    - by Matt Huggins
    Google Analytics allows you to view how many users completed funnels through a set of pages in order to reach a goal URL. The service also allows you to pass custom variables when tracking a page view. Is it possible to combine the two, such that I create a funnel based upon the value of a custom variable set for each visitor?

  • Xen PV packet loss

    - by Delphinator
    I'm having some serious issues with packet loss on one of my servers. This server is a somewhat old (P4-era) machine running Debian Squeeze and Xen 4.0. There are two domUs running on it (both also Debian Squeeze), one gateway and a fileserver. Unfortunately the processor has no virtualization extensions, therefore only PV can be used. While investigating why our network seems to be slower than it should be, I found some pretty bad packet loss (~25%). After further investigation and several experiments I did a measurement between the dom0 and one of the domUs:

        Server listening on UDP port 5001
        Receiving 1470 byte datagrams
        UDP buffer size: 110 KByte (default)
        ------------------------------------------------------------
        ------------------------------------------------------------
        Client connecting to dom0, UDP port 5001
        Sending 1470 byte datagrams
        UDP buffer size: 110 KByte (default)
        ------------------------------------------------------------
        [  3] local 192.168.1.2 (domU) port 33817 connected with 192.168.1.100 (dom0) port 5001
        [  4] local 192.168.1.2 (domU) port 5001 connected with 192.168.1.100 (dom0) port 48606
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.0 sec  46.3 MBytes  38.7 Mbits/sec
        [  3] Sent 33020 datagrams
        [  3] Server Report:
        [  3]  0.0-10.0 sec  46.2 MBytes  38.6 Mbits/sec   0.030 ms    89/33019 (0.27%)
        [  3]  0.0-10.0 sec  1 datagrams received out-of-order
        [  4]  0.0-10.2 sec  43.0 MBytes  35.3 Mbits/sec  13.074 ms 11575/42256 (27%)

    tl;dr: 27% packet loss from dom0 to domU with 50 Mbit UDP traffic. The same thing happens from anywhere in the network. The problem gets better for smaller bandwidths (0.047% for 5 Mbit) and worse for higher ones (59% for 200 Mbit). I did increase the CPU weight of the dom0, there is no swapping going on, and actual networking hardware is not involved. I never expected Xen (or anything related) to drop packets, and I'm completely clueless what to try next.
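
    For reference, the output above looks like iperf's UDP report; a hedged sketch of an invocation that would reproduce both directions of the measurement (assuming iperf2 and the addresses quoted above) is:

        # on the dom0 (192.168.1.100): run a UDP server
        iperf -s -u

        # on the domU (192.168.1.2): 10 s of 50 Mbit/s UDP, then repeat in the reverse direction (-r)
        iperf -c 192.168.1.100 -u -b 50M -t 10 -r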

  • invalid argument in bash script when port is bad

    - by user273689
    When I run the following commands I get an error whenever there is something wrong with eth3:

        RESC="1234"
        RESD="1234"
        RESO="1234"
        RESC=$(ssh -q vmx@$1 cat /sys/class/net/$2/carrier)
        RESO=$(ssh -q vmx@$1 cat /sys/class/net/$2/operstate)
        RESD=$(ssh -q vmx@$1 cat /sys/class/net/$2/dormant)

        cat: /sys/class/net/eth3/carrier: Invalid argument
        cat: /sys/class/net/eth3/dormant: Invalid argument

    How can I get the "Invalid argument" message inside the RESC and RESD variables? Thanks
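
    A minimal sketch of one way to do this (an assumption, not taken from the post): command substitution only captures stdout, so redirect stderr into it as well with 2>&1.

        # capture either the value or the error text (e.g. "Invalid argument") in the variable
        RESC=$(ssh -q "vmx@$1" "cat /sys/class/net/$2/carrier" 2>&1)
        RESD=$(ssh -q "vmx@$1" "cat /sys/class/net/$2/dormant" 2>&1)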

  • Running shortcut from command prompt without the .lnk extension (Windows)

    - by Abbas
    I have created a folder (d:\shortcuts), created shortcuts for most applications in this folder, and appended the folder path to the Path environment variable. Now all my applications are available from the Run dialog and the command window without adding each application's own directory to Path. However, I now have to type the name of the shortcut as well as its extension (e.g. vlc.lnk) to invoke it. Is there any way to do this without typing the extension?
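
    One commonly suggested approach (hedged, not verified for this setup) is to add .LNK to the PATHEXT environment variable, the list of extensions the command interpreter treats as executable:

        rem hypothetical example; persists for new sessions only
        setx PATHEXT "%PATHEXT%;.LNK"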

  • Sharepoint (WSS 3.0) on SBS 2008 broken.

    - by tcv
    I recently ran the SharePoint Products and Technologies Wizard. I had hoped this would bring up SharePoint and allow me to access it so I could begin to learn, but it's not working. Here is some data that I hope is relevant. I am doing all my testing on the SBS 2008 server itself. I changed the host header in IIS to reflect an external FQDN I plan to deploy. The SBS server is remote and there are no domain-connected workstations. If I browse "localhost" over SSL, I can get to the site, albeit with a self-signed certificate warning. If I attempt to connect via SSL using either the internal FQDN (.local), the external FQDN (.net), or any other permutation thereof, I am prompted for credentials three times but am not allowed access. My account is a domain admin. The site is inaccessible over port 80 whether using localhost, the internal FQDN (.local), or the external FQDN (.net). Right now I suspect my problem is within IIS, but I don't know. My plan is to publish the SharePoint site to the web so my partner and I can check documents in and out. Can someone help me get started in the right direction?

  • Prevent the virtual IP from falling back to the master after a failure

    - by Chrille
    I'm using keepalived to set up a virtual IP that points to a master server. When a failover happens it should point the virtual IP to the backup, and the IP should stay there until I manually enable (fix) the master. The reason this is important is that I'm running MySQL replication on the servers and writes should only go to the master. When I fail over, I promote the slave to master.

    The master server:

        global_defs {
           ! this is who emails will go to on alerts
           notification_email {
              [email protected]
              ! add a few more email addresses here if you would like
           }
           notification_email_from [email protected]
           ! I use the local machine to relay mail
           smtp_server 127.0.0.1
           smtp_connect_timeout 30
           ! each load balancer should have a different ID
           ! this will be used in SMTP alerts, so you should make
           ! each router easily identifiable
           lvs_id APP1
        }

        vrrp_instance APP1 {
            interface eth0
            state EQUAL
            virtual_router_id 61
            priority 999
            nopreempt
            virtual_ipaddress {
                217.x.x.129
            }
            smtp_alert
        }

    Backup server:

        global_defs {
           ! this is who emails will go to on alerts
           notification_email {
              [email protected]
              ! add a few more email addresses here if you would like
           }
           notification_email_from [email protected]
           ! I use the local machine to relay mail
           smtp_server 127.0.0.1
           smtp_connect_timeout 30
           ! each load balancer should have a different ID
           ! this will be used in SMTP alerts, so you should make
           ! each router easily identifiable
           lvs_id APP2
        }

        vrrp_instance APP2 {
            interface eth0
            state EQUAL
            virtual_router_id 61
            priority 100
            virtual_ipaddress {
                217.xx.xx.129
            }
            notify_master "/etc/keepalived/notify.sh del app2"
            notify_backup "/etc/keepalived/notify.sh add app2"
            notify_fault "/etc/keepalived/notify.sh add app2"
            smtp_alert
        }
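
    A hedged aside: the keepalived documentation describes nopreempt as only taking effect when the instance starts in state BACKUP on both nodes, with the preferred node simply keeping the higher priority. A minimal sketch of that pattern, using the values from the post, would be:

        vrrp_instance APP1 {
            state BACKUP      ! nopreempt is only honoured when the initial state is BACKUP
            nopreempt
            priority 999      ! the preferred node keeps the higher priority
            ...
        }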

  • VMWare Fusion cannot connect to the NAT connection on my Mac

    - by FFish
    I have been using VMware Fusion on my Mac to check out my websites on localhost. Now I can't connect anymore with the NAT connection. There seems to be a problem with my IP address or MAC address? I have no idea what causes this; it was working fine before. In the XP (SP2) VM, in the taskbar I see the Local Area Connection with the yellow warning icon. The bubble says: "This connection has limited or no connectivity. You might not be able to access the Internet or some network resources. For more information, click this message." Doing that opens up the Local Area Connection Status panel. In the Support tab, when I click the Repair button I get the following message: "Windows could not finish repairing the problem because the following action cannot be completed: Renewing IP address." I tried disabling my firewall and also XAMPP, which I use as the server on OS X. VMware Fusion version: 3.1. VM: XP SP2. Mac OS X 10.6.3. Any help would be greatly appreciated.

  • Keyboard issue when using kitty+puttycyg but not when using putty or cygwin alone

    - by kamaradclimber
    I would like a single way to use a console on my Windows setup. Previously I used PuTTY for remote access to Linux servers and Cygwin to have Unix-like tools on Windows. Then I discovered KiTTY, which is a patched PuTTY, and added the puttycyg patch. It provides the same way to connect to remote and local consoles. However, there is strange behaviour when using vim on the local console (via the puttycyg patch): keys display A/B/C/D and replace the current character with these letters. In insert mode the character is replaced; in normal mode no modification is made to the document even though the character is displayed as replaced. For instance, when I type "fixed bug with product deleted" I get "fixed bbug wiwith prprodudueleteted". I have read a lot of questions about this type of issue and googled it, but there is no answer that works for me. The issue is present only for the kitty+puttycyg setup: Cygwin alone works perfectly (and PuTTY alone also works for access to Linux servers). Any help would be appreciated!
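
    If the stray A/B/C/D come from the cursor keys, that is the classic sign of vim not recognising the terminal's escape sequences. A hedged thing to try inside the puttycyg session (an assumption, not confirmed for this setup) is to make sure the advertised terminal type is one vim knows and that vim is not running in vi-compatible mode:

        export TERM=xterm                         # or whatever terminal type kitty actually emulates
        echo "set nocompatible" >> ~/.vimrc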

  • wget is working only when used with sudo

    - by Yusuf
    I'm seeing quite strange behaviour with wget since yesterday. I can download files by using sudo wget, but when I try the same file with only wget, I get this error:

        yusufh@ubuntu-yuh:~$ wget http://www.kegel.com/wine/winetricks
        --2010-12-17 09:34:11--  http://www.kegel.com/wine/winetricks
        Resolving www.kegel.com... failed: Name or service not known.
        wget: unable to resolve host address `www.kegel.com'

    and with sudo wget:

        yusufh@ubuntu-yuh:~$ sudo wget http://www.kegel.com/wine/winetricks
        --2010-12-17 09:35:37--  http://www.kegel.com/wine/winetricks
        Connecting to 127.0.0.1:5865... connected.
        Proxy request sent, awaiting response... 200 OK
        Length: 190672 (186K) [text/plain]
        Saving to: `winetricks'
        100%[==================================================================================================>] 190,672     --.-K/s   in 0.03s
        2010-12-17 09:35:37 (6.92 MB/s) - `winetricks' saved [190672/190672]

    After the comments below, here is an update: I can use Google Chrome or Firefox perfectly without running them as root. I use ntlmaps to connect to the office proxy, so I need to use 127.0.0.1:5865 as the proxy for clients. The result of env | grep -i proxy:

        NO_PROXY=localhost,127.0.0.0/8,*.local,
        http_proxy=127.0.0.1:5865
        ftp_proxy=127.0.0.1:5865
        all_proxy=socks://127.0.0.1:5865/
        ALL_PROXY=socks://127.0.0.1:5865/
        https_proxy=127.0.0.1:5865
        no_proxy=localhost,127.0.0.0/8,*.local

    while sudo env | grep -i proxy is empty! HELP!
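
    A hedged observation (not verified here): wget expects proxy variables to be full URLs and does not speak SOCKS, so a bare host:port value or a socks:// URL may simply be ignored. That would explain why the non-root wget tries to resolve the host directly while the root one presumably picks the proxy up from /etc/wgetrc or /root/.wgetrc instead. A minimal sketch of what the non-root environment might need:

        export http_proxy=http://127.0.0.1:5865/
        export https_proxy=http://127.0.0.1:5865/
        wget http://www.kegel.com/wine/winetricks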

  • Strange behaviour when creating/deleting subdomains

    - by Saif Bechan
    This may be a DNS cache issue on my local machine, but I am not sure. This is what happens. I have a domain that does not use wildcard subdomains, so they have to be created explicitly. Before I create the subdomain, if I point my browser to test.domain.com I get a "server not found" page. Now when I create the subdomain, I keep getting the same problem. But if I create the subdomain first, without ever visiting the page, I get the normal page; and when I then delete the subdomain, it never goes away. Can this be a DNS cache issue? I am working in a shared environment; maybe the router has a cache, but I doubt that. Can this have something to do with my setup? I have tried using Google DNS, but this gives me the same results. I have also tried some tools that clear my local DNS cache; they were some add-ons for Firefox. Does anyone have any ideas what the problem could be? Are there any tests I can do to see if there is some kind of cache between me and the server?
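
    A hedged way to separate caching from a real server-side change is to ask the zone's authoritative nameserver directly and compare it with what the local resolver says (hostnames below are placeholders):

        dig test.domain.com @ns1.domain.com +short   # what the zone actually publishes
        dig test.domain.com +short                   # what your local resolver (or a cache in between) believes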

  • Oracle 10g for Windows does not start up on system boot

    - by Mike Dimmick
    We have an Oracle 10g Enterprise Edition installation (10.2.0.1.0) on a Windows Server 2003 virtual machine. It was initially created with Virtual Server 2005 R2 SP1 but has now been migrated to Windows Server 2008 Hyper-V. The services start on system boot, but the instance does not start up. This problem was actually occurring on Virtual Server after a migration from one server to another, but I managed to fix it then with:

        oradim -edit -sid ORCL -startmode auto

    However, this now has no effect. oradim.log (in %OracleHome%\database\oradim.log) says:

        Thu Jun 10 14:14:48 2010
        C:\oracle\product\10.2.0\db_3\bin\oradim.exe -startup -sid orcl -usrpwd * -log oradim.log -nocheck 0
        Thu Jun 10 14:14:48 2010
        ORA-12560: TNS:protocol adapter error

    sqlnet.log in the same folder has:

        Fatal NI connect error 12560, connecting to:
        (DESCRIPTION=(ADDRESS=(PROTOCOL=BEQ)(PROGRAM=oracle)(ARGV0=oracleorcl)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))'))(CONNECT_DATA=(SID=orcl)(CID=(PROGRAM=C:\oracle\product\10.2.0\db_3\bin\oradim.exe)(HOST=ORACLE-VM)(USER=SYSTEM))))

        VERSION INFORMATION:
        TNS for 32-bit Windows: Version 10.2.0.1.0 - Production
        Oracle Bequeath NT Protocol Adapter for 32-bit Windows: Version 10.2.0.1.0 - Production
        Time: 10-JUN-2010 14:14:48
        Tracing not turned on.
        Tns error struct:
        ns main err code: 12560
        TNS-12560: TNS:protocol adapter error
        ns secondary err code: 0
        nt main err code: 530
        TNS-00530: Protocol adapter error
        nt secondary err code: 2
        nt OS err code: 0

    The ORA_ORCL_AUTOSTART registry value is set to TRUE, so it should be auto-starting - and you can see that it's trying to. The problem also occurs when stopping and restarting the OracleServiceORCL service. I've enabled SQL*Net tracing, which shows:

        [10-JUN-2010 15:09:33.919] snlpcss: entry
        [10-JUN-2010 15:09:34.419] snlpcss: Unable to spawn Oracle oracle (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq))) orcl, error 2.
        [10-JUN-2010 15:09:34.419] snlpcall: exit

    On a hunch that error 2 is Windows error 2 (file not found) I tried restarting the service with Process Monitor watching oradim.exe, but this appears to delay things just enough that it always works. Right now I have a horrible hack where I've created a Scheduled Task to run oradim -startup -sid ORCL when the Administrator account logs on, and set the VM to auto-logon. I'd still like to work out why it's not working.

  • Samba4/Ubuntu Shares Incorrectly Available to All Users

    - by Dan
    I've got my Ubuntu server working with Samba4 and have it set up as the primary domain controller on my network with AD and all that goodness. However, I'm trying to get my Samba configuration to work with the users and groups I've defined with the Active Directory tools from Windows. For instance, I've got a share X which I want users A and B (as part of the 'management' group, known as LLGrpManager in my setup) to see, but nobody else. However, after making changes to the configuration and restarting Samba, I test by connecting to the share from my Mac over Samba as user C, which isn't part of the management group, and I can, incorrectly, see the X share. I've tried all sorts of combinations of specifying the group with no luck at all. I've got a feeling that my global config might be too lenient, or that it's something to do with file permissions, but being a bit green I'm without a clue.

    My /etc/samba/smb.conf:

        # Global parameters
        [global]
            server role = domain controller
            server string = Office Server
            workgroup = LLDOMAIN
            realm = lldomain.local
            netbios name = DUMBO
            passdb backend = samba4
            logon path = \\%L\profiles\%U
            logon drive = L:
            log file = /var/log/samba/%m.log
            max log size = 50
            security = ads
            domain logons = yes
            domain master = auto
            usershare allow guests = no
            valid users = %S

        [netlogon]
            path = /var/lib/samba/sysvol/lldomain.local/scripts
            read only = no
            guest ok = no

        [sysvol]
            path = /var/lib/samba/sysvol
            read only = No
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager

        [ShareX]
            path = /data
            comment = Entire Data Volume
            guest ok = no
            comment = Entire Data Volume
            guest ok = no
            valid users = @LLDOMAIN\LLGrpManager
            admin users = @LLDOMAIN\LLGrpManager
            browsable = no
            inherit acls = yes
            inherit permissions = yes
        ...

    My /etc/nsswitch.conf: I've also instructed the system to use the nss winbind library when searching for users or groups by adding the passwd and group stanzas to /etc/nsswitch.conf:

        passwd: compat winbind
        group:  compat winbind
        shadow: compat

    Permissions on the folder in question:

        drwxrwxrwt 8 root root 4.0K Oct 28 19:11 data
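
    A couple of hedged sanity checks that usually help with this kind of mismatch: dump the configuration smbd will actually use (defaults resolved, typos flagged), and confirm that winbind really resolves the group and the connecting user's membership (names below come from the post; the user name is hypothetical):

        testparm -s                        # effective share definitions as smbd sees them
        wbinfo -g | grep -i LLGrpManager   # is the AD group visible through winbind?
        id 'LLDOMAIN\userC'                # which groups does the connecting user map to?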

  • The Network folder specified is currently mapped using a different user name and password

    - by Frank Thornton
    I have a NAS device with 3 shares. On one computer I have access to all 3 of the shares. On another computer I keep getting this error when I try to add a second one:

        The Network folder specified is currently mapped using a different user name and password [...]

    That is the message I keep getting. What causes that?

    EDIT: Every share has its own username and password.

    EDIT: NET USE on the machine that maps 3 shares from the same NAS device:

        New connections will be remembered.

        Status       Local     Remote                      Network
        -------------------------------------------------------------------------------
        OK           T:        \\192.168.2.5\SHARE1        Microsoft Windows Network
        OK           X:        \\Nas-1dsho-abc\SHARE2      Microsoft Windows Network
        Disconnected Y:        \\192.168.2.9\backups       Microsoft Windows Network
        OK           Z:        \\Nas-1dsho-abc\cbackups    Microsoft Windows Network
        The command completed successfully.

    NET USE on the other:

        New connections will be remembered.

        Status       Local     Remote                      Network
        -------------------------------------------------------------------------------
        OK           Y:        \\192.168.2.5\SHARE1        Microsoft Windows Network
        Unavailable  Z:        \\192.168.2.5\SHARE2        Microsoft Windows Network
        The command completed successfully.

  • DAS vs SAN storage for serving 2 to 4 nodes

    - by Luke404
    We currently have 4 Linux nodes with local storage, arranged in two active/passive pairs with storage mirrored using DRBD, running virtual machines (actually using Xen Hypervisor) for typical hosting workloads (mail, web, a couple VPS, etc.). We're approaching the (presumed) maximum IOPS of those servers, and we're planning to migrate to an external storage solution with two active nodes, with capacity for up to four active nodes.

    Since we're an all-Dell shop I've done some research and found the MD3200 / MD3200i products should be the ones we're looking for. We are pretty sure we won't be attaching more than 4 hosts on a single storage and I'm wondering if there is any clear advantage for one or the other.

    In theory I should be able to attach 4 SAS hosts to a single MD3200 (single links on a single controller MD3200, or dual redundant SAS links from each host to a dual-controller MD3200), or 4 iSCSI hosts to a single MD3200i (directly on its 4 GigE ports without any switch, again with dual links for the dual controller option). Both setups should let us implement live VM migration since all hosts can access all the LUNs at the same time, and also some shared filesystem like GFS2 or OCFS2. Also, both setups should allow full redundancy of the whole system (assuming dual controllers in the storage).

    One difference I can see is that the DAS solution is actually limited to 4 hosts while the iSCSI one should be able to grow to more hosts (adding two GigE switches to the mix). One point for the iSCSI solution is that it would allow us to start out with our current nodes and upgrade them at a later time (we can't add other SAS controllers, but they already have 4 GigE ports each). With the right (iSCSI|SAS) controllers I should be able to connect diskless nodes and boot them off the external storage which I think is a good thing (get rid of any local storage).

    On the other hand, I would have thought the SAS one to be cheaper but it seems like an MD3200 actually costs a little less than an MD3200i (?) (please note: I've used Dell gear in my examples since that's what we're looking for but I assume the same goes with other vendors). I would like to know if my assumptions above are correct, and if I'm missing any important difference between the two setups.

  • How to disable mod_security2 rule (false positive) for one domain on centos 5

    - by nicholas.alipaz
    Hi, I have mod_security enabled on a CentOS 5 server, and one of the rules is keeping a user from posting some text on a form. The text is legitimate, but it has the word 'create' and an HTML <table> tag later in it, so it is causing a false positive. The error I am receiving is below:

        [Sun Apr 25 20:36:53 2010] [error] [client 76.171.171.xxx] ModSecurity: Access denied with code 500 (phase 2). Pattern match "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" at ARGS:body. [file "/usr/local/apache/conf/modsec2.user.conf"] [line "352"] [id "300015"] [rev "1"] [msg "Generic SQL injection protection"] [severity "CRITICAL"] [hostname "www.mysite.com"] [uri "/node/181/edit"] [unique_id "@TaVDEWnlusAABQv9@oAAAAD"]

    and here is /usr/local/apache/conf/modsec2.user.conf (line 352):

        #Generic SQL sigs
        SecRule ARGS "((alter|create|drop)[[:space:]]+(column|database|procedure|table)|delete[[:space:]]+from|update.+set.+=)" "id:1,rev:1,severity:2,msg:'Generic SQL injection protection'"

    The questions I have are: What should I do to "whitelist" or allow this rule to get through? What file do I create and where? How should I alter this rule? Can I set it to only be allowed for the one domain, since it is the only one having the issue on this dedicated server, or is there a better way to exclude table tags perhaps? Thanks guys
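
    A hedged sketch of the usual per-domain whitelist (note that the log above reports id 300015 while the rule text shows id:1, so check which id your install actually uses) is to remove the rule only inside the affected virtual host, or only for the URL that trips it:

        <VirtualHost *:80>
            ServerName www.mysite.com
            ...
            <IfModule mod_security2.c>
                SecRuleRemoveById 300015
                # or, narrower:
                # <LocationMatch "^/node/[0-9]+/edit$">
                #     SecRuleRemoveById 300015
                # </LocationMatch>
            </IfModule>
        </VirtualHost>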

  • Using psftp to upload and download files

    - by macha
    Hello, I am trying to upload and download files between my desktop and my server. After some searching I downloaded psftp. I used FileZilla earlier, but I cannot install it on my desktop for a few reasons, and psftp (similar to PuTTY) is just an executable for file transfer. After going through this link http://www.math.tamu.edu/~mpilant/math696/psftp.html I understood that put and get are the two commands I would use to upload and download files. Now when I log on to the server and say "get filename", it actually throws back the error "local: unable to open filename". I tried that with other files too, and I end up getting the same error. The psftp.exe file is on my desktop. The process that I am using is: double-click the .exe file, then

        open "servername"
        cd /path/where/files/are
        get "filename"

    And I get this error: "local: unable to open filename". Am I making a mistake, or is it a problem with this executable?
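
    For context, a "local: unable to open <file>" message from psftp refers to the local side of the transfer, i.e. the file it is trying to create on the desktop. A hedged sketch of a session that first uses lcd to pick a writable local directory (paths below are placeholders) is:

        open servername
        lcd C:\Users\me\Downloads
        cd /path/where/files/are
        get filename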

  • OpenVPN Server - CPU is pegged out

    - by ericl42
    Hello, I am configuring OpenVPN to act as an SSL tunnel for a remote location. I have OpenVPN1 at our current location acting as a server, and OpenVPN2 at the other location acting as a client, but also acting as a DHCP server for the machines behind it, so they are basically connected to the local LAN. Everything is set up fine and I can talk from location A to location B with no problems, as if everyone were local. I am however having some performance issues. OpenVPN1's CPU is pegged at 100% the entire time I am copying or doing any type of activity through the tunnel. I expect some CPU usage to go up, but nothing like this; it's really killing my performance. OpenVPN1 is running in ESX right now with 2 GB RAM and 4 processors with unlimited bursting capacity. I am using AES-192 encryption with a 1024-bit key. Any idea how I can get the CPU down on OpenVPN1 and my download/upload speeds higher through the tunnel? Thanks.

    Edit: Turning down the logging helped boost the throughput a little bit, but I am still fairly shy of where I believe I should be, and I am still maxed out on the CPU. Does anyone have any ideas? I am really stuck on this. Thanks.
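
    One hedged way to see whether the encryption itself is the bottleneck is to benchmark the cipher on OpenVPN1 outside the tunnel and compare it with the throughput you are seeing (OpenVPN 2.x handles a tunnel on a single thread, so the single-core number is the one that matters):

        openssl speed aes-192-cbc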

  • Why is my connection slow?

    - by Jay R.
    I have a Dell Precision T5400 with an onboard Broadcom 1Gb NIC. For some strange reason, when I access machines on our local network, the best I can get is around 125KB/s download speed. My laptop, which has an onboard 10/100Mb NIC, usually gets around 300KB/s or better from the same network resource. Both machines are plugged into the same 1Gb switch, which connects to our local network wall jack at 100Mb half duplex. There is also a printer plugged into the same switch at 100Mb full duplex. The resource I'm using for the test is a 30MB zip file copied from a Jetty web server that is running as part of a CruiseControl installation. The CruiseControl machine is running Windows XP with full real-time antivirus and Altiris patch management and inventory running; that stuff on its own is eating some of the download speed. I've seen the laptop reach multi-MB/s download speeds before, but the desktop never seems to get past 125KB/s to 130KB/s. In Windows XP, before I upgraded the driver on the desktop, it was that slow. In Fedora, it is still slow even though it appears to be using the same driver version as the upgraded Windows driver. The upgraded Windows driver is faster, but still not nearly as fast as the laptop. What gives? Any insight to improve the situation would be appreciated. Could it be that the Broadcom board just isn't that good, or that the driver in Linux is just not as good as the Windows one?
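
    A hedged first check for this kind of asymmetry is to see what speed and duplex each NIC actually negotiated, and whether error counters grow during a slow transfer (interface name below is a placeholder; run on the Fedora side):

        ethtool eth0            # negotiated speed/duplex on the desktop NIC
        ip -s link show eth0    # RX/TX error and drop counters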

  • Server Manager from Windows 2008 to Hyper-V 2008 R2?

    - by Roger Lipscombe
    My workstation is running Windows Server 2008, on which I do not have local admin privileges. I have a Hyper-V Server 2008 R2 (i.e. Core + Hyper-V) box, on which I do have local admin privileges. I can Remote Desktop to the box, and Hyper-V Manager works fine (outside of Server Manager). It's just that there are some things that are easier to do in Server Manager (partition disks, etc.) than at the command line. I'd like to use Server Manager on my workstation to manage the Hyper-V box. However: when I run Server Manager on my workstation, it prompts for elevation and won't then let me connect to another server. If I attempt to run MMC and then add "Server Manager" as a snap-in, it doesn't prompt me for the server name; it then complains that I'm not an Administrator, and it doesn't provide for connecting to another server. The Remote Server Administration Tools (RSAT) are for Windows Vista and Windows 7 RC; these don't install on Windows Server 2008.

  • Exchange Connector Won't Send to External Domains

    - by sisdog
    I'm a developer trying to get my .NET application to send emails out through our Exchange server. I'm not an Exchange expert, so I'll qualify that up front! We've set up a receive connector in Exchange that has the following properties:

        Network: allows all IP addresses via port 25.
        Authentication: Transport Layer Security and Externally Secured checkboxes are checked.
        Permission Groups: Anonymous Users and Exchange Servers checkboxes are checked.

    But when I run this PowerShell statement right on our Exchange server, it works when I send to a local domain address but fails when I try to send to a remote domain.

    Works:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER

    (BTW: my value for OURSERVER is boxname.domainname.local, the same fully-qualified name that shows up in our Exchange Management Shell when I launch it.)

    Fails:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER
        Send-MailMessage : Mailbox unavailable. The server response was: 5.7.1 Unable to relay
        At line:1 char:17
        + Send-Mailmessage <<<<  -To [email protected] -From [email protected] -Subject testing -Body himom -SmtpServer FTI-EX
            + CategoryInfo          : InvalidOperation: (System.Net.Mail.SmtpClient:SmtpClient) [Send-MailMessage], SmtpFailedRecipientException
            + FullyQualifiedErrorId : SmtpException,Microsoft.PowerShell.Commands.SendMailMessage

    EDIT: From @TheCleaner's advice, I ran Add-ADPermission against the relay connector and it didn't help:

        [PS] C:\Windows\system32> Get-ReceiveConnector "Allowed Relay" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"

        Identity                 User                     Deny    Inherited
        --------                 ----                     ----    ---------
        FTI-EX\Allowed Relay     NT AUTHORITY\ANON...     False   False

    Thanks for the help. Mark

  • Setup Exchange 2010 cannot verify Host (A) record warning

    - by Joost Verdaasdonk
    When I try to install Exchange 2010 on my Windows Server 2008 R2 server I get a warning during the prerequisites check:

        Warning: setup cannot verify that the 'Host' (A) record for this computer exists within the DNS database on server: 90.195.200.12.

    The goal of this Exchange setup is to be able to send email on my local domain as well as receive/send email through the public domain name. Some information about my setup: this server is going to be a dedicated Exchange host and has the following IP setup (IPs are examples and not the real IPs, of course):

        Local VLAN NIC:
            IP: 10.10.50.22
            Subnet: 255.255.255.0
            No gateway
            DNS: 10.10.50.1 (the domain controller with authoritative DNS)

        Public WAN NIC:
            IP: 90.195.200.148
            Subnet: 255.255.255.235
            Gateway: 90.195.200.145
            DNS: 90.195.200.12 | 190.160.230.14

        My public domain - exampledomain.com:
            A record: mail - IP: 90.195.200.148
            MX record: IP: 90.195.200.148

    As I see it now, the Exchange setup is looking for the A record on one of the DNS servers of my public WAN NIC, and of course that is not where my A records are defined. I have those A records in two places: in the domain controller's DNS (the private NIC), and in the online DNS registration of my public domain (exampledomain.com). My question is: is this warning going to be a problem? Can I do something better in my setup so that this warning will go away? Please advise.
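
    A hedged way to reproduce what setup is checking is to query each configured DNS server for the host (A) record directly (the hostname below is a placeholder built from the post's example values):

        nslookup exchangehost.yourdomain.local 10.10.50.1      # the internal AD DNS, where the record lives
        nslookup exchangehost.yourdomain.local 90.195.200.12   # the public resolver the warning refers to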

  • PowerShell: How to customize prompt?

    - by Ariel
    I like to define the environment variable PROMPT as $p$_$g so that the prompt starts on a new line, but that does not seem to apply to my PowerShell prompt :-( dir function:/ shows that a function named "prompt" is already defined. Is there any way I can get my prompt customized in a PowerShell console without messing up the already defined "prompt" function?
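
    For reference, PowerShell ignores the cmd.exe PROMPT variable; the prompt is simply whatever the function named prompt returns, so redefining that function (for example in your $PROFILE) is the supported way to customize it. A hedged sketch of the equivalent of $p$_$g:

        function prompt { "$(Get-Location)`n> " }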

  • Pair programming with tmux and Vagrant

    - by neezer
    Does anyone have a clear step-by-step guide for setting up a shared tmux session on a Vagrant VirtualBox VM that my coworkers (on our local office LAN) could SSH into? The articles I've found online only seem to cover setting this up from machine to machine (no VirtualBox setups), and I'm not very good at networking, so I haven't been able to extrapolate a solution. We're all running the latest Macs in our office, btw. Here's one article I've found but haven't been able to get working with Vagrant: http://blog.voxdolo.me/remote-pairing-with-vim-and-tmux.html

    EDIT: To clarify, I don't really know how I should be setting up Vagrant to allow me to SSH into it from a machine outside the one hosting the VM. The article above suggests that I add the "tunnels" host on my physical machine running the VM (hereafter referred to as the MBP), so I did that. Next is the ProxyCommand host declaration, which I have also assumed should live on the MBP. So next I try SSHing into the MBP from a guest machine (another separate physical machine on my network), and that seems to work... but that only gets me into the MBP, not the Vagrant image running on the MBP. I normally log in to the Vagrant image on the MBP via vagrant ssh (per the docs), and I know how to forward ports on the Vagrant VM to the MBP, but it's unclear to me how I could forward ports/SSH from the MBP to the Vagrant VM, which I assume I would need to do so that my guest machine could SSH in--through the MBP--to my Vagrant image. That, in a nutshell, is what I'm trying to accomplish. I do my development work in Vagrant VMs, which keeps my MBP nice and clean of any dev-related cruft and also keeps my dev environments totally isolated from one another, yet I would like to start pair-programming with my coworkers via tmux, thus the reason why I've asked this question. I would like to accomplish all of this without setting up an additional user account on the MBP, or giving my coworkers access to my local user account on the MBP to get to my Vagrant VM, if that's at all possible.
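
    A hedged sketch of the Vagrant side of this (assuming a Vagrantfile in the current v2 syntax and that exposing the guest on the office LAN is acceptable) is to either forward the guest's SSH port on an address coworkers can reach, or bridge the guest straight onto the LAN; box name and port are hypothetical:

        Vagrant.configure("2") do |config|
          config.vm.box = "precise64"
          # option 1: expose guest SSH on the MBP's LAN address (coworkers then run: ssh -p 2299 vagrant@mbp-hostname)
          config.vm.network "forwarded_port", guest: 22, host: 2299, host_ip: "0.0.0.0"
          # option 2: put the guest directly on the office network instead (bridged)
          # config.vm.network "public_network"
        end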
