Search Results


  • Sticky connection and HTTPS support for HAProxy

    - by Saif
    We have 2 HTTP load balancers with HAProxy and heartbeat. There are 4 Apache nodes in this cluster, and it is doing round-robin load balancing. The HTTP cluster is working fine. We are having a problem with our portal because it uses SSO, so we need sticky-connection support in our HAProxy. We also need load balancing for HTTPS traffic. Here's our HAProxy conf file:

      global
          # to have these messages end up in /var/log/haproxy.log you will
          # need to:
          #
          # 1) configure syslog to accept network log events. This is done
          #    by adding the '-r' option to the SYSLOGD_OPTIONS in
          #    /etc/sysconfig/syslog
          #
          # 2) configure local2 events to go to the /var/log/haproxy.log
          #    file. A line like the following can be added to
          #    /etc/sysconfig/syslog
          #
          #    local2.*    /var/log/haproxy.log
          #
          log         127.0.0.1 local0
          log         127.0.0.1 local1 notice
          chroot      /var/lib/haproxy
          pidfile     /var/run/haproxy.pid
          maxconn     4000
          user        haproxy
          group       haproxy
          daemon
          # turn on stats unix socket
          stats socket /var/lib/haproxy/stats

      #---------------------------------------------------------------------
      # common defaults that all the 'listen' and 'backend' sections will
      # use if not designated in their block
      #---------------------------------------------------------------------
      defaults
          mode                    http
          log                     global
          option                  httplog
          option                  dontlognull
          option                  http-server-close
          option                  forwardfor except 127.0.0.0/8
          option                  redispatch
          retries                 3
          timeout http-request    10s
          timeout queue           1m
          timeout connect         10s
          timeout client          1m
          timeout server          1m
          timeout http-keep-alive 10s
          timeout check           10s
          maxconn                 3000

      #---------------------------------------------------------------------
      # main frontend which proxys to the backends
      #---------------------------------------------------------------------
      frontend main *:5000
          acl url_static path_beg -i /static /images /javascript /stylesheets
          acl url_static path_end -i .jpg .gif .png .css .js
          use_backend static if url_static
          default_backend app

      #---------------------------------------------------------------------
      # static backend for serving up images, stylesheets and such
      #---------------------------------------------------------------------
      backend static
          balance roundrobin
          server static 127.0.0.1:4331 check

      #---------------------------------------------------------------------
      # round robin balancing between the various backends
      #---------------------------------------------------------------------
      backend app

      listen ha-http 10.190.1.28:80
          mode http
          stats enable
          stats auth admin:xxxxxx
          balance roundrobin
          cookie JSESSIONID prefix
          option httpclose
          option forwardfor
          option httpchk HEAD /haproxy.txt HTTP/1.0
          server apache1 portal-04:80 cookie A check
          server apache2 im-01:80 cookie B check
          server apache3 im-02:80 cookie B check
          server apache4 im-03:80 cookie B check

    Please advise. Thanks for your help in advance.
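
    A minimal sketch of one way the stickiness and HTTPS parts could look (the addresses, hostnames and health-check file are carried over from the config above or invented for illustration; HAProxy of this vintage does not terminate SSL itself, so the 443 listener simply balances the encrypted stream with source-IP persistence):

      # hypothetical sketch only -- adjust addresses/hostnames to the real setup
      listen ha-http 10.190.1.28:80
          mode http
          balance roundrobin
          cookie SERVERID insert indirect nocache      # HAProxy-managed sticky cookie
          option httpchk HEAD /haproxy.txt HTTP/1.0
          server apache1 portal-04:80 cookie s1 check
          server apache2 im-01:80 cookie s2 check

      listen ha-https 10.190.1.28:443
          mode tcp                                     # pass TLS straight through, no termination
          balance source                               # stick a client to one backend by source IP
          option ssl-hello-chk                         # health check with an SSL hello
          server apache1 portal-04:443 check
          server apache2 im-01:443 check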

  • Bind9 DNS help with pseudo domains

    - by Tempname
    I have set up a DNS server on my home network to manage some apps that I have written for home. Currently I have 3 "domains" that I am using: controller, devserver, fileserver. The first issue I am having is that when I attempt to ping the parent domain of any of these 3, I am unable to; I simply get ping: unknown host controller. I can, however, ping any of the subdomains I have set up for these 3 parent domains. The second issue is that I am unable to ping any of the 3 parent domains, or any child domains, from my Windows machines. I have verified that these domains work on other devices in my house (iPod touch, iPad, cell phone). Any help with this is greatly appreciated. Here is the BIND data file for my parent domain controller:

      ;
      ; BIND data file for local loopback interface
      ;
      $TTL    604800
      @       IN      SOA     controller. admin.controller. (
                              9          ; Serial
                              604800     ; Refresh
                              86400      ; Retry
                              2419200    ; Expire
                              604800 )   ; Negative cache TTL
      ;
      @                  IN      NS      controller.
      @                  IN      A       192.168.1.104
      controller         IN      A       192.168.1.194
      admin.controller.  IN      A       192.168.1.104
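
    As a point of comparison, here is a minimal sketch of the zone declaration a single-label zone like this usually needs in named.conf.local; the file path and the allow-query network are assumptions, not taken from the post:

      // hypothetical /etc/bind/named.conf.local entry
      zone "controller" {
          type master;
          file "/etc/bind/db.controller";    // the data file shown above
          allow-query { 192.168.1.0/24; };   // assumed home LAN range
      };

    After edits, running named-checkzone controller /etc/bind/db.controller should flag syntax problems. Single-label names like "controller" also tend to need a matching search/domain entry in each client's resolver settings; Windows in particular is reluctant to resolve unqualified single-label names.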

  • Not Able To Connect to Windows Server 2008 R2 using FileZilla Externally

    - by obautista
    I configured the FTP service/role on my Windows Server 2008 R2 machine. I am able to connect from the inside, but not from the outside. On the inside I tested using the cmd prompt and IE FTP. On the outside, I am testing with FileZilla and IE FTP. From the outside, IE FTP prompts me to enter my username/password, but nothing happens; the page eventually times out and I get "Internet Explorer cannot display the page". Using FileZilla, I get the messages below. Note that FileZilla resolves the domain name and authenticates. I did not configure FTP Firewall Support on the FTP site; I am not sure if I need to do this. I set up basic authentication, non-SSL, not allowing anonymous. I tested with Windows Firewall turned off and on (I added a Windows Firewall rule for port 21). On my network firewall (Cisco), I added a rule to forward port 21 traffic to the FTP server.

      Status:   Resolving address of ftp.technologyblends.com
      Status:   Connecting to 75.149.66.201:21...
      Status:   Connection established, waiting for welcome message...
      Response: 220 Microsoft FTP Service
      Command:  USER *
      Response: 331 Password required for .
      Command:  PASS ***
      Response: 230 User logged in.
      Command:  SYST
      Response: 215 Windows_NT
      Command:  FEAT
      Response: 211-Extended features supported:
      Response:  LANG EN*
      Response:  UTF8
      Response:  AUTH TLS;TLS-C;SSL;TLS-P;
      Response:  PBSZ
      Response:  PROT C;P;
      Response:  CCC
      Response:  HOST
      Response:  SIZE
      Response:  MDTM
      Response:  REST STREAM
      Response: 211 END
      Command:  OPTS UTF8 ON
      Response: 200 OPTS UTF8 command successful - UTF8 encoding now ON.
      Status:   Connected
      Status:   Retrieving directory listing...
      Command:  PWD
      Response: 257 "/" is current directory.
      Command:  TYPE I
      Response: 200 Type set to I.
      Command:  PASV
      Error:    Connection timed out
      Error:    Failed to retrieve directory listing
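
    The transfer dies at PASV, which points at the data channel rather than the control channel on port 21. As a hedged sketch (the 5000-5100 range is an arbitrary assumption, not something from the post), the usual shape of the fix is to give the FTP site a fixed data-channel port range and the external IP under FTP Firewall Support in IIS Manager, then open that same range on Windows Firewall and forward it on the Cisco:

      :: hypothetical commands -- the 5000-5100 range is an assumption
      netsh advfirewall firewall add rule name="FTP passive data" dir=in action=allow protocol=TCP localport=5000-5100
      :: restart the FTP service so data-channel settings take effect
      net stop ftpsvc && net start ftpsvc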

  • Azure: can't ping or telnet to VM from client

    - by Raif
    I have a VM on Azure with an instance of SQL Server 2012 running on it. From my work computer and my home computer, I can't get SQL Server Management Studio to connect to it. I have looked at ALL the settings recommended in numerous articles, and everything is set up correctly: endpoint 1433, private and public; SQL Server TCP enabled; SQL Server TCP listening on the right port; SQL Server using mixed auth; Windows Firewall with holes poked and then disabled on both client and VM; and I can log in from the VM using the credentials that I'm trying to use remotely. Furthermore, I can't ping or telnet to the DNS name or IP address from my local machines. I can, however, hit IIS from a browser using the IP. Strange. CS asked me to download MS Network Monitor, which I did, and I pinged and telneted. I have the results saved but can't really make heads or tails of them. CS hasn't responded yet. I can post some info here that would help. EDIT: Never one to shrink from a challenge, I deleted my VM and re-did everything. Now it works, although my confidence in Azure is somewhat shaken.

  • What's the risk of running a Domain Controller so that it is accessible from the internet?

    - by Adrian Grigore
    I have three remote dedicated web servers at different web hosts. Adding them to a common domain would make a lot of administration tasks much easier. Since two of the servers are running Windows 2008 R2 Standard, I thought about promoting them to domain controllers in order to set up the Windows domain. There's another thread at Server Fault that recommends this. At the same time, I've read many times on different websites that this is not a good idea, because a domain controller should always be behind a firewall on a LAN. But I can't set up something like that, because I don't have a LAN with a static IP accessible from the internet; in fact, I don't even have a Windows server in my LAN. What I have not found out is why exposing a DC to the internet would be a bad idea. The only risk I can see is that if someone penetrates one of my web servers, it would be much easier to penetrate the others as well. But as far as I can see that's the worst-case scenario, since I am only joining my web servers to that domain, not any computers from my local network. Is this the only downside, or does it also make it easier to penetrate one of my web servers in the first place? Thanks, Adrian

  • MAC addresses on dual-NIC mainboards

    - by Tom O'Connor
    Here's a weird problem. We've got a number of devices with dual-NIC mainboards. Some are Realtek NICs, which suck; some are Intel e1000s, which don't. I've just noticed on 2 machines (one with an Intel NIC, one with a Realtek) that when I put the MAC address of one machine into the dhcpd.conf file on our DHCP server, to get it to PXE boot the machine into a rebuild environment, initially everything is fine: the server gets a DHCP allocation and PXE boots into the Ubuntu preseed environment. On one or two machines, it gets as far as Ubuntu's DHCP network configuration, and fails. If I pull up a busybox shell (on tty2 on the installing machine) and run ip link, I can see that the UP flag is set on the other NIC. Here's the dhcpd.conf entry:

      host xeon16-ghz240-gb48-node1 {
          hardware ethernet BC:AE:C5:07:1F:18;
          filename "pxelinux.0";
          next-server 192.168.123.80;
      }

    This is what ip link on the evil machine looks like: only one NIC is actually connected (deliberately), and as you can see, the NIC that's in the dhcpd config is not marked as UP, while the link that is UP isn't the one in DHCP. So far I've seen this on two brands of dual-NIC configuration. Does anyone know a) what's causing it, and b) what we can do about it?
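
    One hedged workaround sketch is to register both onboard MACs for the host, so DHCP/PXE answers whichever interface the firmware or the installer happens to bring up; the second hardware address below is invented for illustration, not the machine's real second MAC:

      # hypothetical dhcpd.conf sketch -- the second MAC address is made up
      host xeon16-ghz240-gb48-node1-nic0 {
          hardware ethernet BC:AE:C5:07:1F:18;   # NIC quoted in the post
          filename "pxelinux.0";
          next-server 192.168.123.80;
      }
      host xeon16-ghz240-gb48-node1-nic1 {
          hardware ethernet BC:AE:C5:07:1F:19;   # assumed address of the other onboard NIC
          filename "pxelinux.0";
          next-server 192.168.123.80;
      }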

  • Why does my Belkin wireless router have eMule ports open?

    - by Jeremy Powell
    I have a Belkin F6D4230-4 v1 router. When I port scan it with nmap I get the following:

      $ sudo nmap -sS -A -T5 192.168.2.1 -p-

      Starting Nmap 5.00 ( http://nmap.org ) at 2010-04-17 11:40 CDT
      Interesting ports on 192.168.2.1:
      Not shown: 65532 closed ports
      PORT     STATE    SERVICE VERSION
      80/tcp   open     http    Belkin 2307 wifi router http config (IP_SHARER httpd 1.0)
      |_ html-title: '+i1+'
      4661/tcp filtered unknown
      4662/tcp filtered edonkey
      MAC Address: 00:22:75:5D:52:D8 (Belkin International)
      Device type: WAP|broadband router|firewall|printer|specialized|webcam
      Running (JUST GUESSING) : Linksys embedded (95%), TRENDnet embedded (95%), Netgear embedded (92%), Canon embedded (89%), On Time RTOS (89%), Symantec embedded (89%), D-Link embedded (86%), Polycom embedded (85%)
      Aggressive OS guesses: Linksys WRT54GC or TRENDnet TEW-431BRP wireless broadband router (95%), TRENDnet TW100-BRF114 broadband router (95%), Netgear FR114P ProSafe VPN firewall (92%), Canon PIXMA MX850 printer (89%), On Time RTOS (89%), Symantec Firewall/VPN 100 (89%), D-Link DI-714P+ wireless broadband router (86%), Polycom ViewStation video conferencing system (85%)
      No exact OS matches for host (test conditions non-ideal).
      Network Distance: 1 hop
      Service Info: Device: WAP
      OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
      Nmap done: 1 IP address (1 host up) scanned in 21.57 seconds

    Why are the 4661 and 4662 ports open? This is a basic, out-of-the-box installation.

  • Port forwarding + shared connection with Ubuntu

    - by Joey Adams
    Because my wireless router's ethernet ports are defective, I set up a shared wireless connection from my laptop (which has wifi) to my eMac (which does not) via a crossover ethernet cable. The laptop is behind a router as 192.168.1.131, and the eMac is behind the laptop as 10.42.43.1. The laptop is running Ubuntu 9.10 (Karmic). I achieved the shared connection through NetworkManager Applet: I right-clicked on the network icon at the top right, went to Edit Connections, selected the wired connection named "Auto eth0", clicked "Edit...", went to the "IPv4 Settings" tab, and selected the method "Shared to other computers". The eMac can now access the Internet. Now I want to enable port forwarding. There's a game I want to play that needs port 6112 forwarded (both TCP and UDP) in order to host games. I set up the router to forward the port to 192.168.1.131 (the laptop), but port forwarding still isn't available on the eMac. I suppose I need to pretend my laptop is a router and configure port forwarding on it, indicating that incoming connections to the laptop (192.168.1.131) should be forwarded to the eMac on the shared connection (10.42.43.1). Thus, packets coming into the router on port 6112 would be redirected to the laptop (by the router), then to the eMac (by the laptop). My question is, how would I do that on Ubuntu (in light of NetworkManager's presence)? Also, if I can't get this to work, does anyone mind hosting a comp stomp? :D
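
    A minimal iptables sketch of that second hop, assuming the wifi uplink is wlan0 (the interface name is a guess, so check ip addr first) and that NetworkManager's own shared-connection NAT rules are left in place:

      # hypothetical sketch -- wlan0 is the assumed uplink interface
      sudo iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 6112 -j DNAT --to-destination 10.42.43.1
      sudo iptables -t nat -A PREROUTING -i wlan0 -p udp --dport 6112 -j DNAT --to-destination 10.42.43.1
      # allow the forwarded packets through
      sudo iptables -A FORWARD -d 10.42.43.1 -p tcp --dport 6112 -j ACCEPT
      sudo iptables -A FORWARD -d 10.42.43.1 -p udp --dport 6112 -j ACCEPT

    These rules do not survive a reboot; something like iptables-save or a NetworkManager dispatcher script would be needed to make them permanent.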

  • WebDAV through Apache2 permissions/missing files

    - by Strifariz
    I have a WebDAV setup on Apache2 on a server running Debian 5.0 (Lenny), which I am accessing through a mapped network drive under Windows 7. The setup appears to run fine; I receive no permission errors when copying a file to the share the first time, but the file never shows up in the directory (it's invisible; doing an ls -lha on the directory as root on the server also shows no files). When attempting to copy the file once more, I am informed that the file already exists and asked if I wish to overwrite it; when selecting "Yes", I receive a permission error saying I'm not able to write to the folder. My logs aren't reporting any access violations of any kind. What could be the problem? (See the log excerpt below.)

      [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 201 304 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:34 +0100] "LOCK /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:34 +0100] "LOCK /1.png HTTP/1.1" 200 447 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:34 +0100] "PROPPATCH /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:34 +0100] "PROPPATCH /1.png HTTP/1.1" 207 389 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:34 +0100] "HEAD /1.png HTTP/1.1" 401 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:34 +0100] "HEAD /1.png HTTP/1.1" 200 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:34 +0100] "PUT /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:35 +0100] "PUT /1.png HTTP/1.1" 204 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:35 +0100] "PROPPATCH /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:35 +0100] "PROPPATCH /1.png HTTP/1.1" 207 389 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:35 +0100] "UNLOCK /1.png HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:35 +0100] "UNLOCK /1.png HTTP/1.1" 204 - "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:38 +0100] "PROPFIND / HTTP/1.1" 401 525 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
      [17/Jan/2011:10:26:38 +0100] "PROPFIND / HTTP/1.1" 207 1634 "-" "Microsoft-WebDAV-MiniRedir/6.1.7600"
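
    For comparison, a minimal sketch of the shape an Apache2 DAV share typically takes on Lenny; the paths and the password file are assumptions for illustration, and the main things to verify are that both the DAV directory and the DavLockDB location are writable by the Apache user (www-data):

      # hypothetical /etc/apache2/conf.d/webdav sketch -- paths are assumptions
      DavLockDB /var/lock/apache2/DavLock
      <Directory /var/www/webdav>
          Dav On
          AuthType Basic
          AuthName "WebDAV share"
          AuthUserFile /etc/apache2/webdav.passwd    # created with htpasswd
          Require valid-user
      </Directory>
      # on the server, as a sanity check:
      #   sudo chown -R www-data:www-data /var/www/webdav /var/lock/apache2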

  • Firebird 2.5 Database Corrupt

    - by BrendanH
    We have an issue where a database hangs the server when: a backup is performed (it hangs on a specific table); selecting * or count(1) from that table; or viewing data that is related to the table (FKs, etc.). We can browse the table to a certain point (using IBExpert), however after about 2900 records the machine just spikes and hangs. Performing a gfix -m does not work, and the validation reports back "Record level errors = 4" no matter how many times we run gfix -m, -v, etc. The firebird.log file reports these types of messages:

      Relation has 91631 orphan backversions (9214273 in use) in table BINS (137)   {which is apparently just a warning}
      Unable to complete network request to host "MHPLZA1". Error reading data from the connection.
      INET/inet_error: read errno = 10054
      SERVER/process_packet: broken port, server exiting
      Shutting down the server with 1 active connection(s) to 1 database(s), 0 active service(s)   {logged if we leave the backup to run while hanging}

    The setup: the table in question has about 7000 records; the Firebird version is 2.5 Classic Server x64; the OS is Windows Server 2008; and this is a virtual machine (VMware) running on a massive server. (Does anyone have issues with VMs and Firebird?) We have the same setup running fine on other servers (however, they are not virtual machines). Is there any way to pinpoint the issue and/or the cause?
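
    As a hedged sketch of the usual mend-and-rebuild cycle for this kind of corruption (run it against a copy of the database file, never the live one, with your own SYSDBA credentials and file names substituted):

      rem hypothetical repair sketch -- file names and password are placeholders
      gfix -v -full copy_of_db.fdb -user SYSDBA -password masterkey
      gfix -mend -full -ignore copy_of_db.fdb -user SYSDBA -password masterkey
      rem back up while skipping garbage collection and ignoring checksum errors
      gbak -b -g -ig copy_of_db.fdb rebuilt.fbk -user SYSDBA -password masterkey
      rem restore into a brand new database file
      gbak -c rebuilt.fbk rebuilt.fdb -user SYSDBA -password masterkey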

  • VMware Player 3.0 - cannot ping 32-bit guest from 64-bit (guest or host)

    - by npmj
    I'm stuck with what seems to be a bug in VMware Player (build 203739). I'm using W7 Ultimate 64-bit as the host, and have a CentOS 5.4 (64-bit) guest and a Windows XP Professional SP3 (32-bit) guest. From the 64-bit machines (the host and the Linux guest) I cannot ping the Windows XP guest. Of course, I already turned off the Windows firewall in the guest and also in the host. The network is pretty basic: I'm using VMnet8 (NAT), with DHCP and port forwarding (to the Windows XP guest's IP). Everything is working OK; I have internet access from the host and from both guests, and port forwarding to the XP guest is working too. The only problem is that I cannot access the XP guest through VMnet8. I monitored the traffic using Wireshark (on the host and on the Windows guest). If I try to ping the XP guest from the host, what I see is the ARP request leaving the host, being answered by the guest, and after that there is no echo request leaving the host. The same occurs if I try to ping the XP guest from the CentOS guest. From the Windows XP guest I can ping both the host and the CentOS guest, and from the XP guest I can access the host's shares. Obviously, from the host I cannot see the XP shares (as I cannot even ping the guest). I want to maintain this setup (using NAT to share the host's internet connection). Any suggestions?

  • Internal but no external Citrix Access?

    - by leeand00
    We recently had to reload our configuration of Citrix on our server Server1, and since then we can access Citrix internally, but not externally. Normally we access Citrix from http://remote.xyz.org/Citrix/XenApp, but since the configuration was reloaded we are met with a Service Unavailable message. Internally, accessing the Citrix web application from http://localhost/Citrix/XenApp/ on Server1 works, and it also works from machines on our local network using http://Server1/Citrix/XenApp/. I have gone into the Citrix Access Management Console and, in the tree pane on the left, clicked on Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/PNAgent and on Citrix Access Management Console->Citrix Resources->Configuration Tools->Web Interface->http://remote.xyz.org/Citrix/XenApp, which in both cases displays a screen that reads "Secure client access". Here it offers me several options: Direct, Alternate, Translated, Gateway Direct, Gateway Alternate, Gateway Translated. I know that I can change the method in use by clicking Manage secure client access->Edit secure client access settings, which opens a window that reads "Specify Access Methods", and below that, "Specify details of the DMZ settings, including IP address, mask, and associated access method". I don't know what the original settings were, and I also don't know how our DMZ is configured, so I can't specify the correct settings to give our external users access to the http://remote.xyz.org/Citrix/XenApp site. We have a vendor who set up our DMZ and does not allow us access to the gateway to see these settings. What sorts of questions should I ask them to restore remote access?

  • Localhost problems on Mac OS X 10.7

    - by Maya
    Sorry for the duplicate post (http://stackoverflow.com/questions/9720871/localhost-problems-on-mac-os-x-10-7), but I was advised that this is a better place to ask my question. I want to access a MySQL server remotely over ssh, so I used port forwarding to access the remote port 3306 on my localhost as 8383. The ssh connection can be established successfully, but when I try to telnet to port 8383 on localhost I get the following error:

      ~: telnet 127.0.0.1 8383
      Trying 127.0.0.1...
      telnet: connect to address 127.0.0.1: Connection refused
      telnet: Unable to connect to remote host

    I tried the same on a friend's laptop (also Mac OS X 10.7) and it worked fine, so it is very unlikely that the ssh connection is the problem. I assume it has something to do with my local network configuration. I turned off IPv6, just in case. My /etc/hosts looks like this:

      127.0.0.1       localhost
      255.255.255.255 broadcasthost
      ::1             localhost
      fe80::1%lo0     localhost

    I would greatly appreciate any help. Please point me in the right direction if this is not the right place to ask this question.
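
    A sketch of what the forwarding plus a quick listener check could look like; the hostname and username are placeholders, and a "Connection refused" on the loopback usually just means nothing ended up listening on that port:

      # hypothetical example -- user@remote.example.com is a placeholder
      ssh -v -N -L 8383:127.0.0.1:3306 user@remote.example.com

      # in a second terminal: is anything actually listening on 8383?
      lsof -nP -iTCP:8383 -sTCP:LISTEN

    The -v output should show whether ssh managed to bind 127.0.0.1:8383 locally or reported a bind error instead.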

  • Cannot join additional domain controllers

    - by Hosm
    Hi all, I had a dead PDC and another, not fully synced, domain controller for my domain. Using the comments in this link, the so-called secondary domain controller has now seized the domain roles, and I can verify from dsa.msc that it is a domain controller. I set up another server (Windows 2003) and am about to promote it to a domain controller for my domain. When I try to join the new domain controller to the domain, I face a DNS problem. Here is some more detail:

      DNS was successfully queried for the service location (SRV) resource record used to locate a domain controller for domain DOMNAME.A.B:
      The query was for the SRV record for _ldap._tcp.dc._msdcs.DOMNAME.A.B
      The following domain controllers were identified by the query:
      update.DOMNAME.A.B
      Common causes of this error include:
      - Host (A) records that map the name of the domain controller to its IP addresses are missing or contain incorrect addresses.
      - Domain controllers registered in DNS are not connected to the network or are not running.
      For information about correcting this problem, click Help.

    It is worth noting that update.DOMNAME.A.B is the current domain controller, to which I'd like to add another controller named pdc.DOMNAME.A.B. The IP address of update.DOMNAME.A.B is 192.168.200.1 and that of pdc.DOMNAME.A.B is 192.168.200.100. Querying DNS on both machines returns correct results. Any idea?
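
    A couple of checks that usually narrow this down, sketched from the names in the error text (DOMNAME.A.B and update are taken from the post; run the nslookup queries from the server being promoted, and the rest on the existing DC):

      :: does the SRV record resolve, and does the host behind it resolve?
      nslookup -type=SRV _ldap._tcp.dc._msdcs.DOMNAME.A.B
      nslookup update.DOMNAME.A.B

      :: on the existing DC: re-register its DNS records and recheck DNS health
      ipconfig /registerdns
      net stop netlogon && net start netlogon
      dcdiag /test:dns

    dcdiag ships with the Windows Server 2003 Support Tools, so it may need to be installed first.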

  • Cachefilesd (cachefiles): everything seems to be set up, still not working

    - by Evgenius
    I'm trying to set up cachefilesd to work with my network folder shared using NFS. I seemingly have everything set up, and cachefilesd starts normally, however caching isn't functioning. Here is the output of the commands, run in this order:

      1. sudo mount
         ...
         cache-1:/mnt/datashared on /mnt/nfsshare type nfs (rw,sync,ac,acregmin=3,acregmax=60,acdirmin=30,acdirmax=300,lookupcache=pos,vers=3,fsc)
         ...
      2. lsmod | grep cachefiles
         cachefiles             40555  1
         fscache                57430  4 nfs,cifs,cachefiles,nfsv4
      3. [edited - deleted]
      4. uname -r
         3.8.0-34-generic
      5. grep CONFIG_NFS_FSCACHE /boot/config-3.8.0-34-generic
         CONFIG_NFS_FSCACHE=y
      6. lsb_release -a
         No LSB modules are available.
         Distributor ID: Ubuntu
         Description:    Ubuntu 13.04
         Release:        13.04
         Codename:       raring
      7. sudo service cachefilesd restart
         * Restarting FilesCache daemon cachefilesd    [ OK ]
      8. dmesg
         [6211206.141781] FS-Cache: Withdrawing cache "mycache"
         [6211210.135236] FS-Cache: Cache "mycache" added (type cachefiles)
         [6211210.135242] CacheFiles: File cache on sdb1 registered
         [6214644.348929] CacheFiles: File cache on sdb1 unregistering
         [6214644.348935] FS-Cache: Withdrawing cache "mycache"
         [6214654.575909] FS-Cache: Cache "mycache" added (type cachefiles)
         [6214654.575915] CacheFiles: File cache on sdb1 registered
      9. ps aux | grep cachefilesd
         root   65399  0.0  0.0  4460  540 ?      Ss  23:14  0:00 /sbin/cachefilesd
         1000   65464  0.0  0.0  8160  916 pts/0  S+  23:16  0:00 grep --color=auto cachefilesd

    Finally, the biggest problem is:

      10. cat /proc/fs/nfsfs/volumes
          NV SERVER   PORT DEV  FSID             FSC
          v3 64476645  801 0:24 233e020f0da07a93 no

    tl;dr: I think I configured everything properly, but the fsc mount option plus cachefilesd don't seem to work.

  • ISA Server 2006 SSL Certificate Dilemma

    - by JohnyD
    I'm making some great headway in offering our services over https with help from a Go Daddy certificate, later to be upgraded to Thawte SSL123 certs. But I've just run into one whopper of a problem. Here's my setup: I run an ISA 2006 firewall. Our web services are distributed over 2 servers; one is Windows 2000 (www.domain.com) and the other is Windows 2003 (services.domain.com). So I'll need to purchase 2 certs, for both www and services, import them into IIS6 on their respective machines, then export them with the private key (making sure to "Include all certificates in the certification path if possible"... that had me stumped for a while), and then finally import them into ISA's local computer Personal store. The problem I've just run into is that I have separate firewall rules for services.domain.com and www.domain.com, because requests need to be forwarded to different web servers, yet each of these firewall rules uses the same HTTP listener. I have just found out that you can only use 1 certificate per listener. To make matters worse, you can only have a single listener per IP/port. Is this correct? Can I really only use a single certificate for a single IP address? This would seem to be a severe limitation. Am I wrong? If I'm not, then I've got a whole lot more work ahead of me, as I'll have to set up extra IPs, add them to the firewall's network interface, create new listeners using those IPs, etc... Can someone please confirm whether I'm doing this correctly or incorrectly? Once I got my head wrapped around it all it seemed easy... then this. Thanks in advance.

  • Unable to access Windows share

    - by mbnoimi
    I've installed Alfresco 4.2.d under Ubuntu 12.04 LTS. Everything went fine, except that I can't access it from a Windows share, although I got the link from Alfresco Explorer, which is:

      file:///%5C%5CECSA%5CAlfresco%5CSites%5Cswsdp%5CdocumentLibrary%5CAgency%20Files%5CImages%5Ccoins.JPG

    I tried to access it from \\ECSA but that failed too, so I did a ping (192.168.0.70 is the server IP):

      C:\Users\user>ping 192.168.0.70
      Pinging 192.168.0.70 with 32 bytes of data:
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Reply from 192.168.0.70: bytes=32 time<1ms TTL=64
      Ping statistics for 192.168.0.70:
          Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
      Approximate round trip times in milli-seconds:
          Minimum = 0ms, Maximum = 0ms, Average = 0ms

      C:\Users\user>ping ECSA
      Ping request could not find host ECSA. Please check the name and try again.

    Some logs of what's going on:

      C:\Users\user>net view ECSA
      System error 1707 has occurred.
      The network address is invalid.

      C:\Users\user>nbtstat -a 192.168.0.70
      Local Area Connection:
      Node IpAddress: [192.168.0.84] Scope Id: []
          NetBIOS Remote Machine Name Table
          Name            Type         Status
          ---------------------------------------------
          ECSA       <20>  UNIQUE      Registered
          ECSA       <00>  UNIQUE      Registered
          WORKGROUP  <00>  GROUP       Registered
          MAC Address = 00-00-00-00-00-00

    CIFS server configuration in file-servers.properties:

      ### CIFS Server Configuration - file-servers.properties ###
      cifs.enabled=true
      cifs.serverName=${localname}A
      cifs.domain=
      cifs.broadcast=255.255.255.255
      cifs.bindto=192.168.0.70
      cifs.ipv6.enabled=false
      cifs.hostannounce=true
      cifs.disableNIO=false
      cifs.disableNativeCode=false
      cifs.sessionTimeout=900
      cifs.maximumVirtualCircuitsPerSession=16
      cifs.tcpipSMB.port=445
      cifs.netBIOSSMB.sessionPort=139
      cifs.netBIOSSMB.namePort=137
      cifs.netBIOSSMB.datagramPort=138
      cifs.WINS.autoDetectEnabled=true
      cifs.WINS.primary=192.168.0.70
      cifs.WINS.secondary=192.168.0.1
      cifs.sessionDebug=
      cifs.pseudoFiles.enabled=true
      cifs.pseudoFiles.explorerURL.enabled=true
      cifs.pseudoFiles.explorerURL.fileName=__Alfresco.url
      cifs.pseudoFiles.shareURL.enabled=false
      cifs.pseudoFiles.shareURL.fileName=__Share.url

    How can I fix this issue?

  • Virtualbox HTTP load testing, host CPU overload issues

    - by aschuler
    I'm doing HTTP load testing benchmarks (using Apache Benchmark and Siege) on a small Java 1.7.0 / Tomcat 7.0.26 application running on Debian Squeeze 6.0.4 x64, virtualized with VirtualBox 4.1.8. The host is Ubuntu 11.10 x64. I've modified these parameters in the Tomcat server.xml:

      <Connector port="8080" protocol="HTTP/1.1"
                 connectionTimeout="200000"
                 redirectPort="8443"
                 acceptCount="2000"
                 maxThreads="150"
                 minSpareThreads="50" />

    The application executed on the server takes around 300 ms. This app runs well until a certain number of concurrent connections, such as:

      ab -n 500 -c 150 http://xx.xx.xx.xx:8080/myapp/
      ab -n 1000 -c 50 http://xx.xx.xx.xx:8080/myapp/
      siege -b -c 100 -r 20 http://xx.xx.xx.xx:8080/myapp/

    A lot of "socket connection timed out" errors happen, and this completely overloads the host processor (but the CPU load inside the VM is normal). Doing an htop on the host, I can see that the VirtualBox process is running at over 300% CPU and never comes down, even after the load test is finished. (I've allocated 4 processors to the VM; if I allocate only one processor, CPU load stays under 100%.) Restarting Tomcat doesn't do anything; I'm forced to restart the whole VM. I've tried launching those ab/siege commands locally on the VM and everything goes well. I first thought it was related to a Linux network limit, as explained here: "Running some benchmarks using ab, and tomcat starts to really slow down". So I've modified these TCP parameters:

      echo 15 > /proc/sys/net/ipv4/tcp_fin_timeout
      echo 30 > /proc/sys/net/ipv4/tcp_keepalive_intvl
      echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle
      echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse

    It seems to be better, but it continues to overload the host CPU and produce socket connection timeouts at a certain number of concurrent connections. I'm wondering whether this is related to how VirtualBox handles external concurrent connections.
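
    If those /proc tweaks help, here is a sketch of making them persistent via sysctl; note that tcp_tw_recycle in particular is known to break clients behind NAT, so it is best treated as a benchmark-only experiment:

      # hypothetical /etc/sysctl.d/60-benchmark.conf -- the file name is arbitrary
      net.ipv4.tcp_fin_timeout = 15
      net.ipv4.tcp_keepalive_intvl = 30
      net.ipv4.tcp_tw_reuse = 1
      net.ipv4.tcp_tw_recycle = 1   # risky behind NAT; benchmarking only
      # apply without a reboot:
      #   sudo sysctl -p /etc/sysctl.d/60-benchmark.conf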

  • Cisco SG200 vlan issue in ESXi VSA cluster

    - by George
    I have three Cisco SG200-26 switches, and I also have two ESXi hosts that I have connected as shown in the "best practice" map by VMware: http://communities.vmware.com/servlet/JiveServlet/previewBody/17393-102-1-22458/VSA_networking_map.pdf. Even though I created the VLANs on the SG200 and set the two VLANs (508 and 608) as allowed for these untagged ports (where my ESX NICs are connected), I cannot ping from host 1 to host 2 when configuring the NICs to use VLAN 608. Am I missing something? My IPs are all in the 192.168. range, and the only reason I need the VLANs is to isolate the traffic of the VSA back end internally; only the two hosts will be using the VLANs. So I think I do not have to create virtual interfaces on my router, since that's the case. Is my understanding correct? I'm also sending my switch config screenshot below; all 3 switches have the latest firmware (it seems these were originally Linksys and got rebranded as Cisco after the acquisition): http://img31.imageshack.us/img31/2503/switch.gif. Any ideas what to change on the Cisco SG200 to make this work would be appreciated! The second VLAN (608) only needs two IPs: 192.168.0.1 and 192.168.0.2. The first VLAN (508) will have about 15 IPs for ESXi management and the VSA cluster service; I could use either 192.168.1.xx or 10.0.1.xx. The rest of my network (about 50 clients) is in the 192.168.1.xx range. VMware also states that the VLAN protocol on the physical switch must be 802.1Q, not ISL; does anyone know which of the two my SG200-26 uses? In addition to that, the only requirements from VSA are that my two hosts: are in the same subnet; have static IP addresses set; and have the same default gateway configured. If I need inter-VLAN routing for this, I suppose I have to create virtual interfaces on my SonicWall, assign an IP for each VLAN, and then set routes between them? Thank you for your time!

  • Can one config LDAP to accept auth from ssh-agent instead of from Kerberos?

    - by Alex North-Keys
    [This question is not about getting your LDAP password to authenticate you for SSH logins. We have that working just fine, thank you :-) ] Let's suppose you're on a Linux network (Ubuntu 11.10, slapd 2.4.23), and you need to write a set of utilities that will use ldapmodify, ldapadd, ldapdelete, and so on. You don't have Kerberos, and don't want to deal with its timeouts (most users don't know how to get around this), quirks, etc. This resolves the question to one of where else to get credentials to feed to LDAP, probably through GSSAPI - which technically doesn't require Kerberos despite its dominance there - or something like it. However, nearly everyone seems to have an SSH agent program, complete with its key cache. I'd really like an ssh-add to be sufficient to allow passwordless LDAP command use. Does anyone know of a project working on using the SSH agent as the source of authentication to LDAP? It might be through an ssh-aware GSSAPI layer, or some other trick I haven't thought of. But it would be wonderful for making LDAP effortless. Assuming I haven't just utterly missed a way to use ldapmodify and kin without having to type my LDAP passwords - using -x is NOT acceptable. At my site, the LDAP server only accepts ldaps connections, and requires authentication for modifying operations. Those are requirements, of course. Any ideas would be greatly appreciated. :-)

  • Multiple VLANs accessing a shared PBX system

    - by Matt
    I'm new to networking and was looking for some assistance. First off, I'm using Packet Tracer to diagram my scenario, as I will be receiving my equipment next week to deploy. Hardware to be used: 2 Catalyst 3560 switches, which all connect to a SonicWall router. I have two companies that work in the same office space. I need to keep these companies separate, on their own VLANs. They will, however, need to share the phone system. (Packet Tracer file uploaded for those who have the time to see what I put together: http://dl.dropbox.com/u/86234623/network%20build.pkt)

    Here is my current test scenario.

    On switch 0 I have:
      company A on vlan 2: computers 172.16.1.100 and .101, 255.255.0.0 (FA0/10, FA0/11)
      company B on vlan 3: computer 172.16.2.102, 255.255.0.0 (FA0/12)
      PBX on a trunk port: 172.16.0.5, 255.255.0.0 (FA0/5)
      trunk port on FA0/1 to connect the switches

    On switch 1 I have:
      company A on vlan 2: computer 172.16.1.102, 255.255.0.0
      company B on vlan 3: computers 172.16.2.100 and .101, 255.255.0.0
      trunk port on FA0/1 to connect the switches

    I can ping the respective computers on the same VLAN, but can't ping from company A to B, which is what I want. However, neither company can talk to (ping) the PBX. Here are the commands I used to configure what I have. A routed sketch follows below.

    Switch 0:
      en
      conf t
      vlan 2
       name A
      vlan 3
       name B
      int fa0/10
       switchport mode access
       switchport access vlan 2
      int fa0/11
       switchport mode access
       switchport access vlan 2
      int fa0/12
       switchport mode access
       switchport access vlan 3
      int fa0/5
       switchport trunk encapsulation dot1q
       switchport mode trunk
       switchport trunk allowed vlan 1-3
      int fa0/1   (to connect the switches)
       switchport trunk encapsulation dot1q
       switchport mode trunk
       switchport trunk allowed vlan 1-3

    Switch 1:
      en
      conf t
      vlan 2
       name A
      vlan 3
       name B
      int fa0/10
       switchport mode access
       switchport access vlan 3
      int fa0/11
       switchport mode access
       switchport access vlan 3
      int fa0/12
       switchport mode access
       switchport access vlan 2
      int fa0/1   (to connect the switches)
       switchport trunk encapsulation dot1q
       switchport mode trunk
       switchport trunk allowed vlan 1-3
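
    One hedged sketch of a routed approach on the 3560 itself: put the PBX on an access port in its own VLAN and let the switch route between the VLANs with SVIs. The addressing below assumes per-VLAN /24 subnets (172.16.1.0/24, 172.16.2.0/24 and 172.16.0.0/24) instead of the single flat /16 from the test setup, because inter-VLAN routing needs distinct subnets, and VLAN 10 for the PBX is an invented number:

      ! hypothetical 3560 sketch -- subnets and VLAN 10 are assumptions
      conf t
       ip routing
       vlan 10
        name PBX
       interface fa0/5
        switchport mode access
        switchport access vlan 10          ! PBX as an ordinary host, not a trunk
       interface vlan 2
        ip address 172.16.1.1 255.255.255.0
       interface vlan 3
        ip address 172.16.2.1 255.255.255.0
       interface vlan 10
        ip address 172.16.0.1 255.255.255.0
      end

    Hosts in company A and B would then point at their VLAN's SVI as their default gateway, and ACLs on the SVIs could keep A-to-B traffic blocked while still letting both reach the PBX subnet. Routing on the SonicWall instead (one sub-interface per VLAN) is the other common layout.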

  • Virtualbox Headless Server on Ubuntu missing VRDP Options

    - by The Daemons Advocate
    I'm running VirtualBox headless server on an Ubuntu 64-bit host, and I want to use it remotely. However, I'm having problems connecting via RDP. The DNS names on my network show the host to be 'server' and the guest to be 'ubuntu-vm'. From the official documentation, I gather that I am to connect to the host on the default RDP port in order to see the guest machine. I start the virtual machine like so:

      vboxheadless -startvm My_VM

    Then I connect from my laptop, and I get:

      rdesktop -a 16 server
      ERROR: server: unable to connect

    So next I consult the documentation further, and I find there are VRDP flags that can be turned on (but which should be on implicitly for a headless server). So I pull up information using 'vboxmanage showvminfo My_VM', and I find the VRDP property is off:

      VRDP Connection: not active

    To make things even weirder, the VRDP flag seems to be missing from vboxmanage. I've installed straight from the Ubuntu repos using the virtualbox-ose package; I'm not sure how that measures up against the official docs. For instance, this command doesn't exist:

      VBoxManage modifyvm My_VM --vrdp on

    In the UI, the VM's settings for Display have the 'Remote Display' option greyed out. What I'm looking for is advice :). I'm open to suggestions that don't involve starting again with something like VMware. Thanks in advance!

  • MySQL socket connections working, but not port connections

    - by Neil
    I installed MySQL Community 5.1.45 on Snow Leopard 10.6, using the pkg from their site. I had previously installed a MySQL binary from entropy.ch. In the previous installation, the connections were working fine before I upgraded to Snow Leopard; in Snow Leopard, both installations are problematic. Using an app called Sequel Pro, if I connect with the socket option, it connects properly. However, a standard connection with the same credentials doesn't work. From what I've understood, socket connections happen on the machine itself between processes, whereas normal connections occur over the network/ports, in this case a loopback to my machine, since the server and client are both on the same machine. My new CakePHP installation isn't able to connect to the db with the root credentials I provided. Btw, I've been starting the MySQL server using the preference pane. When I tried running mysqld from the terminal, it gave me:

      100323  1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test
      100323  1:54:37 [Warning] Can't create test file /usr/local/mysql-5.1.45-osx10.6-x86_64/data/mbp.lower-test
      mysqld: Can't change dir to '/usr/local/mysql-5.1.45-osx10.6-x86_64/data/' (Errcode: 13)
      100323  1:54:37 [ERROR] Aborting
      100323  1:54:37 [Note] mysqld: Shutdown complete

    mbp is the name of my machine. How do I fix this so that my web server can connect to the MySQL server?
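
    A quick sketch for separating the two symptoms; the my.cnf path is a guess, and the Errcode 13 in that log is an ordinary permission error on the data directory when mysqld is started as the wrong user:

      # is anything listening on the MySQL TCP port?
      netstat -an | grep 3306

      # force a TCP connection instead of the socket
      mysql -h 127.0.0.1 -P 3306 --protocol=TCP -u root -p

      # look for skip-networking or a restrictive bind-address (path is an assumption)
      grep -E 'skip-networking|bind-address' /etc/my.cnf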

  • How to avoid an intrusion detection/anti-spoofing issue on a SonicWall TZ series FW

    - by Ian
    We have a SonicWall TZ-series FW with two internet service providers connected. One of the providers has a wireless service which works a bit like an ethernet switch, in that we have an IP with a /24 subnet and the gateway is .1. All other clients on the same subnet (say 195.222.99.0) have the same .1 gateway; this is important, read on. Some of our clients are also on the same subnet.

    Our config:
      X0: LAN
      X1: 89.90.91.92
      X2: 195.222.99.252/24 (GW 195.222.99.1)
    X1 and X2 are not connected, other than both being connected to the public internet.

    Client config:
      X1: 195.222.99.123/24 (GW 195.222.99.1)

    What fails, what works:
      Traffic 195.222.99.123 (client) <- 89.90.91.92 (X1):    spoof alert
      Traffic 195.222.99.123 (client) <- 195.222.99.252 (X1): OK - no spoof alert

    I have several clients with IPs in the 195.222.99.0 range, and all provoke identical alerts. This is the alert I see on the FW:

      Alert  Intrusion Prevention  IP spoof dropped  195.222.99.252, 21475, X1  89.90.91.92, 80, X1  MAC address: 00:12:ef:41:75:88

    Anti-spoofing is switched off on my FW for all ports (the MAC-IP anti-spoofing config for each interface). I can provoke the alerts by telnetting to a port on X1 from the clients. You can't argue with the logic: this is suspicious traffic, since X1 is receiving traffic with a source IP that corresponds to X2's subnet. Does anyone know how I can tell the FW that packets with a source subnet of 195.222.99.0 can legitimately appear on X1? I know what's going wrong, and I've seen the same thing before, but with higher-end FWs you can avoid this with a few extra rules. I can't see how to do that here. And before you ask why we're using this service provider: they give us 3 ms (yep, 3 ms, that's not an error) delay between routers.

  • Ghost Solution Suite: Booting over PXE to WinPE for re-imaging, then booting to installed OS

    - by uberdanzik
    I have 40 networked computers that need to be re-imaged each night over the network via an automatic and unattended process. This is to reset the computers to a default state, as well as update them to the latest software loads. I'm using Symantec Ghost Solution Suite 2.5. My process so far is the following:

      1. The client begins in a powered-down, WakeOnLan-accepting state.
      2. A Ghost Console task uses WakeOnLan and PXE to boot the client into the WinPE environment.
      3. The client connects to the Ghost Console and re-images itself successfully.
      4. The client closes WinPE and restarts.
      5. PROBLEM: the client boots into the WinPE environment again, instead of the newly installed OS (Win7).

    I need it to boot into Win7 once so that I can run a few configuration batch files, then shut down into the WakeOnLan state again. Ghost Console even reports an error on the process, namely that it never rebooted into the OS. Right now it seems that there is no option to stop it from booting into the PXE server's WinPE image after re-imaging. Even if I set up a PXE boot menu with other boot options, it's pointless, because it will always boot the default option. I would expect the Ghost Console task to be able to influence the PXE boot choice somehow. What do they expect us to do, turn the PXE server on and off manually? How can I get the client to boot to the OS after re-imaging? Thank you.
