Search Results

Search found 55992 results on 2240 pages for 'web service'.

Page 858/2240 | < Previous Page | 854 855 856 857 858 859 860 861 862 863 864 865  | Next Page >

  • Fresh Install of SQL Server 2008 doesn't install Management Studio. Help!

    - by Jordan S
    OK, I am running Windows 7, 64-bit. I cleaned SQL Server 2005 completely off my system, leaving only SQL Compact Edition. I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=01af61e6-2f63-4291-bcad-fd500f6027ff&displaylang=en and installed SQL Server 2008 Express Edition Service Pack 1. After the install, the only SQL configuration tools under my Start menu are the Configuration Manager, Error and Usage Reporting, and the Install Center. I don't have SQL Server Management Studio. So I went here http://www.microsoft.com/downloads/details.aspx?FamilyID=08e52ac2-1d62-45f6-9a4a-4b76a8564a2b&displaylang=en and downloaded SQL Server 2008 Management Studio Express, but when I try to install it I get a warning saying this program has known compatibility issues and that I need to install SQL Server 2008 Service Pack 1. I thought that is what I installed. So I tried to continue running the install, but I then get an error message that says "Invoke or BeginInvoke cannot be called on a Form before it is opened"... How can I check whether Service Pack 1 is installed or not? What should I do?
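
    A quick way to check which service pack the installed engine is actually at is to query its version properties; a minimal T-SQL sketch (run against the Express instance from sqlcmd or any query window):

        -- ProductLevel returns 'RTM', 'SP1', etc.; ProductVersion starts with 10.0 for SQL Server 2008
        SELECT SERVERPROPERTY('ProductLevel')   AS ProductLevel,
               SERVERPROPERTY('ProductVersion') AS ProductVersion,
               SERVERPROPERTY('Edition')        AS Edition;

    If ProductLevel already reports SP1, the compatibility warning most likely refers to the Management Studio setup package itself (an RTM-era installer) rather than to your engine instance.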


  • Nagios command not transmitting all arguments

    - by markus
    I'm using the following service definition to monitor our Postgres DB from Nagios:

        define service{
                use                     test-service    ; Name of servi$
                host_name               DEMOCGN002
                service_description     Postgres State
                check_command           check_nrpe!check_pgsql!192.168.1.135!test!test!test
                notifications_enabled   1
        }

    On the remote machine I've configured the command:

        command[check_pgsql]=/usr/lib/nagios/plugins/check_pgsql -H $ARG1$ -d $ARG2$ -l $ARG3$ -p $ARG4$

    In the syslog I can see that the command is executed, but only one argument is transmitted:

        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Running command: /usr/lib/nagios/plugins/check_pgsql -H 192.168.1.134 -d -l -p
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Command completed with return code 3 and output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]
        Oct 20 13:18:43 DEMOSRV01 nrpe[1033]: Return Code: 3, Output: check_pgsql: Database name is not valid - -l#012Usage:#012check_pgsql [-H <host>] [-P <port>] [-c <critical time>] [-w <warning time>]#012 [-t <timeout>] [-d <database>] [-l <logname>] [-p <password>]

    Why are arguments 2, 3 and 4 missing?
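
    For what it's worth, NRPE does not forward the !-separated service arguments to the remote host by itself: they are consumed by the Nagios-side check_nrpe command definition, and the remote $ARGn$ macros are only filled in when arguments are passed with -a and dont_blame_nrpe is enabled. A rough sketch of that wiring, assuming a fairly stock check_nrpe command object (your macro layout may differ):

        # commands.cfg on the Nagios server (sketch)
        define command{
                command_name    check_nrpe
                command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$ -a $ARG2$ $ARG3$ $ARG4$ $ARG5$
        }

        # nrpe.cfg on the remote host: argument passing must be compiled in and enabled
        dont_blame_nrpe=1
        command[check_pgsql]=/usr/lib/nagios/plugins/check_pgsql -H $ARG1$ -d $ARG2$ -l $ARG3$ -p $ARG4$

    With a command definition like that, the existing check_command line (check_nrpe!check_pgsql!192.168.1.135!test!test!test) would map its second argument onward into the remote $ARG1$..$ARG4$ macros.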


  • I have added a port to the public zone in firewalld but still can't access the port

    - by mikemaccana
    I've been using iptables for a long time, but have never used firewalld until recently. I have enabled port 3000 TCP via firewalld with the following command:

        # firewall-cmd --zone=public --add-port=3000/tcp --permanent

    However I can't access the server on port 3000. From an external box:

        telnet 178.62.16.244 3000
        Trying 178.62.16.244...
        telnet: connect to address 178.62.16.244: Connection refused

    There are no routing issues: I have a separate rule for a port forward from port 80 to port 8000 which works fine externally. My app is definitely listening on the port too:

        Proto Recv-Q Send-Q Local Address   Foreign Address   State    User   Inode   PID/Program name
        tcp   0      0      0.0.0.0:3000    0.0.0.0:*         LISTEN   99     36797   18662/node

    firewall-cmd doesn't seem to show the port either - see how ports is empty. You can see the forward rule I mentioned earlier.

        # firewall-cmd --list-all
        public (default, active)
          interfaces: eth0
          sources:
          services: dhcpv6-client ssh
          ports:
          masquerade: no
          forward-ports: port=80:proto=tcp:toport=8000:toaddr=
          icmp-blocks:
          rich rules:

    However I can see the rule in the XML config file:

        # cat /etc/firewalld/zones/public.xml
        <?xml version="1.0" encoding="utf-8"?>
        <zone>
          <short>Public</short>
          <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
          <service name="dhcpv6-client"/>
          <service name="ssh"/>
          <port protocol="tcp" port="3000"/>
          <forward-port to-port="8000" protocol="tcp" port="80"/>
        </zone>

    What else do I need to do to allow access to my app on port 3000? Also: is adding access via a port the correct thing to do? Or should I make a firewalld 'service' for my app instead?
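
    Judging from the empty "ports:" line above, a likely explanation is that --permanent only writes the change to the stored zone file and never touches the runtime configuration; a sketch of the usual fix:

        # reload so the permanent configuration becomes the runtime one
        firewall-cmd --reload

        # or add the port to both runtime and permanent configurations explicitly
        firewall-cmd --zone=public --add-port=3000/tcp
        firewall-cmd --zone=public --add-port=3000/tcp --permanent

        # verify
        firewall-cmd --zone=public --list-ports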


  • Sonicwall NAT Policy Loopback

    - by John
    I have an issue and am pretty perplexed by it. I have a SonicWall and it's set up with NAT policies and reflexive NAT for an internal web server. That is, only 2 policies, no loopback policy, and the internal clients can access the web server by public IP with no problems. Now, on another connection, another SonicWall, I have the exact same setup for another web server, with the exact same policies (obviously different IPs), and the internal clients can't access the internal website by its public IP without creating the loopback policy. Maybe I've overlooked it on the first one, but I don't see any loopback policy whatsoever and it's working fine. My question is, does anyone know why the first one works like this but the second one needs the loopback policy? Thanks


  • Setting up an Active-Active IIS Cluster with ARR - is it possible?

    - by Ahmed Zubair
    I would like to know if we can set up an Active-Active IIS cluster using Windows Cluster services that shares common storage for web content, WITHOUT the use of Windows NLB. I'm aware that this may not be a best practice or a recommended setup; however, the setup is to be configured as below: two web servers running IIS 7.5 (needing common storage for web content) for HA, and another set of two servers for a SQL cluster in active-passive mode for HA. Also, is it possible to enable ARR on a 2-node active-active IIS cluster for load balancing HTTP requests? I'd appreciate it if someone could reply with both the pros & cons of the setup.


  • internet-based sync software that will keep running after Windows Live Sync stops doing PC-to-PC syncs?

    - by Warren P
    According to the Wikipedia page, Microsoft Live Sync will shortly stop offering the PC-to-PC sync service. There are lots of apps to sync two PCs on the same LAN, but I want to sync two PCs that are in different cities, across the internet, traversing two different NATs, and that requires some kind of service running on the internet that both connect into. There are already a few questions about syncing folders and files, but this is not a duplicate because none of them answer this basic question: Microsoft Live Sync works better than RSYNC, or any of the linked sync solutions in any of the "not really duplicates", because it works even when the two PCs have NAT and firewalls between them that forbid direct connectivity - Windows Live Sync has a free always-on internet server that all the client PCs connect into. I'm looking for a FREE (no-fees) Microsoft Live Sync work-alike PC-to-PC sync solution that works between PCs and Macs, at least, as well as between PCs, and works behind NAT and firewalls at least as well as Microsoft's solution. (Note that Microsoft's solution makes only outbound socket calls to a Microsoft server, so this solution must necessarily include a server-hub component that is hosted publicly on a free site and which does not require that I set up, manage and pay for my own public internet hosting site.) Hint: none of the answers in the linked duplicate are equivalent (PureSync, FreeFileSync, BestSync 2010, SyncButler, Comodo BackUp, QuickShadow, Gbridge), in that none of them work for the PC-to-Mac situation, where firewalls and NATs prevent direct connection, or else they require money to be paid. When Microsoft Live Sync / Live Mesh finally kills direct PC-to-PC mode, the limitation will be that you will have to pay for more than 25 GB of cloud service, and you can then only sync PC #1 to PC #2 if you first sync to the cloud, then down to the other clients. I can currently sync 100 GB of data from one computer to another, only temporarily "moving the data" through Microsoft's data servers without using up my SkyDrive storage quota.


  • "Password Server: Stopped" on Mac OS Lion Server. Stops with error -1 during startup

    - by V1ru8
    Since I restored the Open Directory from an archive (my server crashed and the DB was corrupt), the password server does not start anymore. The log looks like this:

        Feb 14 2012 21:41:20 156746us Mac OS X Password Service version 376.1 (pid = 2438) was started at: Tue Feb 14 21:41:20 2012.
        Feb 14 2012 21:41:20 156801us RunAppThread Created
        Feb 14 2012 21:41:20 156852us RunAppThread Started
        Feb 14 2012 21:41:20 156879us Initializing Server Globals ...
        Feb 14 2012 21:41:20 163094us Initializing Networking ...
        Feb 14 2012 21:41:20 163196us Initializing TCP ...
        Feb 14 2012 21:41:20 191790us SASL is using realm "SERVER.HOME.POST-NET.CH"
        Feb 14 2012 21:41:20 191847us Starting Central Thread ...
        Feb 14 2012 21:41:20 191860us Starting other server processes ...
        Feb 14 2012 21:41:20 191873us StartCentralThreads: 1 threads to stop
        Feb 14 2012 21:41:20 191905us Initializing TCP ...
        Feb 14 2012 21:41:20 191954us Starting TCP/IP Listener on ethernet interface, port 106
        Feb 14 2012 21:41:20 192012us Starting TCP/IP Listener on ethernet interface, port 3659
        Feb 14 2012 21:41:20 192048us Starting TCP/IP Listener on interface lo0, port 106
        Feb 14 2012 21:41:20 192082us Starting TCP/IP Listener on interface lo0, port 3659
        Feb 14 2012 21:41:20 192117us StartCentralThreads: Created 4 TCP/IP Connection Listeners
        Feb 14 2012 21:41:20 192132us Starting UNIX domain socket listener /var/run/passwordserver
        Feb 14 2012 21:41:20 193034us CRunAppThread::StartUp: caught error -1.
        Feb 14 2012 21:41:20 193056us ** ERROR: The Server received an error during startup. See error log for details.
        Feb 14 2012 21:41:20 193075us RunAppThread::StartUp() returned: 4294967295
        Feb 14 2012 21:41:20 193107us Stopping server processes ...
        Feb 14 2012 21:41:20 193119us Stopping Network Processes ...
        Feb 14 2012 21:41:20 193131us Deinitializing networking ...
        Feb 14 2012 21:41:20 193149us Server Processes Stopped ...
        Feb 14 2012 21:41:20 193165us RunAppThread Stopped
        Feb 14 2012 21:41:20 193202us Aborting Password Service. See error log.

    The error log repeats the following:

        Feb 14 2012 21:41:50 409022us Server received error -1 during startup.
        Feb 14 2012 21:41:50 409141us Aborting Password Service.

    Anyone have an idea what's wrong here and how I can fix this?


  • Hosting WCF over Internet

    - by karthik
    I am pretty new to exposing WCF services hosted on IIS over the internet. I will be deploying a WCF service on IIS (6 or 7) and would like to expose this service over the internet. It will be hosted in a corporate network behind a firewall, and I want this service to be accessible over the internet (i.e. able to pass through the firewall). I did some research on this and these are some of the pointers I got:

    1. I could use wsHttpBinding or netTcpBinding (the client is intended to be a .NET client). Which of the bindings is preferable? (See the sketch below.)
    2. To get through the corporate firewall I came across the idea of a DMZ server - what is the purpose of this, and do I really need to use it?
    3. I will be passing some files between the client and server, and the client needs to know the progress of the processing on the server and the end result.

    I know this is a very broad question to ask, but could anyone give me pointers on where I could start and what approach to take for this problem? Any help will be appreciated. Thanks, Karthik
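
    On the first pointer: wsHttpBinding rides on plain HTTP/HTTPS (ports 80/443), which usually passes corporate firewalls without opening extra holes, while netTcpBinding is faster but needs its own TCP port open end to end. A minimal web.config sketch for an HTTP-exposed endpoint - the service and contract names here are hypothetical placeholders, not from the question:

        <system.serviceModel>
          <services>
            <service name="MyCompany.FileService">
              <endpoint address=""
                        binding="wsHttpBinding"
                        contract="MyCompany.IFileService" />
              <endpoint address="mex"
                        binding="mexHttpBinding"
                        contract="IMetadataExchange" />
            </service>
          </services>
        </system.serviceModel>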


  • SQL Server 2008 Cluster Installation - First network name always fails

    - by boflynn
    I'm testing failover clustering in Windows Server 2008 to host a SQL Server 2008 installation using this installation guide. My base cluster is installed and working properly, as is the clustered DTC service. However, when it comes time to install SQL Server, my first attempt at installation always fails with the same message and seems to "taint" the network name. For example, with my previous cluster attempt, I was installing SQL Server as VSQL. After approximately 15 attempts at installation and trying to resolve the errors, e.g. changing domain accounts for SQL, setting SPNs, etc., I typoed the network name as VQSL and the installation worked. Similarly on my current cluster, I tried installing with the SQL service named PROD-C1-DB and got the same errors as last time until I tried changing the name to anything else, e.g. PROD-C1-DB1, SQL, TEST, etc., at which point the install works. It will even install to VSQL now. While testing, my install routine was:

    1. Run setup.exe from patched media, selecting appropriate options
    2. After the install fails, choose "Remove node from a SQL Server failover cluster" and remove the single, failed node
    3. Attempt to diagnose the problem, inspect event logs, etc.
    4. Delete the computer account that was created for the SQL service from Active Directory
    5. Delete the MSSQL10.MSSQLSERVER folder from the shared data drive

    The error message I receive from the SQL Server installer is:

        The following error has occurred: The cluster resource 'SQL Server' could not be brought online. Error: The group or resource is not in the correct state to perform the requested operation. (Exception from HRESULT: 0x8007139F)

    Along with hundreds of the following errors in the Application event log:

        [sqsrvres] checkODBCConnectError: sqlstate = 28000; native error = 4818; message = [Microsoft][SQL Server Native Client 10.0][SQL Server]Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'.

    System configuration notes:

    - Windows Server 2008 Enterprise Edition x64
    - SQL Server 2008 Enterprise Edition x64 using slipstreamed SP1+CU1 media
    - Dell PowerEdge servers
    - Fibre attached storage
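
    The ANONYMOUS LOGON failures combined with a name that only works after being changed look a lot like a stale Kerberos SPN left over from the earlier attempts on the original network name. One way to check and clean that up is with setspn; a sketch, assuming the SQL Server service account is DOMAIN\sqlsvc (a placeholder):

        rem list SPNs currently registered to the service account
        setspn -L DOMAIN\sqlsvc

        rem search the domain for duplicate/leftover SPNs on the failed network name
        setspn -Q MSSQLSvc/PROD-C1-DB*

        rem remove a stale entry, then re-register the correct one
        setspn -D MSSQLSvc/PROD-C1-DB.yourdomain.com:1433 DOMAIN\oldaccount
        setspn -A MSSQLSvc/PROD-C1-DB.yourdomain.com:1433 DOMAIN\sqlsvc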


  • Internal+external interfaces with multiple default gateways on win2003

    - by fileitup
    I'm trying to set up several web servers for a load-balanced cluster and need to have each server connected to the internal network (for load balancing) as well as to an external network (internet - for administration). I have two NICs, but since I can't set two default gateways I have the external gateway as the default and the internal one as a route rule. This setup only works halfway - the internal network is fine, but I can't log in from outside or see the web from the box. If I switch the gateways, remote login/web will work, but the internal network won't. I'm sure someone has encountered this before, but I wasn't able to find anything online. Any help will be appreciated.
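
    For what it's worth, the usual pattern on Windows Server 2003 is a single default gateway on the external (internet) NIC plus persistent static routes covering the internal ranges via the internal NIC; a sketch, assuming the internal networks live in 10.0.0.0/8 behind an internal router at 10.1.1.1 (both placeholders):

        rem keep the default gateway on the external NIC only, then:
        route -p add 10.0.0.0 mask 255.0.0.0 10.1.1.1

        rem confirm the routing table afterwards
        route print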


  • Local Area Connection in Slackware 13

    - by asdasd
    I have Windows XP and Slackware 13 on one computer, and the ISP provided me with a new modem. There was a manual on how to configure it, so I started the web browser and typed in its IP address, 192.168.1.1, and the web interface of the modem appeared, so I logged in; that was easy. But under Slackware, I don't know how to reach the modem's config / web interface. I type in 192.168.1.1 but it's not working. Here's the output of ifconfig eth0:

        eth0      Link encap:Ethernet  HWaddr 00:a1:b0:01:18:28
                  inet addr:169.254.73.8  Bcast:169.254.255.255  Mask:255.255.0.0
                  UP BROADCAST MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
                  Interrupt:17 Memory:febff400-febff4ff

    How can I log in to the modem from Linux, i.e. reach its web interface under Slackware? Thank you.
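
    The 169.254.x.x address in that output is a link-local fallback, meaning eth0 never got a DHCP lease and is not on the modem's 192.168.1.0/24 subnet. A sketch of two ways to fix that (run as root; the .10 host address is just a placeholder):

        # option 1: ask the modem for a DHCP lease
        dhcpcd eth0

        # option 2: set a static address on the modem's subnet
        ifconfig eth0 192.168.1.10 netmask 255.255.255.0 up

        # then the modem should be reachable in the browser
        ping 192.168.1.1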


  • Implementing a form of port knocking + PhoneFactor = 2-factor auth for RDP?

    - by jshin47
    I have been looking into how to secure a publicly-available RDP endpoint and want to implement our two-factor authentication RADIUS server, PhoneFactor. I would like to implement the following process:

    1. User opens up the web app in a browser
    2. In the web app, the user enters username + password, initiating RADIUS auth
    3. PhoneFactor calls the user to complete auth
    4. Once the user is authenticated, port 3389 is opened for the user's IP on the pfSense firewall
    5. After some amount of time, the firewall rule is removed for that IP

    I would like to know the following:

    - Is this a typical setup?
    - If it is a bad idea, please explain why.
    - If it is possible, are there any packages that assist with this? Specifically with the step where the appropriate firewall rule would need to be added (see the sketch below)...

    Edit: I am aware of TS Web Gateway, but I want the users to be able to use the traditional RDP client...
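
    As a rough illustration of the rule-adding step only: pfSense ships a small shell helper, easyrule, that a script reachable over SSH could call. Treat this as a sketch and verify the argument order on your pfSense version, since it varies between releases and there is no built-in timed expiry:

        # allow RDP from the just-authenticated client IP (203.0.113.50 is a placeholder)
        easyrule pass wan tcp 203.0.113.50 any 3389

        # removing the rule again later (step 5) is up to the web app or a cron job;
        # easyrule itself does not age rules out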


  • IIS Seems to Forward Domain/IP to Domain Controller

    - by asinc
    We have a server (Server 1) with Windows 2008 that is accessible by RDP and is also set as the primary DNS IP for a domain (example.com). This server is on the same network as an SBS 2008 server (Server 2), which is the domain controller and internal DNS server. Web requests going to example.com with the IP of Server 1 are being passed to Server 2 and served up by IIS from Server 2. What causes this to happen? Is there a safe way to have Server 1's IIS handle the web requests, which was our expected outcome? Example:

        DNS entry on ISP: example.com = 111.111.111.111
        Server 1 = 111.111.111.111
        Server 2 = 111.111.111.112

    A web user goes to example.com in a browser, and the page is actually returned from 111.111.111.112?


  • SQL Server 2000 + ASP.NET: Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

    - by Rick
    I just migrated a development workstation FROM: Windows XP Pro SP3 with IIS 6 TO: Vista Enterprise 64-bit with IIS 7. Since the move, one of my pages that accesses a SQL Server 2000 database is receiving the following error from my ASP.NET 2.0 web page: "Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'." I have:

    - enabled Windows Authentication in IIS and web.config
    - disabled Anonymous Authentication in IIS
    - set up Impersonation to run as the authenticated user
    - verified that the logged-in user (in this case, me) has access to the appropriate database on the SQL Server
    - verified that my login and impersonation information is correct in the ASP.NET page by checking User.Identity.Name and System.Security.Principal.WindowsIdentity.GetCurrent().Name (both display my username)

    My connection string using SqlConnection is "Server={SERVER_NAME};Database={DB_NAME};Integrated Security=SSPI;Trusted_Connection=True;"

    Why is it trying to log in with NT AUTHORITY\ANONYMOUS LOGON? I have to assume it's some setting or web.config entry specific to IIS 7, since it worked fine before the migration. NOTE: The SQL Server is Windows authentication only - no mixed mode or SQL only.
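
    For reference, the settings described above normally boil down to the web.config fragment below (a minimal sketch). When the browser, IIS and SQL Server are on different machines, this combination also requires Kerberos delegation (valid SPNs plus a delegation-trusted account); without it, the impersonated token cannot make the second hop to SQL Server and the connection falls back to ANONYMOUS LOGON - which may be what changed in the move to the new box:

        <system.web>
          <!-- authenticate the caller with their Windows account -->
          <authentication mode="Windows" />
          <!-- run the request under the caller's token instead of the app pool identity -->
          <identity impersonate="true" />
        </system.web>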


  • Remote Desktop Services Gateway Issue

    - by AVandelay05
    Alright fellow techies, here's the rundown. I have installed Server 2008 R2 Remote Desktop Services on a VM in my network. I installed the following RD role services: RD Session Host, Licensing, Connection Broker, Gateway, Web Access. When I set things up originally, the gateway server and RD Web worked as they should locally. After getting things running locally (remoteserver.domainname.local) I wanted to test things externally. From the outside, I couldn't get things running (meaning I could connect to RD Web Access externally, but when I tried to run an app I would get the message "can't connect/find computer"). Here's my setup for external access:

    - The VM has every RD Services role service installed on it, meaning it acts as gateway, RD Web Access, session host, licensing, the whole bit.
    - I made a self-signed certificate on the gateway server (gateway.domainname.net is the cert name).
    - Internally, I have a secondary forward-lookup zone called domainname.net with an A record "gateway" pointing to the local IP of the gateway server.
    - On our public DNS (domainname.net) I have an A record "gateway". This is to access RD Web externally.
    - In IIS I have the following authentication settings - RDWeb: all disabled except for anonymous authentication; Rpc: all disabled except for basic and Windows; RpcWithCert: all disabled except for Windows authentication.
    - I have the necessary web access config in our SonicWall TZ210 (HTTPS and RDP, external IP pointing to the local IP of the RDS server).
    - RAP and CAP have the correct user and computer groups, authentication, and allowed devices.

    After all of this, here's what happens when accessing externally. I can log in correctly to RD Web Access (I've tried a bogus login; I can't log in with it, so that's working properly). I see the apps for use. I click on an app, click connect, the credential window opens, I put in the correct user creds, it tries to connect to the gateway server, but then the cred window comes back into view. I tried to reach a limit of failed logins, but never reached one, haha. So from the same external client machine I try to connect to the gateway through a Remote Desktop connection. I put in the correct gateway settings in the RD window, try to connect and get the same results as I did in RD Web Access. I checked the event logs on the RD Services machine and saw the following event IDs around the time I tried to log in externally:

        ID 6037: "The program svchost.exe, with the assigned process ID 2168, could not authenticate locally by using the target name host/gateway.domainname.net. The target name used is not valid. A target name should refer to one of the local computer names, for example, the DNS host name. Try a different target name."

        ID 10, RDWebAccess: "RD Web Access was unable to access gateway.domainname.net, which is the server that is specified as running the RemoteApp and Desktop Connection Management service. Ensure that the computer account of the RD Web Access server is a member of the TS Web Access Computers security group on gateway.domainname.net"

        ID 4625: "An account failed to log on. Subject: Security ID: NULL SID, Account Name: -, Account Domain: -, Logon ID: 0x0. Logon Type: 3. Account For Which Logon Failed: Security ID: NULL SID, Account Name: Administrator, Account Domain: gateway.domainname.net. Failure Information: Failure Reason: Unknown user name or bad password. Status: 0xc000006d, Sub Status: 0xc000006a. Process Information: Caller Process ID: 0x0, Caller Process Name: -. Network Information: Workstation Name: USER-LAPTOP, Source Network Address: External IP, Source Port: 63125. Detailed Authentication Information: Logon Process: NtLmSsp, Authentication Package: NTLM, Transited Services: -, Package Name (NTLM only): -, Key Length: 0. This event is generated when a logon request fails. It is generated on the computer where access was attempted. The Subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates the kind of logon that was requested. The most common types are 2 (interactive) and 3 (network). The Process Information fields indicate which account and process on the system requested the logon. The Network Information fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases. The authentication information fields provide detailed information about this specific logon request. Transited services indicate which intermediate services have participated in this logon request. Package name indicates which sub-protocol was used among the NTLM protocols."

    I don't think the VM has a null SID; the SIDs of the VM and its physical host are different. I can access the blank page for RPC externally using the external gateway name. It seems like authentication is the problem. Also, is it a problem that the external name of the gateway server doesn't match the local name? The external name (which the cert is based on) is gateway.domainname.net and the internal name is remoteserver.domainname.local. That's the only thing I can think of that would be the problem, but the external name has to be different from the local one, right? Internally, I ping gateway.domainname.net and it gives me the correct local IP of the server. Now, there isn't an actual computer account with that name in AD, but I don't know how I would achieve that? I hope I've been clear... any help would be appreciated. I think I'm close to achieving this. :)


  • Why does Messenger keep coming up when I adjust screen brightness?

    - by Asaf R
    My Dell Studio XPS 13 has brightness up & down buttons (Fn + up arrow / down arrow), as most laptops do. It's running Windows 7 64-bit and I've installed Windows Live Messenger (build 14 and something) on it. When I hit Fn + down arrow to reduce screen brightness, the Messenger window pops up. That's true whether it has been running in the background (system tray / minimized) before or not. Note: to make Messenger hide itself to the system tray, I've set it to Vista compatibility mode. I don't think that has anything to do with it, though, since the problem persists if I turn compatibility mode off. There's a thread on the matter here, but with no solution. Thanks, Asaf

    SOLUTION: Stop IntelliType when using the internal keyboard. See the detailed howto here.

    EDIT: Some new findings on the matter - if I exit Messenger entirely and restart (or simply stop) the service "Windows Live ID Sign-in Assistant", the problem is solved until I run Messenger again. It persists even after entirely closing Messenger, until I restart the said service.

    EDIT2: The above service "Windows Live ID Sign-in Assistant" is actually named wlidsvc. There's no shortcut defined for Messenger, nor is there any other hotkey in its preferences.

    EDIT3: I'm not sure if it relates, but some of the Fn keys are not working - Fn + F1 doesn't put the machine to sleep, Fn + F3 doesn't show battery status.

    EDIT4: The problem probably relates to a conflict with IntelliType Pro. See my answer below. Said conflict also causes an Undo command when disabling wireless (wireless touch button), and other side effects for various keys.


  • Better performance with memcached cluster or local memcaches?

    - by Nicholas Tolley Cottrell
    I have a small cluster of servers balancing a Java web app. Currently I have 3 memcached servers caching data, and all the web apps share all 3 memcached instances. I often get strange slowdowns and timeouts to some of the memcacheds, and I'm wondering if there is a good way of analyzing the performance. I'm also wondering whether my iptables rules (or some other system limitation) are blocking/slowing connections. I am considering reconfiguring the web apps so that they only query the memcached process on their own localhost.
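
    Before reconfiguring, it may be worth ruling the memcached daemons themselves in or out; each instance exposes counters over its text protocol that show evictions, connection churn and listen-queue overflows. A quick sketch of pulling them (host names are placeholders):

        # raw stats over the text protocol
        echo stats | nc memcache1.example.com 11211

        # or, if the perl helper shipped with memcached is installed
        memcached-tool memcache1.example.com:11211 stats

        # counters worth watching: curr_connections, listen_disabled_num
        # (incremented when the connection backlog overflows), evictions, get_misses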


  • Reverse proxy 502 bad gateway

    - by Brian Graham
    I have set up a subdomain to proxy my Plesk panel, but when saving pages I am getting a 502 Bad Gateway error instead of a completion message. I am running CentOS 6. Here is my vhost.conf configuration for http://plesk.domain.tld/:

        RewriteEngine On
        RewriteCond %{SERVER_PORT} ^80$
        RewriteRule $ https://plesk.domain.tld/ [R,L]

    Here is my vhost_ssl.conf configuration for https://plesk.domain.tld/:

        SSLProxyEngine On
        <Location />
            ProxyPass https://localhost:8443/
            ProxyPassReverse https://localhost:8443/
        </Location>

    I have more than enough RAM, CPU and HDD (and I have checked); there are no spikes. As well, the posted information does save, it just errors instead of showing me a "This information has been saved." green/red block. Here is the relevant error from /var/log/nginx/error.log (IP/host filtered):

        2014/05/29 02:42:41 [error] 8046#0: *402 upstream prematurely closed connection while reading response header from upstream, client: 173.238.XX.XX, server: plesk.domain.tld, request: "POST /smb/web/edit HTTP/1.1", upstream: "https://198.100.XX.XX:7081/smb/web/edit", host: "plesk.domain.tld", referrer: "https://plesk.domain.tld/smb/web/edit"


  • Windows Server 2008R2 IIS7.5, Requests getting 401 status

    - by TLBH
    We have a web site running on Windows Server 2008 R2 / IIS 7.5, and we are seeing several errors reported from our global error handler saying "Request Timed Out". I matched one up to the IIS log file and see the request took 135116 (presumably milliseconds), had an sc-status of 401, an sc-substatus of 0 and an sc-win32-status of 64. Two requests failed this way, but lots of surrounding requests (1979 successful requests vs 2 failures) for the same user went through perfectly fine - with the same cs-username, which makes a 401 seem a little odd. The target of the requests is an ASP.NET web service's web method, called by the .NET client library; it's called a lot of times per user (3 times per second) to keep a page updated. We're getting some users reporting a freezing effect, and I think this may be the cause. Any ideas? Peter


  • How to restrict HTTP Methods

    - by hemalshah
    I want to restrict HTTP methods to those required by the MOSS07 application(s) using IIS6. I am not able to find a solution after googling. Can anyone help? Update: this is what was written in the document: "IIS6 should be used to restrict HTTP methods to those required by the MOSS07 application(s)." I also searched some books and saw something curious in O'Reilly's SharePoint 2007 by James Pyles and others: there is no real supported way to use HTTP POST and HTTP GET because of the web.config settings and the static definition of the WSDL. In the web.config:

        <protocols>
            <remove name="HttpGet"/>
            <remove name="HttpPost"/>
            <remove name="HttpPostLocalhost"/>
            <add name="Documentation"/>
        </protocols>

    If we do this in the web.config file, would it solve the problem?
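
    Note that the <protocols> block only governs which invocation protocols ASP.NET .asmx web services expose; it does not restrict the HTTP verbs IIS 6 itself will accept. A common way to restrict verbs at the IIS 6 level is Microsoft's UrlScan ISAPI filter; a sketch of the relevant urlscan.ini pieces, assuming UrlScan 3.1 is installed (verify the exact verb list MOSS 2007 actually needs before enforcing it):

        [options]
        UseAllowVerbs=1        ; only verbs listed in [AllowVerbs] are accepted

        [AllowVerbs]
        GET
        HEAD
        POST
        OPTIONS
        PROPFIND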


  • how to configure Firefox to automatically reuse the login credentials like IE

    - by Black Eagle
    Multiple HTTP authentication prompts in Firefox: we are currently working on porting our application from Internet Explorer to Firefox, and the application currently uses HTTP Digest Authentication. In Internet Explorer, the popup dialog to enter the username/password appears only once, and the entered login credentials are reused for subsequent HTTP requests to the web server. However, in Firefox, the authentication popup appears every time a request is made to the web server. The web server used is the Emweb server. We would like to know how to configure Firefox to automatically reuse the login credentials like IE does.


  • Changing Corosync/Heartbeat pair's active node based on MySQL/Galera cluster state

    - by Hace
    Background: I'm planning on building a High Availability "cluster" for our Zabbix instance by placing two physical servers in one server room and two in another server room. In each server room one of the physical servers will run Zabbix on RHEL, and the other will run Zabbix's MySQL database, also on RHEL. I'd prefer synchronous replication for the MySQL nodes, so I'm planning on using Galera in a master-slave configuration. The Zabbix instances on the two Zabbix servers would be controlled by Heartbeat/Corosync (although Red Hat Cluster Suite is also an option...). If the Zabbix server in server room A goes down, the one in server room B becomes active (and vice versa). Ditto for the MySQL servers/instances. If either of those cases happens, however, the connection between the Zabbix server and the MySQL server becomes significantly slower, as it has to travel over the WAN.

    Question: Is it possible to configure the Heartbeat/Corosync pair to instruct the MySQL/Galera cluster to switch its master node to the one that is in the same server room as the active Heartbeat/Corosync node (if available), and - more challengingly - is it possible to do the same in the other direction, i.e. have the Galera cluster change the active Heartbeat/Corosync server to be in the same room as the active MySQL master server in case of a failover, in order to avoid unnecessary WAN transfers between the application and its DB?

    Theories: Most likely I can get Corosync to run something that logs in to one of the DB nodes to change the MySQL/Galera master, but I don't know if it's really possible to do anything similar in the other direction in Galera. Is it possible to define a "service" in Corosync/Heartbeat so that the Zabbix service and its MySQL service would migrate as one (see the sketch below)? Using the DB server that's behind the WAN should still be a better option than DB downtime. Am I just using too many tools to solve a problem that'd be far simpler with something else?
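
    The "migrate as one" idea is usually expressed as Pacemaker colocation/ordering constraints, which only works if both the Zabbix server and the database resource are managed by the same Pacemaker/Corosync cluster rather than by Galera's own membership. A crm-shell sketch under that assumption (resource names are placeholders, not from the question):

        # p_mysql and p_zabbix are hypothetical primitives already defined in the CIB
        # keep Zabbix on whichever node currently runs the database resource
        colocation zabbix_with_db inf: p_zabbix p_mysql
        # and only start Zabbix after the database is up
        order db_before_zabbix inf: p_mysql p_zabbix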


  • iptables forwarding to a dummy interface

    - by madinc
    Hi, I'm trying to accomplish the following: I have a box with a service listening on a dummy interface (say 172.16.0.1), UDP port 5555. Now what I'd like to do is take packets that arrive on interfaces eth0 (1.1.1.1:5555) and eth1 (2.2.2.2:5555), forward them to the service on the dummy interface, and have replies go back to clients out the same physical interface they came in on. Clients must think they're talking to 1.1.1.1:5555 or 2.2.2.2:5555. I think I need a mix of iptables rules and packet marking, plus some iproute rules (if it's possible at all). What I tried is to catch packets coming in from eth0 and eth1, UDP port 5555, mark them with 1 and 2 respectively, and --save-mark in the connmark. Then I used a DNAT to 172.16.0.1. The service seems to be getting the packets. Now I'm not sure how to do the reverse. It seems that for packets originating from the box, you can't do anything before the routing decision, but that would be the place to restore the marks and thus make a routing decision based on them. Here's what I have so far:

        iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j MARK --set-mark 1
        iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j MARK --set-mark 2
        iptables -t mangle -A PREROUTING -d 1.1.1.1 -p udp --dport 5555 -j CONNMARK --save-mark
        iptables -t mangle -A PREROUTING -d 2.2.2.2 -p udp --dport 5555 -j CONNMARK --save-mark
        iptables -t nat -A PREROUTING -m mark --mark 1 -j DNAT --to-destination 172.16.0.1
        iptables -t nat -A PREROUTING -m mark --mark 2 -j DNAT --to-destination 172.16.0.1
        # What next?

    As I said, I'm not even sure it can be done. To give a bit of background, it's an old OpenVPN installation that cannot be upgraded (otherwise I'd install a recent version that supports multihoming natively). Thanks for any help.
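
    For the reverse direction, the marks saved in the connmark can be restored onto the reply packets in the mangle OUTPUT chain (locally generated traffic is re-routed after OUTPUT mangling), and fwmark-based ip rules can then pick a routing table per uplink. A rough sketch under those assumptions - the gateway addresses and table numbers are placeholders:

        # put the connection mark back on locally generated reply packets
        iptables -t mangle -A OUTPUT -p udp --sport 5555 -j CONNMARK --restore-mark

        # route marked traffic out the interface the connection originally came in on
        ip rule add fwmark 1 table 101
        ip rule add fwmark 2 table 102
        ip route add default via 1.1.1.254 dev eth0 table 101
        ip route add default via 2.2.2.254 dev eth1 table 102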


  • deploying war on tomcat fails to start

    - by Asghar
    I have a Java application which uses JAX-WS. When I deployed it on my Tomcat 5 server, it deployed successfully, but it fails to start:

        SEVERE: WSSERVLET11: failed to parse runtime descriptor: java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName
        java.lang.IllegalArgumentException: prefix cannot be "null" when creating a QName
            at javax.xml.namespace.QName.<init>(xml-commons-apis-1.3.02.jar.so)
            at gnu.xml.stream.XMLParser.getAttributeName(libgcj.so.7rh)
            at com.sun.xml.ws.util.xml.XMLStreamReaderFilter.getAttributeName(XMLStreamReaderFilter.java:228)
            at com.sun.xml.ws.streaming.XMLStreamReaderUtil$AttributesImpl.<init>(XMLStreamReaderUtil.java:355)
            at com.sun.xml.ws.streaming.XMLStreamReaderUtil.getAttributes(XMLStreamReaderUtil.java:198)
            at com.sun.xml.ws.transport.http.DeploymentDescriptorParser.parseAdapters(DeploymentDescriptorParser.java:204)
            at com.sun.xml.ws.transport.http.DeploymentDescriptorParser.parse(DeploymentDescriptorParser.java:147)
            at com.sun.xml.ws.transport.http.servlet.WSServletContextListener.contextInitialized(WSServletContextListener.java:124)
            at org.apache.catalina.core.StandardContext.listenerStart(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardContext.start(catalina-5.5.23.jar.so)
            at org.apache.catalina.manager.ManagerServlet.start(catalina-manager-5.5.23.jar.so)
            at org.apache.catalina.manager.HTMLManagerServlet.start(catalina-manager-5.5.23.jar.so)
            at org.apache.catalina.manager.HTMLManagerServlet.doGet(catalina-manager-5.5.23.jar.so)
            at javax.servlet.http.HttpServlet.service(tomcat5-servlet-2.4-api-5.5.23.jar.so)
            at javax.servlet.http.HttpServlet.service(tomcat5-servlet-2.4-api-5.5.23.jar.so)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardWrapperValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardContextValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardHostValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.valves.ErrorReportValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.core.StandardEngineValve.invoke(catalina-5.5.23.jar.so)
            at org.apache.catalina.connector.CoyoteAdapter.service(catalina-5.5.23.jar.so)
            at org.apache.coyote.http11.Http11Processor.process(tomcat-http-5.5.23.jar.so)
            at org.apache.coyote.http11.Http11BaseProtocol$Http11ConnectionHandler.processConnection(tomcat-http-5.5.23.jar.so)
            at org.apache.tomcat.util.net.PoolTcpEndpoint.processSocket(tomcat-util-5.5.23.jar.so)
            at org.apache.tomcat.util.net.LeaderFollowerWorkerThread.runIt(tomcat-util-5.5.23.jar.so)
            at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(tomcat-util-5.5.23.jar.so)
            at java.lang.Thread.run(libgcj.so.7rh)


  • How hard is it for a Software Developer to Maintain a Server?

    - by Samy
    I'm a software developer and don't have much experience as a sysadmin. I developed a web app and was considering buying a server and hosting the web app on it. Is this a huge undertaking for a web developer? What's the level of difficulty of maintaining a server and keeping up with the latest security patches and all that kind of fun stuff? I'm a single user and not planning to sell the service to others. Can someone also recommend an OS for my case, and maybe some good learning resources that are concise and not too overwhelming?


< Previous Page | 854 855 856 857 858 859 860 861 862 863 864 865  | Next Page >