Search Results

Search found 12497 results on 500 pages for 'linked servers'.


  • TMG Forefront Proxy blocking internal HTTP requests

    - by Pascal
    I have Forefront TMG with the proxy component installed and configured. However, whenever I make internal HTTP requests to servers on the internal network using a fully qualified DNS name, the proxy denies the connection:

        Denied Connection FRW-02 18/03/2011 20:06:37
        Log type: Web Proxy (Forward)
        Status: 12202 Forefront TMG denied the specified Uniform Resource Locator (URL).
        Rule: Default rule
        Source: Internal (10.50.75.21:21492)
        Destination: Internal (10.50.75.10:8080)
        Request: GET http://app-01.mydomain.com.br:9871/internalwebserver_deploy/MyServiceService.svc?wsdl
        Filter information: Req ID: 0a157279; Compression: client=No, server=No, compress rate=0% decompress rate=0%
        Protocol: http
        User: anonymous

    How can I get around this block? This is an internal call, so it shouldn't be blocked. If I use only http://app-01:9871/internalwebserver_deploy/MyServiceService.svc?wsdl, without the domain after the server name, then it doesn't get blocked. 10.50.75.10 is the firewall's IP and the internal network's gateway.

    Read the article

  • Virus that tries to brute force attack Active Directory users (in alphabetical order)?

    - by Nate Pinchot
    Users started complaining about slow network speed, so I fired up Wireshark. Some checking found many PCs sending packets similar to the following screenshot: http://imgur.com/45VlI.png (I blurred out the text for the username, computer name and domain name, since it matches the internet domain name). Computers are spamming the Active Directory servers, trying to brute-force passwords: each starts with Administrator and works down the list of users in alphabetical order. Physically going to a PC finds no one anywhere near it, and this behavior is spread across the network, so it appears to be a virus of some sort. Scanning the computers that have been caught spamming the server with Malwarebytes, Super Antispyware and BitDefender (the antivirus the client has) yields no results. This is an enterprise network with about 2500 PCs, so doing a rebuild is not a favorable option. My next step is to contact BitDefender to see what help they can provide. Has anybody seen anything like this, or have any ideas what it could possibly be?

    Read the article

  • Memory Pressure Protection Feature for TCP Stack - Provided by Microsoft Security Update KB967723

    - by Angry_IT_Guru
    We've been having a lot of funky issues with some of our web-based applications that allow clients to submit lots of image files to our servers; lots of ports are used in the process. http://www.microsoft.com/technet/security/bulletin/MS09-048.mspx - released in Sept 2009. support.microsoft.com/kb/974288 - Memory Pressure Protection description. Evidently, after applying KB967723, our clients receive funky error messages, as if connections cannot be made to the server or connections have been closed. There doesn't appear to be a pattern: sometimes it works and other times it doesn't, though typically we've noticed it when the server is under load. I'm curious what others think about this MPP and any issues that you may have experienced from it. I understand its purpose, but I think it may have broken a lot of apps in the process. It doesn't look like Microsoft made this "feature" widely known.
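
    If MPP does turn out to be the culprit, KB974288 documents switches for turning it off. A minimal sketch for the Vista/Server 2008-generation TCP stack (on Server 2003 the same KB describes a registry setting instead); worth testing on one affected server before rolling out:

        :: Show the current Memory Pressure Protection state
        netsh int tcp show security

        :: Disable MPP, and the per-profile protection added by the same update
        netsh int tcp set security mpp=disabled
        netsh int tcp set security profiles=disabled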

    Read the article

  • Specify IPSEC port range using ipsec-tools

    - by Sandman4
    Is it possible to require IPSEC on a port range? I want to require IPSEC for all incoming connections except a few public ports like 80 and 443, but don't want to restrict outgoing connections. My SPD rules would look like:

        spdadd 0.0.0.0/0 0.0.0.0/0[80] tcp -P in none;
        spdadd 0.0.0.0/0 0.0.0.0/0[443] tcp -P in none;
        spdadd 0.0.0.0/0 0.0.0.0/0[0...32767] tcp -P in ipsec esp/transport//require;

    In the setkey manpage I see IP ranges, but no mention of port ranges. (The idea is to use IPSEC as a sort of VPN to protect internal communications between multiple servers. Instead of configuring permissions based on source IPs, or configuring specific ports, I want to demand IPSEC on anything which is not meant to be public - I feel it's less error-prone this way.)
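
    As far as I know, setkey's spdadd only takes a single port per rule, so one workaround (a sketch, untested) is to leave the SPD out of the port logic and enforce "IPsec or nothing" in the packet filter instead, using iptables' policy match to recognize traffic that arrived inside an ESP SA:

        # Public ports are allowed in the clear
        iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT

        # Anything that arrived through an IPsec transport SA is allowed
        iptables -A INPUT -m policy --dir in --pol ipsec --proto esp -j ACCEPT

        # All remaining inbound TCP skipped IPsec, so drop it
        iptables -A INPUT -p tcp -j DROP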

    Read the article

  • Howto WCF Service HTTPS Binding and Endpoint Configuration in IIS with Load Balancer?

    - by Mike G
    We have a WCF service that is being hosted on a set of 12 machines, with a load balancer acting as a gateway to them. The site is set up for SSL, in that users access it with an https URL, but none of the servers has an https binding or is set up to require SSL. This leads me to believe that the load balancer handles the https and that the connections from the balancer to the servers are unencrypted (this takes place behind the firewall, so no biggie there). The problem we're having is that when a Silverlight client tries to access a WCF service, it gets a "Not Found" error. I've set up a test site along with our developer machines and have made sure that the bindings and endpoints in the web.config work with the client; it seems to be only in the production environment that we get this error. Is there anything wrong with the following web.config? Should we be handling https in a different manner? We're at a loss on this currently, since I've tried every programmatic solution with endpoints and bindings, and none of the solutions I have found deal with a load balancer in the way we're using one. Web.config service model info:

        <system.serviceModel>
          <behaviors>
            <serviceBehaviors>
              <behavior name="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior">
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
              <behavior name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior">
                <serviceMetadata httpsGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <bindings>
            <customBinding>
              <binding name="SecureCRMCustomBinding">
                <binaryMessageEncoding />
                <httpsTransport />
              </binding>
              <binding name="SecureAACustomBinding">
                <binaryMessageEncoding />
                <httpsTransport />
              </binding>
            </customBinding>
            <mexHttpsBinding>
              <binding name="SecureMex" />
            </mexHttpsBinding>
          </bindings>
          <serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
          <!-- Defines the services to be used in the application -->
          <services>
            <service behaviorConfiguration="TradePMR.OMS.Framework.Services.CRM.CRMServiceBehavior"
                     name="TradePMR.OMS.Framework.Services.CRM.CRMService">
              <endpoint address="" binding="customBinding" bindingConfiguration="SecureCRMCustomBinding"
                        contract="TradePMR.OMS.Framework.Services.CRM.CRMService" name="SecureCRMEndpoint" />
              <!-- This is required in order to be able to use "Update Service Reference" in the Silverlight application -->
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
            <service behaviorConfiguration="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregationBehavior"
                     name="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation">
              <endpoint address="" binding="customBinding" bindingConfiguration="SecureAACustomBinding"
                        contract="TradePMR.OMS.Framework.Services.AccountAggregation.AccountAggregation" name="SecureAAEndpoint" />
              <!-- This is required in order to be able to use "Update Service Reference" in the Silverlight application -->
              <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
            </service>
          </services>
        </system.serviceModel>

    The ServiceReferences.ClientConfig looks like this:

        <configuration>
          <system.serviceModel>
            <bindings>
              <customBinding>
                <binding name="StandardAAEndpoint">
                  <binaryMessageEncoding />
                  <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
                <binding name="SecureAAEndpoint">
                  <binaryMessageEncoding />
                  <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
                <binding name="StandardCRMEndpoint">
                  <binaryMessageEncoding />
                  <httpTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
                <binding name="SecureCRMEndpoint">
                  <binaryMessageEncoding />
                  <httpsTransport maxReceivedMessageSize="2147483647" maxBufferSize="2147483647" />
                </binding>
              </customBinding>
            </bindings>
            <client>
              <endpoint address="https://Service2.svc" binding="customBinding" bindingConfiguration="SecureAAEndpoint"
                        contract="AccountAggregationService.AccountAggregation" name="SecureAAEndpoint" />
              <endpoint address="https://Service1.svc" binding="customBinding" bindingConfiguration="SecureCRMEndpoint"
                        contract="CRMService.CRMService" name="SecureCRMEndpoint" />
            </client>
          </system.serviceModel>
        </configuration>

    (The addresses are of no consequence, since those are dynamically built to point either to a dev's machine or to the production server.)
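
    One pattern that commonly explains a production-only "Not Found" in this topology (an assumption on my part, since the question doesn't confirm it): the balancer terminates SSL, so each node actually receives plain HTTP, while the server-side bindings insist on httpsTransport and therefore never open a matching endpoint. A hedged sketch of the server-side bindings for SSL offloading, reusing the binding names from above:

        <bindings>
          <customBinding>
            <!-- The balancer offloads SSL, so the nodes listen on plain HTTP
                 even though clients connect with https:// -->
            <binding name="SecureCRMCustomBinding">
              <binaryMessageEncoding />
              <httpTransport />
            </binding>
            <binding name="SecureAACustomBinding">
              <binaryMessageEncoding />
              <httpTransport />
            </binding>
          </customBinding>
        </bindings>

    Metadata on the nodes would likewise need httpGetEnabled="true" rather than httpsGetEnabled. The client config can keep httpsTransport, since the Silverlight client really does speak SSL to the balancer.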

    Read the article

  • Minimizing SQL transaction log file size on developer box running simple recovery model

    - by Anders Rask
    We have a lot of SQL Servers in our development environment where we never take backups of the databases (TFS for code is enough). The (SharePoint) databases are all set to the simple recovery model, but the log files, especially for the SharePoint configuration database, are growing quite large and filling up the data drive on the SQL Server. Since these log files are never used for anything, I would like advice on how best to minimize their size, or even disable them if possible. I'm not completely sure why the log files grow so large even under simple recovery (I checked for long-running transactions with DBCC OPENTRAN but found none). I guess the reason the log files aren't being truncated is that we don't take any backups, and hence checkpoints aren't reached. Autogrowth for the log files is set to 10%, restricted to 2 GB, so I guess that is why the checkpoint threshold (70%) isn't reached here either. What would be the best strategy to keep the log files small (best case 0) without sacrificing performance (e.g. VLF fragmentation)?
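
    For the one-off cleanup, a minimal sketch of the usual sequence, assuming the database really is in simple recovery (the logical log file name SharePoint_Config_log below is an assumption; query sys.database_files for the real one):

        -- Confirm the recovery model really is SIMPLE
        SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'SharePoint_Config';

        -- Find the logical name and size of the log file
        USE SharePoint_Config;
        SELECT name, type_desc, size FROM sys.database_files;

        -- Force a checkpoint so the inactive portion of the log can be reused, then shrink
        CHECKPOINT;
        DBCC SHRINKFILE (SharePoint_Config_log, 512); -- target size in MB

    Shrinking on a schedule just reintroduces autogrow and VLF fragmentation, so the saner long-term setting is to size the log once at a sensible cap and leave it there.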

    Read the article

  • Virtual directory makes file copy operations extremely slow on a UNC path (IIS 7.5 bug?)

    - by user144737
    When I create a website/virtual directory pointing to a UNC path, it makes file copies on that UNC path extremely slow: about 6 seconds to copy ~13 MB of files on the UNC path without any virtual directory/website pointing to it, but over 1 minute for the same ~13 MB of files on the same UNC path with a virtual directory/website pointing to it. All file copy operations run on the web server side. Our setup is as follows:

    Web server - Windows Server 2008 R2 Standard / IIS 7.5
    File server - Windows Server 2003 Standard

    I have tested this case on 3 servers (Windows Server 2008 R2 Standard / IIS 7.5) and got the same result. I also tested it on 2 Windows 2003 / IIS 6 machines, where it does not slow down the file copy. Is this an IIS 7.5 bug? Is there any patch/hotfix to solve this? Thank you. Gordon

    Read the article

  • Download HP Power Protector for ESXi

    - by Mark Henderson
    The HP PowerProtector user guide states that to install the HP PowerProtector client on an ESXi host: "Download the latest version of HPPP from the HP website (http://www.hp.com/go/rackandpower). The ESXi Server is automatically detected, and a shutdown command script is generated." However, in typical HP fashion, after clicking through no fewer than 6 different links to get to the downloads page (http://h18004.www1.hp.com/products/servers/proliantstorage/power-protection/software/power-protector/pp-dl.html), I am presented with:

    HP Power Protector (HPPP) - Windows
    HP Power Protector (HPPP) - Linux x86
    HP Power Protector (HPPP) - Linux x64
    HP Power Protector (HPPP) - Linux IA64
    HP Power Protector (HPPP) - HPUX

    The Linux packages contain an RPM and in no way resemble what is in the HP documentation. None of these are labelled for ESXi. Does anyone know where or how to get the HP Power Protector ESXi client installed?

    Read the article

  • MySQL replication Slave_IO_Running: No

    - by Christy
    Hi all, I have two servers that I am trying to set up replication of one database between. I followed a setup guide on SourceForge and have tried various other settings since then, but no matter what I do, when I start the slave, 'Slave_IO_Running' is always No. I have no idea why or what to look at; any suggestions are appreciated. The slave setup was:

        CHANGE MASTER TO
          MASTER_HOST='myserver.mydomain.net',
          MASTER_USER='slave_user',
          MASTER_PASSWORD='mypassword',
          MASTER_LOG_FILE='mysql-bin.000011',
          MASTER_LOG_POS=1368363;

    (Last data from today, trying to do the setup again; I deleted and recreated the database on the slave from a new dump and tried to redo the setup.) I have slave_user set up for %, localhost, and the specific IP of the slave computer, but nothing seems to be working... Thanks in advance for any advice or suggestions
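
    Two things worth checking that commonly produce Slave_IO_Running: No are the I/O thread's actual error message and the replication grant on the master. A hedged sketch, reusing names from the question (the slave IP 10.0.0.2 is a placeholder):

        -- On the slave: the I/O thread's last error is shown here
        SHOW SLAVE STATUS\G
        -- Inspect Last_IO_Errno / Last_IO_Error (just Last_Error on older versions)

        -- On the master: make sure the account is actually allowed to replicate
        GRANT REPLICATION SLAVE ON *.* TO 'slave_user'@'10.0.0.2' IDENTIFIED BY 'mypassword';
        FLUSH PRIVILEGES;

        -- And confirm the master is really writing the binlog named in CHANGE MASTER TO
        SHOW MASTER STATUS;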

    Read the article

  • Server 2003 on domain won't let domain user have local profile

    - by RobW
    I have a few servers that are exhibiting this behavior: you log in and always get put into a temporary profile. The server is licensed for TS. The user I am testing with has local admin rights, so it doesn't seem to be a permission issue on the server. I first get a message that the user's roaming profile cannot be found, even though we don't use roaming profiles. I then get another message immediately after, saying a local profile could not be loaded, so it will only use a temp profile. Any help would be greatly appreciated.

    Read the article

  • L2TP with PEAP authentication from MacOS/iOS

    - by Jose
    Following the recent security advisory, I'm reconfiguring our VPN servers and having trouble. We're using a Windows 2008 R2 server for VPN services, running RRAS and NPS on the same server, configured to use PEAP-EAP-MSCHAPv2 authentication for all tunnel types (PPTP, L2TP, IKEv2, SSTP), where plain MSCHAPv2 was previously allowed. But Apple products, MacOS and iOS, cannot connect to the VPN after this change. I tried installing the root certificate used in the PEAP exchange, but it made no difference. Does anyone know whether MacOS/iOS supports PEAP-EAP-MSCHAPv2 authentication in PPTP/L2TP? If so, any tips to make it work? (I know PEAP-EAP-MSCHAPv2 is supported in WPA/WPA2 Enterprise.) Regards.

    Read the article

  • OpenVPN slow with Firewall enabled on Zyxel ZyWall USG-100

    - by aleroot
    I have an OpenVPN server on a Linux machine. After installing a ZyWall USG-100, I'm experiencing extreme slowness browsing web servers on my remote LAN through the VPN connection, while accessing the ZyWall's own web interface is fast. I have configured everything: the virtual server for the OpenVPN server, and the same static route as on the router the ZyWall replaced. Today I even added a rule to the firewall that allows connections to the OpenVPN server machine, but navigation on the LAN through the VPN is still slow. It seems the firewall is dropping packets, since if I disable the firewall on the USG-100 everything is as fast as usual, while with the firewall enabled it is extremely slow. Why? Do I need to add some other rule to the firewall to speed things up?

    Read the article

  • Setting up mod_rewrite

    - by Publiccert
    I'm trying to set up mod_rewrite for a few servers. The code lives in /home/jeff/www/upload/application/. However, this is what's happening; it appears to be a problem with mod_rewrite, since it's prepending code.py to the directory path:

        The requested URL /code.py/home/jeff/www/upload/application/ was not found on this server.

    Here are the rules. Which one is the culprit?

        WSGIScriptAlias / /home/jeff/www/upload/application
        Alias /static /home/jeff/www/upload/public_html
        <Directory /home/jeff/www/upload/application>
            SetHandler wsgi-script
            Options ExecCGI FollowSymLinks
        </Directory>
        AddType text/html .py
        <Location />
            RewriteEngine on
            RewriteBase /
            RewriteCond %{REQUEST_URI} !^/static
            RewriteCond %{REQUEST_URI} !^(/.*)+code.py/
            RewriteRule ^(.*)$ code.py/$1 [PT]
        </Location>
        </VirtualHost>
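
    A hedged guess at the culprit: the RewriteRule substitution doesn't start with a slash. In a per-directory or <Location> context, a relative substitution gets joined onto the current path, which matches the /code.py/home/jeff/... shape of the error. A sketch of the same block with an absolute URL-path substitution (untested; it assumes the WSGI application really is meant to be reached under /code.py):

        <Location />
            RewriteEngine on
            RewriteCond %{REQUEST_URI} !^/static
            RewriteCond %{REQUEST_URI} !^/code\.py/
            # A leading slash makes the substitution an absolute URL-path,
            # so the filesystem prefix is no longer glued in front of it
            RewriteRule ^(.*)$ /code.py/$1 [PT]
        </Location>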

    Read the article

  • What are industry standards and professional best practices in network hosts naming? [closed]

    - by Ivan
    Possible Duplicate: Naming convention for computers

    Naming network hosts seems to me an important and difficult dilemma: routers; servers (a server can be a router and host diverse services at the same time); virtual machines (they host important services and can migrate); workstations and notebooks (pc-username is not the best idea, as users may change); printers and MFUs; surveillance IP cameras; and so on. Are there known and accepted best practices for this task? Excuse me if there was already a similar question here (I think there probably was); I haven't found it.

    Read the article

  • Unable to resolve FQDN, hostname works

    - by HannesFostie
    We are having an issue where computers that are not part of the domain cannot resolve the FQDN of a server, while the plain hostname and the IP do resolve. The strange thing is that resolution works once the computer is joined to the domain. Our domain name is rather long, something along the lines of team.dept.company.com; could that be it? The DHCP server hands out the proper DNS and WINS servers as well as the domain name, so I thought that should have solved the problem, but apparently not. Our domain is still Windows 2003. EDIT: I am starting to believe I can narrow this down to either a problem with the VMware Tools NIC drivers embedded in my WinPE boot image, or to the fact that I'm trying to do this from inside a VM. Pinging an FQDN at the same time, using a different task sequence on a physical machine, works.
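
    A few checks from an affected machine can at least separate DNS problems from driver/VM problems (the server name and DNS server address below are placeholders, not from the question):

        rem Resolve the FQDN via whatever DNS the client picked up from DHCP
        nslookup server01.team.dept.company.com

        rem Ask the domain's DNS server directly, bypassing any forwarder in between
        nslookup server01.team.dept.company.com 10.0.0.10

        rem See which DNS servers and suffix search list the client actually has
        ipconfig /all

    If the direct query succeeds while the default one fails, the issue is which DNS server non-domain clients end up asking, not the VM or the NIC driver.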

    Read the article

  • Exposing BL as WCF service

    - by Oren Schwartz
    I'm working on a middle-tier project which encapsulates the business logic (it uses a DAL layer and serves an ASP.NET web application server) of a product deployed in a LAN. The BL is a bunch of services and data objects that are invoked upon user action. At present, the DAL acts as a separate application, whereas the BL uses it but is consumed by the web application as a DLL. The DAL and the web application are deployed on different servers inside the organization, and since the BL DLL is consumed by the web application, it resides on the same server. The worst thing about exposing the BL as a DLL is that we lose track of what we expose. Deployment is not such a big issue, since product versions are mostly deployed together. Would you recommend migrating from a DLL to a WCF service? If so, why? Do you know anyone who has had a similar experience? Thank you!
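
    For a sense of what the migration buys in terms of an explicit surface, here is a minimal sketch (hypothetical names, not from the project) of one BL operation pulled behind a WCF contract; the [ServiceContract]/[OperationContract] attributes become the auditable list of exactly what is exposed:

        using System.ServiceModel;

        // The contract is the explicit, versionable surface of the BL:
        // only members marked [OperationContract] are reachable by callers.
        [ServiceContract(Namespace = "http://example.com/bl/v1")]
        public interface IOrderService
        {
            [OperationContract]
            OrderSummary GetOrder(int orderId);
        }

        public class OrderService : IOrderService
        {
            public OrderSummary GetOrder(int orderId)
            {
                // Delegate to the existing BL/DAL code behind the contract
                return new OrderSummary { Id = orderId };
            }
        }

        [System.Runtime.Serialization.DataContract]
        public class OrderSummary
        {
            [System.Runtime.Serialization.DataMember]
            public int Id { get; set; }
        }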

    Read the article

  • Samba: session setup failed: NT_STATUS_LOGON_FAILURE

    - by stivlo
    I tried to set up Samba with "unix password sync", but I still get a logon failure. I am running Ubuntu Natty Narwhal.

        $ smbclient -L localhost
        Enter stivlo's password:
        session setup failed: NT_STATUS_LOGON_FAILURE

    Here is my /etc/samba/smb.conf:

        [global]
           workgroup = obliquid
           server string = %h server (Samba, Ubuntu)
           dns proxy = no
           log file = /var/log/samba/log.%m
           max log size = 1000
           syslog = 0
           panic action = /usr/share/samba/panic-action %d
           security = user
           encrypt passwords = true
           passdb backend = tdbsam
           obey pam restrictions = yes
           unix password sync = yes
           passwd program = /usr/bin/passwd %u
           passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
           pam password change = yes
           map to guest = bad user

        [www]
           path = /var/www
           browsable = yes
           read only = no
           create mask = 0755

    After modifying I restarted the daemons:

        $ sudo restart smbd
        $ sudo restart nmbd

    However, I still can't log on with my Unix username and password. Can anyone please help? Thank you in advance!
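
    A common gotcha here (an assumption, since the question doesn't mention it): "unix password sync" only pushes password changes from Samba out to Unix; it does not make Samba accept Unix credentials. With passdb backend = tdbsam, the user still needs an entry in Samba's own password database first:

        # Add the Unix user to Samba's tdbsam backend
        sudo smbpasswd -a stivlo

        # Then retry the logon
        smbclient -L localhost -U stivlo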

    Read the article

  • Setting up a LAN within a LAN

    - by nageeb
    How unreasonable would it be to set up a small LAN within an existing LAN? I'm setting up a series of video surveillance servers and a number of IP cameras at a client's location and cannot have my equipment on the same network as their local machines. My network is essentially self-contained, and the only thing anyone needs to access is a web app on one of the machines. Basically, I'm thinking of installing a SOHO router which would uplink to their LAN, and then setting up some NAT rules on both their router and mine to allow outside access to the web server. Is there anything fundamental I'm missing that would prevent this from working?

    Read the article

  • CentOS 5.5 APIC issue on ESX 4.1 & ML115

    - by Adnan
    Hi, I've just installed vSphere 4.1 on an HP ProLiant ML115 G5 quad-core and am trying to install CentOS 5.5 as a guest system. However, when the guest boots up I get a calibrate_APIC_clock warning and a kernel panic. I've come across a knowledge base article on the VMware website which suggests moving the guest onto another, Intel-based host (!). Funnily enough, I don't have a collection of spare host servers sitting around, so can anyone suggest another solution? Alternatively, would installing an earlier version of CentOS get around this issue, or would a yum update put me back to square one? How about BIOS settings, could anything be tweaked there? Thanks.

    Read the article

  • NFS caching on Ubuntu

    - by stream
    We run a bunch of Ubuntu servers (mostly 8.04 LTS) which all mount an NFS share at /nfs. We use the NFS share primarily for two purposes: symlinking config files (such as Apache vhosts), and reading and writing uploaded files. This all works great, except it makes us fully dependent on the central NFS server (which is a DRBD cluster with heartbeat failover from primary to secondary, but we've still seen issues). What we'd like is to mount the NFS share through some local caching layer which would keep any file that had previously been read available even when /nfs isn't; writes could be disabled for this period. Searching around, it looks like cachefilesd may be an option; unfortunately, it appears to be packaged only for Ubuntu 9.10 and 10.04. I was also looking for a FUSE-based solution which might fit the bill, but haven't found anything yet. Any suggestions would be greatly appreciated!
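
    If the newer releases are an option, the cachefilesd route is roughly as follows (a sketch, not tested on 8.04, whose kernel lacks the FS-Cache support this relies on; server name and export path are placeholders):

        # Install the cache daemon and enable it
        sudo apt-get install cachefilesd
        echo 'RUN=yes' | sudo tee -a /etc/default/cachefilesd
        sudo service cachefilesd start

        # Mount the share with the fsc option so reads go through FS-Cache
        sudo mount -t nfs -o ro,fsc nfsserver:/export /nfs

    Note that FS-Cache only persists data that has already been read; it is a read accelerator, not a true offline mirror, so the "previously read files stay available" behaviour should be verified against a deliberately downed server before relying on it.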

    Read the article

  • Backing up Information Store - Recovering to Different Information Store / RSG

    - by Kip
    Hi All, I have a question about a situation that hasn't yet arisen, but I wondered about the possibilities and how we would go about it. Currently we back up our Exchange 2003 cluster with Backup Exec, set to back up the Microsoft Information Store on that server and all of the mailbox stores beneath it. We have previously used this in conjunction with a recovery storage group on the same server to recover lost mailboxes. However, due to space constraints on that server (a separate issue that is being addressed in the very near future, but outside the scope of this question) we no longer have enough space on that server to do a recovery-storage-group restore. Is it possible to restore an information store to a different server in the same administrative group (i.e. First)? By that I mean we have the following:

    Server1 | First Storage Group | Mailbox Store1/2/3

    Could Mailbox Store 1 be restored to:

    Server2 | First Storage Group | Recovery Storage Group

    Both servers are under the same administrative group. Currently, for whatever reason (mainly time), the mailboxes are not being backed up individually. Regards Kip

    Read the article

  • james - mail server DNS configuration

    - by Chaitanya
    Hi, I am setting up the James mail server. I installed James and, in config.xml, set the servername to mydomain.com. In the DNS for mydomain.com, I have created an A record, say mx.mydomain.com, which corresponds to the IP address of the mail server machine, and then added mx.mydomain.com as the MX record for mydomain.com. In James, I have created a new user, test. From that user I have sent a mail to my Gmail account. I see that the mail is accepted and sits in the outgoing folder of James, but it is never relayed to the Gmail server. In config.xml I have added 8.8.8.8 and 8.8.4.4 as the DNS server addresses, which are public DNS servers hosted by Google. IPTables on the machine is stopped. Thanks for your help!
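
    Two quick checks from the James box itself can narrow this down (dig and telnet are standard tools; gmail-smtp-in.l.google.com is Gmail's published MX host). A very common cause of exactly this symptom is the provider blocking outbound port 25:

        # Does the machine's DNS setup resolve Gmail's MX records at all?
        dig mx gmail.com +short

        # Can James actually reach a remote SMTP server on port 25?
        telnet gmail-smtp-in.l.google.com 25

    If the telnet connection never opens, the queue will sit in the outgoing folder just as described, and no DNS tweak will fix it.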

    Read the article

  • Massive Network Upgrade

    - by Cliff Racer
    I find myself tasked with organizing an upgrade of our entire Active Directory from Server 2003 to 2008. We run a few AD-dependent services, all of which we are looking to bring up to date with their most recent versions as well:

    Exchange 2007
    SQL Server 2008
    SharePoint 2007

    The original AD is a bit of a mess (for example, the Exchange upgrade from 2003 left stuff in the AD database that makes references to servers that no longer exist). Here is what I want to accomplish:

    Migrate the domain from our 2003 domain to a new, clean 2008 R2 domain
    Upgrade from SharePoint 2007 to 2010
    Upgrade Exchange from 2007 to 2010

    My question is: in what order do we do things? Can I do the domain upgrade and simply migrate Exchange afterwards? On their own, these objectives are complicated enough; orchestrating them in our company while minimizing downtime is making my head spin. I have done a lot of the research on how to do them individually, but I am having trouble figuring out how to do them all in concert.

    Read the article

  • Nagios Terminal Services check?

    - by jldugger
    Most of our servers are licensed for 2 concurrent remote desktop sessions. This is fine, so long as everyone does their administrative task and logs off, but some people accidentally close sessions (disconnecting while remaining logged in) instead. I know you can force someone off with the right admin tools, but it's a bit ugly and may hurt productivity, or maybe even the server(?). I was thinking that a nightly Nagios check of available remote sessions, nagging people, would help enforce discipline on the subject. Can anyone recommend a service check that can monitor Terminal Services availability?
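
    One hedged approach, assuming the NSClient++ agent is installed on the Windows boxes: poll the Terminal Services performance counter with the stock check_nt plugin and alert when the licensed sessions are used up (the host address, agent port and thresholds below are placeholders):

        # Active RDP sessions via the Terminal Services perf counter
        ./check_nt -H 10.0.0.5 -p 12489 -v COUNTER \
            -l "\\Terminal Services\\Active Sessions" \
            -w 1 -c 2

        # Simpler reachability check: is anything listening on the RDP port?
        ./check_tcp -H 10.0.0.5 -p 3389

    The counter check catches the "both slots held by idle disconnects" case; the plain TCP check only proves the listener is up.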

    Read the article
