Search Results

Search found 83224 results on 3329 pages for 'web application firewall'.

Page 308/3329

  • Our application is responding different on client server

    - by WtFudgE
    I have an application running on our local server and it works perfectly. However, when I deploy the application to our client's server there are problems. If I copy all the sources from their server to another local machine, it works fine again... This is an ASP.NET application running on Windows Server 2008. The client's server specs are: Intel Xeon 2.13 GHz, 6 GB RAM, 300 GB HDD, Windows Server 2008 R2. Although I think it's not that important, I will describe the problems to give you a better idea: they are related to updating and deleting fields in SQL Server. Whenever more than one user is running the application and updating or deleting (not even the same record!), some fields seem to be updated or deleted incorrectly. But as I said before, if we copy the source to another server this does not happen, so I assume it is configuration related. Do you guys have any idea what the issue could be? Thanks a lot. EDIT: my client told me he disabled the SQL-related accounts. Could this be related?

    Read the article

  • IIS 7.0 404 Custom Error Page and web.config

    - by Colin
    I am having trouble with a custom 404 error page. I have a domain running a .NET proj with its own error handling. I have a web.config running for the domain which contains: <customErrors mode="RemoteOnly"> <error statusCode="500" redirect="/Error"/> <error statusCode="404" redirect="/404"/> </customErrors> On a sub dir of that domain I am ignoring all routes by doing routes.IgnoreRoute("Assets/{*pathInfo}"); in the .NET proj, and I want to put a custom 404 error page on that dir and any sub dirs of Assets. The sub dir contains static content like images, css, js etc. So in the Error Pages section of IIS I put a redirect to an absolute URL. The web.config for that dir looks like the following: <system.webServer> <httpErrors> <remove statusCode="404" subStatusCode="-1" /> <error statusCode="404" prefixLanguageFilePath="" path="http://mydomain.com/404" responseMode="Redirect" /> </httpErrors> </system.webServer> But when I navigate to an unknown URL under that dir I still see the default IIS 404 page. I am also seeing an alert in IIS that reads: You have configured detailed error messages to be returned for both local and remote requests. When this option is selected, custom error configuration is not used. Does this have anything to do with the customErrors mode="RemoteOnly" in the site web.config? I have tried to override the customErrors in the sub dir web.config but nothing changes. Any help would be appreciated. Thanks.
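
    For reference, a minimal sketch of a commonly suggested variant of that sub-dir web.config (an assumption-laden example: it presumes the /404 route exists on the parent site and that IIS detailed errors are turned off) serves the custom page with ExecuteURL instead of issuing a redirect:

      <system.webServer>
        <httpErrors errorMode="Custom" existingResponse="Replace">
          <remove statusCode="404" subStatusCode="-1" />
          <!-- execute the site's /404 route server-side rather than redirecting -->
          <error statusCode="404" path="/404" responseMode="ExecuteURL" />
        </httpErrors>
      </system.webServer>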

    Read the article

  • Browsers hang when loading web pages

    - by avalanchis
    HP Laptop with Windows Vista Home Premium, Service Pack 2. Problem affects IE8 and Firefox. When attempting to browse most web sites, browser loads web site title in browser's title bar, then displays "Waiting for site url" and never completes loading the page. Able to connect to www.google.com with no problems. Unable to connect to www.carbonwy.com, www.wsj.com, www.cnn.com, www.microsoft.com, etc. Other PCs on same subnet are all able to connect to these sites without any problem. nslookup resolves all of these sites without any problem, so DNS does not appear to be the problem. Windows Firewall is disabled. Norton 360 was previously installed but has been uninstalled. No other firewall software is installed. IE8 Protected Mode is off. IE8 advanced settings have been reset. IE8 user settings have been reset. After reset, IE8 tries to connect to http://hp-laptop.aol.com and go.microsoft.com but same symptoms persist. Title bar loads, then nothing. System has been restarted numerous times. Running out of ideas. Please help!

    Read the article

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr K.
    I need to run a Linux-Apache-PHP-MySQL application (Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind. For instance one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. Right now we are using two HP DL380s with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with some more high-end hardware? I am particularly curious how many and how powerful servers are needed (number of processors/cores, size of RAM), what network equipment should be used (what kind of switches, network cards), and what other hardware, like particular disk storage solutions, is needed. Another thing is how to put everything together, that is, what the optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).
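
    On a write-heavy InnoDB workload like this, the MySQL configuration usually matters as much as the hardware. A rough my.cnf sketch of the settings people typically size first (the values are placeholders to be tuned against the box's actual RAM and durability requirements, not recommendations):

      [mysqld]
      # give InnoDB the bulk of the RAM on a dedicated database box (placeholder value)
      innodb_buffer_pool_size = 12G
      # larger redo logs smooth out heavy insert/update bursts (placeholder value)
      innodb_log_file_size = 512M
      # keep each table in its own tablespace file
      innodb_file_per_table = 1
      # 2 = flush the log to disk once per second; faster writes, but up to
      # about one second of committed transactions can be lost on a crash
      innodb_flush_log_at_trx_commit = 2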

    Read the article

  • Apache Proxy Pass and Web Sockets

    - by James
    I'm using Apache with the mod_proxy module to reverse proxy my Node.js application through to port 80, so that we can access it as an internal application. I have a file in sites-enabled which contains this: <VirtualHost *:80> DocumentRoot /var/www/internal/ ServerName internal ServerAlias internal <Directory /var/www/internal/public/> Options All AllowOverride All Order allow,deny Allow from all </Directory> ProxyRequests off <Proxy *> Order deny,allow Allow from all </Proxy> ProxyPass / http://localhost:8080/ retry=0 ProxyPassReverse / http://localhost:8080/ ProxyPreserveHost on ProxyTimeout 1200 LogLevel debug AllowEncodedSlashes on </VirtualHost> As I said, our application is written in Node.js and we're using socket.io to make use of web sockets, as our application also contains realtime elements. The problem is, mod_proxy doesn't seem to handle web sockets and we get errors when trying to use them: WebSocket connection to 'ws://bloot/socket.io/1/websocket/nHtTh6ZwQjSXlmI7UMua' failed: Unexpected response code: 502 How can we fix this issue and keep sockets working? The only way we can currently get it working is to access the site via ip:port, which we don't want to do. Also, as a side question, how can I get ErrorDocument to work properly? Our error files are stored in /var/www/internal/public/error/ but they seem to get put through the proxy too.
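
    For context, the usual way to handle this (a sketch, assuming an Apache build new enough to ship mod_proxy_wstunnel and the default socket.io paths shown in the error) is to proxy the websocket endpoint with the ws:// scheme before the catch-all ProxyPass:

      # inside the same <VirtualHost>, above "ProxyPass / http://localhost:8080/"
      ProxyPass        /socket.io/1/websocket/ ws://localhost:8080/socket.io/1/websocket/
      ProxyPassReverse /socket.io/1/websocket/ ws://localhost:8080/socket.io/1/websocket/

    Older Apache versions without mod_proxy_wstunnel cannot tunnel websocket upgrades at all, which would explain why only direct ip:port access works.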

    Read the article

  • Apache Reverse proxy for intranet and other integrated application on intranet

    - by user1433448
    I'm trying to configure a reverse proxy (SSL) with Apache 2.2 on Debian Squeeze, but I have some problems, especially with some absolute paths and with HTTPS. I'll try to detail what I have done and what I'm trying to configure. I have a Debian Squeeze server with apache2.2 + mod_proxy_html, installed with: # apt-get install libapache2-mod-proxy-html libxml2-dev # a2enmod proxy # a2enmod proxy_http # a2enmod proxy_html # a2enmod headers After that I configured a virtual host with: reverse_proxy_ssl.conf I'm trying to allow access to our intranet from the internet through a reverse proxy (the Apache server located in the DMZ). With this configuration domain.com/intranet works correctly and we can access the intranet, but we have one problem: from domain.com/intranet we need to use another internal application that the intranet calls with an absolute path (https://192.168.10.25/application/), so from the internet the browser tries to access the internal IP and that link is broken from an external site. We only need access, via the intranet, to a few internal applications that live on another server, and we want to restrict access from the internet to a minimum. All the applications that live on the same server as the intranet are working. The second problem is with HTTPS and the reverse proxy: our firewall logs some errors about invalid packets, although HTTPS itself seems to work. What can I do to solve these problems (the absolute paths and the SSL errors)? Thanks
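
    For the absolute-path part, mod_proxy_html is normally the piece that rewrites links inside the proxied pages. A rough sketch (the internal address comes from the question; the /application/ prefix and the SSL directive are assumptions):

      # in the reverse-proxy virtual host; SSLProxyEngine is needed because the backend is https
      SSLProxyEngine   On
      ProxyPass        /application/ https://192.168.10.25/application/
      ProxyPassReverse /application/ https://192.168.10.25/application/
      # run response bodies through mod_proxy_html
      SetOutputFilter  proxy-html
      ProxyHTMLEnable  On
      # rewrite absolute links emitted by the intranet so browsers stay on the proxy
      ProxyHTMLURLMap  https://192.168.10.25/application/ /application/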

    Read the article

  • How to get my W2003-server (back) into the web (after setting up bridged networking)

    - by MBaas
    I have recently set up VirtualBox on a W2003 server (which is also used as a webserver, accessed from the web). My VBox worked nicely, but then I wanted more: I wanted the VM to appear on the intranet like any ordinary PC. I was advised to set up bridged networking as opposed to NAT. I did so, and in the server's network connections I have bridged the LAN connection and the "VirtualBox Host-Only Network" (yes, it says "host only network", but I assure you that VBox networking is configured to use a network bridge). So now my VM is visible on the intranet and it also has www access, and the server can also access the web. The only problem that came up is that the server is no longer accessible from the web. I've traced an HTTP request and it says "Can't connect to *:80 (connect: No route to host)". So maybe something in the router's config needs to be adjusted (yeah, well, the server's IP address changed from 192.168.1.199 to ...198). So I went into the router config, reviewed port forwarding for port 80 and adjusted the IP there, but it still didn't work. Unsure whether it was a router problem or rather something in the server's config, I've set up a "demilitarized zone" in the router and have put the server into it. (My understanding is that this would put the server straight onto the web...) But the result of the HTTP requests is still the same :(

    Read the article

  • Seeking web-based FTP client for very large file upload

    - by Paul M. Nguyen
    I have looked around for these for some time... the limits imposed by the web server and/or the dynamic programming environment (e.g. PHP) are far too restrictive for the application I'm working on. We need to be able to move large graphics and video files to and from clients (ranging from tens of MB to a few GB in a single file). Plain FTP with a proper desktop client will do the trick, and we're hosting this in Amazon EC2 with EBS. User management will be done from the office via webmin. Users are chroot-jailed into their home dir by proftpd. net2ftp will work for many clients, but we often need to move single files that approach 1GB or exceed 2-3GB which is way out of the range of any http-based uploader. So we turn to Java or Flash - can they do it? From within the web browser establish an FTP connection and grab a huge file? There are licensed applets and such out there, but none seem convincing. Again, I'm looking for some code that can speak FTP and read (& write?) the local disk, that is delivered in a web browser, and can move single files of 2GB+. The reason for having a web-based interface to FTP is to skip the software installation step for our clients. I will consider proper desktop client software as long as it's "portable" and at least Win+Mac and can be easily configured by lay-man users in a hurry.

    Read the article

  • transparently proxying a firewalled web application from a non-standard port to port 80

    - by Terrence Brannon
    I have a web application that serves on port 8088 on $server. However, the only port accessible from remote on $server is port 80. Furthermore, only CGI programs can execute on port 80. I would like to write a CGI program accessible via port 80 that allows one to use the web app running on port 8088. From my view, an ideal solution would be some sort of Java web browser that simply opened up a window and allowed me to use the program running on that port. The CGI program would simply initiate a web browser applet or something. I wrote a Perl CGI program that does it, but I really would like a more transparent solution: my $q = new CGI; print $q->header; use LWP::Simple; use HTML::Tree; my $base = "http://localhost:8088"; my $request = $base; my $qurl = $q->param('url'); if (length($qurl) > 1) { warn "long $qurl"; $request = "$base$qurl"; } else { warn "short $qurl"; } my $content = get($request); my $tree = HTML::TreeBuilder->new_from_content($content); my @a = $tree->look_down('_tag' => 'a'); for my $a (@a) { my $url = $a->attr('href'); next if index($url, '#') > -1 ; $url = "?url=$url"; $a->attr(href => $url); } print $tree->as_HTML;

    Read the article

  • Long access time for static web page on virtual machine

    - by Karol
    My setup: Windows 7 on a workstation that I use at work (with domain) and at home (no domain), and a virtual machine (VMware) that runs Arch Linux (I will call it just "Linux") with its network interface in bridged mode. Linux serves web pages with Nginx. The IP address of the Linux machine is 192.168.0.16 and is added to C:\windows\system32\drivers\etc\hosts: 192.168.0.16 bridged bri The IP address of the Windows workstation is added to /etc/hosts: 192.168.0.10 workstation I can add more details to my setup description (I am not sure what is relevant). The question: often (but not always) it takes a long time for a web browser (Firefox) to open a static web page served by Linux. I am sure it is not a performance issue. To be more specific: it takes roughly 20 seconds to resolve(?) the address http://bridged in the web browser. Additionally, I have just installed the Samba service and noticed a similar problem, so it is not specific to the browser and HTTP. Initial access to Samba shares also takes a long time.

    Read the article

  • How to move Mdadm RAID drive (EBS based) to different AWS Instance

    - by Stanley
    We have a media-rich web application that is hosted on AWS. We have several web servers and we have an NFS server. On the NFS server (a Linux server) we have several EBS volumes that are mounted, and we've used mdadm to combine the mounted volumes into a single RAID volume. The web servers simply access the NFS storage through a mount point. Amazon has now let us know that they will be performing power maintenance on this server in a couple of days' time. Since all our media is on here, it would render our site unusable for the hours while Amazon is working on it. We want to try and prevent this downtime. I was thinking that we can prevent server downtime by setting up a new server temporarily, attaching the EBS drives (the RAID volume) to that server, and having our web servers point there during maintenance. This is a very high risk operation since it involves several terabytes of our production data. What would be the safe way to move our logical RAID drive (md0) over to a new Amazon instance? I was hoping I could start by building the new server, mounting the EBS volumes and assembling the RAID partition using mdadm --assemble --scan before unmounting from the existing instance, so that I could first test that everything works (thus having it mounted on two instances at the same time), but I don't believe that is possible with the way filesystems work. How do I move a Linux software RAID to a new machine? suggests a way to move drives, but isn't really a cloud-based question. Perhaps there are simpler ways to prevent system downtime with our solution being hosted on the cloud? I have considered taking an EBS snapshot, but that tries to replicate all the many terabytes of mounted storage, so this is not a practical solution. Any ideas?
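
    For reference, the usual outline for relocating a clean mdadm array between instances looks roughly like this (a sketch only: the device names and mount point are assumptions, and the array does have to be unmounted and stopped on the old instance first, since mounting it on two instances at once is not possible):

      # on the old (NFS) instance
      umount /mnt/media                 # assumed mount point
      mdadm --stop /dev/md0
      # detach the EBS volumes via the AWS console/API and attach them to the new instance
      # on the new instance
      mdadm --assemble --scan           # or list devices: mdadm --assemble /dev/md0 /dev/xvdf /dev/xvdg ...
      mount /dev/md0 /mnt/media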

    Read the article

  • How do I identify Blackberry / OWA users in my IIS logs?

    - by Quinten
    We just rolled out a Blackberry Express Server, and would like to make sure that all Blackberry devices that our users own are connecting SOLELY through the BES server. We are running Exchange 2010 SP1. I've read some links that discuss blocking BIS at the firewall level. Before doing that, however, I'd like to individually contact all users with Blackberries and make sure that they have a chance to switch to the BES server. I've sent a company-wide email, but unsurprisingly folks tend to tune these out until they are forced into action. Is there an easy way to identify the users with Blackberries by searching IIS logs, or perhaps using the Exchange Management Shell? Especially some automated way? I've tried searching for the Blackberry identifier, but it does not appear next to any user name, so it's not as helpful as it could be. Edit: to clarify, what I'm talking about is the fact that Blackberries can use OWA to download mail to the phone. We do not allow IMAP or POP access through our firewall so that's not a concern--just folks with Blackberries using Blackberry's hack to allow it to connect to Exchange without a BES server. As far as I know, Blackberries are the only popular phones that use this method to download mail.
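
    One way to mine the OWA/IIS logs for this is Microsoft's Log Parser. A hedged sketch (the log directory is an assumption, and the 'BlackBerry' match is a guess at the User-Agent string the devices actually send, so inspect a few raw log lines first):

      LogParser.exe -i:IISW3C "SELECT DISTINCT cs-username, cs(User-Agent) FROM C:\inetpub\logs\LogFiles\W3SVC1\u_ex*.log WHERE cs(User-Agent) LIKE '%BlackBerry%'"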

    Read the article

  • Unable to access newly created web site in IIS 7.5

    - by Animesh
    Configuration: 32-bit Windows 7 development machine with IIS 7.5. I created a new web site in IIS, called MVCHOST, to host only MVC sites. The physical path to this website is set to C:\inetpub\mvcroot. I created a new v4.0 pool called mvcpool for this purpose. I have given Modify rights to the IIS_WPG, IIS_IUSRS, and ASPNET accounts. I created this web site with a host header "mvchost" and port 80, in the hope of browsing MVC sites in the following way: mvchost/mvcapp1 mvchost/mvcapp2 instead of localhost/mvcapp1 localhost/mvcapp2 The only binding I set is the default one: http:*:80:mvchost. I have also copied the files iisstart.htm, web.config, welcome.png and the folder aspnet_client from wwwroot over to mvcroot. Now when I try to browse this site from IIS Manager, I get the following error: This webpage is not available If I leave out the host header and give it some port, say 99, I can access this website at localhost:99. What am I missing here? Why am I unable to access the web site at: http://mvchost/?
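
    One thing worth checking in a setup like this: a host header such as mvchost only works if the name actually resolves to the machine, via DNS or a hosts-file entry on every client that browses it. A minimal sketch (the loopback address assumes browsing from the development machine itself):

      # C:\Windows\System32\drivers\etc\hosts
      127.0.0.1    mvchost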

    Read the article

  • VirtualBox VM running web server not accessible via external IP

    - by mwigdahl
    I have a Windows 7 machine running VirtualBox with an Ubuntu guest. The guest has a Bitnami LAMP stack installed. I have the guest configured for Bridged networking, and I can access the guest web server just fine from other machines on my LAN using the guest's IP. I'm trying to configure port forwarding so that I can access the web server from outside my LAN. (The router is a 2WIRE model as I'm on ATT's UVerse). I've set up port forwarding for ports 80 and 443 to the guest's IP in a similar manner to how I had them set up for my previous, physical web server, which worked just fine. However, I cannot seem to access the new, virtual web server using my external IP on the forwarded port. I suspected Windows Firewall issues on the host, but disabling it didn't solve the issue. Anyone have advice on what I should try next? EDIT: I've now attempted disabling the firewall on the guest with sudo ufw disable -- that doesn't seem to help either. However, after checking the router's port forwarding in more detail I may see the problem. My VM is named "linux" and in the router's configuration pages it shows up inconsistently. Sometimes it reports with a valid LAN IP and other times it doesn't show up with any IP. Even when it shows the correct IP the router indicates that it is disconnected. Could this be an indication that the 2WIRE router doesn't play well with VirtualBox's bridged networking mode?

    Read the article

  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However - as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle restarting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of munin nodes in a text file - far from ideal for an environment with a high amount of churn, and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that: uses a database (MySQL, SQLite, etc.) for configuration and data storage exposes an API for adding/removing hosts and services Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
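
    If the sar route wins out, the collection side is small enough to bake into the instance image. A rough sketch (the sa1 path and data directory vary by distribution, so treat both as assumptions):

      # /etc/cron.d/sysstat - sample once a minute for the life of the instance
      * * * * * root /usr/lib/sysstat/sa1 1 1
      # before termination, copy the binary sar data somewhere durable, keyed by instance ID, e.g.:
      # aws s3 cp /var/log/sysstat/ s3://my-metrics-bucket/sar/$INSTANCE_ID/ --recursive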

    Read the article

  • Web browsing through SSH tunnel gets stuck/clogged

    - by endolith
    I use tools like Tunnelier to log into my home Tomato router through SSH, and then use it as a proxy for web browsing, tunnel for Remote Desktop/VNC, etc. Most days it works great, but some days every page I try to view gets stuck, like the tunnel is clogged. I load a web page and it seems to be loading, then stops, with the little loading icon spinning and nothing happening. I refresh the page, I reboot the router, I reboot the other computers on my home network and turn off any bandwidth-hogging services on them, I've turned on QoS on the router to prioritize SSH. I don't understand what's getting stuck. Rebooting or disconnecting/reconnecting the SSH tunnel improves responsiveness for a minute, but then it gets clogged again. It also seems to help if I don't do anything on the tunnel for a few minutes, then it will be responsive for a bit and then get clogged again. Trying to open a terminal console from Tunnelier is also unresponsive, so it's not just a web browsing problem. Likewise, connecting to http://192.168.1.1 in the browser (to the router's web config through its own tunnel) is also slow/laggy/halting. The realtime bandwidth reported by the router is nowhere near my DSL connection's limits, though it does show big spikes during the laggy times, and the connection is responsive when it shows low bandwidths. How do I troubleshoot something like this?

    Read the article

  • SQL Server 2008 login problem with ASP.NET application: Failed to open the explicitly specified database

    - by eulerfx
    I am running SQL Server 2008 Express Edition on Windows Server 2008 with an ASP.NET application which must access the server. The ASP.NET application is associated with an application pool that runs on the NetworkService account. This account in turn has a Login and User record on SQL Server in the required database. When I attempt to run the ASP.NET website I get a blank page, and in the error log I see this information event record: Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. Reason: Failed to open the explicitly specified database. [CLIENT: myLocalMachine] The connection string has Trusted_Connection=True; and the required database specified. When I explicitly specify the user name and password I get another login error stating the password is incorrect, even though the same un/pw combination works through SQL Server Management Studio. The NETWORK SERVICE account seems to have all the required privileges for the database. Also, I made a test ASP.NET website project which does a simple select from a table in that database, and using the same config file I am not getting the error and it seems to work. Is it something to do with trust levels, then? The original ASP.NET web app references various DLLs, including open-source libraries. Also, the application does not seem to be able to write to the event log itself, throwing a security exception, even though everything in the config files, including machine.config, states the app is in full trust.
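
    For what it's worth, "Failed to open the explicitly specified database" usually indicates that the login exists at the server level but cannot map to a user inside the specific database named in the connection string (or that the name in the connection string differs from the database where the user was created). A hedged T-SQL sketch of the mapping, with a placeholder database name:

      USE MyAspNetDb;  -- placeholder: the database named in the connection string
      CREATE USER [NT AUTHORITY\NETWORK SERVICE] FOR LOGIN [NT AUTHORITY\NETWORK SERVICE];
      EXEC sp_addrolemember 'db_datareader', 'NT AUTHORITY\NETWORK SERVICE';
      EXEC sp_addrolemember 'db_datawriter', 'NT AUTHORITY\NETWORK SERVICE';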

    Read the article

  • Using Computer name in URL causes issues when connecting to Web Services

    - by AWinters
    The set of applications I work on all access the same 8 or so web services that we have. These services and applications all reside on the same box and all use the computer name when trying to connect to the web service. For example: if I have a web service called MapDataService and an application that accesses it, it would access it by the URL: http://COMPUTERNAME/MapDataService/MapDataService.asmx. This works in most of the applications that access the web service. However, we have several applications that, when using the computer name in the URL, will not get data returned from the service (actually a 503 is returned). In order to get it to work, the IP address of the system needs to be used in place of the COMPUTERNAME. This strikes me as very odd considering, as I mentioned before, all applications and services are on the same box and most other applications use the COMPUTERNAME with no issues. Can someone give me some insight as to what could be causing this? We have no access to IIS logs, and what logs we did get (this is on a customer site) are not very useful.

    Read the article

  • Web based KVM management for Ubuntu

    - by Tim
    We've got a single Ubuntu 9.10 root server on which we want to run multiple KVM virtual machines. To administer these virtual machines I'd like a web based KVM management tool, but I don't know which one to choose from the list of tools mentioned on linux-kvm.org. I've used virsh & virt-manager on my desktop, but would like a web interface for the server. I tested ConVirt on my desktop, but it failed to pickup KVM machines from virsh / virt-manager, and I could not get KVM virtual machine import to work (only Xen). oVirt looks good, but I can't find out if and how I can install it on Ubuntu 9.10.. (And I'd really rather not waste another few days on testing stuff that might not work in the end.) Can anyone recommend any good web based KVM management tools that are easy to install on Ubuntu 9.10? I'm looking for something that will also allow me to run other services like apache and postgresql besides hosting virtual machines, so preferably fairly lightweight & no dedicated OS installs. We don't need any professional clustering / migration or anything, just something that will let us create, start, inspect, administer & stop virtual machines from a web page. Best regards, Tim Update: Anyone have any suggestions? It's awfully quiet here..

    Read the article

  • Hyper-v and sql server connections for web apps

    - by Rick Ratayczak
    I have a physical machine running Win8, and two VMs in Hyper-V client: one web server, one SQL Server. The web server works fantastically. The SQL VM is the one that is giving me the problem. I can connect to it with Server Explorer in Visual Studio or Management Studio just fine, and it's blazing fast. The problem happens when I use the same connection string I am using in Visual Studio Server Explorer in the web.config for an app. data source=VMSQL1;initial catalog=OtherShell;persist security info=True;user id=OtherShell;password=****;network library=dbmssocn;MultipleActiveResultSets=True;App=EntityFramework I made sure it was also using TCP/IP, but it doesn't connect with or without the network library part of the connection string. A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) This has been driving me batty for the last two days, any ideas? It fails from the web VM too, but works in Management Studio with the same connection string.

    Read the article

  • Several web applications on a single port

    - by Nevermind
    We're developing an online browser-based game. The game itself is a plugin in the web page that uses a TCP connection to a game server, and also sends HTTP requests to a "content server" web application. This makes 3 servers total: the site itself, the game server and the content server. The site and content server are IIS web applications; the game server is a custom application communicating over TCP with a proprietary protocol. While the game is in the beta stage, all these servers are physically hosted on a single machine and distinguished by ports. For example, the website is game.example.com:80, the game server is game.example.com:34285 and the content server is game.example.com:50000. This works OK most of the time, but some of our players have ports other than 80 closed. Is there any way to make all these applications work through port 80, while still having them on one physical server? Maybe using different sub-domains? There's probably a way to make IIS forward requests to different web applications based on URL alone, but that doesn't help with the game server. Edit: Server is Windows Server 2008, IIS 7

    Read the article

  • Controller Action Methods with different signatures

    - by Narsil
    I am trying to get my URLs in files/id format. I am guessing I should have two Index methods in my controller, one with a parameter and one with not. But I get this error message in browser below. Anyway here is my controller methods: public ActionResult Index() { return Content("Index "); } // // GET: /Files/5 public ActionResult Index(int id) { File file = fileRepository.GetFile(id); if (file == null) return Content("Not Found"); else return Content(file.FileID.ToString()); } Error: Server Error in '/' Application. The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Reflection.AmbiguousMatchException: The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [AmbiguousMatchException: The current request for action 'Index' on controller type 'FilesController' is ambiguous between the following action methods: System.Web.Mvc.ActionResult Index() on type FileHosting.Controllers.FilesController System.Web.Mvc.ActionResult Index(Int32) on type FileHosting.Controllers.FilesController] System.Web.Mvc.ActionMethodSelector.FindActionMethod(ControllerContext controllerContext, String actionName) +396292 System.Web.Mvc.ReflectedControllerDescriptor.FindAction(ControllerContext controllerContext, String actionName) +62 System.Web.Mvc.ControllerActionInvoker.FindAction(ControllerContext controllerContext, ControllerDescriptor controllerDescriptor, String actionName) +13 System.Web.Mvc.ControllerActionInvoker.InvokeAction(ControllerContext controllerContext, String actionName) +99 System.Web.Mvc.Controller.ExecuteCore() +105 System.Web.Mvc.ControllerBase.Execute(RequestContext requestContext) +39 System.Web.Mvc.ControllerBase.System.Web.Mvc.IController.Execute(RequestContext requestContext) +7 System.Web.Mvc.<c_DisplayClass8.b_4() +34 System.Web.Mvc.Async.<c_DisplayClass1.b_0() +21 System.Web.Mvc.Async.<c__DisplayClass81.<BeginSynchronous>b__7(IAsyncResult _) +12 System.Web.Mvc.Async.WrappedAsyncResult1.End() +59 System.Web.Mvc.MvcHandler.EndProcessRequest(IAsyncResult asyncResult) +44 System.Web.Mvc.MvcHandler.System.Web.IHttpAsyncHandler.EndProcessRequest(IAsyncResult result) +7 System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +8677678 System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +155 Updated Code: public ActionResult Index(int? id) { if (id.HasValue) { File file = fileRepository.GetFile(id.Value); if (file == null) return Content("Not Found"); else return Content(file.FileID.ToString()); } else return Content("Index"); } It's still not the thing I want. 
    The URLs still only work in the files?id=3 format; I want files/3. Routes from global.asax: routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); //files/3 //it's the one I wrote routes.MapRoute("Files", "{controller}/{id}", new { controller = "Files", action = "Index", id = UrlParameter.Optional} ); I tried adding a new route after reading Jeff's post but I can't get it working. It still works with files?id=2 though.
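
    For reference, the route-based fix for this usually hinges on registration order plus a constraint. A sketch against the same controller and action names (the \d+ constraint is an assumption so the specific route does not swallow other controllers):

      // global.asax - register the specific Files route BEFORE the Default route
      routes.MapRoute(
          "Files",
          "Files/{id}",
          new { controller = "Files", action = "Index" },
          new { id = @"\d+" });   // only match numeric ids

      routes.MapRoute(
          "Default",
          "{controller}/{action}/{id}",
          new { controller = "Home", action = "Index", id = UrlParameter.Optional });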

    Read the article

  • Error deploying web application on Weblogic 10.3 using maven 2: "Can't find wsdl /wsdls/wsat.wsdl"

    - by Marcos Carceles
    Hi, I'm using maven for deploying a web application in my Weblogic 10.3 server remotely. I created my pom file based on the indication on this previous question: Using maven as build tool for Weblogic 10.3 My pom.xml file is: <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.balfourbeatty.horizon.maven.test</groupId> <artifactId>maven-test-webapp</artifactId> <packaging>war</packaging> <version>1.0-SNAPSHOT</version> <name>maven-test-webapp Maven Webapp</name> <url>http://maven.apache.org</url> <properties> <weblogic.version>10.3</weblogic.version> </properties> <build> <plugins> <plugin> <groupId>org.apache.myfaces.trinidadbuild</groupId> <artifactId>maven-jdev-plugin</artifactId> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>weblogic-maven-plugin</artifactId> <version>2.9.1</version> <configuration> <name>maven-test-webapp</name> <adminServerHostName>******************</adminServerHostName> <adminServerPort>****</adminServerPort> <adminServerProtocol>t3</adminServerProtocol> <userId>******</userId> <password>*****</password> <upload>true</upload> <remote>true</remote> <verbose>true</verbose> <debug>true</debug> <targetNames>WLS_Spaces</targetNames> <noExit>true</noExit> <projectPackaging>war</projectPackaging> </configuration> <dependencies> <dependency> <groupId>com.sun</groupId> <artifactId>tools</artifactId> <version>1.6</version> <scope>system</scope> <systemPath>${java.home}/../lib/tools.jar</systemPath> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>weblogic</artifactId> <version>${weblogic.version}</version> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>webservices</artifactId> <version>${weblogic.version}</version> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.full</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.i18n</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.rmi.client</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>javax.enterprise.deploy</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>webserviceclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.wls</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.identity</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wlclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.transaction</artifactId> 
<version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.classloaders</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wljmsclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.management.core</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wls-api</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.descriptor</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.logging</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.socket.api</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.security.digest</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.workmanager</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.weblogic.lifecycle</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.utils.wrapper</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>wlsafclient</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.management.jmx</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>com.bea.core.descriptor.wl</artifactId> <version>${weblogic.version}</version> <scope>provided</scope> </dependency> <dependency> <groupId>weblogic</groupId> <artifactId>javax.mail</artifactId> <version>10.3</version> </dependency> </dependencies> </plugin> </plugins> <finalName>maven-test-webapp</finalName> </build> <dependencies> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>3.8.1</version> <scope>test</scope> </dependency> <dependency> <groupId>org.codehaus.mojo</groupId> <artifactId>weblogic-maven-plugin</artifactId> <version>2.9.1</version> </dependency> </dependencies> <distributionManagement> <!-- use the following if you're not using a snapshot version. --> <repository> <id>internal</id> <name>Archiva Managed Internal Repository</name> <url>http://localhost:8180/archiva/repository/internal</url> </repository> <!-- use the following if you ARE using a snapshot version. --> <snapshotRepository> <id>snapshots</id> <name>Archiva Managed Snapshot Repository</name> <url>http://localhost:8180/archiva/repository/snapshots</url> </snapshotRepository> </distributionManagement> </project> All the dependencies are already resolved properly, as they are in the local archiva repository. 
The application does not contain any web-service, being just a "hello world" application. /index.jsp /WEB-INF/web.xml The error I get is: [BasicOperation.execute():423] : Initiating deploy operation for app, maven-test-webapp, on targets: [BasicOperation.execute():425] : WLS_Spaces Task 14 initiated: [Deployer:149026]deploy application maven-test-webapp on WLS_Spaces. dumping Exception stack Task 14 failed: [Deployer:149026]deploy application maven-test-webapp on WLS_Spaces. Target state: deploy failed on Server WLS_Spaces weblogic.wsee.ws.WsException: When processing WebService module 'maven-test-webapp.war'. Can't find wsdl /wsdls/wsat.wsdl at weblogic.wsee.deploy.WSEEWebModule.loadWsdlDefinitions(WSEEWebModule.java:159) at weblogic.wsee.deploy.WSEEModule.loadWsdl(WSEEModule.java:334) at weblogic.wsee.deploy.WSEEAnnotationProcessor.isWsdlHasPolicy(WSEEAnnotationProcessor.java:312) at weblogic.wsee.deploy.WSEEAnnotationProcessor.process(WSEEAnnotationProcessor.java:91) at weblogic.wsee.deploy.WSEEAnnotationProcessor.process(WSEEAnnotationProcessor.java:51) at weblogic.wsee.deploy.WSEEModule.prepare(WSEEModule.java:102) at weblogic.wsee.deploy.ServletDeployListener.contextPrepared(ServletDeployListener.java:26) at weblogic.servlet.internal.EventsManager$FireContextPreparedAction.run(EventsManager.java:503) at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321) at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:121) at weblogic.servlet.internal.EventsManager.notifyContextPreparedEvent(EventsManager.java:162) at weblogic.servlet.internal.WebAppServletContext.initContextListeners(WebAppServletContext.java:1782) at weblogic.servlet.internal.WebAppServletContext.prepare(WebAppServletContext.java:1136) at weblogic.servlet.internal.HttpServer.doPostContextInit(HttpServer.java:449) at weblogic.servlet.internal.HttpServer.loadWebApp(HttpServer.java:424) at weblogic.servlet.internal.WebAppModule.registerWebApp(WebAppModule.java:924) at weblogic.servlet.internal.WebAppModule.prepare(WebAppModule.java:356) at weblogic.application.internal.flow.ScopedModuleDriver.prepare(ScopedModuleDriver.java:176) at weblogic.application.internal.flow.ModuleListenerInvoker.prepare(ModuleListenerInvoker.java:199) at weblogic.application.internal.flow.DeploymentCallbackFlow$1.next(DeploymentCallbackFlow.java:391) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:83) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:59) at weblogic.application.internal.flow.DeploymentCallbackFlow.prepare(DeploymentCallbackFlow.java:43) at weblogic.application.internal.BaseDeployment$1.next(BaseDeployment.java:1221) at weblogic.application.utils.StateMachineDriver.nextState(StateMachineDriver.java:83) at weblogic.application.internal.BaseDeployment.prepare(BaseDeployment.java:367) at weblogic.application.internal.SingleModuleDeployment.prepare(SingleModuleDeployment.java:39) at weblogic.application.internal.DeploymentStateChecker.prepare(DeploymentStateChecker.java:154) at weblogic.deploy.internal.targetserver.AppContainerInvoker.prepare(AppContainerInvoker.java:60) at weblogic.deploy.internal.targetserver.operations.ActivateOperation.createAndPrepareContainer(ActivateOperation.java:207) at weblogic.deploy.internal.targetserver.operations.ActivateOperation.doPrepare(ActivateOperation.java:98) at weblogic.deploy.internal.targetserver.operations.AbstractOperation.prepare(AbstractOperation.java:217) at 
weblogic.deploy.internal.targetserver.DeploymentManager.handleDeploymentPrepare(DeploymentManager.java:747) at weblogic.deploy.internal.targetserver.DeploymentManager.prepareDeploymentList(DeploymentManager.java:1216) at weblogic.deploy.internal.targetserver.DeploymentManager.handlePrepare(DeploymentManager.java:250) at weblogic.deploy.internal.targetserver.DeploymentServiceDispatcher.prepare(DeploymentServiceDispatcher.java:159) at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.doPrepareCallback(DeploymentReceiverCallbackDeliverer.java:157) at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer.access$000(DeploymentReceiverCallbackDeliverer.java:12) at weblogic.deploy.service.internal.targetserver.DeploymentReceiverCallbackDeliverer$1.run(DeploymentReceiverCallbackDeliverer.java:45) at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:516) at weblogic.work.ExecuteThread.execute(ExecuteThread.java:201) at weblogic.work.ExecuteThread.run(ExecuteThread.java:173) Does anyone have any idea on what could the problem be? Many thanks!

    Read the article

  • CEF3 application crash: Fault Module KERNELBASE.DLL

    - by Asadullah Laghari
    I am using CEF3 in Delphi XE3. The application crashes as soon as I launch it and gives the following details: Problem Event Name: APPCRASH Application Name: Project4.exe Application Version: 1.0.0.0 Application Timestamp: 50c4062d Fault Module Name: KERNELBASE.dll Fault Module Version: 6.2.9200.16384 Fault Module Timestamp: 5010acfa Exception Code: 0eedfade Exception Offset: 0001277c OS Version: 6.2.9200.2.0.0.256.48 Locale ID: 1033 Can anyone help me out, please?

    Read the article
