Search Results

Search found 15439 results on 618 pages for 'wls configuration'.

Page 337/618

  • What could prevent one Amazon EC2 instance from pinging another instance's Private IP?

    - by ks78
    I have multiple Amazon EC2 instances which need to communicate using private IPs. However, so far I've been unable to ping one instance's private IP from another instance. I can ping external addresses, such as their Elastic IPs and other sites (yahoo, google, etc), so it seems there's nothing wrong with the instances' network configuration. Also, they are all in the same zone, so that shouldn't be an issue. Does anyone have any idea what I could be doing wrong? Could this be related to the Security Group settings?
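    A common cause is that the instances' security group does not allow inbound ICMP from other members of the same group. A hedged sketch of the kind of rule that would allow intra-group ping, assuming the AWS CLI and a hypothetical group name:

      # allow all ICMP (echo request/reply included) between members of the same
      # security group -- the group name "my-app-sg" is a placeholder
      aws ec2 authorize-security-group-ingress \
          --group-name my-app-sg \
          --protocol icmp --port -1 \
          --source-group my-app-sg

    The same rule can be added in the AWS console by allowing ICMP with the security group itself as the source.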

    Read the article

  • Dell Latitude will not boot after fresh Ubuntu 12.10 installation. Black screen

    - by James
    I have an old Dell Latitude D610 that I've just installed Ubuntu 12.10 onto, and now unfortunately it will not complete booting up. The screen flashes purple but then fades out and dies. I'm fairly sure there is an issue with graphics drivers, as I had to turn on "nomodeset" in the options when installing off a DVD to see the installer, but I'm very new to Linux and don't know terribly much at all apart from what I've read on the net. I have been able to hold down Shift and bring up GRUB, and entered into recovery mode. When I enter the low-graphics mode, the X server log file says it can detect a screen but found none with a usable configuration, then goes on to say a fatal server error has occurred and no screens have been found at all, telling me to check the log file at "/var/log/Xorg.0.log". I have no idea how to check this specific log file! Attempting to actually go further than this in low-graphics mode and restart the display simply leaves the screen full of artefacts. It is all very strange and annoying, because the machine did once boot completely, seemingly at random, but upon restarting the issue reoccurred.
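    A sketch of one common workaround (not a guaranteed fix): from the recovery-mode root shell you can read the log the error mentions, and make the nomodeset option permanent so it also applies to normal boots:

      # the X log the error refers to (note the capital X)
      less /var/log/Xorg.0.log

      # make nomodeset permanent
      sudo nano /etc/default/grub
      #   change: GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
      #   to:     GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
      sudo update-grub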

    Read the article

  • Unable to connect to the Report Server

    - by pghcpa
    Windows 7 Professional, SQL Server 2008 R2 Express, Reporting Services Configuration Manager. When I launch it, it shows the correct server name, but the report server instance is blank. When I press Find I get: "Unable to connect to the Report Server". This is my development workstation, so no IIS is installed. It seems to work fine on XP. SSMS works fine - no issues. I tried uninstalling SQL Server completely, rebooting, and reinstalling a fresh download. Same result. I've googled every article I can find - nothing. Can anyone point me in the right direction in case you've come across this yourself? Thanks.

    Read the article

  • Oracle RDBMS Server 11gR2 Pre-Install RPM for Oracle Linux 6 has been released

    - by Lenz Grimmer
    Now that the certification of Oracle Database 11g R2 with Oracle Linux 6 and the Unbreakable Enterprise Kernel has been announced, we are glad to announce the availability of oracle-rdbms-server-11gR2-preinstall, the Oracle RDBMS Server 11gR2 Pre-install RPM package (formerly known as oracle-validated). Designed specifically for Oracle Linux 6, this RPM aids in the installation of the Oracle Database. In order to install Oracle Database 11g R2 on Oracle Linux 6, your system needs to meet a few prerequisites, as outlined in the Linux Installation Guides. Using the Oracle RDBMS Server 11gR2 Pre-install RPM, which is now available from the Unbreakable Linux Network or via the Oracle public yum repository, you can complete most of the pre-installation configuration tasks. The pre-install package is available for x86_64 only. Specifically, the package:
    - Causes the download and installation of various software packages and specific versions needed for database installation, with package dependencies resolved via yum
    - Creates the user oracle and the groups oinstall and dba, which are the defaults used during database installation
    - Modifies kernel parameters in /etc/sysctl.conf to change settings for shared memory, semaphores, the maximum number of file descriptors, and so on
    - Sets hard and soft shell resource limits in /etc/security/limits.conf, such as the number of open files, the number of processes, and stack size, to the minimum required based on the Oracle Database 11g Release 2 Server installation requirements
    - Sets numa=off in the kernel boot parameters for x86_64 machines
    Please see the release announcement for further details and instructions. Also take a look at Ginny Henningsen's "How I Simplified Oracle Database Installation on Oracle Linux" article on the Oracle Technology Network for a general description of how to perform the installation of the Oracle Database on Oracle Linux. While the article refers to Oracle Linux 5 and the former "oracle-validated" package, the steps for Oracle Linux 6 are still very similar (we're looking into updating that article for Oracle Linux 6).
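    A sketch of the single yum step, assuming the system is already registered with ULN or pointed at the Oracle public yum repository:

      yum install oracle-rdbms-server-11gR2-preinstall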

    Read the article

  • Webmin BIND issue - error when I try to start BIND

    - by Arvind
    I am setting up a domain = mydomain.com with 2 nameservers, ns1.mydomain.com and ns2.mydomain.com, in Webmin - BIND. For this, I have registered 2 child nameservers at my domain registrar, and in Webmin - BIND I have set up a new zone for this domain. In this zone, I have specified 2 nameserver records - one each for ns1 and ns2. Also, I have defined 2 address records - one each for ns1.mydomain.com -> IP address #1 and for ns2.mydomain.com -> IP address #2. However when I try to start BIND in Webmin, I get the following error:
      Failed to start BIND : Starting named:
      Error in named configuration:
      zone mydomain.com/IN: has no NS records
      zone mydomain.com/IN: not loaded due to errors.
      _default/mydomain.com/IN: bad zone
      [FAILED]
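    The "has no NS records" error usually means the NS records never made it into the zone file Webmin wrote, or the names lack their trailing dots. A minimal zone-file sketch with placeholder IPs (192.0.2.x is the documentation range):

      $TTL 86400
      @       IN  SOA ns1.mydomain.com. hostmaster.mydomain.com. (
                      2012010101 3600 900 604800 86400 )
              IN  NS  ns1.mydomain.com.
              IN  NS  ns2.mydomain.com.
      ns1     IN  A   192.0.2.1
      ns2     IN  A   192.0.2.2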

    Read the article

  • Creating reverse DNS entries which resolve [closed]

    - by Tiffany Walker
    Possible Duplicate: Reverse DNS - how to correctly configure for SMTP delivery
    I ran a DNS check and ended up with the following error:
      FAIL: Found reverse DNS entries which don't resolve: IP-IP-IP-IP.HOST.DOMAIN.TLD
    All IPs' reverse DNS entries should resolve back to the IP address (MX record's name -> IP -> IP reverse -> IP). Many mail servers are configured to reject e-mails from IPs with inconsistent reverse DNS configuration. How do I properly configure this so that the reverse DNS entry resolves back to the IP?
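    The check is simply that the forward and reverse lookups agree. A hedged sketch of how to verify it (hostname and IP are placeholders); the PTR record itself usually has to be set by whoever owns the IP block, i.e. your host or ISP:

      dig +short -x 192.0.2.10          # should return mail.example.com.
      dig +short A mail.example.com     # should return 192.0.2.10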

    Read the article

  • Connecting MS SQL using freetds and unixodbc: isql - no default driver specified

    - by Dejan
    I am trying to connect to a MS SQL database using FreeTDS and unixODBC. I have read various guides on how to do it, but none works for me. When I try to connect to the database using the isql tool, I get the following error:
      $ isql -v TS username password
      [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
      [ISQL]ERROR: Could not SQLConnect
    Has anybody already successfully established a connection to a MS SQL database using FreeTDS and unixODBC on Ubuntu 12.04? I would really appreciate some help. Below is the procedure I used to configure FreeTDS and unixODBC. Thanks for your help in advance!
    Procedure: First, I installed the following packages
      sudo apt-get install unixodbc unixodbc-dev freetds-dev tdsodbc
    and configured FreeTDS as follows:
      --- /etc/freetds/freetds.conf ---
      [TS]
      host = SERVER
      port = 1433
      tds version = 7.0
      client charset = UTF-8
    Using the tsql tool I can successfully connect to the database by executing
      tsql -S TS -U username -P password
    As I need an ODBC connection, I configured odbcinst.ini as follows:
      --- /etc/odbcinst.ini ---
      [FreeTDS]
      Description = FreeTDS
      Driver = /usr/lib/x86_64-linux-gnu/odbc/libtdsodbc.so
      Setup = /usr/lib/x86_64-linux-gnu/odbc/libtdsS.so
      FileUsage = 1
      CPTimeout =
      CPResuse =
      client charset = utf-8
    and odbc.ini as follows:
      --- /etc/odbc.ini ---
      [TS]
      Description = "test"
      Driver = FreeTDS
      Servername = SERVER
      Server = SERVER
      Port = 1433
      Database = DBNAME
      Trace = No
    Trying to connect to the database using the isql tool with this configuration results in the following error:
      $ isql -v TS username password
      [IM002][unixODBC][Driver Manager]Data source name not found, and no default driver specified
      [ISQL]ERROR: Could not SQLConnect
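    A hedged first step when unixODBC reports "no default driver": confirm it is actually reading the files you edited and that the driver and DSN names match, using the stock unixODBC tools:

      odbcinst -j      # shows which odbc.ini / odbcinst.ini unixODBC is reading
      odbcinst -q -d   # lists installed drivers -- should include [FreeTDS]
      odbcinst -q -s   # lists configured DSNs  -- should include [TS]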

    Read the article

  • IIS SmtpSVC - Adding remote domains on the fly

    - by Andrej Pintar
    Since I am using SMTPSVC from IIS to send all mail out, I have noticed some domains that reject mail over bare LFs and similar day-to-day SMTP problems, so I mostly re-route those domains through smarthosts. Now I also read that on IIS 7 (and most versions), when you add a remote domain you must restart SMTPSVC for it to take effect. I also enabled metabase editing, hoping that would let me add remote domains on the fly, but it's not working. Should I use another SMTP server (hMailServer or similar) to route domains via a smarthost? We used a smarthost configuration before, but the ISP smarthost frequently lands on RBL blacklists, so mail comes back. Direct sending via DNS MX is more work because of the troublesome domains, so now I have even more work monitoring the SMTP logs. Thank you in advance.

    Read the article

  • Workflow: Operate Zones

    - by Owen Allen
    The Operate Zones workflow is another of the workflow documents that we introduced recently. It follows naturally after the Deploy Oracle Solaris 11 Zones workflow that I talked about last week, so I thought I'd talk about it next. This workflow is less linear than the zone deployment workflow; it's built around a diagram whose left side shows you the prerequisites for zone operation: you have to deploy libraries and deploy either Oracle Solaris 10 or 11 zones - whichever type you want to manage using this workflow. Once you have the zones deployed, you can begin to operate them. If you want to associate resources with the global zone, the workflow directs you to the Exploring Your Server Pools how-to, which talks about adding global zones to server pools and associating libraries and network resources with them. Otherwise, it directs you to a set of how-tos about zone management: Managing the Configuration of a Zone, which explains how to add storage, edit zone attributes, and connect zones to networks; Lifecycle Management of Zones, which explains how to halt, shut down, boot, reboot, or delete a zone; and Migrating Zones, which explains how to move a zone to a new global zone in the same server pool. Finally, it directs you to the Update Oracle Solaris workflow when you want to update your zones, and to the Monitor and Manage Incidents workflow to learn more about monitoring your assets.

    Read the article

  • Git version control with multiple users

    - by ignatius
    Hello, I am a little bit lost with this issue; let me explain my problem. I want to set up a git repository; three or four users will contribute, so they need to download the code and shall be able to upload their changes to the server or update their branch with the latest modifications. So I set up a Linux machine, installed git, set up the repository, then added the users in order to enable access through SSH. Now my question is: what's next? The git documentation is a little bit confusing, i.e. when I try to clone the repository from a dummy user account I get:
      xxx@xxx-desktop:~/Documentos/git/test$ git clone -v ssh://[email protected]/pub.git
      Initialized empty Git repository in /home/xxx/Documentos/git/test/pub/.git/
      [email protected]'s password:
      fatal: '/pub.git' does not appear to be a git repository
      fatal: The remote end hung up unexpectedly
    Is that a problem of privileges? Do I need any special configuration? I want to avoid using git-daemon or gitosis. Sorry, maybe my question sounds silly; git is powerful, but, I admit, not so user friendly. Thanks, Br
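    A hedged sketch of the usual next step: the server-side repository should be a bare repository, and the clone URL needs the full path to it (the "does not appear to be a git repository" error means git looked for /pub.git at the filesystem root). Paths and host below are placeholders:

      # on the server, as the shared git user
      mkdir -p /home/git/pub.git
      cd /home/git/pub.git && git init --bare

      # on each developer's machine, clone with the full path to the bare repo
      git clone ssh://git@server.example.com/home/git/pub.git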

    Read the article

  • How to make ClearType look OK even with large fonts

    - by Sorin Sbarnea
    I discovered that with large/huge fonts ClearType has a negative impact. Just take a look at the http://patterns.dataincubator.org/book/ page (check the title with huge fonts versus the normal text). If you are on Windows 7 you can use Win+Plus/Minus to zoom in/out. I'm looking for a configuration that would make both small fonts and large fonts look well. The system is Windows 7, but I suppose you could replicate the problem on Vista and even XP if you activate ClearType. Current results:
      ClearType on and tuned - small fonts look good and large ones look bad
      ClearType off - small fonts look bad and large ones look OK (smoothed)

    Read the article

  • Available instance types for Marketplace AMIs

    - by Christian
    I based my autoscaling AMIs on the Turnkey Linux nginx AMI from the Marketplace. I am now unable to select any of the newer-generation instance types; for instance, my autoscaling uses the m3.large type but I'd really like it to use the c3.xlarge type, yet every time I try to create a c3.xlarge instance with my AMI I get the error: "The instance configuration for this AWS Marketplace product is not supported." My question is: can I override this? I'm not using TKL support or any of their services, just the AMI. If I can't override it, do I have any other options besides creating a brand new AMI from scratch to use?

    Read the article

  • Postfix relay to multiple servers and multiple users

    - by Frankie
    I currently have Postfix configured so that all users get relayed by the local machine, with the exception of one user that gets relayed via Gmail. To that extent I've added the following configuration:
      /etc/postfix/main.cf
      # default options to allow relay via gmail
      smtp_use_tls=yes
      smtp_sasl_auth_enable = yes
      smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
      smtp_sasl_security_options = noanonymous
      # map the relayhosts according to user
      sender_dependent_relayhost_maps = hash:/etc/postfix/relayhost_maps
      # keep a list of user and passwords
      smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd

      /etc/postfix/relayhost_maps
      user-one@localhost [smtp.gmail.com]:587

      /etc/postfix/sasl_passwd
      [smtp.gmail.com]:587 [email protected]:user-one-pass-at-google
    I know I can map multiple users to multiple passwords using smtp_sasl_password_maps, but that would mean that all relay would be done by Gmail, where I specifically want all relay to be done by the localhost with the exception of some users. Now I would like to have a user-two@localhost (etc.) relay via Google with their own respective passwords. Is that possible?
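    Postfix can look up SASL credentials per sender once sender-dependent authentication is enabled, which keeps localhost as the default relay and only redirects the listed senders. A hedged sketch (addresses and passwords are placeholders):

      # /etc/postfix/main.cf
      smtp_sender_dependent_authentication = yes

      # /etc/postfix/relayhost_maps
      user-two@localhost    [smtp.gmail.com]:587

      # /etc/postfix/sasl_passwd (keyed by sender address, not by relayhost)
      user-one@localhost    user.one@example.com:user-one-pass
      user-two@localhost    user.two@example.com:user-two-pass

      # rebuild the hash maps and reload
      postmap /etc/postfix/relayhost_maps /etc/postfix/sasl_passwd
      postfix reload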

    Read the article

  • Logging timed-out requests in Apache 2.x

    - by m3rLinEz
    Hello, I am migrating some applications from Apache 1.3 to 2.2. We used to run some tests where an attacker opens an HTTP connection to our server and does nothing. Apache 1.3 would log the 408 code, for example:
      126.1.86.85 - - [01/Dec/2010:06:26:19 +0000] "-" 408 - "-" 0
      126.1.86.85 - - [01/Dec/2010:06:26:19 +0000] "-" 408 - "-" 0
    But with Apache 2.2, nothing is logged to the log file. I ran the same test by using netcat to open the connection:
      $ nc IP_victim PORT_victim
      $ nc 10.42.37.3 80
    I would like to have Apache 2.2 log the same 408 code to the log file, so that we would know of attempted DoS attacks from the outside. Do I need any more configuration in Apache 2 to enable this? I have tried some different settings such as LogLevel debug, Timeout 30, and RequestReadTimeout header=10 body=30. Thanks.
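    A hedged sketch of the mod_reqtimeout route in Apache 2.2.15+ (the module ships with the server but is not always loaded): with it, slow or idle clients that have sent at least part of a request are aborted with a 408 that shows up in the logs, though a connection that never sends a single byte may still go unlogged:

      # httpd.conf
      LoadModule reqtimeout_module modules/mod_reqtimeout.so

      # give clients 10-20s to send headers, 30s for the body,
      # and require at least 500 bytes/s once data starts flowing
      RequestReadTimeout header=10-20,MinRate=500 body=30,MinRate=500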

    Read the article

  • Hosting custom domains with IP address flexibility

    - by F21
    I am building a small service where users will be assigned a subdomain such as: myusername.myservice.com, anotheruser.myservice.com. I know that I can set up a wildcard vhost and, using a regex in the configuration, serve the files like so: myusername.myservice.com ===> /var/www/myusername, anotherusername.myservice.com ===> /var/www/anotherusername. The problem is that I would also like to allow users to alias their own domain names to their service. I understand that once a user adds their domain via my web interface, I can easily create a vhost for the domain in nginx and then reload the webserver. However, I would prefer NOT to have users add an A record pointing at my webserver's IP address, as I would prefer to keep things flexible (for when we upgrade our infrastructure to something more complex in order to scale). What is the best way to achieve this?
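    A hedged nginx sketch of both cases (names and paths are placeholders): a regex server_name that maps each subdomain to its directory, plus a map for customer-owned domains so adding an alias is a one-line change rather than a new vhost and reload:

      # subdomains: myusername.myservice.com -> /var/www/myusername
      server {
          listen 80;
          server_name ~^(?<user>[a-z0-9-]+)\.myservice\.com$;
          root /var/www/$user;
      }

      # customer-owned domains, looked up from a simple table
      map $host $site_root {
          default               /var/www/default;
          customer-example.com  /var/www/myusername;
      }
      server {
          listen 80 default_server;
          root $site_root;
      }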

    Read the article

  • How can I tell whether an interrupted rm -r removed any files?

    - by Jake Petroules
    I installed sshfs on a Linux box and then mounted my Mac home directory. In the middle of troubleshooting a configuration issue, I did an ls -l on the mount directory (as a normal user), receiving:
      total 0
      d????????? ? ? ? ? ? sl
    I then ran sudo rm -r on that directory but pressed Ctrl+C to terminate it immediately, before it looked like the command did anything. I notice no files missing, but I want to be sure - is there a way I can somehow inspect the filesystem log on my Mac to see whether any files were actually removed?

    Read the article

  • Mandatory Profiles on a Server 2003 TS Box

    - by Chloe
    I have a Windows Server 2003 box which will be acting as a terminal server. It will actually be running Citrix, but I don't believe that to be relevant here. There has been a request for every user to use a single mandatory profile. I've used mandatory profiles before, but there have generally been different profiles for different users, so I've always used the "Terminal Services Profile" tab to good effect. What I'd like this time is a single setting, such as a Group Policy or similar, that simply forces every non-domain-admin user logging on to the box into using the mandatory profile. We'll be using Folder Redirection to take care of everything else. I'm aware of the following GPO:
      Computer Configuration\Administrative Templates\Windows Components\Terminal Services -> Set path for TS Roaming Profiles
    But, as that's a computer policy, will it not apply to all users including administrators? If so, is it possible to exclude admins somehow?

    Read the article

  • Problem with MSDE 2000 5 minute keepalive over ISDN

    - by mcrick
    We have a SQL Server transactionally pushing replicate data to an MSDE 2000 SP3a subscriber over ISDN. Prior to a recent upgrade to bring us to the MSDE 2000 level, we pushed to MSDE 1. We are finding that there is now a 5-minute keepalive being instigated from MSDE 2000 which we cannot account for. Further, we can find no way to either disable it or lengthen the keepalive interval. Not surprisingly, we are finding a marked increase in ISDN line costs due to these previously non-existent keepalive packets! Please note that we are assuming that it is an MSDE 2000 server issue, but it could equally be some behaviour related to the way that replication is operating on MSDE 2000. Unfortunately, as yet, we have not identified a replication configuration parameter that affects the keepalive in any way. Can anyone advise how we might identify a root cause for this problem (and ideally a fix)?

    Read the article

  • Crypto-Analysis of keylogger logs and config file. Possible?

    - by lost.
    Is there any way encryption on an unidentified file can be broken (files in question: the config file and log files from the Ardamax keylogger)? These files date back all the way to 2008. I searched everywhere - nothing on Slashdot, nothing on Google. Ardamax Keyviewer? Should I just write to Ardamax? I am at a loss as to what to do. I feel compromised. Has anyone managed to decrypt files with cryptanalysis? More information: there are log files in the folder and a configuration file, "akv.cfg". Is it possible to decrypt the files and maybe get the attacker's email address used to receive the keylogger logs? I've checked ardamax.com. They have a built-in log viewer, but it's unavailable for download. If Super User isn't the proper place to ask, do you know where I might get help?

    Read the article

  • Which PHP accelerator to use with prefork mpm (CentOS and RedHat default httpd settings)

    - by FractalizeR
    Hello. As you know, even though it is possible to start httpd in worker mode under CentOS/RedHat, the PHP in the default RPM repo is not thread-safe, and the default configuration for stability is mpm_prefork. So, two questions: Is there a PHP accelerator capable of working in mpm_prefork mode (using shm or whatever)? If there is none, what can be done to improve PHP speed on CentOS/RedHat systems? (I want to use RPMs, preferably from the default CentOS repo; building a custom PHP from source code is not a good option for me.)
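    APC is the usual answer here: it keeps the compiled opcodes in shared memory, so all prefork children share one cache. A hedged sketch, assuming the EPEL repository (not the base CentOS repo) provides the package:

      yum install php-pecl-apc

      # /etc/php.d/apc.ini
      apc.enabled=1
      apc.shm_size=64   ; megabytes (older APC releases expect a bare integer here)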

    Read the article

  • How to ignore query parameters in web cache?

    - by eduardocereto
    Google Analytics uses some query parameters to identify campaigns and to do cookie control. This is all handled by JavaScript code. Take a look at the following example: http://www.example.com/?utm_source=newsletter&utm_medium=email&utm_term=October%2B2008&utm_campaign=promotion
    This will set cookies via JavaScript with the right campaign origin. These query parameters can have multiple and sometimes random values. Since they are used as cache hash keys, cache performance is heavily degraded in some scenarios. I suppose there's a not-so-hard configuration on cache servers to just ignore all query parameters, or specific query parameters. Am I right? Does anyone know how hard it is to set this up in popular web cache solutions? I'm not interested in a specific web cache solution; it would be great to hear about the one you use.
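    In at least one popular cache it is only a few lines: a hedged Varnish (2.x/3.x VCL) sketch that strips the campaign parameters in vcl_recv, before the URL is used as the cache key:

      sub vcl_recv {
          # drop Google Analytics campaign parameters so they don't fragment the cache
          set req.url = regsuball(req.url,
              "(\?|&)(utm_source|utm_medium|utm_term|utm_content|utm_campaign)=[^&]*", "");
          # tidy up any leftover "?&" or trailing "?"
          set req.url = regsub(req.url, "\?&", "?");
          set req.url = regsub(req.url, "\?$", "");
      }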

    Read the article

  • Oracle Linux Partner Pavilion Spotlight

    - by Ted Davis
    With the first day of Oracle OpenWorld starting in less than a week, we wanted to showcase some of our premier partners exhibiting in the Oracle Linux Partner Pavilion (Booth #1033) this year. We have Independent Hardware Vendors, Independent Software Vendors and Systems Integrators that show the breadth of support in the Oracle Linux and Oracle VM ecosystem. We'll be highlighting partners all week, so feel free to come back and check us out.
    Centrify delivers integrated software and cloud-based solutions that centrally control, secure and audit access to cross-platform systems, mobile devices and applications by leveraging the infrastructure organizations already own. From the data center and into the cloud, more than 4,500 organizations, including 40 percent of the Fortune 50 and more than 60 Federal agencies, rely on Centrify's identity consolidation and privilege management solutions to reduce IT expenses, strengthen security and meet compliance requirements. Visit Centrify at Oracle OpenWorld 2012 for a look at Centrify Suite and see how you can streamline security management on Oracle Linux:
    - Unify identities across the enterprise and remove the pain and security issues associated with managing local user accounts by leveraging Active Directory
    - Implement a least-privilege security model with flexible, role-based controls that protect privileged operations while still granting users the privileges they need to perform their job
    - Get a central, global view of audited user sessions across your Oracle Linux environment
    "Data Intensity's cloud infrastructure leverages Oracle VM and Oracle Linux to provide highly available enterprise application management solutions. Engineers will be available to answer questions about and demonstrate the technology, including management tools, configuration do's and don'ts, high availability, live migration, integrating the technology with Oracle software, and how the integrated support process works."
    Mellanox's end-to-end InfiniBand and Ethernet server and storage interconnect solutions deliver the highest performance, efficiency and scalability for enterprise, high-performance cloud and web 2.0 applications. Mellanox's interconnect solutions accelerate Oracle RAC query throughput performance to reach 50Gb/s, compared to TCP/IP-based competing solutions that cap off at less than 12Gb/s. Mellanox solutions help Oracle's Exadata deliver a 10X performance boost at 50% hardware cost, making it the world's leading database appliance.
    Thanks for reviewing today's partner spotlight. We will highlight new partners each day this week leading up to Oracle OpenWorld.

    Read the article

  • Wireless with WEP extremely slow on an Acer Timeline 4810T with a Centrino Wireless-N 1000

    - by noq38
    I've upgraded an Acer Timeline 4810T to Ubuntu 11.10. Everything works fine except for the darn wireless interface (Network Manager). I just tested the wireless interface over a non-encrypted signal and it works beautifully, so the issue is definitely related to WEP. Unfortunately, some of the networks I need to connect to are WEP encrypted, so this is a serious issue for me that is preventing me from using Ubuntu on my laptop. This was no problem in 11.04 and prior. Is there a simple solution for this? Any suggestions? Here's more hardware information; hopefully this helps to debug the network issue:
      sudo lshw -class network
      *-network
           description: Wireless interface
           product: Centrino Wireless-N 1000
           vendor: Intel Corporation
           physical id: 0
           bus info: pci@0000:02:00.0
           logical name: wlan0
           version: 00
           serial: 00:1e:64:3c:5e:e0
           width: 64 bits
           clock: 33MHz
           capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
           configuration: broadcast=yes driver=iwlagn driverversion=3.0.0-13-generic-pae firmware=39.31.5.1 build 35138 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn
           resources: irq:43 memory:d2400000-d2401fff

      lspci
      02:00.0 Network controller: Intel Corporation Centrino Wireless-N 1000

      rfkill list
      0: phy0: Wireless LAN
          Soft blocked: no
          Hard blocked: no
      1: acer-wireless: Wireless LAN
          Soft blocked: no
          Hard blocked: no
    Many thanks for your help!
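    One commonly suggested workaround for poor throughput on Intel Centrino N adapters (a hedged sketch; it disables 802.11n for the driver and may or may not address the WEP-specific part):

      # /etc/modprobe.d/iwlagn.conf
      options iwlagn 11n_disable=1

      # then reload the driver (or simply reboot)
      sudo modprobe -r iwlagn && sudo modprobe iwlagn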

    Read the article

  • Web API, JavaScript, Chrome & Cross-Origin Resource Sharing

    - by Brian Lanham
    The team spent much of the week working through this issues related to Chrome running on Windows 8 consuming cross-origin resources using Web API.  We thought it was resolved on day 2 but it resurfaced the next day.  We definitely resolved it today though.  I believe I do not fully understand the situation but I am going to explain what I know in an effort to help you avoid and/or resolve a similar issue. References We referenced many sources during our trial-and-error troubleshooting.  These are the links we reference in order of applicability to the solution: Zoiner Tejada JavaScript and other material from -> http://www.devproconnections.com/content1/topic/microsoft-azure-cors-141869/catpath/windows-azure-platform2/page/3 WebDAV Where I learned about “Accept” –>  http://www-jo.se/f.pfleger/cors-and-iis? IT Hit Tells about NOT using ‘*’ –> http://www.webdavsystem.com/ajax/programming/cross_origin_requests Carlos Figueira Sample back-end code (newer) –> http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d (older version) –> http://code.msdn.microsoft.com/CORS-support-in-ASPNET-Web-01e9980a   Background As a measure of protection, Web designers (W3C) and implementers (Google, Microsoft, Mozilla) made it so that a request, especially a JSON request (but really any URL), sent from one domain to another will only work if the requestee “knows” about the requester and allows requests from it. So, for example, if you write a ASP.NET MVC Web API service and try to consume it from multiple apps, the browsers used may (will?) indicate that you are not allowed by showing an “Access-Control-Allow-Origin” error indicating the requester is not allowed to make requests. Internet Explorer (big surprise) is the odd-hair-colored step-child in this mix. It seems that running locally at least IE allows this for development purposes.  Chrome and Firefox do not.  In fact, Chrome is quite restrictive.  Notice the images below. IE shows data (a tabular view with one row for each day of a week) while Chrome does not (trust me, neither does Firefox).  Further, the Chrome developer console shows an XmlHttpRequest (XHR) error. Screen captures from IE (left) and Chrome (right). Note that Chrome does not display data and the console shows an XHR error. Why does this happen? The Web browser submits these requests and processes the responses and each browser is different. Okay, so, IE is probably the only one that’s truly different.  However, Chrome has a specific process of performing a “pre-flight” check to make sure the service can respond to an “Access-Control-Allow-Origin” or Cross-Origin Resource Sharing (CORS) request.  So basically, the sequence is, if I understand correctly:  1)Page Loads –> 2)JavaScript Request Processed by Browser –> 3)Browsers Prepares to Submit Request –> 4)[Chrome] Browser Submits Pre-Flight Request –> 5)Server Responds with HTTP 200 –> 6)Browser Submits Request –> 7)Server Responds with Data –> 8)Page Shows Data This situation occurs for both GET and POST methods.  Typically, GET methods are called with query string parameters so there is no data posted.  Instead, the requesting domain needs to be permitted to request data but generally nothing more is required.  POSTs on the other hand send form data.  Therefore, more configuration is required (you’ll see the configuration below).  AJAX requests are not friendly with this (POSTs) either because they don’t post in a form. How to fix it. 
The team went through many iterations of self-hair removal and we think we finally have a working solution.  The trial-and-error approach eventually worked and we referenced many sources for the information.  I indicate those references above.  There are basically three (3) tasks needed to make this work. Assumptions: You are using Visual Studio, Web API, JavaScript, and have Cross-Origin Resource Sharing, and several browsers. 1. Configure the client Joel Cochran centralized our “cors-oriented” JavaScript (from here). There are two calls including one for GET and one for POST function(url, data, callback) {             console.log(data);             $.support.cors = true;             var jqxhr = $.post(url, data, callback, "json")                 .error(function(jqXhHR, status, errorThrown) {                     if ($.browser.msie && window.XDomainRequest) {                         var xdr = new XDomainRequest();                         xdr.open("post", url);                         xdr.onload = function () {                             if (callback) {                                 callback(JSON.parse(this.responseText), 'success');                             }                         };                         xdr.send(data);                     } else {                         console.log(">" + jqXhHR.status);                         alert("corsAjax.post error: " + status + ", " + errorThrown);                     }                 });         }; The GET CORS JavaScript function (credit to Zoiner Tejada) function(url, callback) {             $.support.cors = true;             var jqxhr = $.get(url, null, callback, "json")                 .error(function(jqXhHR, status, errorThrown) {                     if ($.browser.msie && window.XDomainRequest) {                         var xdr = new XDomainRequest();                         xdr.open("get", url);                         xdr.onload = function () {                             if (callback) {                                 callback(JSON.parse(this.responseText), 'success');                             }                         };                         xdr.send();                     } else {                         alert("CORS is not supported in this browser or from this origin.");                     }                 });         }; The POST CORS JavaScript function (credit to Zoiner Tejada) Now you need to call these functions to get and post your data (instead of, say, using $.Ajax). Here is a GET example: corsAjax.get(url, function(data) { if (data !== null && data.length !== undefined) { // do something with data } }); And here is a POST example: corsAjax.post(url, item); Simple…except…you’re not done yet. 2. Change Web API Controllers to Allow CORS There are actually two steps here.  Do you remember above when we mentioned the “pre-flight” check?  Chrome actually asks the server if it is allowed to ask it for cross-origin resource sharing access.  So you need to let the server know it’s okay.  This is a two-part activity.  a) Add the appropriate response header Access-Control-Allow-Origin, and b) permit the API functions to respond to various methods including GET, POST, and OPTIONS.  OPTIONS is the method that Chrome and other browsers use to ask the server if it can ask about permissions.  Here is an example of a Web API controller thus decorated: NOTE: You’ll see a lot of references to using “*” in the header value.  For security reasons, Chrome does NOT recognize this is valid. 
[HttpHeader("Access-Control-Allow-Origin", "http://localhost:51234")] [HttpHeader("Access-Control-Allow-Credentials", "true")] [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPATCH, COPY, MOVE, DELETE, MKCOL, LOCK, UNLOCK, PUT, GETLIB, VERSION-CONTROL, CHECKIN, CHECKOUT, UNCHECKOUT, REPORT, UPDATE, CANCELUPLOAD, HEAD, OPTIONS, GET, POST")] [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destination, Content-Type, Depth, User-Agent, X-File-Size, X-Requested-With, If-Modified-Since, X-File-Name, Cache-Control")] [HttpHeader("Access-Control-Max-Age", "3600")] public abstract class BaseApiController : ApiController {     [HttpGet]     [HttpOptions]     public IEnumerable<foo> GetFooItems(int id)     {         return foo.AsEnumerable();     }     [HttpPost]     [HttpOptions]     public void UpdateFooItem(FooItem fooItem)     {         // NOTE: The fooItem object may or may not         // (probably NOT) be set with actual data.         // If not, you need to extract the data from         // the posted form manually.         if (fooItem.Id == 0) // However you check for default...         {             // We use NewtonSoft.Json.             string jsonString = context.Request.Form.GetValues(0)[0].ToString();             Newtonsoft.Json.JsonSerializer js = new Newtonsoft.Json.JsonSerializer();             fooItem = js.Deserialize<FooItem>(new Newtonsoft.Json.JsonTextReader(new System.IO.StringReader(jsonString)));         }         // Update the set fooItem object.     } } Please note a few specific additions here: * The header attributes at the class level are required.  Note all of those methods and headers need to be specified but we find it works this way so we aren’t touching it. * Web API will actually deserialize the posted data into the object parameter of the called method on occasion but so far we don’t know why it does and doesn’t. * [HttpOptions] is, again, required for the pre-flight check. * The “Access-Control-Allow-Origin” response header should NOT NOT NOT contain an ‘*’. 3. Headers and Methods and Such We had most of this code in place but found that Chrome and Firefox still did not render the data.  Interestingly enough, Fiddler showed that the GET calls succeeded and the JSON data is returned properly.  We learned that among the headers set at the class level, we needed to add “ACCEPT”.  Note that I accidentally added it to methods and to headers.  Adding it to methods worked but I don’t know why.  We added it to headers also for good measure. [HttpHeader("Access-Control-Allow-Methods", "ACCEPT, PROPFIND, PROPPA... [HttpHeader("Access-Control-Allow-Headers", "Accept, Overwrite, Destin... Next Steps That should do it.  If it doesn’t let us know.  What to do next?  * Don’t hardcode the allowed domains.  Note that port numbers and other domain name specifics will cause problems and must be specified.  If this changes do you really want to deploy updated software?  Consider Miguel Figueira’s approach in the following link to writing a custom HttpHeaderAttribute class that allows you to specify the domain names and then you can do it dynamically.  There are, of course, other ways to do it dynamically but this is a clean approach. http://code.msdn.microsoft.com/windowsdesktop/Implementing-CORS-support-a677ab5d

    Read the article
