Search Results

Search found 15035 results on 602 pages for 'request'.


  • DAO/Webservice Consumption in Web Application

    - by Gavin
    I am currently working on converting a "legacy" web-based (ColdFusion) application from a single data source (an MSSQL database) to multi-tier OOP. In my current system there is a read/write database with all the usual stuff, plus additional "read-only" databases that are exported daily/hourly from an Enterprise Resource Planning (ERP) system by SSIS jobs, carrying business product/item and manufacturing/SCM planning data. The reason I have the opportunity and the need to convert to multi-tier OOP is that a newer, more modern ERP system is being implemented business-wide as a complete replacement. This newer ERP system offers several interfaces for third-party applications like mine, from direct SQL access to either a .NET web service or a SOAP-like web service. I have found several suitable frameworks I would be happy to use (ColdSpring, FW/1), but I am not sure what design patterns apply to my data access object/component, or how to manage the connection/session tokens. With this background, my question has the following three parts:

    Firstly, I have concerns about moving from the relative safety of an SSIS job, which shields me from the ERP system's downtime and speed, to connecting directly with one of the web services, which seem significantly slower than I expected (simple/small requests often take up to a whole second). Are there any design patterns I can investigate/use to cache/protect my data tier?

    Secondly, it is my understanding that data access objects (the components that connect directly with the web services and convert the responses into the data types I can then work with in my domain objects) should be singletons (and will act as an Adapter/Facade). Am I correct?

    Thirdly, as part of the data access object I have to set up a connection by username/password (I could set up multiple users and/or connect multiple times with this), which responds with a session token that has to be provided on every subsequent request. Do I do this once and share it across the whole application? Do I set up a new "connection" for every user of my application and keep the token in their session scope (which might quickly hit licensing limits)? Do I set the "connection" up per page request? Or is there a design pattern I am missing that can manage multiple "connections", where a request/access uses the first free "connection"? It is worth noting that if the ERP system dies I will need to reset/invalidate all the connections and start from scratch, and depending on which web service I use I might need to manually close the "connection/session".
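
    On the third part, a small pool of pre-authenticated session tokens is one common shape for "a request uses the first free connection". A minimal sketch in Java (class and method names are invented for illustration; a ColdSpring-managed CFC could follow the same structure):

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // A tiny pool of ERP session tokens: callers borrow the first free token
        // and return it when done; the pool can be rebuilt if the ERP restarts.
        // ErpClient is a hypothetical wrapper around whichever web service is used.
        public class ErpSessionPool {
            public interface ErpClient { String login() throws Exception; }
            public interface ErpCall<T> { T run(String sessionToken) throws Exception; }

            private final BlockingQueue<String> tokens;
            private final ErpClient client;
            private final int size;

            public ErpSessionPool(ErpClient client, int size) throws Exception {
                this.client = client;
                this.size = size;
                this.tokens = new ArrayBlockingQueue<>(size);
                for (int i = 0; i < size; i++) {
                    tokens.put(client.login()); // login() yields a session token
                }
            }

            public <T> T withSession(ErpCall<T> call) throws Exception {
                String token = tokens.take();   // blocks until a token is free
                try {
                    return call.run(token);     // every request carries the token
                } finally {
                    tokens.put(token);          // hand it back for the next caller
                }
            }

            // After an ERP outage: drop the old tokens and log in again.
            // (A sketch only -- it ignores tokens borrowed at this moment.)
            public synchronized void reset() throws Exception {
                tokens.clear();
                for (int i = 0; i < size; i++) {
                    tokens.put(client.login());
                }
            }
        }

    Sizing the pool at or below the licensing limit keeps the license question out of session scope, and a single reset() call covers the ERP-restart case.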

    Read the article

  • Static classes and/or singletons -- How many does it take to become a code smell?

    - by Earlz
    In my projects I use quite a lot of static classes. These are usually classes that naturally seem to fit into a single-instance kind of role. Many times I use static classes, and recently I've started using some singletons. How many of these does it take to become a code smell? For instance, my recent project, which has a lot of static classes, is an authentication library for ASP.NET. I use a static class for a helper that fixes ASP.NET error codes, so it can be used like:

        CustomErrorsFixer.Fix(Context);

    And my authentication class itself is a static class:

        //in global.asax's begin_application
        Authentication.SomeState="blah";
        Authentication.SomeOption=true;
        //etc

        //in global.asax's begin_request
        Authentication.Authenticate();

    When are static or singleton classes bad to use? Am I doing it wrong, or am I just working on a project that by definition has very little per-instance state? The only per-instance state I have is stored in HttpContext.Current.Items, like so:

        /// <summary>
        /// The current user logged in for the HTTP request. If there is not a user logged in, this will be null.
        /// </summary>
        public static UserData CurrentUser{
            get{
                //use HttpContext.Current.Items as a little place to persist static data for this request
                return HttpContext.Current.Items["fscauth_currentuser"] as UserData;
            }
            private set{
                HttpContext.Current.Items["fscauth_currentuser"]=value;
            }
        }

    Read the article

  • ITIL Incident Classification - Fault vs SR vs Technical Incident

    - by ExceptionLimeCat
    I am new to ITIL and incident classifications, and I am trying to learn more about them and understand how they could integrate into our organization. I have found it difficult to find a clear definition of Fault vs. Service Request vs. Technical Incident. I am basing my definitions on this article: http://www.itsmsolutions.com/newsletters/DITYvol6iss27.htm

    As I understand it:

    Service Request - A service provided by IT as part of the regular administration of a system.
    Fault - An unexpected error in a system.
    Technical Incident - An interruption or potential interruption in IT service due to an expected incident caused by some IT policy.

    Read the article

  • 403 in Response to OPTIONS when updating working copy having full access

    - by user23419
    There is an SVN repository (a single repository): http://example.net/svn

    The repository contains several projects (directories):

        http://example.net/svn/Project1
        http://example.net/svn/Project2

    The user has full access to the Project1 directory and no access to either the root or Project2. Everything works fine for a while: the user checks out http://example.net/svn/Project1, and commits and updates it successfully. But sometimes trying to update leads to the following error:

        Command: Update
        Error: Server sent unexpected return value (403 Forbidden) in response to OPTIONS
        Error: request for 'http://example.net/svn'
        Finished!

    Why does TortoiseSVN request something in the root? I have noticed that this happens after somebody else commits a copy or move operation. Checking out http://example.net/svn/Project1 again helps, until the next time...

    The main question: how do I set up access rights for the user to avoid these errors? Note that it is not an option to grant the user any read or write access on the root directory, for security reasons.
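
    For reference, this kind of per-directory access is usually expressed in mod_authz_svn's path-based authorization file; a minimal sketch using the names from the question (user name invented):

        [/]
        user1 =

        [/Project1]
        user1 = rw

    The copy/move case is the catch: when the client resolves a copied or renamed path, it can issue requests against ancestors the user cannot read, which is why some setups end up granting plain read (r) on [/] for exactly this reason.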

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep JSPs and servlets, and now we are in doubt about how to include pages in one another and how to set up the useBean directives effectively.

    As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the bean. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included into one another. Here is a shortened extract (because reality is more complex):

        head
        header
        top
        navigation
        actionspanel
        main
        header
        actionspanel
        foot
        footer

    Moreover, each JSP (including the header and footer) uses dynamic data. For example, the title and actionspanel can change on each page reload, or have links and labels that depend on the processing done by the preceding servlet. I know that JSP include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts.

    Now the core questions. Should I use one big bean for each page, so that each bean also holds the data for the header and footer beside its core data, and each subsequently included JSP uses the same bean directive? For example:

        DirectoryJSP <- DirectoryBean
        CompareJSP   <- CompareBean

    Or should I use one bean for each JSP, so that each bean only holds the data for one JSP and its own purpose (see the sketch below)? For example:

        DirectoryJSP <- DirectoryBean
          HeaderJSP  <- HeaderBean
          FooterJSP  <- FooterBean
        CompareJSP   <- CompareBean
          HeaderJSP  <- HeaderBean
          FooterJSP  <- FooterBean

    In the second case: should the subordinate beans be members of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request separately?
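
    A minimal sketch of the second variant, where each fragment gets its own request-scoped bean (all names are made up for illustration; the servlet puts the beans into request scope before forwarding):

        <%-- page.jsp: the servlet has set "headerBean" and "directoryBean" as request attributes --%>
        <jsp:useBean id="headerBean" class="com.example.web.HeaderBean" scope="request"/>
        <jsp:include page="header.jsp"/>

        <jsp:useBean id="directoryBean" class="com.example.web.DirectoryBean" scope="request"/>
        <%-- core page content reads from directoryBean --%>

        <jsp:useBean id="footerBean" class="com.example.web.FooterBean" scope="request"/>
        <jsp:include page="footer.jsp"/>

    Each included fragment (header.jsp, footer.jsp) declares its own jsp:useBean with scope="request", so it stays usable from any parent page without knowing about the parent's bean.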

    Read the article

  • Help with IPTables - Masquerading + Forwarding, 1-to-1?

    - by Artiom Chilaru
    I've got a clean Ubuntu Server 10.10 with OpenSSH, OpenVPN and vsFTPd installed. The server is running as a VM on the Hyper-V server (hypervisor), has two network interfaces mapped to physical adapters (eth0 and eth1), and a virtual interface with a direct connection to the hypervisor (eth2). The VPN will create a tun0 interface when a client connects.

    What I want is for the remote user, connecting over VPN, to be able to connect to the hypervisor (all ports, ping, etc.). The initial idea was to make the VPN create a tap0 interface and bridge eth2 to tap0, but this didn't work, unfortunately, as it seems the adapters don't want to go into promiscuous mode (partially confirmed by MS). At the same time, both the hypervisor and the remote client over VPN can successfully ping/connect to the Ubuntu server with no problems.

    So my plan right now is to try some 1-to-1 masquerading, if possible. Basically, I want every request sent from the VPN client to the Ubuntu server to be redirected to the hypervisor instead (with IP translation, of course), and every request from the hypervisor to the Ubuntu machine sent to the VPN client (IP translated too). Only one client will be connected to the VPN at a time, so I can limit it to a single IP at all times if necessary.

    Is this the right way to go, and if so, how can this be achieved? It's almost like a special case of port forwarding, except every single port on tun0 is forwarded to a machine on eth2, and every port on the eth2 side forwards to an IP on tun0. I guess it could be done with iptables, but I'm rather new to Linux, so I can't do it myself... help? :(
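
    A sketch of the VPN-to-hypervisor half of that translation with iptables DNAT plus masquerading (all addresses are hypothetical: 10.8.0.1 as the VM's tun0 address, 192.168.2.10 as the hypervisor on eth2):

        # Everything the VPN client sends to the VM's tun0 address goes to the hypervisor instead
        iptables -t nat -A PREROUTING -i tun0 -d 10.8.0.1 -j DNAT --to-destination 192.168.2.10

        # Make the traffic appear to come from the VM's eth2 address so replies return here
        iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE

        # Allow the forwarding itself, and enable it in the kernel
        iptables -A FORWARD -i tun0 -o eth2 -j ACCEPT
        iptables -A FORWARD -i eth2 -o tun0 -m state --state ESTABLISHED,RELATED -j ACCEPT
        sysctl -w net.ipv4.ip_forward=1

    With MASQUERADE only replies flow back automatically; for the hypervisor to *initiate* connections toward the VPN client, a matching DNAT on the eth2 side (pointing at the client's tun0 IP) would be needed as well.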

    Read the article

  • Need help configuring my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before:

        <!--APR library loader. Documentation at /docs/apr.html -->
        <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
        <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
        <Listener className="org.apache.catalina.core.JasperListener" />
        <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
        <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
        <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
        <!-- Global JNDI resources
             Documentation at /docs/jndi-resources-howto.html -->
        <GlobalNamingResources>
            <!-- Editable user database that can also be used by
                 UserDatabaseRealm to authenticate users -->
            <Resource name="UserDatabase" auth="Container"
                      type="org.apache.catalina.UserDatabase"
                      description="User database that can be updated and saved"
                      factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                      pathname="conf/tomcat-users.xml" />
        </GlobalNamingResources>
        <!-- A "Service" is a collection of one or more "Connectors" that share
             a single "Container" Note: A "Service" is not itself a "Container",
             so you may not define subcomponents such as "Valves" at this level.
             Documentation at /docs/config/service.html -->
        <Service name="Catalina">
            <!--The connectors can use a shared executor, you can define one or more named thread pools-->
            <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                          maxThreads="150" minSpareThreads="4"/> -->
            <!-- A "Connector" represents an endpoint by which requests are received
                 and responses are returned. Documentation at :
                 Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
                 Java AJP Connector: /docs/config/ajp.html
                 APR (HTTP/AJP) Connector: /docs/apr.html
                 Define a non-SSL HTTP/1.1 Connector on port 8080 -->
            <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
            <!-- A "Connector" using the shared thread pool-->
            <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1"
                            connectionTimeout="20000" redirectPort="8443" /> -->
            <!-- Define a SSL HTTP/1.1 Connector on port 8443
                 This connector uses the JSSE configuration, when using APR, the
                 connector should be using the OpenSSL style configuration
                 described in the APR documentation -->
            <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                            maxThreads="150" scheme="https" secure="true"
                            clientAuth="false" sslProtocol="TLS" /> -->
            <!-- Define an AJP 1.3 Connector on port 8009 -->
            <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
            <!-- An Engine represents the entry point (within Catalina) that processes
                 every request. The Engine implementation for Tomcat stand alone
                 analyzes the HTTP headers included with the request, and passes them
                 on to the appropriate Host (virtual host).
                 Documentation at /docs/config/engine.html -->
            <!-- You should set jvmRoute to support load-balancing via AJP ie :
                 <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> -->
            <Engine name="Catalina" defaultHost="localhost">
                <!--For clustering, please take a look at documentation at:
                    /docs/cluster-howto.html (simple how to)
                    /docs/config/cluster.html (reference documentation) -->
                <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> -->
                <!-- The request dumper valve dumps useful debugging information about
                     the request and response data received and sent by Tomcat.
                     Documentation at: /docs/config/valve.html -->
                <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> -->
                <!-- This Realm uses the UserDatabase configured in the global JNDI
                     resources under the key "UserDatabase". Any edits that are performed
                     against this UserDatabase are immediately available for use by the Realm. -->
                <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
                <!-- Define the default virtual host
                     Note: XML Schema validation will not work with Xerces 2.2. -->
                <!-- <Host name="localhost" appBase="webapps"
                           unpackWARs="true" autoDeploy="true"
                           xmlValidation="false" xmlNamespaceAware="false"> -->
                <!-- SingleSignOn valve, share authentication between web applications
                     Documentation at: /docs/config/valve.html -->
                <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> -->
                <!-- Access log processes all example.
                     Documentation at: /docs/config/valve.html -->
                <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                            prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> -->
                <!-- </Host> -->
                <Host name="www.rebootradio.nu">
                    <Alias>rebootradio.nu</Alias>
                    <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/>
                </Host>
            </Engine>
        </Service>
        </Server>

    The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest version of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying: The requested resource () is not available.

    Read the article

  • How should an API use http basic authentication

    - by user1626384
    When an API requires that a client authenticate to it, I've seen two different scenarios used, and I am wondering which one I should use for my situation.

    Example 1: An API is offered by a company to allow third parties to authenticate with a token and secret using HTTP Basic.

    Example 2: An API accepts a username and password via HTTP Basic to authenticate an end user. Generally they get a token back for future requests.

    My setup: I will have a JSON API that I use as the backend for a mobile app and a web app. It seems like good practice for both the mobile and web app to send along a token and secret, so that only these two apps can access the API, blocking any other third party. But the mobile and web app allow users to log in, submit posts, view their data, etc., so I would want them to log in via HTTP Basic as well on each request.

    Do I somehow use a combination of both of these methods, or do I only send the end-user credentials (username and token) on each request? If I only send the end-user credentials, do I store them in a cookie on the client?
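
    For what it's worth, HTTP Basic itself is just an Authorization header, whichever pair of credentials goes into it; curl's -u flag base64-encodes user:password into that header. A quick sketch of the two scenarios (endpoint and credentials are made up):

        # app-level credentials (example 1): token + secret identify the calling app
        curl -u app_token:app_secret https://api.example.com/v1/posts

        # end-user credentials (example 2): typically exchanged once for a per-user token
        curl -u alice:her_password https://api.example.com/v1/sessions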

    Read the article

  • magento on Zend Server (Win7) installation error

    - by czerasz
    I am trying to install Magento for the first time. I've created a database with the name "project". At the end of my C:\Zend\Apache2\conf\httpd.conf I added:

        <Directory "C:\Zend\Apche2\htdocs\project">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>

    In ZendServer/Server Setup/Extensions, PDO_MySQL, simplexml, mcrypt, hash, GD, DOM, iconv, curl and SOAP are on. In C:\Zend\ZendServer\etc\php.ini I set:

        safe_mode = Off ;<-- was set to off
        ...
        memory_limit = 512M; Maximum amount of memory a script may consume (128MB)

    After the "Configuration" step of the Magento installation (with "Use Web Server (Apache) Rewrites" enabled) I get: Internal Server Error. My database is full of tables (so that should be OK). My Zend Server shows:

        27-Oct 06:55  6  Severe Slow Request Execution (Absolute)  http://localhost/project/index.php/install/wizard/installDb/  Critical  Open
        27-Oct 06:55  4  Fatal PHP Error  C:\Zend\Apache2\htdocs\project\lib\Varien\Db\Adapter\Pdo\Mysql.php  Critical  Open
        27-Oct 06:55  5  Slow Function Execution  curl_exec  Warning  Open
        27-Oct 06:55  5  Slow Request Execution (Absolute)  http://localhost/project/index.php/install/wizard/configPost/  Critical  Open

    What can be wrong?

    Read the article

  • 404 Not Found for a PL script that exists!

    - by Abs
    Hello all, I make a GET request to a CGI script and I get a 404 error. However, I am 100% sure that the script is present and has the right permissions:

        -rwxr-xr-x 1 apache apache 6520 Sep  7 03:01 uu_ini_status_audios.pl

    The request URL is:

        http://mysite.com/cgi-bin/uu_ini_status_audios.pl?tmp_sid=893facacc5dc392ad0f4c91e6a9e8d40&rnd_id=0.12266222834382812

    The error I get:

        The requested URL /cgi-bin/uu_ini_status_audios.pl was not found on this server.

    This used to work for me before, but I think it stopped working after I restarted Apache, so maybe it is a configuration I changed? I checked the error logs for Apache and PHP, and nothing useful was found to help me with my problem! I appreciate any help on this!
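
    Since the 404 appeared right after an Apache restart, one thing worth double-checking is that the cgi-bin mapping is still present in the loaded configuration; the usual stanza looks like this (the filesystem path here is a guess):

        ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"
        <Directory "/var/www/cgi-bin">
            Options +ExecCGI
            AddHandler cgi-script .pl
            Order allow,deny
            Allow from all
        </Directory>

    If the ScriptAlias points somewhere other than the directory that actually holds uu_ini_status_audios.pl, Apache reports exactly this kind of 404 even though the file exists on disk.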

    Read the article

  • Apache file negotiation failed

    - by lorenzo.marcon
    I'm having the following issue on a host using Apache 2.2.22 + PHP 5.4.0.

    I need to serve the file /home/server1/htdocs/admin/contents.php when a user makes the request http://server1/admin/contents, but I get this message in the server error_log:

        Negotiation: discovered file(s) matching request: /home/server1/htdocs/admin/contents (None could be negotiated)

    Note that I have mod_negotiation enabled and MultiViews among the options for the related virtual host:

        <Directory "/home/server1/htdocs">
            Options Indexes Includes FollowSymLinks MultiViews
            Order allow,deny
            Allow from all
            AllowOverride All
        </Directory>

    I also use mod_rewrite, with the following .htaccess rules:

        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([^\./]*)$ index.php?t=$1 [L]
        </IfModule>

    It seems very strange, but on the same box with PHP 5.3.6 it used to work correctly. I'm just trying an upgrade to PHP 5.4.0, but I cannot solve this negotiation issue. Any idea why Apache cannot match contents.php when asking for contents (which should be what mod_negotiation is supposed to do)?
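
    If MultiViews keeps refusing to negotiate the .php variant, an explicit rewrite that performs the same bare-path-to-.php mapping is a common fallback; a sketch (this sidesteps negotiation rather than fixing it):

        RewriteEngine On
        RewriteBase /
        # if the bare path has a matching .php file, serve that file instead
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteCond %{REQUEST_FILENAME}.php -f
        RewriteRule ^(.+)$ $1.php [L]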

    Read the article

  • ArchBeat Link-o-Rama for 11/18/2011

    - by Bob Rhubart
    IT executives taking lead role with both private and public cloud projects: survey | Joe McKendrick
    "The survey, conducted among members of the Independent Oracle Users Group, found that both private and public cloud adoption are up—30% of respondents report having limited-to-large-scale private clouds, up from 24% only a year ago. Another 25% are either piloting or considering private cloud projects. Public cloud services are also being adopted for their enterprises by more than one out of five respondents." - Joe McKendrick

    SOA all the Time; Architects in AZ; Clearing Info Integration Hurdles
    This week on the Architect Home Page on OTN.

    OIM 11g OID (LDAP) Groups Request-Based Provisioning with custom approval – Part I | Alex Lopez
    In part one of a two-part blog post, Alex Lopez illustrates "an implementation of a Custom Approval process and a Custom UI based on ADF to request entitlements for users which in turn will be converted to Group memberships in OID."

    ArchBeat Podcast Information Integration - Part 3/3
    "Oracle Information Integration, Migration, and Consolidation" author Jason Williamson, co-author Tom Laszeski, and book contributor Marc Hebert talk about upcoming projects and about what they've learned in writing their book.

    InfoQ: Enterprise Shared Services and the Cloud | Ganesh Prasad
    As an industry, we have converged onto a standard three-layered service model (IaaS, PaaS, SaaS) to describe cloud computing, with each layer defined in terms of the operational control capabilities it offers. This is unlike enterprise shared services, which have unique characteristics around ownership, funding and operations, and they span the SaaS and PaaS layers. Ganesh Prasad explores the differences.

    Stress Testing Oracle ADF BC Applications - Do Connection Pooling and TXN Disconnect Level
    Oracle ACE Director Andrejus Baranovskis describes "how jbo.doconnectionpooling = true and jbo.txn.disconnect_level = 1 properties affect ADF application performance."

    Exploring TCP throughput with DTrace | Alan Maguire
    "According to the theory," says Maguire, "when the number of unacknowledged bytes for the connection is less than the receive window of the peer, the path bandwidth is the limiting factor for throughput."

    Read the article

  • Strange IIS/Asp.net Exception Message

    - by Element
    I have a standard ASP.NET 2.0 application running on IIS 6. I have noticed some strange exception messages in the logs. They seem to be caused by random spam bots trying to submit forms. They are strange because the request string is huge, and all the exception details in the event manager are messed up: they have been replaced with %21, %22, etc., as seen in the screenshot. Is this some kind of exploit, or just a bug in the ASP.NET exception handler/logger?

    UPDATE: I traced the requests that are causing this strange log event to a bug in IE8 that causes it to request scriptresource.axd?d={html from page}, as described in these links: MS Connect; SO - Invalid Webresource.axd; SO - IE8 Dropping Memory Pages. I am still not sure why these requests would break the IIS log event as seen above; they are just long strings of gibberish being sent to the server. Maybe someone reading this can shed some light on it.

    Read the article

  • Why do SNMP agents need MIB files?

    - by user1495181
    After reading up on SNMP and some of the questions asked here, I think I understand the agent's role as an SNMP service in front of a device (like SQL, which is an API to storage). When you execute a SQL query, the SQL engine does all the work and returns the result - you don't need to be aware of how and where the storage is done. But MIBs are not actual storage, so what is the role of my agent? If the agent only registers the MIB, as in the tutorial I followed, is it not used as a handler at all?

    Say I want to monitor my application's pending-request queue, so I want an agent such that all SNMP requests for application_pending_request are fired at it and it returns the queue depth. Why do I need an actual MIB at all, when all I need is to poll my application's queue in order to get the result?
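
    The MIB only describes names and types; the agent still has to compute each value when it is polled. A sketch of the manager side of that interaction (the OID is hypothetical, under a made-up private enterprise number):

        # ask the agent for the queue depth; the agent's handler for this OID
        # reads the application's queue and returns the current depth
        snmpget -v2c -c public localhost .1.3.6.1.4.1.99999.1.1.0

    The numeric OID works even without a MIB loaded; what the MIB mostly buys you is the symbolic name and type information on the manager side.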

    Read the article

  • How to Fix this specific Google "Fetch as Googlebot" error appearing on my Webmaster Tools?

    - by UXdesigner
    Good day, I'm currently trying to find out why my website has lost all of its rank in Google. I don't even appear in Google results for the domain, but other sites that link to me do appear in the results. I think it's all about leaving my site alone for two months and finding out I had 20k comment spam messages, which I completely deleted, then fixing things with filters and adding a new Disqus comment service.

    Thing is, I added my site to Google Webmaster Tools and I'm finding out several awful things. For example, when I click Fetch as Googlebot, I receive the error message below in response to my request, and I don't even know what the real problem is or how to fix it. I simply don't get it. This is what appears:

        Date: Wednesday, July 20, 2011 9:43:35 AM PDT
        Googlebot Type: Web
        Download Time (in milliseconds): 55

        HTTP/1.1 403 Forbidden
        Date: Wed, 20 Jul 2011 16:43:36 GMT
        Server: Apache
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 248
        Keep-Alive: timeout=2, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        403 Forbidden

        Forbidden
        You don't have permission to access / on this server.
        Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.

    Do you guys know anything about this problem? I need to have Google crawl my site again. I used to have a really nice Google ranking over the past three years. Now there's nothing. Thanks.
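
    One way to reproduce what Googlebot sees from a shell, before digging through any deny rules left over from the spam cleanup (the domain below is a placeholder):

        # -I fetches headers only; -A sends Googlebot's user-agent string
        curl -I -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" http://www.example.com/

    If this returns 403 while a plain curl of the same URL succeeds, something in the server config is filtering on the user agent; if both return 403, the block is broader (file permissions, a Deny from all, or a security module).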

    Read the article

  • The JavaServer Faces 2.2 viewAction Component

    - by Janice J. Heiss
    Life just got easier for users of JavaServer Faces. In a new article, now up on otn/java, titled "New JavaServer Faces 2.2 Feature: The viewAction Component," Tom McGinn, Oracle's Principal Curriculum Developer for Oracle Server Technologies, explores the advantages offered by the JavaServer Faces 2.2 view action feature, which, according to McGinn, "simplifies the process for performing conditional checks on initial and postback requests, enables control over which phase of the lifecycle an action is performed in, and enables both implicit and declarative navigation."

    As McGinn observes: "A view action operates like a button command (UICommand) component. By default, it is executed during the Invoke Application phase in response to an initial request. However, as you'll see, view actions can be invoked during any phase of the lifecycle and, optionally, during postback, making view actions well suited to performing preview checks."

    McGinn explains that the JavaServer Faces 2.2 view action feature offers several advantages over the previous method of performing evaluations before a page is rendered:

    * View actions can be triggered early on, before a full component tree is built, resulting in a lighter-weight call.
    * View action timing can be controlled.
    * View actions can be used in the same context as the GET request.
    * View actions support both implicit and explicit navigation.
    * View actions support both non-faces (initial) and faces (postback) requests.

    Read the complete article here.
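
    For a feel of the syntax: a view action is declared in the page's metadata facet. A minimal sketch (the bean and method names are invented):

        <f:metadata>
            <f:viewParam name="id" value="#{bookBean.bookId}"/>
            <!-- runs before render on the initial GET; onPostback="false" is the default -->
            <f:viewAction action="#{bookBean.loadBook}" onPostback="false"/>
        </f:metadata>

    The returned outcome of the action method, if any, drives navigation, which is what makes the preview-check pattern (redirect away if a precondition fails) a one-liner.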

    Read the article

  • Top 10 solution documents for Weblogic Server J2EE Feb 2014 - May 2014

    - by jhpierce -Oracle
    The following are the top 10 documents linked to SRs as solutions for WebLogic Server J2EE issues, from Feb 2014 through May 2014.

    1163020.1 How to configure Filtering class loader in weblogic.xml - To configure the Filtering Class Loader to specify that a certain package is loaded from an application, add a prefer-application-packages descriptor element.

    1276593.1 WLS - How to suppress servlet/JSP version details in WebLogic HTTP response header - The string "X-Powered-By: Servlet/2.4 JSP/2.0" is showing up in the servlet response header. How to stop WebLogic from including servlet/JSP version details in the X-Powered-By HTTP response header.

    1490080.1 WebLogic Server 12.1.1.0 in a Cluster Environment Throws NotSerializableException for CDI Applications at com.sun.jersey.server.impl.cdi.CDIExtension - When running in a clustered environment, server start-up is not clean when you have CDI applications deployed.

    1268138.1 Sample TwoWay SSL implementation for JAX-WS Webservice - In this sample the recipient checks for the initiator's public certificate. Note that the client certificate can be used for authentication.

    1584779.1 Socket Leaks When Calling Web-Service Over SSL - This is known bug 16810786.

    1598617.1 Secure WebService call throwing CANNOT RESOLVE URL FOR PROTOCOL HTTP/HTTPS through web server (Apache) plug-in.

    1056121.1 How to Timeout Weblogic Webservice Client - How to time out a WebService client with and without using stubs.

    1568638.1 When packaging Jersey JAX-RS libraries into webapp throws NoSuchMethodError() - Seen when attempting to include custom Jersey implementation libraries in a web application in an OSB domain.

    1118264.1 WLS 10.3: Intermittent XA error: XAResource.XAER_RMERR - In WebLogic 10.3, a CMP EJB sometimes throws this exception.

    1608951.1 How to get More Details About Error BEA-101215 Malformed Request. Request parsing failed Code: -1 - Seen when accessing the application via a load balancer.

    Read the article

  • "No route to host" with ssl but not with telnet

    - by Clemens Bergmann
    I have a strange problem with connecting to an https site from one of my servers. When I type:

        telnet puppet 8140

    I am presented with a standard telnet console and can talk to the server as always:

        Connected to athena.hidden.tld.
        Escape character is '^]'.
        GET / HTTP/1.1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>400 Bad Request</title>
        </head><body>
        <h1>Bad Request</h1>
        <p>Your browser sent a request that this server could not understand.<br />
        Reason: You're speaking plain HTTP to an SSL-enabled server port.<br />
        Instead use the HTTPS scheme to access this URL, please.<br />
        <blockquote>Hint: <a href="https://athena.hidden.tld:8140/"><b>https://athena.hidden.tld:8140/</b></a></blockquote></p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at athena.hidden.tld Port 8140</address>
        </body></html>
        Connection closed by foreign host.

    But when I try to connect to the same host and port with SSL:

        openssl s_client -connect puppet:8140

    it is not working:

        connect: No route to host
        connect:errno=113

    I am confused. At first it sounded like a firewall problem, but this could not be, could it? Because that would also prevent the telnet connection. As firewall I am using ferm on both servers. The systems are Debian Squeeze VM boxes.

    [edit 1] Even when I try to connect directly with the IP address:

        openssl s_client -connect 198.51.100.1:8140   # address exchanged
        connect: No route to host
        connect:errno=113

    Bringing down the firewalls on both hosts with service ferm stop is also not helping. But

        openssl s_client -connect localhost:8140

    on the server machine connects fine.

    [edit 2] If I connect to the IP with telnet, it is also not working:

        telnet 198.51.100.1 8140
        Trying 198.51.100.1...
        telnet: Unable to connect to remote host: No route to host

    The confusion might come from IPv6. I have IPv6 on all my hosts. It seems that telnet uses IPv6 by default, and this works. For example, telnet -6 puppet 8140 works but telnet -4 puppet 8140 does not work. So there seems to be a problem with the IPv4 route. openssl seems to only (or by default) use IPv4 and therefore fails, but telnet uses IPv6 and succeeds.
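
    A quick way to confirm the broken-IPv4-route theory from the client side with iproute2 (the v4 address is the placeholder one from the question; substitute the host's real AAAA address for the v6 check):

        # show which route (if any) the kernel picks for each address family
        ip -4 route get 198.51.100.1
        ip -6 route get 2001:db8::1

    If the -4 lookup answers "unreachable" or errors out while the -6 lookup returns a route, the diagnosis is confirmed and the fix belongs in the IPv4 routing table rather than in openssl.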

    Read the article

  • Apache in front of tomcat on Railo proxy with ajp

    - by user1468116
    I'm trying to set up Apache in front of the Tomcat embedded in Railo. I have these settings:

        <VirtualHost *:80>
            DocumentRoot "/var/www/myapp"
            ServerName www.myapp.test
            ServerAlias www.myapp.test
            ProxyRequests Off
            ProxyPass /app !
            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>
            ProxyPreserveHost On
            ProxyPassReverse / ajp://%{HTTP_HOST}:8009/
            RewriteEngine On
            # If it's a CFML (*.cfc or *.cfm) request, just proxy it to Tomcat:
            RewriteRule ^(.+\.cf[cm])(/.*)?$ ajp://%{HTTP_HOST}:8009/$1$2 [P]

    My server.xml:

        <Host name="www.myapp.test" appBase="webapps"
              unpackWARs="true" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
            <Context path="" docBase="/var/www/myapp" />
            <Alias>myapp.test</Alias>
        </Host>

    The index page is loaded, but if I try to load some internal page I get:

        The proxy server could not handle the request GET /report/myreportname.
        Reason: DNS lookup failure for: localhost:8009report

    Could you help me?

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Receive Location

    - by Stuart Brierley
    I have previously talked about the installation of the Community ODBC adapter and also about using the ODBC adapter to generate schemas. But what about creating a receive location?

    An ODBC receive location will periodically poll the configured database using the stored procedure or SQL string defined in your request schema.

    If you need to, begin by adding a new receive port to your BizTalk configuration. Create a new receive location, select the ODBC adapter, and click Address. You will now be shown the ODBC Community Adapter Transport Properties window. Select Connection String and you will be shown the Choose Data Source window. If you have already created the Test Database source when generating a schema from ODBC, this will be shown (if not, take a look at my previous post to see how this is done). You will then need to choose the SQL command that will be run by the receive port. In this case I have deployed the Test Mapping schemas that I created previously and selected the Request schema. You should now have populated the appropriate properties for the ODBC Community Adapter. Finally, set the standard receive location properties and your ODBC receive location is ready.

    Read the article

  • Default virtual server does not work

    - by Luc
    Hello, I have 4 name-based virtual hosts in my Apache configuration, each one using proxy_http to forward requests to the correct server. They work fine.

        <VirtualHost *:80>
            ServerName application_name.domain.tld
            ProxyRequests Off
            ProxyPreserveHost On
            ProxyPass / http://server_ip/
            ProxyPassReverse / http://server_ip/
        </VirtualHost>

    I then tried to add a default NameVirtualHost to take care of the requests for which the server name does not match one of the four others; otherwise a request like some_weird_styff.domain.tld would be forwarded to one of the 4 virtual hosts. So I added this one:

        <VirtualHost *:80>
            ServerAlias "*"
            DocumentRoot /var/www/
        </VirtualHost>

    At the beginning it seemed to work fine, but at some point it appeared that requests that should be handled by one of the 4 regular hosts were "eaten" by the default one! If I a2dissite this default host, everything goes back to normal... I do not really understand this. If you have any clue... thanks a lot, Luc
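
    One detail worth knowing here: with name-based virtual hosts, Apache sends unmatched requests to whichever vhost is loaded *first*, so a catch-all does not need ServerAlias "*" at all - it only needs to sort first. A sketch (the site file name is chosen so Debian's alphabetical a2ensite ordering loads it before the others):

        # /etc/apache2/sites-available/000-default
        <VirtualHost *:80>
            ServerName catchall.invalid
            DocumentRoot /var/www/
        </VirtualHost>

    A wildcard ServerAlias, by contrast, actively matches every Host header it is tested against, which may explain the "eaten" requests whenever that vhost happens to be consulted before the intended one.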

    Read the article

  • VS 11 Beta merge tool is awesome, except for resolving conflicts

    - by deadlydog
    If you've downloaded the new VS 11 Beta and done any merging, then you've probably seen the new diff and merge tools built into VS 11. They are awesome, and by far a vast improvement over the ones included in VS 2010. There is one problem with the merge tool though, and in my opinion it is huge.

    Basically the problem with the new VS 11 Beta merge tool is that when you are resolving conflicts after performing a merge, you cannot tell what changes were made in each file where the code is conflicting. Was the conflicting code added, deleted, or modified in the source and target branches? I don't know (without explicitly opening up the history of both the source and target files), and the merge tool doesn't tell me. In my opinion this is a huge fail on the part of the designers/developers of the merge tool, as it actually forces me to either spend an extra minute for every conflict to view the source and target file history, or to go back to using the merge tool in VS 2010 to properly assess which changes I should take.

    I submitted this as a bug to Microsoft, but they say that this is intentional by design. WHAT?! So they purposely crippled their tool in order to make it pretty and keep the look consistent with the new diff tool? That's like purposely putting a little hole in the bottom of your cup for design reasons to make it look cool. Sure, the cup looks cool, but I'm not going to use it if it leaks all over the place and doesn't do the job that it is intended for. Bah! But I digress.

    Because this bug is apparently a feature, they asked me to open up a "feature request" to have the problem fixed. Please go vote up both my bug submission and the feature request so that this tool will actually be useful by the time the final VS 11 product is released.

    Read the article

  • Does it actually matter whether you have open applications when installing new software?

    - by Dan
    It seems the norm these days is for installers/setup programs to request that you close all open applications before initiating the install process for a piece of new software. I used to obediently follow these directions without fail, even though it could sometimes be frustrating having to close open documents and stop working on things just to get a new, seemingly unrelated application installed. Then at some point I simply stopped bothering. Nowadays if I have a lot of stuff going on I might even run multiple installers at the same time; I can't even recall a time it has ever posed a problem. Why do setup programs even make this request in the first place, then, when it appears to be unnecessary? Is this just to simplify troubleshooting for companies' support people? Has anyone else ever run into problems as a result of trying to install an app while other apps were open?

    Read the article

  • How to talk to a virtual host on a guest OS?

    - by Bernd
    Let's say there is a host OS (Mac OS X) and a virtual machine running Ubuntu as the guest OS. The guest OS has the IP 192.168.56.101 and some virtual hosts, e.g. ubuntu.server. So, how do I really map a request to the virtual host ubuntu.server on the guest OS? I tried:

    1. Configure the host OS in /etc/hosts to map ubuntu.server to 192.168.56.101.
    2. On the guest OS we have the trouble: it accepts the request for 192.168.56.101, which is not ubuntu.server, and therefore the ubuntu.server virtual host will never be requested - just the localhost on the guest OS.

    It might surely be possible to simply use 192.168.56.101, but this would only work for one host per guest OS. Any idea? Or is there a bug in my train of thought?
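
    For what it's worth, name-based virtual hosting is designed to solve exactly this: the browser sends the requested name in the HTTP Host header, so any number of names can share the guest's single IP. A sketch of the two pieces, assuming Apache on the guest (the second hostname is invented):

        # on the host OS, /etc/hosts -- several names point at the same guest IP
        192.168.56.101  ubuntu.server
        192.168.56.101  other.server

        # on the guest, one vhost per name; Apache picks by Host header, not by IP
        <VirtualHost *:80>
            ServerName ubuntu.server
            DocumentRoot /var/www/ubuntu.server
        </VirtualHost>

    If requests still land in the default site, the usual suspects are a missing NameVirtualHost *:80 (on Apache 2.2) or testing by IP instead of by name, since an IP-addressed request carries no matching Host header.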

    Read the article

  • apache2 mod_proxy without 301 moved permanently?

    - by Guy Sensei
    Is it possible to not send a 301 Moved Permanently response to the client when using mod_proxy? I would like the client to deal with the reverse proxy as opaquely as possible.

    My virtual host settings (relevant snippet):

        ProxyPreserveHost On
        ProxyPass /GTM http://192.168.1.27/GTM
        ProxyPassReverse /GTM http://192.168.1.27/GTM

    What I see:

        wget http://localhost/GTM
        --2011-09-27 21:54:22--  http://localhost/GTM
        Resolving localhost... ::1, 127.0.0.1
        Connecting to localhost|::1|:80... failed: Connection refused.
        Connecting to localhost|127.0.0.1|:80... connected.
        HTTP request sent, awaiting response... 301 Moved Permanently
        Location: http://localhost/GTM/ [following]
        --2011-09-27 21:54:22--  http://localhost/GTM/
        Reusing existing connection to localhost:80.
        HTTP request sent, awaiting response... 200 OK
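
    A sketch of one way to avoid the client-visible hop, under the assumption that the 301 comes from the backend adding the trailing slash to /GTM (mod_dir's DirectorySlash behavior): normalize the slash on the proxy itself with an internal rewrite, which the client never sees.

        RewriteEngine On
        # internal rewrite (no [R] flag), so no redirect reaches the client
        RewriteRule ^/GTM$ /GTM/ [PT]

        ProxyPreserveHost On
        ProxyPass /GTM/ http://192.168.1.27/GTM/
        ProxyPassReverse /GTM/ http://192.168.1.27/GTM/

    ProxyPassReverse stays in place to rewrite any Location headers the backend emits for other paths.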

    Read the article
