Search Results

Search found 5793 results on 232 pages for 'requests'.

Page 193/232

  • Django apache-wsgi configuration problem

    - by omat
    Hi, I am trying to get my Django project running on the production server. I set up the environment using pip, so it is identical to the development environment, where everything runs fine. The only difference is that I don't use virtualenv on production, because this project is the only one that will run there. Also, on production an Nginx reverse proxy serves static content and passes dynamic requests to Apache2. The Apache wsgi file is as follows:

        import sys, os
        sys.path.append('/home/project/src')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    When I access the server, I get an import error:

        ImproperlyConfigured: Error importing middleware middleware: "cannot import name UserProfile"

    This refers to the middleware.py under the src/ folder, which is referenced by the settings. But I can import both the middleware and the UserProfile class from a ./manage.py shell prompt. It looks like a path problem in the wsgi file, but I cannot see what. The directory structure is:

        /home/project
        /home/project/src   (contains settings.py, middleware.py and the app folders)
        /home/apache/apache.wsgi

    Any help is greatly appreciated. Thanks, oMat
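
    One thing worth checking, offered purely as an assumption and not something from the original post: manage.py is run from /home/project/src, so it may resolve the 'middleware' and 'settings' imports differently than the wsgi process does. A minimal sketch of a wsgi file that mirrors manage.py's path setup (only the path handling changes):

        # Hypothetical variant of the wsgi file above -- a sketch, not a confirmed fix.
        # Assumption: the import error comes from the wsgi process resolving imports
        # differently than ./manage.py does, so we put the same directories first.
        import os
        import sys

        # Insert the source directory ahead of everything else so 'settings',
        # 'middleware' and the app packages resolve as they do under manage.py.
        for path in ('/home/project', '/home/project/src'):
            if path not in sys.path:
                sys.path.insert(0, path)

        os.environ['DJANGO_SETTINGS_MODULE'] = 'settings'

        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()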

    Read the article

  • Architecture Guidance Needed?

    - by vijay
    We are about to automate a number of processes for our reporting team. (The reports are daily reports, weekly reports, monthly reports, etc.) Most of the processes involve pulling some data from Oracle and then filling in particular Excel template files. Each report, and therefore each template, is different from the others. Apart from the Excel file manipulation, there is hardly any business logic behind them. The client wants an integrated tool, with all the automated processes placed as menus/submenus. Right now there are roughly 30 processes waiting to be automated, and we are expecting more new reports in the next quarter. I am nowhere near having any practical experience when it comes to architecture. I have already been maintaining two or three systems (they are more than 4 years old) for this prestigious client, and it is very likely that the tool mentioned above will be maintained for another 3 years. From my past experience I have been through the pain of implementing change requests against a rigid and undocumented code base, resulting in the breakdown of the system and, eventually, of myself. So my main and topmost concern is maintainability. While searching I came across this link: Smart Clients Using CAB and SCSF. Is that link appropriate for my requirement? Also, should I place each automated process in a separate form under a single project, or place them in separate projects under a single solution? Please correct me if I have missed any other important information. Thx.

    Read the article

  • AJAX Partial Rendering issues for the default page in IIS 7 when using custom http module

    - by WiseGuyEh
    The problem: when I try to make an AJAX partial update request (using the UpdatePanel control) from the default page of an IIS7 web site, it fails. Instead of returning the HTML to be updated, it returns the entire page, which then causes the MS AJAX JavaScript to throw a parsing shit-fit. The suspected cause: I have narrowed it down to two factors - making an AJAX request to the default page while I have a certain custom HTTP module registered. A partial rendering request to http://localhost will fail, but a partial rendering request to http://localhost/default.aspx works fine. Also, if I remove the following line in my custom HttpModule, the AJAX partial render works correctly:

        _application.PreRequestHandlerExecute += OnPreRequestHandlerExecute;

    Weird, huh? Another weird thing: if I look at trace.axd, I can see that when a partial rendering request fails, two POST requests are logged for the one partial rendering request - one where the default.aspx page executes successfully (trace information such as page_load is logged) but no content is produced, and a second that doesn't seem to actually execute (no trace information is logged) but produces content (HTTP_CONTENT_LENGTH is greater than 0). Please help! If anyone with a good knowledge of HTTP modules or the MS AJAX HTTP module could explain why this is occurring, I would be very grateful. As it is, the obvious workaround is to just redirect to default.aspx if the request URL is "/", but I would really like to understand why this is occurring.
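
    For reference, a hedged sketch of the workaround mentioned at the end of the question (rewriting bare "/" requests to the default document early in the pipeline). The class name and wiring are illustrative, not taken from the original module:

        // Sketch only (requires System.Web). Not the original module; just one way
        // to implement the "redirect to default.aspx when the URL is /" workaround.
        public class DefaultDocumentRewriteModule : IHttpModule
        {
            public void Init(HttpApplication application)
            {
                application.BeginRequest += (sender, e) =>
                {
                    var context = ((HttpApplication)sender).Context;
                    if (context.Request.Path == "/")
                    {
                        // Make the AJAX postback target the same URL as the rendered page.
                        context.RewritePath("/default.aspx");
                    }
                };
            }

            public void Dispose() { }
        }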

    Read the article

  • Speeding up a soap powered website

    - by ChrisRamakers
    Hi all, we're currently looking into doing some performance tweaking on a website which relies heavily on a SOAP web service. But our servers are located in Belgium and the web service we connect to is located in San Francisco, so it's a long-distance connection to say the least. Our website is PHP powered, using PHP's built-in SoapClient class. On average a call to the web service takes 0.7 seconds and we are doing about 3-5 requests per page. All possible request/response caching is already implemented, so we are now looking at other ways to improve the connection speed. This is the code which instantiates the SoapClient; what I'm looking for now is other ways/methods to improve the speed of single requests. Anyone have ideas or suggestions?

        private function _createClient()
        {
            try {
                $wsdl = sprintf($this->config->wsUrl.'?wsdl', $this->wsdl);
                $client = new SoapClient($wsdl, array(
                    'soap_version'       => SOAP_1_1,
                    'encoding'           => 'utf-8',
                    'connection_timeout' => 5,
                    'cache_wsdl'         => 1,
                    'trace'              => 1,
                    'features'           => SOAP_SINGLE_ELEMENT_ARRAYS
                ));
                $header_tags = array(
                    'username' => new SOAPVar($this->config->wsUsername, XSD_STRING, null, null, null, $this->ns),
                    'password' => new SOAPVar(md5($this->config->wsPassword), XSD_STRING, null, null, null, $this->ns)
                );
                $header_body = new SOAPVar($header_tags, SOAP_ENC_OBJECT);
                $header = new SOAPHeader($this->ns, 'AuthHeaderElement', $header_body);
                $client->__setSoapHeaders($header);
            } catch (SoapFault $e) {
                controller('Error')->error($id.': Webservice connection error '.$e->getCode());
                exit;
            }
            $this->client = $client;
            return $this->client;
        }
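
    A hedged suggestion, not part of the original post: two SoapClient options that sometimes shave time off long-distance calls are response compression and stronger WSDL caching. The sketch below only changes the constructor options shown above; whether the remote service actually honours gzip is an assumption.

        // Sketch only: same constructor as above, with compression and WSDL caching
        // on disk and in memory. Assumes the remote endpoint supports gzip.
        $client = new SoapClient($wsdl, array(
            'soap_version'       => SOAP_1_1,
            'encoding'           => 'utf-8',
            'connection_timeout' => 5,
            'cache_wsdl'         => WSDL_CACHE_BOTH,
            'trace'              => 1,
            'features'           => SOAP_SINGLE_ELEMENT_ARRAYS,
            // Ask the server for gzip-compressed responses to cut transfer time.
            'compression'        => SOAP_COMPRESSION_ACCEPT | SOAP_COMPRESSION_GZIP,
        ));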

    Read the article

  • Application error passthru when using apache mod_proxy

    - by user303442
    Heyas. I'm using mod_proxy with apache2 to provide vhost ability for multiple servlet apps running on the local machine. It works fine, for the most part. Requests come into Apache and are directed to the application bound to a port on localhost. The app receives the request and responds, and the response is delivered back to the client by Apache. The problem I'm having is that the application delivers 500s on errors, and mod_proxy stomps on them. Often these errors are raised in an AJAX request and the error is handled in client-side JavaScript. For example, a call to a server-side createObject(name) might throw a NameNotUniqueException, which is delivered back as a 500. The client JavaScript might then display an appropriate error message. When an error is thrown by the application (resulting in a 500 response to mod_proxy), Apache stomps the error message and returns its stock server-side error page instead:

        500 Internal Server Error
        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.

    I want mod_proxy to pass the original 500 back through to the client. Is there a directive I've missed which prevents clobbering of the 500? TIA
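
    One directive worth checking, offered as a suggestion rather than something from the original post: mod_proxy only substitutes its own error pages for backend error responses when ProxyErrorOverride is enabled, so confirming it is Off (the default) in the proxied vhost should let the application's 500 body through. Host names and ports below are placeholders:

        # Sketch of the relevant vhost fragment.
        <VirtualHost *:80>
            ProxyPass        / http://localhost:8080/
            ProxyPassReverse / http://localhost:8080/
            # Off (the default) passes backend error bodies straight through;
            # On replaces them with Apache's own ErrorDocument pages.
            ProxyErrorOverride Off
        </VirtualHost>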

    Read the article

  • Grails UrlMappings with .html

    - by Glennn
    I'm developing a Grails web application (mainly as a learning exercise). I have previously written some standard Grails apps, but in this case I wanted to try creating a controller that would intercept all requests (including static html) of the form:

        <a href="/testApp/testJsp.jsp">test 1</a>
        <a href="/testApp/testGsp.gsp">test 2</a>
        <a href="/testApp/testHtm.htm">test 3</a>
        <a href="/testApp/testHtml.html">test 4</a>

    The intent is to do some simple business logic (auditing) each time a user clicks a link. I know I could do this using a Filter (or a range of other methods), but I thought this should work too and wanted to do it within the Grails framework. I set up the Grails UrlMappings.groovy file to map all URLs of that form (/$myPathParam?) to a single controller:

        class UrlMappings {
            static mappings = {
                "/$controller/$action?/$id?" {
                    constraints {
                    }
                }
                "/$path?" (controller: 'auditRecord', action: 'showPage')
                "500" (view: '/error')
            }
        }

    In that controller (in the appropriate "showPage" action) I've been printing out the path information, for example:

        def showPage = {
            println "params.path = " + params.path
            ...
            render(view: resultingView)
        }

    The results of the println in the showPage action for each of my four links are:

        testJsp.jsp
        testGsp.gsp
        testHtm.htm
        testHtml

    Why is the last one "testHtml", not "testHtml.html"? In a previous Stack Overflow question, Olexandr encountered this issue and was advised to simply concatenate the value of request.format, which does indeed return "html". However, request.format also returns "html" for all four links. I'm interested in gaining an understanding of what Grails is doing and why. Is there some way to configure Grails so the params.path variable in the controller shows "testHtml.html" rather than stripping off the "html" extension? It doesn't seem to remove the extension for any other file type (including .htm). Is there a good reason it's doing this? I know that it is a bit unusual to use a controller for static html, but I would still like to understand what's going on.
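
    A hedged pointer, not part of the original question: the truncation comes from Grails' content negotiation, which recognises registered extensions such as .html and strips them to populate request.format (.htm and the JSP/GSP extensions are not registered, so they survive). If the goal is to keep the raw path, one assumption worth testing is disabling that behaviour:

        // grails-app/conf/Config.groovy -- sketch, assuming a Grails 1.x style configuration.
        // When false, Grails stops stripping recognised file extensions (like .html)
        // from the request path before URL mapping, so params.path keeps "testHtml.html".
        grails.mime.file.extensions = false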

    Read the article

  • Is Software Engineering Dead? [closed]

    - by nik
    Right from Jeff's blog, Software Engineering: Dead?: "I was utterly floored when I read this new IEEE article by Tom DeMarco (pdf). See if you can tell why." He quotes DeMarco: "I'm gradually coming to the conclusion that software engineering is an idea whose time has come and gone." Further: "What DeMarco seems to be saying -- and, at least, what I am definitely saying -- is that control is ultimately illusory on software development projects." I am writing these lines without further context, to encourage reading of the related material. What are the views of the programming community here? I have started to realize that a community wiki does not get the right amount of participation here. That is the reason I left this question out in the open, while still contemplating a change to CW. It was closed once, and I thought that was the end of it. But now I see it was reopened and has more answers (not all of which I have read yet). However, I see a lot of CW requests and am forced to reconsider. This is how I intend to make the CW decision here:

        - There is a comment by Neil Butterworth requesting CW, at 12 upvotes ("should be community wiki").
        - There is a comment by Lance Roberts requesting no CW, at 0 upvotes ("+1 for not putting it in community wiki").
        - The difference is 12 in favour of a CW request at the moment.
        - If this difference grows by 5 more (that is, to 17), I'll move this question to CW, and it will not return from there.

    Of course, there is also a close vote at the moment; the question may be closed again.

    Read the article

  • Apache - "dynamic" rewrite rule

    - by Christian A. Rasmussen
    Hi there. I'm working on a Zend Framework project where I've stumbled across a bit of a problem. The problem originates from the fact that modules are second-class citizens in Zend Framework. In my project, I'd like each module to have a folder containing files which are to be accessed from the outside - files such as stylesheets, javascripts and images. Now, how is this to be done? With a Zend Framework project I have a folder structure which looks like this:

        application/
            modules/
                moduleOne/
                    public/
                        stylesheet.css
                moduleTwo/
                moduleThree/
        public/
            index.php

    The standard .htaccess file located in the public/ folder holds this:

        SetEnv APPLICATION_ENV development
        RewriteEngine On
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]
        RewriteRule ^.*$ index.php [NC,L]

    The way it works is that the project's Apache DocumentRoot is the public/ folder. All requests get redirected through the index.php file, where Zend Framework's router component takes over. Now, I'm by no means an expert with Apache or mod_rewrite, so pardon me if this is just silly. I imagine implementing an extra step in the existing rewrite rules so that a request for http://project/public/moduleOne/stylesheet.css would, for instance, resolve to /var/www/project/application/modules/moduleOne/public/stylesheet.css. So the steps which need to be done are: check whether the first segment of the URI is public/; if it is, take the next segment as the module name, use it in the path we're trying to resolve to, and attempt to serve the file. Is this at all possible, or does anyone have a better suggestion? Thank you for your time. Christian Rasmussen
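
    As a suggestion that is not from the original post: since the module folders live outside the DocumentRoot, one option is to skip mod_rewrite and map the URL pattern with mod_alias in the vhost configuration instead. This is only a sketch; the /var/www/project path is the one assumed in the question, and it presumes you can edit the vhost rather than only .htaccess.

        # Sketch for the vhost configuration (not .htaccess).
        # Maps /public/<module>/<file> onto application/modules/<module>/public/<file>.
        AliasMatch ^/public/([^/]+)/(.*)$ /var/www/project/application/modules/$1/public/$2

        # Allow Apache (2.2-style access control) to serve files from those folders.
        <DirectoryMatch "^/var/www/project/application/modules/[^/]+/public/">
            Order allow,deny
            Allow from all
        </DirectoryMatch>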

    Read the article

  • What is causing Apache2 to display PHP as plain text in this config file?

    - by rxgx
    I am trying to run PHP and Rails in the same virtual host; however, PHP is being served as plain text. When I create a test host without all the rewrites and proxying, Apache2 processes the PHP as desired. Where in my config file have I gone wrong?

        <VirtualHost *:80>
            #ServerName staging.domain.com
            #ServerAlias www.domain.com
            DocumentRoot /home/demo/vhosts/domain/public

            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>

            <Directory /home/demo/vhosts/domain/public>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>

            RewriteEngine On

            <Proxy balancer://thinservers>
                BalancerMember http://127.0.0.1:5000
                BalancerMember http://127.0.0.1:5001
                BalancerMember http://127.0.0.1:5002
            </Proxy>

            # Redirect all non-static requests to thin
            RewriteCond %{DOCUMENT_ROOT}/%{REQUEST_FILENAME} !-f
            RewriteRule ^/(.*)$ balancer://thinservers%{REQUEST_URI} [P,QSA,L]

            ProxyPass / balancer://thinservers/
            ProxyPassReverse / balancer://thinservers/
            ProxyPreserveHost on

            <Proxy *>
                Order deny,allow
                Allow from all
            </Proxy>

            # Custom log file locations
            ErrorLog /home/demo/vhosts/domain/log/error.log
            CustomLog /home/demo/vhosts/domain/log/access.log combined
        </VirtualHost>
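
    A hedged observation, not part of the original question: the blanket ProxyPass / sends every request, including .php files, to the thin balancer regardless of the rewrite conditions, so mod_php never gets a chance to run. One sketch of a fix is to exempt the PHP entry points from proxying before the catch-all; the /index.php and /php-app paths here are assumptions.

        # Sketch only: exclusions must come before the general ProxyPass.
        # The "!" form tells mod_proxy not to proxy matching paths.
        ProxyPass /index.php !
        ProxyPass /php-app !

        ProxyPass / balancer://thinservers/
        ProxyPassReverse / balancer://thinservers/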

    Read the article

  • Django + jQuery: Sometimes AJAX, but always DRY?

    - by Justin Myles Holmes
    Let's say I have an app (in Django) for which I want to sometimes (but not always) load content via ajax. An easy example is logging in. When the user logs in, I don't want to refresh the page, just change things around. Yet, if they are already logged in, and then arrive at (or refresh) the same page, I want it to show the same content. So, in the first case, obviously I do some sort of ajax login and load changes to the page accordingly. Easy enough. But what about in the second case? Do I go back through and add {% if user.authenticated %} all over the place? This seems cold, dark, and WET. On the other hand, I can just wrap all the ajaxy stuff in a javascript function, called loggedIn(), and run that if the user is authenticated. But then I'm faced with two http requests instead of one. Also undesirable. So what's the standard solution here?
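
    A common answer, offered here only as a hedged sketch rather than the canonical one: keep the logged-in markup in a single template fragment, let the full page include it on normal requests, and return just that fragment from the AJAX login view. The view and template names below are invented for illustration.

        # views.py -- illustrative names, assuming a fragment "_user_box.html"
        # that is {% include %}-ed by the full page template.
        from django.shortcuts import render
        from django.template.loader import render_to_string
        from django.http import HttpResponse

        def page(request):
            # Full page render: the page template pulls in the fragment itself,
            # so the logged-in markup is written exactly once.
            return render(request, "page.html")

        def ajax_login(request):
            # ... authenticate the user here ...
            # Then return only the fragment; the jQuery success handler swaps it in.
            html = render_to_string("_user_box.html", {"user": request.user})
            return HttpResponse(html)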

    Read the article

  • protecting grails melody with grails filter

    - by batmannavneet
    I have an application where I am using Spring Security along with grails melody. I am planning to run grails melody in the production environment, but I don't want visitors to have access to it. How should I achieve that? I tried creating a filter in Grails (just showing a sample of what I am trying, not the actual code):

        def filters = {
            allURIs(uri: '/**') {
                before = {
                    //...
                    if (request.forwardURI.indexOf("admin") != -1 ||
                        request.forwardURI.indexOf("monitoring") != -1) {
                        response.sendError 404
                        return false
                    }
                }
            }
        }

    But this doesn't work, as the request for "monitoring" doesn't hit this filter. I don't even want the user to know that such a URL exists, so I want the filter to return the 404 error page when "monitoring" is the URL. That's also the reason why I don't want to protect this URL with Spring Security, as it would show an "access denied" page. Basically I want the URLs to exist but be invisible to users; access should be open only to certain IP addresses for these special URLs. On another note, is it possible to write a Grails filter that acts before the Spring Security filter? I want to be able to do some filtering before I forward requests to Spring Security. Writing a Grails filter like the one above doesn't help: the Spring Security filter gets hit first if I access a protected resource, and this filter doesn't get called. Thanks

    Read the article

  • Return and Save XML Object From Sharepoint List Web Service

    - by HurnsMobile
    I am trying to populate a variable with an XML response from an AJAX call on page load, so that on keyup I can filter through that list without making repeated GET requests (think very rudimentary autocomplete). The trouble I am having seems potentially related to variable scoping, but I am fairly new to js/jQuery so I am not quite certain. The following code doesn't do anything on keyup, and adding alerts tells me that leadResults() does execute on keyup and that the variable holds an XML response object, but it appears to be empty. The strange bit is that if I move the leadResults() call into the getResults() function, the UL is populated with the results correctly. I'm beating my head against the wall on this one, please help!

        var resultsXml;

        $(document).ready(function() {
            var leadLookupCaml = "<Query> \
                                    <Where> \
                                      <Eq> \
                                        <FieldRef Name=\"Lead_x0020_Status\"/> \
                                        <Value Type=\"Text\">Active</Value> \
                                      </Eq> \
                                    </Where> \
                                  </Query>";

            $().SPServices({
                operation: "GetListItems",
                webURL: "http://sharepoint/departments/sales",
                listName: "Leads",
                CAMLQuery: leadLookupCaml,
                CAMLRowLimit: 0,
                completefunc: getResults
            });
        });

        $("#lead_search").keyup(function(e) {
            leadResults();
        });

        function getResults(xData, status) {
            resultsXml = xData;
        }

        function leadResults() {
            xData = resultsXml;
            $("#lead_results li").remove();
            $(xData.responseXML).find("z\\:row").each(function() {
                var selectHtml = "<li>" +
                    "<a href=\"http://sharepoint/departments/sales/Lists/Lead%20Groups/DispForm.aspx?ID=" +
                    $(this).attr("ows_ID") + ">" +
                    $(this).attr("ows_Title") + " : " +
                    $(this).attr("ows_Phone") + "</a></li>";
                $("#lead_results").append(selectHtml);
            });
        }
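
    A hedged reading of the symptom, not from the original post: the keyup handler is attached outside $(document).ready, so if the script runs before #lead_search exists the handler is never bound, and the SPServices call is asynchronous, so resultsXml stays undefined until completefunc fires. A minimal sketch of the adjustment:

        // Sketch: bind the keyup handler once the DOM is ready, and guard against
        // the asynchronous SPServices call not having completed yet.
        $(document).ready(function () {
            $("#lead_search").keyup(function () {
                if (resultsXml) {        // completefunc has stored the response
                    leadResults();
                }
            });
        });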

    Read the article

  • Password Confirmation Overlay

    - by Alasdair
    Hello, I'm creating a J2EE web application that uses jQuery and Ajax to help with some of the presentation for a user-friendly interface. I've done a lot of work ensuring security around persistent login cookies, and I've decided to request the password from any user who logged in using a persistent login cookie before they are allowed to make any changes that could be malicious. This request only happens once, to confirm the user is who they say they are, and lasts throughout the session. At present, any request that meets this criterion has its request information stored in the session and the user is forwarded to a page to confirm their password. Once confirmed, the user's original request is performed and the request information is removed from the session. What I would like to do is avoid all this redirection and minimize what's held in the session (even if it's just for a short time), thus improving usability and convenience for the user. I believe that a jQuery overlay could allow me to prompt the user for their password (if required) and then continue to submit the request if successful. I would originally have used ThickBox, but since that's now deprecated I don't see the benefit in implementing it at this stage of development. However, I have tried to create an overlay using jQuery and I've scrapped every attempt, as I can't seem to make it all come together. My main problem is preventing the submission when the user types an incorrect password or cancels the overlay. Desired flow: Persistent Login -> Sensitive Page Submit -> Password Confirmation Overlay -> (Continue Submit | Cancel | Incorrect). I have already created JavaScript code to encrypt the password to be sent in a parameter; all I need now is a way of controlling the overlay and the best way to use Ajax for this purpose. Please ignore the fact that this is a J2EE web application when answering, as it is really irrelevant. Thanks in advance, Alasdair
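
    A minimal sketch of the interception pattern, under the assumption that the sensitive action is a normal form post and that a server-side /confirm-password endpoint exists (the form id, overlay ids and URL are all invented): block the submit, show the overlay, and only resubmit once the password check succeeds.

        // Sketch only; selectors and the /confirm-password URL are assumptions.
        $("#sensitive-form").submit(function (e) {
            var form = this;
            if ($(form).data("confirmed")) {
                return;                       // password already verified this session
            }
            e.preventDefault();               // hold the original request
            $("#password-overlay").show();

            $("#password-overlay .ok").off("click").on("click", function () {
                $.post("/confirm-password", { password: $("#overlay-password").val() })
                    .done(function () {
                        $(form).data("confirmed", true);
                        $("#password-overlay").hide();
                        form.submit();        // continue the original submission
                    })
                    .fail(function () {
                        $("#password-overlay .error").text("Incorrect password.").show();
                    });
            });

            $("#password-overlay .cancel").off("click").on("click", function () {
                $("#password-overlay").hide();   // abandon the submission
            });
        });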

    Read the article

  • Tomcat 4.1.31 - HTTPS not working intermittently | "Page Cannot be Displayed" problems

    - by cedar715
    We are facing this error intermittently. If we restart the server it works for some time, and then the problem starts again. We also have another load-balanced server with a similar configuration, and that one is working fine. The server is running on a Linux box. If we do a "ps -ef", it lists the Tomcat process. URL: https://xyz.abc.com:9234/axis/servlet/AxisServlet. Following is the configuration in the server.xml file:

        <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
                   port="9234" minProcessors="5" maxProcessors="75"
                   enableLookups="true" acceptCount="100" debug="0"
                   scheme="https" secure="true" useURIValidationHack="false"
                   disableUploadTimeout="true">
            <Factory className="org.apache.coyote.tomcat4.CoyoteServerSocketFactory"
                     clientAuth="false" protocol="TLS" />
        </Connector>

    Is it a problem with our load balancer, which forwards most requests to this server? Is it in any way related to the "maxProcessors" or "acceptCount" attributes defined in the above configuration? Is it a problem with the port number? Does it have anything to do with the certificate? The certificate was generated using the Java keytool. (However, the other load-balanced server is also using the same certificate and working fine.) Please suggest how to resolve this issue. Thanks

    Read the article

  • Designing code for a duo Website + AIR desktop App

    - by faB
    I want to use AIR to create an OFFLINE version of a "webapp"-style website (lots of ajax, front-end code). Not having been much further than the HelloWorld example, I keep wondering: how do you design your code to maximize reuse between the website (say in PHP, Java or .NET) and the AIR app? Can you actually reuse 100% of the front-end code, provided that it is designed to account for the AIR app? How would you go about doing that? For example, the website makes many Ajax calls, which have latency, and uses listeners. The AIR app doesn't need listeners; it could run database requests synchronously, and it doesn't need to run ajax calls, right? Would you write an abstraction layer for that, so that the same calls in the AIR app do not perform an xmlhttp request but instead implement the server-side code within AIR and call the listener? That way you wouldn't have to rewrite the front-end code patterns. Does this make sense? It's really hard to search for this on Google. I'm thinking there must be a good article somewhere by somebody who has been through this, and perhaps a framework to do it?
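
    One way to picture the abstraction layer the question describes, as a hedged sketch with invented names: keep a single data-access interface with the same asynchronous callback shape everywhere, and swap in an implementation per environment, so the rest of the front-end code never changes.

        // Sketch: same interface, two implementations. All names are illustrative;
        // readOrdersFromLocalDb() and renderOrders() are hypothetical helpers.
        var DataService;

        if (window.runtime) {                       // running inside AIR
            DataService = {
                getOrders: function (callback) {
                    // Read from a local store but still call back asynchronously,
                    // so callers can't tell the difference from the Ajax path.
                    var rows = readOrdersFromLocalDb();
                    setTimeout(function () { callback(rows); }, 0);
                }
            };
        } else {                                    // running in the browser
            DataService = {
                getOrders: function (callback) {
                    $.getJSON('/orders', callback); // normal Ajax call to the server
                }
            };
        }

        // Callers are identical in both environments:
        DataService.getOrders(function (orders) { renderOrders(orders); });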

    Read the article

  • Integrating legacy Ajax script in CakePHP

    - by octavian
    Hi, I have a legacy script that I would like to integrate into a CakePHP app. The script makes use of $_POST and such, and since I'm quite a noob I need some help with the integration. Here is what the script looks like. The JavaScript: prototype.js and builder.js (these two are from the Prototype framework) and lib.js (which makes ajax requests to remote.php). The PHP: remote.php (contains the FastJSON class and the $_POST handling):

        if ($_POST['cmd'] == 'SAVETEAM' && $_POST['info']) {
            $INFO = json_decode(str_replace('\"', '"', $_POST['info']));
            $nr = 1;
            $SORT = array($INFO->GK, $INFO->DEF, $INFO->MID, $INFO->FOR, $INFO->RZ);
            foreach ($SORT as $STD)
                foreach ($STD as $v)
                    mysql_query("UPDATE players_teams SET fieldposition = ".$nr++."
                                 WHERE player_id = {$v->player_id} AND team_id = {$v->team_id}")
                        or die(mysql_error());
            // CAPTAIN
            mysql_query("UPDATE `teams` SET captain = '{$_POST['captain']}'
                         WHERE `user_id` = {$_POST['userid']}") or die(mysql_error());
        }

    transfers.php contains the form that uses the JavaScript and links to the JS files. I have really no idea how to structure the files and calls in CakePHP. Currently I get "Undefined index: cmd [APP/vendors/remote.php, line 230]" errors, since I use $_POST['cmd'] (I placed remote.php in vendors and included it; the JS was just included the old-fashioned way, as a link, and appears in the source code). How can I make this work? I'm sorry, but I'm not familiar with AJAX and Cake... If you want a full look at the code, here it is: http://octavian.be/thecode.zip Thank you for reading and helping me out.

    Read the article

  • JSESSIONID collision between two servers on same ip but different ports

    - by Steve Armstrong
    I've got a situation where I have two different webapps running on a single server, using different ports. They're both running Java's Jetty servlet container, so they both use a cookie parameter named JSESSIONID to track the session id. These two webapps are fighting over the session id:

        1. Open a Firefox tab and go to WebApp1.
        2. WebApp1's HTTP response has a Set-Cookie header with JSESSIONID=1.
        3. Firefox now sends a Cookie header with JSESSIONID=1 in all its HTTP requests to WebApp1.
        4. Open a second Firefox tab and go to WebApp2.
        5. The HTTP request to WebApp2 also has a Cookie header with JSESSIONID=1, but in the doGet, when I call req.getSession(false) I get null. And if I call req.getSession(true) I get a new Session object, but then the HTTP response from WebApp2 has a Set-Cookie header with JSESSIONID=20.
        6. Now WebApp2 has a working session, but WebApp1's session is gone. Going to WebApp1 will give me a new session, blowing away WebApp2's session.
        7. Continue forever.

    So the sessions are thrashing between the two webapps. I'd really like req.getSession(false) to return a valid session if there's already a JSESSIONID cookie defined. One option is to basically reimplement the session framework with a HashMap and cookies called WEBAPP1SESSIONID and WEBAPP2SESSIONID, but that sucks, and it means I'll have to hack the new session stuff into ActionServlet and a few other places. This must be a problem others have encountered. Is Jetty's HttpServletRequest.getSession(boolean) just crappy?

    Read the article

  • Maintaining a pool of DAO Class instances vs doing new operator

    - by Fazal
    We have been trying to benchmark our application performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java). But we ran a test anyway, comparing the newInstance approach against maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object pool code. The objects represent tables with about 50 fields, all of type String. Can someone share their thoughts on this issue, as I am now more confused about whether object pooling of at least some DAO instances is the better option? The pool size, as I see it right now, should be large enough to cover the size of an average request. There is a flip side in that my memory footprint will go up, but I am beginning to wonder if this kind of idea makes sense, at least for some of the DAO entities representing tables of about 50 or more columns. Please share your ideas, and let me know if this has been tried by someone or whether I am missing some point here.
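
    A hedged illustration of the comparison being described, not the original benchmark: gaps of that size sometimes come from looking the class up inside the loop or from measuring un-warmed iterations, so a sketch like the one below separates those costs. The Dao class is a trivial stand-in for one of the 50-field DAO classes.

        // Sketch of a micro-benchmark shape; JIT warm-up is ignored for brevity.
        public class CreationBench {
            public static class Dao { public Dao() { } }

            public static void main(String[] args) throws Exception {
                int iterations = 200000;
                java.lang.reflect.Constructor<Dao> ctor = Dao.class.getConstructor();

                long t0 = System.nanoTime();
                for (int i = 0; i < iterations; i++) {
                    Dao d = Dao.class.newInstance();          // reflective, per-call checks
                }
                long t1 = System.nanoTime();
                for (int i = 0; i < iterations; i++) {
                    Dao d = ctor.newInstance();               // cached Constructor lookup
                }
                long t2 = System.nanoTime();
                for (int i = 0; i < iterations; i++) {
                    Dao d = new Dao();                        // plain allocation, for scale
                }
                long t3 = System.nanoTime();

                System.out.printf("Class.newInstance:       %d ms%n", (t1 - t0) / 1000000);
                System.out.printf("Constructor.newInstance: %d ms%n", (t2 - t1) / 1000000);
                System.out.printf("new Dao():               %d ms%n", (t3 - t2) / 1000000);
            }
        }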

    Read the article

  • UIWebView: webViewDidStartLoad/webViewDidFinishLoad delegate methods not called when loading certain URLs

    - by Dia
    I have a basic web browser implemented using a UIWebView. I've noticed that for some pages, none of the UIWebViewDelegate methods are called. An example page on which this happens is http://www.youtube.com/user/google. Here are the steps to reproduce the issue (make sure you insert NSLog calls in your controller's UIWebViewDelegate methods):

        1. Load the above YouTube URL into the UIWebView [notice that here, the UIWebViewDelegate methods do get called when the page loads].
        2. Touch the "Uploads" category on the page.
        3. Touch any video in that category [issue: notice that a new page is loaded, but none of the UIWebView delegate methods are called].

    I know that this is not an issue of the UIWebView's delegate not being set properly, since the delegate methods do get invoked when loading other links (e.g. if you click on a link that takes you outside of YouTube, you'll notice the delegate methods getting called). My gut feeling initially was that it might be because the page is loaded using AJAX, which may not invoke the delegate methods. But then I checked Safari, and it did not exhibit this problem, so it must be something on my side. I've also noticed that Three20's TTWebController has exactly the same issue. The problem that arises from this is that without the delegate methods being called, I'm unable to update the UI to enable/disable the back and forward browsing buttons when new requests are loaded. Any idea why this is happening, or how I can work around it to update the UI when a new request is made?

    Read the article

  • SSRS2005 timeout error

    - by jaspernygaard
    Hi, I've been running around in circles for the last 2 days trying to figure out a problem in our customer's live environment. I figured I might as well post it here, since Google gave me very limited information on the error message (5 results, to be exact). The error boils down to a timeout when requesting a certain report in SSRS 2005, when a certain parameter is used. The deployment scenario is:

        Machine #1: running Reporting Services (SQL 2005, W2K3, IIS 6)
        Machine #2: running the data warehouse database (SQL 2005, W2K3), which is the data source for #1

    Both machines are running on the same VM cluster and LAN. The report requests a fairly simple SP - let's call it sp(param $a, param $b). When requested with param $a filled, it executes correctly. When using param $b, it times out after the global timeout period has passed. If I run the stored procedure with param $b directly from SQL Management Studio on #2, it returns the results perfectly fine (within 3-4 s). I've profiled the data warehouse database on #2 and, when param $b is used, the query from the Reporting Service never reaches #2. The error message I get upon timeout when using param $b, when invoking the report directly from the SSRS web interface, is: "An error has occurred during report processing. Cannot read the next data row for the data set DataSet. A severe error occurred on the current command. The results, if any, should be discarded. Operation cancelled by user." The ExecutionLog for SSRS doesn't give me much information besides the error message rsProcessingAborted. I'm running out of ideas for how to nail this problem, so I would greatly appreciate any comments, suggestions or ideas. Thanks in advance!

    Read the article

  • latin1/unicode conversion problem with ajax request and special characters

    - by mfn
    The server is PHP5 and the HTML charset is latin1 (iso-8859-1). With regular form POST requests there's no problem with "special" characters like the em dash (–), for example. Although I don't know exactly why, it works - probably because there exists a representable character for the browser at char code 150 (which is what I see in PHP on the server, via ord, for a literal em dash). Now our application also provides a kind of preview mechanism via ajax: the text is sent to the server and complete HTML for a preview is sent back. However, the ordinary char code 150 dash character, when sent via ajax (tested with GET and POST), mutates into something more: %E2%80%93. I see this already in the Apache log. According to various sources I found, e.g. http://www.tachyonsoft.com/uc0020.htm, this is the UTF-8 byte representation of the dash, and my current understanding is that JavaScript handles everything in Unicode. However, within my app I need everything in latin1. Simply said: just like a regular POST request would give me that dash as char code 150, I need that for the UTF-8 representation too. That's where I'm failing, because on the server, when I try to decode it with either utf8_decode(...) or iconv('UTF-8', 'iso-8859-1', ...), in both cases I get a regular ? representing this character (and iconv also throws a notice: "Detected an illegal character in input string"). My goal is to find an automated solution, but maybe I'm trying to be überclever in this case? I've found other people simply doing manual replacement with a predefined input/output set, but that would always give me the feeling I could lose characters. The observant reader will note that I'm behind on understanding the full impact and complexity of Unicode and character conversion, and I definitely prefer to understand the thing as a whole rather than a simple manual mapping. Thanks
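
    A hedged explanation and sketch, not from the original post: char code 150 is a dash in Windows-1252 but not in true ISO-8859-1 (where 0x80-0x9F are control codes), which is why converting the UTF-8 bytes to iso-8859-1 fails while the browser happily handles the form-posted byte. Converting to Windows-1252 instead, with transliteration as a fallback, is one way to test that theory:

        // Sketch: convert the UTF-8 text from the ajax request into Windows-1252,
        // the superset of latin1 the app is effectively using. //TRANSLIT asks
        // iconv to approximate anything that still has no mapping.
        $text  = $_POST['text'];                      // UTF-8 from the ajax call
        $latin = iconv('UTF-8', 'Windows-1252//TRANSLIT', $text);
        if ($latin === false) {
            // Fall back to the original text (or log it) rather than losing the request.
            $latin = $text;
        }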

    Read the article

  • Mac CoreLocation Services does not ask for permissions

    - by Ryan Nichols
    I'm writing a Mac app that needs to use Core Location services. The code and location work fine, as long as I manually authorize the service in the Security preference pane. However, the framework does not automatically pop up a permission dialog. The documentation states: "Important: The user has the option of denying an application's access to the location service data. During its initial uses by an application, the Core Location framework prompts the user to confirm that using the location service is acceptable. If the user denies the request, the CLLocationManager object reports an appropriate error to its delegate during future requests." I do get an error in my delegate, and the value of +locationServicesEnabled on CLLocationManager is correct. The only part missing is the prompt asking the user for permission. This occurs on my development MBP and a friend's MBP; neither of us can figure out what's wrong. Has anyone run into this? Relevant code:

        _locationManager = [CLLocationManager new];
        [_locationManager setDelegate:self];
        [_locationManager setDesiredAccuracy:kCLLocationAccuracyKilometer];
        ...
        [_locationManager startUpdatingLocation];

    UPDATE - Answer: It seems there is a problem with sandboxing, in which the Core Location framework is not allowed to talk to com.apple.CoreLocation.agent. I suspect this agent is responsible for prompting the user for permission. If you add the Location Services entitlement (com.apple.security.personal-information.location), it only gives your app the ability to use the CL framework; you also need access to the Core Location agent in order to ask the user for permission. You can give your app that access by adding the entitlement 'com.apple.security.temporary-exception.mach-lookup.global-name' with a value of 'com.apple.CoreLocation.agent'. Users will then be prompted for access automatically, as you would expect. I've already filed a bug with Apple on this.
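
    For reference, a sketch of how the two entitlements described in the update might sit together in the app's .entitlements plist; the app-sandbox key is included on the assumption that the app is sandboxed, as the update implies:

        <key>com.apple.security.app-sandbox</key>
        <true/>
        <key>com.apple.security.personal-information.location</key>
        <true/>
        <key>com.apple.security.temporary-exception.mach-lookup.global-name</key>
        <array>
            <string>com.apple.CoreLocation.agent</string>
        </array>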

    Read the article

  • Browser: Cookie lost on refresh

    - by Nirmal
    I am experiencing strange behaviour in my application in the Chrome browser (no problem with other browsers). When I refresh a page, the cookie is usually sent properly, but intermittently the browser doesn't seem to pass the cookie on some refreshes. This is how I set my cookie:

        $identifier = /* some weird string */;
        $key = md5(uniqid(rand(), true));
        $timeout = number_format(time(), 0, '.', '') + 43200;
        setcookie('fboxauth', $identifier . ":" . $key, $timeout, "/", "fbox.mysite.com", 0);

    This is what I am using for the page headers:

        header("Last-Modified: " . gmdate("D, d M Y H:i:s") . " GMT");
        header("Cache-Control: no-cache, must-revalidate"); // HTTP/1.1
        header("Expires: Thu, 25 Nov 1982 08:24:00 GMT");   // Date in the past

    Do you see any issue here that might affect the cookie handling? Thank you for any suggestions. EDIT-01: It seems that the cookie is not being sent with some requests. This happens intermittently, and I am now seeing this behaviour in ALL browsers. Has anyone come across such a situation? Is there any situation where a cookie will not be sent with the request? Thanks again for any guidance.

    Read the article

  • A way to specify a different host in an SSH tunnel from the host in use

    - by Tom
    I am trying to set up an SSH tunnel to access Beanstalk (to bypass an annoying proxy server). I can get this to work, but with one caveat: I have to map my Beanstalk host name (username.svn.beanstalkapp.com) to 127.0.0.1 in my hosts file (and use the IP in place of the domain when setting up the tunnel). The reason (I think) is that I am creating the tunnel using the local SSH instance (on Snow Leopard), and if I use localhost or 127.0.0.1 when talking to Beanstalk, it rejects the authorisation credentials. I believe this is because Beanstalk uses the hostname specified in a request to determine which account the username/password combination should be checked against. If localhost is used, I think this information is missing (in the form Beanstalk requires) from the requests. At the moment I dig the IP for username.svn.beanstalkapp.com, map username.svn.beanstalkapp.com to 127.0.0.1 in my hosts file, and then for the tunnel I use the command:

        ssh -L 8080:ip:443 -p 22 -l tom -N 127.0.0.1

    I can then tell Subversion that the repo is located at:

        https://username.svn.beanstalkapp.com:8080/repo-name

    This uses my tunnel, and the username and password are accepted. So my question is whether there is an option when setting up the SSH tunnel which would mean I wouldn't have to use my hosts-file workaround?

    Read the article

  • MEF and ASP.NET MVC

    - by denis_n
    I want to use MEF with ASP.NET MVC. I wrote the following controller factory:

        public class MefControllerFactory : DefaultControllerFactory
        {
            private CompositionContainer _Container;

            public MefControllerFactory(Assembly assembly)
            {
                _Container = new CompositionContainer(new AssemblyCatalog(assembly));
            }

            protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
            {
                if (controllerType != null)
                {
                    var controllers = _Container.GetExports<IController>();
                    var controllerExport = controllers.Where(x => x.Value.GetType() == controllerType).FirstOrDefault();
                    if (controllerExport == null)
                    {
                        return base.GetControllerInstance(requestContext, controllerType);
                    }
                    return controllerExport.Value;
                }
                else
                {
                    throw new HttpException((Int32)HttpStatusCode.NotFound,
                        String.Format(
                            "The controller for path '{0}' could not be found or it does not implement IController.",
                            requestContext.HttpContext.Request.Path
                        )
                    );
                }
            }
        }

    In Global.asax.cs I'm setting my controller factory:

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            ControllerBuilder.Current.SetControllerFactory(
                new MefControllerFactory.MefControllerFactory(Assembly.GetExecutingAssembly()));
        }

    I have an area:

        [Export(typeof(IController))]
        [PartCreationPolicy(CreationPolicy.NonShared)]
        public class HomeController : Controller
        {
            private readonly IArticleService _articleService;

            [ImportingConstructor]
            public HomeController(IArticleService articleService)
            {
                _articleService = articleService;
            }

            //
            // GET: /Articles/Home/
            public ActionResult Index()
            {
                Article article = _articleService.GetById(55);
                return View(article);
            }
        }

    IArticleService is an interface; there is a class which implements IArticleService and exports it. It works. Is this everything I need for working with MEF? How can I skip setting PartCreationPolicy and ImportingConstructor for controllers? I want to set my dependencies using the constructor. When PartCreationPolicy is missing, I get the following exception: "A single instance of controller 'MvcApplication4.Areas.Articles.Controllers.HomeController' cannot be used to handle multiple requests. If a custom controller factory is in use, make sure that it creates a new instance of the controller for each request."

    Read the article
