Search Results

Search found 16467 results on 659 pages for 'request filtering'.


  • AIX specific socket programming query

    - by kumar_m_kiran
    Hi All,

    Question 1: From the SUSE man pages I get the following description of connect(): "If the initiating socket is connection-mode, then connect() shall attempt to establish a connection to the address specified by the address argument. If the connection cannot be established immediately and O_NONBLOCK is not set for the file descriptor for the socket, connect() shall block for up to an unspecified timeout interval until the connection is established. If the timeout interval expires before the connection is established, connect() shall fail and the connection attempt shall be aborted. If connect() is interrupted by a signal that is caught while blocked waiting to establish a connection, connect() shall fail and set errno to [EINTR], but the connection request shall not be aborted, and the connection shall be established asynchronously."

    Question: Is the above also valid for AIX (especially the connection timeout, timed wait, etc.)? I do not see it in the AIX man pages (5.1 and 5.3).

    Question 2: I have a client socket with these attributes: (a) SO_RCVTIMEO and SO_SNDTIMEO set to 5 seconds; (b) AF_INET and SOCK_STREAM; (c) SO_LINGER with linger on and a time of 5 seconds; (d) SO_REUSEADDR set. Note that the client socket is not O_NONBLOCK.

    Question: Since O_NONBLOCK is not set and SO_RCVTIMEO/SO_SNDTIMEO are 5 seconds: (a) is connect() blocking or non-blocking? (b) If blocking, is it a timed block or an "infinite" block? (c) If infinite, how do I make a blocking connect() time out after t seconds?

    Sorry if the questions are very naive. Thanks in advance for your input.
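    Not AIX-specific, but the standard portable recipe for (c) is to make the socket non-blocking, call connect(), and wait with select() for at most t seconds. A minimal sketch of the technique in Python (the question's context is C, where the same sequence applies; names and error handling here are illustrative only):

        # Sketch: emulate a blocking connect() with an explicit timeout by
        # doing a non-blocking connect and waiting in select().
        import errno
        import select
        import socket

        def connect_with_timeout(host, port, timeout):
            s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            s.setblocking(False)
            err = s.connect_ex((host, port))  # returns immediately
            if err not in (0, errno.EINPROGRESS, errno.EWOULDBLOCK):
                s.close()
                raise OSError(err, "connect failed")
            # Writable means the three-way handshake finished (or failed).
            _, writable, _ = select.select([], [s], [], timeout)
            if not writable:
                s.close()
                raise TimeoutError("connect timed out")
            err = s.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR)
            if err != 0:
                s.close()
                raise OSError(err, "connect failed")
            s.setblocking(True)  # back to normal blocking I/O
            return s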

    Read the article

  • git submodule pull and commit automatically on webserver

    - by Lukas Oppermann
    I have the following setup: I am working on a project "project" which has the submodule "submodule". Whenever I push changes to GitHub, it sends a POST request to update.php on the server, and that PHP file executes a git command. Without submodules I can just do a git pull and everything is fine, but with submodules it is much more difficult. This is what I have at the moment, but it does not do what I want; it should pull the repo and then update and pull the latest version of each submodule:

        <?php
        echo `git submodule foreach 'git checkout master; git pull; git submodule update --init --recursive; git commit -m "updating"' \
            && git pull \
            && git submodule foreach 'git add -A .' \
            && git commit -m "updating to latest version including submodules" 2>&1`;

    EDIT: Okay, I got it halfway done:

        <?php
        echo `git submodule foreach 'git checkout master; git pull; git submodule update --init --recursive; git commit -am "updating"; echo "updated"' \
            && git pull \
            && git commit -am "updating to latest version including submodules" \
            && echo 'updated'`;

    The echo prevents the script from stopping because of a non-zero return code. It works 100% fine when I run it from the console using php update.php. When GitHub requests the file, or I run it from the browser, it still does not work. Any ideas?
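    As an aside, one way to make failures easier to see is to run the update as discrete commands and check each exit status, rather than one long backtick string. A rough sketch of that idea (Python stand-in for the PHP handler; the repo path is hypothetical):

        # Sketch: run each git step separately so a failing step is visible
        # in the output instead of being swallowed by the shell pipeline.
        import subprocess

        REPO = "/var/www/project"  # hypothetical checkout path

        def run(*cmd):
            result = subprocess.run(cmd, cwd=REPO, capture_output=True, text=True)
            print(" ".join(cmd), "->", result.returncode, result.stderr.strip())
            return result.returncode == 0

        def update():
            run("git", "pull")
            run("git", "submodule", "update", "--init", "--recursive")
            # bring every submodule to its branch tip, like the foreach above
            run("git", "submodule", "foreach",
                "git checkout master && git pull")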

    Read the article

  • Activator.CreateInstance uses a huge amount of memory

    - by Marco
    I have been playing a bit with Silverlight, trying to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and, upon a user request, creates an instance of a XAML user control and adds it to the main container, in a sort of MEF approach, so that I can have an extensible and pluggable application. The application is pretty huge, and to keep the performance and initial loading acceptable I have built some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything ran smoothly without any problem so far. Switching to SL 4.0, I have noticed that when the process gets to creating the instances of the user controls using Activator.CreateInstance, the layout freezes unexpectedly for a minute and sometimes more. Looking at the task manager, the memory usage of IE jumps from 50 MB to 400 MB and sometimes to 1.5 GB. If the process doesn't take that long, the layout is rendered properly and the memory falls back to 50 MB; otherwise everything crashes due to an out-of-memory exception. Has anybody encountered the same problem, or does anybody have a solution to this tricky issue?

    Read the article

  • Best way to track the stages of a form across different controllers - $_GET or routing

    - by chrisj
    Hi, I am in a bit of a dilemma about how best to handle the following situation. I have a long registration process on a site, with around 10 form sections to fill in. Some of these forms relate specifically to the user and their own personal data, while most relate to the user's pets. My current setup handles user-specific forms in a User_Controller (e.g. via methods like user/profile, user/household, etc.), and similarly the pet-related forms are handled in a Pet_Controller (e.g. pet/health). Whether or not all of these methods should be combined into a single Registration_Controller, I'm not sure - I'm open to any advice on that. Anyway, my main issue is that I want to generate a progress bar which shows how far along in the registration process each user is. As the URLs in each form section can potentially map to different controllers, I'm trying to find a clean way to extract which stage a person is at in the overall process. I could just use the query string to pass a stage parameter with each request, e.g. user/profile?stage=1. Another way to do it potentially is to use routing - e.g. the URLs for each section of the form could be set up as registration/stage/1, registration/stage/2 - then I could just map these URLs to the appropriate controller/method behind the scenes (see the sketch below). If this makes any sense at all, does anyone have any advice for me?
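    To make the routing idea concrete, here is a tiny sketch of the mapping (Python pseudocode, all names hypothetical): each registration/stage/N URL resolves to the controller/method that already handles that form, and the progress bar falls straight out of the stage index.

        # Sketch: map stage numbers to existing controller/method pairs.
        STAGES = {
            1: ("user", "profile"),
            2: ("user", "household"),
            3: ("pet", "health"),
            # ... through stage 10
        }

        def dispatch(stage):
            controller, method = STAGES[stage]
            progress = int(stage / len(STAGES) * 100)  # % for the progress bar
            return controller, method, progress

        print(dispatch(3))  # ('pet', 'health', 100) with this 3-entry table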

    Read the article

  • Getting Started with Facebook API

    - by Btibert3
    I have a friend who owns a small business and has a Page on Facebook. I want to help her manage it from a marketing perspective, and figure it may be best to do so through their API. I have skimmed their API documentation and have a basic working knowledge of Python. What I can't figure out is whether I can access the page's data with Python and grab the data on wall posts, who liked posts, etc. Is this possible? I can't find a decent tutorial for someone who is new to programming. To provide context, I have been scraping the Twitter Search API for some time now, and I am hoping there is something similar (request certain data elements, and have them returned as structured data I can analyze). I find that API extremely straightforward, but for Facebook I don't know where to begin. I don't want to create an application; I simply want to access the data that is related to my friend's page. I am hoping to find some decent tutorials and help on what I will need to get started. Any help you can provide will be greatly appreciated.
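    For what it's worth, the Facebook analogue of the Twitter Search workflow is the Graph API, which also returns JSON over plain HTTP. A rough sketch using Python's requests library (the page name and token are placeholders, and which fields require an access token is something to verify against the Graph API docs):

        # Sketch: fetch a page's wall posts from the Graph API as JSON.
        import requests

        PAGE = "somepage"      # the page's username or numeric id
        TOKEN = "..."          # access token from Facebook's developer site

        resp = requests.get(
            "https://graph.facebook.com/%s/feed" % PAGE,
            params={"access_token": TOKEN},
        )
        resp.raise_for_status()
        for post in resp.json().get("data", []):
            print(post.get("created_time"), post.get("message"))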

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    - by Olfan
    Long story short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or a few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy, because the function calls in the worker instances can stay the same.

    My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors such that bidirectional communication is possible (for collecting the query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result. But my mind fails me in generating an image clear enough that I can paint it into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to a code base that I'm not prepared to make on a live system. The more I think of it, the more the base idea looks to me like a pre-forked web server, only one that serves database queries instead of web pages.

    Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?
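    For whatever inspiration it's worth, the "pre-forked server for queries" shape can be sketched in a few lines: workers drop (worker-id, SQL) requests on one shared queue, and a single connector process that owns the database session executes each one and "calls back" on that worker's reply queue. A compressed Python stand-in for the Perl/DBI setup (all names invented, SQLite standing in for Oracle):

        # Sketch: many workers, one database session, bidirectional queues.
        import multiprocessing as mp
        import sqlite3

        def connector(requests, replies):
            db = sqlite3.connect(":memory:")   # the one long-lived session
            db.execute("CREATE TABLE jobs (worker INTEGER, state TEXT)")
            while True:
                worker_id, sql, params = requests.get()
                if worker_id is None:          # shutdown sentinel
                    break
                rows = db.execute(sql, params).fetchall()
                db.commit()
                replies[worker_id].put(rows)   # "call back" with the result

        def worker(wid, requests, reply):
            requests.put((wid, "INSERT INTO jobs VALUES (?, ?)", (wid, "done")))
            reply.get()                        # wait for our own result
            requests.put((wid, "SELECT * FROM jobs", ()))
            print(wid, reply.get())

        if __name__ == "__main__":
            requests = mp.Queue()
            replies = [mp.Queue() for _ in range(3)]
            conn = mp.Process(target=connector, args=(requests, replies))
            conn.start()
            workers = [mp.Process(target=worker, args=(i, requests, replies[i]))
                       for i in range(3)]
            for w in workers: w.start()
            for w in workers: w.join()
            requests.put((None, None, None))
            conn.join()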

    Read the article

  • What are CDN Best Practices?

    - by Wild Thing
    Hi, I have recently started using the Rackspace Cloud Files CDN (Limelight), about which I have some questions:

    1. I am using jQuery, jQuery UI and jQuery Tools in addition to custom JS code, and my site is written in ASP.NET, which means there is also some ASP.NET-generated JS. Right now I have combined all of the JS (including the jQuery code), except the ASP.NET-generated JS, into one file, which I host on the Rackspace CDN. I am wondering if it would make more sense to pull the jQuery and jQuery UI files from the Google-hosted CDN instead (which I suspect would serve these files very well, since they will already be in many users' caches)? This would mean one extra HTTP request, so I'm not sure it would help.

    2. Right now I have multiple containers for my assets. For example, in Rackspace I have 3 containers: JS, CSS and Images, and the URL subdomain for all 3 is different. Will that lead to a performance penalty? Should I just use one container (and thus one domain for the CDN)?

    3. Is there a way of having the MS ASP.NET-generated JS loaded from the MS CDN? Would this have a performance hit as per the above question?

    Thanks in advance, WT

    Read the article

  • How can I make sure only a single record is inserted when multiple Apache threads are trying to access the database?

    - by Ed Gl
    I have a web service (an XML-RPC service, to be exact) that handles, among other things, writing data into the database. Here's the scenario: I often receive requests to either update or insert a record. What I do is this: (1) if the record already exists, append to it; (2) if not, create a new record. The issue is that at certain times I get a "burst" of requests, which spawns several Apache threads to handle them, arriving within less than a millisecond of each other. I then have several threads performing #1 and #2, and often two threads will both pass check #1 and actually create two duplicate records (except for the primary key). I'd like to use some locking mechanism to prevent other threads from accessing the table while the first thread finishes its work. I'm just afraid of using it, because if something goes wrong I don't want to leave the table locked. Is there a solid way of handling this? I'm open to using locks if I can do it properly. Thanks,
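    One lock-free angle worth mentioning alongside table locks: give the record's natural identifier a unique key and collapse "check then insert" into a single atomic upsert, so two racing threads cannot both create the row. A minimal sketch of the pattern (SQLite syntax with invented column names; MySQL spells it INSERT ... ON DUPLICATE KEY UPDATE):

        # Sketch: the unique constraint makes the insert-or-append atomic,
        # closing the race between "does it exist?" and "create it".
        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("""CREATE TABLE records (
                          ext_id  TEXT PRIMARY KEY,  -- natural key from the request
                          payload TEXT)""")

        def upsert(ext_id, chunk):
            db.execute("""INSERT INTO records (ext_id, payload) VALUES (?, ?)
                          ON CONFLICT(ext_id)
                          DO UPDATE SET payload = payload || excluded.payload""",
                       (ext_id, chunk))
            db.commit()

        upsert("abc", "first")
        upsert("abc", "+more")   # a second racing request appends instead
        print(db.execute("SELECT * FROM records").fetchall())
        # [('abc', 'first+more')]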

    Read the article

  • Etiquette: Version bump my fork of an open source project?

    - by Ross
    This question is about etiquette and open source projects. I have forked an application from GitHub and added two new features. The first feature has been requested frequently elsewhere; I have added it, and the code and implementation are clean (I think). The second feature is more of a hack. It will be of use to others, but the implementation is a little dirty in usage and more so in code. I need the feature, but I don't have the skills to implement it properly, or to a level that could be considered a worthwhile contribution to the main project. How should the versioning work? Do I just bump up my version numbers carefree and push to my master branch? It is annoying not knowing which version is running, modified or original, as both have the same version number. But won't it be confusing when, months later, my GitHub page has the same version number as the original, yet the two are actually completely different? (I have made pull requests etc., but that is not the context of my question.) The project I have forked uses Ruby's jeweler, so it has the following versioning format: "Jeweler tracks the version of your project. It assumes you will be using a version in the format x.y.z. x is the 'major' version, y is the 'minor' version, and z is the patch version." Is this standard for other projects/languages too? Are my changes patches? Thanks

    Read the article

  • Quick and easy flood protection?

    - by James P
    I have a site where a user submits a message using AJAX to a file called like.php. In this file the user's message is inserted into a database, and the file then sends a link back to the user. In my JavaScript code I disable the text box the user types into when they submit the AJAX request. The only problem is, a malicious user can just constantly send POST requests to like.php and flood my database. So I would like to implement simple flood protection. I don't really want the hassle of another database table logging users' IPs and such, as if they are flooding my site there will be a lot of database reads/writes slowing it down. I thought about using sessions instead: keep a session containing a timestamp that gets checked every time data is sent to like.php; if enough time has passed since the stored timestamp, let them add data to the database and update the session with a new timestamp, otherwise send out an error and block them. What do you think? Would this be the best way to go about it, or are there easier alternatives? Thanks for any help. :)
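    The session idea boils down to a per-user rate limit; a minimal sketch of the check, assuming a 10-second window (Python stand-in for the PHP session logic):

        # Sketch: accept at most one post per MIN_INTERVAL seconds per
        # session; "session" stands in for PHP's $_SESSION array.
        import time

        MIN_INTERVAL = 10  # seconds; tune to taste

        def allow_post(session):
            now = time.time()
            if now - session.get("last_post", 0) < MIN_INTERVAL:
                return False             # too soon: reject before touching the DB
            session["last_post"] = now   # accepted: remember when
            return True

        s = {}
        print(allow_post(s))  # True  - first post goes through
        print(allow_post(s))  # False - an immediate second post is blocked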

    Read the article

  • How to control the "flow" of an ASP.NET MVC (3.0) web app that relies on Facebook membership, with Facebook C# SDK?

    - by Chad
    I want to totally remove the standard ASP.NET membership system and use Facebook only for my web app's membership. Note, this is not a Facebook canvas app question. Typically, in an ASP.NET app you have some key properties and methods to control the "flow" of the app, notably Request.IsAuthenticated, [Authorize] (in MVC apps), Membership.GetUser() and Roles.IsUserInRole(), among others. It looks like [FacebookAuthorize] is the equivalent of [Authorize]. Also, there's some standard work I do across all controllers in my site, so I built a BaseController that overrides OnActionExecuting(FilterContext). Typically, I populate ViewData with the user's profile within this action. Would performance suffer if I made a call to fbApp.Get("me") in this action? I use the Facebook JavaScript SDK to do registration, which is nice and easy. But that's all client-side, and I'm having a hard time wrapping my mind around when to use client-side Facebook calls versus server-side ones. There will be a point when I need to grab the user's Facebook UID and store it in a "profile" table along with a few other bits of data. That would probably be best handled on the return URL from the registration plugin... correct? On a side note, what data is returned from fbApp.Get("me")?

    Read the article

  • Increment the number of times an article has been read

    - by r.sendecky
    I have a situation where I need to increase the number of times an article has been read. Once someone opens an article, it should be reflected in the database by incrementing the number of reads by one. Simple: sending a POST request to the server increments the number of reads by one, and the article in question is supplied via a URL parameter. Doing it manually by typing the URL into a browser works as expected, so the server side is not at fault. My problems start with the JavaScript side of it, or rather jQuery. I hook the event to the article link, so every time a user clicks on the link it increments the number of reads, like so:

        $('#list-articles .article-link').click(function (e) {
            // Get the article id
            var oid = $(this).parent().parent().attr('data-oid').toString();
            $.post("/articles/viewed/" + oid);
        });

    Now this does not work: the number is not increased. I don't prevent the default action, since I need the link to actually open and display the article. However, if I put an alert right after the post, like this:

        $('#list-articles .article-link').click(function (e) {
            var oid = $(this).parent().parent().attr('data-oid').toString();
            $.post("/articles/viewed/" + oid);
            alert(oid);
        });

    this variant works: after I dismiss the alert window, the number is incremented. Why is this so? How can I fix this so it actually works without the alert present?

    Read the article

  • JAR files, don't they just bloat and slow Java down?

    - by Josamoto
    Okay, the question might seem dumb, but I'm asking it anyway. After struggling for hours to get a Spring + BlazeDS project up and running, I discovered that my problems were the result of not including the right dependencies for Spring etc. There were .jars missing from my WEB-INF/lib folder - yes, silly me. After a while I managed to get all the .jar files where they belong, and they come to a whopping 12.5 MB, more than 30 of them! This concerns me, though it probably and hopefully shouldn't. How does Java operate in terms of these JAR files? They take up quite a bit of hard drive space, considering they are compressed, compiled code, so it seems they could fill a lot of RAM very quickly. My questions are:

    1. Does Java load an entire .jar file into memory when, for instance, a class in that .jar is instantiated? What about the stuff in the .jar that never gets used?
    2. Do .jars get cached somehow, for optimized application performance?
    3. When a single .jar is loaded, I understand it sits in memory and is available across multiple HTTP requests (i.e. for the lifetime of the server instance), unlike PHP where objects are created on the fly with each request - is this assumption correct?
    4. Given that I had to include all those fiddly .jars for Spring, wouldn't I be better off just using plain Java with, say, at least an ORM solution like Hibernate? So far, Spring has cost extra configuration time, extra hard drive space, and extra memory and CPU consumption, so I'm concerned the framework will cost too much application performance just to get, for example, IoC implemented with my BlazeDS server. There is still an ORM, a unit testing framework and bits and pieces here and there to come. It's just so easy to bloat up a project quickly and irresponsibly. Where do I draw the line?

    Read the article

  • How to get a response from a remote server

    - by ruhit
    I have made a desktop application in .NET using C# that connects to a remote server. I am able to connect, but how do I tell whether my login was successful or not? After that I want to retrieve data from the remote server, so please help me. I have written the code below; is there a better way?

        try
        {
            string strId = UserId_TextBox.Text;
            string strpasswrd = Password_TextBox.Text;
            ASCIIEncoding encoding = new ASCIIEncoding();
            string postData = "UM_email=" + strId;
            postData += ("&UM_password=" + strpasswrd);
            byte[] data = encoding.GetBytes(postData);
            MessageBox.Show(postData);

            // Prepare web request...
            //HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create("http://localhost/ruhit/basic_framework/index.php?menu=login=" + postData);
            HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create("http://www.facebook.com/login.php=" + postData);
            myRequest.Method = "POST";
            myRequest.ContentType = "application/x-www-form-urlencoded";
            myRequest.ContentLength = data.Length;
            Stream newStream = myRequest.GetRequestStream();

            // Send the data.
            newStream.Write(data, 0, data.Length);
            MessageBox.Show("u r now connected");

            HttpWebResponse response = (HttpWebResponse)myRequest.GetResponse();
            StreamReader reader = new StreamReader(response.GetResponseStream());
            string str;
            // Show each line of the response as it is read.
            while ((str = reader.ReadLine()) != null)
            {
                MessageBox.Show(str);
            }
            reader.Close();
            newStream.Close();
        }
        catch
        {
            MessageBox.Show("error connecting");
        }

    Read the article

  • CakePHP update multiple elements in one div

    - by sw3n
    The idea is that I have 3 AJAX links, and each link is coupled to a single element (a form). The element is loaded into one div. Each time you click a different link, the previously requested element must be removed and the new one put in its place; that is, the div has to be updated. The problem is that it requests the new element but does not remove the previous one. So when you click the first link it shows the first element, click the second link and it puts the second element beneath it, and exactly the same happens with the third. If you have clicked all 3 links, it shows 3 forms right under each other. Some code:

    Controller:

        function view($id) {
            // content could come from a database, xml, etc.
            $content = array(
                array($this->render(null, 'ajax', '/elements/ga')),
                array($this->render(null, 'ajax', '/elements/ex')),
                array($this->render(null, 'ajax', '/elements/both'))
            );
            $this->set('google', $content[$id][0]);
            // use ajax layout
            $this->render('/pages/form', 'ajax');
        }

    View code:

        echo $ajax->link('Google',
            array('controller' => 'analytics', 'action' => 'view', 0),
            array('update' => 'dynamic1'));
        echo ' | ';
        echo $ajax->link('Exact',
            array('controller' => 'analytics', 'action' => 'view', 1),
            array('update' => 'dynamic1', 'loading' => 'Effect.BlindDownUp(\'dynamic1\')'));
        echo ' | ';
        echo $ajax->link('Beide',
            array('controller' => 'analytics', 'action' => 'view', 2),
            array('update' => 'dynamic1', 'loading' => 'Effect.BlindDown(\'dynamic1\')'));
        echo $ajax->div('dynamic1');
        echo $google;
        echo $ajax->divEnd('dynamic1');

    Read the article

  • File upload fails when user is authenticated. Using IIS7 Integrated mode.

    - by Nikkelmann
    These are the user identities my website tells me it uses: logged on, NT AUTHORITY\NETWORK SERVICE (cannot write any files at all); not logged on, WSW32\IUSR_77 (can write files to any folder).

    I have an ASP.NET 4.0 website on a shared-hosting IIS7 web server running in Integrated mode with 32-bit application support enabled, and MSSQL 2008. Using Classic mode is not an option, since I need to secure some static files and I use Routing. In my web.config file I have set the following:

        <system.webServer>
            <modules runAllManagedModulesForAllRequests="true" />
        </system.webServer>

    My hosting company says that impersonation is enabled by default at machine level, so this is not something I can change. I asked their support and they referred me to this article: http://www.codinghub.net/2010/08/differences-between-integrated-mode-and.html - citing this part:

    "Different windows identity in Forms authentication: When Forms Authentication is used by an application and anonymous access is allowed, the Integrated mode identity differs from the Classic mode identity in the following ways: ServerVariables["LOGON_USER"] is filled; Request.LogonUserIdentity uses the credentials of the [NT AUTHORITY\NETWORK SERVICE] account instead of the [NT AUTHORITY\INTERNET USER] account. This behavior occurs because authentication is performed in a single stage in Integrated mode. Conversely, in Classic mode, authentication occurs first with IIS 7.0 using anonymous access, and then with ASP.NET using Forms authentication. Thus, the result of the authentication is always a single user -- the Forms authentication user. AUTH_USER/LOGON_USER returns this same user because the Forms authentication user credentials are synchronized between IIS 7.0 and ASP.NET. A side effect is that LOGON_USER, HttpRequest.LogonUserIdentity, and impersonation no longer can access the Anonymous user credentials that IIS 7.0 would have authenticated by using Classic mode."

    How do I set up my website so that it uses the proper identity with the proper permissions? I've looked high and low for any answers regarding this specific problem, but found nil so far... I hope you can help!

    Read the article

  • How do I compare two PropertyInfos or methods reliably?

    - by Rob Ashton
    The same goes for methods: I am given two instances of PropertyInfo (or MethodInfo) which have been extracted from the class they sit on via GetProperty or GetMember etc. (or from a MemberExpression, maybe). I want to determine whether they are in fact referring to the same property or the same method, so (propertyOne == propertyTwo) or (methodOne == methodTwo). Clearly that isn't going to actually work: you might be looking at the same property, but it might have been extracted from different levels of the class hierarchy (in which case, generally, propertyOne != propertyTwo). Of course, I could look at DeclaringType and re-request the property, but this starts getting a bit confusing once you start thinking about:

    - properties/methods declared on interfaces and implemented on classes
    - properties/methods declared on a base class (virtually) and overridden on derived classes
    - properties/methods declared on a base class and overridden with 'new' (in IL terms this is nothing special, iirc)

    At the end of the day, I just want to be able to do an intelligent equality check between two properties or two methods. I'm 80% sure that the above bullet points don't cover all of the edge cases, and while I could just sit down, write a bunch of tests and start playing about, I'm well aware that my low-level knowledge of how these concepts are actually implemented is not excellent, and I'm hoping this is an already-answered topic and I just suck at searching. The best answer would give me a couple of methods that achieve the above, explaining which edge cases have been taken care of and why :-)

    Read the article

  • Microsoft Silverlight 3 cannot create service reference to localhost:port

    - by Monte
    Windows Server 2003 (IIS 6), Visual Studio 2008, .NET Framework 3.5 SP1. I am a .NET developer for a living, and I have over 40 hours into this problem. The project type is "Silverlight Navigation Application" with an "ASP.NET Web Site" (when I tried it as an "ASP.NET Web Application Project" I could not copy it to the production web site - well, I could copy it, but I could not make it run).

    I created a service.cs on the .Web side of the application and created a reference to that service on the Silverlight side. For a time all is good, as I can reference the service as localhost:port (e.g. localhost:1374) in Visual Studio and debug both the Silverlight side and the service. To access the application in production mode (from IE) I update the service reference and replace localhost:port with the IP address. The problem with the IP address is that I cannot debug service.cs, so I have to change it back to localhost:port to debug.

    Now to the problem: after a period of time, localhost:port just plain breaks, and I get an error message saying there is no service at the other end. Yes, I know the port can change - that is not the problem - the port on the service side just plain breaks! For example, from the Silverlight side of the project in Visual Studio, right click "Service Reference", then "Add Service Reference". It finds 1 service in the application on a port, but when I click that service under "Services:" in the "Add Service Reference" modal dialog I get an error:

        There was an error downloading 'http://localhost:1377/SehaleCSS.Web/Service.svc'.
        The request failed with the error message:
        -- Could not load file or assembly 'App_Web_tipnndfq,

    If I go back to the IP address, the service is responding (with the right answer). The service just stops responding to localhost:port after a while: even making NO change to service.cs, it goes for a while and then fails on localhost:port. It is not IIS environmental, as I can go back to a prior saved version of the code and it works. Something is happening so that the .Web side of the application is failing: it still works via IP, and it still exposes itself as localhost:port, but it fails to respond properly on localhost:port.

    Read the article

  • before filter not working as expected

    - by Jimmy
    Hey guys, I have a Ruby on Rails app with a before filter set up in my application controller to ensure only the owner can edit a document, but my permission check always fails, even when it shouldn't. Here is the code:

        def get_logged_in_user
          id = session[:user_id]
          unless id.nil?
            @current_user = User.find(id)
          end
        end

        def require_login
          get_logged_in_user
          if @current_user.nil?
            session[:original_uri] = request.request_uri
            flash[:notice] = "You must login first."
            redirect_to login
          end
        end

        def check_current_user_permission
          require_login
          logger.debug "user id is #{params[:user_id]}"
          logger.debug "current user id is #{session[:user_id]}"
          if session[:user_id] != params[:user_id]
            flash[:notice] = "You don't have permission to do that."
            redirect_to :controller => 'home'
          end
        end

    The code to note is in check_current_user_permission. Here is an example of my log output:

        user id is 3
        current user id is 3
        Redirected to http://localhost:3000/home
        Filter chain halted as [:check_current_user_permission] rendered_or_redirected.

    Can anyone shed some light on why this is failing? Obviously the user_id of 3 is equal to the session's user_id of 3. What is going wrong?

    Read the article

  • CDN for Images in ASP.NET

    - by Chris
    I am in the process of moving all of the images in my web application over to a CDN, but I want to be able to easily switch the CDN on or off without having to hard-code the path to the images. My first thought was to add an HttpHandler for image extensions that, depending on a variable in the web.config (something like ), will serve the image from the server or from the CDN. But after giving this a little thought I think I've essentially ruled this out, as it will cause ASP.NET to handle the request for every single image, thus adding overhead, and it might actually completely negate the benefits of using a CDN. An alternative approach is, since all of my pages inherit from a base page class, to create a function in the base class that determines which path to serve the files from based on the web.config variable. I would then do something like this in the markup:

        <img src='<%= GetImagePath() %>/image.png' />

    I think this is probably what I'll have to end up doing, but it seems a little clunky to me. I also envision problems with the old .NET error of not being able to modify the control collection because of the "<%=", though the "<%#" solution will probably work. Any thoughts or ideas on how to implement this?

    Read the article

  • Zend, slow load, "waiting for response" for 20-80 seconds on local site

    - by Tony C.
    So I have several sites running under the same Zend setup. All of the sites run pretty normally except one. Upon loading or reloading this one site, regardless of which page you're on (excluding the 404 page - explanation later), you get a serious pause before any content begins to download. Using Firebug's Net panel, you can see that for the first request, www.(siteaddress).com.local, there is a "waiting for response" bar (purple) lasting anywhere from 20 to sometimes 80+ seconds - and this isn't on a dev site, this is on a local site under localhost.

    What I've managed to figure out so far is that all the pages do this except my 404 page. The reason the 404 page doesn't succumb is that it uses a separate controller (the error controller) and therefore bypasses much of the controller code and functions the other parts of the site use. Using exit statements, I've managed to figure out that the problem happens somewhere between postDispatch and my main (top-most) controller's init function. If I exit in the main controller's init, the page loads (then exits instantly, no wait). If I do the same in preDispatch or postDispatch, the page waits the 20-80 seconds and then exits.

    Is there a diagram or explanation somewhere, or a way for me to find out, which events fire between postDispatch and the main controller's init function? Or does anyone have any clue what might cause this? Any help would be greatly appreciated...

    Read the article

  • Model binding difficulty

    - by user281180
    I have a model and I am using ajax.post. I can see that model binding isn't being done for the ArrayLists in my model, though binding is done for the properties of int or string type. Why is that so? My code is as below. I have a model with the following properties:

        public class ProjectModel
        {
            public int ID { get; set; }
            public ArrayList Boys = new ArrayList();
        }

    In my view I have:

        $(document).ready(function () {
            var project = new Object();
            var Boys = new Array();
            var ID;
            .......
            ID = $('#ID').val();
            project.Boys = Boys;
            .....
            $.ajax({
                type: "POST",
                url: '<%= Url.Action("Create", "Project") %>',
                data: JSON.stringify(project),
                contentType: "application/json; charset=utf-8",
                dataType: "html",
                success: function () { },
                error: function (request, status, error) { }
            });

    My controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create(ProjectModel project)
        {
            try
            {
                project.CreateProject();
                return RedirectToAction("Index");
            }
            ....

    Read the article

  • Parsing XML elements with dynamic namespace prefix in PHP

    - by BugKiller
    I have the following XML (a SOAP request, you could say):

        <SOAPENV:Envelope
            xmlns:SOAPENV="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:NS="http://xyz.gov/headerschema">
          <SOAPENV:Header>
            <NS:myHeader>
              <NS:SourceID>223423</NS:SourceID>
            </NS:myHeader>
          </SOAPENV:Header>
        </SOAPENV:Envelope>

    I use the following code and it works fine:

        <?php
        $myRequest = '<SOAPENV:Envelope
            xmlns:SOAPENV="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:NS="http://xyz.gov/headerschema">
            <SOAPENV:Header>
                <NS:myHeader>
                    <NS:SourceID>223423</NS:SourceID>
                </NS:myHeader>
            </SOAPENV:Header>
        </SOAPENV:Envelope>';

        $xml = simplexml_load_string($myRequest, NULL, NULL, "http://schemas.xmlsoap.org/soap/envelope/");
        $namespaces = $xml->getNameSpaces(true);
        $soapHeader = $xml->children($namespaces['SOAPENV'])->Header;
        $myHeader = $soapHeader->children($namespaces['NS'])->myHeader;
        echo (string)$myHeader->SourceID;
        ?>

    The problem: I know the prefixes (SOAPENV and NS), but clients could change the prefixes to whatever they want, so they may send me an XML document that uses, say, MY-SOAPENV and MY-NS prefixes.

    My question: since the namespace prefixes are not static, how can I parse it? Thanks
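    The key point is that prefixes are only aliases for namespace URIs, so matching on the URI works no matter what prefix a client picks; in SimpleXML that means passing the URI string straight to children() instead of looking it up by prefix via getNamespaces(). The same idea sketched in Python for illustration:

        # Sketch: select elements by namespace URI; the client's choice of
        # prefix (NS, MY-NS, ...) is irrelevant.
        import xml.etree.ElementTree as ET

        HDR = "http://xyz.gov/headerschema"

        doc = """<MY-SOAPENV:Envelope
            xmlns:MY-SOAPENV="http://schemas.xmlsoap.org/soap/envelope/"
            xmlns:MY-NS="http://xyz.gov/headerschema">
          <MY-SOAPENV:Header>
            <MY-NS:myHeader><MY-NS:SourceID>223423</MY-NS:SourceID></MY-NS:myHeader>
          </MY-SOAPENV:Header>
        </MY-SOAPENV:Envelope>"""

        root = ET.fromstring(doc)
        # Clark notation {uri}local ignores the prefix entirely.
        print(root.find(".//{%s}SourceID" % HDR).text)  # 223423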

    Read the article

  • How to control access to third party HTML pages

    - by Wylie
    Hello, we have a Learning Management System (LMS) that runs on its own server (IIS/Server 2003). Students must log in with Forms authentication to gain access to the content. We want to offer access to third-party Flash and audio that is embedded in HTML pages hosted on the third party's server (IIS/Server 2003). Currently we use a frame in a pop-up window that is populated via a simple URL pointing to the third party's HTML pages. How can the third party control access to their content, so that only students who launch the pop-up windows from our site can access it? Since the content is mostly video and Flash, we would prefer not to stream all of their content through our server to the student. We have a programming staff, so we could maybe:

    - use either POST or GET for our HTTP request to the third party's server
    - use SSL
    - programmatically assign a global NT user account to all of our users and then do some kind of Active Directory login from the LMS server to the third party's server
    - host the third party's content at Amazon S3 - would this allow for secure access/download?

    These are just ideas; we really have no idea. Any suggestions would be greatly appreciated. TIA, Wylie
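    On the last idea: S3 can serve private objects through expiring signed URLs, so the LMS would authenticate the student and then hand the pop-up a short-lived link. A minimal sketch with boto3, assuming a hypothetical bucket and key:

        # Sketch: sign a short-lived URL for a private S3 object after the
        # student has passed Forms authentication. Names are hypothetical.
        import boto3

        s3 = boto3.client("s3")

        def signed_content_url(student_is_authenticated):
            if not student_is_authenticated:
                raise PermissionError("login required")
            return s3.generate_presigned_url(
                "get_object",
                Params={"Bucket": "thirdparty-course-content",
                        "Key": "flash/lesson1.swf"},
                ExpiresIn=300,  # the link stops working after 5 minutes
            )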

    Read the article

  • Building simple jQuery plugin, need assistance

    - by kirisu_kun
    Hi there, I'm building my first ever jQuery plugin (it's just a simple experiment). Here's what I have so far:

        (function ($) {
            $.fn.extend({
                auchieFader: function (options) {
                    var defaults = {
                        mask: '',
                        topImg: ''
                    };
                    var options = $.extend(defaults, options);
                    return this.each(function () {
                        var o = options;
                        var obj = $(this);
                        // the mask lookup is scoped to this instance...
                        var masker = $(o.mask, obj);
                        masker.hover(function () {
                            // ...but the topImg lookup is document-wide
                            $(o.topImg).stop().animate({ "opacity": "0" }, "slow");
                        }, function () {
                            $(o.topImg).stop().animate({ "opacity": "1" }, "slow");
                        });
                    });
                }
            });
        })(jQuery);

    I'm then calling the plugin using:

        $('.fader').auchieFader({ mask: ".mask", topImg: ".top" });

    If I then add another instance, say:

        $('.fader2').auchieFader({ mask: ".mask", topImg: ".top" });

    then no matter which of my 2 faders I hover, both of them trigger. I know this is because my mask and topImg options have the same class - but how can I modify the plugin to allow these items to have the same class? I know it's probably something really simple, but I'm still finding my way with jQuery and JavaScript in general. Any other tips on improving my code would also be greatly appreciated! Cheers, Chris

    Read the article
