Search Results

Search found 10212 results on 409 pages for 'reverse ajax'.


  • CodeIgniter: json_decode array issues

    - by thedp
    On my client side I'm sending an ajax request with jQuery in the following manner: $.post('script.php', { "var1":"something", "var2":"[1,2,3]" }, function(data) { }, "json"); On the server side, in the CodeIgniter controller, I'm receiving the values like so: $var1 = trim($this->input->post('var1')); $var2 = trim($this->input->post('var2')); My question is how do I convert the string in $var2 into a PHP array. I tried using json_decode($var2, true) but it returns null, as if "[1,2,3]" were not decodable by itself. Also, if you believe there is a better way for me to read the values on the server side, please show me how. Thank you.
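
    If the goal is simply to end up with a PHP array on the server, one alternative worth sketching (assuming jQuery 1.4+, which serializes arrays in the PHP-friendly var2[]=... form) is to skip the JSON string entirely and let jQuery serialize the array:

    ```javascript
    // Sketch: post the array itself instead of a hand-built JSON string.
    // jQuery 1.4+ sends this as var2[]=1&var2[]=2&var2[]=3, which PHP (and therefore
    // CodeIgniter's $this->input->post('var2')) exposes as a native array.
    $.post('script.php', {
        var1: 'something',
        var2: [1, 2, 3]
    }, function (data) {
        // handle the JSON response
    }, 'json');
    ```

    If the JSON-string approach is kept instead, note that json_decode('[1,2,3]', true) does return array(1, 2, 3); a null result usually points at something mangling the value first (magic quotes adding slashes, or CodeIgniter's XSS filter), which is worth checking before changing the client.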

  • Streaming content from (sharepoint) web part

    - by Mikko Rantanen
    How does one stream files, HTML or custom AJAX responses from web parts? Our current quick-and-very-dirty solution is to make the web part call the current page with certain query parameters; the web part checks for them and, instead of performing the normal load, writes the required content to the output and ends the response. This sounds bad, since SharePoint might load other web parts and execute their code before reaching our web part. The web part is configured with data source settings, which means the streaming context must be specific to the web part so it can pick up the correct data source settings.

  • Is it dangerous to store user-entered text in a hidden form via JavaScript?

    - by KallDrexx
    In my ASP.NET MVC application I am using in-place editors to allow users to edit fields without having a standard form view. Unfortunately, since I am using LINQ to SQL combined with my data mapping layer, I cannot just update one field at a time and instead need to send all fields over at once. So the solution I came up with was to store all my model fields in hidden fields, and provide span tags that contain the visible data (these span tags become editable thanks to my jQuery plugin). When a user saves their edit of a field, jQuery takes the value, places it in the hidden form, and sends the whole form to the server to commit via ajax. When the data goes into the hidden field originally (on page load) and into the span tags, the data is properly encoded, but when the user changes the data in the contenteditable span field, I just run $("#hiddenfield").val($("#spanfield").html()); Am I opening any holes with this method? Obviously the server also properly encodes everything prior to database entry.
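
    A small sketch of the copy step itself, for what it's worth: storing raw HTML in a hidden input via .val() is not in itself an XSS hole, because .val() sets the value property rather than injecting markup; the exposure is wherever that value is later rendered without encoding. Using .text() instead of .html() additionally strips any markup the user pasted into the contenteditable span:

    ```javascript
    // Option 1: keep only the plain text of the edited span (markup is stripped).
    $("#hiddenfield").val($("#spanfield").text());

    // Option 2: preserve the markup, and rely on encoding wherever the value is
    // rendered again (view output, emails, etc.), not just before database entry.
    $("#hiddenfield").val($("#spanfield").html());
    ```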

  • Facebook app: Using Flex or jQuery

    - by javanes
    Hello; I am about to start a new project, a Facebook app. There are two alternatives for the client side in my mind: write a Flex Facebook app, or write HTML with Ajax and jQuery. So what is your opinion? Which do you recommend? What issues should I take into account for each? Advantages, disadvantages, subjective opinions? Thank you for helping me decide.

  • Client-side or server-side processing?

    - by Nick
    So, I'm new to dynamic web design (my sites have been mostly static with some PHP), and I'm trying to learn the latest technologies in web development (which seem to be AJAX-based), and I was wondering: if you're transferring a lot of data, is it better to construct the page on the server and "push" it to the user, or to "pull" the data needed and create the HTML around it on the client side using JavaScript? More specifically, I'm using CodeIgniter as my PHP framework and jQuery for JavaScript, and if I wanted to display a table of data to the user (dynamically), would it be better to format the HTML using CodeIgniter (create the tables, add CSS classes to elements, etc.), or to just serve the raw data as JSON and then build it into a table with jQuery? My intuition says to do it client-side, as it would save bandwidth and the page would probably load quicker with the new JavaScript optimizations all these browsers have now; however, the site would then break for someone not using JavaScript... Thanks for the help.
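
    If the client-side route wins out, a minimal sketch of the jQuery half might look like the following (the URL, CSS class and field names are made up for illustration; the CodeIgniter controller would just echo json_encode() of the rows):

    ```javascript
    // Sketch: fetch raw JSON and build the table rows in the browser.
    $.getJSON('/products/json', function (rows) {
        var table = $('<table class="data-table"><thead><tr><th>Name</th><th>Price</th></tr></thead><tbody></tbody></table>');
        $.each(rows, function (i, row) {
            $('<tr></tr>')
                .append($('<td></td>').text(row.name))   // .text() keeps the data HTML-safe
                .append($('<td></td>').text(row.price))
                .appendTo(table.find('tbody'));
        });
        $('#table-container').empty().append(table);
    });
    ```

    Serving the same data as a server-rendered table for non-JavaScript clients, and enhancing it with this kind of script when JavaScript is available, is the usual way to keep the page from breaking entirely.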

  • Can I bind multiple forms to a single model using the default model binder?

    - by MedicineMan
    I have a complex page with several forms on it. The page is divided into sections, and each section has a continue button. The page is bound to a pageViewModel; each section addresses a different set of properties on the model. The continue button makes an ajax call to the controller, the model binder maps the posted values to the appropriate section of the model, and that section is refreshed. Finally, I would like to have a save button at the bottom of the page that takes all the forms and binds all of them to the model. The model at this point has all of its properties filled out and can be processed accordingly. Can I accomplish this with some ASP.NET MVC magic?
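
    The default model binder only sees whatever name/value pairs arrive in the request, so one way to get the "all sections at once" behaviour (a sketch, assuming jQuery is already used for the per-section ajax calls; the selector and URL are illustrative) is to have the save button serialize every section form into a single POST:

    ```javascript
    // Sketch: combine the fields of all section forms into one request so the
    // default model binder can populate the entire page view model in one go.
    $('#save-button').click(function (e) {
        e.preventDefault();
        var combined = $('form.section-form')
            .map(function () { return $(this).serialize(); })
            .get()
            .join('&');
        $.post('/page/save', combined, function (result) {
            // handle the response (validation errors, redirect, etc.)
        });
    });
    ```

    As long as the input names across the forms match the view model's property names, this binds exactly as a single large form would.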

  • Tapestry 4, get submitted value from non-component element

    - by cometta
    My form has a custom element like below, created using custom ajax: <select jwcid="testtest@Any"> <option value="x">California -- CA</option> <option value="y">Colorado -- CO</option> <option value="z">Connecticut -- CN</option> </select> After the form is submitted, how do I get the value of this custom HTML element? cycle.getPage().getComponents().get("testtest") ?

  • How do you access URL text following the # sign through Java?

    - by cmcculloh
    Using Java (a .jsp or whatever), is there a way I can send a request for this page: http://www.mystore.com/store/shelf.jsp?category=mens#page=2 and have the Java code parse the URL, see the #page=2, and respond accordingly? Basically, I'm looking for the Java code that lets me access the characters following the hash. The reason I'm doing this is that I want to load subsequent pages via AJAX (on my shelf) and then allow the user to copy and paste the URL and send it to a friend. Without Java being able to read the characters following the hash, I'm uncertain how I would manipulate the URL with JavaScript in a way that the server would also be able to read, without causing the page to reload. I'm having trouble even figuring out how to access/see the entire URL (http://www.mystore.com/store/shelf.jsp?category=mens#page=2) from within my Java code...
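
    For what it's worth, the fragment after # is never sent to the server at all, which is why it can't be seen from the JSP; the usual workaround is to read location.hash in the browser on load and then ask the server for that page explicitly. A minimal sketch (the #shelf container and the shelfContent.jsp endpoint are made up for illustration):

    ```javascript
    // Sketch: the #page=2 part only exists client-side, so parse it there
    // and request the matching content via ajax.
    $(document).ready(function () {
        var match = window.location.hash.match(/page=(\d+)/);
        if (match) {
            var page = match[1];
            $('#shelf').load('/store/shelfContent.jsp?category=mens&page=' + page);
        }
    });
    ```

    The same idea works in reverse: when the user pages via AJAX, update location.hash so the copied URL carries the page number.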

  • How to do a custom jQuery .live() in 1.4.1?

    - by chobo2
    Hi, I have been using the jQuery livequery plugin and jQuery .live() together. However, now that I am using 1.4, it seems livequery is not working 100%, so I am not sure how to tackle this problem. I have this in livequery: $('#Description').livequery(function () { $('#Description').htmlarea({ toolbar: [ ["bold", "italic", "underline", "strikethrough", "|", "subscript", "superscript"], ["increasefontsize", "decreasefontsize"], ["orderedlist", "unorderedlist"], ["indent", "outdent"], ["link", "unlink"] ] }); }); So every time I loaded my page, it would run that code in the livequery and display the editor, and if I went to another ajax tab and came back it would go into this again. Now I am not sure how to change it to .live() in jQuery 1.4, since I just tried this: $('#Description').live(function () { $('#Description').htmlarea({ toolbar: [ ["bold", "italic", "underline", "strikethrough", "|", "subscript", "superscript"], ["increasefontsize", "decreasefontsize"], ["orderedlist", "unorderedlist"], ["indent", "outdent"], ["link", "unlink"] ] }); }); and it does not seem to work. The plugin is not bound and the rich HTML editor is not displayed.
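
    One thing worth noting (hedged, since the tab-loading code isn't shown): .live() always takes an event name such as 'click', so it can't replace livequery's "run this when the element appears" form. A common workaround is to re-run the initialization after each ajax load instead, guarding against double initialization; the marker class below is made up for that purpose:

    ```javascript
    // Sketch: re-initialize the editor whenever an ajax request completes,
    // skipping any #Description element that has already been set up.
    $(document).ajaxComplete(function () {
        $('#Description').not('.htmlarea-ready').each(function () {
            $(this).addClass('htmlarea-ready').htmlarea({
                toolbar: [
                    ["bold", "italic", "underline", "strikethrough", "|", "subscript", "superscript"],
                    ["increasefontsize", "decreasefontsize"],
                    ["orderedlist", "unorderedlist"],
                    ["indent", "outdent"],
                    ["link", "unlink"]
                ]
            });
        });
    });
    ```

    Alternatively, the htmlarea() call can simply be placed in the success callback of whatever request loads the tab content.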

  • Where to learn about HTTP?

    - by razass
    I am fluent in HTML and PHP and slowly learning JavaScript; however, I have noticed that there is a huge hole in my knowledge when it comes to understanding how web software communication actually works. I understand the flow of information across the net, but I would like to learn about the HTTP protocol to better understand how data is actually sent back and forth over the internet, and to help me understand things like REST, HTTP headers, AJAX requests, etc. However, I must be searching for the wrong terms, because I haven't been able to find a good description of the HTTP protocol. Any help pointing me in the right direction is appreciated. Thanks!
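
    Beyond a reference (the HTTP/1.1 specification, RFC 2616, is the primary source, and a browser network panel such as Firebug's Net tab shows the same headers live), one concrete way to poke at HTTP from code is a bare XMLHttpRequest; a small sketch:

    ```javascript
    // Sketch: issue an ordinary GET and dump the status line and response
    // headers, to see what actually travels over the wire.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/index.php', true);                    // any same-origin URL
    xhr.setRequestHeader('X-Requested-With', 'XMLHttpRequest');
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            console.log('Status: ' + xhr.status);           // e.g. 200
            console.log(xhr.getAllResponseHeaders());       // Content-Type, Cache-Control, ...
        }
    };
    xhr.send(null);
    ```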

  • What JavaScript framework to choose? jQuery+jQuery UI, Dojo or ExtJS?

    - by Ivan
    I am choosing a JavaScript framework to master and use extensively in all my future projects (mostly working with relational data, web services via AJAX, and implementing complex rich client UIs). Now I am choosing between jQuery+jQuery UI, Dojo and ExtJS. What should I choose? 1st priority is power and functionality, 2nd priority is beauty and maintainability of code and ease of use, 3rd priority is flexibility and modularity, 4th priority is speed and size. IE compatibility hardly matters; I'd like it to be modern, legacy-free and standards-conformant.

  • Web Search API which allows automated queries

    - by Spi1988
    I need to develop a Java desktop application which sends queries to a search engine in order to obtain only the highest-ranked pages (for example, the first 4 pages only). Some heavy processing needs to be performed on the retrieved pages, so the time between one query and the next won't be less than a minute. I would like to know whether there is any web search API for Java suitable for my situation, i.e. one which allows the use of automated queries (since in my case the queries are generated programmatically, and not through user interaction). I have checked Google's AJAX Search API and also Yahoo's Search BOSS; however, they only allow queries triggered by direct user interaction.

  • Selenium fails to find component after dom update (reRender)

    - by Bozho
    I'm testing a RichFaces application with Selenium. It works fine unless I use reRender (for those unfamiliar with RichFaces: whenever an ajax request finishes, parts of the DOM are updated/changed/removed). So, after a reRender, Selenium (the IDE at least) fails to locate the elements which were within the reRendered area. Both Firebug and Web Developer locate the elements, and in "view source" the elements are there. So, is there a way to tell Selenium to update its DOM "knowledge" with the latest changes? Firefox 3.5.6, latest version of Selenium IDE.

  • How to set a breakpoint in inline JavaScript in the Google Chrome browser for Linux?

    - by Alan McCloud
    When I open Developer Tools in Google Chrome, I see all kinds of useless crap like Profiles, Timelines, not to mention Audits, but basic functionality like being able to set breakpoints both in js files and within inline HTML JavaScript code is missing! I tried to use the JavaScript console, which itself is buggy (like once it encounters a JS error, you cannot get out of it unless you refresh the whole page, which is useless when ajax is involved). I am surprised Google engineers still have not figured this out, if these features really are not available. If they are, and there is some twisted way to do this, can someone help?
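
    One workaround that sidesteps the panels entirely (assuming Developer Tools is open when the code runs) is a debugger; statement dropped into the inline script; Chrome pauses on it exactly like a manually set breakpoint, after which you can step and inspect as usual. The function names below are only illustrative:

    ```javascript
    // Sketch: force a pause from inside inline JavaScript.
    function onAjaxSuccess(response) {
        debugger;              // execution stops here while Developer Tools is open
        updatePage(response);  // hypothetical follow-on code to step through
    }
    ```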

  • What is the best way to achieve an RPO of zero and the lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which processes high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site. Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor; however, their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write it to the DR site, but this is a bad solution, and it gets the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB intensive. We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives. RPO: zero loss - this is transactional data with financial impact, so we can't lose anything. RTO: tending to zero - the business depends on our ability to process transactions; every minute we are down we are losing money. I have looked at a few of the other questions/answers but none meet our case exactly: SQL Server 2008 failover strategy - Log shipping or replication? How to achieve the following RTO & RPO with logshipping only using SQL Server? What is the best of two approaches to achieve DB Replication? My current thinking is that we should use mirroring, but I am concerned that for an RPO of zero we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option. Our current DR process is to: stop incoming traffic to the primary site and allow all in-flight transactions to complete; allow the replication to DR to complete; change network routing to route to the DR site; and start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions). In other words, the DR database needs to catch up with the primary as quickly as possible and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log shipping too), and can anyone suggest other considerations that we should keep in mind?

  • PHP upload with progress bar

    - by Mitchan Adams
    Hi all. I want to create an upload form to upload large files. That's pretty easy; however, the upload process itself takes a long time and basically looks like nothing is happening for a few minutes. So now I'd like to add a progress bar to show the user that something is happening and they should just sit tight. I've read of numerous methods like APC and certain Flash plugins, but my site is hosted on a shared server and I can't install any new applications on it. I'm thinking: maybe it is possible to read the size of the temp file the upload creates, via an ajax page. By polling the size every few seconds I should be able to get the progress of the upload. Now the question I pose is... where is the temp file situated?
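
    For reference, PHP puts uploads in the directory given by the upload_tmp_dir setting (falling back to the system temp directory), but the temporary file has a random name, and the script that could reveal it only runs after the upload has finished, so polling the temp file directly is hard to pull off. The browser half of a polling approach is easy to sketch, though; something on the server (APC's upload-progress hook, the uploadprogress extension, or a custom handler that records bytes received) still has to supply the numbers. The progress.php endpoint and its JSON fields below are made up:

    ```javascript
    // Sketch: poll a hypothetical progress endpoint every couple of seconds
    // and move the bar accordingly.
    function pollProgress(uploadKey) {
        $.getJSON('progress.php', { key: uploadKey }, function (p) {
            if (p && p.total > 0) {
                var percent = Math.round(100 * p.current / p.total);
                $('#progress-bar').css('width', percent + '%');
                if (percent < 100) {
                    setTimeout(function () { pollProgress(uploadKey); }, 2000);
                }
            }
        });
    }
    ```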

  • Recommendations or advice for shared computer control

    - by Telemachus
    Basic scenario: we are a school (overwhelmingly Mac, some Windows machines via BootCamp), and we are considering using DeepFreeze to guard the state of our shared machines. We have roughly 250 machines that are either shared laptops (which move around quite a bit) or common desktops in public spaces. Obviously, we spend a lot of time maintaining the machines and trying to reverse the inevitable drift as people make changes to the computers. We would like to control the integrity of the build we initially put onto the machines without handcuffing users and especially without using Mac's Parental Control software. (We've had nothing but bad experiences with it.) We've been testing DeepFreeze, and so far it's very impressive. But I'm curious to hear if people who have used DeepFreeze or any similar software have any advice or tips. To get things started, I will post my own pros and cons. Pros: The state of the machine is frozen in our chosen state. All changes made to the machine after that disappear upon restart. (This frozen state really appears to cover everything. I have yet to do something to a test machine that isn't instantly healed.) Tons of trivial but time-consuming maintenance is gone in an instant. Also, lots of not-so-trivial breakage should be avoided. There are good options, however, that allow you to create storage spaces either globally or per user. (Otherwise, stored files disappear upon reboot. For some machines, this is a good option itself. Simply warn people: save externally or else; this machine is a kiosk, not your storage space.) Cons: Anytime we actually need to make a change (upgrade basic software, add a printer or an airport permanently, add new software), the process is a bit more complex. Reboot into a special mode (thaw state), make changes, reboot back into frozen mode. If (when?) we forget this, we will end up making changes that disappear after the next reboot. Users will forget to save files correctly (in the right place or externally), and we will have loud, unpleasant conversations explaining that we can't recover the document they worked on all afternoon yesterday. The machine rebooted. The file is gone. These are my initial thoughts, but I would love to hear from other people who have experience with DeepFreeze or any similar software. What should we be careful about? Do the pros outweigh the cons? What gains or problems am I not seeing? Thanks.

  • How should I port this Prototype to jQuery?

    - by blu
    There is currently this Prototype code that does a PUT: new Ajax.Request(someUrl, { method: 'put', parameters: { 'foo': bar }, onSuccess: function(response) { } .bind(this) }); I found this post but the solution uses an extra parameter supported by RoR, however I am targeting an ASP.NET backend. I searched a bit and found that not all browsers support PUT operations so apparently this could fail in certain browsers? This is already in prod, so a direct port would be fine for now I suppose. As an aside, what is the deal with the bind(this) in the onSuccess function?
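
    A fairly direct jQuery translation is sketched below. $.ajax does accept type: 'PUT', with the caveat the poster mentions that not every browser or proxy handles it; the common fallback in that case is to POST with an X-HTTP-Method-Override header and have the server-side code check for it. As for bind(this): in Prototype it just fixes what this refers to inside the callback, and jQuery's context option (1.4+) plays the same role.

    ```javascript
    // Sketch: jQuery equivalent of the Prototype Ajax.Request PUT call.
    $.ajax({
        url: someUrl,
        type: 'PUT',               // see note above about browser/proxy support
        data: { foo: bar },
        context: this,             // same effect as Prototype's .bind(this)
        success: function (response) {
            // 'this' here refers to the same object as in the surrounding code
        }
    });
    ```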

  • Primary/secondary ethernet interfaces via NetworkManager in Ubuntu 9.10

    - by Josh
    I have an Ubuntu 9.10 machine with three ethernet interfaces, eth0, eth1 and eth2. eth2 is connected to a private network. eth0 and eth1 are connected to two different LANs; either one will provide access to the internet. All three networks have DHCP servers. Using Ubuntu's default settings (and Gnome), when I boot up all the interfaces are active and my system gets three IP addresses. However, any attempt to access the internet results in connection timeouts and other weirdness. I suspect that traffic is going out on one NIC (like eth0) and coming back in on another (like eth1). I'm not sure what's going on. The only way I can access the internet at the moment is to bring two of the devices down with ifdown. How can I configure eth0 as my primary interface, so all traffic goes out by default on that interface, while keeping the other two active? Also, I want to make sure Avahi broadcasts properly on all three IPs so that the computers on the LAN of eth1 can still connect to myHostname.local... EDIT: Here's my routing table:
    Kernel IP routing table
    Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
    172.16.151.0    0.0.0.0         255.255.255.0   U       0   0       0    eth2
    172.16.30.0     0.0.0.0         255.255.255.0   U       0   0       0    eth0
    10.1.0.0        0.0.0.0         255.255.0.0     U       0   0       0    eth1
    169.254.0.0     0.0.0.0         255.255.0.0     U       0   0       0    eth1
    0.0.0.0         172.16.30.2     0.0.0.0         UG      0   0       0    eth0
    0.0.0.0         10.1.0.1        0.0.0.0         UG      0   0       0    eth1
    I want the 172.16.30.2 network to be the primary one and the 10.1.0.0 network to be the secondary one. EDIT2: My nameservers are also incorrect. It seems like Ubuntu is bringing the networks up in order (eth0, then 1, then 2), and the DHCP information from eth1 is overriding eth0, and eth2 is overriding eth1. How can I reverse this so the DHCP information from eth0 is the "master"? EDIT3: This seems to be an issue with Gnome's NetworkManager.

  • How to manipulate data in a View using ASP.NET MVC RC 2?

    - by Picflight
    I have a table [Users] with the following columns: [UserId] INT, [BirthDate] SmallDateTime, [Gender] Bit, [Active] Bit. Gender and Active are Bit columns that hold either 0 or 1. I am displaying this data in a table in my View. For Gender I want to display 'Male' or 'Female'; how and where do I manipulate the 1's and 0's? Is it done in the repository where I fetch the data, or in the View? For the Active column I want to show a checkbox that will AutoPostBack on selection change and update the Active field in the database. How is this done without Ajax or jQuery?

  • Sessions and uploadify

    - by Uffo
    I'm using uploadify, and I can't set sessions in my PHP files. My script looks like this: $("#uploadify").uploadify({ 'uploader' : '/extra/flash/uploadify.swf', 'script' : '/admin/uploads/artistsphotos', 'scriptData' : {'PHPSESSID' : '<?= session_id(); ?>'}, 'cancelImg' : '/images/cancel.png', 'folder' : '/img/artists', 'queueID' : 'fileQueue', 'auto' : false, 'multi' : true, 'onComplete' : function(a, b, c, d, e){ }, 'onAllComplete': function(event,data){ $bla = $('#art').find(':selected',this); $fi = $bla.val(); $.ajax({ type: "POST", url: "/admin/uploads/artistsphotosupload", data: "artist="+$fi, success: function(msg){ console.log(msg); } }); } }); And in PHP, if I try $_SESSION['name'] = 'something'; I can't access it in another file, and I do have session_start(); activated. Any solutions?

  • No method 'get' on backbone model save

    - by user888734
    I'm using backbone for a reasonably complicated form. I have a number of nested models, and have been computing other variables in the parent model like so: // INSIDE PARENT MODEL computedValue: function () { var value = this.get('childModel').get('childModelProperty'); return value; } This seems to work fine for keeping my UI in sync, but as soon as I call .save() on the parent model, I get: Uncaught TypeError: Object #<Object> has no method 'get' It seems that the child model kind of temporarily stops responding. Am I doing something inherently wrong? EDIT: The stack trace is: Uncaught TypeError: Object #<Object> has no method 'get' publish.js:90 Backbone.Model.extend.neutralDivisionComputer publish.js:90 Backbone.Model.extend.setNeutralComputed publish.js:39 Backbone.Events.trigger backbone.js:163 _.extend.change backbone.js:473 _.extend.set backbone.js:314 _.extend.save.options.success backbone.js:385 f.Callbacks.o jquery.min.js:2 f.Callbacks.p.fireWith jquery.min.js:2 w jquery.min.js:4 f.support.ajax.f.ajaxTransport.send.d
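
    One common cause of this (offered as a guess, since the model definitions aren't shown) is that the response to save() replaces the nested Backbone model stored under 'childModel' with the plain JSON object returned by the server, which has no get() method; the parent works fine until the first sync. Re-wrapping the child in parse() is one way to guard against it. ChildModel below stands in for whatever constructor the nested model actually uses:

    ```javascript
    // Sketch: keep the nested attribute a Backbone.Model even after save()/fetch()
    // responses come back as raw JSON.
    var ParentModel = Backbone.Model.extend({
        parse: function (response) {
            if (response && response.childModel && !(response.childModel instanceof Backbone.Model)) {
                response.childModel = new ChildModel(response.childModel); // assumed child constructor
            }
            return response;
        },
        computedValue: function () {
            return this.get('childModel').get('childModelProperty');
        }
    });
    ```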

  • Recreating Cookies on another Domain

    - by Bill
    Hi, I have a site on A.com and an iframe on B.com which reads info from A.com. I realize that there are some problems with third-party cookies, iframes and P3P, particularly in Safari [my problem]. Is it possible to instead use AJAX or a hidden iframe to pass the cookie information from A.com to B.com, which will then "recreate" another cookie with the same information on the iframe in B.com? I am trying to do this for authentication, i.e. a user is logged in on A.com and then goes to B.com and the iframe is also logged in. I was hoping to perhaps pass the data in a hidden iframe and "recreate" the cookie in the iframe on B.com using JavaScript?

  • When I click on a checkbox, the image should be hidden, though I can't make it happen somehow and I can g…

    - by user309381
    function Psend() {
        new Ajax.Request('Handler.ashx', {
            method: 'get',
            onSuccess: function(transport) {
                var response = transport.responseText || "no response text";
                //alert("Success! \n\n" + response);
                var obj = response.evalJSON(true);
                for (i = 0; i < 4; i++) {
                    DeCheBX = $('MyDiv').insert(new Element('input', { 'type': 'checkbox', 'id': "Img" + obj[i].Nam, 'value': obj[i].IM, 'onClick': 'SayHi(this,i)' }));
                    document.body.appendChild(DeCheBX);
                    DeImg = $('MyDiv').insert(new Element('img', { 'id': "img" + obj[i].Nam, 'src': obj[i].IM }));
                    document.body.appendChild(DeImg);
                    SayHi = function(x, i) {
                        try {
                            if ($(x).checked == true) {
                                img = "img" + obj[i].Nam;
                                alert(img);
                                $('img').hide();
                            }
                        } catch (e) {
                            alert("error");
                        }
                    };
                }
            },
            onFailure: function() { alert('Something went wrong...') }
        });
    }

  • .NET on Windows to Mono on Ubuntu

    - by Srikanth
    I am looking at the possibility of moving my ASP.NET 2.0 app to the Mono framework. I have used the Mono analyzer tool and it does detect some P/Invoke and interop dependencies. For example: 1) We use Excel interops, and on Linux we are looking to use StarOffice/OpenOffice instead. Is there an easy way of substituting Excel with StarOffice? (I know it sounds bizarre, but I just don't want to miss out in case anyone has done it already.) 2) LDAP auth: what could be the best alternative on Ubuntu (or another flavour of Linux)? 3) Is there an ajax framework for Mono, preferably with controls similar to Atlas? I hope I am not being too ambitious here.. thanks.
