Search Results

Search found 17356 results on 695 pages for 'document ready'.


  • Js/Jquery bind Datatable or Datalist

    - by Robin Rieger
    Scenario: I have an input box for postcodes. When you put in a postcode it makes a call to a web service, gets all the suburb names back and binds them to a list, like an input drop-down list. This works when I return a List<string> from the web service for just the names. No problem. The problem arises when I go to save the values to the db. When it gets saved I cannot save the name of the suburb in the suburb column in the db; I have to save the id of that suburb for the foreign key constraint. So then I changed the List<> thing slightly and cheated and returned the following:
        <ArrayOfString>
            <string>8213</string>
            <string>BROOKFIELD</string>
            <string>8214</string>
            <string>CHAPEL HILL</string>
            <string>8215</string>
            <string>FIG TREE POCKET</string>
            etc....
    The reason for this is so I can have the value of the item in the drop-down as the id and hence can save that in the code-behind on save. To do this, I did the following:
        $(result).each(function (index, value) {
            var suburbname;
            var pattern = new RegExp('[0-9*]');
            var m = pattern.exec(value);
            if (m == null) {
                suburbname = value;
                var o = new Option(suburbname, suburbid);
                /// jquerify the DOM object 'o' so we can use the html method
                $(o).html(suburbname);
                $(document.getElementById('<%= suburb.ClientID %>')).append($("<option></option>")
                    .attr("value", suburbid)
                    .text(suburbname));
            } else {
                suburbid = value;
            }
        });
    That works as well... the drop-down only has the names, and the value for each is the number.
    The problem....? I am making assumptions above which I don't like... i.e. that the name never has a number in it, and that the web service will always return the id first, so the id is set before the add-to-dropdown runs for the first time. It just feels wrong.... :S (thoughts?)
    So, if I change the web service to return a DataTable, to output the following (which is what the whole query returns):
        <Suburbs diffgr:id="Suburbs1" msdata:rowOrder="0">
            <SuburbID>8213</SuburbID>
            <SuburbName>BROOKFIELD</SuburbName>
            <StateID>4</StateID>
            <StateCode>07</StateCode>
            <CountryCode>61</CountryCode>
            <TimeZones>10.00</TimeZones>
            <Postcode>4069</Postcode>
        </Suburbs>
    Is there a way of achieving the same as above in js/jquery, so that when the user inputs the postcode the web service returns the DataTable and then binds it to the select with the SuburbID going into the value and the name going into the text??? Any other ways I could return the data from the web service to solve this? Note: essentially I need the option to look like this:
        <option value="8213">BROOKFIELD</option>
    I also thought I could make a second call to get just the id on the bind of the text... but I kind of want to only make one call.... Thanks in advance, Cheers Robin. .net, C#, js, jquery, web service.....
    Solution (with guidance from Billy): Adding to the select using the other method below did not work, but my original way did once I had the correct variables, so I just used that.... The service returns:
        <ArrayOfMyClass>
            <MyClass>
                <ID>8213</ID>
                <Name>BROOKFIELD</Name>
            </MyClass>
            ... etc
    The JS is (note: it runs on change of the postcode input box; it calls the web service and on success runs the following):
        function OnCompleted(result) {
            var _suburbs = result;
            var i = 0;
            $(_suburbs).each(function () {
                var SuburbName = _suburbs[i].Name;
                var SuburbID = _suburbs[i].ID;
                $(document.getElementById('<%= suburb.ClientID %>')).append($("<option></option>")
                    .attr("value", SuburbID)
                    .text(SuburbName));
                i = i + 1;
            });
        }
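    A slightly tighter version of that success handler, sketched below as an assumption rather than the poster's final code: it relies on the same array of {ID, Name} objects returned by the service and lets $.each pass each element into the callback, so the manual counter is not needed.
        function OnCompleted(result) {
            var select = $('#<%= suburb.ClientID %>');
            select.empty();                         // drop options left over from a previous postcode
            $.each(result, function (index, suburb) {
                $("<option></option>")
                    .val(suburb.ID)                 // SuburbID becomes the option value
                    .text(suburb.Name)              // SuburbName becomes the visible text
                    .appendTo(select);
            });
        }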

    Read the article

  • Poor Customer Service Example

    - by MightyZot
    Lately I have been frustrated by examples of poor customer service. At least one is worth writing about because I don’t think companies realize the effects of their service policies on loyal customers. Bad Customer Service Example #1 Recently, I received an offer in the mail from my cable company, suddenLink. The offer was for an updated TiVo for $12/mo. Normally I ignore offers like this one because I already have the service they’re offering and many times advertisers are offering alternatives to what is already an excellent product offering. I tend to exhibit a high level of loyalty to the products and brands that I use. In this case, we were looking to upgrade our TiVo and this deal is attractive for several reasons: I don’t want to pay a huge amount up-front for the device, so paying a monthly amount for the device is attractive to me. My entertainment is almost all on a single invoice. I’m no longer going to be billed by suddenLink and TiVo. TiVo is still involved, so I am still loyal to the brand I love. I have resisted moving to other DVRs and services for over a decade. I called suddenLink to order the new TiVo and was rewarded with great customer service. In fact, I can’t remember ever getting poor customer service from suddenLink. They are always there to answer my technical support questions and they are very responsive to outages. Then I called TiVo. First of all, I chose the option on the phone system to change or cancel my service, which was consequently met by an inordinate hold time. (I’m calling this time inordinate because I get through very quickly if I want to purchase something.) This is a trend that I’ve noticed with companies – if you want me to be loyal to you, it should be just as easy to cancel your service as it is to purchase it. Because, I should never be cancelling because I am unhappy. And, if you ever want my business again, or more importantly a reference, then you’d better make the exit door open just as easy as the enter door. After quite some time on hold, I talked to “Victor” who was very courteous. Victor canceled my service and then told me that I could keep my current TiVo and transfer recorded programs to it from the new TiVo.  Cool I said, but what about the cost?  He said there was no extra cost.  This was also attractive to me because I paid for my TiVo and it would be good to use it for something at least.  That was four months ago. This month I noticed that TiVo was still charging me for my original service. I was a little upset, but I decided to give them the benefit of the doubt. After all, I am a loyal TiVo customer and I have resisted moving to other solutions for over a decade. I’m sure they will do whatever it takes to keep my business, through TiVo or through suddenLink. After quite some time on hold, I was able to talk to a customer service representative, “Les”. I explained that I am a loyal TiVo customer, but I purchased this deal through my cable provider. I’m still with TiVo, I just wanted a single bill and to take advantage of the pay-over-time option. “Les” told me that he was very sorry to hear that I’m leaving TiVo, to which I responded again that I wasn’t leaving TiVo, I just want one invoice, and to take advantage of the pay-over-time. So, after explaining that I requested a termination of the non-suddenLink account (TiVo can see both of course), I was put on hold again for quite some time while my refund was “approved”.  “Les” said that he could see my cancellation request back in July. 
Note that it is now November, so they have billed me inappropriately four times. After quite some time, he came back on the line and told me that he was able to “get me most of my money back.” He got approval to refund 90 days. Even though I requested cancellation of one of my accounts, TiVo has that cancellation request on file and they admit overbilling me, I am going to get “most” of my money back. To top this experience off, when we were ready to hang up, “Les” told me that he was sorry to see me go and that he hoped I would come back to TiVo again. Again, I explained to “Les” that I have not left TiVo. I am just paying them through suddenLink. At that point, he went into a small dissertation about how this is a special arrangement they have with suddenLink and very few others. He made me feel like I was doing something wrong. Why should I feel that way? TiVo made the deal with suddenLink, not me, and the deal seemed like a good compromise for me to be able to get what I need. Here is what TiVo Customer Service accomplished on those two calls – I no longer feel like I need to be loyal to the TiVo brand or service. If I had been treated better on these two calls, I would still be recommending TiVo to my friends. They would still be getting revenue from a loyal customer, who paid the same rate for over a decade, and this article wouldn’t be here for you to read. Interesting… In my opinion, if you want brand loyalty, be loyal to your customers!

    Read the article

  • Merging two templates in iText

    - by Shaggy Frog
    Let's say I have two PDF templates created with Adobe Acrobat, which are both single-page, 8.5x11 documents. The first template (A.pdf) has content for the top half of the page. The second template (B.pdf) has content for the bottom half of the page. (It just so happens the content in both templates does not "overlap" each other.) I would like to use iText to take these two templates and create a single, "merged" template from it (C.pdf) that is only a single page (with A.pdf's content on the top half and B.pdf's content on the bottom half). (I do not want to "merge" these two files into a 2-page document. I need the final product to be a single page.) I will be running iText in a servlet environment (Tomcat 6) but I don't think that makes a difference to the answer. Is this possible?

    Read the article

  • PDF Generation Help needed

    - by Kobojunkie
    Hi, I am brand new to PDF generation and rendering, but I have a project to create a PDF template system that allows users to save templates to a database and later generate a PDF document using the template and values from my database. Questions:
    a) Is there a PDF tool out there that can help me with this, and documentation I can study to learn it?
    b) Are there free tools out there for this?
    c) How do I create a PDF template? XML?
    Thanks in advance!

    Read the article

  • onclick event from XML Nodes

    - by user256007
    My documents are a mixture of XML and HTML tags, where the HTML tags are pulled from the XHTML namespace and mine are from various other namespaces. I want to receive user interaction events from the XML nodes of my document. I first tried <do:landingTime xhtml:onclick="fnc">20:48:29.45</do:landingTime> with no luck, but <xhtml:span onclick="fnc"> works. So is there any solution/trick/hack/backdoor for receiving events from my XML nodes?
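    One approach that avoids namespaced onclick attributes altogether is to attach the handlers from script once the document has loaded. A minimal sketch in JavaScript, assuming the custom elements live in a hypothetical namespace URI http://example.com/do (substitute your own) and that fnc is already defined:
        document.addEventListener('DOMContentLoaded', function () {
            // select every <do:landingTime> element by namespace URI and local name
            var nodes = document.getElementsByTagNameNS('http://example.com/do', 'landingTime');
            for (var i = 0; i < nodes.length; i++) {
                nodes[i].addEventListener('click', fnc, false);
            }
        });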

    Read the article

  • ASP.Net Cross Page Posting

    - by John
    Currently I have two pages: The first page contains an input form, and the 2nd page generates an excel document. The input form's button posts to this 2nd page. What I'd like to do is add a second button which also posts to the 2nd page; however, I'll need requests created from this new button to act differently, which brings me to my question: Is there a way I can tell, from the 2nd page, which button was pressed to submit the request? The main reason I'm asking is I'd like to re-use the 2nd page's logic in parsing the information from the first page if possible; I'd rather not have to copy it to a new page and have the new button post to that. Thanks!
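    One way to tell the two buttons apart on the second page, sketched here as an assumption rather than the canonical ASP.NET technique, is to stamp a hidden field on the first page just before the post and branch on its value server-side (e.g. via Request.Form["postSource"]). The field and button ids below are hypothetical.
        // assumes <input type="hidden" id="postSource" name="postSource" /> on the first page
        document.getElementById('btnExportDetailed').onclick = function () {
            document.getElementById('postSource').value = 'detailed';
        };
        document.getElementById('btnExportSummary').onclick = function () {
            document.getElementById('postSource').value = 'summary';
        };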

    Read the article

  • how do I automatically execute javascript?

    - by user317005
    How do I automatically execute JavaScript? I know of <body onload="">, but I thought maybe there is another way to do it?
    HTML:
        <html><head></head><body><div id="test"></div></body></html>
    JavaScript:
        <script>
        (function () {
            var text = document.getElementById('test').innerHTML;
            var newtext = text.replace('', '');
            return newtext;
        })();
        </script>
    I want to get the text within "test", replace certain parts, and then output it to the browser. Any ideas on how to do it? I'd appreciate any help. Thanks.
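    A minimal sketch of one way to do this without <body onload="">: run the work from the window load event and write the modified text back into the element so the result shows up in the page. The search and replacement strings are placeholders.
        <script>
        window.onload = function () {
            var el = document.getElementById('test');
            // replace a placeholder substring and write the result back into the page
            el.innerHTML = el.innerHTML.replace('oldText', 'newText');
        };
        </script>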

    Read the article

  • Java exception reading xml file

    - by xain
    Hi, I have the following XML file:
        <?xml version="1.0" encoding="UTF-8"?>
        <c1>
            <c2 id="0001" n="CM" urlget="/at/CsM" urle="/E/login.jsp">
            </c2>
            <c2 id="0002" n="C2M" urlget="/a2t/CsM" urle="/E2/login.jsp">
            </c2>
        </c1>
    I'm trying to load c2's attributes this way:
        Document d = DocumentBuilderFactory.newInstance()
                                           .newDocumentBuilder()
                                           .parse("epxy.xml");
        Element c1 = d.getDocumentElement();
        Element c2 = (Element)c1.getFirstChild();
        while (c2 != null) {
            ...
            c2 = (Element)c2.getNextSibling();
        }
    But I get the exception java.lang.ClassCastException: org.apache.xerces.dom.DeferredTextImpl incompatible with org.w3c.dom.Element in the line Element c2 = (Element)c1.getFirstChild(); before the loop. Any hints? Thanks.
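    The cast fails because the first child of <c1> is the whitespace text node between <c1> and <c2>, not an element, and the same applies to each sibling step. The usual fix is to test the node type before treating a node as an element; the pattern is sketched below in JavaScript against an already parsed xmlDoc (the Java version is analogous: compare getNodeType() with Node.ELEMENT_NODE before casting).
        var c1 = xmlDoc.documentElement;
        for (var node = c1.firstChild; node !== null; node = node.nextSibling) {
            if (node.nodeType !== 1) {      // 1 === ELEMENT_NODE; skips the whitespace text nodes
                continue;
            }
            var c2 = node;                  // safe to treat as an element now
            // e.g. c2.getAttribute('id'), c2.getAttribute('urlget'), ...
        }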

    Read the article

  • Cannot Debug Unmanaged Dll from C#

    - by JustSmith
    I have a DLL that was written in C++ and called from a C# application. The DLL is unmanaged code. If I copy the DLL and its .pdb files with a post build event to the C# app's debug execution dir I still can't hit any break points I put into the DLL code. The break point has a message attached to it saying that "no symbols have been loaded for this document". What else do I have to do to get the debugging in the dll source? I have "Tools-Options-Debugging-General-Enable only my code" Disabled. The DLL is being compiled with "Runtime tracking and disable optimizations (/ASSEMBLYDEBUG)" and Generate Debug Info to "Yes (/DEBUG)"

    Read the article

  • WPF Binding XAML vs C#

    - by kubal5003
    Hello, I've got a strange problem: a binding created through XAML (either way, by markup extension or the normal syntax) isn't working (BindingOperations.IsDataBound returns false and in fact no Binding object is created). When I do literally the same thing from code, everything works perfectly. One more thing: the binding in XAML is created in a DataTemplate. What's funny is that when I use the DataTemplate for the first time it fails; then I fix it from code (add the binding to the specific objects), and while adding more objects to the collection the binding set in XAML just works. If I remove all the objects from the collection and then add a new one, the binding fails once again. In reality this is a shortened version of another of my questions. For details please refer to: http://stackoverflow.com/questions/2986511/wpf-debugging-avalonedit-binding-to-document-property Sorry for doing it this way, but there's no answer and it's probably too long for anybody to read.

    Read the article

  • Solving iPhone/iPad out of memory issues

    - by Joonas Trussmann
    I have a strange issue where I'm scrolling through a paged UIScrollView which displays the pages of a PDF document (using Quartz 2D and CATiledLayer). When I page through memory allocation looks fine with it going up with a few initial pages and then keeping it steady as it obviously releases the memory kept for earlier pages. Upon hitting page x (not a certain PDF page or a certain number per se) memory usage goes from a couple of megs to 308 megs and the app crashes. So my question is: how to best try to find what's causing this? The object alloc tool in instruments shows the memory as simply going to malloc. (in huge chunks).

    Read the article

  • What's wrong with this javascript?

    - by Arlen Beiler
    What's wrong with this code?
        var divarray = document.getElementById("yui-main").getElementsByTagName("div");
        var articleHTML = array();
        var absHTML;
        var keyHTML;
        var bodyHTML = array();
        var i = 0;
        for (var j in divarray) {
            if (divarray[i].className == "articleBody") {
                alert("found");
                articleHTML = divarray[i];
                break;
            }
            bodyHTML[i] = '';
            if (articleHTML[i].className == "issueMiniFeature") { continue; }
            if (articleHTML[i].className == "abstract") { absHTML = articleHTML[i]; continue; }
            if (articleHTML[i].className == "journalKeywords") { keyHTML = articleHTML[i]; continue; }
            bodyHTML[i] = articleHTML[i];
            i++;
        }
    This is the error I am getting: ReferenceError: array is not defined. I am using Google Chrome if it helps any.
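    The error comes from the calls to array(): JavaScript has no global array() function, only the Array constructor and the [] literal, so the very first assignment throws. A minimal sketch of the corrected declarations (the loop logic itself is left as in the question):
        var divarray = document.getElementById("yui-main").getElementsByTagName("div");
        var articleHTML = [];      // [] (or new Array()) instead of the undefined array()
        var bodyHTML = [];
        var absHTML, keyHTML;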

    Read the article

  • Compatibility issues with <a> and calling a function(); across different web browsers

    - by Matthew
    Hi, I am new to JavaScript. I wrote the following function rollDice() to produce 5 random numbers and display them, and I use an anchor with a click event to call the function. The problem is that in Chrome it won't display, it works fine in IE, and in Firefox the 5 values display and then the original page with the anchor appears! I suspect that my script tag is too general, but I am really lost. Also, if there is a display function that doesn't clear the screen first, that would be great.
        diceArray = new Array(5)
        function rollDice() {
            var i;
            for (i = 0; i < 5; i++) {
                diceArray[i] = Math.round(Math.random() * 6) % 6 + 1;
                document.write(diceArray[i]);
            }
        }
    When I click, it should display the 5 random values.
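    Calling document.write() after the page has finished loading implicitly reopens the document and wipes it, which is why the anchor page disappears and the browsers behave differently. A minimal sketch that writes the results into a placeholder element instead (the div id "dice" is hypothetical):
        var diceArray = new Array(5);
        function rollDice() {
            var parts = [];
            for (var i = 0; i < 5; i++) {
                diceArray[i] = Math.floor(Math.random() * 6) + 1;   // uniform 1..6
                parts.push(diceArray[i]);
            }
            // update the page in place instead of clearing it with document.write
            document.getElementById('dice').innerHTML = parts.join(' ');
        }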

    Read the article

  • Get the text of an li element

    - by Deepak Prasanna
    <ul class="leftbutton" > <li id="menu-selected">Sample 1</li> <li>Sample 2</li> <li>Sample 3</li> <li>Sample 4</li> <li>Sample 5</li> </ul> I want to get the text of the li item which the id="menu-selected". Right now I am doing something like document.getElementById('menu_selected').childNodes.item(0).nodeValue Is there any simpler way of doing the same?

    Read the article

  • What to do after a servicing fails on TFS 2010

    - by Martin Hinshelwood
    What do you do if you run a couple of hotfixes against your TFS 2010 server and you start to see some odd behaviour? A customer of mine encountered that very problem, but they could not just, or at least not easily, go back a version.
    You see, around the time of the TFS 2010 launch this company decided to upgrade their entire 250+ development team from TFS 2008 to TFS 2010. They encountered a few problems, owing mainly to the size of their TFS deployment and the way they were using TFS. They were not doing anything wrong, but when you have the largest deployment of TFS outside of Microsoft you tend to run into problems that most people will never encounter. We are talking half a terabyte of source control in TFS with over 80 proxy servers. It's certainly the largest deployment I have ever heard of. When they did their upgrade way back in April, they found two major flaws in the product that meant that they had to back out of the upgrade and wait for a couple of hotfixes.
        KB983504 – Hotfix
        KB983578 – Patch
        KB2401992 – Hotfix
    In the time since they got the hotfixes they have run 6 successful trial migrations, but we are not talking minutes or hours here. When you have 400+ GB of data it takes time to copy it around. It takes time to do the upgrade and it takes time to do a backup. Well, last week it was crunch time: with their developers off for Christmas they had a window of opportunity to complete the upgrade. Now these guys are good, but they wanted Northwest Cadence to be available “just in case”. They did not expect any problems as they already had 6 successful trial upgrades.
    The problems surfaced around 20 hours in, after the first set of hotfixes had been applied. The new Team Project Collection, the only thing of importance, had disappeared from the Team Foundation Server Administration console. The collection would not reattach either. It would not even list the new collection as attachable!
    Figure: We know there is a database there, but it does not
    This was a dire situation, as 20+ hours to repeat would leave the customer over time with 250+ developers sitting around doing nothing. We tried everything, and then we stumbled upon the command of last resort.
        TFSConfig Recover /ConfigurationDB:SQLServer\InstanceName;TFS_ConfigurationDBName /CollectionDB:SQLServer\instanceName;"Collection Name"
        -http://msdn.microsoft.com/en-us/library/ff407077.aspx
    WARNING: Never run this command!
    Now this command does something a little nasty. It assumes that there really should not be anything wrong and sets about fixing it. It ignores any servicing levels in the Team Project Collection database and forcibly applies the latest version of the schema. I am sure you can imagine the types of problems this may cause when the schema is updated, leaving the data behind. That said, as far as we could see this collection looked good, and we were even able to find and attach the team project collection to the Configuration database.
    Figure: After attaching the TPC it enters a servicing mode
    After reattaching the team project collection we found the message “Re-Attaching”. Well, fair enough, that sounds like something that may need to happen, and after checking that there was disk IO we left it to it. 14+ hours later it was still not done, so the customer raised a priority support call with MSFT and an engineer helped them out.
    Figure: Everything looks good, it is just offline.
    Tip: Did you know that these logs are not represented in the ~/Logs/* folder until they are opened once?
    The engineer dug around a bit and listened to our situation. He knew that we had run the dreaded “tfsconfig restore”, but was not fazed.
    Figure: This message looks suspiciously like the wrong servicing version
    As it turns out, the servicing version was slightly out of sync with the schema.
        KB          Schema   Successful
        KB983504    341      Yes
        KB983578    344      sort of
        KB2401992   360      nope
    Figure: KB, Schema table with notation to its success
    The Schema version above represents the final end-of-run version for that hotfix or patch.
    The only way forward
    The problem was that the version was somewhere between 341 and 344. This is not a nice place to be in, and the engineer gave us the only way forward: the removal of the servicing number from the database so that the re-attach process would apply the latest schema. If this sounds a little like the “tfsconfig recover” command then you are exactly right.
    Figure: Sneakily changing that 3 to a 1 should do the trick
    Figure: Changing the status and dropping the version should do it
    Now that we have done that we should be able to safely reattach and enable the Team Project Collection.
    Figure: The TPC is now all attached and running
    You may think that this is the end of the story, but it is not. After a while of mulling and seeking expert advice we came to the opinion that the database was, for want of a better term, “hosed”. There could well be orphaned data in there, and the likelihood that we would have problems later down the line is pretty high. We contacted the customer back and made them aware that in all likelihood the repaired database was more like a “cut and shut” than anything else, and at the first sign of trouble later down the line it was likely to split in two. So with 40+ hours invested in getting this new database ready, the customer threw it away and started again. What would you do? Would you take the “cut and shut” to production and hope for the best?

    Read the article

  • Equivalent of LaTeX's \label and \ref in HTML.

    - by dreeves
    I have an FAQ in HTML (example) in which the questions refer to each other a lot. That means whenever we insert/delete/rearrange the questions, the numbering changes. LaTeX solves this very elegantly with \label and \ref -- you give items simple tags and LaTeX worries about converting to numbers in the final document. How do people deal with that in HTML? ADDED: Note that this is no problem if you don't have to actually refer to items by number, in which case you can set a tag with <a name="foo"> and then link to it with <a href="#foo">some non-numerical way to refer to foo</a>. But I'm assuming "foo" has some auto-generated number, say from an <ol> list, and I want to use that number to refer to and link to it.
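    There is no native HTML counterpart to \label/\ref, so this usually ends up being done by a generator or a small script that resolves tags to the numbers an ordered list produces. A minimal sketch, assuming the questions are <li id="..."> items inside an <ol id="faq"> and that references are written as <a class="qref" href="#some-id"></a> (all of those names are hypothetical):
        document.addEventListener('DOMContentLoaded', function () {
            var items = document.querySelectorAll('#faq > li');
            var numberOf = {};
            for (var i = 0; i < items.length; i++) {
                numberOf[items[i].id] = i + 1;                        // the number the <ol> will display
            }
            var refs = document.querySelectorAll('a.qref');
            for (var j = 0; j < refs.length; j++) {
                var id = refs[j].getAttribute('href').slice(1);
                refs[j].textContent = 'question ' + numberOf[id];     // e.g. "question 3"
            }
        });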

    Read the article

  • Wrapping \NewEnviron into \newenvironment fails

    - by o_O Tync
    Hello! I am trying to wrap an environment created with \NewEnviron (package 'environ') into a good old \newenvironment:
        \NewEnviron{test}{aaa(\BODY)bbb}
        \newenvironment{wrapper}{\begin{test}}{\end{test}}

        \begin{wrapper}
        debug me
        \end{wrapper}
    However, this gives me a strange error:
        LaTeX Error: \begin{test} on input line 15 ended by \end{wrapper}.
        LaTeX Error: \begin{wrapper} on input line 15 ended by \end{document}.
    If I replace \NewEnviron{test}{aaa(\BODY)bbb} with \newenvironment{test}{aaa(}{)bbb} — everything works as expected! It seems like \NewEnviron fails to find its end for some reason. I'm trying to do some magic with 'floatfig' wrapped into a \colorbox, so I need a way to convert \colorbox to an environment and wrap it into another one. I can define a new command but it's not a very good idea. Thanks in advance!

    Read the article

  • Physics System ignores collision in some rare cases

    - by Gajoo
    I've been developing a simple physics engine for my game. Since the game physics is very simple, I've decided to increase accuracy a little bit. Instead of formal integration methods like Fourier or RK4, I'm directly computing the results after delta time "dt", based on the very first laws of physics:
        dx = 0.5 * a * dt^2 + v0 * dt
        dv = a * dt
    where a is acceleration and v0 is the object's previous velocity. Also, to handle collisions I've used a method which is somewhat different from those I've seen so far. I'm detecting all the collisions in the given time frame, stepping the world forward to the nearest collision, resolving it and again checking for possible collisions. As I said, the world consists of very simple objects, so I'm not losing any performance due to multiple collision checking. First I'm checking if the ball collides with any walls around it (which is working perfectly) and then I'm checking if it collides with the edges of the walls (yellow points in the picture). The algorithm seems to work without any problem except in some rare cases, in which the collision with points is ignored. I've tested everything and all the variables seem to be what they should, but after leaving the system to work for a minute or two the ball passes through one of those points. Here is the collision portion of my code; hopefully one of you guys can give me a hint where to look for a potential bug!
        void PhysicalWorld::checkForPointCollision(Vec2 acceleration, PhysicsComponent& ball, Vec2& collisionNormal, float& collisionTime, Vec2 target)
        {
            // this function checks if there will be any collision between a circle and a point
            // ball contains information about the circle (its current velocity, position and radius)
            // collisionNormal is an output variable
            // collisionTime is also an output variable
            // target is the point I want to check for collisions
            Vec2 V = ball.mVelocity;
            Vec2 A = acceleration;
            Vec2 P = ball.mPosition - target;
            float wallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2;
            float r = ball.mRadius / (mMap->getWallWidth() + mMap->getHallWidth()); // r is the ball radius scaled to match the actual rendered object

            if (A.any()) // todo: I need to first correctly solve the collisions in case there is no acceleration
                return;

            if (V.any()) // if the object is not moving there will be no collisions!
            {
                float D = P.x * V.y - P.y * V.x;
                float Delta = r*r*V.length2() - D*D;
                if (Delta < eps)
                    return;
                Delta = sqrt(Delta);
                float sgnvy = V.y > 0 ? 1 : (V.y < 0 ? -1 : 0);
                Vec2 c1(( D*V.y + sgnvy*V.x*Delta) / V.length2(), (-D*V.x + fabs(V.y)*Delta) / V.length2());
                Vec2 c2(( D*V.y - sgnvy*V.x*Delta) / V.length2(), (-D*V.x - fabs(V.y)*Delta) / V.length2());
                float t1 = (c1.x - P.x) / V.x;
                float t2 = (c2.x - P.x) / V.x;
                if (t1 > eps && t1 <= collisionTime)
                {
                    collisionTime = t1;
                    collisionNormal = c1;
                }
                if (t2 > eps && t2 <= collisionTime)
                {
                    collisionTime = t2;
                    collisionNormal = c2;
                }
            }
        }

        // this function should step the world forward by dt. it doesn't check for collisions of any two balls (components),
        // it just checks if there is a collision between the current component and 4 points forming a rectangle around it.
        void PhysicalWorld::step(float dt)
        {
            for (unsigned i = 0; i < mObjects.size(); i++)
            {
                PhysicsComponent &current = *mObjects[i];
                Vec2 acceleration = current.mForces * current.mInvMass;
                float rt = dt; // stores how much more the world should advance
                while (rt > eps)
                {
                    float collisionTime = rt;
                    Vec2 collisionNormal = Vec2(0,0);
                    float halfWallWidth = mMap->getWallWidth() / (mMap->getWallWidth() + mMap->getHallWidth()) / 2;

                    // we check if there is any collision with any of those 4 points around the ball;
                    // if there is a collision both collisionNormal and collisionTime will change.
                    // after these calls collisionTime will be exactly the time of the nearest collision (if any)
                    // and, if there was one, collisionNormal will report in which direction the ball should return.
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2(floor(current.mPosition.x) + halfWallWidth, floor(current.mPosition.y) + halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2(floor(current.mPosition.x) + halfWallWidth,  ceil(current.mPosition.y) - halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2( ceil(current.mPosition.x) - halfWallWidth, floor(current.mPosition.y) + halfWallWidth));
                    checkForPointCollision(acceleration, current, collisionNormal, collisionTime, Vec2( ceil(current.mPosition.x) - halfWallWidth,  ceil(current.mPosition.y) - halfWallWidth));

                    // whether there is a collision or not, we step the world forward, since we are sure there is no collision before collisionTime
                    current.mPosition += collisionTime * (collisionTime * acceleration * 0.5 + current.mVelocity);
                    current.mVelocity += collisionTime * acceleration;

                    // if the ball collided with anything, collisionNormal should be non-zero in at least one of its axes
                    if (collisionNormal.any())
                    {
                        collisionNormal *= Dot(collisionNormal, current.mVelocity) / collisionNormal.length2();
                        current.mVelocity -= 2 * collisionNormal; // simply reverse velocity along the collision normal direction
                    }
                    rt -= collisionTime;
                }
                // reset all forces for the current object so it'll be ready for later game events
                current.mForces.zero();
            }
        }

    Read the article

  • Part 1 - Load Testing In The Cloud

    - by Tarun Arora
    Azure is fascinating, but even more fascinating is the marriage of Azure and TFS! Introduction Recently a client I worked for had 2 major business critical applications being delivered, with very little time budgeted for Performance testing, we immediately hit a bottleneck when the performance testing phase started, the in house infrastructure team could not support the hardware requirements in the short notice. It was suggested that the performance testing be performed on one of the QA environments which was a fraction of the production environment. This didn’t seem right, the team decided to turn to the cloud. The team took advantage of the elasticity offered by Azure, starting with a single test agent which was provisioned and ready for use with in 30 minutes the team scaled up to 17 test agents to perform a very comprehensive performance testing cycle. Issues were identified and resolved but the highlight was that the cost of running the ‘test rig’ proved to be less than if hosted on premise by the infrastructure team. Thank you for taking the time out to read this blog post, in the series of posts, I’ll try and cover the start to end of everything you need to know to use Azure to build your Test Rig in the cloud. But Why Azure? I have my own Data Centre… If the environment is provisioned in your own datacentre, - No matter what level of service agreement you may have with your infrastructure team there will be down time when the environment is patched - How fast can you scale up or down the environments (keeping the enterprise processes in mind) Administration, Cost, Flexibility and Scalability are the areas you would want to think around when taking the decision between your own Data Centre and Azure! How is Microsoft's Public Cloud Offering different from Amazon’s Public Cloud Offering? Microsoft's offering of the Cloud is a hybrid of Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) which distinguishes Microsoft's offering from other providers such as Amazon (Amazon only offers IaaS). PaaS – Platform as a Service IaaS – Infrastructure as a Service Fills the needs of those who want to build and run custom applications as services. Similar to traditional hosting, where a business will use the hosted environment as a logical extension of the on-premises datacentre. A service provider offers a pre-configured, virtualized application server environment to which applications can be deployed by the development staff. Since the service providers manage the hardware (patching, upgrades and so forth), as well as application server uptime, the involvement of IT pros is minimized. On-demand scalability combined with hardware and application server management relieves developers from infrastructure concerns and allows them to focus on building applications. The servers (physical and virtual) are rented on an as-needed basis, and the IT professionals who manage the infrastructure have full control of the software configuration. This kind of flexibility increases the complexity of the IT environment, as customer IT professionals need to maintain the servers as though they are on-premises. The maintenance activities may include patching and upgrades of the OS and the application server, load balancing, failover clustering of database servers, backup and restoration, and any other activities that mitigate the risks of hardware and software failures.   
The biggest advantage with PaaS is that you do not have to worry about maintaining the environment, you can focus all your time in solving the business problems with your solution rather than worrying about maintaining the environment. If you decide to use a VM Role on Azure, you are asking for IaaS, more on this later. A nice blog post here on the difference between Saas, PaaS and IaaS. Now that we are convinced why we should be turning to the cloud and why in specific Azure, let’s discuss about the Test Rig. The Load Test Rig – Topology Now the moment of truth, Of course a big part of getting value from cloud computing is identifying the most adequate workloads to take to the cloud, so I’ve decided to try to make a Load Testing rig where the Agents are running on Windows Azure.   I’ll talk you through the above Topology, - User: User kick starts the load test run from the developer workstation on premise. This passes the request to the Test Controller. - Test Controller: The Test Controller is on premise connected to the same domain as the developer workstation. As soon as the Test Controller receives the request it makes use of the Windows Azure Connect service to orchestrate the test responsibilities to all the Test Agents. The Windows Azure Connect endpoint software must be active on all Azure instances and on the Controller machine as well. This allows IP connectivity between them and, given that the firewall is properly configured, allows the Controller to send work loads to the agents. In parallel, the Controller will collect the performance data from the agents, using the traditional WMI mechanisms. - Test Agents: The Test Agents are on the Windows Azure Public Cloud, as soon as the test controller issues instructions to the test agents, the test agents start executing the load tests. The HTTP requests are issued against the web server on premise, the results are captured by the test agents. And finally the results are passed over to the controller. - Servers: The Web Server and DB Server are hosted on premise in the datacentre, this is usually the case with business critical applications, you probably want to manage them your self. Recap and What’s next? So, in the introduction in the series of blog posts on Load Testing in the cloud I highlighted why creating a test rig in the cloud is a good idea, what advantages does Windows Azure offer and the Test Rig topology that I will be using. I would also like to mention that i stumbled upon this [Video] on Azure in a nutshell, great watch if you are new to Windows Azure. In the next post I intend to start setting up the Load Test Environment and discuss pricing with respect to test agent machine types that will be used in the test rig. Hope you enjoyed this post, If you have any recommendations on things that I should consider or any questions or feedback, feel free to add to this blog post. Remember to subscribe to http://feeds.feedburner.com/TarunArora.  See you in Part II.   Share this post : CodeProject

    Read the article

  • Stop scrolling to top in UIWebView - iPhone

    - by sagar
    I have placed the following JavaScript in my HTML file:
        <script type="text/javascript">
        function srk() {
            document.ontouchmove = function (e) {
                e.preventDefault();
            }
        }
        </script>
    I am scrolling my webview with the following code, with some animation:
        [myWebView stringByEvaluatingJavaScriptFromString:[NSString stringWithFormat:
            @"window.scrollTo(0,%i);", 414 * self.initialScrollPosition]];
    Everything is going right, but one problem that I am facing is as follows: whenever the user (or I) taps on the iPhone status bar, the WebView by default scrolls to the top. This should not happen. Is it possible to prevent this built-in functionality? I know one option is as follows:
        ((UIScrollView *)[[myWebView valueForKey:@"_internal"] valueForKey:@"scroller"]).scrollsToTop = NO;
    But is it valid to do that?

    Read the article

  • Most efficient way to create and nest divs with appendChild using *plain* javascript (no libraries)

    - by Matrym
    Is there a more efficient way to write the following appendChild / nesting code?
        var sasDom, sasDomHider;
        var d = document;
        var docBody = d.getElementsByTagName("body")[0];
        var newNode = d.createElement('span');
        var secondNode = d.createElement('span');

        // Hider dom
        newNode.setAttribute("id", "sasHider");
        docBody.appendChild(newNode);
        sasDomHider = d.getElementById("sasHider");

        // Copyier dom
        secondNode.setAttribute("id", "sasText");
        sasDomHider.appendChild(secondNode);
        sasDom = d.getElementById("sasText");
    Thanks in advance for your time :)
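    A minimal sketch of a tighter version: createElement already returns the node reference, so the getElementById round-trips can be dropped, and the spans can be nested before the whole fragment is attached to the body in one step.
        var d = document;
        var sasDomHider = d.createElement('span');
        sasDomHider.id = 'sasHider';

        var sasDom = d.createElement('span');
        sasDom.id = 'sasText';

        // nest first, then attach the complete fragment with a single appendChild
        sasDomHider.appendChild(sasDom);
        d.getElementsByTagName('body')[0].appendChild(sasDomHider);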

    Read the article

  • how can I set the subfigure labels in latex to uppercase?

    - by gojira
    I made a figure within my LaTeX document using the subfigure package, containing three subfigures. The part where the subfigures are now automatically labelled (a), (b), (c) needs to be uppercase, though. I.e. the result from the LaTeX code in the finally rendered PDF is
        (a) img1 (b) img2 (c) img3
    but it needs to be
        (A) img1 (B) img2 (C) img3
    How can I set this? The code looks something like this:
        \usepackage{subfigure}
        [...]
        \begin{sidewaysfigure}[p]
          \centering
          \subfigure[image1]{
            \label{fig:img1}
            \includegraphics[scale=2]{figures/img1.png}
          }
          \subfigure[image2]{
            \label{fig:img2}
            \includegraphics[scale=2]{figures/img2.png}
          }
          \subfigure[image3]{
            \label{fig:img3}
            \includegraphics[scale=2]{figures/img3.png}
          }
          \caption[my images]{\textbf{My images} I am referring to (A), (B) and (C) respectively.}
          \label{fig:imgs}
        \end{sidewaysfigure}

    Read the article

  • Call server side method from JSON

    - by Zerotoinfinite
    How do I call a server-side method from JSON? Below is the code I am using.
    Server side:
        [WebMethod]
        private void GetCustomer(string NoOfRecords)
        {
            string connString = "Data Source=Something;Initial Catalog=AdventureWorks;Trusted_Connection=True;";
            SqlConnection sCon = new SqlConnection(connString);
            SqlDataAdapter da = new SqlDataAdapter("SELECT * FROM Sales.Customer WHERE CustomerID < '" + NoOfRecords + "' ", sCon);
            DataSet ds = new DataSet();
            da.Fill(ds);
            GvDetails.DataSource = ds.Tables[0];
            GvDetails.DataBind();
        }
    Client side:
        var txtID = document.getElementById('txtValue');
        $.ajax({
            type: "POST",
            url: "Default.aspx/GetCustomer",
            data: "{Seconds:'" + txtID + "'}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (response) {
                alert(response);
            }
        });
    Now I want that, on a button click, I can call the function from the client side [JSON] and pass the textbox value to it. Please help.
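    Two details in the client-side snippet above are worth noting: txtID holds the DOM element rather than its value, and the key sent in the JSON body should match the server method's parameter name (NoOfRecords, not Seconds). A minimal sketch of the call with those two points adjusted; the button id is hypothetical, and whether the page method itself is callable (for ASP.NET page methods it generally needs to be public and static) is a separate server-side concern.
        $('#btnGetCustomer').click(function () {              // hypothetical button id
            var records = $('#txtValue').val();               // .val() reads the textbox contents
            $.ajax({
                type: "POST",
                url: "Default.aspx/GetCustomer",
                data: JSON.stringify({ NoOfRecords: records }),   // key matches the C# parameter
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (response) {
                    alert(response.d);                        // ASP.NET 3.5+ wraps results in a "d" property
                }
            });
            return false;                                     // keep the button from posting back
        });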

    Read the article

  • NSOutlineView with Drag and Drop

    - by bob
    I read the other post here on Outlineviews and DND, but I can't get my program to work. At the bottom of this post is a link to a zip of my project. Its very basic with only an outlineview and button. I want it to receive text files being dropped on it, but something is wrong with my code or connections. I tried following Apple's example code of their NSOutline Drag and Drop, but I'm missing something. 1 difference is my program is a document based program and their example isn't. I set the File's Owner to receive delegate actions, since that's where my code to handle drag and drop is, as well as a button action. Its probably a simple mistake, so could someone please look at it and tell me what I'm doing wrong? Here is a link to the file: http://www.4shared.com/file/or1A7aHk/OutlineDragDrop.html

    Read the article

  • Tracking Mouse Stop

    - by Hallik
    I am trying to track mouse movements in the browser and, if a user stops their mouse for 0.5 seconds, to execute some code. I have put a breakpoint in the code below in Firebug; it breaks on the var mousestop = function(evt) line but then jumps to the return statement. Am I missing something simple? Why isn't it executing the POST statement? I am tracking mouse clicks in a similar way, and that posts to the server just fine. Just not mouse stops.
        $.fn.saveStops = function() {
            $(this).bind('mousemove.clickmap', function(evt) {
                var mousestop = function(evt) {
                    $.post('/heat-save.php', {
                        x: evt.pageX,
                        y: evt.pageY,
                        click: "false",
                        w: window.innerWidth,
                        h: window.innerHeight,
                        l: escape(document.location.pathname)
                    });
                }, thread;
                return function() {
                    clearTimeout(thread);
                    thread = setTimeout(mousestop, 500);
                };
            });
        };
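    The mousemove handler above builds and returns the debouncing closure but never runs it, so the timer is never armed and $.post never fires. A minimal sketch of the usual arrangement, keeping the timer outside the handler and resetting it on every move; the delayed callback closes over the last move event, so pageX/pageY are still available when it fires.
        $.fn.saveStops = function () {
            var thread;
            return this.bind('mousemove.clickmap', function (evt) {
                clearTimeout(thread);                       // restart the 0.5 s countdown on every move
                thread = setTimeout(function () {
                    $.post('/heat-save.php', {              // fires only after the mouse has been still
                        x: evt.pageX,
                        y: evt.pageY,
                        click: "false",
                        w: window.innerWidth,
                        h: window.innerHeight,
                        l: escape(document.location.pathname)
                    });
                }, 500);
            });
        };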

    Read the article
