Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.


  • ASP.NET - Display Message While Page Is Loading

    - by Chris
    I have a page that performs a long-running task (10 to 15 seconds) in the page_load method. I have client-side JavaScript code that will display a decent "page loading" animated gif to the user. I am able to invoke the JavaScript method from the code-behind to display the "page loading" animated gif; however, the long-running task is hanging up the UI, so the animated gif doesn't actually display until after the long-running task is complete, which is the exact opposite of what I want. To test this out, in my page_load method I make a call to the JavaScript method to display the animated gif. Then I use Thread.Sleep(10000). What happens is that the animated gif doesn't display until after Thread.Sleep is complete. Obviously I am doing something incorrect. Any advice would be appreciated. Thanks. Chris
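
    A pattern that is often suggested for this situation (not from the original thread) is to let the page render immediately, with the loading indicator visible, and move the long-running work into a page method that client script calls once the page has loaded. A minimal C# sketch with hypothetical names, assuming a ScriptManager with EnablePageMethods="true":

        using System;
        using System.Web.Services;
        using System.Web.UI;

        public partial class ReportPage : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Do no heavy work here, so the page (and the animated gif) renders right away.
            }

            // Called from client script after the page has rendered,
            // e.g. PageMethods.RunLongTask(onDone) from the JavaScript that shows the gif.
            [WebMethod]
            public static string RunLongTask()
            {
                System.Threading.Thread.Sleep(10000); // placeholder for the real 10-15 second task
                return "done";
            }
        }

    The client-side callback then hides the animated gif and renders the result, so the UI thread that serves the page is never blocked by the task.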

    Read the article

  • WCF: limit number of calls per hour - per user

    - by Eric Eijkelenboom
    Hi guys, I've got a WCF service (basicHttpBinding, basic authentication, IIS 6.0) on which I want to restrict the number of calls per hour - on a per-user basis. For example, max 1000 calls per user, per hour (a la Google Maps, etc.). I also want to implement some sort of subscription mechanism, so that users can upgrade their call limit across various 'price plans'. I know that I could achieve this with a custom inspector, backed by a DB containing some sort of 'subscription' table and a counter, but I'd like to avoid reinventing the wheel. Does anyone have experience doing this? Are there 3rd party projects/libraries that support this out of the box? Thanks. Eric
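
    For reference, the custom-inspector route the question mentions can stay quite small. A minimal sketch (assuming basic authentication so the user name is available via ServiceSecurityContext, with an in-memory counter standing in for the subscription table and a hard-coded limit standing in for the price plans):

        using System;
        using System.Collections.Concurrent;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        public class CallQuotaInspector : IDispatchMessageInspector
        {
            // (user, hour bucket) -> call count; a stand-in for a real subscription/counter table.
            private static readonly ConcurrentDictionary<string, int> Counters =
                new ConcurrentDictionary<string, int>();

            private const int CallsPerHour = 1000; // would come from the user's price plan

            public object AfterReceiveRequest(ref Message request, IClientChannel channel,
                                              InstanceContext instanceContext)
            {
                string user = ServiceSecurityContext.Current.PrimaryIdentity.Name;
                string key = user + "|" + DateTime.UtcNow.ToString("yyyyMMddHH");

                int count = Counters.AddOrUpdate(key, 1, (k, c) => c + 1);
                if (count > CallsPerHour)
                    throw new FaultException("Hourly call limit exceeded.");

                return null;
            }

            public void BeforeSendReply(ref Message reply, object correlationState) { }
        }

    The inspector would be attached through an endpoint or service behavior; persisting the counters and per-plan limits in a database is what turns this into the subscription mechanism described above.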

    Read the article

  • What are the requirements for an application health monitoring system?

    - by Steven A. Lowe
    What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT manager) and/or the operations (on-call) staff? What else should it do above the minimum requirements? Is monitoring the 'infrastructure' applications (MS Exchange, Apache, etc.) sufficient, or do individual user applications, web sites, and databases also need to be monitored? If the latter, what do you need to know about them? Addendum: thanks for the input; I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.

    Read the article

  • twitter bootstrap typeahead ajax example

    - by emeraldjava
    I'm trying to find a working example of the Twitter Bootstrap typeahead element that will make an ajax call to populate its dropdown. I have an existing, working jQuery autocomplete example which defines the ajax URL and how to process the reply:

        <script type="text/javascript">
        //<![CDATA[
        $(document).ready(function() {
            var options = { minChars: 3, max: 20 };
            $("#runnerquery").autocomplete('./index/runnerfilter/format/html', options).result(
                function(event, data, formatted) {
                    window.location = "./runner/index/id/" + data[1];
                }
            );
        ..

    What do I need to change to convert this to the typeahead example?

        <script type="text/javascript">
        //<![CDATA[
        $(document).ready(function() {
            var options = { source: '/index/runnerfilter/format/html', items: 5 };
            $("#runnerquery").typeahead(options).result(
                function(event, data, formatted) {
                    window.location = "./runner/index/id/" + data[1];
                }
            );
        ..

    I'm going to wait for the 'Add remote sources support for typeahead' issue to be resolved.

    Read the article

  • GlassFish Starting Up Java SE Client - No Initial Context Exception

    - by Marcel
    Hi, I have developed a Java SE client that calls some session beans on a GlassFish server. I connect to the bean's remote interface like this:

        context = new InitialContext();
        em = (ICrudService) context.lookup("java:global/BackITServer/CrudServiceImpl");

    This works fine from inside Eclipse (gf-client on the build path). When I export my project as a runnable jar and call it on the console with java -jar BackItClient.jar, I get a NoInitialContextException. I would very much appreciate some help. Thank you, Marcel. PS: Do I really have to pack all the jars which gf-client is referencing into my jar?

    Read the article

  • VB.NET: after load event?

    - by themaninthesuitcase
    I need some way of knowing when a form has finished loading. My reasoning is that I have a second form that is loaded when this form loads. The code for this is called from form1.Load. Form2 is currently being displayed behind form1, as I am guessing form1 calls an Activate or similar at the end of the load, so any Activate, BringToFront etc. calls on form2 are overridden. If you look at the code below, I have tried adding frmAllocationSearch.Activate, frmAllocationSearch.BringToFront and Me.SendToBack after the call to ShowAlloactionSearchDialog(), but these are all wasted as something is happening after the load event is fired to bring Me to the front. Code is:

        Private Sub Allocation_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
            ShowAlloactionSearchDialog()
        End Sub

        Private Sub ShowAlloactionSearchDialog()
            If frmAllocationSearch Is Nothing OrElse frmAllocationSearch.IsDisposed Then
                frmAllocationSearch = New AllocationSearch
                frmAllocationSearch.MdiParent = Me.MdiParent
                frmAllocationSearch.Info = Me.Info
                frmAllocationSearch.Top = Me.Top
                frmAllocationSearch.Left = Me.Left + Me.Width - frmAllocationSearch.Width
                frmAllocationSearch.AllocationWindow = Me
                frmAllocationSearch.Show()
            Else
                If frmAllocationSearch.WindowState = FormWindowState.Minimized Then frmAllocationSearch.WindowState = FormWindowState.Normal
                frmAllocationSearch.Activate()
            End If
        End Sub

    Read the article

  • How to programmatically check availability of internet connection in Android?

    - by Fahad
    Hi! I want to check programmatically whether there is an internet connection on an Android phone/emulator, so that once I am sure an internet connection is present, I'll make a call to the internet. So it's like "Hey emulator! If you have an internet connection, then please open this page, else doSomeThingElse();". Hope you get the idea. I would highly appreciate a quick response; I need it quite soon. Regards, Fahad Ali Shaikh

    Read the article

  • MySQL INTO OUTFILE overwrite existing file?

    - by Derek Organ
    I've written a big SQL script that creates a CSV file. I want to call a cronjob every night to create a fresh CSV file and have it available on the website. Say for example I'm storing my file in '/home/sites/example.com/www/files/backup.csv' and my SQL is:

        SELECT *
        INTO OUTFILE '/home/sites/example.com/www/files/backup.csv'
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        LINES TERMINATED BY '\n'
        FROM ( ....

    MySQL gives me an error when the file already exists:

        File '/home/sites/example.com/www/files/backup.csv' already exists

    Is there a way to make MySQL overwrite the file? I could have PHP detect if the file exists and delete it before creating it again, but it would be more succinct if I could do it directly in MySQL.

    Read the article

  • IE page redirect hanging

    - by 08Hawkeye
    My app does a POST to my local server to create a new DOM element, comes back and should redirect to the same page with the new element. The problem is that when it gets back from the server, the app hangs for almost 2 minutes before doing the redirect. I've isolated the issue to the fact that IE seems to have trouble with my tree structure of 100+ DOM elements, and I can see in HttpWatch that it sits in a "Blocked" call for the 2 minutes before doing the redirect. Our temporary workaround is to set the innerHTML of the tree structure to an empty string before submitting, thus eliminating the heavy DOM lifting, but we shouldn't need to do this (Firefox has no trouble with the redirect). Question 1: Is there a better fix for this issue? Question 2: Why does ANY page care about the content before a redirect if it's going to be refreshed anyway? Thanks, y'all. //sw

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
    [Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx]

    I'm wrapping up a bit of the work we've been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I'd share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting.

    Summary: for those who don't like to read detailed posts or don't have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains... upwards of 10x-24x, and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards).

    Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately).

    We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here.

    The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results.

    Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following... less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows:

    We then tested the effects of blocking & parallelizing the transfers by running the updated application against the same source set and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn't worth the trouble and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here.

    Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a "negative optimization" due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported in the raw data provided in the linked worksheet) the charts and discussion below ignore source file sizes less than 1MB.

    (click chart for full size image)

    The chart above illustrates some interesting points about the results:

    - When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size).
    - For some of the moderately-sized source files, small blocks (256KB) are best.
    - As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs).
    - Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant.
    - The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file.

    (click chart for full size image)

    The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes.

    This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than that this view of the data highlights the negative effects of poorly choosing a block size for smaller files.

    Summary

    What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.

    Related Resources

    - Source code for upload test application
    - Source code for random file generator
    - OData feed of raw data from non-optimized transfer tests (Experiment Metadata, Experiment Datasets, 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads, Raw Data)
    - OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata, Experiment Datasets, Raw Data, 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
    - Excel worksheet showing summarizations and comparisons
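
    For orientation, here is a condensed sketch of the block/parallel upload pattern described above. This is not the author's actual test harness (that source is linked in the post); type and method names follow the 1.x StorageClient API and may differ in later SDKs, and the MD5 validation step is omitted for brevity:

        using System;
        using System.IO;
        using System.Threading.Tasks;
        using Microsoft.WindowsAzure.StorageClient;

        static class BlockUploader
        {
            // Split the file into fixed-size blocks, upload them in parallel with PutBlock,
            // then commit the ordered block list to assemble the blob.
            public static void Upload(CloudBlockBlob blob, string path, int blockSize)
            {
                long length = new FileInfo(path).Length;
                int blockCount = (int)Math.Ceiling((double)length / blockSize);
                var blockIds = new string[blockCount];

                Parallel.For(0, blockCount, i =>
                {
                    var buffer = new byte[blockSize];
                    int read;
                    using (var fs = File.OpenRead(path))        // one stream per block keeps this thread-safe
                    {
                        fs.Seek((long)i * blockSize, SeekOrigin.Begin);
                        read = fs.Read(buffer, 0, blockSize);
                    }

                    string blockId = Convert.ToBase64String(BitConverter.GetBytes(i)); // fixed-length IDs
                    using (var ms = new MemoryStream(buffer, 0, read))
                        blob.PutBlock(blockId, ms, null);

                    blockIds[i] = blockId;
                });

                blob.PutBlockList(blockIds);                    // commit the blocks in order
            }
        }

    A 1MB blockSize matches the "fixed value" recommendation in the summary; varying it with the source file size matches the charted results.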

    Read the article

  • I have a problem with an include file

    - by user309381
        //this is intializer.php
        defined('DS') ? null : define('DS', DIRECTORY_SEPARATOR);
        defined('SITE_ROOT') ? null : define('SITE_ROOT', DS.'C:', DS.'wamp', DS.'www', DS.'photo_gallery');
        defined('LIB_PATH') ? null : define('LIB_PATH', SITE_ROOT.DS.'includes');
        require_once(LIB_PATH.DS.'datainfo.php');
        require_once(LIB_PATH.DS.'function.php');
        require_once(LIB_PATH.DS.'session.php');
        require_once(LIB_PATH.DS.'database.php');
        require_once(LIB_PATH.DS.'user.php');

        //this is other file where i call php file
        // ERROR: Use of undefined constant LIB_PATH - assumed 'LIB_PATH' in
        // C:\wamp\www\photo_gallery\includes\database.php on
        // Notice: Use of undefined constant DS - assumed 'DS' in
        // C:\wamp\www\photo_gallery\includes\database.php on
        include(LIB_PATH.DS."database.php") ?

    Read the article

  • Can I get the matched DOM string with PHP and DOMDocument?

    - by alex
    I've got my HTML inside of $html.

        $dom = new DOMDocument();
        $dom->loadHTML($html);
        $xpath = new DOMXPath($dom);
        $tags = $xpath->query('//div[@id="header"]');

        foreach ($tags as $tag) {
            var_dump($tag->nodeValue); // the innerHTML of that element
            var_dump($tag);            // object(DOMElement)#3 (0) { }
        }

    Is there a way to get that node, or remove it? Basically, I'm parsing an existing website and need to remove elements from it. What method do I call to do that? Thanks

    Read the article

  • Why does C# call different overloaded methods for different values of the same type?

    - by Fabio Veronez
    Hello all, I have a question concerning C# method overload resolution. Let's suppose I have the following C# code:

        enum MyEnum { Value1, Value2 }

        public void test()
        {
            method(0); // this calls method(MyEnum)
            method(1); // this calls method(object)
        }

        public void method(object o) { }

        public void method(MyEnum e) { }

    Note that I know how to make it work, but I would like to know why, for one value of int (0), it calls one method and, for another (1), it calls another. It sounds awkward since both values have the same type (int) but they are "linked" to different methods. Ps.: This is my first question here, I'm sorry if I made something wrong. =P
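
    For what it's worth, the behaviour follows from the C# conversion rules for the literal 0: it is implicitly convertible to any enum type, so the method(MyEnum) overload is applicable and preferred as the more specific one, while 1 has no implicit enum conversion and can only be boxed to object. A small self-contained sketch (restructured from the code above) showing how casts make the choice explicit:

        enum MyEnum { Value1, Value2 }

        class OverloadDemo
        {
            static void Method(object o) { System.Console.WriteLine("object overload"); }
            static void Method(MyEnum e) { System.Console.WriteLine("enum overload"); }

            static void Main()
            {
                Method(0);          // literal 0 converts implicitly to MyEnum -> enum overload
                Method(1);          // 1 has no implicit enum conversion -> boxed to object
                Method((object)0);  // an explicit cast forces the object overload even for 0
                Method((MyEnum)1);  // an explicit cast selects the enum overload for 1
            }
        }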

    Read the article

  • Stop a postback in javascript

    - by jmpena
    Hello, I have an ASP.NET web form with a jQuery Thickbox, and an image that opens the Thickbox when the user clicks it. Once open, the Thickbox shows a grid with several rows and a button to select one; after the user selects a record, it returns the selected record to the main page and causes a __doPostBack(). BUT sometimes in IE6 it stays loading the postback and never ends; I have to refresh the page, and after the refresh everything shows fine. I don't want the postback to stay loading, and it does not happen every time. I have to call __doPostBack because I need to find info related to the selected record. Thanks.

    Read the article

  • Caching view-port based Geo-queries

    - by friism
    I have a web app with a giant Google Map in it. As users pan and zoom around on the map, points are dynamically loaded through AJAX calls which include the viewport bounds (NE and SW corner coordinates) and some other assorted parameters. How do I cache these requests for points? The problem is that the parameters are highly variable and (worse) not discrete, i.e. floats with a lot of decimal places. I'm using ASP.NET-MVC/C#/LINQ2SQL/SQL-Server, but the problem is not tied to that platform. This is the signature of the relevant method:

        [AcceptVerbs(HttpVerbs.Post)]
        public JsonResult Data(string date, string categories, string ne_lat, string ne_lng, string sw_lat, string sw_lng)
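
    One technique that is sometimes used for this kind of problem (a suggestion, not from the original answers) is to snap the viewport corners outwards to a fixed grid so that nearby requests share a discrete cache key. A hedged C# sketch with hypothetical names:

        using System;
        using System.Globalization;

        static class GeoCacheKey
        {
            // Round each corner outwards to a 0.05-degree grid so that slightly
            // different viewports map to the same key; the cached result then
            // covers at least the requested viewport.
            public static string ForViewport(double neLat, double neLng,
                                             double swLat, double swLng,
                                             string date, string categories,
                                             double cell = 0.05)
            {
                return string.Join("|", new[]
                {
                    date,
                    categories,
                    Snap(neLat, cell, roundUp: true),
                    Snap(neLng, cell, roundUp: true),
                    Snap(swLat, cell, roundUp: false),
                    Snap(swLng, cell, roundUp: false)
                });
            }

            private static string Snap(double value, double cell, bool roundUp)
            {
                double snapped = (roundUp ? Math.Ceiling(value / cell) : Math.Floor(value / cell)) * cell;
                return snapped.ToString("F2", CultureInfo.InvariantCulture);
            }
        }

    The key can then index a memcached or ASP.NET cache entry holding the points for the snapped bounds, while the response is still filtered to the exact viewport the client asked for.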

    Read the article

  • What are the rules for Javascript's automatic semicolon insertion?

    - by T.R.
    Well, first I should probably ask if this is browser dependent. I've read that if an invalid token is found, but the section of code is valid until that invalid token, a semicolon is inserted before the token if it is preceded by a line break. However, the common example cited for bugs caused by semicolon insertion is:

        return
        _a+b;

    which doesn't seem to follow this rule, since _a would be a valid token. On the other hand, breaking up call chains works as expected:

        $('#myButton')
            .click(function(){alert("Hello!")});

    Does anyone have a more in-depth description of the rules?

    Read the article

  • How can I refactor this to use an inline function or template instead of a macro?

    - by BillyONeal
    Hello, everyone :) I have a useful macro here:

        #define PATH_PREFIX_RESOLVE(path, prefix, environment) \
            if (boost::algorithm::istarts_with(path, prefix)) { \
                ExpandEnvironmentStringsW(environment, buffer, MAX_PATH); \
                path.replace(0, (sizeof(prefix)/sizeof(wchar_t)) - 1, buffer); \
                if (Exists(path)) return path; \
            }

    It's used about 6 times within the scope of a single function (that's it), but macros seem to have "bad karma" :P Anyway, the problem here is the sizeof(prefix) part of the macro. If I just replace this with a function taking a const wchar_t[], then the sizeof() will fail to deliver expected results. Simply adding a size member doesn't really solve the problem either. Making the user supply the size of the constant literal also results in a mess of duplicated constants at the call site. Any ideas on this one?

    Read the article

  • Is it possible to replace groovy method for existing object?

    - by Jean Barmash
    The following code tries to replace an existing method in a Groovy class:

        class A {
            void abc() { println "original" }
        }

        x = new A()
        x.abc()

        A.metaClass.abc = {-> println "new" }
        x.abc()

        A.metaClass.methods.findAll{ it.name == "abc" }.each { println "Method $it" }
        new A().abc()

    And it results in the following output:

        original
        original
        Method org.codehaus.groovy.runtime.metaclass.ClosureMetaMethod@103074e[name: abc params: [] returns: class java.lang.Object owner: class A]
        Method public void A.abc()
        new

    Does this mean that when I modify the metaclass by setting it to a closure, it doesn't really replace the method but just adds another method it can call, thus resulting in the metaclass having two methods? Is it possible to truly replace the method so that the second line of output prints "new"? When trying to figure it out, I found that DelegatingMetaClass might help - is that the most Groovy way to do this?

    Read the article

  • Testing a Django view causes an "AttributeError: 'NoneType' object has no attribute 'handler500'" error

    - by jack
    I just wanted to start testing a Django view using the code below:

        from django.test.client import Client
        c = Client()
        response = c.get('/search/keyword')
        print response.content

    It just throws out the following error message:

        "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 286, in get
          response = self.request(**r)
        File "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 230, in request
          response = self.handler(environ)
        File "/usr/local/lib/python2.6/dist-packages/django/test/client.py", line 74, in __call__
          response = self.get_response(request)
        File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 143, in get_response
          return self.handle_uncaught_exception(request, resolver, exc_info)
        File "/usr/local/lib/python2.6/dist-packages/django/core/handlers/base.py", line 178, in handle_uncaught_exception
          callback, param_dict = resolver.resolve500()
        File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py", line 268, in resolve500
          return self._resolve_special('500')
        File "/usr/local/lib/python2.6/dist-packages/django/core/urlresolvers.py", line 258, in _resolve_special
          callback = getattr(self.urlconf_module, 'handler%s' % view_type)
        AttributeError: 'NoneType' object has no attribute 'handler500'

    The view works in a browser. What's wrong with the above code?

    Read the article

  • MySQL vs PHP when retrieving a random item

    - by andufo
    Hi, which is more efficient (when managing over 100K records)?

    A. MySQL:

        SELECT * FROM user ORDER BY RAND();

    Of course, after that I would already have all the fields from that record.

    B. PHP: use memcached to have $cache_array hold all the data from "SELECT id_user FROM user ORDER BY id_user" for 1 hour or so... and then:

        $id = array_rand($cache_array);

    Of course, after that I have to make a MySQL call with:

        SELECT * FROM user WHERE id_user = $id;

    So... which is more efficient? A or B?

    Read the article

  • On Ubuntu, how do you install a newer version of python and keep the older python version?

    - by Trevor Boyd Smith
    Background:

    - I am using Ubuntu.
    - The newer Python version is not in the apt-get repository (or Synaptic).
    - I plan on keeping the old version as the default Python when you call "python" from the command line.
    - I plan on calling the new Python using pythonX.X (X.X is the new version).

    Given the background, how do you install a newer version of Python and keep the older Python version? I have downloaded from python.org the "install from source" *.tgz package. The readme is pretty simple and says "execute three commands: ./configure; make; make test; sudo make install;". If I do the above commands, will the installation overwrite the old version of Python I have (I definitely need the old version)?

    Read the article

  • How to render a Partial from a Model in Rails 2.3.5

    - by empire29
    I have a Rails 2.3.5 application and I'm trying to render several partials from within a model (I know, I know - I'm not supposed to). The reason I'm doing this is that I'm integrating a Comet server (APE) into my Rails app and need to push updates out based on the model's events (e.g. after_create). I have tried doing this:

        ActionView::Base.new(Rails::Configuration.new.view_path).render(:partial => "pages/show", :locals => {:page => self})

    which allows me to render simple partials that don't use helpers. However, if I try to use a link_to in my partial, I receive an error stating:

        undefined method `url_for' for nil:NilClass

    I've made sure that the object being passed into "project_path(project)" is not nil. I've also tried including:

        include ActionView::Helpers::UrlHelper
        include ActionController::UrlWriter

    in the module that contains the method that makes the above "render" call. Does anyone know how to work around this? Thanks

    Read the article

  • [C#] OnPaint events (invalidated) changing execution order after a period of normal operation (runtime)

    - by Luke Mcneice
    Hi all, I have 3 data graphs that are painted via their Paint events. When I have data that I need to insert into the graphs, I call the controls' Invalidate() command. The first control's Paint event actually creates a bitmap buffer for the other 2 graphs, to avoid repeating a long loop, so the Invalidate calls are in a specific order (1, 2, 3). This works well; however, when the graphed data reaches the end of the graph window (PictureBox), where the data would normally start scrolling, the paint events begin firing in the wrong order (2, 3, 1). Has anyone come across this before? Why might this be happening?
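
    One way to sidestep the ordering dependency (a suggestion, not from the original thread) is to rebuild the shared bitmap explicitly whenever new data arrives and have every Paint handler only blit from it; the Invalidate order then no longer matters. A rough C# sketch with hypothetical control names:

        using System;
        using System.Drawing;
        using System.Windows.Forms;

        public class GraphForm : Form
        {
            // Hypothetical stand-ins for the three graph windows.
            private readonly PictureBox graph1 = new PictureBox();
            private readonly PictureBox graph2 = new PictureBox();
            private readonly PictureBox graph3 = new PictureBox();

            private Bitmap sharedBuffer; // rebuilt once per data update, read by every Paint handler

            public GraphForm()
            {
                graph1.Paint += DrawFromBuffer;
                graph2.Paint += DrawFromBuffer;
                graph3.Paint += DrawFromBuffer;
                Controls.AddRange(new Control[] { graph1, graph2, graph3 });
            }

            private void OnNewData()
            {
                RebuildBuffer();     // the long loop runs exactly once, outside any Paint event
                graph1.Invalidate();
                graph2.Invalidate(); // invalidation order is now irrelevant
                graph3.Invalidate();
            }

            private void RebuildBuffer()
            {
                if (sharedBuffer != null) sharedBuffer.Dispose();
                sharedBuffer = new Bitmap(Math.Max(1, graph1.Width), Math.Max(1, graph1.Height));
                using (Graphics g = Graphics.FromImage(sharedBuffer))
                {
                    // ... the long data-plotting loop draws onto g here ...
                }
            }

            private void DrawFromBuffer(object sender, PaintEventArgs e)
            {
                if (sharedBuffer != null)
                    e.Graphics.DrawImageUnscaled(sharedBuffer, 0, 0);
            }
        }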

    Read the article

  • @OneToMany property null in Entity after (second) merge

    - by iNPUTmice
    Hi, I'm using JPA (with Hibernate) and Gilead in a GWT project. On the server side I have this method, and I'm calling it twice with the same "campaign". On the second call it throws a NullPointerException in line 4, "campaign.getTextAds()":

        public List<WrapperTextAd> getTextAds(WrapperCampaign campaign) {
            campaign = em.merge(campaign);
            System.out.println("getting textads for " + campaign.getName());
            for(WrapperTextAd textad: campaign.getTextAds()) {
                // do nothing
            }
            return new ArrayList<WrapperTextAd>(campaign.getTextAds());
        }

    The code in the WrapperCampaign entity looks like this:

        @OneToMany(mappedBy="campaign")
        public Set<WrapperTextAd> getTextAds() {
            return this.textads;
        }

    Read the article

  • How can I use ClearCanvas with a remote database?

    - by programmerist
    How can I get data from a REMOTE database using the OnStart method?

        protected override int OnStart(StudyLoaderArgs studyLoaderArgs)
        {
            ApplicationEntity ae = studyLoaderArgs.Server as ApplicationEntity;
            _ae = ae;

            EventResult result = EventResult.Success;
            AuditedInstances loadedInstances = new AuditedInstances();
            try
            {
                XmlDocument doc = RetrieveHeaderXml(studyLoaderArgs);
                StudyXml studyXml = new StudyXml();
                studyXml.SetMemento(doc);

                _instances = GetInstances(studyXml).GetEnumerator();

                loadedInstances.AddInstance(studyXml.PatientId, studyXml.PatientsName, studyXml.StudyInstanceUid);

                return studyXml.NumberOfStudyRelatedInstances;
            }
            finally
            {
                AuditHelper.LogOpenStudies(new string[] { ae.AETitle }, loadedInstances, EventSource.CurrentUser, result);
            }
        }

    I need to use OnStart in the main project. How can I use or call the OnStart method?

    Read the article
