Search Results

Search found 19055 results on 763 pages for 'high performance'.

Page 646/763

  • Speed up bitstring/bit operations in Python?

    - by Xavier Ho
    I wrote a prime number generator using the Sieve of Eratosthenes and Python 3.1. The code runs correctly and gracefully, taking 0.32 seconds on ideone.com to generate prime numbers up to 1,000,000.

        # from bitstring import BitString

        def prime_numbers(limit=1000000):
            '''Prime number generator. Yields the series
            2, 3, 5, 7, 11, 13, 17, 19, 23, 29 ...
            using Sieve of Eratosthenes.
            '''
            yield 2
            sub_limit = int(limit**0.5)
            flags = [False, False] + [True] * (limit - 2)
            # flags = BitString(limit)
            # Step through all the odd numbers
            for i in range(3, limit, 2):
                if flags[i] is False:
                # if flags[i] is True:
                    continue
                yield i
                # Exclude further multiples of the current prime number
                if i <= sub_limit:
                    for j in range(i*3, limit, i<<1):
                        flags[j] = False
                        # flags[j] = True

    The problem is, I run out of memory when I try to generate numbers up to 1,000,000,000:

        flags = [False, False] + [True] * (limit - 2)
        MemoryError

    As you can imagine, allocating 1 billion boolean values (4 or 8 bytes each as Python list elements, not 1 byte) is really not feasible, so I looked into bitstring. I figured that using 1 bit for each flag would be much more memory-efficient. However, the program's performance dropped drastically: 24 seconds runtime for primes up to 1,000,000. This is probably due to the internal implementation of bitstring. You can comment/uncomment the three lines to see what I changed to use BitString in the code snippet above. My question is: is there a way to speed up my program, with or without bitstring?
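
    A common way to cut the memory cost without bitstring is to back the flags with a bytearray: one byte per flag with C-speed indexing, where slice assignment marks all multiples of a prime in a single C-level pass. A minimal sketch of that idea (not from the original post; here 0 means "still a candidate"):

        def prime_numbers_bytearray(limit=1000000):
            '''Yield primes below limit, with flags held in a bytearray.'''
            if limit > 2:
                yield 2
            flags = bytearray(limit)           # all zero = all candidates
            sub_limit = int(limit ** 0.5)
            for i in range(3, limit, 2):
                if flags[i]:                   # already marked composite
                    continue
                yield i
                if i <= sub_limit:
                    step = 2 * i
                    # mark every odd multiple of i in one slice assignment
                    flags[i*i::step] = b'\x01' * len(range(i*i, limit, step))

    A bytearray of a billion flags is still about 1 GB, so for 1,000,000,000 the usual next step is a segmented sieve or a real bit array packing 8 flags per byte.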

    Read the article

  • To Ajax or Not to Ajax a listing page

    - by kaivalya
    Here I am talking about product listing pages where there are multiple filters that narrow the list of products appearing on the page: product type, category, price range, etc. I have built such pages both with and without Ajax in the past. What I like about using Ajax on such a page is that when filters are selected I only update the section that contains the product list. There is no need to refresh the whole page, which could end up re-loading the images in the top bar, banners, etc. and slow things down for the user. The Ajax approach, in my opinion, is more compact and responsive from a user-experience standpoint. The downside of the Ajax route for me is that, since filter states are not maintained in the URL, I end up maintaining them on the server. This becomes complicated if I want to handle multi-window scenarios, and it is also costly to maintain such state in server memory for each session. Not using Ajax and simply keeping all filter values in the URL and refreshing the page is quite simple, but the luxury of refreshing only the pane that really needs to be refreshed is lost. Lately I have seen a lot of large-scale e-commerce sites using the non-Ajax approach on their listing pages, and this is making me question one more time whether it might be more efficient to build a non-Ajax listing page for the sake of long-term maintenance ease, sacrificing a little user experience. I am about to start implementing a new listing page for a product, I have the flexibility to go either way, and I would appreciate your input.
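
    To make the "filters in the URL" half of the trade-off concrete, here is a minimal sketch (assuming a Python/Flask backend, not from the original post) where the query string alone reproduces the view, so nothing is held in session memory and multiple windows just work:

        from flask import Flask, jsonify, request

        app = Flask(__name__)

        @app.route("/products")
        def product_list():
            # Every filter lives in the query string, so the URL alone
            # reproduces the view: back/forward, bookmarks and multiple
            # windows all work without any server-side session state.
            filters = {
                "type": request.args.get("type"),
                "category": request.args.get("category"),
                "min_price": request.args.get("min_price", type=float),
                "max_price": request.args.get("max_price", type=float),
            }
            active = {k: v for k, v in filters.items() if v is not None}
            # A real page would render a template with the filtered products;
            # echoing the parsed filters keeps the sketch self-contained.
            return jsonify(active)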

    Read the article

  • MVC ASP.NET, ObjectContext and Ajax. Weird Behaviour

    - by fabianadipolo
    Hi, I've been creating a web application in ASP.NET MVC. I have three different projects/solutions. One solution contains the model in EF (the DAL) and all the methods to add, update, delete and query the objects in the model; the ObjectContext is managed here on a per-request basis. Another solution contains a content management system in which authorized users insert, delete, update and access objects through the DAL mentioned before. And the last solution contains the web page that is accessed by all users and where the only operations executed are selects - no updates, inserts or deletes here. All the selects are executed against the DAL mentioned before (the first solution). The problem here is that I'm not sure whether an ObjectContext with an HttpContext lifespan is the best solution. I have a lot of Ajax calls in my web app and I'm not sure whether an HttpContext could interfere with the performance of the application. I've noticed that in some cases, especially when someone is working in the content manager inserting, updating or deleting, when you try to click on any link of the user web application (the web app that is accessed by any user - the third one I mentioned before) the web page freezes and remains stuck transferring data. In order to stop that behaviour you have to stop and refresh, or click several times on the link. Excuse my bad English. I hope you can understand and help me to solve this issue. Thanks in advance.

    Read the article

  • General question about DirectShow.NET, DirectShow and Windows Media Format

    - by Paul Andrews
    I searched and googled for an answer but couldn't find one. Basically I'm developing a webcam/audio streaming application which should capture audio and video from a PC (USB webcam/microphone) and send them to a receiving server. What the server will do with them is another story and phase two (which I'm skipping for now). I wrote some code using DirectShow and Windows Media Format and it worked great for capturing audio/video and sending them to another client, but there's a major problem: latency. Everywhere on the internet everyone gave me the same answer: "sorry dude, but Media Format isn't for video conferencing; its codecs have too high latency". I thought I could work around the .wmv problems, but it seems that's not possible... this road ends here, then. So I saw a few examples with DirectShow.NET which were faster for both audio and video. My question is: how come DirectShow.NET is faster and better for video/audio conferencing? Shouldn't it be just a .NET port of C++'s DirectShow? Am I missing something? I'm a bit confused at this point.

    Read the article

  • Agile and Scrum burning me down - please help me figure out the truth

    - by jadook
    Hi all, a while ago I installed MS-TFS 2008 and then started to get myself prepared to use the Agile Process Guidance template shipped with TFS. With a little googling I went through Mike Cohn's materials: I watched his Google-sponsored conference talks on YouTube (http://www.youtube.com/watch?v=fb9Rzyi8b90 and http://www.youtube.com/watch?v=jeT0pOVg0EI), read his book "Agile Estimating and Planning", and watched the video series on his website: http://www.mountaingoatsoftware.com/presentations-tag/video-recorded. I was very happy absorbing the techniques he uses with teams, and how great agile and scrum are as a software process/methodology - until I saw Mike answering a question regarding the architect role and talking about the requirements document. At that point everything started falling apart, due to the following: last year I was assigned to do the full analysis, including requirements gathering, for a big, very high priority project. Within 2 months of hard work, dedication and commitment I delivered the whole analysis to the full satisfaction of the customer and my BOSS, with ZERO amendments. Later on, the project entered the architecture and development phases. Because the system included many competitive and exciting features, I requested patenting it, and that is now in progress. So imagine you are the kind of person who loves facing all kinds of challenges and returning with excellent experience and results for the stakeholders and yourself: how fairly will agile and scrum processes credit and acknowledge your talent and passion, when the scrum master/coach treats the team as one unit that accomplishes user stories and converges through a trial-and-error approach? With those dark thoughts about agile and scrum I found many "anti-agile" people, chief among them "Crispin Rogers Johnson": http://agile-crispin.blogspot.com/. That guy makes a counter-statement to everything Mike Cohn talks about. I really don't know what to do next, so any guidance will be appreciated. Thanks,

    Read the article

  • how to define fill colours in ggplot histogram?

    - by Andreas
    I have the following simple data:

        data <- structure(list(
            status = c(9, 5, 9, 10, 11, 10, 8, 6, 6, 7, 10, 10, 7, 11, 11, 7, NA, 9, 11, 9, 10, 8, 9, 10, 7, 11, 9, 10, 9, 9, 8, 9, 11, 9, 11, 7, 8, 6, 11, 10, 9, 11, 11, 10, 11, 10, 9, 11, 7, 8, 8, 9, 4, 11, 11, 8, 7, 7, 11, 11, 11, 6, 7, 11, 6, 10, 10, 9, 10, 10, 8, 8, 10, 4, 8, 5, 8, 7),
            statusgruppe = c(0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1, 0, NA, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 0)),
            .Names = c("status", "statusgruppe"), class = "data.frame",
            row.names = c(NA, -78L))

    From that I'd like to make a histogram:

        ggplot(data, aes(status)) +
            geom_histogram(aes(y = ..density..), binwidth = 1,
                           colour = "black", fill = "white") +
            theme_bw() +
            scale_x_continuous("Status",
                               breaks = c(min(data$status, na.rm = T),
                                          median(data$status, na.rm = T),
                                          max(data$status, na.rm = T)),
                               labels = c("Low", "Middle", "High")) +
            scale_y_continuous("Percent", formatter = "percent")

    Now I'd like the bins to take colour according to value - e.g. bins with value 9 get dark grey, everything else light grey. I have tried "fill=statusgruppe", scale_fill_grey(breaks=9), etc., but I can't get it to work. Any ideas?

    Read the article

  • highcharts correct json input

    - by Linus
    I am trying to make a basic column chart. I have looked at the examples but am not sure why I do not see any graph. I can see the title and subtitle appear, and there are no JavaScript errors in Firebug. Any help please?

        $(function () {
            var chart;
            $(document).ready(function() {
                chart = new Highcharts.Chart({
                    chart: {
                        renderTo: 'container',
                        type: 'column',
                        events: { load: requestData }
                    },
                    title: { text: 'Some title' },
                    subtitle: { text: 'subtitle' },
                    xAxis: { categories: [], title: { text: null } },
                    yAxis: { min: 0, title: { text: 'y-Axis', align: 'high' } },
                    tooltip: {
                        formatter: function() {
                            return '' + this.series.name + ': ' + this.y + ' ';
                        }
                    },
                    plotOptions: { bar: { dataLabels: { enabled: true } } },
                    legend: {
                        layout: 'vertical', align: 'right', verticalAlign: 'top',
                        x: -100, y: 100, floating: true, borderWidth: 1,
                        backgroundColor: '#FFFFFF', shadow: true
                    },
                    credits: { enabled: false },
                    series: []
                });
            });

            function requestData() {
                $.ajax({
                    url: 'test.json',
                    success: function(data) {
                        options.series[0].push(data);
                        chart.redraw();
                    },
                    cache: false
                });
            }
        });

    My JSON input file is below:

        [
            { name: 'name1', y: [32.6,16.6,1.5] },
            { name: 'name2', y: [6.7,0.2,0.6] },
            { name: 'name3', y: [1,3.7,0.7] },
            { name: 'name4', y: [20.3,8.8,9.5] },
            { name: 'name5', y: [21.5,10,7.2] },
            { name: 'name6', y: [1.4,1.8,3.7] },
            { name: 'name7', y: [8.1,0,0] },
            { name: 'name8', y: [28.9,8.9,6.6] }
        ]

    Read the article

  • Setting corelocation results to NSNumber object parameters

    - by Dan Ray
    This is a weird one, y'all.

        - (void)locationManager:(CLLocationManager *)manager
            didUpdateToLocation:(CLLocation *)newLocation
                   fromLocation:(CLLocation *)oldLocation
        {
            CLLocationCoordinate2D coordinate = newLocation.coordinate;
            self.mark.longitude = [NSNumber numberWithDouble:coordinate.longitude];
            self.mark.latitude = [NSNumber numberWithDouble:coordinate.latitude];
            NSLog(@"Got %f %f, set %f %f", coordinate.latitude, coordinate.longitude,
                  self.mark.latitude, self.mark.longitude);
            [manager stopUpdatingLocation];
            manager.delegate = nil;
            if (self.waitingForLocation) {
                [self completeUpload];
            }
        }

    The latitude and longitude in that "mark" object are synthesized properties referring to NSNumber ivars. In the simulator, my NSLog output for that line in the middle reads:

        2010-05-28 15:08:46.938 EverWondr[8375:207] Got 37.331689 -122.030731, set 0.000000 -44213283338325225829852024986561881455984640.000000

    That's a WHOLE lot further east than 1 Infinite Loop! The numbers are different on the device, but similar - lat is still zero and long is a very unlikely large negative number. Elsewhere in the controller I'm accepting a button press and uploading a file (an image I just took with the camera) with its geocoding info associated, and I need that self.waitingForLocation to inform the CLLocationManager delegate that I already hit that button, and that once it's done its deal it should go ahead and fire off the upload. Thing is, up in the button-click-receiving method I test whether CL has finished by checking self.mark.latitude, which seems to be getting set to zero...

    Read the article

  • Microsoft Reporting 2005 and Report Viewer Report ASP.Net Session Has Expired on Load

    - by ThaKidd
    At my job I have been tasked with fixing an error on our reporting server. That error is "ASP.Net Session Has Expired". It occurs when the Visual Studio ReportViewer 2005 control attempts to load a report. We are trying to serve this report to users hitting our Internet-exposed Windows 2003 server running IIS 6.0. The ReportViewer control is attempting to load the report from a second server running Microsoft SQL 2005 with Reporting Services. The SQL server is not exposed to the Internet. Here is the weird thing: this error never occurs on the development box. When the application is transferred to the production IIS server, the error starts to occur. It happens every time the report is first loaded; if the browser's refresh button is clicked 5-10 times, the report will finally load correctly. I have reproduced the same error on the latest version of Mozilla Firefox, IE 7 and IE 8. The report only takes 10-20 seconds to load. I have tried timeouts in the 300+ second range on the reporting server and the IIS production server. I have tried a few options like Async (which causes images not to load properly) and setting the session mode to InProc with a high timeout value in the Reporting Server's web.config. I have also tried using the reporting server's IP address in the ReportViewer's code instead of the server name. Tomorrow, when I get into work, I also plan on verifying a picture-loading issue I read about. I am unsure what service packs Visual Studio 2005 and the MSSQL server are running. Was an update released to fix this problem that I could not find? Does anyone have a fix for this?

    Read the article

  • How to prepopulate an Outlook MailItem and avoid a COM exception from the object model guard

    - by torrente
    Hi, I work for a company that develops a CRM tool and offers integration with MS Office (2003 & 2007) from Windows XP to 7 (I'm working on Win7). My task is to launch an Outlook instance (using C#) from this CRM tool when the user wants to send an email, and prepopulate it with data from the CRM tool (email, recipient, etc.). All of this already works just fine. The problem I'm having is that Outlook's "object model guard" throws a COM exception (Operation aborted (Exception from HRESULT: 0x80004004 (E_ABORT))) the moment I try to read a protected value from the MailItem (such as mail.HTMLBody). Example snippet:

        using MSOutlook = Microsoft.Office.Interop.Outlook;

        // untrusted instance
        _outlook = new MSOutlook.Application();
        MSOutlook.MailItem mail = (MSOutlook.MailItem)_outlook.CreateItem(MSOutlook.OlItemType.olMailItem);

        // this is where the exception occurs
        string outlookStdHTMLBody = mail.HTMLBody;

    I've done quite a bit of reading and know that my Outlook instance (created via new Application) is considered untrusted, and therefore the object model guard kicks in. I do have a workaround for development: if I run VS2010 as administrator and Outlook as administrator as well, all is good. I suppose this is because they then have the same (high) integrity level and UAC does not complain. But that just ain't the way to go for deployment. Now the question is: is there a way to obtain a trusted instance of Outlook so that I can avoid this exception? I've already read that when developing an Office add-in using VSTO one can obtain a trusted instance from the OnComplete event and/or using "ThisAddin", but I "merely" want to start an Outlook instance and prepopulate it; I do not want to develop an add-in, since that is not the requirement. And to make it clear: I have no problem with pop-ups informing the user that Outlook is being accessed - I just want to get rid of the exception! So how can I get around this problem in code? Any help is highly appreciated! Thomas

    Read the article

  • Best way to pan & scan images with jquery

    - by guy
    Hello, I am trying to make an image pan & scan system. I have a slider that zooms the image (which can be dragged) and also a small map in the corner of the image (which can also be dragged). You can see a rough example here (sorry, I am not allowed to use the design, so it's not formatted): http://lighe.madetokill.com/test/test.html My problem is that although it works great in Firefox and Opera, it stutters in Chrome, Safari and IE (it doesn't currently work in IE) at any zoom level except 100% (at 100% it's butter smooth). What is the reason for WebKit's poor performance? Am I implementing this wrong? I am basically changing the margin-left and margin-top properties of the image. I know this approach is fast enough, since at 100% it's perfectly smooth. Would I be better off using canvas? I am trying to avoid Flash (or any other plugin) if possible. Also please note this is work in progress; there are other bugs besides this one, so do not bother with those :) Thanks in advance!

    Read the article

  • RESTful idempotence

    - by DutrowLLC
    I'm designing a RESTful web service utilizing ROA (resource-oriented architecture). I'm trying to work out an efficient way to guarantee idempotence for PUT requests that create new resources, in cases where the server designates the resource key. From my understanding, the traditional approach is to create a type of transaction resource such as /CREATE_PERSON. The client-server interaction for creating a new person resource would then be in two parts:

    Step 1: get a unique transaction id for creating the new PERSON resource:

        Client request:
            GET /CREATE_PERSON

        Server response:
            200 OK
            transaction-id: "as8yfasiob"

    Step 2: create the new person resource in a request guaranteed to be unique by using the transaction id:

        Client request:
            PUT /CREATE_PERSON/{transaction_id}
            first_name="Big bubba"

        Server response:
            201 Created
            PersonKey="398u4nsdf"

    (If the second request is a duplicate, the server would send this same response without creating a new resource. It would perhaps send an error response if the transaction id were reused on a non-duplicate request, but I have control over the client, so I can guarantee that this won't happen.)

    The problem I see with this approach is that it requires sending two requests to the server in order to do the single operation of creating a new PERSON resource. This creates a performance issue, increasing the chance that the user will be left waiting around while the client completes the request. I've been trying to hash out ideas for eliminating the first step, such as pre-sending transaction ids with each request, but most of my ideas have other issues or involve sacrificing the statelessness of the application. Is there a way to do this?
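
    To make the second step concrete: the server remembers which transaction ids it has already consumed and replays the stored response for duplicates. A minimal sketch (assuming Flask and an in-memory store; the names are illustrative, not from the original post):

        import uuid
        from flask import Flask, jsonify, request

        app = Flask(__name__)
        completed = {}   # transaction_id -> response body already sent once
        people = {}      # person_key -> attributes (stands in for the real store)

        @app.route("/CREATE_PERSON", methods=["GET"])
        def new_transaction():
            # Step 1: hand out a one-time transaction id.
            return jsonify({"transaction_id": uuid.uuid4().hex})

        @app.route("/CREATE_PERSON/<transaction_id>", methods=["PUT"])
        def create_person(transaction_id):
            # Step 2: a retried PUT with the same id replays the stored
            # response instead of creating a second person.
            if transaction_id not in completed:
                person_key = uuid.uuid4().hex
                people[person_key] = {"first_name": request.form["first_name"]}
                completed[transaction_id] = {"PersonKey": person_key}
            return jsonify(completed[transaction_id]), 201

    A common variation that removes the extra round trip is to let the client generate the token itself (e.g. a UUID), which is what the "pre-sending transaction ids" idea above amounts to.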

    Read the article

  • Use Java exceptions internally for REST API user errors?

    - by user303396
    We have a REST API that works great. We're refactoring and deciding how to internally handle errors made by users of our API. For example, the user needs to specify the "movie" url parameter, which should take the value of "1984", "Crash", or "Avatar". First we check to see if it has a valid value. What would be the best approach if the movie parameter is invalid? Return null from one of the internal methods and check for the null in the main API call method, or throw an exception from the internal method and catch exceptions in the main API method? I think it would make our code more readable and elegant to use exceptions. However, we're reluctant, because we'd potentially be throwing many exceptions purely because of user API input errors, even when our own code is working perfectly. This doesn't seem like the proper use of exceptions. And if there are heavy performance penalties with exceptions - which would make sense, with stack traces needing to be collected, etc. - then we're unnecessarily spending resources when all we need to do is tell the user the parameter is wrong. These are REST API methods, so we're not propagating the exceptions to the users of the API, nor would we want to even if we could. So what's the best practice here? Use ugly nulls, or use Java's exception mechanism?
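
    For illustration, here is the exception route sketched in Python (the question is about Java, but the shape is the same): a domain-specific exception raised deep inside validation and translated to an API error response in exactly one place at the top.

        class InvalidParameterError(Exception):
            """Raised when an API caller supplies a bad parameter value."""
            def __init__(self, param, value):
                super().__init__(f"invalid value {value!r} for parameter {param!r}")
                self.param, self.value = param, value

        VALID_MOVIES = {"1984", "Crash", "Avatar"}

        def resolve_movie(value):
            # internal method: raises instead of returning None
            if value not in VALID_MOVIES:
                raise InvalidParameterError("movie", value)
            return value

        def handle_request(params):
            # main API method: the single place mapping exceptions to responses
            try:
                movie = resolve_movie(params.get("movie"))
                return {"status": 200, "movie": movie}
            except InvalidParameterError as e:
                return {"status": 400, "error": str(e)}

    In Java specifically, a common mitigation for the stack-trace cost is an exception type that overrides fillInStackTrace() to do nothing, since the trace is never shown to the API user anyway.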

    Read the article

  • What is the best way to send structs containing enum values via sockets in C?

    - by Axel
    I have lots of different structs containing enum members that I have to transmit via TCP/IP. The communication endpoints are on different operating systems (Windows XP and Linux), meaning different compilers (gcc 4.x.x and MSVC 2008), but both program parts share the same header files with type declarations. For performance reasons, the structures should be transmitted directly (see the code sample below) without expensively serializing or streaming the members inside. So the question is how to ensure that both compilers use the same internal memory representation for the enumeration members (i.e. both use 32-bit unsigned integers) - or whether there is a better way to solve this problem...

        // type and enum declarations
        typedef enum {
            A = 1,
            B = 2,
            C = 3
        } eParameter;

        typedef enum {
            READY    = 400,
            RUNNING  = 401,
            BLOCKED  = 402,
            FINISHED = 403
        } eState;

        #pragma pack(push,1)
        typedef struct {
            eParameter mParameter;
            eState     mState;
            int32_t    miSomeValue;
            uint8_t    miAnotherValue;
            ...
        } tStateMessage;
        #pragma pack(pop)

        // ... send via socket
        tStateMessage msg;
        send(iSocketFD, (void*)(&msg), sizeof(tStateMessage), 0);

        // ... receive the message on the other side
        tStateMessage msg_received;
        recv(iSocketFD, (void*)(&msg_received), sizeof(tStateMessage), 0);

    Additionally: since both endpoints are little-endian machines, endianness is not a problem here, and the pack #pragma solves alignment issues satisfactorily. Thx for your answers, Axel
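
    For contrast, explicit serialization pins the wire format down independently of any compiler's enum representation. A minimal sketch of that idea in Python (the original is C; the field layout follows the struct above, and the exact format string is an assumption):

        import struct

        # '<' = little-endian, no padding; 'IIiB' = two uint32 enum fields,
        # one int32, one uint8 -- a fixed 13-byte wire format.
        STATE_MESSAGE = struct.Struct("<IIiB")

        def pack_state_message(parameter, state, some_value, another_value):
            return STATE_MESSAGE.pack(parameter, state, some_value, another_value)

        def unpack_state_message(payload):
            return STATE_MESSAGE.unpack(payload)

        msg = pack_state_message(1, 401, -7, 42)   # A, RUNNING, ...
        assert unpack_state_message(msg) == (1, 401, -7, 42)

    In the C code itself, the usual equivalent is to declare the struct members as int32_t/uint32_t and assign the enum values into them, so the width no longer depends on the compiler.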

    Read the article

  • Request for the permission of type 'System.Data.Odbc.OdbcPermission.. help needed

    - by Matt
    I'm getting the following error when trying to connect to a remote MySQL server: Request for the permission of type 'System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. I've installed the ODBC 5.1 driver, and I can connect to the database using the Data Sources (ODBC) tool in Control Panel. However, when I try to run my C# code to connect, I get the above error. I've read it's something to do with trust levels, but I don't quite understand what people were talking about. I went to C:... Framework/v2.0.50727/CONFIG and added the permission to the medium and high trust.config files, but that didn't help. Can someone help me out here please? My connection string is:

        MyConString = "DRIVER={MySQL ODBC 5.1 Driver};" +
                      "SERVER=" + m_strHost + ";" +
                      "PORT=3306;" +
                      "DATABASE=" + m_strDatabase + ";" +
                      "UID=" + m_strUserName + ";" +
                      "PWD=" + m_strPassword + ";" +
                      "OPTION=3;";

    Read the article

  • cellForRowAtIndexPath: crashes when trying to access the indexPath.row/[indexPath row]

    - by Emil
    Hey. I am loading data into a UITableView from a custom UITableViewCell (its own class and nib). It works great until I try to access indexPath.row/[indexPath row]. I'll post my code first; it will probably be easier for you to understand what I mean then.

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            static NSString *CellIdentifier = @"CustomCell";
            CustomCell *cell = (CustomCell *)[tableView dequeueReusableCellWithIdentifier:CellIdentifier];
            if (cell == nil) {
                NSArray *topLevelObjects = [[NSBundle mainBundle] loadNibNamed:@"CustomCell" owner:self options:nil];
                for (id currentObject in topLevelObjects) {
                    if ([currentObject isKindOfClass:[CustomCell class]]) {
                        cell = (CustomCell *)currentObject;
                        break;
                    }
                }
            }

            NSLog(@"%@", indexPath.row);

            // Configure the cell...
            NSUInteger row = indexPath.row;
            cell.titleLabel.text = [postsArrayTitle objectAtIndex:indexPath.row];
            cell.dateLabel.text = [postsArrayDate objectAtIndex:indexPath.row];
            cell.cellImage.image = [UIImage imageWithContentsOfFile:[postsArrayImg objectAtIndex:indexPath.row]];

            NSLog(@"%@", indexPath.row);
            return cell;
        }

    The odd thing is, it does work when it loads the first three cells (they are 125px high), but it crashes when I try to scroll down. The indexPath is always available, but not indexPath.row, and I have no idea why! Please help me. Thanks. Additional error output:

        *** Terminating app due to uncaught exception 'NSInvalidArgumentException',
        reason: '-[NSCFString objectAtIndex:]: unrecognized selector sent to instance 0x5d10c20'
        *** Call stack at first throw:
        (
            0   CoreFoundation      0x0239fc99 __exceptionPreprocess + 185
            1   libobjc.A.dylib     0x024ed5de objc_exception_throw + 47
            2   CoreFoundation      0x023a17ab -[NSObject(NSObject) doesNotRecognizeSelector:] + 187
            3   CoreFoundation      0x02311496 ___forwarding___ + 966
            4   CoreFoundation      0x02311052 _CF_forwarding_prep_0 + 50
            5   MyFancyAppName      0x00002d1c -[HomeTableViewController tableView:cellForRowAtIndexPath:] + 516
            [...]
        )

    Read the article

  • Stack and Hash joint

    - by Alexandru
    I'm trying to write a data structure which is a combination of Stack and HashSet with fast push/pop/membership (I'm looking for constant-time operations). Think of Python's OrderedDict. I tried a few things and I came up with the following code: HashInt and SetInt. I need to add some documentation to the source, but basically I use a hash with linear probing to store indices into a vector of the keys. Since linear probing always puts the last element at the end of a continuous range of already-filled cells, pop() can be implemented very easily without a sophisticated remove operation. I have the following problems: the data structure consumes a lot of memory (some improvement is obvious: stackKeys is larger than needed), and some operations are slower than if I had used fastutil (e.g. pop(), and even push() in some scenarios). I tried rewriting the classes using fastutil and trove4j, but the overall speed of my application halved. What performance improvements would you suggest for my code? What open-source library/code do you know that I can try?
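
    For comparison, the same push/pop/membership contract can be sketched in a few lines of Python on top of a plain dict, whose insertion order is guaranteed in 3.7+ - essentially the OrderedDict idea mentioned above. All three operations are amortized O(1). A sketch, not a drop-in replacement for the linked Java code:

        class StackSet:
            """A stack with O(1) membership tests; each value appears once."""

            def __init__(self):
                self._items = {}  # insertion-ordered; the values are unused

            def push(self, value):
                if value in self._items:
                    raise ValueError(f"{value!r} already present")
                self._items[value] = None

            def pop(self):
                # dict.popitem() removes the most recently inserted key
                # (LIFO order guaranteed since Python 3.7), i.e. the top
                value, _ = self._items.popitem()
                return value

            def __contains__(self, value):
                return value in self._items

            def __len__(self):
                return len(self._items)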

    Read the article

  • WordPress > Optimizing a query to show recent posts with a "View All" link when postcount exceeds max

    - by Scott B
    I have a setting in my theme that allows the site owner to set the maximum number of posts ($maxPosts) to display in a "Recent Posts" menu. I'm using a custom script to generate the recent posts (because the Recent Posts widget does not highlight the current page, which I need for my CSS). My menu is also set up to display a "View All" link below the post listing, but only if the actual post count exceeds $maxPosts. I'm trying to work out the best method for getting the post count and comparing it to $maxPosts in order to determine whether or not to show the "View All" link. I'm sure there's probably a better way, but here's my code; I'm looking to optimize it to support very large post counts:

        $cat = get_cat_ID('excludeFromRecentPosts');
        $catHidden = get_cat_ID('hidden');
        $myquery = new WP_Query();
        $myquery->query(array(
            'cat' => "-$cat,-$catHidden",
            'post_not_in' => get_option('sticky_posts')
        ));
        $myrecentpostscount = $myquery->found_posts;
        if ($myrecentpostscount > 0) {
            // show the menu
            if ($myrecentpostscount > $maxPosts) {
                // show "View All" link
            }
        }

    I really only need to determine whether the total post count from the query is greater than the $maxPosts setting in order to decide whether to show the "View All" link. So I'm wondering if, in the case where there are thousands of posts matching the criteria, I can avoid performance issues by not counting all of them: I just need to count up to maxPosts + 1 (see the sketch below). That's where I'm struggling a bit, because the user could elect to set maxPosts = -1, which means they want to show all posts. But this would be impractical, so I would probably set an upper limit of 20...
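
    The "count only up to maxPosts + 1" idea generalizes to any paged query: fetch one row more than you display, and the presence of that extra row is the whole answer. A language-neutral sketch in Python (illustrative only, not WordPress API code):

        def recent_posts_menu(fetch_posts, max_posts):
            """fetch_posts(limit) is assumed to return up to `limit` posts."""
            if max_posts < 0:        # owner chose "show all"; cap it anyway
                max_posts = 20
            posts = fetch_posts(max_posts + 1)  # one extra row as a probe
            show_view_all = len(posts) > max_posts
            return posts[:max_posts], show_view_all

        # usage with a stand-in data source
        posts = list(range(100))
        menu, show_view_all = recent_posts_menu(lambda n: posts[:n], max_posts=10)
        assert show_view_all is True and len(menu) == 10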

    Read the article

  • C# - update variable based upon results from backgroundworker

    - by Bruce
    I've got a C# program that talks to an instrument (a spectrum analyzer) over a network. I need to be able to change a large number of parameters in the instrument and read them back into my program. I want to use BackgroundWorker to do the actual talking to the instrument so that UI performance doesn't suffer. The way this works is: 1) send a command to the instrument with the new parameter value, 2) read the parameter back from the instrument so I can see what actually happened (for example, if I try to set the center frequency above the maximum the instrument will handle, it tells me what it will actually handle), and 3) update a program variable with the actual value received from the instrument. Because there are quite a few parameters to be updated, I'd like to use a generic routine. The part I can't seem to get my brain around is updating the variable in my code with what comes back from the instrument via BackgroundWorker. If I used a separate RunWorkerCompleted event for each parameter, I could hardwire the update directly to the variable. I'd like to come up with a way of using a single routine that's capable of updating any of the variables. All I can come up with is passing a reference number (different for each parameter) and using a switch statement in the RunWorkerCompleted handler to direct the result. There has to be a better way; see the sketch below. Thanks for your help.
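
    The switch statement can usually be replaced by shipping a setter along with each work item, so the completion handler just invokes whatever callback came with the request. A minimal sketch of that idea in Python (the original is C#/BackgroundWorker; a C# delegate plays the role of the lambda here, and send_to_instrument is a stand-in for the real round trip):

        import queue
        import threading

        def send_to_instrument(command):
            # Stand-in for the real network round trip: the instrument
            # echoes back the value it actually accepted.
            return command

        work_queue = queue.Queue()

        def worker():
            while True:
                command, apply_result = work_queue.get()  # job carries its setter
                actual = send_to_instrument(command)
                apply_result(actual)   # no switch: the job knows its target
                work_queue.task_done()

        threading.Thread(target=worker, daemon=True).start()

        settings = {}
        # Each closure captures which program variable to update with the
        # value the instrument reports back.
        work_queue.put(("FREQ:CENT 2.4GHz", lambda v: settings.update(center_freq=v)))
        work_queue.put(("BAND 10MHz", lambda v: settings.update(bandwidth=v)))
        work_queue.join()

    In C# the same shape is a delegate stored in the work item and invoked from RunWorkerCompleted, which BackgroundWorker already raises on the UI thread.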

    Read the article

  • Limit the number of service calls in a RESTful application

    - by Slavo
    Imagine some kind of banking application with a screen to create accounts. Each Account has a Currency and a Bank as properties, Currency being a separate class, as is Bank. The code might look something like this:

        public class Account
        {
            public Currency Currency { get; set; }
            public Bank Bank { get; set; }
        }

        public class Currency
        {
            public string Code { get; set; }
            public string Name { get; set; }
        }

        public class Bank
        {
            public string Name { get; set; }
            public string Country { get; set; }
        }

    According to REST design principles, each resource in the application should have its own service, and each service should have methods that map nicely to the HTTP verbs. So in our case we have an AccountService, a CurrencyService and a BankService. On the screen for creating an account, we have some UI to select the bank from a list of banks and the currency from a list of currencies. Imagine it is a web application and those lists are dropdowns. This means that one dropdown is populated from the CurrencyService and one from the BankService. What this means is that when we open the screen for creating an account, we need to make two service calls to two different services. If that screen is not alone on a page, there might be more service calls from the same page, impacting performance. Is this normal in such an application? If not, how can it be avoided? How can the design be modified without moving away from REST?

    Read the article

  • Does replace into have a where clause?

    - by Lajos Arpad
    I'm writing an application using MySQL as the DBMS; we are downloading property offers and there were some performance issues. The old architecture looked like this: a property is updated. If the number of affected rows is not 1, then the update is not considered successful; otherwise the update query solves our problem. If the update was not successful and the number of affected rows is more than 1, we have duplicates and we delete all of them. After deleting duplicates if needed, if the update was not successful, an insert happens. This architecture was working well, but there were some speed issues, because properties are deleted if they have not been updated for 15 days. Theoretically the main problem is deleting properties, because some properties are alive for months and the indexes are very far from each other (we are talking about 500,000+ properties). Our host told me to use REPLACE INTO instead of deleting properties, and all deprecated properties should be marked as DEAD. I've done this, but problems started to occur because of a syntax error, and I couldn't find anywhere an example of REPLACE INTO with a WHERE clause (I'd like to replace a DEAD property with the new property, instead of deleting the old property and inserting a new one, to aid optimization). My query looked like this:

        REPLACE INTO table_name (column1, ..., columnn)
        VALUES (value1, ..., valuen)
        WHERE ID = idValue

    Of course, I calculated idValue and handled everything else, but I got a syntax error. I would like to know if I'm wrong and there is a WHERE clause for REPLACE INTO. I've found an alternative solution, which is even better than REPLACE INTO (simply using an UPDATE query), because deletes happen behind the curtains when you use REPLACE INTO; but I would like to know if I'm wrong when I say that REPLACE INTO doesn't have a WHERE clause. For more reference, see this link: http://dev.mysql.com/doc/refman/5.0/en/replace.html Thank you for your answers in advance, Lajos Árpád

    Read the article

  • Multiple sendto() using UDP socket

    - by ereOn
    Hi, I have network software which uses UDP to communicate with other instances of the same program. For various reasons I must use UDP here. I recently had problems sending huge amounts of data over UDP and had to implement a fragmentation system to split my messages into small data chunks. So far it has worked well, but now I encounter an issue when I have to send a lot of data chunks. I have the following algorithm: split the message into small data chunks (around 1500 bytes), then iterate over the list of chunks and send each one using sendto(). However, when I send a lot of data chunks, the receiver only gets the first 6 messages. Sometimes it misses the sixth and receives the seventh; it depends. In any case, sendto() always indicates success. This always happens when I test my software over a loopback interface (127.0.0.1) but never over my LAN. If I add something like std::cout << "test" << std::endl; between the sendto() calls, then every frame is received. I am aware that UDP allows packet loss and that my frames might be lost for a lot of reasons, and I suppose it has to do with the rate at which I am sending the data chunks. What would be the right approach here? Implementing an acknowledgement mechanism (just like TCP) seems overkill. Adding some arbitrary waiting time between the sendto() calls is ugly and will probably decrease performance. Increasing (if possible) the receiver's internal UDP buffer? I don't even know if that is possible. Something else? I really need your advice here. Thanks very much.
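
    Of the options listed, the receiver's socket buffer is the easiest to demonstrate: SO_RCVBUF is a standard socket option on both Windows and Linux. A minimal sketch in Python of the chunking loop plus an enlarged receive buffer (the original is C++; the 1500-byte chunk size follows the post):

        import socket

        CHUNK = 1500  # fragment size in bytes, per the post

        def send_chunks(payload, addr):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            for offset in range(0, len(payload), CHUNK):
                sock.sendto(payload[offset:offset + CHUNK], addr)

        def make_receiver(port, bufsize=4 * 1024 * 1024):
            sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            # Enlarge the kernel receive buffer so bursts of datagrams are
            # not dropped before the application gets to recvfrom() them.
            sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, bufsize)
            sock.bind(("", port))
            return sock

    On loopback the sender can outrun the receiver far more easily than on a real LAN, which would match the observed "first ~6 datagrams arrive" symptom; a larger SO_RCVBUF or light pacing usually confirms that diagnosis.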

    Read the article

  • Is there any danger in calling free() or delete instead of delete[]? [closed]

    - by Matt Joiner
    Possible Duplicate: (POD) freeing memory: is delete[] equal to delete? Does delete deallocate the elements beyond the first in an array?

        char *s = new char[n];
        delete s;

    Does it matter in the above case, seeing as all the elements of s are allocated contiguously and it shouldn't be possible to delete only a portion of the array? For more complex types, would delete call the destructor of objects beyond the first one?

        Object *p = new Object[n];
        delete p;

    How can delete[] deduce the number of Objects beyond the first? Wouldn't this mean it must know the size of the allocated memory region? What if the memory region was allocated with some overhang for performance reasons? For example, one could assume that not all allocators would provide a granularity of a single byte; any particular allocation could then exceed the required size of each element by a whole element or more. For primitive types such as char and int, is there any difference between:

        int *p = new int[n];
        delete p;
        delete[] p;
        free(p);

    except for the routes taken by the respective calls through the delete/free deallocation machinery?

    Read the article

  • What is your best-practice advice on implementing SQL stored procedures (in a C# winforms application)?

    - by JYelton
    I have read these very good questions on SO about SQL stored procedures: "When should you use stored procedures?" and "Are stored procedures more efficient, in general, than inline statements on modern RDBMSs?". I am a beginner at integrating .NET and SQL, though I have used basic SQL functionality for more than a decade in other environments. It's time to advance with regard to organization and deployment. I am using .NET C# 3.5, Visual Studio 2008 and SQL Server 2008, though this question can be regarded as language- and database-agnostic, meaning that it could easily apply to other environments that use stored procedures and a relational database. Given that I have an application with inline SQL queries, and I am interested in converting to stored procedures for organizational and performance purposes, what are your recommendations for doing so? Here are some additional questions in my mind related to this subject that may help shape the answers: Should I create the stored procedures in SQL using SQL Management Studio and simply re-create the database when it is installed for a client? Am I better off creating all of the stored procedures in my application, inside a database initialization method? It seems logical to assume that creating stored procedures must follow the creation of tables in a new installation. My database initialization method creates new tables and inserts some default data. My plan is to create stored procedures following that step, but I am beginning to think there might be a better way to set up a database from scratch (such as in the installer of the program). Thoughts on this are appreciated. I have a variety of queries throughout the application. Some queries are incredibly simple (SELECT id FROM table) and others are extremely long and complex, performing several joins and accepting approximately 80 parameters. Should I replace all queries with stored procedures, or only those that might benefit from doing so? Finally, as this topic obviously requires some research and education, can you recommend an article, book, or tutorial that covers the nuances of using stored procedures instead of direct statements?

    Read the article

  • jsf and huge web site

    - by darko petreski
    Hi All, I need to program a very large web site. The site should contain a public part with a search engine, user registration forms, a cart, and all the other usual stuff. The private part should contain an administrative web application with a lot of data grids, complex edit forms, security checks, services, and so on. I also need very good support for HTML mailing and PDF exports. Web services are also required. I am considering using JSF. Can you recommend books or other material where I can find all the information needed to build this? Is Seam a good option for it? I have looked at their site and the "Seam in production" page; the sites listed there are far from professional and enterprise-grade. Is there any book that explains how to build a complete web site with JSF technology - from login, security, sending mails, PDF conversion and time-dependent background processes to everything else that big sites need? I do not have time to read 20 books. Every PHP book explains all of this, but I have not seen a single Java book that, even at a high level, explains all of it. Regards

    Read the article
