Search Results

Search found 6231 results on 250 pages for 'slow diver'.

Page 213/250 | < Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >

  • Optimization in Python - do's, don'ts and rules of thumb.

    - by JV
    Well, I was reading this post and then came across this code:

        jokes = range(1000000)
        domain = [(0, (len(jokes)*2)-i-1) for i in range(0, len(jokes)*2)]

    I thought: wouldn't it be better to calculate the value of len(jokes) once, outside the list comprehension? Well, I tried it and timed three versions:

        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]'
        10000000 loops, best of 3: 0.0352 usec per loop
        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes);domain=[(0,(l*2)-i-1) for i in range(0,l*2)]'
        10000000 loops, best of 3: 0.0343 usec per loop
        jv@Pioneer:~$ python -m timeit -s 'jokes=range(1000000);l=len(jokes)*2;domain=[(0,l-i-1) for i in range(0,l)]'
        10000000 loops, best of 3: 0.0333 usec per loop

    Observing the marginal difference of 2.55% between the first and the second made me think: is the first list comprehension optimized internally by Python, or is 2.55% a big enough gain (given that len(jokes) is 1000000)? If so, what are the other implicit/internal optimizations in Python? And what are developers' rules of thumb for optimization in Python?

    Edit1: Since most of the answers are "don't optimize; do it later if it's slow", and I got some tips and links from Triptych and Ali A for the do's, I will change the question a bit and ask for don'ts. Can we have some experiences from people who faced the 'slowness' - what was the problem, and how was it corrected?

    Edit2: For those who haven't, here is an interesting read.

    Edit3: There is incorrect usage of timeit in the question; please see dF's answer for the correct usage, and hence the correct timings, for the three snippets.
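
    A hedged note on Edit3's point: in the commands above, the entire snippet is passed to -s, which is timeit's setup argument, so the timed statement is empty - that is why all three runs report ~0.03 usec regardless of the million-element work. A corrected invocation (a sketch, in the same Python 2-era style as the post) keeps only the setup in -s and times the comprehension itself:

        # setup runs once; the quoted statement at the end is what gets timed
        python -m timeit -s 'jokes=range(1000000)' \
            'domain=[(0,(len(jokes)*2)-i-1) for i in range(0,len(jokes)*2)]'
        python -m timeit -s 'jokes=range(1000000); l=len(jokes)*2' \
            'domain=[(0,l-i-1) for i in range(0,l)]'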

    Read the article

  • Reading numpy arrays outside of Python

    - by Abiel
    In a recent question I asked about the fastest way to convert a large numpy array to a delimited string. My reason for asking was that I wanted to take that plain-text string and transmit it (over HTTP, for instance) to clients written in other programming languages. A delimited string of numbers is obviously something that any client program can work with easily. However, it was suggested that because string conversion is slow, it would be faster on the Python side to base64-encode the array and send it as binary. This is indeed faster. My question now is: (1) how can I make sure my encoded numpy array will travel well to clients on different operating systems and different hardware, and (2) how do I decode the binary data on the client side? For (1), my inclination is to do something like the following:

        import numpy as np
        import base64
        x = np.arange(100, dtype=np.float64)
        base64.b64encode(x.tostring())

    Is there anything else I need to do? For (2), I would be happy to have an example in any programming language, where the goal is to take the numpy array of floats and turn them into a similar native data structure. Assume we have already done base64 decoding and have a byte array, and that we also know the numpy dtype, dimensions, and any other metadata which will be needed. Thanks.
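
    A minimal sketch of the round trip (an illustration, not from the post; the explicit little-endian dtype '<f8' is an added assumption so the bytes travel well across hardware, addressing (1)):

        import base64
        import numpy as np

        x = np.arange(100, dtype='<f8')          # fixed byte order, 8-byte floats
        payload = base64.b64encode(x.tobytes())  # tobytes() is the modern tostring()

        # client side (shown in Python; any language can do the equivalent)
        raw = base64.b64decode(payload)
        y = np.frombuffer(raw, dtype='<f8')      # dtype/shape assumed known, per (2)
        assert (x == y).all()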

    Read the article

  • Can I perform a search on a mail server in Java?

    - by twofivesevenzero
    I am trying to perform a search of my Gmail using Java. With JavaMail I can do a message-by-message search like so:

        Properties props = System.getProperties();
        props.setProperty("mail.store.protocol", "imaps");
        Session session = Session.getDefaultInstance(props, null);
        Store store = session.getStore("imaps");
        store.connect("imap.gmail.com", "myUsername", "myPassword");
        Folder inbox = store.getFolder("Inbox");
        inbox.open(Folder.READ_ONLY);
        SearchTerm term = new SearchTerm() {
            @Override
            public boolean match(Message mess) {
                try {
                    return mess.getContent().toString().toLowerCase().indexOf("boston") != -1;
                } catch (IOException ex) {
                    Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex);
                } catch (MessagingException ex) {
                    Logger.getLogger(JavaMailTest.class.getName()).log(Level.SEVERE, null, ex);
                }
                return false;
            }
        };
        Message[] searchResults = inbox.search(term);
        for (Message m : searchResults)
            System.out.println("MATCHED: " + m.getFrom()[0]);

    But this requires downloading each message. Of course I could cache the results, but that becomes a storage concern with large Gmail boxes and would also be very slow (I can only imagine how long it would take to search through gigabytes of text...). So my question is: is there a way of searching through mail on the server, a la Gmail's search field? Maybe through Microsoft Exchange? Hours of Googling has turned up nothing.
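
    IMAP itself supports server-side search, and JavaMail's stock terms (BodyTerm, for example) are typically translated into an IMAP SEARCH executed on the server; a custom anonymous SearchTerm like the one above generally cannot be translated, which is what forces the download. A sketch of the server-side idea using Python's imaplib (illustration only; the credentials are the question's placeholders):

        import imaplib

        conn = imaplib.IMAP4_SSL('imap.gmail.com')
        conn.login('myUsername', 'myPassword')
        conn.select('INBOX', readonly=True)
        # the SEARCH runs on the server; only matching sequence numbers come back
        status, data = conn.search(None, '(BODY "boston")')
        print(data[0].split())
        conn.logout()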

    Read the article

  • Looking for fast, minimal, preferably free disc cloning software [closed]

    - by Dave
    We have to test our application's installation and functionality on many Windows operating system versions and languages (XP, Vista, Win7; English, Spanish, Portuguese, etc.; 32-bit & 64-bit). While we can do much of this in virtual machines, we have noticed that VMs sometimes hide problems, or raise false bugs. So, we need to do "bare metal" OS installation for much of our testing. I have been using Acronis True Image for the past year, and am not impressed. It often gives random errors which require a reboot, and is really slow. For example, when trying to restore an image, it goes through a "Locking partition" cycle about three times (once after you click OK on each step of the wizard), each of which can take 5 minutes to complete. This all happens BEFORE it actually starts the image copy, which is sometimes quick (3-5 minutes), sometimes long (hours). The sizes of all of our images are roughly the same, so that is not the cause. So, anyway, I'm looking to switch to something else: I only need very basic functionality - just creating images of entire discs, and then restoring those images onto the exact same hard drive at a later date. That's it. I'm not opposed to paying for a good piece of software, but if there is something free out there that does the job well, that would be a preference. The OS on which the imaging software would run is Windows Vista, but bootable media (into a Linux flavor) would be fine also, as long as it's quick to use and reliable. Recommendations? (Also, moderators, if this should be a CW, I'll be happy to mark it as such; I'm unclear about the rules there.)

    Read the article

  • CakePHP - hasMany not fetching?

    - by Paolo Bergantino
    Maybe I am just having a slow day, but for the life of me I can't figure out why this is happening. I haven't done CakePHP in a while and I am trying to use the 1.3 version, but this doesn't seem to be working... I have two models:

    area.php

        <?php
        class Area extends AppModel {
            var $name = 'Area';
            var $useTable = 'OR_AREA';
            var $primaryKey = 'A_ID';
            var $belongsTo = array(
                'Building' => array(
                    'className' => 'Building',
                    'foreignKey' => 'FK_B_ID',
                ),
                'Facility' => array(
                    'className' => 'Facility',
                    'foreignKey' => 'FK_F_ID',
                ),
                'System' => array(
                    'className' => 'System',
                    'foreignKey' => 'FK_S_ID',
                )
            );
        }
        ?>

    building.php

        <?php
        class Building extends AppModel {
            var $name = 'Building';
            var $useTable = 'OR_BLDG';
            var $primaryKey = 'B_ID';
            var $hasMany = array(
                'Area' => array(
                    'className' => 'Area',
                    'foreignKey' => 'FK_B_ID',
                )
            );
        }
        ?>

    OR_AREA has a column titled FK_B_ID that refers to B_ID. If I run something like:

        $this->Building->find('all', array('recursive' => 2));

    I get empty [Area] arrays for all the Buildings, even though there are plenty of Areas in the OR_AREA table that are associated with buildings. Not only that, the query table doesn't even show that CakePHP attempted to find anything but the records in OR_BLDG. All the more puzzling, if I do:

        $this->Area->find('all');

    I get all the Areas, and the [Building] arrays are populated where appropriate. What am I missing?

    Read the article

  • Save memory in Python. How to iterate over the lines and save them efficiently with a 2 million line

    - by skyl
    I have a tab-separated data file with a little over 2 million lines and 19 columns. You can find it, in US.zip: http://download.geonames.org/export/dump/. I originally ran the following with for l in f.readlines(). I understand that just iterating over the file is supposed to be more efficient, so I'm posting that version below. Still, even with this small optimization, I'm using 10% of my memory on the process and have only done about 3% of the records. It looks like, at this pace, it will run out of memory like it did before. Also, the function I have is very slow. Is there anything obvious I can do to speed it up? Would it help to del the objects with each pass of the for loop?

        def run():
            from geonames.models import POI
            f = file('data/US.txt')
            for l in f:
                li = l.split('\t')
                try:
                    p = POI()
                    p.geonameid = li[0]
                    p.name = li[1]
                    p.asciiname = li[2]
                    p.alternatenames = li[3]
                    p.point = "POINT(%s %s)" % (li[5], li[4])
                    p.feature_class = li[6]
                    p.feature_code = li[7]
                    p.country_code = li[8]
                    p.ccs2 = li[9]
                    p.admin1_code = li[10]
                    p.admin2_code = li[11]
                    p.admin3_code = li[12]
                    p.admin4_code = li[13]
                    p.population = li[14]
                    p.elevation = li[15]
                    p.gtopo30 = li[16]
                    p.timezone = li[17]
                    p.modification_date = li[18]
                    p.save()
                except IndexError:
                    pass

        if __name__ == "__main__":
            run()
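
    One hedged guess worth checking, since POI looks like a Django model: with DEBUG = True, Django records every executed query on the connection, so a 2-million-row import grows memory even if the loop itself is fine. A sketch of the usual countermeasure (assumes Django; the reset interval is illustrative):

        from django import db
        from django.conf import settings

        def run():
            from geonames.models import POI
            settings.DEBUG = False            # stop Django's per-query logging
            with open('data/US.txt') as f:
                for n, l in enumerate(f):
                    ...                        # build and save the POI as above
                    if n % 10000 == 0:
                        db.reset_queries()     # drop any accumulated query log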

    Read the article

  • Making GUI applications on Linux/Windows. What languages/tools to use?

    - by Javed Ahamed
    My student group and I are trying to continue work over the summer on a project we built this semester, to turn it into a professional, deployable app. We originally did it in Adobe AIR, but it now seems that the computers this program will be running on will be very slow - maybe 600 MHz and 128-256 MB RAM - so Flash just isn't going to cut it. It is basically a health diagnosis application that we will be shipping out to impoverished countries. Now comes the real question. We are wondering what language to rebuild our application in. It has to have a good GUI builder associated with it, like the Adobe Flex/AIR GUI builder or Visual Studio's GUI builder, and the application should run on Linux primarily; if it can run on Windows too, that's just a plus. We are all students, without really any outside help, so whatever we decide to build this in must have ample documentation available for when we hit problems. Some things we have considered so far are Python with Glade, or C# with MonoDevelop, but again we really are not experts on any of this, which is why I am asking for help: I would rather spend the time now choosing the right tools than waste time down the line when we hit a roadblock. Thanks in advance!

    Read the article

  • Any high-level languages that can use C libraries?

    - by Isaiah
    I know this question could be in vain, but it's just out of curiosity, and I'm still very much a newb^^ Anyway, I've been loving Python for some time while learning it. My problem is obviously speed issues. I'd like to get into indie game creation, and for the short term, 2D and pygame will work. But I'd eventually like to branch into the 3D area, and Python is really too slow to make anything 3D and professional. So I'm wondering if there has ever been work to create a high-level language able to import and use C libraries? I've looked at Genie, and it seems to be able to use certain libraries, but I'm not sure to what extent. Would I be able to use it for OpenGL programming, or in a C game engine? I do know some Lisp and enjoy it a lot, but there aren't a great many libraries out there for it. Which leads to the problem: I can't stand C syntax, but C has libraries galore that I could need, and game engines like Irrlicht. Is there any language that can be used in place of C, on top of C libraries? Thanks so much, guys.
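
    It may help to note (an aside, not from the post) that Python itself can call C libraries directly through ctypes in the standard library; a minimal sketch using the C math library:

        from ctypes import CDLL, c_double
        from ctypes.util import find_library

        libm = CDLL(find_library('m'))   # the C math library (POSIX assumption)
        libm.cos.argtypes = [c_double]   # declare the C signature
        libm.cos.restype = c_double
        print(libm.cos(0.0))             # 1.0, computed inside the C library

    The same mechanism is how bindings such as PyOpenGL wrap a C API from a high-level language.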

    Read the article

  • Using a php://memory wrapper causes errors...

    - by HorusKol
    I'm trying to extend the PHP mailer class from Worx by adding a method which allows me to add attachments using string data rather than a path to a file. I came up with something like this:

        public function addAttachmentString($string, $name='', $encoding = 'base64', $type = 'application/octet-stream')
        {
            $path = 'php://memory/' . md5(microtime());
            $file = fopen($path, 'w');
            fwrite($file, $string);
            fclose($file);
            $this->AddAttachment($path, $name, $encoding, $type);
        }

    However, all I get is a PHP warning:

        PHP Warning: fopen() [function.fopen]: Invalid php:// URL specified

    There aren't any decent examples in the original documentation, but I've found a couple around the internet (including one here on SO), and my usage appears correct according to them. Has anyone had any success with using this? My alternative is to create a temporary file and clean up afterwards - but that would mean writing to disc, and this function will be used as part of a large batch process where I want to avoid slow disc operations (old server). The file is only short, but it has different information for each person the script emails.

    Read the article

  • FILE* issue in PPU-side code

    - by Cristina
    We are working on a homework assignment on Cell programming for college, and since the feedback on our questions there is kind of slow, I thought I could get some faster answers here. I have PPU-side code which tries to open a file passed down through char* argv[]; however, this doesn't work - the pointer assignment fails and I get NULL. My first idea was that the file isn't in the correct directory, so I copied it to every possible and logical place. My second idea is that maybe the PPU wants this pointer in its LS area, but I can't tell whether that's the bug or not. So... my question is: what am I doing wrong? I am working with the Fedora 7 Cell SDK, with Eclipse as an IDE. Maybe my argument setup is wrong, though it gets the name of the file correctly. Code on request:

        images_t *read_bin_data(char *name)
        {
            FILE *file;
            images_t *img;
            uint32_t *buffer;
            uint8_t buf;
            unsigned long fileLen;
            unsigned long i;

            //Open file
            file = (FILE*)malloc(sizeof(FILE));
            file = fopen(name, "rb");
            printf("[Debug]Opening file %s\n", name);
            if (!file) {
                fprintf(stderr, "Unable to open file %s", name);
                return NULL;
            }
            //.......
        }

    Main launch:

        int main(int argc, char* argv[])
        {
            int i, img_width;
            int modif_this[4] __attribute__ ((aligned(16))) = {1,2,3,4};
            images_t *faces, *nonfaces;
            spe_context_ptr_t ctxs[SPU_THREADS];
            pthread_t threads[SPU_THREADS];
            thread_arg_t arg[SPU_THREADS];

            // initialize img_width
            img_width = atoi(argv[1]);
            printf("[Debug]Img size is %i\n", img_width);
            faces = read_bin_data(argv[3]);
            //.......
        }

    Thanks for the help.

    Read the article

  • Have I taken a wrong path in programming by being excessively worried about code elegance and style?

    - by Ygam
    I am in a major slump right now. I am a BSIT graduate, but I only started actual programming less than a year ago. I have observed that I have the following attitudes in programming:

    - I tend to be more of a purist, scorning inelegant approaches to solving problems with code.
    - I tend to look at anything at a large scale, planning everything before I start coding, either in simple flowcharts or complex UML charts.
    - I have a really strong impulse to refactor my code, even if I miss deadlines or prolong development times.
    - I am obsessed with good directory structures, file naming conventions, and class, method, and variable naming conventions.
    - I always want to study something new, even, as I said, at the cost of missing deadlines.
    - I tend to see software development as something to engineer, to architect - that is, seeing how things relate to each other and how blocks of code can interact (I am a huge fan of loose coupling) - i.e. the OOP way of thinking.
    - I tend to combine OOP and procedural coding whenever I see fit.
    - I want my code to execute fast (thus the elegant approaches and refactoring).

    This bothers me because I see my colleagues doing much better the other way around (aside from the fact that they started programming in our first year of college). By "the other way around" I mean: they fire up their editors and get the job done much faster, because they don't have to think about how clean their code is or how elegant their algorithms are; they don't bother with OOP however big their projects are; and they mostly use web APIs, piece them together, and voila! Working code! Clients are happy, they get paid fast - at the expense of really unmaintainable or hard-to-read code that lacks structure and conventions, or slow execution of certain actions (the common reasoning against this being that internet connections are much faster these days and hardware is more powerful). The excuse I often receive is that clients don't care how you write the code, but they do care how soon you deliver it. If it works, all is good. Now, was my "purist" approach to programming the wrong way to start? Should I just dump these purist concepts and code the hell up, because I have seen it: clients don't really care how beautifully coded it is?

    Read the article

  • Synchronizing screencasting (ffmpeg) and capturing from the webcam (OpenCV)

    - by lyuba
    As in my previous questions, I am trying to build a simple eye tracker. I decided to start with a Linux version (running Ubuntu). To complete this task, one should organize screencasting and webcam capturing in such a way that the frames from both streams exactly match each other, and there is the same total number of frames in each of them. The screencasting fps fully depends on the camera's fps, so each time we get an image from the webcam we can potentially grab a screen frame and stay happy. However, all the tools for fast screencasting, like ffmpeg, for example, return an .avi file as the result and require the fps to be known before they start. On the other side, tools like Java+Robot or ImageMagick seem to need around 20 ms to return a .jpg screenshot, which is pretty slow for the task. But they may be invoked right after each webcam frame is grabbed, and so provide the needed synchronization. So the sub-questions are:

    1. Does a USB camera's frame rate vary during a single session?
    2. Are there any tools which provide fast screencasting frame by frame?
    3. Is there any way to make ffmpeg push a new frame to the .avi file only when the program requests it?

    For my task I may use either C++ or Java. I am, actually, an interface designer, not a driver programmer, and this task seems to be pretty low-level. I would be grateful for any suggestion or tip!
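
    A sketch of the "screenshot right after each webcam frame" idea in Python (an illustration only - the poster plans C++ or Java - and it assumes OpenCV plus the mss screenshot library): the two streams stay paired by construction, one screen grab per camera frame:

        import cv2
        from mss import mss

        cap = cv2.VideoCapture(0)                   # webcam stream
        pairs = []
        with mss() as sct:
            while len(pairs) < 100:
                ok, cam_frame = cap.read()          # blocks until the camera delivers a frame
                if not ok:
                    break
                screen = sct.grab(sct.monitors[1])  # screenshot taken right after the grab
                pairs.append((cam_frame, screen))   # frame counts stay equal by design
        cap.release()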

    Read the article

  • Need guidelines for optimizing WebGL performance by minimizing shader changes

    - by brainjam
    I'm trying to get an idea of the practicality of WebGL for rendering large architectural interior scenes, consisting of 100Ks of triangles. These triangles are distributed over many objects, and there are many materials in the scene. On the other hand, there are no moving parts, and the materials tend to be fairly simple, mostly based on texture maps. There is a lot of texture map sharing... for example, all the chairs in the scene will share a common map. There is also some multitexturing - up to three textures overlaid in a material. I've been doing a little experimentation and reading, and gather that frequently switching materials during a rendering pass will slow things down. For example, a scene with 200K triangles will have significant performance differences depending on whether there are 10 or 1000 objects, assuming that each time an object is displayed a new material is set up. So it seems that if performance is important, the scene should be sorted by material so as to minimize material switching. What I'm looking for is guidelines on how to think about the overhead of various state changes, and where I get the biggest bang for the buck. For example:

    - what are the relative performance costs of, say, gl.useProgram(), gl.uniformMatrix4fv(), and gl.drawElements()?
    - should I try to write ubershaders to minimize shader switching?
    - should I try to aggregate geometry to minimize the number of gl.drawElements() calls?

    I realize that mileage may vary depending on browser, OS, and graphics hardware. And I'm also not looking for heroic measures - just some guidelines from people who already have some experience in making scenes fast. I'll add that while I've had some experience with fixed-pipeline OpenGL programming in the past, I'm rather new to the WebGL/OpenGL ES 2.0 way of doing things.

    Read the article

  • Legitimate uses of the Function constructor

    - by Marcel Korpel
    As repeatedly said, it is considered bad practice to use the Function constructor (also see the ECMAScript Language Specification, 5th edition, § 15.3.2.1):

        new Function ([arg1[, arg2[, … argN]],] functionBody)

    (where all arguments are strings containing argument names and the last (or only) string contains the function body).

    To recapitulate, it is said to be slow, as explained by the Opera team: Each time […] the Function constructor is called on a string representing source code, the script engine must start the machinery that converts the source code to executable code. This is usually expensive for performance – easily a hundred times more expensive than a simple function call, for example. (Mark ‘Tarquin’ Wilton-Jones) Though it's not that bad, according to this post on MDC (I didn't test this myself using the current version of Firefox, though).

    Crockford adds that [t]he quoting conventions of the language make it very difficult to correctly express a function body as a string. In the string form, early error checking cannot be done. […] And it is wasteful of memory because each function requires its own independent implementation.

    Another difference is that a function defined by a Function constructor does not inherit any scope other than the global scope (which all functions inherit). (MDC)

    Apart from this, you have to be attentive to avoid injection of malicious code when you create a new Function using dynamic contents. Lots of disadvantages, and it is intelligible that ECMAScript 5 discourages the use of the Function constructor by throwing an exception when it is used in strict mode (§ 13.1). That said, T.J. Crowder says in an answer that [t]here's almost never any need for the similar […] new Function(...), either, again except for some advanced edge cases. So, now I am wondering: what are these "advanced edge cases"? Are there legitimate uses of the Function constructor?

    Read the article

  • Oracle output: cursor, file, or very long string?

    - by Klay
    First, the setup: I have a table in an Oracle 10g database with spatial columns. I need to be able to pass in a spatial reference so that I can reproject the geometry into an arbitrary coordinate system. Ultimately, I need to compress the results of this projection to a zip file and make it available for download through a Silverlight project. I would really appreciate ideas as to the best way to accomplish this. In the examples below, the SRID is the spatial reference ID integer used to convert the geometric points into a new coordinate system. In particular, I can see a couple of possibilities. There are many more, but this is an idea of how I'm thinking:

    a) Pass the SRID to a dynamic view - perform the projection, output a cursor - send the cursor to UTL_COMPRESS - write the output to a file (somehow) - send the URL to the Silverlight app

    b) Use the SRID to call an Oracle function from the Silverlight app - perform the projection, output a string - build the strings into a file - compress the file using the SharpZipLib library in .NET - send the bytestream back to the Silverlight app

    I've done the first two steps of b), and the conversion of 100 points took about 7 seconds, which is unacceptably slow. I'm hoping it would be faster to do the processing entirely in Oracle. If anyone can see potential problems with either way of doing this, or can suggest a better way, it would be very helpful. Thanks!

    Read the article

  • What's the best way to select max over multiple fields in SQL?

    - by allyourcode
    What I kind of want to do is select max(f1, f2, f3). I know this doesn't work, but I think what I want should be pretty clear (see update 1). I was thinking of doing select max(concat(f1, '--', f2 ...)), but this has various disadvantages. In particular, doing concat will probably slow things down. What's the best way to get what I want?

    Update 1: The answers I've gotten so far aren't what I'm after. max works over a set of records, but it compares them using only one value; I want max to consider several values, just like the way order by can consider several values.

    Update 2: Suppose I have the following table:

        id  class_name  order_by1  order_by2
        1   a           0          0
        2   a           0          1
        3   b           1          0
        4   b           0          9

    I want a query that will group the records by class_name. Then, within each "class", select the record that would come first if you ordered by order_by1 ascending then order_by2 ascending. The result set would consist of records 2 and 3. In my magical query language, it would look something like this:

        select max(* order by order_by1 ASC, order_by2 ASC)
        from table
        group by class_name

    Read the article

  • Using different numeric variable types

    - by DataPimp
    I'm still pretty new, so bear with me on this one; my question(s) are not meant to be argumentative or petty, but during some reading something struck me as odd. I'm under the assumption that when computers were slow and memory was expensive, using the correct variable type was much more of a necessity than it is today. Now that memory is a bit easier to come by, people seem to have relaxed a bit. For example, you see this sample code everywhere:

        for (int i = 0; i < length; i++)

    int (-2,147,483,648 to 2,147,483,647) for length? Isn't byte (0-255) a better choice? So I'm curious about your opinion and what you believe to be best practice. I hate to think this would be used only because the acronym "int" is more intuitive for a beginner... or has memory just become so cheap that we really don't need to concern ourselves with such petty things, and therefore we should just use long so we can be sure any other numbers/types (within reason) can be cast automagically? ...or am I just being silly by concerning myself with such things?

    Read the article

  • How can I find out if two arguments are instances of the same, but unknown class?

    - by Ingmar
    Let us say we have a method which accepts two arguments o1 and o2 of type Object and returns a boolean value. I want this method to return true only when the arguments are instances of the same class, e.g.:

        foo(new Integer(4), new Integer(5));

    should return true; however:

        foo(new SomeClass(), new SubtypeSomeClass());

    should return false, and also:

        foo(new Integer(3), "zoo");

    should return false. I believe one way is to compare the fully qualified class names:

        public boolean foo(Object o1, Object o2) {
            Class<? extends Object> c1 = o1.getClass();
            Class<? extends Object> c2 = o2.getClass();
            if (c1.getName().equals(c2.getName())) {
                return true;
            }
            return false;
        }

    An alternative conditional statement would be:

        if (c1.isAssignableFrom(c2) && c2.isAssignableFrom(c1)) {
            return true;
        }

    The latter alternative is rather slow. Are there other alternatives to this problem?
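
    For contrast, the same "exact same class" test in Python (an aside, not the Java answer): class objects are unique, so a plain identity comparison suffices, which is also the cheapest possible check:

        def foo(o1, o2):
            # type() returns the class object itself; identity compares exactly
            return type(o1) is type(o2)

        print(foo(4, 5))       # True: both int
        print(foo(4, "zoo"))   # False: int vs str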

    Read the article

  • Multi-dimensional array edge/border conditions

    - by kirbuchi
    Hi, I'm iterating over a 3-dimensional array (an image with 3 values for each pixel) to apply a 3x3 filter to each pixel, as follows:

        //For each value on the image
        for (i=0; i<3*width*height; i++){
            //For each filter value
            for (j=0; j<9; j++){
                if (notOutsideEdgesCondition){
                    *(**(outArray)+i) += *(**(pixelArray)+i-1+(j%3)) * (*(filter+j));
                }
            }
        }

    I'm using pointer arithmetic because if I used array notation I'd have 4 loops, and I'm trying to have the least possible number of loops. My problem is that my notOutsideEdgesCondition is getting quite out of hand, because I have to consider 8 border cases. I have the following conditions handled:

    - Left column: ((i%width)==0) && (j%3==0)
    - Right column: ((i-1)%width==0) && (i>1) && (j%3==2)
    - Upper row: (i<width) && (j<2)
    - Lower row: (i>(width*height-width)) && (j>5)

    and I still have to consider the 4 corner cases, which will have longer expressions. At this point I've stopped and asked myself if this is the best way to go, because if I have a 5-line-long conditional evaluation it will not only be truly painful to debug but will also slow down the inner loop. That's why I come to you, to ask if there's a known algorithm to handle these cases, or a better approach for my problem. Thanks a lot.
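
    A common way to dissolve all eight cases (a sketch, not from the thread): pad the image by one pixel on each side once, then run the filter with no edge tests at all. Illustrated in Python/numpy rather than C, with an averaging kernel as the stand-in filter:

        import numpy as np

        height, width = 480, 640                      # example dimensions
        img = np.random.rand(height, width, 3)        # 3 values per pixel
        kernel = np.ones((3, 3)) / 9.0                # example 3x3 filter

        # replicate the border once; afterwards every pixel has all 8 neighbours
        padded = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode='edge')

        out = np.zeros_like(img)
        for dy in range(3):                           # 9 shifted adds, no conditionals
            for dx in range(3):
                out += kernel[dy, dx] * padded[dy:dy+height, dx:dx+width, :]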

    Read the article

  • Poor DB4O performance - What am I doing wrong?

    - by Jon
    I read Rob Conery's post about object databases and thought I'd give it a go. I have a class, let's say Book, with 15 properties: mostly string, some int, one DateTime, and 2 decimals. I used Rob's code to open the database file etc. and tested on a WinForms project, not a web project. So I did the following:

        for(int i = 0; i < 1000000; i++)
        {
            var Book = new Book();
            //Assign variables
            s.Save(Book);
        }
        s.CommitChanges();

    This took at least a couple of minutes. I then tried to retrieve all the data to test the speed:

        var spp = s.All<Passports>();

    This one line was quick. However, then calling spp.Count(), as well as a separate test doing a foreach loop and incrementing a counter, took a couple of minutes. This seems quite slow to me. Am I doing something wrong, or am I expecting too much?

    Read the article

  • Web page database query optimization

    - by morpheous
    I am putting together a web page which is quite 'expensive' in terms of database hits. I don't want to start optimizing at this stage - though, with me trying to hit a deadline, I may end up not optimizing at all. Currently the page requires 18 (that's right, eighteen) hits to the db. I am already using joins, and some of the queries are UNIONed to minimize the trips to the db. My local dev machine can handle this (the page is not slow); however, I feel that if I release this into the wild, the number of queries will quickly overwhelm my database (MySQL). I could always use memcache or something similar, but I would much rather continue with the other dev work that needs to be completed before the deadline - at least retrieving the page works; it's simply a matter of optimization now (if required). My question therefore is: is 18 db queries for a single page retrieval completely outrageous (i.e. should I put everything on hold and optimize the hell out of the retrieval logic), or shall I continue as normal, meet the deadline, release on schedule and see what happens?

    [Edit] Just to clarify, I have already done the 'obvious' things like using (single and composite) indexes for the fields used in the queries. What I haven't yet done is run a query analyzer to see if my indexes etc. are optimal.

    Read the article

  • C# DynamicMethod prelink

    - by soywiz
    I'm the author of a PSP emulator written in C#. I'm generating lots of DynamicMethods using ILGenerator: I'm converting assembly code into an AST, then generating IL and building a DynamicMethod from it. I'm doing this in another thread, so I can generate new methods while the program is executing others, and it runs smoothly. My problem is that native code generation is lazy, so the machine code is generated when the function is first called, not when the IL is generated. That means it happens in the program-executing thread, and native code generation is pretty slow, as is the asm->AST->IL step. I have tried the Marshal.Prelink method, which is supposed to generate the machine code before the function is executed:

        Marshal.Prelink(MethodInfo);

    It works on Mono, but it doesn't work on MS .NET. Is there a way of prelinking a DynamicMethod on MS .NET? I thought of adding a boolean parameter to the function that, if set, exits the function immediately so no code is actually executed. I could "prelink" that way, but I think that's a nasty solution I want to avoid. Any idea?

    Read the article

  • Implementing Excel 2003 COM Add-in UDF in Async Programming Model using C# (VS 2005)

    - by Venu
    Hi: I am trying to implement a UDF using an Excel COM Add-in (2003) with Visual Studio 2005 in C#. I would like to implement the UDF using async programming, since the UDF is a slow operation whose results are fetched from a server. As an illustration (not a real-world example), the following UDF works fine without any issue:

        public double mul(double number1, double number2)
        {
            return number1 * number2;
        }

    How can I do the same functionality in an async way? For example, I would like the UDF to return immediately and later, when the results are available from the server, update the desired cells.

        // This method returns immediately.
        public object mul(double number1, double number2)
        {
            return "calculating";
        }

        // This method of a worker thread will update the results.
        public void OnResultsAvailable(object result)
        {
            // Question: how should I update the cells that triggered the calculations above?
        }

    Constraints: I cannot use Excel RTD, as I have to work with an existing codebase written using the Excel C# COM Add-in. Thanks for the help. -Venu

    Read the article

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

        create table MESSAGE_INDEX (
            KEY        VARCHAR2(256)  not null,
            VALUE      VARCHAR2(4000) not null,
            MESSAGE_ID NUMBER         not null
        )

    I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' - OR value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

        SELECT message_id
        FROM message_index idx
        WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
            OR NOT EXISTS (SELECT 1 FROM message_index
                           WHERE key = 'someKey' AND idx.message_id = message_id))

    But it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria once we move outside of my test environment. The primary key is (key, value, message_id):

        add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index on message_id, to speed up searching for missing keys:

        create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next, e.g.:

        SELECT message_id FROM message_index
        WHERE (key/value compare)
          AND message_id IN (SELECT ... and so on)

    What can I do to speed this up?

    Read the article

  • Starting and stopping Firefox from C#

    - by Lucas Meijer
    When I start /Applications/Firefox.app/Contents/MacOS/firefox-bin on Mac OS X using Process.Start() under Mono, the id of the process that gets returned does not match the process that Firefox ends up running under. It looks like Firefox quickly decides to start another process and kill the current one. This makes it difficult to stop Firefox, and to detect whether it is still running. I've tried starting Firefox using the -no-remote flag, to no avail. Is there a way to start Firefox such that it doesn't do this "I'll quickly make a new process for you" dance? The situation can somewhat be detected by making sure Firefox keeps running for at least 3 seconds after its start, and when it does not, scanning for other Firefox processes. However, this technique is shaky at best: on slow days it might take a bit more than 3 seconds, and then all tests depending on this behaviour fail. It turns out that this behaviour only happens when asking Firefox to start a specific profile using -P MyProfile (which I need to do, as I have to start Firefox with specific proxy server settings). If I start Firefox "normally", it does stick to its process.

    Read the article

< Previous Page | 209 210 211 212 213 214 215 216 217 218 219 220  | Next Page >