Search Results

Search found 6363 results on 255 pages for 'buford speed'.


  • optimize 2D array in C++

    - by Hristo
    I'm dealing with a 2D array with the following characteristics:

        const int cols = 500;
        const int rows = 100;
        int arr[rows][cols];

    I access arr in the following manner to do some work:

        for (int k = 0; k < T; ++k) {             // for each trainee
            myscore[k] = 0;
            for (int i = 0; i < N; ++i) {         // for each sample
                for (int j = 0; j < E[i]; ++j) {  // for each expert
                    myscore[k] += delta(i, anotherArray[k][i], arr[j][i]);
                }
            }
        }

    So I am worried about the array arr and not the other one. I need to make this more cache-friendly and also boost the speed. I was thinking of transposing the array, but I wasn't sure how to do that; my implementation turns out to work only for square matrices. How would I make it work for non-square matrices? Also, would mapping the 2D array onto a 1D array boost the performance? If so, how would I do that? Finally, any other advice on how else I can optimize this would be welcome... I've run out of ideas, but I know that arr[j][i] is the place where I need to make changes, because I'm accessing the array column by column instead of row by row, and that is not cache-friendly at all. Thanks, Hristo
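
    For reference, the index arithmetic behind both suggestions, written out as plain formulas (using the declared extents rows and cols):

        \[
        \texttt{arr}[j][i] = \texttt{flat}[\,j \cdot \texttt{cols} + i\,],
        \qquad
        \texttt{arrT}[i][j] = \texttt{flatT}[\,i \cdot \texttt{rows} + j\,]
        \]

    where arrT is the cols-by-rows transpose of arr. After transposing (or flattening the transpose), the innermost loop over j advances through consecutive addresses with stride 1 instead of jumping cols elements at a time, which is exactly the cache-friendly access order.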

  • What is different about C++ math.h abs() compared to my abs()

    - by moka
    I am currently writing some GLSL-like vector math classes in C++, and I just implemented an abs() function like this:

        template<class T>
        static inline T abs(T _a)
        {
            return _a < 0 ? -_a : _a;
        }

    I compared its speed to the default C++ abs from math.h like this:

        clock_t begin = clock();
        for (int i = 0; i < 10000000; ++i) {
            float a = abs(-1.25);
        }
        clock_t end = clock();
        unsigned long time1 = (unsigned long)((float)(end - begin) / ((float)CLOCKS_PER_SEC / 1000.0));

        begin = clock();
        for (int i = 0; i < 10000000; ++i) {
            float a = myMath::abs(-1.25);
        }
        end = clock();
        unsigned long time2 = (unsigned long)((float)(end - begin) / ((float)CLOCKS_PER_SEC / 1000.0));

        std::cout << time1 << std::endl;
        std::cout << time2 << std::endl;

    Now the default abs takes about 25 ms while mine takes 60. I guess there is some low-level optimisation going on. Does anybody know how math.h abs works internally? The performance difference is nothing dramatic, but I am just curious!

  • wpf: capturing mouse does not work

    - by amethyste
    Hello. I am developing a kind of Outlook calendar application where I need to make appointments resizable with the mouse. My first try with a Thumb did not work properly, so I tried another way. What I did is this:

    1) On the bottom of the appointment panel I added a rectangle to mark the resize zone (the thumb). The appointment panel is placed on a grid panel.

    2) I intercept the mouse-down event on the rectangle and send the event to this code:

        private Point startPoint;

        private void OnResizeElementMouseDown(object sender, MouseButtonEventArgs e)
        {
            e.Handled = true;
            this.MouseMove += new MouseEventHandler(ResizeEndElement_MouseMove);
            this.MouseLeftButtonUp += new MouseButtonEventHandler(OnResizeElementMouseUp);
            // some code to perform new height computation
            Mouse.Capture(this);
        }

    where this is the appointment panel that owns the thumb. Decreasing the height works well, but increasing it is more difficult: if I move the mouse very, very slowly it's OK, but if I speed it up a little the pointer tends to leave the appointment panel, and then all MouseMove events are lost. I thought Mouse.Capture() was supposed to solve exactly this kind of problem, but apparently not. Does anybody know what is wrong in my code?
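
    For comparison, a minimal sketch of a capture pattern that usually survives fast mouse movement (hypothetical class and handler names, capturing on the whole panel for brevity; a real control would hit-test the resize rectangle first). The two key differences from the code above: the handlers are wired once instead of being re-attached on every mouse-down, and the capture result drives a flag that filters ordinary hover moves:

        using System;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Input;

        // Hypothetical appointment panel; names are illustrative, not from the question.
        public class AppointmentPanel : Border
        {
            private bool isResizing;
            private Point lastPoint;

            public AppointmentPanel()
            {
                // Wire the handlers once, instead of re-attaching on every mouse-down.
                MouseLeftButtonDown += OnResizeElementMouseDown;
                MouseMove += OnPanelMouseMove;
                MouseLeftButtonUp += OnPanelMouseUp;
            }

            private void OnResizeElementMouseDown(object sender, MouseButtonEventArgs e)
            {
                lastPoint = e.GetPosition(this);
                isResizing = CaptureMouse();   // capture first; all moves now route here
                e.Handled = true;
            }

            private void OnPanelMouseMove(object sender, MouseEventArgs e)
            {
                if (!isResizing) return;       // ignore ordinary hover moves
                Point current = e.GetPosition(this);
                Height = Math.Max(0.0, ActualHeight + (current.Y - lastPoint.Y));
                lastPoint = current;
            }

            private void OnPanelMouseUp(object sender, MouseButtonEventArgs e)
            {
                if (!isResizing) return;
                isResizing = false;
                ReleaseMouseCapture();         // hand the mouse back to the system
            }
        }

    With capture held, GetPosition(this) keeps returning sensible (even out-of-bounds) coordinates while the pointer is outside the panel, which is what makes fast drags in either direction keep working.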

  • Question about how to implement a c# host application with a plugin-like architecture

    - by devoured elysium
    I want to have an application that works as a host to many other small applications. Each of those applications should work as a kind of plugin to this main application. I call them plugins not in the sense that they add something to the main application, but because they can only work with this host application, as they depend on some of its services.

    My idea was to have each of those plugins run in a different AppDomain. The problem seems to be that my host application has a set of services that the plugins will want to use, and from what I understand, making data flow in and out between different AppDomains is not that great a thing. On one hand I'd like them to behave as stand-alone applications (although, as I said, they need to use the host application's services a lot of the time), but on the other hand I'd like that if any of them crashes, my main application wouldn't suffer from it.

    What is the best (.NET) approach to this kind of situation? Make them all run in the same AppDomain but each in a different thread? Use different AppDomains, one for each "plugin"? How would I make them communicate with the host application? Any other way of doing this? Although speed is not an issue here, I wouldn't like function calls to be that much slower than they are in a regular .NET application. Thanks
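
    A minimal sketch of the one-AppDomain-per-plugin arrangement (IPlugin, IHostServices and the class names here are hypothetical; the mechanics, MarshalByRefObject plus transparent proxies, are the standard way calls cross an AppDomain boundary):

        using System;

        // Shared contracts, visible to both host and plugins (hypothetical names).
        public interface IHostServices { void Log(string message); }
        public interface IPlugin { void Run(IHostServices host); }

        // Host-side service object; plugins call it through a proxy.
        public class HostServices : MarshalByRefObject, IHostServices
        {
            public void Log(string message) { Console.WriteLine(message); }
        }

        // A plugin; it lives entirely inside its own AppDomain.
        public class SamplePlugin : MarshalByRefObject, IPlugin
        {
            public void Run(IHostServices host) { host.Log("plugin running"); }
        }

        public static class Host
        {
            public static void Main()
            {
                AppDomain sandbox = AppDomain.CreateDomain("plugin-sandbox");
                try
                {
                    // Instantiate the plugin inside the sandbox; what comes back
                    // here is a transparent proxy, not the object itself.
                    IPlugin plugin = (IPlugin)sandbox.CreateInstanceAndUnwrap(
                        typeof(SamplePlugin).Assembly.FullName,
                        typeof(SamplePlugin).FullName);

                    plugin.Run(new HostServices());
                }
                finally
                {
                    // A misbehaving plugin can be discarded wholesale.
                    AppDomain.Unload(sandbox);
                }
            }
        }

    Cross-domain calls go through .NET remoting, so they are noticeably slower than ordinary in-process calls and chatty interfaces hurt; a coarse-grained service API helps. One caveat: AppDomains give you unloading and state isolation, but an unhandled exception on a thread a plugin starts can still take down the whole process, so they are not absolute crash-proofing.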

  • jQuery weirdness. Div becomes attached to chained element markup?

    - by Scott B
    I've got a div in my app that is displayed each time my theme options panel is "saved". The markup is...

        <form method="post">
        <?php if ( $_REQUEST['saved']) { ?>
            <div id="message" class="updated fade"><p>Sweet! The settings were saved :)</p></div>
            <script type="text/javascript">
                $('#message').delay(3000).fadeOut(3000);
            </script>
        <?php } ?>

    This has the effect of showing the div (which is absolutely positioned to overlay the interface). I'm also using jQuery to fade the message offscreen after 3 seconds. This works fine; however, when I add a bit of script to my jQuery chain (see the commented-out block below), the message div is only visible when the jPicker popup appears.

        $(function() {
            $("#carousel").jCarouselLite({
                btnNext: ".next",
                btnPrev: ".prev",
                visible: 6,
                speed: 700
            });
            $('#carousel').show();
            $('#myTheme').change(function() {
                var myImage = $('#myTheme :selected').text();
                $('.selectedImage img').attr('src', '../wp-content/themes/myTheme/styles/' + myImage + '/screenshot.jpg');
            });
            $('#carousel ul li').click(function(e) {
                var myOption = $(this).children('img').attr('title');
                $("#myTheme option[value='" + myOption + "']").attr('selected', 'selected');
                $("#myTheme").css('backgroundColor', '#A9A9A9').animate({ backgroundColor: "#ffffff" }, 'slow');
            });
            $('#carousel ul li').hover(function(e) {
                var img_src = $(this).children('img').attr('src');
                $('.selectedImage img').attr('src', img_src);
            }, function() {
                $('.selectedImage img').attr('src', '<?php echo $selectedThumb; ?>');
            });
            /*
            $('#myTheme_sidebar_color').jPicker({},
                function(color) { $(this).val(color.get_Hex()); },
                function(color) { $(this).val(color.get_Hex()); }
            );
            */
        });

  • Do you know a date picker to quickly pick one day of the current week?

    - by Murmelschlurmel
    Most date pickers let you pick the date from a tiny calendar or enter the date by hand, for example http://jqueryui.com/demos/datepicker/. This requires:

    - two clicks (one to display the calendar and one to select the correct date),
    - good eyesight (usually the pop-up calendar is very small),
    - and good hand-eye coordination to pick the correct date in the tiny calendar with your mouse.

    That's no problem for power users, but a hassle for older people and computer beginners. I found a website with a different approach. It seems their users mostly select dates in the current week, so they list all the days of the current week in a bar, together with the weekday name, and the current day is marked in another color. A tiny calendar icon on the right opens up a regular date picker, which gives you access to all the usual date picker functionality. Here is a screenshot: http://mite.yo.lk/assets/img/tour/de/zeiten-erfassen.png

    Do you know of any jQuery plugin with a similar feature? If not, do you know any other plugin or widget that would help me speed up development? Thank you!

  • What is the fastest way for reading huge files in Delphi?

    - by dummzeuch
    My program needs to read chunks from a huge binary file with random access. I have got a list of offsets and lengths which may have several thousand entries. The user selects an entry, and the program seeks to the offset and reads length bytes. The program internally uses a TMemoryStream to store and process the chunks read from the file. Reading the data is done via a TFileStream like this:

        FileStream.Position := Offset;
        MemoryStream.CopyFrom(FileStream, Size);

    This works fine, but unfortunately it becomes increasingly slower as the files get larger. The file size starts at a few megabytes but frequently reaches several tens of gigabytes. The chunks read are around 100 KB in size. The file's content is only read by my program, and it is the only program accessing the file at the time. Also, the files are stored locally, so this is not a network issue. I am using Delphi 2007 on a Windows XP box. What can I do to speed up this file access?

  • Fastest Java way to remove the first/top line of a file (like a stack)

    - by christangrant
    I am trying to improve an external sort implementation in Java. I have a bunch of BufferedReader objects open for temporary files, and I repeatedly remove the top line from each of these files. This pushes the limits of the Java heap. I would like a more scalable method of doing this, without losing speed to a bunch of constructor calls. One solution is to open a file only when it is needed, read its first line, and then delete that line, but I am afraid that this will be significantly slower. So, using the Java libraries, what is the most efficient way of doing this?

    --Edit--

    For an external sort, the usual method is to break a large file up into several chunk files and sort each of the chunks. Then treat the sorted files like buffers: pop the top item from each file, and the smallest of all those is the global minimum. Continue until all items are consumed. http://en.wikipedia.org/wiki/External_sorting

    My temporary files (buffers) are basically BufferedReader objects. The operations performed on these files are the same as stack/queue operations (peek and pop, no push needed). I am trying to make these peek and pop operations more efficient, because using many BufferedReader objects takes up too much space.

  • Images not showing in ie7 using jquery cycle and jCarouselLite plugin

    - by Geetha
    Hi All, I am using the jQuery Cycle and jCarouselLite plugins to display images as a slideshow. The images are not getting displayed in IE7, but they work perfectly in IE6. The image properties inside the cycle control show:

        Protocol:    Not available
        Type:        Not available
        Address(url): Not available
        Size:        Not available
        Dimensions:  100X100

    but the control does have the URL; if I open that image URL separately, it shows the image. Code:

        $('#slide').cycle({
            fx: 'fade',
            continuous: true,
            speed: 7500,
            timeout: 55000,
            sync: 1
        });

    HTML code:

        <div id="slide">
            <img src="samp1.jpg" width="664" height="428" border="0" />
            <img src="samp2.jpg" width="664" height="428" border="0" />
            <img src="samp3.jpg" width="664" height="428" border="0" />
            <img src="samp4.jpg" width="664" height="428" border="0" />
            <img src="samp5.jpg" width="664" height="428" border="0" />
            <img src="samp6.jpg" width="664" height="428" border="0" />
            <img src="samp7.jpg" width="664" height="428" border="0" />
            <img src="samp8.jpg" width="664" height="428" border="0" />
        </div>

    Geetha.

  • Serialized NHibernate Configuration objects - detect out of date or rebuild on demand?

    - by fostandy
    I've been using serialized NHibernate Configuration objects (also discussed here and here) to speed up my application startup from about 8 s to 1 s. I also use Fluent NHibernate, so the path is more like: ClassMap class definitions in code -> FluentConfiguration -> XML -> NHibernate Configuration -> Configuration serialized to disk.

    The problem with doing this is that one runs the risk of out-of-date mappings: if I change the mappings but forget to rebuild the serialized configuration, then I end up using the old mappings without realising it. This does not always result in an immediate and obvious error during testing, and several times the misbehaviour has been a real pain to detect and fix.

    Does anybody have any idea how I would be able to detect that my ClassMaps have changed, so that I could either issue an immediate warning/error or rebuild the serialized configuration on demand? At the moment I am comparing timestamps on my compiled assembly against the serialized configuration. This picks up mapping changes, but unfortunately it generates a massive false-positive rate, as ANY change to the code results in an out-of-date flag. I can't move the ClassMaps to another assembly, as they are tightly integrated into the business logic. This has been niggling me for a while, so I was wondering if anybody had any suggestions?
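
    One possibility, sketched under explicit assumptions (this is not a built-in Fluent NHibernate facility): fingerprint the mapping model itself instead of the whole assembly, for example by hashing the names and raw constructor IL of the ClassMap types, and store that hash alongside the serialized configuration, rebuilding when it changes:

        using System;
        using System.Linq;
        using System.Reflection;
        using System.Security.Cryptography;
        using System.Text;

        // Sketch: derive a fingerprint from the mapping classes only, so unrelated
        // code changes in the same assembly don't invalidate the cached config.
        // Assumption: Fluent NHibernate ClassMaps derive from ClassMap<TEntity>.
        public static class MappingFingerprint
        {
            public static string Compute(Assembly mappingAssembly)
            {
                var text = new StringBuilder();
                var classMaps = mappingAssembly.GetTypes()
                    .Where(IsClassMap)
                    .OrderBy(t => t.FullName);

                foreach (Type map in classMaps)
                {
                    text.AppendLine(map.FullName);

                    // The mapping calls live in the ClassMap constructor, so
                    // include its raw IL; editing the mapping changes these bytes.
                    ConstructorInfo ctor = map.GetConstructor(Type.EmptyTypes);
                    if (ctor != null)
                        text.AppendLine(Convert.ToBase64String(
                            ctor.GetMethodBody().GetILAsByteArray()));
                }

                using (SHA1 sha = SHA1.Create())
                    return BitConverter.ToString(
                        sha.ComputeHash(Encoding.UTF8.GetBytes(text.ToString())));
            }

            private static bool IsClassMap(Type type)
            {
                for (Type t = type.BaseType; t != null; t = t.BaseType)
                    if (t.IsGenericType && t.Name == "ClassMap`1")
                        return true;
                return false;
            }
        }

    The IL does embed metadata tokens, so some unrelated changes can still flip the hash, but the false-positive rate should be far lower than comparing whole-assembly timestamps.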

  • Creating an appropriate index for a frequently used query in SQL Server

    - by Slauma
    In my application I have two queries which will be used quite frequently. The WHERE clauses of these queries are the following:

        WHERE FieldA = @P1 AND (FieldB = @P2 OR FieldC = @P2)

    and

        WHERE FieldA = @P1 AND FieldB = @P2

    P1 and P2 are parameters entered in the UI or coming from external data sources.

    - FieldA is an int and highly non-unique, meaning only two, three, or four distinct values in a table with, say, 20000 rows
    - FieldB is a varchar(20) and is "almost" unique; there will be only very few rows where FieldB has the same value
    - FieldC is a varchar(15) and also highly distinct, but not as much as FieldB
    - FieldA and FieldB together are unique (but do not form my primary key, which is a simple auto-incrementing identity column with a clustered index)

    I'm wondering now what's the best way to define an index to speed up specifically these two queries. Shall I define one index with...

        FieldB (or better FieldC here?)
        FieldC (or better FieldB here?)
        FieldA

    ...or better two indices:

        FieldB
        FieldA

    and

        FieldC
        FieldA

    Or are there even other and better options? What's the best way, and why? Thank you for suggestions in advance!

  • How to display many SVGs in Java with high performance

    - by Oak
    What I want

    My goal is to be able to display a large number of SVG images on a single drawing area in Java, each with its own translation/rotation/scale values. I'm looking for the simplest solution that allows this, optionally even using OpenGL to speed things up.

    What I've Tried

    My initial naive approach was to use SVG Salamander to draw directly on a JPanel, but the performance was pathetic. I poked around and learned that I should do something like manually convert each SVG into a BufferedImage created with createCompatibleImage, then do the transformations I want, then draw it using double buffering. I ran into some trouble here, and before I continued I tried looking for frameworks to simplify things.

    What I've Looked At

    I've been a bit overwhelmed by the available options, which is why I'm turning to SO for help. I've looked at:

    - Cairo (with Glitz, maybe?)
    - Libart - not sure if this actually supports SVGs
    - FengGUI
    - Slick - looks promising, but a bit of an overkill

    I couldn't decide which is best for me to start working with, and I hope someone here has experience doing similar things with any of these.

  • JDBC call not executing

    - by dbyrne
    I am working on one of the DAOs for a medium-sized web application. Unfortunately, it contains very convoluted logic and makes hundreds of JDBC stored-proc calls in loops. This is out of my control. I am working on a method inside the DAO which makes a single JDBC call. The simplified version of what this method looks like is this:

        DriverManager.registerDriver(new com.sybase.jdbc2.jdbc.SybDriver());
        Connection con = DriverManager.getConnection(
                (String) connectionDetails.get("DATABASE_URL"),
                (String) connectionDetails.get("USERID"),
                (String) connectionDetails.get("PASSWORD"));
        String sqlToExecute = "{call " + STORED_PROC + "(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)}";
        CallableStatement stmt = con.prepareCall(sqlToExecute);
        // Maybe I should try calling clearParameters here?
        stmt.setString(1, someData);
        // ...set of parameters...
        if (!stmt.execute()) {
            // execute method never returns false
        }
        stmt.close();

    It's pretty much a textbook JDBC call. All this stored proc does is insert a single row. Here is where things get crazy: this code works when you run it through a debugger line by line, but fails when you run it "full speed". Not only does it fail, but it doesn't throw any exception! The execute method always returns true. It just breezes right through the JDBC call without inserting a row into the database. If you go through the log files, copy the stored proc call, and run it manually, it works (just like it does in debug mode). What's strange is that the rest of the DAO, with all its hundreds of looped stored proc calls, works fine. My thinking is that Connection or CallableStatement is caching some value behind the scenes that is screwing things up. Has anyone ever seen anything like this before? A JDBC call failing with no exceptions? I know it will be impossible to provide a complete solution to this without seeing the whole application; I am just looking for suggestions on possible issues to investigate.

  • real time stock quotes, StreamReader performance optimization

    - by sean717
    I am working on a program that extracts real-time quotes for 900+ stocks from a website. I use HttpWebRequest to send an HTTP request to the site, store the response in a stream, and open a StreamReader using the following code:

        HttpWebResponse response = (HttpWebResponse)request.GetResponse();
        Stream stream = response.GetResponseStream();
        StreamReader reader = new StreamReader(stream);

    The size of the received HTML is large (5000+ lines), so it takes a long time to parse it and extract the price. For 900 files, it takes about 6 minutes for parsing and extracting. My boss isn't happy with that; he told me he'd want the whole process done in TWO minutes. I've identified that the part of the program that takes most of the time is parsing and extracting. I've tried to optimize the code to make it faster; the following is what I have now, after some optimization:

        // skip lines at the top
        for (int i = 0; i < 1500; ++i)
            reader.ReadLine();
        // read the line that contains the price
        string theLine = reader.ReadLine();
        // ... extract the price from the line now

    It now takes about 4 minutes to process all the files, but there is still a significant gap to what my boss is expecting. So I am wondering: is there any other way that I can further speed up the parsing and extracting and have everything done within 2 minutes?
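
    If most of the six minutes is 900 sequential HTTP round-trips rather than string handling, overlapping the downloads is the biggest single lever. A sketch under stated assumptions (hypothetical URL list, .NET 4's Parallel.ForEach available, and the same skip-1500-lines extraction as above):

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Net;
        using System.Threading.Tasks;

        class QuoteScraper
        {
            // Fetch one page and return the line containing the price,
            // using the same skip-ahead approach as in the question.
            static string PriceLineFor(string url)
            {
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
                using (StreamReader reader = new StreamReader(response.GetResponseStream()))
                {
                    for (int i = 0; i < 1500; ++i)   // skip boilerplate at the top
                        reader.ReadLine();
                    return reader.ReadLine();        // the line with the price
                }
            }

            static void Main()
            {
                // By default .NET allows only 2 concurrent connections per host.
                ServicePointManager.DefaultConnectionLimit = 20;

                List<string> urls = LoadUrls();      // hypothetical: 900+ quote pages
                var prices = new Dictionary<string, string>();
                object gate = new object();

                Parallel.ForEach(urls,
                    new ParallelOptions { MaxDegreeOfParallelism = 20 },
                    url =>
                    {
                        string line = PriceLineFor(url);
                        lock (gate) { prices[url] = line; }   // serialize writes
                    });

                Console.WriteLine("fetched {0} quotes", prices.Count);
            }

            static List<string> LoadUrls() { return new List<string>(); } // stub
        }

    The right degree of parallelism depends on the server and your bandwidth, so it is worth measuring a few values rather than trusting the 20 used here.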

  • iPhone OpenGL ES Texture2D Masking

    - by Robert Neagu
    What's the best choice when trying to mask a texture, like ColorSplash or other apps like iSteam, etc.? I started learning OpenGL ES like... 4 days ago (I'm a total rookie) and tried the following approach:

    1) I created a colored texture2D, a grayscale version of the first texture, and a third texture2D called mask.

    2) I also created a texture2D for the brush, which is grayscale and opaque (brush = black = 0,0,0,1 and surroundings = white = 1,1,1,1). My intention was to create an antialiased brush with smooth edges, but I'm fine with a normal one right now.

    3) I searched for masking techniques on the internet and found this tutorial (ZeusCMD - Design and Development Tutorials: OpenGL ES Programming Tutorials - Masking) about masking. The tutorial tells me to use blending to achieve masking: first draw the colored texture, then the mask with glBlendFunc(GL_DST_COLOR, GL_ZERO), and then the grayscale with glBlendFunc(GL_ONE, GL_ONE). This gives me something close to what I want, but not exactly: the result is masked, but it's somehow over-brightened.

    4) For drawing to the mask texture I used an extra framebuffer object (FBO).

    I'm not really happy with the resulting (over-brightened) image, nor with the speed achieved with this method. I think the natural way would be to draw directly to the grayscale (overlay) texture2D, affecting only its alpha channel in the places where the brush hits. Is there a fast way to achieve this? I have searched a lot and never got an answer that's clear and understandable. Then, in the main draw loop, I could just draw the colored texture and blend the grayscale on top with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I just want to learn to use OpenGL ES, and it's driving me nuts because I can't get it to work properly. Any advice or a link to a tutorial would be much appreciated.

  • Matlab matrix translation and rotation multiple times

    - by pinnacler
    I have a map of individual trees from a forest, stored as x,y points in a matrix. I call it fixedPositions. It's Cartesian, and (0,0) is the origin. I would like 0/360 degrees to be the top of the screen and 90 degrees to be to the right. Given a velocity and a heading, i.e. 0.5 m/s and 60 degrees (the 2 o'clock position on a watch), how do I transform the x,y points so that the new origin is centered at (.5cos(60), .5sin(60)) and 60 degrees is now at the top of the screen? Then, if I were to give you another heading and speed, i.e. 0 degrees and 2 m/s, it should calculate it from the last point, not from the original fixedPositions origin. I've wasted my day trying to figure this out. I wish I had taken matrix algebra, but I'm at a loss. I tried doing cos(30), and even that wouldn't compute correctly, which after an hour I realized was because it works in radians.
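
    For reference, the transformation written out as plain 2D geometry (a sketch under stated conventions: heading theta measured clockwise from the top of the screen, speed v, time step Delta t, current viewpoint p, and q any tree from fixedPositions):

        \[
        \mathbf{p}_{k+1} = \mathbf{p}_k + v\,\Delta t \begin{pmatrix} \sin\theta \\ \cos\theta \end{pmatrix},
        \qquad
        \mathbf{q}' = R(\theta)\,\bigl(\mathbf{q} - \mathbf{p}_{k+1}\bigr),
        \qquad
        R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.
        \]

    The rotation maps the heading direction (sin(theta), cos(theta)) to straight up, so the current heading appears at the top of the screen. The key point is that fixedPositions itself never changes: each update you recompute q' from the original coordinates using only the current p and theta, so successive moves chain from the last position and no rotations need to be accumulated. (MATLAB's cosd and sind take degrees, which sidesteps the radians surprise.)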

  • should i advocate migrating from access to (my)sql

    - by HotOil
    Hi: We have a Windows MFC app that is written against an Access database on a company server. The DB is not that big: 19 MB. There are at most 2-3 users accessing it at any one time. It is used in a factory environment, where access speed (or lack thereof) over the intranet becomes noticeable, as it is part of the manufacturing time for our widgets.

    The scenario is this: as each widget is completed, it gets a record in the DB. By the end of the year, the DB is larger and searching for a record takes longer and longer. The solution so far has been to manually move older records to an archival table about once a year.

    We are reworking other portions of this app right now, and it would be a good time to move to another DB if we are going to do it. It is my understanding that if we were using SQL, the search time would not go up as the table gets bigger, because the entire .mdb does not have to be sent over the network each time. Is this correct? Does anyone have any insight into whether it could be worth the trouble (time and money) of migrating to a new DB, or should I just add more functionality to the application we have now, maybe automatically purging the older records from time to time, and adding additional facilities to the app to get at the older records when needed? Thanks for any wisdom you can share.

  • wordpress generating slow mysql queries - is it index problem?

    - by tash
    Hello Stack Overflow. I've got very slow MySQL queries coming up from my WordPress site. It's making everything slow, and I think this is eating up CPU usage. I've pasted the EXPLAIN results for the two most frequently problematic queries below. This is a typical result, although very occasionally the queries do seem to be performed at a more normal speed. I have the usual WordPress indexes on the database tables. You will see that one of the queries is generated from WordPress core code, and not from anything specific to my site (like the theme). I have a vague feeling that the database is not always using the indexes, or is not using them properly... Is this right? Does anyone know how to fix it? Or is it a different problem entirely? Many thanks in advance for any help anyone can offer; it is hugely appreciated.

    Query: [wp-blog-header.php(14): wp()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type  table     type  possible_keys     key               key_len  ref    rows  Extra
        1   SIMPLE       wp_posts  ref   type_status_date  type_status_date  63       const  427   Using where; Using filesort

    Query time: 34.2829 (ms)

    Query: [wp-content/themes/LMHR/index.php(40): query_posts()]

        SELECT SQL_CALC_FOUND_ROWS wp_posts.*
        FROM wp_posts
        WHERE 1=1
          AND wp_posts.ID NOT IN (
              SELECT tr.object_id
              FROM wp_term_relationships AS tr
              INNER JOIN wp_term_taxonomy AS tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
              WHERE tt.taxonomy = 'category'
                AND tt.term_id IN ('217', '218', '223', '224')
          )
          AND wp_posts.post_type = 'post'
          AND (wp_posts.post_status = 'publish' OR wp_posts.post_status = 'private')
        ORDER BY wp_posts.post_date DESC
        LIMIT 0, 6

        id  select_type         table     type    possible_keys                      key               key_len  ref                                       rows  Extra
        1   PRIMARY             wp_posts  ref     type_status_date                   type_status_date  63       const                                     427   Using where; Using filesort
        2   DEPENDENT SUBQUERY  tr        ref     PRIMARY,term_taxonomy_id           PRIMARY           8        func                                      1     Using index
        2   DEPENDENT SUBQUERY  tt        eq_ref  PRIMARY,term_id_taxonomy,taxonomy  PRIMARY           8        antin1_lovemusic2010.tr.term_taxonomy_id  1     Using where

    Query time: 70.3900 (ms)

  • How should I pass the translated text to my object in my multilingual application?

    - by boatingcow
    Up until now, I have maintained a 'dictionary' table in my database, for example:

        +-----------+---------------------------------------+-----------------------------------------------+--------+
        | phrase    | en                                    | fr                                            | etc... |
        +-----------+---------------------------------------+-----------------------------------------------+--------+
        | generated | Generated in %1$01.2f seconds at %2$s | Créée en %1$01.2f secondes à %2$s aujourd'hui | ...    |
        | submit    | Submit...                             | Envoyer...                                    | ...    |
        +-----------+---------------------------------------+-----------------------------------------------+--------+

    I'll then select all rows from the database for the column that matches the locale we're interested in (or read the cache from a file to speed up the DB lookup) and dump the dictionary into an array called $lng. Then I'll have HTML helper objects like this in my view:

        $html->input(array('type' => 'submit', 'value' => $lng['submit'], etc...));
        ...
        $html->div(array('value' => sprintf($lng['generated'], $generated, date('H:i')), etc...));

    The translations can appear in PDF, XLS and AJAX responses too. The problem with my approach so far is that I now have loads of global $lng; statements in every class where there is a function that spits out UI code. How do other people get the translation into the object? Is this one scenario where globals aren't actually that bad? Would it be madness to create a class with accessors when the dictionary terms are all static?

  • What is an efficient strategy for multiple threads posting jobs and waiting for response from a single thread?

    - by jakewins
    In Java, what is an efficient solution to the following problem? I have multiple threads (10-20 or so) generating jobs ("job creators"), and a single thread capable of performing them ("the worker"). Once a job creator has posted a job, it should wait for the job to finish, yielding no result other than "it's done", before it keeps going.

    For sending the jobs to the worker thread, I think a ring buffer or similar standard fan-in setup would perhaps be a good approach. But for a job creator to find out that her job has been done, I'm not so sure. The job creators could sleep, and the worker could interrupt them when done... or each job creator could have an atomic boolean that it checks and that the worker sets. I dunno, neither of those feels very nice. I'd like to do it with as few locks as absolutely possible (none, if possible). So, to be clear: what I'm looking for is speed, not necessarily simplicity. Does anyone have any suggestions? Links to reading about concurrency strategies would also be very welcome!
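
    The usual shape of the answer, sketched here in C# purely because the pattern is language-neutral (in Java the analogous pieces would be an ArrayBlockingQueue plus a per-job CountDownLatch, or a FutureTask the creator get()s): a blocking queue fans the jobs in to the single worker, and each job carries its own completion latch that the posting thread waits on.

        using System;
        using System.Collections.Concurrent;
        using System.Threading;

        // Sketch of the fan-in pattern: many producers, one worker, and a
        // per-job completion latch so each producer blocks until "it's done".
        class Job
        {
            public Action Work;
            public ManualResetEventSlim Done = new ManualResetEventSlim(false);
        }

        class SingleWorker
        {
            private readonly BlockingCollection<Job> queue = new BlockingCollection<Job>();

            public SingleWorker()
            {
                var worker = new Thread(() =>
                {
                    foreach (Job job in queue.GetConsumingEnumerable())
                    {
                        job.Work();        // perform the job on the single worker thread
                        job.Done.Set();    // wake exactly the producer that posted it
                    }
                });
                worker.IsBackground = true;
                worker.Start();
            }

            // Called by the 10-20 job creators; blocks until the worker has finished.
            public void PostAndWait(Action work)
            {
                var job = new Job { Work = work };
                queue.Add(job);
                job.Done.Wait();
            }
        }

    The latch (countDown()/await() in Java, Set()/Wait() here) replaces both the sleep-and-interrupt and the spin-on-a-boolean ideas: the creator blocks cheaply, and the worker wakes only the right creator.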

  • How can I improve the performance of LinqToSql queries that use EntitySet properties?

    - by DanM
    I'm using LinqToSql to query a small, simple SQL Server CE database. I've noticed that any operations involving sub-properties are disappointingly slow. For example, if I have a Customer table that is referenced by an Order table, LinqToSql will automatically create an EntitySet<Order> property. This is a nice convenience, allowing me to do things like Customer.Orders.Where(o => o.ProductName == "Stopwatch"), but for some reason, SQL Server CE hangs up pretty badly when I try to do stuff like this. One of my queries, which isn't really that complicated, takes 3-4 seconds to complete.

    I can get the speed up to acceptable, even fast, if I just grab the two tables individually and convert them to List<Customer> and List<Order>, then join them manually with my own query, but this throws out a lot of what makes LinqToSql so appealing. So, I'm wondering if I can somehow get the whole database into RAM and just query it that way, then occasionally save it. Is this possible? How? If not, is there anything else I can do to boost the performance besides resorting to doing all the joins manually?

    Note: My database in its initial state is about 250K and I don't expect it to grow to more than 1-2 MB, so loading the data into RAM certainly wouldn't be a problem from a memory point of view.

    Update: Here are the table definitions for the example I used in my question:

        create table Order
        (
            Id int identity(1, 1) primary key,
            ProductName ntext null
        )

        create table Customer
        (
            Id int identity(1, 1) primary key,
            OrderId int null references Order (Id)
        )
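
    A sketch of the load-once-then-query-in-RAM idea (the POCO shapes and stub loaders below are hypothetical stand-ins for the generated LinqToSql classes; with a real DataContext the loads would be db.Customers.ToList() and db.Orders.ToList()). Once the lists are materialized, everything below runs as LINQ to Objects, so SQL Server CE is out of the per-query path:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Hypothetical POCO shapes standing in for the generated LinqToSql classes.
        class Customer { public int Id; public int? OrderId; }
        class Order    { public int Id; public string ProductName; }

        class InMemoryQueries
        {
            static void Main()
            {
                // One-time hit: pull both tables across; afterwards no CE involved.
                List<Customer> customers = LoadCustomers();
                List<Order> orders = LoadOrders();

                // Pre-index one side so lookups are O(1) instead of nested scans.
                // Assumes every non-null OrderId refers to an existing Order row.
                var ordersById = orders.ToDictionary(o => o.Id);

                var stopwatchCustomers =
                    from c in customers
                    where c.OrderId.HasValue
                       && ordersById[c.OrderId.Value].ProductName == "Stopwatch"
                    select c;

                Console.WriteLine(stopwatchCustomers.Count());
            }

            static List<Customer> LoadCustomers() { return new List<Customer>(); } // stub
            static List<Order> LoadOrders() { return new List<Order>(); }          // stub
        }

    Writes would still need to go through the DataContext (or raw ADO.NET); this only takes the read side out of the database.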

  • Proper way to scan a range of IP addresses

    - by Josh G
    Given a range of IP addresses entered by a user (through various means), I want to identify which of these machines have software running that I can talk to. Here's the basic process:

    1. Ping these addresses to find available machines
    2. Connect to a known socket on the available machines
    3. Send a message to the successfully established sockets
    4. Compare the response to the expected response

    Steps 2-4 are straightforward for me. What is the best way to implement the first step in .NET? I'm looking at the System.Net.NetworkInformation.Ping class. Should I ping multiple addresses simultaneously to speed up the process? If I ping one address at a time with a long timeout, it could take forever, but with a small timeout I may miss some machines that are available. Sometimes pings appear to fail even when I know that the address points to an active machine. Do I need to ping twice in case the first request gets discarded? To top it all off, when I scan large collections of addresses with the network cable unplugged, Ping throws a NullReferenceException in FreeUnmanagedResources(). !? Any pointers on the best approach to scanning a range of IPs like this?
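
    A sketch of the concurrent sweep (assuming .NET 4's Parallel.ForEach is available; the address range is hypothetical). A short timeout plus one retry addresses both worries at once: the sweep stays fast, and a single dropped echo doesn't mark a live machine as down. Each worker also gets its own Ping instance, since sharing one across threads is a common source of odd failures:

        using System;
        using System.Collections.Generic;
        using System.Net;
        using System.Net.NetworkInformation;
        using System.Threading.Tasks;

        class PingSweep
        {
            // Ping one address with a short timeout and a limited retry count.
            static bool IsAlive(IPAddress address, int timeoutMs, int attempts)
            {
                using (var ping = new Ping())
                {
                    for (int i = 0; i < attempts; ++i)
                    {
                        try
                        {
                            if (ping.Send(address, timeoutMs).Status == IPStatus.Success)
                                return true;    // reply came back within the timeout
                        }
                        catch (PingException)
                        {
                            // Unreachable network, unplugged cable, etc.: treat as down.
                        }
                    }
                    return false;               // retried and still silent
                }
            }

            static void Main()
            {
                var addresses = new List<IPAddress>();
                for (int host = 1; host <= 254; ++host)       // hypothetical /24 range
                    addresses.Add(IPAddress.Parse("192.168.1." + host));

                var alive = new List<IPAddress>();
                object gate = new object();

                Parallel.ForEach(addresses,
                    new ParallelOptions { MaxDegreeOfParallelism = 32 },
                    addr =>
                    {
                        if (IsAlive(addr, timeoutMs: 300, attempts: 2))
                            lock (gate) alive.Add(addr);
                    });

                Console.WriteLine("{0} machines answered", alive.Count);
            }
        }

    The try/catch contains the ordinary failure modes; the unplugged-cable NullReferenceException looks like a bug inside that era's Ping class itself, so guarding the Send call defensively may be the best a caller can do.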

  • Programming exercises in Java inheritance for intern

    - by Tenner
    I work for a small software development team, working primarily in Java, for a very large company. Our new intern showed up sight-unseen (not uncommon in my company). He has some C++ experience but no Java. Worse, he's never worked with inheritance in C++. Our code has a great deal of abstraction and a heavy reliance on inheritance. We need to get him up to speed as quickly as possible. Of course the rest of the team is busy, and so we can't take the time out of our day to teach a one-student 200-level CS course.

    Instead, I'd like to give him an actual programming project to work on which highlights how classes, interfaces, method overrides, etc. work. I've had him look at Project Euler, but most of the solutions end up being procedural, not object-oriented, programs.

    Do any of you have any somewhat-straightforward (and relatively quick) projects which you would give to an intern in this situation? Or, any recent (or current) students have a school project they'd be willing to share? Anyone else had this experience?

  • HTTP Compression problems on IIS7

    - by Jonathan Wood
    I've spent quite a bit of time on this but seem to be going nowhere. I have a large page that I really want to speed up. The obvious place to start seems to be HTTP compression, but I just can't seem to get it to work for me. After considerable searching, I've tried several variations of the code below. It kind of works, but after refreshing the browser, the results seem to fall apart: the output was turning to garbage when the page used caching, and if I turn off caching, the page itself seems right but I lose my CSS formatting (stored in a separate file) and get an error that an included JS file contains invalid characters.

    Most of the resources I've found on the Web were either very old or focused on accessing IIS directly. My page is running on a shared hosting account, and I do not have direct access to IIS7, which it's running on.

        protected void Application_BeginRequest(object sender, EventArgs e)
        {
            // Implement HTTP compression
            if (Request["HTTP_X_MICROSOFTAJAX"] == null) // Avoid compressing AJAX calls
            {
                // Retrieve accepted encodings
                string encodings = Request.Headers.Get("Accept-Encoding");
                if (encodings != null)
                {
                    // Verify support for gzip or deflate (gzip takes preference)
                    encodings = encodings.ToLower();
                    if (encodings.Contains("gzip") || encodings == "*")
                    {
                        Response.Filter = new GZipStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "gzip");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                    else if (encodings.Contains("deflate"))
                    {
                        Response.Filter = new DeflateStream(Response.Filter, CompressionMode.Compress);
                        Response.AppendHeader("Content-Encoding", "deflate");
                        Response.Cache.VaryByHeaders["Accept-encoding"] = true;
                    }
                }
            }
        }

    Is anyone having better success with this?
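
    For comparison, a hedged sketch of the same idea packaged as an IHttpModule (the class name is hypothetical; this is a sketch, not a known-good drop-in) with two guards that plausibly match the symptoms: it compresses only the page handler's own output, leaving .css/.js to their own handlers (the lost-stylesheet and invalid-character errors), and it subscribes at PreRequestHandlerExecute, a stage that is skipped when a response is replayed from the output cache, so a cached response is never wrapped in a second compression filter (the garbage-after-refresh symptom):

        using System;
        using System.IO.Compression;
        using System.Web;

        // Sketch: compression as a module, applied late and only to page output.
        public class CompressionModule : IHttpModule
        {
            public void Init(HttpApplication app)
            {
                // Runs after the output-cache lookup, unlike BeginRequest.
                app.PreRequestHandlerExecute += OnPreRequestHandlerExecute;
            }

            private void OnPreRequestHandlerExecute(object sender, EventArgs e)
            {
                HttpContext context = ((HttpApplication)sender).Context;
                HttpRequest request = context.Request;
                HttpResponse response = context.Response;

                if (!request.Path.EndsWith(".aspx", StringComparison.OrdinalIgnoreCase))
                    return;                              // leave static files alone

                string encodings = request.Headers["Accept-Encoding"];
                if (string.IsNullOrEmpty(encodings))
                    return;

                encodings = encodings.ToLowerInvariant();
                if (encodings.Contains("gzip"))
                {
                    response.Filter = new GZipStream(response.Filter, CompressionMode.Compress);
                    response.AppendHeader("Content-Encoding", "gzip");
                }
                else if (encodings.Contains("deflate"))
                {
                    response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
                    response.AppendHeader("Content-Encoding", "deflate");
                }

                // Make any output caching store separate variants per encoding.
                response.Cache.VaryByHeaders["Accept-Encoding"] = true;
            }

            public void Dispose() { }
        }

    The module would need registering in web.config, and on IIS7 shared hosting it is worth first asking whether the host already enables dynamic compression, in which case a hand-rolled filter is redundant and can even double-compress.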

  • Program structure in long running data processing python script

    - by fmark
    For my current job I am writing some long-running (think hours to days) scripts that do CPU-intensive data processing. The program flow is very simple: it proceeds into the main loop, completes the main loop, saves output, and terminates. The basic structure of my programs tends to be like so:

        <import statements>
        <constant declarations>
        <misc function declarations>

        def main():
            for blah in blahs():
                <lots of local variables>
                <lots of tightly coupled computation>
                for something in somethings():
                    <lots more local variables>
                    <lots more computation>
            <etc., etc.>
            <save results>

        if __name__ == "__main__":
            main()

    This gets unmanageable quickly, so I want to refactor it into something more maintainable, without sacrificing execution speed. Each chunk of code relies on a large number of variables, however, so refactoring parts of the computation out to functions would make the parameter lists grow out of hand very quickly. Should I put this sort of code into a Python class and change the local variables into class variables? It doesn't make a great deal of sense to me conceptually to turn the program into a class, as the class would never be reused and only one instance would ever be created per run. What is the best-practice structure for this kind of program? I am using Python, but the question is relatively language-agnostic, assuming a modern object-oriented language.
