Search Results

Search found 6355 results on 255 pages for 'slow downs'.


  • Can GPU capabilities impact virtual machine performance?

    - by Dave White
    While this may not seem like a programming question directly, it impacts my development activities and so it seems like it belongs here. It seems that more and more developers are turning to virtual environments for development activities on their computers, SharePoint development being a prime example. Also, as a trainer, I have virtual training environments for all of the classes that I teach. I recently purchased a new Dell E6510 to travel around with. It has the i7 620M (dual-core, HyperThreaded CPU running at 2.66GHz) and 8 GB of memory. Reading the spec sheet, it sounded like it would be a great laptop to carry around and run virtual machines on. After getting the laptop, though, I've been pretty disappointed with the user experience of developing in a virtual machine. Even after giving the virtual machine 4 GB of memory, it was slow: I could type complete sentences and watch the VM "catch up". My company has training laptops that we provide for our classes. They are Dell Precision M6400s with an Intel Core 2 Duo P8700 running at 2.54GHz and 8 GB of memory, and the experience on those laptops is night and day compared to the E6510. They are crisp and you are barely aware that you are running in a virtual environment. Since the E6510 should be faster in all categories than the M6400, I couldn't understand why the new laptop was slower, so I did a component-by-component comparison, and the only place where the E6510 is less performant than the M6400 is the graphics department. The M6400 is running an nVidia FX 2700M GPU and the E6510 is running an nVidia 3100M GPU. Looking at benchmarks of the two GPUs suggests that the FX 2700M is twice as fast as the 3100M. http://www.notebookcheck.net/Mobile-Graphics-Cards-Benchmark-List.844.0.html 3100M = 111th (E6510) FX 2700M = 47th (Precision M6400) Radeon HD 5870 = 8th (Alienware) The host OS is Windows 7 64-bit, as is the guest OS, running in VirtualBox 3.1.8 with Guest Additions installed on the guest. The IDE being used in the virtual environment is VS 2010 Premium. So after that long setup, my question is: is the GPU significantly impacting the virtual machine's performance, or are there other factors that I'm not looking at that I can use to boost the VM's performance? Do we now have to consider GPU performance when purchasing laptops where we expect to use virtualized development environments? Thanks in advance. Cheers, Dave

    Read the article

  • The Math of a Jump in a 2D game.

    - by ONi
    I have looked all around for this and I can't find the correct answer! So, behold, the description of my question: I'm working in J2ME, and I have my game loop that does the following: public void run() { Graphics g = this.getGraphics(); while (running) { long diff = System.currentTimeMillis() - lastLoop; lastLoop = System.currentTimeMillis(); input(); this.level.doLogic(); render(g, diff); try { Thread.sleep(10); } catch (InterruptedException e) { stop(e); } } } so it's just a basic game loop; the doLogic() function calls all the logic functions of the characters in the scene, and render(g, diff) calls the animateChar function of every character on the scene. Following this, the animChar function in the Character class sets up everything on the screen like this: protected void animChar(long diff) { this.checkGravity(); this.move((int) ((diff * this.dx) / 1000), (int) ((diff * this.dy) / 1000)); if (this.acumFrame > this.framerate) { this.nextFrame(); this.acumFrame = 0; } else { this.acumFrame += diff; } } This ensures that everything moves according to the time the machine takes to go from cycle to cycle (remember it's a phone, not a gaming rig). I'm sure it's not the most efficient way to achieve this behavior, so I'm totally open to criticism of my programming skills in the comments, but here is my problem: when I make a character jump, what I do is set its dy to a negative value, say -200, and set the boolean jumping to true. That makes the character go up, and then I have this function called checkGravity() that ensures that everything that goes up has to come down. checkGravity also checks for the character being over platforms, so I will strip it down a little for the sake of your time: public void checkGravity() { if (this.jumping) { this.jumpSpeed += 10; if (this.jumpSpeed > 0) { this.jumping = false; this.falling = true; } this.dy = this.jumpSpeed; } if (this.falling) { this.jumpSpeed += 10; if (this.jumpSpeed > 200) this.jumpSpeed = 200; this.dy = this.jumpSpeed; if (this.collidesWithPlatform()) { this.falling = false; this.standing = true; this.jumping = false; this.jumpSpeed = 0; this.dy = this.jumpSpeed; } } } So the problem is that this function updates dy regardless of diff, making the characters fly like Superman on slow machines, and I have no idea how to implement the diff factor so that when a character is jumping, its speed changes in proportion to the game speed. Can anyone help me fix this issue, or give me pointers on how to do a 2D jump in J2ME the right way? Thank you very much for your time.
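
    A minimal sketch of the usual fix: treat gravity as an acceleration in pixels per second per second and scale it by the elapsed milliseconds, exactly the way move() already scales dx and dy. The snippet below uses C# syntax for illustration only (the arithmetic ports line for line to the J2ME code above), and the Gravity value is a made-up tuning constant, not something taken from the question.

        class JumpPhysics
        {
            // Hypothetical tuning values: speeds in px/s, gravity in px/s^2.
            const int Gravity = 600;
            const int MaxFallSpeed = 200;   // matches the 200 cap in the question

            public bool Jumping, Falling;
            public int JumpSpeed;           // current vertical speed in px/s (negative = up)
            public int Dy;

            public void CheckGravity(long diffMs)
            {
                if (Jumping || Falling)
                {
                    // Accelerate by Gravity * elapsed seconds instead of a fixed +10 per frame.
                    JumpSpeed += (int)(Gravity * diffMs / 1000);
                    if (Jumping && JumpSpeed > 0) { Jumping = false; Falling = true; }
                    if (JumpSpeed > MaxFallSpeed) JumpSpeed = MaxFallSpeed;
                    Dy = JumpSpeed;
                }
                // Platform-collision handling stays the same as in the original checkGravity().
            }
        }

    With dy still expressed in pixels per second, the existing move((diff * dx) / 1000, (diff * dy) / 1000) call then gives the same jump arc regardless of how long each frame takes.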

    Read the article

  • 8bpp Bitmap format on the Compact Framework

    - by Kieran
    Hi friends. I am messing around with Conway's Game of Life - http://en.wikipedia.org/wiki/Conway's_Game_of_Life I started out coding algorithms for WinForms and now want to port my work onto Windows Mobile 6.1 (Compact Framework). I came across an article by Jon Skeet where he compared several different algorithms for calculating next generations in the game. He used an array of bytes to store a cell's state (alive or dead) and then he would copy this array to an 8bpp bitmap. For each new generation, he works out the state of each byte, then copies the array to a bitmap, then draws that bitmap to a PictureBox. void CreateInitialImage() { bitmap = new Bitmap(Width, Height, PixelFormat.Format8bppIndexed); ColorPalette palette = bitmap.Palette; palette.Entries[0] = Color.Black; palette.Entries[1] = Color.White; bitmap.Palette = palette; } public Image Render() { Rectangle rect = new Rectangle(0, 0, Width, Height); BitmapData bmpData = bitmap.LockBits(rect, ImageLockMode.ReadWrite, bitmap.PixelFormat); Marshal.Copy(Data, 0, bmpData.Scan0, Data.Length); bitmap.UnlockBits(bmpData); return bitmap; } His code above is beautifully simple and very fast to render. Jon is using Windows Forms, but now I want to port my own version of this onto Windows Mobile 6.1 (Compact Framework), and there is no way to format a bitmap as 8bpp in the CF. Can anyone suggest a way of rendering an array of bytes to a drawable image in the CF? This array is created in code on the fly (it is NOT loaded from an image file on disk). I basically need to store an array of cells represented by bytes, which are either alive or dead, and I then need to draw that array as an image. The game is particularly slow on the CF, so I need to implement clever optimised algorithms but also need to render as fast as possible, and the above solution would be pretty damn perfect if only it were available on the Compact Framework. Many thanks for any help. Any suggestions?
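
    One workaround to consider, since the Compact Framework doesn't support PixelFormat.Format8bppIndexed and, as far as I know, doesn't expose Bitmap.LockBits either: keep the byte-per-cell array exactly as Jon does, but wrap it in an 8bpp BMP written to a MemoryStream and let new Bitmap(stream) decode it. The sketch below is an untested outline of that idea (header fields written by hand, palette entry 0 = black/dead, 1 = white/alive); cells is assumed to be row-major and top-down, and names like LifeRenderer are illustrative.

        using System;
        using System.Drawing;
        using System.IO;

        static class LifeRenderer
        {
            // Builds an 8bpp BMP in memory from one-byte-per-cell data and decodes it.
            public static Bitmap Render(byte[] cells, int width, int height)
            {
                int stride = (width + 3) & ~3;          // BMP rows are padded to 4 bytes
                int pixelBytes = stride * height;
                int dataOffset = 14 + 40 + 256 * 4;     // file header + info header + palette

                MemoryStream ms = new MemoryStream(dataOffset + pixelBytes);
                BinaryWriter w = new BinaryWriter(ms);

                // BITMAPFILEHEADER
                w.Write((byte)'B'); w.Write((byte)'M');
                w.Write(dataOffset + pixelBytes);       // total file size
                w.Write(0);                             // reserved
                w.Write(dataOffset);                    // offset to pixel data

                // BITMAPINFOHEADER
                w.Write(40); w.Write(width); w.Write(height);
                w.Write((short)1); w.Write((short)8);   // 1 plane, 8 bits per pixel
                w.Write(0); w.Write(pixelBytes);        // BI_RGB, image size
                w.Write(0); w.Write(0);                 // resolution (unused)
                w.Write(256); w.Write(0);               // colours used / important

                // Palette: entry 0 = black (dead), entry 1 = white (alive), rest black
                w.Write(0u);
                w.Write(0x00FFFFFFu);
                for (int i = 2; i < 256; i++) w.Write(0u);

                // Pixel rows are stored bottom-up
                byte[] row = new byte[stride];
                for (int y = height - 1; y >= 0; y--)
                {
                    Array.Copy(cells, y * width, row, 0, width);
                    w.Write(row);
                }

                ms.Position = 0;
                return new Bitmap(ms);
            }
        }

    Whether this is fast enough depends on how cheaply the CF decodes BMP streams on the device; if it isn't, the usual fallback is a P/Invoked DIB section, which is considerably more code.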

    Read the article

  • Callback function in jquery doesn't seem to work......

    - by Pandiya Chendur
    I use the following jquery pagination plugin and i got the error a.parentNode is undefined when i executed it... <script type="text/javascript"> $(document).ready(function() { getRecordspage(1, 5); $(".pager").pagination(17, { callback: pagechange, current_page: '0', items_per_page: '5', num_display_entries : '5', next_text: 'Next', prev_text: 'Prev', num_edge_entries: '1' }); }); function pagechange() { $("#ResultsDiv").empty(); $("#ResultsDiv").css('display', 'none'); getRecordspage($(this).text(), 5); } function getRecordspage(curPage, pagSize) { $.ajax({ type: "POST", url: "Default.aspx/GetRecords", data: "{'currentPage':" + curPage + ",'pagesize':" + pagSize + "}", contentType: "application/json; charset=utf-8", dataType: "json", success: function(jsonObj) { var strarr = jsonObj.d.split('##'); var jsob = jQuery.parseJSON(strarr[0]); var divs = ''; $.each(jsob.Table, function(i, employee) { divs += '<div class="resultsdiv"><br /><span class="resultName">' + employee.Emp_Name + '</span><span class="resultfields" style="padding-left:100px;">Category&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Desig_Name + '</span><br /><br /><span id="SalaryBasis" class="resultfields">Salary Basis&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.SalaryBasis + '</span><span class="resultfields" style="padding-left:25px;">Salary&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.FixedSalary + '</span><span style="font-size:110%;font-weight:bolder;padding-left:25px;">Address&nbsp;:</span>&nbsp;<span class="resultfieldvalues">' + employee.Address + '</span></div>'; }); $("#ResultsDiv").append(divs).show('slow'); $(".resultsdiv:even").addClass("resultseven"); $(".resultsdiv").hover(function() { $(this).addClass("resultshover"); }, function() { $(this).removeClass("resultshover"); }); } }); } </script> and in my page, <div id="ResultsDiv" style="display:none;"> </div> <div id="pager" class="pager"> </div> Any suggestion....

    Read the article

  • How to do a search from a list with non-prefix keywords

    - by aNui
    First of all, sorry for any mistakes in my English or my post. I am writing a program to search for names in a list, and I need to find them even when the keyword is not at the front of the names (that's what I mean by non-prefix), e.g. if my list is musical instruments and I type "guit" into the search textbox, it should find the names "Guitar, Guitarrón, Acoustic Guitar, Bass Guitar, ..." or something like Longdo Dictionary's search suggestions. Here is my simple and stupid algorithm (that's all I can do): const int SEARCHROWLIMIT = 30; private string[] DoSearch(string Input, string[] ListToSearch) { List<string> FoundNames = new List<string>(); int max = 0; bool over = false; for (int k = 0; !over; k++) { foreach (string item in ListToSearch) { max = (max > item.Length) ? max : item.Length; if (k > item.Length) continue; if (k >= max) { over = true; break; } if (!Input.Equals("Search") && item.Substring(k, item.Length - k).StartsWith(Input, StringComparison.OrdinalIgnoreCase)) { bool exist = false; int i = 0; while (!exist && i < FoundNames.Count) { if (item.Equals(FoundNames[i])) { exist = true; break; } i++; } if (!exist && FoundNames.Count < SEARCHROWLIMIT) FoundNames.Add(item); else if (FoundNames.Count >= SEARCHROWLIMIT) over = true; } } } return FoundNames.ToArray(); } I think this algorithm is too slow for a large number of names, and after some trial and error I decided to add SEARCHROWLIMIT to break out of the operation. I also think there are probably some ready-made methods that can do this. Another problem is that I need to search musical instruments by category, like strings, percussion, ..., and by country of origin. So I need to search them with filters by type and country. Please help me. P.S. My friends and I are just students from Thailand developing this project to compete in Microsoft Imagine Cup 2010, so please become a fan of our Facebook page KRATIB. And we're so sorry we don't have much information in English, but you can talk to us in English.
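
    For a plain "contains" match there is no need for the manual Substring loop at all: String.IndexOf with StringComparison.OrdinalIgnoreCase already reports whether the keyword occurs anywhere in a name. A minimal sketch, keeping the SEARCHROWLIMIT idea from the question:

        using System;
        using System.Collections.Generic;

        static class NameSearch
        {
            const int SEARCHROWLIMIT = 30;

            public static string[] DoSearch(string input, string[] listToSearch)
            {
                List<string> found = new List<string>();
                if (string.IsNullOrEmpty(input) || input.Equals("Search"))
                    return found.ToArray();

                foreach (string item in listToSearch)
                {
                    // "guit" matches "Guitar", "Acoustic Guitar", "Bass Guitar", ...
                    if (item.IndexOf(input, StringComparison.OrdinalIgnoreCase) >= 0)
                    {
                        found.Add(item);
                        if (found.Count >= SEARCHROWLIMIT) break;
                    }
                }
                return found.ToArray();
            }
        }

    The category and country filters can then be ordinary extra conditions inside the same loop (for example, checks against a Type and Country property on an instrument object), so they don't need a separate search pass.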

    Read the article

  • C++0x Overload on reference, versus sole pass-by-value + std::move?

    - by dean
    It seems the main advice concerning C++0x's rvalues is to add move constructors and move operators to your classes, until compilers default-implement them. But waiting is a losing strategy if you use VC10, because automatic generation probably won't be here until VC10 SP1, or in worst case, VC11. Likely, the wait for this will be measured in years. Here lies my problem. Writing all this duplicate code is not fun. And it's unpleasant to look at. But this is a burden well received, for those classes deemed slow. Not so for the hundreds, if not thousands, of smaller classes. ::sighs:: C++0x was supposed to let me write less code, not more! And then I had a thought. Shared by many, I would guess. Why not just pass everything by value? Won't std::move + copy elision make this nearly optimal? Example 1 - Typical Pre-0x constructor OurClass::OurClass(const SomeClass& obj) : obj(obj) {} SomeClass o; OurClass(o); // single copy OurClass(std::move(o)); // single copy OurClass(SomeClass()); // single copy Cons: A wasted copy for rvalues. Example 2 - Recommended C++0x? OurClass::OurClass(const SomeClass& obj) : obj(obj) {} OurClass::OurClass(SomeClass&& obj) : obj(std::move(obj)) {} SomeClass o; OurClass(o); // single copy OurClass(std::move(o)); // zero copies, one move OurClass(SomeClass()); // zero copies, one move Pros: Presumably the fastest. Cons: Lots of code! Example 3 - Pass-by-value + std::move OurClass::OurClass(SomeClass obj) : obj(std::move(obj)) {} SomeClass o; OurClass(o); // single copy, one move OurClass(std::move(o)); // zero copies, two moves OurClass(SomeClass()); // zero copies, one move Pros: No additional code. Cons: A wasted move in cases 1 & 2. Performance will suffer greatly if SomeClass has no move constructor. What do you think? Is this correct? Is the incurred move a generally acceptable loss when compared to the benefit of code reduction?

    Read the article

  • Objective-c design advice for use of different data sources, swapping between test and live

    - by user200341
    I'm in the process of designing an application that is part of a larger piece of work, depending on other people to build an API that the app can make use of to retrieve data. While I was thinking about how to setup this project and design the architecture around it, something occurred to me, and I'm sure many people have been in similar situations. Since my work is depending on other people to complete their tasks, and a test server, this slows work down at my end. So the question is: What's the best practice for creating test repositories and classes, implementing them, and not having to depend on altering several places in the code to swap between the test classes and the actual repositories / proper api calls. Contemplate the following scenario: GetDataFromApiCommand *getDataCommand = [[GetDataFromApiCommand alloc]init]; getDataCommand.delegate = self; [getDataCommand getData]; Once the data is available via the API, "GetDataFromApiCommand" could use the actual API, but until then a set of mock data could be returned upon the call of [getDataCommand getData] There might be multiple instances of this, in various places in the code, so replacing all of them wherever they are, is a slow and painful process which inevitably leads to one or two being overlooked. In strongly typed languages we could use dependency injection and just alter one place. In objective-c a factory pattern could be implemented, but is that the best route to go for this? GetDataFromApiCommand *getDataCommand = [GetDataFromApiCommandFactory buildGetDataFromApiCommand]; getDataCommand.delegate = self; [getDataCommand getData]; What is the best practices to achieve this result? Since this would be most useful, even if you have the actual API available, to run tests, or work off-line, the ApiCommands would not necessarily have to be replaced permanently, but the option to select "Do I want to use TestApiCommand or ApiCommand". It is more interesting to have the option to switch between: All commands are test and All command use the live API, rather than selecting them one by one, however that would also be useful to do for testing one or two actual API commands, mixing them with test data. EDIT The way I have chosen to go with this is to use the factory pattern. I set up the factory as follows: @implementation ApiCommandFactory + (ApiCommand *)newApiCommand { // return [[ApiCommand alloc]init]; return [[ApiCommandMock alloc]init]; } @end And anywhere I want to use the ApiCommand class: GetDataFromApiCommand *getDataCommand = [ApiCommandFactory newApiCommand]; When the actual API call is required, the comments can be removed and the mock can be commented out. Using new in the message name implies that who ever uses the factory to get an object, is responsible for releasing it (since we want to avoid autorelease on the iPhone). If additional parameters are required, the factory needs to take these into consideration i.e: [ApiCommandFactory newSecondApiCommand:@"param1"]; This will work quite well with repositories as well.

    Read the article

  • Setting up Netbeans/Eclipse for Linux Kernel Development

    - by red.october
    Hi: I'm doing some Linux kernel development, and I'm trying to use Netbeans. Despite declared support for Make-based C projects, I cannot create a fully functional Netbeans project. This is despite having Netbeans analyze a kernel binary that was compiled with full debugging information. Problems include: files are wrongly excluded: Some files are incorrectly greyed out in the project, which means Netbeans does not believe they should be included in the project, when in fact they are compiled into the kernel. The main problem is that Netbeans will miss any definitions that exist in these files, such as data structures and functions, but also miss macro definitions. cannot find definitions: Pretty self-explanatory - oftentimes, Netbeans cannot find the definition of something. This is partly a result of the above problem. can't find header files: self-explanatory. I'm wondering if anyone has had success with setting up Netbeans for Linux kernel development, and if so, what settings they used. Ultimately, I'm looking for Netbeans to be able to either parse the Makefile (preferred) or extract the debug information from the binary (less desirable, since this can significantly slow down compilation), and automatically determine which files are actually compiled and which macros are actually defined. Then, based on this, I would like to be able to find the definitions of any data structure, variable, function, etc. and have complete auto-completion. Let me preface this question with some points: I'm not interested in solutions involving Vim/Emacs. I know some people like them, but I'm not one of them. As the title suggests, I would also be happy to know how to set up Eclipse to do what I need. While I would prefer perfect coverage, something that only misses one in a million definitions is obviously fine. SO's useful "Related Questions" feature has informed me that the following question is related: http://stackoverflow.com/questions/149321/what-ide-would-be-good-for-linux-kernel-driver-development. Upon reading it, that question is more of a comparison between IDEs, whereas I'm looking for how to set up a particular IDE. Even so, the user Wade Mealing seems to have some expertise in working with Eclipse on this kind of development, so I would certainly appreciate his (and of course all of your) answers. Cheers

    Read the article

  • Improving performance on data pasting 2000 rows with validations

    - by Lohit
    I have N rows (which could be nothing less than 1000) on an excel spreadsheet. And in this sheet our project has 150 columns like this: Now, our application needs data to be copied (using normal Ctrl+C) and pasted (using Ctrl+V) from the excel file sheet on our GUI sheet. Copy pasting 1000 records takes around 5-6 seconds which is okay for our requirement, but the problem is when we need to make sure the data entered is valid. So we have to validate data in each row generate appropriate error messages and format the data as per requirement. So we need to at runtime parse and evaluate data in each row. Now all the formatting of data and validations come from the back-end database and we have it in a data-table (dtValidateAndFormatConditions). The conditions would be around 50. So you can see how slow this whole process becomes since N X 150 X 50 operations are required to complete this whole process. Initially it took approximately 2-3 minutes but now i have reduced it to 20 - 30 seconds. However i have increased the speed by making an expression parser of my own - and not by any algorithm, is there any other way i can improve performance, by using Divide and Conquer or some other mechanism. Currently i am not really sure how to go about this. Here is what part of my code looks like: public virtual void ValidateAndFormatOnCopyPaste(DataTable DtCopied, int CurRow) { foreach (DataRow dRow in dtValidateAndFormatConditions.Rows) { string Condition = dRow["Condition"]; string FormatValue = Value = dRow["Value"]; GetValidatedFormattedData(DtCopied,ref Condition, ref FormatValue ,iRowIndex); Condition = Parse(Condition); dRow["Condition"] = Condition; FormatValue = Parse(FormatValue ); dRow["Value"] = FormatValue; } } The above code gets called row-wise like this: public override void ValidateAndFormat(DataTable dtChangedRecords, CellRange cr) { int iRowStart = cr.Row, iRowEnd = cr.Row + cr.RowCount; for (int iRow = iRowStart; iRow < iRowEnd; iRow++) { ValidateAndFormatOnCopyPaste(dtChangedRecords,iRow); } } Please know my question needs a more algorithmic solution than code optimization, however any answers containing code related optimizations will be appreciated as well. (Tagged Linq because although not seen i have been using linq in some parts of my code).
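
    One structural change that usually helps far more than tuning the parser: parse each of the ~50 conditions once, up front, into a compiled check, and then apply the cached checks to every pasted row, so the expensive parsing happens 50 times instead of N x 150 x 50 times. A rough sketch of that shape is below; ParseToPredicate stands in for the existing expression parser and is not something from the question.

        using System;
        using System.Collections.Generic;
        using System.Data;

        class ValidationCache
        {
            // One compiled check per condition row, built a single time.
            private readonly List<Func<DataRow, bool>> compiledChecks = new List<Func<DataRow, bool>>();

            public ValidationCache(DataTable validationConditions,
                                   Func<string, Func<DataRow, bool>> parseToPredicate)
            {
                foreach (DataRow rule in validationConditions.Rows)
                    compiledChecks.Add(parseToPredicate((string)rule["Condition"]));   // expensive parse happens once
            }

            public void ValidateAll(DataTable pastedRows)
            {
                foreach (DataRow row in pastedRows.Rows)
                    foreach (Func<DataRow, bool> check in compiledChecks)
                        if (!check(row))
                        {
                            // Record the error / apply the formatting reset for this row,
                            // exactly as the existing per-row code does.
                        }
            }
        }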

    Read the article

  • LINQ Query Returning Multiple Copies Of First Result

    - by Mike G
    I'm trying to figure out why a simple query in LINQ is returning odd results. I have a view defined in the database. It basically brings together several other tables and does some data munging. It really isn't anything special except for the fact that it deals with a large data set and can be a bit slow. I want to query this view based on a long. Two sample queries below show different queries to this view. var la = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).ToList(); var deDa = Runtime.OmsEntityContext.Positions.Where(p => p.AccountNumber == 12345678).Select(p => new { p.AccountNumber, p.SecurityNumber, p.CUSIP }).ToList(); The first one should hand back a List. The second one will be a list of anonymous objects. When I do these queries in entities framework the first one will hand me back a list of results where they're all exactly the same. The second query will hand me back data where the account number is the one that I queried and the other values differ. This seems to do this on a per account number basis, ie if I were to query for one account number or another all the Position objects for one account would have the same value (the first one in the list of Positions for that account) and the second account would have a set of Position objects that all had the same value (again, the first one in it's list of Position objects). I can write SQL that is in effect the same as either of the two EF queries. They both come back with results (say four) that show the correct data, one account number with different securities numbers. Why does this happen??? Is there something that I could be doing wrong so that if I had four results for the first query above that the first record's data also appears in the 2-4th's objects??? I cannot fathom what would/could be causing this. I've searched Google for all kinds of keywords and haven't seen anyone with this issue. We partial class out the Positions class for added functionality (smart object) and some smart properties. There are even some constructors that provide some view model type support. None of this is invoked in the request (I'm 99% sure of this). However, we do this same pattern all over the app. The only thing I can think of is that the mapping in the EDMX is screwy. Is there a way that this would happen if the "primary keys" in the EDMX were not in fact unique given the way the view is constructed? I'm thinking that the dev who imported this model into the EDMX let the designer auto select what would be unique. Any help would give a haggered dev some hope!
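
    This symptom usually means the entity key declared for the view in the EDMX is not actually unique per row: Entity Framework materializes tracked entities through its identity map, so every row sharing the same key value comes back as the same cached object, while the anonymous-type projection bypasses identity resolution entirely, which is why it looks correct. The durable fix is to make the EDMX entity key cover a column combination that really is unique (for example AccountNumber plus SecurityNumber plus CUSIP, if that holds for the view). A quick way to confirm the diagnosis without touching the model is to turn off tracking for the query; the cast and MergeOption below assume the EF4 ObjectContext API that an EDMX normally generates, and the Position entity name is inferred from the Positions set.

        using System.Data.Objects;
        using System.Linq;

        // ...
        var query = (ObjectQuery<Position>)Runtime.OmsEntityContext.Positions
                        .Where(p => p.AccountNumber == 12345678);
        query.MergeOption = MergeOption.NoTracking;   // skip identity-map resolution
        var la = query.ToList();                      // rows should now differ per security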

    Read the article

  • How can this PHP/FQL code be modified to increase the performance and usability?

    - by Kaoukkos
    I try to get some insights from the pages I am administrator on Facebook. What my code does, it gets the IDs of the pages I want to work with through mySQL. I did not include that part though.After this, I get the page_id, name and fan_count of each of those facebook IDs and are saved in fancounts[]. Using the IDs ( pages[] ) I get two messages max from each page. There may be no messages, there may be 1 or 2 messages max. Possibly I will increase it later. messages[] holds the messages of each page. I have two problems with it. It has a very slow performance I can't find a way to echo the data like this: ID - Name of the page - Fan Count Here goes the first message Here goes the second one //here is a break ID - Name of the page 2 - Fan Count Here goes the first message of page 2 Here goes the second one of page 2 My questions are, how can the code be modified to increase performance and show the data as above? I read about fql.multiquery. Can it be used here? Please provide me with code examples. Thank you $pages = array(); // I get the IDs I want to work with $pagesIds = implode(',', $pages); // fancounts[] holds the page_id, name and fan_count of the Ids I work with $fancounts = array(); $pagesFanCounts = $facebook->api("/fql", array( "q" => "SELECT page_id, name, fan_count FROM page WHERE page_id IN ({$pagesIds})" )); foreach ($pagesFanCounts['data'] as $page){ $fancounts[] = $page['page_id']."-".$page['name']."-".$page['fan_count']; } //messages[] holds from 0 to 2 messages from each of the above pages $messages = array(); foreach( $pages as $id) { $getMessages = $facebook->api("/fql", array( "q" => "SELECT message FROM stream WHERE source_id = '$id' LIMIT 2" )); $messages[] = $getMessages['data']; } // this is how I print them now but it does not give me the best output. ( thanks goes to Mark for providing me this code ) $count = min(count($fancounts),count($messages)); for($i=0; $i<$count; ++$i) { echo $fancounts[$i],'<br>'; foreach($messages[$i] as $msg) { echo $msg['message'],'<br>'; } }

    Read the article

  • How do I do high quality scaling of a image?

    - by pbhogan
    I'm writing some code to scale a 32 bit RGBA image in C/C++. I have written a few attempts that have been somewhat successful, but they're slow and most importantly the quality of the sized image is not acceptable. I compared the same image scaled by OpenGL (i.e. my video card) and my routine and it's miles apart in quality. I've Google Code Searched, scoured source trees of anything I thought would shed some light (SDL, Allegro, wxWidgets, CxImage, GD, ImageMagick, etc.) but usually their code is either convoluted and scattered all over the place or riddled with assembler and little or no comments. I've also read multiple articles on Wikipedia and elsewhere, and I'm just not finding a clear explanation of what I need. I understand the basic concepts of interpolation and sampling, but I'm struggling to get the algorithm right. I do NOT want to rely on an external library for one routine and have to convert to their image format and back. Besides, I'd like to know how to do it myself anyway. :) I have seen a similar question asked on stack overflow before, but it wasn't really answered in this way, but I'm hoping there's someone out there who can help nudge me in the right direction. Maybe point me to some articles or pseudo code... anything to help me learn and do. Here's what I'm looking for: 1. No assembler (I'm writing very portable code for multiple processor types). 2. No dependencies on external libraries. 3. I am primarily concerned with scaling DOWN, but will also need to write a scale up routine later. 4. Quality of the result and clarity of the algorithm is most important (I can optimize it later). My routine essentially takes the following form: DrawScaled( uint32 *src, uint32 *dst, src_x, src_y, src_w, src_h, dst_x, dst_y, dst_w, dst_h ); Thanks! UPDATE: To clarify, I need something more advanced than a box resample for downscaling which blurs the image too much. I suspect what I want is some kind of bicubic (or other) filter that is somewhat the reverse to a bicubic upscaling algorithm (i.e. each destination pixel is computed from all contributing source pixels combined with a weighting algorithm that keeps things sharp. EXAMPLE: Here's an example of what I'm getting from the wxWidgets BoxResample algorithm vs. what I want on a 256x256 bitmap scaled to 55x55. And finally: the original 256x256 image
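
    A baseline worth having before reaching for fancier kernels is plain area averaging: each destination pixel averages every source pixel its footprint covers. The box-resample output looks soft because uniform weights average away edge contrast, so the usual next step is to keep this accumulation structure but weight the taps with a Mitchell or Lanczos kernel to buy back sharpness. The sketch below is C# for compactness; the two inner loops translate directly to the DrawScaled signature in the question, and it uses whole-pixel footprints with no fractional edge weighting.

        using System;

        static class Scaler
        {
            // src/dst are 32-bit ARGB pixels, row-major. Downscaling only (dstW <= srcW, dstH <= srcH).
            public static void DrawScaledDown(uint[] src, int srcW, int srcH,
                                              uint[] dst, int dstW, int dstH)
            {
                double xRatio = (double)srcW / dstW;
                double yRatio = (double)srcH / dstH;

                for (int dy = 0; dy < dstH; dy++)
                {
                    int y0 = (int)(dy * yRatio);
                    int y1 = Math.Min((int)((dy + 1) * yRatio), srcH - 1);

                    for (int dx = 0; dx < dstW; dx++)
                    {
                        int x0 = (int)(dx * xRatio);
                        int x1 = Math.Min((int)((dx + 1) * xRatio), srcW - 1);

                        long a = 0, r = 0, g = 0, b = 0, n = 0;
                        for (int sy = y0; sy <= y1; sy++)
                            for (int sx = x0; sx <= x1; sx++)
                            {
                                uint p = src[sy * srcW + sx];
                                a += (p >> 24) & 0xFF;
                                r += (p >> 16) & 0xFF;
                                g += (p >> 8) & 0xFF;
                                b += p & 0xFF;
                                n++;
                            }

                        // Average each channel over the n covered source pixels.
                        dst[dy * dstW + dx] = (uint)((a / n) << 24 | (r / n) << 16 |
                                                     (g / n) << 8  | (b / n));
                    }
                }
            }
        }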

    Read the article

  • Using Constants in Perl

    - by David W.
    I am trying to define constants in Perl using the use Constant pragma: use Constant { FOO => "bar", BAR => "foo" }; I'm running into a bit of trouble, and hoping there's a standard way of handling it. First of all... I am defining a hook script for Subversion. To make things simple, I want to have a single file where the class (package) I'm using is in the same file as my actual script. Most of this package will have constants involved in it: print "This is my program"; package "MyClass"; use constant { FOO => "bar" }; sub new { yaddah, yaddah, yaddah. I would like my constant FOO to be accessible to my main program. I would like to do this without having to refer to it as MyClass::FOO. Normally, when the package is a separate file, I could do this in my main program: use MyClass qw(FOO); but, since my class and program are a single file, I can't do that. What would be the best way for my main program to be able to access my constants defined in my class? The second issue... I would like to use the constant values as hash keys: $myHash{FOO} = "bar"; The problem is that %myHash has the literal string FOO as the key and not the value of the constant. This causes problems when I do things like this: if (defined($myHash{FOO})) { print "Key " . FOO . " does exist!\n"; } I could force the context: if (defined("" . FOO . "")) { I could add parentheses: if (defined(FOO())) { Or, I could use a temporary variable: my $foo = FOO; if (defined($foo)) { None of these are really nice ways of handling this issue. So, what is the best way? Is there one way I'm missing? By the way, I don't want to use Readonly::Scalar because it is 1). slow, and 2). not part of the standard Perl package. I want to define my hook not to require additional Perl packages and to be as simple as possible to work.

    Read the article

  • Alternative to Page_Load in ASP.NET (and a good WTF story)

    - by Jason
    Woo, I have a doozy of a problem (might even be one for the Daily WTF) and I'm hoping there's a solution. (My apologies for the long post...) I have been working on a website that I inherited about a month ago. One of the parts I have been working on is fixing one of the controls (essentially a dynamic header bar) so that it displays additional information as requested by my users. As part of doing this project, I created a Site.master file so that I wouldn't have to recode the header bar into every single page. When I first started doing this, it seemingly worked very well. All the pages I had developed looked great and the bar updated as it should displaying the information as it should. Well, when I dropped the Site.master (and this control) into older site pages (ones I did not specifically develop) I noticed that it looked bad on some of them, but not all of them. When I say it looked bad, basically, the control would left-align itself to the page rather than center as it should. I spent a couple hours debugging to no avail - CSS looked correct, the HTML appeared to be okay, I didn't see anything in the Javascript (although, I did miss something as I'll point out in a second), and even the old code looked correct (to the best that it could - it's not very well written). Another coworker took a look at the site and couldn't find anything at first, either. It wasn't until I just thought to look at the rendered source code of the page (I had been working in the developer view up to this point in IE8) that it became clear what was wrong. The original developer performs searches on many of the pages. To accomplish this, he queries the database for ALL the data and then loads them into Javascript arrays within the page so he can get access to them. This in itself is a huge problem because we're talking about thousands of items, and it obviously isn't scalable (and, yes, the site is slow). However, it finally clicked what was screwing up the Site.master - when he loads the data into the Javascript arrays, he writes out the data to the HTML upon Page_Load using numerous Response.Write(string) calls. The WTF (and what was messing me up) is that he inserts the Javascript before the DOCTYPE causing IE to go into quirks mode! So, because I need to at least get this release out (I'll fix the real problem later), I was wondering: is there a way I can force this Javascript to be inserted elsewhere into the HTML—after the DOCTYPE at the very least? Right now, all the Response.Write() calls are being done in the Page_Load method. I just need them to be inserted later.
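
    As a stop-gap that doesn't require untangling the search code yet, the Response.Write calls in Page_Load can usually be swapped for ClientScriptManager registrations, which emit the script inside the rendered form instead of ahead of the DOCTYPE, so IE stays out of quirks mode. A hedged sketch follows; BuildSearchArrayScript is a stand-in for however the old code assembles its JavaScript arrays.

        protected void Page_Load(object sender, EventArgs e)
        {
            // e.g. "var items = [ ... ];" built from the same data the old code queried
            string script = BuildSearchArrayScript();

            // Emitted near the end of the <form>; use RegisterClientScriptBlock instead
            // if the arrays must exist before inline scripts higher up the page run.
            ClientScript.RegisterStartupScript(GetType(), "SearchArrays", script, true);
        }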

    Read the article

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (range from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into a MS SQL table. Each DB rows basically consists of a record_ID (Primary, auto-incrementing) and about 50 varchar fields. After the insert is done, I run a validation function on the file that checks each field in each row based on a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegals chars taken out). I have a 5,000 line test file. The upload, initial check, and import takes about 5-6 seconds. The detailed error check and insert into the Errors table takes about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the pages take 50 seconds to return. My UPDATE stored proc is using some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL: UPDATE dbo.Records SET dbo.Records.file_ID = CASE @field_name WHEN 'file_ID' THEN @field_value ELSE file_ID END, . . (all 50 varchar field CASE statements here) . WHERE dbo.Records.record_ID = @record_ID Is there any way I can help my performance here. Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just sheer quantity of 750+ UPDATEs and things are just slow (it's a quad proc server with 8GB ram). Any suggestions appreciated.
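
    Grouping the resets into one connection and one transaction, with a single command whose parameter values change per call, removes most of the per-UPDATE overhead and is a small change to the existing loop. The C# below is a sketch of that shape (the VB.Net translation is mechanical); the proc name, parameter list and the FieldReset/pendingResets names are placeholders for whatever the current data layer uses, and System.Data plus System.Data.SqlClient are assumed to be imported.

        using (SqlConnection conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (SqlTransaction tx = conn.BeginTransaction())
            using (SqlCommand cmd = new SqlCommand("dbo.ResetRecordField", conn, tx))
            {
                cmd.CommandType = CommandType.StoredProcedure;
                SqlParameter pRecord = cmd.Parameters.Add("@record_ID", SqlDbType.Int);
                SqlParameter pField  = cmd.Parameters.Add("@field_name", SqlDbType.VarChar, 50);
                SqlParameter pValue  = cmd.Parameters.Add("@field_value", SqlDbType.VarChar, 8000);

                foreach (FieldReset reset in pendingResets)   // the ~750 resets collected earlier
                {
                    pRecord.Value = reset.RecordId;
                    pField.Value  = reset.FieldName;
                    pValue.Value  = (object)reset.NewValue ?? DBNull.Value;
                    cmd.ExecuteNonQuery();                    // no per-call connect/commit cost
                }
                tx.Commit();
            }
        }

    A bigger win, if it fits the schema, is to send all resets in one round trip (a table-valued parameter or one UPDATE joined against a staging table), but the single-transaction version above is the smallest change to try first.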

    Read the article

  • Calculate pixels within a polygon

    - by DoomStone
    In an assignment for school we need to do some image recognition, where we have to find a path for a robot. So far we have been able to find all the polygons in the image, but now we need to generate a pixel map that will be used for an A* algorithm later. We have found a way to do this, shown below, but the problem is that it is very slow, as we go through each pixel and test whether it is inside the polygon. So my question is: is there a way we can generate this pixel map faster? We have a list of coordinates for the polygon private List<IntPoint> hull; The function "getMap" is called to get the pixel map public Point[] getMap() { List<Point> points = new List<Point>(); lock (hull) { Rectangle rect = getRectangle(); for (int x = rect.X; x <= rect.X + rect.Width; x++) { for (int y = rect.Y; y <= rect.Y + rect.Height; y++) { if (inPoly(x, y)) points.Add(new Point(x, y)); } } } return points.ToArray(); } getRectangle is used to limit the search, so we don't have to go through the whole image public Rectangle getRectangle() { int x = -1, y = -1, width = -1, height = -1; foreach (IntPoint item in hull) { if (item.X < x || x == -1) x = item.X; if (item.Y < y || y == -1) y = item.Y; if (item.X > width || width == -1) width = item.X; if (item.Y > height || height == -1) height = item.Y; } return new Rectangle(x, y, width-x, height-y); } And at last, this is how we check to see if a pixel is inside the polygon public bool inPoly(int x, int y) { int i, j = hull.Count - 1; bool oddNodes = false; for (i = 0; i < hull.Count; i++) { if (hull[i].Y < y && hull[j].Y >= y || hull[j].Y < y && hull[i].Y >= y) { try { if (hull[i].X + (y - hull[i].X) / (hull[j].X - hull[i].X) * (hull[j].X - hull[i].X) < x) { oddNodes = !oddNodes; } } catch (DivideByZeroException e) { if (0 < x) { oddNodes = !oddNodes; } } } j = i; } return oddNodes; }
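
    A common way to speed this up is a scanline fill: for each row of the bounding box, compute where the polygon edges cross that row, sort the crossings, and add every pixel between alternating pairs. That visits each row once instead of calling inPoly for every pixel. The sketch below reuses the hull, getRectangle and Point types from the question and is untested; note that the crossing is interpolated by dividing by the edge's Y span (the inPoly above looks like it has X terms where Y was intended).

        public Point[] getMapScanline()
        {
            List<Point> points = new List<Point>();
            lock (hull)
            {
                Rectangle rect = getRectangle();
                List<int> crossings = new List<int>();

                for (int y = rect.Y; y <= rect.Y + rect.Height; y++)
                {
                    crossings.Clear();
                    int j = hull.Count - 1;
                    for (int i = 0; i < hull.Count; i++)
                    {
                        // Does edge (j -> i) cross this scanline?
                        if (hull[i].Y < y && hull[j].Y >= y || hull[j].Y < y && hull[i].Y >= y)
                        {
                            crossings.Add(hull[i].X + (y - hull[i].Y) * (hull[j].X - hull[i].X)
                                                      / (hull[j].Y - hull[i].Y));
                        }
                        j = i;
                    }

                    crossings.Sort();
                    for (int k = 0; k + 1 < crossings.Count; k += 2)   // fill between pairs
                        for (int x = crossings[k]; x <= crossings[k + 1]; x++)
                            points.Add(new Point(x, y));
                }
            }
            return points.ToArray();
        }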

    Read the article

  • SQL/Schema comparison and upgrade

    - by Workshop Alex
    I have a simple situation. A large organisation is using several different versions of some (desktop) application and each version has it's own database structure. There are about 200 offices and each office will have it's own version, which can be one of 7 different ones. The company wants to upgrade all applications to the latest versions, which will be version 8. The problem is that they don't have a separate database for each version. Nor do they have a separate database for each office. They have one single database which is handled by a dedicated server, thus keeping things like management and backups easier. Every office has it's own database schema and within the schema there's the whole database structure for their specific application version. As a result, I'm dealing with 200 different schema's which need to be upgraded, each with 7 possible versions. Fortunately, every schema knows the proper version so checking the version isn't difficult. But my problem is that I need to create upgrade scripts which can upgrade from version 1 to version 2 to version 3 to etc... Basically, all schema's need to be bumped up one version until they're all version 8. Writing the code that will do this is no problem. the challenge is how to create the upgrade script from one version to the other? Preferably with some automated tool. I've examined RedGate's SQL Compare and Altova's DatabaseSpy but they're not practical. Altova is way too slow. RedGate requires too much processing afterwards, since the generated SQL Script still has a few errors and it refers to the schema name. Furthermore, the code needs to become part of a stored procedure and the code generated by RedGate doesn't really fit inside a single procedure. (Plus, it's doing too much transaction-handling, while I need everything within a single transaction. I have been considering using another SQL Comparison tool but it seems to me that my case is just too different from what standard tools can deliver. So I'm going to write my own comparison tool. To do this, I'll be using ADOX with Delphi to read the catalogues for every schema version in the database, then use this to write the SQL Statements that will need to upgrade these schema's to their next version. (Comparing 1 with 2, 2 with 3, 3 with 4, etc.) I'm not unfamiliar with generating SQL-Script-Generators so I don't expect too many problems. And I'll only be upgrading the table structures, not any of the other database objects. So, does anyone have some good tips and tricks to apply when doing this kind of comparisons? Things to be aware of? Practical tips to increase speed?

    Read the article

  • How can I create the XML::Simple data structure using a Perl XML SAX parser?

    - by DVK
    Summary: I am looking a fast XML parser (most likely a wrapper around some standard SAX parser) which will produce per-record data structure 100% identical to those produced by XML::Simple. Details: We have a large code infrastructure which depends on processing records one-by-one and expects the record to be a data structure in a format produced by XML::Simple since it always used XML::Simple since early Jurassic era. An example simple XML is: <root> <rec><f1>v1</f1><f2>v2</f2></rec> <rec><f1>v1b</f1><f2>v2b</f2></rec> <rec><f1>v1c</f1><f2>v2c</f2></rec> </root> And example rough code is: sub process_record { my ($obj, $record_hash) = @_; # do_stuff } my $records = XML::Simple->XMLin(@args)->{root}; foreach my $record (@$records) { $obj->process_record($record) }; As everyone knows XML::Simple is, well, simple. And more importantly, it is very slow and a memory hog—due to being a DOM parser and needing to build/store 100% of data in memory. So, it's not the best tool for parsing an XML file consisting of large amount of small records record-by-record. However, re-writing the entire code (which consist of large amount of "process_record"-like methods) to work with standard SAX parser seems like an big task not worth the resources, even at the cost of living with XML::Simple. I'm looking for an existing module which will probably be based on a SAX parser (or anything fast with small memory footprint) which can be used to produce $record hashrefs one by one based on the XML pictured above that can be passed to $obj->process_record($record) and be 100% identical to what XML::Simple's hashrefs would have been. I don't care much what the interface of the new module is; e.g whether I need to call next_record() or give it a callback coderef accepting a record.

    Read the article

  • jquery code not working in chrome

    - by Paul
    This code works in FF and IE but not in Chrome. Any help would be greatly appreciated. Thank you! here is the html Item One Item Two Item Three Item Four Item Five Item Six Item Seven Item Eight Item Nine Item ten here is the css #popularsearches { border-bottom: 1px solid #D4D4D4; border-left: 1px solid #D4D4D4; border-right: 1px solid #D4D4D4; overflow:hidden; height: 130px; width:248px; margin-bottom:20px; } #popularsearches ul { padding:0 5px 0 0; margin:0; } #popularsearches ul li { list-style-type:none; list-style-position:inside; border-bottom: solid 1px #D4D4D4; font-size:14px; padding:3px 0 3px 0; margin:0 0 0 10px; text-align:left; } #popularsearches ul li a { text-decoration:none; } #popularsearches ul li a:hover, a:link, a:visited { text-decoration:none; } #popularsearches-inside { width: 500px; } #popularsearches-left { float:left; width:250px; height:100px; } #popularsearches-right { float:left; width:250px; height:100px; } here is the jQuery var closeinterval = 0; function scrollContent() { //Toggle left between 250 and 0 var top = jQuery("#popularsearches").scrollLeft() == 0 ? 250 : 0; jQuery("#popularsearches").animate({ scrollLeft: top }, "slow"); } // Call scrollContent function every 6 secs closeinterval = setInterval("scrollContent()", 6000); jQuery(document).ready(function() { jQuery("#popular-button-left").bind("click", function() { if (closeinterval) { window.clearInterval(closeinterval); closeinterval = null; } jQuery("#popularsearches").animate({ scrollLeft: 0 }, 1000); }); jQuery("#popular-button-right").bind("click", function() { if (closeinterval) { window.clearInterval(closeinterval) closeinterval = null; } jQuery("#popularsearches").animate({ scrollLeft: 250 }, 1000); }); });

    Read the article

  • IE browser caching and the jQuery Form Plugin

    - by Harfleur
    Like so many lost souls before me, I'm floundering in the snake pit that is Ajax form submission and IE browser caching. I'm trying to write a simple script using the jQuery Form Plugin to Ajaxify Wordpress comments. It's working fine in Firefox, Chrome, Safari, et. al., but in IE, the response text is cached with the result that Ajax is pulling in the wrong comment. jQuery(this).ajaxSubmit({ success: function(data) { var response = $("<ol>"+data+"</ol>"); response.find('.commentlist li:last').hide().appendTo(jQuery('.commentlist')).slideDown('slow'); } }); ajaxSubmit sends the comment to wp-comments-post.php, which inelegantly spits back the entire page as a response. So, despite the fact that it's ugly as toads, I'm sticking the response text in a variable, using :last to isolate the most recent comment, and sliding it down in its place. IE, however, is returning the cached version of the page, which doesn't include the new comment. So ".commentlist li:last" selects the previous comment, a duplicate of which then uselessly slides down beneath the original. I've tried setting "cache: false" in the ajaxSubmit options, but it has no effect. I've tried setting a url option and tacking on a random number or timestamp, but it winds up being attached to the POST that submits the comment to the server rather than the GET that returns the response, and so has no effect. I'm not sure what else to try. Everything works fine in IE if I turn off browser caching, but that's obviously not something I can expect anyone viewing the page to do. Any help will be hugely appreciated. Thanks in advance! EDIT WITH A PROGRESS REPORT: A couple of people have suggested using PHP headers to prevent caching, and this does indeed work. The trouble is that wp-comments-post is spitting back the entire page when a new comment is submitted, and the only way I can see to add headers is to put them in the Wordpress post template, which disables caching on all posts at all times--not quite the behavior I'm looking for. Is there a way to set a php conditional--"if is_ajax" or something like that--that would keep the headers from being applied during regular pageloads, but plug them in if the page was called by an Ajax GET?

    Read the article

  • Hide jQuery Accordion while loading

    - by zac
    I am testing a site build with a slow connection and I noticed the jQuery Accordion stays expanded for a long time, until the rest of the site is loaded, and then finally collapses. Not very pretty. I was wondering how I could keep it collapsed through the loading process and only expand when clicked. I am working with the standalone 1.6 version of the accordion plugin. The basic structure : <div class="sidebar"> <ul id="navigation" class="ui-accordion-container"> <li><a class="head" href="#">1</a> <ul class="sub"> <li><a href="#">1a</a></li> <li><a href="#">2a</a></li> </ul> </li> </ul> </div> and the script jQuery().ready(function(){ jQuery('#navigation').accordion({ active: 'false', header: '.head', navigation: true, animated: 'easeslide', collapsible: true }); }); I tried to hide the elements in the CSS to keep them from appearing while loading but all that achieved is in having them always hidden. Maybe the problem is in the CSS I have a background image in each of the sub menus: #navigation{ margin:0px; margin-left: 10px; padding:0px; text-indent:0px; font-size: 1.1em; width:200px; text-transform: uppercase; padding-bottom: 30px; } #navigation ul{ border-width:0px; margin:0px; padding:0px; text-indent:0px; } #navigation li{ list-style:none outside none; } #navigation li ul{ height:185px; overflow:auto; } #navigation li ul.sub{ background:url('../images/sub.jpg') no-repeat; dispaly: block; } #navigation li li a{ color:#000000; display:block; text-indent:20px; text-decoration: none; padding: 6px 0; } #navigation li li a:hover{ background-color:#FFFF99; color:#FF0000; } Thanks in advance for any advice on how to have this thing run a little smoother and having the accordion always collapsed. -edit - I forgot to mention that I am also hoping for a solution that will allow the nav to still be accessible for those without javscript.

    Read the article

  • Running out of memory.. How?

    - by maxdj
    I'm attempting to write a solver for a particular puzzle. It tries to find a solution by trying every possible move one at a time until it finds a solution. The first version tried to solve it depth-first by continually trying moves until it failed, then backtracking, but this turned out to be too slow. I have rewritten it to be breadth-first using a queue structure, but I'm having problems with memory management. Here are the relevant parts: int main(int argc, char *argv[]) { ... int solved = 0; do { solved = solver(queue); } while (!solved && !pblListIsEmpty(queue)); ... } int solver(PblList *queue) { state_t *state = (state_t *) pblListPoll(queue); if (is_solution(state->pucks)) { print_solution(state); return 1; } state_t *state_cp; puck new_location; for (int p = 0; p < puck_count; p++) { for (dir i = NORTH; i <= WEST; i++) { if (!rules(state->pucks, p, i)) continue; new_location = in_dir(state->pucks, p, i); if (new_location.x != -1) { state_cp = (state_t *) malloc(sizeof(state_t)); state_cp->move.from = state->pucks[p]; state_cp->move.direction = i; state_cp->prev = state; state_cp->pucks = (puck *) malloc (puck_count * sizeof(puck)); memcpy(state_cp->pucks, state->pucks, puck_count * sizeof(puck)); /*CRASH*/ state_cp->pucks[p] = new_location; pblListPush(queue, state_cp); } } } return 0; } When I run it I get the error: ice(90175) malloc: *** mmap(size=2097152) failed (error code=12) *** error: can't allocate region *** set a breakpoint in malloc_error_break to debug Bus error The error happens around iteration 93,000. From what I can tell, the error message is from malloc failing, and the bus error is from the memcpy after it. I have a hard time believing that I'm running out of memory, since each game state is only ~400 bytes. Yet that does seem to be what's happening, seeing as the activity monitor reports that it is using 3.99GB before it crashes. I'm using http://www.mission-base.com/peter/source/ for the queue structure (it's a linked list). Clearly I'm doing something dumb. Any suggestions?

    Read the article

  • How to stream semi-live audio over internet

    - by Thomas Tempelmann
    I want to write something like Skype, i.e. I have a constant audio stream on one computer and then recompress it in a format that's suitable for a latent internet connection, receive it on the other end and play it. Let's also assume that the internet connection is fairly modern and fast, i.e. DSL or alike, no slow connections over phone and such. The involved computers will also be rather modern (Dual Core Intel CPUs at 2GHz or more). I know how to handle the audio on the machines. What I don't know is how to transmit the audio in an efficient way. The challenges are: I'd like get good audio quality across the line. The stream should be received without drops. The stream may, however, be received with a little delay (a second delay is acceptable). I imagine that the transport software could first determine the average (and max) latency, then start the stream and tell the receiver to wait for that max latency before starting to play the audio. With that, if the latency doesn't get any higher, the entire stream will be playable on the other side without stutter or drops. If, due to unexpected IP latencies or blockages, the stream does get cut off, I want to be able to notice this so that I can take actions (e.g. abort the stream) and eventually start a new transmission. What are my options if I want do use ready-made software for the compression and tranmission? I have no intention to write my own audio compression engine, really. OTOH, I plan to sell the solution in a vertical market, meaning I can afford a few dollars of license fees per copy, but not $100s. I guess the simplest solution would be to just open a TCP stream, send a few packets back and forth to determine their running time (or even use UDP for that), then use the results as the guide for my max latency value, then simply fire the audio data in its raw form (uncompressed 16 bit stereo), along with a timing code over the TCP connection. The receiver reads the data and plays it with the pre-determined delay. That might just work with the type of fast connection I expect. I just wonder if there are better solutions to reach this goal, with better performance (lower latency) and less data (compressed). BTW, I first try to implement this on OS X, but might want to do it on Windows, too, if it proves successful.

    Read the article

  • how to develop a program to minimize errors in human transcription of hand written surveys

    - by Alex. S.
    I need to develop custom software to do surveys. Questions may be of multiple choice, or free text in a very few cases. I was asked to design a subsystem to check if there is any error in the manual data entry for the multiple choices part. We're trying to speed up the user data entry process and to minimize human input differences between digital forms and the original questionnaires. The surveys are filled with handwritten marks and text by human interviewers, so it's possible to find hard to read marks, or also the user could accidentally select a different value in some question, and we would like to avoid that. The software must include some automatic control to detect possible typing differences. Each answer of the multiple choice questions has the same probability of being selected. This question has two parts: The GUI. The most simple thing I have in mind is to implement the most usable design of the questions display: use of large and readable fonts and space generously the choices. Is there something else? For faster input, I would like to use drop down lists (favoring keyboard over mouse). Given the questions are grouped in sections, I would like to show the answers selected for the questions of that section, but this could slow down the process. Any other ideas? The error checking subsystem. What else can I do to minimize or to check human typos in the multiple choice questions? Is this a solvable problem? is there some statistical methodology to check values that were entered by the users are the same from the hand filled forms? For example, let's suppose the survey has 5 questions, and each has 4 options. Let's say I have n survey forms filled in paper by interviewers, and they're ready to be entered in the software, then how to minimize the accidental differences that can have the manual transcription of the n surveys, without having to double check everything in the 5 questions of the n surveys? My first suggestion is that at the end of the processing of all the hand filled forms, the software could choose some forms randomly to make a double check of the responses in a few instances, but on what criteria can I make this selection? This validation would be enough to cover everything in a significant way? The actual survey is nation level and it has 56 pages with over 200 questions in total, so it will be a lot of hand written pages by many people, and the intention is to reduce the likelihood of errors and to optimize speed in the data entry process. The surveys must filled in paper first, given the complications of taking laptops or handhelds with the interviewers.

    Read the article

  • Postgresql count+sort performance

    - by invictus
    I have built a small inventory system using postgresql and psycopg2. Everything works great, except that when I want to create aggregated summaries/reports of the content, I get really bad performance due to count()'ing and sorting. The DB schema is as follows: CREATE TABLE hosts ( id SERIAL PRIMARY KEY, name VARCHAR(255) ); CREATE TABLE items ( id SERIAL PRIMARY KEY, description TEXT ); CREATE TABLE host_item ( id SERIAL PRIMARY KEY, host INTEGER REFERENCES hosts(id) ON DELETE CASCADE ON UPDATE CASCADE, item INTEGER REFERENCES items(id) ON DELETE CASCADE ON UPDATE CASCADE ); There are some other fields as well, but those are not relevant. I want to extract 2 different reports: - List of all hosts with the number of items per host, ordered from highest to lowest count - List of all items with the number of hosts per item, ordered from highest to lowest count I have used 2 queries for the purpose: Items with host count: SELECT i.id, i.description, COUNT(hi.id) AS count FROM items AS i LEFT JOIN host_item AS hi ON (i.id=hi.item) GROUP BY i.id ORDER BY count DESC LIMIT 10; Hosts with item count: SELECT h.id, h.name, COUNT(hi.id) AS count FROM hosts AS h LEFT JOIN host_item AS hi ON (h.id=hi.host) GROUP BY h.id ORDER BY count DESC LIMIT 10; The problem is: the queries run for 5-6 seconds before returning any data. As this is a web based application, 6 seconds is just not acceptable. The database is heavily populated with approximately 50k hosts, 1000 items and 400 000 host/item relations, and will likely increase significantly when (or perhaps if) the application gets used. After playing around, I found that by removing the "ORDER BY count DESC" part, both queries execute instantly without any delay whatsoever (less than 20ms to finish the queries). Is there any way I can optimize these queries so that I can get the result sorted without the delay? I was trying different indexes, but seeing as the count is computed, I don't see how an index can be utilized for this. I have read that count()'ing in postgresql is slow, but it's the sorting that is causing me problems... My current workaround is to run the queries above as an hourly job, putting the result into a new table with an index on the count column for quick lookup. I use Postgresql 9.2.

    Read the article
