Search Results

Search found 5679 results on 228 pages for 'kill processes'.


  • Quality assurance in small developer teams

    - by Kim L
    Ideally, in a project you will have developers, testers, QA manager(s), etc., who all contribute to the quality of the code. But what if you don't have those kinds of resources? If you have, for example, just three developers and can't afford to hire a full-time QA manager, how do you ensure that the code quality meets the set standards? What kinds of things do you pay attention to in quality assurance? Quality isn't just about the code doing what it is supposed to do (code is properly tested with automatic tests). Quality is also about the code being clean (readable, maintainable, well structured, documented, etc.). I'm looking forward to hearing what kinds of processes you have applied in your team to ensure that the quality meets the set standards. We've adopted a process where we rotate the QA role between the developers: each developer is responsible for QA one week at a time. Each changeset is reviewed and checked so that existing tests pass, required new tests have been written, the code is clean and, of course, the project builds.

  • CLR Stored Procedures

    - by Paul Hatcherian
    In an ASP.NET application, I have a small number of fairly complex, frequently used operations to execute against a database. In these operations, one or more of several tables needs updates or inserts based on a logical evaluation of both input parameters and the values of certain tables. I've maintained a separation of logic and data access, so the operation currently looks like this:

        1. Request received from client
        2. Business layer invokes data layer to retrieve data from database
        3. Business layer processes result and determines which operation to execute
        4. Business layer invokes appropriate data operation
        5. Response sent to client

    As you can see, the client is kept waiting while two separate requests are made to the database. In searching for a solution to this, I've found CLR stored procedures, but I'm not sure I have the right idea about what they are useful for. I have written a replacement for the code above which essentially places steps 2-4 in a CLR stored procedure. My understanding is that the procedure will be executed locally by SQL Server, resulting in only one call being made to the server. My initial benchmark tests show this is actually orders of magnitude slower than my original code, but I attribute that to recompilation of the code that I have not worked out yet and/or some flaw in my environment. My question is basically: is this the intended use of CLR stored procedures, or am I missing something? I realize this is a bit of a compromise structurally, so if there's a better way to do it I'd love to hear it.
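
    For reference, the basic shape of a CLR stored procedure is below (a minimal sketch; the procedure name, parameter, and queries are hypothetical, not the poster's actual operation). The "context connection" runs in-process inside SQL Server, so the read-evaluate-write sequence adds no extra network round trips:

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public partial class StoredProcedures
        {
            [SqlProcedure]
            public static void ProcessRequest(int requestId)
            {
                // In-process connection: no separate round trip to the server.
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "SELECT Status FROM Requests WHERE Id = @id", conn);
                    cmd.Parameters.AddWithValue("@id", requestId);
                    object status = cmd.ExecuteScalar();
                    // Steps 2-4 would live here: branch on the result and
                    // execute the appropriate UPDATE/INSERT statements.
                }
            }
        }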

  • PHP database simulation

    - by Emdiesse
    I have a PHP script that works by pulling items from a database based upon the time they were placed in there, and it deletes them if they are older than 5 minutes. I now want to simulate this database being updated regularly. I was considering adding some code that loads an XML file and parses it into the database, based upon the time data located within a node of the XML data. The problem is that I want it to continually loop and enter this data, so it would never actually run the other processes. So I was thinking of having a second PHP script do that, independently of the script that displays the data. In theory: I am looking to have a button that, when pressed, runs some PHP code that loads an XML file from a directory on my web server and then iterates through the data, sending it to a database based upon the time within a node and the time the script was first called. Back on the page that displays the data: if I continually hit refresh, it should contain different results each time, because data is being added by the other process and this script removes the older data on each refresh. Any information on this? Is there a way I can silently, and safely, run a PHP script without it being loaded into a browser... like a thread?!
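
    A common way to get the "background thread" effect from a web request is to launch the worker script as a detached process (a sketch; the worker path is hypothetical). A cron job that runs the importer every minute is the other usual answer:

        <?php
        // Fire and forget: start the importer detached from this request,
        // discarding its output so exec() returns immediately.
        exec('nohup php /var/www/import_xml.php > /dev/null 2>&1 &');
        echo "Import started";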

  • OpenGL behaving strangely

    - by Mk12
    OpenGL is acting very strangely for some reason. In my subclass of NSOpenGLView, I have the following code in -prepareOpenGL:

        - (void)prepareOpenGL
        {
            GLfloat lightAmbient[]  = { 0.5f, 0.5f, 0.5f, 1.0f };
            GLfloat lightDiffuse[]  = { 1.0f, 1.0f, 1.0f, 1.0f };
            GLfloat lightPosition[] = { 0.0f, 0.0f, 2.0f };
            quality = 0;
            zCoord = -6;
            [self loadTextures];
            glEnable(GL_LIGHTING);
            glEnable(GL_TEXTURE_2D);
            glShadeModel(GL_SMOOTH);
            glClearColor(0.2f, 0.2f, 0.2f, 0.0f);
            glClearDepth(1.0f);
            glEnable(GL_DEPTH_TEST);
            glDepthFunc(GL_LEQUAL);
            glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
            glLightfv(GL_LIGHT1, GL_AMBIENT, lightAmbient);
            glLightfv(GL_LIGHT1, GL_DIFFUSE, lightDiffuse);
            glLightfv(GL_LIGHT1, GL_POSITION, lightPosition);
            glEnable(GL_LIGHT1);
            gameState = kGameStateRunning;
            int i = 0; // HERE ********
            [NSTimer scheduledTimerWithTimeInterval:0.03f
                                             target:self
                                           selector:@selector(processKeys)
                                           userInfo:nil
                                            repeats:YES];
            // Synchronize buffer swaps with vertical refresh rate
            GLint swapInt = 1;
            [[self openGLContext] setValues:&swapInt forParameter:NSOpenGLCPSwapInterval];
            // Set up and start displayLink
            [self setupDisplayLink];
        }

    I wanted to assign the timer that processes key input to an ivar so that I could invalidate it when the game is paused (and re-create it on resume). However, when I did that (as opposed to leaving it at [NSTimer scheduledTimer...]), OpenGL doesn't display the cube I draw. When I take it away, it's fine. So I tried just adding a harmless statement, int i = 0; (marked // HERE ******** above), and that makes the lighting not work! When I take it away, everything is fine, but when I put it back, everything is dark. Can someone come up with a rational explanation for this? Thanks.
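
    For reference, the ivar-based variant the poster describes would look something like this (keyTimer is a hypothetical NSTimer ivar; this is a sketch of the intent, not a diagnosis of the bug):

        // In the interface: NSTimer *keyTimer;
        keyTimer = [NSTimer scheduledTimerWithTimeInterval:0.03f
                                                    target:self
                                                  selector:@selector(processKeys)
                                                  userInfo:nil
                                                   repeats:YES];

        // When pausing:
        [keyTimer invalidate];
        keyTimer = nil;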

  • Efficient data importing?

    - by Kevin
    We work with a lot of real estate, and while rearchitecting how the data is imported, I came across an interesting issue. Firstly, the way our system works (loosely speaking) is that we run a ColdFusion process once a day that retrieves data provided by an IDX vendor via FTP. They push the data to us; whatever they send us is what we get. Over the years, this has proven to be rather unstable. I am rearchitecting it with PHP on the RETS standard, which uses SOAP methods of retrieving data and has already proven to be much better than what we had. When it comes to updating existing data, my initial thought was to query only for data that was updated. There is a 'Modified' field that tells you when a listing was last updated, and the code I have will grab any listing updated within the last 6 hours (giving myself a window in case something goes wrong). However, I see a lot of real estate developers suggest constantly running 'batch' processes that walk through all listings regardless of updated status. Is that the better way to do it? Or am I fine with just grabbing the data I know I need? It doesn't make a lot of sense to me to do more processing than necessary. Thoughts?
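
    The incremental query itself is small (a sketch; the timestamp field name and the search call vary by vendor and client library, so both are assumptions here, not the poster's code):

        <?php
        // Ask only for listings modified in the last 6 hours.
        $since = gmdate('Y-m-d\TH:i:s', time() - 6 * 3600);
        $dmql  = "(ModificationTimestamp={$since}+)"; // DMQL: modified on or after $since
        // $results = $rets->SearchQuery('Property', 'RES', $dmql); // hypothetical client call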

  • How can I open VLC via browser with PHP (Mac OS X)

    - by Damiqib
    I'm trying to open VLC via a browser and make it instantly play the given video file on Mac OS X. This runs on my local server and is only meant to run locally - therefore I already run Apache (MAMP) with my username and with group "staff" (defined in httpd.conf). YES - I do know that VLC has an HTTP interface - however that is not what I need, so do not suggest that... My current setup works without any problems when I run it via Terminal: php /var/www/Movies/index.php - this leads to VLC opening, and the video starts playing fullscreen as intended. Problems start when I run the same PHP page from a browser. Then the VLC process starts, but there's no GUI for it, the video file won't start playing, and the VLC process takes nearly 100% of the CPU. What I've observed:

        - Both the Terminal-started and browser-started VLC processes run as the same user (mine)
        - Both have "Parent process" bash
        - The VLC process begun from Terminal has an empty "Process group" (only a process id-number), while the browser-started one has "httpd" + (id-number)
        - The VLC process started via the browser makes 1000 times more "Mach System Calls" than its Terminal-started counterpart

    Could anyone give me any pointers on how to get this thing working?

    index.php:

        # $j is a file path to the video file and is defined earlier
        exec('/var/www/Movies/vlc.sh "' . $j . '" > /dev/null 2>&1 & echo $!;');

        # If I do this in the given PHP page, it tells me that Apache is running
        # with my username and with the group "staff" like it should be...
        exec('whoami');

    vlc.sh:

        #!/bin/bash
        # Activate VLC in 5 seconds to make it the front-most window
        (sleep 5; open -a VLC) &
        # Open video file
        /Applications/VLC.app/Contents/MacOS/VLC --quiet --fullscreen "$1"

  • Textually diffing JSON

    - by Richard Levasseur
    As part of my release processes, I have to compare some JSON configuration data used by my application. As a first attempt, I just pretty-printed the JSON and diffed the files (using kdiff3 or just diff). As the data has grown, however, kdiff3 confuses different parts of the output, making additions look like giant modifies, odd deletions, etc. It makes it really hard to figure out what is different. I've tried other diff tools, too (meld, kompare, diff, a few others), but they all have the same problem. Despite my best efforts, I can't seem to format the JSON in a way that the diff tools can understand. Example data:

        [
            {
                "name": "date",
                "type": "date",
                "nullable": true,
                "state": "enabled"
            },
            {
                "name": "owner",
                "type": "string",
                "nullable": false,
                "state": "enabled"
            }
            ...lots more...
        ]

    The above probably wouldn't cause the problem (which occurs when there begin to be hundreds of lines), but that's the gist of what is being compared. That's just a sample; the full objects have 4-5 attributes, and some attributes have 4-5 attributes in them. The attribute names are pretty uniform, but their values are pretty varied. In general, it seems like all the diff tools confuse the closing "}" with the next object's closing "}". I can't seem to break them of this habit. I've tried adding whitespace, changing indentation, and adding some "BEGIN" and "END" strings before and after the respective objects, but the tools still get confused.
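
    One approach that usually stabilizes the diff (a sketch in Python; any language with a JSON library works the same way) is to canonicalize both files first. Fixed indentation plus sorted keys means structurally identical objects always serialize to identical text, which gives the diff tool unambiguous anchors:

        import json
        import sys

        # canon.py: read a JSON file, emit a canonical pretty-printed form.
        with open(sys.argv[1]) as f:
            data = json.load(f)
        print(json.dumps(data, indent=2, sort_keys=True))

    Then diff the canonical forms: python canon.py a.json > a.c && python canon.py b.json > b.c && diff a.c b.c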

  • Get current PHP executable from within script?

    - by benizi
    I want to run a PHP CLI program from within PHP CLI. On some machines where this will run, both php4 and php5 are installed. If I run the outer program as php5 outer.php, I want the inner script to be run with the same PHP version. In Perl, I would use $^X to get the perl executable. It appears there's no such variable in PHP? Right now, I'm using $_SERVER['_'], because bash (and zsh) set the environment variable $_ to the last-run program. But I'd rather not rely on a shell-specific idiom. UPDATE: Version differences are but one problem. If PHP isn't in PATH, for example, or isn't the first version found in PATH, the suggestions to find the version information won't help. Additionally, csh and variants appear to not set the $_ environment variable for their processes, so the workaround isn't applicable there. UPDATE 2: I was using $_SERVER['_'] until I discovered that it doesn't do the right thing under xargs (which makes sense... zsh sets it to the command it ran, which is xargs, not php5, and xargs doesn't change the variable). Falling back to using:

        $version = explode('.', phpversion());
        $phpcli = "php{$version[0]}";
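
    An aside for later readers: PHP 5.4 added a constant for exactly this, so a fallback chain is possible (a sketch; on 5.3 and older it still degrades to the version guess):

        <?php
        if (defined('PHP_BINARY')) {
            $phpcli = PHP_BINARY;                 // 5.4+: path of the running executable
        } else {
            $version = explode('.', phpversion());
            $phpcli  = "php{$version[0]}";        // older: guess from the major version
        }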

  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using Rakefiles deeper in the tree; i.e. the top-level Rakefile says how to build the project (big picture) and the lower-level ones build a specific module (local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks, so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g. /Source/Module/code.foo and friends should be built using the instructions in /Source/Module/Rakefile, and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (a la recursive make) or just creates separate build environments. Either way it should be self-contained enough to be processed by a queue, so that non-dependent modules could be built simultaneously. The problem is: how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all Rakefiles have a :default.) Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g. think of $(MAKE) and $(MAKEFLAGS). Anyone have a clue on how to properly do something like a recursive rake?
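
    The sub-shell version, for what it's worth, is short (a sketch; the module list is made up, and as noted it forwards no options unless you pass them along explicitly):

        # Top-level Rakefile: drive each module's own Rakefile in its directory.
        MODULES = %w[Source/ModuleA Source/ModuleB]   # hypothetical module list

        MODULES.each do |mod|
          desc "Build #{mod}"
          task mod do
            Dir.chdir(mod) { sh 'rake' }   # runs that module's :default task
          end
        end

        desc 'Build all modules'
        task :default => MODULES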

  • .NET threading solution for long queries

    - by Eddie
    Scenario: We have an application that records incidents. An external database needs to be queried when an incident is approved by a supervisor. The queries to this external database sometimes take a while to run, and this lag is experienced through the browser. Possible solution: I want to use threading to eliminate the apparent hang in the browser. I have used the Thread class before and heard about ThreadPool, but I just found BackgroundWorker in this post. MSDN states: "The BackgroundWorker class allows you to run an operation on a separate, dedicated thread. Time-consuming operations like downloads and database transactions can cause your user interface (UI) to seem as though it has stopped responding while they are running. When you want a responsive UI and you are faced with long delays associated with such operations, the BackgroundWorker class provides a convenient solution." Is BackgroundWorker the way to go when handling long-running queries? What happens when 2 or more BackgroundWorker processes are run simultaneously? Is it handled like a pool?
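
    For reference, the basic wiring looks like this (a sketch; QueryExternalDatabase and DisplayResult are hypothetical). One relevant detail: despite the "dedicated thread" wording, BackgroundWorker actually schedules DoWork on a ThreadPool thread, so several simultaneous workers are pooled rather than each getting a permanent thread:

        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) =>
        {
            // Runs off the UI thread.
            e.Result = QueryExternalDatabase((int)e.Argument);
        };
        worker.RunWorkerCompleted += (s, e) =>
        {
            if (e.Error == null)
                DisplayResult(e.Result);   // back on the calling context
        };
        worker.RunWorkerAsync(incidentId);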

  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this:

        find $BASEDIR -iname "*png" -exec pngout {} \;

    And then I looked at my CPU monitor and noticed only one of the cores was used, which is quite sad. In this day and age of dual, quad, octo and hexa (?) core desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need; quite a lot of these utils are single-threaded... I already had this case with MP3 encoders.) Would simply running all the pngout invocations in the background do? How would my find command look then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, this would mean swapping between three hundred processes, which doesn't seem great anyway!? Or should I copy my three hundred or so files into "nb dirs", where "nb dirs" would be the number of cores, then run "nb finds" concurrently? (which would be close enough) But how would I do this?
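
    For reference, GNU xargs can do the fan-out by itself (a sketch; the core count is hard-coded here):

        # -P 4: at most four pngout processes at once; -n 1: one file per invocation.
        # -print0 / -0 keep filenames containing spaces intact.
        find "$BASEDIR" -iname '*png' -print0 | xargs -0 -P 4 -n 1 pngout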

  • Log4Net GetLogger creates rolling files even for the unreferenced files

    - by ybastiand
    Hi, I have a C# solution that contains three executables, all sharing the same log4net configuration file. At startup, each executable retrieves a logger (one logger per executable, as per the configuration file further below). When one of the executables performs Log.GetLogger(), it creates all the rolling files instead of only the one rolling file that is referenced (via appender-ref) in that executable's logger configuration. For instance, when I start my sending-daemon executable, it performs Log.GetLogger("SendingDaemonLogger"), which creates the three files Log/RuleScheduler.txt, Log/NotificationGenerator.txt and Log/NotificationSender.txt instead of only the desired Log/NotificationSender.txt. Then when I start another of the executables, for instance the rule-scheduler daemon, this other process cannot write to Log/RuleScheduler.txt because it has been created and locked by the sending-daemon process. I am guessing that there may be three different solutions to my problem:

        1. GetLogger should only create the rolling file appenders that are referenced in the config.
        2. I could have one config file per executable; this way each config file could list only one rolling file appender, and starting each executable would not create the rolling files of the other daemons. I am however reluctant to do this, because some of the configuration (SMTP appender, console appender) is shared between the daemons and I don't want duplicate copies to maintain. Unless there is a way to have a config file include another one?
        3. Maybe there is a way to configure the rolling file so that concurrent access across processes is allowed? This solution still isn't perfect in my opinion, because any one daemon should not be creating the rolling files of the other daemons.

    Thanks in advance for your help! I have difficulties posting the config file properly here (this website interprets it as HTML). Please go to the following link to see my log4net configuration file: log4Net configuration file
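
    On option 3: log4net can be told to acquire the file lock per write instead of holding it for the life of the process, which lets several processes share one file (a sketch of the relevant appender fragment; it costs some throughput):

        <appender name="RuleSchedulerAppender" type="log4net.Appender.RollingFileAppender">
          <file value="Log/RuleScheduler.txt" />
          <lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
          <!-- rolling style, layout, etc. as before -->
        </appender>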

  • .net c# cannot find img resources when open with exe

    - by okuryazar
    Hello, my exe processes text documents, and I want to be able to right-click on a document, select "Open with" and point to my exe file. I can double-click on my exe and choose a file to process with OpenFileDialog, and it works fine. However, when I use "Open with", I get a FileNotFound error. Here is the error log:

        System.IO.FileNotFoundException: attention.jpg
           at System.Drawing.Image.FromFile(String filename, Boolean useEmbeddedColorManagement)
           at System.Drawing.Image.FromFile(String filename)
           at ImzaDogrulamaUygulamasi.frmCertificate.FillTreeView() in D:\VSS\SOURCE\VS2008\EGA\ImzaDogrulamaUygulamasi\ImzaDogrulamaUygulamasi\frmCertificate.cs:line 76
           at ImzaDogrulamaUygulamasi.frmCertificate.Form2_Load(Object sender, EventArgs e) in D:\VSS\SOURCE\VS2008\EGA\ImzaDogrulamaUygulamasi\ImzaDogrulamaUygulamasi\frmCertificate.cs:line 244
           at System.Windows.Forms.Form.OnLoad(EventArgs e)
           at System.Windows.Forms.Form.OnCreateControl()
           at System.Windows.Forms.Control.CreateControl(Boolean fIgnoreVisible)
           at System.Windows.Forms.Control.CreateControl()
           at System.Windows.Forms.Control.WmShowWindow(Message& m)
           at System.Windows.Forms.Control.WndProc(Message& m)
           at System.Windows.Forms.ScrollableControl.WndProc(Message& m)
           at System.Windows.Forms.ContainerControl.WndProc(Message& m)
           at System.Windows.Forms.Form.WmShowWindow(Message& m)
           at System.Windows.Forms.Form.WndProc(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
           at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)

    And this is how I add my images in my code; all resources are in the same directory as the exe file:

        ImageList myImageList = new ImageList();
        myImageList.Images.Add(Image.FromFile("attention.jpg"));
        myImageList.Images.Add(Image.FromFile("sandglass.jpg"));
        myImageList.Images.Add(Image.FromFile("11.JPG"));
        myImageList.Images.Add(Image.FromFile("checkGif.jpg"));
        treeView1.ImageList = myImageList;

    Any help is much appreciated. Thanks
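
    A likely explanation (an editor's aside, not part of the original post): with "Open with", Windows sets the process's working directory to the document's folder, and Image.FromFile resolves relative paths against the working directory, not the exe's location. A sketch of anchoring the paths to the executable instead:

        using System.IO;

        // Resolve resources relative to the exe rather than the working directory.
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;
        myImageList.Images.Add(Image.FromFile(Path.Combine(baseDir, "attention.jpg")));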

  • best way to switch between secure and unsecure connection without bugging the user

    - by Brian Lang
    The problem I am trying to tackle is simple. I have two pages: the first is a registration page where I take in a few fields from the user; once they submit, it takes them to another page that processes the data, stores it to a database, and, if successful, gives a confirmation message. Here is my issue: the data from the user is sensitive, so I'm using an HTTPS connection to ensure no eavesdropping. After that is sent to the database, I'd like the confirmation page to do some nifty things like Google Maps navigation (this is for a time-reservation application). The problem is that by using the Google Maps API I'd be linking to items through an unsecure source, which in turn prompts the user with a nasty warning message. I've browsed around; Google has an alternative for enterprise clients, but it costs $10,000 a year. What I am hoping for is a workaround: use a secure connection to take in the data, and after it is processed, bring the user to a page that isn't secure and allows me to utilize the Google Maps API. If any of you have a Netflix account, you can see exactly what I would like to do: signing in happens on a secure page, which then takes you to your account / queue on an unsecure page. Any suggestions? Thanks!

  • Distributed Lock Service over MySql/GigaSpaces/Netapp

    - by ripper234
    Disclaimer: I already asked this question, but without the deployment requirement. I got an answer that got 3 upvotes, and when I edited the question to include the deployment requirement, that answer became irrelevant. The reason I'm resubmitting is that SO considers the original question 'answered', even though I got no meaningful upvoted answer. I opened a uservoice submission about this problem. I reposted so that StackOverflow can consider the original question answered and it doesn't show up on the 'unanswered questions' tab. Which distributed lock service would you use? Requirements are:

        - A mutual exclusion (lock) that can be seen from different processes/machines
        - lock...release semantics
        - Automatic lock release after a certain timeout: if the lock holder dies, the lock is automatically freed after X seconds
        - Java implementation
        - Easy deployment: must not require complicated deployment beyond either Netapp, MySql or GigaSpaces, and must play well with those products (especially GigaSpaces - this is why TerraCotta was ruled out)

    Nice to have:

        - .Net implementation
        - If it's free: deadlock detection / mitigation

    I'm not interested in answers like "it can be done over a database" or "it can be done over JavaSpaces" - I know. Relevant answers should only contain a ready, out-of-the-box, proven implementation.

  • Is NSPasteboard thread-safe?

    - by Joe
    Is it safe to write data to an NSPasteboard object from a background thread? I can't seem to find a definitive answer anywhere. I think the assumption is that the data will be written to the pasteboard before the drag begins. Background: I have an application that fetches data from Evernote. When the application first loads, it gets the metadata for each note, but not the note content. The note stubs are then listed in an outline view. When the user starts to drag a note, the notes are passed to the background thread that handles getting the note content from Evernote. Having the main thread block until the data arrives results in a significant delay and a poor user experience, so I have the [outlineView:writeItems:toPasteboard:] method return YES while the background thread processes the data and invokes the main thread to write the data to the pasteboard object. If the note content gets transferred before the user drops the note somewhere, everything works perfectly. If the user drops the note somewhere before the data has been processed... well, everything blocks forever. Is it safe to just have the background thread write the data to the pasteboard?
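
    For reference, the main-thread hand-off described above would look something like this (a sketch; -writeNotes: is a hypothetical method that does the actual -[NSPasteboard setData:forType:] calls):

        // On the background thread, once the note content has arrived:
        [self performSelectorOnMainThread:@selector(writeNotes:)
                               withObject:notes
                            waitUntilDone:NO];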

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed connections?

    - by Carlos Carrasco
    When dealing with mobile clients, it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5 ms. I am looking for an HTTP server, balancer or proxy server that supports the following:

        1. A request arrives at the proxy. The proxy starts buffering the request - including headers and POST/PUT bodies - in RAM or on disk. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
        2. The proxy stops buffering the request when a size limit has been reached (say, 4KB), or the request has been received completely, headers and body.
        3. Only now, with (part of) the request in memory, a connection is opened to the backend and the request is relayed.
        4. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64KB). Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests.
        5. The backend connection is immediately closed.
        6. The proxy sends back the response to the mobile client, as fast or as slow as the client is capable of, without having a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? And is there one that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model? (Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)
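
    For the nginx side of it (items 1-3 plus response buffering), the relevant directives look roughly like this - a sketch, and the backend name is made up:

        location / {
            client_body_buffer_size 16k;   # buffer request bodies in RAM up to 16k
            proxy_buffering on;            # buffer the backend response...
            proxy_buffers 8 8k;            # ...in up to 8 x 8k = 64k of memory
            proxy_pass http://backend;     # hypothetical upstream
        }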

  • To what degree should I use Marshal.ReleaseComObject with Excel Interop objects?

    - by DanM
    I've seen several examples where Marshal.ReleaseComObject() is used with Excel interop objects (i.e., objects from the namespace Microsoft.Office.Interop.Excel), but I've seen it used to various degrees. I'm wondering if I can get away with something like this:

        var application = new ApplicationClass();
        try
        {
            // do work with application, workbooks, worksheets, cells, etc.
        }
        finally
        {
            Marshal.ReleaseComObject(application);
        }

    Or if I need to release every single object created, as in this method:

        public void CreateExcelWorkbookWithSingleSheet()
        {
            var application = new ApplicationClass();
            var workbook = application.Workbooks.Add(_missing);
            var worksheets = workbook.Worksheets;
            for (var worksheetIndex = 1; worksheetIndex < worksheets.Count; worksheetIndex++)
            {
                var worksheet = (WorksheetClass)worksheets[worksheetIndex];
                worksheet.Delete();
                Marshal.ReleaseComObject(worksheet);
            }
            workbook.SaveAs(
                WorkbookPath, _missing, _missing, _missing, _missing, _missing,
                XlSaveAsAccessMode.xlExclusive, _missing, _missing, _missing, _missing, _missing);
            workbook.Close(true, _missing, _missing);
            application.Quit();
            Marshal.ReleaseComObject(worksheets);
            Marshal.ReleaseComObject(workbook);
            Marshal.ReleaseComObject(application);
        }

    What prompted me to ask this question is that, being the LINQ devotee I am, I really want to do something like this:

        var worksheetNames = worksheets.Cast<Worksheet>().Select(ws => ws.Name);

    ...but I'm concerned I'll end up with memory leaks or ghost processes if I don't release each worksheet (ws) object. Any insight on this would be appreciated.
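
    A middle ground (a sketch, under the usual assumption that every object the enumerator hands out is its own RCW that needs releasing): trade the one-liner for a loop that releases each sheet as soon as its name has been read:

        var worksheetNames = new List<string>();
        foreach (Worksheet ws in worksheets)
        {
            worksheetNames.Add(ws.Name);
            Marshal.ReleaseComObject(ws);   // release each RCW once we're done with it
        }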

  • Using pipes in Linux with C

    - by Dave
    Hi, I'm doing a course in Operating Systems and we're supposed to learn how to use pipes to transfer data between processes. We were given this simple piece of code which demonstrates how to use pipes, but I'm having difficulty understanding it.

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        int main(void)
        {
            int pipefd[2], n;
            char buff[100];

            if (pipe(pipefd) < 0) {
                printf("can not create pipe \n");
            }
            printf("read fd = %d, write fd = %d \n", pipefd[0], pipefd[1]);

            if (write(pipefd[1], "hello world\n", 12) != 12) {
                printf("pipe write error \n");
            }
            if ((n = read(pipefd[0], buff, sizeof(buff))) <= 0) {
                printf("pipe read error \n");
            }
            write(1, buff, n);   /* file descriptor 1 is standard output */
            exit(0);
        }

    What does the write function do? It seems to send data to the pipe and also print it to the screen (at least that's what the second call to write appears to do). Does anyone have any suggestions of good websites for learning about topics such as this, FIFOs, signals, and other basic Linux facilities used from C?
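
    An aside (an editor's sketch, not part of the course handout): write() sends bytes to whatever file descriptor it is given - pipefd[1] is the pipe's write end, while descriptor 1 is standard output, which is why the last call prints to the screen. The more typical pattern pairs the pipe with fork(), so two processes actually communicate:

        #include <unistd.h>

        int main(void)
        {
            int fd[2];
            char buf[100];

            if (pipe(fd) < 0)
                return 1;

            if (fork() == 0) {              /* child: read from the pipe */
                close(fd[1]);               /* close the unused write end */
                ssize_t n = read(fd[0], buf, sizeof(buf));
                if (n > 0)
                    write(1, buf, n);       /* echo what the parent sent */
                return 0;
            }
            close(fd[0]);                   /* parent: close the unused read end */
            write(fd[1], "hello from parent\n", 18);
            close(fd[1]);
            return 0;
        }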

  • global variables in php not working as expected

    - by Josh Smeaton
    I'm having trouble with global variables in PHP. I have a $screen var set in one file, which requires another file that calls an initSession() defined in yet another file. initSession() declares "global $screen" and then processes $screen further down, using the value set in the very first script. How is this possible? To make things more confusing, if you try to set $screen again and then call initSession(), it uses the value first used once again. The following code describes the process. Could someone have a go at explaining this?

        $screen = "list1.inc";    // From model.php
        require "controller.php"; // From model.php
        initSession();            // From controller.php
        global $screen;           // From Include.Session.inc
        echo $screen;             // prints "list1.inc"

        // From anywhere
        $screen = "delete1.inc";  // From model2.php
        require "controller2.php";
        initSession();
        global $screen;
        echo $screen;             // prints "list1.inc"

    Update: If I declare $screen global again just before requiring the second model, $screen is updated properly for the initSession() method. Strange.
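
    For context (an editor's sketch of the baseline semantics, not a diagnosis of the behaviour above): the global keyword only does something inside a function body, where it binds a local name to the global variable. At file scope it is a no-op, because file-scope variables are already global:

        <?php
        $screen = "list1.inc";

        function initSession()
        {
            global $screen;           // bind the local name to the global
            $screen = "delete1.inc";  // mutates the global variable
        }

        global $screen;               // no-op at file scope
        initSession();
        echo $screen;                 // prints "delete1.inc"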

  • best way to set up a VM for development (regarding performance)

    - by raticulin
    I am trying to set up a clean VM that I will use in many of my dev projects. Hopefully I will use it many times and for a long time, so I want to get it right and set it up so performance is as good as possible. I have searched for a list of things to do, but strangely found only older posts, and none here. My requirements are:

        - My host is Vista 32-bit, and the guest is Windows 2008 64-bit, using VMware Workstation. The VM should also be able to run on a VMware ESX.
        - I cannot move to other products (VirtualBox etc.), but info about the performance of each one is welcome for reference. Anyway, I guess most advice would apply to other OSs and other VM products.
        - I need network connectivity to my LAN.
        - The guest will run many Java processes and a DB, and will perform lots of file I/O.

    What I have found so far: "HOWTO: Squeeze Every Last Drop of Performance Out of Your Virtual PCs" - it's an old post, and about Virtual PC, but I guess most things still apply (and also apply to VMware). I guess it makes a difference to disable all unnecessary services, but the ones mentioned there seem like too few; I specifically always disable Windows Search. Any other service I should disable?

  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge sort algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see, everything is OK and I should get a nice N*logN curve as a result. But the picture differs a bit. Things I've tried to investigate the issue:

        - PyPy. The curve is OK.
        - Disabled the GC using the gc module. Wrong guess: debug output showed that it doesn't even run until the end of the test.
        - Memory profiling using meliae: nothing special or suspicious.
        - I had another implementation (a recursive one using the same merge function); it behaves in a similar way.
        - The more full test cycles I create, the more "jumps" there are in the curve.

    So how can this behaviour be explained and - hopefully - fixed? UPD: changed lists to collections.deque. UPD2: added the full test code. UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2 GHz notebook. I tried to turn off most other processes: the number of spikes went down, but at least one of them was still there.
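
    One methodological aside (an editor's sketch, not the poster's code): the timeit module sidesteps some of the clock-granularity and scheduling noise that raw time.clock() measurements pick up, which can help separate algorithmic behaviour from measurement artifacts:

        import timeit

        # Time LOOP_COUNT runs of sort() on one input size.
        elapsed = timeit.timeit(lambda: sort(loop_test_data), number=LOOP_COUNT)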

  • What are some programming techniques for converting SD images to HD images

    - by Dr Dork
    I'm taking a programming class and the instructor loves to work with images, so most of our assignments involve manipulating raw RGB image data. One of our assignments is to implement a standard image converter that converts SD images to HD images and vice versa. I always take advantage of these types of assignments to go a little beyond what we were asked to do, so I added a basic anti-aliasing process that uses the average pixel color of the 3x3 surrounding pixels to improve the converted image. While it helps a bit, the resulting image still doesn't look good, which is OK because it's not expected to for the assignment. I've learned that converting SD images to HD has proven to be much harder than downsampling from HD to SD, simply because SD-to-HD effectively involves increasing resolution that is not there. Obviously, it is hard to create pixels from nothing, but I'd like to enhance my anti-aliasing into something that provides better results when upscaling an image. Most of the techniques I find and read about on the Internet are far beyond my level of image processing and programming. Can anybody suggest any better methods or processes to create good HD content from SD content that may be within my programming skill level? I know that's a difficult thing to gauge since you don't know me, but perhaps knowing that I can write C++ code to read in raw RGB data and upscale/downscale it with simple average anti-aliasing will give you an idea. Thanks in advance for all your help!
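
    A natural next step up from 3x3 averaging is bilinear interpolation: map each destination pixel back to fractional source coordinates and blend the four surrounding source pixels. A sketch for a single 8-bit channel (assumes dimensions greater than 1; production code would also handle RGB triplets, and sharper filters such as bicubic or Lanczos are the usual steps after this):

        #include <algorithm>
        #include <cstdint>
        #include <vector>

        // Bilinear upscale of one 8-bit channel from (sw x sh) to (dw x dh).
        std::vector<uint8_t> upscale(const std::vector<uint8_t>& src,
                                     int sw, int sh, int dw, int dh)
        {
            std::vector<uint8_t> dst(dw * dh);
            for (int y = 0; y < dh; ++y) {
                for (int x = 0; x < dw; ++x) {
                    // Map the destination pixel back into source coordinates.
                    float fx = x * (sw - 1) / float(dw - 1);
                    float fy = y * (sh - 1) / float(dh - 1);
                    int x0 = int(fx), y0 = int(fy);
                    int x1 = std::min(x0 + 1, sw - 1);
                    int y1 = std::min(y0 + 1, sh - 1);
                    float tx = fx - x0, ty = fy - y0;
                    // Blend the four neighbours: horizontally, then vertically.
                    float top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
                    float bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
                    dst[y * dw + x] = uint8_t(top * (1 - ty) + bot * ty + 0.5f);
                }
            }
            return dst;
        }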

  • C# WinForms MultiThreading in Loop

    - by Goober
    Scenario: I have a background worker in my application that runs off and does a bunch of processing. I used this implementation specifically to keep my user interface fluid and prevent it from freezing up. I want to keep the background worker, but inside that thread spawn off ONLY 3 MORE threads, making them share the processing. Currently the worker thread just loops through and processes each asset one by one; I would like to speed this up while using only a limited number of threads.

    Question: Given the code below, how can I get the loop to choose a thread that is free, and essentially wait if there isn't one free before it continues?

    CODE:

        foreach (KeyValuePair<int, LiveAsset> kvp in laToHaganise)
        {
            Haganise h = new Haganise(kvp.Value, busDate, inputMktSet, outputMktSet,
                                      prodType, noOfAssets, bulkSaving);
            h.DoWork();
        }

    Thoughts: I'm guessing that I would have to start off by creating 3 new threads, but my concern is that if I'm instantiating a new Haganise object each time, how can I pass the correct "h" object to the correct thread?

        Thread firstThread  = new Thread(new ThreadStart(h.DoWork));
        Thread secondThread = new Thread(new ThreadStart(h.DoWork));
        Thread thirdThread  = new Thread(new ThreadStart(h.DoWork));

    Help greatly appreciated.
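
    One standard shape for this (a sketch reusing the post's own types; everything else is illustrative): enqueue all the work up front and let exactly three threads drain the queue, so each thread picks up the next Haganise as soon as it is free:

        var work = new Queue<Haganise>();
        foreach (KeyValuePair<int, LiveAsset> kvp in laToHaganise)
            work.Enqueue(new Haganise(kvp.Value, busDate, inputMktSet, outputMktSet,
                                      prodType, noOfAssets, bulkSaving));

        ThreadStart drain = () =>
        {
            while (true)
            {
                Haganise h;
                lock (work)
                {
                    if (work.Count == 0) return;   // nothing left: worker exits
                    h = work.Dequeue();
                }
                h.DoWork();
            }
        };

        var threads = new[] { new Thread(drain), new Thread(drain), new Thread(drain) };
        foreach (var t in threads) t.Start();
        foreach (var t in threads) t.Join();   // block the worker until all three finish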

  • is there a Universal Model for languages?

    - by Smandoli
    Many programming languages share generic and even fairly universal features. For example, if you compared Java, VB6, .NET, PHP and Python, you would find common features such as control structures, numeric and string manipulation, etc. What has been done to define these features at a meta-language (or language-agnostic) level? UML offers a descriptive reference of software in every aspect, but the real-world focus seems to be data processes. Is UML relevant? I'm not asking "Why don't we have a single language that replaces the current plethora?" We need many different tools (at least in this eon). I'm not asking that all languages fit a template - assembly vs. compiled languages are different enough to make that unfeasible (and some folks call HTML a language, though I wouldn't). Any attempt would have to start with a properly narrow scope. In line with this, I wouldn't expect the model to cover even a small selection with full validity. I would expect, however, that such a model could be used to transpose from one language to another (with limited goals - think gist translation).
