Search Results

Search found 6030 results on 242 pages for 'quick poll'.


  • Anyone Experiencing Slow Builds With VS2010?

    - by MrKWatkins
    Hi, We've recently upgraded to the final release of VS2010 and are experiencing very slow build times compared to the same code under 2008. I was wondering if anyone else is experiencing the same, so I can work out whether it's just our environment or not. A few details: we're using VS2010 Ultimate on Windows 7 with fairly beefy machines, talking to TFS 2010. The solution has been upgraded from VS2008 but still builds against .NET 3.5 and ASP.NET MVC 1.0. It doesn't seem to be the compilation itself taking long but something else in the build process, because even projects that are up to date and don't need compiling take a few seconds to process. It's not due to a Visual Studio add-in, because a couple of guys on the team haven't installed any. The first build after loading VS2010 is pretty quick; then they seem to slow down over time. For example, one of the projects in my solution just took 00:00:00.08 to process after a restart (the project was up to date and didn't need compiling). I then immediately hit rebuild and it jumps to 00:00:01.33. We're also experiencing the problem with another solution that uses .NET 4.0 and was building perfectly fine under the VS2010 RC. There are no build events or anything like that I can blame, just straightforward assembly builds. The IDE is not very responsive during the slow builds. Is anyone else having similar problems?

    Update: It looks like resolving assembly references is taking a long time. Looking at the MSBuild diagnostic output for the example above, the first build spends 30ms in ResolveAssemblyReferences, the second 800ms. Subsequent builds also seem to take longer copying things around, e.g. CopyFilesToOutputDirectory jumps from 1ms to 27ms.

    Read the article

  • PHP, MySQL, Memcache / Ajax Scaling Problem

    - by Jeff Andersen
    I'm building an AJAX tic-tac-toe game in PHP/MySQL. The premise of the game is to be able to share a URL like mygame.com/123 with your friends and play multiple simultaneous games. The way I have it set up is that a file (reload.php) is called every 3 seconds while the user is viewing their game board space. This reload.php builds their game boards, and the output (HTML) replaces their current game board (thus showing games in which it is their turn). Initially I built it entirely with PHP/MySQL and had zero caching. A friend suggested trying to do all of the temporary/quick-read information through memcache (storing moves and ID matchups) and then building the game boards from that. My issue is that both solutions hit a wall when there are roughly 30-40 active users with roughly 40-50 games running. It is running on a VPS from VPS.net with 2 nodes (dedicated CPU: 1.2GHz, RAM: 752MB). Each call to reload.php performs 3 select queries and 2 insert queries. The size of the data being pulled is negligible. The same actions happen on index.php to build the boards for the initial visit. Now that the backstory is done, my question is: would there be a bottleneck in that each user is polling the same file every 3 seconds to rebuild their game boards, while all users are sitting on index.php, from which the AJAX calls are made within the HTML? If so, is it possible to spread the users' calls out over a set of files designated to building the game boards (e.g. reload1.php, reload2.php, etc.) and direct users to the appropriate file? Would this relieve the pressure? A long-winded explanation; however, I didn't have anywhere else to ask. Thanks very much for any insight.

    Read the article

  • Send email from webpage without postback/refresh

    - by jb
    Hi all. Long time reader, first time writer. A quick query I hope someone can help me with. I have a webpage (PHP, JavaScript) which has a small contact form for a user to fill in and email us. Not a problem in itself; I have a working form already. Nothing noteworthy, just a form posting to a PHP page which sends an email: <form id="myform" name="myform" method="post" action="Mailer.php"> So the user fills in the form, hits submit, the page changes to Mailer.php, and they find their way back to where they were. What I want instead is for the user to stay on the same page when submit is pushed, and for the form div to update itself to just say 'message sent' or something. I just want to avoid a full page refresh, much like how on, say, Facebook, commenting on a status only updates that div. Hope that is phrased clearly enough. Cheers all,

    Read the article

  • How do I call Matlab in a script on Windows?

    - by Benjamin Oakes
    I'm working on a project that uses several languages:

      - SQL for querying a database
      - Perl/Ruby for quick-and-dirty processing of the data from the database and some other bookkeeping
      - Matlab for matrix-oriented computations
      - Various statistics languages (SAS/R/SPSS) for processing the Matlab output

    Each language fits its niche well and we already have a fair amount of code in each. Right now, there's a lot of manual work to run all these steps that would be much better scripted. I've already done this on Linux, and it works relatively well. On Linux:

      matlab -nosplash -nodesktop -r "command"

    or

      echo "command" | matlab -nosplash -nodesktop

    ...opens Matlab in a "command line" mode. (That is, no windows are created -- it just reads from STDIN, executes, and outputs to STDOUT/STDERR.) My problem is that on Windows (XP and 7), this same code opens up a window and doesn't read from / write to the command line. It just stares me blankly in the face, totally ignoring STDIN and STDOUT. How can I script running Matlab commands on Windows? I basically want something that will do:

      ruby database_query.rb
      perl legacy_code.pl
      ruby other_stuff.rb
      matlab processing_step_1.m
      matlab processing_step_2.m
      # etc, etc.

    I've found out that Matlab has an -automation flag on Windows to start an "automation server". That sounds like overkill for my purposes, and I'd like something that works on both platforms. What options do I have for automating Matlab in this workflow?

    Read the article

  • Asynchronous Controller is blocking requests in ASP.NET MVC through jQuery

    - by Jason
    I have just started using the AsyncController in my project to take care of some long-running reports. It seemed ideal at the time, since I could kick off the report and then perform a few other actions while waiting for it to come back and populate elements on the screen. My controller looks a bit like this. I tried to use a thread to perform the long task, which I'd hoped would free up the controller to take more requests:

      public class ReportsController : AsyncController
      {
          public void LongRunningActionAsync()
          {
              AsyncManager.OutstandingOperations.Increment();
              var newThread = new Thread(LongTask);
              newThread.Start();
          }

          private void LongTask()
          {
              // Do something that takes a really long time
              //.......
              AsyncManager.OutstandingOperations.Decrement();
          }

          public ActionResult LongRunningActionCompleted(string message)
          {
              // Set some data up on the view or something...
              return View();
          }

          public JsonResult AnotherControllerAction()
          {
              // Do a quick task...
              return Json("...");
          }
      }

    But what I am finding is that when I call LongRunningAction using the jQuery ajax request, any further requests I make after that back up behind it and are not processed until LongRunningAction completes. For example, call LongRunningAction, which takes 10 seconds, and then call AnotherControllerAction, which takes less than a second: AnotherControllerAction simply waits until LongRunningAction completes before returning a result. I've also checked the jQuery code, but this still happens if I specifically set "async: true":

      $.ajax({
          async: true,
          type: "POST",
          url: "/Reports.aspx/LongRunningAction",
          dataType: "html",
          success: function(data, textStatus, XMLHttpRequest) {
              // ...
          },
          error: function(XMLHttpRequest, textStatus, errorThrown) {
              // ...
          }
      });

    At the moment I just have to assume that I'm using it incorrectly, but I'm hoping one of you guys can clear my mental block!
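
    For illustration, here is a minimal sketch (not the poster's code) of the conventional AsyncController pattern, where the work is queued to the thread pool and the result is handed to the Completed method through AsyncManager.Parameters; the key "message" is assumed to match the LongRunningActionCompleted(string message) signature above. Note that the blocking described here is often caused by ASP.NET session-state locking serializing requests from the same session, which no amount of threading in the controller will fix by itself.

      using System.Threading;
      using System.Web.Mvc;

      public class ReportsController : AsyncController
      {
          public void LongRunningActionAsync()
          {
              AsyncManager.OutstandingOperations.Increment();
              ThreadPool.QueueUserWorkItem(_ =>
              {
                  try
                  {
                      Thread.Sleep(10000); // stand-in for the real report work
                      AsyncManager.Parameters["message"] = "report finished";
                  }
                  finally
                  {
                      AsyncManager.OutstandingOperations.Decrement();
                  }
              });
          }

          public ActionResult LongRunningActionCompleted(string message)
          {
              ViewData["message"] = message;
              return View();
          }
      }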

    Read the article

  • How to split this string and identify first sentence after last '*'?

    - by DaveDev
    I have to get a quick demo together for a client, so this is a bit hacky. Please don't flame me too much! :-) I'm getting a string similar to the following back from the database:

      The object of the following is to do: * blah 1 * blah 2 * blah 3 * blah 4. Some more extremely uninteresting text. Followed by yet another sentence full of extremely uninteresting text. Thankfully this is the last sentence.

    I need to format this so that each * represents a bullet point, and the sentence after the last * goes onto a new line, ideally as follows:

      The object of the following is to do:
      * blah 1   (StackOverflow wants to add bullet points here, but I just need '*')
      * blah 2
      * blah 3
      * blah 4.
      Some more extremely uninteresting text. Followed by yet another sentence full of extremely uninteresting text. Thankfully this is the last sentence.

    It's easy enough to split the string by the * character and replace it with <br /> *. I'm using the following for that:

      string description = GetDescription();
      description = description.Replace("*", "<br />*"); // it's going onto a web page.

    But the result this gives me is:

      The object of the following is to do:
      * blah 1
      * blah 2
      * blah 3
      * blah 4. Some more extremely uninteresting text. Followed by yet another sentence full of extremely uninteresting text. Thankfully this is the last sentence.

    I'm having a bit of difficulty identifying the first sentence after the last '*' so I can put a break there too. Can somebody show me how to do this?
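
    For illustration, a hedged C# sketch of one way to do the split: treat everything before the first '*' as the intro, each remaining '*'-delimited chunk as a bullet, and break the last chunk after its first full stop so the trailing sentences start on a new line. The "first '. '" rule is an assumption about the data, and GetDescription() is the method from the question.

      using System.Text;

      // Sketch only: assumes bullets are delimited by '*' and that the last
      // bullet ends at the first ". " inside the final chunk.
      string description = GetDescription();
      string[] parts = description.Split('*');

      var sb = new StringBuilder();
      sb.Append(parts[0].Trim());                    // intro text

      for (int i = 1; i < parts.Length; i++)
      {
          string chunk = parts[i].Trim();
          if (i == parts.Length - 1)
          {
              int cut = chunk.IndexOf(". ");         // end of the last bullet
              if (cut >= 0)
              {
                  sb.Append("<br />* " + chunk.Substring(0, cut + 1));
                  sb.Append("<br />" + chunk.Substring(cut + 1).Trim());
                  continue;
              }
          }
          sb.Append("<br />* " + chunk);
      }
      string formatted = sb.ToString();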

    Read the article

  • Effect.toggle and AJAX Calls

    - by kunalsawlani
    Hi, I am using the following code to capture all the AJAX calls being made from my app, in order to show a busy spinner until the call is completed. It works well when the requests are well spaced out. However, when requests are made in quick succession, some of the AJAX calls do not get registered, or the onCreate and onComplete actions get kicked off at the same time, and more often than not the busy spinner stays on the screen after all the calls have successfully completed. Is there any check I can perform at the end of a call so that, if the element is still visible, I can hide it?

      document.observe("dom:loaded", function() {
          $('loading').hide();
          Ajax.Responders.register({
              // When an Ajax call is made.
              onCreate: function() {
                  new Effect.toggle('loading', 'appear');
                  new Effect.Opacity('display-area', { from: 1.0, to: 0.3, duration: 0.7 });
              },
              onComplete: function() {
                  new Effect.toggle('loading', 'appear');
                  new Effect.Opacity('display-area', { from: 0.3, to: 1, duration: 0.7 });
              }
          });
      });

    Thanks!

    Read the article

  • Skip subdirectory in python import

    - by jstaab
    Ok, so I'm trying to change this:

      app/
        - lib.py
        - models.py
        - blah.py

    into this:

      app/
        - __init__.py
        - lib.py
        - models/
            - __init__.py
            - user.py
            - account.py
            - banana.py
        - blah.py

    and still be able to import my models using from app.models import User rather than having to change it to from app.models.user import User all over the place. Basically, I want everything to treat the package as a single module, but be able to navigate the code in separate files for development ease. The reason I can't do something like adding for file in __all__: from file import * to __init__.py is that I have circular references between the model files. A fix I don't want is to import those models from within the functions that use them, because that's super ugly. Let me give you an example:

      user.py
        ...
        from app.models import Banana
        ...

      banana.py
        ...
        from app.models import User
        ...

    I wrote a quick pre-processing script that grabs all the files, re-writes them to put imports at the top, and puts everything into models.py, but that's hardly an improvement, since now my stack traces don't show the line number I actually need to change. Any ideas? I always thought __init__ was probably magical, but now that I dig into it, I can't find anything that lets me provide myself this really simple convenience.

    Read the article

  • Combining two exe files

    - by Sophia
    Here's some background to my problem: I have a project in Visual C++ 2006 and a project in Visual C++ 2010 Express. Both compile to form an exe file each. I cannot convert my 2006 project to 2010 because I get a lot of "unable to load project" errors. I also cannot port my 2010 project code to 2006 (I always get errors no matter what I try, something to do with libraries). My final solution requires me to have only ONE executable. Is there anything I can do to achieve that? I've done a quick search on Google and found there to be exe joiners, but I've also heard that those things are often used to make malware. For reference, I am working with "dummy" clients, and therefore want to simplify things on their end as much as possible. Thus, having them execute one exe is better than having them execute two. Also, I do not wish for their antivirus to go haywire because I used some program to join two exes together. What should I do? Edit: The two projects do different things. For example, the project in VS2006 sets up a server, and the project in VS2010 grabs info on the user's OS. The code for the "server", I think, has a lot of dependencies and for some reason cannot be converted to Visual C++ 2010. The code for "grabbing" requires some newer libraries and compiling options, and would not work if I ported it to 2006.
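
    One conventional approach, sketched here with made-up resource names, is a small launcher that ships as the single exe: the two existing executables are embedded as resources, written to a temp folder at run time, and started with Process.Start. Whether this satisfies the antivirus and "dummy client" requirements is something to verify; it is only a sketch.

      using System.Diagnostics;
      using System.IO;
      using System.Reflection;

      class Launcher
      {
          // Resource names are hypothetical; they must match the names used
          // when embedding the exes into this project.
          static string Extract(string resourceName)
          {
              string path = Path.Combine(Path.GetTempPath(), resourceName);
              using (Stream res = Assembly.GetExecutingAssembly()
                                          .GetManifestResourceStream(resourceName))
              using (FileStream file = File.Create(path))
              {
                  res.CopyTo(file); // Stream.CopyTo needs .NET 4
              }
              return path;
          }

          static void Main()
          {
              Process.Start(Extract("server2006.exe"));               // the VS2006 server exe
              Process.Start(Extract("osinfo2010.exe")).WaitForExit(); // the VS2010 exe
          }
      }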

    Read the article

  • Mgmt wants to re-title my position: Any help...? [closed]

    - by JohnFlyTN
    Management here wants to re-title my position, since I'm doing quite a bit of different work than was originally planned. They want my input. After a quick glance over my skill set and job duties, what would we need to describe this position as? I'll just list things I'm at least proficient in; I will not list things I have only a passing knowledge of.

    About me: ~10 years software development.
    Languages: C, C++, Perl, PHP, C#, TCL, Unix shell scripting, SQL (TSQL, PLSQL)
    Systems: MS-DOS, Windows 3.1 to 7 for client, NT 4 to 2008 for server, OS/2, IBM MVS & z/OS, Linux (multiple distros), AIX

    Current position: I do all sorts of in-house software. The range is single-user apps to large systems spanning multiple OSs. One of the larger projects I've designed and coded is about 100k lines of C#, plus a database where I have been the sole designer and maintainer. I have near-total freedom to design as I see fit; restraints are usually budgetary.

    Skills required to replace me in my current role: Windows and Unix admin, database design, .NET up to 3.5 (C#, ASP.NET), C++, Perl, good skills in designing large and efficient data processing systems.

    Given this small amount of information, what would you see this position being titled? (Is more information required to render a decision?)

    Read the article

  • Too many columns to index - use MySQL Partitions?

    - by Christopher Padfield
    We have an application with a table with 20+ columns that are all searchable. Building indexes for all these columns would make write queries very slow, and any really useful index would often have to span multiple columns, increasing the number of indexes needed. However, for 95% of these searches, only a small subset of those rows needs to be searched, and quite a small number - say 50,000 rows. So we have considered using MySQL partitioned tables - having a column that is basically isActive, which is what we divide the two partitions by. Most search queries would be run with isActive=1. Most queries would then be run against the small 50,000-row partition and be quick without other indexes. The only issue is that the set of rows where isActive=1 is not fixed; i.e. it's not based on the date of the row or anything fixed like that; we will need to update isActive based on use of the data in that row. As I understand it that is no problem though; the data would just be moved from one partition to another during the UPDATE query. We do have a PK on id for the row, though, and I am not sure if this is a problem; the manual seemed to suggest the partitioning had to be based on the primary key. This would be a huge problem for us because the primary key id has no bearing on whether the row is active.

    Read the article

  • Testing Web Application: "Mirror" ad-hoc testing in another window

    - by Narcissus
    I don't even really know if the title is the best way to explain what I'm trying to do, but anyway... We have a web app that is being ported to a number of DB backends via MDB2. Our unit tests are pretty lacking at the moment, but our internal users are pretty good at knowing what to test to see if things are broken. What I'm 'imagining' is a browser plug-in (I don't really care which browser it is for) or a similar system that essentially takes every event from one window and 'mirrors' it in the other browser(s). The reason I'd like this is so that I can have various installations that use different DB backends, and have the user open a window/tab to each installation. From there, however, I'd like them to be able to 'work' in one window and have that 'work' occur at the same time in each of the 'cloned' windows. From there, they should be able to do some quick eyeballing of the information that comes back, without having to worry (very much) about timing differences and so on. I know it's a big ask, but I figure if anyone knows of a solution, I'd find it here... Any thoughts?

    Read the article

  • How can I test my TSQL syntax?

    - by acidzombie24
    Quick question: how do I get some kind of database to use to test my SQL syntax and create basic data? I have SQLite code which I'll soon put on a server. I have SQL Server 2008 installed with Visual Studio 2010. I tried connecting to the database and had no luck. I also tried using a .mdf file instead, thinking that since it's a file I won't have connectivity issues. Wrong, I still couldn't connect, and I used this site to help me (I'm aware it's for 2005). In that case I used

      var conn = new SqlConnection(@"Server=.\SQLExpress;AttachDbFilename=C:\dev\src\test\SQL_DB_VS_Test\test.mdf;Database=dbo;Trusted_Connection=Yes;");

    and got the exception

      Unable to open the physical file "C:\dev\src\test\SQL_DB_VS_Test\test.mdf". Operating system error 5: "5(Access is denied.)". Cannot attach the file 'C:\dev\src\test\SQL_DB_VS_Test\test.mdf' as database 'dbo'.

    With trusted = no I get "Login failed for user ''." (What user am I supposed to set?) I created the .mdf with Visual Studio somehow.
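
    For what it's worth, a hedged sketch of a connection string that usually works for attaching a local .mdf to SQL Server Express: 'dbo' is a schema rather than a database name, so it is dropped, and User Instance=True attaches the file under the current Windows account, which commonly avoids the "Access is denied" error coming from the service account. Whether user instances are enabled on this particular install is an assumption.

      using System.Data.SqlClient;

      var conn = new SqlConnection(
          @"Server=.\SQLExpress;" +
          @"AttachDbFilename=C:\dev\src\test\SQL_DB_VS_Test\test.mdf;" +
          @"Integrated Security=True;User Instance=True");
      conn.Open();
      using (var cmd = new SqlCommand("SELECT 1", conn))
      {
          cmd.ExecuteScalar();   // quick connectivity/syntax smoke test
      }
      conn.Close();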

    Read the article

  • How do I introspect on a SQL Server?

    - by MetaHyperBolic
    I have a server with a vendor application which is heavily database-reliant. I need to make some minor changes to the data in a few tables in the database in an automated fashion. Just INSERTs and UPDATEs, nothing fancy. Vendors being vendors, I can never be quite sure when they change the schema of a database during upgrade. To that end, how do I ask the SQL server, in some scriptable fashion, "Hey, does this table still exist? Yeah, cool, okay, but does it have this column? What's the data type and size on that? Is it nullable? Could you give me a list of tables? In this table, could you give me a list of columns? Any primary keys there?" I do not need to do this for the whole schema, only part of it, just a quick check of the database before I launch into things. We have Microsoft SQL Server 2005 on it currently, but it might easily move to Microsoft SQL Server 2008. I am probably not using the correct terminology when searching. I do know that ORM is not only too much overhead for this sort of thing, but also that I have no chance of pitching it to my coworkers.
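
    The catalog views answer exactly these questions on both SQL Server 2005 and 2008. A minimal sketch, shown in C# only because it is scriptable from anywhere; the connection string and table name are placeholders. INFORMATION_SCHEMA.TABLES covers "does this table still exist", and TABLE_CONSTRAINTS/KEY_COLUMN_USAGE cover primary keys in the same style.

      using System;
      using System.Data.SqlClient;

      class SchemaCheck
      {
          static void Main()
          {
              // Placeholder connection string and table name.
              using (var conn = new SqlConnection(
                  @"Server=myServer;Database=VendorDb;Integrated Security=True"))
              {
                  conn.Open();
                  const string sql =
                      "SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE " +
                      "FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @table";
                  using (var cmd = new SqlCommand(sql, conn))
                  {
                      cmd.Parameters.AddWithValue("@table", "SomeVendorTable");
                      using (var reader = cmd.ExecuteReader())
                      {
                          while (reader.Read())
                              Console.WriteLine("{0} {1}({2}) nullable={3}",
                                  reader[0], reader[1], reader[2], reader[3]);
                      }
                  }
              }
          }
      }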

    Read the article

  • What is the fastest (to access) struct-like object in Python?

    - by DNS
    I'm optimizing some code whose main bottleneck is running through and accessing a very large list of struct-like objects. Currently I'm using namedtuples, for readability. But some quick benchmarking using 'timeit' shows that this is really the wrong way to go where performance is a factor:

      Named tuple with a, b, c:
      >>> timeit("z = a.c", "from __main__ import a")
      0.38655471766332994

      Class using __slots__, with a, b, c:
      >>> timeit("z = b.c", "from __main__ import b")
      0.14527461047146062

      Dictionary with keys a, b, c:
      >>> timeit("z = c['c']", "from __main__ import c")
      0.11588272541098377

      Tuple with three values, using a constant key:
      >>> timeit("z = d[2]", "from __main__ import d")
      0.11106188992948773

      List with three values, using a constant key:
      >>> timeit("z = e[2]", "from __main__ import e")
      0.086038238242508669

      Tuple with three values, using a local key:
      >>> timeit("z = d[key]", "from __main__ import d, key")
      0.11187358437882722

      List with three values, using a local key:
      >>> timeit("z = e[key]", "from __main__ import e, key")
      0.088604143037173344

    First of all, is there anything about these little timeit tests that would render them invalid? I ran each several times, to make sure no random system event had thrown them off, and the results were almost identical. It would appear that dictionaries offer the best balance between performance and readability, with classes coming in second. This is unfortunate, since, for my purposes, I also need the object to be sequence-like; hence my choice of namedtuple. Lists are substantially faster, but constant keys are unmaintainable; I'd have to create a bunch of index constants, i.e. KEY_1 = 1, KEY_2 = 2, etc., which is also not ideal. Am I stuck with these choices, or is there an alternative that I've missed?

    Read the article

  • Polling a web URL until event

    - by Jaxo
    I'm really sorry about the crappy title - if anybody has a better way of wording it, please edit it! I basically need to have a C# application run a function if the output of a URL is a certain value. For example, if the website says blue, the background colour will be blue; red to make it red, etc. The problem is I don't want to spam my webserver with checks. The 4 bytes it downloads each time are negligible, but if I were to deploy this type of system on multiple computers, it would get slower and slower and the bandwidth would add up quickly. So my question is: how can my desktop application run a piece of code only when a web URL has different output, without checking each time? I can't use sockets, and any sort of LAN protocol won't end up working. My reasoning behind this potentially nefarious code is to be able to mute computers by updating a file on the website (as you may have seen in my previous question today, sorry!). I'd like it to be rather quick, and not have the refresh time minutes apart; a few seconds at most would be ideal. How can I accomplish this? The website's code is easy, but getting the C# application to check when it changes is the part I'm stuck on. Nothing shows up on the website other than the command. Thanks!
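
    One way to keep the checks cheap without giving up polling is a conditional GET: send If-Modified-Since and let the server answer 304 Not Modified until the file actually changes, so each poll costs a header exchange rather than a download. A hedged C# sketch, assuming the web server honours If-Modified-Since for that URL (the URL and the polling loop are left out):

      using System;
      using System.IO;
      using System.Net;

      class CommandPoller
      {
          static DateTime lastChanged = DateTime.MinValue;

          // Returns the new command text, or null if the URL has not changed.
          static string CheckForChange(string url)
          {
              var request = (HttpWebRequest)WebRequest.Create(url);
              request.IfModifiedSince = lastChanged;
              try
              {
                  using (var response = (HttpWebResponse)request.GetResponse())
                  using (var reader = new StreamReader(response.GetResponseStream()))
                  {
                      lastChanged = response.LastModified;
                      return reader.ReadToEnd();
                  }
              }
              catch (WebException ex)
              {
                  var resp = ex.Response as HttpWebResponse;
                  if (resp != null && resp.StatusCode == HttpStatusCode.NotModified)
                      return null;   // nothing new; almost no bandwidth used
                  throw;
              }
          }
      }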

    Read the article

  • Need to sort 3 arrays by one key array

    - by jeff6461
    I am trying to get 3 arrays sorted by one key array in Objective-C for the iPhone. Here is an example to help out:

      Array 1   Array 2   Array 3   Array 4
         1        15        21        7
         3        12         8        9
         6         7         8        0
         2         3         4        8

    When sorted, I want this to look like:

      Array 1   Array 2   Array 3   Array 4
         1        15        21        7
         2         3         4        8
         3        12         8        9
         6         7         8        0

    So arrays 2, 3 and 4 move with Array 1 when it is sorted. Currently I am using a bubble sort to do this, but it lags so badly that it crashes my app. The code I am using is:

      int flag = 0;
      int i = 0;
      int temp = 0;
      do {
          flag = 1;
          for (i = 0; i < distancenumber; i++) {
              if (distance[i] > distance[i+1]) {
                  temp = distance[i];
                  distance[i] = distance[i+1];
                  distance[i+1] = temp;

                  temp = FlowerarrayNumber[i];
                  FlowerarrayNumber[i] = FlowerarrayNumber[i+1];
                  FlowerarrayNumber[i+1] = temp;

                  temp = BeearrayNumber[i];
                  BeearrayNumber[i] = BeearrayNumber[i+1];
                  BeearrayNumber[i+1] = temp;

                  flag = 0;
              }
          }
      } while (flag == 0);

    where distancenumber is the number of elements in all of the arrays, distance is Array 1 (my key array), and the other two are getting sorted along with it. If anyone can help me get a merge sort (or something faster; it is running on an iPhone, so it needs to be quick and light) to do this, that would be great. I cannot figure out how the recursion works in this method, so I'm having a hard time getting the code to work. Any help would be greatly appreciated.
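
    Since the underlying problem is "sort several parallel arrays by one key array", one common trick is to sort an array of indices by the key and then read (or rebuild) the other arrays through that permutation; the framework sort is O(n log n), so the bubble-sort lag disappears. A hedged sketch in C# rather than Objective-C, with illustrative array names:

      using System;

      class ParallelSort
      {
          static void Main()
          {
              // Illustrative data only; the real arrays come from the app.
              int[] distance = { 1, 3, 6, 2 };    // key array
              int[] flower   = { 15, 12, 7, 3 };
              int[] bee      = { 21, 8, 8, 4 };
              int[] extra    = { 7, 9, 0, 8 };

              int n = distance.Length;
              int[] order = new int[n];
              for (int i = 0; i < n; i++) order[i] = i;

              // Sort the index array by the key values.
              Array.Sort(order, (a, b) => distance[a].CompareTo(distance[b]));

              // Walk every parallel array through the permutation.
              foreach (int idx in order)
                  Console.WriteLine("{0,3} {1,3} {2,3} {3,3}",
                      distance[idx], flower[idx], bee[idx], extra[idx]);
          }
      }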

    Read the article

  • Setting up a NAS with Citrix XenServer

    - by JasonBrown
    Just a quick query for anyone who's worked with XenServer. I want to set up a NAS at home but with virtualization (I've looked into VMware Server and KVM, and I quite like KVM!), and I was told about XenServer 5.5. I have commodity hardware (ASUS board, dual-core 2.66GHz CPU with 8GB RAM), and I need to set up a file server to house about 2-3TB worth of data (big chunky video - not porn!). I need to run Linux (preferably CentOS) but also run Windows virtualised for testing. I was thinking of going the XenServer route; however, I want to be able to offer a VM access to the 2-3TB of HDDs (5 drives) directly so it can do its thing (maybe using FreeNAS). Would this be possible with XenServer? Or will I have to do more work - and another box - to offer this? My goals are to use FreeNAS (ZFS!) for the file server, CentOS for SVN and other bits we need to use (LAMP stack), and Windows for our Win32 testing, all on one box. I see the iSCSI target bits and get scared.

    Read the article

  • Consecutive calls/evaluations in a form?

    - by Dave
    Hey guys, simple question... I'm working with XLISP to write a program, but I seem to have run into a simple fundamental problem that I can't work around: perhaps someone has a quick fix. I'm trying to write an if statement whose then-clause evaluates multiple forms and returns the value of the last. For example:

      (setq POSITION 'DINING-ROOM)
      (defun LOOK (DIRECTION ROOM) ... )
      (defun SETPOS (ROOM) ... )
      (defun WHERE () ... )

      (defun MOVE (DIRECTION)
        (if (not (equal nil (LOOK DIRECTION POSITION))) ; If there is a room in that direction
          ( ; Then-block: Go to that room. Return where you are.
            (SETPOS (LOOK DIRECTION ROOM))
            (WHERE)
          )
          ( ; Else-block: Return error
            (list 'CANT 'GO 'THERE)
          )
        )
      )

    The logical equivalent intended is:

      function Move (Direction) {
          if (Look(Direction, Room) != null) {
              SetPos(Look(Direction, Room));
              return Where();
          } else {
              return "Can't go there";
          }
      }

    (Apologies for the poor web formatting.) The problem I have is with:

      (
        (SETPOS (LOOK DIRECTION ROOM))
        (WHERE)
      )

    I simply want to return the evaluation of WHERE, but I need to execute the SETPOS function first. XLISP doesn't like the extra parentheses: if I remove the outer set, my WHERE list becomes my else (I don't want that). If I remove the sets around SETPOS and WHERE, it treats WHERE like an argument for SETPOS; I also don't want that. So, how do I simply evaluate the first, then the second, and then return the value of the last evaluated?

    Read the article

  • Delete on a very deep tree

    - by Kathoz
    I am building a suffix trie (unfortunately, no time to properly implement a suffix tree) for a 10-character set. The strings I wish to parse are going to be rather long (up to 1M characters). The tree is constructed without any problems; however, I run into some when I try to free the memory after being done with it. In particular, if I set up my constructor and destructor like this (where CNode.child is a pointer to an array of 10 pointers to other CNodes, and count is a simple unsigned int):

      CNode::CNode(){
          count = 0;
          child = new CNode* [10];
          memset(child, 0, sizeof(CNode*) * 10);
      }

      CNode::~CNode(){
          for (int i=0; i<10; i++)
              delete child[i];
      }

    I get a stack overflow when trying to delete the root node. I might be wrong, but I am fairly certain that this is due to too many nested destructor calls (each destructor calls up to 10 other destructors). I know this is suboptimal both space- and time-wise; however, this is supposed to be a quick-and-dirty solution to the repeated substring problem. tl;dr: how would one go about freeing the memory occupied by a very deep tree? Thank you for your time.

    Read the article

  • jQuery fade li element and cycle it on mouse hover.

    - by Pennf0lio
    I'm trying to recreate an effect in jQuery where an element (an <li> containing an <img>) is cycled with a fading effect when hovered. The '<li>' contains an '<img>' (image screenshot). When the mouse is on top of the element, it keeps cycling all the items in the '<ul>' with a fading effect. When the mouse moves away, it stops cycling the list. I also want to add a pager so you can navigate the list. My existing code: link text. My problem: the current code has a problem with the pagination; it added entries for all the images in the code, so instead of 1-8 only, it continued to add another 1-8 and another. The second problem is that it also starts cycling and fading when the page loads. The cycling and fading should only happen when the mouse is on top of the element. I don't know if the Cycle plugin is really required for this approach; I want it to be as minimal as possible. I just used the Cycle plugin because it was a quick answer to my problem. The effect I'm trying to achieve: link text (WARNING: contains some pornography). Sorry about my example; I tried finding other alternatives but had no luck. Thanks and merry Xmas!

    Read the article

  • Looking for out-of-place directories in an SVN working copy?

    - by jthg
    An annoyance that I sometimes come across with SVN is the working copy getting corrupted by one of the .svn folders getting moved from its original location. It doesn't happen often if you're careful and use the proper tools for all moves and renames, but it still somehow happens from time to time. First, does anyone know if there's a good way to catch the problem before a commit is even done? CruiseControl usually catches the problem, but there are plenty of cases it wouldn't catch. Second, is there a quick and easy way to check for an out-of-place .svn folder if I suspect that there is one? I can definitely do it manually by deducing which directory is out of place based on the compiler errors, or by diffing the working copy with another clean checkout. But this seems like a problem that SVN could diagnose in a second by giving me a list of all directories whose parent directory in the working copy doesn't match its parent directory in the repository. Is there some way to have SVN give me a list like that? Thanks.

    Read the article

  • NullPointerException on Activity Testing Tutorial

    - by Bendik
    Hello, I am currently trying the activity testing tutorial (found here), and have a problem. It seems that whenever I try to call something inside the UI thread, I get a java.lang.NullPointerException.

      public void testSpinnerUI() {
          mActivity.runOnUiThread(new Runnable() {
              public void run() {
                  mSpinner.requestFocus();
              }
          });
      }

    This gives me "Incomplete: java.lang.NullPointerException" and nothing else. I have tried this on two different samples now, with the same result. I tried with a try/catch clause around the mSpinner.requestFocus() call, and it seems that mSpinner is null inside the thread. I have set it up properly with the setUp() function found in the same sample, and a quick assertNotNull(mSpinner) shows me that mSpinner is in fact not null after the setUp() function. What can be the cause of this? EDIT: OK, some more testing has been done. It seems that the application being tested resets between each test. This essentially makes me have to reinstantiate all variables between each test. Is this normal?

    Read the article

  • Best ways to format LINQ queries.

    - by Aren B
    Before you ignore / vote-to-close this question, I consider this a valid question to ask because code clarity is an important topic of discussion and is essential to writing maintainable code, and I would greatly appreciate answers from those who have come across this before. I've recently run into this problem: LINQ queries can get pretty nasty real quick because of the large amount of nesting. Below are some examples of the differences in formatting that I've come up with (for the same relatively non-complex query).

    No Formatting:

      var allInventory = system.InventorySources.Select(src => new { Inventory = src.Value.GetInventory(product.OriginalProductId, true), Region = src.Value.Region }).GroupBy(i => i.Region, i => i.Inventory);

    Elevated Formatting:

      var allInventory = system.InventorySources
          .Select(src => new
              {
                  Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                  Region = src.Value.Region
              })
          .GroupBy(
              i => i.Region,
              i => i.Inventory);

    Block Formatting:

      var allInventory = system.InventorySources
          .Select(
              src => new
              {
                  Inventory = src.Value.GetInventory(product.OriginalProductId, true),
                  Region = src.Value.Region
              })
          .GroupBy(
              i => i.Region,
              i => i.Inventory
          );

    List Formatting:

      var allInventory = system.InventorySources
          .Select(src => new {
              Inventory = src.Value.GetInventory(product.OriginalProductId, true),
              Region = src.Value.Region
          })
          .GroupBy(i => i.Region,
                   i => i.Inventory);

    I want to come up with a standard for LINQ formatting so that it maximizes readability and understanding and looks clean and professional. So far I can't decide, so I turn the question over to the professionals here.
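
    One more option worth having on the table: for queries that are only a projection feeding a grouping, C# query syntax removes most of the lambda nesting, which can make the formatting debate moot. A sketch of the same query (it still relies on the system and product variables from the snippets above, and folds the intermediate anonymous type into the group clause):

      var allInventory =
          from src in system.InventorySources
          group src.Value.GetInventory(product.OriginalProductId, true)
          by src.Value.Region;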

    Read the article

  • How do I daemonize an arbitrary script in unix?

    - by dreeves
    I'd like a daemonizer that can turn an arbitrary, generic script or command into a daemon. There are two common cases I'd like to deal with:

      1. I have a script that should run forever. If it ever dies (or on reboot), restart it. Don't let there ever be two copies running at once (detect if a copy is already running and don't launch it in that case).

      2. I have a simple script or command line command that I'd like to keep executing repeatedly forever (with a short pause between runs). Again, don't allow two copies of the script to ever be running at once.

    Of course it's trivial to write a "while(true)" loop around the script in case 2 and then apply a solution for case 1, but a more general solution will just solve case 2 directly, since that applies to the script in case 1 as well (you may just want a shorter or no pause if the script is not intended to ever die (of course if the script really does never die then the pause doesn't actually matter)). Note that the solution should not involve, say, adding file-locking code or PID recording to the existing scripts. More specifically, I'd like a program "daemonize" that I can run like

      % daemonize myscript arg1 arg2

    or, for example,

      % daemonize 'echo `date` >> /tmp/times.txt'

    which would keep a growing list of dates appended to times.txt. (Note that if the argument(s) to daemonize is a script that runs forever as in case 1 above, then daemonize will still do the right thing, restarting it when necessary.) I could then put a command like the above in my .login and/or cron it hourly or minutely (depending on how worried I was about it dying unexpectedly). NB: The daemonize script will need to remember the command string it is daemonizing so that if the same command string is daemonized again it does not launch a second copy. Also, the solution should ideally work on both OS X and Linux, but solutions for one or the other are welcome. (If I'm thinking of this all wrong or there are quick-and-dirty partial solutions, I'd love to hear that too.)

    Read the article
