Search Results

Search found 9125 results on 365 pages for 'task grouping'.

Page 316/365

  • Is there a more efficient way to do this?

    - by garethdn
    I'm hoping there is a better way to do the following. I'm creating a jigsaw-type application and this is the current code I'm using: -(void) touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; //location of current touch CGPoint location = [touch locationInView:self.view]; if ([touch view] == img1) { [self animateFirstTouch:img1 withLocation:location]; } else if ([touch view] == img2) { [self animateFirstTouch:img2 withLocation:location]; } else if ([touch view] == img3) { [self animateFirstTouch:img3 withLocation:location]; } else if ([touch view] == img4) { [self animateFirstTouch:img4 withLocation:location]; } else if { ...... ...... } else if ([touch view] == img40) { [self animateFirstTouch:img40 withLocation:location]; return; } } I'm hoping that there is a better, more efficient way to do this, rather than naming every image. I'm thinking something like: if the touched view is a UIImageView, then perform some task. The same goes for touchesEnded: -(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event { UITouch *touch = [touches anyObject]; //location of current touch CGPoint location = [touch locationInView:self.view]; if ([touch view] == image1) { [self animateReleaseTouch:image1 withLocation:location]; } else if ([touch view] == image2) { [self animateReleaseTouch:image2 withLocation:location]; } else if ([touch view] == image3) { [self animateReleaseTouch:image3 withLocation:location]; } else if ([touch view] == image4) { [self animateReleaseTouch:image4 withLocation:location]; } else if { ...... ...... } else if ([touch view] == image40) { [self animateReleaseTouch:image40 withLocation:location]; } return; } Any help please?
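
    One direction that may help, sketched under the assumption that every puzzle piece is a UIImageView and that animateFirstTouch:withLocation: / animateReleaseTouch:withLocation: accept any such view: test the class of the touched view and pass it straight through, instead of comparing it against each named outlet.

        // Rough sketch (untested): no per-image comparisons needed.
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:self.view];
            UIView *touchedView = [touch view];
            if ([touchedView isKindOfClass:[UIImageView class]]) {
                [self animateFirstTouch:(UIImageView *)touchedView withLocation:location];
            }
        }

        - (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
            UITouch *touch = [touches anyObject];
            CGPoint location = [touch locationInView:self.view];
            UIView *touchedView = [touch view];
            if ([touchedView isKindOfClass:[UIImageView class]]) {
                [self animateReleaseTouch:(UIImageView *)touchedView withLocation:location];
            }
        }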

    Read the article

  • SQL problem - select across multiple tables (user groups)

    - by morpheous
    I have a db schema which looks something like this: create table user (id int, name varchar(32)); create table group (id int, name varchar(32)); create table group_member (foobar_id int, user_id int, flag int); I want to write a query that allows me to do the following: Given a valid user id (UID), fetch the ids of all users that are in the same group as the specified user id (UID) AND have group_member.flag=3. Rather than just being handed the SQL, I want to learn how to think like a DB programmer. As a coder, SQL is my weakest link (since I am far more comfortable with imperative languages than declarative ones) - but I want to change that. Anyway here are the steps I have identified as necessary to break down the task. I would be grateful if some SQL guru can demonstrate the simple SQL statements - i.e. atomic SQL statements, one for each of the identified subtasks below, and then finally, how I can combine those statements to make the ONE statement that implements the required functionality. Here goes (assume specified user_id [UID] = 1): //Subtask #1. Fetch list of all groups of which I am a member Select group.id from user inner join group_member where user.id=group_member.user_id and user.id=1 //Subtask #2 Fetch a list of all members who are members of the groups I am a member of (i.e. groups in subtask #1) Not sure about this ... select user.id from user, group_member gm1, group_member gm2, ... [Stuck] //Subtask #3 Get list of users that satisfy criteria group_member.flag=3 Select user.id from user inner join group_member where user.id=group_member.user_id and user.id=1 and group_member.flag=3 Once I have the SQL for subtask #2, I'd then like to see how the complete SQL statement is built from these subtasks (you don't have to use the SQL in the subtasks, it's just a way of explaining the steps involved - also, my SQL may be incorrect/inefficient, if so, please feel free to correct it, and point out what was wrong with it). Thanks
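
    For what it's worth, a sketch of how the subtasks might combine (assuming group_member.foobar_id is the group id; the real column name may differ): join group_member to itself on the group column, anchor one side to the specified user, and read the other side's user_id.

        -- Subtask #1 (groups of user 1) becomes the gm_me side of the self-join;
        -- Subtasks #2/#3 read the other side (gm_other) and filter on flag = 3.
        SELECT DISTINCT gm_other.user_id
        FROM   group_member AS gm_me
        JOIN   group_member AS gm_other
               ON gm_other.foobar_id = gm_me.foobar_id   -- same group
        WHERE  gm_me.user_id = 1                          -- the specified user (UID)
          AND  gm_other.flag = 3
          AND  gm_other.user_id <> gm_me.user_id;         -- optionally exclude the user himself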

    Read the article

  • How to make smooth transition from a WebBrowser control to an Image in Silverlight 4?

    - by Trex
    Hi, I have the following XAML on my page: `<Grid x:Name="LayoutRoot"> <Viewbox Stretch="Uniform"> <Image x:Name="myImage" /> </Viewbox> <WebBrowser x:Name="myBrowser" /> </Grid>` and then in the code-behind I'm switching the visibility between the image and the browser content: myBrowser.Visibility = Visibility.Collapsed; myImage.Source = new BitmapImage(new Uri(p)); myImage.Visibility = Visibility.Visible; and myImage.Visibility = Visibility.Collapsed; myBrowser.Source = new Uri(myPath + p, UriKind.Absolute); myBrowser.Visibility = Visibility.Visible; This works fine, but what the client now wants is a smooth transition between when the Image is shown and when the browser is shown. I tried several approaches but always ran into a dead end. Do you have any ideas? I tried setting two states using the VSM and then displaying a white rectangle on top as an overlay, before the swap takes place, but that didn't work (I guess it's because nothing can be placed above the WebBrowser?). I tried setting the Visibility of the image control and the webbrowser control using the VSM, but that didn't work either. I really don't know what else to try to solve this simple task. Any help is greatly appreciated. Jan
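
    One approach that may work, sketched under the assumption that this is a Silverlight 4 out-of-browser app where WebBrowserBrush (with SourceName and Redraw) is available, and that a Rectangle named snapshotRect has been added above the browser in the XAML: since the browser itself cannot be faded, snapshot its rendering into a brush, hide the real browser, and fade the snapshot out over the image.

        // Hypothetical sketch; snapshotRect and the 400 ms duration are my own choices.
        private void ShowImageWithFade(Uri imageUri)
        {
            // 1. Paint the browser's current rendering onto the overlay rectangle.
            var brush = new WebBrowserBrush { SourceName = "myBrowser" };
            brush.Redraw();
            snapshotRect.Fill = brush;
            snapshotRect.Visibility = Visibility.Visible;

            // 2. The real WebBrowser can now be hidden and the image prepared underneath.
            myBrowser.Visibility = Visibility.Collapsed;
            myImage.Source = new BitmapImage(imageUri);
            myImage.Visibility = Visibility.Visible;

            // 3. Fade the snapshot out to reveal the image.
            var fade = new DoubleAnimation { From = 1, To = 0, Duration = TimeSpan.FromMilliseconds(400) };
            Storyboard.SetTarget(fade, snapshotRect);
            Storyboard.SetTargetProperty(fade, new PropertyPath("Opacity"));
            var sb = new Storyboard();
            sb.Children.Add(fade);
            sb.Completed += (s, e) => snapshotRect.Visibility = Visibility.Collapsed;
            sb.Begin();
        }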

    Read the article

  • Get active window title in X

    - by dutt
    I'm trying to get the title of the active window. The application is a background task, so if the user has Eclipse open the function returns "Eclipse - blabla"; it's not getting the window title of my own window. I'm developing this in Python 2.6 using PyQt4. My current solution, borrowed and slightly modified from an old answer here at SO, looks like this: def get_active_window_title(): title = '' root_check = '' root = Popen(['xprop', '-root'], stdout=PIPE) if root.stdout != root_check: root_check = root.stdout for i in root.stdout: if '_NET_ACTIVE_WINDOW(WINDOW):' in i: id_ = i.split()[4] id_w = Popen(['xprop', '-id', id_], stdout=PIPE) for j in id_w.stdout: if 'WM_ICON_NAME(STRING)' in j: if title != j.split()[2]: return j.split("= ")[1].strip(' \n\"') It works for most windows, but not all. For example it can't find my kopete chat windows, or the name of the application I'm currently developing. My next try looks like this: def get_active_window_title(self): screen = wnck.screen_get_default() if screen == None: return "Could not get screen" window = screen.get_active_window() if window == None: return "Could not get window" title = window.get_name() return title; But for some reason window is always None. Does somebody have a better way of getting the current window title, or a way to modify one of my approaches so that it works for all windows? Edit: In case anybody is wondering, this is the way I found that seems to work for all windows. def get_active_window_title(self): root_check = '' root = Popen(['xprop', '-root'], stdout=PIPE) if root.stdout != root_check: root_check = root.stdout for i in root.stdout: if '_NET_ACTIVE_WINDOW(WINDOW):' in i: id_ = i.split()[4] id_w = Popen(['xprop', '-id', id_], stdout=PIPE) id_w.wait() buff = [] for j in id_w.stdout: buff.append(j) for line in buff: match = re.match("WM_NAME\((?P<type>.+)\) = (?P<name>.+)", line) if match != None: type = match.group("type") if type == "STRING" or type == "COMPOUND_TEXT": return match.group("name") return "Active window not found"

    Read the article

  • How to test a Grails Service that utilizes a criteria query (with spock)?

    - by user569825
    I am trying to test a simple service method. That method mainly just returns the results of a criteria query, and I want to test whether it returns the one result or not (depending on what is queried for). The problem is that I am unsure how to write the corresponding test correctly. I am trying to accomplish it via Spock, but doing the same with any other way of testing also fails. Can someone tell me how to amend the test in order to make it work for the task at hand? (BTW I'd like to keep it a unit test, if possible.) The EventService Method public HashSet<Event> listEventsForDate(Date date, int offset, int max) { date.clearTime() def c = Event.createCriteria() def results = c { and { le("startDate", date+1) // starts tonight at midnight or prior? ge("endDate", date) // ends today or later? } maxResults(max) order("startDate", "desc") } return results } The Spock Specification package myapp import grails.plugin.spock.* import spock.lang.* class EventServiceSpec extends Specification { def event def eventService = new EventService() def setup() { event = new Event() event.publisher = Mock(User) event.title = 'et' event.urlTitle = 'ut' event.details = 'details' event.location = 'location' event.startDate = new Date(2010,11,20, 9, 0) event.endDate = new Date(2011, 3, 7,18, 0) } def "list the Events of a specific date"() { given: "An event ranging over multiple days" when: "I look up a date for its respective events" def results = eventService.listEventsForDate(searchDate, 0, 100) then: "The event is found or not - depending on the requested date" numberOfResults == results.size() where: searchDate | numberOfResults new Date(2010,10,19) | 0 // one day before startDate new Date(2010,10,20) | 1 // at startDate new Date(2010,10,21) | 1 // one day after startDate new Date(2011, 1, 1) | 1 // someday during the event range new Date(2011, 3, 6) | 1 // one day before endDate new Date(2011, 3, 7) | 1 // at endDate new Date(2011, 3, 8) | 0 // one day after endDate } } The Error groovy.lang.MissingMethodException: No signature of method: static myapp.Event.createCriteria() is applicable for argument types: () values: [] at myapp.EventService.listEventsForDate(EventService.groovy:47) at myapp.EventServiceSpec.list the Events of a specific date(EventServiceSpec.groovy:29)
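
    In a plain unit test the criteria machinery is simply not wired up, which is what the MissingMethodException is saying. One common workaround (a sketch only; exact details vary by Grails version, and remember to undo the metaClass change after each spec) is to stub createCriteria so the service never touches GORM. If the goal is to actually exercise the date logic in the where: block, an integration test against the in-memory database is the more meaningful option.

        // Hypothetical stub inside setup(): Event.createCriteria() normally returns an
        // object that is invoked with the criteria closure, so return a closure that
        // ignores the criteria and hands back canned results.
        def setup() {
            // ... existing event setup ...
            Event.metaClass.static.createCriteria = { ->
                return { Closure criteria -> expectedResults }   // expectedResults set per test
            }
        }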

    Read the article

  • string comparison and counting the key in target [closed]

    - by mesun
    Suppose we want to count the number of times that a key string appears in a target string. We are going to create two different functions to accomplish this task: one iterative, and one recursive. For both functions, you can rely on Python's find function - you should read up on its specifications to see how to provide optional arguments to start the search for a match at a location other than the beginning of the string. For example, find("atgacatgcacaagtatgcat","atgc") #returns the value 5, while find("atgacatgcacaagtatgcat","atgc",6) #returns the value 15, meaning that by starting the search at index 6, #the next match is found at location 15. For the recursive version, you will want to think about how to use your function on a smaller version of the same problem (e.g., on a smaller target string) and then how to combine the result of that computation to solve the original problem. For example, given you can find the first instance of a key string in a target string, how would you combine that result with invocation of the same function on a smaller target string? You may find the string slicing operation useful in getting substrings of string.
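
    A sketch of both shapes, using the built-in form target.find(key, start) rather than a standalone find function (the exercise's helper is assumed to behave the same way):

        def count_iterative(target, key):
            count, start = 0, 0
            while True:
                pos = target.find(key, start)   # -1 means no further match
                if pos == -1:
                    return count
                count += 1
                start = pos + 1                 # advance just past this match (allows overlaps)

        def count_recursive(target, key):
            pos = target.find(key)
            if pos == -1:
                return 0                        # base case: key not in target
            # combine: one match here, plus the matches in the smaller remaining string
            return 1 + count_recursive(target[pos + 1:], key)

        print(count_iterative("atgacatgcacaagtatgcat", "atgc"))   # 2 (matches at 5 and 15)
        print(count_recursive("atgacatgcacaagtatgcat", "atgc"))   # 2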

    Read the article

  • Proper API Design for Version Independence?

    - by Justavian
    I've inherited an enormous .NET solution of about 200 projects. There are now some developers who wish to start adding their own components into our application, which will require that we begin exposing functionality via an API. The major problem with that, of course, is that the solution we've got on our hands contains such a spider web of dependencies that we have to be careful to avoid sabotaging the API every time there's a minor change somewhere in the app. We'd also like to be able to incrementally expose new functionality without destroying any previous third party apps. I have a way to solve this problem, but I'm not sure it's the ideal way - I was looking for other ideas. My plan would be to essentially have three dlls. APIServer_1_0.dll - this would be the dll with all of the dependencies. APIClient_1_0.dll - this would be the dll our developers would actually refer to. No references to any of the mess in our solution. APISupport_1_0.dll - this would contain the interfaces which would allow the client piece to dynamically load the "server" component and perform whatever functions are required. Both of the above dlls would depend upon this. It would be the only dll that the "client" piece refers to. I initially arrived at this design, because the way in which we do inter-process communication between windows services is sort of similar (except that the client talks to the server via named pipes, rather than dynamically loading dlls). While I'm fairly certain I can make this work, I'm curious to know if there are better ways to accomplish the same task.
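
    A sketch of the dynamic-loading seam the "support" assembly would provide (all names here are invented for illustration; the point is that the client only ever compiles against an interface and a small factory):

        // In APISupport_1_0.dll: the only types the client references directly.
        public interface IApiServer
        {
            string DoSomething(string input);
        }

        public static class ApiServerFactory
        {
            // Loads the heavy server assembly at runtime, so none of its
            // dependencies leak into the client's compile-time references.
            public static IApiServer Create(string serverAssemblyPath, string typeName)
            {
                var assembly = System.Reflection.Assembly.LoadFrom(serverAssemblyPath);
                return (IApiServer)Activator.CreateInstance(assembly.GetType(typeName, true));
            }
        }

        // In APIClient_1_0.dll (usage):
        // var server = ApiServerFactory.Create("APIServer_1_0.dll", "ApiServer.ServerImplementation");
        // var result = server.DoSomething("hello");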

    Read the article

  • What about parallelism across network using multiple PCs?

    - by MainMa
    Parallel computing is used more and more, and new framework features and shortcuts make it easier to use (for example Parallel extensions which are directly available in .NET 4). Now what about parallelism across the network? I mean, an abstraction of everything related to communications, creation of processes on remote machines, etc. Something like, in C#: NetworkParallel.ForEach(myEnumerable, () => { // Computing and/or access to web resource or local network database here }); I understand that it is very different from multi-core parallelism. The two most obvious differences would probably be: The fact that such a parallel task will be limited to computing, without being able for example to use files stored locally (but why not a database?), or even to use local variables, because it would be rather two distinct applications than two threads of the same application, The very specific implementation, requiring not just a separate thread (which is quite easy), but spawning a process on different machines, then communicating with them over the local network. Despite those differences, such parallelism is quite possible, even without speaking about distributed architecture. Do you think it will be implemented in a few years? Do you agree that it enables developers to easily develop extremely powerful stuff with much less pain? Example: Think about a business application which extracts data from the database, transforms it, and displays statistics. Let's say this application takes ten seconds to load data, twenty seconds to transform data and ten seconds to build charts on a single machine in a company, using all the CPU, whereas ten other machines are used at 5% of CPU most of the time. In such a case, every action may be done in parallel, resulting in probably six to ten seconds for the overall process instead of forty.

    Read the article

  • Deploying a web-site on IIS from another program

    - by slo2ols
    Hi, I developed a web-site on the ASP.NET 3.5 SP1 platform, and additionally I have 2 Windows services. My task is to build an install package. I decided that Visual Studio install projects do not meet my requirements, so I am designing my own installer for this project, because I need to resolve many questions and problems in the install process. My problem: I need to deploy the web-site into IIS, but I don't know how to do it easily. I found a Microsoft tool, the Web Deployment Tool, but I didn't find any documentation. And must I include this tool in my installer for deployment at the customer's site? On the other hand, I found the SDC Tasks Library and it looks like a solution for me. But I saw many topics where people had problems, and because the project was dead nobody could help them. I know it is a long story... My question: how can I deploy the web-site from another program (I know that IIS versions have some differences and that is another headache), set a virtual directory, application pool (very important), the type of authentication and so forth? Thanks.
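
    If the target machines run IIS 7 or later, one option is the Microsoft.Web.Administration API that ships with IIS, which the installer could call along these lines (the site/pool names and paths below are placeholders; IIS 6 would need a different mechanism such as the ADSI/WMI provider, and authentication settings are reached through the configuration sections).

        using Microsoft.Web.Administration;  // %windir%\System32\inetsrv\Microsoft.Web.Administration.dll

        static void CreateSite()
        {
            using (var manager = new ServerManager())
            {
                // Application pool with its own runtime settings.
                ApplicationPool pool = manager.ApplicationPools.Add("MyAppPool");
                pool.ManagedRuntimeVersion = "v2.0";   // ASP.NET 3.5 runs on the 2.0 CLR

                // Site bound to port 80, pointing at the deployed files.
                Site site = manager.Sites.Add("MySite", "http", "*:80:", @"C:\inetpub\MySite");
                site.Applications["/"].ApplicationPoolName = "MyAppPool";

                manager.CommitChanges();
            }
        }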

    Read the article

  • How to amend return value design in an OO manner?

    - by FrontierPsycho
    Hello. I am no newb at OO programming, but I am faced with a puzzling situation. I have been given a program to work on and extend, but the previous developers didn't seem that comfortable with OO; it seems they either had a C background or an unclear understanding of OO. Now, I don't suggest I am a better developer, I just think that I can spot some common OO errors. The difficult task is how to amend them. In my case, I see a lot of this: if (ret == 1) { out.print("yadda yadda"); } else if (ret == 2) { out.print("yadda yadda"); } else if (ret == 3) { out.print("yadda yadda"); } else if (ret == 0) { out.print("yadda yadda"); } else if (ret == 5) { out.print("yadda yadda"); } else if (ret == 6) { out.print("yadda yadda"); } else if (ret == 7) { out.print("yadda yadda"); } ret is a value returned by a function, in which all Exceptions are swallowed, and in the catch blocks, the above values are returned explicitly. Oftentimes, the Exceptions are simply swallowed, with empty catch blocks. It's obvious that swallowing exceptions is wrong OO design. My question concerns the use of return values. I believe that too is wrong; however, I think that using Exceptions for control flow is equally wrong, and I can't think of anything to replace the above in a correct, OO manner. Your input, please?
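
    One incremental, OO-friendly step that avoids both the magic numbers and exceptions-as-flow-control (a sketch only; the constant names are invented, since the real meanings of the codes 0-7 aren't shown): give each outcome a type and let it carry its own message, reserving exceptions for genuine failures.

        // Each return code becomes a named constant that knows its own message.
        enum Outcome {
            OK("yadda yadda for success"),
            NOT_FOUND("yadda yadda for missing data"),
            TIMEOUT("yadda yadda for a timeout");
            // ... one constant per real outcome ...

            private final String message;
            Outcome(String message) { this.message = message; }
            String message() { return message; }
        }

        // The called method returns an Outcome instead of an int, so the caller shrinks to:
        //     out.print(result.message());
        // Genuine errors (broken invariants, I/O failures) should still surface as exceptions
        // rather than being swallowed in empty catch blocks.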

    Read the article

  • PHP Object Creation and Memory Usage

    - by JohnO
    A basic dummy class: class foo { var $bar = 0; function foo() {} function boo() {} } echo memory_get_usage(); echo "\n"; $foo = new foo(); echo memory_get_usage(); echo "\n"; unset($foo); echo memory_get_usage(); echo "\n"; $foo = null; echo memory_get_usage(); echo "\n"; Outputs: $ php test.php 353672 353792 353792 353792 Now, I know that PHP docs say that memory won't be freed until it is needed (hitting the ceiling). However, I wrote this up as a small test, because I've got a much longer task, using a much bigger object, with many instances of that object. And the memory just climbs, eventually running out and stopping execution. Even though these large objects do take up memory, since I destroy them after I'm done with each one (serially), it should not run out of memory (unless a single object exhausts the entire space for memory, which is not the case). Thoughts?
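
    A couple of things worth checking in the long-running job (hedged, since versions matter here): memory_get_usage(true) reports the real allocated block size, and if the big objects reference each other, PHP before 5.3 never collects those cycles; on 5.3+ a collection can be forced between iterations. A sketch of what that loop might look like (BigObject and $items stand in for the real class and data):

        <?php
        foreach ($items as $item) {
            $obj = new BigObject($item);         // hypothetical heavy object
            $obj->process();
            unset($obj);                         // drop the last reference
            gc_collect_cycles();                 // PHP 5.3+: reclaim circular references, if any
            echo memory_get_usage(true), "\n";   // true = real size allocated from the OS
        }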

    Read the article

  • Turning a series of raw images into movie frames in Android

    - by Nicholas Killewald
    I've got an Android project I'm working on that, ultimately, will require me to create a movie file out of a series of still images taken with a phone's camera. That is to say, I want to be able to take raw image frames and string them together, one by one, into a movie. Audio is not a concern at this stage. Looking over the Android API, it looks like there are calls in it to create movie files, but it seems those are entirely geared around making a live recording from the camera on an immediate basis. While nice, I can't use that for my purposes, as I need to put annotations and other post-production things on the images as they come in before they get fed into a movie (plus, the images come way too slowly to do a live recording). Worse, looking over the Android source, it looks like a non-trivial task to rewire that to do what I want it to do (at least without touching the NDK). Is there any way I can use the API to do something like this? Or alternatively, what would be the best way to go about this, if it's even feasible on cell phone hardware (which seems to keep getting more and more powerful, strangely...)?

    Read the article

  • Was Visual Studio 2008 or 2010 written to use multi cores?

    - by Erx_VB.NExT.Coder
    basically i want to know if the visual studio IDE and/or compiler in 2010 was written to make use of a multi core environment (i understand we can target multi core environments in 08 and 10, but that is not my question). i am trying to decide on if i should get a higher clock dual core or a lower clock quad core, as i want to try and figure out which processor will give me the absolute best possible experience with Visual Studio 2010 (ide and background compiler). if they are running the most important section (background compiler and other ide tasks) in one core, then the core will get cut off quicker if running a quad core, esp if background compiler is the heaviest task, i would imagine this would b e difficult to seperate in more then one process, so even if it uses multi cores you might still be better off with going for a higher clock cpu if the majority of the processing is still bound to occur in one core (ie the most significant part of the VS environment). i am a vb programmer, they've made great performance improvements in beta 2, congrats, but i would love to be able to use VS seamlessly... anyone have any ideas? thanks, erx

    Read the article

  • Performing measurements during the execution of C++ code every t milliseconds

    - by user506901
    Given a while loop and the function ordering as follows: int k=0; int total=100; while(k<total){ doSomething(); if(approx. t milliseconds elapsed) { measure(); } ++k; } I want to perform 'measure' every t milliseconds. However, since 'doSomething' may finish close to (or past) the t-millisecond mark from the last execution, it is acceptable to perform the measure after approximately t milliseconds have elapsed since the last measure. My question is: how could this be achieved? One solution would be to set a timer to zero and check it after every 'doSomething'. When it is within the acceptable range, I perform the measures and reset. However, I'm not sure which C++ function I should use for such a task. As far as I can see, there are several candidate functions, but the debate on which one is the most appropriate is outside of my understanding. Note that some of the functions actually take into account the time taken by other processes, but I want my timer to only measure the time spent executing my C++ code (I hope that is clear). Another thing is the resolution of the measurements, as pointed out below. Suppose the medium option of those suggested.
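
    A minimal wall-clock sketch using <chrono> (requires C++11; steady_clock measures elapsed real time, so if only this process's own CPU time should count, std::clock() would be the substitute to drop in at the same two points):

        #include <chrono>

        void doSomething() { /* the real work from the question */ }
        void measure()     { /* the measurement from the question */ }

        int main() {
            using clock = std::chrono::steady_clock;
            const auto interval = std::chrono::milliseconds(50);   // "t" milliseconds

            int k = 0;
            const int total = 100;
            auto last = clock::now();

            while (k < total) {
                doSomething();
                if (clock::now() - last >= interval) {   // roughly t ms since the last measure
                    measure();
                    last = clock::now();                 // restart the interval
                }
                ++k;
            }
            return 0;
        }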

    Read the article

  • Yeoman 'grunt test' fails on clean project with 'port already in use'

    - by XMLilley
    With: Mac OS 10.8.4 Node 0.10.12 npm 1.3.1 grunt-cli 0.1.9 yo 1.0.0-rc.1 bower 0.9.2 [email protected] I encounter the following error with a clean yo angular project, followed by grunt server then grunt test: Running "connect:test" (connect) task Fatal error: Port 9000 is already in use by another process. I'm new to Yeoman and am stumped. I've deleted my original project and created a new one in a fresh folder just to make sure I wasn't overlooking any invisible configs. I restarted the machine to make sure I wasn't running any temporary server processes I had forgotten about. After all attempts, the basic server starts fine, attaches to Chrome, and the watcher updates the browser on any changes. (Notably, the server is running on 9000, which seems odd for the test-runner to also be trying to use 9000.) But I get that same error on attempting to start the test runner. Is this something I can fix, or an issue I should report to the Yeoman team? Thanks.
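
    The generated Gruntfile normally gives the connect:test target its own options block; a sketch of the kind of change that stops it colliding with the port that grunt server is already holding (the exact keys depend on the generator-angular version, and 9001 is just an arbitrary free port):

        // Gruntfile.js (excerpt) - give the test server a port of its own
        connect: {
          options: {
            port: 9000,          // used by `grunt server`
            hostname: 'localhost'
          },
          test: {
            options: {
              port: 9001         // hypothetical free port so `grunt test` can run alongside the dev server
            }
          }
        }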

    Read the article

  • The application has stopped unexpectedly: How to Debug?

    - by Android Eve
    Please note, unlike many other questions having the subject title "application has stopped unexpectedly", I am not asking for troubleshooting a particular problem. Rather, I am asking for an outline of the best strategy for an Android/Eclipse/Java rookie to tackle this formidable task of digesting huge amounts of information in order to develop (and debug!) a simple Android application. In my case, I took the sample skeleton app from the SDK, modified it slightly and what did I get the moment I try to run it? The application (process.com.example.android.skeletonapp) has stopped unexpectedly. Please try again. OK, so I know that I have to look LogCat. It's full of timestamped lines staring at me... What do I do now? What do I need to look for? Is there a way to single-step the program, to find the statement that makes the app crash? (I thought Java programs never crash, but apparently I was mistaken) How do I place a breakpoint? Can you recommend an Android debug tutorial online, other than this one?
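
    As a first concrete step, LogCat can be narrowed down to the crash itself; the stack trace printed under "FATAL EXCEPTION" names the class and line where a breakpoint (or a fix) belongs. A couple of filter invocations that may help (run from a shell with the device attached; the grep variant assumes a Unix-like shell):

        # Show only error-level messages from the runtime (the uncaught-exception trace),
        # silencing everything else.
        adb logcat AndroidRuntime:E *:S

        # Or dump the whole log once and search it for the crash:
        adb logcat -d | grep -A 20 "FATAL EXCEPTION"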

    Read the article

  • File size monitoring in C#

    - by manemawanna
    Hello, I work in the Systems & Admin team and have been given the task of creating a quota management application to try and encourage users to better manage their resources, as we currently have issues with disk space and don't enforce hard quotas. At the moment I'm using the code below to go through all the files in a user's homespace to retrieve the overall amount of space they are using. From what I've seen elsewhere there's no other way to do this in C#; the issue with it is that there's quite a high overhead while it retrieves the size of each file and then builds a total. try { long dirSize = 0; FileInfo[] FI = new DirectoryInfo("I:\\").GetFiles("*.*", SearchOption.AllDirectories); foreach (FileInfo F1 in FI) { dirSize += F1.Length; } return dirSize; } So I'm looking for a quicker way to do this, or a quick way to monitor changes in the size of files using the options available through FileSystemWatcher. At the moment the only thing I can think of is creating a hashtable containing the file location and size of each file, so when a size-changed event occurs I can compare the old size against the new one and update the total. Any suggestions would be greatly appreciated.
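
    A sketch of that incremental-total idea (one watcher per home directory; the paths and class name are illustrative, locking is omitted, and because watcher events can be dropped under heavy load an occasional full rescan is still worth keeping):

        using System;
        using System.Collections.Generic;
        using System.IO;

        class QuotaTracker
        {
            private readonly Dictionary<string, long> _sizes =
                new Dictionary<string, long>(StringComparer.OrdinalIgnoreCase);
            private readonly FileSystemWatcher _watcher;
            private long _total;

            public QuotaTracker(string root)
            {
                // One full scan to seed the totals (same cost as the existing code, paid once).
                foreach (var file in Directory.GetFiles(root, "*.*", SearchOption.AllDirectories))
                {
                    long len = new FileInfo(file).Length;
                    _sizes[file] = len;
                    _total += len;
                }

                _watcher = new FileSystemWatcher(root) { IncludeSubdirectories = true };
                _watcher.Created += (s, e) => Update(e.FullPath);
                _watcher.Changed += (s, e) => Update(e.FullPath);
                _watcher.Deleted += (s, e) => Remove(e.FullPath);
                _watcher.Renamed += (s, e) => { Remove(e.OldFullPath); Update(e.FullPath); };
                _watcher.EnableRaisingEvents = true;
            }

            public long Total { get { return _total; } }

            private void Update(string path)
            {
                if (!File.Exists(path)) return;           // ignore directory events
                long newLen = new FileInfo(path).Length;
                long oldLen;
                _sizes.TryGetValue(path, out oldLen);
                _total += newLen - oldLen;
                _sizes[path] = newLen;
            }

            private void Remove(string path)
            {
                long oldLen;
                if (_sizes.TryGetValue(path, out oldLen))
                {
                    _total -= oldLen;
                    _sizes.Remove(path);
                }
            }
        }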

    Read the article

  • Pre Project Documentation

    - by DeanMc
    I have an issue that I feel many programmers can relate to... I have worked on many small scale projects. After my initial paper brainstorm I tend to start coding. What I come up with is usually a rough working model of the actual application. I design in a disconnected fashion, so I am talking about underlying code libraries; user interfaces are the last thing, as the library usually dictates what is needed in the UI. As my projects get bigger I worry that my "spec" or design document should grow with them. The above paragraph, from my investigations, is echoed all across the internet in one fashion or another. Where a UI is concerned there is a bit more information, but it is UI specific and does not relate to code libraries. What I am beginning to realise is that maybe code is code is code. It seems from my extensive research that there is no 1:1 mapping between a design document and the code. When I need to research a topic I dump information into OneNote, and from there I prioritise features into versions and then into related chunks so that development runs in a fairly linear fashion. My tasks tend to look like so: Implement Binary File Reader Implement Binary File Writer Create Object to encapsulate Data for expression to the caller Now any programmer worth his salt is aware that between those three to-do items could be a potential wall of code that could expand out to multiple files. I have tried to map the complete code process for each task, but I simply don't think it can be done effectively. By the time one mangles pseudocode it is essentially code anyway, so the time investment is negated. So my question is this: Am I right in assuming that the best documentation is the code itself? We are all in agreement that a high level overview is needed. How high should this be? Do you design to statement, class or concept level? What works for you?

    Read the article

  • Is there a more memory efficient way to search through a Core Data database?

    - by Kristian K
    I need to see if an object that I have obtained from a CSV file with a unique identifier exists in my Core Data Database, and this is the code I deemed suitable for this task: NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; NSEntityDescription *entity; entity = [NSEntityDescription entityForName:@"ICD9" inManagedObjectContext:passedContext]; [fetchRequest setEntity:entity]; NSPredicate *pred = [NSPredicate predicateWithFormat:@"uniqueID like %@", uniqueIdentifier]; [fetchRequest setPredicate:pred]; NSError *err; NSArray* icd9s = [passedContext executeFetchRequest:fetchRequest error:&err]; [fetchRequest release]; if ([icd9s count] > 0) { for (int i = 0; i < [icd9s count]; i++) { NSAutoreleasePool *pool = [[NSAutoreleasePool alloc]init]; NSString *name = [[icd9s objectAtIndex:i] valueForKey:@"uniqueID"]; if ([name caseInsensitiveCompare:uniqueIdentifier] == NSOrderedSame && name != nil) { [pool release]; return [icd9s objectAtIndex:i]; } [pool release]; } } return nil; After more thorough testing it appears that this code is responsible for a huge amount of leaking in the app I'm writing (it crashes on a 3GS before making it 20 percent through the 1459 items). I feel like this isn't the most efficient way to do this, any suggestions for a more memory efficient way? Thanks in advance!
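
    Two smaller changes that may address both the memory growth and the scan cost, sketched against the code above: let the predicate do the case-insensitive match so no array of objects has to be walked, and ask for at most one object instead of fetching everything. If the import loop still grows, wrapping each CSV row in its own autorelease pool and periodically saving or resetting the context usually keeps the object graph small.

        // Hypothetical reworking of the lookup: the store does the matching, and at
        // most one managed object is materialised.
        NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
        [fetchRequest setEntity:[NSEntityDescription entityForName:@"ICD9"
                                            inManagedObjectContext:passedContext]];
        [fetchRequest setPredicate:[NSPredicate predicateWithFormat:@"uniqueID ==[c] %@",
                                                                    uniqueIdentifier]];
        [fetchRequest setFetchLimit:1];

        NSError *error = nil;
        NSArray *matches = [passedContext executeFetchRequest:fetchRequest error:&error];
        [fetchRequest release];

        return [matches count] > 0 ? [matches objectAtIndex:0] : nil;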

    Read the article

  • Any alternate way of writing to a file other than ofstream

    - by Aditya
    Hi All, I am performing file operations (writeToFile) which fetch data from an XML file and write it into an output file (a1.txt). I am using MS Visual C++ 2008 on Windows XP. Currently I am using this method of writing to the output file: ofstream hdrOutputFile; /* few other stmts */ hdrOutputFile.open(fileName, std::ios::out); hdrOutputFile << "#include \"commondata.h\"" << endl; hdrOutputFile << "#include \"Commonconfig.h\"" << endl; hdrOutputFile << "#include \"commontable.h\"" << endl << endl; hdrOutputFile << "#pragma pack(push,1)" << endl; hdrOutputFile << "typedef struct \n {" << endl; /* similar hdrOutputFile statements... */ I have around 250 lines to write. Is there any better way to perform this task? I want to reduce these hdrOutputFile statements and use a buffer instead. Please guide me on how to do that. I mean, buff = "#include \"commontable.h\"" + "typedef struct \n {" + ....... hdrOutputFile << buff. Is this possible? Thanks, Ramm
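
    A sketch of the buffered form: std::ostringstream assembles the text in memory and a single insertion then writes it out (whether this is measurably faster than the already-buffered ofstream is worth checking, but it does collapse the repeated statements):

        #include <fstream>
        #include <sstream>
        #include <string>

        void writeHeader(const std::string& fileName)
        {
            std::ostringstream buff;
            buff << "#include \"commondata.h\"\n"
                 << "#include \"Commonconfig.h\"\n"
                 << "#include \"commontable.h\"\n\n"
                 << "#pragma pack(push,1)\n"
                 << "typedef struct\n{\n";
            // ... the remaining ~250 lines appended the same way ...

            std::ofstream hdrOutputFile(fileName.c_str(), std::ios::out);
            hdrOutputFile << buff.str();   // single write of the assembled text
        }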

    Read the article

  • How to Bind a Command in WPF

    - by MegaMind
    Sometimes we use complex approaches so often that we forget the simplest ways to do a task. I know how to do command binding, but I always use the same approach: create a class that implements the ICommand interface, create a new instance of that class from the view model, and binding works like a charm. This is the code that I use for command binding: public partial class MainWindow : Window { public MainWindow() { InitializeComponent(); DataContext = this; testCommand = new MeCommand(processor); } ICommand testCommand; public ICommand test { get { return testCommand; } } public void processor() { MessageBox.Show("hello world"); } } public class MeCommand : ICommand { public delegate void ExecuteMethod(); private ExecuteMethod meth; public MeCommand(ExecuteMethod exec) { meth = exec; } public bool CanExecute(object parameter) { return false; } public event EventHandler CanExecuteChanged; public void Execute(object parameter) { meth(); } } But I want to know the basic way to do this - no third-party DLL, no new class creation. Do this simple command binding using a single class, where that class implements the ICommand interface and does the work.
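
    If the goal is to stay inside one class using only what WPF ships with, routed commands come close: the window declares a CommandBinding for a built-in command and handles Executed/CanExecute itself, with no ICommand implementation of your own. A sketch using ApplicationCommands.New purely as an example command:

        public partial class MainWindow : Window
        {
            public MainWindow()
            {
                InitializeComponent();
                // Everything lives in this one class: bind the command to local handlers.
                CommandBindings.Add(new CommandBinding(ApplicationCommands.New, OnNew, OnCanNew));
            }

            private void OnNew(object sender, ExecutedRoutedEventArgs e)
            {
                MessageBox.Show("hello world");
            }

            private void OnCanNew(object sender, CanExecuteRoutedEventArgs e)
            {
                e.CanExecute = true;   // note: the original sample returned false, which keeps the button disabled
            }
        }

        // XAML usage: <Button Content="Test" Command="ApplicationCommands.New" />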

    Read the article

  • How to detect a Socket disconnection?

    - by AngryHacker
    I've implemented a task using the async Sockets pattern in Silverlight 3. I started with Michael Schwarz's implementation and built on top of that. So basically, my Silverlight app establishes a persistent socket connection to a device and then data flows both ways as necessary between the device and the Silverlight app. One thing I am struggling with is how to detect disconnection. I could think of 2 approaches: Keep-Alive. I know this can be done at the Sockets level, but I am not sure how to do this in an async model. How would the Socket class let me know there has been a disconnection. Manual keep alive. Basically, I am having the Silverlight app send a dummy packet every 20 seconds or so. If it fails, I'd assume disconnection. However, incredibly, SocketAsyncEventArgs.SocketError always reports success, even if I simply unplug the device that the Silverlight app is connected to. I am not sure whether this is a bug or what or perhaps I need to upgrade to SL4. Any ideas, direction or implementation would be appreciated.
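
    One signal worth handling alongside the manual keep-alive (a sketch; HandleDisconnected and ProcessData are placeholders, and this assumes Silverlight's sockets follow the same convention as the desktop framework): a completed receive with zero bytes transferred and SocketError.Success means the remote end closed the connection gracefully, whereas a yanked cable typically only shows up once a send or keep-alive actually fails or times out.

        // Inside the SocketAsyncEventArgs Completed handler for a receive:
        private void OnReceiveCompleted(object sender, SocketAsyncEventArgs e)
        {
            if (e.SocketError != SocketError.Success)
            {
                HandleDisconnected();      // hard error reported by the stack
                return;
            }

            if (e.BytesTransferred == 0)
            {
                HandleDisconnected();      // graceful close by the remote device
                return;
            }

            ProcessData(e.Buffer, e.Offset, e.BytesTransferred);
            // ... post the next ReceiveAsync here ...
        }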

    Read the article

  • Advice: Python Framework Server/Worker Queue management (not Website)

    - by Muppet Geoff
    I am looking for some advice/opinions of which Python Framework to use in an implementation of multiple 'Worker' PCs co-ordinated from a central Queue Manager. For completeness, the 'Worker' PCs will be running Audio Conversion routines (which I do not need advice on, and have standalone code that works). The Audio conversion takes a long time, and I need to co-ordinate an arbitrary number of the 'Workers' from a central location, handing them conversion tasks (such as where to get the source files, or where to ask for the job configuration) with them reporting back some additional info, such as the runtime of the converted audio etc. At present, I have a script that makes a webservice call to get the 'configuration' for a conversion task, based on source files located on the worker already (we manually copy the source files to the worker, and that triggers a conversion routine). I want to change this, so that we can distribute conversion tasks ("Oy you, process this: xxx") based on availability, and in an ideal world, based on pending tasks too. There is a chance that Workers can go offline mid-conversion (but this is not likely). All the workers are Windows based, the co-ordinator can be WIndows or Linux. I have (in my initial searches) come across the following - and I know that some are cross-dependent: Celery (with RabbitMQ) Twisted Django Using a framework, rather than home-brewing, seems to make more sense to me right now. I have a limited timeframe in which to develop this functional extension. An additional consideration would be using a Framework that is compatible with PyQT/PySide so that I can write a simple UI to display Queue status etc. I appreciate that the specifics above are a little vague, and I hope that someone can offer me a pointer or two. Again: I am looking for general advice on which Python framework to investigate further, for developing a Server/Worker 'Queue management' solution, for non-web activities (this is why DJango didn't seem the right fit).
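
    Of the three, Celery maps most directly onto the setup described: a broker (RabbitMQ) in the middle, Windows worker processes pulling conversion jobs, and results flowing back to the coordinator. A rough sketch of what a task module might look like (module, broker host and the run_conversion helper are placeholders; the decorator style shown is Celery 3.x, and older 2.x releases spell the imports slightly differently):

        # tasks.py -- shared by the coordinator and every Windows worker
        from celery import Celery

        app = Celery('audio',
                     broker='amqp://guest@coordinator-host//',
                     backend='amqp')      # result backend so the caller can read runtimes back

        @app.task
        def convert(source_path, config_url):
            runtime = run_conversion(source_path, config_url)   # existing standalone routine
            return {'source': source_path, 'runtime': runtime}

        # Coordinator side: hand a job to whichever worker is free, then collect the result.
        # result = convert.delay(r'\\fileserver\incoming\foo.wav', 'http://.../jobconfig')
        # info = result.get(timeout=3600)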

    Read the article

  • Capturing output of find . -print0 into a bash array

    - by Idris
    Using find . -print0 seems to be the only safe way of obtaining a list of files in bash due to the possibility of filenames containing spaces, newlines, quotation marks etc. However, I'm having a hard time actually making find's output useful within bash or with other command line utilities. The only way I have managed to make use of the output is by piping it to perl, and changing perl's IFS to null: find . -print0 | perl -e '$/="\0"; @files=<>; print $#files;' This example prints the number of files found, avoiding the danger of newlines in filenames corrupting the count, as would occur with: find . | wc -l As most command line programs do not support null-delimited input, I figure the best thing would be to capture the output of find . -print0 in a bash array, like I have done in the perl snippet above, and then continue with the task, whatever it may be. How can I do this? This doesn't work: find . -print0 | ( IFS=$'\0' ; array=( $( cat ) ) ; echo ${#array[@]} ) A much more general question might be: How can I do useful things with lists of files in bash?
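
    The usual trick is to let read split on NULs directly, either into a loop or into an array; the -d '' argument is the key part (bash):

        # Stream the NUL-delimited names into an array, one element per file.
        files=()
        while IFS= read -r -d '' f; do
            files+=("$f")
        done < <(find . -print0)

        echo "${#files[@]} files found"

        # With bash 4.4+, mapfile can do the same in one line:
        # mapfile -d '' files < <(find . -print0)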

    Read the article

  • Application process never terminates on each run

    - by rockyurock
    I am seeing that the application always remains live even after closing it with my Perl script below. Also, for subsequent runs, it always says "The process cannot access the file because it is being used by another process. iperf.exe -u -s -p 5001 successful. Output was:" So every time I have to change the file name $file used in the script, or I have to kill the iperf.exe process in the Task Manager. Could anybody please let me know the way to get rid of this? Here is the code I am using ... my @command_output; eval { my $file = "abc6.txt"; $command = "iperf.exe -u -s -p 5001"; alarm 10; system("$command > $file"); alarm 0; close $file; }; if ($@) { warn "$command timed out.\n"; } else { print "$command successful. Output was:\n", $file; } unlink $file;
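
    One direction that may help (a sketch only, not tested on Windows, where alarm may not interrupt a blocking read and taskkill can serve as a fallback): system() with a redirect hands the command to a shell and alarm only interrupts the Perl side, so nothing ever terminates iperf.exe or releases its handle on the output file. Opening the command as a pipe returns a PID that the script can kill and reap itself, writing the output file on the Perl side.

        use strict;
        use warnings;

        my $file = "abc6.txt";
        my $pid  = open(my $iperf, "iperf.exe -u -s -p 5001 |")   # piped open returns the child PID
            or die "cannot start iperf: $!";

        open(my $log, '>', $file) or die "cannot open $file: $!";

        eval {
            local $SIG{ALRM} = sub { die "timeout\n" };
            alarm 10;
            print {$log} $_ while <$iperf>;   # copy iperf output to the file ourselves
            alarm 0;
        };

        kill 'KILL', $pid;     # make sure the server process really goes away
        waitpid($pid, 0);      # reap it so the next run can reuse/unlink the file
        close $log;
        close $iperf;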

    Read the article
