Search Results

Search found 22641 results on 906 pages for 'use case'.


  • Are there some cases where Python threads can safely manipulate shared state?

    - by erikg
    Some discussion in another question has encouraged me to better understand cases where locking is required in multithreaded Python programs. Per this article on threading in Python, I have several solid, testable examples of pitfalls that can occur when multiple threads access shared state. The example race condition provided on this page involves races between threads reading and manipulating a shared variable stored in a dictionary. I think the case for a race here is very obvious, and fortunately it is eminently testable. However, I have been unable to evoke a race condition with atomic operations such as list appends or variable increments. This test exhaustively attempts to demonstrate such a race:

        from threading import Thread, Lock
        import operator

        def contains_all_ints(l, n):
            l.sort()
            for i in xrange(0, n):
                if l[i] != i:
                    return False
            return True

        def test(ntests):
            results = []
            threads = []
            def lockless_append(i):
                results.append(i)
            for i in xrange(0, ntests):
                threads.append(Thread(target=lockless_append, args=(i,)))
                threads[i].start()
            for i in xrange(0, ntests):
                threads[i].join()
            if len(results) != ntests or not contains_all_ints(results, ntests):
                return False
            else:
                return True

        for i in range(0, 100):
            if test(100000):
                print "OK", i
            else:
                print "appending to a list without locks *is* unsafe"
                exit()

    I have run the test above without failure (100x 100k multithreaded appends). Can anyone get it to fail? Is there another class of object which can be made to misbehave via atomic, incremental modification by threads? Do these implicitly 'atomic' semantics apply to other operations in Python? Is this directly related to the GIL?
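
    For contrast, here is a minimal sketch (Python 3, not taken from the question) of an operation the GIL does not make atomic: an in-place increment compiles to separate load, add and store steps, so concurrent threads can lose updates. Whether it actually fails on a given run depends on the interpreter version and thread-switch interval, so treat it as illustrative rather than a guaranteed failure.

        import threading

        counter = 0

        def bump(n):
            global counter
            for _ in range(n):
                counter += 1  # read-modify-write: load, add, store are separate steps

        threads = [threading.Thread(target=bump, args=(1000000,)) for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        # Frequently prints less than 4000000 on CPython when no lock is used.
        print(counter)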


  • iPhone and Core Data: how to retain user-entered data between updates?

    - by Shaggy Frog
    Consider an iPhone application that is a catalogue of animals. The application should allow the user to add custom information for each animal -- let's say a rating (on a scale of 1 to 5), as well as some notes they can enter in about the animal. However, the user won't be able to modify the animal data itself. Assume that when the application gets updated, it should be easy for the (static) catalogue part to change, but we'd like the (dynamic) custom user information part to be retained between updates, so the user doesn't lose any of their custom information. We'd probably want to use Core Data to build this app. Let's also say that we have a previous process already in place to read in animal data to pre-populate the backing (SQLite) store that Core Data uses. We can embed this database file into the application bundle itself, since it doesn't get modified. When a user downloads an update to the application, the new version will include the latest (static) animal catalogue database, so we don't ever have to worry about it being out of date. But, now the tricky part: how do we store the (dynamic) user custom data in a sound manner? My first thought is that the (dynamic) database should be stored in the Documents directory for the app, so application updates don't clobber the existing data. Am I correct? My second thought is that since the (dynamic) user custom data database is not in the same store as the (static) animal catalogue, we can't naively make a relationship between the Rating and the Notes entities (in one database) and the Animal entity (in the other database). In this case, I would imagine one solution would be to have an "animalName" string property in the Rating/Notes entity, and match it up at runtime. Is this the best way to do it, or is there a way to "sync" two different databases in Core Data?


  • MS SQL datetime precision problem

    - by Nailuj
    I have a situation where two persons might work on the same order (stored in an MS SQL database) from two different computers. To prevent data loss in the case where one would save his copy of the order first, and then a little later the second would save his copy and overwrite the first, I've added a check against the lastSaved field (datetime) before saving. The code looks roughly like this:

        private bool orderIsChangedByOtherUser(Order localOrderCopy)
        {
            // Look up fresh version of the order from the DB
            Order databaseOrder = orderService.GetByOrderId(localOrderCopy.Id);
            if (databaseOrder != null &&
                databaseOrder.LastSaved > localOrderCopy.LastSaved)
            {
                return true;
            }
            else
            {
                return false;
            }
        }

    This works most of the time, but I have found one small bug. If orderIsChangedByOtherUser returns false, the local copy will have its lastSaved updated to the current time and then be persisted to the database. The value of lastSaved in the local copy and the DB should now be the same. However, if orderIsChangedByOtherUser is run again, it sometimes returns true even though no other user has made changes to the DB. When debugging in Visual Studio, databaseOrder.LastSaved and localOrderCopy.LastSaved appear to have the same value, but when looking closer they sometimes differ by a few milliseconds. I found this article with a short note on the millisecond precision of datetime in SQL: "Another problem is that SQL Server stores DATETIME with a precision of 3.33 milliseconds (0.00333 seconds)." The solution I could think of for this problem is to compare the two datetimes and consider them equal if they differ by less than, say, 10 milliseconds. My question to you is then: are there any better/safer ways to compare two datetime values in MS SQL to see if they are exactly the same?
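
    The tolerance-based comparison proposed at the end is simple to express; as a language-neutral illustration (sketched in Python rather than the app's C#, purely to show the shape of the check, with made-up example values), the test is on the absolute difference between the two timestamps:

        from datetime import datetime, timedelta

        def roughly_equal(a, b, tolerance=timedelta(milliseconds=10)):
            """Treat two timestamps as equal if they differ by less than the
            storage precision (SQL Server DATETIME rounds to roughly 3.33 ms)."""
            return abs(a - b) <= tolerance

        print(roughly_equal(datetime(2010, 5, 1, 12, 0, 0, 997000),
                            datetime(2010, 5, 1, 12, 0, 1, 0)))  # True (3 ms apart)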


  • How do you deal with denormalization / secondary indexes in database sharding?

    - by Continuation
    Say I have a "message" table with 2 secondary indexes: "recipient_id" and "sender_id". I want to shard the "message" table by "recipient_id". That way, to retrieve all messages sent to a certain recipient I only need to query one shard. But at the same time, I want to be able to make a query that asks for all messages sent by a certain sender, and I don't want to send that query to every single shard of the "message" table. One way to do this is to duplicate the data and have a "message_by_sender" table sharded by "sender_id". The problem with that approach is that every time a message is sent, I need to insert the message into both the "message" and "message_by_sender" tables. But what if, after inserting into "message", the insertion into "message_by_sender" fails? In that case the message exists in "message" but not in "message_by_sender". How do I make sure that if a message exists in "message" then it also exists in "message_by_sender", without resorting to 2-phase commit? This must be a very common issue for anyone who shards their databases. How do you deal with it?
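
    One commonly discussed mitigation, short of two-phase commit, is to make the secondary write idempotent and retryable: write the recipient-sharded copy first, and if the sender-sharded copy fails, queue a repair task that a background worker replays until it succeeds. The sketch below is only an outline of that idea; the store objects, error type and queue are hypothetical stand-ins, not a real sharding API.

        import queue

        class ShardWriteError(Exception):
            """Stand-in for whatever error the real datastore client raises."""

        repair_queue = queue.Queue()  # in practice a durable queue, not in-memory

        def send_message(msg, message_store, message_by_sender_store):
            # Primary copy, sharded by recipient_id.
            message_store.insert(msg)
            try:
                # Secondary copy, sharded by sender_id.
                message_by_sender_store.insert(msg)
            except ShardWriteError:
                # Inserts are keyed by msg["id"], so replaying this write later is
                # a no-op if it already landed; a worker drains the repair queue.
                repair_queue.put(("message_by_sender", msg["id"]))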


  • Exam Questions that use .Demand or .LinkDemand COULD NOT BE ANY MORE CONFUSING OR AMBIGUOUS ????

    - by IbrarMumtaz
    I am 110% sure this is WRONG !!!!

    Q.12) You develop a library, and want to ensure that the functions in the library cannot be either directly or indirectly invoked by applications that are not running on the local intranet. What attribute would you add to each method?

        A. [UrlIdentityPermission(SecurityAction.RequestRefuse, Url="http://myintranet")]
        B. [UrlIdentityPermission(SecurityAction.LinkDemand, Url="http://myintranet")] (correct answer)
        C. [UrlIdentityPermission(SecurityAction.Demand, Url="http://myintranet")]
        D. [UrlIdentityPermission(SecurityAction.Assert, Url="http://myintranet")]

    Explanation: Link-Demand should be used as it ensures that all callers in the call stack have the necessary permission. In this case it ensures that all callers in the call stack are on the local intranet.

    There is an identical question on Transcender, so I already had a clue what was going on, but Transcender was much more informative than this drivel, as it mentioned class level and not assembly level. It also mentioned that some callers may be coming externally from the company intranet via authorised and authenticated credentials. With that information it is easy to see why .Demand would be the wrong option to go for. So Transcender was right .... so I thought fine, that makes sense. With this information still fresh in my brain I had a good idea of what was going on in the question. To my surprise, .Demand was wrong again !!!! WHAT? I am really starting to hate this setting now. I cannot be any more p*ssed right now!!! :@

    Thanks For Reading,
    Ibrar


  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables:

        User Basic Data (unique):            [userid] [name] [etc]
        User Collection (one to one):        [userid] [game]
        User Recorded Plays (many to many):  [userid] [game] [scenario] [etc]
        Game Basic Data (unique):            [game] [total_scenarios]

    I would like to output a table that shows the collection play completion percentage for the Top 10 users in descending order of %:

        Output Table: [userid] [collection_completion]
                        3       95%
                        1       81%
                       24       68%
                       etc      etc

    In my mind, the calculation sequence for ONE USER is:

        1. grab the user's total owned scenarios from User Collection joined with Game Basic Data and COUNT(gbd.total_scenarios)
        2. grab all recorded plays by COUNT(DISTINCT scenario) for that user
        3. divide all recorded plays by total owned scenarios

    So that's 2 queries and a little PHP massage at the end. For a list of users sorted by completion percentage, things get a little more complicated. I figure I could grab all users' collection totals in one query, and all users' recorded plays in another, and then do the calcs and sort the final array in PHP, but it seems like overkill to potentially be doing all that for 1000+ users when I only ever want the Top 10. Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP handling the bulk of the work the way to go in this case?
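
    A single query along these lines looks feasible: derive each user's owned-scenario total and distinct-play count in two grouped subqueries, join them on userid, and LIMIT 10 (note SUM rather than COUNT of total_scenarios, since each owned game contributes its own scenario count). The sketch below assumes table and column names guessed from the description, and the connection details are placeholders; the app itself is PHP, so Python and mysql-connector are used here only to show the call, and the SQL is the part that matters.

        import mysql.connector  # assumed driver; any DB-API connector is called the same way

        TOP_10_COMPLETION = """
            SELECT owned.userid,
                   ROUND(100 * COALESCE(played.n, 0) / owned.total, 1) AS collection_completion
            FROM (SELECT uc.userid, SUM(g.total_scenarios) AS total
                  FROM user_collection uc
                  JOIN game_basic_data g ON g.game = uc.game
                  GROUP BY uc.userid) AS owned
            LEFT JOIN (SELECT p.userid, COUNT(DISTINCT p.game, p.scenario) AS n
                       FROM user_recorded_plays p
                       GROUP BY p.userid) AS played ON played.userid = owned.userid
            ORDER BY collection_completion DESC
            LIMIT 10
        """

        conn = mysql.connector.connect(user="app", password="...", database="games")
        cur = conn.cursor()
        cur.execute(TOP_10_COMPLETION)
        for userid, completion in cur.fetchall():
            print(userid, completion)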


  • iphone certain PDFs rendering as black image

    - by skantner
    I'm trying to draw the pages of a PDF using the code below. Some PDFs render correctly, but others simply show as a completely black image, or have partial portions rendered and the rest black. In comparing what's going on, the ones that show OK seem to always have "regular" text in them along with some graphics (diagrams, etc.), while the ones that come out black are typically all graphics (like a page of sheet music, for example). Can anyone point me in the right direction on this? I'm building this against the new 3.2 SDK. Thanks!

        // PDF page drawing expects a Lower-Left coordinate system, so we flip the coordinate system
        // before we start drawing.
        CGContextTranslateCTM(context, 0.0, self.bounds.size.height);
        CGContextScaleCTM(context, 1.0, -1.0);

        // Grab the first PDF page
        CGPDFPageRef page = CGPDFDocumentGetPage(myPDF, pageNo);

        // We're about to modify the context CTM to draw the PDF page where we want it,
        // so save the graphics state in case we want to do more drawing
        CGContextSaveGState(context);

        // CGPDFPageGetDrawingTransform provides an easy way to get the transform for a PDF page. It will
        // scale down to fit, including any base rotations necessary to display the PDF page correctly.
        CGAffineTransform pdfTransform = CGPDFPageGetDrawingTransform(page, kCGPDFCropBox, self.bounds, 0, true);

        // And apply the transform.
        CGContextConcatCTM(context, pdfTransform);

        // Finally, we draw the page and restore the graphics state for further manipulations!
        CGContextDrawPDFPage(context, page);
        CGContextRestoreGState(context);


  • JTree events seem misordered

    - by MeBigFatGuy
    It appears to me that tree selection events should happen after focus events, but this doesn't seem to be the case. Assume you have a JTree and a JTextField, where the JTextField is populated by what is selected in the tree. When the user changes the text field, on focus lost, you update the tree from the text field. however, the tree selection is changed before the focus is lost on the text field. this is incorrect, right? Any ideas? Here is some sample code: import java.awt.*; import java.awt.event.*; import javax.swing.*; import javax.swing.event.*; public class Focus extends JFrame { public static void main(String[] args) { Focus f = new Focus(); f.setLocationRelativeTo(null); f.setVisible(true); } public Focus() { Container cp = getContentPane(); cp.setLayout(new BorderLayout()); final JTextArea ta = new JTextArea(5, 10); cp.add(new JScrollPane(ta), BorderLayout.SOUTH); JSplitPane sp = new JSplitPane(); cp.add(sp, BorderLayout.CENTER); JTree t = new JTree(); t.addTreeSelectionListener(new TreeSelectionListener() { public void valueChanged(TreeSelectionEvent tse) { ta.append("Tree Selection changed\n"); } }); t.addFocusListener(new FocusListener() { public void focusGained(FocusEvent fe) { ta.append("Tree focus gained\n"); } public void focusLost(FocusEvent fe) { ta.append("Tree focus lost\n"); } }); sp.setLeftComponent(new JScrollPane(t)); JTextField f = new JTextField(10); sp.setRightComponent(f); pack(); f.addFocusListener(new FocusListener() { public void focusGained(FocusEvent fe) { ta.append("Text field focus gained\n"); } public void focusLost(FocusEvent fe) { ta.append("Text field focus lost\n"); } }); setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } }


  • asynchronous pages

    - by lockedscope
    I have just read the multi-threading and custom threading in ASP.NET articles:

        http://www.williablog.net/williablog/post/2008/12/16/Custom-Threading-in-ASPNET.aspx
        http://www.williablog.net/williablog/post/2008/12/16/Multi-Threading-in-ASPNET.aspx

    I have a couple of questions. What does he mean by returning a thread to the pool? Is that thread completely removed from memory, or put into a state where it is not scheduled on the CPU (is it in a sleep state or whatever)? If that thread is removed from memory, how could it survive after the async point? How does this mechanism work? Is every object (page class, request, response, etc.) copied somewhere else before it is disposed? (Or is it just waiting in a sleep state, and then woken when the async call ends?) He says: "Having said that, making pages asynchronous is not really about improving performance, it is about improving scalability", and then he says: "I'm sorry to say that it will do nothing for scalability or performance." So which one is true? Or for which case(s) are they true?


  • Associations and the Grails webflow

    - by callie16
    Hi, it's my first time using webflows in Grails and I can't seem to solve this. I have 3 domain classes with associations that look something like this: class A { ... static hasMany = [ b : B ] ... } class B { ... static belongsTo = [ a : A ] static hasMany = [ c : C ] ... } class C { ... static belongsTo = [ b : B ] ... } Now, the GSP communicates with the controller via Javascript (due to my use of Dojo). When I try to remoteFunction a normal action, I can do something like this: def action1 = { def anId = params.id def currA = A.get(anId) def sample = currA.b?.c // I can get all the way to 'c' without any problems ... } However, I have a webflow and the contents of that action is in the webflow... It looks something like this: def someFlow = { ... someState { on("next") { def anId = params.id // this does NOT return a null value def currA = A.get(anId) // this does NOT return a null value def sample = currA.b // error already occurs here and I need to get 'c'! }.("somePage") ... } ... } In this case, it tells me that b doesn't exist... so I can't even get to 'c'. Any suggestions on what to do??? Thanks... getting real desperate...


  • Handling dependencies with IoC that change within a single function call

    - by Jess
    We are trying to figure out how to setup Dependency Injection for situations where service classes can have different dependencies based on how they are used. In our specific case, we have a web app where 95% of the time the connection string is the same for the entire Request (this is a web application), but sometimes it can change. For example, we might have 2 classes with the following dependencies (simplified version - service actually has 4 dependencies): public LoginService (IUserRepository userRep) { } public UserRepository (IContext dbContext) { } In our IoC container, most of our dependencies are auto-wired except the Context for which I have something like this (not actual code, it's from memory ... this is StructureMap): x.ForRequestedType().Use() .WithCtorArg("connectionString").EqualTo(Session["ConnString"]); For 95% of our web application, this works perfectly. However, we have some admin-type functions that must operate across thousands of databases (one per client). Basically, we'd want to do this: public CreateUserList(IList<string> connStrings) { foreach (connString in connStrings) { //first create dependency graph using new connection string ???? //then call service method on new database _loginService.GetReportDataForAllUsers(); } } My question is: How do we create that new dependency graph for each time through the loop, while maintaining something that can easily be tested?


  • How do you add a key handler to a GWT FlexTable?

    - by Eric Landry
    I'm trying to change the row highlighting in my FlexTable using KeyCodes.KEY_UP/DOWN. This doesn't seem to work (based on 1809155): public class KeyAwareFlexTable extends FlexTable implements KeyDownHandler, HasKeyDownHandlers { public KeyAwareFlexTable() { this.addKeyDownHandler(this); } @Override public void onKeyDown(KeyDownEvent event) { GWT.log("onKeyDown"); // check if up/down & do something useful } @Override public HandlerRegistration addKeyDownHandler(KeyDownHandler handler) { return addDomHandler(handler, KeyDownEvent.getType()); } } I've also tried this (based on this site): FlexTable table = new FlexTable() { @Override public void onBrowserEvent(Event event) { super.onBrowserEvent(event); GWT.log("Event type = " + DOM.eventGetType(event)); switch (DOM.eventGetType(event)) { case Event.ONKEYDOWN: if (DOM.eventGetKeyCode(event) == KeyCodes.KEY_UP) { GWT.log("up"); } else if (DOM.eventGetKeyCode(event) == KeyCodes.KEY_DOWN) { GWT.log("down"); } break; default: break; } } }; table.sinkEvents(Event.ONKEYDOWN); I'm looking for a way to have this behavior more or less. Does anybody have a way to do this in GWT?


  • AMQP gem specifying a dead letter exchange

    - by JP.
    I've specified a queue on the RabbitMQ server called MyQueue. It is durable and has x-dead-letter-exchange set to MyQueue.DLX. (I also have an exchange called MyExchange bound to that queue, and another exchange called MyQueue.DLX, but I don't believe this is important to the question) If I use ruby's amqp gem to subscribe to those messages I would do it like this: # Doing this before and in a new thread has to do with how my code is structured # shown here in case it has a bearing on the question Thread.new do AMQP.start('amqp://guest:[email protected]:5672') end EventMachine.next_tick do channel = AMQP::Channel.new(AMQP.connection) queue = channel.queue("MyQueue", :durable => true, :'x-dead-letter-exchange' => "MyQueue.DLX") queue.subscribe(:ack => true) do |metadata, payload| p metadata p payload end end If I execute this code with the queues and exchanges already created and bound (as they need to be in my set up) then RabbitMQ throws the following error in its logs: =ERROR REPORT==== 19-Aug-2013::14:25:53 === connection <0.19654.2>, channel 2 - soft error: {amqp_error,precondition_failed, "inequivalent arg 'x-dead-letter-exchange'for queue 'MyQueue' in vhost '/': received none but current is the value 'MyQueue.DLX' of type 'longstr'", 'queue.declare'} Which seems to be saying that I haven't specified the same Dead Letter Exchange as the pre-existing queue - but I believe I have with the queue = ... line. Any ideas?


  • Ninject InThreadScope Binding

    - by e36M3
    I have a Windows service that contains a file watcher that raises events when a file arrives. When an event is raised I will be using Ninject to create business layer objects that inside of them have a reference to an Entity Framework context which is also injected via Ninject. In my web applications I always used InRequestScope for the context, that way within one request all business layer objects work with the same Entity Framework context. In my current Windows service scenario, would it be sufficient to switch the Entity Framework context binding to a InThreadScope binding? In theory when an event handler in the service triggers it's executed under some thread, then if another file arrives simultaneously it will be executing under a different thread. Therefore both events will not be sharing an Entity Framework context, in essence just like two different http requests on the web. One thing that bothers me is the destruction of these thread scoped objects, when you look at the Ninject wiki: .InThreadScope() - One instance of the type will be created per thread. .InRequestScope() - One instance of the type will be created per web request, and will be destroyed when the request ends. Based on this I understand that InRequestScope objects will be destroyed (garbage collected?) when (or at some point after) the request ends. This says nothing however on how InThreadScope objects are destroyed. To get back to my example, when the file watcher event handler method is completed, the thread goes away (back to the thread pool?) what happens to the InThreadScope-d objects that were injected? EDIT: One thing is clear now, that when using InThreadScope() it will not destroy your object when the handler for the filewatcher exits. I was able to reproduce this by dropping many files in the folder and eventually I got the same thread id which resulted in the same exact Entity Framework context as before, so it's definitely not sufficient for my applications. In this case a file that came in 5 minutes later could be using a stale context that was assigned to the same thread before.


  • How to remove the error "Can't find PInvoke DLL SQLite.Interop.dll"

    - by Shailesh Jaiswal
    I am developing a Windows Mobile application using the SQLite database. I am using the following code to connect to the database:

        SQLiteConnection cn = new SQLiteConnection();
        SQLiteDataReader SQLiteDR;
        cn.ConnectionString = @"Data Source=F:\CompNetDB.db3";
        cn.Open();
        SQLiteCommand cmd = new SQLiteCommand();
        cmd.CommandText = "select * from CustomerInfo";
        cmd.CommandType = CommandType.Text;
        cmd.Connection = cn;
        SQLiteDR = cmd.ExecuteReader();

    With the above code I am getting the error "Can't find PInvoke DLL SQLite.Interop.dll". I have added the DLL System.Data.SQLite from the \SQLite.NET\bin\compactframework folder, which is the folder installed by default when I installed SQLite. In the same folder there is one DLL file named SQLite.Interop.066.DLL. When I try to add a reference to this DLL it gives an error that the DLL cannot be added. Are the two DLLs SQLite.Interop.dll and SQLite.Interop.066.dll the same? How do I solve the error "Can't find PInvoke DLL SQLite.Interop.dll" in the above code? Please can you tell me whether there is a mistake in my code or I am missing something in my application?


  • How do I copy or move an NSManagedObject from one context to another?

    - by Aeonaut
    I have what I assume is a fairly standard setup, with one scratchpad MOC which is never saved (containing a bunch of objects downloaded from the web) and another permanent MOC which persists objects. When the user selects an object from scratchMOC to add to her library, I want to either 1) remove the object from scratchMOC and insert into permanentMOC, or 2) copy the object into permanentMOC. The Core Data FAQ says I can copy an object like this: NSManagedObjectID *objectID = [managedObject objectID]; NSManagedObject *copy = [context2 objectWithID:objectID]; (In this case, context2 would be permanentMOC.) However, when I do this, the copied object is faulted; the data is initially unresolved. When it does get resolved, later, all of the values are nil; none of the data (attributes or relationships) from the original managedObject are actually copied or referenced. Therefore I can't see any difference between using this objectWithID: method and just inserting an entirely new object into permanentMOC using insertNewObjectForEntityForName:. I realize I can create a new object in permanentMOC and manually copy each key-value pair from the old object, but I'm not very happy with that solution. (I have a number of different managed objects for which I have this problem, so I don't want to have to write and update copy: methods for all of them as I continue developing.) Is there a better way?


  • What's the best practice to setup testing for ASP.Net MVC? What to use/process/etc?

    - by melaos
    Hi there, I'm trying to learn how to properly set up testing for an ASP.NET MVC application. From what I've been reading here and there thus far, the definition of legacy code kind of piques my interest, where it mentions that legacy code is any code without unit tests. I did my project in a hurry, not having the time to properly set up unit tests for the app, and I'm still learning how to properly do TDD and unit testing at the same time. Then I came upon Selenium IDE/RC and was using it to test on the browser end. It was during that time, too, that I came upon the concept of integration testing. So from my understanding, it seems that unit testing should be done to define the tests and basic assumptions of each function, and if the function is dependent on something else, that something else needs to be mocked, so that the tests are always singular and can run fast. Questions: Am I right to say that the project should have started with unit tests with proper mocks, using something like Rhino Mocks, and then anything else which requires a 3rd-party DLL, database data access, etc. should be done via integration testing using Selenium? Because I have a function which calls a third-party DLL, I'm not sure whether to write a unit test in NUnit that just instantiates the object and passes it some dummy data (which breaks the mocking part), or to just cover that part in my Selenium integration testing when I submit my forms and call the DLL. And for user acceptance tests, is it safe to say we can just use Selenium again? Am I missing something, or is there a better way/framework? I'm trying to put in more tests for regression testing, and to ensure that nothing breaks when we put in new features. I also like the idea of TDD because it helps to better define the function, sort of like meta documentation. Thanks!! Hope this question isn't too subjective, because I need it for my case.


  • Python script runs well, but not perfectly; debugging help

    - by S1syphus
    What it does (sort of)... or is meant to: the script reads from a CSV file that contains information on sound files and creates a playlist exactly 60 minutes long. Each CSV row contains: the title, the duration (in seconds), and the minimum total time to be played (in minutes). An example is:

        Soundfoo,120,10
        Soundbar,30,6
        Sounddev,60,20
        Soundrandom,15,8

    The script works out the minimum instances of plays. Take 'Soundfoo' for example: the length of each sample is 120 seconds and the minimum time to be played is 10 minutes, so basic maths, 10*60/120, gives the number of instances the song is to be played, in this case 5. It is meant to take the minimum number of instances and spread them out equally from each other, so there will never be a period where, for example, Soundbar is played twice in a row. Then, if the minimum instances of each song have been used and there is still time within the 60 minutes, how is it possible to tell it to go back and fill the remaining time by selecting each sound again until the 60 minutes are filled, while remaining sparsely populated?

    Here's the issue! The script fails to calculate the actual time required to play all the sounds in a file and the total time of the playlist. The thing is, though, it doesn't get it wrong all the time, maybe 3 out of 5 times; even if I run it on the same CSV file it will give me different answers. Here is the file I shall run the script on, for the sake of ease in seeing the issue:

        Sound1,60,10
        Sound2,60,10
        Sound3,60,10
        Sound4,60,10
        Sound5,60,10
        Sound6,60,10

    I'll do it three times and post the results:

        1) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 62
           Total playtime in minutes: 62.0
        2) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 71
           Total playtime in minutes: 71.0
        3) Required playtime in minutes: 60
           Actual time in minutes to play all required ads: 60
           Total playtime in minutes: 60.0

    Relevant Code: pastebin.com/demkBXk6
    And finally... in context: http://pastebin.com/demkBXk6

    If you made it down to here, thanks for staying and reading. Kudos.
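
    For what it's worth, the minimum-instances arithmetic itself is deterministic, so the "actual time to play all required ads" figure should never change between runs on the same CSV; if it does, the variation is probably creeping in at a later, randomised scheduling step. A small Python 3 sketch of just that deterministic part (reading the same CSV layout as above; not taken from the pastebin):

        import csv
        import math

        def minimum_schedule(path):
            """Return {title: minimum plays} and the minutes those plays occupy."""
            plays = {}
            total_seconds = 0
            with open(path, newline="") as f:
                for row in csv.reader(f):
                    if not row:
                        continue
                    title, length, min_minutes = row[0], int(row[1]), int(row[2])
                    n = math.ceil(min_minutes * 60 / length)  # whole plays needed
                    plays[title] = n
                    total_seconds += n * length
            return plays, total_seconds / 60

        plays, minutes = minimum_schedule("sounds.csv")
        print(plays)
        print("Minimum required playtime in minutes:", minutes)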


  • flash as3 document class and event listeners

    - by Lee
    I think I have this document class concept entirely wrong now; I was wondering if someone would mind explaining it. I assumed that the class below would be instantiated within the first frame of scene one of a movie. I also assumed that when changing scenes the state of the class would remain constant, so any event listeners would still be running.

    Scene 1: I have a movieclip named ui_mc that has a button in it for muting sound.
    Scene 2: I have the same movie clip with the same button.

    Now the event listener picks it up in the first scene, however it does not in the second. I am wondering: for every scene, do the event listeners need to be set up again? If that is the case, is there an event listener to listen for the change in scene, so I can set them back up again lol.. Thanks in advance..

        package {
            import flash.display.MovieClip;
            import flash.events.MouseEvent;
            import flash.media.Sound;
            import flash.media.SoundChannel;

            public class game extends MovieClip {
                public var snd_state:Boolean = true;

                public function game() {
                    ui_setup();
                }

                public function ui_setup():void {
                    ui_mc.toggleMute_mc.addEventListener(MouseEvent.CLICK, snd_toggle);
                }

                private function snd_toggle(event:MouseEvent):void {
                    // 0 = No Sound, 1 = Full Sound
                    trace("Toggle");
                }
            }
        }


  • Seam:token tag not being respected

    - by JBristow
    When I click a command button, and then hit the browser back button to the form and click it again, it submits a second time without throwing the proper exception... Even stranger, the form id itself is DIFFERENT when I come back, which implies it has regenerated a "valid" form id at some point. Here's the relevant code: Any ideas? <h:form id="accountActivationForm"> <s:token/> <a4j:commandButton id="cancelActivateAccountButton" action="#{controller[cancelAction]}" image="/images/button-Cancel-gray.gif" reRender="#{reRenderList}" oncomplete="#{onCancelComplete}" /> &#160; <a4j:commandButton id="activateAccountButton" action="#{controller[agreeAction]}" image="/images/button-i-agree-continue.gif" styleClass="activate-account-button" reRender="#{reRenderList}" oncomplete="#{onActivationComplete}"/> </h:form> Clarifications: I inherited this, so I'm trying to change it as little as possible. (It's used in a couple places.) Each action returns a view, not null. I have confirmed this by stepping through line-by-line. The reRenderList is empty in my current test-case. onActivationComplete is also empty. I'm going to be going template-by-template to see if someone made it with nested forms, because my coworkers have had unrelated problems due to that, so it couldn't hurt to eliminate that as a possible problem.


  • String to array or Array to string tips on formats, etc

    - by user316841
    Hi, first of all thanks for taking your time! I'm a junior dev, working with PHP + MySQL. My issue: I'm saving data from a form to my database. From this form, I only need to save the contacts: name, phone number, address. But it would be nice to have a small reference to the user's answers. Let's say for each question we've got a value between 1 and 4. There's no need to create a table just for it, because what's needed is just the personal contacts. So I'm thinking of recording each question/answer as a letter and its corresponding value, for example (A2, B1, C5, D3, etc). My question is: is there a format I could handle easily afterwards? Convert to an array (string to array) in case the client changes their mind and asks for this data to be placed in table columns? Just to prevent this situation! For example, from "A2, B1, C5" to array("A" => "2", "B" => "1", "C" => "5"). For now I guess regex is the answer, but it's always hard to figure out and I'm always getting into trouble =) Thanks!
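
    The "letter + number" format does round-trip cleanly with one small regex, so it can be parsed back into an array later if the client changes their mind. A quick sketch of both directions, written in Python purely for illustration since the stack here is PHP; the decode step would be a similar preg_match_all over the same pattern.

        import re

        def encode(answers):
            # {"A": 2, "B": 1, "C": 5}  ->  "A2, B1, C5"
            return ", ".join("%s%d" % (k, v) for k, v in sorted(answers.items()))

        def decode(text):
            # "A2, B1, C5"  ->  {"A": 2, "B": 1, "C": 5}
            return {m.group(1): int(m.group(2))
                    for m in re.finditer(r"([A-Z])(\d+)", text)}

        print(encode({"A": 2, "B": 1, "C": 5}))   # A2, B1, C5
        print(decode("A2, B1, C5"))               # {'A': 2, 'B': 1, 'C': 5}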


  • Git checkout doesn't change anything, and it's getting very frustrating

    - by Josh
    I really like git. At least, I like the idea of git. Being able to check out my master project as a separate branch where I can change whatever I want without risk of screwing everything else up is awesome. But it's not working. Every time I check out a branch from another branch, make changes to the one branch, and then check out the original branch, I still have all the files and changes that happened in the other branch. This is getting extremely frustrating. I've read that this can happen when you have files open in the IDE while doing this, but I've been pretty careful about that: I closed the files in the IDE, closed the IDE itself, and shut down my Rails server before switching branches, and this still happens. Also, running 'git clean -f' either deletes everything that happened after some arbitrary commit (and randomly, at that), or, as in the latest case, doesn't change anything back to its original state. I thought I was using git correctly, but at this point I'm at my wit's end. I'm trying to work with a bunch of experimental code using a stable version of my project, but I keep having to manually track down and fix all the changes I made. Any ideas or suggestions?

        git checkout -b photo_tagging
        git branch    # to make sure it's right
        # make a bunch of changes, creations, etc
        git status    # see what's changed since before
        git add .     # approve of the changes, I guess, since if I do git commit after this, it says no changes
        git commit -m 'these are changes I made'
        git checkout master
        git branch    #=> *master
        # look at files, tags_controller is still there, added in photo_tagging
        # and code added in photo_tagging branch is still there in *master

    This seems to happen whether I do a commit or not on the branch.


  • Yet another Python Windows CMD mklink problem ... can't get it to work!

    - by Felix Dombek
    OK, I have just posted another question which outlined my program, but the specific problem was different. Now, my program just stops working without any message whatsoever. I'd be grateful if someone could help me here. I want to create symlinks for each file in a directory structure, all in one large flat folder, and have the following code by now:

        import os
        import re
        import subprocess

        # loop over directory structure:
        # for all items in current directory,
        # if item is directory, recurse into it;
        # else it's a file, then create a symlink for it
        def makelinks(folder, targetfolder, cmdprocess = None):
            if not cmdprocess:
                cmdprocess = subprocess.Popen("cmd",
                                              stdin  = subprocess.PIPE,
                                              stdout = subprocess.PIPE,
                                              stderr = subprocess.PIPE)
            print(folder)
            for name in os.listdir(folder):
                fullname = os.path.join(folder, name)
                if os.path.isdir(fullname):
                    makelinks(fullname, targetfolder, cmdprocess)
                else:
                    makelink(fullname, targetfolder, cmdprocess)

        # for a given file, create one symlink in the target folder
        def makelink(fullname, targetfolder, cmdprocess):
            linkname = os.path.join(targetfolder,
                                    re.sub(r"[\/\\\:\*\?\"\<\>\|]", "-", fullname))
            if not os.path.exists(linkname):
                try:
                    os.remove(linkname)
                    print("Invalid symlink removed:", linkname)
                except:
                    pass
            if not os.path.exists(linkname):
                cmdprocess.stdin.write("mklink " + linkname + " " + fullname + "\r\n")

    So this is a top-down recursion where first the folder name is printed, then the subdirectories are processed. If I run this now over some folder, the whole thing just stops after 10 or so symbolic links. Here is the output:

        D:\Musik\neu
        D:\Musik\neu\# Electronic
        D:\Musik\neu\# Electronic\# tag & reencode
        D:\Musik\neu\# Electronic\# tag & reencode\ChillOutMix
        D:\Musik\neu\# Electronic\# tag & reencode\Unknown D&B
        D:\Musik\neu\# Electronic\# tag & reencode\Unknown D&B 2

    The program still seems to run, but no new output is generated. It created 9 symlinks for some files in the "# tag & reencode" folder and for the first three files in the ChillOutMix folder. The cmd.exe window is still open and empty, and shows in its title bar that it is currently processing the mklink command for the third file in ChillOutMix. I tried to insert a time.sleep(2) after each cmdprocess.stdin.write in case Python is just too fast for the cmd process, but it doesn't help. Does anyone know what the problem might be?
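
    Two things are worth ruling out here, though both are only suspicions. First, the long-lived cmd process's stdout and stderr pipes are never read, so once cmd has echoed enough prompt and mklink output to fill the pipe buffer it blocks, and the next stdin.write blocks with it, which would look exactly like a silent hang after a handful of links. Second, the paths are written unquoted, and folder names such as "# tag & reencode" contain '&', which cmd treats as a command separator unless the argument is quoted. A sketch that sidesteps both by running one quoted mklink per file instead of piping into a single cmd session (mklink also needs an elevated prompt or the symlink privilege, which is a separate requirement):

        import os
        import re
        import subprocess

        def makelink(fullname, targetfolder):
            linkname = os.path.join(targetfolder,
                                    re.sub(r"[\/\\\:\*\?\"\<\>\|]", "-", fullname))
            if not os.path.exists(linkname):
                # mklink is a cmd built-in, so it has to go through "cmd /c".
                # Quoting both paths keeps cmd from treating '&' in folder names
                # like "# tag & reencode" as a command separator.
                command = 'cmd /c mklink "%s" "%s"' % (linkname, fullname)
                subprocess.check_call(command)  # raises CalledProcessError if mklink fails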


  • Comparing two collections for equality

    - by Crossbrowser
    I would like to compare two collections (in C#), but I'm not sure of the best way to implement this efficiently. I've read the other thread about Enumerable.SequenceEqual, but it's not exactly what I'm looking for. In my case, two collections would be equal if they both contain the same items (no matter the order). Example: collection1 = {1, 2, 3, 4}; collection2 = {2, 4, 1, 3}; collection1 == collection2; // true What I usually do is to loop through each item of one collection and see if it exists in the other collection, then loop through each item of the other collection and see if it exists in the first collection. (I start by comparing the lengths). if (collection1.Count != collection2.Count) return false; // the collections are not equal foreach (Item item in collection1) { if (!collection2.Contains(item)) return false; // the collections are not equal } foreach (Item item in collection2) { if (!collection1.Contains(item)) return false; // the collections are not equal } return true; // the collections are equal However, this is not entirely correct, and it's probably not the most efficient way to do compare two collections for equality. An example I can think of that would be wrong is: collection1 = {1, 2, 3, 3, 4} collection2 = {1, 2, 2, 3, 4} Which would be equal with my implementation. Should I just count the number of times each item is found and make sure the counts are equal in both collections? The examples are in some sort of C# (let's call it pseudo-C#), but give your answer in whatever language you wish, it does not matter. Note: I used integers in the examples for simplicity, but I want to be able to use reference-type objects too (they do not behave correctly as keys because only the reference of the object is compared, not the content).


  • Make sure <a href="local file"> is opened outside of browser

    - by Heinzi
    For an Intranet web application (document management), I want to show a list of files associated with a certain customer. The resulting HTML is like this:

        <a href="file:///server/share/dir/somefile.docx">somefile.docx</a>
        <a href="file:///server/share/dir/someotherfile.pdf">somefile.pdf</a>
        <a href="file:///server/share/dir/yetanotherfile.txt">yetanotherfile.txt</a>

    This works fine. Unfortunately, when clicking on a text file (or image file), Internet Explorer (and I guess most other browsers as well) insists on showing it in the browser instead of opening the file with the associated application (e.g. Notepad). In our case, this is undesired behavior, since it does not allow the user to edit the file. Is there some workaround for this behavior (e.g. something like <a href="file:///..." open="external">)? I'm aware that this is a browser-specific thing, and an IE-only solution would be fine (it's an Intranet application after all).

