Search Results

Search found 1486 results on 60 pages for 'dan moulding'.

Page 34 of 60

  • Unlocking a mutex from a different thread (C++)

    - by dan
    I'm using the C++ boost::thread library, which in my case means I'm using pthreads. Officially, a mutex must be unlocked from the same thread that locks it, and I want the effect of being able to lock in one thread and then unlock in another. There are many ways to accomplish this. One possibility would be to write a new mutex class which allows this behavior. For example:

        class inter_thread_mutex {
            bool locked;
            boost::mutex mx;
            boost::condition_variable cv;
        public:
            void lock() {
                boost::unique_lock<boost::mutex> lck(mx);
                while (locked) cv.wait(lck);
                locked = true;
            }
            void unlock() {
                {
                    boost::lock_guard<boost::mutex> lck(mx);
                    if (!locked) error();
                    locked = false;
                }
                cv.notify_one();
            }
            // bool try_lock(); void error(); etc.
        };

    I should point out that the above code doesn't guarantee FIFO access, since if one thread calls lock() while another calls unlock(), the first thread may acquire the lock ahead of other threads that are waiting. (Come to think of it, the boost::thread documentation doesn't appear to make any explicit scheduling guarantees for either mutexes or condition variables.) But let's just ignore that (and any other bugs) for now.

    My question is: if I decide to go this route, would I be able to use such a mutex as a model for the boost Lockable concept? For example, would anything go wrong if I used a boost::unique_lock<inter_thread_mutex> for RAII-style access, and then passed this lock to boost::condition_variable_any::wait(), etc.? On one hand I don't see why not. On the other hand, "I don't see why not" is usually a very bad way of determining whether something will work. The reason I ask is that if it turns out that I have to write wrapper classes for RAII locks and condition variables and whatever else, then I'd rather just find some other way to achieve the same effect.
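
    For contrast, the textbook primitive for "lock in one thread, unlock in another" is a semaphore rather than a mutex, because a semaphore has no owning thread; the inter_thread_mutex above is essentially a hand-rolled binary semaphore. A minimal sketch of the idea (shown in Java with java.util.concurrent rather than Boost; the class and variable names are illustrative only):

        import java.util.concurrent.Semaphore;

        public class CrossThreadLock {
            // A binary semaphore is not owned by any thread, so it can legally be
            // acquired in one thread and released in another.
            private static final Semaphore gate = new Semaphore(1);

            public static void main(String[] args) throws InterruptedException {
                gate.acquire();                          // "lock" in the main thread
                Thread worker = new Thread(() -> {
                    // ... work that must finish before the lock is released ...
                    gate.release();                      // "unlock" from another thread
                });
                worker.start();
                gate.acquire();                          // blocks until the worker releases
                gate.release();
                worker.join();
            }
        }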

  • Using OUTPUT/INTO within instead of insert trigger invalidates 'inserted' table

    - by Dan
    I have a problem using a table with an INSTEAD OF INSERT trigger. The table I created contains an identity column. I need to use an INSTEAD OF INSERT trigger on this table. I also need to see the value of the newly inserted identity from within my trigger, which requires the use of OUTPUT/INTO within the trigger. The problem is then that clients that perform INSERTs cannot see the inserted values. For example, I create a simple table:

        CREATE TABLE [MyTable](
            [MyID] [int] IDENTITY(1,1) NOT NULL,
            [MyBit] [bit] NOT NULL,
            CONSTRAINT [PK_MyTable_MyID] PRIMARY KEY NONCLUSTERED ([MyID] ASC))

    Next I create a simple INSTEAD OF trigger:

        create trigger [trMyTableInsert] on [MyTable] instead of insert as
        BEGIN
            DECLARE @InsertedRows table(MyID int, MyBit bit);
            INSERT INTO [MyTable] ([MyBit])
            OUTPUT inserted.MyID, inserted.MyBit INTO @InsertedRows
            SELECT inserted.MyBit FROM inserted;
            -- LOGIC NOT SHOWN HERE THAT USES @InsertedRows
        END;

    Lastly, I attempt to perform an insert and retrieve the inserted values:

        DECLARE @tbl TABLE (myID INT)
        insert into MyTable (MyBit)
        OUTPUT inserted.MyID INTO @tbl
        VALUES (1)
        SELECT * from @tbl

    The issue is that all I ever get back is zero. I can see the row was correctly inserted into the table. I also know that if I remove the OUTPUT/INTO from within the trigger, this problem goes away. Any thoughts as to what I'm doing wrong? Or is how I want to do things not feasible? Thanks.

  • How to access and run field events from extension js?

    - by Dan Roberts
    I have an extension that helps in submitting forms automatically for a process at work. We are running into a problem with dual select boxes, where one option is selected and then that selection changes another field's options. Since setting an option's selected property to true doesn't trigger the field's onchange event, I am trying to do so through code. The problem I've run into is that if I try to access or run functions on the field object from the extension, I get the error:

        Error: uncaught exception: [Exception... "Component is not available" nsresult: "0x80040111 (NS_ERROR_NOT_AVAILABLE)" location: "JS frame :: chrome://webformsidebar/content/webformsidebar.js :: WebFormSidebar_FillProcess :: line 499" data: no]

    The line causing the error is:

        if (typeof thisField.onchange === 'function')

    The line right before it works just fine:

        thisField.options[t].selected = true;

    ...so I'm not sure why this is resulting in such an error. What surprises me most, I guess, is that checking for the existence of the function leads to an error. It feels like the problem is related to the code running in the context of the extension instead of the browser window document. If so, is there any way to call a function in the browser window context instead? Do I need to actually inject code into the page somehow? Any other ideas? Thanks

  • Calling system commands from Perl

    - by Dan J
    In an older version of our code, we called out from Perl to do an LDAP search as follows:

        # Pass the base DN in via the ldapsearch-specific environment variable
        # (rather than as the "-b" parameter) to avoid problems of shell
        # interpretation of special characters in the DN.
        $ENV{LDAP_BASEDN} = $ldn;
        $lcmd = "ldapsearch -x -T -1 -h $gLdapServer" . <snip> " > $lworkfile 2>&1";
        system($lcmd);
        if (($? != 0) || (! -e "$lworkfile")) {
            # Handle the error
        }

    The code above would result in a successful LDAP search, and the output of that search would be in the file $lworkfile. Unfortunately, we recently reconfigured openldap on this server so that a "BASE DC=" is specified in /etc/openldap/ldap.conf and /etc/ldap.conf. That change seems to mean ldapsearch ignores the LDAP_BASEDN environment variable, and so my ldapsearch fails. I've tried a couple of different fixes, but without success so far.

    (1) I tried going back to using the "-b" argument to ldapsearch, but escaping the shell metacharacters. I started writing the escaping code:

        my $ldn_escaped = $ldn;
        $ldn_escaped =~ s/\/\\/g;
        $ldn_escaped =~ s/`/\`/g;
        $ldn_escaped =~ s/$/\$/g;
        $ldn_escaped =~ s/"/\"/g;

    That threw up some Perl errors because I haven't escaped those regexes properly in Perl (the line number matches the regex with the backticks in it):

        Backticks found where operator expected at /tmp/mycommand line 404, at end of line

    At the same time I started to doubt this approach and looked for a better one.

    (2) I then saw some Stack Overflow questions (here and here) that suggested a better solution. Here's the code:

        print("Processing...");
        # Pass the arguments to ldapsearch by invoking open() with an array.
        # This ensures the shell does NOT interpret shell metacharacters.
        my(@cmd_args) = ("-x", "-T", "-1", "-h", "$gLdapPool", "-b", "$ldn", <snip> );
        $lcmd = "ldapsearch";
        open my $lldap_output, "-|", $lcmd, @cmd_args;
        while (my $lline = <$lldap_output>) {
            # I can parse the contents of my file fine
        }
        $lldap_output->close;

    The two problems I am having with approach (2) are:

    a) Calling open or system with an array of arguments does not let me pass > $lworkfile 2>&1 to the command, so I can't stop the ldapsearch output being sent to the screen, which makes my output look ugly:

        Processing...ldap_bind: Success (0) additional info: Success

    b) I can't figure out how to choose which location (i.e. path and file name) the file handle passed to open points to, i.e. I don't know where $lldap_output is. Can I move/rename it, or inspect it to find out where it is (or is it not actually saved to disk)?

    Based on the problems with (2), this makes me think I should go back to approach (1), but I'm not quite sure how to do the escaping correctly.
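
    Not Perl, but the same two requirements - pass the arguments as a list so the shell never interprets the DN, and still send stdout/stderr to a file you choose - map onto Java's ProcessBuilder as follows (a rough sketch; the host, base DN and output path are placeholders, not values from the question):

        import java.io.File;
        import java.io.IOException;
        import java.util.List;

        public class LdapSearchRunner {
            public static void main(String[] args) throws IOException, InterruptedException {
                File workFile = new File("/tmp/ldapsearch.out");    // placeholder path

                // Arguments are passed as a list, so no shell ever sees the base DN
                // and its metacharacters need no escaping.
                ProcessBuilder pb = new ProcessBuilder(List.of(
                        "ldapsearch", "-x", "-T", "-1",
                        "-h", "ldap.example.com",                    // placeholder host
                        "-b", "ou=People,dc=example,dc=com"));       // placeholder base DN

                // Equivalent of "> $lworkfile 2>&1": both streams go to the work file,
                // so nothing is echoed to the screen.
                pb.redirectErrorStream(true);
                pb.redirectOutput(workFile);

                int exitCode = pb.start().waitFor();
                if (exitCode != 0 || !workFile.exists()) {
                    System.err.println("ldapsearch failed with exit code " + exitCode);
                }
            }
        }

    In Perl terms, the equivalent direction would be to keep the list form but manage the redirection yourself (for example with IPC::Open3 or IPC::Run) rather than going back to shell escaping.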

  • How would you code a washing machine?

    - by Dan
    Imagine I have a class that represents a simple washing machine. It can perform the following operations, in the following order: turn on - wash - centrifuge - turn off. I see two basic alternatives:

    A) I can have a class WashingMachine with methods turnOn(), wash(int minutes), centrifuge(int revs), turnOff(). The problem with this is that the interface says nothing about the correct order of operations. I can at best throw InvalidOperationException if the client tries to centrifuge before the machine was turned on.

    B) I can let the class itself take care of correct transitions and have the single method nextOperation(). The problem with this, on the other hand, is that the semantics are poor. The client will not know what will happen when he calls nextOperation(). Imagine you implement the centrifuge button's click event so it calls nextOperation(). The user presses the centrifuge button after the machine was turned on and, oops! the machine starts to wash. I will probably need a few properties on my class to parameterize the operations, or maybe a separate Program class with washLength and centrifugeRevs fields, but that is not really the problem.

    Which alternative is better? Or maybe there are some other, better alternatives that I've failed to describe?
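
    One common middle ground between A and B: keep the explicit, well-named methods from A, but back them with an explicit state field so the allowed order is enforced (and reported) in one place. A rough Java sketch of the idea; the state names are illustrative:

        public class WashingMachine {
            private enum State { OFF, ON, WASHED, CENTRIFUGED }

            private State state = State.OFF;

            public void turnOn() {
                require(State.OFF);
                state = State.ON;
            }

            public void wash(int minutes) {
                require(State.ON);
                // ... run the wash cycle for 'minutes' ...
                state = State.WASHED;
            }

            public void centrifuge(int revs) {
                require(State.WASHED);
                // ... spin at 'revs' ...
                state = State.CENTRIFUGED;
            }

            public void turnOff() {
                require(State.CENTRIFUGED);
                state = State.OFF;
            }

            // Fail fast with a descriptive message instead of silently doing the wrong step.
            private void require(State expected) {
                if (state != expected) {
                    throw new IllegalStateException(
                            "Expected state " + expected + " but machine is " + state);
                }
            }
        }

    The interface still documents the legal order only informally, but misuse now fails immediately with a clear error rather than starting the wrong operation, and the transition checks live next to the state definition instead of being scattered across button handlers.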

  • How to configure log4j with a properties file

    - by Dan
    How do I get log4j to pick up a properties file? I'm writing a Java desktop app in which I want to use log4j. In my main method I have this:

        PropertyConfigurator.configure("log4j.properties");

    The log4j.properties file sits in the same directory as the jar when I open it. Yet I get this error:

        log4j:ERROR Could not read configuration file [log4j.properties].
        java.io.FileNotFoundException: log4j.properties (The system cannot find the file specified)

    What am I doing wrong?
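
    The likely culprit is that PropertyConfigurator.configure("log4j.properties") resolves the name against the JVM's current working directory, which is not necessarily the directory the jar and the properties file sit in. A hedged sketch of the usual alternative - loading the file as a classpath resource (this assumes log4j 1.x and that log4j.properties is packaged on the classpath):

        import java.net.URL;

        import org.apache.log4j.Logger;
        import org.apache.log4j.PropertyConfigurator;

        public class Main {
            private static final Logger log = Logger.getLogger(Main.class);

            public static void main(String[] args) {
                // Look the file up on the classpath instead of the working directory.
                URL config = Main.class.getClassLoader().getResource("log4j.properties");
                if (config != null) {
                    PropertyConfigurator.configure(config);
                } else {
                    // Fall back to an explicit path, e.g. one passed on the command line.
                    PropertyConfigurator.configure(args.length > 0 ? args[0] : "log4j.properties");
                }
                log.info("log4j configured");
            }
        }

    Passing -Dlog4j.configuration=file:/path/to/log4j.properties on the java command line is another common way to point log4j 1.x at an external file.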

  • Writing to a java socket channel which should be closed does not generate an exception

    - by Dan Serfaty
    Hi all, We have a java server that keeps a socket channel open with an Android client in order to provide push capabilities to our client application. However, after putting the Android in airplane mode, which I expected would sever the connection, the server can still write to the SocketChannel object associated with that Android client and no error is thrown. Calling SocketChannel.isConnected() before writing to it returns true. What are we missing? Is the handling of sockets different with mobile devices? Thanks in advance for your help.
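
    What the question describes is standard TCP behavior rather than anything Android-specific: a write only copies bytes into the local send buffer, and SocketChannel.isConnected() reports local state, so a peer that silently disappears (airplane mode, pulled cable) is only noticed after retransmission timeouts or a failed read. The usual remedy is an application-level heartbeat with a read timeout; a rough Java sketch using a blocking Socket (the protocol bytes and timeout are made up for illustration):

        import java.io.IOException;
        import java.io.InputStream;
        import java.io.OutputStream;
        import java.net.Socket;

        public class Heartbeat {
            private static final int PING = 0x01;   // made-up protocol bytes
            private static final int PONG = 0x02;

            // Returns false if the peer does not answer within the timeout, which is
            // how a vanished client actually gets noticed on the server side.
            static boolean isPeerAlive(Socket socket) {
                try {
                    socket.setSoTimeout(5_000);          // 5 second read timeout
                    OutputStream out = socket.getOutputStream();
                    InputStream in = socket.getInputStream();
                    out.write(PING);
                    out.flush();                         // succeeds even if the peer is gone
                    return in.read() == PONG;            // only a read can prove liveness
                } catch (IOException e) {
                    return false;                        // timeout or reset: treat as dead
                }
            }
        }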

  • Hierarchy / Flyweight / Instancing Problem in Python

    - by Dan
    Here is the problem I am trying to solve. (I have simplified the actual problem, but this should give you all the relevant information.) I have a hierarchy like so:

        1.A  1.B  1.C  2.A  3.D  4.B  5.F

    (This is hard to illustrate - each number is the parent, each letter is the child.) Creating an instance of the 'letter' objects is expensive (IO, database costs, etc), so should only be done once. The hierarchy needs to be easy to navigate. Children in the hierarchy need to have just one parent. Modifying the contents of the letter objects should be possible directly from the objects in the hierarchy. There needs to be a central store containing all of the 'letter' objects (and only those in the hierarchy). 'letter' and 'number' objects need to be possible to create from a constructor (such as Letter(**kwargs)). It is perfectly acceptable to expect that when a letter changes from the hierarchy, all other letters will respect the same change.

    Hope this isn't too abstract to illustrate the problem. What would be the best way of solving this? (Then I'll post my solution.) Here's an example script:

        one = Number('one')
        a = Letter('a')
        one.addChild(a)

        two = Number('two')
        a = Letter('a')
        two.addChild(a)

        for child in one:
            child.method1()

        for child in two:
            print '%s' % child.method2()
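
    The "create each expensive 'letter' only once" part of this is the classic flyweight/interning pattern: a factory keeps a registry keyed by identity and returns the existing instance when the same key is requested again. A bare-bones sketch of just that piece (in Java rather than Python, deliberately ignoring the single-parent wrinkle; the names are illustrative):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        public final class Letter {
            private static final Map<String, Letter> registry = new ConcurrentHashMap<>();

            private final String name;
            private String contents;    // mutable: edits are visible to every holder

            private Letter(String name) {
                this.name = name;
                // ... the expensive IO / database load happens here, exactly once ...
            }

            // Factory: asking for "a" from two different parents yields the same object.
            public static Letter of(String name) {
                return registry.computeIfAbsent(name, Letter::new);
            }

            public String getName() { return name; }
            public String getContents() { return contents; }
            public void setContents(String contents) { this.contents = contents; }
        }

    Because Letter.of("a") always returns the same instance, a change made through one parent's child is seen through every other parent, which matches the "all other letters will respect the same change" requirement.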

  • In MS SQL Server, is there a way to "atomically" increment a column being used as a counter?

    - by Dan P
    Assuming a Read Committed Snapshot transaction isolation setting, is the following statement "atomic" in the sense that you won't ever "lose" a concurrent increment?

        update mytable set counter = counter + 1

    I would assume that in the general case, where this update statement is part of a larger transaction, it wouldn't be. For example, I think this scenario is possible:

        1. update the counter within transaction #1
        2. do some other stuff in transaction #1
        3. update the counter within transaction #2
        4. commit transaction #2
        5. commit transaction #1

    In this situation, wouldn't the counter end up only being incremented by 1? Does it make a difference if that is the only statement in a transaction? How does a site like Stack Overflow handle this for its question view counter? Or is the possibility of "losing" some increments just considered acceptable?
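
    For background on why the single-statement form is usually preferred, here is the same contrast in plain JDBC against a hypothetical mytable (this only illustrates the lost-update risk of read-modify-write; the question's point about longer transactions and snapshot isolation still governs when the increment becomes visible):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;

        public class CounterDemo {
            // Racy: another connection can increment between the SELECT and the UPDATE,
            // and that increment is then overwritten (a classic lost update).
            static void incrementReadModifyWrite(Connection conn, int id) throws SQLException {
                int current;
                try (PreparedStatement ps = conn.prepareStatement(
                        "SELECT counter FROM mytable WHERE id = ?")) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        rs.next();
                        current = rs.getInt(1);
                    }
                }
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE mytable SET counter = ? WHERE id = ?")) {
                    ps.setInt(1, current + 1);
                    ps.setInt(2, id);
                    ps.executeUpdate();
                }
            }

            // Single statement: the database reads and writes the row under one lock,
            // so concurrent callers don't overwrite each other's increment.
            static void incrementInPlace(Connection conn, int id) throws SQLException {
                try (PreparedStatement ps = conn.prepareStatement(
                        "UPDATE mytable SET counter = counter + 1 WHERE id = ?")) {
                    ps.setInt(1, id);
                    ps.executeUpdate();
                }
            }
        }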

  • How to apply coding methodologies and practices to non-coding work?

    - by Dan
    I can talk for hours about best practice, source control, change management, feature tracking, development cycles and the lot, but most of what I've learnt or read seems to apply to the nuts-and-bolts programming of compiled applications. You know, ASCII files that get turned into 1s and 0s. How does one apply the same discipline and wisdom to working in environments that are point-and-click and config-centric? I'm thinking of CMSs and specifically, my current 9 to 5, SharePoint. Traditional practices of source control and dev-staging-production seem to break down since we're not working with code, and the live environment changes with user input. So to sum up a rather lengthy question: what works in a no-code environment?

  • Focus behavior in Applet-Javascript interaction

    - by Dan
    I have a web page with an applet that opens a popup window and also makes Javascript calls. When that Javascript call results in a focus() call on an HTML input, that causes the browser window to push itself in front of the applet window. But only on certain browsers, namely MSIE. On Firefox the applet window remains on top. How can I keep that behavior consistent in MSIE? Note that using the old Microsoft VM for Java also achieves the desired (applet window in front) result.

    HTML code:

        <html>
        <head>
            <script type="text/javascript">
                function focusMe() {
                    document.getElementById('mytext').focus();
                }
            </script>
        </head>
        <body>
            <applet id="myapplet" mayscript code="Popup.class"></applet>
            <form>
                <input type="text" id="mytext">
                <input type="button" onclick="document.getElementById('myapplet').showPopup()" value="click">
            </form>
        </body>
        </html>

    Java code:

        public class Popup extends Applet {
            Frame frame;

            public void start() {
                frame = new Frame("Test Frame");
                frame.setLayout(new BorderLayout());
                Button button = new Button("Push Me");
                frame.add("Center", button);
                button.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e) {
                        frame.setVisible(false);
                    }
                });
                frame.pack();
            }

            public void showPopup() {
                frame.setVisible(true);
                JSObject.getWindow(this).eval("focusMe()");
            }
        }

  • How do I pass a DBNull value to a parameterized SELECT statement?

    - by Dan
    I have a SQL statement in C# (.NET Framework 4 running against SQL Server 2k8) that looks like this:

        SELECT [Column1] FROM [Table1] WHERE [Column2] = @Column2

    The above query works fine with the following ADO.NET code:

        DbParameter parm = Factory.CreateDbParameter();
        parm.Value = "SomeValue";
        parm.ParameterName = "@Column2";
        //etc...

    This query returns zero rows, though, if I assign DBNull.Value to the DbParameter's Value member, even if there are null values in Column2. If I change the query to accommodate the null test specifically:

        SELECT [Column1] FROM [Table1] WHERE [Column2] IS @Column2

    I get an "Incorrect syntax near '@Column2'" exception at runtime. Is there no way that I can use null or DBNull as a parameter in the WHERE clause of a SELECT statement?
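
    The underlying issue is SQL's three-valued logic: [Column2] = NULL is never true, and IS is not something a parameter can be substituted into, so the usual workaround is to spell out both cases in the predicate. The same pattern shown in JDBC form (a sketch only; the table is the hypothetical one from the question, and setNull stands in for assigning DBNull.Value):

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Types;

        public class NullableLookup {
            static ResultSet findByColumn2(Connection conn, String value) throws SQLException {
                // One statement handles both cases: the first branch matches concrete
                // values, the second matches rows where Column2 IS NULL.
                PreparedStatement ps = conn.prepareStatement(
                        "SELECT Column1 FROM Table1 " +
                        "WHERE (Column2 = ? OR (? IS NULL AND Column2 IS NULL))");
                if (value == null) {
                    ps.setNull(1, Types.VARCHAR);
                    ps.setNull(2, Types.VARCHAR);
                } else {
                    ps.setString(1, value);
                    ps.setString(2, value);
                }
                return ps.executeQuery();    // caller is responsible for closing
            }
        }

    The equivalent T-SQL shape is WHERE ([Column2] = @Column2 OR (@Column2 IS NULL AND [Column2] IS NULL)).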

  • Checking for empty arrays: count vs empty

    - by Dan McG
    This question on 'How to tell if a PHP array is empty' had me thinking of this question: is there a reason that count should be used instead of empty when determining if an array is empty or not? My personal thought would be that if the two are equivalent for the case of empty arrays, you should use empty, because it gives a boolean answer to a boolean question. From the question linked above, it seems that count($var) == 0 is the popular method. To me, while technically correct, it makes no sense. E.g. Q: $var, are you empty? A: 7. Hmmm... Is there a reason I should use count == 0 instead, or is it just a matter of personal taste?

    As pointed out by others in comments for a now-deleted answer, count will have performance impacts for large arrays because it will have to count all elements, whereas empty can stop as soon as it knows it isn't empty. So, if they give the same results in this case, but count is potentially inefficient, why would we ever use count($var) == 0?

  • Can a database function be called in the predicate of a llblgen query?

    - by Dan Appleyard
    I want to use a table-valued database function in the where clause of a query I am building using LLBLGen Pro 2.6 (self-servicing):

        SELECT * FROM [dbo].[Users]
        WHERE [dbo].[Users].[UserID] IN
        (
            SELECT UserID FROM [dbo].[GetScopedUsers] (@ScopedUserID)
        )

    I am looking into the FieldCompareSetPredicate class, but can't for the life of me figure out what the exact signature would be. Any help would be greatly appreciated.

  • .NET JAXB equivalent?

    - by Dan
    Is there an equivalent library for JAXB in .NET? I am trying to convert XML I receive into a .NET class. I have the XSD, but I'm not sure how to convert the received XML into a concrete class. I used the XSD tool to generate a class from the schema, but what I want is to convert the XML I receive on the fly to an object that I can work with in code. I've seen the thread here that deals with this, but my query is: I want the object created to contain the data that I receive in the XML (i.e. the field values must be populated).
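
    For reference, the JAXB behavior the question wants to reproduce - turning received XML into a fully populated object - looks roughly like this on the Java side (Order is a hypothetical class standing in for whatever the schema generates):

        import java.io.StringReader;

        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.Unmarshaller;
        import javax.xml.bind.annotation.XmlRootElement;

        public class JaxbExample {

            // Hypothetical class of the kind generated from an XSD (e.g. by the xjc tool).
            @XmlRootElement(name = "order")
            public static class Order {
                public String id;
                public double amount;
            }

            public static Order parse(String xml) throws Exception {
                // unmarshal() returns an instance with its fields populated from the XML.
                JAXBContext context = JAXBContext.newInstance(Order.class);
                Unmarshaller unmarshaller = context.createUnmarshaller();
                return (Order) unmarshaller.unmarshal(new StringReader(xml));
            }
        }

    On the .NET side, the closest built-in counterpart to this step is deserializing with System.Xml.Serialization.XmlSerializer into the class generated by the XSD tool, which likewise leaves the field values populated from the received XML.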

  • Non Document Centric SharePoint Workflow

    - by Dan Revell
    SharePoint workflows are document-centric in that the base thing the workflow runs on has to be a thing, be it a document or just a list item. The workflow itself is task-based: stuff a user has to do. Now I can put any sort of code in these tasks that I want to, and even put complex InfoPath forms in for the user to perform the task. This has been fine on all my previous workflows. But what if I want the tasks to be actual official forms themselves? The item that the workflow runs on is just some abstract concept, like an event. An example could be an accident that has happened. There isn't an accident form, but a whole set of forms that need to be completed by different people.

    Task forms aren't really a nice way to go, because it locks all the forms into the task list. You can only access the forms by not deleting the tasks when complete and going to the workflow summary and following the task links to the InfoPath forms, or going straight to the tasks list and doing a filter on particular "accidents". These are official documents, so ideally there would be a library for each type of document and the workflow would orchestrate the completion of the right forms. It would mean each task would have to create a new blank form and then link the user to that form. The user would go complete the form but then have to go back to the task form and click "yes, I've completed it" before the workflow could progress. Well, that is short of the workflow monitoring the forms library form for some completion trigger. But then it all gets messy with the user experience, from clicking the link in the task email, to opening the InfoPath task form, to clicking the link in the subsequent InfoPath library form, and then returning through these forms on completion. It just gets messy trying to retrofit this non-document-centric sort of workflow into SharePoint. I would really appreciate any input on what might be the best way to do this:

    - Store the forms as task forms.
    - Store the forms as library forms and create/link them from the task forms.
    - Store the forms as different InfoPath views, and use a forms library. The workflow would trigger variables that progress the view the InfoPath form shows.
    - Use the same form template for both task forms and a forms library, and when a task form is complete, copy the XML into the forms library to have an official record outside of the workflow.

    Thanks

  • Is this a correct Interlocked synchronization design?

    - by Dan Bryant
    I have a system that takes Samples. I have multiple client threads in the application that are interested in these Samples, but the actual process of taking a Sample can only occur in one context. It's fast enough that it's okay for it to block the calling process until Sampling is done, but slow enough that I don't want multiple threads piling up requests. I came up with this design (stripped down to minimal details):

        public class Sample
        {
            private static Sample _lastSample;
            private static int _isSampling;

            public static Sample TakeSample(AutomationManager automation)
            {
                //Only start sampling if not already sampling in some other context
                if (Interlocked.CompareExchange(ref _isSampling, 0, 1) == 0)
                {
                    try
                    {
                        Sample sample = new Sample();
                        sample.PerformSampling(automation);
                        _lastSample = sample;
                    }
                    finally
                    {
                        //We're done sampling
                        _isSampling = 0;
                    }
                }
                return _lastSample;
            }

            private void PerformSampling(AutomationManager automation)
            {
                //Lots of stuff going on that shouldn't be run in more than one context at the same time
            }
        }

    Is this safe for use in the scenario I described?
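
    The intended pattern - atomically claim a flag, take the sample, publish it, release the flag in a finally block - reads like this with Java's atomics. This is a sketch of the same design rather than a review of the posted code, and Sample/AutomationManager here are stand-ins:

        import java.util.concurrent.atomic.AtomicBoolean;
        import java.util.concurrent.atomic.AtomicReference;

        public class Sampler {
            private final AtomicBoolean sampling = new AtomicBoolean(false);
            private final AtomicReference<Sample> lastSample = new AtomicReference<>();

            public Sample takeSample(AutomationManager automation) {
                // compareAndSet(false, true) succeeds for exactly one caller at a time.
                if (sampling.compareAndSet(false, true)) {
                    try {
                        Sample sample = new Sample();
                        sample.performSampling(automation);
                        lastSample.set(sample);      // publish the new sample safely
                    } finally {
                        sampling.set(false);         // always release the flag
                    }
                }
                return lastSample.get();             // non-sampling callers see the latest sample
            }

            // Stand-in types so the sketch compiles on its own.
            public static class Sample {
                void performSampling(AutomationManager automation) { /* slow work */ }
            }
            public static class AutomationManager { }
        }

    Note the argument order: compareAndSet(expectedValue, newValue) claims the flag only when it is currently false, which is the part that prevents a second thread from starting a concurrent sample.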

  • How to return all aspnet_compiler errors (not just those in first directory)

    - by Dan Atkinson
    Hi there! Is there a way to get the aspnet_compiler to go through all views and return all errors, rather than just the errors in the current view directory? For example, let's say I have a project that has a bunch of folders:

        Views
            Folder1
            Folder2
            Folder3
            Folder4

    Two of them (Folder2 and Folder3) have errors. aspnet_compiler will run, and only return the errors it comes across in Folder2. It won't return those in Folder3 at the same time. Once I fix the errors in Folder2 and run it again, it'll then pick up the ones in Folder3. I fix those, and then have to run the tool again, and again, until it's all fixed. This is getting annoying!!

    For reference, here's the command I use:

        C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_compiler -v / -p "C:\path\to\project"

    Thanks in advance!
