Search Results

Search found 23474 results on 939 pages for 'event dispatch thread'.


  • "FOR UPDATE" v/s "LOCK IN SHARE MODE" : Allow concurrent threads to read updated "state" value of locked row

    - by shadesco
    I have the following scenario. User X logs in to the application from location lc1 (call this session Ulc1). At the same time, user X logs in from location lc2 (call this session Ulc2): his account has been hacked, a friend knows his login credentials, or he simply logs in from a different browser on the same machine. You get the point.

    I am using a main servlet which gets a connection from the database pool, sets autocommit to false, and executes a command that goes through the application layers. If everything succeeds, it sets autocommit back to true in a "finally" block and closes the connection; if an exception occurs, it rolls back.

    In my database (MySQL/InnoDB) I have a "history" table with the columns: id (primary key) | username | date | topic | locked. The "locked" column defaults to false and serves as a flag marking whether a specific row is locked. Each row is specific to a user (see the username column).

    Back to the scenario: Ulc1 sends the command to update his history for date "D" and topic "T", and Ulc2 sends the same command for the same date "D" and topic "T" at exactly the same time.

    I want to implement a MySQL/InnoDB locking scheme that lets whichever thread arrives perform the following check: is the "locked" column for this row true? If it is, return a message to the user that he is already updating the same data from another location. If it is not, flag the row as locked, perform the update, then reset "locked" to false when finished.

    Which of these two MySQL locking techniques will actually allow the second-arriving thread to read the updated value of the "locked" column and decide what action to take: "FOR UPDATE" or "LOCK IN SHARE MODE"? This is what I want to happen:

    - Ulc1's thread arrives first: the "locked" column is false, so it sets it to true and continues with the update process.
    - Ulc2's thread arrives while Ulc1's transaction is still in progress. Even though the row is locked through InnoDB, it should not have to wait: it should read the "new" value of the "locked" column (true) instead of waiting for Ulc1's transaction to commit (by which time the column will already have been reset to false).

    I am not very experienced with these two locking mechanisms. What I understand so far is that LOCK IN SHARE MODE allows other transactions to read the locked row, while FOR UPDATE does not allow reading at all. But does that read see the updated value, or does the second-arriving thread have to wait for the first to commit before it can read it?

    Any recommendation about which locking mechanism to use for this scenario is appreciated. Also, if there is a better way to check whether a row is locked (other than a true/false column flag), please let me know.
    Thank you. SOLUTION (JDBC example based on @Darhazer's answer). Table: [ id (primary key) | username | date | topic | locked ]

        connection.setAutoCommit(false);
        // Transaction 1: lock the row and check that it is not already flagged
        PreparedStatement ps1 = connection.prepareStatement(
            "SELECT locked FROM tableName WHERE id = ? AND locked = false FOR UPDATE");
        ps1.setLong(1, key);
        ps1.executeQuery();

        // Transaction 2: flag the row as locked
        PreparedStatement ps2 = connection.prepareStatement(
            "UPDATE tableName SET locked = true WHERE id = ?");
        ps2.setLong(1, key);
        ps2.executeUpdate();
        // Commit here so other transactions/threads can see the new value
        connection.setAutoCommit(true);

        connection.setAutoCommit(false);
        // Transaction 3: perform the actual update
        PreparedStatement ps3 = connection.prepareStatement(
            "UPDATE tableName SET aField = 'Sthg' WHERE id = ? AND date = ? AND topic = ?");
        ps3.setLong(1, key);
        ps3.setString(2, "D");
        ps3.setString(3, "T");
        ps3.executeUpdate();

        // Reset locked to false
        PreparedStatement ps4 = connection.prepareStatement(
            "UPDATE tableName SET locked = false WHERE id = ?");
        ps4.setLong(1, key);
        ps4.executeUpdate();
        // Commit
        connection.setAutoCommit(true);

    Read the article

  • SQLAuthority News – Professional Development and Community

    - by pinaldave
    I was recently invited by Hyderabad Techies to deliver a keynote for their 16-day online session called TECH THUNDERS. The event has been running since May 15 and will continue up to the end of the month (May 30), with a total of 30 sessions. On each of those 16 evenings there are one or two sessions from several noted industry experts. This is the same group that received the Microsoft Community Impact Award as the best developer user group in India.

    I had never talked about professional development before. Even though this was my first time to do so, I accepted the wonderful challenge for the sake of the thousands of attendees who were expected to join this online event. Time was of the essence; I had 15 minutes to deliver the keynote and open the event. The reason I was nervous was that I had to cover precisely 15 minutes, no more, no less. If I had an hour, I would have been very confident, because I knew I could do a good job for sure. However, I still needed to open the event as well as I could, even if the time was short.

    I finally created a small six-slide presentation. In reality, only two slides carried the main content of my keynote; the remaining slides were just wrappers and decoration. You can download the complete slide deck from here. The image used in the slide deck is courtesy of blog reader Roger Smith, who sent it to me. The slide on which I spent a good amount of time is the one that talks about professional development. Its content is as follows:

    Today, Technology and You
    - Keep your eyes, ears and senses open – Stay Active!
    - You are not the first one who faced the problem – Search Online!
    - Learn the web – Blogs, Forums and Friends!
    - Trust the Technology, Not Print – Test Everything!
    - Community and You!

    I had very little time to create the slide deck, as I was busy the whole day running an Advanced SQL Server training; I put these slides together during the tea/coffee breaks of my session. Though it was just six bullet points, I received quite a few emails right after the keynote asking me to talk more about this subject and share the details of my slide deck. I have talked with the event organizer, and he will put the keynote online very soon.

    The subject of the talk is very simple: it revolves around the community. Times have changed, and the Internet has come a long way from where it was many years ago. Now that we are all connected, help is easily available around us via the Internet and useful software. In fact, RSS, newsletters and a few other technologies have progressed so much that help and news are now delivered to our doorsteps instead of us having to go out and seek them. Sometimes a simple online search solves a lot of problems for many developers. The community is now the first stop for any developer who needs help or just wants to hang around and share some thoughts.

    I strongly suggest that everybody be a part of the tech community. Be it an online community, an offline community or just a local user group, I strongly advise all of you to get involved. I am active in the community, and I recommend getting drawn into it.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL User Group, SQLAuthority News, T SQL, Technology Tagged: Community

    Read the article

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a "program execution recorder" that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefits it brings to both developers and testers alike, so I see no value in simply repeating that information. In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application. I will also provide some helpful links that cover IntelliTrace in more detail at the end of this article for reference.

    IntelliTrace is set up by default to capture only execution events. Microsoft did such a great job optimizing its recording process that I have not felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects. For my purposes here, however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events, as shown in Figures 1 and 2. Changing capture options requires us to stop our debugging session and start over for the new settings to take effect.

    Figure 1 - Access IntelliTrace options via the Tools->Options menu items
    Figure 2 - Change IntelliTrace options to capture call information as well as events

    Notice the warning about potentially degrading performance when selecting to capture call information in addition to the default events-only setting. I have found this warning to be true: my subsequent tests showed slower page load times compared to rendering those same exact pages with the events-only option selected.

    Execution recording is auto-started along with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to "play back" the code we have executed so far. For code replay, the first step is to "break" the current execution, as shown in Figure 3.

    Figure 3 - Break to replay recording

    A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First off, we start with the events view, as shown in Figure 4, until we find an interesting event that needs further study.

    Figure 4 - Going through IntelliTrace's events and picking a specific entry of interest

    We can now, for instance, study how the highlighted HTTP GET request is being handled by clicking on the "Calls View" for that particular event. Notice that IntelliTrace shows us all the calls that took place in servicing that GET request. Double clicking on any call takes us to a more granular view of the call stack within that call, up until getting to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11, just like good old VS2008 debugging helped us accomplish.

    Figure 5 - Switching to call view on an event of interest
    Figure 6 - Double clicking on a call shows a more granular view of the call stack

    In conclusion, the introduction of IntelliTrace as a new addition to the VS developers' tool arsenal enhances the development and debugging experience and effectively tackles the "no-repro" problem. It will also hopefully enhance my audience's experience listening to me speak about the MVC2 page lifecycle, which I can now easily demonstrate visually, thereby improving the probability of keeping everybody awake a little longer.

    IntelliTrace references: http://msdn.microsoft.com/en-us/magazine/ee336126.aspx http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

    Read the article

  • Ubuntu 10.04 - unable to install Arduino

    - by Newbie
    Hello! At the moment, I am trying to install Arduino on my Ubuntu 10.04 (32-bit) computer. I downloaded the latest release from http://arduino.cc/en/Main/Software, cd'ed to the directory and unzipped the package. When I try to run ./arduino, I get the following error:

        Exception in thread "main" java.lang.ExceptionInInitializerError
            at processing.app.Base.main(Base.java:112)
        Caused by: java.awt.HeadlessException
            at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231)
            at processing.core.PApplet.<clinit>(Unknown Source)
            ... 1 more

    Here is my java -version output:

        java version "1.6.0_20"
        OpenJDK Runtime Environment (IcedTea6 1.9.5) (6b20-1.9.5-0ubuntu1~10.04.1)
        OpenJDK Server VM (build 19.0-b09, mixed mode)

    Any suggestions on this? The above is an attempt to install Arduino without the 'arduino' package. I also tried to install it with apt-get (sudo apt-get install arduino). When I try to start it (using the arduino command), I get the following error:

        Exception in thread "main" java.lang.ExceptionInInitializerError
            at processing.app.Preferences.load(Preferences.java:553)
            at processing.app.Preferences.load(Preferences.java:549)
            at processing.app.Preferences.init(Preferences.java:142)
            at processing.app.Base.main(Base.java:188)
        Caused by: java.awt.HeadlessException
            at sun.awt.HeadlessToolkit.getMenuShortcutKeyMask(HeadlessToolkit.java:231)
            at processing.core.PApplet.<clinit>(PApplet.java:224)
            ... 4 more

    Update: I saw that I had several versions of the JRE installed (Sun and OpenJDK), so I uninstalled the OpenJDK one. Now, when calling arduino, I get a new error:

        java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path thrown while loading gnu.io.RXTXCommDriver
        Exception in thread "main" java.lang.UnsatisfiedLinkError: no rxtxSerial in java.library.path
            at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1734)
            at java.lang.Runtime.loadLibrary0(Runtime.java:823)
            at java.lang.System.loadLibrary(System.java:1028)
            at gnu.io.CommPortIdentifier.<clinit>(CommPortIdentifier.java:123)
            at processing.app.Editor.populateSerialMenu(Editor.java:965)
            at processing.app.Editor.buildToolsMenu(Editor.java:717)
            at processing.app.Editor.buildMenuBar(Editor.java:502)
            at processing.app.Editor.<init>(Editor.java:194)
            at processing.app.Base.handleOpen(Base.java:698)
            at processing.app.Base.handleOpen(Base.java:663)
            at processing.app.Base.handleNew(Base.java:578)
            at processing.app.Base.<init>(Base.java:318)
            at processing.app.Base.main(Base.java:207)

    Read the article

  • Fast Data: Go Big. Go Fast.

    - by Dain C. Hansen
    For those of you who may have missed it, today's second full day of Oracle OpenWorld 2012 started with a rumpus. Joe Tucci from EMC outlined the human face of big data with real examples of how big data is transforming our world. And no, not the usual tried-and-true weblog examples, but real stories about taxi cab drivers in Singapore using big data to better optimize their routes, as well as folks just trying to get a better haircut.

    Next we heard from Thomas Kurian, who talked at length about the important platform characteristics of Oracle's Cloud and, more specifically, Oracle's expanded Cloud Services portfolio. Especially interesting to our integration customers is the messaging support for Oracle's Cloud applications. What this means is that Oracle's Cloud applications now have a lightweight integration fabric that on-premise applications can communicate with via REST APIs using Oracle SOA Suite. It's an important element of our strategy at Oracle: whether your requirements are private or public, Oracle has a solution in the Cloud for all of your applications, and we give you more deployment choice than any vendor.

    If this wasn't enough to get the juices flowing, later that morning we heard from Hasan Rizvi, who outlined in his Fusion Middleware session the four most important enterprise imperatives: Social, Mobile, Cloud, and a brand new one: Fast Data. Today, Rizvi made an important step in the definition of this term, explaining that he believes it is a convergence of four essential technology elements:

    - Event processing for event filtering and business rules – with Oracle Event Processing
    - Data transformation and loading – with Oracle Data Integrator
    - Real-time replication and integration – with Oracle GoldenGate
    - Analytics and data discovery – with Oracle Business Intelligence

    Each of these four elements can be considered (and architected) together on a single integrated platform that can help customers integrate any type of data (structured, semi-structured), leveraging new styles of big data technologies (MapReduce, HDFS, Hive, NoSQL) to process more volume and variety of data at a faster velocity with greater results.

    Fast data processing (and especially real-time processing) has always been our credo at Oracle with each one of these products in Fusion Middleware. For example, Oracle GoldenGate continues to be made even faster with the recent 11g R2 release of Oracle GoldenGate, which gives us even greater optimization for Oracle Database with Integrated Capture, as well as some new heterogeneity capabilities. With Oracle Data Integrator with Big Data Connectors, we're seeing much improved performance by running MapReduce transformations natively on Hadoop systems. And with Oracle Event Processing we're seeing some remarkable performance with customers like NTT Docomo. Check out their upcoming session at Oracle OpenWorld on Wednesday to hear more about how this customer is using event processing and big data together.

    If you missed any of these sessions and keynotes, not to worry: there are on-demand versions available on the Oracle OpenWorld website. You can also check out our upcoming webcast, where we will outline some of these new breakthroughs in data integration technologies for Big Data, Cloud, and real-time in more detail.

    Read the article

  • Glenn Fiedler's fixed timestep with fake threads

    - by kaoD
    I've implemented Glenn Fiedler's Fix Your Timestep! quite a few times in single-threaded games. Now I'm facing a different situation: I'm trying to do this in JavaScript. I know JS is single-threaded, but I plan on using requestAnimationFrame for the rendering part. This leaves me with two independent fake threads: simulation and rendering (I suppose requestAnimationFrame isn't really threaded, is it? I don't think so; it would BREAK JS.) Timing in these threads is independent too: the dt for simulation and the dt for rendering are not the same.

    If I'm not mistaken, the simulation should correspond to everything up to the end of Fiedler's while loop. After the while loop, accumulator < dt, so I'm left with some unspent time (dt) in the simulation thread. The problem comes in the draw/interpolation phase:

        const double alpha = accumulator / dt;
        State state = currentState*alpha + previousState * ( 1.0 - alpha );
        render( state );

    In my render callback, I have the current timestamp, from which I can subtract the last-simulated-in-physics timestamp to get a dt for the current frame. Should I just forget about this dt and draw using the physics thread's dt? It seems weird, since, well, I want to interpolate for the unspent time between simulation and render too, right?

    Of course, I want simulation and rendering to be completely independent, but I can't get around the fact that in Glenn's implementation the renderer produces time and the simulation consumes it in discrete dt-sized chunks. A similar question was asked in Semi Fixed-timestep ported to javascript, but that question doesn't really get to the point, and the answers there point to removing physics from the render thread (which is what I'm trying to do) or just keeping physics in the render callback too (which is what I'm trying to avoid.)
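    For reference, a minimal sketch of Fiedler's accumulator loop is shown below. It is written in Java purely as a language-neutral illustration (the State class and the integrate/interpolate/render methods are placeholders, not part of the question); the structure is the same whether the outer loop is a requestAnimationFrame callback or a plain game loop.

        // Minimal fixed-timestep sketch: the simulation consumes time in fixed
        // dt-sized chunks, and the renderer interpolates with the leftover time.
        public class FixedTimestepSketch {

            static class State { double position, velocity; }

            static State integrate(State s, double dt) {
                State next = new State();
                next.velocity = s.velocity;                      // no forces in this toy example
                next.position = s.position + s.velocity * dt;
                return next;
            }

            static State interpolate(State prev, State curr, double alpha) {
                State s = new State();
                s.position = curr.position * alpha + prev.position * (1.0 - alpha);
                s.velocity = curr.velocity * alpha + prev.velocity * (1.0 - alpha);
                return s;
            }

            static void render(State s) { System.out.printf("x = %.4f%n", s.position); }

            public static void main(String[] args) {
                final double dt = 1.0 / 60.0;                    // fixed simulation step (seconds)
                double accumulator = 0.0;
                double previousTime = System.nanoTime() / 1e9;

                State previous = new State();
                State current = new State();
                current.velocity = 1.0;

                for (int frame = 0; frame < 300; frame++) {      // stand-in for the render callback
                    double newTime = System.nanoTime() / 1e9;
                    double frameTime = Math.min(newTime - previousTime, 0.25); // clamp long frames
                    previousTime = newTime;
                    accumulator += frameTime;

                    // Simulation: consume the accumulated time in fixed steps.
                    while (accumulator >= dt) {
                        previous = current;
                        current = integrate(current, dt);
                        accumulator -= dt;
                    }

                    // Rendering: interpolate between the last two simulated states using
                    // the unspent remainder, not the render callback's own dt.
                    double alpha = accumulator / dt;
                    render(interpolate(previous, current, alpha));

                    try { Thread.sleep(16); } catch (InterruptedException e) { break; } // ~60 fps frame
                }
            }
        }

    The key point is in the last two statements of the loop: alpha comes from the simulation's unspent accumulator, not from the renderer's own frame delta.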

    Read the article

  • Save the Date - Oracle Partner Community Forum: Exadata, Exalogic and Manageability, Vienna, 23-24 April 2013

    - by Javier Puerta
    Hardware and Software Engineered to Work Together

    SAVE THE DATE
    ORACLE PARTNER COMMUNITY FORUM: EXADATA, EXALOGIC AND MANAGEABILITY
    23-24 APRIL 2013, VIENNA, AUSTRIA

    The 2013 event expands its scope to cover all the building blocks of the Cloud infrastructure: Exadata, Exalogic and Manageability!

    Dear partner,

    I am delighted to announce the 2013 edition of the Exadata, Exalogic and Manageability Partner Community Forum for EMEA partners, which will take place in Vienna, Austria, on April 23-24, 2013. After the experience of last year, where we ran a joint Exadata and Manageability event, we received requests from many of you to also add Exalogic to the scope of the forum and, in this way, cover the complete infrastructure architecture on the Exa platform.

    The continued market adoption of Exadata and Exalogic is being paralleled by growth in the rate of projects sold and implemented by partners. Sharing customer cases and best practices presented by other partners constitutes the core of this event. If you want to present an experience of your company around Exadata, Exalogic or Manageability that can be a learning experience for other partners, we still have some slots in the agenda. (Please contact Javier Puerta if you want to present.) By attending the Community Forum you will also have the opportunity to get Oracle's insight on new products and market trends and, of course, to interact with the Oracle executives responsible for the Exadata, Exalogic and Manageability business.

    The atmosphere of beautiful Vienna will be the backdrop for the event. Detailed venue and hotel booking information will be sent to you in January. Don't miss out on attending this key event! Save the date now - 23 & 24 April 2013 - and watch out for your formal invitation coming soon.
    Kind regards,
    Javier Puerta, Core Technology Partner Programs, Oracle EMEA (E-Mail: [email protected])
    Jürgen Kress, SOA Partner Adoption, Oracle EMEA (E-Mail: [email protected])
    Copyright © 2012, Oracle and/or its affiliates. All rights reserved.

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6 we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are:

    1) Support for multiple table mappings
    2) A background thread to auto-commit long-running transactions
    3) Enhancements in binlog performance

    Let's go over each of these features one by one. In the last section, we will go over a couple of internally performed performance tests.

    Support for multiple table mappings

    In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Being able to access different tables at run time, and having different mappings for different connections, is therefore a very desirable feature, and in this GA release both are possible. We will discuss the key concepts and key steps in using this feature.

    1) The "mapping name" in the "get" and "set" commands

    In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached "containers" table (there is a security consideration behind this requirement). If you would like to know more about the "containers" table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such a table when it is referred to. Each registered table has a unique "registration name" (or mapping_name) corresponding to the "name" field in the "containers" table. To access these tables, the user includes the "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped-table change. The key and the "mapping name" are separated by a configurable delimiter; by default it is ".". So the syntax is:

        get [@@mapping_name.]key_name
        set [@@mapping_name.]key_name

    or

        get @@mapping_name
        set @@mapping_name

    Here is an example. Let's set up three tables in the "containers" table. The first is a map to the InnoDB table "test/demo_test" with mapping name "setup_1":

        INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");

    Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3":

        INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x");
        INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx");

    To switch to table "my_database/my_demo" and get the value corresponding to "key_a", the user would do:

        get @@setup_3.key_a

    (this will also output the value corresponding to the key "key_a") or simply:

        get @@setup_3

    Once this is done, the connection switches to the "my_database/my_demo" table until another table-mapping switch is requested, so it can continue to issue regular commands like:

        get key_b
        set key_c 0 0 7

    These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables).

    2) Delimiter

    For the delimiter "." that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter":

        INSERT INTO config_options VALUES("table_map_delimiter", ".");

    So if users want a different delimiter, they can change it in the config_options table.

    3) Default mapping

    Once we have multiple table mappings, there should always be a "default" mapping. We decided that if there is a mapping with the name "default", it will be chosen as the default mapping; otherwise, the first row of the containers table is chosen as the default. Please note that user tables can be repeated in the "containers" table (for example, if a user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index.

    4) The bind command

    In addition, we extended the protocol and added a bind command. Its usage is fairly straightforward: to switch to the "setup_3" mapping above, you simply issue:

        bind setup_3

    This will switch this connection's InnoDB table to "my_database/my_demo".

    In summary, with this feature you can now access different tables from different sessions, and even within a single connection you can query different tables.

    Background thread to auto-commit long-running transactions

    This feature is related to the "batch" concept we discussed in earlier blogs. The "batch" feature allows us to batch read and write operations and commit them only after a certain number of calls. The batch size is controlled by the configuration parameters "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size". This can significantly boost performance. However, it also comes with some disadvantages: for example, you will not be able to view "uncommitted" operations from the SQL end unless you set the transaction isolation level to READ UNCOMMITTED, and in addition it holds certain row locks for extended periods of time, which can reduce concurrency.

    To deal with this, we introduced a background thread that auto-commits transactions if they have been idle for a certain amount of time (the default is 5 seconds). The background thread wakes up every second, loops through every "connection" opened by Memcached, and checks for idle transactions. If such a transaction has been idle longer than a certain limit and is not being used, it commits the transaction. This limit is configurable by changing "innodb_api_bk_commit_interval"; its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds.

    With the help of this background thread, you will not need to worry about long-running uncommitted transactions when setting daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long-running transactions, and thus further increases concurrency.

    Enhancements in binlog performance

    As you may know, binlog operations are not done by the InnoDB storage engine; rather, they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we have to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. This required us to always set the "batch" size to 1 when the binlog was on, no matter what "daemon_memcached_w_batch_size" and "daemon_memcached_r_batch_size" were configured to. This put a big restriction on our ability to scale, and there is also quite a bit of overhead in creating and destroying such constructs, which bogs performance down.

    With this release, we made the necessary changes to keep the MySQL constructs alive as long as they are valid for a particular connection, so there are no longer repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale write/read performance. Although there is still overhead that keeps InnoDB Memcached from performing as fast as when the binlog is turned off, it is much better than the previous release, and we are continuing to optimize this area to improve performance as much as possible.

    Performance study

    Amerandra of our System QA team conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and the results are interesting. The tests were conducted on a "Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)" machine with 16 GB memory, an Intel Xeon 2.0 GHz x86_64 CPU (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). Results are described in the following tables:

    Table 1: Performance comparison on "set" operations

        Connections | Memcached plugin set (QPS, memcached-threads=8***) | 5.6.7-RC* set** (QPS) | X faster
        8           | 30,000                                             | 5,600                 | 5.36
        32          | 59,000                                             | 13,000                | 4.54
        128         | 68,000                                             | 8,000                 | 8.50
        512         | 63,000                                             | 6,800                 | 9.23

    * mysql-5.6.7-rc-linux2.6-x86_64
    ** The "set" operation, when implemented through InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the "key" exists; if it does not, the new key/value pair is inserted, and if it does, the "value" field of the matching row (by key) is updated. So when used in the above query, it is a precompiled stored procedure, and the query just executes such procedures.
    *** Added "--daemon_memcached_option=-t8" (the default is 4 threads)

    So we can see that with this "set" query, InnoDB Memcached runs 4.5 to 9 times faster than the MySQL server.

    Table 2: Performance comparison on "get" operations

        Connections | Memcached plugin get (QPS, memcached-threads=8) | 5.6.7-RC* get (QPS) | X faster
        8           | 42,000                                          | 27,000              | 1.56
        32          | 101,000                                         | 55,000              | 1.83
        128         | 117,000                                         | 52,000              | 2.25
        512         | 109,000                                         | 52,000              | 2.10

    With the "get" query (i.e., the select query), Memcached performs 1.5 to 2 times faster than normal SQL.

    Summary

    In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We also now provide a background commit thread to commit long-running idle transactions, which lets users configure large batch writes/reads without worrying about a large number of rows being held or not being able to see (uncommitted) data. We also greatly enhanced performance when the binlog is enabled. We will continue making efforts in both performance enhancement and functionality to make InnoDB Memcached a good demo case for our InnoDB APIs.

    Jimmy Yang, September 29, 2012
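    As a client-side companion to the multiple table mapping feature described in this post, here is a small, hedged sketch in Java. It assumes the spymemcached client library, the default memcached port, and the "setup_3" mapping registered earlier; any memcached client that speaks the standard protocol should behave the same way, and the key names are placeholders.

        import java.net.InetSocketAddress;
        import net.spy.memcached.MemcachedClient;

        public class InnoDbMemcachedMappingDemo {
            public static void main(String[] args) throws Exception {
                // The InnoDB Memcached plugin listens on the standard memcached port by default.
                MemcachedClient client =
                        new MemcachedClient(new InetSocketAddress("localhost", 11211));

                // Switch this connection to the my_database/my_demo mapping and read a key.
                Object valueA = client.get("@@setup_3.key_a");
                System.out.println("key_a = " + valueA);

                // Subsequent plain commands stay bound to the same mapping
                // until another @@mapping switch (or a bind command) is issued.
                client.set("key_c", 0, "7 bytes").get();   // block until the write completes
                System.out.println("key_b = " + client.get("key_b"));

                client.shutdown();
            }
        }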

    Read the article

  • Windows 8 Camp – Ways to Prepare

    - by Lori Lalonde
    When Windows 8 was announced at the BUILD conference back in September, it created quite a buzz among the developer community. By the spring of 2012, Windows 8 Developer Camps started popping up everywhere imaginable. I received a lot of questions from CTTDNUG members about whether or not we would be hosting one locally. If you recall my post about the Windows Phone/Azure Developer Workshop that CTTDNUG hosted back in March, you'll remember that the biggest hurdle to overcome when planning this type of event was finding the right venue. It took some time, but I finally found a venue that was available and provided the prerequisites needed to ensure this camp is a success.

    I am very excited that CTTDNUG will be hosting a Windows 8 Camp this summer in the Kitchener/Waterloo area. In fact, it's coming up in less than 2 weeks. Clearly other developers are excited as well, because our registration numbers show that the event is already 70% full! On top of that, I was fortunate enough to also book two well-known evangelists to present and teach at this full-day developer camp: Andrei Marukovich and Atley Hunter. This was the icing on the cake. With the content provided by Microsoft, and two local experts that live and breathe Windows 8 development, I know that I, along with the other developers attending this event, will have the opportunity to maximize our learning potential and hit the ground running.

    If you plan on attending a Windows 8 Developer Camp soon, and want to ensure you get the most "bang for your buck" (figuratively speaking, since these camps are free), there are some things you can do to prepare before the big day:

    1) Install the prerequisites on your own device before the big day. I can't stress this enough. Otherwise, you will be spending valuable time during the hands-on period downloading and installing what is needed, rather than digging into the development and using that time to ask the experts on hand about programming challenges, issues and questions you may have with respect to your development. Prerequisites: Windows 8 Release Preview, Visual Studio 2012 RC, and the Windows 8 SDK samples.

    2) Purchase, download, and read Charles Petzold's newest book, Programming Windows, 6th Edition. This is a great introduction to the type of content you will be learning about during the camp. Doing some light reading beforehand might raise some questions about the concepts discussed in the book, which will give you the opportunity to write them down and bring them with you to the camp. The experts on hand will be able to answer them for you.

    3) Make use of the freebies that are available. Telerik has recently released a preview of their RadControls for Metro; you can sign up to receive a license code that gives you access to install the preview for free and start playing around with it. Syncfusion also offers a free download of their Metro Studio package, which is a collection of Metro-style icons that you can customize and use in your own applications. Last but not least, once you've installed the Windows 8 Release Preview on your own device, go to the Windows 8 Store and download a handful of the free apps that are available. Testing out other Metro apps may give you ideas of what you can do in your own apps and lets you analyze which features you like: application flow, types of animations used, concepts that were leveraged, how live tiles were used, etc.

    I hope you found these tips to be useful as you embark on a new development journey! Although this post focused on how to prepare for a Windows 8 camp, the same ideas apply to whichever developer camp/workshop/event you attend. Learning does not begin and end on the day of the event. Attending a developer camp is just one step of many to master whatever technology you are interested in. It is a continuous process, which is fully maximized when you do your homework beforehand, actively participate during, and follow up by putting what you learned into practice afterwards. Happy coding!

    Read the article

  • Change power button to 'Ask' in Xubuntu 13.10

    - by Gully.Moy
    I have recently installed Xubuntu 13.10 on my Vaio VPCEA; I am a Linux beginner. The problem is that the laptop's power button is right on the edge of the bezel, making it far too easy to press accidentally (in my opinion a design fault by Sony). At present, when I press the power button the machine shuts down straight away, and as you can imagine, when I'm accidentally pressing it all the time it gets very annoying! So I planned to change it to ask what I would like to do when I press it, or at least ask if I'm sure.

    I went through the Xfce GUI options, "Settings Manager" -> "Power Manager", to the field "When power button is pressed", but it was already set to "Ask". So I did some digging and found a thread telling me to navigate to /etc/xdg/xfce4/xfconf/xfce-perchannel-xml/xfce4-power-manager.xml, where it said to find power-button-action and check that value="3". It already did.

    So I looked some more and found this thread, which focuses on acpi scripts. I tried solutions 1 and 2, using sudoedit to change the files accordingly (I have made executable bash shell scripts before, so I think I followed them correctly), but still no difference. I also found this thread, which instructed me to edit /etc/systemd/logind.conf so that HandlePowerKey=ignore. Still no luck. I even tried my own approach of completely disabling /etc/acpi/powerbtn.sh by renaming it powerbtn.sh.bak, hoping for at least no response from the power button... and I have done many reboots in between... but it still shuts down! I have also read that some people have the file /etc/acpi/events/power_button, but I do not.

    So does anyone have any other ideas? What else could be executing the shutdown sequence? Is there something I'm missing? I haven't undone any of these actions, so every one of the above files is currently edited on my computer, with the exception that "Solution 2" automatically undid "Solution 1" above. Thanks guys.

    Read the article

  • C# 5 Async, Part 2: Asynchrony Today

    - by Reed
    The .NET Framework has always supported asynchronous operations. However, different mechanisms for supporting them exist throughout the framework. While there are at least three separate asynchronous patterns used throughout the framework, only the latest is directly usable with the new Visual Studio Async CTP. Before delving into the details of the new features, I will talk about existing asynchronous code and demonstrate how to adapt it for use with the new pattern.

    The first asynchronous pattern used in the .NET Framework was the Asynchronous Programming Model (APM). This pattern is based around callbacks. A method is used to start the operation, typically named BeginSomeOperation. This method is passed a callback defined as an AsyncCallback, and returns an object that implements IAsyncResult. Later, the IAsyncResult is used in a call to a method named EndSomeOperation, which blocks until completion and returns the value normally returned directly from the synchronous version of the operation. Often, the EndSomeOperation call is made from the callback function passed in, which allows you to write code that never blocks.

    While this pattern works perfectly to prevent blocking, it can make for quite confusing code and be difficult to implement. For example, the sample code provided for FileStream's BeginRead/EndRead methods is not simple to understand. In addition, implementing your own asynchronous methods requires creating an entire class just to implement the IAsyncResult.

    Given the complexity of the APM, other options have been introduced in later versions of the framework. The next major pattern introduced was the Event-based Asynchronous Pattern (EAP). This provides a simpler pattern for asynchronous operations. It works by providing a method typically named SomeOperationAsync, which signals its completion via an event typically named SomeOperationCompleted.

    The EAP provides a simpler model for asynchronous programming. It is much easier to understand and use, and far simpler to implement. Instead of requiring a custom class and callbacks, the standard event mechanism in C# is used directly. For example, the WebClient class uses this extensively: a method is used, such as DownloadDataAsync, and the results are returned via the DownloadDataCompleted event.

    While the EAP is far simpler to understand and use than the APM, it is still not ideal. By separating your code into method calls and event handlers, the logic of your program gets more complex. It also typically loses the ability to block until the result is received, which is often useful. Blocking often requires writing the code to block by hand, which is error prone and adds complexity.

    As a result, .NET 4 introduced a third major pattern for asynchronous programming. The Task<T> class introduced a new, simpler concept for asynchrony. Task and Task<T> effectively represent an operation that will complete at some point in the future. This is a perfect model for thinking about asynchronous code, and is the preferred model for all new code going forward. Task and Task<T> provide all of the advantages of both the APM and the EAP models: you have the ability to block on results (via Task.Wait() or Task<T>.Result), and you can stay completely asynchronous via the use of task continuations. In addition, the Task class provides a new model for task composition and for error and cancellation handling. This is a far superior option to the previous asynchronous patterns.

    The Visual Studio Async CTP extends the Task-based asynchronous model, allowing it to be used in a much simpler manner. However, it requires the use of Task and Task<T> for all operations.

    Read the article

  • Pattern for Accessing MySQL connection

    - by Dipan Mehta
    We have a C++ application that accesses a MySQL database. There are several (about 5 or so) threads in the application (using the Boost library for threading), and each thread has a few objects, each of which is trying to access the database for its own purpose. The application has a simple ORM-like model, but that really is not an important factor here. There are three potential access patterns I can think of:

    1. There could be a single connection object per application or per thread, shared between all objects (or a group of them). The object needs to be thread safe and there will be contention, but MySQL will not be hit with too many connections.

    2. Every object could initiate its own connection. The database needs to take care of concurrency (which I think MySQL can), and the design could be much simpler. There are two possibilities here: a. each object keeps a persistent connection for its lifetime, or b. each object initiates a connection as and when needed.

    3. To reduce the contention of case 1 without creating as many sockets as in case 2, we can have group/set-based connections: more than one connection (say N), with each connection shared across M objects.

    Naturally, each pattern has a different resource cost and would work under different constraints and objectives. What criteria should I use to choose the pattern for my own application? What are some of the advantages and disadvantages of each of these patterns over the others? Is there any other pattern that is better?

    PS: I have been through these questions: "mysql, one connection vs multiple" and "MySQL with multiple threads and processes". But they don't quite answer exactly what I am trying to ask.
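    To make option 3 concrete, a minimal sketch of a fixed-size shared connection pool is shown below. The question is about a C++/Boost application, but the shape of the pattern is language-agnostic, so it is written here in Java for brevity; the pool size, JDBC URL and credentials are placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.SQLException;
        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // N connections shared by M objects: callers borrow a connection, use it,
        // and return it; they block when all N connections are checked out.
        public class SimpleConnectionPool {
            private final BlockingQueue<Connection> pool;

            public SimpleConnectionPool(String url, String user, String pass, int size)
                    throws SQLException, InterruptedException {
                pool = new ArrayBlockingQueue<>(size);
                for (int i = 0; i < size; i++) {
                    pool.put(DriverManager.getConnection(url, user, pass));
                }
            }

            public Connection acquire() throws InterruptedException {
                return pool.take();          // blocks if every connection is in use
            }

            public void release(Connection c) throws InterruptedException {
                pool.put(c);
            }
        }

    The same idea in C++ would typically wrap connection handles in a queue guarded by a mutex and condition variable; either way, the tuning knob is N, which trades the contention of option 1 against the socket count of option 2.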

    Read the article

  • Does concurrency inherently introduce "randomness" into a game?

    - by Jeff
    When a game is implemented with concurrency (as most games are), does this necessarily, by its very nature, introduce an element of randomness into the game that is outside of the players' control?

    Note that when I use the word "random", I'm not meaning to launch into a philosophical debate about the deterministic nature of the system. I understand that concurrency is deterministic in the sense that the operating system decides which processes to allow time on the CPU and in what order (or the JVM controls which Thread's turn it is to execute, etc.). But my understanding is that there is no way to control or predict whether one thread's next command will execute before or after another's.

    The reason I'm asking is that this seems like a fundamental difficulty for game development where a game is supposedly designed around a player's skill. Consider a game like League of Legends. Assume that two players are battling it out. It's a very close contest between the two and it's coming down to the wire, so much so that whoever gets their last attack off will be the one to kill the other and win the game for their team. If the players are implemented using concurrency and the situation really is like this, is it essentially out of the players' hands at this point? Is the outcome of this match all up to whatever system is arbitrarily deciding which player's thread/process will execute next?

    If not, what am I misunderstanding about concurrency? If so, is there any way around this problem so that a game of skill can always be a game of skill, especially in those most crucial moments?
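    The scheduling nondeterminism being asked about is easy to see in a few lines of Java. This is a hedged, stand-alone illustration (the "player" threads are hypothetical and not from any game engine): each thread's own code is fully deterministic, yet the order in which the two lines print can change from run to run.

        // Two "players" race to land the final hit; the scheduler decides who wins.
        public class SchedulerOrderDemo {
            public static void main(String[] args) throws InterruptedException {
                Runnable attack = () ->
                        System.out.println(Thread.currentThread().getName() + " lands the final hit");

                Thread player1 = new Thread(attack, "player-1");
                Thread player2 = new Thread(attack, "player-2");

                player1.start();
                player2.start();

                player1.join();
                player2.join();
                // Run this several times: which name prints first is up to the
                // OS/JVM scheduler, not to anything in the program itself.
            }
        }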

    Read the article

  • Bash Script help required

    - by Sunil J
    I am trying to get this bash script that I found on a forum to work. I copied it into a text editor, saved it as script.sh, ran chmod 700 on it, and tried to run it.

        rootdir="/usr/share/malware"
        day=`date +%Y%m%d`
        url=`echo "wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:\": :g' |\
        awk '{print \"http://lists.clean-mx.com/pipermail/viruswatch/$day/\"$3}'"|sh`
        filename=`wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:": :g' |awk '{print $3}'`
        links -dump $url$filename | awk '/Up/'|grep "TR\|exe" | awk '{print $2,$8,$10,$11,$12"\n"}' > $rootdir/>$filename
        dirname=`wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:": :g' |awk '{print $3}'|sed 's:.html::g'`
        rm -rf $rootdir/$dirname
        mkdir $rootdir/$dirname
        cd $rootdir
        grep "exe$" $filename |awk '{print "wget \""$5"\""}' | sh
        ls *.exe | xargs md5 >> checksums
        mv *.exe $dirname
        rm -r $rootdir/*exe*
        mv checksums $rootdir/$dirname
        mv $filename $rootdir/$dirname

    I get the following messages:

        script.sh: line 11: /usr/share/malware/: Is a directory
        script.sh: line 11: links: command not found

    Read the article

  • Tuning B2B Server Engine Threads in SOA Suite 11g

    - by Shub Lahiri, A-Team
    Background

    B2B 11g has a number of parameters that can be tweaked to tune the engine for handling high volumes of messages. These parameters are also known as B2B server properties and are managed via the EM console. This note highlights one aspect of the tuning exercise and describes the different threads that can be configured to tune the performance of a B2B server.

    Symptoms

    The most common indicator of a B2B engine in need of tuning is a constant build-up of messages in an internal JMS queue within the B2B server. The queue is called B2B_EVENT_QUEUE and can be monitored via the WebLogic server console. Whenever such behaviour is seen, it usually results in a general degradation of performance.

    Remedy

    There can be many contributing factors behind a B2B server's degradation of performance. However, one of the first places to tune the server away from the out-of-the-box default configuration is the number of internal engine threads allocated within the B2B server. The default configuration for the B2B server engine threads is usually not suitable for high-volume messaging loads, so it is necessary to increase the counts for three types of threads by specifying the appropriate B2B server properties via the EM console, namely:

    - Inbound - b2b.inboundThreadCount
    - Outbound - b2b.outboundThreadCount
    - Default - b2b.defaultThreadCount

    The function of these threads is fairly self-explanatory. The inbound threads process the inbound messages coming into the B2B server from an external endpoint, and the outbound threads process the messages that are sent out from the B2B server. The default threads are responsible for certain B2B server-specific special tasks. If the inbound and outbound thread counts are not specified, the default thread count also dictates the total number of inbound and outbound threads.

    As in any tuning exercise, the optimal values for these threads are usually reached via an iterative process. The best working combination of the thread counts is directly related to the system infrastructure, traffic load and several other environmental factors.

    Read the article

  • 101 Ways to Participate...and make the future Java

    - by heathervc
     In case you missed it earlier today, and as promised in BOF6283, here are the 101 Ways to Improve (and Make the Future) Java...thanks to Bruno Souza of SouJava and Martijn Verburg of the London Java Community for their contributions! Join or create a JUG Come to the meetings Help promoting your JUG: twitter, facebook, etc Find someone that can give a talk Get your company to sponsor (a meeting, an event) Organize an activity (meetings, hackathons, dojos, etc) Answer questions on a mailing list (or simply join!) Volunteer for a small, one time tasks (creating a web page, helping with an activity) Come early to an event, and help to carry the piano Moderate a list or add things to the wiki Participate in the organization meetings or mailing lists Take pictures of an event or meeting and publish them online Write a blog about an event or meeting, to help promote the group Help record and post a session online Present your JavaOne experience when you get back Repeat the best talk you saw at JavaOne at a JUG meeting Send this list of ideas to other Java developers in your area so they can help out too! Present a step-by-step tutorial Present GreenFoot and Alice to school students Present BlueJ and Alice to university students Teach those tools to teachers and professors Write a step-by-step tutorial on your blog or to a magazine Create a page that lists resources Give a talk about your favorite Java feature or technology Learn a new Java API and present to your co-workers Then, present in a JUG meeting, and then, present it in an event in your area, and submit it to JavaOne! Create a study group to get certified or to learn some new Java technology Teach a non-Java developer how to download the basic tools and where to find more information Download and use an open source project Improve the documentation Write an article or a blog post about the project Write an FAQ Join and participate on the mailing list Describe a bug in detail and submit a bug report Fix a bug and submit it to the project Give a talk about it at a JUG meeting Teach your co-workers how to use the project Sign up to Adopt a JSR Test regular builds of the Reference Implementation (RI) Report bugs in the RI Submit Feature Requests to the spec Triage issues on the issue tracker Run a hack day to discuss the API Moderate mailing lists and forums Create an FAQ or Wiki Evangelize a specification on Twitter, G+, Hacker News, etc Give a lightning talk Help build the RI Help build the Technical Compatibility Kit (TCK) Create a Podcast Learn Latin - e.g. 
legal language, translate to English Sign up to Adopt OpenJDK Run a Bugathon Fix javac compiler warnings Build virtual images Add tests to Java Submit Javadoc patches Give a webinar Teach someone to build OpenJDK Hold a brown bag session at work Fix the oldest known bug Overhaul Javadoc to use HTML Load the OpenJDK into different IDEs Run a build farm node Test your code on a nightly build Learn how to read Java byte code Visit JCP.org Follow jcp_org on Twitter Friend JCP on Facebook Read JCP Blog Register for JCP.org site Create a JSR Watch List Review JSRs in progress Comment on JSRs in progress, write and track bug reports, use cases, etc Review JSRs in Maintenance Comment on JSRs in Maintenance Implement Final JSRs Review the Transparency of JSRs in progress and provide feedback to the PMO and Spec Lead/community Become a JCP Member or associate with a current JCP member Nominate to serve on an Expert Group (EG) Serve on an EG Submit a JSR proposal and become Spec Lead Take a Spec Lead role in an Inactive or Dormant JSR Nominate for an Executive Committee (EC) seat Vote in the EC elections Vote in EC Special Elections Review EC Meeting Summaries Attend Spec Lead calls Write blogs, articles on your experiences Join the EC project on java.net Join JCP.Next on java.net/JSR 358 Participate on the JCP forums and join JSR projects on java.net Suggest agenda items for open EC meetings Attend public EC teleconference (2x per year) Attend open EC meetings at JavaOne Nominate for JCP Annual Awards Attend annual JavaOne and JCP Annual Awards Ceremony Attend JCP related BOF sessions and give your feedback to Program Office Invite JCP program office members to your JUG or meetup Invite JSR Spec Leads to your JUG or meetup And always - hold a party!

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background I have a test fixture with a number of communication/data acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real-time, I'm having a hard time structuring the program so that it is easy to modify later on. For example, a National Instruments USB data acquisition device is used to control an analog output (load) and monitor an analog input (current), a digital scale with a serial data interface measures position, an air pressure gauge with a different serial data interface measures pressure, and the product itself is interfaced through a proprietary DLL that handles its own serial communication. The hard part The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product takes to go from position 0 to position 10,000, to a tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose Producer-Consumer model where the Producer thread loops through reading the sensors and setting the outputs. The Consumer thread executes functions containing timed loops that poll the Producer for current data and execute movement commands as required. The UI thread polls both threads to update some gauges indicating current test progress. Unsure where to start Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?
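    A minimal Java sketch (the asker is in C#, but the shape is the same) of that Producer/Consumer split: the producer publishes an immutable snapshot of the latest readings, and a consumer step polls the snapshot inside a timed loop. All device names and readings below are hypothetical stand-ins, not part of the original question.

    import java.util.concurrent.atomic.AtomicReference;

    public class BenchSketch {
        // Immutable snapshot of the latest sensor readings (hypothetical fields).
        record Sample(double position, double current, long nanos) {}

        private final AtomicReference<Sample> latest =
                new AtomicReference<>(new Sample(0, 0, System.nanoTime()));
        private volatile boolean running = true;

        // Producer: poll the hardware as fast as the devices allow and publish snapshots.
        void producerLoop() {
            while (running) {
                latest.set(new Sample(readScale(), readDaqCurrent(), System.nanoTime()));
            }
        }

        // Consumer: one timed test step - ramp a DAQ output while the product travels.
        void rampStep(double rampStart, double rampEnd) {
            long t0 = System.nanoTime();
            while (running) {
                Sample s = latest.get();
                if (s.position() >= rampEnd) {
                    setDaqOutput(0.0);
                    break;
                }
                if (s.position() >= rampStart) {
                    setDaqOutput((s.position() - rampStart) / (rampEnd - rampStart));
                }
            }
            System.out.printf("step took %.1f s%n", (System.nanoTime() - t0) / 1e9);
        }

        // Stubs standing in for real device I/O (scale, NI DAQ, etc.).
        double readScale() { return 0; }
        double readDaqCurrent() { return 0; }
        void setDaqOutput(double value) {}
    }

    Keeping the snapshot immutable sidesteps most locking questions: the consumer can never see a half-written reading, only a slightly stale one.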

    Read the article

  • Are these advanced/unfair interview questions regarding Java concurrency?

    - by sparc_spread
    Here are some questions I've recently asked interviewees who say they know Java concurrency: Explain the hazard of "memory visibility" - the way the JVM can reorder certain operations on variables that are unprotected by a monitor and not declared volatile, such that one thread may not see the changes made by another thread. Usually I ask this one by showing code where this hazard is present (e.g. the NoVisibility example in Listing 3.1 from "Java Concurrency in Practice" by Goetz et al) and asking what is wrong. Explain how volatile affects not just the actual variable declared volatile, but also any changes to variables made by a thread before it changes the volatile variable. Why might you use volatile instead of synchronized? Implement a condition variable with wait() and notifyAll(). Explain why you should use notifyAll(). Explain why the condition variable should be tested with a while loop. My question is - are these appropriate or too advanced to ask someone who says they know Java concurrency? And while we're at it, do you think that someone working in Java concurrency should be expected to have an above-average knowledge of Java garbage collection?
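    For reference, the condition-variable question is usually answered with something like the sketch below (the class and method names are mine, not from the book); the comments carry the two points the interviewer is after.

    public class Gate {
        private final Object lock = new Object();
        private boolean open = false;

        // The condition is re-tested in a while loop because wait() may return
        // spuriously, and because another thread may change the state between the
        // notification and the moment this thread re-acquires the monitor.
        public void await() throws InterruptedException {
            synchronized (lock) {
                while (!open) {
                    lock.wait();
                }
            }
        }

        // notifyAll() rather than notify(): every waiter's condition has become true,
        // and notify() might wake a thread that is waiting for a different condition
        // on the same monitor, leaving the right thread asleep.
        public void open() {
            synchronized (lock) {
                open = true;
                lock.notifyAll();
            }
        }
    }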

    Read the article

  • Android 2D terrain scrolling

    - by Nikola Ninkovic
    I want to make an infinite 2D terrain based on my algorithm. Then I want to move it along the Y axis (to the left). This is how I did it: public class Terrain { Queue<Integer> _bottom; Paint _paint; Bitmap _texture; Point _screen; int _numberOfColumns = 100; int _columnWidth = 20; public Terrain(int screenWidth, int screenHeight, Bitmap texture) { _bottom = new LinkedList<Integer>(); _screen = new Point(screenWidth, screenHeight); _numberOfColumns = screenWidth / 6; _columnWidth = screenWidth / _numberOfColumns; for(int i=0;i<=_numberOfColumns;i++) { // Generate terrain point and put it into _bottom queue } _paint = new Paint(); _paint.setStyle(Paint.Style.FILL); _paint.setShader(new BitmapShader(texture, Shader.TileMode.REPEAT, Shader.TileMode.REPEAT)); } public void update() { _bottom.remove(); // Algorithm calculates next point _bottom.add(nextPoint); } public void draw(Canvas canvas) { Iterator<Integer> i = _bottom.iterator(); int counter = 0; Path path = new Path(); path.moveTo(0, _screen.y); while (i.hasNext()) { path.lineTo(counter, _screen.y-i.next()); counter += _columnWidth; } path.lineTo(_screen.x, _screen.y); path.lineTo(0, _screen.y); canvas.drawPath(path, _paint); } } The problem is that the game is too 'fast', so I tried pausing the thread with Thread.sleep(50); in the run() method of my game thread, but then the animation looks too choppy. Is there any way to slow down the drawing of my terrain?
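    Rather than sleeping the whole game thread, one common fix is to decouple the update rate from the frame rate: keep drawing every frame, but only advance the terrain when a fixed time step has elapsed. A minimal sketch of that loop, reusing the Terrain class above (the interval value is an arbitrary placeholder):

    public class GameLoop implements Runnable {
        private static final long UPDATE_INTERVAL_MS = 100; // advance terrain 10 times per second
        private final Terrain terrain;
        private volatile boolean running = true;

        public GameLoop(Terrain terrain) {
            this.terrain = terrain;
        }

        @Override
        public void run() {
            long lastUpdate = System.currentTimeMillis();
            while (running) {
                long now = System.currentTimeMillis();
                // Advance the simulation only when enough game time has passed;
                // if the loop fell behind, catch up one step at a time.
                while (now - lastUpdate >= UPDATE_INTERVAL_MS) {
                    terrain.update();
                    lastUpdate += UPDATE_INTERVAL_MS;
                }
                // Drawing still happens every pass: lock the SurfaceView canvas
                // and call terrain.draw(canvas) here.
            }
        }

        public void stop() {
            running = false;
        }
    }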

    Read the article

  • TXPAUSE : polite waiting for hardware transactional memory

    - by Dave
    Classic locks are an appropriate tool to prevent potentially conflicting operations A and B, invoked by different threads, from running at the same time. In a sense the locks cause either A to run before B or vice versa. Similarly, we can replace the locks with hardware transactional memory, or use transactional lock elision to leverage potential disjoint access parallelism between A and B. But often we want A to wait until B has run. In a Pthreads environment we'd usually use locks in conjunction with condition variables to implement our "wait until" constraint. MONITOR-MWAIT is another way to wait for a memory location to change, but it only allows us to track one cache line and it's only available on x86. There's no similar "wait until" construct for hardware transactions. At the instruction-set level a simple way to express "wait until" in transactions would be to add a new TXPAUSE instruction that could be used within an active hardware transaction. TXPAUSE would politely stall the invoking thread, possibly surrendering or yielding compute resources, while at the same time continuing to track the transaction's address-set. Once a transaction has executed TXPAUSE, it can only abort. Ideally that'd happen when some other thread modifies a variable that's in the transaction's read-set or write-set. And since we're aborting, all writes would be discarded. In a sense this gives us multi-location MWAIT but with much more flexibility. We could also augment TXPAUSE with a cycle-count bound to cap the time spent stalled. I should note that we can already enter a tight spin loop in a transaction to wait for updates to the address-set to cause an abort. Assuming that the implementation monitors the address-set via cache-coherence probes, by waiting in this fashion we actually communicate via the probes, and not via memory values. That is, the updating thread signals the waiter via probes instead of via traditional memory values. But TXPAUSE gives us a polite way to spin.

    Read the article

  • Get Func-y v2.0

    - by PhubarBaz
    In my last post I talked about using funcs in C# to do async calls in WinForms to free up the main thread for the UI. In that post I demonstrated calling a method and then waiting until the value came back. Today I want to talk about calling a method and then continuing on and handling the results of the async call in a callback. The difference is that in the previous example, although the UI would not lock up, the user couldn't really do anything while the other thread was working, because it was waiting for it to finish. This time I want to allow the user to continue to do other stuff while waiting for the thread to finish. Like before, I have a service call I want to make that takes a long time to finish, defined in a method called MyServiceCall. We also need to define a callback method that takes an IAsyncResult parameter. public ServiceCallResult MyServiceCall(int param1)... public void MyCallbackMethod(IAsyncResult ar)... We start the same way, by defining a delegate to the service call method using a Func. We need to pass an AsyncCallback object into the BeginInvoke method. This will tell it to call our callback method when MyServiceCall finishes. The last parameter to BeginInvoke is the async state; by passing the Func delegate there, we get access to it in our callback. Func<int, ServiceCallResult> f = MyServiceCall; AsyncCallback callback = new AsyncCallback(MyCallbackMethod); IAsyncResult async = f.BeginInvoke(23, callback, f); Now let's expand the callback method. The IAsyncResult parameter contains the Func delegate in its AsyncState property. We call EndInvoke on that Func to get the return value. public void MyCallbackMethod(IAsyncResult ar) { Func<int, ServiceCallResult> serviceCall = (Func<int, ServiceCallResult>)ar.AsyncState; ServiceCallResult result = serviceCall.EndInvoke(ar); } There you have it. Now you don't have to make the user wait for something that isn't critical to the loading of the page.

    Read the article

  • MVC pattern synchronisation

    - by Hariprasad
    I am facing a problem in synchronizing my model and view threads. I have a view which is a table. In it, the user can select a few rows. I update the view as soon as the user clicks on any row, since I don't want the UI to be slow. This updating is done by logic which runs in the controller thread, described below. At the same time, the controller will update the model data too, which takes place in a different thread: the controller puts the query in a queue, which is then executed by the model thread - a single-threaded interface. As soon as the query executes, the controller gets a signal. Now, in order to keep the view and model synchronized, I update the view again based on the return value of the query (the data returned by the model) - even though I already updated the view for that user action. But I am facing issues because it takes a lot of time for the model to return the result, and by that time the user may have performed multiple clicks. So, as a result of updating the view again based on the information from the model, the view sometimes goes back to the state produced by an earlier click. (Suppose the user clicks three times on different rows. I update the view as soon as each click happens. I also update the view when data comes back from the model - which is supposed to match the already-updated state of the view. Now, when the user clicks the third time, the data for the first click arrives from the model. As a result, the view goes back to the state generated by the first click.) Is there any way to handle such a synchronization issue?
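    One common way to handle exactly this race is to tag every user action with a monotonically increasing request id and drop any model reply that is no longer the newest. The question does not name a framework, so the sketch below is a hypothetical, framework-neutral Java illustration of that idea; in a real application the callback would have to be marshalled back onto the UI thread.

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.function.Consumer;

    public class SelectionController {
        private final AtomicLong latestRequest = new AtomicLong();

        // Called on the controller/UI thread for every click.
        public void onRowClicked(int row) {
            long requestId = latestRequest.incrementAndGet();
            updateViewOptimistically(row);                 // immediate feedback, as today
            queryModelAsync(row, result ->                 // model answers later, on its own thread
                    applyIfStillCurrent(requestId, result));
        }

        // Drop any model response that has been superseded by a newer click.
        private void applyIfStillCurrent(long requestId, Object result) {
            if (requestId == latestRequest.get()) {
                updateViewFromModel(result);
            }
            // else: stale data from an earlier click - ignore it
        }

        // Hypothetical hooks standing in for the real view/model plumbing.
        private void updateViewOptimistically(int row) {}
        private void updateViewFromModel(Object result) {}
        private void queryModelAsync(int row, Consumer<Object> callback) {}
    }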

    Read the article

  • Should I expose IObservable<T> on my interfaces?

    - by Alex
    My colleague and I have a dispute. We are writing a .NET application that processes massive amounts of data. It receives data elements, groups subsets of them into blocks according to some criterion and processes those blocks. Let's say we have data items of type Foo arriving from some source (the network, for example) one by one. We wish to gather subsets of related objects of type Foo, construct an object of type Bar from each such subset and process objects of type Bar. One of us suggested the following design. Its main theme is exposing IObservable objects directly from the interfaces of our components. // ********* Interfaces ********** interface IFooSource { // this is the event-stream of objects of type Foo IObservable<Foo> FooArrivals { get; } } interface IBarSource { // this is the event-stream of objects of type Bar IObservable<Bar> BarArrivals { get; } } // ********* Implementations ********* class FooSource : IFooSource { // Here we put logic that receives Foo objects from the network and publishes them to the FooArrivals event stream. } class FooSubsetsToBarConverter : IBarSource { IFooSource fooSource; IObservable<Bar> BarArrivals { get { // Do some fancy Rx operators on fooSource.FooArrivals, like Buffer, Window, Join and others and return IObservable<Bar> } } } // this class will subscribe to the bar source and do processing class BarsProcessor { BarsProcessor(IBarSource barSource); void Subscribe(); } // ******************* Main ************************ class Program { public static void Main(string[] args) { var fooSource = FooSourceFactory.Create(); var barsProcessor = BarsProcessorFactory.Create(fooSource); // this will create FooSubsetsToBarConverter and BarsProcessor barsProcessor.Subscribe(); fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival. } } The other suggested a design whose main theme is using our own publisher/subscriber interfaces and using Rx inside the implementations only when needed. //********** interfaces ********* interface IPublisher<T> { void Subscribe(ISubscriber<T> subscriber); } interface ISubscriber<T> { void Callback(T item); } //********** implementations ********* class FooSource : IPublisher<Foo> { public void Subscribe(ISubscriber<Foo> subscriber) { /* ... */ } // here we put logic that receives Foo objects from some source (the network?) and publishes them to the registered subscribers } class FooSubsetsToBarConverter : ISubscriber<Foo>, IPublisher<Bar> { public void Callback(Foo foo) { // here we put logic that aggregates Foo objects and publishes Bars when we have received a subset of Foos that match our criteria // maybe we use Rx here internally. } public void Subscribe(ISubscriber<Bar> subscriber) { /* ... */ } } class BarsProcessor : ISubscriber<Bar> { public void Callback(Bar bar) { // here we put code that processes Bar objects } } //********** program ********* class Program { public static void Main(string[] args) { var fooSource = FooSourceFactory.Create(); var barsProcessor = BarsProcessorFactory.Create(fooSource); // this will create BarsProcessor and perform all the necessary subscriptions fooSource.Run(); // this enters a loop of listening for Foo objects from the network and notifying about their arrival. } } Which one do you think is better? Exposing IObservable and making our components create new event streams from Rx operators, or defining our own publisher/subscriber interfaces and using Rx internally if needed?
Here are some things to consider about the designs: In the first design the consumer of our interfaces has the whole power of Rx at his/her fingertips and can perform any Rx operators. One of us claims this is an advantage and the other claims that this is a drawback. The second design allows us to use any publisher/subscriber architecture under the hood. The first design ties us to Rx. If we wish to use the power of Rx, it requires more work in the second design because we need to translate the custom publisher/subscriber implementation to Rx and back. It requires writing glue code for every class that wishes to do some event processing.

    Read the article

  • Separate update and render

    - by NSAddict
    I'm programming a simple Snake in Java. I'm a complete newbie when it comes to Java and game development, so please bear with me ;) Until now, I have been using a UI thread as well as an update thread. The update thread just sets the positions, sets the GameObjects, and so on. I didn't think much about concurrency, but now I've run into a problem. I wanted to modify the ArrayList<GameObject>, but it throws a java.util.ConcurrentModificationException. With a little research I found out that this happens because the two threads are trying to access the variables at the same time. But I didn't really find a way to prevent this. I thought about copying the array and swapping the copies when the rendering is finished, but I would have to deep-copy them, which isn't really the best solution in my opinion - it probably eats up more CPU resources than a single-threaded game. Are there any other ways to prevent this? Thanks a lot for your help!
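    One common fix is to guard the list with a single lock and have the render thread draw from a shallow snapshot (copying only the references, not the objects), so no deep copy is needed. A minimal, hypothetical sketch - GameObject here is a stand-in for the asker's own class, and the drawing surface is assumed to be AWT's Graphics:

    import java.awt.Graphics;
    import java.util.ArrayList;
    import java.util.List;

    interface GameObject {
        boolean isDead();
        void draw(Graphics g);
    }

    public class GameState {
        private final List<GameObject> objects = new ArrayList<>();
        private final Object lock = new Object();

        // Update thread: mutate the list only while holding the lock.
        public void update() {
            synchronized (lock) {
                objects.removeIf(GameObject::isDead);
                // ... move snake segments, spawn food, add new GameObjects ...
            }
        }

        // Render thread: copy the references under the lock, then draw without holding it.
        // A shallow copy like this is cheap; the GameObjects themselves are not cloned.
        public void render(Graphics g) {
            List<GameObject> snapshot;
            synchronized (lock) {
                snapshot = new ArrayList<>(objects);
            }
            for (GameObject obj : snapshot) {
                obj.draw(g);
            }
        }
    }

    java.util.concurrent.CopyOnWriteArrayList does essentially the same copy-on-mutation trick automatically, which is a reasonable fit when the list is read far more often than it is modified.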

    Read the article

< Previous Page | 234 235 236 237 238 239 240 241 242 243 244 245  | Next Page >