Search Results

Search found 40915 results on 1637 pages for 'virtual method'.


  • XNA Seeing through heightmap problem

    - by Jesse Emond
    I've recently started learning how to program in 3D with XNA and I've been trying to implement a Terrain3D class (a very simple height map). I've managed to draw a simple terrain, but I'm getting a weird bug where I can see through the terrain. This bug happens when I'm looking through a hill on the map. Here is a picture of what happens: I was wondering if this is a common mistake for starters and if any of you ever experienced the same problem and could tell me what I'm doing wrong. If it's not such an obvious problem, here is my Draw method:

        public override void Draw()
        {
            Parent.Engine.SpriteBatch.Begin(SpriteBlendMode.None,
                SpriteSortMode.Immediate, SaveStateMode.SaveState);
            Camera3D cam = (Camera3D)Parent.Engine.Services.GetService(typeof(Camera3D));
            if (cam == null)
                throw new Exception("Camera3D couldn't be found. Drawing a 3D terrain requires a 3D camera.");

            float triangleCount = indices.Length / 3f;
            basicEffect.Begin();
            basicEffect.World = worldMatrix;
            basicEffect.View = cam.ViewMatrix;
            basicEffect.Projection = cam.ProjectionMatrix;
            basicEffect.VertexColorEnabled = true;

            Parent.Engine.GraphicsDevice.VertexDeclaration = new VertexDeclaration(
                Parent.Engine.GraphicsDevice, VertexPositionColor.VertexElements);

            foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
            {
                pass.Begin();
                Parent.Engine.GraphicsDevice.Vertices[0].SetSource(vertexBuffer, 0,
                    VertexPositionColor.SizeInBytes);
                Parent.Engine.GraphicsDevice.Indices = indexBuffer;
                Parent.Engine.GraphicsDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList,
                    0, 0, vertices.Length, 0, (int)triangleCount);
                pass.End();
            }
            basicEffect.End();
            Parent.Engine.SpriteBatch.End();
        }

    Parent is just a property holding the screen that the component belongs to. Engine is a property of that parent screen holding the engine that it belongs to. If I should post more code (like the initialization code), just leave a comment and I will.
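    A hedged aside, not part of the original question: in XNA 3.x, beginning a SpriteBatch is widely reported to leave the depth buffer disabled, so triangles are painted in submission order and far hills show through near ones, which matches this symptom. A minimal sketch of the usual remedy, re-enabling depth state before the 3D draw calls (assumes the XNA 3.x RenderState API used above):

        // Re-enable depth testing after SpriteBatch.Begin, before drawing the terrain.
        // These are XNA 3.x RenderState properties; without them the triangle list is
        // painted in index order and hidden surfaces show through.
        GraphicsDevice device = Parent.Engine.GraphicsDevice;
        device.RenderState.DepthBufferEnable = true;       // turn depth testing back on
        device.RenderState.DepthBufferWriteEnable = true;  // allow writes to the depth buffer
        // ...then set the vertex/index buffers and call DrawIndexedPrimitives as above.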


  • Feed the Beast Ultimate black screen after login 13.04?

    - by Drew S
    I get a black screen after the Feed the Beast Ultimate splash animation at login. I have tried manually upgrading LWJGL and using different versions of Java (OpenJDK 6 and 7). No matter what, I get this:

        2013-06-28 15:23:17 [INFO] [STDERR] Exception in thread "Minecraft main thread" java.lang.ExceptionInInitializerError
        2013-06-28 15:23:17 [INFO] [STDERR]     at net.minecraft.client.Minecraft.a(Minecraft.java:356)
        2013-06-28 15:23:17 [INFO] [STDERR]     at asq.a(SourceFile:56)
        2013-06-28 15:23:17 [INFO] [STDERR]     at net.minecraft.client.Minecraft.run(Minecraft.java:746)
        2013-06-28 15:23:17 [INFO] [STDERR]     at java.lang.Thread.run(Thread.java:722)
        2013-06-28 15:23:17 [INFO] [STDERR] Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
        2013-06-28 15:23:17 [INFO] [STDERR]     at org.lwjgl.opengl.XRandR$Screen.<init>(XRandR.java:234)
        2013-06-28 15:23:17 [INFO] [STDERR]     at org.lwjgl.opengl.XRandR$Screen.<init>(XRandR.java:196)
        2013-06-28 15:23:17 [INFO] [STDERR]     at org.lwjgl.opengl.XRandR.populate(XRandR.java:87)
        2013-06-28 15:23:17 [INFO] [STDERR]     at org.lwjgl.opengl.XRandR.access$100(XRandR.java:52)
        2013-06-28 15:23:17 [INFO] [STDERR]     at org.lwjgl.opengl.XRandR$1.run(XRandR.java:110)
        2013-06-28 15:23:17 [INFO] [STDERR]     at java.security.AccessController.doPrivileged(Native Method)
        2013-06-28 15:23:17 [INFO] [STDERR]     at org.lwjgl.opengl.XRandR.getConfiguration(XRandR.java:108)
        2013-06-28 15:23:17 [INFO] [STDERR]     at org.lwjgl.opengl.LinuxDisplay.init(LinuxDisplay.java:618)
        2013-06-28 15:23:17 [INFO] [STDERR]     at org.lwjgl.opengl.Display.<clinit>(Display.java:135)
        2013-06-28 15:23:17 [INFO] [STDERR]     ... 4 more


  • Term for unit testing that separates test logic from test result data

    - by mario
    So I'm not doing any unit testing. But I've had an idea to make it more appropriate for my field of use. Yet it's not clear if something like this exists, and if so, what it would be called.

    Ordinary unit tests combine the test logic and the expected outcome. In essence the testing framework only checks for booleans (did this match, did the expected result occur). To generalize, the test code itself references the audited functions and also makes the expected result values explicit, like so:

        unit::assert( test_me() == 17 )

    What I'm looking for is a separation of concerns. The test itself should only contain the tested logic. The outcome and result data should be handled by the unit testing or assertion framework. As an example:

        unit::probe( test_me() )

    Here the probe actually doubles as collector on the first run, and afterwards as verification method. The expected 17 is not mentioned in the test code, but stored or managed elsewhere.

    What is this scheme called? Or what would you call it? I hope I can find some actual implementations with the proper terminology.

    Obviously such a pattern is unfit for TDD. It's strictly for regression testing. Also obviously, it cannot be used for all cases. Only the simpler test subjects can be analyzed that way; for anything else the ordinary unit test setup and assertion steps are required. And yes, this could be manually accomplished by crafting a ResultWhateverObject, but that would still require hardwiring it to the test logic. Also keep in mind that I'm asking about use with scripting languages, not about Java. I'm aware that the xUnit pattern originates there, and why it's hence as elaborate as it is.

    By the way, I've discovered one test execution framework which allows shortening simple test notations to:

        test_me(); // 17

    While the result data is thus no longer coded in (it's a comment), that's still not a complete separation, and of course it would work only for scalar results.
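    A hedged aside, not from the original question: this record-then-verify separation is commonly known as snapshot testing, approval testing, or golden-master testing. A minimal sketch in Python of such a probe; the file name and layout are hypothetical:

        import json, os

        SNAPSHOT_FILE = "snapshots.json"  # hypothetical store for expected results

        def probe(name, value):
            """Record value on the first run; on later runs, assert it matches the stored result."""
            snapshots = {}
            if os.path.exists(SNAPSHOT_FILE):
                with open(SNAPSHOT_FILE) as f:
                    snapshots = json.load(f)
            if name not in snapshots:            # first run: collect
                snapshots[name] = value
                with open(SNAPSHOT_FILE, "w") as f:
                    json.dump(snapshots, f, indent=2)
            else:                                # later runs: verify
                assert snapshots[name] == value, \
                    f"{name}: expected {snapshots[name]!r}, got {value!r}"

        # usage: probe("test_me", test_me())
        # The expected value lives in snapshots.json, not in the test code.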


  • Java EE and GlassFish Server Roadmap Update

    - by John Clingan
    2013 has been a stellar year for both the Java EE and GlassFish Server communities. On June 12, Oracle and its partners announced the release of Java EE 7, which delivers on three major themes – HTML5, developer productivity, and meeting enterprise demands. The online event attracted over 10,000 views in the first two days! During the online event, Oracle also announced the availability of GlassFish Server Open Source Edition 4, the world's first Java EE 7 compatible application server. The primary role of GlassFish Server Open Source Edition has been, and continues to be, driving adoption of the latest release of the Java Platform, Enterprise Edition. Oracle also announced the Java EE 7 SDK, which bundles GlassFish Server Open Source Edition 4, as a Java EE 7 learning aid. Last, Oracle publicly announced the Java EE 7 reference implementation based on GlassFish Server Open Source Edition 4.

    Java EE is a popular platform, as evidenced by the 20+ Java EE 6 compatible implementations available to choose from. After the launch of Java EE 7 and GlassFish Server Open Source Edition 4, we began planning the Java EE 8 roadmap, which was covered during the JavaOne Strategy Keynote. To summarize, there is a lot of interest in improving HTML5 support, Cloud, and investigating NoSQL support. We received a lot of great feedback from the community and customers on what they would like to see in Java EE 8.

    As we approached JavaOne 2013, we started planning the GlassFish Server roadmap. What we announced at JavaOne was that GlassFish Server Open Source Edition 4.1 is scheduled for 2014. Here is an update to that roadmap:

      • GlassFish Server Open Source Edition 4.1 is scheduled for 2014.
      • We are planning updates as needed to GlassFish Server Open Source Edition, which is commercially unsupported.
      • As we head towards Java EE 8, the trunk will eventually transition to GlassFish Server Open Source Edition 5 as a Java EE 8 implementation. The Java EE 8 Reference Implementation will be derived from GlassFish Server Open Source Edition 5. This replicates what has been done in past Java EE and GlassFish Server releases.
      • Oracle will no longer release future major releases of Oracle GlassFish Server with commercial support – specifically, Oracle GlassFish Server 4.x with commercial Java EE 7 support will not be released. Commercial Java EE 7 support will be provided from WebLogic Server.

    Expanding on that last bullet, new and existing Oracle GlassFish Server 2.1.x and 3.1.x commercial customers will continue to be supported according to the Oracle Lifetime Support Policy. Oracle recommends that existing commercial Oracle GlassFish Server customers begin planning to move to Oracle WebLogic Server, which is a natural technical and license migration path forward:

      • Applications developed to Java EE standards can be deployed to both GlassFish Server and Oracle WebLogic Server.
      • GlassFish Server and Oracle WebLogic Server have implementation-specific deployment descriptor interoperability (here and here).
      • GlassFish Server 3.x and Oracle WebLogic Server share quite a bit of code, so there are many configuration and (extended) feature similarities. Shared code includes JPA, JAX-RS, WebSockets (pre JSR 356 in both cases), CDI, Bean Validation, JAX-WS, JAXB, and WS-AT.
      • Both Oracle GlassFish Server 3.x and Oracle WebLogic Server 12c support Oracle Access Manager, Oracle Coherence, Oracle Directory Server, Oracle Virtual Directory, Oracle Database, and Oracle Enterprise Manager, and are entitled to support for the underlying Oracle JDK.

    To summarize, Oracle is committed to the future of Java EE. Java EE 7 has been released and planning for Java EE 8 has begun. GlassFish Server Open Source Edition continues to be the strategic foundation for the Java EE reference implementation going forward. And for developers, updates will be delivered as needed to continue to deliver a great developer experience for GlassFish Server Open Source Edition. We are planning for GlassFish Server Open Source Edition 5 as the foundation for the Java EE 8 reference implementation, as well as bundling GlassFish Server Open Source Edition 5 in a Java EE 8 SDK, which is the most popular distribution of GlassFish. This will allow GlassFish releases to be more focused on the Java EE platform and community-driven requirements. We continue to encourage community contributions, bug reports, participation on the GlassFish forum, etc. Going forward, Oracle WebLogic Server will be the single strategic commercially supported application server from Oracle.

    Disclaimer: The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.


  • Using json as database with EF, how can I link EF and the json file during DbContext initialization?

    - by blacai
    For a personal testing project I am considering creating a SPA with the following technologies: ASP.NET MVC + EF + Web API + AngularJS. The project will make use of a small amount of data, so I was thinking I could use just a .json file as storage. But I am not sure how to proceed with the link between EF and the JSON file in the initialization of the DbContext. I found a related Stack Overflow question: http://stackoverflow.com/questions/13899342/can-we-use-json-as-a-database

    I know the basics of editing files and storing data inside them. What I tried is to get the data from the JSON file in the initializer method and create the objects one by one. This is more a doubt about how this works: if I save/update an object in the DbContext, do I need to go through all the elements and add/update each one manually? Is it better to rewrite the complete file? According to this http://stackoverflow.com/questions/7895335/append-data-to-a-json-file-with-php it is not good practice to use JSON/XML for data which will be manipulated.

    Has anyone experience with anything similar? Is this a really bad idea, and should I use another kind of data storage?
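    A hedged sketch, not from the original question: at this data size, the simplest correct approach is usually to skip EF entirely, load the whole file into memory, and rewrite the whole file on save (which also sidesteps the append/patch problems in the linked questions). A minimal sketch using Json.NET (Newtonsoft.Json); all names are hypothetical:

        using System.Collections.Generic;
        using System.IO;
        using Newtonsoft.Json;

        public class JsonStore<T>
        {
            private readonly string _path;

            // The whole data set lives in memory; callers add/remove/update items here.
            public List<T> Items { get; private set; }

            public JsonStore(string path)
            {
                _path = path;
                Items = File.Exists(path)
                    ? JsonConvert.DeserializeObject<List<T>>(File.ReadAllText(path)) ?? new List<T>()
                    : new List<T>();
            }

            // Persist by serializing the in-memory list back over the file.
            public void SaveChanges()
            {
                File.WriteAllText(_path, JsonConvert.SerializeObject(Items, Formatting.Indented));
            }
        }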


  • How do I make time?

    - by SystemNetworks
    I wanted to output text for a certain amount of time. One way is to use threads. Are there any other ways? I can't use threads with Slick2D. This is my code when I use threads with Slick:

        package javagame;

        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.Image;
        import java.util.Random;
        import org.newdawn.slick.Input;
        import org.newdawn.slick.*;
        import org.newdawn.slick.state.*;
        import org.lwjgl.input.Mouse;

        public class thread1 implements Runnable {
            String showUp;
            int timeLeft;

            public thread1(String s) {
                s = showUp;
            }

            public void run(Graphics g) {
                try {
                    g.drawString("%s is sleeping %d", 500, 500);
                    Thread.sleep(timeLeft);
                    g.drawString("%s is awake", 600, 600);
                } catch (Exception e) {
                }
            }

            @Override
            public void run() {
                // TODO Auto-generated method stub
                run();
            }
        }

    It auto-generates a new run(), and when I call it from my main class I get a stack overflow!
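    A hedged sketch, not from the original question: the usual thread-free way in Slick2D is to count down the milliseconds passed to update() and only draw the text while the timer is positive. A minimal example against the real Slick2D BasicGame API; class and field names are hypothetical:

        import org.newdawn.slick.BasicGame;
        import org.newdawn.slick.GameContainer;
        import org.newdawn.slick.Graphics;
        import org.newdawn.slick.SlickException;

        public class TimedTextExample extends BasicGame {
            private int messageTimer;      // milliseconds left to show the text
            private String message = "";

            public TimedTextExample() {
                super("timed text");
            }

            public void showMessage(String text, int millis) {
                message = text;
                messageTimer = millis;
            }

            @Override
            public void init(GameContainer container) throws SlickException {
            }

            @Override
            public void update(GameContainer container, int delta) throws SlickException {
                if (messageTimer > 0) {
                    messageTimer -= delta;  // delta = ms elapsed since the last frame
                }
            }

            @Override
            public void render(GameContainer container, Graphics g) throws SlickException {
                if (messageTimer > 0) {
                    g.drawString(message, 500, 500);  // visible only while the timer runs
                }
            }
        }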


  • How to Implement Complex Form Data?

    - by SoulBeaver
    I'm supposed to implement a relatively complex form that looks as follows, but has at least four more pages requiring the user to fill in all necessary information for the tracks. This data will need to be sent to the server, which is implemented using Dropwizard. I'm looking for best practices on how to upload and send such a complex form, with potentially dozens of songs, to the server.

    The simplest available solution I have seen is a plain multipart/form-data request with the following form schema (source):

    Client:

        <html>
        <body>
            <h1>File Upload with Jersey</h1>
            <form action="rest/file/upload" method="post" enctype="multipart/form-data">
                <p>Select a file: <input type="file" name="file" size="45" /></p>
                <input type="submit" value="Upload It" />
            </form>
        </body>
        </html>

    Server:

        @POST
        @Path("/upload")
        @Consumes(MediaType.MULTIPART_FORM_DATA)
        public Response uploadTrack(final FormDataMultiPart multiPart) {
            List<FormDataBodyPart> artists = multiPart.getFields("artist");
            StringBuffer output = new StringBuffer();
            for (FormDataBodyPart artist : artists)
                output.append(artist.getValueAs(String.class));
            List<FormDataBodyPart> tracks = multiPart.getFields("track");
            for (FormDataBodyPart track : tracks)
                writeToFile(track.getValueAs(InputStream.class), "Foo");
            return Response.status(200).entity(output.toString()).build();
        }

    Then I have also read about file uploads via Ajax or FormData (Mozilla HttpRequest), which allows POSTs in the formats application/x-www-form-urlencoded, multipart/form-data, or text/plain. I don't know which approach, if any, is best. An ideal solution would be to utilize Jackson to convert a JSON string into my data objects, but I don't get the impression that this is possible with binary data.
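    A hedged sketch, not from the original question: one common middle ground is to keep the binary tracks as multipart parts but put all the non-binary form data in a single JSON part, which Jackson can then map to a POJO. This assumes the same Jersey multipart setup as above; the part names and the AlbumMetadata POJO are hypothetical:

        @POST
        @Path("/upload")
        @Consumes(MediaType.MULTIPART_FORM_DATA)
        public Response uploadAlbum(
                @FormDataParam("metadata") String metadataJson,           // JSON for all non-binary fields
                @FormDataParam("track") List<FormDataBodyPart> tracks)    // one part per audio file
                throws IOException {
            // Jackson turns the metadata part into a typed object (hypothetical POJO).
            AlbumMetadata metadata =
                    new ObjectMapper().readValue(metadataJson, AlbumMetadata.class);
            for (FormDataBodyPart track : tracks) {
                writeToFile(track.getValueAs(InputStream.class), "Foo"); // as in the original handler
            }
            return Response.ok(metadata.getArtist()).build();
        }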


  • RightNow CX @ OpenWorld: What to Experience

    - by Tony Berk
    We want to welcome our RightNow CX customers to Oracle OpenWorld next week. Get ready for a great week and a whole new experience! For a high-level overview of what is going on during the week, please review these previous posts: Is There a Cloud Over OpenWorld? and What to "CRM" in San Francisco? CRM Highlights for OpenWorld '12. Also, don't forget you can add on the Customer Experience Summit @ OpenWorld to make your week even more complete and get involved with the Experience Revolution! Below is a highlight of only some of the RightNow-related sessions at OpenWorld. Please use the OpenWorld Schedule Builder or check the OpenWorld Content Catalog for all of the session details and any time or location changes. Tip: Pre-enrolled session registrants via Schedule Builder are allowed into the session rooms before anyone else, so Schedule Builder will guarantee you a seat. Many of the sessions below will likely be at capacity.

    No better way to start off than hearing where Oracle RightNow is going!

      • Oracle RightNow CX Cloud Service Vision and Roadmap (CON9764) - Oct 1, 10:45 AM. Oracle RightNow CX Cloud Service combines Web, social, and contact center experiences for a unified, cross-channel service solution in the cloud, enabling organizations to increase sales and adoption, build trust, strengthen relationships, and reduce costs and effort. Come to this session to hear from David Vap and his team of Oracle experts about where the product is going and how Oracle is committed to accelerating the pace of innovation and value to its customers.

      • CIOs and Governance in the Cloud (CON9767) - Oct 3, 11:45 AM. Interested in the cloud and want to know why some leading CIOs are moving to the cloud? You can hear first hand from CIOs from Emerson, Intuit and Overstock.com.

    And of course there are a number of sessions that drill down into more specific areas. Here are just a few:

      • Deliver Outstanding Customer Experiences: Oracle RightNow Dynamic Agent Desktop Cloud Service (CON9771) - Oct 1, 4:45 PM. This session covers how companies have delivered exceptional customer experiences and how the Oracle RightNow Dynamic Agent Desktop Cloud Service roadmap will evolve in the future. The Oracle RightNow Contact Center Experience suite includes incident management, knowledge, guided processes, and other service capabilities to unify the customer experience across channels. Come learn about the powerful tools that enable even your junior agents to consistently provide outstanding service across all customer interaction channels.

      • Self-Service in the Age of Data Intimacy (CON11516) - Oct 1, 3:15 PM. Even though businesses are generating more and more data around their relationships and interactions with customers, very little of the information a business generates ends up available to the contact center, and even less is made available to the online service experience. The generic one-size-fits-all approach that typifies most online service experiences ultimately fails to address all user needs, and that failure ultimately leads to the continued use of high-cost agent-assisted channels for low-value interactions. This session introduces Oracle RightNow Web Experience's Virtual Assistant and discusses how you can deliver rich, engaging, highly personalized experiences with the quality of agent-assisted service at a much lower cost.

      • Improve Chat Experiences: Best Practices for Chat Pilots and Deployments (CON11517) - Oct 1, 4:45 PM. Today's organizations are challenged to grow revenue and retain customers with fewer resources, and many have turned to chat as an approach to improving the customer experience, increasing sales conversions, and reducing costs at the same time. From setting goals and metrics and training staff to customizing and tuning the solution, this session provides best practices and lessons learned from a broad set of implementations to help you get the most out of your chat solution.

      • Differentiated Experience with Web Service (CON9770) - Oct 2, 1:15 PM. A reputation for excellent customer service can differentiate your brand and drive revenue. In this session, learn how to develop that reputation by transforming your online self-service into a highly interactive, branded customer experience. See live examples of how Oracle RightNow Web Experience has helped customers deliver on their Web service strategies.

      • Unifying the Agent's Engagement Console (CON11518) - Oct 2, 1:15 PM. Does your customer experience suffer because your agents are toggling between multiple tools? Do your agent productivity and morale suffer as well? Come to this session to learn how Oracle RightNow CX Cloud Service seamlessly unifies these disparate systems into a single engagement console. Regardless of channel, powerful adaptive tools consistently guide agents across contextually aware personalized workflows. Great agent experiences drive great customer experiences.

      • Oracle RightNow CX Cloud Service and the Oracle Customer Experience Portfolio (CON9775) - Oct 3, 10:15 AM. This session covers how Oracle's integrated suite of customer experience (CX) products fits with the Oracle CX portfolio of products (Oracle Fusion Customer Relationship Management; the Oracle ATG, Oracle Endeca, and Oracle Knowledge product families; and Oracle Business Intelligence) to increase revenues, strengthen customer relationships, and reduce costs across the entire end-to-end customer lifecycle for companies that sell to consumers and those that sell to businesses.

      • Greater Insights from Customer Engagements (CON9773) - Oct 4, 12:45 PM. In this session, hear how to leverage service interaction insights, customer feedback, and segmented service engagements to improve the customer experience. Discover how customers, such as J&P Cycles, learn and take action based on business insights gained through their customer engagements.

    Again, these are just some of the sessions, so check out the Content Catalog for details on Knowledge Management, Customization, Integration and more in the Oracle Develop stream for Customer Experience. Be sure to visit the Oracle DEMOgrounds in the Moscone West Exhibit Hall. If this is your first OpenWorld, welcome! If you are returning, hi again and enjoy!


  • DNS records on website... What are they for?

    - by Blake Nic
    Recently we had to get some DDoS protection for our website because of the large attacks we were seeing after getting a bit of popularity. We handed our domain and hosting information over to our DDoS protection provider. It worked perfectly, but I have a question. Our DNS records have Host, Answer, and Type fields. The Host has our domain name there. The Answer is this: SOMETEXTXXXX.dv.googlehosted.com. When I copy and paste it into my browser, it gives me a 404 error, but our website still loads and functions as it should. I don't understand why it would need this. I asked them about it and they said it is a method for DDoS protection, and the other IPs are the reverse proxy (the other IPs give a 404 error too). Can anyone expand on this, please? How does all this tie together, and how does the browser know where to point the person with all these reverse proxies and such? Thank you. Here is an image for reference: http://i.stack.imgur.com/qo5QO.png


  • Creating a custom validation rule and registering it

    - by FormsEleven
    What is a validation rule? A validation rule is a piece of code that performs some check ensuring that data meets given constraints. In an enterprise application development environment, developers often need the same validation, based on some logic, in several places across projects. Instead of creating redundant validations, a custom validation rule provides a library of validation rules that can be registered and used across applications. A custom validation is encapsulated in a reusable component so that you do not have to write it every time you need to do input validation. Here is how we can easily implement a custom validation that checks that the name of an employee is not "KING".

    To create a custom validation rule:

      1. Create a generic application workspace "CustomValidator" with the project "Model".
      2. Create business components (BC4J) based on the EMP table.
      3. Create a custom validation rule. In the EmpNamerule class, update the validateValue(..) method as follows:

        public boolean validateValue(Object value) {
            EntityImpl emp = (EntityImpl) value;
            if (emp.getAttribute("Ename").toString().equals("KING")) {
                return false;
            }
            return true;
        }

    Create an ADF library: The next step is to create an ADF library, let's say testADFLibrary1.jar.

    Register the ADF library: The next step is to register the ADF library so that it is available across applications.

      • Invoke the menu "Tools -> Preferences".
      • Select the option "Business Components -> Registered Rules" in the left pane.
      • Click the "Pick Library" button. The "Select Library" dialog comes up with the user library added.
      • Add a new library that points to the above jar.
      • Check the "Register" checkbox and set the name for the rule.

    Sample usage: Here is how we can easily apply a validation rule that restricts the name of the employee from being "KING".

      • Create a new application with BC4J based on the EMP table.
      • Create a new validation under the Business Rules tab for Ename and select the above custom validation rule.
      • Run the AppModule tester.


  • Scale a game object to Bounds

    - by Spikeh
    I'm trying to scale a lot of dynamically created game objects in Unity3D to the bounds of a sphere collider, based on the size of their current mesh. Each object has a different scale and mesh size. Some are bigger than the AABB of the collider, and some are smaller. Here's the script I've written so far:

        private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere)
        {
            var currentScale = objectToScale.transform.localScale;
            var currentSize = objectToScale.GetMeshHierarchyBounds().size;
            var targetSize = (sphere.radius * 2);
            var newScale = new Vector3
            {
                x = targetSize * currentScale.x / currentSize.x,
                y = targetSize * currentScale.y / currentSize.y,
                z = targetSize * currentScale.z / currentSize.z
            };

            Debug.Log("{0} Current scale: {1}, targetSize: {2}, currentSize: {3}, newScale: {4}, currentScale.x: {5}, currentSize.x: {6}",
                objectToScale.name, currentScale, targetSize, currentSize, newScale, currentScale.x, currentSize.x);
            // DoorDevice_meshBase Current scale: (0.1, 4.0, 3.0), targetSize: 5, currentSize: (2.9, 4.0, 1.1), newScale: (0.2, 5.0, 13.4), currentScale.x: 0.125, currentSize.x: 2.869114
            // RedControlPanelForAirlock_meshBase Current scale: (1.0, 1.0, 1.0), targetSize: 5, currentSize: (0.0, 0.3, 0.2), newScale: (147.1, 16.7, 25.0), currentScale.x: 1, currentSize.x: 0.03400017

            objectToScale.transform.localScale = newScale;
        }

    And the supporting extension method:

        public static Bounds GetMeshHierarchyBounds(this GameObject go)
        {
            var bounds = new Bounds(); // Not used, but a struct needs to be instantiated
            if (go.renderer != null)
            {
                bounds = go.renderer.bounds; // Make sure the parent is included
                Debug.Log("Found parent bounds: " + bounds);
                //bounds.Encapsulate(go.renderer.bounds);
            }
            foreach (var c in go.GetComponentsInChildren<MeshRenderer>())
            {
                Debug.Log("Found {0} bounds are {1}", c.name, c.bounds);
                if (bounds.size == Vector3.zero)
                {
                    bounds = c.bounds;
                }
                else
                {
                    bounds.Encapsulate(c.bounds);
                }
            }
            return bounds;
        }

    After the rescale, there doesn't seem to be any consistency in the results - some objects with completely uniform scales (x, y, z) seem to resize correctly, but others don't. It's one of those things I've been trying to fix for so long that I've lost all grasp of the logic. Any help would be appreciated!
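    A hedged observation, not from the original post: Renderer.bounds is a world-axis-aligned box, so dividing per axis (as above) can distort any mesh that is rotated relative to the world axes. One plausible alternative is a single uniform factor derived from the largest world-space dimension, which preserves proportions. A minimal sketch against the same hypothetical helper:

        // Sketch: scale uniformly so the mesh's largest world-space dimension
        // matches the sphere's diameter. One factor for all three axes avoids the
        // per-axis distortion that world-aligned bounds introduce on rotated meshes.
        private void ScaleToCollider(GameObject objectToScale, SphereCollider sphere)
        {
            Bounds bounds = objectToScale.GetMeshHierarchyBounds();
            float largestDimension =
                Mathf.Max(bounds.size.x, Mathf.Max(bounds.size.y, bounds.size.z));
            float factor = (sphere.radius * 2f) / largestDimension;
            objectToScale.transform.localScale *= factor;  // preserves original proportions
        }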


  • Boot-up fails, drops to initramfs prompt (12.04)

    - by dpm
    I am running an HP Pavilion dv6000, dual-booting Windows 7 and Ubuntu 12.04 (well, up until today). After a reboot, the boot process drops to the BusyBox shell and I end up at the prompt:

        BusyBox v1.18.5 (Ubuntu 1:1.18.5-1ubuntu4) built-in shell (ash)
        Enter 'help' for a list of built-in commands.
        (initramfs)

    I've been researching others who have had this same problem, but haven't been able to get any of those solutions to work for me. I tried the method described here: http://www.proposedsolution.com/solutions/ubuntu-booting-to-initramfs-prompt/ and after the final command

        mount -t ntfs-3g /dev/sda1 /root -o force

    it does nothing and gives me another (initramfs) prompt. I can boot to a live CD (USB) and get to a terminal, but it doesn't seem to do much good, as I can see /dev/sda1 in the output of ls, but it doesn't recognize it when I try to cd to it. My command-line skills are very green, and I am just starting to grasp them. One more question: using the command fdisk -l, how can I tell which partition (sda1/sda2) is my Windows partition and which one is Ubuntu? Any help? I'm in a bit over my head right now...


  • In Scrum, should you split up the backlog in a functional backlog and a technical backlog or not?

    - by Patrick
    In our Scrum teams we use a backlog, which mostly contains functional topics but also sometimes contains technical topics. The advantage of having one backlog is that it becomes easy to choose the topics for the next sprint, but I have some questions:

    First, to me it seems more logical to have a separate technical backlog, where developers themselves can add purely technical items, like: we could improve performance in this method, this class lacks some technical documentation, and so on. With one backlog, all developers always have to go via the product owner to have their topics added to the backlog, which seems like additional, unnecessary work for the product owner.

    Second, if you have a product owner that only focuses on the purely functional items, the purely technical items (like missing technical documentation, code that erodes and should be refactored, classes that always give problems during debugging because they don't have a stable foundation and should be refactored, ...) always end up at the end of the list because "they don't serve the customer directly". With a separate technical backlog, and time reserved in every sprint for these purely technical items, we can improve the applications functionally, but also keep them healthy inside.

    What is the best approach? One backlog or two?


  • Brand New Annotations Support

    - by Ondrej Brejla
    Hi all! Today we would like to introduce our brand new annotation support for NetBeans 7.2. The first thing that is different is the look of annotations in code completion. As you can see, there is a new annotation icon and an annotation type. Because we have a lot of modules with their own annotations, we distinguish them in the code completion window by their type. We support annotations for: ApiGen (legacy PHPDoc annotations), PHPUnit, Doctrine 2 (ORM and ODM) and Symfony 2. Every annotation can be associated with some context. We recognize four of them: function, class/interface (type), method and field. This means that you will get just the proper annotations for your class field as well as for your global function. Do you have your own annotations? Or do you simply miss some? It is not hard to add them. We have a simple UI for adding your custom annotations! It's in Tools -> Options -> PHP -> Annotations. Here you can simply add, edit or delete your annotations. When you try to create a new one, all fields are prefilled with default values, so you really don't have to remember "how to use that crazy FreeMarker syntax". If you are satisfied with your new annotation, you can see it in the code completion window among the other annotations. As you can see, it has its own "Custom" type. That's all for today and, as usual, please test it, and if you find something strange, don't hesitate to file a new issue (component php, subcomponent Editor). Thanks.


  • Is it useful to unit test methods where the only logic is guards?

    - by Vaccano
    Say I have a method like this:

        public void OrderNewWidget(Widget widget)
        {
            if ((widget.PartNumber > 0) && (widget.PartAvailable))
            {
                WidgetOrderingService.OrderNewWidgetAsync(widget.PartNumber);
            }
        }

    I have several such methods in my code (the front half of an async web service call). I am debating whether it is useful to get them covered with unit tests. Yes, there is logic here, but it is only guard logic. (Meaning I make sure I have the stuff I need before I allow the web service call to happen.)

    Part of me says "sure you can unit test them, but it is not worth the time" (I am on a project that is already behind schedule). But the other side of me says: if you don't unit test them, and someone changes the guards, then there could be problems. But the first part of me says back: if someone changes the guards, then you are just making more work for them (because now they have to change the guards and the unit tests for the guards). For example, if my service assumes responsibility for checking Widget availability, then I may not want that guard any more. If it is under unit test, I have to change two places now.

    I see pros and cons both ways. So I thought I would ask what others have done.
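    A hedged sketch, not from the original question: as written, the static service call can't be observed by a test, so covering the guard would first require injecting the service behind an interface. Assuming that refactoring, one such test using NUnit and Moq; all type names are hypothetical:

        using Moq;
        using NUnit.Framework;

        [TestFixture]
        public class OrderNewWidgetGuardTests
        {
            [Test]
            public void OrderNewWidget_DoesNotCallService_WhenPartNumberIsZero()
            {
                // Guard under test: part number must be positive and the part available.
                var service = new Mock<IWidgetOrderingService>();  // hypothetical interface
                var orderer = new WidgetOrderer(service.Object);   // hypothetical wrapper class

                orderer.OrderNewWidget(new Widget { PartNumber = 0, PartAvailable = true });

                // The guard should have blocked the web service call entirely.
                service.Verify(s => s.OrderNewWidgetAsync(It.IsAny<int>()), Times.Never());
            }
        }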


  • Is there a resource that explains the benefits of layered programming?

    - by P.Brian.Mackey
    Some developers I know favor what I would call a procedural programming style. I recognize that procedural programming has its uses, albeit not in the business application world of .NET programming. So let's say we have a WinForms application with a button-click event. The button-click handler does everything from the UI configuration to the database call and data manipulation. So you end up with a method that is hundreds of lines of code long. Beyond the fact that this code can't be considered testable for various reasons, this style of programming is fragile to change. I can talk about OO, anti-patterns, etc. The problem is that any distinct topic I can dream up requires a great deal of explanation to understand the potential benefits. Outside of finding a new job (lots of businesses program this way), how can I teach these kinds of developers how to write better code? Obviously we can't sit around a round table and discuss pros and cons all day due to time constraints and real work that has to be done. Training, and intense training at that, is the only thing I can think of to fix these problems. Not to say I write perfect code; I most certainly do not. I do believe there are certain best practices that should be followed as a rule, e.g. OO in the context of .NET. The most common excuse I hear is "we can't write code fast enough if we do it like that".
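    A hedged illustration, not from the original post, of the layering being argued for: the click handler only translates UI events, a service holds the business logic, and a repository owns data access. All names are hypothetical:

        // In the form: the handler is thin and does no business or data work.
        private void saveButton_Click(object sender, EventArgs e)
        {
            var order = new Order(customerTextBox.Text, (int)quantityUpDown.Value);
            _orderService.PlaceOrder(order);   // testable without any UI
            statusLabel.Text = "Order placed.";
        }

        // In the business layer: logic lives here, behind constructor injection.
        public class OrderService
        {
            private readonly IOrderRepository _repository;

            public OrderService(IOrderRepository repository)
            {
                _repository = repository;
            }

            public void PlaceOrder(Order order)
            {
                if (order.Quantity <= 0)
                    throw new ArgumentException("Quantity must be positive.", "order");
                _repository.Save(order);       // persistence stays behind an interface
            }
        }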


  • Dealing with 2D pixel shaders and SpriteBatches in XNA 4.0 component-object game engine?

    - by DaveStance
    I've got a bit of experience with shaders in general, having implemented a couple of very simple 3D fragment and vertex shaders in OpenGL/WebGL in the past. Currently, I'm working on a 2D game engine in XNA 4.0, and I'm struggling with the process of integrating per-object and full-scene shaders into my current architecture. I'm using a component-entity design, wherein my "Entities" are merely collections of components that are acted upon by discrete system managers (SpatialProvider, SceneProvider, etc). In the context of this question, my draw call looks something like this:

        SceneProvider::Draw(GameTime) calls...
            ComponentManager::Draw(GameTime, SpriteBatch) which calls (on each drawable component)
                DrawnComponent::Draw(GameTime, SpriteBatch)

    The SpriteBatch is set up, with the default SpriteBatch shader, in the SceneProvider class just before it tells the ComponentManager to start rendering the scene. From my understanding, if a component needs to use a special shader to draw itself, it must do the following when its Draw(GameTime, SpriteBatch) method is invoked:

        public void Draw(GameTime gameTime, SpriteBatch spriteBatch)
        {
            spriteBatch.End();
            spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                null, null, null, EffectShader, ViewMatrix);

            // Draw things here that are shaded by the "EffectShader."

            spriteBatch.End();
            spriteBatch.Begin(/* same settings that were set by SceneProvider to
                                 ensure the rest of the scene is rendered normally */);
        }

    My question is, having been told that numerous calls to SpriteBatch.Begin() and SpriteBatch.End() within a single frame are terrible for performance, is there a better way to do this? Is there a way to instruct the currently running SpriteBatch to simply change the Effect shader it is using for this particular draw call and then switch it back before the function ends?
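    A hedged sketch, not from the original question: in XNA 4.0, when a batch is begun with SpriteSortMode.Immediate, draw calls are issued as they are made, so it is commonly suggested that an effect pass can be applied mid-batch without ending it. A sketch under that assumption; the sprite components here are hypothetical:

        // Assumes the batch was begun with SpriteSortMode.Immediate; in deferred
        // modes the state is bound once at End(), so this technique would not apply.
        spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);

        normalSprite.Draw(spriteBatch);                    // default SpriteBatch shader

        EffectShader.CurrentTechnique.Passes[0].Apply();   // swap in the custom pixel shader
        shadedSprite.Draw(spriteBatch);                    // drawn with EffectShader

        spriteBatch.End();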


  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enabling many companies to adhere to the compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley).

    From something as simple as relating Tasks to check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release - all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well... I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting.

    Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and business analyst to business, has their role to play to knit this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which Builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let's look at each of the important Artifacts and how they are associated with each other.

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the business asks for. Requirements can be related to many of the Artifacts in TFS, so let's look at the model:

    Figure: If the centre of the world was a Requirement

    We can track which Releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say which Requirements were completed in a given Release, but also to know which Requirements were ever assigned to a particular Release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this, it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any query in the system and add an "asof" clause at the end to query historical data in the operational store for TFS:

        select <fields> from WorkItems
        [where <condition>]
        [order by <fields>]
        [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
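    A hedged illustration, not from the original article, of what a concrete historical query might look like. The field references are standard WIQL system fields, but the project name, iteration path, and date are hypothetical:

        SELECT [System.Id], [System.Title], [System.State]
        FROM WorkItems
        WHERE [System.WorkItemType] = 'Requirement'
        AND [System.IterationPath] UNDER 'MyProject\Release 1'
        ORDER BY [System.Id]
        ASOF '1/31/2010'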
    Figure: Saving queries locally can be useful

    All of these audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: The Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under source control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in version control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to audit all the way down to the individual change level.

    Figure: On check-in you can resolve a Task, which automatically associates it

    Because of this we can view the history of any file within the system and see how many changes have been made and which Changesets they belong to.

    Figure: Changes are tracked at the file level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool, because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of auditing changes to the system.

    Figure: Annotate shows the lines

    The annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this, you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created, with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build is created. It can be created as soon as the Build has been signed off for release. However, it is still possible that someone changed the Label between that time and its creation. Another, better method can be to explicitly link the Build output to the Build.

    Builds – Let's tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of the Changesets. Those Changesets are associated with the Build, and the "Integrated In" field of those Work Items is changed.

    Figure: Find all of the Work Items to associate with

    The "Integrated In" field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a Build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested, or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to test that Requirement. In TFS there is a Work Item Type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate acceptance criteria in the form of a Test Case. If you agree with the business a set of goals that must be met for a Requirement to be accepted by them, it becomes difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many acceptance criteria for a single Requirement

    It is crucial for this to work that someone from the business signs off on the Test Case moving from the "Design" to "Ready" states.

    The second is the ability to associate an MSTest test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test, and its test results, for a unit test designed to prove the existence of, and then guard against the recurrence of, a Bug.

    Figure: Associating a Test Case with an automated test

    Although it is possible, it may not make sense to track the execution of every unit test in your system; there are many integration and regression tests that may be automated that it would make sense to track in this way.

    Bug – Let's not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and do not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of compliance requirements while still enabling them to be Agile. Most audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don't even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member:

      • Business Analysts manage Requirements and Change Requests
      • Programmers manage Tasks and check in against Change Requests and Bugs
      • Testers manage Bugs and Test Cases
      • Build Masters manage Builds

    Although there is some crossover, there are still roles or "hats" that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?


  • libgdx ActorGestureListener.pan() parameters not moving actor in smooth line

    - by Roar Skullestad
    I override the pan method in ActorGestureListener to implement dragging actors in libgdx (scene2d). When I move individual pieces on a board, they move smoothly, but when I move the whole board, the x and y coordinates sent to pan are "jumping", and increasingly so the longer the board is dragged.

    Here is an example of the deltaY values sent to pan when dragging smoothly downwards:

        1.1156368
        -0.13125038
        -1.0500145
        0.98439217
        -1.0500202
        0.91877174
        -0.984396
        0.9187679
        -0.98439026
        0.9187641
        -0.13125038

    This is how I move the camera:

        public void pan(InputEvent event, float x, float y, float deltaX, float deltaY) {
            cam.translate(-deltaX, -deltaY);

    I have been using both the delta values sent to pan and the real position values, with similar results. And since it is the coordinates that are wrong, it doesn't matter whether I move the board itself or the camera.

    What could be the cause of this, and what is the solution?

    When I translate the camera by only half the delta values, it moves smoothly, but only at half the speed of the mouse pointer:

        cam.translate(-deltaX / 2, -deltaY / 2);

    It seems like moving the camera or board affects the mouse input coordinates. How can I drag at "mouse speed" and still get smooth movements?

    (This question was also posted on Stack Overflow: http://stackoverflow.com/questions/20693020/libgdx-actorgesturelistener-pan-parameters-not-moving-actor-in-smooth-line)
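    A hedged sketch, not a confirmed fix: one plausible explanation is a feedback loop - the pan deltas are measured in stage coordinates, and translating the camera shifts the stage under the pointer, so the next event's delta partly reflects the camera's own motion. Driving the camera from raw screen-space deltas, which are unaffected by camera movement, sidesteps that. Assumes an OrthographicCamera named cam, as in the post:

        @Override
        public void pan(InputEvent event, float x, float y, float deltaX, float deltaY) {
            // Gdx.input deltas are in screen pixels and do not move with the camera.
            cam.translate(-Gdx.input.getDeltaX() * cam.zoom,
                           Gdx.input.getDeltaY() * cam.zoom); // screen y points down, world y up
            cam.update();
        }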


  • Managing flash animations for a game

    - by LoveMeSomeCode
    OK, I've been writing C# for a while, but I'm new to ActionScript, so this is a question about best practices. We're developing a simple match game, where the user selects tiles and tries to match various numbers - sort of like Memory - and when the match is made we want a series of animations to take place, and when they're done, remove the tile and add a new one. So basically it's:

      1. User clicks the MC
      2. Animation 1 on the MC starts
      3. Animation 1 ends
      4. Remove the MC from the stage
      5. Add a new MC
      6. Start the animation on the new MC

    The problem I run into is that I don't want to make the same timeline motion tween on each and every tile when the animation is all the same; it's just the picture in the tile that's different. The other method I've come up with is to just apply the tweens in code on the main stage. Then I attach an event handler for MOTION_FINISH, and in that handler I trigger the next animation and listen for that to finish, etc. This works too, but not only do I have to do all the tweening in code, I have a separate event handler for each stage of the animation. So is there a more structured way of chaining these animations together?


  • How does a pure functional programming language manage without assignment statements?

    - by Gnijuohz
    When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel that way. As Scheme is the first functional programming language I have ever known anything about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments.

    Let's use the example the book offers, the bank account example. If there is no assignment statement, how can this be done? How do you change the balance variable? I ask because I know there are some so-called pure functional languages out there, and since they are Turing complete, this must be possible too. I have learned C, Java and Python, and I use assignments a lot in every program I write, so it's really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those pure functional programming languages and what profound impact (if any) this has on them.

    The example mentioned above is here:

        (define (make-withdraw balance)
          (lambda (amount)
            (if (>= balance amount)
                (begin (set! balance (- balance amount))
                       balance)
                "Insufficient funds")))

    This changes the balance via set!. To me it looks a lot like a class method changing the class member balance.

    As I said, I am not familiar with functional programming languages, so if I have said something wrong about them, feel free to point it out.
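    A hedged illustration, not part of the original question: in a pure style, state is threaded through arguments and return values instead of being mutated in place. A minimal Scheme sketch of the same account:

        ;; withdraw is a pure function: it takes the current balance and
        ;; returns a new balance, leaving the old value untouched.
        (define (withdraw balance amount)
          (if (>= balance amount)
              (- balance amount)
              (error "Insufficient funds")))

        ;; The account's "history" is a chain of values rather than one mutated cell.
        (define b0 100)
        (define b1 (withdraw b0 30))  ; 70
        (define b2 (withdraw b1 50))  ; 20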


  • checking for collision detection

    - by bill
    I am trying to create a game where you have a player and you can move right, left, and jump - kind of like Mario, but it's not a side-scroller. I also want to use a 2D array to make a tile map. My big problem is that I don't understand how to check for collisions. I spent about two weeks thinking about this and I came up with two solutions, but they both have problems. Let's say my map is:

        0 = sky
        1 = player
        2 = ground

        00000
        10002
        22022

    Solution 1: move the '1' (player) and update the map. Let's say the player wants to move right; then x += grid[x+1][y]. This makes the collision check easy, because you can just write:

        if (grid[x][y+1] == 2) {
            // player is standing on top of ground
        }

    The problem with this is that when you hit the right key, the player moves (x * TileWidth) to the right, and as you can see the animation won't look smooth.

    Solution 2: move the player and don't update the map: player_x += 2. This makes the animation smoother, because I am just moving 2 pixels. Problem 1: I can't update the map, because the player will sometimes be in the middle of a tile (of the 2D array). But that's OK, since it's not a side-scroller, so updating the map is not a big deal. Problem 2: the only way to check for collisions is to use Java's intersection method, but then the player has to be at least 1 or 2 pixels into the ground in order to detect a collision, and as you can see that won't look good either.

    Please note this is my first collision game in Java, so please try to explain a lot, otherwise I won't understand it.
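    A hedged sketch, not from the original question: one common way to keep solution 2's smooth pixel movement while still using the grid is to convert the pixel position the player is about to occupy into tile indices and test the map before moving, so no intersection test or overlap is needed. The constant and field names here are hypothetical:

        // Returns true if the tile containing the given pixel is solid ground.
        private boolean blocked(int pixelX, int pixelY) {
            int tileX = pixelX / TILE_WIDTH;
            int tileY = pixelY / TILE_HEIGHT;
            return grid[tileY][tileX] == 2;                // 2 = ground
        }

        public void moveRight() {
            int nextX = playerX + 2;                       // move 2 pixels for smooth animation
            // Test the right edge of the player's box at its top and bottom corners.
            if (!blocked(nextX + PLAYER_WIDTH, playerY)
                    && !blocked(nextX + PLAYER_WIDTH, playerY + PLAYER_HEIGHT - 1)) {
                playerX = nextX;                           // only commit the move if clear
            }
        }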


  • Maintaining State in Mud Engine

    - by Johnathon Sullinger
    I am currently working on a MUD engine and have started implementing my state engine. One of the things that has me troubled is maintaining different states at once. For instance, let's say the user has started a tutorial, which requires specific input. If the user types "help", I want to switch into a help state so they can get the help they need, then return them to the original state once they exit the help. My state system uses a state manager to manage the state per user:

        public class StateManager
        {
            /// <summary>
            /// Gets the current state.
            /// </summary>
            public IState CurrentState { get; private set; }

            /// <summary>
            /// Gets the states available for use.
            /// </summary>
            public List<IState> States { get; private set; }

            /// <summary>
            /// Gets the commands available.
            /// </summary>
            public List<ICommand> Commands { get; private set; }

            /// <summary>
            /// Gets the mob that this manager controls the state of.
            /// </summary>
            public IMob Mob { get; private set; }

            public void Initialize(IMob mob, IState initialState = null)
            {
                this.Mob = mob;

                if (initialState != null)
                {
                    this.SwitchState(initialState);
                }
            }

            /// <summary>
            /// Performs the command.
            /// </summary>
            /// <param name="message">The message.</param>
            public void PerformCommand(IMessage message)
            {
                if (this.CurrentState != null)
                {
                    ICommand command = this.CurrentState.GetCommand(message);

                    if (command is NoOpCommand)
                    {
                        // No-operation commands indicate that the current state is not finished yet.
                        this.CurrentState.Render(this.Mob);
                    }
                    else if (command != null)
                    {
                        command.Execute(this.Mob);
                    }
                    else
                    {
                        new InvalidCommand().Execute(this.Mob);
                    }
                }
            }

            /// <summary>
            /// Switches the state.
            /// </summary>
            /// <param name="state">The state.</param>
            public void SwitchState(IState state)
            {
                if (this.CurrentState != null)
                {
                    this.CurrentState.Cleanup();
                }

                this.CurrentState = state;

                if (state != null)
                {
                    this.CurrentState.Render(this.Mob);
                }
            }
        }

    Each of the different states the user can be in is a type implementing IState:

        public interface IState
        {
            /// <summary>
            /// Renders the current state to the player's terminal.
            /// </summary>
            /// <param name="mob">The mob to render to.</param>
            void Render(IMob mob);

            /// <summary>
            /// Gets the command that the player entered and preps it for execution.
            /// </summary>
            ICommand GetCommand(IMessage command);

            /// <summary>
            /// Cleans up this instance during a state change.
            /// </summary>
            void Cleanup();
        }

    Example state:

        public class ConnectState : IState
        {
            /// <summary>
            /// The connected player.
            /// </summary>
            private IMob connectedPlayer;

            public void Render(IMob mob)
            {
                if (!(mob is IPlayer))
                {
                    throw new NullReferenceException("ConnectState can only be used with a player object implementing IPlayer");
                }

                // Store a reference for the GetCommand() method to use.
                this.connectedPlayer = mob as IPlayer;

                var server = mob.Game as IServer;
                var game = mob.Game as IGame;

                // It is not guaranteed that mob.Game will implement IServer.
                // We are only guaranteed that it will implement IGame.
                if (server == null)
                {
                    throw new NullReferenceException("ConnectState can only be set to a player object that is part of a server.");
                }

                // Output the game information.
                mob.Send(new InformationalMessage(game.Name));
                mob.Send(new InformationalMessage(game.Description));
                mob.Send(new InformationalMessage(string.Empty)); // blank line

                // Output the server MOTD information.
                mob.Send(new InformationalMessage(string.Join("\n", server.MessageOfTheDay)));
                mob.Send(new InformationalMessage(string.Empty)); // blank line

                mob.StateManager.SwitchState(new LoginState());
            }

            /// <summary>
            /// Gets the command.
            /// </summary>
            /// <param name="message">The message.</param>
            /// <returns>Returns no operation required.</returns>
            public Commands.ICommand GetCommand(IMessage message)
            {
                return new NoOpCommand();
            }

            /// <summary>
            /// Cleans up this instance during a state change.
            /// </summary>
            public void Cleanup()
            {
                // We have nothing to clean up.
            }
        }

    With the way I have my FSM set up at the moment, the user can only ever have one state at a time. I read a few different posts on here about state management, but nothing regarding keeping a stack history. I thought about using a Stack collection and just pushing new states onto the stack, then popping them off as the user moves out of one. It seems like it would work, but I'm not sure it is the best approach to take, so I'm looking for recommendations on this. I'm also currently swapping state from within the individual states themselves, which I'm on the fence about. The flow is: the user enters a command, and the StateManager passes the command to the current state and lets it determine whether it needs it (like a password entered after a user name). If the state doesn't need any further commands, it returns null; if it does need to continue doing work, it returns a no-op to let the state manager know that the state still requires further input from the user. If null is returned, the state manager then goes and finds the appropriate state for the command the user entered.

    Example state requiring additional input from the user:

        public class LoginState : IState
        {
            /// <summary>
            /// The connected player.
            /// </summary>
            private IPlayer connectedPlayer;

            private enum CurrentState
            {
                FetchUserName,
                FetchPassword,
                InvalidUser,
            }

            private CurrentState currentState;

            /// <summary>
            /// Renders the current state to the player's terminal.
            /// </summary>
            /// <param name="mob">The mob to render to.</param>
            /// <exception cref="System.NullReferenceException">
            /// LoginState can only be used with a player object implementing IPlayer,
            /// or LoginState can only be set to a player object that is part of a server.
            /// </exception>
            public void Render(IMob mob)
            {
                if (!(mob is IPlayer))
                {
                    throw new NullReferenceException("LoginState can only be used with a player object implementing IPlayer");
                }

                // Store a reference for the GetCommand() method to use.
                this.connectedPlayer = mob as IPlayer;

                var server = mob.Game as IServer;

                // Register to receive new input from the user.
                mob.ReceivedMessage += connectedPlayer_ReceivedMessage;

                if (server == null)
                {
                    throw new NullReferenceException("LoginState can only be set to a player object that is part of a server.");
                }

                this.currentState = CurrentState.FetchUserName;

                switch (this.currentState)
                {
                    case CurrentState.FetchUserName:
                        mob.Send(new InputMessage("Please enter your user name"));
                        break;
                    case CurrentState.FetchPassword:
                        mob.Send(new InputMessage("Please enter your password"));
                        break;
                    case CurrentState.InvalidUser:
                        mob.Send(new InformationalMessage("Invalid username/password specified."));
                        this.currentState = CurrentState.FetchUserName;
                        mob.Send(new InputMessage("Please enter your user name"));
                        break;
                }
            }

            /// <summary>
            /// Receives the player's input.
            /// </summary>
            /// <param name="sender">The sender.</param>
            /// <param name="e">The message received.</param>
            void connectedPlayer_ReceivedMessage(object sender, IMessage e)
            {
                // Be a good memory citizen and unregister after receiving a message.
                // Not doing this results in duplicate event registrations and memory leaks.
                this.connectedPlayer.ReceivedMessage -= connectedPlayer_ReceivedMessage;

                ICommand command = this.GetCommand(e);
            }

            /// <summary>
            /// Gets the command that the player entered and preps it for execution.
            /// </summary>
            /// <param name="command">The message received.</param>
            /// <returns>Returns the ICommand specified.</returns>
            public Commands.ICommand GetCommand(IMessage command)
            {
                if (this.currentState == CurrentState.FetchUserName)
                {
                    this.connectedPlayer.Name = command.Message;
                    this.currentState = CurrentState.FetchPassword;
                }
                else if (this.currentState == CurrentState.FetchPassword)
                {
                    // Find the user.
                }

                return new NoOpCommand();
            }

            /// <summary>
            /// Cleans up this instance during a state change.
            /// </summary>
            public void Cleanup()
            {
                // If we have a player instance, clean up the registered event.
                if (this.connectedPlayer != null)
                {
                    this.connectedPlayer.ReceivedMessage -= this.connectedPlayer_ReceivedMessage;
                }
            }
        }

    Maybe my entire FSM isn't wired up in the best way, but I would appreciate input on the best way to maintain a stack of states in a MUD game engine, and on whether my states should be allowed to receive input from the user directly to check what command was entered before letting the state manager switch states. Thanks in advance.
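    For the stack-history question, a minimal sketch of how the push/pop idea could sit on top of the contracts above, assuming the IState and IMob interfaces from the question; StackedStateManager, PushState and PopState are illustrative names, not part of the engine. Typing "help" would push a help state over the tutorial state, and leaving help would pop back to exactly where the user was.

        using System.Collections.Generic;

        // Sketch of a stack-based manager: the current state is whatever
        // sits on top of the stack, and popping resumes the state below it.
        public class StackedStateManager
        {
            private readonly Stack<IState> states = new Stack<IState>();

            public IMob Mob { get; private set; }

            public IState CurrentState
            {
                get { return this.states.Count > 0 ? this.states.Peek() : null; }
            }

            public void Initialize(IMob mob, IState initialState = null)
            {
                this.Mob = mob;

                if (initialState != null)
                {
                    this.PushState(initialState);
                }
            }

            // Layer a new state (e.g. a help state) on top of the running one.
            public void PushState(IState state)
            {
                this.states.Push(state);
                state.Render(this.Mob);
            }

            // Leave the topmost state and resume the one underneath it.
            public void PopState()
            {
                if (this.states.Count == 0)
                {
                    return;
                }

                IState finished = this.states.Pop();
                finished.Cleanup();

                if (this.states.Count > 0)
                {
                    // Re-render the resumed state so the player sees its prompt again.
                    this.states.Peek().Render(this.Mob);
                }
            }
        }

    Routing every push and pop through the manager, rather than letting states swap each other directly, also keeps the history in one place, which makes the "return to the original state" behaviour easy to reason about.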

    Read the article

  • Mobility Card in Bangalore for Transportation

    - by Rekha
    Transport Minister R Ashoka announced that Bangalore Metropolitan Transport Corporation (BMTC) services are going to be among the best in the world soon. BMTC has planned to launch a Mobility Card with which commuters can ride BMTC, KSRTC and future Metro train services without buying tickets for each ride. The conductor will have a simple device in which commuters can swipe their cards to automatically deduct the ticket tariff for bus or metro rides. The Mobility Card can be obtained by paying a fixed amount. This method saves time and spares commuters from having to pay exact change for tickets. Ashoka says the Volvo Vayu Vajra services have internet connectivity and voice announcements of every bus stop name, and this has been appreciated by commuters. With Wi-Fi connections coming soon to Shatabdi trains and Mobility Cards on the way, India is moving toward matching US-standard services. Government officials are keen on implementing these services before the end of this year. Hope all these services are well used and maintained. This article, titled "Mobility Card in Bangalore for Transportation", was originally published at Tech Dreams.

    Read the article

  • Clients with multiple proxy and multithreading callbacks

    - by enzom83
    I created a sessionful web service using WCF; in particular, I used the NetTcpBinding binding. In addition to methods that initiate and terminate a session, other methods let the client send one or more tasks to be performed (the results are returned via callback, so the service is duplex), and they also let it query the status of the service. Assuming the same service is activated on multiple endpoints, and assuming the client knows these endpoints (for example, it could maintain a List of endpoints), the client should connect to one or more replicas of the same service. The client periodically refreshes the status of each service, so when it needs to perform a new task (submitted by the user via the UI), it selects the service that is currently least loaded and sends the task to it. Periodically, the client also runs a maintenance procedure to disconnect from one or more overloaded services and to connect to new ones. I created a client proxy using the svcutil tool. I want each proxy to be usable simultaneously by different threads; for example, in addition to the thread that submits tasks through a proxy, two other threads act periodically: one periodically sends a request to the service to obtain its updated state, and one periodically selects a proxy to close and instantiates a new proxy to replace it. To achieve these objectives, is it sufficient to create an array of proxies and manage their opening and closing in separate threads? I think I read that proxy method calls are thread safe, so I would not need to take a lock before requesting updates from the service. However, when the maintenance procedure (which runs on its own thread) decides to close a proxy, should I take a lock? Finally, each proxy is also associated with an object that implements the callback interface for the service: are the callbacks (invoked on the client) executed on different client threads? I would like to wrap the proxy management in one or more classes so that it can easily be managed within a WPF application.
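    On the locking question, a hedged sketch rather than a definitive pattern: keep each proxy in a slot guarded by a short-lived lock, so worker threads can grab the current instance while the maintenance thread swaps in a replacement and retires the old one. ProxySlot and its Acquire/Replace members are assumed names, not svcutil output; the Close/Abort handling follows the usual WCF client pattern, and Close waits (up to the close timeout) for in-flight calls on the old proxy to drain.

        using System;
        using System.ServiceModel;

        // One slot per endpoint; the maintenance thread can swap the proxy
        // while worker threads keep using whatever instance they acquired.
        public class ProxySlot<TProxy> where TProxy : class, ICommunicationObject
        {
            private readonly object gate = new object();
            private TProxy current;

            public ProxySlot(TProxy initial)
            {
                this.current = initial;
            }

            // Worker threads take the lock only long enough to read the
            // reference; the proxy itself supports concurrent calls.
            public TProxy Acquire()
            {
                lock (this.gate)
                {
                    return this.current;
                }
            }

            // Maintenance thread swaps in a fresh proxy and retires the old one.
            public void Replace(TProxy fresh)
            {
                TProxy old;
                lock (this.gate)
                {
                    old = this.current;
                    this.current = fresh;
                }

                try
                {
                    old.Close();
                }
                catch (CommunicationException)
                {
                    old.Abort();
                }
                catch (TimeoutException)
                {
                    old.Abort();
                }
            }
        }

    A worker thread would call slot.Acquire() and invoke operations on whatever instance it gets back, where TProxy is the svcutil-generated duplex proxy type. On the callback side: by default, a duplex callback object created on a WPF UI thread has its callbacks posted back to that thread's SynchronizationContext, so they do not arrive on separate threads; decorating the callback class with [CallbackBehavior(UseSynchronizationContext = false, ConcurrencyMode = ConcurrencyMode.Multiple)] lets WCF dispatch callbacks concurrently on worker threads instead.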

    Read the article
