Search Results

Search found 7513 results on 301 pages for 'actual'.

  • Should the syntax for disabling code differ from that of normal comments?

    - by deltreme
    For several reasons during development I sometimes comment out code. As I am chaotic and sometimes in a hurry, some of these make it to source control. I also use comments to clarify blocks of code. For instance:

        MyClass MyFunction()
        {
            (...)

            // return null; // TODO: dummy for now
            return obj;
        }

    Even though it "works" and a lot of people do it this way, it annoys me that you cannot automatically distinguish commented-out code from "real" comments that clarify code: it adds noise when you try to read the code, and you cannot search for commented-out code, for instance from an on-commit hook in source control.

    Some languages support multiple single-line comment styles. In PHP, for example, you can use either // or # for a single-line comment, and developers can agree on using one of these for commented-out code:

        # return null; // TODO: dummy for now
        return obj;

    Other languages - like C#, which I am using today - have one style for single-line comments (right? I wish I was wrong). I have also seen examples of "commenting out" code using compiler directives, which is great for large blocks of code, but a bit of an overkill for single lines, as two extra lines are required for the directive:

        #if compile_commented_out
            return null; // TODO: dummy for now
        #endif
        return obj;

    So, as commenting out code happens in every(?) language, shouldn't "disabled code" get its own syntax in language specifications? Are the pros (separation of comments / disabled code, editors / source control acting on them) good enough, and are the cons ("shouldn't do commenting-out anyway", not a functional part of a language, potential IDE lag (thanks Thomas)) worth sacrificing?

    Edit: I realise the example I used is silly; the dummy code could easily be removed as it is replaced by the actual code.
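
    The hook idea is easy to prototype once a marker convention exists. Here is a minimal sketch in Python, assuming a hypothetical team convention of prefixing disabled code with //~ ; the marker and the file extensions are illustrative, not part of any standard:

        #!/usr/bin/env python
        # Pre-commit hook sketch: reject commits that contain "disabled code",
        # assuming the hypothetical convention of marking it with //~
        import subprocess
        import sys

        MARKER = "//~"  # illustrative marker agreed on for commented-out code

        # files staged for this commit
        staged = subprocess.check_output(
            ["git", "diff", "--cached", "--name-only"], text=True
        ).splitlines()

        violations = []
        for path in staged:
            if not path.endswith((".cs", ".cpp", ".h", ".php")):
                continue
            try:
                with open(path, encoding="utf-8") as source:
                    for lineno, line in enumerate(source, 1):
                        if line.lstrip().startswith(MARKER):
                            violations.append(f"{path}:{lineno}: {line.strip()}")
            except OSError:
                continue  # file staged for deletion

        if violations:
            print("Commit rejected; disabled code found:")
            print("\n".join(violations))
            sys.exit(1)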

  • From a DDD perspective is a report generating service a domain service or an infrastructure service?

    - by Songo
    Let's assume we have the following service, whose responsibility is to generate Excel reports:

        class ExcelReportService {
            public String generateReport(String fileFormatFilePath, ResultSet data) {
                ReportFormat reportFormat = new ReportFormat(fileFormatFilePath);
                ExcelDataFormatterService excelDataFormatterService = new ExcelDataFormatterService();
                FormattedData formattedData = excelDataFormatterService.format(data);
                ExcelFileService excelFileService = new ExcelFileService();
                String reportPath = excelFileService.generateReport(reportFormat, formattedData);
                return reportPath;
            }
        }

    This is pseudo code for the service I want to design, where:

    - fileFormatFilePath: path to a configuration file where I'll keep the format of my Excel file (headers, column widths, number of columns, etc.)
    - data: the actual records returned from the database. This data can't be used directly because I might need to make further calculations on the data before inserting it into the Excel file.
    - ReportFormat: value object to hold the report format; has methods like getHeaders(), getColumnWidth(), etc.
    - ExcelDataFormatterService: a service to hold any logic that needs to be applied to the data returned from the database before inserting it into the file.
    - FormattedData: value object that represents the formatted data to be inserted.
    - ExcelFileService: a wrapper on top of the 3rd-party library that generates the Excel file.

    Now, how do you determine whether a service is an infrastructure or a domain service? I have three services here: ExcelReportService, ExcelDataFormatterService and ExcelFileService.

  • Logistics of code reuse (OOP)

    - by Ominus
    One of the driving points behind OOP is code reuse. I am curious about the actual logistics of this, and how others, whether on a team or solo, handle it.

    For example, let's say you have 5 projects you have worked on, and between them you have a ton of classes that you think would be useful in other projects. How do you store them? Are they just in the normal project repository, or do you break out the relevant classes and keep them (as copies) in another dedicated repository that only houses code intended for reuse?

    How do you go about finding, or even knowing, that there is a good piece of code out there that you should reuse? It's easier if you're solo, because you remember that you have coded something similar, but even then it becomes kind of a stretch. If you are storing these pieces of code somewhere, do you also have them indexed and searchable by tag or something similar? I fear that it just boils down to tribal knowledge: you just know that for situation A you need solution B, and that there is a good piece of code that can already help there.

    A bit verbose, but I hope you get what I am aiming at. If you can think of a better way to make the question clearer, please have at it :) TIA!

  • The best way to learn how to extend Orchard

    - by Bertrand Le Roy
    We do have tutorials on the Orchard site, but we can't cover all topics, and recently I've found myself more and more responding to forum questions by pointing people to an existing module that was solving a problem similar to the one the question was about. I really like this way of learning by example and from the expertise of others. This is one of the reasons why we decided that modules would by default come in source code form that we compile dynamically: it makes them easy to understand and easier to modify for your own purposes. Hackability FTW!

    But how do you crack open a module and look at what's inside? You can do it in two different ways.

    First, you can just install the module from the gallery, directly from your Orchard instance's admin panel. Once you've done that, you can look into your Modules directory under the web site. There is now a subfolder with the name of the new module that contains a csproj that you can open in Visual Studio or add to your Orchard solution.

    Second, you can simply download the package (it's NuGet) and rename it to a .zip extension. NuGet being based on Zip, this will open just fine in Windows Explorer. What you want to dig into is the Content/Modules/[NameOfTheModule] folder, which is where the actual code is.

    Thanks to Jason Gaylord for the idea for this post.
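
    Because the package is an ordinary Zip archive, the second approach can even be scripted; a minimal sketch in Python (the package file name below is hypothetical):

        # Sketch: list and extract an Orchard module's source straight from
        # its NuGet package, which is just a Zip archive under the hood.
        import zipfile

        with zipfile.ZipFile("Orchard.Module.SomeModule.1.0.nupkg") as pkg:
            # the module's source lives under Content/Modules/<ModuleName>/
            for name in pkg.namelist():
                if name.startswith("Content/Modules/"):
                    print(name)
            pkg.extractall("SomeModule-source")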

  • Best Method of function parameter validation

    - by Aglystas
    I've been dabbling with the idea of creating my own CMS for the experience, and because it would be fun to run my website off my own code base. One of the decisions I keep coming back to is how best to validate incoming parameters for functions. This is mostly in reference to simple data types, since object validation would be quite a bit more complex.

    At first I debated creating a naming convention that would contain information about what the parameters should be (int, string, bool, etc.); then I also figured I could create options to validate against. But then, in every function, I would still need to run some sort of parameter validation that parses the parameter name to determine what the value can be, and then validates against it. Granted, this could be handled by passing the list of parameters to a function, but that still needs to happen, and one of my goals is to remove the parameter validation from the function itself, so that the function contains only the code that accomplishes the intended task, without the additional code for validation.

    Is there any good way of handling this? Or is it so low-level that parameter validation is typically just done at the start of the function anyway, so I should stick with doing that?
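
    One way to pull validation out of the function body is to declare the rules next to the function and let a generic wrapper enforce them. A minimal sketch in Python (the validators and the example function are illustrative, not from the question):

        # Sketch: declarative parameter validation kept outside the function
        # body. Rules map parameter names to predicates checked before the call.
        import functools
        import inspect

        def validate(**rules):
            def decorator(func):
                signature = inspect.signature(func)

                @functools.wraps(func)
                def wrapper(*args, **kwargs):
                    bound = signature.bind(*args, **kwargs)
                    bound.apply_defaults()
                    for name, check in rules.items():
                        value = bound.arguments[name]
                        if not check(value):
                            raise ValueError(f"invalid value for {name!r}: {value!r}")
                    return func(*args, **kwargs)
                return wrapper
            return decorator

        @validate(page_id=lambda v: isinstance(v, int) and v > 0,
                  title=lambda v: isinstance(v, str) and v.strip() != "")
        def load_page(page_id, title="untitled"):
            # only the code that accomplishes the intended task remains here
            return f"page {page_id}: {title}"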

  • MSDN Live 2010 – Delivered: 24 sessions (4 x 6) on Visual Studio and Team Foundation Server

    - by terje
    We (Mikael Nitell and me) got a whole track on the Norwegian MSDN Live tour this year. We did these as a pair, and covered 4 cities over 4 days, 6 sessions per day, taking 8 hours to get through it. The Icelandic volcano made the travels a bit rough, but we managed 6 flights out of 8. The first one had to go by van instead, a 7-8 hour drive each way together with other MSDN Live presenters - a memorable tour!

    Oslo was the absolute high point. We had to change to a bigger hall. People were crowding, and even the big hall was packed! The presentations were mostly based on demos, but we had a few slides as well. They have been uploaded to my SkyDrive. Info to aliens - some of the text may be Norwegian.

    The sessions were as follows:

    1. Overview of news in Visual Studio and Team Foundation Server 2010
    2. Ensuring Quality with VS/TFS 2010
    3. Releasing products with VS/TFS 2010
    4. No More No Repro with VS/TFS 2010
    5. Performance Testing and Parallel Programming with VS/TFS 2010
    6. Migrating to VS/TFS 2010
    7. Tips, tricks, news and some best practices with VS/TFS 2010

    In the coming days, I will post up examples from the demos too, with explanations of how they are intended to work. These entries will also contain stuff we had to remove from the actual presentations due to the time constraints. We managed to create recordings of two of the sessions, which will be uploaded to Channel 9 by Microsoft, afaik. I will update this blog with information about exact locations when that is done.

    Also note we're (read: Osiris Data AS) running both Upgrade and Deep Dive courses on VS/TFS 2010 now in May. Please look here for more info. If you want to be informed, follow me on Twitter. All blog entries will be announced on Twitter.

  • Procedural content (settlement) generation

    - by instancedName
    I have, let's say, something like a homework assignment to do. Roughly put, I need to write an algorithm (pseudo code is not necessary, an in-depth description is enough) for a procedure that would generate settlements, their environment and the people to populate them, as part of some larger world-generation procedure. The genre of the game is not specified; it could be any genre (RPG, strategy, colony simulation, etc.) where interacting with a large and extensive world is central to the game.

    The procedure should be called once per settlement. At the time of calling, the world-generation procedure makes geography, culture and history input available. The output should be a map of the village and its immediate area, plus various potential additional information like myths, history, demographic facts, etc. A bonus would be quests and similar stuff, but that's not really my focus at the moment. I will leave the quality of the output for later, when I actually dig a little deeper into this topic. I am free to change parameters as long as I have a strong explanation for doing so. The setting of the game is undetermined, so I am free to use anything I like.

    Ok, so my actual question is: can anyone who has some experience in this field of game design recommend me some good literature, or point me in the direction where I should look/read/study? I'm a somewhat experienced game programmer, but I've never been into game design till now, so any help will be great. I want to do this assignment as well as I can. As for the deadline, it's not strictly set, but let's say I don't want it to take longer than a few weeks, one month in the worst case.

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high-speed train.

    This thinking seems to be invoking possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could this actually happen in the context of a high-speed train? The concern about floating-point errors seems not well founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language.

    The actual problem in the text states: "During the inspection of the code for the emergency braking system of a new high speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable?"

    - The code contains three recursive functions (well, that one is obvious).
    - The computation of acceleration uses floating point arithmetic. All other computations use integer arithmetic.
    - The code contains one linked list that uses dynamic memory allocation (the second obvious problem).
    - All inputs are checked to determine that they are within expected bounds before they are used.
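
    For what it's worth, the kind of error the textbook presumably has in mind is easy to demonstrate; a small Python illustration (Python floats are IEEE 754 doubles, the same as a C double):

        # Illustration: 0.1 has no exact binary representation, so repeated
        # accumulation drifts away from the mathematically exact result.
        total = 0.0
        for _ in range(10):
            total += 0.1
        print(total)         # 0.9999999999999999, not 1.0
        print(total == 1.0)  # False

        # Integer (fixed-point) arithmetic over the same data is exact:
        total_tenths = sum([1] * 10)  # accumulate in tenths of a unit
        print(total_tenths)           # 10, exactly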

  • How to cover the widest range of computers when publishing?

    - by DevilWithin
    When you plan a game, or even when you have already made a game and it's time to publish, you wonder how much of your audience is covered by the game's technology demands. I'm directing this essentially at casual games, as I constantly see people with old laptops who are unable to replace them: laptops with integrated cards whose OpenGL version doesn't even support textures larger than 1024x1024. These people may be avid gamers as well, and a reasonable share of the audience to consider giving the chance to play casual games, since they cannot play any blockbusters.

    A very "noticeable" example I've seen is Angry Birds. Its gameplay is merely casual (I think nobody disagrees here), and still it uses such high-resolution textures that at least OpenGL 2.0 or thereabouts is needed, which locks out a lot of people.

    So, the actual question is: what is a good tradeoff for this issue? Would it be better to just sacrifice the texture resolution for everyone, but support more hardware? Would it be better to keep the high quality and just slice the textures into smaller ones, sacrificing performance a little bit? What else? Any ideas on this topic are welcome for discussion.
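
    The slicing option is mechanical enough to automate in an asset pipeline; a minimal sketch using the Pillow imaging library in Python (the file names are illustrative):

        # Sketch: cut a large texture into 1024x1024 tiles so it still loads
        # on integrated GPUs that reject larger textures. Requires Pillow.
        from PIL import Image

        TILE = 1024
        atlas = Image.open("background_4096.png")  # illustrative file name
        width, height = atlas.size

        for y in range(0, height, TILE):
            for x in range(0, width, TILE):
                box = (x, y, min(x + TILE, width), min(y + TILE, height))
                atlas.crop(box).save(f"background_{x // TILE}_{y // TILE}.png")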

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges - loading data from data stores, generating XML, testing, integrating with web services and then deployment delivery - takes a lot of coding and effort. Then there is writing the documentation, models and schema, doing naming and design rule (NDR) checks, and packaging all this together (such as for NIEM IEPD use). What if there was a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor!

    Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug and play with your middleware stack, all with just a few lines of Java code (about 5, actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes.

    To see a demonstration of using Open-XDX, a MySQL data store and integration with an Oracle WebLogic server, please see this short video: http://youtube.com/user/TheCameditor

    There is also a Quick Guide available that provides more technical insights, along with a sample pack download of templates and SQL that you can try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new, innovative approach.

  • Moving objects colliding when using unaligned collision avoidance (steering)

    - by James Bedford
    I'm having trouble with unaligned collision avoidance for what I think is a rare case. I have set two objects to move towards each other, but with a slight offset, so one of the objects is moving slightly upwards and the other slightly downwards.

    In my unaligned collision avoidance steering algorithm, I find the points on the object's forward line and the other object's forward line where these two lines are closest. If these closest points are within a collision avoidance distance, and if the distance between them is smaller than the sum of the radii of the two objects' bounding spheres, then the objects should steer away in the appropriate direction.

    The problem is that for my case, the closest points on the lines are calculated to be really far away from the actual collision point, because the two forward lines move away from each other as the objects pass. Because of this, no steering takes place, and the two objects partially collide. Does anyone have any suggestions as to how I can correctly calculate the point of collision, perhaps by somehow taking the size of the two objects into account?
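
    One standard alternative is to work in relative coordinates and solve for the time at which the two bounding spheres are closest, which naturally accounts for the radii. A hedged sketch of that calculation in Python (names are illustrative; 2D shown, but the math is identical in 3D):

        # Sketch: time of closest approach of two moving spheres, computed in
        # relative coordinates, so it stays correct even when the forward
        # lines diverge as the objects pass each other.
        import math

        def time_of_closest_approach(p1, v1, p2, v2):
            """Return t >= 0 minimizing |(p1 + v1*t) - (p2 + v2*t)|."""
            dp = (p1[0] - p2[0], p1[1] - p2[1])  # relative position
            dv = (v1[0] - v2[0], v1[1] - v2[1])  # relative velocity
            dv2 = dv[0] ** 2 + dv[1] ** 2
            if dv2 == 0.0:
                return 0.0  # equal velocities: distance is constant
            t = -(dp[0] * dv[0] + dp[1] * dv[1]) / dv2
            return max(t, 0.0)  # closest point already passed -> use now

        def predicted_collision(p1, v1, r1, p2, v2, r2, horizon):
            """If the spheres overlap within `horizon`, return (t, point)."""
            t = time_of_closest_approach(p1, v1, p2, v2)
            if t > horizon:
                return None
            x1 = (p1[0] + v1[0] * t, p1[1] + v1[1] * t)
            x2 = (p2[0] + v2[0] * t, p2[1] + v2[1] * t)
            if math.hypot(x1[0] - x2[0], x1[1] - x2[1]) >= r1 + r2:
                return None
            midpoint = ((x1[0] + x2[0]) / 2, (x1[1] + x2[1]) / 2)
            return t, midpoint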

  • Any good reason to open files in text mode?

    - by Tinctorius
    (Almost-)POSIX-compliant operating systems and Windows are known to distinguish between 'binary mode' and 'text mode' file I/O. While the former mode doesn't transform any data between the actual file or stream and the application, the latter 'translates' the contents to some standard format in a platform-specific manner: line endings are transparently translated to '\n' in C, and some platforms (CP/M, DOS and Windows) cut off a file when a byte with value 0x1A is found.

    These transformations seem a little useless to me. People share files between computers with different operating systems. Text mode would cause some data to be handled differently across some platforms, so when this matters, one would probably use binary mode instead.

    As an example: while Windows uses the sequence CR LF to end a line in text mode, UNIX text mode will not treat CR as part of the line-ending sequence. Applications would have to filter that noise themselves. Older Mac versions use only CR in text mode as line endings, so neither UNIX nor Windows would understand its files. If this matters, a portable application would probably implement the parsing by itself instead of using text mode.

    Implementing newline interpretation in the parser might also remove some of the overhead of using text mode, as buffers would need to be rewritten (and possibly resized) before being returned to the application, and this may be less efficient than when it happens in the application instead.

    So, my question is: is there any good reason to still rely on the host OS to translate line endings and truncate files?
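
    The translation is easy to observe from a high-level language; a small illustration in Python, whose text mode performs the same newline translation discussed above (the file name is arbitrary):

        # Illustration: the same bytes read in binary mode vs. text mode.
        data = b"line one\r\nline two\r\n"
        with open("demo.txt", "wb") as f:  # binary write: bytes stored verbatim
            f.write(data)

        with open("demo.txt", "rb") as f:  # binary read: CR LF preserved
            print(f.read())                # b'line one\r\nline two\r\n'

        with open("demo.txt", "r") as f:   # text read: endings normalized to \n
            print(repr(f.read()))          # 'line one\nline two\n'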

  • Augmenting functionality of subclasses without code duplication in C++

    - by Rob W
    I have to add common functionality to some classes that share the same superclass, preferably without bloating the superclass. The simplified inheritance chains look like this:

        Element -> HTMLElement -> HTMLAnchorElement
        Element -> SVGElement  -> SVGAElement

    The default doSomething() method on Element is a no-op by default, but some subclasses need an actual implementation that requires extra overridden methods and instance members. I cannot put a full implementation of doSomething() in Element because 1) it is only relevant for some of the subclasses, 2) its implementation has a performance impact, and 3) it depends on a method that could be overridden by a class in the inheritance chain between the superclass and a subclass, e.g. SVGElement in my example.

    Especially because of the third point, I wanted to solve the problem using a template class, as follows (it is a kind of decorator for classes):

        struct Element {
            virtual void doSomething() {}
        };

        // T should be a subclass of Element
        template<class T>
        struct AugmentedElement : public T {
            // doSomething is expensive and uses T
            virtual void doSomething() override {}
            // Used by doSomething
            virtual bool shouldDoSomething() = 0;
        };

        class SVGElement : public Element { /* ... */ };

        class SVGAElement : public AugmentedElement<SVGElement> {
            // some non-trivial check
            bool shouldDoSomething() { /* ... */ return true; }
        };

        // Similarly for HTMLAElement and others

    I looked around (in the existing (huge) codebase and on the internet), but didn't find any similar code snippets, let alone an evaluation of the effectiveness and pitfalls of this approach. Is my design the right way to go, or is there a better way to add common functionality to some subclasses of a given superclass?

  • How to keep background requests in sequence

    - by Jason Lewis
    I'm faced with implementing interfaces for some rather archaic systems for handling online deposits to stored-value accounts (think campus card accounts for students). Here's my dilemma: step one of the process involves passing the user off to a third-party site for the credit card transaction, like old-school PayPal. Step two involves using a proprietary protocol to communicate with a legacy system for conducting the actual deposit. Step two requires that each transaction have a unique sequence number, and that the requests' seqnums arrive in order.

    Since we're logging each transaction in Postgres, my first thought was to take a number from a sequence in the DB, guaranteeing uniqueness. But since we're dealing with web requests that might come in near-simultaneously, and since latency of the return from the off-site payment processor is beyond our control, there's always the chance of a race condition in the order of requests passed back to the proprietary system; and if the seqnums are out of order, the request fails silently (brilliant, right?).

    I thought about enqueuing the requests in Redis and using Resque workers to process them (single worker, single process, so they are processed in order), but we need to be able to give the user feedback as to whether the transaction was processed successfully, so this seems less feasible to me.

    I've tried to make this application handle concurrency well (as much as possible for a Ruby on Rails app), but now we're in a situation where we have to interact with a system that is designed to be single-process, single-threaded, and sequential. If it at least gave an "out of order" error, I could just increment (or take the next value off the sequence), but it's designed to fail silently in the event of ANY error. We are handling timeouts in a way that blocks on I/O, but since the application uses multiple workers (Unicorn), that's no guarantee. Any ideas/suggestions would be appreciated.
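
    One way to close the race without a separate queue is to serialize just the allocate-and-send step on the database itself, e.g. with a PostgreSQL advisory lock. A hedged sketch (in Python with psycopg2 for brevity; the table, sequence and lock key are hypothetical):

        # Sketch: hold an advisory lock across "allocate seqnum, then send",
        # so requests reach the legacy system in seqnum order even when web
        # workers run concurrently. Names are hypothetical.
        import psycopg2

        LOCK_KEY = 42  # arbitrary application-wide advisory lock id

        def deposit(conn, account_id, amount, send_to_legacy):
            with conn:  # one transaction; lock released at commit/rollback
                with conn.cursor() as cur:
                    # blocks until no other request holds the lock
                    cur.execute("SELECT pg_advisory_xact_lock(%s)", (LOCK_KEY,))
                    cur.execute("SELECT nextval('deposit_seq')")
                    seqnum = cur.fetchone()[0]
                    # send while still holding the lock, so no later seqnum
                    # can overtake this one
                    ok = send_to_legacy(seqnum, account_id, amount)
                    cur.execute(
                        "INSERT INTO deposits (seqnum, account_id, ok) "
                        "VALUES (%s, %s, %s)",
                        (seqnum, account_id, ok),
                    )
            return ok

    The obvious cost is that deposits are fully serialized, but that is exactly the constraint the legacy system imposes anyway, and user feedback stays synchronous.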

  • Is there an open source clone of a game in the Total War Series?

    - by sinekonata
    I loved Shogun: Total War's gameplay, and later spent weeks re-enacting historical wars and battles with Europa Barbarorum. It's a mod for Rome: Total War that focuses on historical accuracy in the peoples, units, sounds, visuals - everything from macro mechanics to actual battles (e.g. a lot more missiles).

    Since that time I have kind of turned my back on Windows, cause it sucks, and use Linux, cause Mac sucks even worse. So, as I miss that game (Europa Barbarorum) and consider it the most realistic RTS to date, I'd like to know if there are any free and open source alternatives to it. Ever since I've been under Linux I have become addicted to FOSS, so I also turned my back on paying for closed-source, pay-to-play games (even Kickstarters).

    I have found a clone/alternative for every one of the best games, like Minecraft, CSS, Natural Selection, TA/SupCom, etc. It's kind of the last one I need. The Spring engine is amazing, for example: is there another open source project of the sort in current development? Or would Spring itself be enough (it certainly looks capable) to make it? Thanks in advance, guys...

  • Parse git log by modified files

    - by MrUser
    I have been told to make git messages for each modified file all one line, so I can use grep to find all changes to that file. For instance:

        $ git commit -a
        modified: path/to/file.cpp/.h - 1) change1 , 2) change2, etc......

        $ git log | grep path/to/file.cpp/.h
        modified: path/to/file.cpp/.h - 1) change1 , 2) change2, etc......
        modified: path/to/file.cpp/.h - 1) change1 , 2) change2, etc......
        modified: path/to/file.cpp/.h - 1) change1 , 2) change2, etc......

    That's great, but then the actual line is harder to read, because it either runs off the screen or wraps and wraps and wraps. If I want to make messages like this:

        $ git commit -a
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc......

    is there a good way to then use grep or cut or some other tool to get a readout like:

        $ git log | grep path/to/file.cpp/.h
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc......
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc......
        modified: path/to/file.cpp/.h
        1) change1
        2) change2
        etc......

  • How can I use a script to control a VirtualBox guest?

    - by TheWickerman666
    Refer to: Launch an application in Windows from the Ubuntu desktop. I was wondering if Takkat could elaborate on the actual execution, i.e. the how-to in the script file. This would be greatly helpful. Thanks in advance.

    My script file InternetExplorerVM.sh looks like this; it is executed as /path/to/InternetExplorerVM.sh "C:\Program Files\Internet Explorer\iexplore.exe":

        #!/bin/bash
        # start Internet Explorer inside of a Windows7 Ultimate VM
        echo "Starting 'Internet Explorer' browser inside Windows7 virtual machine"
        echo ""
        sleep 1
        echo "Please be patient"
        VBoxManage startvm b307622e-6b5e-4e47-a427-84760cf2312b
        sleep 15
        echo ""
        echo "Now starting 'Internet Explorer'"
        ##VBoxManage --nologo guestcontrol b307622e-6b5e-4e47-a427-84760cf2312b execute --image "$1" --username RailroadGuest --password bnsf1234
        VBoxManage --nologo guestcontrol b307622e-6b5e-4e47-a427-84760cf2312b execute --image "C:\\Program/ Files\\Internet/ Explorer\\iexplore.exe" --username RailroadGuest --password bnsf1234 --wait-exit --wait-stdout
        echo ""
        echo "Saving the VM's state now"
        VBoxManage controlvm b307622e-6b5e-4e47-a427-84760cf2312b savestate
        sleep 2
        # Check VM state
        echo ""
        echo "Check the VM state"
        VBoxManage showvminfo b307622e-6b5e-4e47-a427-84760cf2312b | grep State
        exit

    My apologies for any mistakes; this is my first time posting on Ask Ubuntu. Thanks a ton in advance; this has been very helpful. I need this for BNSF guests: their mainframe emulator works exclusively on Java-enabled Internet Explorer.

  • Would it be possible to create an open source software library, entirely developed and moderated by an open community?

    - by Steven Jeuris
    Call it democratic software development, or open source on steroids if you will. I'm not just talking about the possibility of providing a patch which can be approved by the library owner. Think more along the lines of how Stack Exchange works: anyone can post code, and through community moderation it is cleaned up, and eventually valid code ends up in the final library. For complex libraries an elaborate system would probably have to be created, but for a simple library it is my belief that this is already possible, even within the Stack Exchange platform.

    Take a library of extension methods for .NET, for example. Everybody goes their own way and implements their own subset of what they feel is important, open-source library or not. People want to share their code, but there is no suitable platform for it. extensionmethod.net is the result of answering this call for extension methods, but the framework hopelessly falls short; there is no order or structure at all.

    You don't know whether an idea is any good until you try it, so I decided to create an Extension Methods proposal on Area 51. I believe that, with proper moderation, it could be possible for the site to be more than a Q&A site, and that an actual library (or subsets of it) could be extracted from it.

    Has anything like this been attempted before? Are there platforms better suited for this?

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administrating our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no take on it. In other words, the point they make is that workflows (and their execution) are tool-agnostic.

    My take on this is that DVCSs are better at encouraging people in more flexible and well-defined ways, because of the inherent branching occurring in the background (anonymous branches), and because you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think that in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you.

    Even though I have provided the earlier arguments, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them?

    Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the dictator/lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because there is just sharing work in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be done through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

  • Installing Visual Studio 2010 Service Pack 1

    - by Martin Hinshelwood
    As has become customary when the product team releases a new patch, SP or version, I like to document the install. This post seems almost redundant, as I had no problems, but I think that is as valuable to others thinking of installing the Service Pack as all the problems we sometimes get.

    As per Brian's post, I am installing the Visual Studio Team Foundation Server Service Pack 1 first, and indeed, as this is a single-server local deployment, I need to install both. If I only install one, it will leave the other product broken.

    Figure: Hopefully this will be more uneventful.

    It takes a little while for your system to be checked to see what components need updating. On my main computer this was pretty quick, but on the laptop it took some time.

    Figure: There are a lot of components to update.

    With this update also comes an update to .NET, as well as to many other components.

    Figure: I downloaded the full 1.5 GB, but you could do a web install.

    How long the download takes depends on how good your internet connection is, but as I am now in the US, I decided not to trust the internet connection speeds. It took around 30-40 minutes to download the full thing, which is a little slow.

    Figure: I did not need to download, but that would increase the install time.

    So on my main computer, again, this was fast, but on my netbook it took a little while.

    Figure: The actual install took around 30-40 minutes (2 hours on the netbook).

    I was pretty impressed with the speed of the install, and as Team Explorer is now out of the box with Visual Studio 2010, I don't get the problem of the SP being installed before Team Explorer and having a disjointed experience.

    Figure: As I suspected, no problems with the install.

    Figure: Checking in Visual Studio shows that all the servicing points were successful.

    This was an easy experience, even if the SP was over 1.5 GB to download. Hopefully I will be discovering things that work better for a good while to come, as well as not finding holes in the product that I had not encountered yet. What were your experiences of installing Visual Studio 2010 Service Pack 1?

  • How to apply verification and validation to the following example

    - by user970696
    I have been following verification and validation questions here with my colleagues, yet we are unable to see the slight differences, probably caused by a language barrier in technical English. An example:

    Requirement specification: the user wants to control the lights in 4 rooms by remote command, sent from the UI for each room separately.

    Functional specification: the UI will contain 4 checkboxes, labelled according to the rooms they control.
    - When a checkbox is checked, a signal is sent to the corresponding light and a green dot appears next to the checkbox.
    - When a checkbox is unchecked, a turn-off signal is sent to the corresponding light and a red dot appears next to the checkbox.

    Let me start with what I learned here. Verification, according to many great answers here, ensures that the product reflects the specified requirements: as the functional spec is produced by the supplier based on the customer's requirements, it will be verified (for completeness and correctness). Then the design document will be checked against the functional spec (does it design the 4 checkboxes?), and the source code against the design (is there code for the 4 checkboxes, functions to send the signals, etc.? is it traceable to the requirements?).

    Okay, the product is built and we need to test it - validate. Here comes our understanding trouble: validation should ensure the product meets the requirements for its specific intended use, which is basically the business requirement (does it work? can I control the lights from the UI?), but testers will definitely work with the functional spec, making sure the checkboxes are there, working, labelled, etc. They are basically checking whether the requirements in the functional spec were met in the final product; isn't that verification? (It should not be; let's stick to ISO 12207, where only validation is the actual testing.)

  • Empty interface to combine multiple interfaces

    - by user1109519
    Suppose you have two interfaces:

        interface Readable {
            public void read();
        }

        interface Writable {
            public void write();
        }

    In some cases the implementing objects can only support one of these, but in a lot of cases the implementations will support both interfaces. The people who use the interfaces will have to do something like:

        // can't write to it without explicit casting
        Readable myObject = new MyObject();

        // can't read from it without explicit casting
        Writable myObject = new MyObject();

        // tight coupling to the actual implementation
        MyObject myObject = new MyObject();

    None of these options is terribly convenient, even more so when you consider that you want this as a method parameter. One solution would be to declare a wrapping interface:

        interface TheWholeShabam extends Readable, Writable {}

    But this has one specific problem: all implementations that support both Readable and Writable have to implement TheWholeShabam if they want to be compatible with people using the interface, even though it offers nothing apart from the guaranteed presence of both interfaces.

    Is there a clean solution to this problem, or should I go for the wrapper interface?

    UPDATE: It is in fact often necessary to have an object that is both readable and writable, so simply separating the concerns in the arguments is not always a clean solution.

    UPDATE 2: (extracted as an answer so it's easier to comment on)

    UPDATE 3: Please beware that the primary use case for this is not streams (although they too must be supported). Streams make a very specific distinction between input and output, and there is a clear separation of responsibilities. Rather, think of something like a byte buffer where you need one object you can write to and read from, one object that has a very specific state attached to it. These objects exist because they are very useful for some things like asynchronous I/O, encodings, ...

  • Architecture of an action multiplayer game from scratch

    - by lcf
    Not sure whether this is a good place to ask (do point me to a better one if it's not), but since what we're developing is a game, here it goes.

    This is a "real-time" action multiplayer game. I have familiarized myself with concepts like lag compensation, view interpolation, input prediction and pretty much everything else I need for this. I have also prepared a set of prototypes to confirm that I understood everything correctly.

    My question is about the situation when the game engine must be rewound to the past to find out whether there was a "hit" (sometimes it may involve a whole recomputation of the world from that moment in the past up to the present moment). I already have a piece of code that does it, but it's not as neat as I need it to be. The domain logic of the app (the physics of the game) must be separated from the presentation (render) and the infrastructure tools (e.g. the remote server interaction specifics). How do I organize all this? :) Is there any worthy implementation with open sources I can take a look at?

    What I'm thinking is something like this:

        Render / User Input
          -> Game Engine (this is the so-called service layer)
          -> Processing User Commands & Remote Server
          -> Domain (Physics)

    How would you add into this scheme the concept of "ticks" or "interactions", with the possibility to rewind and recalculate "the game"? Remember, I cannot change the Domain/Physics, only the Game Engine. Should I store an array of "world states"? Should they be just representations of the world, optimized for this purpose somehow (how?), or should they be actual instances of the world (i.e. including behavior and all that)? Has anybody had similar experience? (I have never worked on a game before, if that matters.)
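
    A minimal sketch of the "array of world states" idea, with rewind-and-replay kept entirely in the engine layer so the domain stays untouched (Python for brevity; all names are illustrative, and world.step() stands in for the unchangeable physics):

        # Sketch: bounded history of world snapshots plus an input log, so
        # the engine can rewind to a past tick and recompute to the present.
        import copy

        HISTORY_TICKS = 64  # how far back rewinds are allowed

        class Engine:
            def __init__(self, world):
                self.world = world
                self.tick = 0
                self.snapshots = {}  # tick -> deep copy of the world
                self.inputs = {}     # tick -> commands applied on that tick

            def advance(self, commands):
                # snapshot the state *before* this tick's inputs are applied
                self.snapshots[self.tick] = copy.deepcopy(self.world)
                self.inputs[self.tick] = commands
                self.world.step(commands)  # untouched domain/physics call
                self.tick += 1
                # drop history that is too old to rewind to
                self.snapshots.pop(self.tick - HISTORY_TICKS, None)
                self.inputs.pop(self.tick - HISTORY_TICKS, None)

            def rewind_and_replay(self, past_tick, corrected_commands):
                # e.g. a late "hit" report arrives stamped with past_tick;
                # a fuller version would also refresh the snapshots taken
                # after past_tick while replaying
                self.world = copy.deepcopy(self.snapshots[past_tick])
                self.inputs[past_tick] = corrected_commands
                for t in range(past_tick, self.tick):
                    self.world.step(self.inputs[t])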

  • Why is my fresh install of 12.04 running slow?

    - by user75129
    Hey guys, I'm a new Linux user. I figured it would be the best fit for the laptop I just purchased, because it's said to be faster than Windows 7. I'm currently dual-booting Windows 7 Professional and Ubuntu 12.04. The laptop I am using is the LG X Note P210. Specs:

    - Intel Core i5 470UM dual core, clocked at 1.33 GHz
    - 12.5" HD LED LCD screen at 1366 by 768
    - 4 GB DDR3 RAM @ 1333 MHz
    - Integrated Intel HD graphics card
    - 4-cell battery with 3150 mAh

    It comes loaded with Windows 7 Home Premium 64-bit; it runs fine on that, but my Ubuntu 12.04 runs slower than it, and I don't understand why. It definitely has decent specs to run even a 64-bit operating system and do some gaming. Granted, I know it's not the best, but for a laptop it does the job, so Ubuntu should work, especially since it's said to make older units with worse specs run even better.

    I'm not all that familiar with coding and all, so what can I do to optimize speed without overclocking? Boot-up is fine; it's program response time, I believe. Once I'm in the actual OS, it lags, slows down, and apps stop working and take forever to load up.

  • How to diagnose Ubuntu CPU spikes / IO wait?

    - by Jeff Welling
    I'm using Ubuntu, and every couple of minutes it goes unresponsive for half a second to a full second. That isn't normally a problem, but it makes trying to code extremely frustrating, when you're trying to hit backspace or navigate the code and nothing is happening. The problem is, the freezes are so brief that top doesn't have time to show me what is spiking the CPU (assuming something is, but I don't know what else could cause this). Does anyone know how to troubleshoot this performance issue?

    Edit: I've tried logging in with Gnome Classic (No Effects) instead of Unity, but it still freezes up every once in a while.

    Edit: The CPU graph doesn't seem to show any actual spikes, so it seems you were right, and my original diagnosis of CPU spikes being the problem was incorrect; I now suspect IO wait. I don't recall this happening during the brief few weeks I had Windows 7 Starter running on it, though, which leads me to believe it isn't (just?) the hardware. Is there anything I can tweak to improve this? I'm using an Acer Aspire One D257, with Ubuntu 11.10.

    Edit: Output of dmesg is at http://paste.ubuntu.com/1060054/ and kern.log is at http://paste.ubuntu.com/1060055/
