Search Results

Search found 7802 results on 313 pages for 'unit tests'.

Page 202 of 313

  • See the exciting new features available for iProcurement and Sourcing with 12.1.3 Rollup Patch 14254641:R12.PRC_PF.B!

    - by user793044
    Functional areas and new features, with note references:

    Sourcing (Note 1499944.1, Sourcing New Features From Procurement RUP Family R12.1.3 September Update 2012: Accept Terms and Conditions to Comply With NDA):
    - Suppliers can now accept Terms and Conditions to comply with the buyer's Non-Disclosure Agreements (NDA).
    - The PDF generation process has been enhanced to provide faster generation of negotiation PDFs containing large amounts of data.

    iProcurement (Note 1499911.1, iProcurement New Features From RUP Family R12.1.3 September Update 2012: GL/Accounting Date, PO_CUSTOM_FUNDS_PKG.plb, Price and Supplier Update):
    - Requesters can specify the GL date (encumbrance date) for each distribution against a line at the time of creating requisitions.
    - Requesters can enter an Accounting Date on a Procurement Requisition if Dual Budgetary Control is enabled for Purchasing.
    - Requesters can choose a Favorite Charge Account to override their default charge account, using the Preferences page.
    - Buyers can update the unit price, suggested supplier, and site details while requesting a catalog item (inventory item) that is not linked to a blanket purchase agreement.

    For new features across all the Procurement product groups and information about applying Patch 14254641, see Note 1468883.1.

    Read the article

  • Reformatting and version control

    - by l0b0
    Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that SELECT id, name, address FROM persons JOIN addresses ON persons.id = addresses.person_id; could be better written as / is better written than SELECT persons.id, persons.name, addresses.address FROM persons JOIN addresses ON persons.id = addresses.person_id; while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git you can revert to the first commit ever, and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of all results: Same style in all commits Minimal merge work ? To clarify, this is not about best practices when starting the project, but rather what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history? Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
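
    One way to carry a reformat through the whole history, sketched under assumptions: git's tree-filter can run a formatter over every commit on every branch (format-sql.sh is a hypothetical script that rewrites the repository's .sql files in place; filter-branch rewrites every hash, so collaborators must re-clone or hard-reset afterwards):

        # Rewrite all commits on all branches, reformatting each tree as it goes.
        git filter-branch --tree-filter './format-sql.sh' --tag-name-filter cat -- --all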

    Read the article

  • Microsoft Access as a Weapon of War

    - by Damon
    A while ago (probably a decade ago, actually) I saw a report on a tracking system maintained by a U.S. Army artillery control unit. This system was capable of maintaining a bearing on various units in the field to help avoid friendly fire. I consider the U.S. Army to be the most technologically advanced fighting force on Earth, but to my terror I saw something on the title bar of an application displayed on a laptop behind one of the soldiers they were interviewing: Tracking.mdb. Oh yes. Microsoft Office Suite had made it onto the battlefield. My hope is that it was just running as a front-end for a more proficient database (no offense Access people), or that the soldier was tracking something else like KP duty or fantasy football scores. But I could also see the corporate equivalent of a pointy-haired boss walking into a cube and asking someone who had piddled with Access to build a database for HR forms. Except this pointy-haired boss would have been a general, the cube would have been a tank, and the HR forms would have been targets that, if something went amiss, would have been hit by a 500lb artillery round. Hope that soldier could write a good query :)

    Read the article

  • C# class architecture for REST services

    - by user15370
    Hi. I am integrating with a set of REST services exposed by our partner. The unit of integration is at the project level, meaning that for each project created on our partner's side of the fence they will expose a unique set of REST services. To be more clear, assume there are two projects - project1 and project2. The REST services available to access the project data would then be:

        /project1/search/getstuff?etc...
        /project1/analysis/getstuff?etc...
        /project1/cluster/getstuff?etc...
        /project2/search/getstuff?etc...
        /project2/analysis/getstuff?etc...
        /project2/cluster/getstuff?etc...

    My task is to wrap these services in a C# class to be used by our app developer. I want to make it simple for the app developer and am thinking of providing something like the following class.

        class ProjectClient
        {
            SearchClient _searchclient;
            AnalysisClient _analysisclient;
            ClusterClient _clusterclient;

            string Project { get; set; }

            ProjectClient(string _project)
            {
                Project = _project;
            }
        }

    SearchClient, AnalysisClient and ClusterClient are my classes to support the respective services shown above. The problem with this approach is that ProjectClient will need to provide public methods for each of the APIs exposed by SearchClient, etc...

        public void SearchGetStuff()
        {
            _searchclient.getStuff();
        }

    Any suggestions how I can architect this better?
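
    One hedged sketch of an alternative (the sub-client constructors taking the project name are an assumption, not the partner's API): expose the sub-clients as read-only properties, so ProjectClient only owns construction and the shared project context and never has to forward individual methods.

        // Minimal sketch: callers write projectClient.Search.GetStuff(...) etc.,
        // so adding a method to SearchClient needs no change to ProjectClient.
        public class ProjectClient
        {
            public string Project { get; private set; }
            public SearchClient Search { get; private set; }
            public AnalysisClient Analysis { get; private set; }
            public ClusterClient Cluster { get; private set; }

            public ProjectClient(string project)
            {
                Project = project;
                Search = new SearchClient(project);     // hypothetical constructors
                Analysis = new AnalysisClient(project);
                Cluster = new ClusterClient(project);
            }
        }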

    Read the article

  • What is the R Language?

    - by TATWORTH
    I encountered the R Language recently with O'Reilly books and while from the context I knew it was a language for dealing with statistics, doing a web search for the support web site was futile. However I have now located the web site: it is at http://www.r-project.org/. R is a free language available for a number of platforms including Windows. CRAN mirrors are available at a number of locations worldwide. Here is the official description: "R is a language and environment for statistical computing and graphics. It is a GNU project which is similar to the S language and environment which was developed at Bell Laboratories (formerly AT&T, now Lucent Technologies) by John Chambers and colleagues. R can be considered as a different implementation of S. There are some important differences, but much code written for S runs unaltered under R. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, ...) and graphical techniques, and is highly extensible. The S language is often the vehicle of choice for research in statistical methodology, and R provides an Open Source route to participation in that activity. One of R's strengths is the ease with which well-designed publication-quality plots can be produced, including mathematical symbols and formulae where needed. Great care has been taken over the defaults for the minor design choices in graphics, but the user retains full control. R is available as Free Software under the terms of the Free Software Foundation's GNU General Public License in source code form. It compiles and runs on a wide variety of UNIX platforms and similar systems (including FreeBSD and Linux), Windows and MacOS."

    Read the article

  • What are some techniques I can use to refactor Object Oriented code into Functional code?

    - by tieTYT
    I've spent about 20-40 hours developing part of a game using JavaScript and HTML5 canvas. When I started I had no idea what I was doing. So it started as a proof of concept and is coming along nicely now, but it has no automated tests. The game is starting to become complex enough that it could benefit from some automated testing, but it seems tough to do because the code depends on mutating global state. I'd like to refactor the whole thing using Underscore.js, a functional programming library for JavaScript. Part of me thinks I should just start from scratch using a Functional Programming style and testing. But, I think refactoring the imperative code into declarative code might be a better learning experience and a safer way to get to my current state of functionality. Problem is, I know what I want my code to look like in the end, but I don't know how to turn my current code into it. I'm hoping some people here could give me some tips a la the Refactoring book and Working Effectively With Legacy Code. For example, as a first step I'm thinking about "banning" global state. Take every function that uses a global variable and pass it in as a parameter instead. Next step may be to "ban" mutation, and to always return a new object. Any advice would be appreciated. I've never taken OO code and refactored it into Functional code before.
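
    A minimal sketch of that first "ban global state" step, with hypothetical names rather than anything from the actual game:

        // Before: the function reads and mutates a global, so a test has to set
        // up and inspect shared state.
        var player = { x: 0, y: 0 };
        function movePlayer(dx, dy) {
          player.x += dx;
          player.y += dy;
        }

        // After: state comes in as a parameter and a new object comes back out,
        // so the function can be unit tested as a pure input/output mapping.
        function movePlayerPure(player, dx, dy) {
          return { x: player.x + dx, y: player.y + dy };
        }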

    Read the article

  • Is Akka a good solution for a concurrent pipeline/workflow problem?

    - by herpylderp
    Disclaimer: I am brand new to Akka and the concept of Actors/Event-Driven Architectures in general. I have to implement a fairly complex problem where users can configure a "concurrent pipeline":

    - Pipeline: consists of 1+ Stages; all Stages execute sequentially
    - Stage: consists of 1+ Tasks; all Tasks execute in parallel
    - Task: essentially a Java Runnable

    As you can see above, a Task is a Runnable that does some unit of work. Tasks are organized into Stages, which execute their Tasks in parallel. Stages are organized into the Pipeline, which executes its Stages sequentially. Hence if a user specifies the following Pipeline:

        CrossTheRoadSafelyPipeline
            Stage 1: Look Left
                Task 1: Turn your head to the left and look for cars
                Task 2: Listen for cars
            Stage 2: Look right
                Task 1: Turn your head to the right and look for cars
                Task 2: Listen for cars

    Then Stage 1 will execute, and then Stage 2 will execute. However, while each Stage is executing, its individual Tasks are executing in parallel/at the same time. In reality Pipelines will become very complicated, with hundreds of Stages and dozens of Tasks per Stage (again, executing at the same time). To implement this Pipeline I can only think of several solutions:

    - ESB/Apache Camel
    - Guava Event Bus
    - Java 5 Concurrency
    - Actors/Akka

    Camel doesn't seem right because its core competency is integration, not synchrony and orchestration across worker threads. Guava is great, but this doesn't really feel like a subscriber/publisher-type of problem. And Java 5 Concurrency (ExecutorService, etc.) just feels too low-level and painful. So I ask: is Akka a strong candidate for this type of problem? If so, how? If not, then why, and what is a good candidate?
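
    As a baseline for comparison, here is a hedged sketch of the same semantics in plain java.util.concurrent (the "Java 5 Concurrency" option above); error handling, cancellation and result passing between Stages are deliberately omitted:

        import java.util.List;
        import java.util.concurrent.Callable;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.stream.Collectors;

        // Stages run one after another; Tasks within a Stage run in parallel.
        class Pipeline {
            private final List<List<Runnable>> stages; // each inner list is one Stage

            Pipeline(List<List<Runnable>> stages) { this.stages = stages; }

            void run() throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(8);
                try {
                    for (List<Runnable> stage : stages) {
                        // invokeAll blocks until every Task in this Stage has finished.
                        pool.invokeAll(stage.stream()
                                .map(task -> (Callable<Void>) () -> { task.run(); return null; })
                                .collect(Collectors.toList()));
                    }
                } finally {
                    pool.shutdown();
                }
            }
        }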

    Read the article

  • JetBrains release WebStorm 5.0

    - by TATWORTH
    At http://www.jetbrains.com/webstorm/whatsnew/index.html?WS50ROW, JetBrains have announced the release of WebStorm 5.0, an IDE that brings the ease of code writing in VB.NET and C# that you get with ReSharper to JavaScript, CSS and LESS. (There are some more details in http://blog.jetbrains.com/webide/2012/08/liveedit-plugin-features-in-detail/.) Code completion in JavaScript, CSS and LESS is a very welcome feature. I look forward to trying out WebStorm. The download at http://www.jetbrains.com/webstorm/download/index.html comes with a free 30-day trial. Price information is at http://www.jetbrains.com/webstorm/buy/index.jsp - you should note that if you are an open-source developer, you can apply for a free license. The price of a personal license at £23 + VAT is a no-brainer. The price of a Commercial license would be paid for in a few days of the increased productivity that this tool brings. WebStorm currently requires Google Chrome to run. Like ReSharper it appears to be a very able tool. It includes tools such as:

    - XSLT debugging
    - JSLint for checking for JavaScript errors
    - JavaScript debugging
    - JavaScript unit testing (including code coverage)
    - JavaScript folding regions
    - CoffeeScript support

    Well, I suggest that you try WebStorm 5.0.

    Read the article

  • For an ORM supporting data validation, should constraints be enforced in the database as well?

    - by Ramnique Singh
    I have always applied constraints at the database level in addition to my (ActiveRecord) models. But I've been wondering if this is really required? A little background: I recently had to unit test a basic automated timestamp generation method for a model. Normally, the test would create an instance of the model and save it without validation. But there are other required fields that aren't nullable in the table definition, meaning I can't save the instance even if I skip the ActiveRecord validation. So I'm thinking: should I remove such constraints from the db itself, and let the ORM handle them?

    Possible advantages if I skip constraints in the db, imo:
    - Can modify a validation rule in the model, without having to migrate the database.
    - Can skip validation in testing.

    Possible disadvantage? If ORM validation fails or is bypassed, however that happens, the database does not check the constraints.

    What do you think?

    EDIT: In this case, I'm using the Yii Framework, which generates the model from the database, hence database rules are generated also (though I could always write them post-generation myself too).

    Read the article

  • How should I start refactoring my mostly-procedural C++ application?

    - by oob
    We have a program written in C++ that is mostly procedural, but we do use some C++ containers from the standard library (vector, map, list, etc). We are constantly making changes to this code, so I wouldn't call it a stagnant piece of legacy code that we can just wrap up. There are a lot of issues with this code making it harder and harder for us to make changes, but I see the three biggest issues being:

    1. Many of the functions do more (way more) than one thing
    2. We violate the DRY principle left and right
    3. We have global variables and global state up the wazoo

    I was thinking we should attack areas 1 and 2 first. Along the way, we can "de-globalize" our smaller functions from the bottom up by passing in information that is currently global as parameters to the lower level functions from the higher level functions, and then concentrate on figuring out how to remove the need for global variables as much as possible. I just finished reading Code Complete 2 and The Pragmatic Programmer, and I learned a lot, but I am feeling overwhelmed. I would like to implement unit testing, change from a procedural to OO approach, automate testing, use a better logging system, fully validate all input, implement better error handling and many other things, but I know if we start all this at once, we would screw ourselves. I am thinking the three I listed are the most important to start with. Any suggestions are welcome. We are a team of two programmers, mostly with experience with in-house scripting. It is going to be hard to justify taking the time to refactor, especially if we can't bill the time to a client. Believe it or not, this project has been successful enough to keep us busy full time and also keep several consultants busy using it for client work.
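
    A hedged sketch of the "de-globalize from the bottom up" step described above, with invented names purely for illustration:

        #include <string>
        #include <vector>

        // Before: the low-level helper reaches out to a global, so testing it
        // means setting up global state first.
        std::vector<std::string> g_customerNames;
        bool isKnownCustomer(const std::string& name) {
            for (const auto& n : g_customerNames)
                if (n == name) return true;
            return false;
        }

        // After: the same logic takes its data as a parameter; higher-level
        // callers pass the global in for now, and can stop doing so later.
        bool isKnownCustomer(const std::vector<std::string>& customers,
                             const std::string& name) {
            for (const auto& n : customers)
                if (n == name) return true;
            return false;
        }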

    Read the article

  • Structure of a .NET Assembly

    - by Om Talsania
    Assembly is the smallest unit of deployment in the .NET Framework. When you compile your C# code, it will get converted into a managed module. A managed module is a standard EXE or DLL. This managed module will have the IL (Microsoft Intermediate Language) code and the metadata. Apart from this it will also have header information. The following describes the parts of a managed module.

    - PE Header (PE32 header for 32-bit, PE32+ header for 64-bit): This is a standard Windows PE header which indicates the type of the file, i.e. whether it is an EXE or a DLL. It also contains the timestamp of the file creation date and time, and some other fields which might be needed for an unmanaged PE (Portable Executable) but are not important for a managed one. For a managed PE, the next header, the CLR header, is more important.
    - CLR Header: Contains the version of the CLR required, some flags, the token of the entry point method (Main), the size and location of the metadata, resources, strong name, etc.
    - Metadata: There can be many metadata tables. They can be categorized into 2 major categories: 1. tables that describe the types and members defined in your code; 2. tables that describe the types and members referenced by your code.
    - IL Code: The MSIL representation of the C# code. At runtime, the CLR converts it into native instructions.

    Read the article

  • Testing a codebase with sequential cohesion

    - by iveqy
    I've this really simple program written in C with ncurses that's basically a front-end to sqlite3. I would like to implement TDD to continue the development and have found a nice C unit testing framework for this. However I'm totally stuck on how to implement it. Take this case for example: a user types a letter 'l' that is captured by ncurses getch(), and then an sqlite3 query is run that for every row calls a callback function. This callback function prints stuff to the screen via ncurses. So the obvious way to fully test this is to simulate a keyboard and a terminal and make sure that the output is the expected one. However this sounds too complicated. I was thinking about adding an abstraction layer between the database and the UI so that the callback function will populate a list of entries and that list will later be printed. In that case I would be able to check if that list contains the expected values. However, why would I struggle with a data structure and lists in my program when sqlite3 already does this? For example, if the user wants to see the list sorted in some other way, it would be expensive to throw away the list and repopulate it. I would need to sort the list, but why should I implement sorting when sqlite3 already has that? Using my original design I could just do another query sorted differently. Previously I've only done TDD with command line applications, and there it's really easy to just compare the output with what I expected. Another way would be to add a CLI interface to the program and wrap a test program around the CLI to test everything. (The way git.git does with its test framework.) So the question is, how to add testing to a tightly integrated database/UI.
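
    A hedged sketch of the abstraction layer being considered (field names and sizes invented): the sqlite3_exec() callback only fills a plain array, and a separate function renders it, so tests can call the callback directly and assert on the array without any keyboard or terminal involved.

        #include <stdio.h>

        struct entry { char title[128]; };

        struct entry_list {
            struct entry items[256];
            int count;
        };

        /* Matches the sqlite3_exec() callback signature; knows nothing about ncurses. */
        int collect_row(void *ctx, int ncols, char **values, char **names) {
            struct entry_list *list = ctx;
            (void)names; /* unused in this sketch */
            if (ncols > 0 && values[0] && list->count < 256) {
                snprintf(list->items[list->count].title,
                         sizeof list->items[list->count].title, "%s", values[0]);
                list->count++;
            }
            return 0; /* non-zero would abort the query */
        }

        /* The only function that touches the screen; kept out of the unit tests. */
        void render(const struct entry_list *list) {
            for (int i = 0; i < list->count; i++)
                printf("%s\n", list->items[i].title); /* mvprintw() in the real UI */
        }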

    Read the article

  • Ubuntu Server 12.04 as a router. Problem with DNS?? Or Routing table?

    - by Lorenzo
    I have a virtualbox lab made up of 4 Windows 2008 R2 servers (DC/DNS, SQL, SHAREPOINT, EXCHANGE) that are configured with static IP addresses, with NICs attached to the internal network. Everything works. I had the requirement to execute some tests that also access external services available on the internet. To keep things clean and similar to the production environment I have installed another VM, with Ubuntu Server 12.04 64 bit, configured (I hope) to work as a router like described on this post. This VM has two network interfaces: the first is bridged with the host and is used as a WAN connection, and the other one is attached to the internal network with its own static IP address on the internal network subnet. But actually the Windows servers do not connect to the internet, while the Unix one does. I ran the route command; this is the result:

        Kernel IP Routing table
        Destination     Gateway         Genmask         Flags  Metric  Ref  Use  Iface
        default         10.69.121.1     0.0.0.0         UG     100     0    0    eth0
        10.69.121.0     *               255.255.255.0   U      0       0    0    eth0
        192.168.83.0    *               255.255.255.0   U      0       0    0    eth1

    Can somebody help me with this configuration? :) Thanks!

    Addendum: I forgot to mention that one of the Windows servers hosts a DNS service, for which I should maybe configure a forwarding server, but I do not exactly know which server to forward to... :(
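
    The routing table itself looks plausible, so here is a hedged sketch of the piece that is usually missing in this kind of setup (interface names and the 192.168.83.0/24 subnet are taken from the output above; adjust as needed):

        # Enable forwarding between the two NICs (persist it in /etc/sysctl.conf).
        sudo sysctl -w net.ipv4.ip_forward=1
        # NAT traffic from the internal network out through the bridged interface.
        sudo iptables -t nat -A POSTROUTING -s 192.168.83.0/24 -o eth0 -j MASQUERADE

    On the Windows side, the servers would then use the Ubuntu VM's 192.168.83.x address as their default gateway, and the Windows DNS server can forward to any resolver reachable through eth0.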

    Read the article

  • Wireless switch on Dell XT2 - strange behaviour of rfkill

    - by DyP
    I have a Dell Latitude XT2 using an Intel WLAN card (lspci lists it as "Intel Corporation Ultimate N WiFi Link 5300") running Lubuntu 12.04 with recent updates. The laptop has a hardware WLAN switch. I have problems activating the WLAN when booting with the hardware switch set to "off". The situation is a bit confusing, unfortunately. rfkill lists two WLAN devices (though lspci only shows the Intel one). This is the situation when booting with the hardware switch set to "Off":

        0: dell-wifi: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes
        1: dell-bluetooth: Bluetooth
            Soft blocked: yes
            Hard blocked: yes
        2: phy0: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes

    From some tests, I conclude WLAN is only activated when both dell-wifi and phy0 are unblocked by soft- and hardware. But I can only unblock dell-wifi after the hardware switch is set to "on". Procedure right from boot with hardware switch set to "Off":

    - Soft-unblocking phy0 works as expected. Could be done by a start-up script.
    - sudo rfkill unblock 0: nothing happens. Soft block of dell-wifi not removed.
    - Set the hardware switch to "on": phy0 gets its hard block removed. Still no WLAN.
    - sudo rfkill unblock 0: both the soft and hard lock of dell-wifi are removed. WLAN is now active and works.
    - sudo rfkill block 0: only adds the soft block, as expected. WLAN goes off again.

    So, in order to activate WLAN, I have to use the hardware switch and afterwards (manually) run a script - that's a bit inconvenient. Does someone know a better solution? Maybe a daemon could help that listens to rfkill events to unblock dell-wifi after I have set the hardware switch to "on"? (sounds like another workaround) When booting with the hardware switch set to "On", nothing is blocked, neither hard nor soft.
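
    For the daemon-style workaround mentioned at the end, a minimal sketch (assuming the rfkill utility's event mode is available; run it as root):

        # Whenever any rfkill event arrives (e.g. the hardware switch changing),
        # try to clear the soft block on the dell-wifi device (index 0 above).
        rfkill event | while read -r event; do
            rfkill unblock 0
        done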

    Read the article

  • Storing editable site content?

    - by hmp
    We have a Django-based website for which we wanted to make some of the content (text, and business logic such as pricing plans) easily editable in-house, and so we decided to store it outside the codebase. Usually the reason is one of the following: It's something that non-technical people want to edit. One example is copywriting for a website - the programmers prepare a template with text that defaults to "Lorem ipsum...", and the real content is inserted later to the database. It's something that we want to be able to change quickly, without the need to deploy new code (which we currently do twice a week). An example would be features currently available to the customers at different tiers of pricing. Instead of hardcoding these, we read them from database. The described solution is flexible but there are some reasons why I don't like it. Because the content has to be read from the database, there is a performance overhead. We mitigate that by using a caching scheme, but this also adds some complexity to the system. Developers who run the code locally see the system in a significantly different state compared to how it runs on production. Automated tests also exercise the system in a different state. Situations like testing new features on a staging server also get trickier - if the staging server doesn't have a recent copy of the database, it can be unexpectedly different from production. We could mitigate that by committing the new state to the repository occasionally (e.g. by adding data migrations), but it seems like a wrong approach. Is it? Any ideas how best to solve these problems? Is there a better approach for handling the content that I'm overlooking?

    Read the article

  • What actions to take when people leave the team?

    - by finrod
    Recently one of our key engineers resigned. This engineer has co-authored a major component of our application. We are not hitting truck number yet, but we're getting close :) Before the guy waltzes off, we want to take the actions necessary to recover from this loss as smoothly as possible and eventually 'grow' the rest of the team to competently cover the parts he authored. More about the context: the domain the component covers and the code are no rocket science, but still a lot of non-trivial stuff. Some team members can already cover a lot of this, but those have a lot on their plates, and we want to make sure everything is covered. The actions (as I see it):

    - Improve tests and test coverage - especially for the non-trivial stuff,
    - Update high level documents,
    - Document any 'funny stuff' the code does (we had to do some heavy duct-taping),
    - Add / update code documentation - have everything with 'public' visibility documented.

    Finally the questions: What do you think are the actions to take in this situation? What have you done in such situations? What did or did not work well for you?

    Read the article

  • error: no such partition after 11.10 upgrade to 12.04

    - by Alan King
    I recently upgraded my 11.10 install to 12.04 LTS and got the above error message upon reboot, after a GNU GRUB version ubuntu3 display showing Ubuntu 3.2.0-23-generic pae and other kernels or memory tests to choose from. The upgrade had to be done by CD because the Update Manager did not show the 12.04 upgrade option. After selecting the default install option of upgrading 11.10 to 12.04, I was presented with a screen saying that I had not specified a swap partition. Upon selecting the 'back' key, I was taken to a partition page which listed two current partitions (only Ubuntu 11.10 had been installed - no Windoz): an ext4 partition plus a small 1.8GB partition. I double clicked the small partition and selected it as the swap partition, even though I wondered at the time why this even came up. I can see the two user folders under home from the file manager screen while running 12.04 from the CD, but if I try to access either one an error message is displayed saying I do not have permission, while I get a loading message in the lower right corner of the window that does not go away. I have two questions:

    1. Can I access the user folders prior to recovery via the Terminal? If so, how?
    2. How do I fix the GRUB issue?
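
    A hedged sketch of both steps from the live CD's terminal (the /dev/sda1 name for the ext4 root partition is an assumption; check the actual layout first):

        sudo fdisk -l                      # identify the ext4 root partition
        sudo mount /dev/sda1 /mnt          # mount it
        sudo ls -l /mnt/home               # running as root avoids the permission error
        # Re-install GRUB to the disk's MBR, pointing it at the installed system:
        sudo grub-install --boot-directory=/mnt/boot /dev/sda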

    Read the article

  • How can I find out the device Id of my unmounted DVD?

    - by fred.bear
    When I put a DVD into the DVD drive, it appears in Nautilus Places, but is not automatically mounted. (This is by personal choice.) In this unmounted state, mount (of course) reports nothing, and likewise for df... but Nautilus is aware of the DVD hardware unit and has read the label, which it shows in Places. So it seems to me that Nautilus has already accessed the DVD device (Did it temporarily mount it?)... The main point of my question was to determine how to find the device Id of an unmounted device... but as I've been writing this, I now think it may not be as simple as that... This issue came up because I wanted to test this command: cat iso-pieces.* | growisofs -Z /dev/dvd=/dev/stdin, but then realized that I didn't know how to get my DVD's device Id. ... and does the above command require a mounted device, or does it write directly to the device? ... as you can see, I'm a bit vague about devices :) Come to think of it, maybe Nautilus read the DVD device directly, because when all is said and done, something has to read/write directly to it. info growisofs says: Under Linux it will most likely be an ide-scsi device such as "/dev/scd0". How can I find this Id via a script?
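
    A hedged sketch of finding it from a script (device names vary by machine; growisofs writes to the device node directly, so the disc does not need to be mounted):

        readlink -f /dev/dvd                 # /dev/dvd is normally a symlink, e.g. to /dev/sr0
        lsblk -o NAME,TYPE,SIZE,LABEL        # optical drives show up with TYPE "rom"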

    Read the article

  • Looking for an example of how a software project can be managed/deployed

    - by rguilbault
    My company is evaluating adopting off-the-shelf ALM products to aid in our development lifecycle; we currently use our own homegrown solutions to manage requirements gathering, specification documentation, testing, etc. One of the issues I am having is understanding how to move code between stages of development. We have what we call a pipeline, which consists of particular stops: [Source] - [QC] - [Production] At the first stop, the developer works out a solution to some requested change and performs individual testing. When that process is complete (and peer review has been performed), our ALM system physically moves the affected programs from the [Source] runtime environment to the [QC] runtime environment. This movement of code is triggered by advancing the status of the change request to match the stage of the pipeline. I have been searching the internet for a few days trying to find how the process is accomplished elsewhere -- I have read a bit about builds, automated testing, various ALM products, etc. but nowhere does any of this state how builds interact with initial change requests, what the triggers are, how dependencies are managed, how the various forms of testing are accommodated (e.g. unit testing, integration testing, regression testing), etc. Can anyone point me to any resources detailing specific workflows or attempt to explain (generically) how a change could/should be tracked and moved though the development lifecycle? I'd be very appreciative. Note: I've cleaned up the question to hopefully make it easier to understand. Also, I found another question (which I can't find now) that referenced this book, which sounds like it might be exactly what I am looking for -- not sure if I want to shell out the cash for it, though.

    Read the article

  • Basic Puppet installation with Solaris 11.2 beta

    - by user13366125
    At the recent announcement we talked a lot about the Puppet integration. But how do you set it up? I want to show this in this blog entry. The example I'm using is even useful in practice: due to the extremely low overhead of zones, I'm frequently seeing really large numbers of zones on a single system. Changing /etc/hosts or changing an SMF service property on 3 systems is not that hard. Doing it on a system with 500 zones is ... let's say it diplomatically ... a job you give to someone you want to punish. Puppet can help in this case, making it easier to manage the configuration and to distribute it. You describe the changes you want to make in a file, or set of files, called a manifest in the Puppet world, and then roll them out to your servers, no matter if they are virtual or physical. A warning at first: Puppet is a really, really vast topic. This article is really basic and it doesn't go more than toe-deep into the possibilities and capabilities of Puppet. It doesn't try to explain Puppet ... just how you get it up and running and do basic tests. There are many good books on Puppet. Please read one of them, and the concepts and the example will get much clearer immediately. (more)
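
    To give a flavour of what such a manifest looks like, a minimal hedged sketch (resource names and values are invented, not from the post) covering the two cases mentioned above - an /etc/hosts entry and a service that must be running:

        # site.pp - hypothetical example
        host { 'dbserver':
          ensure => present,
          ip     => '192.168.1.50',
        }

        service { 'svc:/network/ssh:default':
          ensure => running,
          enable => true,
        }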

    Read the article

  • How to refactor to cleaner version of maintaing states of the widget

    - by George
    Backstory: I inherited a bunch of code that I'd like to refactor. It is a UI application written in JavaScript. Current state: we have a main application which consists of several UI components, and each component has entry fields, textboxes, menus, etc., like "ticket", "customer information", etc. Based on input, where the application was called from, and who the user is, we enable/disable, hide, show, and change titles. Unfortunately, the app grew to the point where it is really hard to scale and add new features. The main driver (application code) calls the set/unset functions of the respective components, so a lot of the code looks like this:

        // main app unit
        function1() {
            call_function2();
            component1.setX(true);
            component1.setY(true);
            component2.setX(false);
        }

        call_function2() {
            // it may repeat some of the code function1 called
        }

    and we have a lot of this in the main unit. I am cleaning this mess. What is the best way to maintain the state of widgets? Please let me know if you need me to clarify.

    Read the article

  • Boundary conditions for testing

    - by Loggie
    Ok so in a programming test I was given the following question.

    Question 1 (1 mark): Spot the potential bug in this section of code:

        void Class::Update( float dt )
        {
            totalTime += dt;
            if( totalTime == 3.0f )
            {
                // Do state change
                m_State++;
            }
        }

    The multiple choice answers for this question were:

    a) It has a constant floating point number where it should have a named constant variable
    b) It may not change state with only an equality test
    c) You don't know what state you are changing to
    d) The class is named poorly

    I wrongly answered this with answer C. I eventually received feedback on the answers, and the feedback for this question was: "Correct answer is a. This is about understanding correct boundary conditions for tests. The other answers are arguably valid points, but do not indicate a potential bug in the code." My question here is, what does this have to do with boundary conditions? My understanding of boundary conditions is checking that a value is within a certain range, which isn't the case here. Upon looking over the question, in my opinion, B should be the correct answer when considering the accuracy issues of using floating point values.
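
    For reference, a hedged sketch of what a fix addressing both (a) and (b) might look like - a named constant plus a boundary test instead of floating-point equality, since totalTime can step straight past 3.0f without ever equalling it exactly:

        const float kStateChangeTime = 3.0f;

        void Class::Update( float dt )
        {
            totalTime += dt;
            if( totalTime >= kStateChangeTime )
            {
                // Do state change
                m_State++;
            }
        }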

    Read the article

  • OSSEC HIDS Notification "Unknown problem somewhere in the system." (seems like hdd issue)

    - by John
    From what I understand something is wrong with the hdd. I am trying to find some commands in order to run some tests to check if the hard disk is OK. I will post a full list of logs after a REBOOT of the system: "Unknown problem somewhere in the system."

        kernel: ata2.00: failed command: READ FPDMA QUEUED
        kernel: res 51/40:c8:38:5c:16/00:00:00:00:00/40 Emask 0x409 (media error) <F>
        kernel: ata2.00: error: { UNC }
        kernel: ata2.00: failed command: READ FPDMA QUEUED
        kernel: res 51/40:78:88:5c:16/00:00:00:00:00/40 Emask 0x409 (media error) <F>
        kernel: sd 1:0:0:0: [sda] Sense Key : Medium Error [current] [descriptor]
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: md/raid1:md1: read error corrected (8 sectors at 1461400 on sda1)
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: sd 1:0:0:0: [sda] Add. Sense: Unrecovered read error - auto reallocate failed
        kernel: md/raid1:md1: read error corrected (8 sectors at 1461672 on sda1)

    Also, some of these logs are duplicated or repeated even more. Thanks.
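
    A hedged sketch of the usual first checks (smartmontools may need installing; the /dev/sda name comes from the log above):

        sudo smartctl -a /dev/sda            # SMART health summary, reallocated/pending sector counts
        sudo smartctl -t long /dev/sda       # start the drive's extended self-test
        sudo smartctl -l selftest /dev/sda   # read the self-test results once it completes

    The "media error" / "auto reallocate failed" lines already point at failing sectors, so treating the drive as suspect and checking the health of the RAID1 mirror is prudent regardless of what the tests report.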

    Read the article

  • Translating an object along its heading

    - by Kuros
    I am working on a simulation that requires me to have several objects moving around in 3D space (text output of their current position on the grid and heading is fine, I do not need graphics), and I am having some trouble getting objects to move along their relative headings. I have a basic understanding of vectors and matrices. I am using a vector to represent their position, and I am also using Euler angles. I can translate one of my entities with a matrix along whatever axis, and I can alter their heading. For example, if I have an entity at (order is XYZ) 1, 1, 1, with a heading of 0, I can apply a translation matrix to get them to walk to 1, 1, 2 fine. However, if I change their heading to 270, they still walk to 1, 1, 3, instead of 2, 1, 2 as I desire. I have a feeling that my problem lies in not translating my matrix from world space to object space, but I am not sure how to go about that. How can I do this?

    Addition: I am using 3D vectors to represent their current position and their heading (using the three Euler angles). For now, all I want to do is have an entity walk in a square, reporting their current position at each step. So, assuming it starts at 10, 10, 10, I want it to walk as follows:

        10, 10, 10 -> 10, 10, 15
        10, 10, 15 ->  5, 10, 15
         5, 10, 15 ->  5, 10, 10
         5, 10, 10 -> 10, 10, 10

    My 1 Z unit translation matrix is as follows:

        [ 1  0  0  0]
        [ 0  1  0  0]
        [ 0  0  1  1]
        [ 0  0  0  1]

    My rotation matrix is as follows:

        [ 0  0  1  0]
        [ 0  1  0  0]
        [-1  0  0  0]
        [ 0  0  0  1]
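
    A hedged note on the likely missing step: the local "forward" displacement has to be rotated into world space by the heading before it is added to the position, i.e. new_position = position + R_y(heading) * (0, 0, step). At a heading of 0 that reduces to a plain +Z step, and at other headings the same step comes out along the rotated axis. Whether this is written as R*T or T*R in matrix form depends on the row- versus column-vector convention of the math library in use, so that convention is worth checking first.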

    Read the article

  • How to rotate a group of objects around a common center?

    - by user1662292
    I've made a model in 3D Studio Max 9. It consists of a variety of cubes, cylinders etc. In XNA I've imported the model okay and it shows correctly. However, when I apply rotation, each component in the model rotates around its own centre. I want the model to rotate as a single unit. I've linked the components in 3D Max and they rotate as I want in Max.

        protected override void LoadContent()
        {
            spriteBatch = new SpriteBatch(GraphicsDevice);
            model = Content.Load<Model>("Models/Alien1");
        }

        protected override void Update(GameTime gameTime)
        {
            camera.Update(1f, new Vector3(), graphics.GraphicsDevice.Viewport.AspectRatio);
            rotation += 0.1f;
            base.Update(gameTime);
        }

        protected override void Draw(GameTime gameTime)
        {
            GraphicsDevice.Clear(Color.CornflowerBlue);
            Matrix[] transforms = new Matrix[model.Bones.Count];
            model.CopyAbsoluteBoneTransformsTo(transforms);

            Matrix worldMatrix = Matrix.Identity;
            Matrix rotationYMatrix = Matrix.CreateRotationY(rotation);
            Matrix translateMatrix = Matrix.CreateTranslation(location);
            worldMatrix = rotationYMatrix * translateMatrix;

            foreach (ModelMesh mesh in model.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    effect.World = worldMatrix * transforms[mesh.ParentBone.Index];
                    effect.View = camera.viewMatrix;
                    effect.Projection = camera.projectionMatrix;
                    effect.EnableDefaultLighting();
                    effect.PreferPerPixelLighting = true;
                }
                mesh.Draw();
            }
            base.Draw(gameTime);
        }

    More Info: Rotating the object via its properties works fine, so I'm guessing there's something up with the code rather than with the object itself. Translating the object also causes the objects to get moved independently of each other rather than as a single model, and each piece becomes spread around the scene. The model is in .X format.
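
    A hedged observation rather than a confirmed fix: XNA's model samples usually apply the bone transform first and the shared world transform second, so if the multiplication order is reversed each mesh ends up rotated about its own pivot. A minimal sketch of the re-ordered line:

        // Bone transform (places the part within the model) first,
        // then the one world transform shared by every part.
        effect.World = transforms[mesh.ParentBone.Index] * worldMatrix;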

    Read the article
