Search Results

Search found 5322 results on 213 pages for 'answers'.


  • Is mocking for unit testing appropriate in this scenario?

    - by Vinoth Kumar
    I have written around 20 methods in Java, and all of them call some web services. None of these web services are available yet. To carry on with the server-side coding, I hard-coded the results that each web service is expected to give. Can we unit test these methods? As far as I know, unit testing is mocking the input values and seeing how the program responds. Is mocking both input and output values meaningful? Edit: The answers here suggest I should be writing unit test cases. Now, how can I write them without modifying the existing code? Consider the following sample code (hypothetical code):

        public int getAge() {
            Service s = locate("ageservice"); // line 1
            int age = s.execute(empId);       // line 2
            return age;                       // line 3
        }

    Now how do we mock the output? Right now, I am commenting out line 1 and replacing line 2 with int age = 50. Is this right? Can anyone point me to the right way of doing it?
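
    One common way to get a mockable seam here, sketched below in Java rather than taken from the question's code, is to pass the Service in from outside (constructor injection) so a test can hand in a stub instead of commenting lines out; the Service interface, EmployeeReader class and stub are all hypothetical names:

        // Hypothetical seam: the dependency is supplied from outside instead of located inside the method.
        interface Service {
            int execute(String empId);
        }

        class EmployeeReader {
            private final Service ageService;
            private final String empId;

            EmployeeReader(Service ageService, String empId) {
                this.ageService = ageService;
                this.empId = empId;
            }

            int getAge() {
                return ageService.execute(empId); // nothing to comment out when testing
            }
        }

        public class EmployeeReaderTest {
            public static void main(String[] args) {
                Service fakeAgeService = empId -> 50; // stub standing in for the unavailable web service
                EmployeeReader reader = new EmployeeReader(fakeAgeService, "E123");
                System.out.println("getAge() returned " + reader.getAge()); // prints 50
            }
        }

    With this shape, a mocking library such as Mockito could replace the hand-written stub, but none is required; production code simply constructs EmployeeReader with the real locator-backed Service.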

    Read the article

  • How to build Gantt chart from a set of Redmine tickets without filling dates in all of them?

    - by Alexander Gladysh
    Redmine 1.1.1. I've created a set of tickets for a new project. In each issue I filled in the Subject, Description and Estimated time fields. I also filled in the blocks/blocked-by dependencies under Related issues. But the Gantt chart for this project is empty (that is, it contains all the tasks, but does not contain any "bars" for them). I need to get a Gantt chart (or any other visual representation) to show to other project members. I'd hate to type all that information again into OpenProj. Is there a way to get a serviceable Gantt chart out of Redmine? Update: In the answers below I read that to get a working Gantt chart I have to enter the start date and due date manually for each issue. I believe that this information should be inferred automatically from the start date of the first ticket (first, dependency-wise), the estimated time of each ticket, the dependency graph, resource assignments and the working-hours calendar, just as happens in any minimally sane Gantt chart project management tool. To enter this information by hand, and to keep it up to date manually as the project evolves, is an insane waste of time. Is there a way to generate a Gantt chart from the set of Redmine tickets without filling in all this information manually? (Solutions involving data export + import into a sane tool, or involving existing plugins, are perfectly acceptable.)

    Read the article

  • General questions regarding open-source licensing

    - by ndg
    I'm looking to release an open-source iOS software project, but I'm very new to the licensing side of things. While I'm aware that the majority of answers here will not be from lawyers, I'd appreciate it if anyone could steer me in the right direction. With the exception of the following requirements, I'm happy for developers to largely do whatever they want with the project's source code. I'm not interested in any copyleft licensing schemes, and while I'd like to encourage attribution in derivative works, it is not required. As such, my requirements are as follows: The original source can be distributed and re-distributed (verbatim), both commercially and non-commercially, as long as the original copyright information, website link and license are maintained. I wish to retain rights to any of the multimedia distributed as part of the project (sound effects, graphics, logo marks, etc.). Such assets will be included to allow other developers to easily run the project, but cannot be re-distributed in any manner. I wish to retain rights to the application's name and branding. Further to selecting an applicable license, I have the following questions: The project makes use of a number of third-party libraries (all licensed under variants of the MIT license). I've included the individual licenses within the source (and application) and believe I've met all requirements expressed in these licenses, but is there anything else that needs to be done before distributing them as part of my open-source project? Also included in my project is a single proprietary, closed-source library that's used to power a small part of the application. I'm obviously unable to include this in the source release, but what's the best way of handling this? Should I simply weak-link against it and exclude it entirely from the Git project?

    Read the article

  • Case Management In-Depth: Stakeholders & Permissions by Mark Foster

    - by JuergenKress
    We've seen in the previous three posts in this series what Case Management is, how it can be configured in BPM Studio, and its lifecycle. I now want to go into more depth on specific areas such as: Stakeholders & Permissions, Case Activities, Case Rules, etc. In the process of designing a Case Management solution it is important to know what approach to take, what questions to ask and, based on the answers to those questions, how to implement. I'll start with Stakeholders & Permissions. Stakeholders: the users that perform actions on case objects, defined at a business level, e.g. "Help Desk Agent", "Help Desk Supervisor", etc. Read the full article here. SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • How can I re-enable the typing break in 11.10?

    - by Hamish Downer
    I've just upgraded to beta 2 of Oneiric/11.10 and the typing break has gone. I've gone into the system settings and looked in "Keyboard Layout" and "Keyboard" and can't find anything. Has it just been dropped? Is there some hidden way to re-enable it? Update: Thought I'd write an update based on some things that have happened since this question (and the two answers) were written. Workrave has now been reinstated in oneiric-backports and for 12.04 (how to enable backports). It works fine, though if you want to put it in your systray then you need to allow it in there. The easy/lazy command-line way to allow Workrave into the notification area is to do something like: gsettings set com.canonical.Unity.Panel systray-whitelist "['all']" But read this question if you want a more detailed explanation of what you're doing here. Meanwhile, the GNOME typing break has been split out into an app called DrWright; however, it has not (at the time of writing) been packaged for 11.10 (or later). And as mentioned in the other answer, another option is RSIBreak. It is a KDE app but works fine in Unity.

    Read the article

  • Reasons NOT to open source not-for-profit code?

    - by naught101
    I am a big fan of open source code. I think I understand most of the advantages of going open source. I'm a science student researcher, and I have to work with quite a surprising amount of software and code that is not open source (either it's proprietary, or it's not public). I can't really see a good reason for this, and I can see that the code, and the people using it, would definitely benefit from it being more public (if nothing else, in science it's vital that your results can be replicated if necessary, and that's much harder if others don't have access to your code). Before I go out and start proselytising, I want to know: are there any good arguments for not releasing not-for-profit code publicly, and with an OSI-compliant license? (I realise there are a few similar questions on SE, but most focus on situations where the code is primarily used for making money, and I couldn't find much relevant in the answers.) Clarification: By "not-for-profit", I am also counting downstream profit motives, such as parent-company brand recognition and investor profit expectations, as profit motives. In other words, the question relates only to software for which there is NO profit motive tied to the software whatsoever.

    Read the article

  • Webmaster Tools, www and no-www, duplicate content and subdomains

    - by Jay
    After many hours of research on many websites, I have not come to any conclusive answer on the specific issue I am trying to figure out. My company has two websites: a main one at www.example.com and one at subdomain.example.com, which is a subdomain of the first and is our self-hosted blog. The way Google sees these with the www or no-www (called naked from now on) is that each of them is actually a different site depending on whether the www is used in front of the domain. I completely understand this. It is also advised that both should be set up in Google Webmaster Tools, which I have done. Correct me if I am wrong in regard to having both set up. Now, it appears that we can only set a preferred domain in Webmaster Tools at the root domain level. The subdomain cannot have this, and actually shows the message "Restricted to root level domains only". So it appears that the subdomain should follow what the root domain says, which for our preferred domain is to display www.example.com and not the naked version. That is one issue I have, in that one displays one way and the other displays another. Is it that we have the wrong redirects in place for the subdomain? Another question: does this have any effect on SEO, in regards to duplicate content on the web, given how we have set this up?
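
    For reference, the usual way to collapse the naked and www versions into one canonical host on Apache is a 301 redirect; the .htaccess sketch below assumes mod_rewrite is enabled and uses example.com purely as a placeholder:

        RewriteEngine On
        # Send naked-domain requests to the www host with a permanent (301) redirect
        RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
        RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]

    With a redirect like this in place, the Webmaster Tools preferred-domain setting and the live site agree, which helps avoid the duplicate-content ambiguity.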

    Read the article

  • SQL – Crossword Puzzle Based on Course Building Successful High Traffic Profitable Blog

    - by Pinal Dave
    Do you like crossword puzzles? I personally love them. Every time I open the newspaper, I try to solve at least one crossword or sudoku. It is just fun to tease the brain a little and stretch its limits. Regular readers of the blog are aware that I have recently published two courses on how to build a successful, high-traffic, profitable blog. Here are the links to watch both courses: Course 1, Course 2. Do watch them in order, as both courses have unique content which can help you build a better blog. On my birthday, July 30th, an interesting post appeared on the Pluralsight blog: a crossword built from my two courses. I encourage you to try to solve it. Giveaway: There is a cool gift for the winner: a melting clock. Do not mistake this for a dummy or non-working clock. It looks like it is melting, but it always shows the accurate time, and it is perfectly balanced to hang off any flat surface. How to Participate: It is very simple; you just have to complete the crossword and send it to me at pinal at sqlauthority.com with all valid answers. The deadline is that you must send it before Monday, August 5, 2013, or before the valid answer keys are posted on the Pluralsight blog. Hints: Though the crossword is very easy and intuitive, if you ever get stuck, here are two hints: Hint 1, Hint 2. Log in to the Pluralsight courses and watch them both. Watching the courses will not only help you easily complete the crossword; there are also hidden gems and secrets for building a high-traffic, profitable blog. Here is the link to download the crossword: Download Crossword. Alternatively, you can download the image displayed below and print it. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Blogging

    Read the article

  • 50 Years After The Jetsons

    - by Jason Fitzpatrick
    The Jetsons, the future-oriented animated cartoon series from the 1960s, turned 50 this week. The Smithsonian takes a look at what the show meant, then and now. At the Smithsonian blog Paleofuture, Matt Novak looks back at the last 50 years and the impact that The Jetsons had. He writes: It's important to remember that today's political, social and business leaders were pretty much watching "The Jetsons" on repeat during their most impressionable years. People are often shocked to learn that "The Jetsons" lasted just one season during its original run in 1962-63 and wasn't revived until 1985. Essentially every kid in America (and many internationally) saw the series on constant repeat during Saturday morning cartoons throughout the 1960s, '70s and '80s. Everyone (including my own mom) seems to ask me, "How could it have been around for only 24 episodes? Did I really just watch those same episodes over and over again?" Yes, yes you did. But it's just a cartoon, right? So what if today's political and social elite saw "The Jetsons" a lot? Thanks in large part to The Jetsons, there's a sense of betrayal that is pervasive in American culture today about the future that never arrived. We're all familiar with the rallying cries of the angry retrofuturist: Where's my jetpack!?! Where's my flying car!?! Where's my robot maid?!? "The Jetsons" and everything they represented were seen by so many not as a possible future, but as a promise of one. Hit up the link below for the full article, and prepare to be surprised at just how few episodes of the show were ever animated and aired.

    Read the article

  • Code maintenance: keep a bad pattern when extending code for the sake of consistency, or not?

    - by Guillaume
    I have to extend an existing module of a project. I don't like the way it has been done (lots of anti-patterns involved, like copy/pasted code). I don't want to perform a complete refactor. Should I: create new methods using the existing convention, even if it feels wrong, to avoid confusion for the next maintainer and stay consistent with the code base? Or try to use what I feel is better, even if it introduces another pattern into the code? Precision, edited after the first answers: The existing code is not a mess. It is easy to follow and understand. BUT it introduces lots of boilerplate code that could be avoided with good design (though the resulting code might then become harder to follow). In my current case it's a good old JDBC (Spring template on board) DAO module, but I have already encountered this dilemma before and I'm seeking other developers' feedback. I don't want to refactor because I don't have time, and even with time it would be hard to justify that a whole, perfectly working module needs refactoring; the cost of refactoring would outweigh its benefits. Remember: the code is not messy or over-complex. I cannot just extract a few methods there and introduce an abstract class here. It is more a flaw in the design (the result of extreme "Keep It Stupid Simple", I think). So the question can also be asked like this: as a developer, do you prefer to maintain easy, stupid, boring code, OR to have some helpers that do the stupid boring work in your place? The downside of the latter is that you'll have to learn some things, and maybe you will have to maintain the easy, stupid, boring code too until a full refactoring is done.

    Read the article

  • Additional new content SOA & BPM Partner Community

    - by JuergenKress
    Oracle Fusion Middleware 12c (12.1.2.0.0) Released - Download (OTN, eDelivery) Whitepaper: Next Generation Service Integration Platform - PDF SOA Maturity: This article in the Industrial SOA series offers an exploration of the fundamentals of applying a factory approach to modern service-oriented software development. Read the article. Enterprise Service Bus: The fifth article in the Industrial SOA series answers some of the most important questions about the use of an enterprise service bus, using concrete examples to clarify areas of application that can be deemed correct for ESBs. Read the article. DevOps, Cloud, and Role Creep: DevOps and cloud computing are changing the IT industry - and changing IT roles. A panel of community members discusses what's happening and how it might affect your job. Listen to the podcast. Industrial SOA - Now chapters 1 to 5 available | Torsten Winterberg White Paper: Cloud Integration - A Comprehensive Solution White Paper: Next Generation Service Integration Platform: SOA Suite on Exalogic IT Briefcase Interview: An Integrated Approach to Mobile, Cloud, and API Management Technologies with Oracle Fusion Middleware Webcast: Oracle Cloud Integration - Information Week Webcast eBook: Oracle SOA Suite - In the Customers' Words Podcast: Cloud Integration Transitioning from TIBCO to Oracle SOA Suite - Part 1 Events: Oracle Simplifying Integration of Cloud and On-Premise New B2B Book Published for Oracle SOA B2B 11g Get Fast-Data Accelerator in Your Hands Today: Mobile Data Offloading for Telecom Fast Data Accelerator - Blog New Oracle Process Accelerators in Financial Services & Telco Detect, Analyze, Act Fast with BPM Improving the Quality of Healthcare with BPM Engineers Australia Improves and Automates Business Processes and Completes Engineer Enrollments up to 90% Faster with Middleware Platform - Case Study | PPT Specialized Partner Ataway on BPM Practice - Video eProseed Delivers Processes Skillfully with Oracle BPM Suite - Video Yarra Valley Water Uses SOA and BPM for Orchestration, Re-use and Visibility - Video Victoria University Discusses Oracle SOA & Oracle BPM - Video SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Version control for game development - issues and solutions?

    - by Cyclops
    There are a lot of version control systems available, including open-source ones such as Subversion, Git, and Mercurial, plus commercial ones such as Perforce. How well do they support the process of game development? What are the issues in using a VCS with regard to non-text (binary) files, large projects, and so on? What are the solutions to these problems, if any? For organization of answers, let's try a per-package basis. Update each package/answer with your results. Also, please list some brief details in your answer about whether your VCS is free or commercial, distributed versus centralized, etc. Update: Found a nice article comparing two of the VCSes below - apparently, Git is MacGyver and Mercurial is Bond. Well, I'm glad that's settled... And the author has a nice quote at the end: It's OK to proselytize to those who have not switched to a distributed VCS yet, but trying to convert a Git user to Mercurial (or vice versa) is a waste of everyone's time and energy. Especially since Git and Mercurial's real enemy is Subversion. Dang, it's a code-eat-code world out there in FOSS-land...

    Read the article

  • Is Intellisense faster in Visual Studio 2012 compared to Visual Studio 2010 for C++ projects?

    - by syplex
    We switched to VS2010 from VS2003 a few months ago, and there are many, many improvements. But the speed of IntelliSense is not one of them (although it does generate higher-quality results, which is great). I read that IntelliSense and the MSDN help system were being improved in VS2012, so I'm curious whether it's actually faster. The only data I could find were graphs from an early release (VS2011). For the record, I am using a vanilla install of VS2010 with SP1 on Windows 7 SP1 (x64), with no plugins or add-ins running. What I'm looking for specifically: Has the speed of IntelliSense autocomplete improved? Has the speed of F12 (go to definition) improved? The answers to these questions will help in determining whether VS2012 is worth the money to upgrade at this time, as the IntelliSense slowness would be the only major reason for upgrading. I'd also be interested in knowing whether the help system has improved. I'm currently using the MSDN help from VS2008 SP1 because it has filtering and is faster.

    Read the article

  • Allow access to WordPress site only by links in email newsletters

    - by Shane
    I send out a personal email newsletter, and have been looking into sending it via some service like MailChimp or sendy.co. Many of these email services suggest, or require, that the newsletter content be available online, in case the recipient's email app doesn't render it properly, or at all. The thing is, I don't want my newsletter contents visible to the whole world, nor do I want to require existing recipients to create accounts (or be assigned accounts) with passwords. So, the question is: how can my WordPress site content be made viewable only by clicking the link to it in the email newsletter? It should not be findable in a Google search, but once at the site, the visitor can view previous newsletter contents. It seems an .htaccess file would do the trick, but I have been unable to figure out the syntax for this. Thanks for your help. I have copied below two other questions, and answers, which have helped me word my question clearly. Similar to this request about allowing access to a certain group while still restricting access to the world: Is there a way to password protect a directory only in cPanel, without the user being prompted for the password when they try to access it via the web? This person's question is the closest I could find to my situation: Restrict direct folder access via .htaccess except via specific links
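
    A rough sketch of one possible .htaccess approach, assuming Apache with mod_rewrite, that the newsletter archive lives under a /newsletter/ path, and that the links in each email carry a secret token (nlkey=SECRET); every one of those names is a placeholder, not anything WordPress provides out of the box:

        RewriteEngine On

        # Visitors arriving from a newsletter link carry the token; remember them with a cookie
        RewriteCond %{QUERY_STRING} nlkey=SECRET
        RewriteRule ^ - [CO=nl_access:1:.example.com:1440]

        # Anyone without the token or the cookie is refused the newsletter pages
        RewriteCond %{QUERY_STRING} !nlkey=SECRET
        RewriteCond %{HTTP_COOKIE} !nl_access=1
        RewriteRule ^newsletter/ - [F]

    Adding a noindex header (for example Header set X-Robots-Tag "noindex, nofollow" via mod_headers in the newsletter directory) keeps the pages out of Google, and if WordPress's own permalink rules are present, the token check needs to sit above the standard # BEGIN WordPress block.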

    Read the article

  • Installing Ubuntu on an SSD

    - by kunal
    Going to install Ubuntu 10.10 on a new Intel X25-M 80 GB SSD. It will be a fresh install. I have been googling for the past few days and getting overwhelmed by articles/blogs/Q&As. One particularly useful one is: Optimize for SSD (I could not post other links as I don't have enough credits). But with so many suggestions and differences of opinion (on different links), this simple OS install seems a daunting task to me, and I really want to stick with Ubuntu (although I have used it for only a very short period of time). Can someone help me by answering a few questions (yes, they are repeated, as I couldn't comprehend the answers elsewhere)? Which file system (ext2/3/4 or something else), considering SSD life? Can it be changed after installation? Should I partition the disk (as we do with a traditional HDD)? For now there is no plan of dual booting; only Ubuntu will live on the scarce space of the 80 GB SSD. I have 2 GB of RAM; should I still allocate swap space (and if I don't allocate swap space, can I still hibernate the machine)? Will swap space impact SSD life? Should I consider adding another 1 GB of RAM to avoid needing swap space? Linux experience: absolute novice. Intended usage: heavy browsing, programming, regular video/music and some other non-CPU/RAM-intensive programs; I will back up big files to an external hard drive. Laptop config: 3-year-old Vaio, Core 2 Duo, 2 GB RAM. Please pardon the repetition, and I really appreciate anyone helping me get started with Ubuntu.

    Read the article

  • Webcast Tomorrow: Securing the Cloud for Public Sector

    - by Darin Pendergraft
    Securing the Cloud for Public Sector. Click here to register for the live webcast. Cloud computing offers government organizations tremendous potential to enhance public value by helping organizations increase operational efficiency and improve service delivery. However, as organizations pursue cloud adoption to achieve the anticipated benefits, a common set of questions has surfaced: "Is the cloud secure? Are all clouds equal with respect to security and compliance? Is our data safe in the cloud?" Join us December 12th for a webcast, part of the "Secure Government Training Series", to get answers to your pressing cloud security questions and learn how to best secure your cloud environments. You will learn about a comprehensive set of security tools designed to protect every layer of an organization's cloud architecture, from application to disk, while ensuring high levels of compliance, risk avoidance, and lower costs. Discover how to control and monitor access, secure sensitive data, and address regulatory compliance across cloud environments by: providing strong authentication, data encryption, and (privileged) user access control to ensure that information is only accessible to those who need it; mitigating threats across your databases and applications; and protecting applications and information - no matter where they are - at rest, in use and in transit. For more information, access the Secure Government Resource Center or, to speak with an Oracle representative, please call 1.800.ORACLE1. LIVE Webcast: Securing the Cloud for Public Sector. Date: Wednesday, December 12, 2012. Time: 2:00 p.m. ET. Visit the Secure Government Resource Center for information on enterprise security solutions that help government safeguard information, resources and networks.

    Read the article

  • Script/tool to import series of snapshots, each being a new revision, into Subversion, populating source tree?

    - by Rob
    I've developed code locally and taken a fairly regular snapshot whenever I reached a significant point in development, e.g. a working build. So I have a longish list of about 40 folders, each folder being a snapshot, in ascending YYYYMMDD date order, e.g.: 20100523 20100614 20100721 20100722 20100809 20100901 20101001 20101003 20101104 20101119 20101203 20101218 20110102. I'm looking for a script to import each of these snapshots as a new Subversion revision into the source tree, the end result being that the HEAD revision is the same as the last snapshot and the other revisions are numbered accordingly. Some other requirements: the HEAD revision must not be cumulative over the previous snapshots, i.e. files that appeared in older snapshots but don't appear in later ones (e.g. due to refactoring) should not appear in the HEAD revision; meanwhile, there should be continuity between files that do persist between snapshots - Subversion should know that there are previous versions of these files and not treat them as brand-new files within each revision. Some background about my aim: I need to formally revision-control this work rather than keep local private snapshot copies. I plan to release this work as open source, so version control is highly recommended. I am evaluating some of the currently popular version control systems (Subversion and Git), BUT I definitely need a working solution in Subversion. I'm not looking to be persuaded to use one particular tool; I need a solution for each tool I am considering, and I would also like a solution for Git (I have asked the same question separately for Git, so that separate camps of folks with expertise in Git and Subversion can give focused answers on one or the other). The same question but for Git: Script/tool to import series of snapshots, each being a new edition, into GIT, populating source tree? There is an outline answer for Subversion on stackoverflow.com, but not enough specifics about the script - what commands to use, and code to check valid scenarios if necessary - i.e. basically a working script. http://stackoverflow.com/questions/2203818/is-there-anyway-to-import-xcode-snapshots-into-a-new-svn-repository
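
    A rough sketch (bash, GNU tools) of the kind of import script this calls for, assuming an empty repository already exists, a fresh working copy is checked out in ./wc, and the snapshots sit in ./snapshots named by date; rsync --delete keeps HEAD non-cumulative, and scheduling adds/removes through Subversion preserves the history of files that persist between snapshots:

        #!/bin/bash
        # Sketch only: import dated snapshot folders as successive Subversion revisions.
        set -e

        for snap in snapshots/*/; do
            name=$(basename "$snap")

            # Mirror the snapshot into the working copy, removing files that vanished,
            # but never touching Subversion's own metadata.
            rsync -a --delete --exclude '.svn' "$snap" wc/

            (
                cd wc
                # Schedule new files for addition...
                svn add --force . > /dev/null
                # ...and schedule files missing from this snapshot for deletion
                # (this simple parse assumes paths without spaces).
                svn status | awk '/^!/ {print $2}' | xargs -r svn rm > /dev/null
                svn commit -m "Import snapshot $name"
            )
        done

    It is worth dry-running the first couple of snapshots and checking svn log and svn ls -R on the result before letting the loop run through all 40 folders.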

    Read the article

  • Best practices: Ajax and server-side scripting with stored procedures

    - by Luka Milani
    I need to rebuild an old, huge website, probably porting everything to ASP.NET and jQuery, and I would like to ask for some suggestions and tips. Currently the website uses: Ajax (client side, with prototype.js), ASP (VBScript server side), SQL Server 2005, and IIS 7 as the web server. The site uses hundreds of stored procedures, and the requests are made via Ajax calls to a single ASP page that contains a huge select case. Here is a short example. JavaScript + Prototype:

        var data = { action: 'NEWS',
                     callback: 'doNews',
                     param1: $('text_example').value,
                     ......: ..........};
        AjaxGet(data); // perform a call using another function + prototype

    Server-side ASP:

        <%
        ......
        select case request("Action")
            case "NEWS"
                With cmmDB
                    .ActiveConnection = Conn
                    .CommandText = "sp_NEWS_TO_CALL_for_example"
                    .CommandType = adCmdStoredProc
                    Set par0DB = .CreateParameter("Param1", adVarchar, adParamInput, 6)
                    Set par1DB = .CreateParameter(".....", adInteger, adParamInput)
                    ' ........ can be more parameters
                    .Parameters.Append par0DB
                    .Parameters.Append par1DB
                    par0DB.Value = request("Param1")
                    par1DB.Value = request(".....")
                    set rs = cmmDB.execute
                    RecodsetToJSON rs, jsa ' create JSON response using a sub
                End With
        ....
        %>

    So, as you can see, I have one ASP page with a lot of cases, and this page answers all the Ajax requests on the site. My questions are: Instead of having many cases, is it possible to write dynamic VB code that parses the Ajax request and builds the call to the desired stored procedure dynamically (including the parameters passed from JavaScript)? What is the best approach to handling situations like this while using the advantages of .NET plus prototype or jQuery? How do big sites handle situations like this? Do they create one page per request type? Thanks in advance for suggestions, directions and tips.
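
    The dynamic-dispatch idea is usually built around a whitelist that maps each allowed action to a stored procedure and its expected parameters, so client input can never name an arbitrary procedure. Below is a small illustrative sketch of that pattern in Java/JDBC (the names are invented, and the same shape carries over to an ASP.NET generic handler with ADO.NET):

        import java.sql.*;
        import java.util.*;

        // Sketch: map each allowed "action" to a stored procedure plus its ordered
        // parameter names, then build the call generically instead of a giant switch.
        public class SpDispatcher {
            private static final Map<String, String[]> ACTIONS = Map.of(
                    "NEWS", new String[] { "sp_NEWS_TO_CALL_for_example", "Param1" }
                    // , "OTHER", new String[] { "sp_other_example", "ParamA", "ParamB" }
            );

            public static ResultSet dispatch(Connection conn, String action,
                                             Map<String, String> requestParams) throws SQLException {
                String[] spec = ACTIONS.get(action);
                if (spec == null) throw new IllegalArgumentException("Unknown action: " + action);

                int paramCount = spec.length - 1;
                String placeholders = String.join(",", Collections.nCopies(paramCount, "?"));
                CallableStatement call =
                        conn.prepareCall("{call " + spec[0] + "(" + placeholders + ")}");
                for (int i = 1; i <= paramCount; i++)
                    call.setString(i, requestParams.get(spec[i])); // a real version would carry types too
                return call.executeQuery();
            }
        }

    The important design point is that the metadata (which procedures exist, which parameters they take, and their types) lives on the server, so the Ajax payload only selects among pre-approved calls.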

    Read the article

  • Calculating collision force with AfterCollision/NormalImpulse is unreliable when IgnoreCCD = false?

    - by Michael
    I'm using Farseer Physics Engine 3.3.1 in a very simple XNA 4 test game. (Note: I'm also tagging this Box2D, because Farseer is a direct port of Box2D and I will happily accept Box2D answers that solve this problem.) In this game, I'm creating two bodies. The first body is created using BodyFactory.CreateCircle and BodyType.Dynamic. This body can be moved around using the keyboard (which sets Body.LinearVelocity). The second body is created using BodyFactory.CreateRectangle and BodyType.Static. This body is static and never moves. Then I'm using this code to calculate the force of collision when the two bodies collide:

        staticBody.FixtureList[0].AfterCollision += new AfterCollisionEventHandler(AfterCollision);

        protected void AfterCollision(Fixture fixtureA, Fixture fixtureB, Contact contact)
        {
            float maxImpulse = 0f;
            for (int i = 0; i < contact.Manifold.PointCount; i++)
                maxImpulse = Math.Max(maxImpulse, contact.Manifold.Points[i].NormalImpulse);
            // maxImpulse should contain the force of the collision
        }

    This code works great if both of these bodies are set to IgnoreCCD=true. I can calculate the force of collision between them 100% reliably. Perfect. But here's the problem: if I set the bodies to IgnoreCCD=false, that code becomes wildly unpredictable. AfterCollision is called reliably, but for some reason the NormalImpulse is 0 about 75% of the time, so only about one in four collisions is registered. Worse still, the NormalImpulse seems to be zero for completely random reasons. The dynamic body can collide with the static body 10 times in a row in virtually exactly the same way, and only 2 or 3 of the hits will register with a NormalImpulse greater than zero. Setting IgnoreCCD=true on both bodies instantly solves the problem, but then I lose continuous physics detection. Why is this happening and how can I fix it? Here's a link to a simple XNA 4 solution that demonstrates this problem in action: http://www.mediafire.com/?a1w242q9sna54j4

    Read the article

  • Partial recalculation of visibility on a 2D uniform grid

    - by Martin Källman
    Problem: Imagine that we have a 2D uniform grid of dimensions N x N. For this grid we have also pre-computed a visibility look-up table, e.g. with DDA, which answers the boolean query "is cell X visible from cell Y?". The look-up table is a complete graph K_N over the cells V of the grid, with each edge E holding a binary value denoting the visibility between its endpoints. Question: If any given cell has its visibility modified, is it possible to extract the subset E_delta of edges which must have their visibility recomputed due to the change, so as to avoid a full recomputation for the entire grid? (Which is N(N-1)/2 or N^2 edges, depending on the implementation.) Update: If it is not possible to solve this in closed form, then maintaining a separate mapping from each cell to every cell pair whose line intersects said cell might also be an option. This obviously consumes more memory, but the data is static. The increased memory requirement could be reduced by introducing a hierarchy, subdividing the grid into smaller parts; by doing so, the above mapping can be reused for each sub-grid. This would come at a cost in terms of increased computation relative to the number of subdivisions, and would also require a resumable ray-casting algorithm.
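
    The reverse mapping mentioned in the update can be sketched quite directly; the Java below uses a Bresenham traversal as a stand-in for whatever DDA built the original table, and all names are invented. Every pair's sight line is traced once up front, and each crossed cell records that pair, so modifying one cell only invalidates the pairs listed for it (at the large static memory cost the update already acknowledges):

        import java.util.*;

        // Sketch: map each grid cell to the (from, to) pairs whose sight line crosses it,
        // so a change to one cell yields exactly the edge subset to recompute.
        class VisibilityIndex {
            final int n; // grid is n x n
            final Map<Integer, List<int[]>> cellToPairs = new HashMap<>();

            VisibilityIndex(int n) {
                this.n = n;
                for (int a = 0; a < n * n; a++)
                    for (int b = a + 1; b < n * n; b++)
                        for (int cell : lineCells(a % n, a / n, b % n, b / n))
                            cellToPairs.computeIfAbsent(cell, k -> new ArrayList<>())
                                       .add(new int[] { a, b });
            }

            // Pairs whose visibility must be recomputed after changedCell is modified.
            List<int[]> affectedPairs(int changedCell) {
                return cellToPairs.getOrDefault(changedCell, Collections.emptyList());
            }

            // Bresenham line between two cells; returns the index of every cell visited.
            private List<Integer> lineCells(int x0, int y0, int x1, int y1) {
                List<Integer> cells = new ArrayList<>();
                int dx = Math.abs(x1 - x0), dy = -Math.abs(y1 - y0);
                int sx = x0 < x1 ? 1 : -1, sy = y0 < y1 ? 1 : -1, err = dx + dy;
                while (true) {
                    cells.add(y0 * n + x0);
                    if (x0 == x1 && y0 == y1) break;
                    int e2 = 2 * err;
                    if (e2 >= dy) { err += dy; x0 += sx; }
                    if (e2 <= dx) { err += dx; y0 += sy; }
                }
                return cells;
            }
        }

    The hierarchical subdivision described in the update would shrink the memory footprint by keeping one such index per sub-grid rather than one for the whole board.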

    Read the article

  • Webcast - Set Your Sights on Enterprise 2.0 in the Cloud

    - by [email protected]
    To gain a competitive edge in your market, you need your business processes to be more collaborative, agile, and flexible to meet growing business demands. How can you make that happen? One way is to deploy portal, content management, and Enterprise 2.0 capabilities on a cloud infrastructure. According to top industry analysts, Enterprise 2.0 and cloud computing are two of the top three CIO initiatives in 2010. What are some of the advantages associated with deploying your Enterprise 2.0 initiatives in a cloud environment? Learn about the security, performance, and flexibility benefits that are available to you. Watch our complimentary live Webcast, Cloud Computing and Enterprise 2.0--Gain a Competitive Advantage, to get the answers you're looking for. Find out how Oracle pioneered the highly scalable and highly secure solutions that will enable you to: quickly deploy on a cloud computing infrastructure that can scale as projects go viral; accelerate business processes, such as new product introduction, customer service, and new employee on-boarding; and take advantage of best practices in cloud computing and Enterprise 2.0 implementations. Join us for this LIVE webcast tomorrow as we show you how to achieve a higher level of performance and flexibility with Enterprise 2.0 and cloud computing. Register today for the live Webcast.

    Read the article

  • ArchBeat Link-o-Rama for 2012-06-01

    - by Bob Rhubart
    Complexity of Social Computing - Is it a Consideration for EA's? | Pat Shepherd blogs.oracle.com Pat Shepherd asks, "Does Enterprise Architecture need to consider Social Computing in its scope?" Who should own the Enterprise Architecture? | Michael Glas blogs.oracle.com "Instead of looking at just who owns the architecture," suggests Michael Glas, "think about what the person/role/organization should do." The Application Architecture Domain | Michael Glas blogs.oracle.com Michael Glas asks - and answers: "As an Enterprise Architect, what do I need to consider when looking at/defining/designing the Application Architecture Domain?" CAP Twelve Years Later: How the "Rules" Have Changed | Eric Brewer www.infoq.com The CAP theorem asserts that any networked shared-data system can have only two of three desirable properties. However, by explicitly handling partitions, designers can optimize consistency and availability, thereby achieving some trade-off of all three. Oracle DB with OEM in Amazon Cloud | Dr. Frank Munz www.munzandmore.com Dr. Frank Munz shares a screencast that explains "how to create an Oracle DB instance in AWS, how to enable OEM...and how to connect to your cloud instance with a local installation of NetBeans." Sample External Login.jsp page for Oracle Access Manager 11g | Brian Eidelman fusionsecurity.blogspot.com A-Team blogger Brian Eidelman expands on a previous post dealing with configuring OAM 11g to use an externally hosted custom login page. Bay Area Coherence Special Interest Group (BACSIG) Meeting June 7 coherence.oracle.com Date: Thursday, June 7, 2012 Time: 5:30pm - 9:00pm PT Where: Oracle Conference Center, Room 103, 350 Oracle Parkway, Redwood Shores, CA Presentations: 6:00 p.m. - Coherence 101, The Evolution of Distributed Caching - Noah Arliss (Oracle) 7:00 p.m. - Optimizing Performance for Oracle Coherence and TopLink Grid at OOCL - Matt Rosen, Leo Limqueco (OOCL) 8:00 p.m. - Oracle Coherence Message Bus - Extreme Performance on Oracle Exalogic - Ballav Bihani (Oracle) Thought for the Day "I can't be left unsupervised." - Ron Wood (born 06/01/1947) Source: Brainy Quote

    Read the article

  • Sort algorithms that work on large amounts of data

    - by Giorgio
    I am looking for sorting algorithms that can work on a large amount of data, i.e. that can work even when the whole data set cannot be held in main memory at once. The only candidate that I have found so far is merge sort: you can implement the algorithm in such a way that it scans the data set at each merge without holding all of it in main memory at once. The variation of merge sort I have in mind is described in this article, in the section "Use with tape drives". I think this is a good solution (with complexity O(n log n)), but I am curious to know whether there are other (possibly faster) sorting algorithms that can work on large data sets that do not fit in main memory. EDIT: Here are some more details, as required by the answers: The data needs to be sorted periodically, e.g. once a month; I do not need to insert a few records and have the data sorted incrementally. My example text file is about 1 GB of UTF-8 text, but I want to solve the problem in general, even if the file were, say, 20 GB. It is not in a database and, due to other constraints, it cannot be. The data is dumped by others as a text file; I have my own code to read this text file. The format of the data is a text file in which newline characters are record separators. One possible improvement I had in mind was to split the file into files that are small enough to be sorted in memory, and finally merge all these files using the algorithm I described above.
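
    A compact sketch of the split-sort-merge idea from the last sentence, written in Java; it assumes one record per line and plain lexicographic order, and the chunk size is an arbitrary placeholder to be tuned to the available memory:

        import java.io.*;
        import java.nio.charset.StandardCharsets;
        import java.nio.file.*;
        import java.util.*;

        // Sketch of an external merge sort: sort fixed-size chunks in memory,
        // spill them to temporary "run" files, then do a k-way merge with a priority queue.
        public class ExternalSort {
            static final int CHUNK_LINES = 1_000_000; // placeholder; tune to the memory budget

            public static void sort(Path input, Path output) throws IOException {
                List<Path> runs = new ArrayList<>();
                try (BufferedReader in = Files.newBufferedReader(input, StandardCharsets.UTF_8)) {
                    List<String> chunk = new ArrayList<>();
                    String line;
                    while ((line = in.readLine()) != null) {
                        chunk.add(line);
                        if (chunk.size() == CHUNK_LINES) runs.add(writeRun(chunk));
                    }
                    if (!chunk.isEmpty()) runs.add(writeRun(chunk));
                }
                mergeRuns(runs, output);
            }

            private static Path writeRun(List<String> chunk) throws IOException {
                Collections.sort(chunk); // in-memory sort of one chunk
                Path run = Files.createTempFile("run-", ".txt");
                Files.write(run, chunk, StandardCharsets.UTF_8);
                chunk.clear();
                return run;
            }

            private static void mergeRuns(List<Path> runs, Path output) throws IOException {
                // Each entry holds one run's reader plus the next line it will contribute.
                class Head {
                    final BufferedReader reader;
                    String line;
                    Head(BufferedReader r) throws IOException { reader = r; line = r.readLine(); }
                }
                PriorityQueue<Head> queue = new PriorityQueue<>(Comparator.comparing((Head h) -> h.line));
                for (Path run : runs) {
                    Head h = new Head(Files.newBufferedReader(run, StandardCharsets.UTF_8));
                    if (h.line != null) queue.add(h);
                }
                try (BufferedWriter out = Files.newBufferedWriter(output, StandardCharsets.UTF_8)) {
                    while (!queue.isEmpty()) {
                        Head h = queue.poll();
                        out.write(h.line);
                        out.newLine();
                        h.line = h.reader.readLine();
                        if (h.line != null) queue.add(h); else h.reader.close();
                    }
                }
            }
        }

    Usage would be a single call such as ExternalSort.sort(Paths.get("dump.txt"), Paths.get("sorted.txt")); swapping the comparator for a locale-aware or field-based one changes the ordering without touching the merge, and the temporary run files can be deleted once the merge finishes.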

    Read the article

  • What is the best way to learn C# programming and gain the capability to do anything with C#?

    - by MSU
    I browsed all the answers given on Stack Overflow and couldn't find my answer; that's why I am writing this, so please don't mark it as a duplicate. In my case, I want to gain the capability to do anything in C#, from building applications to solving problems. I searched for and tried to read books. Then one of the experts said: reading books will not do you any good; to learn, you have to solve problems, i.e. real-world problems, in C#. He gave me some sample problems which I had previously done in C++, and I started doing them. But the thing is that I know the internal logic of solving the problem, or the algorithm, yet I don't know how to implement it efficiently in C#. So the gap is that I can't think in C#: I know the message to be passed, but not the exact way to pass it. I wrote a program to solve a problem, then found out there were much easier ways of doing what I had done the hard way. So, can you C# experts please enlighten me: what do I need in order to get hold of the language and gain the ability to code in C# proficiently?

    Read the article

  • How can I install Celtx 2.9.7 properly on Ubuntu 12.04 LTS?

    - by cruxfilm
    I am new to Ubuntu and Linux, and I want to install and use the newest version of the screenwriting software Celtx on Ubuntu 12.04 LTS. After trying https://answers.launchpad.net/ubuntu/+source/ubiquity/+question/206295 and using sudo add-apt-repository ppa:dreamstudio/video, sudo apt-get update, sudo apt-get install celtx, I unfortunately had to find out that this was a rather old version with a fairly screwed-up UI. I then downloaded the newest version from http://download.celtx.com/2.9.7/Celtx-2.9.7-64.tar.bz2, but now I don't know how to install it properly. I extracted it to /home/username/ (there was no ~/bin/) as described here, and I can now launch the application by running the file celtx within that folder (I get asked whether I want to display it, run it, or run it in a terminal) and it works fine. But I can't get it to launch from Unity. I tried right-clicking the launcher button and choosing "Lock to Launcher" while it's running, and it does create an icon, but clicking it to launch the program does nothing. Also, searching for celtx in the Dash doesn't find the app. Any advice on how to properly install Celtx 2.9.7 on Ubuntu 12.04 LTS?
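
    Not an official recipe, but the usual way to make a manually extracted application appear in the Dash and Launcher is a .desktop entry saved as ~/.local/share/applications/celtx.desktop; a rough sketch, assuming the archive was unpacked to /home/username/Celtx (the Exec and Icon paths are guesses and must point at wherever the files actually live):

        [Desktop Entry]
        Type=Application
        Name=Celtx
        Comment=Media pre-production and screenwriting
        # Adjust these two paths to the real extraction location; any icon image will do
        Exec=/home/username/Celtx/celtx
        Icon=/home/username/Celtx/celtx.png
        Terminal=false
        Categories=Office;AudioVideo;

    Once the file is in place it should show up when searching the Dash; if it does not appear immediately, logging out and back in usually refreshes the application list.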

    Read the article
