Search Results

Search found 6682 results on 268 pages for 'edge cases'.

  • Why are MVC & TDD not employed more in game architecture?

    - by secoif
    I will preface this by saying I haven't looked at a huge amount of game source, nor built much in the way of games. But coming from trying to employ 'enterprise' coding practices in web apps, looking at game source code seriously hurts my head: "What is this view logic doing in with the business logic? This needs refactoring... so does this, refactor, refactorrr."

    This worries me as I'm about to start a game project, and I'm not sure whether trying to MVC/TDD the dev process is going to hinder us or help us, as I don't see many game examples that use this, or much push for better architectural practices in the community. The following is an extract from a great article on prototyping games, though to me it seemed exactly the attitude many game devs seem to use when writing production game code:

        Mistake #4: Building a system, not a game ... if you ever find yourself working on something that isn't directly moving you forward, stop right there. As programmers, we have a tendency to try to generalize our code, and make it elegant and be able to handle every situation. We find that an itch terribly hard not to scratch, but we need to learn how. It took me many years to realize that it's not about the code, it's about the game you ship in the end. Don't write an elegant game component system, skip the editor completely and hardwire the state in code, avoid the data-driven, self-parsing, XML craziness, and just code the damned thing. ... Just get stuff on the screen as quickly as you can. And don't ever, ever, use the argument "if we take some extra time and do this the right way, we can reuse it in the game". EVER.

    Is it because games are (mostly) visually oriented, so it makes sense that the code will be weighted heavily towards the view, and thus any benefit from moving stuff out to models/controllers is fairly minimal, so why bother?

    I've heard the argument that MVC introduces a performance overhead, but this seems to me to be a premature optimisation, and there'd be more important performance issues to tackle before you worry about MVC overheads (e.g. render pipeline, AI algorithms, data structure traversal, etc.).

    Same thing regarding TDD. It's not often I see games employing test cases, but perhaps this is due to the design issues above (mixed view/business logic) and the fact that it's difficult to test visual components, or components that rely on probabilistic results (e.g. operating within physics simulations).

    Perhaps I'm just looking at the wrong source code, but why do we not see more of these 'enterprise' practices employed in game design? Are games really so different in their requirements, or is it a people/culture issue (i.e. game devs come from a different background and thus have different coding habits)?
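
    To make the overhead argument concrete, here is a minimal sketch of the cheapest useful split - mine, not from the question or the quoted article, and the names (PlayerModel, PlayerView, draw_sprite) are made up for illustration. The simulation state knows nothing about rendering, so it can be unit-tested headlessly, and the per-frame cost of the separation is an ordinary function call:

        // Model: pure game state and logic; no rendering dependencies,
        // so it can be updated and asserted on in a test without a window.
        struct PlayerModel {
            float x = 0, y = 0, vx = 0, vy = 0;
            void update(float dt) { x += vx * dt; y += vy * dt; }
        };

        // View: reads the model and draws it; draw_sprite() stands in for
        // whatever rendering call the engine actually provides.
        struct PlayerView {
            void draw(const PlayerModel& m) {
                // draw_sprite("player", m.x, m.y);
            }
        };

        // Game loop tick: the "controller" here is just the call order.
        void tick(PlayerModel& model, PlayerView& view, float dt) {
            model.update(dt);
            view.draw(model);
        }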

    Read the article

  • SOA & Application Grid Specialization step 3 of 6 – Education Competence Center

    - by Jürgen Kress
    Dear Team, in our first step to become SOA Specialized & Application Grid Specialized we highlighted our OMM system to register your opportunities. In our second step we featured our marketing activities to create your reference cases and run joint marketing campaigns. In the third step we will focus on the education criteria: the SOA Sales assessment, SOA Pre-Sales assessment and Support assessment.

    Steps:
    1. Login to Oracle Partner Network (for login support contact the Partner Business Centers)
    2. Go to the OPN Competence Center
    3. Select the Oracle Service-Oriented Architecture 11g Sales Specialist (3 persons required) and click the play button to run the assessment
    4. Select the Oracle Service-Oriented Architecture 11g PreSales Specialist (3 persons required) and click the play button to run the assessment
    5. Select the Oracle Technology Support Specialist (1 person required) and click the play button to run the assessment

    Tips:
    - You can run the assessments as often as you like. After each try you will see your current score and the correct answers to the questions.
    - During your next team meeting, reserve an hour to become specialized jointly.
    - For the first 5 partners who contact us we will order a pizza service to ensure the success of your team meeting!
    - We want your feedback to improve the assessments. If you find an ambiguous question, or one with the wrong context or even wrong answers, send us your feedback. The first 5 partners who send us feedback will get a free competence center coffee cup!

    If you need to get an Oracle Partner Network account please contact our Partner Business Centers. For more information on Specialization please visit our OPN Specialized Webcast Series, and to become a member in our SOA Partner Community please register at www.oracle.com/goto/ema/soa. Jürgen Kress, SOA Partner Adoption EMEA. Thanks for your efforts to become Specialized!

    SOA Specialized                    | Application Grid Specialized
    Proof 2 transactions with OMM      | Proof 2 transactions with OMM
    Create your 2 references           | Create your 2 references
    SOA Sales assessment (3)           | Application Grid Sales Specialist (3)
    SOA Pre-Sales assessment (3)       | Application Grid PreSales Specialist (3)
    Support assessment (1)             | Support assessment (2)
    SOA Implementation assessment (4)  | Application Grid Implementation assessment (4)

    Technorati Tags: soa specialization Oracle Partner Network SOA Partner Community

    Read the article

  • Should I continue my self-taught coding practice or learn how to do coding professionally?

    - by G1i1ch
    Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is I'm 100% self-taught. It's caused my style to deviate greatly from the style of those who are properly trained. It's the techniques and organization of my code that's different.

    It's a mixture of several things I do. I tend to blend several programming paradigms together, like functional and OO. I lean to the functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity, like a game object. Next, I take the simple route when doing something, whereas in contrast it sometimes seems like the code I see from professional programmers is complicated for the sake of it! I use lots of closures. And lastly, I'm not the best commenter. I find it easier just to read through my code than to read the comments, and in most cases I just end up reading the code even when there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read.

    I hear professionally trained programmers go on and on about things like unit tests - something I've never used, so I haven't even the faintest idea of what they are or how they work - and their code has lots and lots of underscores "_", which aren't really to my taste. Most of the techniques I use are straight from me, or from a few books I've read. I don't know anything about MVC, though I've heard a lot about it with things like backbone.js. I think it's a way to organize an application, but it confuses me because by now I've made my own organizational structures.

    It's a bit of a pain. I can't use template applications at all when learning something new, like with Ubuntu's Quickly. I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using.

    It's left me not that confident in the look of my code, and wondering whether I'll cause sparks when joining a company or maybe contributing to open source projects. In fact I'm rather scared of the fact that people will eventually be checking out my code. Is this just something normal any programmer goes through, or should I really look to change up my techniques?

    Read the article

  • Could I be going crazy with Event Handlers? Am I going the "wrong way" with my design?

    - by sensae
    I guess I've decided that I really like event handlers. I may be suffering a bit from analysis paralysis, but I'm concerned about making my design unwieldy or running into some other unforeseen consequence of my design decisions. My game engine currently does basic sprite-based rendering with a panning overhead camera. My design looks a bit like this:

    SceneHandler - Contains a list of classes that implement the SceneListener interface (currently only Sprites). Calls render() once per tick, and sends onCameraUpdate() messages to SceneListeners.

    InputHandler - Polls the input once per tick, and sends a simple onKeyPressed() message to InputListeners. I have a Camera InputListener which holds a SceneHandler instance and triggers updateCamera() events based on what the input is.

    AgentHandler - Calls default actions on any Agents (AI) once per tick, and checks a stack for any new events that are registered, dispatching them to specific Agents as needed.

    So I have basic sprite objects that can move around a scene and use rudimentary steering behaviors to travel. I've gotten onto collision detection, and this is where I'm not sure the direction my design is going is good. Is it a good practice to have many small event handlers? I imagine, going the way I am, that I'd have to implement some kind of CollisionHandler (see the sketch below). Would I be better off with a more consolidated EntityHandler which handles AI, collision updates, and other entity interactions in one class? Or will I be fine just implementing many different event handling subsystems which pass messages to each other based on what kind of event it is? Should I write an EntityHandler which is simply responsible for coordinating all these sub event handlers?

    I realize in some cases, such as my InputHandler and SceneHandler, those are very specific types of events. A large portion of my game code won't care about input, and a large portion won't care about updates that happen purely in the rendering of the scene. Thus I feel my isolation of those systems is justified. However, I'm asking this question specifically about game logic type events.
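
    For what it's worth, a CollisionHandler in the same style as the handlers described above might look like the following minimal C++ sketch. The names and the deliberately naive O(n^2) broad phase are illustrative assumptions, not part of the actual engine:

        #include <cstddef>
        #include <vector>

        class CollisionListener {
        public:
            virtual ~CollisionListener() = default;
            virtual bool intersects(const CollisionListener& other) const = 0;
            virtual void onCollide(CollisionListener& other) = 0;
        };

        class CollisionHandler {
            std::vector<CollisionListener*> listeners;
        public:
            void add(CollisionListener* l) { listeners.push_back(l); }

            // Called once per tick, like the other handlers. Each overlapping
            // pair gets an onCollide() message in both directions.
            void tick() {
                for (std::size_t i = 0; i < listeners.size(); ++i)
                    for (std::size_t j = i + 1; j < listeners.size(); ++j)
                        if (listeners[i]->intersects(*listeners[j])) {
                            listeners[i]->onCollide(*listeners[j]);
                            listeners[j]->onCollide(*listeners[i]);
                        }
            }
        };

    Whether this lives on its own or inside a consolidated EntityHandler changes only who owns the list and calls tick().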

    Read the article

  • Advantages to Multiple Methods over Switch

    - by tandu
    I received a code review from a senior developer today asking "By the way, what is your objection to dispatching functions by way of a switch statement?" I have read in many places about how pumping an argument through switch to call methods is bad OOP, not as extensible, etc. However, I can't really come up with a definitive answer for him. I would like to settle this for myself once and for all. Here are our competing code suggestions (PHP used as an example, but this can apply more universally):

        // Note: the class is named SwitchDispatch here because "switch" is a
        // reserved word in PHP and cannot be used as a class name.
        class SwitchDispatch {
            public function go($arg) {
                switch ($arg) {
                    case "one":
                        echo "one\n";
                        break;
                    case "two":
                        echo "two\n";
                        break;
                    case "three":
                        echo "three\n";
                        break;
                    default:
                        throw new Exception("Unknown call: $arg");
                }
            }
        }

        class Oop {
            public function go_one() { echo "one\n"; }
            public function go_two() { echo "two\n"; }
            public function go_three() { echo "three\n"; }

            public function __call($_, $__) {
                throw new Exception("Unknown call $_ with arguments: " . print_r($__, true));
            }
        }

    Part of his argument was "It (the switch method) has a much cleaner way of handling default cases than what you have in the generic __call() magic method." I disagree about the cleanliness and in fact prefer __call(), but I would like to hear what others have to say.

    Arguments I can come up with in support of the Oop scheme:

    - A bit cleaner in terms of the code you have to write (less of it, easier to read, fewer keywords to consider).
    - Not all actions are delegated to a single method. Not much difference in execution here, but at least the text is more compartmentalized.
    - In the same vein, another method can be added anywhere in the class instead of at a specific spot.
    - Methods are namespaced, which is nice.
    - It does not apply here, but consider a case where SwitchDispatch::go() operated on a member rather than a parameter. You would have to change the member first, then call the method. With Oop you can call the methods independently at any time.

    Arguments I can come up with in support of the Switch scheme:

    - For the sake of argument, a cleaner method of dealing with a default (unknown) request.
    - Seems less magical, which might make unfamiliar developers feel more comfortable.

    Anyone have anything to add for either side? I'd like to have a good answer for him.

    Read the article

  • How should game objects be aware of each other?

    - by Jefffrey
    I find it hard to find a way to organize game objects so that they are polymorphic but at the same time not polymorphic. Here's an example: assume that we want all our objects to update() and draw(). In order to do that we need to define a base class GameObject which has those two pure virtual methods and let polymorphism kick in:

        class World {
        private:
            std::vector<GameObject*> objects;

        public:
            // ...
            void update() {
                for (auto& o : objects) o->update();
                for (auto& o : objects) o->draw(window);
            }
        };

    The update method is supposed to take care of whatever state the specific class object needs to update. The fact is that each object needs to know about the world around it. For example:

    - A mine needs to know if someone is colliding with it
    - A soldier should know if another team's soldier is in proximity
    - A zombie should know where the closest brain, within a radius, is

    For passive interactions (like the first one) I was thinking that the collision detection could delegate what to do in specific cases of collision to the object itself with an on_collide(GameObject*). Most of the other information (like the other two examples) could just be queried via the game world passed to the update method.

    Now, the world does not distinguish objects based on their type (it stores all objects in a single polymorphic container), so what an ideal world.entities_in(center, radius) will in fact return is a container of GameObject*. But of course the soldier does not want to attack other soldiers from his team, and a zombie doesn't care about other zombies. So we need to distinguish the behavior. A solution could be the following:

        void TeamASoldier::update(const World& world) {
            auto list = world.entities_in(position, eye_sight);
            for (const auto& e : list)
                if (auto enemy = dynamic_cast<TeamBSoldier*>(e)) {
                    // shoot towards enemy
                }
        }

        void Zombie::update(const World& world) {
            auto list = world.entities_in(position, eye_sight);
            for (const auto& e : list)
                if (auto enemy = dynamic_cast<Human*>(e)) {
                    // go and eat brain
                }
        }

    but of course the number of dynamic_cast<> calls per frame could be horribly high, and we all know how slow dynamic_cast can be. The same problem also applies to the on_collide(GameObject*) delegate that we discussed earlier. So what is the ideal way to organize the code so that objects can be aware of other objects and are able to ignore them or take actions based on their type?
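
    One commonly suggested direction - sketched here under my own assumptions, not as the definitive answer - is to pay for the type information once with an explicit kind tag on GameObject, so queries can filter with a bitmask test instead of RTTI:

        #include <cstdint>

        // Illustrative kinds; one bit each so queries can combine them.
        enum Kind : std::uint32_t {
            KIND_HUMAN     = 1u << 0,
            KIND_ZOMBIE    = 1u << 1,
            KIND_SOLDIER_A = 1u << 2,
            KIND_SOLDIER_B = 1u << 3,
        };

        class GameObject {
        public:
            explicit GameObject(std::uint32_t kind) : kind_(kind) {}
            virtual ~GameObject() = default;
            std::uint32_t kind() const { return kind_; }
        private:
            std::uint32_t kind_;  // set once at construction; a cheap integer test thereafter
        };

        // A hypothetical filtered overload on World:
        //   std::vector<GameObject*> entities_in(Vec2 center, float radius,
        //                                        std::uint32_t mask) const;
        // TeamASoldier::update() then needs no cast at all:
        //   for (auto* e : world.entities_in(position, eye_sight, KIND_SOLDIER_B))
        //       shoot_towards(e);  // everything returned is already an enemy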

    Read the article

  • Finding Tools Guidance in OUM

    - by user716869
    OUM is not tool-specific. However, it does include tool guidance. Tool guidance in OUM includes:

    - a mention of a tool that could be used to complete a specific task(s)
    - templates created with a specific tool
    - example work products in a specific tool
    - links to tool resources
    - Tool Supplemental Guides

    So how do you find all this helpful tool information? Start at the lowest level first - the Task Overview. Even though the task overviews are written tool-agnostic, they sometimes mention suggestions, or examples, of a tool that might be used to complete the task. More specific tool information can be found in the Task Overview, Templates and Tools section. In some cases, the tool used to create the template (for example, Microsoft Word, PowerPoint, Project and Visio) is useful. The Templates and Tools section also provides more specific tool guidance, such as links to:

    - White Papers
    - Viewlets
    - Example Work Products
    - Additional Resources
    - Tool Supplemental Guides

    If you're more interested in seeing what tools might be helpful in general for your project, or to see if there is any tool guidance for a specific tool that your project is committed to using, go to the Supplemental Guidance page in OUM. This page is available from the Method Navigation pull-down located in the header of almost every OUM page.

    When you open the Supplemental Guidance page, the first thing you see is a table index of everything that is included on the page. At the top of the right column are all the Tool Supplemental Guides available in OUM. Use the index to navigate to any of the guides.

    Next in the right column is Discipline/Industry/View Resources and Samples. Use the index to navigate to any of these topics and see what's available and, more specifically, whether there is any tool guidance available. For example, if you navigate to the Cloud Resources, you will find a link to the IT Strategies from Oracle page that provides information for Cloud Practitioner Guides, Cloud Reference Architectures and Cloud White Papers, including the Cloud Candidate Selection Tool and Cloud Computing Maturity Model.

    The section for Method Tool and Technique Cross References can take you to the Task to Tool Cross Reference. This page provides a task listing with possibly helpful tools and links to more information regarding the tools. By no means is this tool guidance all-inclusive; you can use other tools not mentioned in OUM to complete an OUM task.

    The Method Tool and Technique Cross References can also take you to the various Technique pages (Index and Cross References). While techniques are not necessarily "tools," they can certainly provide valuable assistance in completing tasks.

    In the Other Resources section of the Supplemental Guidance page, you find links to the viewlets and white papers that are included within OUM.

    Read the article

  • From 20,663 issues to 1 issue – style-copping C5.Tests

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/05/28/from-20663-issues-to-1-issuendashstyle-copping-c5.tests.aspx

    I recently became interested in the potential of the C5 Collections solution from http://www.itu.dk/research/c5/, however I was dismayed at the state of the code in the unit test project, so I set about fixing the 20,663 issues detected by StyleCop. The tools I used were the latest versions of:

    - My 64-bit development PC running Windows 8 Update with 8 GB RAM
    - Visual Studio 2013 Ultimate with SP2
    - ReSharper
    - GhostDoc Pro

    My first attempt had to be abandoned due to a collision of class names which broke one of the unit tests. So, being aware of this duplication of class names, I started again and planned to prepend the class names with the namespace name. In some cases I additionally prepended the name of the C5 collection item that was being tested.

    So what was the condition of the code at the start? Besides the sprawl of C# code not written to the StyleCop standard, there was:

    1. Placing of many classes within one physical file.
    2. Namespaces within namespaces that did not follow the project structure.
    3. As already mentioned, duplication of class names across namespaces.
    4. A copyright notice that sprawled but had to be preserved.
    5. Project sub-folders that were all lower case instead of initial-letter capitalised.

    The first step was to add a StyleCop heading, plus the original heading contained within a region, to every file. The next step was to run GhostDoc Pro using its "Document File" option on every file, but not letting it replace the headers I had added. This brought the number of issues down to 18,192. I then went through each file collapsing each class and prepending names as appropriate. At each step, I saved the changes to my local Git.

    The next step was to move each class to its own file and to style-cop each file. ReSharper provides a very useful feature for doing this, which also fixes missing "this." prefixes and moves using statements inside the namespace. Some classes required minimal work whereas others required extensive work to reach the StyleCop standard. The unit tests were run at each split and when each class was completed.

    When all was done, one issue remained, which I will need to submit to the StyleCop team for their advice (and possibly a fix to StyleCop). The updated solution has been made available at https://c5stylecopped.codeplex.com/releases/view/122785.

    Read the article

  • Agile team with no dedicated Tester members. Insane or efficient?

    - by MetaFight
    I'm a software developer. I've been thinking a lot about the efficiency of the software testers I've worked with so far in my career. In fact, I've been thinking a lot about the software tester role in general and have reached a potentially contentious conclusion: non-developer software testing staff are less efficient at software testing than developers.

    Now, before everyone gets upset, hear me out. This isn't mere opinion. Software testing and software development both require a lot of skills in common:

    - Problem solving
    - Thinking about corner cases
    - Analytical skills
    - The ability to define clear and concise step-by-step scenarios

    What developers have in addition to this is the ability to automate their tests. Yes, I know non-dev testers can automate their tests too, but that often then becomes a test maintenance issue. Because automating UI tests is essentially programming, non-dev members encounter all the same difficulties software developers encounter: copy-pasta, lack of code reusability/maintainability, etc.

    So, I was wondering: why not replace all non-dev roles with developer roles? Developers have the skills required to perform software testing tasks, and they have the skills to automate tests and keep them maintainable. Would the following work? Hire a bunch of developers and split them into 2 roles:

    - Software developers
    - Software developers doing testing (some manual, mostly automated by writing integration tests, unit tests, etc.)
    - Software developers doing application support (I've removed this as it is probably a separate question altogether)

    And, in our case, since we're doing Agile development, rotate the roles every sprint or two. Also, if at all possible, try to have people spend their developer stints and testing stints on different projects.

    Ideally you would want to reduce the turnover rate per rotation. So maybe you could have 2 groups and make sure the rotation cycles of the groups are staggered. So, for example, if each rotation was two sprints long, the two groups would have their rotations 1 sprint apart. That way there's only a 50% turnover rate per sprint.

    Am I crazy, or could this work? (Obviously a key component to this working is that all devs want to be in the 3 roles. Let's assume I'm starting a new company and I can hire these ideal people.)

    Edit: I've removed the phrase "QA", as apparently we are using it incorrectly where I work.

    Read the article

  • How to configure a longer version number in Artifactory

    - by claudine
    The version numbers for our jars have to be longer than x.x.x. We need x.x.x.x instead, to integrate an old-fashioned self-made mechanism. This is because we tag our software with x.x.x, and as soon as we have a delivery to a customer, one specific jar has to be built at exactly that point in time to fit another backend which communicates with our program. For that reason this one jar has the version 2.3.4.1 when generated, and in the next delivery of the same version it is built and named 2.3.4.2.

    Now Artifactory cannot handle this and doesn't save more than x.x.x.2 in some cases. So we thought of maybe editing the regular expression in the Maven repository layout (see attached screenshot), because testing the path in the field below shows that it cannot handle the version number. Of course, x.x.x still has to work for the rest of our jars.

    For example, here is the maven-metadata.xml:

        <?xml version="1.0" encoding="UTF-8"?>
        <metadata>
          <groupId>com.firm</groupId>
          <artifactId>someid</artifactId>
          <version>1.5.1</version>
          <versioning>
            <latest>1.5.1</latest>
            <release>1.5.1</release>
            <versions>
              <version>1.4.62</version>
            </versions>
            <lastUpdated>20120926073942</lastUpdated>
          </versioning>
        </metadata>

    The folder structure looks like:

        someid
          1.4.62
          1.4.62.1
          1.4.62.2
          1.4.62.3

    If we deploy a new artifact version (1.4.62.1), the maven-metadata.xml contains the 1.4.62.1 version. But Artifactory overwrites the version number (1.4.62.x) to (1.4.62) after an unspecified time. It seems that Artifactory only supports major, minor and revision numbers, and deletes the build number. We are now looking for a solution to disable this behavior. We use JFrog Artifactory version 2.5.0 (rev. 13086). Any ideas, maybe? Thanks in advance.
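
    For illustration only: Artifactory's custom repository layouts let a token carry its own regular expression, so a layout along these lines might accept both three- and four-part base revisions. The exact token-with-regex syntax here is an assumption to verify against the documentation for your version (2.5.0) before relying on it:

        [orgPath]/[module]/[baseRev<\d+(?:\.\d+){1,3}>]/[module]-[baseRev<\d+(?:\.\d+){1,3}>](-[classifier]).[ext]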

    Read the article

  • IndexOutOfRangeException on World.Step after enabling/disabling a Farseer physics body?

    - by WilHall
    Earlier, I posted a question asking how to swap fixtures on the fly in a 2D side-scroller using the Farseer Physics Engine, the ultimate goal being that the player's physical body changes when the player is in different states (i.e. standing, walking, jumping, etc). After reading this answer, I changed my approach to the following:

    - Create a physical body for each state when the player is loaded
    - Save those bodies and their corresponding states in parallel lists
    - Swap those physical bodies out when the player state changes (which causes an exception, see below)

    The following is my function to change states and swap physical bodies:

        new protected void SetState(object nState)
        {
            // If mBody == null, the player is being loaded for the first time
            if (mBody == null)
            {
                mBody = mBodies[mStates.IndexOf(nState)];
                mBody.Enabled = true;
            }
            else
            {
                // Get the body for the given state
                Body nBody = mBodies[mStates.IndexOf(nState)];

                // Enable the new body and disable the current one
                nBody.Enabled = true;
                mBody.Enabled = false;

                // Copy the current body's attributes to the new one
                nBody.SetTransform(mBody.Position, mBody.Rotation);
                nBody.LinearVelocity = mBody.LinearVelocity;
                nBody.AngularVelocity = mBody.AngularVelocity;

                mBody = nBody;
            }
            base.SetState(nState);
        }

    Using the above method causes an IndexOutOfRangeException when calling World.Step:

        mWorld.Step(Math.Min((float)nGameTime.ElapsedGameTime.TotalSeconds, (1f / 30f)));

    I found that the problem is related to changing the .Enabled setting on a body. I tried the above function without setting .Enabled, and no error was thrown. Turning on the debug views, I saw that the bodies were updating positions/rotations/etc. properly when the state was changed, but since they were all enabled, they were just colliding wildly with each other. Does enabling/disabling a body remove it from the world's body list, which then causes the error because the list is shorter than expected?

    Update: For such a straightforward issue, I feel this question has not received enough attention. Has anyone else experienced this? Would anyone try a quick test case? I know this issue can be sidestepped - i.e. by not disabling a body during the simulation - but it seems strange that this issue would exist in the first place, especially when I see no mention of it in the documentation for Farseer or Box2D. I can't find any other cases of the issue online where things are otherwise more or less kosher, like in my case. Any leads on this would be helpful.

    Read the article

  • What's New in SGD 5.1?

    - by Fat Bloke
    Oracle announced the latest version of Secure Global Desktop (SGD) this week with 3 major themes: support for Android devices; support for desktop Chrome clients; and support for Oracle Unified Directory. I'll talk about the new features in a moment, but a bit of context first.

    Oracle SGD - what, how and why?

    Oracle Secure Global Desktop is Oracle's secure remote access product, which allows users on almost any device to access almost any type of application hosted in the data center, from almost any location. And it does this by sitting on the edge of the datacenter, between the user and the applications. This is actually a really smart environment for an increasing number of use cases where:

    - Users need mobility of location AND device (i.e. work from anywhere);
    - IT needs to ensure security of applications and data (of course!);
    - The application requires an end-user environment which can't be guaranteed, and IT may not own the client platform (e.g. BYOD, working from home, partners or contractors).

    Oracle has a specific interest in this, of course. As the leading supplier of enterprise applications, many of Oracle's customers, and indeed Oracle itself, fit these criteria. So, as an IT guy rolling out an application to your employees, if one of your apps absolutely needs, say, IE10 with Java 6 update 32, how can you be sure that the user population has this, especially when they're using their own devices? In the SGD model you, the IT guy, can set up, say, a Windows Server running the exact environment required, and then use SGD to publish this app, without needing to worry any further about the device the end user is using.

    What's new?

    So back to SGD 5.1 and what is new there:

    Android devices. Since we introduced our support for iPad tablets in SGD 5.0 we've had a big demand from customers to extend this to Android tablets too, and so we're pleased to announce that 5.1 supports Android 4.x tablets such as the Nexus 7 and 10, and the Galaxy Tab. Here's how it works on my Nexus 7: simply point your browser to the SGD server URL and log in. The workspace is the list of apps that the admin has deemed OK for you to run; you click on an application (Excel, say, or Oracle E-Business Suite) to run it. There's an extended on-screen keyboard (extended because desktop apps need keys that don't appear on a tablet keyboard, such as Ctrl and the Windows key), and touch gestures can be mapped to desktop events (such as tap-and-hold to right-click). All in all, a pretty nice implementation for Android tablet users.

    Desktop Chrome browsers. SGD has always been designed around using a browser to access your applications. But traditionally, this has involved using Java to deliver the SGD client component. With HTML5 and JavaScript engines becoming so powerful, we thought we'd see how well a pure web client could perform with desktop apps. And the answer was: surprisingly well. So with this release we now offer this additional way of working, which can be enabled by a simple bit of configuration. Here's a Linux desktop running in a tab in Chrome. And if you resize the browser window, the Linux desktop is resized by SGD too. Very cool!

    Oracle Unified Directory. As I mentioned above, a lot of Oracle users already benefit from SGD, and a lot of Oracle customers use Oracle Unified Directory as their enterprise and carrier grade user directory. So it makes a lot of sense that SGD now supports this LDAP directory, both for authentication and as a means to determine which users get which applications - e.g. publish the engineering app to the guys in the Development group, but give everyone E-Business Suite to let them do their expenses.

    Summary: with new devices and faster 4G networking becoming more prevalent, the pressure on businesses to move to an increasingly mobile enterprise is stronger than ever. SGD is good for users, and even better for IT. By offering the user the ability to work from anywhere, and IT the control and security they need, everyone wins with SGD. To try this for yourself, download SGD 5.1 (look under Desktop Virtualization Products) from the Oracle Software Delivery Cloud or, if you're an existing customer, get it from My Oracle Support.

    -FB

    Read the article

  • Oracle Data Protection: How Do You Measure Up? - Part 1

    - by tichien
    This is the first installment in a blog series which examines the results of a recent database protection survey conducted by Database Trends and Applications (DBTA) Magazine.

    All Oracle IT professionals know that a sound, well-tested backup and recovery strategy plays a foundational role in protecting their Oracle database investments, which in many cases represent the lifeblood of business operations. But just how common are the data protection strategies used, and the challenges faced, across various enterprises? In January 2014, Database Trends and Applications Magazine (DBTA), in partnership with Oracle, released the results of its "Oracle Database Management and Data Protection Survey". Two hundred Oracle IT professionals were interviewed on various aspects of their database backup and recovery strategies, in order to identify the top organizational and operational challenges for protecting Oracle assets.

    Here are some of the key findings from the survey:

    - The majority of respondents manage backups for tens to hundreds of databases, representing a total data volume of 5 to 50 TB (14% manage 50 to 200 TB and some up to 5 PB or more).
    - About half of the respondents (48%) use HA technologies such as RAC, Data Guard, or storage mirroring; however, these technologies are deployed on only 25% of their databases (or less). This indicates that backups are still the predominant method for database protection among enterprises.
    - Weekly full and daily incremental backups to disk were the most popular strategy, used by 27% of respondents, followed by daily full backups, which are used by 17%.
    - Interestingly, over half of the respondents reported that 10% or less of their databases undergo regular backup testing.

    A few key backup and recovery challenges resonated across many of the respondents:

    - Poor performance and impact on productivity (see Figure 1): 38% of respondents indicated that backups are too slow, resulting in prolonged backup windows. In a similar vein, 23% complained that backups degrade the performance of production systems.
    - Lack of continuous protection (see Figure 2): 35% revealed that less than 5% of Oracle data is protected in real-time.
    - Management complexity: 25% stated that recovery operations are too complex (see Figure 1); 31% reported that backups need constant management (see Figure 1); 45% changed their backup tools as a result of growing data volumes, while 29% changed tools due to the complexity of the tools themselves.

    Figure 1: Current Challenges with Database Backup and Recovery
    Figure 2: Percentage of Organization's Data Backed Up in Real-Time or Near Real-Time

    In future blogs, we will discuss each of these challenges in more detail and bring insight into how the backup technology industry has attempted to resolve them.

    Read the article

  • Can't connect to MySQL using a self-signed SSL certificate

    - by carpii
    After creating a self-signed SSL certificate, I have configured my remote mysqld to use it (and SSL is enabled). I ssh into my remote server and try connecting to its own mysqld over SSL (the MySQL server is 5.5.25):

        ~> mysql -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert
        Enter password:
        ERROR 2026 (HY000): SSL connection error: error:00000001:lib(0):func(0):reason(1)

    OK, I remember reading there's some problem with connecting to the same server via SSL. So I download the client keys to my local box and test from there:

        ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key --ssl-ca=ca.cert
        Enter password:
        ERROR 2026 (HY000): SSL connection error

    It's unclear what this "SSL connection error" refers to, but if I omit --ssl-ca, then I am able to connect using SSL:

        ~> mysql -h <server> -u <user> -p --ssl=1 --ssl-cert=client.cert --ssl-key=client.key
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 37
        Server version: 5.5.25 MySQL Community Server (GPL)

    However, I believe that this only encrypts the connection and does not actually verify the validity of the cert (meaning I would be potentially vulnerable to a man-in-the-middle attack).

    The SSL certs are valid (albeit self-signed) and do not have a passphrase on them. So my question is: what am I doing wrong? How can I connect via SSL using a self-signed certificate? The MySQL server version is 5.5.25 and the server and clients are CentOS 5. Thanks for any advice.

    Edit: Note that in all cases the command is being issued from the same directory where the SSL keys reside (hence no absolute path).
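
    One documented MySQL gotcha with self-signed setups is that the Common Name of the CA certificate must differ from the Common Names of the server and client certificates; if all three were generated with the same subject, verification fails in exactly this way. A quick way to check (server-cert.pem is a hypothetical name for the server's certificate file):

        # Confirm the CA really signed each certificate
        openssl verify -CAfile ca.cert client.cert
        openssl verify -CAfile ca.cert server-cert.pem

        # Compare subjects: the CA's CN must not match the server/client CNs
        openssl x509 -in ca.cert -noout -subject
        openssl x509 -in client.cert -noout -subject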

    Read the article

  • Lotus NotesSQL Driver - cannot install

    - by PowerUser
    Hi all, I need to install the Lotus NotesSQL driver (current version is 8.5) onto a virtual machine running XP. Here's what I've done so far:

    1. I retrieved the file (CZOWFEN.zip) from the IBM website.
    2. I ran the exe.
    3. I then went to My Computer - Properties - Advanced - Environment Variables - System Variables - Path and added "; c:\notessql" so the ODBC administrator could find Notes.ini (why the setup file didn't do this in the first place, I don't know).
    4. I opened up the ODBC administrator and tried to add a new System DSN pointing to a Lotus DB, and got: "The setup routines for the Lotus Notes SQL Driver (*.nsf) ODBC driver could not be loaded due to system error code 126".
    5. I re-downloaded and reinstalled the driver (making sure I had the latest version, 8.5). No luck.
    6. I checked the registry. All the file paths appeared to be correct.
    7. Per many, many similar cases on the internet, I tried several different variations of adding the various Lotus Notes folders to my PATH variable. Same error.

    I've done this setup on 5 different machines now with no problem. The only difference here is that this machine is virtual. Ideas?

    Read the article

  • Where can I ask questions that aren't IT questions?

    - by Adam Davis
    Thank you for your confidence in our abilities. But have you read the FAQ?

    We get a lot of interesting technical questions on here, but Server Fault is meant to be first and foremost a resource for system administrators and IT professionals. In other words, Server Fault isn't meant to cater to every computer problem - just those that only exist in, or can best be solved in, the IT and sysadmin domain.

    Yes, someone here might be able to help you, but you'll find that other forums more focused on your topic can give you a much better answer than a bunch of IT professionals. It's likely that your question will be downvoted, closed, and in some cases marked "offensive." It's not that we hate you, it's just that we like to keep our corn pops separate from our cocoa puffs. In that vein, here are a bunch of other forums where you can get help (this list contains the top two forums for each category as voted for below):

    Programming
    - Stackoverflow

    Consumer Level Computer Hardware
    - Ars Technica OpenForum
    - Tom's Hardware Forums

    Computer Software
    - Anandtech Forum

    Web Design/Hosting/CMS
    - SitePoint Forums
    - Web Hosting Talk

    Math, Science, Engineering
    - Physics Forums
    - PlanetMath

    Other/General
    - Ask Metafilter
    - Google Search
    - Mahalo Answers

    Check out the answers below for even more suggestions! What forums can people go to to ask the questions that are off topic here? Please list only ONE forum per answer so votes can bring the best forums to the top.

    Return to FAQ Index

    Read the article

  • OSX esc key stops working (randomly)

    - by Jan Hancic
    I have a 2011 MacBook Air with OS X Mountain Lion (I've upgraded from Lion), and ever since upgrading, my Escape key randomly stops working. It's not that the keyboard on the laptop is damaged, as the same thing happens if I use Apple's Bluetooth keyboard, so I'm guessing it's a software issue. Also, in some cases pressing Ctrl+Esc achieves the same thing as just Esc, so I'm 100% sure it's not a hardware problem. Does anybody have any idea what this might be all about and how to fix it?

    Edit: The Escape key stops working completely, but it starts working again after I restart the computer.

    Edit 2: This usually happens after the computer wakes from sleep. So it's not that it just stops working in the middle of use, but rather after I put it to sleep (or just close the lid) and then open it again. Has nobody else got the same issue? Is there a better place to ask OS X related questions than Super User, maybe?

    Read the article

  • Mouse wheel in VirtualPC (mostly) does not work on 64-bit Windows 7 RC

    - by JonStonecash
    I have recently upgraded my laptop from WinXP Pro (32-bit) to Windows 7 RC (64-bit). I have a number of VirtualPC 2007 images that I use for testing on various platforms and for looking at beta software, and I have installed the 64-bit version of VirtualPC. The images all work, with the exception of the mouse wheel within the virtual machine.

    I have tried this with WinXP Pro, Windows 7 RC, and Windows Server 2008 images. All are 32-bit and all exhibit the same behavior: a gentle rotation of the wheel does nothing; a quick rotation of the wheel sometimes gets a scroll and sometimes doesn't. I regard this behavior as unusable, as I tend to use the mouse wheel a lot. All of this worked just fine on WinXP.

    I have re-installed the Virtual Machine Additions on all of the machines. The Windows 7 RC virtual image was created after the upgrade to Windows 7 and the 64-bit version of VirtualPC (just to rule out the possibility that I had corrupted the images during the transition). I have googled, binged, and yahoo-ed. There are scattered mentions of this problem (dating back to VPC 2004) but no solutions.

    I am aware that I could start up one of these images and then use a Remote Desktop connection to get access to it. I, in fact, do just that for some development work; the mouse works just fine there. That is acceptable because I spend hours at a time in the development VM. These test environments are different in that I will bring up an image for just a short time: minutes rather than hours. Adding the RDC step is much more significant in those cases. Does anyone have any idea of what to do next?

    Read the article

  • Desktop SATA drives in SATA <-> FC array

    - by chris
    Let's assume you've got a box like one of these with space for 24 SATA disks. What are the best bits of advice for deploying this? For instance, should you be greedy and go for the 1.5 or 2 TB disks, or are they just not reliable enough to be used in an array like this, so you should stick with 640 GB or 750 GB disks instead?

    Also, I know that FC (or, generically, "enterprise class") disks have a different error recovery strategy than desktop disks. An enterprise disk will fail a read quickly and report to the controller that it wasn't able to read that block, and the RAID controller will quickly regenerate the info from the parity disk and mark the block as bad. A desktop disk, on the other hand, will try and try and try again to get the data, and in pathological cases this may cause the RAID controller to fail the whole disk because the read operation times out.

    So there are a couple of aspects to this question:

    1. What's the best sort of disk to get today? (i.e. specific disks on the market in Feb 2010)
    2. Generically, what should someone look for when trying to buy something like this that kinda walks the line between enterprise and consumer?
    3. Lastly, is there anything that can be done with current "consumer" disks to make them more suitable for array use? I.e. can you use a SMART configuration to change the error recovery strategy used by the disk? (See the sketch below.)

    Thanks!
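
    On that last question: many (not all) consumer drives expose SCT Error Recovery Control, which caps how long the drive retries a failing sector - effectively the enterprise behavior described above. A hedged sketch with smartmontools, assuming a drive and smartctl version recent enough to support it:

        # Read the drive's current SCT ERC read/write timeouts, if supported
        smartctl -l scterc /dev/sda

        # Cap read and write recovery at 7.0 seconds (values are in 100 ms units)
        smartctl -l scterc,70,70 /dev/sda

    Note that on most drives the setting does not survive a power cycle, so it has to be reapplied at boot.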

    Read the article

  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and SQL Management Studio. SQL Management Studio reports that the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU for what seems to be an indefinite amount of time, but the DB never comes online.

    Here are some details:

    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directories on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can back up and restore the test DB.
    - I have tried both restoring a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full-text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in Amazon EC2, but not Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in SQL Management Studio - it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.)

    Any ideas?
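
    One hedged way to tell whether the restore is still doing real work after it reports 100% is to watch the session from a second connection; long stretches in the recovery phase (or full-text related work) show up as wait states rather than progress:

        -- Run from another connection while the restore appears hung.
        SELECT r.session_id,
               r.command,
               r.percent_complete,
               r.wait_type,
               r.wait_time
        FROM   sys.dm_exec_requests AS r
        WHERE  r.command LIKE 'RESTORE%';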

    Read the article

  • How to make 7zip faster

    - by user34463
    I normally use WinRAR over 7-Zip simply because it's faster and only a little less efficient at compression. I did a few tests on different file types and sizes, comparing the 7-Zip and WinRAR default settings at their normal and best compression levels, and in a lot of cases WinRAR was 50% faster, and in some it was actually 100% faster. But I do like FOSS more. So here are my questions:

    1. Is there a way to speed 7-Zip up? I'd like it to at least be on par with RAR's speed. (See the example command lines below.)
    2. Is there a way to make recovery segments in 7-Zip like you can in RAR? I didn't see any, but I guess it could be a command-line thing.
    3. I tested WinRAR and 7-Zip using the latest stable version of each (4.something for 7-Zip). Is the 9.x beta release noticeably faster at compression? I'm talking about faster at a setting comparable to WinRAR's, not just lowering it to the bare minimum compression.

    If it matters, I use a quad-core Intel i7 720 (1.6 GHz/2.8 GHz) with 4 GB DDR3 RAM, and the 64-bit version of 7-Zip.
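
    On question 1, a hedged starting point is to pin the settings explicitly rather than relying on the defaults - check the exact switch spellings against your 7-Zip version's command-line help:

        rem Lower the compression level and enable multithreading
        7z a -t7z -mx=5 -mmt=on archive.7z C:\data\

        rem 9.x betas add LZMA2, which parallelizes across cores better than LZMA
        7z a -t7z -m0=lzma2 -mx=5 -mmt=on archive.7z C:\data\

    On question 2, the 7z format has no built-in recovery record comparable to RAR's; the usual workaround is external parity files (e.g. PAR2) alongside the archive.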

    Read the article

  • TS (RD) Gateway Authentication Problem "The logon attempt failed"

    - by user2059
    I've been using TS Gateway to permit remote access for our staff for a few months now, and all has been well. Users either connect to a traditional terminal server desktop or hit our website and start a TS RemoteApp application - in both cases the connection is routed through a TS Gateway.

    However, I came into work this morning to find that it has stopped authenticating users through the TS Gateway, each time returning "The logon attempt failed" (as seen in the image) even though the credentials are correct. It should be noted that everything works fine if the Gateway is taken out of the equation; it's the TS Gateway component that is causing these problems. Users experience this problem whether they connect through XP SP3, Vista or 7.

    On the server, a total of 4 entries appear in the Windows security log at exactly the same time for each failed logon attempt: two 4624 "An account was successfully logged on" messages for the user, immediately followed by two 4634 "An account was logged off" messages. This suggests that the server is accepting the credentials as correct, then booting the user off. Nothing at all is recorded in the NPS and Terminal Server logs.

    A reboot doesn't change things. Neither does completely removing and reinstalling the NPS and Terminal Server roles. I'm baffled as to how this can happen suddenly without warning. Any suggestions would be greatly appreciated.

    Read the article

  • What might cause https failure when not specifying SSL protocol?

    - by user35042
    I have a VBScript program that retrieves a web page from a server not under my control. The URL looks something like https://someserver.xxx/index.html. I use this code to create the object that does the page getting:

        Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")

    When I wrote my program it had no problem retrieving this page. Recently, the web server serving this page went through an upgrade. Now my program can no longer fetch the page. Some clues:

    Clue 1: I can fetch the web page if I use a browser (I tried Firefox, IE, and Chrome).

    Clue 2: The VBScript code yields this error: "The message received was unexpected or badly formatted."

    Clue 3: I can fetch the web page from the command line in certain cases but not in others:

        curl --sslv3 -v -k 'https://someserver.xxx/index.html' # WORKS!
        curl --sslv2 -v -k 'https://someserver.xxx/index.html' # WORKS!
        curl -v -k 'https://someserver.xxx/index.html'         # FAILS
        curl --tlsv1 -v -k 'https://someserver.xxx/index.html' # FAILS

    In the case where I do not specify a protocol I get this error:

        * SSLv3, TLS handshake, Client hello (1):
        * error:14077417:SSL routines:SSL23_GET_SERVER_HELLO:sslv3 alert illegal parameter
        * Closing connection #0

    In the case where I specify --tlsv1 I get this error:

        * SSLv3, TLS handshake, Client hello (1):
        * SSLv3, TLS alert, Server hello (2):
        * error:14094417:SSL routines:SSL3_READ_BYTES:sslv3 alert illegal parameter
        * Closing connection #0

    A. Does anyone have any suggestions or ideas on what might be going on at the web server end? (I am unable to talk to the admins of the web server to find out what they changed.)

    B. Is there a way I can change my VBScript code to work around this issue? Can the SSL version be forced?
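
    On (B): WinHttp exposes a SecureProtocols option that can pin the handshake to a specific protocol version, mirroring the curl --sslv3 result. A hedged VBScript sketch - option index 9 is WinHttpRequestOption_SecureProtocols and &H20 is the SSL 3.0 flag (&H08 = SSL 2.0, &H80 = TLS 1.0), but verify these values against the WinHttp documentation:

        Set objWinHttp = CreateObject("WinHttp.WinHttpRequest.5.1")
        ' WinHttpRequestOption_SecureProtocols = 9; &H20 restricts the handshake to SSL 3.0
        objWinHttp.Option(9) = &H20
        objWinHttp.Open "GET", "https://someserver.xxx/index.html", False
        objWinHttp.Send
        WScript.Echo objWinHttp.ResponseText

    For (A), probing the server directly with openssl s_client -connect someserver.xxx:443 -tls1 (and again with -ssl3) may confirm whether the upgraded server now rejects TLS 1.0 hellos outright.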

    Read the article

  • Configuration management in support of scientific computing

    - by Sharpie
    For the past few years I have been involved with developing and maintaining a system for forecasting near-shore waves. Our team has just received a significant grant for further development, and as a result we are taking the opportunity to refactor many components of the old system. We will also be receiving a new server to run the model, so I am taking this opportunity to consider how we set up the system. Basically, the steps that need to happen are:

    1. Some standard packages and libraries, such as compilers and databases, need to be downloaded and installed.
    2. Some custom scientific models need to be downloaded and compiled from source, as they are not commonly provided as packages.
    3. New users need to be created to manage the databases and run the models.
    4. A suite of scripts that manage model-database interaction needs to be checked out from source code control and installed.
    5. Crontabs need to be set up to run the scripts at regular intervals in order to generate forecasts.

    I have been pondering applying tools such as Puppet, Capistrano or Fabric to automate the above steps. It seems perfectly possible to implement most of the above functionality, except there are a couple of use cases I am wondering about:

    - During my preliminary research, I have found few examples and little discussion on how to use these systems to abstract and automate the process of building custom components from source (see the sketch after this list).
    - We may have to deploy on machines that are isolated from the Internet - i.e. all configuration and setup files will have to come in on a USB key that can be inserted into a terminal that can connect to the server that will run the models.

    I see this as an opportunity to learn a new tool that will help me automate my workflow, but I am unsure which tool I should start with. If any member of the community could suggest a tool that would support the above workflow and the issues specific to scientific computing, I would be very grateful. Our production server will be running Linux, but support for OS X would be a bonus, as it would allow the development team to set up test installations outside of VirtualBox.
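
    For the build-from-source case, the usual Puppet pattern (shown as a hedged sketch - the package names, paths and model name are invented) is an exec resource guarded by creates, so the compile step only runs when its output is missing and the run stays idempotent:

        # Hypothetical manifest: compile a wave model from a vendored tarball.
        package { ['gcc', 'make']:
          ensure => installed,
        }

        exec { 'unpack-wavemodel':
          command => '/bin/tar -xzf /opt/dist/wavemodel-2.1.tar.gz -C /opt/src',
          creates => '/opt/src/wavemodel-2.1',
        }

        exec { 'build-wavemodel':
          command => '/bin/sh -c "./configure --prefix=/usr/local && make && make install"',
          cwd     => '/opt/src/wavemodel-2.1',
          path    => '/usr/bin:/bin',
          creates => '/usr/local/bin/wavemodel',   # skip once installed
          require => [ Exec['unpack-wavemodel'], Package['gcc'] ],
        }

    The same manifests also cover the isolated-machine case: carried in on the USB key, they can be run masterless with puppet apply, with no network connection to a puppet master required.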

    Read the article
