Search Results

Search found 29235 results on 1170 pages for 'dynamic management objects'.


  • Finding a new programming language for web development?

    - by Xeoncross
    I'm wondering if there are any unbiased resources that give good, specific overviews of programming languages and their intended goals. I would like to learn a new language, but visiting the sites of each language isn't working. Each one talks about how great it is without much mention of its weaknesses or specific goals:

        "Ruby is a dynamic, open source programming language with a focus on simplicity and productivity."

        "Python is a programming language that lets you work more quickly and integrate your systems more effectively."

    Having been a PHP developer for years, Vic Cherubini sums up my plight well:

        "I knew PHP well, had my own framework, and could work quickly to get something up and running. I programmed like this throughout the MVC revolution. I got better and better jobs (read: better paying, better title) as a PHP developer, but all along the way realizing that the code I wrote on my own time was great, and the code I worked with at work was horrible. Like, worse than horrible. Atrocious. OS Commerce-level bad. Having side projects kept me sane, because the code I worked with at work made me miserable."

    This is why I'm retiring from PHP for my side projects and new programming ventures. I'm spent with PHP. Exhausted, if you will. I've reached a level where I think I'm at the top with it as a language, and if I don't move on to a new language soon, I'll be done completely with programming, and I do not want that.

    Languages I've looked at include JavaScript (for node.js), Ruby, Python, & Erlang. I've even thought about Scala or C++. The problem is figuring out which ones are built to handle my needs the best. So where can I go to skip the hype and get real information about the maturity of a platform, the size of the community, and the strengths & weaknesses of that language? If I know these, then picking a language to continue my web development should be easy.

    Read the article

  • What causes Box2D revolute joints to separate?

    - by nbolton
    I have created a rag doll using dynamic bodies (rectangles) and simple revolute joints (with lower and upper angles). When my rag doll hits the ground (which is a static body) the bodies seem to fidget and the joints separate. It looks like the bodies are sticking to the ground, and the momentum of the rag doll pulls the joint apart (see screenshot below). I'm not sure if it's related, but I'm using the Badlogic GDX Java wrapper for Box2D. Here are some snippets of what I think is the most relevant code:

        private RevoluteJoint joinBodyParts(
                Body a, Body b, Vector2 anchor, float lowerAngle, float upperAngle) {
            RevoluteJointDef jointDef = new RevoluteJointDef();
            jointDef.initialize(a, b, a.getWorldPoint(anchor));
            jointDef.enableLimit = true;
            jointDef.lowerAngle = lowerAngle;
            jointDef.upperAngle = upperAngle;
            return (RevoluteJoint)world.createJoint(jointDef);
        }

        private Body createRectangleBodyPart(
                float x, float y, float width, float height) {
            PolygonShape shape = new PolygonShape();
            shape.setAsBox(width, height);

            BodyDef bodyDef = new BodyDef();
            bodyDef.type = BodyType.DynamicBody;
            bodyDef.position.y = y;
            bodyDef.position.x = x;

            Body body = world.createBody(bodyDef);

            FixtureDef fixtureDef = new FixtureDef();
            fixtureDef.shape = shape;
            fixtureDef.density = 10;
            fixtureDef.filter.groupIndex = -1;
            fixtureDef.filter.categoryBits = FILTER_BOY;
            fixtureDef.filter.maskBits = FILTER_STUFF | FILTER_WALL;

            body.createFixture(fixtureDef);
            shape.dispose();
            return body;
        }

    I've skipped the method for creating the head, as it's pretty much the same as the rectangle method (just using a circle shape). Those methods are used like so:

        torso = createRectangleBodyPart(x, y + 5, 0.25f, 1.5f);
        Body head = createRoundBodyPart(x, y + 7.4f, 1);
        Body leftLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1);
        Body rightLegTop = createRectangleBodyPart(x, y + 2.7f, 0.25f, 1);
        Body leftLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1);
        Body rightLegBottom = createRectangleBodyPart(x, y + 1, 0.25f, 1);
        Body leftArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f);
        Body rightArm = createRectangleBodyPart(x, y + 5, 0.25f, 1.2f);

        joinBodyParts(torso, head, new Vector2(0, 1.6f), headAngle);
        leftLegTopJoint = joinBodyParts(torso, leftLegTop, new Vector2(0, -1.2f), 0.1f, legAngle);
        rightLegTopJoint = joinBodyParts(torso, rightLegTop, new Vector2(0, -1.2f), 0.1f, legAngle);
        leftLegBottomJoint = joinBodyParts(leftLegTop, leftLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0);
        rightLegBottomJoint = joinBodyParts(rightLegTop, rightLegBottom, new Vector2(0, -1), -legAngle * 1.5f, 0);
        leftArmJoint = joinBodyParts(torso, leftArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);
        rightArmJoint = joinBodyParts(torso, rightArm, new Vector2(0, 1), -armAngle * 0.7f, armAngle);

    Read the article

  • Base Pages and Interfaces for ASP.NET Pages

    - by geekrutherford
    For quite a while I have been using the concept of base pages when developing pages in ASP.NET applications. It is a wonderful method for exposing common functions to all of your application's pages and also for overriding certain events for various purposes (i.e. dynamic themes). Recently I found out a new developer will be joining my team. This prompted me to review the application's code for readability and ease of maintenance. I began adding comments throughout the code-behind for all pages within the application. While doing so I noted that I had used common method names for such things as loading data, configuring controls, applying filters, etc.

    Bringing a new developer on board, I wanted to make the transition as seamless as possible while also ensuring they follow the existing coding practices we already have in place. While I could have created virtual methods for the common page methods, allowing them to be overridden, what I really needed was a way to ensure the new developer implemented the same methods for each and every page. Thus I created an interface to force the issue.

    Now, every page not only inherits the base page class but also implements an interface. This provides every page not only common functions and overridden page events but also imposes rules for implementing certain common methods :-)

    Interface:

        public interface BasePageInterface
        {
            /// Configures page based on user's security permissions.
            void CheckPermissions();

            /// Configures Filter Form control for current page.
            /// Ensure you have set the FilteredGrid and PageAjaxManager properties of the FilterForm control in PageLoad!!!
            void ConfigureFilters();

            /// Sets event handlers and default settings for controls on the current page.
            void ConfigureControls();

            /// Exports data bound to grid in selected format.
            void ExportGridData(ExportFormat fmt);

            /// Loads data and binds to grid.
            /// Columns are turned on/off in grid depending on tab selected and user's permissions.
            void LoadData();
        }

    Page code-behind class definition:

        public partial class MyPage : BasePage, BasePageInterface

    Note that you could not use an abstract class to accomplish this, considering C# does not allow for multiple inheritance. Nor could the base page class be abstract, since it needs to inherit from the System.Web.UI.Page class in order to override page events.
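
    The post doesn't show how the base class consumes the interface, but a minimal sketch of how the two pieces can work together (with a hypothetical OnLoad override and an illustrative call order) might look like this:

        // Hypothetical sketch: the base page drives the interface methods from
        // the standard page lifecycle, so every page implementing
        // BasePageInterface gets the same call order for free.
        public class BasePage : System.Web.UI.Page
        {
            protected override void OnLoad(System.EventArgs e)
            {
                base.OnLoad(e);

                var page = this as BasePageInterface;
                if (page != null)
                {
                    page.CheckPermissions();
                    page.ConfigureControls();
                    page.ConfigureFilters();
                    if (!IsPostBack)
                    {
                        page.LoadData();
                    }
                }
            }
        }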

    Read the article

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context

    Working as a freelance developer, I often made websites completely based on XSLT. In other words, on every request an XML file is generated containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. An XSL stylesheet then processes it (with caching, etc.) into the HTML/XHTML page to send to the browser.

    Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one which I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it will fail completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side.

    Question

    Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. It means that it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing their and the server's bandwidth (the XML file being much shorter than its processed HTML analog), and reducing the server's CPU usage.

    What are the drawbacks of this technique? I thought about several, but they don't apply in this situation:

    - Difficult implementation, and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one. The only changes to make are to add the XSL file link to every XML and to add a browser check.
    - More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of being cached by the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (just as images, CSS, and JavaScript files are).
    - Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    - Difficulty debugging code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases it is unusable directly (whitespace being removed).

    Read the article

  • Sharing on Github

    - by Alan
    Over the past couple weeks I have gotten a lot of help from StackOverflow users on a project, and rather than keep the finished product to myself I wanted to share it unencumbered by licenses, but I don't want there to be so much legwork during installation that users shy away from trying it. I am about to post it to GitHub and am choosing public domain licensing. I would like it to be super simple for users to make use of: just FTP it up and go.

    That being said, do I need to make sure I remove things like the jQuery file and other GPL/MIT licensed dependencies that I didn't write but that my code depends on? I haven't removed any copyright notices from the other code and all of it is open source; it would just be nice if users could download everything at once, while of course not trying to represent that I am the license holder of the dependencies. Inside my files are also some snippets; do those have to be externalized with installation instructions, or can it be posted as is? Here is an example: my nav.php file is 115 lines long and I have these at the top:

        <script type="text/javascript" src="./js/ddaccordion.js">
        /***********************************************
        * Accordion Content script- (c) Dynamic Drive DHTML code library (www.dynamicdrive.com)
        * Visit http://www.dynamicDrive.com for hundreds of DHTML scripts
        * This notice must stay intact for legal use
        ***********************************************/
        </script>
        <link href="css/admin.css" rel="stylesheet">
        <script type="text/javascript">
        ddaccordion.init({
            headerclass: "submenuheader", //Shared CSS class name of headers group
            contentclass: "submenu", //Shared CSS class name of contents group
            revealtype: "click", //Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover"
            mouseoverdelay: 200, //if revealtype="mouseover", set delay in milliseconds before header expands onMouseover
            collapseprev: false, //Collapse previous content (so only one open at any time)? true/false
            defaultexpanded: [], //index of content(s) open by default [index1, index2, etc] [] denotes no content
            onemustopen: false, //Specify whether at least one header should be open always (so never all headers closed)
            animatedefault: false, //Should contents open by default be animated into view?
            persiststate: true, //persist state of opened contents within browser session?
            toggleclass: ["", ""], //Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"]
            togglehtml: ["suffix", "<img src='./images/plus.gif' class='statusicon' />", "<img src='./images/minus.gif' class='statusicon' />"], //Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs)
            animatespeed: "fast", //speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow"
            oninit:function(headers, expandedindices){ //custom code to run when headers have initalized
                //do nothing
            },
            onopenclose:function(header, index, state, isuseractivated){ //custom code to run whenever a header is opened or closed
                //do nothing
            }
        })
        </script>

    Read the article

  • Are there any concerns with using a static read-only unit of work so that it behaves like a cache?

    - by Rowan Freeman
    Related question: How do I cache data that rarely changes?

    I'm making an ASP.NET MVC4 application. On every request, the security details about the user need to be checked against the area/controller/action that they are accessing, to see if they are allowed to view it. The security information is stored in the database, for example:

        User
        Permission
        UserPermission
        Action
        ActionPermission

    A "Permission" is a token that is applied to an MVC action to indicate that the token is required in order to access the action. Once a user is given the permission (via the UserPermission table) then they have the token and can therefore access the action.

    I've been looking into how to cache this data (since it rarely changes) so that I'm only querying in-memory data and not hitting a database (which is a considerable performance hit at the moment). I've tried storing things in lists and using a caching provider, but I either run into problems or performance doesn't improve. One problem that I constantly run into is that I'm using lazy loading and dynamic proxies with EntityFramework. This means that even if I ToList() everything and store it somewhere static, the relationships are never populated. For example, User.Permissions is an ICollection, but it's always null. I don't want to Include() everything because I'm trying to keep things simple and generic (and easy to modify).

    One thing I know is that an EntityFramework DbContext is a unit of work that acts with first-level caching. That is, for the duration of the unit of work, everything that is accessed is cached in memory. I want to create a read-only DbContext that will exist indefinitely and will only be used to read permission data. Upon testing this, it worked perfectly; my page load times went from 200ms+ to 20ms. I can easily force the data to refresh at certain intervals or simply leave it to refresh when the application pool is recycled. Basically it will behave like a cache.

    Note that the rest of the application will interact with other contexts that exist per request as normal. Is there any disadvantage to this approach? Could I be doing something different?
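
    For what it's worth, the main disadvantage is that DbContext is not thread-safe, so a single context shared across concurrent requests can corrupt itself even when every query is a read. One way to keep the fast page loads without sharing a context (a sketch of my own, with hypothetical entity and property names) is to cache the materialized permission data and rebuild it from a short-lived context on an interval:

        // Sketch (hypothetical names): cache the *materialized* permission data,
        // not the DbContext itself, and rebuild it periodically.
        using System;
        using System.Collections.Generic;
        using System.Linq;

        public static class PermissionCache
        {
            private static readonly object Sync = new object();
            private static Dictionary<int, HashSet<string>> tokensByUserId =
                new Dictionary<int, HashSet<string>>();
            private static DateTime loadedAtUtc = DateTime.MinValue;
            private static readonly TimeSpan RefreshInterval = TimeSpan.FromMinutes(10);

            public static bool UserHasPermission(int userId, string token)
            {
                EnsureFresh();
                HashSet<string> tokens;
                return tokensByUserId.TryGetValue(userId, out tokens)
                    && tokens.Contains(token);
            }

            private static void EnsureFresh()
            {
                if (DateTime.UtcNow - loadedAtUtc < RefreshInterval) return;
                lock (Sync)
                {
                    if (DateTime.UtcNow - loadedAtUtc < RefreshInterval) return;
                    using (var db = new MyDbContext()) // short-lived, read-only use
                    {
                        // Project to plain values so nothing depends on lazy
                        // loading or proxies after the context is disposed.
                        var rows = db.UserPermissions
                            .Select(up => new { up.UserId, Token = up.Permission.Name })
                            .ToList();

                        // Swapping the dictionary reference is atomic, so readers
                        // see either the old snapshot or the new one, never a mix.
                        tokensByUserId = rows
                            .GroupBy(r => r.UserId)
                            .ToDictionary(
                                g => g.Key,
                                g => new HashSet<string>(g.Select(r => r.Token)));
                    }
                    loadedAtUtc = DateTime.UtcNow;
                }
            }
        }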

    Read the article

  • Creating smooth lighting transitions using tiles in HTML5/JavaScript game

    - by user12098
    I am trying to implement a lighting effect in an HTML5/JavaScript game using tile replacement. What I have now is kind of working, but the transitions do not look smooth/natural enough as the light source moves around. Here's where I am now:

    Right now I have a background map that has a light/shadow spectrum PNG tilesheet applied to it, going from the darkest tile to completely transparent. By default the darkest tile is drawn across the entire level on launch, covering all other layers etc. I am using my predetermined tile sizes (40 x 40px) to calculate the position of each tile and store its x and y coordinates in an array. I am then spawning a transparent 40 x 40px "grid block" entity at each position in the array. The engine I'm using (ImpactJS) then allows me to calculate the distance from my light source entity to every instance of this grid block entity. I can then replace the tile underneath each of those grid block tiles with a tile of the appropriate transparency. Currently I'm doing the calculation like this in each instance of the grid block entity that is spawned on the map:

        var dist = this.distanceTo( ig.game.player );
        var percentage = 100 * dist / 960;
        if (percentage < 2) {
            // Spawns tile 64 of the shadow spectrum tilesheet at the specified position
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 64 );
        }
        else if (percentage < 4) {
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 63 );
        }
        else if (percentage < 6) {
            ig.game.backgroundMaps[2].setTile( this.pos.x, this.pos.y, 62 );
        }
        // etc...

    The problem is that, like I said, this type of calculation does not make the light source look very natural. Tile switching looks too sharp, whereas ideally the tiles would fade in and out smoothly using the spectrum tilesheet (I copied the tilesheet from another game that manages to do this, so I know it's not a problem with the tile shades. I'm just not sure how the other game is doing it). I'm thinking that perhaps my method of using percentages to switch out tiles could be replaced with a better/more dynamic proximity formula of some sort that would allow for smoother transitions?

    Might anyone have any ideas for what I can do to improve the visuals here, or a better way of calculating proximity with the information I'm collecting about each tile?

    (PS: I'm reposting this from Stack Overflow at someone's suggestion, sorry about the duplicate!)
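
    One option (my suggestion, not something from ImpactJS itself; sketched in C# for brevity even though the game is JavaScript, with the question's 960 max distance and 64 shades assumed) is to map distance to a shade index continuously with an easing curve, instead of hand-written percentage bands. With 64 shades you get one band per roughly 1.5% of the range, and smoothstep softens the band edges:

        using System;

        static class Lighting
        {
            // Sketch: choose a shade tile from distance with a smoothstep falloff.
            // Tile 64 = fully transparent (next to the light), tile 1 = darkest.
            public static int ShadeTile(double dist, double maxDist = 960.0, int shadeCount = 64)
            {
                double t = Math.Min(dist / maxDist, 1.0); // 0 at the light, 1 far away
                t = t * t * (3.0 - 2.0 * t);              // smoothstep easing
                return shadeCount - (int)Math.Round(t * (shadeCount - 1));
            }
        }

    The same one-liner drops straight into the entity's update in place of the if/else chain, and changing the easing function (quadratic, smootherstep, etc.) changes the feel of the falloff without touching any thresholds.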

    Read the article

  • Wrapping REST based Web Service

    - by PaulPerry
    I am designing a system that will be running online under Microsoft Windows Azure. One component is a REST-based web service which will really be a wrapper (using the proxy pattern) that calls the REST web services of a business partner, which have to do with BLOB storage (note: we are not using Azure storage). The majority of the functionality will be taking a request, calling our partner's web service, receiving the response, and then passing that back to the client.

    There are a number of reasons for doing this, but one of the big ones is that we are going to support three clients: our desktop application (Win and Mac), mobile apps (iOS), and a web front end. Having a single API which we then send to our partner protects us if that partner ever changes. I want our service to support both JSON and XML for the data transfer format: JSON for the web, and probably XML for the desktop and mobile (we already have an XML parser in those products). Our partner also supports both of these formats. I was planning on using ASP.NET MVC 4 with the Web API.

    As I design this, the thing that concerns me is the static type checking of C#. What if the partner adds or removes elements from the data? We can probably defensively code for that, but I still feel some concern. Also, we have to do a fair amount of tedious coding to set up our API and then turn around and call our partner's API. There probably is not much choice on it though. But in the back of my mind I wonder if maybe a more dynamic language would be a better choice.

    I want to reach out and see if anybody has had to do this before, what technology solutions they have used (I am not attached to this one; these days Azure can host other technologies), and if anybody who has done something like this can point out any issues that came up. Thanks! Researching the issue seems to only find solutions which focus on connecting a SOAP web service over a proxy server, and not what I am referring to here.

    Note: Cross posted (by suggestion) from http://stackoverflow.com/questions/11906802/wrapping-rest-based-web-service Thank you!
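
    On the static-typing worry, a minimal Web API sketch of the pass-through (hypothetical partner URL, route, and DTO shape; not the real contract) shows there is less rigidity than it first appears: the default JSON deserializer simply ignores partner fields the DTO doesn't declare, and content negotiation returns JSON or XML based on each client's Accept header:

        // Sketch: a Web API controller that proxies a partner blob-info call.
        // All names and the BlobInfo shape are illustrative assumptions.
        using System;
        using System.Net.Http;
        using System.Threading.Tasks;
        using System.Web.Http;

        public class BlobsController : ApiController
        {
            private static readonly HttpClient Partner = new HttpClient
            {
                BaseAddress = new Uri("https://partner.example.com/api/")
            };

            // GET api/blobs/{id} -- serialized as JSON or XML per Accept header
            public async Task<BlobInfo> Get(string id)
            {
                HttpResponseMessage upstream = await Partner.GetAsync("blobs/" + id);
                if (!upstream.IsSuccessStatusCode)
                    throw new HttpResponseException(upstream.StatusCode);

                // Unknown/extra partner fields are ignored; missing ones
                // deserialize to defaults, so additive partner changes
                // don't break the wrapper.
                return await upstream.Content.ReadAsAsync<BlobInfo>();
            }
        }

        public class BlobInfo
        {
            public string Id { get; set; }
            public string Name { get; set; }
            public long SizeBytes { get; set; }
        }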

    Read the article

  • Was I wrong about JavaScript?

    - by jboyer
    Yes, I was. Recently, I've taken a good hard look at JavaScript. I've used it before, but mostly in the capacity of web design. Using jQuery to make your web page do cool stuff is different than really creating a JavaScript application using all of the language constructs. What I'm finding as I use it more is that I may have been wrong about my assumptions about it. Let me explain.

    I enjoyed doing cool stuff with jQuery, but the limited experience with JavaScript as a language, coupled with the bad things that I heard about it, led me to not have any real interest in it. However, JavaScript is ubiquitous on the web, and if I want to do any web development, which I do, I need to learn it. So here I am, diving deep into the language with the help of the JavaScript Fundamentals training course at Pluralsight (great training for a low price) and the JavaScript: The Good Parts book by Douglas Crockford.

    Now, there are certainly parts of JavaScript that are bad. I think these are well known by any developer that uses it. The parts that I feel are especially egregious are the following:

    - The global object
    - null vs. undefined
    - truthy and falsy
    - limited (nearly nonexistent) scoping
    - '==' and '===' (I just don't get the reason for coercion)

    However, what I am finding hiding under the covers of the bad things is a good language. I am finding that I am legitimately enjoying JavaScript. This I was not expecting. I'm not going to go into a huge dissertation on what I like about it, but some things include:

    - Object literal notation
    - dynamic typing
    - functional style (JavaScript: The Good Parts describes it as LISP in C clothing)
    - JSON (better than XML)

    There are parts of JavaScript that seem strange to OOP developers like myself. However, just because it is different or seems strange does not mean it is bad. Some differences are quite interesting and useful.

    I feel that it is important for developers to challenge their assumptions and also to be able to admit when they are wrong on a topic. Many different situations can arise that lead to this, such as choosing the wrong technology for a problem's solution, misunderstanding the requirements, etc. I decided to challenge my assumptions about JavaScript instead of moving straight into CoffeeScript or Dart. After exploring it, I find that I am beginning to enjoy it the more I use it. As long as there are those like Crockford to help guide me in the right way to code in JavaScript, I can create elegant and efficient solutions to problems and add another 'arrow' to the 'quiver', so to speak. I do still intend to learn CoffeeScript to see what the hub-bub is about, but now I no longer have to be afraid of JavaScript as a legitimate programming language.

    Has something similar ever happened to you? Tell me about it in the comments below.

    Read the article

  • Event Driven Behavior Tree: deterministic traversal order with parallel

    - by Heisenbug
    I've studied several articles and listened to some talks about behavior trees (mostly the resources available on AIGameDev by Alex J. Champandard). I'm particularly interested in event-driven behavior trees, but I still have some doubts on how to implement them correctly using a scheduler. Just a quick recap:

    Standard behavior tree:

    - Each execution tick the tree is traversed from the root in depth-first order.
    - The execution order is implicitly expressed by the tree structure. So in the case of behaviors parented to a parallel node, even if both children are executed during the same traversal, the first leaf is always evaluated first.

    Event-driven BT:

    - During the first traversal the nodes (tasks) are enqueued using a scheduler, which is responsible for updating only the running ones every update.
    - The first traversal implicitly produces a depth-first ordered queue in the scheduler.
    - Non-leaf nodes stay suspended most of the time. When a leaf node terminates (either with success or fail status) the parent (observer) is woken up, allowing the tree traversal to continue, and new tasks are enqueued in the scheduler.
    - Without parallel nodes in the tree there will be up to 1 task running in the scheduler.
    - Without parallel nodes, the tasks in the queue (excluding dynamic priority implementations) will always be ordered in depth-first order (is this right?).

    Now, from my understanding of a possible implementation, there are two requirements I think must be respected (I'm not sure though):

    - The result of the traversal should be independent of which implementation strategy is used.
    - The traversal result must be deterministic.

    I'm struggling trying to guarantee both in the case of parallel nodes. Here's an example:

        Parallel_1
        -->Sequence_1
        ---->leaf_A
        ---->leaf_B
        -->leaf_C

    Considering a FIFO policy in the scheduler, before the leaf_A node terminates the tasks in the scheduler are:

        P1(suspended), S1(suspended), leaf_A(running), leaf_C(running)

    When leaf_A terminates, leaf_B will be scheduled (at the end of the queue), so the queue will become:

        P1(suspended), S1(suspended), leaf_C(running), leaf_B(running)

    In this case leaf_B will be executed after leaf_C at every update, whereas with a non-event-driven traversal from the root node, leaf_B would always be evaluated before leaf_C.

    So I have a couple of questions: do I understand correctly how event-driven BTs work? How can I guarantee the depth-first order is respected with such an implementation? Is this a common issue, or am I missing something?
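
    One way to satisfy both requirements (this is my own sketch of a possible fix, not something taken from the articles) is to make the scheduler an ordered collection keyed by each task's depth-first index, assigned once when the tree is built. A newly activated sibling is then inserted where the tree order says it belongs rather than at the back of a FIFO, so in the example above leaf_B (lower index than leaf_C) updates first, matching the classic traversal:

        // C# sketch with illustrative names: schedule running tasks by their
        // precomputed depth-first index instead of FIFO arrival order.
        using System.Collections.Generic;
        using System.Linq;

        public abstract class BtTask
        {
            public int DfsIndex;              // assigned once by a depth-first walk
            public abstract void Update();
        }

        public class Scheduler
        {
            // SortedDictionary iterates in ascending key order, i.e. tree order.
            private readonly SortedDictionary<int, BtTask> running =
                new SortedDictionary<int, BtTask>();

            public void Schedule(BtTask t) { running[t.DfsIndex] = t; }
            public void Remove(BtTask t)   { running.Remove(t.DfsIndex); }

            public void Tick()
            {
                // Snapshot so completion handlers can schedule/remove safely
                // while we iterate.
                foreach (BtTask t in running.Values.ToList())
                    t.Update();
            }
        }

    Determinism then follows from the fixed indices: no matter when a task was enqueued, the update order in any tick is exactly the depth-first order of the currently running tasks.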

    Read the article

  • Need help on a problemset in a programming contest

    - by topher
    I attended a local programming contest in my country. The name of the contest is "ACM-ICPC Indonesia National Contest 2013". The contest ended on 2013-10-13 15:00:00 (GMT +7) and I am still curious about one of the problems. You can find the original version of the problem here.

    Brief problem explanation:

    There is a set of "jobs" (tasks) that should be performed on several "servers" (computers).

    - Each job should be executed strictly from start time Si to end time Ei.
    - Each server can only perform one task at a time.
    - (The complicated thing goes here) It takes some time for a server to switch from one job to another. If a server finishes job Jx, then to start job Jy it will need an intermission time Tx,y after job Jx completes. This is the time required by the server to clean up job Jx and load job Jy. In other words, job Jy can run after job Jx if and only if Ex + Tx,y <= Sy.

    The problem is to compute the minimum number of servers needed to do all jobs.

    Example: let there be 3 jobs:

        S(1) = 3  and E(1) = 6
        S(2) = 10 and E(2) = 15
        S(3) = 16 and E(3) = 20

        T(1,2) = 2, T(1,3) = 5
        T(2,1) = 0, T(2,3) = 3
        T(3,1) = 0, T(3,2) = 0

    In this example, we need 2 servers:

        Server 1: J(1), J(2)
        Server 2: J(3)

    Sample input (short explanation: the first 3 is the number of test cases, followed by the number of jobs — the second 3 means that there are 3 jobs in case 1 — then Si and Ei for each job, then the T matrix, sized equal to the number of jobs):

        3
        3
        3 6
        10 15
        16 20
        0 2 5
        0 0 3
        0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 0 0 0
        0 0 0 0
        0 0 0 0
        0 0 0 0
        4
        8 10
        4 7
        12 15
        1 4
        0 50 50 50
        50 0 50 50
        50 50 0 50
        50 50 50 0

    Sample output:

        Case #1: 2
        Case #2: 1
        Case #3: 4

    Personal comments: the times can be represented as a graph matrix, so I'm supposing this is a directed acyclic graph problem. The methods I tried so far are brute force and greedy, but I got Wrong Answer (unfortunately I don't have my code anymore). It could probably be solved by dynamic programming too, but I'm not sure. I really have no clear idea of how to solve this problem, so a simple hint or insight would be very helpful to me.
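
    For what it's worth, one standard way to model this (my own suggestion, not an official solution) is as a minimum path cover of the DAG with an edge x -> y whenever E(x) + T(x,y) <= S(y): each server handles one chain of compatible jobs, and the minimum number of chains equals N minus the size of a maximum bipartite matching between "predecessor" and "successor" copies of the jobs. A C# sketch using Kuhn's augmenting-path matching:

        // Sketch: minimum servers = n - maximum matching on the compatibility DAG.
        using System.Collections.Generic;

        static class MinServers
        {
            static List<int>[] adj;     // adj[x]: jobs y that can follow x on one server
            static int[] matchedTo;     // matchedTo[y] = predecessor chosen for y, or -1
            static bool[] visited;

            static bool TryAugment(int x)
            {
                foreach (int y in adj[x])
                {
                    if (visited[y]) continue;
                    visited[y] = true;
                    if (matchedTo[y] == -1 || TryAugment(matchedTo[y]))
                    {
                        matchedTo[y] = x;
                        return true;
                    }
                }
                return false;
            }

            public static int Solve(int n, long[] S, long[] E, long[,] T)
            {
                adj = new List<int>[n];
                for (int x = 0; x < n; x++)
                {
                    adj[x] = new List<int>();
                    for (int y = 0; y < n; y++)
                        if (x != y && E[x] + T[x, y] <= S[y])
                            adj[x].Add(y);
                }

                matchedTo = new int[n];
                for (int i = 0; i < n; i++) matchedTo[i] = -1;

                int matching = 0;
                for (int x = 0; x < n; x++)
                {
                    visited = new bool[n];
                    if (TryAugment(x)) matching++;
                }
                return n - matching;   // chains = nodes - matched edges
            }
        }

    On the first sample this gives edges 1->2 and 1->3, a maximum matching of size 1, and therefore 3 - 1 = 2 servers, matching the expected output.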

    Read the article

  • ArchBeat Link-o-Rama for 2012-10-12

    - by Bob Rhubart
    Oracle Technology Network Architect Day in Los Angeles, Oct 25
    This is your brain on IT architecture. Stuff your cranium with architecture by attending Oracle Technology Network Architect Day in Los Angeles, October 25, 2012, at the Sofitel Los Angeles, 8555 Beverly Boulevard, Los Angeles, CA 90048. Technical sessions, panel Q&A, and peer roundtables--plus a free lunch. Register now.

    WebCenter Sites Gadget Development Concepts Quickstart | John Brunswick
    What are gadgets? "At their most basic level they can be thought of as lightweight portlets that run largely on the client side of an architecture," says John Brunswick. "Gadgets provide a cross-platform container to run reusable UI modules that generally expose dynamic information to an end user, allowing for some level of end user customization."

    ORCLville: OOW 2012 - A Not So Brief Recap
    Oracle ACE Director Floyd Teter, an Applications & Apps Technology specialist, shares his personal, frank, and extensive recap of Oracle OpenWorld 2012.

    Fusion Applications Technical Tips | Naveen Nahata
    "Setting memory parameters for Admin and Managed servers of various domains in Fusion Applications can be, let us say, a little daunting," says Oracle Fusion Middleware A-Team member Naveen Nahata. "While all this may look complicated and intimidating, it is actually relatively simple once you understand how it all works."

    Following the Thread in OSB | Antony Reynolds
    Antony Reynolds recently led an Oracle Service Bus POC in which his team needed to get high throughput from an OSB pipeline. "Imagine our surprise when, on stressing the system, we saw it lock up, with large numbers of blocked threads." He shares the details of the problem and the solution in this extensive technical post.

    ExaLogic Hackers Night - November 19th Nürnberg Germany | WebLogic Partner Community EMEA
    Want to get your hands on Oracle Exalogic? Make your way to Nürnberg, Germany for this Exalogic Hacker's Night on November 19, 2012. Experts will be on hand to help you test your ideas. (The blog post is in English, but the event registration page is in German.)

    Thought for the Day
    "A foolish consistency is the hobgoblin of little minds…" — Ralph Waldo Emerson (May 25, 1803 – April 27, 1882)
    Source: SoftwareQuotes.com

    Read the article

  • Jitter during wall collisions with Bullet Physics: contact/penetration tolerance?

    - by Niriel
    I use the Bullet physics engine through Panda3D. My scene is still very simple, think 'Wolfenstein 3D': tile-based, walls are solid cubes. I expect walls to block the player, and I expect the player to slide along the walls in case of non-normal incidence. What I get is what I expect, with one difference: there is some jitter. If I try to force myself into the wall, then I see the frames blinking quickly between two positions. These differ by about 0.04 units of distance, which corresponds to 4 cm in my game. I noticed 4 cm elsewhere too: the bottom of my player capsule is 4 cm below ground when at rest.

    Does that mean that there is somewhere in the Bullet engine a default 0.04-unit tolerance to differentiate contact from collision? If so, what should I do? Should I change the scale of my game so that these 0.04 units correspond to 0.4 cm, making the jitter ten times smaller? Or can I ask Bullet to change its tolerance to a smaller value?

    Edit: This is the jitter I get (6.155 - 6.118 = 0.036):

        LPoint3f(0, 6.11694, 0.835)
        LPoint3f(0, 6.15499, 0.835)
        LPoint3f(0, 6.11802, 0.835)
        LPoint3f(0, 6.15545, 0.835)
        LPoint3f(0, 6.11817, 0.835)
        LPoint3f(0, 6.15726, 0.835)
        LPoint3f(0, 6.11876, 0.835)
        LPoint3f(0, 6.15911, 0.835)
        LPoint3f(0, 6.11937, 0.835)

    I found a setMargin method. I set it to 5 mm both on the BoxShape for the walls and on the capsule shape for the player. It still jitters by about 35 mm, as illustrated by this log (11.117 - 11.082 = 0.035):

        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1169, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.082, 0.905)
        LPoint3f(0, 11.117, 0.905)
        LPoint3f(0, 11.0821, 0.905)
        LPoint3f(0, 11.1175, 0.905)
        LPoint3f(0, 11.0822, 0.905)
        LPoint3f(0, 11.1178, 0.905)

    The margin on the capsule did change my penetration with the floor though; I'm a bit higher (0.905 instead of 0.835). However, it did not change anything when colliding with the walls. How can I make the collisions against the walls less jittery?

    Edit, the day after: after more investigation, it appears that dynamic objects behave well. My problem comes from the btKinematicCharacterController that I use for moving my character; that stuff is totally bugged, according to the whole Internet :/.

    Read the article

  • Unable to find good parameters for behavior of a puck in Farseer

    - by Krumelur
    EDIT: I have tried all kinds of variations now. The last one was to adjust the linear velocity in each step: newVel = oldVel * 0.9f. All of this, including your proposals, kind of works, but in the end, if the velocity is small enough, the puck sticks to the wall and slides either vertically or horizontally. I can't get rid of this behavior. I want it to bounce, no matter how slow it is. Restitution is 1 for all objects, friction is 0 for all. There is no gravity. What am I possibly doing wrong?

    In my battle to learn and understand Farseer, I'm trying to set up a simple air-hockey-like table with a puck on it. The table's border body is made up from:

        Vertices aBorders = new Vertices( 4 );
        aBorders.Add( new Vector2( -fHalfWidth, fHalfHeight ) );
        aBorders.Add( new Vector2( fHalfWidth, fHalfHeight ) );
        aBorders.Add( new Vector2( fHalfWidth, -fHalfHeight ) );
        aBorders.Add( new Vector2( -fHalfWidth, -fHalfHeight ) );
        FixtureFactory.AttachLoopShape( aBorders, this );
        this.CollisionCategories = Category.All;
        this.CollidesWith = Category.All;
        this.Position = Vector2.Zero;
        this.Restitution = 1f;
        this.Friction = 0f;

    The puck body is defined with:

        FixtureFactory.AttachCircle( DIAMETER_PHYSIC_UNITS / 2f, 0.5f, this );
        this.Restitution = 0.1f;
        this.Friction = 0.5f;
        this.BodyType = FarseerPhysics.Dynamics.BodyType.Dynamic;
        this.LinearDamping = 0.5f;
        this.Mass = 0.2f;

    I'm applying a linear force to the puck:

        this.oPuck.ApplyLinearImpulse( new Vector2( 1f, 1f ) );

    The problem is that the puck and the walls appear to be sticky. This means that the puck's velocity drops to zero too quickly below a certain velocity. The puck gets reflected by the walls a couple of times and then just sticks to the left wall and continues sliding downwards along it. This looks very unrealistic. What I'm really looking for is a puck-wall collision that slows down the puck only a tiny little bit. After tweaking all the values left and right, I was wondering if I'm doing something wrong. Maybe some expert can comment on the parameters?

    Read the article

  • A solution for a PHP website without a framework

    - by lortabac
    One of our customers asked us to add some dynamic functionality to an existing website made of several static HTML pages. We normally work with an MVC framework (mostly CodeIgniter), but in this case moving everything to a framework would require too much time. Since it is not a big project, not having the full functionality of a framework is not a problem. But the question is how to keep the code clean.

    The solution I came up with is to divide the code into libraries (the application's API) and models. So inside the HTML there will only be API calls, and readability will not be sacrificed. I implemented this with a sort of static registry (sorry if I'm wrong, I am not a design pattern expert):

        <?php
        class Custom_framework {
            //Global database instance
            private static $db;
            //Registered models
            private static $models = array();
            //Registered libraries
            private static $libraries = array();

            //Returns a database class instance
            static public function get_db(){
                if(isset(self::$db)){
                    //If instance exists, returns it
                    return self::$db;
                } else {
                    //If instance doesn't exist, creates it
                    self::$db = new DB;
                    return self::$db;
                }
            }

            //Returns a model instance
            static public function get_model($model_name){
                if(isset(self::$models[$model_name])){
                    //If instance exists, returns it
                    return self::$models[$model_name];
                } else {
                    //If instance doesn't exist, creates it
                    if(is_file(ROOT_DIR . 'application/models/' . $model_name . '.php')){
                        include_once ROOT_DIR . 'application/models/' . $model_name . '.php';
                        self::$models[$model_name] = new $model_name;
                        return self::$models[$model_name];
                    } else {
                        return FALSE;
                    }
                }
            }

            //Returns a library instance
            static public function get_library($library_name){
                if(isset(self::$libraries[$library_name])){
                    //If instance exists, returns it
                    return self::$libraries[$library_name];
                } else {
                    //If instance doesn't exist, creates it
                    if(is_file(ROOT_DIR . 'application/libraries/' . $library_name . '.php')){
                        include_once ROOT_DIR . 'application/libraries/' . $library_name . '.php';
                        self::$libraries[$library_name] = new $library_name;
                        return self::$libraries[$library_name];
                    } else {
                        return FALSE;
                    }
                }
            }
        }

    Inside the HTML, API methods are accessed like this:

        <?php echo Custom_framework::get_library('My_library')->my_method(); ?>

    It looks to me like a practical solution, but I wonder what its drawbacks are, and what the possible alternatives would be.

    Read the article

  • Ubuntu 13.XX unable to mount USB HDD. Tried everything. I/O error boot sector/file system

    - by XaviGG
    I know that there are many related posts, but none of them helped me. I will jump to the last test because it is the one that should work, but it does not.

    An external HDD with a single partition, slow-formatted as NTFS in Windows, empty and clean. Checked for errors; it reports that no errors were found. Moving to Ubuntu 13.04...

    GParted throws the first error when trying to read the disk: Input/output error. The content of the disk appears as unknown. I am unable to create a partition table or format it, getting the same error when trying. If I try to mount it in the terminal, it tells me the same, specifying that there is also an I/O error reading the boot sector.

    I have had this problem since I upgraded (always with a fresh install) to 13.04. I thought it would be solved by 13.10, but it has the same behavior. I tried with two different HDDs (HD and SSHD) that work perfectly in Windows 7. In 13.04 at least it made an attempt at mounting, where the icon of the drive kept appearing and disappearing until finally it disappeared. But now it doesn't even try.

    Possible causes: the HDD was my old main HDD, so it had WIN, RECOVERY, SYSTEM, UBU, and SWAP partitions. Maybe the way or place where the partition table is defined is not the best for an external HDD, but I don't know a lot about that topic.

    I would appreciate it a lot if someone could give me a guideline to convert one of these HDDs into a working external HDD. There are no files to recover, nothing to care about. Just format the disk completely and be able to use it for storing backups, without having to move the files first to the Windows partition, load Windows, and then copy them to the external HDD, because I want to use a file comparator for the backups. Thanks a lot.

    Edit 1: I found an option in Windows to convert it to a dynamic HDD that warns me that I won't be able to run an O.S. after changing. I suppose that is what I need, because in the current mode I cannot safely extract it. But it gave me an error saying that it couldn't change the mode.

    Read the article

  • WHERE x = @x OR @x IS NULL

    - by steveh99999
    Every SQL DBA and developer should read the blog of MVP Erland Sommarskog – but particularly his article on dynamic search conditions in T-SQL. I've linked above to his SQL 2005 article, but his 2008 version is also a must-read. I seem to regularly come across uses of the SQL in the title above… Erland's article explains in detail why this is inefficient, but I came across a nice example recently.

    A stored procedure contained the following code:

        WHERE @Name is null or [Name] like @Name

    As a nonclustered index exists on the Name column, you might assume this would be handled efficiently by SQL Server. However, I got the following output from SET STATISTICS IO:

        Table 'xxxxx'. Scan count 15, logical reads 47760, physical reads 9, read-ahead reads 13872,
        lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    Note the high number of logical reads. After a bit of investigation, we found that @Name could never actually be set to NULL in this particular example, i.e. the @x IS NULL was spurious. So we changed the call to:

        WHERE [Name] like @Name

    Now, how much more efficient is this code?

        Table 'xxxxx'. Scan count 3, logical reads 24, physical reads 0, read-ahead reads 0,
        lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    A nice easy win in this case: a full index scan has been replaced by a significantly more efficient index seek. I managed to recreate the same behaviour on AdventureWorks; here's a quick query to demonstrate:

        USE adventureworks
        SET STATISTICS IO ON

        DECLARE @id INT = 51721

        SELECT * FROM Sales.SalesOrderDetail
        WHERE @id IS NULL OR salesorderid = @id

        SELECT * FROM Sales.SalesOrderDetail
        WHERE salesorderid = @id

    Take a look at the STATISTICS IO output and compare the actual query plans used to prove the impact of WHERE @id IS NULL. And just to follow some of Erland's advice, here's how you could get similar performance if it was possible that @id could actually sometimes contain NULL:

        DECLARE @sql NVARCHAR(4000), @parameterlist NVARCHAR(4000)
        DECLARE @id INT = 51721 -- or change to NULL to prove the query is functionally correct

        SET @sql = 'SELECT * FROM Sales.SalesOrderDetail WHERE 1 = 1'
        IF @id IS NOT NULL
            SET @sql = @sql + ' AND salesorderid = @id'
        IF @id IS NULL
            SET @sql = @sql + ' AND salesorderid IS NULL'

        SET @parameterlist = '@id INT'
        EXEC sp_executesql @sql, @parameterlist, @id

    Sometimes I think we focus too much on hardware and SQL Server configuration, when really the answer is to focus on writing efficient SQL.

    Read the article

  • Practices for domain models in Javascript (with frameworks)

    - by AndyBursh
    This is a question I've to-and-fro'd with for a while, and searched for and found nothing on: what are the accepted practices surrounding duplicating domain models in JavaScript for a web application, when using a framework like Backbone or Knockout?

    Given a web application of a non-trivial size with a set of domain models on the server side, should we duplicate these models in the web application (see the example at the bottom)? Or should we use JavaScript's dynamic nature to load these models from the server?

    To my mind, the arguments for duplicating the models are in easing validation of fields, ensuring that fields that are expected to be present are in fact present, etc. My approach is to treat the client-side code like an almost separate application, doing trivial things itself and only relying on the server for data and complex operations (which require data the client side doesn't have). I think treating the client-side code like this is akin to the separation between entities from an ORM and the models used with the view in the UI layer: they may have the same fields and relate to the same domain concept, but they're distinct things.

    On the other hand, it seems to me that duplicating these models on the client side is a clear violation of DRY and likely to lead to differing results on the client and server side (where one piece gets updated but the other doesn't). To avoid this violation of DRY we can simply use JavaScript's dynamism to get the field names and data from the server as and when they're needed.

    So: are there any accepted guidelines around when (and when not) to repeat yourself in these situations? Or is this a purely subjective thing, based on the project and developer(s)?

    Example

    Server-side model:

        class M {
            int A
            DateTime B
            int C
            int D = (A*C)
            double SomeComplexCalculation = ServiceLayer.Call();
        }

    Client-side model:

        function M(){
            var self = this;
            self.A = ko.observable();
            self.B = ko.observable();
            self.C = ko.observable();
            self.D = function() {
                return self.A() * self.C();
            }
            self.SomeComplexCalculation = ko.observable();
            return self;
        }

        M.GetComplexValue = function(){
            this.SomeComplexCalculation(Ajax.CallBackToServer());
        };

    I realise this question is quite similar to this one, but I think this is more about almost wholly untying the web application from the server, where that question is about doing this only in the case of complex calculation.

    Read the article

  • Get Proactive With AutoVue Support!

    - by GrahamOracle
    I'm pleased to announce the AutoVue "Get Proactive" page within the My Oracle Support portal. This new dynamic document links to valuable resources for AutoVue users, administrators, and integrators – not only to remain up-to-date on Support and technical topics, but also to enhance your knowledge in planning for upgrade and maintenance activities for your AutoVue products.

    To access the AutoVue Get Proactive page, log into the MOS portal, search for note number 432.1 in the search field at the top right, and once in the document select "Agile and AutoVue" from the dropdown (as shown in the following screenshot):

    The Get Proactive page is a working document, and we plan to include new resources as they become available. Therefore make sure to bookmark the document, and if you have any suggestions please post them using the 'Add Comment' feature within the document, or through the AutoVue Community.

    Read the article

  • How to set up an rsync backup to Ubuntu securely?

    - by ws_e_c421
    I have been following various tutorials and blog posts on setting up an Ubuntu machine as a backup "server" (I'll call it a server, but it's just running Ubuntu desktop) that I push new files to with rsync. Right now, I am able to connect to the server from my laptop using rsync and ssh, with an RSA key that I created and no password prompt, when my laptop is connected to the home router that the server is also connected to.

    I would like to be able to send files from my laptop when I am away from home. Some of the tutorials I have looked at had brief suggestions about security, but they didn't focus on it. What do I need to do to let my laptop send files to the server without making it too easy for someone else to hack into the server?

    Here is what I have done so far:

    - Ran ssh-keygen and ssh-copy-id to create a key pair for my laptop and server.
    - Created a script on the server to write its public IP address to a file, encrypt the file, and upload it to an FTP server I have access to (I know I could sign up for a free dynamic DNS account for this part, but since I have the FTP account and don't really need to make the IP publicly accessible, I thought this might be better).

    Here are the things I have seen suggested:

    - Port forwarding: I know I need to assign the server a fixed IP address on the router and then tell the router to forward a port or ports to it. Should I just use port 22 or choose a random port and use that?
    - Turn on the firewall (ufw). Will this do anything, or will my router already block everything except the port I want?
    - Run fail2ban.

    Are all of those things worth doing? Should I do anything else? Could I set up the server to allow connections with the RSA key only (and not with a password), or will fail2ban provide enough protection against malicious connection attempts? Is it possible to limit the kinds of connections the server allows (e.g. only ssh)?

    I hope this isn't too many questions. I am pretty new to Ubuntu (but I use the shell and bash scripts on OS X). I don't need to have the absolute most secure setup. I'd like something that is reasonably secure without being so complicated that it could easily break in a way that would be hard for me to fix.
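
    On the key-only question: yes, OpenSSH can be told to refuse passwords entirely, independent of fail2ban. A minimal sketch of the relevant /etc/ssh/sshd_config settings (the port number and user name here are just example values, not anything from the post):

        # /etc/ssh/sshd_config (excerpt) - example values
        Port 2222                      # match whatever port the router forwards
        PasswordAuthentication no      # RSA key only, no password prompts
        PubkeyAuthentication yes
        PermitRootLogin no
        AllowUsers backupuser          # only this account may log in over ssh

    and then, assuming ufw, allowing only that port in so ssh/rsync is the single reachable service:

        sudo ufw default deny incoming
        sudo ufw allow 2222/tcp
        sudo ufw enable
        sudo service ssh restart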

    Read the article

  • Pick-Up: WebLogic Server JDBC Data Sources (10.3.4) | WebLogic Channel | Oracle Japan

    - by ???02
    An introduction to the JDBC data source features of Oracle WebLogic Server (10.3.4), including Oracle RAC support, JDBC data sources, GridLink data sources, Fast Connection Failover (FCF), Runtime Connection Load Balancing (RCLB), and Oracle Single Client Access Name (SCAN) addresses.

    A GridLink data source gives WebLogic Server connectivity to an Oracle RAC database through a single data source, rather than requiring a separate data source per Oracle RAC node. Features of GridLink data sources for Oracle RAC mentioned in the article include:

    - Fast Connection Failover (FCF)
    - Runtime Connection Load Balancing (RCLB)
    - XA transaction support with Oracle RAC
    - Support for Oracle Single Client Access Name (SCAN) addresses

    A GridLink data source can be configured with SCAN addresses for both the ONS client and the Oracle RAC listener, so that Oracle RAC nodes can be added or removed without reconfiguring the data source.

    On the performance side, the article mentions options such as Oracle Net SDP, Oracle Dynamic Buffering, and Oracle Reduced Data Copies, as well as the KeepConnAfterLocalTx attribute of the Oracle WebLogic Server MBean.

    References: the Oracle Fusion Middleware Oracle WebLogic Server JDBC documentation, covering topics such as WebLogic JDBC configuration, JDBC data sources, GridLink data sources, JDBC multi data sources, using WebLogic Server with Oracle RAC, WebLogic Server JDBC drivers, and WebLogic JDBC monitoring; Programming JDBC (10.3.4); the Oracle WebLogic Server JDBC configuration guide (10.3.3); and the WebLogic Server documentation (10.3.4) on the JDBC APIs, including the WebLogic Type 4 JDBC drivers.

    Read the article

  • PXE boot linux. PXE-E51: No DHCP or proxyDHCP offers were received

    - by athspk
    I am trying to have an Ubuntu box (192.168.10.9) acting as a PXE server, but I have trouble getting DHCP to work. The PXE server is connected to a SOHO router (192.168.10.1) acting as a switch. I have disabled the DHCP server on the router.

        $ dhcpd --version
        isc-dhcpd-4.2.4

    The contents of /etc/dhcp/dhcpd.conf:

        ddns-update-style none;
        option domain-name-servers 192.168.10.1;
        default-lease-time 3600;
        max-lease-time 7200;
        authoritative;
        log-facility local7;
        allow booting;
        allow bootp;

        subnet 192.168.10.0 netmask 255.255.255.0 {
            range dynamic-bootp 192.168.10.101 192.168.10.200;
            option routers 192.168.10.1;
            option broadcast-address 192.168.10.255;
            next-server 192.168.10.9;
            filename "/tftpboot/pxelinux.0";
        }

    The contents of /etc/default/isc-dhcp-server:

        INTERFACES="eth0"

    When the client boots, it tries to get an IP address from the server but fails with the following error message:

        PXE-E51: No DHCP or proxyDHCP offers were received.

    On the server side, I was tailing /var/log/syslog while the client tried to boot:

        Dec  4 12:57:10 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec  4 12:57:11 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec  4 12:57:12 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec  4 12:57:12 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec  4 12:57:17 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec  4 12:57:17 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0
        Dec  4 12:57:25 athspk-Dell dhcpd: DHCPDISCOVER from 00:1f:d0:8e:6b:db via eth0
        Dec  4 12:57:25 athspk-Dell dhcpd: DHCPOFFER on 192.168.10.101 to 00:1f:d0:8e:6b:db via eth0

    Please advise. Thanks in advance.

    Read the article

  • Cannot reactivate RAID-5 volume: The size of the plex member is invalid

    - by Ian Boyd
    We had a 3-drive Windows Server 2008 R2 RAID-5 volume fail (operating in redundancy mode):

        WDC 1 TB
        WDC 1 TB
        WDC 1 TB

    We removed the failed hard drive and put a WDC 1 TB drive (that we had standing by) into the machine. When launched, Disk Management asked permission to "initialize" the disk as either:

    - Master Boot Record (MBR)
    - GUID Partition Table (GPT)

    We initialized the disk as GPT, converted it to dynamic, and tried to use the Repair Volume command, except it was greyed out (which is a terrifying thing on a failed production server hosting 3 virtual servers).

    So I tried the diskpart command-line tool. First we look for our RAID-5 volume that is in Failed Rd mode:

        DISKPART> list volume

          Volume ###  Ltr  Label        Fs     Type       Size     Status     Info
          ----------  ---  -----------  -----  ---------  -------  ---------  --------
          Volume 0     E   VMs (Raid5)  NTFS   RAID-5     1863 GB  Failed Rd
          Volume 1     D                       DVD-ROM       0 B   No Media
          Volume 2         System Rese  NTFS   Partition   100 MB  Healthy    System
          Volume 3     C                NTFS   Partition  1862 GB  Healthy    Boot

    There, Volume 0. Make that our active context:

        DISKPART> select volume 0

        Volume 0 is the selected volume.

    Now we need to find the disk we will be repairing the volume with:

        DISKPART> list disk

          Disk ###  Status   Size     Free     Dyn  Gpt
          --------  -------  -------  -------  ---  ---
          Disk 0    Online    931 GB      0 B  *
          Disk 1    Online    931 GB   931 GB  *
          Disk 2    Online   1863 GB      0 B
          Disk 3    Online    931 GB      0 B  *
          Disk M0   Missing     0 B      0 B   *

    The disk with 931 GB free: Disk 1. Now we just need to repair the volume:

        DISKPART> repair disk=1

        Virtual Disk Service error:
        The size of the plex member is invalid.

    Read the article

  • Dell VRTX - slow cluster shared storage

    - by NorbyTheGeek
    I have a brand new Dell VRTX box set up as a failover cluster running HA Hyper-V virtual machines. This is my first time setting up clustering, and my first time with one of these boxes, so I'm sure I've missed something.

    The virtual machines are experiencing high disk latency and bad performance when accessing their VHD(X) files located on a Cluster Shared Volume. The setup:

    - The VRTX has 10 x 900 GB 10K SAS drives in a RAID 6 configuration, with the redundant Shared PERC 8 controllers.
    - Both blades have full access to the virtual disks.
    - There are two M520 blades installed, each with 128 GB RAM.
    - MPIO is configured for the PERC 8 controllers.
    - The operating system on the blades is Server 2012 (NOT R2).
    - The RAID 6 array is split into a small (8 GB) volume for the cluster quorum witness and a large (6.5 TB) volume for a Cluster Shared Volume (mounted on the nodes as C:\ClusterStorage\Volume1).

    An example of slow disk access: logging into a Server 2012 VM and having Server Manager come up automatically. Disk access goes to 100%, with write speeds at 20 MB/s or so, read speeds of 500 KB/s or so, and an Average Response Time of over 1000 ms, sometimes spiking at 4000-5000 ms. It's the latency that really worries me.

    Is there something specific I should look at in my configuration? It doesn't seem to matter whether I use VHD or VHDX, dynamic or static.

    Read the article

  • "0x80004005: Unspecified error" when renaming folders in Windows 7

    - by Michael Steele
    Setup

    I've just installed Windows 7 Ultimate 64-bit. The operating system is installed on an 80 GB Intel X25-M. I have a secondary 500 GB Barracuda 7200.12 to be used for storage. This second drive is mounted as 'G'.

    Problem

    I'm getting an error whenever I try to rename a folder within this drive. The error says:

        An unexpected error is keeping you from renaming the folder. If you continue to receive
        this error, you can use the error code to search for help with this problem.

        Error 0x80004005: Unspecified error

    Clicking the "Try Again" button gets past the error and correctly renames the folder. If I create a folder and leave its name as "New Folder", then I don't see an error. I'm also able to manipulate files however I want without seeing an error.

    Things I've Tried

    - Changing permissions on the drive
    - Reformatting the drive. I've tried using both a "basic" and a "dynamic" partition table.
    - Checking the drive for errors using Microsoft tools

    Things I've Not Yet Tried

    When I first installed Windows I was given a choice between three types of partitions to use. I no longer remember what the choices were or what I selected. One was a GUID partition table, one was a traditional MBR, and I can't remember the third option.

    Read the article
