Search Results

Search found 13749 results on 550 pages for 'reason'.

Page 170/550

  • The proxy server received an invalid response from an upstream server

    - by chandank
    I have a Tomcat server behind Apache, using mod_ssl and a reverse proxy to Tomcat. Everything runs on the default ports. The full error is: "Proxy Error. The proxy server received an invalid response from an upstream server. The proxy server could not handle the request POST /pages/doeditpage.action. Reason: Error reading from remote server". If I clear the browser cache the error goes away, then comes back after a few attempts. I tested this with Chrome/Firefox/IE on Windows; oddly, it works perfectly with Chrome/Firefox on Linux. I have googled a lot and there are a few answers on Stack Overflow, but none of them solve my problem. Is this a server-side problem? So many browsers can't all be wrong at the same time on Windows.
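
    A minimal Apache-side sketch of the workarounds commonly suggested for this "Error reading from remote server" symptom, assuming mod_ssl with mod_proxy in front of Tomcat on its default HTTP connector (the host name, path and port below are placeholders, not details from the question):

        <VirtualHost *:443>
            SSLEngine on
            # Avoid reusing backend keep-alive connections that Tomcat may have
            # already dropped - a common cause of failed POSTs through mod_proxy.
            SetEnv proxy-initial-not-pooled 1
            SetEnv proxy-nokeepalive 1

            ProxyPass        /pages http://localhost:8080/pages retry=0
            ProxyPassReverse /pages http://localhost:8080/pages
        </VirtualHost>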

    Read the article

  • ADF Bounded Taskflow Activation

    - by Vijay Mohan
    Hey guys, it's really been a while since I last blogged. I just came across a hard-to-debug scenario, so I thought of sharing it for the benefit of ADF developers. I had a page fragment (jsff) wrapped inside a bounded task flow whose activation was conditional, based on a requestScope property (be it a requestScope variable or a property coming from a requestScope bean). As soon as the task flow activates and the page renders, the requestScope parameter's life span ends. After that, when you raise an event inside the page (a commandLink click, mouseover, valueChange event, etc.), the event fires the first time but fails to effect any change in the page; on subsequent attempts the event doesn't fire at all. Any guesses as to what the culprit could be? I already gave the reason in the opening paragraph: the first time the event fires, the framework sees that the page is already in an inactive state, so it fails to effect the change, and on subsequent attempts it doesn't even fire the event because it already knows the page/region is inactive. So, in such a scenario we must base the activation on either a pageFlowScope property or a transient VO attribute that lives for the page's life span.
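
    As a rough sketch of what that looks like in practice, the task flow executable in the page definition could drive activation from pageFlowScope instead of requestScope (a minimal, hypothetical snippet - the task flow id and property name are placeholders, not from the original post):

        <taskFlow id="myRegion1"
                  taskFlowId="/WEB-INF/my-flow.xml#my-flow"
                  activation="conditional"
                  active="#{pageFlowScope.activateMyRegion}"
                  xmlns="http://xmlns.oracle.com/adf/controller/binding"/>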

    Read the article

  • How to monitor CPU usage and performance on a Hyper-V server with several VM's

    - by Bjørn
    Hello, I have a server running Windows 2008 64-bit Hyper-V, with 8 GB of RAM and an Intel Xeon X3440 @ 2.53 GHz, which gives me 8 logical cores in Performance Monitor on the host system. I have set up three virtual machines, all running Windows 2008 32-bit: a build server running TeamCity, a staging server, and a SQL Server machine running SQL Server 2005. I have some trouble with the setup in that the host remains responsive at all times, even though the VMs are seemingly working at 100% CPU and are very sluggish and unresponsive. (I have asked a separate question about that.) So the question here is: what is the best way to monitor how the physical CPUs are actually utilized? The reason I am asking is that I am told I cannot reliably use Task Manager to monitor CPU usage in a VM.
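
    One common suggestion is to read the hypervisor's own performance counters on the host rather than Task Manager inside a guest. A minimal sketch with PowerShell's Get-Counter (assuming PowerShell 2.0 or later is available on the parent partition):

        # Overall physical CPU utilisation as seen by the hypervisor
        Get-Counter '\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time'

        # Per-virtual-processor usage, broken down by VM
        Get-Counter '\Hyper-V Hypervisor Virtual Processor(*)\% Guest Run Time'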

    Read the article

  • Looking for a very bare and basic blog system?

    - by Shedo Chung-Hee Surashu
    Does anyone know of a user-hosted blog system that can be an alternative to WordPress? It should only have the bare necessities of a blog; I'll just take it from there. Specifically: an admin account (for posting, editing, etc.); an archive system; a posting system with a character limit (for the "Read More" links); comments from other users (requiring only the user's name, email and/or website, plus the actual comment); and pages (so I can create pages for custom content). The reason I want this is that WordPress is already so bloated that there are a ton of features I don't need. I'd mostly be satisfied with a blogging system that has the above feature set, and I'll add my own features as I need them along the way.

    Read the article

  • Storing Entity Framework Entities in a Separate Assembly

    - by Anthony Trudeau
    The Entity Framework has been valuable to me since it came out, because it provides a convenient and powerful way to model against my data source in a consistent way.  The first versions had some deficiencies that, for me, mostly fell into the category of tight coupling between the model and its resulting object classes (entities). Version 4 of the Entity Framework pretty much solves this with support for T4 templates, which let you implement your entities as self-tracking entities, plain old CLR objects (POCO), et al.  Doing this involves either specifying a new code generation template or implementing one yourself.  Visual Studio 2010 ships with a self-tracking entities template, and a POCO template is available from the Extension Manager.  (Extension Manager is very nice, but it's very easy to waste a bunch of time exploring add-ins.  You've been warned.) In a current project I wanted to use POCO; however, I didn't want my entities in the same assembly as the context classes.  It would be nice if this were automatic, but since it isn't, here are the simple steps to move them.  These steps detail moving the entity classes and not the context.  The context can be moved in the same way, but I don't see a compelling reason to physically separate the context from my model. 1. Turn off code generation for the template: set the Custom Tool property for the entity template file to an empty string (the entity template file will be named something like MyModel.tt). 2. Expand the tree for the entity template file and delete all of its items; these are the items that were automatically generated when you added the template. 3. Create a project for your entities (if you haven't already). 4. Add an existing item and browse to your entity template file, but add it as a link (do not add it directly).  Adding it as a link will allow the model and the template to stay in sync, but the code generation will occur in the new assembly.

    Read the article

  • That Escalated Quickly

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2014/05/17/that-escalated-quickly.aspx I have been working remotely out of my home for over 4 years now. All of my coworkers during that time have also worked remotely. Lots of folks have written about the challenges inherent in facilitating communication on remote teams and strategies for overcoming them. A popular theme around this topic is the notion of “escalating communication”. In this context “escalating” means taking a conversation from one mode of communication to a different, higher fidelity mode of communication. Here are the five modes of communication I use at work in order of increasing fidelity: Email – This is the “lowest fidelity” mode of communication that I use. I usually only check it a few times a day (and I’m trying to check it even less frequently than that) and I only keep items in my inbox if they represent an item I need to take action on that I haven’t tracked anywhere else. Forums / Message boards – Being a developer, I’ve gotten into the habit of having other people look over my code before it becomes part of the product I’m working on. These code reviews often happen in “real time” via screen sharing, but I also always have someone else give all of the changes another look using pull requests. A pull request takes my code and lets someone else see the changes I’ve made side-by-side with the existing code so they can see if I did anything dumb. Pull requests can facilitate a conversation about the code changes in an online-forum-like style. Some teams I’ve worked on also liked using tools like Trello or Google Groups to have on-going conversations about a topic or task that was being worked on. Chat & Instant Messaging – Chat and instant messaging are the real workhorses for communication on the remote teams I’ve been a part of. I know some teams that are co-located that also use it pretty extensively for quick messages that don’t warrant walking across the office to talk with someone but require more immediacy than an e-mail. For the purposes of this post I think it’s important to note that the terms “chat” and “instant messaging” might insinuate that the conversation is happening in real time, but that’s not always true. Modern chat and IM applications maintain a searchable history so people can easily see what might have been discussed while they were away from their computers. Voice, Video and Screen sharing – Everyone’s got a camera and microphone on their computers now, and there is an abundance of services that will let you use them to talk to other people who have cameras and microphones on their computers. I’m including screen sharing here as well because, in my experience, these discussions typically involve one or more people showing the other participants something that’s happening on their screen. Obviously, this mode of communication is much higher-fidelity than any of the ones listed above. Scheduled meetings are typically conducted using this mode of communication. In Person – No matter how great communication tools become, there’s no substitute for meeting with someone face-to-face. However, opportunities for this kind of communication are few and far between when you work on a remote team. When a conversation gets escalated that usually means it moves up one or more positions on this list. A lot of people advocate jumping to #4 sooner rather than later. 
Like them, I used to believe that, if it was possible, organizing a call with voice and video was automatically better than any kind of text-based communication could be. Lately, however, I’m becoming less convinced that escalating is always the right move. Working Asynchronously Last year I attended a talk at our local code camp given by Drew Miller. Drew works at GitHub and was talking about how they use GitHub internally. Many of the folks at GitHub work remotely, so communication was one of the main themes in Drew’s talk. During the talk Drew used the phrase “asynchronous communication” to describe their use of chat and pull request comments. That phrase stuck in my head because I hadn’t heard it before but I think it perfectly describes the way in which remote teams often need to communicate. You don’t always know when your co-workers are at their computers or what hours (if any) they are working that day. In order to work this way you need to assume that the person you’re talking to might not respond right away. You can’t always afford to wait until everyone required is online and available to join a voice call, so you need to use text-based, persistent forms of communication so that people can receive and respond to messages when they are available. Going back to my list from the beginning of this post for a second, I characterize items #1-3 as being “asynchronous” modes of communication while we could call items #4 and #5 “synchronous”. When communication gets escalated it’s almost always moving from an asynchronous mode of communication to a synchronous one. Now, to the point of this post: I’ve become increasingly reluctant to escalate from asynchronous to synchronous communication for two primary reasons: 1 – You can often find a higher fidelity way to convey your message without holding a synchronous conversation. 2 – Asynchronous modes of communication are (usually) persistent and searchable. You Don’t Have to Broadcast Live Let’s start with the first reason I’ve listed. A lot of times you feel like you need to escalate to synchronous communication because you’re having difficulty describing something that you’re seeing in words. You want to provide the people you’re conversing with some audio-visual aids to help them understand the point that you’re trying to make and you think that getting on Skype and sharing your screen with them is the best way to do that. Firing up a screen sharing session does work well, but you can usually accomplish the same thing in an asynchronous manner. For example, you could take a screenshot and annotate it with some text and drawings to illustrate what it is you’re seeing. If a screenshot won’t work, taking a short screen recording while you narrate over it and posting the video to your forum or chat system along with a text-based description of what’s in the recording that can be searched for later can be a great way to effectively communicate with your team asynchronously. I Said What?!? Now for the second reason I listed: most asynchronous modes of communication provide a transcript of what was said and what decisions might have been made during the conversation. There have been many occasions where I’ve used the search feature of my team’s chat application to find a conversation that happened several weeks or months ago to remember what was decided. 
Unfortunately, I think the benefits associated with the persistence of communicating asynchronously often get overlooked when people decide to escalate to an in-person meeting or voice/video call. I’m becoming much more reluctant to suggest a voice or video call if I suspect that it might lead to codifying some kind of design decision because everyone involved is going to hang up the call and immediately forget what was decided. I recognize that you can record and archive these types of interactions, but without being able to search them the recordings aren’t terribly useful. When and How To Escalate I don’t mean to imply that communicating via voice/video or in person is never a good idea. I probably jump on a Skype call with a co-worker at least once a day to quickly hash something out or show them a bit of code that I’m working on. Also, meeting in person periodically is really important for remote teams. There’s no way around the fact that sometimes it’s easier to jump on a call and show someone my screen so they can see what I’m seeing. So when is it right to escalate? I think the simplest way to answer that is when the communication starts to feel painful. Everyone’s tolerance for that pain is different, but I think you need to let it hurt a little bit before jumping to synchronous communication. When you do escalate from asynchronous to synchronous communication, there are a couple of things you can do to maximize the effectiveness of the communication: Take notes – This is huge and yet I’ve found that a lot of teams don’t do this. If you’re holding a meeting with > 2 people you should have someone taking notes. Taking notes while participating in a meeting can be difficult but there are a few strategies to deal with this challenge that probably deserve a short post of their own. After the meeting, make sure the notes are posted to a place where all concerned parties (including those that might not have attended the meeting) can review and search them. Persist decisions made ASAP – If any decisions were made during the meeting, persist those decisions to a searchable medium as soon as possible following the conversation. All the teams I’ve worked on used a web-based system for tracking the on-going work and a backlog of work to be done in the future. I always try to make sure that all of the cards/stories/tasks/whatever in these systems reflect the latest decisions that were made as the work was being planned and executed. If you held a quick call with your team lead and decided that it wasn’t worth the effort to build real-time validation into that new UI you were working on, go and codify that decision in the story associated with that work immediately after you hang up. Even better, write it up in the story while you are both still on the phone. That way when the folks from your QA team pick up the story to test a few days later they’ll know why the real-time validation isn’t there without having to invoke yet another conversation about the work. Communicating Well is Hard At this point you might be thinking that communicating asynchronously is more difficult than having a live conversation. You’re right: it is more difficult. In order to communicate effectively this way you need to very carefully think about the message that you’re trying to convey and craft it in a way that’s easy for your audience to understand. This is almost always harder than just talking through a problem in real time with someone; this is why escalating communication is such a popular idea. 
Why wouldn’t we want to do the thing that’s easier? Easier isn’t always better. If you and your team can get in the habit of communicating effectively in an asynchronous manner you’ll find that, over time, all of your communications get less painful because you don’t need to re-iterate previously made points over and over again. If you communicate right the first time, you often don’t need to rehash old conversations because you can go back and find the decisions that were made laid out in plain language. You’ll also find that you get better at doing things like writing useful comments in your code, creating written documentation about how the feature that you just built works, or persuading your team to do things in a certain way.

    Read the article

  • Why does my computer work from a live CD but not with the hard disk connected - and why doesn't it respond to any keys?

    - by Yosef
    Hi. History of the problem: I formatted my computer (an HP Pavilion desktop). When I restart the computer, it gets to the first screen before boot and does not respond to any keys (F2, F10, Esc, etc.). I took out the motherboard battery, put it back after some time and powered the computer on - same result as before. Then I disconnected the hard disk cables, inserted an Ubuntu live CD and powered the computer on - result: it works without the hard disk. What is the root of the problem: a broken hard disk? bad hard disk cables? the BIOS? something else? How can I fix the problem (buy a new hard disk, etc.)? Thanks, Yosef

    Read the article

  • Mimic the behavior of a machine added to a domain

    - by Ian
    Hello, for some reason the IT department at our company does not want to add Windows 7 and Windows Vista machines to the domain. I hate having to provide my network credentials every time I access a shared folder on a machine that is joined to the domain. I also hate having to provide my password whenever I launch Outlook or Visual Studio (Team Explorer). Is there a way to mimic the behavior of a machine that is added to a domain without actually joining the machine to the domain? For shares, I can create a batch file that will NET USE the different file servers we use here, but that is a huge security risk since my password would sit in the script in plain text. Thanks!
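
    One way around the plain-text password problem is to store the domain credentials once in Windows Credential Manager with cmdkey, so that connections to those servers reuse them without a script containing the password. A minimal sketch - the server and account names are placeholders, and whether Outlook/Team Explorer stop prompting depends on how they authenticate:

        REM Store credentials for a file server once (cmdkey prompts for the password when /pass has no value)
        cmdkey /add:fileserver01 /user:MYDOMAIN\myuser /pass

        REM Later connections to that server reuse the stored credentials
        net use \\fileserver01\share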

    Read the article

  • Sharepoint 2007 Event ID 6482

    - by Dave M
    Our two-server SharePoint 2007 SP2 farm has an issue. Event ID 6482 appears in the Application log of the web front end many times a day, often many times a minute. The full error, from Office SharePoint Server, is: Event Type: Error Event Source: Office SharePoint Server Event Category: Office Server Shared Services Event ID: 6482 Date: 11/12/2009 Time: 3:05:22 PM User: N/A Computer: XXXXXX Description: Application Server Administration job failed for service instance Microsoft.Office.Server.Search.Administration.SearchServiceInstance (36a9b7ef-59aa-4f94-8887-8bf7b56f2f91). Reason: Error during encryption or decryption. System error code 0. Technical Support Details: System.ArgumentException: Error during encryption or decryption. System error code 0. at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.SynchronizeDefaultContentSource(IDictionary applications) at Microsoft.Office.Server.Search.Administration.SearchServiceInstance.Synchronize() at Microsoft.Office.Server.Administration.ApplicationServerJob.ProvisionLocalSharedServiceInstances(Boolean isAdministrationServiceJob) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp. The SharePoint site appears to be functioning normally and Search returns expected results. Any suggestions would be appreciated.

    Read the article

  • Are there netcat-like tools for Windows which are not quarantined as malware?

    - by Matthew Murdoch
    I used to use netcat for Windows to help track down network connectivity issues. However, these days my anti-virus software (Symantec - but I understand others display similar behaviour) quarantines netcat.exe as malware. Are there any alternative applications which provide at least the following functionality: connect to an open TCP socket and send data typed on the console to it, and open/listen on a TCP socket and print received data to the console? I don't need the 'advanced' features (which are possibly the reason for the quarantining) such as port scanning or remote execution.
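
    For what it's worth, the connecting half of that can be improvised with a few lines of PowerShell on top of .NET's TcpClient - a rough sketch only, not a full netcat replacement (the host, port and payload below are placeholders):

        # Connect to a TCP port, send one request, print whatever comes back
        $client = New-Object System.Net.Sockets.TcpClient('example.com', 80)
        $stream = $client.GetStream()
        $writer = New-Object System.IO.StreamWriter($stream)
        $writer.WriteLine("GET / HTTP/1.0`r`nHost: example.com`r`n")
        $writer.Flush()
        $reader = New-Object System.IO.StreamReader($stream)
        $reader.ReadToEnd()
        $client.Close()

        # The listening side could be built the same way around System.Net.Sockets.TcpListener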

    Read the article

  • On contract work, obligations to said contract, and looking out for yourself…

    - by jlnorsworthy
    Without boring you all with details, my last two contract assignments were cut short; I was given 3 days' notice on one, and 4 weeks' notice on the other. Neither was due to performance – they both basically came down to budget issues. On my second contract, I got the feeling that it may not be a great place to stay for the duration of my contract. Because of the money/time spent getting me in the door, and the possible negative effect on my employer/recruiter, I decided to stay at least for a few months (and start looking several weeks before the end of my supposedly “extendable” contract). These experiences have left me a little wary of contract work. It seems that if I land a bad contract, my recruiter would take a hit (reputation or otherwise) if I quickly found another job. But on the other hand, the client company won't think twice about ending the contract early for any reason. I know that the counter argument to this is “maybe your recruiter shouldn't have put you into a crappy assignment”… either way, it seems that since I am relying on him to provide me with work, I should try not to damage his reputation with client companies. I'm basically brand new to contracting (these were my first two contracts), so these concerns are new to me. TLDR: Is contract work, by its very nature, largely unstable? Am I worrying too much about my recruiter? Should I be quicker to start looking for a new job even after just weeks at a new company (when the environment seems unstable)? If so, do I look through my recruiter or just find another position by any means necessary?

    Read the article

  • Salary and profit distribution in game industry?

    - by drowneath
    A couple of years ago, I started a group/team of passionate people in game development. I was the one who had the idea to form a group that would (hopefully) later be a company/real studio. I was the one who gathered the people, too. We consist of only a few people (fewer than 10) and everyone has their own specialty in game development. For some reason, everyone agreed to make me the executive director of the group. We are currently focused on creating Flash games and mobile games. So far, we have created a few free game titles and gained profit from some freelancing projects. Since I have no prior experience in running a "company", I decided to split the profit we gained from projects equally, regardless of the member's role in the company, as long as he/she is involved in, and has contributed a decent amount of work to, the development of the project. My questions are: What is the correct way to split profit that is gained from freelance projects that are developed together? Once we've released enough products and are ready to register our company legally, what about salaries? What benefits do I have from being the founder and the director? I'm not a control freak, but I want everything to be clear.

    Read the article

  • How to go to a website on a shared server by its IP address?

    - by user1502776
    I have a few questions, please help. First, I can access Google search just by typing http://74.125.224.211, because this is the IP address returned by nslookup. However, I could not do the same with the IP addresses returned for www.yahoo.com. How do I get to the Yahoo search page by its IP? Another example: http://www.allaboutcircuits.com resolves to 68.233.243.63 via DNS, but if I go to http://68.233.243.63 I get "Hello world!". Second, for some reason there is something wrong with the DNS resolvers of my web hosting service (and it will not be fixed!). So a call like file_get_contents("http://www.allaboutcircuits.com"); returns php_network_getaddresses: getaddrinfo failed: Name or service not known. How do I get around this with the IP address 68.233.243.63 - I mean, somehow attach the HTTP host name parameter to file_get_contents()? I would like to solve this on my own side (in my code); no troubleshooting/adjustment will be done by the server admin.
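
    The usual reason the bare IP serves a different page is name-based virtual hosting: the web server picks the site from the HTTP Host header, so you have to supply that header yourself when connecting by IP. A minimal PHP sketch of that idea, assuming allow_url_fopen is enabled (the IP and host name are the ones from the question, but treat the snippet as illustrative):

        <?php
        // Connect to the IP directly, but tell the server which virtual host we want
        $context = stream_context_create(array(
            'http' => array(
                'header' => "Host: www.allaboutcircuits.com\r\n",
            ),
        ));
        $html = file_get_contents('http://68.233.243.63/', false, $context);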

    Read the article

  • Rackspace copy script failure message "java.net.SocketTimeoutException: Read timed out"

    - by user53864
    I am using Rackspace for an Ubuntu cloud server. Every day a script (I guess the script is from Rackspace) executes on the cloud servers; it copies the backup file to Rackspace Cloud Files and sends a mail saying the files have been copied. I've scheduled the script on the cloud servers. I don't know much about the script; I guess it is based on Cruise (as I can see build.xml, some jar files, ...). Every day the files are copied from the cloud servers to Rackspace, but sometimes - I don't know why - the files are copied yet an error/failure message is still sent, or the files are not copied at all and a failure message like the one below is sent. Error while backing up on Station1 on 03/03/2011 04:50 AM and reason for error is java.net.SocketTimeoutException: Read timed out Is anybody else using Rackspace? Does anybody have a fix for this?

    Read the article

  • How to Avoid Your Next 12-Month Science Project

    - by constant
    While most customers immediately understand how the magic of Oracle's Hybrid Columnar Compression, intelligent storage servers and flash memory make Exadata uniquely powerful against home-grown database systems, some people think that Exalogic is nothing more than a bunch of x86 servers, a storage appliance and an InfiniBand (IB) network, built into a single rack. After all, isn't this exactly what the High Performance Computing (HPC) world has been doing for decades? On the surface, this may be true. And some people tried exactly that: They tried to put together their own version of Exalogic, but then they discover there's a lot more to building a system than buying hardware and assembling it together. IT is not Ikea. Why is that so? Could it be there's more going on behind the scenes than merely putting together a bunch of servers, a storage array and an InfiniBand network into a rack? Let's explore some of the special sauce that makes Exalogic unique and un-copyable, so you can save yourself from your next 6- to 12-month science project that distracts you from doing real work that adds value to your company. Engineering Systems is Hard Work! The backbone of Exalogic is its InfiniBand network: 4 times better bandwidth than even 10 Gigabit Ethernet, and only about a tenth of its latency. What a potential for increased scalability and throughput across the middleware and database layers! But InfiniBand is a beast that needs to be tamed: It is true that Exalogic uses a standard, open-source Open Fabrics Enterprise Distribution (OFED) InfiniBand driver stack. Unfortunately, this software has been developed by the HPC community with fastest speed in mind (which is good) but, despite the name, not many other enterprise-class requirements are included (which is less good). Here are some of the improvements that Oracle's InfiniBand development team had to add to the OFED stack to make it enterprise-ready, simply because typical HPC users didn't have the need to implement them: More than 100 bug fixes in the pieces that were not related to the Message Passing Interface Protocol (MPI), which is the protocol that HPC users use most of the time, but which is less useful in the enterprise. Performance optimizations and tuning across the whole IB stack: From Switches, Host Channel Adapters (HCAs) and drivers to low-level protocols, middleware and applications. Yes, even the standard HPC IB stack could be improved in terms of performance. Ethernet over IB (EoIB): Exalogic uses InfiniBand internally to reach high performance, but it needs to play nicely with datacenters around it. That's why Oracle added Ethernet over InfiniBand technology to it that allows for creating many virtual 10GBE adapters inside Exalogic's nodes that are aggregated and connected to Exalogic's IB gateway switches. While this is an open standard, it's up to the vendor to implement it. In this case, Oracle integrated the EoIB stack with Oracle's own IB to 10GBE gateway switches, and made it fully virtualized from the beginning. This means that Exalogic customers can completely rewire their server infrastructure inside the rack without having to physically pull or plug a single cable - a must-have for every cloud deployment. Anybody who wants to match this level of integration would need to add an InfiniBand switch development team to their project. Or just buy Oracle's gateway switches, which are conveniently shipped with a whole server infrastructure attached! 
IPv6 support for InfiniBand's Sockets Direct Protocol (SDP), Reliable Datagram Sockets (RDS), TCP/IP over IB (IPoIB) and EoIB protocols. Because no IPv6 = not very enterprise-class. HA capability for SDP. High Availability is not a big requirement for HPC, but for enterprise-class application servers it is. Every node in Exalogic's InfiniBand network is connected twice for redundancy. If any cable or port or HCA fails, there's always a replacement link ready to take over. This requires extra magic at the protocol level to work. So in addition to Weblogic's failover capabilities, Oracle implemented IB automatic path migration at the SDP level to avoid unnecessary failover operations at the middleware level. Security, for example spoof-protection. Another feature that is less important for traditional users of InfiniBand, but very important for enterprise customers. InfiniBand Partitioning and Quality-of-Service (QoS): One of the first questions we get from customers about Exalogic is: “How can we implement multi-tenancy?” The answer is to partition your IB network, which effectively creates many networks that work independently and that are protected at the lowest networking layer possible. In addition to that, QoS allows administrators to prioritize traffic flow in multi-tenancy environments so they can keep their service levels where it matters most. Resilient IB Fabric Management: InfiniBand is a self-managing network, so a lot of the magic lies in coming up with the right topology and in teaching the subnet manager how to properly discover and manage the network. Oracle's Infiniband switches come with pre-integrated, highly available fabric management with seamless integration into Oracle Enterprise Manager Ops Center. In short: Oracle elevated the OFED InfiniBand stack into an enterprise-class networking infrastructure. Many years and multiple teams of manpower went into the above improvements - this is something you can only get from Oracle, because no other InfiniBand vendor can give you these features across the whole stack! Exabus: Because it's not About the Size of Your Network, it's How You Use it! So let's assume that you somehow were able to get your hands on an enterprise-class IB driver stack. Or maybe you don't care and are just happy with the standard OFED one? Anyway, the next step is to actually leverage that InfiniBand performance. Here are the choices: Use traditional TCP/IP on top of the InfiniBand stack, Develop your own integration between your middleware and the lower-level (but faster) InfiniBand protocols. While more bandwidth is always a good thing, it's actually the low latency that enables superior performance for your applications when running on any networking infrastructure: The lower the latency, the faster the response travels through the network and the more transactions you can close per second. The reason why InfiniBand is such a low latency technology is that it gets rid of most if not all of your traditional networking protocol stack: Data is literally beamed from one region of RAM in one server into another region of RAM in another server with no kernel/drivers/UDP/TCP or other networking stack overhead involved! Which makes option 1 a no-go: Adding TCP/IP on top of InfiniBand is like adding training wheels to your racing bike. It may be ok in the beginning and for development, but it's not quite the performance IB was meant to deliver. Which only leaves option 2: Integrating your middleware with fast, low-level InfiniBand protocols. 
And this is what Exalogic's "Exabus" technology is all about. Here are a few Exabus features that help applications leverage the performance of InfiniBand in Exalogic: RDMA and SDP integration at the JDBC driver level (SDP), for Oracle Weblogic (SDP), Oracle Coherence (RDMA), Oracle Tuxedo (RDMA) and the new Oracle Traffic Director (RDMA) on Exalogic. Using these protocols, middleware can communicate a lot faster with each other and the Oracle database than by using standard networking protocols, Seamless Integration of Ethernet over InfiniBand from Exalogic's Gateway switches into the OS, Oracle Weblogic optimizations for handling massive amounts of parallel transactions. Because if you have an 8-lane Autobahn, you also need to improve your ramps so you can feed it with many cars in parallel. Integration of Weblogic with Oracle Exadata for faster performance, optimized session management and failover. As you see, “Exabus” is Oracle's word for describing all the InfiniBand enhancements Oracle put into Exalogic: OFED stack enhancements, protocols for faster IB access, and InfiniBand support and optimizations at the virtualization and middleware level. All working together to deliver the full potential of InfiniBand performance. Who else has 100% control over their middleware so they can develop their own low-level protocol integration with InfiniBand? Even if you take an open source approach, you're looking at years of development work to create, test and support a whole new networking technology in your middleware! The Extras: Less Hassle, More Productivity, Faster Time to Market And then there are the other advantages of Engineered Systems that are true for Exalogic the same as they are for every other Engineered System: One simple purchasing process: No headaches due to endless RFPs and no “Will X work with Y?” uncertainties. Everything has been engineered together: All kinds of bugs and problems have been already fixed at the design level that would have only manifested themselves after you have built the system from scratch. Everything is built, tested and integrated at the factory level . Less integration pain for you, faster time to market. Every Exalogic machine world-wide is identical to Oracle's own machines in the lab: Instant replication of any problems you may encounter, faster time to resolution. Simplified patching, management and operations. One throat to choke: Imagine finger-pointing hell for systems that have been put together using several different vendors. Oracle's Engineered Systems have a single phone number that customers can call to get their problems solved. For more business-centric values, read The Business Value of Engineered Systems. Conclusion: Buy Exalogic, or get ready for a 6-12 Month Science Project And here's the reason why it's not easy to "build your own Exalogic": There's a lot of work required to make such a system fly. In fact, anybody who is starting to "just put together a bunch of servers and an InfiniBand network" is really looking at a 6-12 month science project. And the outcome is likely to not be very enterprise-class. And it won't have Exalogic's performance either. Because building an Engineered System is literally rocket science: It takes a lot of time, effort, resources and many iterations of design/test/analyze/fix to build such a system. That's why InfiniBand has been reserved for HPC scientists for such a long time. 
And only Oracle can bring the power of InfiniBand in an enterprise-class, ready-to-use, pre-integrated version to customers, without the develop/integrate/support pain. For more details, check the new Exalogic overview white paper which was updated only recently. P.S.: Thanks to my colleagues Ola, Paul, Don and Andy for helping me put together this article!

    Read the article

  • Can't find instructions how to use Windows 7 drivers on Windows Server 2008 R2

    - by Robert Koritnik
    Windows 7 x64 comes with all sorts of signed drivers, so there's a high probability that all the drivers for your machine will be installed during system setup. On the other hand, Windows Server 2008 R2 doesn't, even though it's practically the same OS when it comes to drivers. I know there's a very good reason for this difference: it's a server product, not a desktop one. But the thing is that many power users and developers use a server OS on their workstations, which are usually desktop machines (a bit more powerful, though), and would benefit from the whole driver spectrum that Windows 7 offers... Question: I know I've read on the internet about some trick where you first install Windows 7, then do something to get either all the Windows 7 drivers or just the installed ones, and then install Windows Server 2008 R2 and use those Windows 7 drivers. The thing is, I can't find these instructions on the internet any more. If anybody knows where they are, please provide the link for the rest of us.

    Read the article

  • Can't find instructions how to use Windows 7 drivers on Windows Server 2008 R2

    - by Robert Koritnik
    Maybe I should post this to http://www.serverfault.com. Windows 7 comes with all sorts of signed drivers, so there's a high probability that all the drivers for your machine will be installed during system setup. On the other hand, Windows Server 2008 doesn't, even though it's practically the same OS when it comes to drivers. But I know there is a very good reason for this: it's a server product, not a desktop one. The thing is that many power users and developers use a server OS on their workstations, which are normally desktop machines, and would need the Windows 7 driver spectrum... Question: I know I've read about some trick on the internet where you first install Windows 7 on the machine, then do something to get either the whole Windows 7 driver collection or just the installed ones, and then install Windows Server 2008 and use those drivers. The thing is: I can't seem to find these instructions on the internet any more. If anybody knows where these are, please provide the link for the rest of us.

    Read the article

  • How to change the cell formatting of Excel pivot table "Report Filter" values

    - by Damiqib
    My Excel is in Finnish, but don't let that bother you... The first Report Filter, "Kupi", has only number values in my source table, for example 643203, 3533, 253244, etc. However, in the pivot's Report Filter all those values are converted to date values in MONTH yyyy format. How do I reformat the filter values to respect the original cell formatting? The same problem occurs with actual date values in my source table when they are used as a Report Filter in the pivot table. In my source data the dates are in the format dd.mm.yyyy, yet for some reason the pivot's Report Filter shows all dates in MONTH yyyy format. Why is that, and what do I need to do to fix it?

    Read the article

  • Beginner's security question

    - by Reg H
    Hi everyone, I'm still pretty new to web development and have a question about security. Every day I look at the "Latest Visitors" in my cPanel, and today there were some strange entries (one is pasted below). Not knowing any better, it looks to me like there is some site that's referring users to my site for some reason. Can someone explain what these really are, and whether it's something to be concerned about? Thanks!

        Host: 77.68.38.175
        /?p=http://teen-37.net/myid.jpg?
        Http Code: 404  Date: Feb 17 08:13:58  Http Version: HTTP/1.1  Size in Bytes: -  Referer: -  Agent: libwww-perl/5.805

        /?p=../../../../../../../../../../../../../../../proc/self/environ%00
        Http Code: 404  Date: Feb 17 08:13:59  Http Version: HTTP/1.1  Size in Bytes: -  Referer: -  Agent: libwww-perl/5.805

    Read the article

  • Correct way to treat iptables init failure?

    - by chris_l
    Hi, I'm initializing my iptables rules via /etc/network/if-pre-up.d/iptables, using iptables-restore. This works fine, but I'm a bit worried about what would happen if that script failed for some reason (maybe the saved iptables file is corrupt, or whatever). In case the script fails, I'd like to: start up my network interfaces without any iptables rules; start up the OpenSSH server; but not start any other services like the web server, ... (and maybe stop running instances). Is there a good, canonical way to do that? Dropping to a lower runlevel? I haven't done that in a long time, and I think a lot about init has changed in recent years (?) - which runlevel should I drop to, and would the OpenSSH server and my network interfaces still run? Thanks, Chris (on Debian Lenny)
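
    For reference, here is a minimal sketch of how the if-pre-up.d script itself could fall back to an SSH-only ruleset when the restore fails - an illustration only (the file names, the flag file and the fallback policy are assumptions, not a canonical Debian answer):

        #!/bin/sh
        # /etc/network/if-pre-up.d/iptables (sketch)

        if ! iptables-restore < /etc/iptables.rules; then
            # Restore failed: apply a minimal ruleset that only allows SSH,
            # and leave a marker so other init scripts could decide not to start.
            iptables -F
            iptables -P INPUT DROP
            iptables -A INPUT -i lo -j ACCEPT
            iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
            iptables -A INPUT -p tcp --dport 22 -j ACCEPT
            logger -t iptables "iptables-restore failed, applied minimal SSH-only ruleset"
            touch /var/run/iptables.failed
        fi

        exit 0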

    Read the article

  • New Windows 7 Libraries created keep disappearing

    - by Sean
    I've just got a new laptop that came pre-installed with Windows 7 Professional edition. One of the new features of Windows 7 is Libraries. I'm familiar with how this works and am trying to create my own library called 'Work' to include all the work folders on my computer. However, every time I create a new custom library, it disappears from my Libraries menu after I rename it. Each time I click Libraries in Explorer, I keep seeing the same 4 default libraries, i.e. Documents, Pictures, Music, and Videos. So when I try to create a new library called 'Work' again, I get a pop-up message "Do you want to rename New library to Work (2).Library-Microsoft?", which means that my original Work library still exists but for some reason I can't see it. Can someone please help me figure out why this is happening?

    Read the article

  • Entity Framework & Transactions

    - by Sudheer Kumar
    There are many instances where we might have to use transactions to maintain data consistency. With Entity Framework, it is a little different conceptually.

    Case 1 – Transaction between multiple SaveChanges() calls: if you just use a transaction scope here, Entity Framework (EF) will use distributed transactions instead of local transactions. The reason is that EF opens and closes the connection only when required, which means it may use two different connections for different SaveChanges() calls. To resolve this, use the following method, where we open the connection explicitly so the work does not span multiple connections.

        try
        {
            using (TransactionScope ts = new TransactionScope())
            {
                context.Connection.Open();
                // Operation 1
                context.SaveChanges();
                // Operation 2
                context.SaveChanges();
                ts.Complete();
            }
        }
        catch (Exception ex)
        {
            // Handle exception
        }
        finally
        {
            // At the end, close the connection
            if (context.Connection.State == ConnectionState.Open)
            {
                context.Connection.Close();
            }
        }

    Case 2 – Transaction between DB and non-DB operations: for example, assume that you have a table that keeps track of emails to be sent. Here you want to update certain details, like DateSent, only if the mail was successfully sent by the e-mail client.

        Email eml = GetEmailToSend();
        eml.DateSent = DateTime.Now;

        using (TransactionScope ts = new TransactionScope())
        {
            // Update the DB
            context.SaveChanges();

            // If the update succeeded, send the email using the SMTP client
            smtpClient.Send();

            // If the send succeeded, commit
            ts.Complete();
        }

    Here, since you are dealing with a single context.SaveChanges(), you just need to wrap the TransactionScope around saving the context. Hope this was helpful!

    Read the article

  • Windows lowers volume because of "communication"

    - by SnippetSpace
    Randomly, or when a system sound plays (USB connect, for example), Windows 8 lowers the volume of all other sources. (The same problem can exist on Windows 7; just look online.) This happens because of the "communication activity" detection that is meant to detect phone calls and then lower the volume. But in my case it happens all the time, without reason, as if Windows always considers me to be in a call. Do any of you have the same problem? Is this a driver issue or a Windows issue? I know this has been posted many times, but the answers usually just tell you to turn off the communications setting mentioned above. I'm looking for an explanation or a solution. :) Thanks for your feedback and ideas.

    Read the article

  • Visual Studio ALM MVP of the Year 2011

    - by Martin Hinshelwood
    For some reason this year some of my peers decided to vote for me as a contender for Visual Studio ALM MVP of the year. I am not sure what I did to deserve this, but a number of people have commented that I have a rather useful blog. I feel wholly unworthy to join the ranks of previous winners: Ed Blankenship (2010) and Martin Woodward (2009). Thank you to everyone who voted, regardless of who you voted for. If there were a prize for the best group of MVPs, then the Visual Studio ALM MVPs would be a clear winner, as would the product group of product groups that is the Visual Studio ALM group. To use a phrase that I have learned since moving to Seattle and probably use too much: you guys are all just awesome. I have tried my best in the last year to document not only every problem that I have had with Team Foundation Server (TFS), but also as many of the things I am doing as possible. I have taken some of Adam Cogan's rules to heart, and when a customer asks me a question I always blog the answer and send them a link. This allows both my blog and my understanding of TFS to grow while creating a useful bank of content. The idea is that if one customer asks, all benefit. When writing for my blog, I try to capture both the essence of and the context for the problem being solved. This allows more people to benefit, as they do not need to understand the specifics of an environment to gain value. I have a number of goals for this year that I think will help increase value in the community: persuade my new colleagues at Northwest Cadence (Steve, Jeff, Shad and Rennie) to do more blogging; the Rangers Project – TFS Iteration Automation – with Willy-Peter Schaub, Bill Essary, Martin Hinshelwood, Mike Fourie, Jeff Bramwell and Brian Blackman; write a book on the Team Foundation Server API with Willy-Peter Schaub, Mike Fourie and Jeff Bramwell; and write more useful blog posts. I do not think that these things are beyond the realms of do-ability, but we will see…

    Read the article

  • Are there any font rendering libraries for games development that support hinting?

    - by Richard Fabian
    I've used AngelCode's bitmap font generator quite a bit, and though it's very good, I wondered if there would be a way to use the hinting information to produce a more readable result, with differing stroke thickness based on size/pixel coverage. I imagine any solution would have to use the distance-field technique presented in the Valve paper on smoothing fonts while maintaining or reducing asset size (http://www.gamedev.net/community/forums/topic.asp?topic_id=494612), but I haven't found any demos of it being used with hinting information turned on or included in the field gradients in any way. Another way of looking at this is whether there are any font bitmap generators that will output mipmaps that still maintain their readability as pixel size shrinks. I think the lower mip levels would try to guarantee fill and space where necessary to maintain readability/topology over maintaining style/form (the point of hinting). In response to "Is there a reason you can't just render the size you want": the problem is that font rasterisers currently don't render in 3D, and hinting information would matter by different amounts because the pixel density differs along different axes, and even varies in importance along the length of a string as the size reduces with distance. For example, I only want horizontal hinting in a texture that is viewed from the side, and only really want vertical hinting in a font that is viewed from below or above. This isn't meant to be a renderer that tries to reproduce a perfect outline as accurately as possible, since hinting distorts the true shape of the font; instead it is meant to be a rendering solution for fairly static scenes, but scenes that have 3D-transformed and warped text layout. In this case legibility is important - more important than the accuracy with which the glyph outlines are represented.

    Read the article
