Search Results

Search found 17852 results on 715 pages for 'load balancer'.

Page 404/715

  • How do I maximize a window in one monitor and do other things on the other monitor

    - by RoboShop
    Hi, I have Windows 7 installed on my laptop, a Dell Studio XPS 16, and I recently set up a second monitor for it. I've noticed that when I run fullscreen applications (I don't know the exact term) like a game or Media Center on one monitor, I can't seem to do anything with the other monitor. With the game, I can see the other monitor, and I can even play a movie on it, but if I click the mouse button on it, it stops the game and alt-tabs away. In Media Center, I can't even move my mouse across to the other monitor. So my question is: in Windows 7, is it possible to run programs like these, which are designed to run in one maximized window, and still be able to do things on the other monitor?

    Read the article

  • What are the best customizable monitoring tools for a cluster / distributed system?

    - by Adil
    I am working on a system with multiple servers. I am interested in monitoring some server-specific data like CPU/memory usage, disk/filesystem usage, network traffic, system load, etc., as well as some data specific to my own processes. What open source tools are available that can serve this purpose? Ideally the tool would let me customize the parameters to be monitored, and let me monitor my own data by writing a plugin/agent. Any suggestions? I have heard of Nagios, Zabbix and Pandora, but I am not sure whether they provide such an interface.
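
    All three tools mentioned above support custom checks; in Nagios, for example, a plugin is just an executable that prints a one-line status message and signals its status through its exit code (0 = OK, 1 = WARNING, 2 = CRITICAL). A minimal sketch in Python - the load thresholds here are made-up illustrations, not recommendations:

        #!/usr/bin/env python3
        # Minimal Nagios-style check plugin: stdout carries the message,
        # the exit code carries the status (0 = OK, 1 = WARNING, 2 = CRITICAL).
        import os
        import sys

        WARN_LOAD, CRIT_LOAD = 4.0, 8.0  # hypothetical 1-minute load thresholds

        def main():
            load1, _, _ = os.getloadavg()  # 1-, 5- and 15-minute load averages
            if load1 >= CRIT_LOAD:
                print(f"CRITICAL - load average {load1:.2f}")
                sys.exit(2)
            if load1 >= WARN_LOAD:
                print(f"WARNING - load average {load1:.2f}")
                sys.exit(1)
            print(f"OK - load average {load1:.2f}")
            sys.exit(0)

        if __name__ == "__main__":
            main()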

    Read the article

  • Servers at remote sites vs. centralized servers?

    - by Boden
    Looking for some opinions here. We've got three physical locations with site-to-site VPN between all three. Currently we've got Windows domain controllers at each location, with roughly 50 clients at each. The domains are currently separate, and we're looking at integrating the three sites. Email (Exchange) will be located at the primary site, and RDP is already being used at the secondary branches to reach the app servers, also located at the primary site. The bulk of the local user load at the other two sites is just file sharing. What would the main benefits and drawbacks be of replacing the local domain controllers with NAS devices and only keeping the domain controller(s) at the primary site? (Assume upgrades are coming regardless.) Under what circumstances would you choose one setup over the other?

    Read the article

  • VirtualBox upgrade

    - by Husni
    I upgraded VirtualBox from 4.1 to 4.2. Whenever I want to load my Windows XP VDI, it gives me the following error:

        Kernel driver not installed (rc=-1908)
        The VirtualBox Linux kernel driver (vboxdrv) is either not loaded or there is
        a permission problem with /dev/vboxdrv. Please reinstall the kernel module by
        executing '/etc/init.d/vboxdrv setup' as root. If it is available in your
        distribution, you should install the DKMS package first. This package keeps
        track of Linux kernel changes and recompiles the vboxdrv kernel module if
        necessary.

    I ran the suggested step to reinstall the kernel module, and the log file shows the same error three times:

        Makefile:181: *** Error: unable to find the sources of your current Linux
        kernel. Specify KERN_DIR=<directory> and run Make again.  Stop.

    I am still unable to run my Windows XP VDI. Does anyone have a clue?

    Read the article

  • How to calculate square root in PHP [explained] [on hold]

    - by Enes Imsirovic
    First, the code! Don't forget to embed jQuery!

        <html>
        <head>
        <title>Simple jQuery and PHP Square Root example</title>
        <script src="js/jquery-1.10.1.js" type="text/javascript"></script>
        <script type="text/javascript">
        $(document).ready(function() {
          $('#form').submit(function() {
            var number = $('#number').val();
            $.ajax({
              type: "post",
              url: "calculate.php",
              data: "number=" + number,
              success: function(msg) {
                $('#result').hide();
                $("#result").html("<h3>" + msg + "</h3>").fadeIn("slow");
              }
            });
            return false;
          });
        });
        </script>
        </head>
        <body>
        <form id="form" action="calculate.php" method="post">
          Enter number: <input id="number" type="text" name="number" />
          <input id="submit" type="submit" value="Calculate Square Root" name="submit"/>
        </form>
        <p id="result"></p>
        </body>
        </html>

    The second file, calculate.php, which the form above posts to:

        <?php
        if ($_POST['number'] == null) {
            echo "Please Enter a Number";
        } else {
            if (!is_numeric($_POST['number'])) {
                echo "Please enter only numbers";
            } else {
                echo "Square Root of " . $_POST['number'] . " is " . sqrt($_POST['number']);
            }
        }
        ?>

    Chiefly for beginners, to see the power of PHP :) xD Load this on your localhost. PHP and JS files: https://mega.co.nz/#!Et8zWSBb!KX2PFxa2Pzw_l-wi6QU8xi_eKTlHbtQuBsT_DvXrifk In the end it looks like this: http://imgur.com/vNnDRQ3

    Read the article

  • httpd service keeps restarting after 15-20 mins

    - by niraj
    I have recently purchased a dedicated server with 16 GB RAM and a 1 TB hard disk. It has cPanel, and CSF is installed as the firewall. I am mainly going to use it for a file hosting service. Since the day I moved in, the httpd service keeps restarting every 15-20 minutes. It becomes unresponsive after that, so I have to restart it manually. My httpd settings are:

        Start Servers = 5
        Minimum Spare Servers = 5
        Maximum Spare Servers = 10
        Server Limit = 20000
        Max Clients = 10000
        Max Requests Per Child = 10000
        Keep-Alive = On
        Keep-Alive Timeout = 5
        Max Keep-Alive Requests = Unlimited
        Timeout = 300

    The output of top is:

        top - 14:53:41 up 1 day, 23:39, 2 users, load average: 0.10, 0.14, 0.09
        Tasks: 1563 total, 1 running, 1562 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.7%us, 0.6%sy, 0.0%ni, 98.1%id, 0.2%wa, 0.0%hi, 0.5%si, 0.0%st
        Mem: 16303780k total, 16142048k used, 161732k free, 135264k buffers
        Swap: 8224760k total, 868k used, 8223892k free, 14136616k cached

    Please help me with this; it keeps happening.
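
    For scale when reviewing settings like the above, a common back-of-the-envelope check for a prefork MPM is that Max Clients should not exceed the RAM you can spare for Apache divided by the average size of an httpd child. A sketch with purely hypothetical numbers (measure your own child size with top or ps):

        #!/usr/bin/env python3
        # Rough sanity check for Apache prefork MaxClients.
        # Both figures below are hypothetical examples, not measurements.
        ram_for_apache_kb = 12 * 1024 * 1024  # RAM set aside for Apache, in kB
        avg_child_rss_kb = 30 * 1024          # average httpd child RSS, in kB
        print("rough MaxClients ceiling:", ram_for_apache_kb // avg_child_rss_kb)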

    Read the article

  • Oracle R Distribution 2-13.2 Update Available

    - by Sherry LaMonica
    Oracle has released an update to the Oracle R Distribution, an Oracle-supported distribution of open source R. Oracle R Distribution 2-13.2 now contains the ability to dynamically link the following libraries on both Windows and Linux:

      - The Intel Math Kernel Library (MKL) on Intel chips
      - The AMD Core Math Library (ACML) on AMD chips

    To take advantage of the performance enhancements provided by Intel MKL or AMD ACML in Oracle R Distribution, simply add the MKL or ACML shared library directory to the LD_LIBRARY_PATH system environment variable. This automatically enables MKL or ACML to make use of all available processors, vastly speeding up linear algebra computations and eliminating the need to recompile R. Even on a single core, the optimized algorithms in the Intel MKL libraries are faster than R's standard BLAS library. Open source R links to NetLib's BLAS libraries, which are not multi-threaded and only use one core. While R's internal BLAS is efficient for most computations, it is possible to recompile R to link to a different, multi-threaded BLAS library to improve performance on eligible calculations. Compiling and linking R yourself can be involved, but for many, the significantly improved calculation speed justifies the effort. Oracle R Distribution notably simplifies the process of using external math libraries by enabling R to auto-load MKL or ACML. For R commands that don't link to BLAS code, taking advantage of database parallelism using embedded R execution in Oracle R Enterprise is the route to improved performance. For more information about rebuilding R with different BLAS libraries, see the linear algebra section in the R Installation and Administration manual. As always, the Oracle R Distribution is available as a free download to anyone. Questions and comments are welcome on the Oracle R Forum.

    Read the article

  • wrong kernel running after install

    - by ticktockhouse
    I have installed Ubuntu 14.04 from UNetbootin. When it reboots after the install, uname -r says 3.5.0-17-generic. This means that no modules have been loaded for the kernel that is actually installed (3.13.0-32-generic). Does anyone know why this kernel would be installed by the install process? Is it an artifact of using UNetbootin? Booting into the UNetbootin image gives the correct kernel, and thus the modules load. Knowing why is one thing, but I'm not sure how to remedy it now. Because no modules are loaded, I can't connect to the network or connect a USB drive. I've tried update-grub, which seems to find the correct kernel but doesn't seem to tell the system to boot from it. I've also tried selecting the kernel at boot time using the "Advanced Options for Ubuntu" menu, and the 3.13.x kernel is the only one listed. Selecting it led to the 3.5.x kernel stubbornly loading. I'm a fairly accomplished sysadmin, but this one has me flummoxed :) Can anyone help?

    Read the article

  • What's the static.ak.fbcdn.net that appears in the status bar of my browser every time Facebook is loading?

    - by Maverick
    I see the message "waiting for static.ak.fbcdn.net..." in the status bar of my browser every time I load Facebook, and often while loading other websites as well. I searched the net and found out that static.ak.fbcdn.net stands for "static Akamai Facebook content delivery network". I reckon that static.ak.fbcdn.net is the server hostname from which Facebook delivers static content to our browsers. Am I right? Can anyone elaborate? Also, why does the above message appear while loading other websites too?
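
    A quick way to check what such a hostname points to is a DNS lookup. A minimal sketch using Python's standard library - the hostname is the one from the question, and since Facebook has long since moved off it, it may no longer resolve today:

        #!/usr/bin/env python3
        # Resolve a hostname and print the addresses it maps to.
        import socket

        host = "static.ak.fbcdn.net"  # the CDN hostname from the question
        try:
            for family, _, _, _, sockaddr in socket.getaddrinfo(
                    host, 443, proto=socket.IPPROTO_TCP):
                print(family.name, sockaddr[0])
        except socket.gaierror as err:
            print(f"lookup failed: {err}")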

    Read the article

  • Routing application traffic through specific interface

    - by UnicornsAndRainbows
    Hello all! First question here, so please go easy. I have a Debian Linux 5.0 server with two public interfaces. I would like to route outbound traffic from one instance of an application via one interface, and from the second instance via the second interface. There are some challenges:

      - both instances of the application use the same protocol
      - both instances of the application can access the entire internet (so I can't route based on destination network)
      - I can't change the code of the application

    I don't think a typical approach to load balancing all traffic is going to work well, because relatively few destination servers are being accessed by the outbound traffic, and all traffic would really need to be distributed pretty evenly across those few servers. I could probably run two virtualized servers on the box and bind each of them to a different external IP, but I'm looking for a simpler solution, maybe using iproute or iptables? Any ideas for me? Thanks in advance, and I'm happy to answer any questions.
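
    For background on what such a setup has to achieve: the decision hinges on the source address each connection uses. A minimal sketch of that mechanism - binding a client socket to a chosen local source IP before connecting. Both addresses are hypothetical placeholders, and since this requires changing application code (ruled out above), it only illustrates what external policy routing has to accomplish:

        #!/usr/bin/env python3
        # Bind an outbound TCP socket to a specific local source address.
        # On its own this fixes the source IP of the connection; a source-based
        # routing rule (e.g. iproute2 "ip rule add from <ip> table <t>") is what
        # then steers such traffic out via the matching interface.
        # Both IP addresses below are hypothetical placeholders.
        import socket

        SOURCE_IP = "203.0.113.10"   # address of the interface to send from
        DEST = ("198.51.100.7", 80)  # some destination server

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.bind((SOURCE_IP, 0))    # port 0: let the kernel pick a source port
        sock.connect(DEST)
        print("connected from", sock.getsockname(), "to", sock.getpeername())
        sock.close()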

    Read the article

  • Compiling custom kernel 3.7.x lowlatency on Ubuntu 12.04

    - by FlabbergastedPickle
    All, I have a peculiar problem trying to compile a lowlatency flavor of the latest 3.7 kernel. I retrieved the pre-patched source from Launchpad using bzr, compiled it using the usual make-kpkg with the current config file plus default options for the rest, installed the kernel, and booted into it. Everything works except for the fglrx and wl drivers that I had to install in the original 12.04 lowlatency kernel. So I tried recompiling these and succeeded with both of them (no errors were reported): the wl driver required a minor adjustment to a system.h include, while the latest fglrx 12.11 beta11 (released yesterday, Dec. 3rd, 2012) compiled without a hitch. Yet when I try to modprobe either module (both having in common the fact that they were built after the kernel, fglrx as a deb, and wl via the usual make/make install), I get "FATAL: no MODULENAME module found" (MODULENAME being either wl or fglrx). The graphics driver watermark shows 3D crossed out and "for testing purposes" (or "unsupported hardware", can't remember), and no fglrx or wl is loaded. More mysteriously, dmesg shows no attempt on the kernel's behalf to load the said drivers, even though they are clearly in the right /lib/modules/KERNEL_VERSION folder. How is this possible? Has something fundamentally changed in the 3.7 kernel that would prevent modprobing these? I know that there is a driver-signing option that was merged recently, but as far as I could tell, the kernel config file generated by the build process had that disabled. OTOH, while building the wl driver, I did get a warning that the driver was not signed... Then again, even if the kernel disallowed loading of those modules, shouldn't dmesg reflect that? Any thoughts on this one are most appreciated.

    Read the article

  • Mobile Web Framework that will only control rendering and page transitions

    - by rlemon
    I have been using jQuery Mobile for a bit now, and there are some things I like about it and others I do not. First, a bit of background: I have a lightweight mobile application with a few configurations and 6 pages. Ideally I would like to load all pages into the DOM (they interact with each other quite often, and pages will be switched just as frequently). The application will post for some JSON every n seconds and refresh the values on the page (yes, it is primarily an information-display app). The only thing I really like about the jQuery Mobile framework is how easy it is to get a standardized UI across all devices and browsers; I'm really not using much else out of the framework other than the basic page navigation (if you are familiar with the framework, a bare-bones multi-page design is all I need). Why I want to step away from jQuery Mobile is how heavy it is: you need to include not only the mobile library but also the base jQuery library, and I'm not using jQuery anywhere else on the site. Any suggestions on lightweight mobile frameworks that render similarly to jQuery Mobile?

    Read the article

  • ARM Debian (squeeze) USB driver with mismatched 3.3.3 kernel but /lib/modules/2.6.36

    - by frank
    Hey guys, my SheevaPlug embedded server works fine, but when I wanted to use USB, the device does not get attached to /dev/ttyUSB0. lsusb shows it correctly:

        Bus 001 Device 002: ID 067b:2303 Prolific Technology, Inc. PL2303 Serial Port

    and modprobe usbserial raises:

        FATAL: Could not load /lib/modules/3.3.3/modules.dep: No such file or directory

    In the /lib/modules/ folder there is instead a 2.6.36 folder, while uname -r gives 3.3.3. How can I overcome this mismatch? Can I create a symlink? I can't reflash this embedded device since it is deployed somewhere; I only have SSH access. Please advise!

    Read the article

  • Apache connection intermittently times out on LAN

    - by jonescb
    I'm using Fedora 12 with Apache 2.2.14, and I was having this error on 2.2.13 as well. Even when I connect to my server over the LAN, Firefox will occasionally time out while connecting. I can't figure out what is causing this. The error log isn't showing anything. I even emptied the error log file, so that if something happened it would be a little easier to spot, but I'm still getting timeouts and nothing in the error log. Can anybody help me find the problem? Here is my httpd.conf (Pastebin link). It's the default Fedora configuration; I've only changed the ServerName, if I remember correctly. I'm pretty sure it's not the Timeout setting, because on a LAN it should never time out. I don't believe it's a load issue either; I'm the only one connecting to it. I'm not an Apache expert, so if more information is needed, I'll need some instructions on how to get that data.

    Read the article

  • There is a porn domain pointing to my site

    - by Nicolas Martel
    Let's say example.com is my real site, and fooexample.com is the porn site. fooexample.com points to my IP. Now you might think: just ignore it, right? Well, the thing is that it drives loads of traffic. Not only that, but my main domain example.com becomes unavailable after a couple of minutes, and the only domain that works is either fooexample.com or neither of the two. What I have done so far is use mod_rewrite to redirect the porn site to Google, but my domain still becomes unavailable. Blocking the IPs produced no results either. I hope someone will be able to help me, because this is a huge problem right now. Thanks.
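
    For reference, the mod_rewrite approach mentioned above is usually written along these lines - a sketch assuming Apache with mod_rewrite enabled, with example.com standing in for the real domain as in the question. Rather than redirecting the unwanted hostname elsewhere, it refuses any request whose Host header is not the real site:

        RewriteEngine On
        # Refuse (403) requests for any Host other than the real site
        RewriteCond %{HTTP_HOST} !^(www\.)?example\.com$ [NC]
        RewriteRule ^ - [F]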

    Read the article

  • Mixing JavaFX, HTML 5, and Bananas with the NetBeans Platform

    - by Geertjan
    The banana in the image below can be dragged. Whenever the banana is dropped, the current date is added to the viewer. What's interesting is that the banana, and the viewer that contains it, is defined in HTML 5, with the help of a JavaScript file and a CSS file. The HTML 5 file is embedded within the JavaFX browser, while the JavaFX browser is embedded within a NetBeans TopComponent class. The only really interesting thing is how drop events of the banana, which is defined within JavaScript, are communicated back to the Java class. Here's how: in the Java class, parse the HTML DOM tree to locate the node of interest and then set a listener on it. (In this particular case, the event listener adds the current date to the InstanceContent which is in the Lookup.) Here's the crucial bit of code:

        WebView view = new WebView();
        view.setMinSize(widthDouble, heightDouble);
        view.setPrefSize(widthDouble, heightDouble);
        final WebEngine webengine = view.getEngine();
        URL url = getClass().getResource("home.html");
        webengine.load(url.toExternalForm());
        webengine.getLoadWorker().stateProperty().addListener(
                new ChangeListener<State>() {
            @Override
            public void changed(ObservableValue<? extends State> ov,
                    State oldState, State newState) {
                if (newState == State.SUCCEEDED) {
                    // Once the page has fully loaded, grab its DOM...
                    Document document = (Document) webengine.executeScript("document");
                    // ...locate the banana node and hook a listener onto it:
                    EventTarget banana = (EventTarget) document.getElementById("banana");
                    banana.addEventListener("click", new MyEventListener(), true);
                }
            }
        });

    It seems very weird to me that I need to specify "click" as a string. I actually wanted the drop event, but couldn't figure out what the arbitrary string for that was. Which is exactly why strings suck in this context. Many thanks to Martin Kavuma from the Technical University of Eindhoven, whom I met today and who inspired me to go down this interesting trail.

    Read the article

  • A few tips on deploying Secure Enterprise Search with PeopleSoft

    - by Matthew Haavisto
    Oracle's Secure Enterprise Search is part of PeopleSoft now. It is provided as part of the PeopleTools platform as an appliance, and is used with applications starting with release 9.2. Secure Enterprise Search is a rich and powerful search product that can enhance search and navigation in PeopleSoft applications. It also provides useful features like facets and filtering that are common in consumer search engines.

    Several questions have arisen about the deployment of SES: how to administer it, how to ensure optimum performance, and which versions are supported on which platforms. To address the most common of these questions, we are posting this list of tips.

    Platform support

    SES 11.1.2.2 does not support some of the platforms supported by PeopleTools, such as Windows 2012 and AIX 7.1. However, PeopleSoft and SES can use different operating system platforms when SES is deployed on a separate machine. SES 11.2.2.2 will have the required platform support for PT 8.53 in the future. We are planning to certify PT 8.53 once the testing is complete in 8.54 development and all platform support is released for 11.2.2.2.

    Architecture

    We recommend running SES on a separate machine (from your apps) for two reasons:

      1. SES bundles specific WebLogic, Java, and Oracle DB versions, and at a minimum might need different OS patches than PeopleSoft. By having SES run on a different machine, these prerequisites can be managed better through their lifecycle, independently for PeopleSoft and SES.
      2. SES is resource intensive - it runs its own WebLogic and Oracle database. By having SES run on its own machine, sufficient resources can be allocated to SES, and the PeopleSoft servers are freed from the impact of SES load patterns.

    We will be providing a comprehensive red paper covering PeopleSoft/SES administration in the near future, but until that is published, we'll post tips on this blog.

    Read the article

  • MySQL won't stop doing stuff

    - by Felix
    Sorry for the title of the question; here's my problem: I've been trying to set up some scripts that import a lot of stuff hourly from an external source. They seemed to work fine, so I set up a cron job to run them every hour. One day later I found six or seven instances of that script just hogging the MySQL server, making it unresponsive. I killed their processes, but MySQL was still not responding. I had to kill MySQL, reboot, and then MySQL started working again (who knows on what) while still being unresponsive (yes, I did remove the scripts from the cron jobs). I ran SHOW PROCESSLIST and killed every process I could find. Still nothing: MySQL is hogging the HDD, sits at the top of top, and makes the server load go through the roof. I don't know what to do; if I kill and restart it, it will probably do the same thing. What should I do?

    Read the article

  • Different graphics card drivers while booting from external media

    - by goran
    I boot a certain system of mine into Ubuntu 9.10 from an external HDD. I am satisfied with the setup and it works fine; however, I would like to modify it so that I can choose which graphics card driver to load at boot time. Specifically, I would like to choose between:

      - the NVIDIA proprietary driver
      - the ATI proprietary driver
      - a generic driver

    Currently, if I am using the proprietary drivers, I don't boot into X; I delete xorg.conf, start GDM, and reconfigure the system using Jockey (for hardware drivers). What would the steps be to make this (semi-)automatic and avoid restarting X?

    Read the article

  • Network connection fails when downloading data

    - by Guus
    In the past few weeks, I've noticed my network connection becoming unstable while downloading files, and I'm not sure how to diagnose the issue. I've found that my network connections appear to be temporarily unresponsive (for periods of up to a minute or so) while I'm downloading a file (one that's large enough not to be downloaded instantly). The problem occurs when downloading data through a web browser, but also when using SCP to download data from a remote location. During the period in which network connections are unresponsive, every resource that I try to access over the network is unavailable. This includes:

      - The download itself (SCP reports a 'stalled' download)
      - Web pages (won't load - browsers report 'resource unavailable')
      - SSH sessions (CLI freezes)
      - VPN connections (connections terminate)
      - IM client connections (client starts reconnection attempts)
      - ... (everything is pretty much dead)

    I've noticed that during such a period, I cannot even access the (web-based) administrative console of the router on my LAN at home (although it remains reachable for other devices). The problem occurs when I'm connected to my home network, but also when I'm in the office. Devices other than my laptop are not affected. Given the above two characteristics, I assume the cause of the problem lies within my laptop, not the network infrastructure. I'm running Ubuntu 11.10. Apart from applying automatic updates from Ubuntu, I can't think of a change to the OS that could have started the problems. I'm absolutely positive that this issue did not occur up to a few weeks ago (it's very noticeable, and it only started to annoy me in the last few days). When the problem occurs, applications that use a network connection fail visibly (I get popups telling me that a VPN connection is broken, for instance). The network manager does not report any issues related to my Wi-Fi connection, though. Help?

    Read the article

  • Chrome refused to execute this JavaScript file

    - by TestSubject528491
    In the head of my HTML page, I have: <script src="https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js"></script> When I load the page in my browser (Google Chrome v 27.0.1453.116) and enable the developer tools, it says: Refused to execute script from 'https://raw.github.com/cloudhead/less.js/master/dist/less-1.3.3.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled. Indeed, the script won't run. Why does Chrome think this is a plain text file? It clearly has a .js file extension. Since I'm using HTML5, I omitted the type attribute, so I thought that might be causing the problem. So I added type="text/javascript" to the <script> tag, and got the same result. I even tried type="application/javascript" and still got the same error. Then I tried changing it to type="text/plain" just out of curiosity. The browser did not return an error, but of course the JavaScript did not run either. Finally I thought the periods in the filename might be throwing the browser off. So in my HTML code, I changed all the periods to the URL escape character %2E: <script src="https://raw.github.com/cloudhead/less%2Ejs/master/dist/less-1%2E3%2E3.js"></script> This still did not work. The only thing that truly works (i.e. the browser does not give an error and the JS successfully runs) is if I download the file, upload it to a local directory, and then change the src value to the local file. I'd rather not do this since I'm trying to save space on my own website. How do I get Chrome to recognize that the linked file is actually a JavaScript type?

    Read the article

  • Displaying a Grid of Data in ASP.NET MVC

    One of the most common tasks we face as web developers is displaying data in a grid. In its simplest incarnation, a grid merely displays information about a set of records - the orders placed by a particular customer, perhaps; however, most grids offer features like sorting, paging, and filtering to present the data in a more useful and readable manner. In ASP.NET WebForms, the GridView control offers a quick and easy way to display a set of records in a grid, and offers features like sorting, paging, editing, and deleting with just a little extra work. On page load, the GridView automatically renders as an HTML <table> element, freeing you from having to write any markup and letting you focus instead on retrieving and binding the data to the GridView. In an ASP.NET MVC application, however, developers are on the hook for generating the markup rendered by each view. This task can be a bit daunting for developers new to ASP.NET MVC, especially those with a background in WebForms. This is the first in a series of articles that explore how to display grids in an ASP.NET MVC application. This installment starts with a walkthrough of creating the ASP.NET MVC application and data access code used throughout the series. Next, it shows how to display a set of records in a simple grid. Future installments examine how to create richer grids that include sorting, paging, filtering, and client-side enhancements. We'll also look at pre-built grid solutions, like the Grid component in the MvcContrib project and JavaScript-based grids like jqGrid. But first things first - let's create an ASP.NET MVC application and see how to display database records in a web page. Read on to learn more!

    Read the article

  • Overloading methods that do logically different things, does this break any major principles?

    - by siva.k
    This is something that's been bugging me for a bit now. In some cases you see code that is a series of overloads, but when you look at the actual implementation you realize they do logically different things. However, writing them as overloads allows the caller to ignore this and get the same end result. But would it be more sound to name the methods more explicitly than to write them as overloads?

        public void LoadWords(string filePath)
        {
            var lines = File.ReadAllLines(filePath).ToList();
            LoadWords(lines);
        }

        public void LoadWords(IEnumerable<string> words)
        {
            // loads words into a List<string> based on some filters
        }

    Would these methods better serve future developers if they were named LoadWordsFromFile() and LoadWordsFromEnumerable()? It seems unnecessary to me, but if that is better, what programming principle would apply here? On the flip side, it would mean you didn't need to read the signatures to see exactly how you can load the words, which, as Uncle Bob says, would otherwise be a double take. But in general, is this type of overloading to be avoided?

    Read the article

  • Limiting calls to WCF services from BizTalk

    - by IntegrationOverload
    ** WORK IN PROGRESS ** This is just a placeholder for the full article that is in progress.

    The problem

    My BTS solution was receiving thousands of messages at once. After processing by BTS, I needed to send them on via one of several WCF services, depending on the message content. The problem is that, due to the asynchronous nature of BizTalk, the WCF services were getting hammered and could not cope with the load. Note: it is possible to limit the SOAP calls in the BtsNtSvc.exe.config file, but that does not have the desired effect for Net-TCP WCF services.

    The solution

    So I created a new message type for the messages in question and posted them to the BTS message box. This schema included the URL they were being sent to as a promoted property. I then subscribed to the message type from a new orchestration (that does just the WCF send), using the URL as a correlation ID. This created a singleton orchestration that is instantiated when the first message hits the message box. It then waits for further messages with the same correlation ID and type, and processes them one at a time using a loop shape with a timer (a pretty standard pattern for processing related messages).

    Image to go here

    This limits the number of calls to each individual WCF service to 1, which is a good start, but the services can handle more than that, and I didn't want to create a bottleneck. So I then constructed the correlation ID using the URL concatenated with a random number between 1 and 10, as sketched below. This makes 10 possible correlation IDs per URL, and so 10 instances of the singleton orchestration per WCF service. Just what I needed, and the upper random number is a configuration value in SSO, so I can change the maximum number of connections without touching the code.
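
    A language-neutral sketch of the correlation-ID sharding described above - this is not BizTalk code, just the key construction, with MAX_SHARDS and the example URL standing in as hypothetical placeholders for the SSO configuration value and a real destination service:

        #!/usr/bin/env python3
        # Each destination URL gets MAX_SHARDS possible correlation IDs, so at
        # most MAX_SHARDS singleton consumers (and hence concurrent WCF calls)
        # can exist per destination service.
        import random

        MAX_SHARDS = 10  # stands in for the SSO configuration value

        def correlation_id(url: str) -> str:
            """URL concatenated with a random shard number from 1 to MAX_SHARDS."""
            return f"{url}#{random.randint(1, MAX_SHARDS)}"

        # Example: messages for one service spread across up to 10 correlation IDs.
        for _ in range(5):
            print(correlation_id("net.tcp://svc-a.example.com/orders"))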

    Read the article

  • Problem booting from the hard drive after installing CentOS from a USB stick

    - by Rick
    Here is the situation: I created a CentOS Live 5.4 bootable USB drive and used it to install CentOS on an HP netbook. (BTW, the netbook doesn't have a CD-ROM drive, which is why I used the USB key.) When the installer goes to write the GRUB boot loader to disk, it wants to write it to the USB drive (/dev/sda), not the hard disk (/dev/hda). I do have the option of writing the boot loader to /dev/hda (not to the MBR!), but when I reboot I get a load error and the GRUB prompt. How can I get CentOS booting from the hard disk instead of using the USB key? Thanks.

    Read the article
