Search Results

Search found 9244 results on 370 pages for 'thinking sphinx'.


  • HAProxy appsession vs cookie precedence

    - by user1139473
    I am trying to find the best solution for balancing and keeping persistence on our application behind HAProxy. Here is our basic configuration: https://gist.github.com/endzyme/1804046b23c37beba520 After playing around with taking members down and up, and also reloading HAProxy (with -sf), I have noticed that appsession isn't 100% effective; it appears it doesn't always 'request-learn'. I also tried adding a cookie JSESSION prefix to the balance in case request-learn didn't take. Unfortunately that produced scenarios where the prefix listed svr2 but the request was balanced to a different server. I am assuming that's because the appsession table is checked first and the request sticks on that match before the cookie parameter is ever consulted. I have not tested cookie as an inserted option (not a prefix on an existing cookie), but I suspect it would yield similar results. My question is: which one is checked first, appsession or cookie, and is it an immediate catch once the first one matches, or a fall-through? As a follow-up: is it not recommended to use both in the same backend? As I understand it, cookie persistence takes less memory, is agnostic to reloads, and has far better persistence reliability; appsession, I assume, takes less CPU, since it is reading rather than writing. (Bonus question: is there a way to inspect the appsession/cookie table map? 'show table' on the socket doesn't show anything except stick-tables.) Many thanks in advance, -Nick
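    Not part of Nick's question, but for comparison, a minimal sketch of the cookie-insert variant he mentions not having tested (backend name, server names and addresses are illustrative, not taken from the gist):

        backend app
            balance roundrobin
            # insert a dedicated persistence cookie rather than prefixing JSESSIONID;
            # 'indirect' strips it before the server sees it, 'nocache' guards shared caches
            cookie SERVERID insert indirect nocache
            server svr1 10.0.0.1:8080 check cookie svr1
            server svr2 10.0.0.2:8080 check cookie svr2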


  • solution for an offline server

    - by dashmug
    I'm trying to set up a development server at work that will ideally be able to test-drive a couple of projects in PHP, Rails, or Django (not always running at the same time). I develop the apps locally on a Mac and then put the projects up on this server for testing with my actual users (non-techies) before deploying to a production server. My problem is that we have a very poor internet connection (almost negligible) at work, and the usual apt-get/yum/ports (make, clean, install) processes for setting up servers always get their packages from online repositories somewhere. I know I could probably download the source and then compile it myself, but that's going to be too much of a hassle for me. I'm thinking about two solutions: Plan A: Run a server VM on my Mac and then use this VM as the source repository for the offline server. I've read about Ubuntu's apt-proxy and it seems good enough, though I haven't tried it yet. I'm not sure if this is possible, but can I simply do apt-get install --download-only nginx so that the package and its dependencies are downloaded into my VM, and my server can use the VM as the source repo for apt-get? Plan B: Run a server VM on my Mac (which I can set up/update easily when I'm home) and then clone the VM to the offline development server. Maybe I should simply make the server a VM host so I can just copy the VM over. I think this is okay for the first-time setup, but subsequent updates will take too long (cloning the VM image). If I were working on Windows, I imagine it'd be easier, because most services have an installer file that I can download and then run at the server. If you could suggest another way, it would be much appreciated. Update: From Michael Hampton's answer, I found a possible solution, apt-cacher. I also found this page on Ubuntu's website. I wonder if there is a better tool than this one.
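    A sketch of the Plan A workflow with stock apt tools (hostnames and paths are placeholders; note that --download-only only fetches packages not already installed on the VM, so the VM should stay minimal):

        # on the internet-connected VM: download nginx plus any missing dependencies
        sudo apt-get install --download-only nginx
        # the .debs land in apt's cache; ship them to the offline box
        scp /var/cache/apt/archives/*.deb user@offline-server:/tmp/debs/
        # on the offline server: install everything that came over
        sudo dpkg -i /tmp/debs/*.deb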


  • How to find Stolen MacBook with iCloud

    - by user1518089
    My MacBook Air was stolen about 6 weeks ago. Through iCloud and "Find My iPhone", I have some pictures and a location down to about 2 blocks. The pictures are photos the current user has taken, which automatically appear on my local devices. (Yes, they probably saw my pictures until I stopped taking them. Yes, they are stupid.) I was thinking about going there and hanging out until I recognized the current users, but it is in a very bad neighborhood and I would be noticed. The police have not done anything. Yes, the MacBook can be locked or sent a message; I am hoping to get it back. Does anyone have ideas on how to track them down? While Find My iPhone shows their location, it does not report an IP address. Is there a way to get an IP address? Does Facebook face recognition work on strangers? Come on, tech geniuses, help me play detective. It does not have Dropbox installed.


  • Remote Desktop leaves host unresponsive

    - by Jeff Dalley
    I have my desktop PC at home set up to accept remote connections, and I often connect to it from work on my laptop via mstsc.exe. However, every time I remote to it, I find when I go home that despite the monitor being on, it's not receiving an image, and it looks as though the computer is hibernating or asleep. I basically have to restart it whenever I get home, and I know there's an answer for why it's doing this. More details: When exiting the remote session, I have tried both logging off the account and closing the RDP window without logging off; both give the same result. When I get home to the desktop I of course try moving the mouse, pressing Ctrl+Alt+Del to see if it's responsive enough to restart, and mashing keys to see if I can get any audio out of it. It seems pretty obvious it's sleeping or hibernating in some way: nothing happens in any of these cases and a physical restart is necessary. Both desktop and laptop are running Windows 7 Ultimate. I'm thinking it really is sleeping/hibernating, and I'm not sure why, because left alone my desktop's power options are set to never turn off the HDD or change its state - I leave it on 24/7. This could be a stupid error on my part but I just can't see it! Thanks.
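    One thing worth ruling out from an elevated command prompt on the desktop - a diagnostic sketch, with the zero timeouts as examples rather than a recommendation:

        :: what the active plan allows, and what is currently holding the machine awake
        powercfg -list
        powercfg -requests
        :: force sleep and hibernate off on AC power, so an RDP logoff can't trigger them
        powercfg -change -standby-timeout-ac 0
        powercfg -change -hibernate-timeout-ac 0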


  • What needs to be considered when setting up for Linux Development? [closed]

    - by user123586
    I want to set up a box for Linux development. I have a working Linux install with the usual toolchain and an IDE. I'm looking for advice on how to approach structuring accounts and folders for development. As the Perl folks say, "There's always more than one way to do it." Left to my own devices, I'll come up with several unproductive ways of doing it before figuring out what an experienced Linux programmer would think obvious. I'm not looking for instructions to follow for a specific set of tools or a specific software package. Instead, I'm looking for insight into what decisions need to be made and how to make them, with an understanding of the advantages and disadvantages of each individual choice. These are some of the questions that come up:
    - where to put sources
    - where to put built object files and libraries
    - where to install
    - what to set in environment variables
    - what compiler flags matter and how to manage them across several types of builds
    - what configuration entries to make in an IDE
    - how to manage libraries to support multiple environments
    - how to handle different build versions, such as debug vs release, or cross-platform builds
    If you are an experienced Linux developer, the answers to these questions may seem trivial and obvious. I'd like to learn to make decisions about these questions that result in as little manual configuration as possible, given some existing sources, a particular IDE or no IDE at all, a particular set of development libraries, etc. At this point you're probably thinking: can you be more specific? Sure. But remember that I'm trying to learn how to think about this stuff, not just follow a recipe for a specific set of results. Example: set up a project that uses CMake for some of its components, autogen.sh followed by configure for others, and just configure for a few more, with:
    - debug builds without an IDE
    - debug builds in NetBeans
    - debug builds in Eclipse
    - debug builds in Visual Studio
    - all of the above with release builds for Linux, Mac and Windows
    What are your thoughts on an approach that works for all four? Do you have any advice on what to read?
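    As one concrete data point for the source/build/install questions, a common convention is a single source tree plus disposable out-of-source build directories, one per configuration - a sketch assuming a reasonably recent CMake, with illustrative paths:

        # one checkout, several build trees; nothing gets written into the sources
        cd ~/src/myproject
        cmake -B build/debug   -DCMAKE_BUILD_TYPE=Debug   -DCMAKE_INSTALL_PREFIX="$HOME/opt/myproject"
        cmake -B build/release -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX="$HOME/opt/myproject"
        cmake --build build/debug
        # autotools components follow the same pattern with a VPATH build
        mkdir -p build/autotools && cd build/autotools
        ../../configure --prefix="$HOME/opt/myproject" && make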


  • Diff bios - corrupt video driver

    - by sfonck
    Hi, I'm using a Dell Precision M90 laptop which has an NVIDIA Quadro FX 2500M graphics card and is running Windows XP. The laptop had been running fine, but a few weeks ago the screen went 'white'. I restarted the computer; the BIOS and startup screens showed weird green dots and stripes, and normal startup only showed a black screen - only VGA mode could display anything. I tried removing and reinstalling the correct drivers downloaded from Dell's website - no solution. I gave up and reinstalled XP, and everything worked perfectly again. Two weeks later - again the white screen. I tried everything again (flashing a new BIOS too - nothing worked), reinstalled XP, and everything worked again, so I made a DriveSnapShot image of the partition. Today - the 'white screen' again. OK, no problem, I thought; all I needed to do was restore the DriveSnapShot backup. A few minutes later the backup was restored... but guess what: the video driver does not work correctly. As DriveSnapShot restored the complete partition exactly as it was when everything worked perfectly, this would mean my driver problems are due to 'settings' in the BIOS or on the graphics card itself, and that these 'settings' get overridden by a fresh XP install. I'm out of options. Can somebody help me find a solution for this problem: Is there some way to back up and restore a BIOS after seeing such problems? Is there some way to know what is causing this problem, like a BIOS diff utility? Thanks!


  • Upgrade or replace?

    - by Felix
    My current PC is about four years old, although I have made upgrades to it throughout its existence. The current specs are:
    - (old) Intel Pentium D 2.80GHz (32K L1 / 2M L2), Gigabyte 945GCMX-S2 motherboard
    - (old) 2.5GB DDR2 (slot0: 512MB @ 533MHz; slot1: 2GB @ 667MHz)
    - (new) HIS Radeon HD 4670 - I think this is limited by the motherboard not supporting PCIe 2.0 (?)
    - (old) WD Caviar 160GB - pretty slow
    - (new) WD Caviar Black 640GB
    (If any more specs are relevant, let me know and I'll add them.) Now, on to my question. I've been having performance issues lately, both in video games and in intensive applications. A couple of examples:
    - Android application development (running Eclipse and the Android emulator) is painfully slow (on Linux). I only realized this when, at my new job as an Android dev, both tools are MUCH quicker. (I'm not sure what CPU I have there.)
    - The guys at my new job got me NFS Hot Pursuit, in which I barely get 5-10 FPS, even with graphics options turned all the way down.
    My guess is that the bottleneck in my system is my CPU, so I'm thinking of upgrading to a quad-core i5 + new motherboard + 4GB DDR3 (or more, 'cause I know you'll all jump and say 8GB minimum). Now:
    - Is that a good idea? Is my CPU really a bottleneck, or is the whole system too old and I should replace it?
    - I run Windows 7 on the old 160GB HDD (which is on IDE, by the way). Could this slow down games as well? Should I get a new drive for Windows if I want to play new games?
    - I know nothing about power supplies. Could that be a problem / will it be a problem if I upgrade to an i5?
    - How come DiRT 2 works on full graphics settings (pretty amazing graphics, by the way) and NFS Hot Pursuit pulls only 5-10 FPS?


  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes" - but I don't know what it's called, so it's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically already have the infrastructure in place to support these services at several of them. To provide access to these services, I create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP of one of my DCs), and clients connect via that DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry to some sort of third party that offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our own servers, but that would kind of defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.
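    The do-it-yourself core of such a service is just a low-TTL record that gets repointed on failure - a sketch of the zone fragment (names, TTL and addresses are illustrative):

        ; low TTL so a repoint takes effect quickly after a datacenter failure
        vpn.mycompany.com.    60    IN    A    203.0.113.10    ; primary DC gateway
        ; on failover, the management console rewrites it to the standby DC:
        ; vpn.mycompany.com.  60    IN    A    198.51.100.10   ; secondary DC gateway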


  • IIS7 ASP.Net 4.0 - 404 errors only for external clients

    - by dmcgiv
    We recently moved an ASP.Net 3.5 website to 4.0 (integrated mode), and when we deployed to the client's server (Windows Server 2008 Web edition) we noticed that some .aspx pages are serving 404 errors. What is strange is that 1) the pages exist; 2) if you browse from the server itself, the pages are served as normal - only external clients get the 404; 3) it's the default 404 error page, not the one configured in the web.config; 4) it only happens for some .aspx pages, and I've not been able to establish a link between the pages that are not being served externally. We are using a URL rewriter module, which I first thought might be at fault, but then I realised that only some of the failing pages are being rewritten. I've also tested removing the HTTP module and the problem still persists. As everything works as expected when logged onto the server, I was thinking it might be some sort of permission issue - but why would it only affect a few pages? I turned on failed request tracing and the debug files are being generated with the expected 404 error, although at the moment I'm not sure what most of the data means, so I can't decipher what's going on internally. I'd really appreciate some help with this one.
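    One way to pin down the internal-vs-external split is to replay an identical request from both vantage points and compare the responses - a sketch with a placeholder host and address:

        # send the same Host header straight at the server's IP, once from the
        # server itself and once from an outside machine, and diff the output
        curl -s -o /dev/null -D - -H "Host: www.example.com" http://203.0.113.5/failing-page.aspx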


  • Multiple Screen - Keyboard Sharing

    - by nhbdesign
    I run a small architectural firm with several drafters employed. I'm currently setting up a new office space, and one of the things at the top of my list is figuring out a way to keep tabs on my drafters' work and collaborate in real time. Here's the challenge: they sit in a separate large cubicle room and I'm on the other end of the hallway. The way it is now, every time they have a question on how to proceed on a certain design, they come all the way to my office, I open their file (read-only), give some ideas, save as a new file, they go back, copy, paste... in short, nonsense. What I've been thinking of is a hardwired solution that would give me an extra monitor on my desktop which is wired (through a KVM or something) to each of my employees' workstations, serving as a secondary display, so that I can watch live what they do and interact with them just as if I had an extra keyboard and monitor in their office - except I don't want a separate monitor on my desk for each employee. I'd want them tiled on a single large screen, watching all screens live, and whenever they ask me (or I just decide...) to step in, I just click on any tile and hurray, I'm in, editing and saving in real time on their workstation. I also want to reserve the option, when I want to, of using that monitor as just an extra screen for my workstation. Is something like that possible in 2013? P.S. I know of TeamViewer and similar internet/software-based stuff, but I'm specifically looking for something solid, hardwired and maintenance-free, and also something that would let me watch without my employees getting notified every time I do so (I'm not a tough boss though...).


  • How do I back up Hyper-V VMs with Windows Server backup on Windows Server 2008 R2?

    - by Chris
    I've searched this site and Google, and I CAN find information about how to back up Hyper-V virtual machines using Windows Server Backup from the Hyper-V host on Windows Server 2008: you have to set up a registry key to enable the Hyper-V VSS writer, and then you can take online backups of your VMs. However, all the information I have found is about a year old, and none of it has been updated for Windows Server 2008 R2. I tried to run the "FixIt" .msi found here: http://support.microsoft.com/kb/958662 ... but it said that it was not applicable to my operating system. So I am thinking either Windows Server 2008 R2 already has its VSS support for Hyper-V enabled, or it still needs to be enabled but the FixIt package doesn't feel comfortable operating on an OS that wasn't RTM at the time. I went ahead and scheduled a Windows Server Backup for 9pm tomorrow. It said it would take 86 GB, which means it MUST be counting those VMs. But will this backup fail? Can anyone confirm whether you have to apply the same registry changes for R2?
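    For reference, the registry change the 2008-era articles describe boils down to one value. This is from memory of KB958662, so treat it as a sketch and verify the GUID and value name against the KB before using it:

        :: register the Hyper-V VSS writer with Windows Server Backup
        reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}" /v "Application Identifier" /t REG_SZ /d "Hyper-V"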


  • Running Tor relay on personal server: can this hurt?

    - by rxt
    I would like to install Tor as a relay on a hosted personal server. I have loads of bandwidth that I don't use. It would not be an exit point. Can this hurt my server somehow? Possible problems I'm thinking of are blacklisting of the IP address, or something similar. I know that exit points get blacklisted by many servers. So if I'm using Tor as a client, I will probably present a blacklisted IP address to the outside world, and so cannot access those sites. However, I'm running this on a server, and as a public relay. Could this hurt the functioning of, and access to, websites on this server? I could install it as a bridge instead. I'm a little confused about the difference between bridging and relaying. If I understand correctly, the only difference is that a relay is public. Does this mean that bridging only works if I know someone and give them my IP address?
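    A minimal sketch of the torrc for a non-exit relay, with the bridge variant commented out (the nickname is illustrative):

        ORPort 9001
        Nickname myrelay
        # refuse all exit traffic; only exit nodes end up on the exit blacklists
        ExitPolicy reject *:*
        # bridge variant: the same relay, but kept out of the public directory
        # BridgeRelay 1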


  • "merging" multiple internet connections

    - by Spencer R
    I've seen this question asked several times here on SF, but I'm looking for some updated information, specifically concerning Server 2012. I'm in the process of buying a home, so I'm trying to put together some plans for how I want to structure my network. Internet speeds aren't the greatest and connections can be unreliable where the house is, so I was thinking of having two DSL lines installed. My question is: how could I leverage those two connections to create the best network I can, in terms of speed and reliability? My parents will be moving in with me - they consume a lot of bandwidth as it is, and once my internet traffic is added to that, I'm headed for a lot of frustration. I seem to remember reading somewhere that Server 2012 has some new functionality for utilizing multiple connections on multiple NICs in a way that wasn't possible in earlier versions of Server. I'm not sure Windows is the right tool, but I'm an application developer and spend the majority of my time in Windows environments. However, I've only recently returned to the Windows world, so I'd like my main server at home to run Windows Server 2012 so that I can become more familiar with it.
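    The Server 2012 feature this probably refers to is the built-in NIC Teaming (LBFO). One caveat worth stating up front: it aggregates NICs on a single network and won't fuse two independent DSL uplinks into one faster pipe. A one-line PowerShell sketch with illustrative adapter names:

        # team two server NICs for LAN-side bandwidth and failover
        New-NetLbfoTeam -Name Team1 -TeamMembers "Ethernet","Ethernet 2" -TeamingMode SwitchIndependent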


  • How can I set up an nginx cache strategy that tries Amazon S3 first, then memcached, with a fallback on miss?

    - by Tim
    I have a large site with lots of pages that almost never change. Right now I am using two memcached servers (Amazon ElastiCache), but this is really expensive. That's why, for these files that barely ever change, I want to upload them to Amazon S3 and shut down one memcached server. Here is my conf:

        location ~ /longterm/(.*) {
            proxy_pass http://amazonS3bucket;
            proxy_intercept_errors on;
            proxy_next_upstream http_404;
            error_page 404 503 = @fallback_memcached;
        }

        location @fallback_memcached {
            set $memcached_key $uri;
            memcached_pass name:11211;
            error_page 404 = @fallback;
        }

        location @fallback {
            try_files $uri $uri/index.html;
        }

    I don't know why, but the config doesn't work on the final fallback: if I get an Amazon S3 hit it works, and if I get an S3 miss followed by a memcached hit it works, but on an S3 miss followed by a memcached miss, resolving the last fallback fails. I am also thinking of using the Amazon S3 FUSE filesystem http://code.google.com/p/s3fs/ instead of the proxy_pass; I think it would be easier to implement, but would it also be less performant?
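    A hedged guess at the failing hop, in case it helps someone with the same layout: nginx does not follow error_page redirects recursively by default, so a 404 raised inside the first named location may never reach the second one. The candidate one-line fix, to be verified:

        # allow a second error_page hop from inside @fallback_memcached
        recursive_error_pages on;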


  • Print job leaves queue but document isn't printed

    - by midnightstar
    I'm dealing with an HP Deskjet F380 All-in-One printer, connected via USB to a desktop running Windows 7 Enterprise x64. If I attempt to print something like a web page or a Word document, the print job shows up in the print queue and the printer stirs - by stir, I mean it seems to prepare itself to print. However, the job then leaves the queue (I'm thinking the computer sees it as completed) and the printer never actually prints anything. Yet when I went into Devices and Printers under the Windows Start menu, into the printer properties, and printed a test page, the test page printed successfully. I uninstalled and reinstalled the printer drivers, but the printer continued the same behavior afterwards. I also connected the printer to another computer, and there it prints just about anything. I also checked that the computer the printer needs to be connected to was up to date as far as the OS goes; the machine is fully up to date. I played with the way the computer handles print spooling: under the printer properties, on the "Advanced" tab, I set jobs to print directly to the printer. In all these instances, the same behavior continues. I've restarted the print spooler service. I've also gone into C:\Windows\System32\spool\PRINTERS and deleted the files sitting in that folder. I have run sfc /scannow and the system found no integrity errors. I cold-booted both the computer and the printer individually. The only lead I really have is that since the printer prints on other PCs, I can only assume something is wrong with the way this PC is configured.
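    For completeness, the spooler reset described above as an elevated-prompt sketch:

        net stop spooler
        :: flush any stuck jobs out of the spool directory
        del /q %SystemRoot%\System32\spool\PRINTERS\*.*
        net start spooler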


  • OSX server setup suggestions

    - by Tom
    I am looking into the possibility of setting up an OS X server for my employees, and would like some input on the best approach to meet my needs, and perhaps some suggestions if I am moving in the wrong direction. I am thinking of a Mac mini running OS X Server, and am not sure whether my needs will be met and what possibilities are out there. I want these capabilities:
    - groups/users managed on the server
    - shared folders and private folders for users/groups
    - access to activated services
    - server-hosted software for the users (developer tools etc.) - similar to Windows Terminal Server
    - a virtual desktop environment (both on the local network and over internet/VPN)
    - accessible from both Mac and Windows
    The reason I am looking at OS X Server is that my employees work almost exclusively in an OS X environment, and I want to offer the ability to log on to the server through some kind of terminal software and have full access to their working OS X environment and software from their Mac or PC, from anywhere they might be - instead of having multiple setups and spending a lot of time installing and setting up the needed software on every client. This is a small business, where some work on the local network and others from the internet, preferably through VPN. A terminal-server solution that is fast and easy to manage would be perfect for our needs. So if anyone has experience with a similar setup, please let me know what you did and your experiences with it.


  • Reread partition table without rebooting?

    - by Teddy
    Sometimes, when resizing or otherwise mucking about with partitions on a disk, cfdisk will say: "Wrote partition table, but re-read table failed. Reboot to update table." (This also happens with other partitioning tools, so I'm thinking this is a Linux issue rather than a cfdisk issue.) Why is this, why does it only happen sometimes, and what can I do to avoid it? Note: please assume that none of the partitions I am actually editing are opened, mounted or otherwise in use. Update: cfdisk uses ioctl(fd, BLKRRPART, NULL) to tell Linux to reread the partition table. Two of the other tools recommended so far (hdparm -z DEVICE, sfdisk -R DEVICE) do exactly the same thing. The partprobe DEVICE command, on the other hand, seems to use a newer ioctl called BLKPG, which might be better; I don't know. (It also falls back on BLKRRPART if BLKPG fails.) BLKPG seems to be a "this partition has changed; here is the new size" operation, and it looks like partprobe calls it individually on all the partitions of the device passed, so it should work if the individual partitions are unused. However, I have not had the opportunity to try it.
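    For reference, the user-space commands that map onto the two ioctls discussed above (the device name is illustrative; all of them need the partitions to be unused):

        blockdev --rereadpt /dev/sdx   # plain BLKRRPART, whole-disk re-read
        hdparm -z /dev/sdx             # same BLKRRPART ioctl
        sfdisk -R /dev/sdx             # same BLKRRPART ioctl
        partprobe /dev/sdx             # per-partition BLKPG, falling back to BLKRRPART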


  • Have some questions about setting up a VPN to my private cloud servers

    - by Pure.Krome
    I've got a number of virtual servers running at a pretty big cloud provider, all running Windows 2008 R2, with a Cisco ASA firewall in front of them. Currently I've got all ports blocked except 80/443/21/3389 (for Remote Desktop). I asked to have a VPN enabled on the firewall and they said it's easy to do, BUT I'd need to use the third-party Cisco client software. Now, I don't want to get into a debate about it, but we don't want to install anything extra on our -client- computers. We all use Windows 7, and we love using the built-in VPN client to connect to other private LANs we have set up in other locations. So I'm wondering what options I have to create a VPN tunnel to our private cloud LAN. All our cloud servers are part of a WORKGROUP, so there's no Active Directory, nor do we want to install all that. Secondly, we know we can open up a firewall port, so any ports needed for the VPN are fine! Lastly, I was thinking of just using one of the existing servers as the VPN server (using the built-in Windows VPN support), but I'm not sure that's a good idea. Remember: we just want to use the baked-in VPN client in Windows 7, which speaks PPTP, SSTP or L2TP/IPsec. I would -LOVE- to use some free OSS software. For usernames/passwords, we'd probably just have one account, like U:Hithere P:whatever, so we don't need any hardcore account management like Active Directory. So does anyone have any ideas?
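    If RRAS on one of the Windows boxes ends up being the answer, the ASA side for plain PPTP is only two lines - a sketch with a placeholder ACL name and address:

        access-list OUTSIDE_IN extended permit tcp any host 203.0.113.10 eq 1723
        access-list OUTSIDE_IN extended permit gre any host 203.0.113.10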


  • 1 PC, 2 consoles (as in 2 monitors, keyboards and mice)

    - by ciuly
    I have this desire to "kill two birds with one shot". Currently I have one server running around the clock, one laptop that runs about 8 hours a day, 7 days a week, and a desktop that runs about the same length of time. All three are... old, to say the least. So there is a great need to upgrade (well, the server might handle its job for another year or so, but that only depends on how much time I have to put it to "work"). Now, I'm "dreaming" of only one PC. I'm thinking of VMware's ESX: there would be a VM for the server, a VM for the "laptop" and one for the "desktop". And obviously I'd have to somehow "link" a set of monitor/keyboard/mouse to one of the laptop/desktop VMs. The server doesn't need such things, obviously (it doesn't have them at this moment either). Is something like this possible? ESX is not a requirement, it's just something I found that answers part of my problem - but there still remain the two keyboard/monitor/mouse sets that need connecting and "linking" to the appropriate VM. Why would I want to do this? Well, first of all, it's much cheaper to upgrade one PC than three. Then, the power consumption is obviously lower. Plus the extra space. Plus it allows me to better separate networks and services. Thanks.


  • thought about shared storage (NFS, Lustre) [closed]

    - by user134880
    Possible Duplicate: Can you help me with my capacity planning? I now have a small cluster with a total of 8 nodes: 6 of them are computing nodes (Apache and VMware) and 2 are for storage. The 2 storage nodes are identical: each is a Linux box with 8 x 1TB WD RE4 drives in soft RAID 10. The 1st box is the master and the 2nd is the slave, and data is mirrored with DRBD. We export NFSv4 shares to Apache (for the document root) and iSCSI to VMware. Right now everything works well and is stable, but it will soon be time to upgrade the system. I have been thinking of Lustre. Does anyone have real experience with Lustre or NFS on medium-sized clusters? Would it be a good idea just to upgrade the servers and change the HDDs to 3TB? With NFS we will always have only 2 servers to maintain (one primary and one slave). Thanks. QUESTIONS: 1) Has anyone here used Lustre? In production? I have seen a lot of info about how hard Lustre is to set up because you need to compile your own kernel and patches, but those are answers from newbies. Is there anyone who has used Lustre for some period of time? 2) About the disk upgrade: this is only a description of strategy. I'm not asking whether 3TB is enough; I'm just asking whether it is sensible to replace the HDDs instead of adding a new server (as I would with Lustre). Thanks again.


  • Combine multiple network interfaces to connect to a dedicated server

    - by Dženis Macanovic
    This is an underpaid employee writing, who's apparently responsible for all the IT stuff in a very small (non-IT) company. Today said company got a bunch of PCs/workstations, a switch, a computer that's supposed to be used as a router, two DSL connections (each 16 Mbit/s downstream and 1 Mbit/s upstream) and a dedicated server which is hosted and managed professionally by a larger local company with some decent connection speed (1 Gbit/s in both directions, if I'm not mistaken). This is what I've set up (note I'm not making use of the second DSL connection at all):

        [PC1] [PC2] ...
             \ /
         [SWITCH] --- eth0 [LINUX DEBIAN ROUTER] eth1 --- [DSL MODEM 1] --- [INTERNET]

    ...when my boss asked me if it was somehow possible to get 32 Mbit/s downstream and 2 Mbit/s upstream. At the time I replied "no" without thinking too much about it. Now I've just had the following idea:

        [PC1] [PC2] ...
             \ /                               eth1 --- [DSL MODEM 1 (NON-STATIC IP)] ---\
         [SWITCH] --- eth0 [LINUX DEBIAN ROUTER]                                          [INTERNET] --- eth0 [LINUX DEBIAN SERVER]
                                               eth2 --- [ DSL MODEM 2 (STATIC IP)  ] ---/

    ...but I have absolutely no clue how to implement it. Would that even be possible? What would the masquerading rules look like on the router? What about the server? I didn't find anything on the internet, mainly because I couldn't come up with any good keywords to search for to begin with. English obviously isn't my first language. Thanks in advance for your time!
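    Not the whole tunnel-to-the-server design, but the router-side half of balancing two uplinks with iproute2, as a sketch (addresses are illustrative; the table names must exist in /etc/iproute2/rt_tables, and a single TCP connection still rides one line):

        # per-uplink tables so replies leave through the modem they arrived on
        ip route add default via 192.168.1.1 dev eth1 table dsl1
        ip route add default via 192.168.2.1 dev eth2 table dsl2
        ip rule add from 192.168.1.2 table dsl1
        ip rule add from 192.168.2.2 table dsl2
        # spread new outbound flows across both modems
        ip route replace default scope global \
            nexthop via 192.168.1.1 dev eth1 weight 1 \
            nexthop via 192.168.2.1 dev eth2 weight 1
        # NAT everything leaving either uplink
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        iptables -t nat -A POSTROUTING -o eth2 -j MASQUERADE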



  • how to export VARs from a subshell to a parent shell?

    - by webwesen
    I have a Korn shell script:

        #!/bin/ksh
        # set the right ENV
        case $INPUT in
            abc) export BIN=${ABC_BIN} ;;
            def) export BIN=${DEF_BIN} ;;
            *)   export BIN=${BASE_BIN} ;;
        esac
        # exit 0 <- bad idea for sourcing the file

    Right now these VARs are exported only in a subshell, but I want them to be set in my parent shell as well, so that when I am back at the prompt those vars are still set correctly. I know about '. .myscript.sh', but is there a way to do it without sourcing? My users often forget to source. EDIT1: removed the "exit 0" part - this was just me typing without thinking first. EDIT2: to add more detail on why I need this: my developers write code for (for simplicity's sake) two apps: ABC and DEF. Each app is run in production by a separate user, usrabc or usrdef, who has set up $BIN, $CFG, $ORA_HOME, whatever - specific to their app. So ABC's $BIN = /opt/abc/bin ($ABC_BIN in the above script), DEF's $BIN = /opt/def/bin ($DEF_BIN), etc. Now, on the dev box, developers can work on both ABC and DEF at the same time under their own user account 'justin_case', and I make them source the file above so that they can switch their ENV var settings back and forth ($BIN should point to $ABC_BIN at one time, and then I need to switch to $BIN=$DEF_BIN). The script should also create new sandboxes for parallel development of the same app, etc., which makes it interactive - asking for a sandbox name and so on: /home/justin_case/sandbox_abc_beta2, /home/justin_case/sandbox_abc_r1, /home/justin_case/sandbox_def_r1. The other option I have considered is writing aliases and adding them to every user's profile - alias setup_env='. .myscript.sh' - and running it with 'setup_env parameter1 ... parameterX'. That makes more sense to me now.
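    A sketch of the alias route done as a function instead, since ksh aliases don't take positional parameters but functions do (the script path is illustrative):

        # in each user's ~/.profile (or a file sourced from it)
        setup_env() {
            # the original script keys off $INPUT, so pass the argument through it;
            # sourcing (not executing) makes the exports land in the interactive shell
            INPUT=$1 ; . /usr/local/etc/myscript.sh
        }
        # usage: setup_env abc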


  • What are the practical differences between an IP address and a server?

    - by JMC Creative
    My understanding of IPs and other DNS-type server-related issues really falls short (read: extreme noob). I know a dedicated server would increase speed. What difference in speed, if any, would a dedicated IP make? Am I correct in understanding, from Yahoo's Best Practices, that I could use the second IP to serve up some content, which would increase the number of parallel downloads for the user? Or will both IPs (purchased on the same hosting account) point to the same server? Or how does it work? Are there other optimization things I should be aware of when thinking of purchasing a dedicated IP? Clarification: I am talking about the speed of serving the web pages, i.e. the speed of my website. Yes, I know that an IP and a server are completely different things - not even opposites, just different. But this, indeed, is my question! The question reformulated: will having a second (dedicated) IP for my website speed up the time it takes to load and display for the user? Or does that have nothing at all to do with the IP, and is it only a server issue? I'm sorry if this is still unclear. This is a real question though; I may just not be wording it well.


  • computer randomly restarting, both in game and out of game

    - by eric
    First, my specs:
    - AMD Phenom II X4 955 processor, 3.2 GHz
    - 20 GB DDR3 RAM
    - NVIDIA GeForce GTX 770, 4 GB
    - Corsair TX850W 850 W PSU
    - Gigabyte UD3 mobo
    - Windows 7 Professional
    I recently upgraded my video card to the GTX 770 and upgraded my PSU to the 850 W that's in it now. I did a reformat with the installation of the new GPU and PSU and started fresh, and only have a couple of programs installed (Diablo 3, NVIDIA Control Panel, WoW, and Steam). All drivers are up to date and everything is hooked up correctly. The problem is it will randomly shut down - no blue screen, it just turns itself straight off and reboots after a couple of seconds. Occasionally I have to unplug the power cable from the PSU for a few minutes, then reconnect it, and it will start up. It seems pretty random: sometimes it does it when the PC is just sitting on the home screen, sometimes during games, and sometimes it doesn't do it for days at a time. I noticed the PSU felt hot, so I put an extra fan blowing straight onto both the PSU and GPU, and neither feels overly hot after it shuts down now. Could it just be a PSU problem? The PSU was taken from another machine, but it wasn't having this problem in that machine. I have seen a few articles online about the GTX 770 doing the same thing, but I haven't found any answers or solutions. Any help will be appreciated. I'm sure the 850 W is enough to power my machine; I'm just stumped and have run out of ideas to fix it. I have even returned the video card for another, thinking it might have been an issue with that particular card, but I'm still getting the same problem.

