Search Results

Search found 4278 results on 172 pages for 'capacity planning'.


  • Shortcut Keys for external monitor with XP

    - by Rhys
    Last night my laptop (a Samsung NC20 running XP) fell off the table and the screen cracked, leaving me unable to see anything. I have arranged for it to be repaired, but I want to back up several folders before it gets sent away. I was planning to use my parents' LG LCD TV as an external monitor so I can see what I am doing while copying things over, but after plugging it in nothing seems to happen (it works instantly on another laptop running Vista), and the hotkeys seem to be of no use at all. Does anyone know the sequence of shortcut keys I need to press to get XP to use an external monitor? Thanks in advance

    Read the article

  • Could a computer act (dependably) as a wireless router for 200+ clients? [closed]

    - by awkwardusername
    That is, I have a Core 2 Duo E7500 at 2.93GHz with 2GB of memory. I plan to install either Windows Server 2012 or Zeroshell 2.0RC1, and the machine will also include two PCIe wireless card adapters. It has one Ethernet port, which I will connect to another machine acting as a database and web server. My plan is to have a corporate-level wireless intranet with 200+ clients. I cannot afford to buy routers because I want to operate at as close to zero cost as possible, using the resources I already have. Is this plan plausible? Also, what minimum specs should my wireless cards have? @SvenW: I meant corporate at the deployment level. I am still an undergraduate, and this is more of an educational and experimental exercise than an actual project. I got Windows Server 2012 for free, though, and this isn't for commercial use.
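
    If the Zeroshell/Linux route is taken, the access-point side is typically driven by hostapd; purely as illustration, a minimal sketch of such a config (the interface name, SSID, and passphrase are placeholders, not details from the question):

        # /etc/hostapd/hostapd.conf -- minimal WPA2 access point (all values are placeholders)
        interface=wlan0
        driver=nl80211
        ssid=campus-intranet
        hw_mode=g
        channel=6
        wpa=2
        wpa_key_mgmt=WPA-PSK
        wpa_passphrase=use-a-long-passphrase-here
        # raise the per-radio association cap; hardware/driver limits still apply
        max_num_sta=128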

    Read the article

  • What is the difference between a PDU and a power strip (both 120V, 15A)?

    - by rob
    I just chatted with an APC rep about upgrading the UPSes at our office. She recommended a single higher-capacity 6-outlet Smart-UPS to replace the four Back-UPS units we currently have. When I asked how she recommended plugging in all the current devices, she recommended using APC's AP9567 PDU, but said not to use a power strip. At first she said I had to use an APC-brand PDU, but after I asked about using a Tripp Lite PDU, she said any brand of PDU would be fine. The APC PDU referenced above looks like a standard 120V power strip with overload protection but no surge protection. Other than overload protection (which seems redundant when plugging into the UPS), is there something else I'm missing, or should any power strip (without surge protection) be fine? Edit: I didn't mention it earlier, but we don't have a proper rack, though I did still plan to mount the PDU or power strip to something. I guess I'm wondering if there's any special reason I should pay as much as $180 for the low-end APC PDU (which just looks like a power strip to me) vs. $20-$30 for a workbench power strip.

    Read the article

  • How to design a network for connectivity between private and corporate LANs?

    - by maruti
    There is a group of servers connected to shared storage in a private LAN (10.x.x.x). This private LAN is managed by a Windows server (DHCP, DNS, and directory services). These hosts need to be reachable from outside the datacenter, e.g. via Remote Desktop. Can NIC2 on each of the hosts be connected to the other, public LAN without compromising speed or security? What are the important considerations: additional hardware, such as switches? Routing and DNS software? Currently available hardware: a Dell PowerConnect 6224 switch, planned for the storage network. Software: Windows 2003 Server for DHCP, DNS, and A/D. Would it be more flexible to use Linux distributions like IPCop, Untangle, etc.? All I am looking for is good isolation between the private and other networks, avoiding DHCP, DNS, and AD clashes.
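
    If a Linux distribution ends up routing between the two networks, the isolation usually reduces to a handful of firewall rules; a sketch assuming iptables, with the interface names and an RDP-only policy chosen purely for illustration:

        # eth0 = private LAN (10.x.x.x), eth1 = corporate LAN
        # allow only RDP (3389/tcp) in, plus established replies back; drop everything else crossing over
        iptables -A FORWARD -i eth1 -o eth0 -p tcp --dport 3389 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A FORWARD -j DROP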

    Read the article

  • java -version doesn't write to stdout?

    - by Zárate
    Hi there, Either I'm doing something silly or Sun is. How come something like: java -version > version.txt still prints to stdout and leaves version.txt empty? I checked the exit code, and it's still 0, so it's not that it's writing to stderr. I need this because I'm building a test-environment tool and want to check that the version of Java is adequate. I was planning to capture that version output, but now I'm stuck. I'm on OS X Leopard, Java version 1.6.0_20. Any ideas? Cheers, Juan
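
    For what it's worth, the exit code says nothing about which stream was used: java -version does write to stderr, so it is file descriptor 2 that needs redirecting. A minimal sketch:

        # java -version prints to stderr (fd 2), not stdout (fd 1)
        java -version 2> version.txt
        # or fold stderr into stdout and capture both:
        java -version > version.txt 2>&1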

    Read the article

  • SQuirreL Client: Opening up a table in a separate tab

    - by Dalin Seivewright
    I've switched to SQuirreL SQL Client to view an Oracle database with a variety of tables. At any given time, I need to look at the contents of several related tables at once. The problem is that, at least by default, SQuirreL does not open multiple tables at a time: when you click on a different table object, the main view is refreshed with the newly selected table's data. Oracle's SQL Developer (which I am trying to move away from) did this by default too, if I recall, but there was an option to "freeze" panes. Does a similar option exist in SQuirreL? I don't need to view the contents of two different tables on the same screen (i.e. a split view), but I would like a tab for each table so I can quickly switch between table views rather than having to find the table in the table list over and over again. Note: I am using this in a professional capacity, but if it does not "belong" here then I suppose it could be moved to Super User.

    Read the article

  • Forcing programs to be installed to another drive

    - by zyboxenterprises
    I have an SSD as my main Windows drive, plus a 640GB 2.5" HDD, partitioned to store programs and user settings and also to act as backup (it's the only thing I had lying around when building my PC). The goal was to make the PC as fast as possible while having extra storage capacity available for normal user data and to support my small data-recovery business. The problem is that whenever I install a program, it installs to C:\Program Files [(x86) for 32-bit programs]\, even though I have changed the environment variables. This wouldn't normally be an issue, except that every installer points its shortcut to my 640GB HDD. The root layout of both drives: To clarify: programs get installed to C:\; program shortcuts always point to Z:\, my 640GB HDD. Modifying the relevant environment variables doesn't do anything. I looked at this, but it only talks about modifying the registry and environment variables, which I have already done. I install to Z:\ when the installer lets me change the installation path, but installers don't always allow this. Is there a way I can force every program to install to the relevant location on Z:\? Perhaps I'm missing something here? Edit: I found this program; would it be appropriate in my case? It would let me move the entire Program Files (and its x86 version) to Z:\ without impacting performance.
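
    For reference, the default install root that most installers consult lives in the registry; the usual (Microsoft-unsupported) tweak looks something like the sketch below, with the Z:\ targets as placeholders. Installers that hard-code C:\ will ignore it regardless:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "Z:\Program Files" /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v "ProgramFilesDir (x86)" /t REG_SZ /d "Z:\Program Files (x86)" /f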

    Read the article

  • Motherboard RAM question.

    - by winterwindz
    Firstly, I'm new at this and not a very computer-ish person, so I apologize if this is not the right place, but here it goes. My motherboard is an Asus P5LD2-SE. I'm running Windows 7 Ultimate 32-bit (x86) with 1GB of RAM (2x512MB). I'm planning to upgrade my OS to 64-bit, and because I know my motherboard is dual-channel, I bought a dual-channel 2GB RAM kit (2 pcs). My question is: am I still able to use my old RAM, since there are 4 slots? That would give 5GB in total. Is it possible? Thank you for your time. =)

    Read the article

  • Training a spam filter based on Mailman moderator's actions?

    - by mc0e
    I'm planning a Mailman server and looking for a good way to let list moderators train a spam filter (likely either SpamAssassin or DSPAM). Has anyone come up with a good way to run training based on list moderators' decisions? Currently I don't have any better strategy than asking list moderators to forward spams one by one to a training address, which seems laborious and likely to be applied inconsistently. Any ideas? I am aware of https://bugs.launchpad.net/mailman/+bug/558292 . I'm hoping someone has a better approach.
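
    For the training-address fallback, the usual wiring is a mail alias piped into the Bayes learner; a sketch assuming SpamAssassin and an aliases(5)-style MTA (the alias name and script path are placeholders):

        # /etc/aliases
        spam-train: "|/usr/local/bin/train-spam.sh"

        # /usr/local/bin/train-spam.sh
        #!/bin/sh
        # feed one forwarded message from stdin into the Bayes database
        exec sa-learn --spam --single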

    Read the article

  • Installed Percona MySQL on cPanel but getting an error

    - by user1227914
    I installed Percona MySQL on my fresh cPanel server (no databases yet) according to: http://www.ecommy.com/linux/install-...el-environment Everything seemed to be OK and the server starts fine, except that some commands return this error:

        root@server [/var/lib/mysql]# mysql -A -sN information_schema -e "select * from user_statistics;"
        mysql: unknown variable 'innodb_file_per_table=1'
        root@server [/var/lib/mysql]# mysql -A
        mysql: unknown variable 'innodb_file_per_table=1'

    In my /etc/my.cnf I have:

        [mysql]
        innodb_file_per_table=1
        userstat_running=1

    I am planning on using InnoDB for the databases. Does anyone know what the problem is? Or, even better, how to fix it? I installed Percona 5.5 with yum on CentOS.
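
    As context for the error: the mysql command-line client reads the [mysql] group of my.cnf and rejects any server-only variable it finds there, and innodb_file_per_table is a server setting. A sketch of the likely fix, moving both lines under [mysqld] (whether userstat_running is the right variable name for this Percona release is an assumption to verify):

        [mysqld]
        innodb_file_per_table=1
        userstat_running=1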

    Read the article

  • Ubuntu "No space left on device" for /home, df shows 100% full, ds shows much, much less

    - by Jon Cram
    On an Ubuntu 12.04 server, normal users can no longer create or add to files in /home, encountering a "No space left on device" error. The /home directory has a capacity of 1.7 terabytes and, as far as I can tell, is nowhere near full in terms of actual data stored or inodes used. df -h shows:

        Filesystem      Size  Used Avail Use% Mounted on
        /dev/md2        1.0T   18G  955G   2% /
        udev            7.7G  4.0K  7.7G   1% /dev
        tmpfs           3.1G  320K  3.1G   1% /run
        none            5.0M     0  5.0M   0% /run/lock
        none            7.7G     0  7.7G   0% /run/shm
        cgroup          7.7G     0  7.7G   0% /sys/fs/cgroup
        /dev/md3        1.7T  1.7T     0 100% /home
        /dev/md1        496M   45M  426M  10% /boot

    /home indeed looks rather full, but du -hs /home suggests otherwise:

        1.4G    /home

    There appears to be no inode issue - df -i:

        Filesystem       Inodes  IUsed     IFree IUse% Mounted on
        /dev/md2       67108864  75334  67033530    1% /
        udev            2013497    527   2012970    1% /dev
        tmpfs           2015816    440   2015376    1% /run
        none            2015816      2   2015814    1% /run/lock
        none            2015816      1   2015815    1% /run/shm
        cgroup          2015816      9   2015807    1% /sys/fs/cgroup
        /dev/md3      113909760 105981 113803779    1% /home
        /dev/md1         131072    239    130833    1% /boot

    I recently deleted many gigabytes of application cache and log data from /home; however, this was tens of gigabytes at best and nowhere near the capacity of /home. Update 1:

        du -hs --apparent-size /home
        1.2G    /home
        du -hs /home
        1.4G    /home

    What might be going on here?
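
    A common cause of df and du disagreeing like this is files that were deleted while a process still held them open: the directory entries are gone (so du no longer counts them), but the blocks are not freed until the process closes them (so df still does). A quick check, assuming lsof is installed:

        # list files on /home whose link count is 0, i.e. deleted but still held open
        sudo lsof +L1 /home
        # restarting the daemon(s) shown releases the space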

    Read the article

  • Eclipse: Organising Files

    - by someguy
    I want to import a project that I'm planning to build upon. The problem is that it is very messy, with source files, class files, and libraries all under one directory. How would I organize these files using Eclipse? I know you can change the source folder and output folder, but when I change the source folder, the files I want inside it do not physically move to that folder. The output folder is fine, though. I would also like a separate folder for libraries, but I'm not sure how to go about this. Here's how I would like it:

        src: this folder will contain source files.
        bin: this folder will contain binary (class) files.
        lib: this folder will contain external libraries.
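
    For reference, the layout described above corresponds to a .classpath file along these lines (the jar name is a placeholder); Eclipse only rewrites this metadata, so the existing files still have to be moved into src/ and lib/ by hand or via drag-and-drop in the Package Explorer:

        <?xml version="1.0" encoding="UTF-8"?>
        <classpath>
            <classpathentry kind="src" path="src"/>
            <classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
            <classpathentry kind="lib" path="lib/some-library.jar"/>
            <classpathentry kind="output" path="bin"/>
        </classpath>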

    Read the article

  • Running System Center Configuration Manager on a Domain Controller

    - by Brent D
    We are a smallish educational network (about 70 clients) with a single server running Windows Server 2008 Enterprise, functioning as both domain controller and file server. The educational pricing for Microsoft Forefront Endpoint Protection 2010 is irresistible as a managed anti-malware solution, but it requires System Center Configuration Manager 2007. I know best practice is not to run System Center Configuration Manager on a domain controller, but it's the only server I have to work with. Will installing SCCM on a domain controller cause problems? What conflicts might I need to take into account when planning deployment?

    Read the article

  • How do I stop Linux from trying to mount an Android phone as USB storage?

    - by user1160711
    When I plug my Motorola Triumph into my Fedora 17 Linux box's USB port, I get an endless series of errors on the Linux box as it desperately attempts to mount the phone as a USB drive. Stuff like this:

        Jun 23 10:26:00 zooty kernel: [528926.714884] end_request: critical target error, dev sdg, sector 4
        Jun 23 10:26:00 zooty kernel: [528926.715865] sd 16:0:0:1: [sdg] Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
        Jun 23 10:26:00 zooty kernel: [528926.715869] sd 16:0:0:1: [sdg] Sense Key : Illegal Request [current]
        Jun 23 10:26:00 zooty kernel: [528926.715872] sd 16:0:0:1: [sdg] Add. Sense: Invalid field in cdb
        Jun 23 10:26:00 zooty kernel: [528926.715876] sd 16:0:0:1: [sdg] CDB: Read(10): 28 20 00 00 00 00 00 00 04 00

    If I go ahead and tell the phone to allow Linux to mount the USB storage, the messages stop and I get a mounted drive, but if all I want to do is use the debug bridge, my log on Linux will continue to fill with this junk. Is there some udev magic I can use to make the system ignore this particular device as far as USB storage goes? I just noticed that if I tell the phone to enable USB storage, let Linux recognize the new disk, then tell the phone to disable USB storage again, I get one additional log message about the capacity changing to zero, but the endless spew of messages stops, so I guess one workaround is to enable and disable USB storage right away.
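
    On the udev side, a rule that hides the phone's mass-storage interface from the automounter is the usual shape; a sketch (the model ID is a placeholder to fill in from lsusb; 22b8 is Motorola's USB vendor ID, and Fedora 17's udisks2 honors UDISKS_IGNORE, while older udisks used UDISKS_PRESENTATION_HIDE):

        # /etc/udev/rules.d/99-hide-triumph.rules
        ACTION=="add|change", SUBSYSTEM=="block", ENV{ID_VENDOR_ID}=="22b8", ENV{ID_MODEL_ID}=="xxxx", ENV{UDISKS_IGNORE}="1"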

    Read the article

  • High speed network configuration

    - by Peter M
    Sorry if this seems to be a stupid question; I'm not sure how to phrase what I want to know for a Google search. I will have 2 or 3 devices pumping out data on a 100Base-T port each. The combined data rate of all devices is about 15 MB/s, which exceeds the capacity of a single 100Base-T channel (100 Mbit/s is roughly 12.5 MB/s) but is well within the realm of a 1000Base-T connection. Each device will send a burst of data in the form of an FTP transfer to a common, single host computer, in a sequential manner, i.e.:

        Device A establishes an FTP connection and transfers data
        Device B establishes an FTP connection and transfers data
        Device C establishes an FTP connection and transfers data

    The A&B, B&C, and C&A transfers may overlap in time to some extent. There will be minimal traffic going back from the computer to each device (in general, whatever is needed to support the FTP transfers), and the network will be dedicated to transferring data between these devices and the host computer. Is it possible to use a switch to combine the multiple incoming 100Base-T streams into a single outgoing 1000Base-T stream? If so, what features should I be looking for in a switch? Or would it be better to have three dedicated point-to-point 100Base-T connections between each device and the host computer (thus having at least three physical Ethernet interfaces on that computer)? Note that I can't change the interface on the devices, but I am free to choose the network and host computer configuration. Thanks for your help. Peter

    Read the article

  • MDaemon vs Exchange (2007/2010): which way should we choose?

    - by Deniz
    We are on the verge of a mail server decision. We currently use two mail servers: MDaemon 10 and Exchange 2003. We are planning to move to a single, company- and customer-wide solution. Our main candidates are MDaemon 11 and Exchange 2007 or 2010. We would like to hear other users' experiences with these solutions: server-side experience, user-side experience, TCO, support options, etc. And are there other solutions (maybe MDaemon 11 + Exchange, or anything else) you could suggest?

    Read the article

  • How does CloudFront work?

    - by Dharmik Bhandari
    I'm planning to implement Amazon's CDN (Content Delivery Network), known as CloudFront, in ASP.NET MVC3 with C#. I've googled about it but am a little confused about the few things mentioned below. Is it compulsory to upload all static resources to the CDN network first before we can use them, or can Amazon crawl the site's static resources from a predefined folder or directory on the site? Does Amazon automatically update its copies when we change anything in the static resources, or do we have to upload updated resources to the CDN network every time?

    Read the article

  • Which software to use for RAMDISK on Windows 2008?

    - by Tony_Henrich
    I am building a server machine with lots of RAM, at least 16GB. I am planning to put my frequently read and written data in RAM, so I am looking for software for creating RAM disks. This is for Windows Server 2008 R2 Standard 64-bit. Any recommendations? I would like one where I can flush the disk image to persistent storage on demand, for example when Windows shuts down. (I am aware of all the consequences of data loss when power is lost.)

    Read the article

  • WordPress Installation on Two Servers - Load Balancing

    - by rihatum
    Hi All, I have to install WordPress (one blog, one domain, e.g. mycompany.com/blog) on two servers sharing one database on a third server; the two web servers are behind a load balancer, and the DB is on another machine. We are planning this due to high traffic. I have done standalone WordPress installations on single servers, on Windows 2003 and 2008 with IIS6, IIS7, etc. I am researching how to implement this. What would be the steps to achieve it? While searching, I saw some posts about keeping the wp-content/uploads directory synced at regular intervals. Your help is much appreciated. Thanks for reading.
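
    On the uploads-sync point, a common low-tech approach on Linux nodes is a periodic one-way rsync from the node that receives uploads to the other (the paths and hostname below are placeholders); on Windows/IIS, DFS replication or pointing wp-content/uploads at a shared UNC path plays the same role:

        # cron entry on the receiving node, pushing new uploads every 5 minutes
        */5 * * * * rsync -az --delete /var/www/blog/wp-content/uploads/ web2:/var/www/blog/wp-content/uploads/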

    Read the article

  • Looking at desktop virtualization, but some users need 3D support. Is HP Remote Graphics a viable solution?

    - by Ryan Thompson
    My company is looking at desktop virtualization, planning to move all desktop compute resources into the server room or data center and provide users with thin clients for access. In most cases a simple VNC or Remote Desktop solution is adequate, but some users run visualizations that require 3D capability, something VNC and Remote Desktop cannot support. Rather than making an exception and providing desktop machines for these users, complicating our rollout and future operations, we are considering adding servers with GPUs and using HP's Remote Graphics to provide access from the thin clients. The demo version appears to work acceptably, but there is a bit of a learning curve, it's not clear how well it would work for multiple simultaneous sessions, and it's not clear whether it would be a good solution for non-3D sessions as well. If possible, as with the hardware, we want to deploy a single software solution instead of a mishmash. If anyone has experience managing a large installation of HP Remote Graphics, I would appreciate any feedback you can provide.

    Read the article

  • What is a good partitioning design/scheme for a multi-boot *nix system?

    - by static
    I'm planning to install Debian on my server. I would like to design the partitioning scheme in such a way that I could install one or more other *nix distributions alongside it. After reading many articles, I think this scheme could be a good starting point for multi-boot:

        /grub
        swap
        LVM VG1 (for OS1) -> /boot (LV1), / (LV2), /tmp (LV3), /var, /var/log, /home
        LVM VG2 (for OS2) -> /boot, /, /tmp, /var, /var/log, /home
        ... (other distros)
        LVM VG0 (for data) -> /data (LV1)

    But I'm a little confused now: what should the labels for these partitions be (unique or not), and how should the mount points look (/home for OS1 mounted as /home, and likewise /home for OS2)?

    Read the article

  • Can two Linux installations share the same /home partition?

    - by huahsin68
    I am currently using openSUSE 11.4 and Windows XP on my laptop. I am planning to remove Windows and install Kubuntu instead. Currently, my root (/) and /home partitions are separate in openSUSE. Can I share the /home partition between openSUSE and Kubuntu? How do I configure Kubuntu to use the existing /home partition during installation? BTW, the most recent Kubuntu is using the ext4 file system, whereas my openSUSE is using ext3. Will that be a problem when I install Kubuntu? Are there any other issues I need to take care of?
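
    Mechanically, reusing the existing /home comes down to marking the partition "use as /home, do not format" in the installer's manual partitioning step, or adding an fstab entry afterwards; a sketch with a placeholder device name (keeping each user's UID identical across the two distros is the part that actually needs care):

        # /etc/fstab in the new install -- reuse the old /home, do not format it
        /dev/sda3  /home  ext3  defaults  0  2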

    Read the article

  • Replacing HD in a Mac OS X 10.6.8 server caused all shares to fail

    - by Cheesus
    I'm hoping someone might have a helpful suggestion about this problem. We have two Mac OS X servers available for file sharing (quad Xeons, 2GB RAM, both 10.6.8). No. 1 is an Open Directory master with 50+ user accounts; No. 2 has only two local accounts (/Local/Default) and looks to the OD master for all user accounts (/LDAPv3/10.x.x.20/). Both servers have three internal HDs: the boot volume with only the server OS and minimal apps, a 'DataShare' HD (500GB), and a backup drive (500GB). After upgrading the DataShare HD in server No. 2 from a small internal HD (500GB) to a larger-capacity (2TB) drive, users are unable to connect to shares on server No. 2. Users get the error "There are no shares available or you are not allowed to access them on the server". The process I followed was to use Carbon Copy Cloner to create an exact copy of the original data drive (which keeps all ownership data, UIDs, permissions, and last-edit dates and times). Everything booted up OK, with no indication of any issues (the paths to the sharepoint look good). Notes from troubleshooting: Server 1 is operating perfectly; all users can access shares and authenticate. I've checked that the SACL (Service Access Control List) settings are OK. On Server 2, in the Server Admin app, I can see all the shares listed OK; the paths seem valid, and I can disable/re-enable the shares with no errors. On Server 2, Workgroup Manager lists all the accounts from the OD master in the LDAP directory view; all seems fine from here. Basically everything looks normal, but no file shares on Server 2 can be accessed by regular users.

    Read the article

  • Array on servers which receive several hundred GB of data a day

    - by Matthew
    This is hopefully a simple question. We are currently deploying servers that will serve as data warehouses. I know that with RAID 5 the best practice is 6 disks per RAID 5 array. However, our plan is to use RAID 10 (both for performance and safety). We have a total of 14 disks (16 actually, but two are being used for the OS). Keeping in mind that performance is very much an issue, which is better: several RAID 1s, or one large RAID 10? One large RAID 10 had been our original plan, but I want to see if anyone has opinions I haven't thought of. Please note: this system was designed for RAID 1+0, so losing half of the raw storage capacity is not an issue; sorry I hadn't mentioned that initially. The question is whether to use one large RAID 1+0 containing all 14 disks, or several smaller RAID 1+0s and then stripe across them using LVM. I know the best practice for higher RAID levels is to never use more than 6 disks in an array.
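
    If the several-smaller-arrays option wins out, the usual construction is to stripe across the RAID-1+0 arrays with LVM; a sketch with placeholder device and volume names:

        # two underlying RAID-10 md arrays combined into one striped logical volume
        pvcreate /dev/md4 /dev/md5
        vgcreate vg_dw /dev/md4 /dev/md5
        lvcreate -i 2 -I 256 -l 100%FREE -n lv_data vg_dw
        mkfs.ext4 /dev/vg_dw/lv_data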

    Read the article

  • What PostgreSQL client version should I build against if the server is 8.x?

    - by Ben Voigt
    I'm planning updates to a system that currently runs an 8.x server on Windows, an 8.x client on Windows, and an 8.x client on Linux. Obviously that seems like a bad choice of platform in a mixed environment, but the Linux machine has no persistent writable storage (as an anti-rootkit measure). I'm concerned with compatibility between versions right now: can a Linux PostgreSQL 9.0.x client connect to a Windows 8.x server? The server is using some third-party binary extensions, so upgrading it is a more involved task and will be done later. If combining a 9.0.x client with an 8.x server is discouraged, would the latest 8.x clients still be able to connect if I upgraded the server first? META: What tag is appropriate for backward-compatibility questions?

    Read the article
