Search Results

Search found 20785 results on 832 pages for 'idea'.

  • Fixing partitions and Installing BackTrack

    - by Josh
    My whole problem started when I tried to install BackTrack (3 or 4). BackTrack was trying to install itself over my entire Windows partition (which I had combined into one when I installed Windows 7). So I booted back into Windows 7 on my netbook (an Eee PC 1000HE, btw) and went into Disk Management with the aim of making a partition to install BackTrack on, but came out with a really screwed-up drive.

    I had two partitions when I started: the Windows system partition and my main partition, and they were blue in Disk Management (I think that has something to do with formatting). After I went through the steps to make a 10 GB FAT32 partition for BackTrack, I had about five partitions:

    - one called PE: that I have no idea what it is
    - the Windows system partition
    - my main partition
    - 10 GB of unallocated space
    - two other partitions under 50 MB each that are both unused space

    And they were all converted to simple volumes (green instead of blue). And BackTrack still wants to erase my entire drive.

    Question 1: How do I get it back to the way it was? Question 2: How do I get BackTrack to dual boot on my netbook?
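
    For question 1, a hedged starting point, assuming the stray partitions really are empty: Windows' own diskpart can list and delete them, after which Disk Management can extend the main partition back over the freed space. The partition numbers below are hypothetical; list first and double-check sizes before deleting anything.

        REM run diskpart from an elevated Command Prompt
        diskpart
        DISKPART> list disk
        DISKPART> select disk 0
        DISKPART> list partition        REM identify the stray sub-50 MB and PE: partitions
        DISKPART> select partition 4    REM hypothetical number - verify against the list!
        DISKPART> delete partition
        DISKPART> exit

    For question 2, the usual route is to keep the prepared partition and point BackTrack's installer at it via its manual partitioning option instead of accepting the default whole-disk install.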

  • Inter-VLAN routing issues

    - by DKNUCKLES
    I've been brought in to help administer a network and I've run into an issue - I'm not sure why this one is beyond me, but I figure an extra set of eyes on the problem may help resolve it. I have an HP MSM720 controller, and at the moment I'm trying to set up a basic hotspot with access points. For the time being I'm just looking to have people authenticate with a PSK and access the internet and other resources (namely printers) on other VLANs.

    The user authenticates and the DHCP server on the controller gives them a 192.168.1.0/24 address. They are able to successfully browse the internet and ping machines on other networks; however, they are unable to print to network printers that sit on the same LANs as the very computers that wireless clients can ping. The (extremely simplified) topology is as follows: computers on the wireless 192.168.1.1 network are able to ping computers on the 192.168.0.0 network, however they cannot ping or print to the printers on the same network.

    I'm baffled and I have no idea why this is the case. Can anyone shed some light on this for me? Can someone spot the error in my configuration?

    EDIT: It should be noted that for whatever reason other computers on the 10.0.100.0/24 network cannot even ping the gateway of the wireless access network (192.168.1.1) - I'm not sure if this is relevant. These are the VLANs listed on the controller.
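
    One way to narrow this down, as a sketch with placeholder addresses: from a wireless client, ping and trace both a reachable PC and an unreachable printer on the printer VLAN, and compare where the paths diverge.

        # from a wireless client (192.168.1.x); addresses are hypothetical
        ping 192.168.0.20           # a PC on the printer VLAN - works per the question
        ping 192.168.0.30           # the printer itself - fails
        traceroute 192.168.0.30     # tracert on Windows; see which hop drops the packets

    A printer with no default gateway configured (or the wrong one) will answer hosts on its own subnet but drop replies destined for 192.168.1.x, which would produce exactly this symptom even though neighbouring PCs respond fine.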

  • Basic IP address structure

    - by dannymcc
    We currently have a few servers, around 30-40 workstations and 16 phones. Each device has a static IP address. As an example, the standard settings for a new workstation are:

    IP: 192.168.1.XXX
    Subnet: 255.255.255.0
    Gateway: 192.168.1.99
    DNS: 192.168.1.50

    As I am slowly exploring new server OSes and virtualisation etc., I am getting close to wanting a wider range of IP addresses. What I would like to do is separate the devices by IP as follows:

    Servers      192.168.1.XXX
    Workstations 192.168.2.XXX
    Printers     192.168.3.XXX
    Phones       192.168.4.XXX
    VMs          192.168.5.XXX

    Is this a bad idea, or is this a common way of doing things? My biggest concern is the phones and subnet masks. The phones are managed by our provider, although I have access to the server that runs them. Would I need to change the subnet mask to 255.255.0.0 on all devices? Or only on those that change? For example, the phones don't need to connect to any devices other than other phones and the phone server. So what if I left the phones on 192.168.1.XXX with a subnet mask of 255.255.255.0, and then moved everything I have complete ownership/control of to 192.168.X.XXX with a new subnet mask of 255.255.0.0. Would that work?
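
    A quick way to sanity-check the mask question, as a sketch (ipcalc is packaged in most distros): with 255.255.0.0 every 192.168.x.y host sits on one flat network, while a device left on 255.255.255.0 considers anything outside its own /24 to be behind the gateway.

        ipcalc 192.168.2.10/16    # network 192.168.0.0/16 - same net as 192.168.1.50
        ipcalc 192.168.2.10/24    # network 192.168.2.0/24 - a different net from 192.168.1.0/24

    So the mixed setup described would mostly work - the /24 phones can still reach a phone server that stays in 192.168.1.x - but traffic from a /16 host to a /24 phone comes back via the phones' gateway, which has to exist and route correctly. Keeping one consistent scheme (either a single flat /16, or proper routed /24s) is the less fragile choice.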

  • Has anyone used the sharedband connection bonding product?

    - by John Rennie
    See http://www.sharedband.com/ for details on the product. Obviously Sharedband aren't too keen on giving away their technical secrets, but I would guess that it bonds the connections at the IP layer, i.e. their routers send the IP packets to the Sharedband routers over all available lines, and the Sharedband routers handle all the virtual circuitry and provide the NATing to whatever IP address(es) they've assigned you.

    It looks like a clever idea, and a good way to provide some resilience over ADSL links. You can even use ADSL links from different ISPs and Sharedband will still bond them for you. But I find myself wondering how well it really works, and whether it's worth it. The Draytek routers can already load balance (though not bond) up to four ADSL lines, so the Sharedband product really only offers an advantage if you're hosting servers, i.e. you can have one IP address to accept incoming connections through all your (working) ADSL lines.

    But should you really try to host servers using ADSL lines, especially since ADSL upload performance isn't stellar? Wouldn't it be better to use a hosted server, or maybe pay up for a leased line with an SLA? So I'm asking: if anyone is using Sharedband, what do you think of it? JR

  • QT Creator 64-bit Snow Leopard

    - by quadelirus
    I have a bunch of libraries that I need to link against that I installed via MacPorts. They are 64-bit libraries. I'm working on an application written with Qt Creator, and the .pro is set up. I downloaded the Qt SDK for Mac OS X, but it is 32-bit, and so the compiled code won't link against the 64-bit binaries that I got from MacPorts. OK. So I downloaded the Qt SDK source and built from source using -arch x86_64. Now I have a 64-bit version of the SDK (I think), but it didn't build a Qt Creator app. So I need to know one of four things:

    1.) I'm guessing that a simple make command will convince the Qt SDK to build the Creator for me. If this is true, then what is the command (make creator?). Barring that, I need to know:

    2.) The easiest way to get MacPorts to redownload the libraries that I installed, as 32-bit versions (I keep seeing a "+universal" mentioned, but I haven't seen it on a command line, and simply calling ports +universal install XYZ doesn't seem to work - perhaps I need to uninstall and reinstall the package? see the sketch below). Also, is this a stupid idea? Or:

    3.) Someone who actually has a prebuilt 64-bit Qt SDK installer, so I don't have to mess with this. It is ridiculous that Qt doesn't already have this available, in my opinion - SL has been out since, what, last August?

    4.) (and this would take the cake) I don't understand why I can't simply put a "compile-for-64-bit stupid" command directly into the Qt pro file and have it build. There isn't really a reason why a compiler compiled in 32 bits couldn't compile to 64 bits, is there? Thanks.
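
    On point 2, a sketch of the usual MacPorts incantation (the port name here is hypothetical): the +universal variant goes after the port name, and an already-installed port can be rebuilt in place with --enforce-variants rather than uninstalling first.

        # rebuild an installed port with the universal (32/64-bit) variant
        sudo port upgrade --enforce-variants somelib +universal

        # or for a fresh install
        sudo port install somelib +universal

    On point 4, qmake does accept an architecture switch per project - adding CONFIG += x86_64 to the .pro file asks Qt 4 on OS X for a 64-bit build - but that only helps once the SDK's own Qt frameworks are built 64-bit too, which is presumably why the source build was needed in the first place.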

  • Network connection to Firebird 2.1 became slow after upgrading to Ubuntu 10.04

    - by lyle
    We've got a setup that we're using for different clients: a program connecting to a Firebird server on a local network. So far we've mostly used 32-bit processors running Ubuntu LTS (recently upgraded to 10.04). Now we've introduced servers running on 64-bit processors, running Ubuntu 10.04 64-bit, and suddenly some queries run slower than they used to.

    In short: running the query locally works fine on both 64-bit and 32-bit servers, but when running the same queries over the network the 64-bit server is suddenly much slower. We did a few checks with both local and remote connections to both 64-bit and 32-bit servers, using identical databases and identical queries, running in FlameRobin. Running the query locally takes a negligible amount of time: 0.008s on the 64-bit server, 0.014s on the 32-bit servers. So the servers themselves are running fine. Running the queries over the network, the 64-bit server suddenly needs up to 0.160s to respond, while the 32-bit server responds in 0.055s. So the older servers are twice as fast over the network, in spite of the newer servers being twice as fast if run locally.

    Apart from that, the setup is identical. All servers are running the same installation of Ubuntu 10.04, the same version of Firebird and so on; the only difference is that some are 64-bit and some 32-bit. Any ideas? I tried to google it, but I couldn't find any complaints that Firebird 64-bit is slower than Firebird 32-bit, except that the Firebird 2.1 change log mentions that there's a new network API which is twice as fast, as soon as the drivers are updated to use it. So I could imagine that the 64-bit driver is still using the old API, but that's a bit of a stretch, I guess. Thanks in advance for any replies! :)
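
    One way to separate the network stack from the engine, as a sketch (paths and file names are placeholders): time the same query over a local (embedded) connection and over a TCP loopback connection on the 64-bit box itself. If loopback is already slow, the remote protocol on that build is implicated rather than the LAN or the client machines.

        # local connection - no network layer involved
        time isql-fb /var/lib/firebird/data/test.fdb -i query.sql

        # TCP loopback on the same machine - exercises only the remote protocol
        time isql-fb localhost:/var/lib/firebird/data/test.fdb -i query.sql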

  • Xen dom0 reports incorrect amount of RAM with dom0_mem set

    - by xen_amnesiac
    I've done a fair bit of searching about this, but have found nothing that answers my question. I have a system with 6GB of RAM which acts as a Xen server. For reference, it runs Ubuntu 12.04. I've set the kernel parameter dom0_mem=min:512M,max:512M in /etc/default/grub as follows:

        GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=min:512M,max:512M"

    I've tried variations of that, with the same result. My question is this: with the above set, the dom0 reports in all applications a RAM amount of 422M. cat /proc/meminfo gives the following:

        $ cat /proc/meminfo
        MemTotal:          432472 kB
        MemFree:            54144 kB
        Buffers:            17640 kB
        Cached:            220104 kB
        SwapCached:         30172 kB
        Active:            136500 kB
        Inactive:          167780 kB
        Active(anon):        6156 kB
        Inactive(anon):     60516 kB
        Active(file):      130344 kB
        Inactive(file):    107264 kB
        Unevictable:           52 kB
        Mlocked:               52 kB
        SwapTotal:        1794044 kB
        SwapFree:         1682012 kB
        Dirty:                  0 kB
        Writeback:              0 kB
        AnonPages:          39572 kB
        Mapped:              8048 kB
        Shmem:                136 kB
        Slab:               44324 kB
        SReclaimable:       22012 kB
        SUnreclaim:         22312 kB
        KernelStack:         1280 kB
        PageTables:          3840 kB
        NFS_Unstable:           0 kB
        Bounce:                 0 kB
        WritebackTmp:           0 kB
        CommitLimit:      2010280 kB
        Committed_AS:      329192 kB
        VmallocTotal:  34359738367 kB
        VmallocUsed:       313988 kB
        VmallocChunk:  34359417340 kB
        HardwareCorrupted:      0 kB
        AnonHugePages:          0 kB
        HugePages_Total:        0
        HugePages_Free:         0
        HugePages_Rsvd:         0
        HugePages_Surp:         0
        Hugepagesize:        2048 kB
        DirectMap4k:       524696 kB
        DirectMap2M:            0 kB

    top, htop, free -m, and byobu's RAM monitor all report the same amount. At first I thought this was because of the onboard graphics borrowing some memory, but I have now switched to a dedicated GPU and it persists. Is this normal behavior, or has something gone amiss? It's just about 100MB of RAM that's "gone", and I have no idea where it went. I understand that it's normal that not all RAM is available for allocation, but does the system really take an amount this high relative to the amount of RAM available?
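
    For comparison, it can help to ask the hypervisor and the kernel separately what dom0 was given (a sketch; exact commands depend on the toolstack in use):

        sudo xm list               # Domain-0 memory column - what Xen allocated (xl list on newer toolstacks)
        dmesg | grep -i reserved   # what the kernel set aside at boot
        grep MemTotal /proc/meminfo

    Some gap is expected: MemTotal counts only pages the kernel can hand out, so the kernel's own code and data, per-page bookkeeping for the whole address range, and anything reserved at boot never appear in it. Whether all ~90MB is accounted for that way is the part worth checking against the dmesg output.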

  • Rebooting access point via SSH with pexpect... hangs. Any ideas?

    - by MiniQuark
    When I want to reboot my D-Link DWL-3200-AP access point from my bash shell, I connect to the AP using ssh and I just type reboot in the CLI interface. After about 30 seconds, the AP is rebooted:

        # ssh [email protected]
        [email protected]'s password: ********
        Welcome to Wireless SSH Console!! ['help' or '?' to see commands]
        Wireless Driver Rev 4.0.0.167
        D-Link Access Point
        wlan1 -> reboot

    Sounds great? Well, unfortunately the ssh client process never exits, for some reason (maybe the AP kills the ssh server a bit too fast, I don't know). My ssh client process is completely blocked (even if I wait for several minutes, nothing happens). I always have to wait for the AP to reboot, then open another shell, find the ssh client process ID (using ps aux | grep ssh), then kill the ssh process using kill <pid>. That's quite annoying.

    So I decided to write a python script to reboot the AP. The script connects to the AP's CLI interface via ssh, using python-pexpect, and it tries to launch the "reboot" command. Here's what the script looks like:

        #!/usr/bin/python
        # usage: python reboot_ap.py {host} {user} {password}
        import pexpect
        import sys
        import time

        command = "ssh %(user)s@%(host)s" % {"user": sys.argv[2], "host": sys.argv[1]}
        session = pexpect.spawn(command, timeout=30)  # start ssh process
        response = session.expect(r"password:")       # wait for password prompt
        session.sendline(sys.argv[3])                 # send password
        session.expect(" -> ")                        # wait for D-Link CLI prompt
        session.sendline("reboot")                    # send the reboot command
        time.sleep(60)                                # make sure the reboot has time to actually take place
        session.close(force=True)                     # kill the ssh process

    The script connects properly to the AP (I tried running some commands other than reboot, and they work fine), it sends the reboot command, waits for one minute, then kills the ssh process. The problem is: this time, the AP never reboots! I have no idea why. Any solution, anyone?

  • yum error when installing memcached

    - by Jack
    Hi, I'm trying to install memcached with "yum install memcached" and I'm getting all these errors, which I have no idea how to solve:

        Setting up Install Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package memcached.x86_64 0:1.4.5-1.el5.rf set to be updated
        --> Processing Dependency: perl(AnyEvent) for package: memcached
        --> Processing Dependency: perl(AnyEvent::Socket) for package: memcached
        --> Processing Dependency: perl(AnyEvent::Handle) for package: memcached
        --> Processing Dependency: perl(YAML) for package: memcached
        --> Processing Dependency: perl(Term::ReadKey) for package: memcached
        --> Processing Dependency: libevent-1.1a.so.1()(64bit) for package: memcached
        --> Running transaction check
        ---> Package compat-libevent-11a.x86_64 0:3.2.1-1.el5.rf set to be updated
        ---> Package memcached.x86_64 0:1.4.5-1.el5.rf set to be updated
        --> Processing Dependency: perl(AnyEvent) for package: memcached
        --> Processing Dependency: perl(AnyEvent::Socket) for package: memcached
        --> Processing Dependency: perl(AnyEvent::Handle) for package: memcached
        --> Processing Dependency: perl(YAML) for package: memcached
        --> Processing Dependency: perl(Term::ReadKey) for package: memcached
        --> Finished Dependency Resolution
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(AnyEvent::Socket) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(AnyEvent) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(AnyEvent::Handle) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(YAML) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        memcached-1.4.5-1.el5.rf.x86_64 from rpmforge has depsolving problems
          --> Missing Dependency: perl(Term::ReadKey) is needed by package memcached-1.4.5-1.el5.rf.x86_64 (rpmforge)
        Packages skipped because of dependency problems:
            compat-libevent-11a-3.2.1-1.el5.rf.x86_64 from rpmforge
            memcached-1.4.5-1.el5.rf.x86_64 from rpmforge

    The perl modules that it's complaining about are already installed. Any ideas?
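
    Worth checking whether the perl modules are visible to rpm rather than installed via CPAN - yum resolves capabilities like perl(YAML) from rpm's provides database, so modules built by hand don't count. A sketch (the package names below are the usual rpmforge/EPEL ones and may vary by repo):

        # does any installed rpm claim to provide the missing capabilities?
        rpm -q --whatprovides 'perl(YAML)' 'perl(AnyEvent)' 'perl(Term::ReadKey)'

        # if not, install the packaged versions alongside the CPAN ones
        yum install perl-YAML perl-AnyEvent perl-TermReadKey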

  • How to make devices on a DD-WRT router (configured as a repeater) accessible on the LAN? (i.e. integrate DHCP for both routers)

    - by Annonomus Penguin
    I have a D-Link DIR-600-A1 router running DD-WRT (using the 601's firmware: except for the model number, they are near identical). It has an Atheros chip, so there is no "repeater" option. You can bypass this by setting the main radio as a client to the main router, and adding a virtual radio configured as an AP. You can then set up the credentials for connecting to the main router and allowing devices to connect to the repeater/router.

    I have a few devices on my network:

    - Ethernet computers
    - Server with Samba running
    - WiFi devices connected to the main router

    I then wanted to add a repeater. I have a couple of other things on the repeater:

    - WiFi computer
    - Other WiFi devices

    Anyway, I wanted to connect my WiFi computer to the share on my server via Samba. However, for some reason, my router treats the main router as WAN, not as another device. I've tried disabling the SPI firewall; however, that doesn't work. I've tried pinging my WiFi computer from my server, without success; however, I can ping my server from my WiFi computer. AFAIK, they are on the same subnet, just using different IPs: the main one uses 192.168.0.x and the repeater uses 192.168.1.x (starting at 100 for some reason). It seems as though I need to configure my router(s) to work together for DHCP. I noticed there was a "DHCP forwarder" option, but I have no idea what that would do.

    A quick note: for some reason (that's beyond me) my ISP disabled the capability to bridge a WiFi to ethernet connection with the router they provide (something about PPPoE or similar...). The service rep I talked to when I was having issues after I changed ISPs said that, but they couldn't explain exactly what they were "blocking."

    How can I get DD-WRT to not treat the client connection as WAN, and get the main router to recognize the devices connected to the repeater?
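
    A hedged observation, given that the repeater is routing (client mode) rather than bridging: the main router also needs a route back to the repeater's subnet, otherwise replies from 192.168.0.x hosts head for the main router's WAN instead of the repeater. On anything that accepts static routes it sketches to this (addresses are assumptions):

        # on the main router: send 192.168.1.0/24 via the repeater's address on the main LAN
        ip route add 192.168.1.0/24 via 192.168.0.2

    The one-way ping in the question (WiFi computer reaches the server, server can't reach the WiFi computer) is also the classic signature of NAT on the repeater's client interface; turning that NAT off once the static route exists, or using a firmware build whose chipset supports a true client bridge, are the usual ways out.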

  • Can't find standalone Chrome Gmail client that I know exists

    - by Carson
    I'm on Windows. A couple of years ago, when I switched from Outlook to Gmail (Google Apps), Google provided this awesome little standalone Gmail client that was just a single-purpose Chrome install. It launched like a normal application and stayed updated when I updated Chrome. It was Chrome in a separate application that launched only Gmail, stayed logged in really well, and "felt" like a Gmail mail client, with the Gmail interface. It had its own little red envelope icon; it was a Windows app. (I remember there was no Mac equivalent.) I found it while looking through the "this is how you get your company to switch to Gmail" documentation that Google provided.

    I just repaved my box and now I'm looking for this thing again, and I had no idea it would be impossible to find. I've spent literally 2 hours looking, searching, googling, etc. I'm losing my mind. Anyone know how I can get my hands on this? I used it all day every day for 2 years, so I know it exists :), but I can not find it. Any assistance would be gratefully received.
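
    This sounds like Chrome's application-shortcut mode, which any normal Chrome install can recreate even if the packaged "Gmail app" installer itself has disappeared. A sketch (the path is the default per-user install location and may differ on your machine):

        REM shortcut target on Windows - opens Gmail in its own chromeless window
        "%LOCALAPPDATA%\Google\Chrome\Application\chrome.exe" --app=https://mail.google.com

    The same thing could be produced from inside Chrome of that era via the wrench menu > Tools > Create application shortcuts while Gmail is open, which also gives the shortcut its own icon.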

  • PHP running too slow, always showing "504 Gateway Time-out"

    - by komase
    PHP is running too slowly, always showing "504 Gateway Time-out". My server spec:

    - Dual-core Atom 330 CPU
    - 2GB RAM
    - nginx with PHP in FastCGI
    - eAccelerator
    - CPU 74.3% idle
    - RAM used: 350MB of 2GB

    I have lots of sites on my server, with cron jobs running every minute, all the time - in some minutes, two or three cron jobs run at the same time. All my sites' cron jobs are heavy; a job usually runs for more than one minute. My nginx.conf became so big that nginx refused to start because of the many sites in it; that was solved by increasing server_names_hash_max_size. I'm planning to add more sites to the server.

    Now, opening my website always shows a 504 Gateway Time-out. I have tried many eAccelerator and PHP settings, but the 504 Gateway Time-out still happens. It disappears when cron is disabled. I have no idea: is this because there is not enough processor power? And what should I do - upgrade my processor?

    Added - this is top for my CPU just now:

        Cpu(s): 17.5%us, 3.8%sy, 0.1%ni, 71.6%id, 6.9%wa, 0.1%hi, 0.1%si, 0.0%st
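
    A 504 from nginx means the FastCGI upstream didn't answer within nginx's timeout, so there are two levers: find what the crons are contending for (the 6.9%wa in that top line points at disk wait rather than raw CPU, which is 71% idle), and meanwhile raise the timeout so pages survive the busy minutes. A minimal sketch for the nginx side, assuming a standard fastcgi location block:

        # inside the location ~ \.php$ { ... } block
        fastcgi_read_timeout 300;    # default is 60s; raise while the real bottleneck is found
        fastcgi_connect_timeout 10;

    Staggering the cron start times (e.g. per-site minute offsets) so the heavy jobs stop piling up in the same minute is likely a cheaper fix than a new processor.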

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on Gigenet cloud); the database is ~10 GB (~9 million posts, ~60 queries per second). Lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs:

    - Debian Lenny 64-bit
    - ~12Ghz (6x2GHz) CPU, 7520gb RAM, 160gb disk
    - Kernel: 2.6.32-4-amd64
    - mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))

    Other software:

    - vBulletin 3.8.4
    - memcached 1.2.2
    - PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
    - lighttpd/1.4.28 (ssl) - a light and fast webserver

    PHP and vBulletin are configured to use memcached.

    MySQL settings:

        [mysqld]
        key_buffer          = 128M
        max_allowed_packet  = 16M
        thread_cache_size   = 8
        myisam-recover      = BACKUP
        max_connections     = 1024
        query_cache_limit   = 2M
        query_cache_size    = 128M
        expire_logs_days    = 10
        max_binlog_size     = 100M
        key_buffer_size     = 128M
        join_buffer_size    = 8M
        tmp_table_size      = 16M
        max_heap_table_size = 16M
        table_cache         = 96

    Other: from the cloud's IO chart, we're averaging 100mb/s read.

        > vmstat
        procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
         r  b   swpd   free   buff   cache  si   so    bi    bo   in   cs us sy id wa
         9  0  73140  36336   8968 1859160   0    0    42    15    3    2  6  1 89  5

        > /etc/init.d/mysql status
        Threads: 49  Questions: 252139  Slow queries: 164  Opens: 53573  Flush tables: 1  Open tables: 337  Queries per second avg: 61.302
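
    Before reaching for replication, it may be worth finding what is actually grinding: 164 slow queries and an all-MyISAM-style config on a 10 GB database usually mean table scans, and table_cache = 96 looks tiny next to 53573 Opens. A sketch of the usual first look (MySQL 5.1 option names; paths are typical Debian ones):

        # /etc/mysql/my.cnf - log anything slower than one second
        log_slow_queries = /var/log/mysql/mysql-slow.log
        long_query_time  = 1

        # then watch the disk and the status counters
        iostat -x 5                          # from the sysstat package
        mysqladmin extended-status -r -i 10

    If the slow log keeps naming the same handful of vBulletin queries, moving those hot tables to InnoDB with a properly sized buffer pool (and raising table_cache) tends to help more than a replica would, since replication doesn't reduce the write load on the master anyway.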

  • Why does my computer crash randomly?

    - by Donavon Decker
    The other day I went out to my van to get my tower, and when I opened the trunk it fell out. I brought it into the house and opened it up, and everything looked OK. When I started it up, it would crash about 1-3 minutes afterwards. It did this over and over until I reseated the cooler. Everything seemed normal again until, after about 10 minutes of gameplay (any game), it would crash. I reseated my GPU and reinstalled the drivers; however, I still get the same error.

    A while back, I'd check my Windows rating periodically, and all of the scores were in the 6.0-6.9 range except for my hard disk (it's always been like that [not related]). Today I went in and looked, and my processor and memory were rated 5.4. I reseated my CPU and my memory, refreshed the Windows rating, and then my processor and memory went from 5.4 to 5.1. A few minutes ago I reseated them once again, and now it's back to 5.4.

    Note: not sure if this is relevant to the issue, but I updated my BIOS earlier today.

    I honestly have no idea what the issue is, but I'm getting aggravated at the problem. Here are some images which contain my specifications:

    - i1271.photobucket.com/albums/jj623/donxdeck/1_zps09f0607c.jpg
    - i1271.photobucket.com/albums/jj623/donxdeck/4_zps381cd00a.jpg
    - i1271.photobucket.com/albums/jj623/donxdeck/3_zps54bba720.jpg
    - i1271.photobucket.com/albums/jj623/donxdeck/2_zps945d3d72.jpg

    Thanks for the help.

  • Ping: sendmsg: operation not permitted error after installing iptables on Arch GNU/Linux

    - by estol
    Yesterday I got a new computer as my home server, an HP ProLiant MicroServer. I installed Arch Linux on it, with kernel version 3.2.12. After installing iptables (1.4.12.2 - the current version, afaik), changing the net.ipv4.ip_forward key to 1, and enabling forwarding in the iptables configuration file (and rebooting), the system cannot use any of its network interfaces. Ping fails with:

        ping: sendmsg: operation not permitted

    If I remove iptables completely, networking is OK, but I need to share the internet connection to the local network.

    - eth0 - WAN NIC integrated on the motherboard (no idea of vendor, probably HP)
    - eth1 - LAN NIC in a PCI Express slot (Intel Gigabit CT Desktop, http://www.intel.com/content/www/us/en/network-adapters/gigabit-network-adapters/gigabit-ct-desktop-adapter.html)

    Since it works without iptables (the server can access the internet, and I can log in with ssh from the internal network), I assume it has something to do with iptables. I do not have much experience with iptables, so I used these as references (separate from each other, of course...):

    - wiki.archlinux.org/index.php/Simple_stateful_firewall#Setting_up_a_NAT_gateway
    - revsys.com/writings/quicktips/nat.html
    - howtoforge.com/nat_iptables

    On my previous server, I used the revsys guide to set up NAT, and it worked like a charm. Has anyone experienced anything like this before? What am I doing wrong? Thanks, estol
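
    "operation not permitted" from ping is the local netfilter rejecting the outgoing packet - typically a chain whose policy was left at DROP by the saved ruleset - rather than anything about routing or the NICs. A sketch for confirming, and for a minimal NAT gateway using the interface roles from the question:

        # watch which chain's counters climb while a ping fails
        iptables -L -v -n

        # minimal gateway: eth0 = WAN, eth1 = LAN
        iptables -P OUTPUT ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

    On Arch of that era, whatever lives in /etc/iptables/iptables.rules is restored at boot - and the Arch wiki's stateful-firewall template sets several chain policies to DROP - so that file is the first place to look after a reboot changes behaviour.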

  • SOGo installation on Mail Server

    - by i.h4d35
    We run a normal mail server on cPanel for web-based email. We've just had a request to add calendar, address book and task functions; mobile capabilities (I'm guessing access via a mobile client/app); public folders, etc. On the client side, we have some people using webmail, some using MS Outlook, and some others using Mozilla Thunderbird. Having looked around, I zeroed in on SOGo, Citadel and Kolab as options for this.

    I read through SOGo's official install guide and also checked here and here. However, I see most of the HowTos ask for installation of MySQL/PgSQL, LDAP, Samba etc. While I can manage the installation of Samba (if required), I have no idea if installing LDAP, MySQL etc. is really required. Also, any guidance as to how to install on a regular mail server would be appreciated. Sorry if this sounds vague; if any more information is required, I'll be happy to give it. Thanks in advance.

    Edit: this server has always been managed via cPanel (to install PHP, MySQL, configure DNS etc.), so I am confused about whether I really need LDAP.

  • Trying to attach a database file but can't browse the folder which contains it

    - by Chadworthington
    I am trying to attach a database file (*.mdf, *.ldf) that I placed in the same folder as all my other SQL Server databases. I begin the attach by attempting to browse to the folder which contains the db files as well as all of my active database files. I select "Attach Database" and click the "Add" button to add a database to the list of databases to attach. When I do so, I get this error:

        TITLE: Locate Database Files - BESI-CHAD
        ------------------------------
        D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA
        Cannot access the specified path or file on the server. Verify that you have the necessary security privileges and that the path or file exists.

        If you know that the service account can access a specific file, type in the full path for the file in the File Name control in the Locate dialog box.
        ------------------------------
        BUTTONS:
        OK
        ------------------------------

    The path is correct and, as I mentioned, it contains all of my other database files, so I wouldn't think that permissions should be an issue, but here is what I see for that folder. Any idea why I cannot browse to that folder and attach to the db files that I have placed there?
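
    The dialog's own hint is the clue: SSMS browses with the SQL Server service account's credentials, not the logged-in user's, and the existing databases work because the engine already holds open handles on them. A sketch, assuming the default per-instance service SID for the SQLBESI instance named in the path (the account name and file names are assumptions - check the service's Log On As in services.msc):

        REM grant the instance's service account rights on the new files
        icacls "D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA\mydb.mdf" /grant "NT SERVICE\MSSQL$SQLBESI":F
        icacls "D:\SQLdata\MSSQL10_50.SQLBESI\MSSQL\DATA\mydb_log.ldf" /grant "NT SERVICE\MSSQL$SQLBESI":F

    Typing the full path into the File Name box, as the dialog suggests, also often works even when browsing fails.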

  • Performance of Virtual machines on very low end machines

    - by TheLQ
    I am managing a few cheap servers, as my user base isn't large enough to justify much more powerful ones, and I don't have the money lying around to invest in a server to prepare for a larger user base. So I'm stuck with the old hardware I have. I am toying with the idea of virtualizing all the current OSes with, most likely, VMware vSphere Hypervisor (AKA ESXi) or Xen (ESXi has too strict an HCL, and my hardware is too old). Big reasons for doing so:

    - Ability to upgrade and scale hardware rapidly. This is most likely what I'll be doing as I distribute services, get a bigger server, centralize (electricity bills are horrible), distribute, get a bigger server, etc. Doing this manually by reinstalling the entire OS would be a big pain.
    - Safety from me. I've made many rookie mistakes, like doing lots of risky work on a vital production server. With a VM I can just back up the state, work on my machine, test, and revert if necessary. No worries, and no OS reinstallation.
    - Safety from other factors. As I scale, servers might go down, and a backup VM can instantly be started.
    - Various other reasons.

    However, the limiting factor here is hardware - and I mean very depressing hardware. The current servers run off a Pentium 3 and a Pentium 4, and have 512 MB and 768 MB RAM respectively (the RAM can be upgraded soon, however). Is the virtualization layer small enough to run itself and a Linux OS effectively? Will performance be acceptable (50% CPU overhead for every operation isn't acceptable)? Does it leave enough RAM for the Linux OS? Is this even feasible?

  • How to plug power/reset buttons from case to motherboard leads?

    - by MaxMackie
    I have a motherboard I salvaged from a pre-assembled computer, and I'm now trying to use it in my own custom build. The problem is that this motherboard doesn't have any documentation, because it was never meant to be used by consumers (as far as I know). I need to plug my case's power/reset/hdd-light plugs into the motherboard. I'd usually check the board's documentation to see which leads go to which connector, but I have none. So, as I see it, I have two options:

    1. Find the documentation (I've emailed Gateway customer service, but I'm unsure of how successful I'll be with that).
    2. Simply test the leads one after the other (can this cause damage if plugged into the wrong leads?).

    There might even be a standard for which leads do what, but I'm not sure about that. For reference, my motherboard's SN/MD (?) is: H57M01G1-1.1-8EKS3H. Does anyone have any idea if I can find documentation, or another way to be sure my connections are correct?

  • Reboot fails with "Invalid partition"

    - by Mike Clark
    My laptop can't reboot. Any time something restarts the laptop (e.g. to apply Windows updates, or Start Menu > Restart, etc.), the computer sits at a black screen with the message "Invalid partition" displayed in console text. When this happens, I power off the computer, then power it back on, and it boots up fine.

    OK, now the history behind this: this laptop is a new Dell. The day I got it, I used GParted to reclaim 30 GB of disk space that had been allocated to a "recovery partition" in the middle of the laptop's primary drive. (I have DVDs for recovery and I didn't want to waste 30 GB of SSD space on recovery data.) So I used GParted to delete the recovery partition and resize the primary Windows partition to use the new free space. As expected when resizing a boot partition, the computer would no longer boot. I used the Windows Recovery Console to fix the boot process:

        FIXMBR C:
        FIXBOOT C:
        BOOTCFG /rebuild

    This worked fine and the computer boots up fine. But, as mentioned earlier, the laptop still can't reboot. Any idea how to fix this without completely reformatting the disk and reinstalling Windows from scratch? It's Windows 7.
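
    Worth noting: FIXMBR/FIXBOOT/BOOTCFG are the Windows XP Recovery Console commands; the Windows 7 Recovery Environment uses bootrec, and rerunning the repair with the OS's own tools is the usual first move when a resize leaves stale boot code behind (a sketch - boot the install DVD, choose Repair your computer, then Command Prompt):

        bootrec /fixmbr
        bootrec /fixboot
        bootrec /rebuildbcd

    The warm-reboot-only failure suggests something in the boot path still references the deleted recovery partition on restart, so checking that the remaining Windows partition is the one flagged active (in diskpart: select it, then run "active") is a reasonable follow-up.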

  • Server 2003 crashing intermittently, want to transfer function to other DC

    - by user1305332
    I have a Win2003R2 server that is intermittently crashing after some viruses were introduced. I'm sure all the viruses have been cleaned, thanks to Malwarebytes (we were using McAfee - useless). When it crashes, you can't log in (locally or remotely) but can still access files remotely and ping it. After a while even file sharing stops, and I have to kill the power to restart it (no BSOD).

    I need to either fix it (I tried to reinstall SP2, and I tried to reinstall Windows in repair mode, but the repair option was not available when I booted from the installation disks) or move its functionality to another DC (another 2003R2 server). The server that's crashing is old, with SCSI drives, while the new server uses SATA drives and is faster, so it seems like a good idea to just transfer roles and ditch the old box. Replacement SCSI drives look expensive if the current ones ever fail.

    What would I need to do to transfer roles? If I just move the five FSMO roles and copy over the file shares, would the new server have enough to run without the old server? I've never done something like this; I just want some tips. Thanks.
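
    The role transfer itself is quick while both DCs are healthy (a sketch - run on the new DC, using 2003-era ntdsutil syntax; the server name is a placeholder). Beyond the five FSMO roles, the new box should also be made a global catalog, and DNS/DHCP need a new home before the old server is demoted with dcpromo.

        REM see where the roles currently live
        netdom query fsmo

        REM transfer all five roles interactively
        ntdsutil
          roles
          connections
          connect to server NEWDC
          quit
          transfer schema master
          transfer domain naming master
          transfer infrastructure master
          transfer PDC
          transfer RID master
          quit
          quit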

  • Munin 2 data not showing up on graph

    - by letronje
    I have a fresh installation of Munin 2.0.1 on Ubuntu 12.04, and the first time I tried to view graphs, it showed them properly (after installation, I had to follow http://munin-monitoring.org/wiki/CgiHowto2 to set it up). Since then the graphs show up, but with just one data point (a single vertical line), as if no data has been collected after the first time I tried it.

    In Munin 1.4 there was munin-cron, which ran every 5 minutes, and I saw new data being plotted in the graph at least every 5 minutes. If there is no cron job in v2, how does data collection work with Munin 2? Is the data collected when the graphs are requested?

    The file timestamps in /var/lib/munin have not changed since the first time I tried the graphs. But I do see the munin-node process running (restarted it several times). I also see no errors in the munin-node log files or the apache2 log files. Any idea what could be wrong? Screenshot: http://i.imgur.com/uzuAK.png

    Also, is there a way to pre-create graphs instead of doing it dynamically, on the fly?
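
    For what it's worth, data collection in Munin 2 is still cron-driven - the master package ships /etc/cron.d/munin, which runs munin-cron every five minutes; only graph/HTML generation moved to CGI in the CgiHowto2 setup. Unchanging timestamps under /var/lib/munin therefore usually point at that cron file being absent or commented out. Pre-creating graphs is a strategy switch in munin.conf (a sketch):

        # is the collector actually scheduled?
        cat /etc/cron.d/munin      # should run munin-cron as the munin user every 5 minutes

        # /etc/munin/munin.conf - render graphs and HTML from cron instead of on the fly
        graph_strategy cron
        html_strategy cron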

  • Can't connect two PCs to a Network Switch at the same time (Windows 7)

    - by puk
    I have two computers connected to a network switch, and every once in a while one of the computers will lose its internet connection. It's almost always the same computer every time. However, if I play around with the control panel, I can switch it so that now the other computer is not connected. Restarting either of the computers does not help.

    In Windows, the world's-greatest-troubleshooter tells me that a network cable is unplugged and that I should try plugging it in... Disabling and re-enabling my NIC does not fix this problem; neither does swapping cables around. When rebooting, the BIOS complains that the ethernet cable is not plugged in. If it's in any way important, my setup at the office is like so: Modem - Router - Network Switch 1 - Network Switch 2.

    I have tried turning off the energy-saving option for my NIC, and I tried manually setting the link speed to 100Mbps Full Duplex, without any luck. Also, I have a Realtek PCIe GBE Family controller in both computers. Does anyone have any idea why this is happening every 5-10 days?

    EDIT: I have also tried using a completely different network switch, and the problem persists as before.

  • Creating mdraid device on top of other existing mdraid devices

    - by Dmitriusan
    I'm considering creating something like a "hierarchical RAID" and wondering whether it is possible using pure mdraid. Moreover, I'm going to boot from this device. I'm using Ubuntu Server 12.04 LTS with the GRUB 2 bootloader.

    The motivation behind doing this: I have 4 x 1TB 7200rpm disks. Two are newer and faster (up to 200MB/sec) and the other two are slower (up to 140MB/sec). I want to create a RAID-0 device from them. When creating such a RAID-0 directly from the 4 hard disks, I get a total speed of up to ~480MB/sec. That is roughly 4*120MB/sec, so RAID-0 works at the speed of the slowest device. My idea is to create a separate RAID-0 md0 device from 500GB partitions of the slower hard disks. Theoretically, this md0 device will have a speed of 2*140 = ~280MB/sec. After that, I'm going to add this md0 device to a RAID-0 with the faster disks, finishing with up to 3*200 = 600MB/sec. The stripe width for this outer RAID will be 2x bigger than for the underlying RAID with the slow disks.

    Questions:

    - Is it possible, or am I missing something?
    - Will it work as expected?
    - Can I boot from such a consolidated RAID device?
    - Any better ideas? Any pitfalls?

    I don't want to use fakeraid for consolidating the slow disks, for multiple reasons (portability, the ability to customize parameters, and so on).

    PS: Speed is needed for a home virtualization server, and just for experience/fun. Reliability is provided via regular automatic backups to a separate device.

    PPS: I also considered using different stripe widths for hard disks with different speeds in a single RAID, but mdraid does not seem to support that.
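
    mdraid is happy to stack, since an md array is just another block device. A sketch of the layout (device names and partition numbers are assumptions):

        # inner stripe over the two slower disks' 500GB partitions
        mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1

        # outer stripe over the two fast disks plus the inner array
        mdadm --create /dev/md1 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/md0

        # persist so both arrays assemble in the right order at boot
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf
        update-initramfs -u

    For booting, keeping a small /boot partition outside the stripes is the low-risk option: GRUB 2 does have mdraid modules, but a plain partition avoids depending on them, and RAID-0 of course means any single disk failure takes the whole filesystem with it.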

  • Acer Aspire One getting extremely hot

    - by ascom
    I have an Acer Aspire One D250-1197. I'd really better type fast before it overheats again... For some reason, I'm having a problem with heat on my netbook only when I run Joli OS (Ubuntu 9.10 LTS?). When I leave it idle, with nothing running (other than the regular Joli OS desktop and a couple of idle terminals), heat slowly builds up to the point where the netbook is burning hot to the touch. I have never had this problem when running Windows 7 Starter (even though it gives me plenty of other headaches). It seems that the fan is spinning, but not fast enough to keep up with the heat buildup.

    Is there something wrong with the fan drivers? The computer doesn't seem to recognize that it is overheating. What can I do to solve this problem (other than shut it off or use Windows)? I'm currently on the wrong side of Earth (I mean, on vacation), so I just need a temporary fix, such as a driver I can install. Also, I have to use Linux, because I have to share out the wired connection in hotels wirelessly to the iPhones.

    EDIT: I'm switching from Joli OS to a more "proper" and up-to-date distribution (Xubuntu 13.04). I'll see if it still has the heat problem and try @nod's cpufreq idea.
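
    A quick way to test the cpufreq idea on the spot, as a sketch (package names are Ubuntu's): if the distribution is leaving the CPU pegged at full clock with the performance governor, forcing ondemand should show an immediate temperature difference.

        sudo apt-get install cpufrequtils lm-sensors
        cpufreq-info                       # current governor and clock speed
        sudo cpufreq-set -r -g ondemand    # -r applies the governor to all related cores
        watch sensors                      # watch whether the temperature levels off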
