Search Results

Search found 13047 results on 522 pages for 'np hard'.


  • Ubuntu 10.10 - PC shuts down before boot, shortly after BIOS loads

    - by clem
    Since upgrading from Karmic to Ubuntu 10.10 I've had problems starting up the PC. I've done a complete wipe (Boot and Nuke) of the hard drive and reinstalled Ubuntu 10.10, but the problem still occurs. There is no dual boot on the PC, just Ubuntu. Here is the problem: each morning, when I turn the PC on after it has been off overnight, it starts up, loads the BIOS, and I get the following message: "Verifying DMI Pool Data... K8 NPT Data Change...Update New Data to DMI!......." Then the computer simply shuts off. However, after I switch it back on six or seven times, it eventually boots without any problem. Also, once it has been up and running for a while, I can shut down and restart the PC first time, without any issues. I have also noticed a problem with the USB mouse: once I finally get the computer booted up, I need to unplug the mouse and plug it back in before it is recognised. I've opened the PC up and checked the connections (cables, cards and memory) and it all seems fine. The main difficulty in troubleshooting this is that I cannot test any suggestion or fix until the next morning, because once the computer is up and running it stays that way, and I don't leave it on overnight, to save energy. So: is this a hardware issue or a boot/software issue? This is a very odd problem and I have googled to no avail. Any suggestions?


  • Wake on Demand for Apache server in OS X 10.8

    - by Gary
    Mac OS X Mountain Lion no longer has a Web Sharing box in the Sharing pane of System Preferences. It is well documented on the web that the Apache server is still there, and that it can be turned on manually from the command line or with a convenient preference pane. That works while the computer is awake. But when my computer goes to sleep, the server stops responding, even though Wake for Network Access is checked in the Energy Saver preference pane. From the discussion of Bonjour, I gather this probably happens because Apache isn't registered with Bonjour. Does that sound likely? If not, please make some suggestions. The connection is via hard-wired Ethernet. If registration with Bonjour is the problem, I'd like to know how to register the service. You gave a nice description of dns-sd, and the command's help says "dns-sd -R Name Type Domain Port [TXT...] (Register a service)", but I don't know what to use for Name or Type, or the format of the domain. I tried some dns-sd -B browses and found nothing I could use as a model, and the server doesn't show up in Bonjour Browser. Any suggestions would be appreciated.
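
    For what it's worth, a minimal sketch of what the registration might look like, assuming the stock Apache on port 80; the name is arbitrary free text, _http._tcp is the standard service type for a web server, and "." means the default (local) domain:

        # Register the local Apache instance with Bonjour.
        # Note: the registration only lasts while this process runs,
        # so for anything permanent it would need to live in a launchd job.
        dns-sd -R "My Web Server" _http._tcp . 80

        # In another terminal, verify the registration is being advertised:
        dns-sd -B _http._tcp

    Whether a registration made this way is enough for Wake on Demand (which relies on the Bonjour Sleep Proxy re-advertising services while the Mac sleeps) is exactly the open question here, so treat this as an experiment rather than a known fix.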


  • How to host multiple FLEX applications in IIS7

    - by Devtron
    Hello, I manually deploy a FLEX application to my web server (IIS 7). There are two virtual directories: 1) Default and 2) myFlexApp1, which is where my working FLEX application resides. I now need to deploy a different FLEX application (let's call it myFlexApp2) to the same web server. When I set up a site for myFlexApp2, IIS complains that its binding uses port 80, which is already used by myFlexApp1. I have tried giving them separate host names in their binding properties, for example myFlexApp1.mydomain.com and myFlexApp2.mydomain.com, but I can never get myFlexApp2 to show from an external browser. I was able to get one or the other to display, but never both. Here is what I need: myFlexApp1.mydomain.com -> myFlexApp1; calendar.mydomain.com -> myFlexApp2; test.mydomain.com -> myFlexApp1, where test.mydomain.com is the default URL. Is this possible? What am I doing wrong? I even tried editing the hosts file in C:\Windows\System32\drivers\etc, but that didn't work either. How can I serve up two FLEX applications on IIS 7? It shouldn't be this hard!
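
    For reference, a sketch of how host-header bindings could be set from the command line with appcmd, assuming each application is configured as its own IIS site named as in the question (the site names are the question's, the exact bindings an assumption):

        rem Give each site its own host header on port 80; they can then share the port.
        %windir%\system32\inetsrv\appcmd set site /site.name:"myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:myflexapp1.mydomain.com']
        %windir%\system32\inetsrv\appcmd set site /site.name:"myFlexApp1" /+bindings.[protocol='http',bindingInformation='*:80:test.mydomain.com']
        %windir%\system32\inetsrv\appcmd set site /site.name:"myFlexApp2" /+bindings.[protocol='http',bindingInformation='*:80:calendar.mydomain.com']

    One likely gotcha: the host names must resolve to the server in public DNS for external browsers to reach it. Editing the hosts file on the server only affects lookups made from the server itself, which may be why myFlexApp2 never showed externally.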


  • Unexpected(?) high 'wasted' memory in memcached

    - by Nanne
    Looking at our memcached stats, I think I have found an issue I was not aware of before. We seem to have a strangely high amount of wasted space. I checked with phpMemcachedAdmin for a change, and found this image staring at me. I was under the impression that the worst-case scenario would be 50% waste, although I am the first to admit I don't know all the details. I have read, amongst others, this page, which is admittedly somewhat old, but so is our version of memcached. I think I understand how the system works, but I have a hard time understanding how we could get to 76% wasted space. The eviction rate phpMemcachedAdmin shows is 2 evictions/s, so there is a real problem here. The primary question is: what can I do to fix this? I could throw more memory at it (there is some extra available, I think); maybe I should fiddle with the slab config (is that even possible in this version?); maybe there are other options? Upgrading the memcached version is not a quickly available option. The secondary question, out of curiosity, is whether a rate of 75% (and rising) wasted space is to be expected, and if so, why. System: this is currently not something I can do anything about; I know the memcached version isn't the newest, but these are the cards I've been dealt. Memcached 1.4.5, Apache 2.2.17, PHP 5.3.5.
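
    To see where the waste actually sits, the raw slab statistics can be pulled straight from the daemon; a minimal sketch, assuming memcached listens on the default port 11211:

        # Per-slab-class statistics: chunk_size, total_chunks, used_chunks, etc.
        # Waste shows up as the gap between item sizes and their class's chunk_size.
        echo -e "stats slabs\nquit" | nc localhost 11211

        # Item counts, ages and evictions per slab class:
        echo -e "stats items\nquit" | nc localhost 11211

    On the 1.4.x line, the slab growth factor is fixed at start-up with the -f flag (default 1.25); a smaller factor gives finer-grained chunk sizes, at the cost of more slab classes, which can reduce this kind of waste if the item sizes cluster badly between classes.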


  • Is git-annex appropriate for my scenario?

    - by Karel Bílek
    I have a git repository with source code I want to put in the open on GitHub. However, I also have gigabytes of data that I don't want in the open or in the repo: the files are big, they are proprietary, they are burdened with copyrights, and so on. Yet they are logically part of the same project, and I would like some control over their history (basically, what git already does). Right now they live in the directory "data" in the repository, which I have git ignore, and I had given up on getting them into git. However, I have read about git-annex and it seems it can do what I want. So I have two questions. Is git-annex appropriate for my scenario? And how exactly should I use it here, meaning which commands should I use, and how? I have tried to read the official documentation, but it talks about use cases I don't care about. I have the data on one computer only, and I don't think I will be moving it soon (it's nice to have the possibility, but that's not why I want to use git-annex). Also, the documentation is pretty hard to read.
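
    A minimal sketch of what the workflow could look like for this scenario, assuming git-annex's default (indirect) mode and the "data" directory from the question (first remove "data" from .gitignore so git-annex can see it):

        # One-time setup inside the existing repository:
        git annex init "work machine"
        git annex add data           # checksums the files, replaces them with symlinks
        git commit -m "Track data files with git-annex"

        # Editing an annexed file later follows an unlock/add cycle:
        git annex unlock data/somefile
        # ... modify the file ...
        git annex add data/somefile
        git commit -m "Update data"

    The point relevant here: git annex add commits only the symlink and checksum, so the gigabytes never enter the normal git history and are not pushed to GitHub; do note, though, that the file names and symlinks themselves would still be public in the repo.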


  • Advice: USB Monitoring Programming

    - by Kashif
    I need advice about USB programming in Linux. I have to design a USB monitoring program that will keep checking the USB ports of a CentOS Linux machine. As soon as a USB stick or external hard disk is connected, the program should send an email to a specific person with details of the device (size, mount point, time). When the device is disconnected, it should again send an email to that person with the same kind of information. Meanwhile, the program should also write logs to syslog/messages, tagged with the program name, for easy tracking. Now, what is the best way to develop this program? I'm new to this field, so I know nothing about it. Should I use Perl, bash scripting, or some other language? I have no idea which approach is right, because this program will have to run all the time to keep watch on the USB ports. I know a few commands, like lsusb and fdisk (to check attached USB devices) and df -h (to get device details), but I don't know how to achieve what I have in mind using them. One more thing: in the future I will also need to adapt this program for Ubuntu and Citrix XenServer, and it should behave the same everywhere.
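
    One common approach, sketched below rather than a finished tool: instead of polling in a loop, let udev fire a script on USB storage add/remove events. The rule file name, script path and recipient address are placeholders, and the rule matches on ENV{ID_BUS} because that value is kept in the udev database and so is still available on remove events:

        # /etc/udev/rules.d/90-usb-watch.rules  (hypothetical file name)
        # ACTION=="add",    SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-mail.sh add %k"
        # ACTION=="remove", SUBSYSTEM=="block", ENV{ID_BUS}=="usb", RUN+="/usr/local/bin/usb-mail.sh remove %k"

        #!/bin/bash
        # /usr/local/bin/usb-mail.sh - mail and syslog USB storage events
        EVENT="$1"
        DEV="/dev/$2"
        SIZE=$(blockdev --getsize64 "$DEV" 2>/dev/null)   # fails harmlessly on remove
        logger -t usb-watch "USB $EVENT: $DEV size=${SIZE:-unknown}"
        echo "USB $EVENT: $DEV at $(date), size: ${SIZE:-unknown} bytes" \
            | mail -s "USB $EVENT on $(hostname)" admin@example.com

    logger writes to syslog/messages as asked; udev exists on CentOS, Ubuntu and XenServer's dom0 alike, which helps the portability requirement.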


  • Windows Media Center showing Jerky Video on PC

    - by Kris Erickson
    I had to repave my Windows 7 x64 box last week due to a hard drive crash. For a while everything ran perfectly, but now all videos in Windows Media Center are jerky: the sound is fine, but they seem to skip a ton of frames all the time. This happens on the local machine, and the same thing happens when I stream to my Xbox. The videos all show fine in VLC and Windows Media Player (though they exhibit the same problem in QuickTime). I guess I must have installed something recently (in the process of getting back all the apps I usually run) that caused this, but for the life of me I can't figure out what. I have updated to the latest video driver (and then rolled back to the standard Windows 7 driver), and I have rolled back all the other drivers I installed (I believe). I have uninstalled all the codec packs (I also run TVersity, so I had the TVersity codec pack installed), and I uninstalled TVersity itself. Nothing seems to help. I have uninstalled Windows Media Center and reinstalled it from Programs and Features. I have basically run out of things to try and am almost thinking about reinstalling Windows again. Any suggestions? Edit: specs of the PC (which I figured were unimportant, since everything used to work perfectly): Intel Core 2 CPU 6600 @ 2.4 GHz, Nvidia 8800 GTS, built-in Realtek audio, 4 GB RAM. Codecs that are failing: all that I have tried, but at least Xvid, mpgv (MPEG-2 video from a camera) and WMV (the only kinds I have ready access to).


  • HP dv6910 laptop no longer recognizes wireless adapter, DVD drive, and one USB port. What can I do about this?

    - by Joan H.
    In the last three weeks, my not-even-three-year-old HP laptop (Vista) just seems to be failing. First, one of the USB ports stopped working. Next, the DVD drive suddenly was not recognized as even existing, so it is now useless. And as of today, I no longer seem to have a wireless adapter: it doesn't show up in Device Manager, same as the DVD drive. I can connect to the internet by Ethernet cable, but not wirelessly. It worked fine last night; now it doesn't even exist. I have tried a 'hard reset' I read about on HP's forum, with no help. I am not sure what else to try. I read about USB wifi dongles in another answer, but I have already maxed out my USB ports, even though I bought a USB hub after the one port failed. Besides, I'd like my laptop to just work the way it's supposed to, instead of being jury-rigged with USB hubs, external DVD drives, wifi dongles and so on. Any ideas?


  • Advice on Computer Specs for overall development/general use machine

    - by Ender
    At the moment I am restricted to a laptop with 512 MB of RAM, a 120 GB HDD and a 1.5 GHz Intel processor for all my development and general browsing needs, and as you can probably tell, using it for anything modern is a painful experience. As a result I've decided to buy myself a new desktop computer, one that will stand the test of time and can be upgraded easily. Rather than build the machine myself, I've decided to go through Dell, as I've had good experiences with them when buying computers for my family. I've had my eye on this one, as it has a good amount of RAM, a decently rated processor, and isn't priced too badly: http://www1.euro.dell.com/uk/en/home/Desktops/inspiron-580/pd.aspx?refid=inspiron-580&s=dhs&cs=ukepp1&~oid=uk~en~20211~inspiron-580_d005827~~ Specs: Intel® Core™ i5 Processor 750 (2.66 GHz, 8 MB); genuine Windows® 7 Home Premium 64-bit (English); display not included; ATI Radeon™ HD 5450 1 GB DDR3 graphics; 6144 MB dual-channel DDR3 memory (3x2048); 1 TB (7200 rpm) SATA hard drive; DVD+/-RW drive (reads/writes CD & DVD) with DVD burn software; 1 year of coverage included with the PC; McAfee® Security Centre, 15-month protection (English). After the pain of using a slow laptop all this time, the main thing I want is speed. I may play a couple of basic games on it, nothing too demanding. Obviously I'll be doing some development on it too, so it will have to handle the latest IDEs and database tools like SQL Server comfortably. Finally, should I ever need to improve it, I'd like to be able to add more RAM and change some of the parts. I wouldn't have thought this would be a problem, but a few people I've spoken to have said the amount of RAM the motherboard can handle isn't that great. Is this true? How long can I expect to use this computer before it's too slow? Thanks in advance for the help.


  • Creating a backup system with FreeNAS

    - by masfenix
    We are currently in the process of opening a new accounting firm in the new year (actually moving from our previous location). I am looking for a cheap or free solution to back up our files (small text files of a couple of KB each). I was impressed with FreeNAS and with Windows Backup, but I found out that Windows Backup only keeps files for a maximum of two years. The work machines will run Windows 8 or Windows 7. There may eventually be many work machines, but we have only one to start with (think of it as just one employee). I have an old Core 2 Duo with 2 GB of RAM that I can convert into a server if need be. I want the syncing to be done over the LAN, since the data is confidential and should never touch the outside world. So ideally I would like the following scenario: a SkyDrive/Dropbox-like service to sync my client files between the work machines and a central server. The server part should keep a history of the files (I don't know how this would be done, since each version of a file has the same name?). This isn't strictly necessary, but I can see it becoming useful. I am not familiar with RAID; does a software RAID solution exist? I will most likely be buying two hard drives.
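
    On the "history of files" point: one way this is commonly handled is rsync with --link-dest, which keeps dated snapshot directories in which unchanged files are hard links to the previous snapshot, so every version of every file is retrievable without storing duplicates. A rough sketch, with all paths and addresses made up:

        #!/bin/bash
        # Nightly versioned backup over the LAN; unchanged files are
        # hard-linked against the previous snapshot, so only changed
        # files consume new space.
        SRC="/clients/"                      # hypothetical share with client files
        HOST="backup@192.168.1.10"           # hypothetical backup server
        TODAY=$(date +%F)

        rsync -a --delete --link-dest="../latest" "$SRC" "$HOST:/backups/$TODAY/"
        ssh "$HOST" "ln -sfn $TODAY /backups/latest"

    Since the files are tiny text files, years of daily snapshots would stay very small this way; Linux software RAID (mdadm) on the two drives would cover the mirroring side.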


  • How to install Windows 7 on a MacBook with HDDs and no optical drive, without rEFIt

    - by user1238528
    I just removed the SuperDrive in my MacBook Pro and replaced it with an SSD, so my laptop now has an SSD and an HDD but no optical drive. I have Lion on the SSD and want to install Windows 7 on the HDD. Unfortunately, Boot Camp will only install Windows from the Windows DVD. I have made a bootable Windows 7 thumb drive, but my MacBook Pro won't boot from it. So my question is: how can I install Windows on the other HDD? I have thought about using Oracle VirtualBox to install it onto that hard drive, but I don't know whether that would let me boot directly into Windows afterwards, and I really don't want to go down the virtualization route. I know I could take out the SSD, put the optical drive back in, run the Windows 7 DVD, then take the optical drive back out and put the SSD back in, but that sounds like a nightmare. Also, I really don't want to use things like rEFIt. Any advice?


  • How do I share a complete XP disk so it can be seen from a Windows 7 system? (To move all files to a new machine)

    - by Ian Ringrose
    This should be easier! (Both computers can see the internet, so I know the network itself is working.) I have a normal home network with a Windows XP machine and a new Windows 7 (64-bit) machine on it. So that I can transfer the files to the new Windows 7 machine, I wish to share the complete disk (and all files) of the Windows XP machine and access them from the Windows 7 machine. Is there a step-by-step set of instructions for doing this anywhere? So far I have: put both computers into the same workgroup; put the Windows 7 machine into work-network mode so it can see the XP machine in the workgroup; and shared the XP disk as read-only. But when I try to access many of the folders on the XP disk, I am told I am not allowed to access them. (I was not asked for any passwords by the Windows 7 machine when I accessed the XP machine; the XP machine just has its default account with no password set.) The XP machine runs XP Home and hence has "simple file sharing" turned on, so it seems that even if I create an admin account (with password) and connect with that account, I still come in as "guest" on the XP machine. Choosing to share the folder I want access to, rather than the top of the disk drive, seems to work, but it is a pain, as I need to share each user's folder under a different share name. If the new computer were not a laptop, I would just plug the hard disk from the old machine into it, but being a laptop, I don't have that option.
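
    If it helps, a sketch of the command-line route (the share name is arbitrary); with Simple File Sharing on XP Home, remote users always come in as Guest, so this only works for folders whose permissions allow Guest/Everyone to read:

        rem On the XP machine: share the whole drive under one name.
        net share cdrive=C:\ /remark:"Whole C: drive for migration"

        rem On the Windows 7 machine: map it and copy from Z:.
        net use Z: \\XPMACHINE\cdrive

    The "not allowed" folders are most likely the user profiles under Documents and Settings, whose NTFS permissions exclude Guest by design; granting Everyone read on those folders on the XP side (or sharing each profile individually, as you found) is what actually unlocks them.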


  • Queries passed to SQL Server are getting corrupted

    - by adrianbanks
    We are experiencing a bizarre error with our application at a customer site, and we have managed to narrow it down to the point where we can replicate the behaviour using just Management Studio and SQL Server. We have two machines, A and B:

        +------------+                +--------------------+
        |    [A]     |                |        [B]         |
        | Management | -------------- | SQL Server 2008 R2 |
        |   Studio   |                |   Enterprise x64   |
        +------------+                +--------------------+

    We are running a SQL script in Management Studio on machine A against the SQL Server instance on machine B. We are not actually executing the script, just parsing it. Most of the time, the parse operation works fine. Occasionally (seemingly at random), it fails with a syntax error. The error message shows the part of the script with the error, which appears as some SQL from the original script that has been truncated and had random characters appended. An example. The original SQL:

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = ST.TABLE_NAME
        WHERE ST.TABLE_TYPE = 'BASE TABLE'
          AND SC.COLUMN_NAME = 'Identity'
          AND ST.TABLE_NAME != 'dtproperties'
        ORDER BY ST.TABLE_NAME

    The SQL that is in error (as reported by SQL Server):

        SELECT DISTINCT ST.TABLE_NAME as TableName
        FROM INFORMATION_SCHEMA.TABLES AS ST
        INNER JOIN INFORMATION_SCHEMA.COLUMNS AS SC ON SC.TABLE_NAME = Sa?

    The example above shows how the query is being corrupted. It doesn't always happen, and it is not always the same bit of SQL that causes the error. Parsing the same script against another SQL Server instance produces no errors, showing that the script itself is fine. It appears that something is corrupting the SQL that the server receives, which leads me to think the problem lies either at the client end or in the transmission of the SQL from client to server. I have a SQL trace from a period where an error occurred, and it shows the SQL had already been corrupted when SQL Server received it. We have been unable to track down any possible cause of this behaviour, and so cannot find a fix. Because the errors occur seemingly at random, it is also very hard to produce reproduction steps for a bug report. Any ideas?


  • Someone from China wants to kill my server's bandwidth??

    - by yes123
    Hi guys. Someone from China, using two different IPs, keeps downloading the same big file from my server. The IPs are 122.89.45.210 and 60.210.7.62. They request and download this file more than 20 times per minute. What can I do to prevent this? (I am on Gentoo with root access.) And why would they do this to a site that has nothing to do with China? Update 1: more IPs: 221.8.60.131, 124.67.47.56, 119.249.179.139, 60.9.0.176. Update 2: the odd thing is they are requesting only one single file. Either they want that file removed (though I don't see why), or they are pretty stupid. Update 3: the situation is getting worse; the IPs are now spreading to other countries too (USA and Korea, if www.geobytes.com/iplocator.htm is right), and they have started requesting a second file. Update 4: after they realized I had removed the file, they seemed to stop attacking me. I will keep monitoring the situation. They started again after a pause of 3-4 minutes, with the same file (lucky me). Hard to say why this is happening.
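
    As a stopgap, the offending addresses can be dropped at the firewall; a minimal sketch with iptables (run as root, addresses taken from the question):

        # Drop all traffic from the offending hosts.
        iptables -A INPUT -s 122.89.45.210 -j DROP
        iptables -A INPUT -s 60.210.7.62 -j DROP

        # List the rules with packet counters to confirm they are matching:
        iptables -L INPUT -n -v

    For a set of IPs that keeps moving, per-IP rate limiting (for example with the iptables "recent" or "connlimit" modules, or at the web-server level) scales better than chasing individual addresses.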


  • Systemd Service Start With Dynamic Port Value From Docker

    - by Sheriffen
    Using CoreOS, Docker and systemd to manage my services, I want to do service discovery properly. Since CoreOS ships etcd (a distributed key-value store), there is a very convenient way to do this: in a unit's ExecStartPost I can simply insert the started service into etcd. My use case needs something like this:

        ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": 5555 }'

    which works like a charm. But this is where my idea popped up. Docker can randomly assign a host port if I just run docker run -p 5555, which is awesome, since I don't have to set the port statically in the *.service file and could run multiple instances on the same host. What if I could take the randomly assigned port and insert it instead of the static 5555? It turns out the docker port command prints the mapping, and with some formatting I can get just the port:

        $(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)

    This works if I run it from bash, like this:

        /usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)' }'

    but from systemd I just can't get it to work. This is the line I'm using:

        ExecStartPost=/usr/bin/etcdctl set /services/myServiceName '{ \"host\": \"%H\", \"port\": '$(echo $(/usr/bin/docker port my-container-name 5555) | cut -d':' -f2)'}'

    Somehow I've got something wrong, but it's hard to debug, since it works when typed into the terminal.
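
    The usual explanation for this symptom: systemd does not run Exec lines through a shell, so $( ... ) command substitution is simply never performed. One workaround that sidesteps the quoting entirely is to move the logic into a small helper script and let systemd call that; a sketch (script path and names are placeholders):

        #!/bin/bash
        # /usr/bin/announce-port.sh - look up the mapped port and publish it.
        # $1 is the hostname, passed in from the unit via the %H specifier.
        PORT=$(/usr/bin/docker port my-container-name 5555 | cut -d: -f2)
        /usr/bin/etcdctl set /services/myServiceName "{ \"host\": \"$1\", \"port\": $PORT }"

        # In the unit file:
        # ExecStartPost=/usr/bin/announce-port.sh %H

    The alternative is ExecStartPost=/bin/bash -c '...', escaping each $ as $$ so systemd passes a literal $( through to bash; that works too, but the nested quoting is exactly what makes it hard to debug.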


  • Ubuntu network card problem.

    - by Steve Greene
    Hello folks, several days ago I installed Ubuntu 9.10 onto my Acer Aspire 3100 laptop, running it alongside Windows Vista as a dual-boot system. Creating the Ubuntu boot CD went fine, and the installation onto my hard drive was flawless. Ubuntu opens and behaves as I would expect, except for one little problem: for reasons unknown to me, Ubuntu is not communicating with my laptop's networking hardware, and I have no internet connectivity, although everything works fine under Windows Vista. In the upper right of the Ubuntu desktop, I click on the network icon and it does not show a wireless connection at all. At home, where I use a dial-up modem, I also see no means of getting online. My modem is an HDAUDIO Soft Data Fax Modem with Smart CP, manufactured by CXT (Conexant Systems Inc., file version 4.0.13.0, driver version 7.58.0.0). I am an advanced computer user, but I am not a programmer. I am looking for a solution that is user-friendly for normal people: something equivalent to a driver that I can easily install or activate so that Ubuntu sees my hardware and gets me connected. Can anyone help me over this hopefully little glitch? My processor is a Mobile AMD Sempron 3500+ at 1.80 GHz, with 1.50 GB RAM and a 32-bit operating system.


  • Grub error 18, gparted not showing anything

    - by Montecristo
    Some weeks ago my PC started having problems: sometimes it would just freeze, not letting me do anything, and I had to turn it off and on, sometimes a couple of times, even at startup. Now it does not start at all; GRUB gives me Error 18. I have read that one solution is to create a bootable partition in the first part of the disk, but GParted does not recognize any partitions: the window that should list my partitions is empty, and sudo fdisk -l outputs nothing. If I type sudo mount /dev/sda and press Tab twice to autocomplete, these devices come up: sda, sda1, sda2, sda5. If I run sudo mount -t ext3 /dev/sda1 disk, I get the following error:

        mount: wrong fs type, bad option, bad superblock on /dev/sda1,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    and dmesg outputs:

        [ 1831.974847] EXT3-fs: unable to read superblock

    Do you know how to solve this issue? I'm not completely sure this is a software problem; should I try a new hard disk?
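
    Given that fdisk sees nothing at all, checking the drive's own health report before attempting any repair seems prudent; a quick sketch, run from a live CD (smartmontools may need installing in the live session first):

        sudo smartctl -H /dev/sda     # one-line overall health verdict
        sudo smartctl -a /dev/sda     # full SMART report: look at reallocated
                                      # and pending sector counts

        # If SMART looks clean, check whether the partition table is even
        # readable; the last two bytes of a valid MBR are "55 aa":
        sudo dd if=/dev/sda bs=512 count=1 | hexdump -C | tail -5

    Freezes that progress to an unreadable partition table fit a dying drive, so copying anything recoverable off it (e.g. with ddrescue) before further experiments would be the safe order of operations.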


  • How to diagnose storage system scaling problems?

    - by Unknown
    We are currently testing the maximum sequential read throughput of a storage system (48 disks in total, behind two HP P2000 arrays) connected to an HP DL580 G7 running RHEL 5 with 128 GB of memory. Initial testing has mainly been done by running dd commands like this, in parallel for each disk:

        dd if=/dev/mapper/mpath1 of=/dev/null bs=1M count=3000

    However, we have been unable to scale the result from one array (maximum throughput of 1.3 GB/s) to two (almost the same total throughput). Each array is connected to a dedicated host bus adapter, so they should not be the bottleneck. The disks are currently in a JBOD configuration, so each disk can be addressed directly. I have two questions: 1) Is running multiple dd commands in parallel really a good way to test maximum read throughput? We have noticed very high SWAPIN % numbers in iotop, which I find hard to explain, because the target is /dev/null. 2) How should we proceed in trying to find the reason for the scaling problem? Do you think the server itself is the bottleneck here, or could there be some Linux parameter that we have overlooked?
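
    On question 1: plain dd reads go through the page cache, which with 128 GB of RAM can distort results badly (and may be related to the odd SWAPIN numbers). Bypassing the cache with direct I/O gives a more honest baseline; a sketch, with the multipath device names assumed:

        # One direct-I/O sequential reader per LUN, page cache bypassed.
        for dev in /dev/mapper/mpath{1..4}; do     # hypothetical LUN names
            dd if="$dev" of=/dev/null bs=1M count=3000 iflag=direct &
        done
        wait

        # Aggregate per-device throughput is easier to read from iostat
        # in a second terminal while the readers run:
        iostat -xm 5

    If direct-I/O numbers still refuse to scale across the two HBAs, the suspects move down the stack: PCIe slot placement of the HBAs, interrupt/NUMA affinity on the DL580, or the multipath configuration.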


  • How can I remedy the always-on-top window problem?

    - by GateKiller
    Sorry for the vague title, but this one is hard to explain, so bear with me please. I'm using Windows Vista at work for web development, and sometimes when I click or Alt-Tab to a background window, the window gets focus but is not brought to the front. In order to bring it to the front, I have to click on the application's border (where the resize cursor appears), and the window then jumps to the front. I've had this problem for about a year now and it happens at least a dozen times a day, but it doesn't do this all the time; it seems random. I hope I have explained the issue fully (and you've understood it) and would appreciate any constructive answers or comments to solve this problem. Example: if I Alt-Tab from Google Chrome to Notepad and this problem occurs, Google Chrome will remain in front of Notepad; however, I can type text into Notepad while its window is behind Google Chrome. Clicking on Notepad's content area will not bring it to the front, but clicking its window border will. Video example: http://vimeo.com/19388998 - in this video, I clicked from Google Chrome to UltraEdit and Chrome stayed in front, but as you can see, I can still type in UltraEdit. I'm starting to believe this could be a bug in Google Chrome, so I'll keep watching whether it happens between other applications.


  • High disk usage on IIS with PHP5: performance problems

    - by Jacob84
    Hi everybody, I'm quite worried about a performance problem I'm facing on one of our production servers. I work for a hosting company, so you can imagine how heterogeneous the applications running here are. It all started with a call from a client complaining about how slowly a Joomla site was loading. The setup is IIS 6 (Windows 2003) with PHP5 and FastCGI, which normally works pretty well. I tested the loading time and indeed he was right: 7 or 8 seconds to load, when this usually takes 2. Seeing these results, I first checked CPU and RAM: everything normal, 2 GB of RAM free, 3-8% CPU activity. That's what I call a relaxed server ;). Unfortunately, digging a little deeper I found the PhysicalDisk counters quite high (above 10), especially the read queues. I used Process Explorer to see which processes have the highest deltas, but everything seemed normal. As the problem is specifically related to PHP pages, I checked the relevant IIS counters: current connections, number of CGI requests and number of ISAPI requests. CGI: 3 to 7; ISAPI: 5 to 9; connections: 90 to 120 (which appears at the top of the graph). More than a solution (I know this is hard to find), I would like to know whether you have a specific methodology for attacking this kind of problem. Thanks a lot, as always.


  • What is the IIS equivalent of Apache's global Alias, instead of adding a Virtual Directory to every single site one by one?

    - by Sk8erPeter
    In Apache, there's a way to make phpMyAdmin available globally in all the VirtualHosts I set up. It looks like this:

        <IfModule mod_alias.c>
            Alias /phpmyadmin "c:/AppServ/www/phpMyAdmin"
        </IfModule>

    This way I reach phpMyAdmin by appending /phpmyadmin to any of my domain names, and I see phpMyAdmin's start page. (It works for all my domains: http://example_1.com/phpmyadmin, http://example_2.com/phpmyadmin and http://example_3.com/phpmyadmin all work.) In IIS, there's an "Add Virtual Directory..." option when right-clicking on a given site. There I can set up phpMyAdmin's path to be reached by appending /phpmyadmin to the given domain (e.g. http://example_1.com/phpmyadmin), but isn't there a global setting similar to Apache's Alias, or do I have to add a virtual directory to every site one by one? I'm just curious; it's not hard work to do it, but I'm interested in whether another method exists. Thanks in advance!
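
    As far as I know, IIS has no direct equivalent of a global Alias, but the per-site step can be scripted so it behaves like one; a sketch using the WebAdministration PowerShell module that ships with IIS 7 (the idempotency check is an assumption about the desired behaviour):

        Import-Module WebAdministration

        # Create a /phpmyadmin virtual directory in every site that lacks one.
        Get-Website | ForEach-Object {
            if (-not (Get-WebVirtualDirectory -Site $_.Name -Name "phpmyadmin")) {
                New-WebVirtualDirectory -Site $_.Name -Name "phpmyadmin" `
                    -PhysicalPath "C:\AppServ\www\phpMyAdmin"
            }
        }

    Re-running the script after adding new sites keeps them covered, which is about as close to Apache's one-line global Alias as IIS 7 gets.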


  • Adding a subnet to vSphere with a single vCenter and ESXi host

    - by Ilya Rakhlin
    Let me start off by saying that I do not specialize in networking. I am in the process of adding more VMs to a testing environment and wanted some recommendations. I am running a single ESXi 5.1 host and a single vCenter management server. The problem: I need another range of IP addresses added to the existing setup, hopefully without reconfiguring everything. Currently the ESXi host is configured with IP 192.168.100.200, gateway 192.168.100.1 and subnet mask 255.255.255.0. All of the VMs run some version of Linux with IP addresses hard-coded in that range, using that subnet. The VMs I am about to deploy I want on the 192.168.101.x network. Is it possible to add an additional subnet to this existing system that will also communicate with the current subnet? The ESXi host has six physical NICs but only one connected, as this is only a testing system; I'm not sure whether that matters. Are there any other ways to accomplish this, hopefully without restarting, or at least without reconfiguring the IP addresses of each VM? Reason: because of how the VMs are configured to run the applications we need, I am using a large share of the current IP range (mostly VIPs). I will be setting up a new version of this environment while keeping the old one, thus potentially running out of IP addresses.


  • Could hybrid SSD + HDD be made with fixed internal partitions?

    - by Aaron
    I was pretty close to getting Seagate's Momentus XT, but have been scared off by the many problems reported on forums and feedback sites, especially in MacBook Pros, so I'm waiting for a mark 2 with some extra flash and better reliability, which I assume will come out this year. What would suit me better, though, is a 32 GB + 500 GB hybrid drive where I have more control over what sits on the flash part and what sits on the disk part. So there would be two physical partitions within the one 2.5" hard drive enclosure, using different media internally: 32 GB for core files and 500 GB for data and multimedia. The partitions would be locked so they can't be changed. Or, even better, the disk controller would present them to the OS as two disks sharing the same bus... Perhaps it's OK if the BIOS only sees the first drive until the OS is loaded. Is either of these technically possible? Obviously it would be difficult to market outside the enthusiast segment. The SSD memory modules can be pretty small, right? So they could even be made as a card that plugs into a secondary connector in the enclosure. That would be good for computer builders as well as for upgrading and recoverability. Future operating systems could then recognise these system SSD drives and automatically install the OS and swap files on them, while placing document libraries on the larger data drive. While the HDD will probably disappear in the longer term, there will always be a trade-off between speed, storage size and expense.


  • Bad performance with Linux software RAID5 and LUKS encryption

    - by Philipp Wendler
    I have set up a Linux software RAID5 across three hard drives and want to encrypt it with cryptsetup/LUKS. My tests show that the encryption leads to a massive performance decrease that I cannot explain. The RAID5 can write 187 MB/s [1] without encryption; with encryption on top of it, write speed drops to about 40 MB/s. The RAID has a chunk size of 512 KB and a write-intent bitmap. I used -c aes-xts-plain -s 512 --align-payload=2048 as the parameters for cryptsetup luksFormat, so the payload should be aligned to 2048 sectors of 512 bytes (i.e. 1 MB). cryptsetup luksDump shows a payload offset of 4096, so I think the alignment is correct and fits the RAID chunk size. The CPU is not the bottleneck, as it has hardware support for AES (aesni_intel). If I write to another encrypted drive (an SSD with LVM), I do get a write speed of 150 MB/s. top shows that CPU usage is indeed very low; only the RAID5 xor takes 14%. I also tried putting a filesystem (ext4) directly on the unencrypted RAID to see whether the layering is the problem. The filesystem decreases performance a little, as expected, but by far not that much (write speed varying, but around 100 MB/s). Summary: disks + RAID5: good; disks + RAID5 + ext4: good; disks + RAID5 + encryption: bad; SSD + encryption + LVM + ext4: good. Read performance is not affected by the encryption: it is 207 MB/s without and 205 MB/s with encryption (also showing that CPU power is not the problem). What can I do to improve the write performance of the encrypted RAID? [1] All speed measurements were done with several runs of dd if=/dev/zero of=DEV bs=100M count=100 (i.e. writing 10 GB in blocks of 100 MB). Edit: if this helps, I'm using Ubuntu 11.04 64-bit with Linux 2.6.38. Edit 2: the performance stays approximately the same if I pass a block size of 4 KB, 1 MB or 10 MB to dd.
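
    One isolation test worth adding to the summary table: measure the dm-crypt layer with no real disks underneath, by encrypting a loop device backed by RAM. A sketch (destructive only to the throwaway file; luksFormat will prompt for confirmation and a passphrase):

        # Throwaway 2 GB backing file in tmpfs, attached as a loop device:
        dd if=/dev/zero of=/dev/shm/testfile bs=1M count=2048
        losetup /dev/loop0 /dev/shm/testfile

        # Same cipher parameters as the real array:
        cryptsetup luksFormat -c aes-xts-plain -s 512 /dev/loop0
        cryptsetup luksOpen /dev/loop0 testcrypt

        # Raw write throughput of dm-crypt alone:
        dd if=/dev/zero of=/dev/mapper/testcrypt bs=100M count=10

        cryptsetup luksClose testcrypt
        losetup -d /dev/loop0

    If this RAM-backed device also writes fast, the cipher path is fine and the slowdown comes from how dm-crypt's I/O pattern interacts with RAID5 (dm-crypt splits writes into small segments, which can turn full-stripe writes into read-modify-write cycles); that would point at chunk size and stripe-cache tuning rather than the encryption itself.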


  • Apache Bench reports different results for the same page

    - by Aspis
    I'm running into a little problem baselining an Apache2/fcgi/php-fpm server I am setting up. 1) If I run ab -n 15000 http://mysite.com/index.php, Apache Bench returns a time per request of 41 ms, but a document length of 0 bytes and 0 bytes of HTML transferred, with a transfer rate of 7.9 KB/s. 2) If I run ab -n 15000 http://mysite.com/, Apache Bench returns a time per request of 83 ms, along with the accurate document length and total HTML transferred. The APC cache status reports identical hit counts for both tests, and Apache Bench reports no errors in either case. Overall, there are no errors on any test site and all logs are clean. DocumentRoot is set to index.php, so I would expect both test runs to produce similar results. My two questions are: 1) why the discrepancy? 2) which is the correct result? I've seen plenty of results like test 1 posted without question, but frankly, from my own experience and that of others, accurate testing is hard to come by.
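
    A document length of 0 usually means the server is not returning a 200 with a body for that exact URL (a redirect, for instance, which ab does not follow; ab only reports a body for the status it receives). Bumping ab's verbosity prints the raw response and should settle which run measured the real page; a sketch:

        # Verbosity 4 prints the response headers for each request:
        ab -n 1 -v 4 http://mysite.com/index.php

        # Compare with what the trailing-slash URL returns:
        ab -n 1 -v 4 http://mysite.com/

    If the first request shows a 301/302 (or an empty 200 from a rewrite rule), then test 2 is the meaningful benchmark and test 1 was timing the redirect, which also neatly explains why it looked twice as fast.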

