Search Results

Search found 22300 results on 892 pages for 'half bit'.

Page 691/892

  • Sending emails from an external subnet in VMware ESXi

    - by user80658
    This might be a bit hard for me to explain, and it is a pretty individual situation. I have a dedicated server at Hetzner (www.hetzner.de). The public IP is 88.[...].12, and ESXi runs on this server. I can access the ESXi console via the public IP, but none of the virtual machines. That's why I bought a public subnet with 8 (6 usable) IPs (46.[...]) and an additional public IP (88.[...].26). The additional public IP belongs to the first virtual machine, a firewall appliance, which is connected to the WAN. It needs to be done this way, since that is the official setup at Hetzner. My 46.[...] subnet sits behind the firewall. I have a Virtualmin server with a Dovecot IMAP/POP3 server. When sending an email, most providers (Gmail) will accept it, but quite a few (AOL) will put it into spam. My theory: the MX record of my domain points, of course, to the IP of the virtual machine (46.[...]), but the raw email headers show the mail was sent from the IP of the firewall (88.[...].26), which doesn't look trustworthy. A solution would be for the firewall to handle mail, but it simply can't. How can I prevent this problem? Thanks.
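
    A common first step for this kind of split, sketched below under the assumption that the sending domain is example.com and that mail leaves through the firewall's 88.[...].26 address (both placeholders): publish an SPF record that authorizes the outbound IP, and make sure its PTR record matches the mail server's HELO name, since receivers like AOL weigh both.

        # Hypothetical SPF record for the sending domain (example.com and the IPs are placeholders):
        #   example.com.  IN TXT  "v=spf1 ip4:46.0.0.128/29 ip4:88.0.0.26 -all"
        # Check what receiving servers see:
        dig +short TXT example.com
        dig +short -x 88.0.0.26    # the PTR should resolve to the mail server's HELO name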


  • RAID-capable 3.5" SATA Drives

    - by nroam
    I recently purchased a pair of 1TB Western Digital WD1002FBYS RE3 drives for use in an external RAID enclosure. I have found that they tend to drop out of the array after a while. Thinking it was the enclosure, I tried them in another one but found the same issue. A bit of googling turned up http://www.tomshardware.com/forum/251076-32-raid-issues-western-digital-hard-disk which suggests that: "WD's "RE" (RAID Edition) HDDs support Time-Limited Error Recovery ("TLER"): http://www.wdc.com/en/products/productcatalog.asp?language=en As a non-TLER HDD fills up with data, the error detection firmware might take too long, and the RAID controller may drop that HDD from a RAID array." So now I wonder: which SATA drives have firmware that is compatible with RAID arrays (esp. RAID 1 and 5, but not 0)? I have not been able to come up with the magic set of keywords to elicit the answer from Google, though various sites suggest that Seagate and Hitachi are in general OK. Does anyone have any generic (or even specific) guidance on how to work out whether a drive's firmware may harbour code that is potentially an issue in a RAID setting, other than stating that it must be 'enterprise' ready?
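
    Where a drive exposes it, the TLER behaviour (SCT Error Recovery Control) can be inspected and set with smartmontools; a minimal sketch, with the device name a placeholder, and bearing in mind that many desktop drives reject the command or forget the setting after a power cycle:

        # Query SCT Error Recovery Control (TLER/ERC) support and current timeouts
        smartctl -l scterc /dev/sda
        # Set read/write recovery timeouts to 7.0 seconds (values are tenths of a second)
        smartctl -l scterc,70,70 /dev/sda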


  • pcfg_openfile: unable to check htaccess file, ensure it is readable

    - by rxt
    After moving a website folder on my local development machine to another drive, then moving it back, I got a 403 error. Most of this problem probably had to do with rights that got messed up. After deleting the code and restoring it from SVN, the rights seemed all right, but the error stayed. The setup is a bit complex, as follows:

    - I have Ubuntu 10.04 as my development machine, trying to mimic the server as much as possible
    - We use Eclipse + SVN, and I create all projects in a local folder under my user account
    - In /var/www-vhosts I create folders for each vhost, like this one: test.localhost
    - test.local/index.php includes the index file of the project
    - test.local/.htaccess is a symbolic link to the htaccess file in a project subfolder

    I get the following error in the Apache error log:

        [Thu Jul 08 15:55:56 2010] [crit] [client 127.0.0.1] (13)Permission denied: /var/www-vhosts/test.localhost/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readable

    The problem seems to be the .htaccess file, or the link to it. When I empty the htaccess file, nothing changes. When I remove the link, the index include produces some output (in the Apache error log). When I remove the link and replace it with the actual file, I get another error:

        [Thu Jul 08 16:47:54 2010] [error] [client 127.0.0.1] Symbolic link not allowed or link target not accessible: /var/www-vhosts/test.localhost/test

    I'm lost here and don't know what to do next. Do you have any ideas what I can try? This setup has worked before, but I don't know what is different now.
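
    A sketch of the usual checks for both messages, assuming Apache runs as www-data and the link target lives under the user's home directory (paths are placeholders based on the post): every directory on the path to the target needs execute permission for the Apache user, and the vhost needs symlinks enabled.

        # Can the Apache user actually read the file through the link?
        sudo -u www-data cat /var/www-vhosts/test.localhost/.htaccess
        # Grant directory-traversal rights along the target path (adjust to the real project path)
        chmod o+x /home/youruser /home/youruser/projects /home/youruser/projects/test
        # And in the vhost configuration, allow the link to be followed:
        #   <Directory /var/www-vhosts/test.localhost>
        #       Options +FollowSymLinks
        #   </Directory>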


  • Postfix auto create Maildir

    - by Eugene
    I've been beating my head against a wall for a while now on this one. Basically, here is the rundown: our MX record points to a frontend SMTP server, which contains aliases for actually routing the mail. No alias, no access to the backend storage server, which is what our clients connect to. I'm upgrading the backend email server. Currently, a user is created for every email user on the server, which creates the mailbox. On the new server, everything authenticates through PAM to an LDAP server (all of which is working properly). My goal is to get Postfix to create the Maildir directory for the user automatically. This works fine when I have the /home directory with 777 permissions, but for obvious reasons this should be avoided. I would like to do this with 775 permissions on /home, with the group owner being whatever user Postfix is running as, but I can't seem to figure out what user to use. With the 777 permissions, the /home/$user/Maildir directory is created on message delivery. Does anybody know how I can do this without 777 permissions? The system I am working on is a 64-bit Debian Lenny 5.07 install. Any advice would be appreciated.
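
    One way around the world-writable /home, sketched here on the assumption that these LDAP users open a PAM session at some point (first Dovecot login, for instance): let pam_mkhomedir create each home directory, and let Postfix's local delivery agent, which already runs as the recipient user, create the Maildir itself.

        # /etc/pam.d/common-session - create home directories on first login
        # (umask 0022 keeps the new homes at 755, so /home itself can stay 755 root:root)
        #   session required pam_mkhomedir.so skel=/etc/skel umask=0022
        # Tell Postfix to deliver to ~/Maildir; the local agent creates it as the user:
        postconf -e 'home_mailbox = Maildir/'
        postfix reload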


  • Mac - KeyRemap4MacBook - Custom XML

    - by DjRikyx
    I hope some of you know this powerful pref panel for remapping the keyboard. Since I'm using a PC keyboard, I wanted to make taking screenshots a bit easier. I managed to get the following working:

    - Command+Shift+3 to Stamp (full-screen screenshot)
    - Command+Shift+4 to Control+Stamp (selection-cursor screenshot)

    Now I want to remap Shift+Stamp to Command+Shift+4+Space to get a window screenshot. I tried, but nothing worked. Here is my current XML; I only need to add the last remap:

        <?xml version="1.0"?>
        <root>
          <item>
            <name>Custom PC Style Screenshot</name>
            <appendix>Command+Shift_L+4+Space to Shift+F13</appendix>
            <appendix>Command+Shift_L+4 to Control+F13</appendix>
            <appendix>Command+Shift_L+3 to F13</appendix>
            <identifier>private.custom_pc_style_screenshot</identifier>
            <autogen></autogen>
            <autogen>--KeyToKey-- KeyCode::F13, VK_CONTROL, KeyCode::KEY_4, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L</autogen>
            <autogen>--KeyToKey-- KeyCode::F13, KeyCode::KEY_3, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L</autogen>
          </item>
        </root>

    Hope some of you can help me out! Thank you.
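
    A guess at the missing remap, following the pattern of the existing entries (untested; KeyToKey accepts a sequence of output keys, so Shift+Stamp, arriving as Shift+F13, would emit Command+Shift+4 followed by Space):

        <autogen>--KeyToKey-- KeyCode::F13, VK_SHIFT, KeyCode::KEY_4, ModifierFlag::COMMAND_L | ModifierFlag::SHIFT_L, KeyCode::SPACE</autogen>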


  • Upgrading only certain packages via the getdeb repo

    - by intuited
    I'm a bit confused about how getdeb.net works now. The last time I got a package from there was a while ago; at that point the procedure was that you would just download a .deb for each package that you wanted to install/upgrade and then install it using dpkg -i. However, the inexorable march of progress has lent its trumpets to this system as well, and getdeb installs are now done via their repo, which is registered with apt in /etc/apt/sources.list.d after you install a single package that makes the changes to the apt database. I've installed that package, and I've discovered that aptitude dist-upgrade now wants to upgrade a lot of packages on my system that weren't ready for upgrades prior to the installation of the getdeb package. If I rename the file /etc/apt/sources.list.d/getdeb.list to something with a different extension and then do aptitude update && aptitude dist-upgrade, it stops wanting to upgrade packages. So I gather that the default behaviour is now to upgrade all packages to the versions available at getdeb. This is not particularly appropriate, since those packages are not as well tested as the officially released versions. Is there a config setting somewhere that will prevent upgrading packages to versions from the getdeb repo unless that action is specifically selected? I'd like to be able to pick and choose which packages are upgraded via getdeb.
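
    apt pinning does exactly this; a minimal sketch, with the origin hostname an assumption to be verified against the output of apt-cache policy. A priority below 100 means installed packages are never auto-upgraded to the getdeb versions, while explicitly installing a specific version still works.

        Explanation: never auto-upgrade to getdeb (origin host is a guess - check apt-cache policy)
        Package: *
        Pin: origin archive.getdeb.net
        Pin-Priority: 50

    Save that as /etc/apt/preferences.d/getdeb; a single package can then be pulled from getdeb deliberately, e.g. aptitude install vlc=<getdeb-version>, or given its own higher-priority stanza.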


  • Cacti Login Page: Infinite loop occurs

    - by beicha
    Apache 2.2.3 | PHP 5.1.6 | MySQL 5.0.77

    I followed the Cacti installation guide to install the latest Cacti 0.8.7h on CentOS 5.5 (64-bit). The installation of PHP/Apache/MySQL went smoothly until I finished the setup and came to the login page. I can log in to http://.../cacti/index.php with the admin account, but the new page redirects to the same login page with the message "Please enter your Cacti user name and password below". This is an infinite loop! If I use a wrong admin password, I get the correct error message "Invalid User Name/Password Please Retype". If I log in with the Guest/guest account, "Error: Access Denied, user account disabled." is displayed. The Cacti log file (./cacti/log/cacti.log) is empty. I Googled, and it seems this problem has existed for a long time, but no follow-up solutions were given in the forum posts I found. Can anyone help me with this problem? If more information is needed, please let me know.

    Nov 18, 2011 UPDATE: I re-installed Cacti; this question remains UNSOLVED.
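
    A successful login that bounces straight back to the form is often a session-cookie mismatch rather than an authentication failure; one thing worth checking is that Cacti's configured URL path matches the URL actually used (a sketch; the install path is an assumption):

        # include/config.php - the value must match the path in the browser, trailing slash included
        grep url_path /var/www/html/cacti/include/config.php
        #   $url_path = "/cacti/";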


  • Gigabyte H55N-USB3: No video on HDMI

    - by newt
    I built a new PC with a Gigabyte H55N-USB3 / Intel Core i5 650. With a monitor plugged into the DVI port, everything works fine. I installed Windows 7 32-bit and enabled Remote Desktop. After that, I unplugged the monitor, connected the machine to the network and installed everything else (drivers, programs, etc.) via RDP. However, when I try to use the HDMI port on my TV, nothing appears, neither during boot nor after Windows starts. The TV says there's "no signal" (if I remove the cable, the message changes to "check cable"). The cable is new, and it works fine with my home theater on the same TV (by the way, it is the cable that came bundled with the home theater). The video driver is the latest from the Intel site; anyway, that shouldn't be the problem, since there is no image even during boot. Any ideas or tips would be welcome. I'm googling around but have found nothing useful yet.


  • Ubuntu odd mouse and keyboard behavior when window gains inner focus

    - by Scott
    This morning on my Ubuntu 10.10 system, when I open a window (for example, System - Preferences - About Me) and click into a field such as "Work email", I can no longer close the window with the mouse! Clicking the X on the window will not work. I also lose the ability to click on anything else: clicking on the desktop, icons, menus, workspaces, etc. does not work. Even the highlight effect when you hover over a folder on the desktop stops until the window is closed. If I open the same screen but do not click into a field, everything is fine; I can close the window with the X and everything else works. The same thing happens with several other windows I tried. Even Calculator: I can open it, and everything is fine until I click a button in the calculator, then I can't click on anything else and have to Alt-F4 to close the window.

    The system is only about a week old, from a fresh install (64-bit Ubuntu, quad-core AMD machine). I uninstalled Wine, turned off remote desktop and disabled it in startup, and also disabled visual assistance, Bluetooth, Dropbox and Klipper in startup. Reboot, no difference. The only other non-standard thing I see in startup is nvidia. I'm using a Logitech USB mouse and a Saitek USB keyboard. It was working fine yesterday, and I can't think of anything I did or installed since.

    Then I switched themes, went to Update Manager, saw two X server / X.Org related updates, installed them, rebooted, and NOW IT IS FINE! However, I then re-enabled Dropbox, Klipper and remote desktop, rebooted, and the problem was back. Again I disabled them and rebooted. The problem is still there!! So somehow I fixed it, at least for a few minutes, but now it is back and I am out of ideas.


  • "Dictionary problem." Error with VMPlayer

    - by George Mauer
    I'm pretty new to VMware virtualization (I've been a VirtualBox user), so I'm hoping you guys can help me out. I recently got an external USB disk containing a VM for a client, downloaded VMware Player, set the VM up with "Open a Virtual Machine", and ran it, easy as pie. After working with it a bit this morning, I shut the VM down, and now, trying to start it back up again, I get a "Dictionary problem" error. I tried removing the VM from my library; now the error happens whenever I try to add it back in. In the meantime, I can still access other virtual machines, so it seems like the problem might be with this VM specifically. So, two questions:

    - This is obviously not a very helpful error message. Where can I go to get more information? My Application event log doesn't contain anything from VMware.
    - What steps can I take to fix the problem?

    Edit: A couple more pieces of information. I did not take any snapshots; I don't think VMware Player even has that ability. I have a zip file of (what I assume is) the state of the VM when it was sent to me. I cannot unzip it, as it is huge and simply requires more HD space than I have available, but I did extract the vmx file and examine it. Other than the UUIDs and the fact that mine reads cleanShutdown = "FALSE", they are identical. The log contains the following lines:

        Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead: Unable to load dict from 'E:....\MachineName.vmsd'.
        Jun 23 10:11:18.080: vmx| SNAPSHOT: SnapshotConfigInfoRead failed for file 'E:....\MachineName.vmx': Dictionary problem (6)
        Jun 23 10:11:18.082: vmx| SNAPSHOT: Snapshot_TimeStampTiers failed: Dictionary problem (6)
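
    The log points at the snapshot dictionary file (.vmsd), not the virtual disk itself. Since no snapshots were ever taken, a common remedy is to move that file aside so Player regenerates an empty one; a sketch from a Windows command prompt, with the file name taken from the log and the truncated path left as a placeholder:

        :: Run from the VM's folder on the external disk
        ren "MachineName.vmsd" "MachineName.vmsd.bak"
        :: If startup still fails, stale lock folders from the unclean shutdown are worth clearing too:
        :: rd /s /q "MachineName.vmx.lck"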


  • WAMP running extremely slow on Windows 7

    - by JavaCake
    After two days of a tough fight trying to figure out what the problem is with my Windows 7 32-bit machine at work, I have nearly given up. The issue is that pages load extremely slowly, both when accessed locally (127.0.0.1) and from another computer on the intranet. First, to explain the system:

    - WAMP version: Apache 2.2.22 / MySQL 5.5.24 / PHP 5.4.3
    - XDebug 2.1.2, XDC 1.5, phpMyAdmin 3.4.10.1, SQL Buddy 1.3.3, webGrind 1.0
    - DocumentRoot: located on a network drive
    - MySQL: InnoDB
    - Pages: PHP, MySQL, AJAX, etc.

    These are the changes I have made so far in order to get better performance. In C:\windows\system32\drivers\etc\hosts:

        127.0.0.1 localhost
        127.0.0.1 127.0.0.1

    In my.ini:

        innodb_flush_log_at_trx_commit = 2

    In httpd.conf:

        EnableMMAP on
        EnableSendfile on

    In php.ini:

        realpath_cache_size = 4M

    I measure performance as the overall load time of the page. I run the site on my Mac OS X machine as well (MAMP), where the front-page load time is typically 0.06 seconds, but on the Windows 7 machine it is 6-10 seconds. I have verified the load time with the developer tools in Chrome as well. Furthermore, the result is identical in XAMPP.


  • Dealing with upgrade of libevent on Amazon AWS

    - by Dreen
    I am building an application (in Python) on Amazon EC2 that has the following dependency chain: gevent-websocket ---> gevent ---> libevent. The last one (libevent) got upgraded on Sunday, and my server is now generating this error:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
          from gevent import core
        ImportError: libevent-1.4.so.2: cannot open shared object file: No such file or directory

    Not wanting to spend much time on the issue, I tried to mitigate it by creating a symlink to an always-recent version:

        $ sudo ln -s /usr/lib64/libevent.so /usr/lib64/libevent-1.4.so.2

    But it didn't quite work:

        (...)
        File "/usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/__init__.py", line 41, in <module>
          from gevent import core
        ImportError: /usr/lib/python2.6/site-packages/gevent-0.13.7-py2.6-linux-x86_64.egg/gevent/core.so: undefined symbol: current_base

    I am a bit stumped as to how to proceed. Should I create more symlinks? To what? Or is there a better way to solve this problem?

    PS. For the record, I am using the Amazon AMI.
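
    The second error is the telling one: the symlink points gevent's compiled core.so at a libevent with a different ABI (current_base was an internal of libevent 1.4 that later versions dropped), so no symlink will fix it. The usual way out is to rebuild gevent against the installed libevent; a sketch, assuming a yum-based Amazon AMI with Python 2.6 (package names may differ):

        # Remove the misleading symlink
        sudo rm /usr/lib64/libevent-1.4.so.2
        # Build prerequisites for the C extension
        sudo yum install -y gcc libevent-devel python26-devel
        # Rebuild/reinstall gevent so core.so links against the current libevent
        sudo pip install --upgrade --force-reinstall gevent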


  • How do you recreate the System Recovery environment in Windows 7?

    - by Howiecamp
    I'm running Windows 7 Home Premium RTM (64-bit) and I want to take advantage of the system recovery tools (e.g. the Command Prompt) without using the Windows 7 DVD. My understanding is that this environment (WinRE) should be installed to your HDD by default as part of the Windows 7 installation. However, when I hit F8 on boot and select "Repair", I get:

        Windows failed to start. A recent hardware or software change might be the cause. To fix the problem...
        Status: 0xc000000e
        Info: The boot selection failed because a required device is inaccessible.

    The "Info" line seems like the smoking gun. My next step was to boot from the Windows 7 DVD and choose "Repair". It indicated my recovery environment wasn't on the Windows 7 boot menu (perfect) and offered to fix it. I said yes and rebooted; however, same issue as above. In addition, when I booted into Windows 7 and looked at the boot menu options, the recovery/repair option was not there, only my Windows installation. Finally, I ran the Disk Management tool (diskmgmt.msc) and took a look at the contents of my "System Reserved" partition (which was set to "Active" as normal). It's unclear to me what the contents should look like, but it is my understanding that the WinRE environment gets installed to this partition. (As part of the above troubleshooting I followed http://superuser.com/questions/25728/how-to-fix-windows-7-boot-process which led to http://www.sevenforums.com/tutorials/668-system-recovery-options.html).
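
    Windows 7 ships a tool for exactly this wiring: reagentc.exe. A sketch of re-registering the on-disk WinRE image from an elevated prompt (the path is the usual default and an assumption; the Recovery folder is hidden and may live on the System Reserved partition instead):

        :: Show whether WinRE is enabled and where the image is expected
        reagentc /info
        :: Point the configuration at the folder containing winre.wim
        reagentc /setreimage /path C:\Recovery\WindowsRE
        :: Re-enable it so F8 > "Repair your computer" works again
        reagentc /enable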


  • Using an i7 "gamer" CPU in an HPC cluster

    - by user1219721
    I'm running the WRF weather model. It's a RAM-intensive, highly parallel application, and I need to build an HPC cluster for it. I use a 10 Gb InfiniBand interconnect. WRF doesn't depend on core count so much as on memory bandwidth. That's why a Core i7 3820 or 3930K performs better than high-grade Xeons (E5-2600 or E7). Universities seem to use the Xeon E5-2670 for WRF; it costs about $1500. The SPEC CPU2006 fp_rate results for the WRF benchmark show the $580 i7 3930K performing the same with 1600 MHz RAM. What's interesting is that the i7 can handle RAM up to 2400 MHz, which gives a large performance increase for WRF; then it really outperforms the Xeon. Power consumption is a bit higher, but still less than €20 a year. Even including the additional parts I'll need (PSU, InfiniBand, case), the i7 route is still €700 per CPU cheaper than the Xeon. So, is it OK to use "gamer" hardware in an HPC cluster, or should I do it properly with Xeons? (This is not a critical application. I can handle downtime. I think I don't need ECC?)


  • Windows 7 - Can't get my TV working as primary display with nVidia 7900GS

    - by Daniel Schaffer
    I just installed Windows 7 Ultimate 64 RTM (from MSDN) on my HTPC, which is connected to a 42" Magnavox LCD TV via component cables to my nVidia 7900GS. Everything was fine through the installation until I installed the official driver from nVidia. Towards the end of the installation, the TV blinked off and wouldn't come back on. I went and got an LCD monitor and plugged it into a DVI port; the monitor came right up, but was automatically selected as the primary display. Now, if I set the TV to be the primary display, the TV just blanks until I hit Escape to cancel the "settings have changed, do you want to keep them" dialog. Any suggestions?

    Update: I'm able to set the TV as the primary display using the Windows 7 "Screen Resolution" configuration panel. However, if I try to remove the LCD monitor, either by unplugging it or via the configuration, the TV blanks out again.

    Update 2: This setup was working correctly in Vista Home Premium 32-bit.

    Update 3: I've uninstalled the nVidia driver and am using the driver that Windows Update installed. As much as this offends my geek sensibilities (must use the "right" driver!!), well, It Works™.


  • Why does BitLocker need a minimum volume size of 64 MB?

    - by Iszi
    Since the future of TrueCrypt still appears to be unclear, I figured I'd try to get my stuff migrated into BitLocker, at least for the time being. I almost never have to access my encrypted data from anything that's not BitLocker-capable, so cross-platform compatibility isn't a big deal to me at this time. However, I am having a bit of an issue understanding the minimum requirement of a 64 MB volume. With TrueCrypt, I was able to protect small files (and most of my protected files are fairly small) in containers down to 300 KB or even less. When I finally created a VHD of an appropriate size last night (100 MB), it seemed the file system itself only took up about 3 MB, and encrypting it with BitLocker didn't appear to take up any more. While 3 MB is still an order of magnitude larger than the smallest volume I could make with TrueCrypt, it's relatively reasonable in comparison to 64 MB. This is an especially large amount of overhead (and largely wasted at that, since it's mostly empty space for now) when I consider that some of these volumes will be stored and synced in the cloud. What possible reasons could BitLocker have for requiring volumes to be 64 MB large, when it doesn't even appear to use that space? (Reference: the BitLocker FAQ on TechNet.)


  • Linux as a router for public networks

    - by nixnotwin
    My ISP had given me a /30 network. Later, when I wanted more public IPs, I requested a /29 network. I was told to keep using my earlier /30 network on the interface facing the ISP, and to use the newly assigned /29 network on the other interface, which connects to my NAT router and servers. This is what I got from the ISP:

        WAN IP: 179.xxx.4.128/30
        CUSTOMER IP: 179.xxx.4.130
        ISP GATEWAY IP: 179.xxx.4.129
        SUBNET: 255.255.255.252

        LAN IPS: 179.xxx.139.224/29
        GATEWAY IP: 179.xxx.139.225
        SUBNET: 255.255.255.248

    I have an Ubuntu PC with two interfaces, so I am planning to do the following:

        eth0 will be given 179.xxx.4.130/30, gateway 179.xxx.4.129
        eth1 will be given 179.xxx.139.225/29

    And I will have the following in /etc/sysctl.conf:

        net.ipv4.ip_forward=1

    These will be the iptables rules:

        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

    My clients, which have the IPs 179.xxx.139.226/29 and 179.xxx.139.227/29, will be made to use 179.xxx.139.225 as their gateway. Will this configuration work for me? Any comments? If it works, what iptables rules can I use to get a bit of security?

    P.S. Both networks are non-private and there is no NATing.
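
    On the security question, a minimal stateful variant of the rules above (interface names as in the post): default-drop forwarded traffic, allow replies and anything originating from the /29, and open inbound services one by one.

        # Default-deny for forwarded traffic
        iptables -P FORWARD DROP
        # Allow return traffic for established connections
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        # Allow the servers on the /29 to initiate outbound connections
        iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
        # Expose selected services explicitly, e.g. a web server on .226 (placeholder)
        iptables -A FORWARD -i eth0 -o eth1 -p tcp -d 179.xxx.139.226 --dport 80 -j ACCEPT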


  • How to set up ping between an XP guest and a Win8 host using a Hyper-V virtual switch

    - by rism
    The Hyper-V client is installed on a Win8 Pro 64-bit box, and a VM running XP has been created within it, using an internal virtual switch. The VM can be booted and accessed, and it has a default virtual NIC with a dynamic IP of 169.254.x.x, which I have changed to a static IP of 192.168.0.12 / 255.255.255.0, confirmed via ipconfig in the XP guest. The host has an IP of 192.168.0.7 / 255.255.255.0. Both host and guest have their firewalls disabled, for simplicity. I can't ping the guest from the host, nor the host from the guest: the pings time out. With regard to Hyper-V and VMs, I don't know what to do next. Both are in the same workgroup (as per name), but since they can't ping each other, I guess that means nothing.

    My objective is to share a folder on the VM so I can install a 32-bit accountancy app that won't run on Win8/7, so if there is a simpler way, I'm all ears; but typically peer-to-peer is very simple.
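
    One detail that is easy to miss with an internal switch: the host talks to it through its own virtual adapter ("vEthernet (<switch name>)" under network connections), and that adapter needs an address in the guest's subnet, separate from the physical NIC's 192.168.0.7. A sketch from an elevated prompt on the host, with the switch name and the .1 address both assumptions:

        :: Adapter name comes from ncpa.cpl; pick any free address in 192.168.0.0/24
        netsh interface ip set address "vEthernet (InternalSwitch)" static 192.168.0.1 255.255.255.0
        ping 192.168.0.12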


  • How to set up RAID 1 with Intel RST on an existing Windows 7 system?

    - by instcode
    I'd like to set up RAID 1 using Intel Rapid Storage Technology on my Windows 7 64-bit system. I have a 1TB SATA HDD with the Windows 7 system installed on the first primary partition (leftmost, ~200GB). The rest of this HDD is unallocated (~800GB). I bought another 2TB SATA drive, created a primary partition (leftmost, ~500GB) and filled it with my data. The rest of this HDD is unallocated (~1.5TB). A quick disk layout (XXX is the unallocated region):

        HDD1 (1TB): [ 200GB C:\ SYSTEM | XXXXXXXXXXXX ]
        HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXXXXXXXX ]

    Now I want to create a 500GB RAID-1 partition (I'm not sure if "partition" is the correct word here) on the rightmost part of the two HDDs above, without losing any existing data on either disk. Here is the expected layout:

        HDD1 (1TB): [ 200GB C:\ SYSTEM | XXXXXX | 500GB D:\ DATA - RAID-1 ]
        HDD2 (2TB): [ 500GB Z:\ PROGRAM | XXXXXXXXXXXXXXXX | 500GB D:\ DATA - RAID-1 ]

    Setting aside the question of data loss: is it possible to get that final layout using Intel RST? I previously tried this layout using dynamic disks and Windows software RAID, and it worked as expected; however, its resynching after an OS failure is quite ugly, and I don't want that. If it is possible, is there a way to keep the data on the existing partitions untouched, or at least keep the SYSTEM partition safe (I'm OK with it if the PROGRAM partition has to go)? And are there any strict or special steps I should follow in the Intel RST manager to achieve this? If the answer to all of the above is "no", could you please suggest some other possible layouts that leave the C: SYSTEM partition untouched?


  • Dell laptop keyboard doesn't work

    - by Tam
    I'm trying to fix my in-laws' laptop, a Dell Studio 1745 running Windows 7 64-bit. The problem is that most of the keys on the keyboard do not work. The function keys work, and the Caps Lock and numpad keys work, but no other keys do. If I hit the F2 key enough times when starting up, I can get into the BIOS, but after that even the function keys stop working. If I let it go all the way to the Windows login screen, I can see that Caps Lock and Num Lock "work" in the sense that little on-screen images actually appear, but they don't toggle the state of the key; i.e., Caps Lock is always off, Num Lock is always off. The Fn+function-key combos work, so changing the brightness etc. works fine. I'm stumped. I've tried disconnecting the power and battery and leaving the machine for an hour or so before starting up, but that hasn't helped either. Also, and this might be a red herring, the touchpad is failing as well; Device Manager says it's failing with status 10, "unable to start device".


  • PCI TV tuner not receiving a signal

    - by C-dizzle
    I inherited an Angel II PCI dual TV tuner card, popped it into my computer's PCI slot and plugged in my coax cable, but I am having trouble getting any type of signal to it. The computer is running Windows 7 Ultimate 32-bit. I know the card works because the computer recognizes it and installs the drivers, or at least I assume it works because of that. I'm trying to use Windows Media Center to watch/record TV. Here are a few things I have tried to get it working:

    - Uninstalled/reinstalled the newest drivers
    - Tried hooking up an antenna to the coax input instead of my cable
    - Went directly from the wall output into the card with the coax cable, instead of using a cable splitter
    - Tried using a different output on the splitter, in case the out port was bad

    I haven't tried a different coax cable yet; it should be fine, since it's pretty new. As this is my first time setting up a TV tuner card, is there anything specific that I need to do with it? Is there any configuration that needs to be done? Do I need a digital receiver? I was getting pretty frustrated with it, so I wanted to turn here to the experts; I'm sure someone can help me figure it out.


  • Optimizing Apache for large file serving

    - by D_Guy13
    I have a random problem with Apache that I can't quite figure out. Here is my setup: Windows Server 2008 R2, 64-bit, 5GB RAM, an SSD with 200 MB/s (read/write), and a dual-core CPU @ 2.1 GHz. A dump from mod_status:

        Server Version: Apache/2.4.7 (Win32) mod_limitipconn/0.24 mod_antiloris/0.5.2 PHP/5.5.9
        Server MPM: WinNT
        Server Built: Nov 21 2013 20:13:01 (Apache Lounge VC11)
        Current Time: Thursday, 21-Aug-2014 23:38:06 W. Europe Daylight Time
        Restart Time: Thursday, 21-Aug-2014 20:30:47 W. Europe Daylight Time
        Parent Server Config. Generation: 1
        Parent Server MPM Generation: 1
        Server uptime: 3 hours 7 minutes 18 seconds
        Server load: -1.00 -1.00 -1.00
        Total accesses: 283025 - Total Traffic: 1172.2 GB
        25.2 requests/sec - 106.8 MB/second - 4.2 MB/request
        62 requests currently being processed, 388 idle workers

    I am serving large .zip and .iso files using mod_xsendfile (file size range 500 MB - 1.5 GB). The setup works and runs, but CPU usage is very unstable: it jumps all the time between 10% and 90%, and the server goes down when it hits 100%. In that case I have to hard-restart the server. The server is pushing out traffic at 30 Mbps. Is there anything else I should think about to get more stable CPU usage? Is that CPU usage normal? Would switching to Linux help me achieve better CPU usage?
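
    For this pattern, the usual levers are letting the kernel stream the files instead of Apache copying them through its own buffers, and keeping multi-GB files out of memory maps. A sketch of the relevant httpd.conf pieces (the download path is a placeholder, and EnableMMAP Off is a suggestion to benchmark, not a known fix):

        # Hand file transmission to the kernel (sendfile) rather than read/write loops
        EnableSendfile On
        # Memory-mapping 1.5 GB files can thrash the page cache; worth testing Off
        EnableMMAP Off
        # mod_xsendfile: let the application hand downloads off to Apache
        XSendFile On
        XSendFilePath "D:/downloads"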


  • Large scale file replication with an option to "unsubscribe" from a replicated file on a given machine

    - by Alexander Gladysh
    I have 100+ GB of files per day incoming on one machine (file size is arbitrary and can be adjusted as needed), and several other machines that do some work on these files. I need to reliably deliver each incoming file to the worker machines, and a worker machine should be able to free its HDD of a file once it is done working with it. It is preferable that a file be uploaded to a worker only once, processed in place, and then deleted, without being copied somewhere else, to minimize the already high HDD load (the worker itself requires quite a bit of bandwidth). Please advise a solution that is not based on Java. None of the existing replication solutions I've seen can do the "free the HDD of the file once processed" part, but maybe I'm missing something. A preferable solution should work with files (from the POV of our business logic code) and not require the business logic to connect to some queue or other. (Internally the solution may use whatever technology it needs to, except Java.)
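
    Not a replication product, but the contract described (deliver once, process in place, delete) can be sketched with plain rsync on the worker side. Host names, paths and the processing command below are all placeholders, and with several workers pulling concurrently a real queue is safer, since two workers can grab the same file before the source copy is removed.

        #!/bin/sh
        # Worker-side loop: pull files from the ingest host, process in place, delete.
        # --remove-source-files frees the ingest machine's disk once a copy is safe here.
        SRC="ingest-host:/data/incoming/"
        WORK="/data/work"
        while true; do
            rsync -a --remove-source-files "$SRC" "$WORK"/
            for f in "$WORK"/*; do
                [ -e "$f" ] || continue
                process-file "$f" && rm -f "$f"   # process-file is a placeholder
            done
            sleep 10
        done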


  • Provider claiming "all web servers in the cloud are automatically kept in sync" - should I be skeptical?

    - by RobMasters
    I'm no expert in cloud computing; I've spent a fair bit of time researching it and various providers, but have yet to get any hands-on experience with it. From what I've read about AWS and auto-scaling EC2 instances, though, it seems as though each instance should be completely decoupled from all the others. That is, if content is uploaded to one web server's local filesystem from a custom CMS backend, then that content won't be available if it is subsequently requested from a different web server in the auto-scaling group. Is that right?

    I met with a representative of our existing hosting provider recently, and he claimed that it isn't a problem that our legacy CMS system is highly dependent on having a local filesystem. He said that all web servers, regardless of how many, would be kept as exact duplicates, so I shouldn't notice any difference compared to our existing setup of a single dedicated server. This smells a little too much like bull fecal-matter to me. Should I be skeptical about this? I'm a little worried, because my (non-technical) boss, who ultimately makes the decisions, is all for signing up to this cloud solution because it won't require any extra work. I'm sure they must at least be able to provide this, otherwise they wouldn't be attempting to sell it to us. But at what cost? It sounds as though each web server would always need to be checking the other web server(s) for new static content, which to me sounds like unwanted overhead that would slow things down. I'd really appreciate it if somebody could clear this up for me. I'm all for switching to AWS and using S3 + CloudFront for all static content, but that isn't looking very likely to happen at the moment.


  • How to import an existing VM into the VMware Workstation 8 inventory

    - by Wimmel
    I'd like to add existing VMware (Player) virtual machines to the VMware Workstation 8 inventory on Linux. When I create a new virtual machine, it is stored in /var/lib/vmware/Shared VMs/. But copying new directories to that folder does not make them appear in the Workstation window. I found out that the inventory is stored in /etc/vmware/hostd/vmInventory.xml:

        <ConfigRoot>
          <ConfigEntry id="0000">
            <objID>1</objID>
            <vmxCfgPath>/var/lib/vmware/Shared VMs/test 1234/test 1234.vmx</vmxCfgPath>
          </ConfigEntry>
        </ConfigRoot>

    But I don't know whether I'd break anything by adding entries myself and giving each a unique ID. Besides, adding a large number of VMs this way is a bit cumbersome. On ESX, it was possible to use vmware-cmd -s register, but I don't have vmware-cmd installed. In another question it was suggested to use VMware Converter. But VMware Converter 5 (on Windows) only allows a destination file location when I select Workstation as the destination type; when I select VMware Infrastructure as the destination type, it says the destination is unsupported and that it requires vCenter Server.
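
    Following the format of the existing entry, an added VM would presumably look like the sketch below (an untested assumption; the path is a placeholder, IDs must stay unique, and since hostd holds the inventory it seems prudent to stop the vmware-workstation-server/hostd service before editing and restart it afterwards). Workstation 8's vmrun also documents a register command for the ws-shared host type (vmrun -T ws-shared -h https://host:443/sdk -u user -p pass register <vmx path>), which may be the cleaner, scriptable route.

        <ConfigEntry id="0001">
          <objID>2</objID>
          <vmxCfgPath>/var/lib/vmware/Shared VMs/other vm/other vm.vmx</vmxCfgPath>
        </ConfigEntry>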

