Search Results

Search found 22065 results on 883 pages for 'performance testing'.

Page 712 of 883

  • Acronis Disk Director AFTER Clone Disk error: PXE-E61: Media test failure, check cable

    - by Kairan
    Used Acronis Disk Director on my desktop, plugged in the laptop drive (240GB SSD, USB) and the new hard drive (500GB SSD, USB), and the copy seemed to be fine. I didn't see any error messages, but I didn't stare at it for 3 hours either. The clone included the Toshiba hidden restore partition, the primary C: partition and the active (boot?) partition, and yes, I did check the box to copy the NT signature. The computer boots up fine most of the time, but it seems that when the computer goes to sleep (I believe it's sleep; hard to do much testing during school), hibernates or reboots, it will sometimes display this message:

        Intel(R) Boot Agent GE v1.3.52
        Copyright (C) 1997-2010, Intel Corporation
        PXE-E61: Media test failure, check cable
        PXE-M0F: Exiting Intel Boot Agent
        Insert system disk in drive. Press any key when ready...

    Of course pressing any key does nothing but repeat a similar message. However, if I press the power button on the laptop (Toshiba Portege R705, Win 7 Pro 64-bit) it puts the computer into hibernate. After hibernating I press the power button again and it comes out of hibernation without any of the odd messages or problems described above... so apparently that is my temporary fix. Another recent issue I've noticed: occasionally when creating a new folder, modifying something in the system variables, or working in other random areas, I get the message "The stub received bad data"; I simply retry the task and it works. Perhaps these two issues are linked.

    Read the article

  • Nginx ignores HTTP Authentication for WordPress login directory

    - by MrNerdy
    I am running WordPress in a subfolder of my domain for testing and development purposes on a VPS LEMP stack. In order to protect wp-login.php with an extra layer, I used HTTP authentication for the wp-admin folder. The problem is that the HTTP authentication is ignored: when wp-login.php or the wp-admin folder is called, it goes directly to the normal WordPress login. I installed everything from the command line in the following way:

        sudo apt-get install apache2-utils
        sudo htpasswd -c /var/www/bitmall/wp-admin/.htpasswd exampleuser
        New password:
        Re-type new password:
        Adding password for user exampleuser

    My Nginx configuration file looks like this:

        server {
            listen 80;
            root /var/www;
            index index.php index.html index.htm;
            server_name example.com;

            location / {
                try_files $uri $uri/ /index.html;
            }

            location /bitmall/wp-admin/ {
                auth_basic "Restricted Section";
                auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;
            }

            location ~ /\.ht {
                deny all;
            }

            error_page 404 /404.html;
            error_page 500 502 503 504 /50x.html;
            location = /50x.html {
                root /var/www;
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    I would appreciate your advice on this.
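    One likely explanation (a hedged guess, not stated in the post): requests for PHP files under wp-admin/ end in .php, so they are matched by the regex location "location ~ \.php$", which in nginx wins over the prefix location /bitmall/wp-admin/, so the request never passes through the auth_basic block. A minimal sketch of one possible fix is to nest the PHP handler inside the protected location so the password prompt is applied first:

        location /bitmall/wp-admin/ {
            auth_basic "Restricted Section";
            auth_basic_user_file /var/www/bitmall/wp-admin/.htpasswd;

            # same FastCGI settings as the global PHP block, now behind the auth check
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    wp-login.php itself lives one level above wp-admin/, so a similar "location = /bitmall/wp-login.php" block with the same auth and FastCGI directives would be needed to cover it as well.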

    Read the article

  • How do you verify a restore?

    - by Nic
    What tool(s) would you use to verify that a restored file structure is whole and complete? My environment is a Windows Server 2008 file server. (We use tape for backup, but that is inconsequential.) I am specifically looking for a tool that will:

    - Record the names of all files and folders below a specified directory
    - Optionally calculate checksums of each file encountered
    - Save this index in a human-readable format
    - Compare the index against restored data and show differences

    Some background: I recently had to replace the disks in our file server. The upgrade was scheduled to start 36 hours after the most recent full backup, so I created a differential backup. However, it turns out that one of our applications was clearing the archive bit on files saved to the server, so these were not included in the differential backup. I was unaware of this until my users reported some files as missing. Aside from this, are there any other common methods for validating the integrity of a restore? I am frequently told that testing backups by restoring them is the only way to know that backups are working, but how do you deal with the case where it works 99% correctly and the other 1% silently fails?
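    As an illustration of the index-and-compare idea (a sketch, not from the question; paths are made up, and the same approach works on Windows with md5deep/hashdeep or PowerShell): a checksum index is just a sorted hash listing that can be diffed before and after the restore.

        # build a human-readable index of the source tree
        find /mnt/source-share -type f -print0 | xargs -0 sha256sum \
            | sed 's|/mnt/source-share/||' | sort -k2 > index-source.txt

        # build the same index over the restored tree and compare
        find /mnt/restored-share -type f -print0 | xargs -0 sha256sum \
            | sed 's|/mnt/restored-share/||' | sort -k2 > index-restored.txt
        diff -u index-source.txt index-restored.txt

    Missing files show up as lines present only in the first index; silently corrupted files show up as the same path with a different hash.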

    Read the article

  • To update or not to update?

    - by Massimo
    Since I started working where I am working now, I've been in an endless struggle with my boss and coworkers in regard to updating systems. I of course totally agree that no update (be it firmware, O.S. or application) should be applied carelessly as soon as it comes out, but I also firmly believe that there should be at least some reason why the vendor released it; and the most common reason is usually fixing some bug... which maybe you're not experiencing now, but you could be experiencing soon if you don't keep up with updates. This is especially true for security fixes; as an example, had everyone simply applied a patch that had already been available for months, the infamous SQL Slammer worm would have been harmless. I'm all for testing and evaluating updates before deploying them; but I strongly disagree with the "if it's not broken then don't touch it" approach to systems management, and it genuinely hurts me when I find production Windows 2003 SP1 or ESX 3.5 Update 2 systems, and the only answer I can get is "it's working, we don't want to break it". What do you think about this? What is your policy? And what is your company's policy, if it doesn't match your own?

    Read the article

  • apache permission errors

    - by Wilduck
    I'm trying to set up Apache on an Arch Linux box as a testing environment (I'm only using localhost, not trying to serve anything to the greater web). When setting up Django with mod_wsgi, the documentation recommended that I set up a WSGIScriptAlias from / to /usr/local/django/mysite/apache/django.wsgi. I've done this, and added the /usr/.../apache directory to my httpd.conf. When I try to access http://localhost I get a 403 Forbidden error, and I have no idea why this is happening. Things I've tried so far:

    1) chown -R http .../apache
    2) chmod -R 777 .../apache
    3) using a simple Alias directive to host a static file from that directory

    None of these have worked. I'm at a loss for what I'm doing wrong. Below is the relevant excerpt from my httpd.conf:

        Alias / /usr/local/django/mysite/apache
        <Directory "/usr/local/django/mysite/apache">
            Order deny,allow
            Allow from all
        </Directory>

    So my question is: what am I doing wrong?
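    A general debugging sketch (not from the post): with mod_wsgi a 403 is often about directory traversal rather than the Directory block itself - the user Apache runs as (http on Arch) needs the execute bit on every directory leading down to the .wsgi file, and the error log usually names the exact path that was refused.

        # show owner and permissions of every component along the path
        namei -m /usr/local/django/mysite/apache/django.wsgi

        # watch the error log while reproducing the 403 (log path assumed; adjust to your httpd.conf)
        tail -f /var/log/httpd/error_log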

    Read the article

  • How to add URLs to wiki (MediaWiki) powered documentation?

    - by Ian Boyd
    We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have are hyperlinks to the various virtual machines. An example of a command, as it needs to run, is:

        vmrc://solo.avatopia.com:5901/Windows 2000 Server

    My first thought was to convert the URL into a link:

        [vmrc://solo.avatopia.com:5901/Windows 2000 Server]

    But the content renders literally as above: with the square brackets and all. Testing with other URL protocols:

        [http://solo.avatopia.com]
        [ftp://solo.avatopia.com]
        [ldap://solo.avatopia.com]
        [vmrc://solo.avatopia.com]

    Only the first two work and are converted to hyperlinks; the other two remain as literal text. How can I add such URLs to MediaWiki-powered documentation?

    Original question: We have an internal company wiki. The wiki engine being used is MediaWiki, the wiki engine that runs Wikipedia. Some of it contains IT stuff. One of the things I want to have are hyperlinks to the various virtual machines. An example of a command, as it needs to run, is:

        \\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/Windows 2000 Server

    If launching from a command prompt, you have to quote the spaces:

        C:\>"\\solo\VMRC Client\vmrc.exe" solo.avatopia.com:5901/"Windows 2000 Server"

    My first thought in converting the above for use on our wiki site was to simply HTML-ify it:

        file://\\solo\VMRC Client\vmrc.exe solo.avatopia.com:5901/&quot;Windows 2000 Server&quot;

    but MediaWiki only converts file://\solo\VMRC to a hyperlink; the remainder is text. I've tried other random things, including enclosing the URL in square brackets. What is the correct answer? I don't want to happen to randomly stumble on some format that happens to work today and breaks in the future.
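    One common approach (a hedged sketch, assuming you can edit LocalSettings.php on the wiki server): MediaWiki only turns a scheme into an external link if it is listed in $wgUrlProtocols, so the vmrc:// scheme can be registered, after which the bracketed [url label] syntax should work:

        // LocalSettings.php - hypothetical addition
        $wgUrlProtocols[] = "vmrc://";

    Spaces are still not allowed inside the URL part of a bracketed link (the first space separates the URL from its label), so the "Windows 2000 Server" portion would need to be percent-encoded, e.g. Windows%202000%20Server.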

    Read the article

  • OpenVPN: ERROR: could not read Auth username from stdin

    - by user56231
    I managed to set up OpenVPN, but now I want to integrate a username/password authentication method. Even though I haven't added auth-nocache to the server config, whenever I try to connect, the client side returns the following message:

        ERROR: could not read Auth username from stdin

    My server.conf file contains basic stuff; everything works until I try to implement this form of authentication:

        mode server
        dev tun
        proto tcp
        port 1194
        keepalive 10 120
        plugin /usr/lib/openvpn/openvpn-auth-pam.so login
        client-cert-not-required
        username-as-common-name
        auth-user-pass-verify /etc/openvpn/auth.pl via-env
        ca /etc/openvpn/easy-rsa/2.0/keys/ca.crt
        cert /etc/openvpn/easy-rsa/2.0/keys/server.crt
        key /etc/openvpn/easy-rsa/2.0/keys/server.key
        dh /etc/openvpn/easy-rsa/2.0/keys/dh1024.pem
        user nobody
        group nogroup
        server 10.8.0.0 255.255.255.0
        persist-key
        persist-tun
        #persist-local-ip
        status openvpn-status.log
        verb 3
        client-to-client
        push "redirect-gateway def1"
        push "dhcp-option DNS 10.8.0.1"
        log-append /var/log/openvpn
        comp-lzo

    I searched all over the net for a solution and all answers seem to be related to the auth-nocache parameter, which I haven't set. The directive auth-user-pass-verify /etc/openvpn/auth.pl via-env points to a script which is executed to perform the authentication. A failed authentication should result in exit 1 while a successful one should result in exit 0. For testing, that script auth.pl returns exit 0 no matter what the input is, but it seems that the file is not executed before the error is raised. auth.pl file contents:

        #!/usr/bin/perl
        my $user = $ENV{username};
        my $passwd = $ENV{password};
        printf("$user : $passwd\n");
        exit 0;

    Any ideas?
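    A hedged observation (not stated in the post): that error is produced on the client, not the server - the client knows the server expects a username and password but has nowhere to read them from (for example when started as a service with no terminal attached). A minimal sketch of the client-side directive, assuming a standard client.conf/.ovpn, would be one of:

        # prompt interactively when run from a terminal
        auth-user-pass

        # ...or read credentials from a file: first line username, second line password
        # (keep the file readable by root only)
        auth-user-pass /etc/openvpn/credentials.txt

    Only one of the two forms is used at a time; with the file form, the server's auth.pl should then see the username and password in its environment.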

    Read the article

  • Best CPUs for speeding up compiling times of C++ w/ DistGCC

    - by Jay
    I'm putting together a distributed build farm with DistGCC to speed up our team's compile times and am just looking for thoughts on which processors to use in the hosts. Are we going to get a noticeable decrease in time using 8 cores vs. 4 hyperthreaded cores? Is there a big difference in time between i7 and Xeon? Etc., etc. I just need advice from people who've put together kick-a build clusters. We've got a majority of the normal things to speed up builds in place (precompiled headers, ccache, local gigabit connections between them, tons of RAM, etc.), so please just give advice on the best processor to use. Money is a factor, but anything's doable if the performance increase is noticeable. Thanks. Jay

    EDIT: Although any advice IS welcome, please refrain from "Do this first" posts, as we're not planning on skimping on things like SSDs or maxed-out RAM. My personal system is an iMac quad-core i5 with 8GB of RAM. When I build our project locally, my processor floats around 99-100% a majority of the time, which makes me assume it is the bottleneck even if everything else were made faster. My RAM, on the other hand, doesn't even get close to maxing out. It's also worth noting that I did research this; however, every discussion I could find was primarily about gaming machines, which is obviously a different beast in usage. These machines won't even have monitors or anything but integrated graphics, since they have one purpose: build freakin fast. (Hopefully.)
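    For context on how the core count turns into build throughput (a sketch with made-up host names, not from the post): distcc farms are usually driven by oversubscribing make's job count across the helpers, so more real cores per host directly become more parallel compile jobs.

        # on each build host: run the distcc daemon and allow the office subnet
        distccd --daemon --allow 192.168.0.0/24

        # on the machine driving the build: list the helpers, then oversubscribe -j
        export DISTCC_HOSTS="localhost/4 buildbox1/8 buildbox2/8"
        make -j20 CC="distcc gcc" CXX="distcc g++"

    The /N suffix caps the jobs sent to each host, which is a convenient place to encode "8 real cores" vs. "4 cores plus hyperthreading" when comparing candidate CPUs.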

    Read the article

  • MySQL Cluster not working on Ubuntu

    - by user53864
    I am unable to set up MySQL Cluster on Ubuntu servers. As a starting point I started from the link, but I have not been successful; the tarball version I downloaded is 6.3.45. As I only wanted to test the cluster, the data node and SQL node are on the same machines, but the SQL nodes never appear as connected in the management node console, which looks like this:

        [ndbd(NDB)]     2 node(s)
        id=2    @192.168.1.107  (Version: version number, Nodegroup: 0, Master)
        id=3    @192.168.1.108  (Version: version number, Nodegroup: 0)

        [ndb_mgmd(MGM)] 1 node(s)
        id=1    @192.168.1.105  (Version: version number)

        [mysqld(API)]   2 node(s)
        id=4 (not connected, accepting connect from 192.168.1.107)
        id=5 (not connected, accepting connect from 192.168.1.108)

    On all 3 machines mysql-server and mysql-client (apt-get install mysql-server mysql-client) were already installed; I completely stopped them and also removed them from system start-up. Now mysqld comes from the extracted cluster tarball (/usr/local/mysql/support-files/mysql.server). For testing, I created a test database on both data nodes, but the tables are not syncing to the other node either. I checked many links and the configurations remain similar in all of them, but somewhere it's going wrong. Is any extra package required? Could anyone help me here? I have been trying this for the past 3 days... Thank you!
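    A hedged guess at the two usual culprits (not from the post): the mysqld processes have to be started with NDB support and pointed at the management node, and tables only replicate between data nodes when they are created with the NDB storage engine; ordinary InnoDB/MyISAM tables stay local to the SQL node they were created on. A minimal my.cnf sketch for each SQL/data node (file location assumed; management node IP taken from the output above):

        [mysqld]
        ndbcluster
        ndb-connectstring=192.168.1.105

        [mysql_cluster]
        ndb-connectstring=192.168.1.105

    After restarting mysqld (and ndbd, with --initial on the first start), the id=4/id=5 API slots should show as connected in ndb_mgm, and a table created with ENGINE=NDBCLUSTER should appear on both SQL nodes.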

    Read the article

  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10MB/s transfers when copying to/from the server. I use software RAID5 (mdadm) over four 1TB disks. On top of that I then use LVM to give me a huge pool of disk space, which is then split up into multiple partitions that can be resized as and when they need it. I'm guessing this is most likely the cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <-> Ubuntu server) and hard disk performance to try and identify where my bottleneck might be?

    [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2, so that's a 300-series Atom processor with the Intel® 945GC Express chipset.

    [Edit2] I feel like such a fool! I just checked my desktop and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server I get ~35-40MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
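    A short benchmarking sketch (not from the post) that separates the three suspects - network, raw array, filesystem - so the slow one stands out; device names and paths are assumptions to adjust:

        # network only: run "iperf -s" on the Ubuntu server, then from another machine
        iperf -c <server-ip> -t 30

        # raw sequential read from the md/LVM device
        hdparm -t /dev/md0

        # sequential write through the filesystem, forcing the data to disk
        dd if=/dev/zero of=/srv/media/ddtest bs=1M count=1024 conv=fdatasync

    If iperf shows near-gigabit speeds but dd/hdparm crawl, RAID5 parity work on the Atom is the likely limit; if iperf itself is slow, the network path is the place to look.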

    Read the article

  • Macbook Pro Triple Boot OS X Lion, Windows 7 and Windows 8

    - by Lloyd Sparkes
    I currently have OS X Lion and Windows 7 running side by side on my MacBook Pro (Summer 2010, basic model). However, I need to get Windows 8 running as well in this mix (a virtual machine is not good enough; I need the performance). I have created a suitably sized partition (80GB) that is recognizable in Boot Camp. However, every time I try to boot from the USB stick (which worked to install Windows 8 on my PC) using the latest version of rEFIt, it just boots Windows 7 and not the Windows 8 installer. I cannot start the installation from within Windows 7, as it would just install over Windows 7. I'm guessing the Boot Camp emulation is doing something weird to stop the "Press any key to install Windows..." message from appearing (which should happen if the installer detects Windows is already installed, e.g. if you left your install disk in). Is there a way to get around this / force the installer to start? (Note I cannot start the Windows 7 installer either, if I wanted to install a second copy of Windows 7 to upgrade to Windows 8.)

    Read the article

  • Prolific USB-to-Serial Comm Port significantly slower under Windows 7 comparing to Windows XP

    - by Dmitry S
    Not sure if this question should be asked here or on Super User, but if we get an answer here it may be useful for others. I am using a Prolific-chip USB-to-serial adapter with a device on a serial port, and I have the latest version of the driver installed: 1.3.0 (2010-7-15). When I use my device with this adapter on my main Windows 7 (32-bit) system, it takes 8-9 seconds to send a command through to the device. However, when I do the same thing on a different Windows XP system (an old laptop I borrowed for testing) it only takes 2-3 seconds. I have made sure that the port settings and other variables are the same between the systems. I also tested on a third laptop (also running Windows 7) and again got a significant delay. So the question is whether anyone else has experienced the same problem and found a solution. I would like to avoid moving to an XP system for what I need to achieve, so that's my last option. Thanks in advance.

    Read the article

  • WT-NMP - PHP-CGI randomly stops running with no error log

    - by alexfontaine
    We have recently installed WT-NMP and are currently running php-cgi with PHP 5.4.24. We are running fairly simple PHP scripts, and when testing, everything runs fine. Over the weekend we wanted to keep the server running to test it over a longer period of time. The server and scripts ran fine all day on Friday, but sometime late on Saturday php-cgi stopped running. There are no errors in the error log (C:\WT-NMP\log). In the configuration (php.ini) I have the following options set:

        error_reporting = E_ALL
        display_errors = On
        display_startup_errors = On
        log_errors = On
        html_errors = On
        error_log = "c:/wt-nmp/log/php_error.log"

    We also have the standard nginx.conf error logs:

        access_log "c:/wt-nmp/log/nginx_access.log";
        error_log  "c:/wt-nmp/log/nginx_error.log" warn;

    Since the log directory is empty, I am assuming that the running PHP scripts and general nginx operations are not what is causing php-cgi to stop. So my questions are: What else could cause php-cgi to stop running? Are there any other logging options we could turn on that could help us track this down? Are there other log locations we should be looking at? Thanks!

    Read the article

  • samba4 dc "network location cannot be reached"

    - by mitchell babies peters
    To clear the air: CentOS 6.4 (maybe 6.3) as the server, running Samba 4.0.10, trying to add a Windows 7 client that has connectivity to the server. This is what Windows shouts at me as it mocks my dependence on network infrastructure: "The network location cannot be reached." I have access to the domain controller (DC). I'm already using the DC as the domain name server (DNS), the name is correctly resolving, and it is correctly forwarding outbound traffic. I have nothing but self-taught experience with Active Directory (AD), so if I am missing something obvious, please shout it out, but keep the verbal abuse to a minimum. I searched for Samba 4 DC plus my error and found nothing relevant to my issue; if I missed something, please point me in that direction. The weekend is just starting as I write this, so I probably won't be back on to check this post for a day or three, but I might, because this mystery is killing me. I followed the "Samba 4 as a DC" guide here and supplemented gaps with this. I have tested Kerberos and NTP, and set my DC as the clock to sync to in my Windows client; it appears to be a very small fraction of a second off, so that shouldn't be it. Also, the firewall and SELinux are both off for testing. I have also tried disabling IPv6 and cleared the registry of IPv6 records (allegedly the default Samba 4 DC presents itself as Windows Server 2003, which allegedly does not support or tolerate the existence of IPv6 - fair warning, I heard this on the internet, so it is probably a lie). I have tried a few other things that I have since forgotten, because I have been doing this for a day and a half now. Ideas welcome. Suggestions for alternatives are also welcome, as long as they are free; I was given a budget of $0 and told to implement Active Directory (with no prior knowledge of Active Directory at that point).
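    A common first check when a Windows 7 join fails this way (a sketch, not from the post; the domain name is a placeholder): the client must use the Samba DC as its only DNS server, and the AD SRV records must resolve from the client's point of view.

        # on the DC (or any box pointed at it for DNS)
        host -t SRV _ldap._tcp.samdom.example.com
        host -t SRV _kerberos._udp.samdom.example.com
        host -t A dc1.samdom.example.com

        # on the Windows 7 client (cmd.exe):
        #   nslookup -type=SRV _ldap._tcp.samdom.example.com

    If those lookups don't return the DC's address on the client, the join generally fails with this kind of "network location" error before credentials are ever checked.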

    Read the article

  • exim4 redirect mail sent to *@domain1.example.com to *@domain2.example.com

    - by nightcoder
    Current situation: we have a VPS that hosts the website example.org. Exim is configured to work as a smarthost; all emails sent through Exim are successfully relayed to another mail server (running on example.com).

    Goal: to forward mail sent to *@example.org to *@example.com, i.e. change the recipient's address from *@example.org to *@example.com.

    Problem: if I send email to an address *@example.org, Exim doesn't seem to change the address; it still relays the message to the other mail server, but the recipient is still *@example.org. Maybe the redirect is not applied for some reason.

    Configuration and logs:

    /etc/exim4/update-exim4.conf.conf:

        dc_eximconfig_configtype='smarthost'
        dc_other_hostnames=''
        dc_local_interfaces=''
        dc_readhost='example.org'
        dc_relay_domains='example.org'
        dc_minimaldns='false'
        dc_relay_nets='0.0.0.0/32'
        dc_smarthost='example.com::26'
        CFILEMODE='644'
        dc_use_split_config='false'
        dc_hide_mailname='true'
        dc_mailname_in_oh='true'
        dc_localdelivery='maildir_home'

    /etc/exim4/conf.d/router/999_exim4-config_redirect (created by me):

        domain_redirect:
          debug_print = "R: forward for $local_part@$domain"
          driver = redirect
          domains = example.org
          data = [email protected]

    (For now, data is set to a specific address for simplicity and testing.)

    Exim log when sending email to [email protected] (which should be redirected to [email protected]):

        2012-03-20 19:40:07 1SA4ud-0005Dw-7k <= [email protected] U=www-data P=local S=657
        2012-03-20 19:40:08 1SA4ud-0005Dw-7k => [email protected] R=smarthost T=remote_smtp_smarthost H=domain2.com [184.172.146.66] X=TLS1.0:RSA_AES_256_CBC_SHA1:32 DN="C=US,2.5.4.17=#13053737303932,ST=TX,L=Houston,STREET=Suite 400,STREET=11251 Northwest Freeway,O=HostGator.com,OU=HostGator.com,OU=Comodo PremiumSSL Wildcard,CN=*.hostgator.com"
        2012-03-20 19:40:08 1SA4ud-0005Dw-7k Completed

    So, the address is not changed :( Please help! I've been trying to make this work for half a day already :(
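    A hedged guess at why the router never fires (not from the post): with dc_use_split_config='false', Debian's update-exim4.conf generates the running configuration from /etc/exim4/exim4.conf.template and ignores the files under /etc/exim4/conf.d/, so a router dropped into conf.d/router/ never becomes part of the config. One way to check and fix, as a sketch:

        # see whether the new router is in the running configuration at all
        exim4 -bP router_list

        # either switch to split config so conf.d/ is used...
        sed -i "s/dc_use_split_config='false'/dc_use_split_config='true'/" /etc/exim4/update-exim4.conf.conf
        update-exim4.conf && service exim4 restart

        # ...then test how an address would be routed
        exim4 -bt someone@example.org   # hypothetical test address

    Router order also matters: the 999_ prefix places the redirect router after the smarthost router defined in 200_exim4-config_primary, so it would likely need a lower number to get a chance at the address first.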

    Read the article

  • About load average in htop: how to decide if the server is still doing OK?

    - by Joe Huang
    I use htop to monitor my web server. It has been quite loaded recently, and the load average shows something like this:

        Load average: 3.10 2.56 1.63

    I searched the web about these numbers and found an article about it: http://blog.scoutapp.com/articles/2009/07/31/understanding-load-averages

    The article says that if I have 2 CPUs, 2.0 means 100% CPU utilization. My VPS has two CPUs, so what does 3.1 mean? How could it exceed 100% CPU utilization? And from these numbers, should I be worried about the load now? The performance seems totally fine, this is a managed VPS, and the hosting company has not sent me any warning about it. During the daytime the load average always shows these high numbers... here is another snapshot taken while writing:

        Load average: 3.03 2.77 1.97
        Load average: 0.41 1.29 1.60   <---- 5 minutes later

    So I am wondering how much room is left for this site to grow with the current configuration, and what kind of proactive actions I should take in advance. I don't want to wait until the server bursts. Thanks.
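    A rule of thumb (not from the question): on Linux the load average counts runnable tasks plus tasks stuck in uninterruptible I/O wait, so it can exceed the CPU count without the CPUs themselves being saturated; dividing by the number of cores gives the per-core figure the linked article talks about.

        # number of CPUs, then the 1/5/15-minute load per CPU
        nproc
        awk -v c="$(nproc)" '{printf "per-core load: %.2f %.2f %.2f\n", $1/c, $2/c, $3/c}' /proc/loadavg

    On a 2-CPU VPS, a sustained 1-minute value around 3.1 means roughly one task was waiting for a CPU (or for disk) at any given moment, which is worth watching but not necessarily an emergency while response times are still fine.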

    Read the article

  • Simple end-to-end load and bottleneck monitoring for DB-based web sites

    - by T.J. Crowder
    What tools do you use / would you recommend for monitoring a Linux-based, DB-backed website's servers for bottlenecks and load? The obvious goal is to know when growth has gotten to the point where it's necessary to scale up (or out) one or more of the bits and pieces, because the current system won't manage the load if an observed trend continues. I'm looking for general recommendations based on standard Linux load metrics, disk I/O metrics, network I/O metrics, etc., but if specifics are helpful: it'll be Tomcat 6 using APR (possibly with a Varnish or similar caching and balancing front-end), MySQL, and either Ubuntu 8.04 LTS or 10.04 LTS depending on timing. I know about top, vmstat, iostat, bwmon and the like that collect and parse info from the /proc file system (et al.), and obviously MySQL provides a lot of queryable performance information. I could use those directly, probably automating periodic monitoring logs with scripts and such, but I have a suspicion that I'd be reinventing the wheel... For example, Hyperic HQ seems to be along the lines of what I'm looking for. Others?

    Meta: I tend to think of "recommendation" questions as needing to be CW because there's no one right answer, but I see a lot of these here that aren't CWs, so I haven't marked it as one. I'll happily do so if enough people think I should.

    Read the article

  • Intel Atom overheating in ASUS EEE Box 1501P

    - by Sergey L.
    I have had an ASUS EEE Box 1501P for just a little over a year. Of course it breaks two months after the warranty runs out. http://www.asus.com/Eee/EeeBox_PC/EeeBox_PC_EB1501P/ I have been using the box as a home media center, running mostly 24/7, often pausing a video overnight. Since last week the fan has been running extremely loud. After some digging I found that the Intel Atom CPU in it is overheating and the built-in sensor is reporting temperatures well over 105°C. This got me worried, so I took the unit apart, completely vacuumed the heat sink and oiled the fan, but the unit still shows the same behaviour. After turning it on and just observing the hardware monitor in the BIOS, the temperature slowly rises from 40°C to over 95°C in approximately 5 minutes. I am running the newest BIOS and a lightweight Linux OpenELEC OS with XBMC on it. Now I am wondering if it could be a faulty heat sensor in the Atom. The recommended running temperature is up to 85°C, but I have not detected any performance hit when running at the above-mentioned 105°C and there seem to be no software faults. How can an Atom with an attached heat sink and a fan running at full capacity even get this hot in the first place at zero load? Aren't those things designed to generate virtually no heat? Could it be a faulty heat sensor? What should I try to fix this? I would prefer not to damage the CPU, since it is soldered to the motherboard and cannot be replaced. I could remove the heat pipe/heat sink, but it is getting hot, so heat is properly transferring from the CPU to the heat pipe, the fan is running at full capacity and is recently oiled, and warm air is making it out of the exhaust. Edit: One more note: the northbridge (or whatever it is called nowadays) is on the same heat pipe.

    Read the article

  • Karmic iptables missing kernel modules on OpenVZ container

    - by luison
    After an unsuccessful P2V migration of my Ubuntu server to an OpenVZ container, which I am stuck with, I thought I would try a reinstall based on a clean OpenVZ template for Ubuntu 9.10 (from the OpenVZ wiki). When I try to load my iptables rules on the VM I get errors which I believe are related to kernel modules not being loaded in the VM from the /vz/XXX.conf template model. I've been testing with a few posts I've found, but I was stuck with the error:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Could not load /lib/modules/2.6.24-10-pve/modules.dep: No such file or directory
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I read about the template not loading all iptables modules, so I added modules to the XXX.conf of the VZ virtual machine like this:

        IPTABLES="ip_tables iptable_filter iptable_mangle ipt_limit ipt_multiport ipt_tos ipt_TOS ipt_REJECT ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_LOG ipt_length ip_conntrack ip_conntrack_ftp ip_conntrack_irc ipt_conntrack ipt_state ipt_helper iptable_nat ip_nat_ftp ip_nat_irc"

    As the error remained, I read that I should rebuild the module dependencies on the virtual machine:

        depmod -a

    but this returned an error:

        WARNING: Couldn't open directory /lib/modules/2.6.24-10-pve: No such file or directory
        FATAL: Could not open /lib/modules/2.6.24-10-pve/modules.dep.temp for writing: No such file or directory

    So I read again about creating the directory empty and redoing "depmod -a". I now don't get the dependency error, but I get this instead and don't have a clue how to proceed:

        WARNING: Deprecated config file /etc/modprobe.conf, all config files belong into /etc/modprobe.d/.
        FATAL: Module ip_tables not found.
        iptables-restore v1.4.4: iptables-restore: unable to initialize table 'raw'
        Error occurred at line: 2
        Try `iptables-restore -h' or 'iptables-restore --help' for more information.

    I understand that iptables rules have to be different on the VM, and that perhaps some of the rules we are trying to apply (from our physical server) are not compatible, but these are just source IP and destination port checks that I would like to have available. I've heard that there are no issues with this on the CentOS template, so I understand it is to do with the VM config. Any help would be greatly appreciated.
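    A hedged note on why depmod/modprobe inside the container cannot work (not from the post): an OpenVZ container has no kernel of its own - 2.6.24-10-pve is the host's Proxmox/OpenVZ kernel - so the netfilter modules have to be loaded on the hardware node and then granted to the container through the IPTABLES= line. A sketch of the host-side steps (container ID 101 is hypothetical):

        # on the hardware node, load the modules named in the container's IPTABLES= line
        modprobe -a ip_tables iptable_filter iptable_mangle iptable_nat \
                    ip_conntrack ipt_state ipt_REJECT ipt_LOG ipt_multiport ipt_limit

        # restart the container so it sees the granted modules, then retry iptables-restore inside it
        vzctl restart 101

    The 'raw' table that iptables-restore complains about is not in the granted list either; OpenVZ typically only exposes the filter/nat/mangle tables to containers, so the corresponding lines may need to be dropped from the saved rules.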

    Read the article

  • Linux Experts Riddle: network output of 10 Mbit/s on a 10 Gbit/s NIC

    - by user150324
    I have two CentOS 6 servers and I am trying to transfer files between them. The source server has a 10 Gbit/s NIC and the destination server has a 1 Gbit/s NIC. Regardless of the command used or the protocol, the transfer speed is ~1 megabyte per second; the goal is at least a couple dozen MB per second. I have tried rsync (also with various ciphers), scp, wget, aftp, and nc. Here are some testing results with iperf:

        [root@serv ~]# iperf -c XXX.XXX.XXX.XXX -i 1
        ------------------------------------------------------------
        Client connecting to XXX.XXX.XXX.XXX, TCP port 5001
        TCP window size: 64.0 KByte (default)
        ------------------------------------------------------------
        [  3] local XXX.XXX.XXX.XXX port 33180 connected with XXX.XXX.XXX.XXX port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0- 1.0 sec  1.30 MBytes  10.9 Mbits/sec
        [  3]  1.0- 2.0 sec  1.28 MBytes  10.7 Mbits/sec
        [  3]  2.0- 3.0 sec  1.34 MBytes  11.3 Mbits/sec
        [  3]  3.0- 4.0 sec  1.53 MBytes  12.8 Mbits/sec
        [  3]  4.0- 5.0 sec  1.65 MBytes  13.8 Mbits/sec
        [  3]  5.0- 6.0 sec  1.79 MBytes  15.0 Mbits/sec
        [  3]  6.0- 7.0 sec  1.95 MBytes  16.3 Mbits/sec
        [  3]  7.0- 8.0 sec  1.98 MBytes  16.6 Mbits/sec
        [  3]  8.0- 9.0 sec  1.91 MBytes  16.0 Mbits/sec
        [  3]  9.0-10.0 sec  2.05 MBytes  17.2 Mbits/sec
        [  3]  0.0-10.0 sec  1.68 MBytes  14.0 Mbits/sec

    I guess the hard disks are not the bottleneck here.
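    A couple of quick checks worth running (a sketch, not from the post; interface names are assumptions): numbers in the 10-17 Mbit/s range often point at a link that negotiated 10 Mbit or half duplex somewhere in the path, or at a tiny TCP window, and both are easy to rule out.

        # what did each NIC actually negotiate? (run on both ends)
        ethtool eth0 | grep -E 'Speed|Duplex'

        # retest with a larger window and parallel streams to rule out window limits
        iperf -s -w 1M                               # on the destination
        iperf -c XXX.XXX.XXX.XXX -w 1M -P 4 -t 30    # on the source

    If ethtool already shows full-speed full duplex on both ends and iperf still crawls, the next suspects are an intermediate switch port or traffic shaping between the two machines.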

    Read the article

  • Will Parallel-port dongle work on USB-to-Parallel Adapter?

    - by Gary M. Mugford
    We have a niche program running on a Win2K laptop that uses a security dongle connected to a parallel port for authentication. The laptop is getting creaky, and I spent a frustrating night last night shopping various websites for a new laptop that has a parallel port. It seems I'm about three years too late [G]. The question I have is: if I buy a new(ish) laptop and use a USB-to-parallel-port adapter, will the security dongle work? I know I'm not being specific about the app, but it's one most people wouldn't have heard of anyway. I've been guessing the answer to my question is no, since the app won't know to send a request out to the non-existent port. But if the process is actually that the dongle sends a message INTO the computer every now and then, then it might work. And I'm not sure whether the dongle is needed only at program startup or at random times. The dongle is a 'permanent' addition to the old laptop. This is all about the money: we can have a newly updated version of the program (which won't add any features we need) for the princely sum of $2700, or we can spend $500 on a refurbished laptop still running WinXP, add a 30-buck adapter and keep the same solid, stolid performance we've come to appreciate. But it all comes down to the dongle behaviour. Oh, and a dock won't work; the whole laptop issue is about moving about the various nooks and crannies of the building with laptop in hand. Thanks for any suggestions/guidance. GM

    Read the article

  • How to get an ARM CPU clock speed in Linux?

    - by MiKy
    I have an ARM-based embedded machine built around an S3C2416 board. According to the specifications I have available, there should be a 533 MHz ARM9 (ARM926EJ-S according to /proc/cpuinfo); however, the software running on it "feels" slow compared to the same software on my Android phone with a 528 MHz ARM CPU. /proc/cpuinfo tells me that BogoMIPS is 266.24. I know that I should not trust BogoMIPS regarding performance ("Bogo" = bogus), but I would like to get a measurement of the actual CPU speed. On x86 I could use the rdtsc instruction to get the time stamp counter, wait a second (sleep(1)), and read the counter again to get an approximation of the CPU speed; in my experience this value was close enough to the real CPU speed. How can I find the actual CPU speed of a given ARM processor?

    Update: I found a simple Pi calculator, which I compiled both for my Android phone and the ARM board. The results are as follows:

    S3C2416:

        # cat /proc/cpuinfo
        Processor : ARM926EJ-S rev 5 (v5l)
        BogoMIPS  : 266.24
        Features  : swp half fastmult edsp java
        ...
        # ./pi_arm 10000
        Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave
        ...
        8.50 sec. (real time)

    Android:

        # cat /proc/cpuinfo
        Processor : ARMv6-compatible processor rev 2 (v6l)
        BogoMIPS  : 527.56
        Features  : swp half thumb fastmult edsp java
        # ./pi_android 10000
        Calculation of PI using FFT and AGM, ver. LG1.1.2-MP1.5.2a.memsave
        ...
        5.95 sec. (real time)

    So it seems that the ARM926EJ-S is slower than my Android phone, but not twice as slow as I would expect from the BogoMIPS figures. I am still unsure about the clock speed of the ARM9 CPU.
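    Two places the clock usually shows up on Linux/ARM (a sketch, not from the question, and hedged because both depend on the kernel having the right drivers for this SoC):

        # only present if a cpufreq driver exists for the S3C2416 (values are in kHz)
        cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq

        # the boot log often prints the clock setup done by the Samsung SoC code
        dmesg | grep -i -E 'clock|mhz|freq'

    On ARM9 cores the BogoMIPS delay loop is commonly calibrated at roughly half the core clock, so 266.24 is consistent with a CPU actually running near 533 MHz; the "feels slow" gap to the ARMv6 phone may come from memory bandwidth, cache sizes or missing VFP rather than the clock itself.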

    Read the article

  • Different versions of iperf for windows give totally different results

    - by Albert Mata
    Measuring TCP throughput from a Windows client to a Solaris server:

    - Windows XP SP3 with iperf 1.7.0 -- returns an average around 90 Mbit/s
    - Same client, same server, but iperf 2.0.5 for Windows -- returns an average of 8.5 Mbit/s

    Similar discrepancies have been observed connecting to other servers (W2008, W2003). It's difficult to come to any conclusions when different versions of the same tool provide vastly different results. Example below:

        C:\temp>iperf -v          (from iperf.fr)
        iperf version 2.0.5 (08 Jul 2010) pthreads

        C:\temp>iperf -c solaris10
        Client connecting to solaris10, TCP port 5001
        TCP window size: 64.0 KByte (default)
        [  3] local 10.172.181.159 port 2124 connected with 10.172.180.209 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [  3]  0.0-10.2 sec  10.6 MBytes  8.74 Mbits/sec

    Abysmal performance. But now I test from the same host (Windows XP SP3, 32-bit, 100 Mbit) to the same server (Solaris 10/sparc, 64-bit and 1 Gbit, running iperf 2.0.5 with a default window of 48k) with the old iperf:

        C:\temp1>iperf -v
        iperf version 1.7.0 (13 Mar 2003) win32 threads

        C:\temp1>iperf.exe -c solaris10 -w64k
        Client connecting to solaris10, TCP port 5001
        TCP window size: 64.0 KByte
        [1208] local 10.172.181.159 port 2128 connected with 10.172.180.209 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [1208]  0.0-10.0 sec   112 MBytes  94.0 Mbits/sec

    So one iperf with a 64k window says 8.75 Mbit and the old iperf with the same window size says 94.0 Mbit. These results are constant through repeated tests. From my testing, launching iperf (old) with window size "x" and iperf (new) with window size "x", instead of producing the same or very close results, produces totally different results. The only difference I see is that the old one was compiled as win32 threads vs. pthreads, but parallelism (-P 10) appears to work in both. Does anyone have a clue, or can anyone recommend a tool that gives results I can trust?

    EDIT: Looking at traces from the (old) iperf, it sets the TCP Window Scale flag to 3 in the SYN packet; when I run the (new) iperf this is set to 0 in the initial packet. A quick analysis of the window size through the exchange shows the (old) iperf moving back and forth but mostly at 32k, while the (new) iperf mostly keeps at 64k. Maybe it will help somebody to connect the dots.
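    The EDIT already points at the likely culprit; as an illustration (a sketch, not from the post), the window-scale difference can be confirmed from the server side (tcpdump shown, Solaris' snoop can do the same) and largely neutralised by giving both clients the same explicit settings:

        # on the capture-capable side, compare the SYN options of both clients (look for "wscale")
        tcpdump -ni <interface> 'tcp[tcpflags] & tcp-syn != 0 and port 5001'

        # rerun both Windows clients with identical explicit settings for a fairer comparison
        iperf -c solaris10 -w 32k -t 20
        iperf -c solaris10 -w 32k -P 4 -t 20

    If the numbers converge once window scaling is out of the picture, the discrepancy is a client-side TCP window behaviour difference between the two builds rather than a network problem.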

    Read the article
