Search Results

Search found 21004 results on 841 pages for 'assembly load'.


  • java memory allocation under linux

    - by pstanton
    I'm running 4 Java processes with the following command: java -Xmx256m -jar ... and the system has 8 GB of memory under Fedora 12. However, it is apparently going into swap. How can that be if 4 x 256 MB = 1 GB? EDIT: Also, how can all 8 GB of memory be used with so little memory allocated to basically the only thing running? Is it Java not garbage collecting because the OS tells it it doesn't need to, or what? TOP:

      top - 20:13:57 up 3:55, 6 users, load average: 1.99, 2.54, 2.67
      Tasks: 251 total, 6 running, 245 sleeping, 0 stopped, 0 zombie
      Cpu(s): 50.1%us, 2.9%sy, 0.0%ni, 45.1%id, 1.1%wa, 0.0%hi, 0.8%si, 0.0%st
      Mem:   8252304k total,  8195552k used,    56752k free,    34356k buffers
      Swap: 10354680k total,    74044k used, 10280636k free,  6624148k cached

       PID  USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
      1948  xxxxxxxx  20   0 1624m 240m 4020 S 96.8  3.0 164:33.75 java
      1927  xxxxxxxx  20   0  139m  31m  27m R 91.8  0.4  38:34.55 postgres
      1929  xxxxxxxx  20   0 1624m 200m 3984 S 86.2  2.5 183:24.88 java
      1969  xxxxxxxx  20   0 1624m 292m 3984 S 65.6  3.6 154:06.76 java
      1987  xxxxxxxx  20   0  137m  29m  27m R 28.5  0.4  75:49.82 postgres
      1581  root      20   0  159m  18m 4712 S 22.5  0.2  52:42.54 Xorg
      2411  xxxxxxxx  20   0  309m 9748 4544 S 20.9  0.1  45:05.08 gnome-system-mo
      1947  xxxxxxxx  20   0  137m  28m  27m S 13.3  0.4  44:46.04 postgres
      1772  xxxxxxxx  20   0  135m  25m  25m S  4.0  0.3   1:09.14 postgres
      1966  xxxxxxxx  20   0  137m  29m  27m S  3.0  0.4  64:27.09 postgres
      1773  xxxxxxxx  20   0  135m  732  624 S  1.0  0.0   0:24.86 postgres
      2464  xxxxxxxx  20   0 15028 1156  744 R  0.7  0.0   0:49.14 top
       344  root      15  -5     0    0    0 S  0.3  0.0   0:02.26 kdmflush
         1  root      20   0  4124  620  524 S  0.0  0.0   0:00.88 init
         2  root      15  -5     0    0    0 S  0.0  0.0   0:00.00 kthreadd
         3  root      RT  -5     0    0    0 S  0.0  0.0   0:00.00 migration/0
         4  root      15  -5     0    0    0 S  0.0  0.0   0:00.04 ksoftirqd/0
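
    One rough way to see which processes the ~74 MB of used swap shown above actually belongs to is to read the VmSwap line that newer Linux kernels expose in /proc/<pid>/status. A minimal Python sketch, assuming the running kernel reports VmSwap:

      # Minimal sketch: list processes by swap usage, assuming the kernel
      # exposes a VmSwap line in /proc/<pid>/status (older kernels may not).
      import os

      def name_and_swap(pid):
          name, swap_kb = "?", 0
          try:
              with open("/proc/%s/status" % pid) as f:
                  for line in f:
                      if line.startswith("Name:"):
                          name = line.split()[1]
                      elif line.startswith("VmSwap:"):
                          swap_kb = int(line.split()[1])  # reported in kB
          except IOError:
              pass  # process exited or permission denied
          return name, swap_kb

      rows = []
      for pid in filter(str.isdigit, os.listdir("/proc")):
          name, kb = name_and_swap(pid)
          if kb:
              rows.append((kb, pid, name))

      for kb, pid, name in sorted(rows, reverse=True):
          print("%8d kB  pid %-6s %s" % (kb, pid, name))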


  • Can a VM perform better when only two cores instead of four cores are presented to it?

    - by arcain
    We had a VMware VM at work with two cores allocated to it that ran a pretty heinous process in IIS. Under load the process was maxing out the CPU usage on both cores, so we asked our system engineers to present the other two cores of the physical processor to the VM. The engineer immediately said that this would not improve performance at all, but would make the VM perform worse. That statement didn't make much sense to me, and I'm wondering how what the engineer said could be true. Are there actually cases where four cores presented to a VM would cause worse performance than two cores on the same physical hardware? Let's assume an ideal situation where there's only one VM on the host server, so nothing is being shared with other OS instances. I believe the physical server had a single quad-core processor and was most likely hosting multiple VMs. I don't really know what version of ESX was running on the host, nor do I know with certainty what the physical processor config was, but from within the VM I had access to, I saw two 3.33 GHz AMD processors. In the end, I never got to test the engineer's assertion because 1) while we were trying to get the VM upgraded, we were able to optimize the process and reduce its CPU consumption, and 2) we ended up migrating to a different VM on another ESX server which had four cores presented to it.


  • What are the practical differences between an IP address and a server?

    - by JMC Creative
    My understanding of IPs and other DNS-type server-related issues really falls short (read: extreme noob). I know a dedicated server would increase speed. What, if any, difference in speed would a dedicated IP make? Am I correct in understanding the Best Practices from Yahoo that I could use the second IP to serve up some content, which would increase the number of parallel downloads for the user? Or are both IPs (purchased from the same hosting account) going to point to the same server? Or how does it work? Are there other optimization things I should be aware of when thinking of purchasing a dedicated IP? Clarification: I am talking about the speed of serving the web pages, i.e. the speed of my website. Yes, I know that an IP and a server are completely different things, not even opposites, just different. But this, indeed, is my question! The question reformulated: will having a second (dedicated) IP for my website speed up the time it takes to load and display for the user? Or does that have nothing at all to do with the IP, and is it only a server issue? I'm sorry if this is still unclear. This is a real question though, I may just not be wording it well.


  • Ubuntu's garbage collection cron job for PHP sessions takes 25 minutes to run, why?

    - by Lamah
    Ubuntu has a cron job set up which looks for and deletes old PHP sessions:

      # Look for and purge old sessions every 30 minutes
      09,39 * * * *   root   [ -x /usr/lib/php5/maxlifetime ] \
          && [ -d /var/lib/php5 ] && find /var/lib/php5/ -depth -mindepth 1 \
          -maxdepth 1 -type f -cmin +$(/usr/lib/php5/maxlifetime) ! -execdir \
          fuser -s {} 2> /dev/null \; -delete

    My problem is that this process is taking a very long time to run, with lots of disk IO. Here's my CPU usage graph: the cleanup runs are represented by the teal spikes. At the beginning of the period, PHP's cleanup jobs were scheduled at the default 09 and 39 minute marks. At 15:00 I removed the 39 minute entry from cron, so a cleanup job twice the size runs half as often (you can see the peaks get twice as wide and half as frequent). Here are the corresponding graphs for IO time and disk operations. At the peak, where there were about 14,000 sessions active, the cleanup can be seen to run for a full 25 minutes, apparently using 100% of one core of the CPU and what seems to be 100% of the disk IO for the entire period. Why is it so resource intensive? An ls of the session directory /var/lib/php5 takes just a fraction of a second. So why does it take a full 25 minutes to trim old sessions? Is there anything I can do to speed this up? The filesystem for this device is currently ext4, running on Ubuntu Precise 12.04 64-bit. EDIT: I suspect that the load is due to the unusual "fuser" step (since I expect a simple rm to be a damn sight faster than the performance I'm seeing). I'm going to remove the use of fuser and see what happens.
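
    For a sense of how cheap the sweep is without the per-file fuser check, here is a minimal Python sketch that removes session files older than a given age. The path and lifetime are assumptions (the defaults used by the distro job), it checks mtime rather than the ctime the find command uses, and it deliberately skips the in-use test that fuser performs, so it is an illustration rather than a drop-in replacement:

      # Minimal sketch: delete PHP session files older than MAX_LIFETIME_MIN minutes.
      # Assumed values: the default Debian/Ubuntu session path and a 24-minute
      # lifetime (normally printed by /usr/lib/php5/maxlifetime).
      import os
      import time

      SESSION_DIR = "/var/lib/php5"
      MAX_LIFETIME_MIN = 24

      cutoff = time.time() - MAX_LIFETIME_MIN * 60
      removed = 0
      for name in os.listdir(SESSION_DIR):
          path = os.path.join(SESSION_DIR, name)
          try:
              # mtime is used here as an approximation of find's -cmin check
              if os.path.isfile(path) and os.stat(path).st_mtime < cutoff:
                  os.remove(path)
                  removed += 1
          except OSError:
              pass  # file vanished or permission denied; skip it
      print("removed %d stale session files" % removed)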


  • Role of MBR in the booting process

    - by pg4421
    I am new to Stack Overflow, so please correct me if my question seems irrelevant or stupid. I read here, in Booting Process: "The job of the primary boot loader is to find and load the secondary boot loader (stage 2). It does this by looking through the partition table for an active partition. When it finds an active partition, it scans the remaining partitions in the table to ensure that they're all inactive. When this is verified, the active partition's boot record is read from the device into RAM and executed." My question: I have a hard disk holding two operating system images, Windows and Ubuntu, so shouldn't both of the partitions they reside in be active? Why do we always have only one active partition? (I know that the active partition is one of the primary partitions, but why do we give special status to just one primary partition?) I am a bit confused. Please help me clear this up. Thank you so much.
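
    For illustration, the "active" flag is literally one byte at the start of each of the four primary-partition entries in the MBR. A minimal Python sketch that reads a 512-byte MBR dump (mbr.bin is a hypothetical local copy, e.g. taken with dd) and reports which entry carries the 0x80 boot flag:

      # Minimal sketch: inspect the four primary-partition entries in a classic MBR.
      # "mbr.bin" is an assumed 512-byte dump of sector 0 of the disk.
      import struct

      with open("mbr.bin", "rb") as f:
          mbr = f.read(512)

      assert mbr[510:512] == b"\x55\xaa", "missing MBR boot signature"

      for i in range(4):
          entry = mbr[446 + 16 * i : 446 + 16 * (i + 1)]
          boot_flag, part_type = entry[0], entry[4]
          lba_start, num_sectors = struct.unpack("<II", entry[8:16])
          status = "ACTIVE (0x80)" if boot_flag == 0x80 else "inactive"
          print("entry %d: type 0x%02x, start LBA %d, %d sectors, %s"
                % (i, part_type, lba_start, num_sectors, status))

    Generic MBR boot code simply boots the single entry flagged active, which is why partitioning tools insist on exactly one; multi-boot setups like GRUB chain-load or boot the other operating system themselves rather than relying on a second active flag.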


  • How to fix "The connection was restarted"?

    - by Altar
    I have a PHP script with AJAX. A few days ago, without making any changes, the script stopped working properly. The script works sometimes, but it is very slow. Other times it doesn't load completely. Yet other times it loads an empty page, or it shows a "The connection was restarted" message. We have tried loading the page from 4 different computers in two different countries (two different ISPs). There is nothing relevant in the server's log. We contacted iPage.com hosting support. We sent screenshots. And all we managed to get from them was an incomplete screenshot and the message "We tested. The page loads." So they won't admit the fault. It looks like they loaded the page once, declared it working, and went back to playing solitaire. I know their support; we have had all kinds of problems with iPage hosting. My questions are: 1. Is there any way to make this error easier to reproduce? I mean, instead of fixing the error, to get it to appear more often. 2. Is there any way to test a server's responses, to see if it drops connections?
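
    One way to make an intermittent failure like this easier to observe is to hit the page repeatedly and log slow responses and dropped connections. A minimal Python sketch using the third-party requests package; the URL is a placeholder, not the real site:

      # Minimal sketch: repeatedly request the page and record slow responses,
      # timeouts and dropped connections. The URL is a placeholder.
      import time
      import requests

      URL = "http://example.com/problem-page.php"
      ATTEMPTS = 200

      for i in range(ATTEMPTS):
          start = time.time()
          try:
              r = requests.get(URL, timeout=30)
              elapsed = time.time() - start
              if elapsed > 5 or r.status_code != 200:
                  print("attempt %d: HTTP %d in %.1f s, %d bytes"
                        % (i, r.status_code, elapsed, len(r.content)))
          except requests.exceptions.RequestException as exc:
              print("attempt %d: failed after %.1f s: %s" % (i, time.time() - start, exc))
          time.sleep(1)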


  • Install Windows 8 on SSD and Program & Users on HDD

    - by Foe
    I have been dealing with a few problems while installing Windows 8 on my computer. In my old configuration, I had Windows 7 installed on my 60 GB SSD, and my programs and user data on my 1 TB HDD, thanks to relative links. Yet while installing Windows 8 on my SSD, it made a small "system related" partition on my HDD. Plus, I'm afraid using only links is a bit cheap, and I saw lots of people messing with their registry when trying to put user data on another drive. I read a lot about optimizing Windows 8 for an SSD and putting Users on another drive, and found very similar situations that didn't quite correspond to what I was trying to achieve. Here's what I tried: http://www.eightforums.com/tutorials/4275-user-profiles-relocate-another-partition-disk.html Booting in Audit mode and using an XML answer file to relocate Users didn't work, as the version specified in the file is for a test build and I don't know what to enter if I'm using the final release. Booting with the install DVD in repair mode to copy the Users folder and create a relative link resulted in an error on the logon screen when entering my password, saying that "The profile can't be loaded" (rough translation of the error from French to English). Does anyone know how to do a clean separated install of Windows 8, with the OS on one drive and the other data on a second one? Thanks.


  • Using Openfire for distributed XMPP-based video-chat

    - by Yitzhak
    I have been tasked with setting up a distributed video-chat system built on XMPP. Currently my setup looks like this: Openfire (XMPP server) + JingleNodes plugin for video chat OpenLDAP (LDAP server) for storing user information and allowing directory queries Kerberos server for authentication and passwords In testing with one set of machines (i.e. only three), everything works as expected: I can log in to Openfire and it looks up the user information in the OpenLDAP database, which in turn authenticates my user with Kerberos. Now, I want to have several clusters, so that there is a cluster on each continent. A typical cluster will probably contain 2-5 servers. Users logging in will be directed to the closest cluster based on geographical location. Something that concerns me particularly is the dynamic maintenance of contact lists. If a user is using a machine in Asia, for example, how would contact lists be updated around the world to reflect the current server he is using? How would that work with LDAP? Specific questions: How do I direct users based on geographical location? What is the best architecture for a cluster? -- would all traffic need to come into a load-balancer on each one, for example? How do I manage the update of contact lists across all these servers? In general, how do I go about setting this up? What are the pitfalls in doing this? I am inexperienced in this area, so any advice and suggestions would be appreciated.


  • Should I completely turn off swap for linux webserver?

    - by Poma
    Recently my friend told me that it is a good idea to turn off swap on Linux webservers that have enough memory. My server has 12 GB and currently uses 4 GB (not counting cache and buffers) under peak load. His argument was that in a normal situation the server will never use all of its RAM, so the only way it can encounter an out-of-memory situation is due to some bug/DDoS/etc. If swap is turned off, the system will run out of memory, which will eventually crash the program hogging memory (most likely the web server process) and probably some other processes. If swap is turned on, it will eat both RAM and swap and eventually result in the same crash, but before that it will offload crucial processes like sshd to swap and start doing a lot of swap operations, resulting in a major slowdown. This way, when under DDoS, the system may go into a completely unusable condition due to huge lags, and I will probably be unable to log in and kill the webserver process or deny all incoming traffic (all but ssh). Is this right? Am I missing something (like the fact that a swap partition is very useful in some way even if I have enough RAM)? Should I turn it off?


  • Ubuntu loading stops ?

    - by joxnas
    I don't know why or how, but my Ubuntu's loading/booting stops right after the Ubuntu logo appears. An underscore appears in the top right of the screen: _ then it disappears, leaving the whole screen black. The version is 9.10, the kernel is .20. I have tried recovery mode and selected the "recover damaged packages" option, but it didn't do any good. I have very important files in Ubuntu that I need to copy to a pen drive or something, today. The terminal mode is working, so I think I can do this there. My questions: How can I get my Ubuntu to load? Is it possible to copy the files I need via terminal mode? I don't know if it would work with one of the previous kernels, but I had configured my GRUB (inside Ubuntu's GUI) to show only the last kernel, and now I can't select any kernel other than .20 because I don't know a way to configure GRUB except via Ubuntu's GUI. My hardware: ATI Mobility Radeon HD 4650, P7450 2.13 GHz Core Duo, 4 GB DDR2.


  • Mysql InnoDB and quickly applying large updates

    - by Tim
    Basically my problem is that I have a large table of about 17,000,000 products that I need to apply a bunch of updates to really quickly. The table has 30 columns with the id set as int(10) AUTO_INCREMENT. I have another table in which all of the updates for this table are stored; these updates have to be pre-calculated, as they take a couple of days to compute. That table is in the format [ product_id int(10), update_value int(10) ]. The strategy I'm taking to issue these 17 million updates quickly is to load all of the updates into memory in a Ruby script and group them in a hash of arrays, so that each update_value is a key and each array is a list of sorted product_ids:

      { 150 => [1,2,3,4,5,6], 160 => [7,8,9,10] }

    Updates are then issued in the format:

      UPDATE product SET update_value = 150 WHERE product_id IN (1,2,3,4,5,6);
      UPDATE product SET update_value = 160 WHERE product_id IN (7,8,9,10);

    I'm pretty sure I'm doing this correctly, in the sense that issuing the updates on sorted batches of product_ids should be the optimal way to do it with MySQL/InnoDB. I'm hitting a weird issue, though: when I was testing with ~13 million records, the updates only took around 45 minutes. Now I'm testing with more data, ~17 million records, and the updates are taking closer to 120 minutes. I would have expected some sort of speed decrease here, but not to the degree that I'm seeing. Any advice on how I can speed this up, or on what could be slowing me down with this larger record set? As far as server specs go they're pretty good: heaps of memory/CPU, and the whole DB should fit into memory with plenty of room to grow.
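
    For reference, a rough Python equivalent of that batching strategy (not the poster's actual Ruby script) might look like the sketch below. It assumes the third-party pymysql package, placeholder credentials, and a pending-updates table named product_update; chunking the IN list keeps individual statements from growing without bound:

      # Rough sketch: group pending updates by value, sort the ids, and issue
      # UPDATE ... WHERE product_id IN (...) in fixed-size chunks.
      # Credentials and the product_update table name are assumptions.
      from collections import defaultdict
      import pymysql

      CHUNK = 1000  # ids per statement; a tuning knob

      conn = pymysql.connect(host="localhost", user="app", password="secret", db="shop")
      cur = conn.cursor()

      # Load the precalculated updates into {update_value: [product_id, ...]}
      # (fetching 17M rows at once is itself memory-hungry; streaming would be better).
      groups = defaultdict(list)
      cur.execute("SELECT product_id, update_value FROM product_update")
      for product_id, value in cur.fetchall():
          groups[value].append(product_id)

      for value, ids in groups.items():
          ids.sort()  # ascending PK order plays nicely with InnoDB's clustered index
          for i in range(0, len(ids), CHUNK):
              chunk = ids[i:i + CHUNK]
              placeholders = ",".join(["%s"] * len(chunk))
              sql = "UPDATE product SET update_value = %%s WHERE product_id IN (%s)" % placeholders
              cur.execute(sql, [value] + chunk)
          conn.commit()  # commit per group to keep transactions bounded

      cur.close()
      conn.close()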


  • Powershell Script to help archive company email

    - by sec_goat
    I am attempting to use PowerShell to gather emails that pertain to a certain subject, so that these correspondences might be turned over to a legal department. I am having a couple of issues here that I would like some assistance getting past. I run the following command:

      Get-Mailbox -Database "Mailbox Database" | Export-Mailbox -ContentKeywords "Keywords To Search" `
          -TargetMailbox "sec_goat" -TargetFolder EmailSearch -StartDate "01/13/2011 12:01:00"

    This has pretty much done what I want and returned a boatload of emails, however it has also flooded my inbox with hundreds of blank calendars and contact lists. I realize now I should have used an exclusion on those folders, as well as a test environment (which we don't have). 1. How can I clean up this script so it does not include all the blank folders, contacts and calendars that DO NOT match the keyword search? 2. How do I clean up hundreds of blank contact lists and calendars in my mailbox without right-clicking and deleting each one? EDIT: I edited the post to change the scope of the question. I think my focus is less on the legal perspective and more on "How can I clean up my mess and make future archives less messy and painful?"


  • ImageMagick installation confusion

    - by Codemonkey
    Well, this is turning out to be a pain. I'm on CentOS 6.5. I saw on the PECL ImageMagick changelog that they added a load of constants for new filters, such as LANCZOS2SHARP. Some testing I did earlier suggested that Photoshop 6.5 was able to downsize photos with better clarity than my current ImageMagick's best effort of LANCZOS, so I thought I'd try to upgrade to test out the newer filters. So, first port of call: get the source from PECL for 3.2.0RC1. It installed with no problems. But, ah-ha, although it says it only requires IM 6.2.4, the filters I'm after don't work unless you have 6.6.6+. So I go to http://www.imagemagick.org/script/install-source.php to install the newest version. This is the bit that has puzzled me. It all appears to have installed fine. The tests work, passing 40 out of 40. If I run identify -version on the command line it outputs 6.8.9. But if I echo Imagick::getVersion() in PHP it shows 6.5.4, even after restarting php-fpm. rpm -qa | grep ImageMagick shows that I still have 6.5.4 installed, and locate ImageMagick also only seems to show the 6.5.4 one. I feel that the missing link here is ImageMagick-devel; do I need to install that too? How do I go about doing that? Or do I just need to reinstall the PECL extension 3.2.0RC1 now that I have the latest IM installed?


  • Our clients site is redirecting to a pill scammy site [closed]

    - by Alex Demchak
    Possible Duplicate: My server's been hacked EMERGENCY
    We usually host our clients' sites, but we aren't hosting this one. The website itself (weddle-funeral.com) works just fine. But if you load Google, search for "weddle funeral stayton oregon" and click that link, the site redirects to a scammy pill site. I went through the site and there were some PHP files in the WordPress plugins that got quarantined by my antivirus. I removed ALL non-essential files and uploaded fresh versions of all the plugins, but it's STILL redirecting from Google. I tried logging in to the cPanel (on a virtual private server), and the cPanel flashed a red warning screen: "The site's security certificate is not trusted! You attempted to reach XXXXX.com, but the server presented a certificate issued by an entity that is not trusted by your computer's operating system. This may mean that the server has generated its own security credentials, which Google Chrome cannot rely on for identity information, or an attacker may be trying to intercept your communications. You should not proceed, especially if you have never seen this warning before for this site." (Keep in mind, that's for the HOSTING account's cPanel.) Is there something on the SERVER, probably, that's causing the redirect? EDIT: .htaccess file contents:

      # BEGIN WordPress
      <IfModule mod_rewrite.c>
      RewriteEngine On
      RewriteBase /
      RewriteRule ^index\.php$ - [L]
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteRule . /index.php [L]
      </IfModule>
      # END WordPress
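
    A quick way to check for referrer- or user-agent-based cloaking from the outside is to request the page with a plain request, with a Google referer, and with a Googlebot user agent, and compare the responses. A minimal Python sketch using the third-party requests package (the domain is the one from the post; the header values are just typical examples):

      # Minimal sketch: compare responses for a plain request, a Google referer,
      # and a Googlebot user agent, without following redirects.
      import requests

      URL = "http://weddle-funeral.com/"
      CASES = {
          "plain": {},
          "google referer": {"Referer": "https://www.google.com/search?q=weddle+funeral+stayton+oregon"},
          "googlebot UA": {"User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"},
      }

      for label, headers in CASES.items():
          r = requests.get(URL, headers=headers, allow_redirects=False, timeout=20)
          location = r.headers.get("Location", "-")
          print("%-15s HTTP %d  Location: %s  body: %d bytes"
                % (label, r.status_code, location, len(r.content)))

    If the Location header or body only changes for the Google cases, the cloaking logic is running on the server (often injected into a theme or plugin PHP file, or into .htaccess), not on the visitor's machine.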


  • How to keep multiple servers in sync file wise?

    - by GForceSys
    I'm currently managing a cluster of PHP-FPM servers, all of which tend to get out of sync with each other. The application that I'm running on top of the app servers (Magento) allows admins to modify various files on the system, but now that the site is in a clustered setup, modifying a file only changes it on a single instance (one of the app servers) rather than on every machine in the cluster. Is there an open-source application for Linux that would allow me to keep all of these servers in sync? I have no problem with creating a small VM instance that can listen for changes from the machines to sync. In theory, the perfect application would have small clients running on each machine to be synced, which would talk to a master server that decides how/what to sync from each machine. I have already examined the possibility of running a centralized file server, but unfortunately my app servers are spread out between EC2 and physical machines, which makes this unfeasible. As there are multiple app servers (some of which are created dynamically depending on the load of the site), simply setting up an rsync cron job is not efficient, as the cron job would have to be modified on each machine to send files to every other machine in the cluster, and that would just be a whole bunch of unnecessary data transfers/SSH connections.
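
    As a very rough illustration of a one-way variant of the client/master idea described above (dedicated tools such as lsyncd, csync2 or unison are built for exactly this), a designated master could simply push the shared tree to every peer with rsync. Hostnames and paths below are placeholders, and this assumes key-based SSH and a single machine where admins make their edits:

      # Minimal sketch: push the shared document root from one master to each peer.
      # Hostnames and the docroot path are placeholders.
      import subprocess

      DOCROOT = "/var/www/magento/"
      PEERS = ["app02.example.com", "app03.example.com"]

      for host in PEERS:
          cmd = ["rsync", "-az", "--delete", DOCROOT, "%s:%s" % (host, DOCROOT)]
          result = subprocess.run(cmd)
          if result.returncode != 0:
              print("sync to %s failed with exit code %d" % (host, result.returncode))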


  • Different behaviour when running exe from command prompt than in windows

    - by valoukh
    Forgive me if I'm not using the correct terminology here, but I'll try to explain the issue I'm having as best I can. We use a program that allows you to view video footage from several cameras, and it works in such a way that when it is opened it automatically loads several video files within its interface. These video files are stored in a subfolder, e.g. "videos\video1.asf", etc. I didn't create the program, so I can't say what method it uses to open the files. The files are stored on a network server and are being accessed via a share/UNC path. When the program is run from Windows Explorer (by navigating to the network share and double-clicking the exe), it works perfectly. When it is run via the (elevated) command prompt (e.g. by typing \\server\path\to\file.exe), it opens, but the corresponding video files do not load. I am trying to create a script that launches the program via the command prompt, so the first step is finding out why the two actions above have different consequences. Any advice on how running executables from the command prompt could produce a different result would be greatly appreciated. Thanks.
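
    One common cause of this kind of difference (an assumption here, not something confirmed in the post) is the working directory: a program that opens "videos\video1.asf" by a relative path resolves it against the current directory, which Explorer sets to the exe's folder but a command prompt does not. A minimal Python sketch of a launcher that forces the working directory to the exe's own folder; the UNC path is a placeholder:

      # Minimal sketch: launch a program with its working directory set to the
      # folder the .exe lives in, so relative resources like "videos\video1.asf"
      # resolve the same way they do when double-clicked in Explorer.
      import ntpath
      import subprocess

      EXE = r"\\server\path\to\file.exe"   # placeholder
      subprocess.Popen([EXE], cwd=ntpath.dirname(EXE))

    The equivalent check from a plain command prompt would be to cd (or pushd, for a UNC path) into the program's folder before starting it and see whether the videos then load.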


  • How can I get (g)Vim to display the character count of the current file?

    - by OwenP
    I like to write tutorials and articles for a programming forum I frequent. This forum has a character limit per post. I've used Notepad++ in the past to write posts and it keeps a live character count in the status bar. I'm starting to use gVim more and I really don't want to go back to Notepad++ at this point, but it is very useful to have this character count. If I go over the count, I usually end up pasting the post into Notepad++ so I can see when I've trimmed enough to get by the limit. I've seen suggestions that :set ruler would help, but this only gives the character count via the current column index on the current line. This would be great if I didn't use paragraph breaks, but I'm sure you'd agree that reading several thousand characters in one paragraph is not comfortable. I read the help and thought that rulerformat would work, but after looking over the statusline format it uses I didn't see anything that gives a character count for the current buffer. I've seen that there are plugins that add this, but I'm still dipping my toes into gVim and I'm not sure I want to load random plugins before I understand what they do. I'd prefer to use something built in to vim, but if it doesn't exist it doesn't exist. What should I do to accomplish my goal? If it involves a plugin, do you use it and how well does it work?


  • Exchange 2013 really slow outside of localhost

    - by ItsJustJP
    We've got a 12-core Xeon server with 24 GB of RAM running Windows Server 2012. We've recently migrated from Exchange 2010 (which was on another server) to Exchange 2013, which resides on our new 12-core server. Accessing OWA on the Exchange server itself is fine; it's very quick and responsive. However, accessing it from any other computer connected to the domain over a 1 Gbps connection takes 10-15 seconds to load. Also running slow are the public calendars that people here need to access, again taking 10-15 seconds to open, which can sometimes cause Outlook to stop responding. Further to that, we have phones that connect via the internet (of course) to the Exchange server so people can get work emails when they are out of the office. Guess what: this is also running slow. I have searched for many solutions and have tried changing Outlook authentication methods, but there is no change in speed. The old Exchange 2010 server no longer exists, but there was no problem before the migration. Has anyone got any suggestions? Thanks :) I should also mention that the Server 2012 machine that Exchange 2013 is installed on is also the DC. Update: It would appear that any connection via HTTPS is slow. It took more than 15 minutes for an Outlook client to download 50 MB of emails (Outlook Anywhere).


  • Getting windows virtio mounted/installed for KVM

    - by Swifty
    There might be an easy answer to this; I have exhausted my search on Google for a solution. Here's my problem: I need to get Windows working on a KVM VPS with the Virtualizor control panel. As I get into the Windows installation over VNC, there's the mandatory driver installation requirement, since the HDD is on virtio. There seem to be two solutions: 1. Mount the virtio ISO (http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/) in the CD drive by unmounting the Windows ISO, and proceed with the driver installation. 2. Create a secondary CD drive and mount the virtio ISO there. Well, the first approach never seems to work: if I unload the Windows ISO and load the virtio ISO, the change is never reflected in the VNC session. With the second I have yet to be successful: I try to create a second IDE CD-ROM drive via virt-manager, but the virtio ISO (virtio-win-0.1-30.iso) is never listed there, even though I specifically placed it in the /var/lib/libvirt/images folder. Any suggestions on where I screwed up?


  • Strange 3-second tcp connection latencies (Linux, HTTP)

    - by user25417
    Our webservers with static content are experiencing strange 3 second latencies occasionally. Typically, an ApacheBench run (10000 requests, concurrency 1 or 40, no difference, but keepalive off) looks like this:

      Connection Times (ms)
                    min  mean[+/-sd] median   max
      Connect:        2   10  152.8      3    3015
      Processing:     2    8   34.7      3     663
      Waiting:        2    8   34.7      3     663
      Total:          4   19  157.2      6    3222

      Percentage of the requests served within a certain time (ms)
        50%      6
        66%      7
        75%      7
        80%      7
        90%      9
        95%     11
        98%    223
        99%    225
       100%   3222 (longest request)

    I have tried many things:
    - Apache2 2.2.9 with worker or prefork MPM, no difference (with KeepAliveTimeout 10-15)
    - Nginx 0.6.32
    - various tcp parameters (net.core.somaxconn=3000, net.ipv4.tcp_sack=0, net.ipv4.tcp_dsack=0)
    - putting the files/DocumentRoot on tmpfs
    - shorewall on or off (i.e. empty iptables or not)
    - AllowOverride None is on for /, so no .htaccess checks (verified with strace)
    - the problem persists whether the webservers are accessed directly or through a Foundry load balancer

    Kernel is 2.6.32 (Debian Lenny backports), but it occurred with 2.6.26 also. IPv6 is enabled, but not used. Does the issue look familiar to anyone? Help/suggestions are much appreciated. It sounds a bit like a SYN,ACK packet getting lost or ignored.
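
    Those ~3 second outliers are consistent with the suspicion at the end of the post: a lost SYN or SYN,ACK that is retransmitted after the kernel's initial retransmission timeout (about 3 s on kernels of that era) shows up as an occasional 3 s connect while everything else stays in milliseconds. A small probe that times bare TCP connects makes such outliers easy to spot; host and port below are placeholders:

      # Minimal sketch: time raw TCP connects in a loop and report multi-second
      # outliers. Host and port are placeholders for one of the webservers.
      import socket
      import time

      HOST, PORT = "192.0.2.10", 80
      ATTEMPTS = 1000

      slow = 0
      for i in range(ATTEMPTS):
          start = time.time()
          try:
              s = socket.create_connection((HOST, PORT), timeout=10)
              s.close()
          except OSError as exc:
              print("attempt %d: connect failed: %s" % (i, exc))
              continue
          elapsed = time.time() - start
          if elapsed > 1.0:
              slow += 1
              print("attempt %d: connect took %.3f s" % (i, elapsed))

      print("%d of %d connects took longer than 1 s" % (slow, ATTEMPTS))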


  • Does cloud computing offer this? [closed]

    - by TheBlackBenzKid
    I have some newbie questions about cloud hosting that I'd like answered, please. We are currently looking at Rackspace and getting a Windows box. This is the situation: we have 15 computers in our office, 3 printers, and a mix of wifi and wired network connections. We have a standard router, and the office shares things via Dropbox. The computers are not on Windows SBS or anything similar. We want a cloud hosting solution that will offer:
    - users can log in on any machine in the office and see the machine's software
    - users can log in on any machine in the office, open Outlook, and have their emails and signature on Exchange automatically
    - a shared company folder on the network
    - all printers automatically installed on the network
    - users can log in remotely to access emails via the web
    At the moment we have a network company saying we need an in-house Xeon server with backup and PSU, Windows SBS with a license for each machine, plus cabinets, cabling, load balancers and modification of our DNS for email. My question is this: can cloud offer this? Can we have a server in the cloud that does this? Is it possible? I mean, the computers would be connected wirelessly to this cloud, and you turn a machine on and it's hosted?


  • MicroSD card getting corrupted for no good reason

    - by ChaosR
    I recently bought a MicroSD card online. It's a SanDisk 16 GB class 2. However, it has a nasty problem: every time I fill it with my data, the FAT tables get corrupted. I've tried reformatting it and blanking it, but that doesn't seem to solve the problem. I have tried Windows and Linux (Ubuntu); both have the problem. I've used my USB MicroSD readers, and even tried putting it in my phone and writing data to it from there. All have this problem. Now the really odd thing is, besides the corrupted file tables, no program can find anything wrong with the hardware. I've tried both chkdsk and "badblocks -w"; neither gives any type of error. Now I don't know if the actual data gets corrupted, or if it's just the filesystem tables. What happens is that one or more folders start showing a load of Chinese-character-like (random UTF-8 symbols, I suppose) folder and file names, and it is impossible to do anything with those. All the other data (outside of the corrupted folders) seems fine. I've tried to test it, and the problem doesn't seem to show up until I fill the card up to about 3-4 GB. After that I can still access the data. But as soon as I eject/safely remove/unmount it, the bad things happen somehow. Next time I plug it in, the folders I most recently wrote to (but sometimes also the folders I wrote to the time before that) are all gibberish. Does anybody have any clue what might be going on here?
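
    One way to tell whether the file contents themselves are being damaged or only the FAT structures is to fill the card with files whose hashes are recorded, remount it, and re-verify. A minimal Python sketch; the mount point, file count and sizes are placeholders:

      # Minimal sketch: write pseudo-random test files and record their SHA-256
      # hashes, then re-run with "verify" after remounting the card to see
      # whether contents changed or only directory entries broke.
      import hashlib
      import json
      import os
      import sys

      MOUNT = "/media/sdcard"               # placeholder mount point
      MANIFEST = "manifest.json"            # kept on the local disk, not the card
      COUNT, SIZE = 400, 10 * 1024 * 1024   # ~4 GB of test data

      def sha256(path):
          h = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(1024 * 1024), b""):
                  h.update(chunk)
          return h.hexdigest()

      if len(sys.argv) > 1 and sys.argv[1] == "verify":
          with open(MANIFEST) as f:
              manifest = json.load(f)
          for name, digest in sorted(manifest.items()):
              path = os.path.join(MOUNT, name)
              if not os.path.exists(path):
                  print("MISSING  %s" % name)
              elif sha256(path) != digest:
                  print("CORRUPT  %s" % name)
          print("verify done")
      else:
          manifest = {}
          for i in range(COUNT):
              name = "test_%03d.bin" % i
              path = os.path.join(MOUNT, name)
              with open(path, "wb") as f:
                  f.write(os.urandom(SIZE))
              manifest[name] = sha256(path)
          with open(MANIFEST, "w") as f:
              json.dump(manifest, f)
          print("wrote %d files" % COUNT)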


  • How many iptables block rules is too many

    - by mhost
    We have a server with a quad-core AMD Opteron 2378 processor. It acts as our firewall for several servers. I've been asked to block all IPs from China. In a separate network, we have some small VPS machines (256 MB and 512 MB). I've been asked to block China on those VPSes as well. I've looked online and found lists which require about 4,500 block rules. My question is: will putting in all 4,500 rules be a problem? I know iptables can handle far more rules than that; what I am concerned about is that since these are addresses that I don't want to have access to any port, I need to put the rules before any allow. This means all legitimate traffic needs to be compared against all those rules before getting through. Will the traffic be noticeably slower after implementing this? Will those small VPSes be able to handle processing that many rules for every new packet (I'll put an established allow before the blocks)? My question is not "How many rules can iptables support?", it's about the effect these rules will have on load and speed. Thanks.
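
    Separate from the per-packet cost question, loading 4,500 rules one iptables invocation at a time is itself slow; generating an iptables-restore fragment and loading it in one go avoids that, and keeping the blocks in their own chain keeps the ruleset manageable. A minimal Python sketch; the blocklist filename is a placeholder, and the output is meant to be merged into the existing ruleset or loaded with iptables-restore --noflush rather than applied blindly:

      # Minimal sketch: turn a file of CIDR blocks (one per line) into an
      # iptables-restore fragment that drops them in a dedicated chain.
      # "china_cidrs.txt" is a placeholder for whatever blocklist is used.
      SRC = "china_cidrs.txt"
      OUT = "china-block.rules"

      with open(SRC) as src, open(OUT, "w") as out:
          out.write("*filter\n")
          out.write(":CHINA-BLOCK - [0:0]\n")
          out.write("-A INPUT -j CHINA-BLOCK\n")
          for line in src:
              cidr = line.strip()
              if not cidr or cidr.startswith("#"):
                  continue
              out.write("-A CHINA-BLOCK -s %s -j DROP\n" % cidr)
          out.write("COMMIT\n")

      print("wrote %s" % OUT)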


  • Profile not loaded correctly (Cannot access registry)

    - by xaav
    Every so often, I log on and get the following message: "User profile was not loaded correctly. You have been logged on with a temporary profile. Changes you make to this profile will be lost when you log off. Please see the event log for details or contact your administrator." This almost always happens when somebody else has been on the computer for a while and then I log on. It never used to happen, but now it happens pretty often. My profile is not permanently corrupted; all I have to do is restart the computer, but this annoys me and I would like to fix it. I was curious about the cause, so I looked in the event log and found that the root of the problem was that the ntuser.dat file in the profile I was logging on to was locked at logon time. This resulted in the current user's registry not being loaded, and hence in the failure to load the profile. I just found a Microsoft article that mentions this exact issue: http://support.microsoft.com/kb/960464/ The problem is that I do not want to delete this profile, and this issue does not come up every time I log on, only when somebody else has been on a long time before me. What could be locking this file? Is there any way to get a process list without logging on, so that I can identify which process has the file locked? Any other suggestions?


  • How to setup IIS 7.5 Reverse Proxy for quite a few internal servers - Server Farm?

    - by Tim Murphree
    I have tried for a few days, but I'm lost. Here's what I'm trying to do: I want to set up IIS 7.5 as a reverse proxy for about 30 internal HTTP servers located on my internal LAN. Everything is running on port 80. The internal servers are really IP-based webcams. Here is the scenario:

      www.mycamserver.com/cam1  -> 192.168.1.101
      www.mycamserver.com/cam2  -> 192.168.1.102
      and so on, until:
      www.mycamserver.com/cam30 -> 192.168.1.130

    I have installed ARR and URL Rewrite. So far I have managed, at one point, to seemingly forward an incoming URL to an internal server, but the page would not fully load (error 404). I also set up a Server Farm, but it seems all traffic is now sent to the first node in the Server Farm (192.168.1.101). However, at least the page loads and runs correctly. I simply want to do an exact match, for example "cam14", and reverse-proxy / rewrite to the corresponding internal server address, "192.168.1.114".

