Search Results

Search found 23103 results on 925 pages for 'performance issues and ha'.


  • DFS Replication, Users HOME folder - seems not to catch all files... any hints?

    - by TomTom
    I am moving stuff out of a file server. I am using DFS for that - the folders are already in a DFS tree, so I can set up replication temporarily, then drop the old folder. Works nicely, EXCEPT for the folder containing the users' home drives. Which, incidentally, is also the one I cannot see all the files in, due to my permissions. Small setup: we have 159 MB in the users' directories, 1,280 files, 133 folders in the original. The copy only has 157 MB, 1,269 files, 133 folders. Anyone know of a way to find out which files are missing? Is this a problem (it could be some cache files that get regenerated)? Users are all offline (weekend) ;) This is pretty much the last share - all others had exactly ZERO issues.
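
    One low-tech way to pin down the difference is to diff the two trees with a short script, run under an account that can see everything. Note that DFS Replication by default skips files with the temporary attribute and files matching its default filters (e.g. *.tmp), which often accounts for small mismatches. A minimal sketch, assuming Python is available and the UNC paths below are replaced with the real shares:

        # Compare two directory trees and report files present in one but not the other.
        # The UNC paths are placeholders - point them at the original share and the DFS copy.
        import os

        def file_set(root):
            """Collect relative paths of all files under root."""
            files = set()
            for dirpath, _dirnames, filenames in os.walk(root):
                rel = os.path.relpath(dirpath, root)
                for name in filenames:
                    files.add(os.path.normpath(os.path.join(rel, name)))
            return files

        original = file_set(r"\\oldserver\users")   # hypothetical UNC paths
        copy = file_set(r"\\newserver\users")

        for missing in sorted(original - copy):
            print("missing from copy:", missing)
        for extra in sorted(copy - original):
            print("only in copy:", extra)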

    Read the article

  • Can you span a single file share across multiple servers?

    - by Mike C.
    Let's say I have one large file server that hosts a company-wide share. That file server constantly runs into space issues. Now let's say I have a dozen or so random servers (print servers, web servers, etc.) that have a ton of free space. Is there a way to utilize that for the company-wide file share? Like spanning the storage across different servers but making that transparent to the user? I know the likely answer is going to be "get more storage for the file server", but I'm just curious whether this is conceptually possible. Thanks!

    Read the article

  • PHP CPU utilization limit

    - by knightrider
    I have done some research on the net regarding this problem. My question is NOT how to reduce CPU utilization by improving the algorithm, improving performance through multitasking, or limiting CPU per system user. I have a website where a user logs in, does some processing, and logs out. The site runs on a Linux server with PHP and Apache. The problem is that I can't control the amount of CPU allocated to each user, i.e. I want to guarantee that a user will get, say, at least 5% of the CPU (assume the total number of users is less than 20). How can I do this? Any solution (PHP code, Apache server settings, or anything out of the box) is welcome. Thank you very much for reading this :)
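
    One approach that is often suggested outside of PHP itself is Linux cgroups, which can give a process group a guaranteed proportional CPU share rather than a cap. A rough sketch, assuming a cgroup v1 hierarchy mounted at /sys/fs/cgroup/cpu and root privileges (newer systems use cgroup v2, where the equivalent knob is cpu.weight under /sys/fs/cgroup); the group name and PID are placeholders:

        # Rough sketch: give a process group a relative CPU share via cgroup v1.
        # Requires root; paths assume the cpu controller is mounted at /sys/fs/cgroup/cpu.
        import os

        CGROUP_ROOT = "/sys/fs/cgroup/cpu"

        def create_cpu_group(name, shares):
            """Create a cpu cgroup with the given relative weight (default weight is 1024)."""
            path = os.path.join(CGROUP_ROOT, name)
            os.makedirs(path, exist_ok=True)
            with open(os.path.join(path, "cpu.shares"), "w") as f:
                f.write(str(shares))
            return path

        def add_pid(group_path, pid):
            """Move a process into the cgroup so the weight applies to it."""
            with open(os.path.join(group_path, "tasks"), "w") as f:
                f.write(str(pid))

        # Example: a group weighted at roughly 5% of the default 1024.
        # cpu.shares is a relative weight, not a hard percentage.
        group = create_cpu_group("forum_user_42", shares=51)
        add_pid(group, os.getpid())   # placeholder PID; normally the PHP worker's PID

    Because cpu.shares is a proportional weight, each group receives at least its proportion of the CPU when there is contention, which matches an "at least 5%" requirement better than a hard limit would.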

    Read the article

  • How to arrange 2 SSD with 2 SATA?

    - by alfish
    I'd like to get the best I/O performance, as well as good capacity and reliability, out of a server that hosts a busy forum, which involves lots of static file downloads. I am wondering what the best plan is to format and use the disks, given that the server has only 4 disk bays and I have 2 SSDs and 2 SATA disks at hand. I am currently thinking about putting the disks in RAID 10, so that the SSDs contain /var/lib/mysql as well as most of the OS (likely to be Debian) and the SATA disks contain /path/to/static/files. However, I'd like to hear your expert opinion on this. Thanks

    Read the article

  • How do I force a 64-bit Windows 8.1 install?

    - by ausairman
    I have just purchased a copy of Windows 8.1 32-bit/64-bit from the MSDNAA program through my university. It came as an .iso file, so I burned it to a DVD and installed / registered without any issues. I just found out it's a 32-bit install though, and I'm running a 64-bit laptop (Dell Studio 1555). The exact product name I purchased is Microsoft Windows 8.1 Professional 32/64-bit (English). Is that just another name for "a 32-bit version that also works on 64-bit machines"? I have 4 GB of RAM and I would like to code for 64-bit architectures, so not having a 64-bit environment will be a real pain. I didn't see any options to choose the install type (it had Windows 7 64-bit previously, but I completely wiped it). Any ideas?

    Read the article

  • Would SSD drives benefit from a non-default allocation unit size?

    - by davebug
    The default allocation unit size recommended when formatting a drive in our current set-up is 4096 bytes. I understand the basics of the pros and cons of larger and smaller sizes (performance boost vs. space preservation), but it seems the benefits of a solid state drive (seek times massively lower than hard disks) may create a situation where a much smaller allocation size is not detrimental. Were this the case, it would at least partially help to overcome the disadvantage of SSDs (massively higher prices per GB). Is there a way to determine the 'cost' of smaller allocation sizes specifically related to seek times? Or are there any studies or articles recommending a change from the default based on this newer tech? (Assume an average scattering of file sizes: program files, OS files, data, MP3s, text files, etc.)
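
    There doesn't seem to be a standard formula for the 'cost' of a smaller allocation unit, but the premise - that small random reads are cheap on an SSD - can be measured directly. A crude sketch, assuming a large existing test file on the drive in question; the OS page cache will flatter the numbers unless it is dropped first or a cold file is used each run, so treat the results as relative, not absolute:

        # Crude random-read latency test: time reads of different sizes at random offsets.
        import os, random, time

        PATH = "testfile.bin"          # placeholder: a multi-GB file on the target drive
        SIZES = [512, 4096, 65536]     # bytes per read
        READS = 1000

        file_size = os.path.getsize(PATH)
        fd = os.open(PATH, os.O_RDONLY)
        try:
            for block in SIZES:
                # Random offsets, aligned to the read size.
                offsets = [random.randrange(0, file_size - block) // block * block
                           for _ in range(READS)]
                start = time.perf_counter()
                for off in offsets:
                    os.lseek(fd, off, os.SEEK_SET)
                    os.read(fd, block)
                elapsed = time.perf_counter() - start
                print(f"{block:>6} B reads: {elapsed / READS * 1e6:.1f} us average")
        finally:
            os.close(fd)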

    Read the article

  • Looking for a product configurator

    - by Netsrac
    I am looking for a product configurator for products with high complexity. The main goal is to allow a sales person to configure the product in a correct and working manner. The product is a combination of hardware and software options. The options certainly have dependencies (e.g. option A needs B and C) and can also exclude each other. The performance requirements of the software relative to the hardware also need to be considered, so some rules need to be definable. Does anybody know a tool (preferably open source) that does this job? Thanks for your help.

    Read the article

  • my.ini optimization on Windows 2008 R2 VPS

    - by MKphpDev
    I have a VMware VPS running Windows Server 2008 R2 Enterprise that has performance issues with MySQL. Every few minutes, MySQL stalls for a few seconds and then responds to queries again. I'm sure that my.ini needs to be optimized, but unfortunately I don't have any idea about my.ini configuration.
    What's running on the server: 2 small WordPress blogs, 1 vBulletin forum (approx. 1.2 GB database, and increasing), and a small database for some sort of plug-in (no more than 4,000 records).
    Server info: Processor: Intel Xeon X5550 @ 2.67GHz, RAM: 6 GB (memory usage never exceeded 2 GB), MySQL 5.5, PHP 5.3.10, IIS 7.
    Current my.ini:
        [mysqld]
        default-storage-engine=INNODB
        sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"
        max_connections=250
        myisam_max_sort_file_size=20G
        innodb_additional_mem_pool_size=256M
        innodb_flush_log_at_trx_commit=1
        innodb_log_buffer_size=8M
        innodb_buffer_pool_size=512MB
        innodb_log_file_size=128M
        innodb_thread_concurrency=10
        key_buffer_size = 512M
        myisam_sort_buffer_size = 8M
        join_buffer_size = 256K
        read_buffer_size = 256K
        sort_buffer_size = 256K
        table_cache = 4000
        thread_cache_size = 200
        wait_timeout = 30
        connect_timeout = 10
        tmp_table_size = 32M
        max_allowed_packet = 1M
        max_connect_errors = 10000
        query_cache_size = 16M
        query_cache_limit = 2M
        query_cache_type = 1
        query_cache_min_res_unit = 1024
        query_prealloc_size = 16384
        query_alloc_block_size = 16384
        skip-external-locking
        read_rnd_buffer_size=1M
        max_heap_table_size=16M
        thread_concurrency=8
        [mysqld_safe]
        open_files_limit = 8192
        [mysqldump]
        quick
        max_allowed_packet = 16M
        [myisamchk]
        key_buffer_size = 128M
        sort_buffer_size = 128M
        read_buffer = 2M
        write_buffer = 2M
    Any help with that, please?

    Read the article

  • Installed Ubuntu 11.10, getting hard disk health warnings

    - by Brad
    I'm getting hard disk health problem warnings... When I click the "examine" button, the Disk Utility pops up. None of my drives are reporting any major issues, and the very first drive doesn't even have a SMART button. I don't really care if one of the drives is crashing - I've got everything backed up - but I just want to know how to stop these godforsaken message boxes from popping up randomly. I have already gone into the Disk Utility and checked "do not notify me if this drive is failing" on all of them except the one that doesn't have the SMART button. I've googled about as much as I can for one day.

    Read the article

  • OpenVPN client-to-client connection re-encrypted at server?

    - by user1684411
    Currently I'm using a site-to-site OpenVPN setup. The routers en/decrypt all traffic that goes from one net to the other. One of them is the OpenVPN server. This works, but performance is not as good as it could be; I think the limiting factor is the CPU power of the router. Would it be better if I used client-to-client connections and accessed the file server in one net from a PC in the other, since the OpenVPN server would not have to decrypt the (whole) packets?

    Read the article

  • Our company has 100,000s+ photos, how to store and browse/find these efficiently?

    - by tobefound
    We currently store our photos in a structure like this:
    folder\1\10000 - 19999.JPG|ORF|TIF (10,000 files)
    folder\2\20000 - 29999.JPG|ORF|TIF (10,000 files)
    etc...
    They are stored on 4 different 2 TB D-Link NASes attached and shared on our office network (\\nas1, \\nas2, and so on...). Problems: 1) When a client (Windows only, Vista and 7) wishes to browse, let's say, the \\nas1\folder\1\ folder, performance is quite poor: the list takes a long time to generate in the Explorer window, even with icons turned off. 2) Initial access to the NAS itself is sometimes slow. SAN disks are too expensive for us, even with iSCSI interface/switch technology. I've read a lot of tech pages saying that storing 100,000+ files in one single folder shouldn't be a problem, but we don't dare go there now that we're experiencing problems at the 10K level. All input greatly appreciated, /T
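
    Rather than relying on Explorer to enumerate 10,000-file folders over SMB, one common workaround is to build a small searchable index once and let people look photos up in that instead of browsing. A rough sketch, assuming Python can reach the shares; the NAS paths and database location are placeholders:

        # Build a simple searchable index of photo files instead of browsing huge folders.
        import os, sqlite3

        SHARES = [r"\\nas1\folder", r"\\nas2\folder"]   # placeholder NAS shares
        DB = "photo_index.sqlite"

        conn = sqlite3.connect(DB)
        conn.execute("""CREATE TABLE IF NOT EXISTS photos
                        (path TEXT PRIMARY KEY, name TEXT, size INTEGER, mtime REAL)""")

        for share in SHARES:
            for dirpath, _dirs, files in os.walk(share):
                for name in files:
                    if not name.lower().endswith((".jpg", ".orf", ".tif")):
                        continue
                    full = os.path.join(dirpath, name)
                    st = os.stat(full)
                    conn.execute("INSERT OR REPLACE INTO photos VALUES (?, ?, ?, ?)",
                                 (full, name, st.st_size, st.st_mtime))
        conn.commit()

        # Example lookup: find a photo by number without opening the folder at all.
        for row in conn.execute("SELECT path FROM photos WHERE name LIKE ?", ("2534%",)):
            print(row[0])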

    Read the article

  • Can an SSD notify the hosting OS that its wear level is getting high?

    - by Tony_Henrich
    I read a lot about SSDs and I am interested in them for server use. My biggest concern is their reliability: a lot of writes shortens their lifespan. I could mitigate this problem if I could run some kind of diagnostics on the SSD on a regular basis, or if the SSD could automatically warn the OS that its reliability is reaching a critical level. Think of this as S.M.A.R.T. or software like SpinRite for SSDs. Does anything I mentioned exist now? Which kind/brand of SSD does this? I don't mind swapping out a tired SSD for a newer one once in a while. I am pretty sure that an SSD's life is measured in years and not a few months? For me, the improved performance will pay for the SSD over and over. I am planning to use plenty of RAM as well.
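
    Most SSDs expose wear information through SMART, and smartmontools can read it on a schedule; the attribute that reports remaining life varies by vendor (for example, "Media_Wearout_Indicator" on some Intel drives and "Wear_Leveling_Count" on some Samsung drives). A hedged sketch that shells out to smartctl and flags low values; the device path, attribute names, and threshold are assumptions to adjust for your drives:

        # Check SSD wear via smartmontools (smartctl must be installed; run as root).
        # Attribute names differ between vendors - adjust WEAR_ATTRIBUTES for your drive.
        import subprocess

        DEVICE = "/dev/sda"                       # placeholder device
        WEAR_ATTRIBUTES = ("Media_Wearout_Indicator", "Wear_Leveling_Count")
        THRESHOLD = 20                            # warn when the normalized value drops below this

        output = subprocess.run(["smartctl", "-A", DEVICE],
                                capture_output=True, text=True, check=False).stdout

        for line in output.splitlines():
            fields = line.split()
            if len(fields) >= 4 and fields[1] in WEAR_ATTRIBUTES:
                value = int(fields[3])            # normalized value, typically counts down from 100
                status = "WARNING" if value < THRESHOLD else "ok"
                print(f"{fields[1]}: {value} ({status})")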

    Read the article

  • Can I Store MediaWiki Files on the cloud?

    - by user219048
    I recently got a Chromebook, and I've been brainstorming different ways to put MediaWiki on it (with localhost, not a server). One way I've read about online is to go into developer mode to download and set up LAMP. I was wondering: wouldn't I be able to store the Apache, MySQL, PHP, and MediaWiki files in the cloud (Google Drive)? And if so, would anything prevent me from accessing my wiki on any other computer's localhost, assuming I could just log into Google Drive to access these files? Might there be any reduced performance when operating from the cloud?

    Read the article

  • Which type of memory is faster than average for server use?

    - by Tony_Henrich
    I am building a server computer which will be used for SQL Server, and I am planning to use 32 GB+ of RAM and put the databases in memory. (I know all about the data-loss issues when power is gone.) I haven't been up to date with the new types of memory sticks out there. What kind of memory should I get that is faster than average but not very expensive? I am buying a lot of RAM, so I am looking for memory that's above average but below high end, if high end is very expensive. (I will be using Windows Server 2008 R2 Standard or Windows HPC Server 2008 R2.)

    Read the article

  • Resize a new database to predicted maximum size

    - by John Oxley
    Currently I have a SQL Server database which is about 2 GB. I know that over the next year it's going to grow to a maximum of about 10 GB. Hard drive space is not an issue in the slightest. Is there a downside to resizing the data file to 20 GB now, then defragmenting the hard drive? Should I resize the log file to 1 GB as well? Something ridiculously large, so that fragmentation doesn't happen there either. With this question I would like to avoid the data file becoming fragmented on the disk itself, but I don't want to negatively impact performance.
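
    Pre-growing the files is done with ALTER DATABASE ... MODIFY FILE, and it can be scripted so the resize and a sensible autogrowth increment are applied in one go. A sketch using pyodbc; the database name, logical file names, sizes, and connection string are all hypothetical - take the real logical names from sys.database_files first:

        # Pre-size a SQL Server data and log file to their predicted maximums.
        # Logical file names and the connection string below are placeholders.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
            "DATABASE=master;Trusted_Connection=yes;",
            autocommit=True)   # ALTER DATABASE should not run inside a user transaction
        cur = conn.cursor()

        cur.execute("ALTER DATABASE [MyDb] MODIFY FILE "
                    "(NAME = N'MyDb_Data', SIZE = 20480MB, FILEGROWTH = 512MB)")
        cur.execute("ALTER DATABASE [MyDb] MODIFY FILE "
                    "(NAME = N'MyDb_Log', SIZE = 1024MB, FILEGROWTH = 256MB)")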

    Read the article

  • Request to server x Reply from server y

    - by klaasio
    I need some advice from you guys: I'm dealing with a custom load balancer/software for which we will use 2 main servers and about 8 slave servers. In short: the user sends a request to the main server, the main server receives and handles the request and sends a request to a slave server, and the slave server should send the data DIRECTLY to the user.
    User - Main server
    Main server - Slave server
    Slave server - User
    The reason the data should be sent directly to the user and not through the main server is bandwidth and a low budget. Now I have the following ideas:
    - IP-in-IP, but that is not possible at layer 7 (as far as I know there are some expensive routers for that)
    - IP spoofing: using C/C++ we would make it look like the reply came from the main server.
    But I was thinking, perhaps the reply "slave server - user" could just come from a different IP without causing issues in the user's firewall or anti-virus. I don't know much about "home" firewalls/routers and/or anti-virus software. I guess the user's machine wouldn't handle it well?

    Read the article

  • Enable group policy for everything but the SBS?

    - by Jerry Dodge
    I have created a new group policy to disable IPv6 on all machines. There is only the one default OU, no special configuration. However, this policy shall not apply to the SBS itself (nor to the other DC at another location on a different subnet), because those machines depend on IPv6; all the rest do not. I did see a recommendation to create a new OU and put that machine under it, but many other comments say that is extremely messy and not recommended - it makes things high maintenance when it comes to changing other group policies. How can I apply this single group policy to every machine except for the domain controllers? PS - Yes, I understand IPv6 will soon be the new standard, but until then we have no intention of implementing it, and it is in fact causing us many issues when enabled.

    Read the article

  • Can I install Linux as the host on a Dell PowerEdge server (R710)?

    - by bksunday
    I might have a deal on a dual six-core PowerEdge server and I'm about to go test its performance, but I'm wondering about a few things I can't find answers for, and I can't test them before buying the machine. I don't want VMware at all, so can I just wipe it and install Linux instead, or is it embedded in some parts I have no access to? Will I still be able to update the different firmwares (PERC controllers, motherboard, etc.) on this Dell PowerEdge, or does it require VMware ESXi installed as the host OS? And optionally... are there any foreseeable problems in doing so?

    Read the article

  • Everything You Ever Wanted to Know about Mod_Rewrite Rules but Were Afraid to Ask?

    - by Kyle Brandt
    How can I become an expert at writing mod_rewrite rules?
    What is the fundamental format and structure of mod_rewrite rules?
    What form/flavor of regular expressions do I need to have a solid grasp of?
    What are the most common mistakes/pitfalls when writing rewrite rules?
    What is a good method for testing and verifying mod_rewrite rules?
    Are there SEO or performance implications of mod_rewrite rules I should be aware of?
    Are there common situations where mod_rewrite might seem like the right tool for the job but isn't?
    What are some common examples?
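
    On the "fundamental format" point: a RewriteRule is essentially a PCRE pattern matched against the URL path plus a substitution string with backreferences, optionally followed by flags. A toy sketch in Python that mimics what one such rule does, just to make the matching model concrete (Python's re module is close to, but not identical to, the PCRE that mod_rewrite uses; the rule itself is an invented example):

        # Toy illustration of the RewriteRule matching model:
        #   RewriteRule ^/product/([0-9]+)$  /product.php?id=$1  [L]
        # i.e. a regex matched against the URL path + a substitution with backreferences.
        import re

        rule = re.compile(r"^/product/([0-9]+)$")
        substitution = r"/product.php?id=\1"      # mod_rewrite spells this backreference $1

        def rewrite(path):
            # Return the rewritten path if the pattern matches, otherwise leave it unchanged.
            return rule.sub(substitution, path) if rule.search(path) else path

        print(rewrite("/product/1234"))   # -> /product.php?id=1234
        print(rewrite("/about"))          # no match -> /about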

    Read the article

  • Which OS should I boot into for virtualization?

    - by acidzombie24
    This might be a silly question. I use Windows 7 99% of the time, Linux 10% of the time, and XP 5% of the time. I am thinking about getting an Intel® Core™ i7-2600 processor, which has hardware support for virtualization. I don't think I want more than one partition (I may have a swap partition). Which OS should I make my primary (and only) partition? I suspect Windows 7, since I am always using it and going through a Linux layer would slow it down. Does it matter much which OS I use if I have hardware support for virtualization? At the moment I am using VMware Player. I suspect the software doesn't affect performance?

    Read the article

  • Diagnosing another Windows 7 Lockup

    - by MSEoris
    I'm running Windows 7 on a fairly modern machine (8 GB RAM, AMD FX-6100, GTX 560 Ti) and I notice that periodically Windows seems to just hang for a little while. Frequently this occurs after a cold boot when I start up five or six small to medium-sized programs, but it also occasionally occurs during normal usage. Basically, the screen locks up and there is no keyboard responsiveness for a period of 30 seconds to a full minute; after a bit of patience, control is returned, but I'm interested in figuring out what is causing such lockups. I checked the event log and don't see any issues, and all I can see in Task Manager is a spike in CPU and memory usage right after this occurs. Any tips on how to even begin to diagnose this? Thanks.

    Read the article

  • Blu-ray playback stuttering/choppy on Mac

    - by smashtastic
    I have had a few issues with Blu-ray playback on a Mac with the following details: 2.4 GHz Core 2 Duo, NVIDIA 8600M GT with 256 MB, 4 GB RAM, OS X Lion 10.7.5, using VLC 2.0.3 for playback; files are either mounted as a disc or stored on the hard disk. I have tried various sources and playback is always choppy and stutters, where the video and audio pause for a few seconds before resuming OK. If I transcode the same Blu-ray files into MKV, playback is seamless. I am not applying any compression, the resolution is the same, and so is the nominal file size - for a recent example, an 11.2 GB m2ts and a 10.5 GB MKV file. This stuttering and choppy playback of Blu-ray files through VLC has occurred for a number of different sources, and for each of them transcoding to MKV solves the stuttering/choppiness. Any ideas on how to resolve this?

    Read the article

  • Windows 7 hangs when using Adobe Reader

    - by SahilMahajanMj
    Windows 7 often hangs when I open a PDF file with Adobe Reader. First Adobe Reader crashes, and as soon as it stops, Windows hangs; after a few seconds it displays a message saying "Dumping Physical Memory" in a typical Windows 98 manner, and then the system reboots. This problem occurs every time I use Adobe Reader. I have installed Foxit Reader to view PDF files instead, but there are some issues with that too, as I sometimes have editable files which can only be handled by the newest version of Adobe Reader. Suggestions on how to solve this problem would be appreciated.

    Read the article

  • Queue emails under Linux

    - by md1337
    I have a slow, distant mail relay server, and a web application I'm using locks up when sending e-mails to that distant mail server, until the e-mail is sent. After the e-mail is sent, the page comes back and the application is snappy again. So I'm trying to set up a deferred mail queue locally on the application server (Linux) so that the application uses that instead of the distant mail server. My rationale is that e-mails would get queued up locally until they are processed by the distant mail server, but at least the application wouldn't lock up. I have installed Postfix and set the relayhost setting to the distant mail server, but performance has not improved. What appears to happen is that Postfix just forwards my SMTP commands in real time and doesn't really queue them? What can I do? Thanks!
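
    For the queueing to help, the application has to hand the message to the local Postfix instance (which accepts it immediately and delivers to the relayhost in the background) rather than talking to the distant server itself. A minimal sketch of what "submit locally" looks like, assuming the application can be pointed at localhost port 25; the addresses are placeholders:

        # Hand mail to the local Postfix instance; it queues and relays in the background.
        import smtplib
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["From"] = "app@example.com"        # placeholder addresses
        msg["To"] = "user@example.com"
        msg["Subject"] = "Test via local queue"
        msg.set_content("Delivered to the local MTA, relayed later by Postfix.")

        with smtplib.SMTP("localhost", 25) as smtp:
            smtp.send_message(msg)             # returns as soon as Postfix accepts the message

    If the application is still slow with something like this in place, it is probably still resolving and connecting to the remote host directly instead of using the local MTA.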

    Read the article

  • What are the pros & cons of these MySQL engines for OLTP -- XtraDB, PBXT, or TokuDB?

    - by Continuation
    I'm working on a social website with an approximate read/write split of 90/10, and I'm trying to decide on a MySQL engine. The ones I'm interested in are: XtraDB, PBXT, and TokuDB. What are the pros and cons of them for my use case? A few specific questions: PBXT uses a log-based structure that avoids double writes. It sounds very elegant, but the benchmarks I've seen don't show much advantage over XtraDB - do you have any experience with PBXT/XtraDB you can share? TokuDB sounds VERY interesting, but all the benchmarks I've seen are about single-threaded bulk inserts (inserting 100M rows, for example), and that's not very relevant for OLTP. What about its performance with a large number of concurrent threads writing and reading at the same time? Has anyone tried that?
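
    For the concurrency question, it is straightforward to run a small mixed read/write test of your own against each candidate engine rather than relying on bulk-insert benchmarks. A rough sketch using mysql-connector-python and threads; the connection settings and table are placeholders, and a real evaluation would use something like a production-shaped dataset and a proper benchmark tool:

        # Tiny mixed read/write concurrency test; not a substitute for a real benchmark.
        # Assumes: CREATE TABLE bench (id INT AUTO_INCREMENT PRIMARY KEY, val DOUBLE)
        #          ENGINE=<engine under test>;
        import threading, time, random
        import mysql.connector   # assumes mysql-connector-python is installed

        CONFIG = dict(host="127.0.0.1", user="bench", password="bench", database="benchdb")
        DURATION = 30            # seconds
        THREADS = 16             # concurrent clients
        WRITE_RATIO = 0.1        # ~90/10 read/write split, as in the question

        def worker(counts, idx):
            conn = mysql.connector.connect(**CONFIG)
            cur = conn.cursor()
            deadline = time.time() + DURATION
            while time.time() < deadline:
                if random.random() < WRITE_RATIO:
                    cur.execute("INSERT INTO bench (val) VALUES (%s)", (random.random(),))
                    conn.commit()
                else:
                    cur.execute("SELECT val FROM bench ORDER BY id DESC LIMIT 1")
                    cur.fetchall()
                counts[idx] += 1
            conn.close()

        counts = [0] * THREADS
        threads = [threading.Thread(target=worker, args=(counts, i)) for i in range(THREADS)]
        for t in threads: t.start()
        for t in threads: t.join()
        print(f"{sum(counts) / DURATION:.0f} queries/sec across {THREADS} threads")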

    Read the article
