Search Results

Search found 11409 results on 457 pages for 'large teams'.

  • Compare the contents of two folders that are being replicated by DFS

    - by Funky Si
    I have a large folder that I am replicating by DFS and I want to check that all files have been replicated correctly. Currently I am running the following at both ends, then using a text editor to tidy the output before comparing the two files with a diff tool:

        cd e:\data\shared\
        dir /a:-h /b /s > e:\data\shared\result.txt

    Does anyone know a better way of doing this? Failing that, does anyone know how to adapt my script to ignore all the files in the DfsrPrivate folders?
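    A minimal adaptation of that script (an untested sketch): findstr /v /l /i /c: drops every line whose path contains \DfsrPrivate\, and writing the result outside the replicated tree keeps the listing itself from being replicated.

        cd /d e:\data\shared
        dir /a:-h /b /s | findstr /v /l /i /c:"\DfsrPrivate\" > e:\result.txt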

  • How can I delete, break, or otherwise convert cross-references to simple text in Microsoft Word 2013?

    - by Mr Purple
    Cross-referencing figure and table captions is useful while editing a document, but the references can become confused when copying and pasting between large documents. I need to pass my document to a colleague who will collate it with others, and he has requested that I remove or delete any cross-referencing so that my "correct" cross-references do not interfere with, or get interfered with by, any other cross-references that may be in his master collated document. My document will be cut and pasted into the master, and no further complicated instructions after that point will be tolerated by my colleague. Is there a simple way to convert my cross-references to simple text? I am using Microsoft Word 2013.

  • Mapping Super+hjkl to arrow keys under X

    - by Bill Casarin
    I'm trying to map Super+h -> Left, Super+j -> Down, Super+k -> Up, and Super+l -> Right globally under X. The idea is that I don't want to leave the home row so often just to reach the arrow keys, so I'll use the Super modifier plus hjkl to emulate them under X. Is there any way to do this? One thing I've tried is xbindkeys + xte using this configuration:

        "xte 'keydown Up' 'keyup Up'"
            Mod4+k
        "xte 'keydown Down' 'keyup Down'"
            Mod4+j
        "xte 'keydown Left' 'keyup Left'"
            Mod4+h
        "xte 'keydown Right' 'keyup Right'"
            Mod4+l

    but there seems to be a large delay between me pressing the key and noticing any result, and most of the time nothing happens at all. Is there a more elegant way of doing this that actually works with no delay?
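    A variant that may be worth trying (untested; assumes xdotool is installed): xdotool sends the keystroke in a single call, and --clearmodifiers releases the held Super first so the synthetic press is not interpreted as Super+Up.

        "xdotool key --clearmodifiers Up"
            Mod4+k
        "xdotool key --clearmodifiers Down"
            Mod4+j
        "xdotool key --clearmodifiers Left"
            Mod4+h
        "xdotool key --clearmodifiers Right"
            Mod4+l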

  • Mass-remove passwords from RAR archives

    - by ldigas
    Is there a way to mass-remove passwords from a bunch of files? (I'm using the WinRAR demo, but I'm willing to change it to whatever is needed.) Problem description: for reasons unknown to me, some archiving was done for two-and-something years in RAR format, and all the archives have passwords. I have a list of them, and they are all similar (mostly something like John-03, John-04, John-05... i.e. name-month), but I need to manipulate the files in bulk, and it is a real problem removing passwords from, or extracting, all those files while entering passwords manually. What would be my best options here? Ideally, I'm looking for some kind of archiver that tries out a predefined list of passwords and asks only if none of them cracks the safe. AFAIK, WinRAR has no such feature.
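    A rough sketch of that try-each-password idea as a batch file (hypothetical and untested; assumes WinRAR's command-line unrar.exe is on the PATH and passwords.txt lists one space-free candidate password per line):

        @echo off
        for %%A in (*.rar) do (
          for /f %%P in (passwords.txt) do (
            rem Test the archive with this password; extract only if the test succeeds.
            unrar t -p%%P -inul "%%A" && unrar x -p%%P "%%A" extracted\ && echo %%A: %%P
          )
        )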

  • How to rename a BTRFS subvolume?

    - by hochl
    I have a BTRFS filesystem with a set of subvolumes in it. So far so good. I need to change the name of a subvolume, unfortunately the btrfs program does not allow me to rename a subvolume. Searching with Google has yielded some results, one said I can just mv, the other said I can just snapshot to a new name and delete the old subvolume. Before I crash my partition and have to reload it from the backup (it's quite large), my question is: What is the currently best way to rename a subvolume? Is it ok to just mv it, or will it invalidate some internal structures? Is making a new snapshot and removing the old subvolume the way to go, or has this some drawbacks? I know everything is still experimental, but for my purposes it has been working quite well (so far, and I have incremental backups for each day).
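    For reference, the two approaches those search results describe, spelled out as commands (untested here; the mount point and names are examples):

        mv /mnt/pool/oldname /mnt/pool/newname      # rename in place
        # or: snapshot under the new name, then drop the old subvolume
        btrfs subvolume snapshot /mnt/pool/oldname /mnt/pool/newname
        btrfs subvolume delete /mnt/pool/oldname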

  • Is there no such thing as a Gigabit switch?

    - by Torben Gundtofte-Bruun
    According to the manufacturer specification, even my rather plain desktop computer has "Gigabit Ethernet". So when I want to copy large files over the LAN (not Internet) it would make sense to have a gigabit switch. I'm searching eBay for a gigabit switch for a planned home network upgrade. The products I find are all labeled "gigabit" but they all have 24 x 10/100Mbit autosensing ports and sometimes 2 x 10/100/1000Mbit autosensing ports. It was my understanding that 10/100 is ancient and that modern computers have network interfaces that work with 1000Mbit, so it would make sense to get a switch that has 24 x 1000Mbit ports. Did I misunderstand, or are sellers (deliberately?) mislabeling older hardware? (Let's not dive into wired vs. wireless networks and how "N" wireless is fast. You'd be right, but not answering the question.)

  • Changing Word mail merge data source locations in bulk?

    - by Daft Viking
    I've just moved a number of Word mail merge files, and a number of Excel spreadsheets that are the data sources for those mail merges, from a Windows XP computer to a Windows 7 computer. Now all the data source paths in the merge documents are incorrect (they used to be c:\documents and settings\user\my documents...., now c:\users\documents....). While I can correct the path of the data source in each file individually, I was hoping there would be some way of updating the files in bulk, as there are a relatively large number of them. Word 2007 is what is being used, but the documents are all in the older DOC format (not DOCX).

  • MacBook Pro screen goes dark

    - by Mike M
    I've had my MacBook Pro for two years now with no problems until today (it has had 3rd-party RAM from the get-go). Today I was copying a particularly large VM from an external disk drive to the local MacBook disk. It had about 3 GB to go when I stepped away to do some other things, and when I came back the screen was "dark". The computer is still on, but I can't see anything. I forced a reboot by holding down the power button; it starts up with the chimes, but still no screen. I've done this several times. Any ideas? Do you think the hard disk activity caused it to get too hot?

  • UDF filesystem -> Maximum number of files

    - by user978122
    I am considering partitioning a rather large hard drive with the UDF filesystem for an experiment, and would like to ask if anyone knows the maximum number of files, either per directory or as a whole, that the UDF filesystem can handle. For some background: I looked at the JFS and XFS filesystems (NTFS has a limit on the number of files per volume); however, since I run Windows, those are kind of out. UDF, on the other hand, does not appear to have these limitations, but I cannot really find any information on just how many files per volume the UDF filesystem supports.

  • Mongodump on GridFS is killing the host I/O

    - by Raphael
    I'm trying to make a mongodump from our production MongoDB while production is running. We have three production instances: one regular MongoDB, one with a very few GB of data on GridFS, and one with a larger amount of data on GridFS. All MongoDB instances are running version 2.4.9 on an Ubuntu 10.04 virtual server. I use a mongodump command to export the databases to another server. Unfortunately our machines are virtually hosted in a "low performance" datacenter (VMware-based), so when I try to export the large GridFS db, the disk I/O hits 100% (and 50% of the CPU starts waiting for I/O too). This has a very negative impact on the production applications, because db access time increases excessively, making the applications unusable. I'm looking for a way to throttle the mongodump so the export goes slower but is easier on the hardware resources, leaving the applications enough headroom to keep running. Has anyone had a similar scenario?
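    One low-effort experiment (an untested sketch; host, db name, and path are placeholders): run the dump in the kernel's idle I/O class so production reads win any contention. Note that ionice only has an effect under the CFQ I/O scheduler.

        ionice -c 3 mongodump --host localhost --db grid --out /backup/dump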

  • SPF include: too many IP addresses

    - by sprezzatura
    I've hit a snag with SPF. The SPF record for my domain will contain four or five entries, plus it will contain: include:sgizmo.com. The SPF record for sgizmo.com contains eleven entries! That, plus mine, is way over the maximum of ten allowed by the RFC (and probably by most servers). I realize that there has to be a limit in order to prevent DoS attacks. However, in the real world it is probably not unreasonable for large companies to have many server addresses. Furthermore, must I now monitor my include: counterparts for changes and additions? Must I check weekly, or daily, to ensure that some combination of changes doesn't suddenly put me over the top? It doesn't seem to me that SPF is suitable for prime time. Is there another way to do this?
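    For what it's worth, the RFC's limit of ten applies to DNS-querying mechanisms (include, a, mx, ptr, exists), not to plain IP entries, so one common workaround is to "flatten" an include into ip4:/ip6: mechanisms, which cost no lookups. A hypothetical flattened record (addresses are placeholders):

        v=spf1 ip4:192.0.2.10 ip4:192.0.2.0/24 include:sgizmo.com ~all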

  • Is an ext3 Linux filesystem byte-order independent?

    - by Lothar
    I have a good old HP C3700 workstation with a PA-RISC CPU here that I would like to use as a Subversion server for a very large repository. I just worry about what happens if the workstation dies (everybody who knows this machine knows that it runs like an Abrams tank, and that dying is unlikely to happen in the next decade). I'm using Debian Linux on this system. If the mainboard dies, can I just plug the SCSI drive into a normal Intel Linux PC and read the files there? Which software RAID levels would be safe?

  • Solr high CPU usage in Amazon EC2

    - by user644745
    I installed Solr 3.6 on my local Windows box and it worked fine. I installed Solr 4.0 on an Amazon EC2 Linux large instance and the CPU usage shot up to 100%, then settled at 80-90% average. I thought it could be because of 4.0, so I installed 3.6 in EC2 again, but again the CPU usage was 80-90% average. With both versions Solr works in EC2; I don't know why the CPU usage is so high. I started the Solr server using "sudo nohup java -jar start.jar &". On my local box Java 1.7 is installed, and in EC2 it is 1.6.0_24. I have mapped the Solr dir to an EBS volume:

        /dev/mapper/vg1-solr  8361916  1935928  6342128  24%  /home/ec2-user/SOLR/solr/example/solr

    Is there any known issue?
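    One way to see where the CPU is actually going (a diagnostic sketch, assuming the stock JDK tools are installed): list the busiest threads in the Solr JVM, then take a stack dump and match the thread IDs (top shows them in decimal; jstack's nid values are the same IDs in hex).

        top -H -p $(pgrep -f start.jar)             # per-thread CPU usage
        jstack $(pgrep -f start.jar) > threads.txt  # stack dump to match against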

  • What is the minimum delay between two consecutive RS232 frames?

    - by Lord Loh.
    I have been working on creating a UART on an FPGA. I can successfully transmit and receive single characters typed on PuTTY. However, when I set my FPGA to constantly write a large sequence of "A"s, I sometimes end up with a sequence of "@"s or some other characters until I reset the FPGA a few times. I believe the UART on the computer loses track of the difference between the start bit and a zero. The delay between two "A"s is ~30 us (measured with a logic analyzer) and the baud rate is 115200 8N1. Is there a minimum delay that must be maintained between two consecutive RS232 frames?
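    For scale, some back-of-the-envelope frame timing at these settings (my arithmetic, not from the question):

        1 frame (8N1) = 1 start + 8 data + 1 stop = 10 bits
        10 bits / 115200 bit/s ≈ 86.8 us per frame
        1 bit ≈ 8.68 us, so the ~30 us gap ≈ 3.5 idle bit times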

  • The cable of my USB port hub is too short - what to do?

    - by Anna
    Hi, I just bought a new USB hub, the "MSY USB 2.0 4-port hub". It has two inputs: a small USB socket, and an input for external power (?). The problem is that the cable that comes with the hub (small USB on one end, large USB on the other, to plug into my computer) is too short for my needs. Is there a solution to this? Buying a longer cable might be risky; I know it might cause problems with power. Is there anything else I can do to make it work? Thanks.

  • Chrome developer tools - network panel gaps

    - by Chris Nicholson
    In the Chrome developer tools, under the Network tab, I'm curious to know what is happening during the gaps. In my screenshot (not reproduced here), I highlighted in orange the areas where these gaps exist. Since I'm able to load a lot of my page from cache, it's a shame these large gaps occur, as they make up most of my page load time. What exactly is happening in this time? EDIT: I found an answer which essentially sums up my question, so a different question: does anyone know a good method to reduce the length of these gaps? Presumably (albeit rather extreme), if I inlined all my CSS in the page, there wouldn't be a delay after loading the CSS file before the images were loaded.

  • Disk space profiling in Unix

    - by user1677770
    I'm looking for a tool to summarize how disk space is being used on very large partitions. Our file system is around 950TB, mostly broken up into 20TB partitions. There are some really nice graphical tools for visualising these file spaces:

        http://www.disksavvy.com/disksavvy_screenshots.html
        http://methylblue.com/filelight/

    But I'm really not sure how well they will scale. Does anybody have any experience of these tools and can make any recommendations? Even something that parses and summarises a really big du output would be a good start.
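    On the du idea, a minimal starting point (assumes GNU du; -x keeps it on one filesystem):

        du -x --max-depth=2 /mnt/partition | sort -rn | head -50    # 50 biggest directories, two levels deep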

  • Create netbook recovery image without DVD burner (virtual burner?)

    - by Dan
    I have a new Acer Aspire One which is asking to create a recovery DVD. It doesn't have a built in burner, and I don't have a USB burner. However I do have a large USB hard drive. Is there some way to get the recovery software to "burn" an image file instead of a real DVD? I know you can download a Linux recovery image, but the netbook comes with XP. I plan to install Linux on it but I'd like an XP recovery image just in case.

  • Is it possible to restrict fileserver access to domain users using computers that are members of the domain?

    - by Chris Madden
    It seems domain isolation can be used to accomplish this, but I'd like a solution that doesn't require IPsec, or more accurately, doesn't require IPsec on the file server. IPsec done in software has a large CPU overhead, and our NAS boxes don't support any kind of offload. The goal is to prevent authenticated users from using non-managed machines to access network resources. Network Access Protection (NAP) and its various enforcement points looked promising, but I couldn't find a bulletproof way to use them that doesn't require IPsec on the file server. I was thinking that when a domain user accesses the NAS box it will first need a Kerberos ticket from AD, so if AD could somehow verify that the computer requesting the ticket was in the domain, I'd have a solution.

  • Gracefully shut down an external HDD enclosure?

    - by Jakobud
    I recently purchased a large HDD along with the following HDD enclosure: http://www.newegg.com/Product/Product.aspx?Item=N82E16817173043 It has a simple on-off switch on the back. When I want to turn this thing off, do I simply flip the switch? I assume the switch just kills the power to the HDD, but isn't that potentially a bad thing if the HDD is still reading/writing? I used to have a Seagate external HDD with a button on the front that I had to hold down for a second or two before it would turn off; it at least appeared to go through a shutdown procedure that stopped the HDD activity before cutting power. So with this enclosure, I'm a little leery about that power switch and what exactly it does. Is this how all HDD enclosures are? EDIT: I'm running the drive in Ubuntu Server, so there is no "ejecting" the drive, lol.
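    A cautious sequence before flipping the switch, as a sketch (assumes the drive appears as /dev/sdb and that udisks2 is available; adjust paths to taste):

        sync                                  # flush pending writes
        sudo umount /mnt/external             # detach the filesystem
        sudo udisksctl power-off -b /dev/sdb  # spin the drive down and power it off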

  • IIS7 response size thresholds

    - by DanielM
    I have a customer who is attempting video playback via HTTP progressive download of very large files (> 1 GB). There is no problem once a file is cached at the edge via my CDN, but hits to my origin (first hits, prior to edge-cache population) experience stalling and loss of sync between audio and video about an hour and a half into playback. This occurs pretty reliably at that point, suggesting that some threshold somewhere is getting hit. Are there IIS configuration knobs governing HTTP response size? Other data points: I am unable to replicate this problem. I am looking at client bandwidth and last-mile issues. I am looking at possible encoding recipe dependencies. But this problem never came up when we were using a "push" cache configuration (CDN-hosted origin), so something funky server-side at my origin seems like the likely culprit. Thanks...

  • MySQL Windows vs. Linux: performance, caveats, pros and cons?

    - by gravyface
    Looking for (preferably) some hard data, or at least some experienced anecdotal responses, with regards to hosting a MySQL database (roughly 5k transactions a day, 60-70% more reads than writes, < 100k of data per transaction, i.e. no large binary objects like images) on Windows 2003/2008 vs. a Debian-based derivative (Ubuntu/Debian, etc.). This server will function only as a database server, with a separate web server on another physical box; this server will require remote access for management (SSH for Linux, RDP for Windows). I suspect that the Linux kernel/OS will compete less with the database for resources than Windows Server would, but of this I can't be certain. There's also the security footprint: even with Windows 2008, I'm thinking the Linux box can be locked down more easily than the Windows Server. Anyone have any experience with both configurations?

  • Can a 32-bit RHEL4 userland work with a 64-bit kernel?

    - by James
    Is there a way to change an i386 RHEL4 machine to run an amd64 kernel, but ensure that it still builds software into the same i386 binaries? On Debian this seems quite straightforward: just install an amd64 kernel (worst case, build one like this guy: http://www.debian-administration.org/users/jonesy/weblog/1) and prefix everything with "linux32". Then everything that considers uname -m is unchanged, and I just need to handle the few cases that consider uname -r. What is the Red Hat equivalent? Is the only way a full 64-bit installation on another disk and then chrooting back to the 32-bit system before anyone builds anything? (Even the best examples of that seem to be Debian-based.) Background: we make a large system that runs on (a variant of) i386 RHEL4. However, some of the larger RHEL build machines now have enough RAM that they might benefit from going 64-bit (for the kernel and maybe some of the bigger build steps). Our build system doesn't support cross-compilation.
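    One hedged pointer: linux32 is typically just an alias for setarch, and setarch ships on Red Hat systems too (util-linux), so something like this might cover the uname -m cases:

        setarch i686 make          # run a build that sees uname -m as i686
        setarch i686 /bin/bash     # or start a whole shell with the 32-bit personality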

  • What hardware makes a good MongoDB server? Where to get it?

    - by João Pinto Jerónimo
    Suppose you're on dell.com right now and you're buying a server to run the MongoDB database for your small startup. You will have to handle literally tens of thousands of writes and reads per minute (but small objects). Would you go for 2 processors? Invest more in RAM? I've heard (correct me if I'm wrong) that MongoDB keeps as much as it can in RAM and then flushes everything to the disk; in that case I should invest in a CPU with a large L2 cache, probably 40 GB of RAM, and a solid-state drive... right? Would I be better off with one high-end server (~$11,309: 2 expensive processors, 96 GB of RAM) or 2x (~$6,419: 2 expensive processors, 12 GB of RAM) servers? Is Dell OK, or do you have better suggestions? (I'm outside the US, in Portugal.)

  • Do virtual machines perform better on the host HDD or USB drive?

    - by Jeremy Ricketts
    The question I'm asking is kind of general, and I'll give more specifics about my particular setup. Here's the main question though: do virtual machines generally perform better on the host HDD, or is it better to run them from an external disk? My specific setup: a MacBook Pro with a nearly full internal SATA drive that spins at 7200 RPM. On this system I'm running large programs like Photoshop and some other RAM-intense applications. I've dedicated 2 of my 8 gigs of RAM to my VMware Fusion virtual machine, which runs Windows 7 and Visual Studio and sits on the same drive. When that thing boots up, my system really starts crawling. I have an external USB drive (specifics of that drive are here) which I'm thinking about moving the VM to. Obviously a USB drive is slower than my internal HDD, but maybe having two operating systems using the same disk is worse than putting one of them on a separate (albeit slower) disk. Is this a bad idea?
