Search Results

Search found 16838 results on 674 pages for 'writing patterns dita cms'.

  • How can I automatically move files based on their names?

    - by Pasha
    I have 13 folders containing scanned photographs. Some photographs have been renamed to the date on which they were taken, resulting in a YYYY.MM.DD.tif name; it could also be YYYY.MM.DD (###).tif, where ### is just a number. Others are simply named IMG_###.tif. I would like to move the files with the YYYY.MM.DD name into a YYYY\MM\DD folder structure. While the files are being moved, I would also like to append the original folder name to the end of the file name, so a file 01\2012.06.26 (1).tif should end up as 2012\06\26\2012.06.26 (1) - 01.tif. Is there a Windows tool that can help me with this, or do I need to resort to writing a custom app?
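
    If it does come down to a custom app, the move-and-rename logic is only a few lines. Below is a rough Python sketch of the idea; the SOURCE and DEST paths are placeholders, and it assumes the numbered scan folders (01, 02, ...) sit directly under SOURCE:

      import os
      import re
      import shutil

      SOURCE = r"C:\scans"    # hypothetical: contains the numbered folders, e.g. C:\scans\01
      DEST = r"C:\sorted"     # hypothetical: receives the YYYY\MM\DD tree

      DATED = re.compile(r"^(\d{4})\.(\d{2})\.(\d{2})( \(\d+\))?\.tif$", re.IGNORECASE)

      for folder in os.listdir(SOURCE):
          src_dir = os.path.join(SOURCE, folder)
          if not os.path.isdir(src_dir):
              continue
          for name in os.listdir(src_dir):
              m = DATED.match(name)
              if not m:                          # leaves IMG_###.tif files alone
                  continue
              year, month, day = m.group(1), m.group(2), m.group(3)
              stem, ext = os.path.splitext(name)
              target_dir = os.path.join(DEST, year, month, day)
              os.makedirs(target_dir, exist_ok=True)
              # 01\2012.06.26 (1).tif  ->  2012\06\26\2012.06.26 (1) - 01.tif
              shutil.move(os.path.join(src_dir, name),
                          os.path.join(target_dir, f"{stem} - {folder}{ext}"))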

  • How to open a page in Chrome from the command line, in a new tab or an existing tab as appropriate?

    - by MattH
    I'm looking for a way to open a given page in Google Chrome from the command line, with the following behaviour: If the given page is already open in a tab, navigate to that tab If the given page is not already open in a tab, open the page in a new tab and show that tab Currently when I open a URL from the command line (e.g. using "open http://godzillahaiku.tumblr.com" on Mac OS X), Chrome will always open the URL in a new tab. I end up with lots of duplicate tabs as a result, which is a minor annoyance. I'm looking for a solution that works on Mac OS X, but a non-OS specific solution would be preferable. I'd consider writing a Chrome extension for this if there's no existing solution.
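
    One avenue short of a full extension: if Chrome is started with --remote-debugging-port=9222, its DevTools HTTP interface can list, activate and create tabs, which is enough for the open-or-focus behaviour. A rough Python sketch of the idea follows; the port is an assumption, and newer Chrome builds expect PUT rather than GET for /json/new:

      import json
      import sys
      import urllib.request

      BASE = "http://localhost:9222"   # assumes Chrome was launched with --remote-debugging-port=9222

      def open_or_focus(url):
          with urllib.request.urlopen(BASE + "/json") as resp:   # list open targets
              targets = json.load(resp)
          for t in targets:
              if t.get("type") == "page" and t.get("url") == url:
                  # Already open: bring that tab to the front.
                  urllib.request.urlopen(BASE + "/json/activate/" + t["id"]).read()
                  return
          # Not open yet: ask Chrome for a new tab showing the URL.
          req = urllib.request.Request(BASE + "/json/new?" + url, method="PUT")
          urllib.request.urlopen(req).read()

      open_or_focus(sys.argv[1])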

  • Cannot do sudo: "/etc/sudoers is mode 0740, should be 0440"

    - by dehmann
    I have a problem: I don't have a root password on my Mac. I just have an Admin account, which can do things using sudo. Now, I wanted to add my normal user to the /etc/sudoers file as well. Since it did not let me write to that file (even when writing with sudo), I did this:
      sudo chmod u+w /etc/sudoers
    That worked. But since then I can't run any sudo command on my system. It complains that /etc/sudoers has the wrong mode:
      $ sudo touch /etc/sudoers
      sudo: /etc/sudoers is mode 0740, should be 0440
      Segmentation fault
    It sounds like a bad joke, because now I can't even change the mode back to 0440:
      $ sudo chmod 0440 /etc/sudoers
      sudo: /etc/sudoers is mode 0740, should be 0440
      Segmentation fault
    Is there any way to fix this situation? I need to get my sudo abilities back.

  • VMware ESXi 4 On-Disk Data Deduplication - possible and supported?

    - by hurikhan77
    Environment: We are running multiple web, database, and application servers which usually share a pretty common installation (Gentoo Linux) and a similar configuration on VMware ESXi 4. The differences are usually only some installed features or differing component versions. To create a new server, I usually choose the most similar (by features) running server, rsync a copy of it into freshly mounted filesystems, run grub, reconfigure and reboot. Problem: Over time this duplicates many on-disk data blocks, which probably sum up to several tens of gigabytes. I suppose if I could use a base system as a template, with the actual machines layered on top of it and only writing changed blocks to some sort of "diff image", performance should improve (increased cache hit rate) and storage efficiency should increase (deduplicated storage space). This would be similar to what ESXi already supports for RAM deduplication (page sharing). Question: Is there any way to easily do this on ESXi 4? I already share the portage tree via NFS, but this would not work for the rootfs.

  • Finding the file that is on a bad block on an HFS+ volume (debugfs for HFS+)

    - by Blair Zajac
    I have a drive in our iMac that has bad blocks; booting from an Ubuntu 11.10 live CD and running ddrescue -f /dev/sda /dev/null finds them. I'd like to get the drive to remap them by writing to the blocks, say using hdparm --write-sector, but I don't want to do this without knowing what's in those blocks and finding the file that owns them, so I can restore the file from another source. I found fileXray but don't feel like spending $79 to map a block to a file, and hfsdebug has been taken offline. Are there any suggestions on a tool or technique to use? I looked at all the Ubuntu HFS+ packages to see if they could provide this info, but nothing jumped out at me. BTW, I used Disk Utility to erase the empty space, but it didn't get any of the bad blocks remapped, according to smartctl -A.

  • IF commands in a batch file

    - by Rossaluss
    I'm writing a small batch file to replace users' themes and charts in Office, and I have the batch file below, which works just fine.
      cd c:\documents and settings\%username%\application data\microsoft\templates
      echo Y|rmdir charts /s
      mkdir charts
      echo Y|del "c:\documents and settings\%username%\application data\microsoft\templates\document themes\*.*"
      net use o: \\servername\sms
      copy "o:\ppt themes\charts\*.*" "c:\documents and settings\%username%\application data\microsoft\templates\charts"
      copy "o:\ppt themes\Document Themes\*.*" "c:\documents and settings\%username%\application data\microsoft\templates\document themes"
      c:
      net use o: /delete
    Now what I want is for the above to run only if it hasn't run before, as we'll be pushing this out to all users for around 2 weeks to catch people who aren't in every day. Is there any way to begin the script with something that looks for one of the new themes/charts already pushed down and, if it's present, has it not run? Any help on this would be greatly appreciated, as I'm pretty new to these batch files.

  • What would be better in my case - Apache, nginx or lighttpd?

    - by The Devil
    Hey everybody, I'm writing a PHP site that's expected to get about 200-300 concurrent users browsing it. On initialization the application will load about 30 PHP classes, some 10 to 15 images, and a couple of CSS files. So my question is: what else can I do (besides optimizing my code and using APC/eAccelerator for PHP) to get as close as possible to those numbers of concurrent users? Currently we haven't chosen a server for the site to be hosted on, but most probably it'll be a dual-core VPS with 2 or maybe 4 GB of RAM. Is it possible for such a server to handle that load? Also, how could I test it myself and be sure that it'll be able to handle it? Thanks in advance, Me

  • Can Windows 7, Vista, or XP notify me after 30 minutes, or at 2:30pm?

    - by Jian Lin
    Come to think of it, across Windows 3.0, Windows 95, 2000, ME, XP, Vista, and Windows 7, has any version of Windows had the capability of giving me a "beep beep" notification, let's say when I need to go meet somebody in 30 minutes? Or giving a "beep beep" at 2:30pm? I hope to hear some sound instead of seeing a pop-up window, as I may be writing something at the desk instead of looking at the computer. I usually don't want to install a 3rd-party app for this purpose, as you never know what the app does or how trustworthy it is if it is not a popular app (like Firefox or Safari). Does any version of Windows come with that capability? I'd imagine it is an app that takes two days to write.
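
    Task Scheduler (schtasks) can run a program at a fixed time like 2:30pm, but there is no obvious built-in "beep after 30 minutes" timer in those Windows versions. For what it's worth, the relative-delay part of the "two-day app" is closer to a few lines; a sketch in Python on Windows, using the standard-library winsound module (the frequency and beep pattern are arbitrary):

      import time
      import winsound   # Windows-only standard-library module

      def beep_in(minutes):
          time.sleep(minutes * 60)       # wait out the interval
          for _ in range(2):             # "beep beep"
              winsound.Beep(1000, 400)   # 1000 Hz for 400 ms
              time.sleep(0.3)

      beep_in(30)   # remind me in 30 minutes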

  • A website hosted on the 1.0.0.0/8 subnet, somewhere on the Internet?

    - by Dave Markle
    Background: I'm attempting to demonstrate, using a real-world example, why someone would not want to configure their internal network on the 1.0.0.0/8 subnet. Obviously it's because this is not designated as private address space. As of 2010, IANA has allocated 1.0.0.0/8 to APNIC (the Asia-Pacific NIC), which seems to have begun assigning addresses in that subnet, though not in 1.1.0.0/16, 1.0.0.0/16, and others (because these addresses are so polluted by bad network configurations all around the Internet). My question is this: I'd like to find a website that responds on this subnet somewhere and use it as a counter-example, demonstrating to a non-technical user its inaccessibility from an internal network configured on 1.0.0.0/8. Other than writing a program to sniff all ~16 million hosts, looking for a response on port 80, does anyone know of a directory I can use, or better yet, does anyone know of a site that's configured on this subnet? WHOIS seems to be too general a search for me at this point...
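
    For reference, the brute-force approach the question mentions is straightforward to sketch, even if probing all ~16 million addresses from one machine is impractical. A minimal Python sketch that checks a handful of sample addresses for something answering on port 80 (the addresses listed are arbitrary examples, not known web servers):

      import socket

      def answers_on_port_80(ip, timeout=1.0):
          try:
              with socket.create_connection((ip, 80), timeout=timeout):
                  return True
          except OSError:
              return False

      for host in ("1.2.3.4", "1.12.34.56", "1.200.7.8"):   # arbitrary sample addresses in 1.0.0.0/8
          print(host, "responds" if answers_on_port_80(host) else "no response")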

  • Subversion hooks no longer running

    - by Chris Lieb
    I don't know when this started happening, but, for some reason, none of my Subversion hooks are running anymore. I am running Subversion 1.6.9 on a Gentoo Linux machine, which has had its hooks work in the past. I am running Subversion through the svn_dav module for Apache 2.2. I modified the hook scripts that I make use of to write into a file in the /tmp directory owned by apache:apache whenever they are executed, but after making a commit, there is nothing in the file that should be written to. The scripts are executable and owned by apache:apache, so I don't think that is the issue. Here is one of my test scripts (post-commit.sh) that isn't getting executed:
      #!/bin/sh
      /bin/echo post-commit >> /tmp/z_test
      exit 0
    After running a commit, I expect both the pre-commit.sh and post-commit.sh hooks to be run, but neither of them appear to be writing into the desired file (/tmp/z_test). What's going on?

  • Mapping an sFTP connection to a Windows drive?

    - by Nicolas
    I'm looking for a way to map an sFTP connection to a Windows (Vista) drive. In other words, a tool that would add a new drive (let's say N:) to my computer that points directly to my remote server via sFTP. That way, "N:\my_dir\file.txt" would actually be something like "/home/user/my_dir/file.txt" on the remote server. Reading the file on Windows would download it, and writing content to it would upload it, with the network transfers made via sFTP. I'm aware of Novell NetDrive, but it has various issues with long filenames, and seems to corrupt UTF-8 file content depending on the BOM. Do you know of any reliable alternative? Thanks! Edit: I have complete control of the remote server, except that it's remote enough that I can't physically access it.

  • Gracefully shut down an external HDD enclosure?

    - by Jakobud
    I recently purchased a large HDD along with the following HDD enclosure: http://www.newegg.com/Product/Product.aspx?Item=N82E16817173043 It has a simple on-off switch on the back. When I want to turn this thing off, do I simply flip the switch? I assume the switch just kills the power to the HDD, but isn't that potentially a bad thing if the HDD is still reading/writing? I used to have a Seagate external HDD that had a button on the front which I had to hold down for a second or two before it would turn off; it at least appeared to go through a sort of shutdown procedure, probably stopping the HDD activity before cutting power. So with this external HDD, I'm a little leery about that power switch and want to understand exactly what it does. Is this how all HDD enclosures are? EDIT: I'm running the drive in Ubuntu Server, so there is no 'ejecting' the drive lol

  • Why do Windows 7 and 8 have different default behaviour when trying to modify the contents of a protected folder?

    - by Ben
    Here's the situation: I have a Windows 7 PC and a Windows 8 PC, and I'm logged in as the same domain user on both machines. My domain user is in the local Administrators group on both. When I run cmd.exe on each machine and then attempt to do this (on both machines):
      mkdir "c:\Program Files\cheese"
    the Windows 8 PC gives an "Access Denied" error, while it works fine on the Windows 7 PC. I understand that C:\Program Files is a protected folder and I'm not interested in a debate on the morals of writing to such a folder directly. But I am interested in understanding what exactly has changed in Windows 8 to cause this. I don't seem to be able to find anything that acknowledges or explains this change in behaviour in Windows 8.

  • Understanding mount -o bind

    - by Ionut
    A few questions after the following commands:
      mount -o bind /new_disk/home/user/ /home/user/
      mount -o bind --no-mtab /new_disk/home/user/ /home/user/
    What is the difference between the two commands, other than "Mount without writing in /etc/mtab. This is necessary for example when /etc is on a read-only filesystem."? What is the difference between mount -o bind and mount --bind, if there is one? And let's suppose I don't know there is a partition mounted using -o bind --no-mtab: where can I find out whether any mount point uses bind? The only way I can detect this is grep user /proc/mounts, but in that line there is no info about bind. Thank you.
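
    On the last point, /proc/self/mountinfo is more helpful than /proc/mounts: its fourth field records which subtree of the source filesystem was mounted, so a bind mount of a subdirectory shows a root other than "/" even when mtab was skipped (a bind of an entire filesystem root is indistinguishable this way). A minimal Python sketch, assuming a Linux /proc:

      def likely_bind_mounts(path="/proc/self/mountinfo"):
          hits = []
          with open(path) as f:
              for line in f:
                  fields = line.split()
                  root, mount_point = fields[3], fields[4]   # subtree of the source fs, and where it is mounted
                  if root != "/":
                      hits.append((mount_point, root))
          return hits

      for mount_point, root in likely_bind_mounts():
          print(f"{mount_point} is bound to subtree {root}")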

  • Is there a test to see if hardware virtualization (vmx / svm) is presently enabled within a Linux session?

    - by Dr. Edward Morbius
    I'm writing procedures for configuring VirtualBox support for 64-bit SMP guests, which requires hardware virtualization support (VT-x for Intel, AMD-V for AMD). I have successfully configured this myself; however, I'd like the procedure to be clear.
      sed -ne '/^flags/s/^.*: //p' /proc/cpuinfo | egrep -q '(vmx|svm)' && echo Has hardware virt || echo No HW virt
    ... shows if the CPU is capable. I still have to go enable the feature in the BIOS. Is there any way to test from within Linux to see whether this is on or not? Thanks. (Edit: s/xvm/svm/ in title)
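
    A small sketch of one way to check this from a running session: the flags check mirrors the sed/egrep pipeline above (CPU capability only), and the /dev/kvm check is an assumption-laden proxy for "enabled in the BIOS", since the kvm-intel/kvm-amd modules refuse to load when VT-x/AMD-V is disabled there:

      import os

      def cpu_has_virt_flags():
          with open("/proc/cpuinfo") as f:
              return any(
                  line.startswith("flags") and ({"vmx", "svm"} & set(line.split()))
                  for line in f
              )

      def kvm_device_present():
          return os.path.exists("/dev/kvm")   # only meaningful if the kvm module has been loaded/tried

      print("CPU capable:", cpu_has_virt_flags())
      print("Likely enabled in BIOS (KVM device present):", kvm_device_present())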

  • One-Way Backup Service? [closed]

    - by Jon Rodriguez
    Up until a month ago, my girlfriend used MobileMe to back up all the files on her MacBook. This turned out terribly when a quirk of MobileMe caused it to erase all of her files on MobileMe, and then sync the newly-erased MobileMe down to her computer, erasing everything. A week's worth of college essays and CS homework were gone. Now, I am terrified of any commercial cloud-backup solutions because of the possibility of this happening. Going off the list provided in these answers, could you please help me find a good backup service that is completely one-way? I want a service where there is literally not a single line of code that has the possibility of writing to my computer's drive. I want a pure one-way backup service.

  • What does dd conv=sync,noerror do?

    - by dding
    So, in what case does adding conv=sync,noerror make a difference when backing up an entire hard disk onto an image file? Is conv=sync,noerror a requirement when doing forensic work? If so, why is that the case, with reference to Linux (Fedora)? Edit: OK, so if I do dd without conv=sync,noerror and dd encounters a read error while reading a block (let's say the block size is 100M), does dd just skip that 100M block and read the next one without writing anything (dd with conv=sync,noerror writes zeros for that 100M of output, so what happens in this case)? And would the hash of the original hard disk and the output file differ if done without conv=sync,noerror, or only when a read error occurred?
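
    A toy illustration of the difference (this is a model of the behaviour, not dd's actual code): with noerror alone the unreadable block is dropped and everything after it shifts earlier in the image, while noerror,sync pads the failure with NUL bytes so later data keeps its original offset. Either way the image hash differs from the source wherever padding or skipping replaced unreadable data:

      BLOCK = 4                                   # toy block size
      disk = [b"AAAA", None, b"CCCC", b"DDDD"]    # None marks a block that fails to read

      def image(conv_sync):
          out = b""
          for block in disk:
              if block is None:                   # read error
                  if conv_sync:
                      out += b"\x00" * BLOCK      # noerror,sync: pad with zeros, offsets preserved
                  continue                        # noerror alone: write nothing, later data shifts
              out += block
          return out

      print(image(conv_sync=False))   # b'AAAACCCCDDDD'                  (CCCC slides into the gap)
      print(image(conv_sync=True))    # b'AAAA\x00\x00\x00\x00CCCCDDDD'  (offsets preserved)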

  • Download folders from dev server to local drive

    - by Niall Collins
    I am developing a .NET web application in a local environment. I have a dev server that the application is installed on. Within the web application on the dev server I have four folders that I don't have locally and that are controlled by another application. In my day-to-day development I need the four folders on my local PC. I would like to automate the process of pulling the folders from the dev server to my local drive, so I can keep things in sync. Ideally something like this: run a file from the main folder (be it a bat file, PowerShell, some sort of job; open to recommendations) that downloads the 4 folders supplied to it. The first download brings everything down; from then on, only the changes are pulled. Not sure where to start with achieving this, but I would appreciate any help with it. I know there are apps out there that do something like this, but I would like to have a go at writing something to do it before I resort to using one of them.
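
    One way to have a go at it: a small Python sketch that mirrors each server folder down and copies only files that are missing locally or look newer on the server. The UNC share and local paths below are made up, and robocopy /MIR per folder would achieve much the same in one line each:

      import os
      import shutil

      PAIRS = [
          (r"\\devserver\webapp\folder1", r"C:\work\webapp\folder1"),   # hypothetical paths
          (r"\\devserver\webapp\folder2", r"C:\work\webapp\folder2"),
      ]

      def pull(src_root, dst_root):
          for dirpath, _dirs, files in os.walk(src_root):
              rel = os.path.relpath(dirpath, src_root)
              target_dir = os.path.join(dst_root, rel)
              os.makedirs(target_dir, exist_ok=True)
              for name in files:
                  src = os.path.join(dirpath, name)
                  dst = os.path.join(target_dir, name)
                  src_stat = os.stat(src)
                  # Copy when missing locally, or when size/mtime say the server copy changed.
                  if (not os.path.exists(dst)
                          or os.path.getsize(dst) != src_stat.st_size
                          or os.path.getmtime(dst) < src_stat.st_mtime):
                      shutil.copy2(src, dst)

      for src_root, dst_root in PAIRS:
          pull(src_root, dst_root)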

  • fstab line to auto-mount a drive that all users can read/write

    - by evilblender
    I have installed a cable that connects the motherboard's SATA connection to a removable drive's eSATA connection. I would like to be able to swap drives on the eSATA connection and have all users be able to read and write to these drives. I have created the directory /archive/ where I would like the drive(s) to mount. The drives are all formatted FAT32, but in the future I may use HFS for formatting. When I used the command (as root):
      mount /dev/sdc1 /archive
    the drive was mounted, but read-only. What can I use in my /etc/fstab file that will allow these drives to be mounted and unmounted by all users on the system, both for reading and writing? Also, will I be able to mount and unmount these drives without shutting down, or will I need to reboot every time I want to change drives? Thank you. Jeff

  • Make shortened and long URLs play together on the same domain (RewriteRule).

    - by Renato Renato
    Long story short, I want to have both example.com/aJ5 and example.com/any-other-url working together. I'm using Apache and I'm not very good at writing regexes. I already have a global RewriteRule which sends everything to the app entry point. What I need is to tell Apache that if length($path) is <= 5 chars, then it should rewrite to another location. I know I can use {1,5}-style syntax in a regex, but don't really know if that's what I'm looking for. I'd like to implement this at the web-server level rather than the PHP level. Any help is appreciated.
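
    The {1,5} quantifier is indeed the usual way to say "between 1 and 5 characters", as long as the pattern is anchored at both ends so it cannot match just a prefix of a longer URL. A quick sketch of the idea in Python's regex syntax (close enough to the PCRE used by mod_rewrite for this purpose):

      import re

      short_path = re.compile(r"^/[^/]{1,5}$")   # whole path is 1-5 non-slash characters

      for path in ("/aJ5", "/abcde", "/abcdef", "/any-other-url"):
          print(path, "-> short" if short_path.match(path) else "-> long")
      # /aJ5 and /abcde are treated as short codes; the other two fall through as long URLs.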

  • FoxPro 2.6 DOS on Windows 7 64-bit

    - by Rolando
    I support a company that has a very old, mission-critical FoxPro for DOS 2.6 (FPD) application. For various reasons the company didn't adapt or migrate their app, which, ironically, has been running even better under Windows XP (and 32-bit Win7) because the OS allowed new features like more reliable networking, distributed printing, and email integration. Unfortunately for this company, most new machines now come with a 64-bit version of Windows 7, which is incompatible with their FPD app. I know this time the writing is on the wall: the only long-term solution is to migrate their app. But I wonder if anyone can suggest a temporary alternative path that doesn't involve either downgrading 64-bit Windows to 32-bit or running the app on a virtualized 32-bit XP.

  • Dynamic VPN tunneling technologies

    - by Adam
    Ok, so I'm asking a more specific question this time. I'm writing a paper about Cisco's DMVPN, and one of the tasks I have is to analyze the available network solutions which use dynamic VPN tunnels. Because the paper is about DMVPN, I have to compare those solutions to it. I know there are a lot of dynamic tunneling technologies, but I'm looking for ones that can be compared to DMVPN. So the question is: are there any technologies which use dynamic VPN tunnels (not necessarily using crypto) that can be compared to DMVPN? What are those technologies?

  • How can I improve performance over SMB/CIFS for an application that has poor write speeds?

    - by Jeremy
    I have a third party application that reads several large files and generates a third large file. Its performance is quite good when the generated file is stored on "local storage", i.e. either a direct attached or iSCSI-based disk. The source files that are read can be stored remotely on our NAS and accessed via SMB with little effect on performance. However, if we attempt to write the target file to any kind of SMB/CIFS share (Samba or Windows Server) the performance drops almost ten-fold. This is unacceptably slow in our case. Writing files to network shares is not otherwise slow. I can copy large files to SMB shares and get great performance - near what I would expect is possible given the disks and network in question. I have a theory that this application's problem with SMB shares has something to do with a lack of write caching over the share and perhaps lots of network roundtrips. Is this possible and is there anything that can be done about it?
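
    One way to test the roundtrip theory is to time the same amount of data written as many small, flushed writes versus a few large buffered ones, against both a local disk and the share. A rough Python sketch; the target paths and sizes are placeholders:

      import os
      import time

      def write_speed(path, total_mb=64, chunk_kb=4, flush_each=True):
          """Write total_mb of zeros in chunk_kb pieces and return MB/s."""
          chunk = b"\0" * (chunk_kb * 1024)
          count = (total_mb * 1024) // chunk_kb
          start = time.time()
          with open(path, "wb") as f:
              for _ in range(count):
                  f.write(chunk)
                  if flush_each:
                      f.flush()
                      os.fsync(f.fileno())   # force each write out, exaggerating per-write roundtrips
          elapsed = time.time() - start
          os.remove(path)
          return total_mb / elapsed

      for target in (r"D:\tmp\probe.bin", r"\\nas\share\probe.bin"):   # placeholder paths
          print(target, "small flushed writes :", round(write_speed(target), 1), "MB/s")
          print(target, "large buffered writes:",
                round(write_speed(target, chunk_kb=1024, flush_each=False), 1), "MB/s")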

  • Concurrent modification during backup: rsync vs dump vs tar vs ?

    - by pehrs
    I have a Linux log server where multiple applications write data. Data is written in bursts, and to a lot of different files. I need to make a backup of this mess, preferably preserving as much coherence between the file versions as possible and avoiding truncated files. The total amount of data on the server is about 100 GB. What I would really want (but can't do) is to shut down, back up the system cold, and then start it up again. What kind of guarantees against concurrent modification do the various backup tools give? When do they "freeze" the file versions? I am looking at rsync, dump and tar at the moment, but I am open to other (open source) alternatives. Changing the application or blocking writing for backups is sadly not an option. The system is not running LVM (yet), but I have considered it for rebuilding the system and then using snapshots.

  • Speed up file access on home network

    - by kurasa
    I have 2 PCs (Windows 7 Ultimate) and a Mac running Windows 7 under VMware Fusion on my home network, tied together using a NETGEAR WRN1000 router. On one of the PCs I have a set of files (MYOB .myo). These use a data source to access the data in the files. Operations (reading, writing) on the .myo files from the PC which hosts them are fine, but from the other two machines they are painfully slow and unreliable, and I am wondering what I can do to speed this up. Some ideas I have are:
    1. Turn off the Windows firewall on all the Windows installations on the home network.
    2. Buy another router, specifically one where I can connect a USB flash drive on the back, put the .myo files on the flash drive, and have all the PCs access the files from the USB flash drive on the router (does this speed things up?).
    Any advice greatly appreciated on how I can speed up this access to the data.
