Search Results

Search found 7902 results on 317 pages for 'structure'.

Page 228/317 | < Previous Page | 224 225 226 227 228 229 230 231 232 233 234 235  | Next Page >

  • CentOS centralised logging, syslogd, rsyslog, syslog-ng, logstash sender?

    - by benbradley
    I'm trying to figure out the best way to set up a central place to store and interrogate server logs: syslog, Apache, MySQL etc. I've found a few different options but I'm not sure which would be best. I'm looking for something that is easy to install and keep updated on many virtual machines. I can add it to a VM template going forward, but I'd also like it to be easy to install to keep the VM complexity down. The options I've found so far are:
      - syslogd
      - syslog-ng
      - rsyslog
      - syslogd/syslog-ng/rsyslog to logstash/ElasticSearch
      - logstash agent in each log "client" to send to Redis/logstash/ElasticSearch
      - and all sorts of permutations of the above.
    What's the most resilient and lightest option from the log "client" perspective? I'd like to avoid the situation where log "clients" hang because they are unable to send their logs to the logging server. I would also still like to keep local logging and the rotation/retention provided by logrotate in place. Any ideas/suggestions, or reasons for or against any of the above? Or suggestions of a different structure entirely? Cheers, B
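
    One pattern that keeps the client side both light and resilient is plain rsyslog forwarding with a disk-assisted queue: the client keeps writing its normal local files (so logrotate stays untouched) and buffers to disk whenever the central server is unreachable instead of blocking. A minimal sketch for the client end of rsyslog.conf - the hostname and queue size are placeholders, so check them against the rsyslog version shipped in the template:

      # keep the distribution's local logging rules above this point, then forward everything
      $ActionQueueType LinkedList            # in-memory queue for the forwarding action...
      $ActionQueueFileName fwd_central       # ...spooled to this file when it grows or at shutdown
      $ActionQueueMaxDiskSpace 1g
      $ActionQueueSaveOnShutdown on
      $ActionResumeRetryCount -1             # retry forever rather than dropping the action
      *.* @@loghost.example.com:514          # @@ = TCP, a single @ would be UDP

    The central box can then be rsyslog or syslog-ng writing to files, or feeding logstash/ElasticSearch, without the clients having to care.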

    Read the article

  • How does one open multiple tabs in TextWrangler?

    - by Closure Cowboy
    No, I'm not bluffing. I really can't figure this out. The setup: I went to File -> Open, and then selected a directory rather than a file. As expected, a directory tree opened on the left side of my document. Hooray! I can easily view the files' structure in my Rails project. So, I make a few changes in a file, and then I click on a different file in the directory tree. My problem: TextWrangler then asks me whether I want to save my changes. Huh? I say "No", and the new document doesn't open at all. Great. I try hitting Command+N (new document). A new window opens. Ughhhh. How the heck do I open documents in a new tab? Note: I have set the "New & opened documents" behavior to "Open in the front window". This does not change the behavior described (i.e. when a directory is opened rather than a single file).

    Read the article

  • Diagnosing a BSOD involving USB

    - by David Ebbo
    [Running Win7 Ultimate 64 bit] My new HP Pavilion Elite HPE-450t has been plagued by BSOD crashes since I got it about 5 weeks ago. The crashes are somewhat rare, sometimes not occurring for 3 or 4 days. I have spent a lot of time trying to isolate the device that could be at fault, but I have seen crashes with only the keyboard and mouse plugged in (as USB devices), and I tried two sets of keyboard/mouse, so I'm running out of ideas. :( The WhoCrashed tool gave this info about my latest BSOD:
      crash dump file: C:\Windows\Minidump\121310-11887-01.dmp
      This was probably caused by the following module: usbport.sys (USBPORT+0x2DE4E)
      Bugcheck code: 0xFE (0x5, 0xFFFFFA8008F571A0, 0x80863B34, 0xFFFFFA80092F2510)
      Error: BUGCODE_USB_DRIVER
      file path: C:\Windows\system32\drivers\usbport.sys
      product: Microsoft® Windows® Operating System
      company: Microsoft Corporation
      description: USB 1.1 & 2.0 Port Driver
      Bug check description: This indicates that an error has occurred in a Universal Serial Bus (USB) driver. The crash took place in a standard Microsoft module. Your system configuration may be incorrect. Possibly this problem is caused by another driver on your system which cannot be identified at this time.
    I looked at http://msdn.microsoft.com/en-us/library/ff560407(VS.85).aspx, and for Parameter1 = 0x5 it says "A hardware failure has occurred due to a bad physical address found in a hardware data structure. This is not due to a driver bug". Should I conclude that it's a hardware issue in the machine itself, rather than a bad USB driver or USB device? Here is the minidump, in case someone can get more info out of it: http://ewt52q.blu.livefilestore.com/y1peS4Ce8nSK1SXghzMDoxDWXlaEu-EKCJsv25y8y5DXXIUzZ9U0_tYgFJXd939fykwa0zRmx98IW0PYG18GioqKAuARYjtspSA/121310-11887-01.dmp?download&psid=2
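
    For anyone who wants to dig deeper than WhoCrashed, the usual route is to open the minidump in the Debugging Tools for Windows and let the analyzer name the driver stack that submitted the failing USB request. A rough sketch of such a session (standard WinDbg commands, nothing specific to this particular dump):

      windbg -z C:\Windows\Minidump\121310-11887-01.dmp
      .symfix        $$ use the public Microsoft symbol server
      .reload
      !analyze -v    $$ verbose bugcheck analysis, including the driver that faulted

    If !analyze -v keeps pointing at usbport.sys with Parameter1 = 0x5 across several dumps, that lines up with the MSDN wording about a bad physical address in a hardware structure, i.e. the controller or board rather than a particular USB device.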

    Read the article

  • Libraries merged folder views

    - by Stigma
    So I pretty much love the Windows 7 Libraries feature, and saw one use for them that I thought would be perfect, but I can't seem to manage it. Basically, a merged view of different folder structures. Suppose I make a new generic library and add three locations to it: C:\Test\, D:\Test\ and D:\temp\Test\. Now, these may look somewhat okay as long as there are no duplicates in these folders. (It wants to group them based on the included directory, which one can work around by looking on google - I don't have the precise trick on hand I am afraid.) But when you get collisions and, say, two of those directories have a Sub directory in them, stuff becomes unusable (assuming Arrange by: Folder view). You'll have multiple folders listed named Sub, which is pretty useless when looking for data. I want folders to get 'merged', which ought to be possible somehow since it can create these merged views based on artist, album etc in other views. So all subdirectories that are double (and recursively checking for doubles inside those, etc) ought to be merged for as far the View is concerned. If files have a collision, I don't really care what happens - hide one, show both, filter out duplicates, whatever. (Although an option would be nice...) Anyhow, is there anyone who knows how to get such a 'merged folder structure' functionality for Libraries? It would be really useful for me.

    Read the article

  • RHEL 5.2 installing on ProLiant BL460c - hangs at 'now booting the kernel'

    - by Dr Rocket Mr Socket
    As the title states, I have a problem. This server and the installation disc are also on the other side of the world to me, so... So far, I have tried to start the install with the parameters:
      linux text noapic noacpi no=apic no=acpi
    which results in the same hang. I have also disabled a PCI ethernet adapter, but I am uneasy about disabling the onboard ethernet adapter because I do not know if iLO uses it. Anyone have any advice? Much appreciated.
    EDIT: full output after trying to begin the installation:
      boot: linux text
      Loading initrd.img..................
      Loading vmlinuz.......
      Uncompressing Linux...done.
      Now booting the kernel
    It stays on this for hours.
    EDIT2: adding the 'mem=40960M' parameter (the server has 40 GB of RAM) allows it to proceed, but gives the following output directly after 'Now booting the kernel':
      Memory: sized by int13 0e801h
      initrd extends beyond end of memory (0x00ef2090 > 0x00000000)
      disabling initrd
      Console: 16 point font, 400 scans
      Console: colour VGA+ 80x25, 1 virtual console (max 63)
      pcibios_init : BIOS32 Service Directory Structure at 0x000ffee0
      pcibios_init : BIOS32 Service Directory entry at 0xf0000

    Read the article

  • Error setting up Data Protection Manager 2010 Agents / Network "Unauthenticated" in network settings

    - by Bowsa
    I'm not sure if the two are connected, but I suspect they are. Basically I'm trying to set up Data Protection Manager 2010 on a fresh install of Server 2008 R2 in an SBS 2003 domain. Everything went fine until I tried to install agents across the network. Upon clicking add, I get the following error message:
      Unable to connect to the Active Directory Domain Services Database. Make sure that the DPM server is a member of a domain and that the controller is running. Also verify that there is network connectivity between the DPM server and the domain controller. ID: 7
    As usual (worryingly), the MSDN support for 2010 products is nearly non-existent; clicking the error ID simply gives a page-not-found error. So after 2 days of Googling and trying various fixes (DNS settings, adding permissions to AD objects, rejoining the domain and many more) I thought I'd ask here in the hope that someone out there has had this issue before. Any help greatly appreciated! Some further info: Firewalls are disabled on the Server 2008, SBS, and client machines. Manually installing and adding the client also fails, as the DPM server tries to contact the DC first.
    Edit: I tried creating a new protection group instead, and it gives a different error upon adding the machines:
      Following machines are not found in AD: COMPUTERNAME.COMPANYNAME.LOCAL
    Is there a certain directory structure it follows in AD?
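
    Since both errors are really about the DPM box failing to locate or query a domain controller, a couple of stock locator checks run from the DPM server can narrow down whether this is DNS, the secure channel, or something DPM-specific. These are generic AD troubleshooting commands, not DPM ones, and the domain name below is just the placeholder from the error message:

      nltest /dsgetdc:COMPANYNAME.LOCAL
      nslookup -type=SRV _ldap._tcp.dc._msdcs.COMPANYNAME.LOCAL

    If the SRV lookup fails, the 2008 R2 box is probably not using the SBS server (the only DNS that knows the domain) as its DNS server, which would produce exactly the "unable to connect to AD" symptom.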

    Read the article

  • Samba Public and "Dropbox" folder

    - by cb0
    I want to create a special group directory structure for my users:
      /home/groupA                <-- home folder for groupA where every user of groupA can create/edit/delete files
      /home/groupA/Public         <-- public folder where every user can read files
      /home/groupA/Public/Dropbox <-- folder where every user can write files, but only users of groupA can access this directory and create/edit/delete files
    Now I have:
      4 drwxrwx--t 10 nobody groupA 4096 Feb 18 15:44 /home/groupA
      4 drwxrwxr-x  7 nobody groupA 4096 Feb 18 15:40 /home/groupA/Public/
      4 drwxrwx-w- 10 nobody groupA 4096 Feb 18 15:55 /home/groupA/Public/Dropbox
    My smb.conf contains the following entries:
      [groupA]
      path = /home/groupA
      comment = Folder for users of groupA
      browseable = yes
      read only = no
      create mask = 0770
      directory mask = 0770
      force group = groupA

      [groupA Public]
      path = /home/groupA/Public
      comment = Admins Public
      Browseable = yes
      write list = @groupA
      create mask = 0775
      directory mask = 0775

      [groupA DropBox]
      path = /home/groupA/Public/Dropbox
      comment = groupA Dropbox
      read only = no
      valid users = @groupA
      browseable = yes
      inherit owner = yes
      directory mode = 3770
      force directory mode = 3770
    The working part is: users of groupA can access and fully use /home/groupA, and all users can access /home/groupA/Public/ and read the files there. My problem is that all users, even those who are not in groupA, can access /home/groupA/Public/Dropbox and see all files. I just want them to be able to put files in there, but not see the contents of the folder, and prohibit them from deleting any files in there. Does anybody have a clue what the problem could be and how I can fix it?
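
    One detail worth poking at: because Dropbox sits underneath the path of the [groupA Public] share, outside users will usually reach it by browsing through that share, so its filesystem bits and the Public share's options matter as much as the [groupA DropBox] stanza. The classic "drop box" layout on the filesystem side is write+execute but no read for "other", which lets people enter and create files without being able to list or delete anything. A sketch, assuming nobody:groupA stays the owner (untested against this exact setup):

      chown nobody:groupA /home/groupA/Public/Dropbox
      chmod 3773 /home/groupA/Public/Dropbox     # drwxrws-wt: group has full access, others may enter/create but not list or delete

    and, on the share people browse through, something like "hide unreadable = yes" in [groupA Public] keeps entries they cannot read out of their directory listings.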

    Read the article

  • How to grant read/write to specific user in any existent or future subdirectory of a given directory? [migrated]

    - by Samuel Rossille
    I'm a complete newbie in system administration and I'm doing this as a hobby. I host my own git repository on a VPS. Let's say my user is john. I'm using the ssh protocol to access my git repository, so my url is something like ssh://[email protected]/path/to/git/myrepo/. Root is the owner of everything that's under /path/to/git. I'm attempting to give read/write access to john to everything which is under /path/to/git/myrepo. I've tried both chmod and setfacl to control access, but both fail the same way: they apply rights recursively (with the right options) to all the currently existing subdirectories of /path/to/git/myrepo, but as soon as a new directory is created, my user cannot write in the new directory. I know that there are hooks in git that would allow me to reapply the rights after each commit, but I'm starting to think that I'm going the wrong way, because this seems too complicated for a very basic purpose.
    Q: How should I set up my permissions to give rw access to john to anything under /path/to/git/myrepo and make it resilient to tree structure changes?
    Q2: If I should take a step back and change the general approach, please tell me.
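
    For comparison, the group-based recipe most shared-over-ssh git setups end up with relies on two things: the setgid bit on directories, so anything created underneath inherits the group, and git's own core.sharedRepository setting, so git itself creates new object and pack directories group-writable instead of honouring the umask. A sketch, where "gitusers" is a made-up group name and the repository is assumed to be bare (for a non-bare repo the config file lives in myrepo/.git/config):

      groupadd gitusers
      usermod -a -G gitusers john
      chgrp -R gitusers /path/to/git/myrepo
      chmod -R g+rwX /path/to/git/myrepo
      find /path/to/git/myrepo -type d -exec chmod g+s {} +      # new subdirectories inherit the group
      git config --file /path/to/git/myrepo/config core.sharedRepository group

    This sidesteps the "new directory doesn't get my ACL" problem entirely, because git is told to make everything group-accessible as it creates it.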

    Read the article

  • Maintaining "Portability" Between Linux and Windows 7

    - by lokheart
    I am using the following approach on my office's Windows 7 machine to maintain my "portability" when disaster strikes and I need to switch computers while I have no time to reinstall all my programs on the new PC:
      - The majority of programs I use are portable, mostly from portableapps.com (Notepad++, GIMP, even R). I extract them and store them in a folder in My Documents, in a structure similar to the default PortableApps layout when installed to a thumb drive.
      - Only a few pieces of software have no portable version available, and those I install as usual.
      - All of my working files are stored in a folder in My Documents.
      - I regularly back them all up using SyncBack, because it can keep versions of my backups, and the backups are stored on a portable drive.
    One day I need to switch my computer, and the operation is relatively simple for me: I just move the two folders mentioned above into the My Documents folder of the new PC, install those few "non-portable" programs on it, and it is almost done; some minor hiccups can be solved by reinstalling the portable app into the drive. Overall it is a smooth process. I would like to maintain the same degree of "portability" on my home Linux desktop (Ubuntu or Mint, I'm still deciding), that is, if my Linux install crashes and I need to reinstall it, all I need to do is move the two folders back into the new Linux install and most of my work will be almost ready to be worked on again. But I don't know how to find a Linux alternative to PortableApps. Being new to Linux, can anyone tell me whether this is possible in Linux?

    Read the article

  • Save and restore multiple layers within a Photoshop action that flattens

    - by SuitCase
    I'm editing comic pages with layers - "background", "foreground", "lineart" and "over lineart". I have a Photoshop action that includes a Mode-Bitmap command, which requires the image to be flattened. I need this part of the action because I use the Halftone Screen method of reducing the greyscale image to bitmap on the "background" layer, creating a certain effect; I am pretty sure there is no filter or anything else that gives the same effect. After the mode is changed to bitmap, my action changes things back to greyscale for further changes.
    This poses a problem. I only want to do the bitmap mode change on the background layer, and after I do the change I want to restore the layer structure as it was - with the foreground, lineart and over lineart layers back above the now-halftoned background. My current method of saving these layers and restoring them is clumsy. My action is able to automatically save the "foreground" layer by selecting it, cutting it, then pasting it back in after the mode changing is over. But for the "ink" and "over ink" layers, I have to manually cut these layers, paste them into a new document, and later re-cut and re-paste after running my action. This is so clunky!
    What I would like to know is whether it's possible to set aside my layers in an automated way, and then bring them back in, also in an automated way. An ugly (but functional) solution would be to replicate my manual steps of creating new documents and pasting the layers there temporarily, but I don't think Photoshop allows an action to do things outside of the current document. It seems to me that the only way to do what I want is the clipboard trick my action already uses, but that leaves me stuck, as I have two more layers that can't fit onto that same clipboard. Help or suggestions would be appreciated. I can keep on doing it manually, but having a comprehensive action would save me a ton of time.

    Read the article

  • Programs don't have permissions when using absolute path

    - by Markos
    I have asked this on askubuntu but didn't get a single response in days, so I will try it here. I have a directory structure like this:
      /path/dir1      - all users in group1 must have rwx permissions, including subdirs and newly created dirs
      /path/dir1/dir2 - additionally, users in group2 must have rwx permissions
    So what I tried is that I used ACLs:
      getfacl /path/dir1
      # file: /path/dir1
      # owner: root
      # group: nogroup
      user::rwx
      group::---
      group:group1:rwx
      mask::rwx
      other::---
      default:user::rwx
      default:group::---
      default:group:group1:rwx
      default:mask::rwx
      default:other::---

      getfacl /path/dir1/dir2
      # file: /path/dir1/dir2
      # owner: root
      # group: nogroup
      user::rwx
      group::---
      group:group1:rwx
      group:group2:rwx
      mask::rwx
      other::---
      default:user::rwx
      default:group::---
      default:group:group1:rwx
      default:group:group2:rwx
      default:mask::rwx
      default:other::---
    That shows that I have granted rwx to group1 in /path/dir1, and rwx to group1 and group2 in /path/dir1/dir2. Now it gets interesting. Let's assume that user2 is a member of group2. If I issue these commands as user2:
      cd /path/dir1/dir2
      mkdir foo
    then the folder is successfully created. However, if I do this:
      mkdir /path/dir1/dir2/foo
    I get a permission denied error. I have tried extensively to resolve the problem. What I have found is that the ACL is to blame: if I add permissions for group2 on /path/dir1 it starts to work, and if I completely remove the ACL from /path/dir1 it also starts to work. Obviously I am missing something VERY basic. I don't have much experience with Linux, but this is a no-brainer on Windows. I have spent way too many hours trying to resolve this basic requirement. If you need more information, I will try to update the question, so feel free to ask!
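
    Purely as a diagnostic aid: creating /path/dir1/dir2/foo by absolute path requires search (x) permission on every component of that path for the calling user, so checking how each component evaluates for user2 is the quickest way to see where the denial actually happens. Two generic commands for that (they make no changes; the output depends on the real ACLs):

      namei -l /path/dir1/dir2                  # owner/group/mode of every path component
      sudo -u user2 mkdir /path/dir1/dir2/foo   # reproduce the failure as user2 from anywhere

    If it is the traversal of dir1 that fails, an entry like "group:group2:--x" on /path/dir1 alone (execute only, no read or write) would let group2 pass through without being able to list or modify anything in dir1 itself.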

    Read the article

  • MySQL extension of PHP not working

    - by Víctor
    On a Debian server, after installation and removal of SquirrelMail (with some downgrading and upgrading of php5, mysql, ...), the MySQL extension of PHP has stopped working. I have php5-mysql installed, and when I try to connect to a database through php-cli I connect successfully, but when I try to connect from a web page served by Apache I cannot connect. This script:
      echo phpinfo();
      $link = mysql_connect('localhost', 'user', 'password');
      if (!$link) {
          die('Could not connect: ' . mysql_error());
      }
      echo 'Connected successfully';
      mysql_close($link);
    when run by php5-cli prints the phpinfo, which includes "/etc/php5/cli/conf.d/mysql.ini" and also the MySQL section with all the configuration (SOCKET, LIBS, ...), and then it prints "Connected successfully". But when run by Apache and accessed through a web browser, it displays the phpinfo, which includes "/etc/php5/apache2/conf.d/mysql.ini", but the MySQL section is missing, and the script dies printing "Fatal error: Call to undefined function mysql_connect()". Note that both "/etc/php5/cli/conf.d/mysql.ini" and "/etc/php5/apache2/conf.d/mysql.ini" are in fact the same configuration, because in Debian I have the structure:
      /etc/php5/apache2
      /etc/php5/cgi
      /etc/php5/cli
      /etc/php5/conf.d
    and both point at the same directory:
      /etc/php5/apache2/conf.d -> ../conf.d
      /etc/php5/cli -> ../conf.d
    where /etc/php5/conf.d/mysql.ini consists of one line:
      extension=mysql.so
    So my question is: why is the MySQL extension for PHP not working, if I have the configuration included in just the same way as in php-cli, which is working? Thanks a lot!
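
    Given the downgrade/upgrade history, a mismatch between the mod_php binary Apache loads and the mysql.so built for the CLI's PHP would produce exactly this split: the module silently fails to load for one SAPI, so the whole MySQL section disappears from that phpinfo. A few generic checks and a clean reinstall are cheap to try; the package names are the stock Debian ones already mentioned in the question:

      php -m | grep -i mysql                 # what the CLI actually loads
      dpkg -l | grep php5                    # look for mixed php5 / libapache2-mod-php5 / php5-mysql versions
      apt-get install --reinstall php5-mysql libapache2-mod-php5
      /etc/init.d/apache2 restart            # Apache only rereads conf.d on restart

    When this is the cause, the Apache error log (/var/log/apache2/error.log) usually shows a "PHP Warning: Unable to load dynamic library ... mysql.so" line at startup.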

    Read the article

  • Copy files from sub directories into one directory.

    - by Derek Organ
    OK, I have a bunch of files in this file structure:
      /backup/daily/database1/database1-2011-01-01.sql
      /backup/daily/database1/database1-2011-01-02.sql
      /backup/daily/database1/database1-2011-01-03.sql
      /backup/daily/database1/database1-2011-01-04.sql
      /backup/daily/database1/database1-2011-01-05.sql
      /backup/daily/database1/database1-2011-01-06.sql
      /backup/daily/database1/database1-2011-01-07.sql
      /backup/daily/anotherdb/anotherdb-2011-01-01.sql
      /backup/daily/anotherdb/anotherdb-2011-01-02.sql
      /backup/daily/anotherdb/anotherdb-2011-01-03.sql
      /backup/daily/anotherdb/anotherdb-2011-01-04.sql
      /backup/daily/anotherdb/anotherdb-2011-01-05.sql
      /backup/daily/anotherdb/anotherdb-2011-01-06.sql
      /backup/daily/anotherdb/anotherdb-2011-01-07.sql
      /backup/daily/stuff/stuff-2011-01-01.sql
      /backup/daily/stuff/stuff-2011-01-02.sql
      /backup/daily/stuff/stuff-2011-01-03.sql
      /backup/daily/stuff/stuff-2011-01-04.sql
      /backup/daily/stuff/stuff-2011-01-05.sql
      /backup/daily/stuff/stuff-2011-01-06.sql
      /backup/daily/stuff/stuff-2011-01-07.sql
    And there are lots, lots more. Ultimately I want to import all the 2011-01-07.sql files into my MySQL database. This works for one:
      mysql -u root -ppassword < /backup/daily/database1/database1-2011-01-07.sql
    That will nicely restore that database from this backup file. I want to run a process where it does this for all databases. So my plan is to first cp all 2011-01-07 sql files into a tmp dir, e.g.
      cp /backup/daily/*/*2011-01-07*.sql /tmp/all
    The command above unfortunately isn't working; I get an error:
      cp: cannot stat ..... No such file or directory
    So can you guys help me out with this? For bonus points, if you can tell me how to do the next step, which is importing all databases in one command, doing one at a time, that would be great too. I really want to do these in two separate steps because I need to delete a few sql files manually from the tmp dir before I run the restore command. So I need:
      1) a command to copy all 2011-01-07 sql files to a tmp dir
      2) a command to import all those files in that dir into mysql
    I know it's possible to do it in one step, but for lots of reasons I would really prefer to do it in two steps.
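
    A sketch of the two steps that sidesteps the globbing question entirely by letting find do the matching (nothing here assumes anything beyond the layout shown above; /tmp/all is created first):

      # step 1: gather that day's dumps into the staging directory
      mkdir -p /tmp/all
      find /backup/daily -type f -name '*2011-01-07*.sql' -exec cp {} /tmp/all/ \;

      # step 2 (after pruning unwanted files by hand): import them one at a time,
      # in the same form as the single-file restore above
      for f in /tmp/all/*.sql; do
          mysql -u root -ppassword < "$f"
      done

    If the original cp keeps failing with "cannot stat", it is worth checking whether it was run through cron or a restricted shell where the wildcard wasn't expanded; run interactively under bash, the pattern /backup/daily/*/*2011-01-07*.sql should match the files listed.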

    Read the article

  • Isolate clients on same subnet?

    - by stefan.at.wpf
    Given n (e.g. 200) clients in a /24 subnet and the following network structure:
      client 1 \
         .      \
         .       switch -- firewall
         .      /
      client n /
    (in words: all clients connected to one switch, and the switch connected to the firewall)
    Now by default, e.g. client 1 and client n can communicate directly using the switch, without any packets ever arriving at the firewall. Therefore none of those packets can be filtered. However, I would like to filter the packets between the clients, so I want to disallow any direct communication between the clients. I know this is possible using VLANs, but then - according to my understanding - I would have to put all clients in their own network. However, I don't even have that many IP addresses: I have about 200 clients, only a /24 subnet, and all clients shall have public IP addresses, therefore I can't just create a private network for each of them (well, maybe using some NAT, but I'd like to avoid that). So, is there any way to tell the switch: forward all packets to the firewall, don't allow direct communication between clients? Thanks for any hint!
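
    Whether this can be done without carving up the /24 depends entirely on the switch model. Many managed switches have a "protected port" / "private VLAN edge" feature that does exactly this: protected ports may not forward traffic to each other, only to unprotected ports such as the uplink. As a sketch of the idea on a Cisco Catalyst (port numbers are made up; other vendors call the feature "port isolation"):

      interface range fa0/1 - 46
       switchport protected       ! client ports: no client-to-client forwarding
      interface fa0/48
       ! uplink to the firewall, left unprotected

    Note that any legitimate client-to-client traffic then has to be routed (hairpinned) and filtered by the firewall, which also has to be willing to send packets back out the interface they arrived on.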

    Read the article

  • Are my Linux symbolic links acting correctly?

    - by Andy Castles
    Hi all, I've been using Linux on and off for the last 15 years, and today I came across something in bash that surprised me. Set up the following directory structure:
      $ cd /tmp
      $ mkdir /tmp/symlinktest
      $ mkdir /tmp/symlinktest/dir
      $ mkdir /tmp/symlinktarget
    Now create two symlinks in symlinktest pointing to symlinktarget:
      $ cd /tmp/symlinktest
      $ ln -s ../symlinktarget Asym
      $ ln -s ../symlinktarget Bsym
    Now, in bash, the following tab completion does strange things. Type the following:
      $ cd dir
      $ cd ../A[TAB]
    Pressing the tab key above completes the line to:
      $ cd ../Asym/
    as I expected. Now press enter to change into Asym and type:
      $ cd ../B[TAB]
    This time pressing the tab key completes the line to:
      $ cd ../Bsym[space]
    Note that there is now a space after Bsym and there is no trailing slash. My question is: why, when changing from the physical directory "dir" to Asym, does it recognise that Asym is a link to a directory, but when changing from one symlink to another, it doesn't recognise that it's a link to a directory? In addition, if I try to create a new file within Asym, I get an error message:
      $ cd /tmp/symlinktest/Asym
      $ cat hello > ../Bsym/file.txt
      -bash: ../Bsym/file.txt: No such file or directory
    I always thought that symlinks were mostly transparent except to programs that need to manipulate them. Is this normal behaviour? Many thanks, Andy
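
    A small experiment that may make the behaviour less surprising: bash keeps a logical working directory (the path as typed, symlinks included), while the kernel only tracks the physical one, and ".." is resolved logically by the cd builtin but physically by ordinary programs and redirections. Standard commands, nothing system-specific:

      $ cd /tmp/symlinktest/Asym
      $ pwd          # logical cwd kept by bash: /tmp/symlinktest/Asym
      $ pwd -P       # physical cwd seen by the kernel: /tmp/symlinktarget
      $ ls ../Bsym   # ls resolves .. physically, so this looks for /tmp/Bsym and fails

    That also matches the redirection error: "../Bsym/file.txt" is resolved by the kernel relative to /tmp/symlinktarget, where no Bsym exists.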

    Read the article

  • Anyone know a good mind mapper that works with a scheduler?

    - by GLycan
    TL;DR: Mind-mapping tasks to be processed into a schedule based on task metadata.
    I have all sorts of ideas about what to invest resources (mainly time) in, but when I actually have time to do something I usually end up browsing reddit for not knowing what to do, and the frequency with which I forget deadlines scares me. I'd love to bring order and structure into my mind, and always know what to do next.
    So, I want a mind-mapping app where I'd give each branch (types and subtypes of things I want to do) an importance score (if there were two branches, and one had 60 while the other 40, they would respectively get 60% and 40% of the parent's importance, with the root being 100) and a frequency at which that branch should be revised/updated (a hobby I want to try out might be checked, say, once a week, while a school subject should be checked once a day). Each leaf (something I want/need to do) would get how much time it takes, a deadline (if any), and optionally an absolute importance, a recurrence (guitar practice might repeat once a week), prerequisites (reading something requires that book (although that could be brought somewhere), coding requires a box, jogging requires being outside) and maybe some other flags, like whether it's enjoyable or not.
    It should either be packaged with or work with a scheduler app, to which I'd say: look, my day works this way (completely busy from 8 to 9:15, then 15 minutes of being inside with nothing, ..., two hours with box and possibility to go outside, etc.), saying that such-and-such pattern is school and happens every weekday except such-and-such days. The output should be a schedule, fit for printing or, when I finally get an Android phone, mobile viewing, that schedules tasks with regard to availability of resources and importance (importance being derived from the leaf-task's parent branches) and the set of flags (all work and no play makes me a dull boy). One of these tasks should be reviewing anything that should be updated on that day, including future day layouts (e.g. if the time slots of future days have changed; this should be done every day).
    Does anyone know a collection of tools, preferably open source (or free, or pirateable), or better yet a single one, that accomplishes this task? I know Python pretty well, and should be able to write any necessary glue.

    Read the article

  • Redirect before rewrite

    - by Kirk Strobeck
    Had an issue where I need to redirect old URLs, but not disable the mod_rewrite for the page structure:
      redirect 301 /home.html http://www.url.com/
    It needs to live in the Symphony 2.0 .htaccess file:
      ### Symphony 2.0.x ###
      Options +FollowSymlinks -Indexes
      <IfModule mod_rewrite.c>
      RewriteEngine on
      RewriteBase /
      ### DO NOT APPLY RULES WHEN REQUESTING "favicon.ico"
      RewriteCond %{REQUEST_FILENAME} favicon.ico [NC]
      RewriteRule .* - [S=14]
      ### IMAGE RULES
      RewriteRule ^image\/(.+\.(jpg|gif|jpeg|png|bmp))$ extensions/jit_image_manipulation/lib/image.php?param=$1 [L,NC]
      ### CHECK FOR TRAILING SLASH - Will ignore files
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_URI} !/$
      RewriteCond %{REQUEST_URI} !(.*)/$
      RewriteRule ^(.*)$ $1/ [L,R=301]
      ### ADMIN REWRITE
      RewriteRule ^symphony\/?$ index.php?mode=administration&%{QUERY_STRING} [NC,L]
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^symphony(\/(.*\/?))?$ index.php?symphony-page=$1&mode=administration&%{QUERY_STRING} [NC,L]
      ### FRONTEND REWRITE - Will ignore files and folders
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*\/?)$ index.php?symphony-page=$1&%{QUERY_STRING} [L]
      </IfModule>
      ######
    Updated:
      redirect 301 ^home.html http://www.url.com/ [L]
      ### Symphony 2.0.x ###
      Options +FollowSymlinks -Indexes
      <IfModule mod_rewrite.c>
      RewriteEngine on
      RewriteCond %{HTTP_HOST} !^www\.krista-swim\.com [NC]
      RewriteRule ^(.*)$ http://www.krista-swim.com/$1 [L,R=301]
      RewriteBase /
      ### DO NOT APPLY RULES WHEN REQUESTING "favicon.ico"
      RewriteCond %{REQUEST_FILENAME} favicon.ico [NC]
      RewriteRule .* - [S=14]
      ### IMAGE RULES
      RewriteRule ^image\/(.+\.(jpg|gif|jpeg|png|bmp))$ extensions/jit_image_manipulation/lib/image.php?param=$1 [L,NC]
      ### CHECK FOR TRAILING SLASH - Will ignore files
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteCond %{REQUEST_URI} !/$
      RewriteCond %{REQUEST_URI} !(.*)/$
      RewriteRule ^(.*)$ $1/ [L,R=301]
      ### ADMIN REWRITE
      RewriteRule ^symphony\/?$ index.php?mode=administration&%{QUERY_STRING} [NC,L]
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^symphony(\/(.*\/?))?$ index.php?symphony-page=$1&mode=administration&%{QUERY_STRING} [NC,L]
      ### FRONTEND REWRITE - Will ignore files and folders
      RewriteCond %{REQUEST_FILENAME} !-d
      RewriteCond %{REQUEST_FILENAME} !-f
      RewriteRule ^(.*\/?)$ index.php?symphony-page=$1&%{QUERY_STRING} [L]
      </IfModule>
      ######
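
    One background detail that may explain the odd interaction: "redirect 301" belongs to mod_alias, which runs independently of mod_rewrite, so mixing the two in one .htaccess often behaves surprisingly (and mod_alias does not understand regex anchors like ^ or flags like [L]). A common way to keep everything inside mod_rewrite is to put the old-URL rule first, before the Symphony rules, roughly like this (sketch only, against the original url.com example):

      RewriteEngine on
      RewriteBase /
      ### OLD URLS
      RewriteRule ^home\.html$ http://www.url.com/ [R=301,L]
      ### ... the existing Symphony rules continue below ...

    Because of the [L] flag, a request for /home.html is answered with the 301 and never reaches the frontend rewrite, while every other request falls through to the Symphony rules unchanged.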

    Read the article

  • Fix Video timelines

    - by Josh
    So, I have been going through and ripping all of my DVDs, and it seems that the way to get the highest quality out of these is to have DVD Shrink decrypt, rip, and decompress the DVDs. After that I usually end up with a high quality (large size) set of .vob files in a classic DVD structure. Then I use a Python script that I wrote to automate the process of finding the title sequence and combining all of the title sequence's .vob files together into one file (similar to the "copy /b" command in Windows), and then changing the extension to .mpg (a more widely supported format than .vob). This lets me get a high quality rip in about 40 minutes.
    The problem comes in playing the files. I need all of the ripped DVDs to play on my media computer using Windows Media Center, but Windows Media Center (and VLC for that matter) thinks the video files are anywhere from 5 min. to 0 min. long. That is not a problem in itself (the video will still play all the way through), but if you pause it, when it is unpaused the video starts all the way over, and fast forward and rewind don't work. I suspect that something is wrong with the way the timeline is encoded in the video file. Various forums on the internet recommended using VirtualDub to fix the errors, but when I try to open the file VirtualDub says that the file is not in MPEG-1 encoding and may be MPEG-2. Is there any way to fix this?
    PS: I am aware that there was a similar question, but it hasn't had any activity for 2 months and is dealing more with .wmv files.
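
    The symptoms (bogus duration, seeking broken, playback otherwise fine) are what you would expect when byte-concatenated .vob pieces keep their original MPEG-2 timestamps, which restart partway through the joined file. One thing worth trying is a remux that regenerates the timestamps without re-encoding; a sketch with ffmpeg, where the flag names should be checked against the installed version:

      ffmpeg -fflags +genpts -i title.mpg -vcodec copy -acodec copy title_fixed.mpg

    Since nothing is re-encoded this is fast and lossless, and if the player then reports a sane duration, the timestamp diagnosis was right.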

    Read the article

  • How can I take a one time full backup of a Windows Server without the need for a restore program?

    - by TheCleaner
    I have a Windows SBS server with about 500 GB of data that I'm decommissioning, but I'd like to take a final backup of the server and place it on an external USB drive. I already have multiple backups of the server on disk from the past, but they are through Simpana Commvault. I'd like a backup that will simply copy the file structure, ACLs, timestamps, etc. as-is to an NTFS volume on the external drive. This way, if someone says "I need x file on the server you decommed", I can search the external drive real quick instead of firing up Commvault, cataloging, restoring, etc.
    I know the built-in Windows backup is great, I just don't feel like running it for a restore job on this. I'd like an option where in the future it won't require a program to run a restore; rather, a simple mount of the drive will suffice. I believe I can use robocopy just fine, but I'm not sure if it will grab the Windows directory, system files, and full user profiles correctly even with the /ZB option. Options? Is
      robocopy /E /ZB /COPYALL /DCOPY:DAT /MT:32 /R:5 /W:5 /LOG:copylog.log
    the way to go here?
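
    One caveat with the robocopy approach: even with /ZB (backup mode) it copies files through the normal file APIs, so anything held open exclusively - registry hives, Exchange or SQL stores on an SBS box, parts of logged-on user profiles - will be skipped or copied inconsistently, although the bulk of the data files should come across fine with ACLs and timestamps intact. If the server is SBS 2008 or newer, an alternative that still satisfies "just mount it later" is a one-off Windows Server Backup run to the USB disk, since it writes a VHD that can be attached directly in Disk Management. A sketch (check wbadmin /? for the exact options on that OS; E: is assumed to be the external drive):

      wbadmin start backup -backupTarget:E: -include:C: -allCritical -quiet

    On SBS 2003 the nearest built-in equivalent is NTBackup, but that produces a .bkf which does need a restore program, so robocopy is the better fit there.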

    Read the article

  • How to clone MySQL DB? Errors with CREATE VIEW/SHOW VIEW privileges

    - by user38071
    Running MySQL 5.0.32 on Debian 4.0 (Etch). I'm trying to clone a WordPress MySQL database completely (structure and data) on the same server. I tried doing a dump to a .sql file and an import into a new empty database from the command line, but the import fails with errors saying the user does not have the "SHOW VIEW" or "CREATE VIEW" privilege. Trying it with PHPMyAdmin doesn't work either. I also tried doing this with the MySQL root user (not named "root", though) and it shows an "Access Denied" error. I'm terribly confused as to where the problem is. Any pointers on cloning a MySQL DB and granting all privileges to a user account would be great (specifically for MySQL 5.0.32). The SHOW GRANTS command for the existing user works (the one who has privileges over the source database); it shows that the user has all privileges granted. I created a new user and database. Here's what I see with the grant commands:
      $ mysql -A -umyrootaccount --password=myrootaccountpassword

      mysql> grant all privileges on `newtarget_db`.* to 'newtestuser'@'localhost';
      ERROR 1044 (42000): Access denied for user 'myrootaccount'@'localhost' to database 'newtarget_db'

      mysql> grant all privileges on `newtarget_db`.* to 'existingsourcedbuser'@'localhost';
      ERROR 1044 (42000): Access denied for user 'myrootaccount'@'localhost' to database 'newtarget_db'
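
    ERROR 1044 on a GRANT usually means the account issuing it lacks the GRANT OPTION (or the privileges it is trying to hand out) at that scope, so the first thing worth checking is what "myrootaccount" is actually allowed to do, and whether a true superuser is still available to grant from. Generic MySQL 5.0 statements, nothing WordPress-specific:

      mysql> SHOW GRANTS FOR 'myrootaccount'@'localhost';
      -- issued from an account that does have GRANT OPTION:
      mysql> GRANT ALL PRIVILEGES ON `newtarget_db`.* TO 'newtestuser'@'localhost' WITH GRANT OPTION;

    For the clone itself, SHOW VIEW is needed by the user running mysqldump and CREATE VIEW by the user doing the import, so once a sufficiently privileged account is sorted out, something along the lines of

      mysqldump -u privileged_user -p --routines --triggers source_db > source_db.sql
      mysql -u privileged_user -p newtarget_db < source_db.sql

    (privileged_user being whichever account ends up with those rights) should carry the views across as well.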

    Read the article

  • Differential backup missing moved folders (flawed archive attribute logic)

    - by Max
    Recently I've discovered that my backup system is flawed: there are situations where various files/folders are missed. I back up from a local disk to a network NAS. I use Cobian Backup, and I have set up the backup software to create one full backup every week and one differential backup every day.
    Now, the backup software (to my knowledge, any backup software works this way) decides which files go into the differential backup by looking at the file archive attribute. If the attribute is set, then the file goes into the backup. Now, when you move a file to a new location on Windows systems, the archive attribute gets set and the file is included in the backup, and that's fine... but when you move an entire folder, no archive attribute is set - not on the folder, nor on any files inside the folder - so the moved folder isn't included in the differential backup!
    So, if you have a full backup plus a differential backup and you moved folders around, then it's impossible to reconstruct the original file/folder structure starting from the full+differential backup, because the backup software didn't include the moved folders in the differential backup. So my differential backups are useless...
    Why does Windows set the archive attribute when moving a file, but not when moving a folder? How can I deal with this issue? Is there a way to create a differential backup that works as it's supposed to? (Doing a full backup every day is not practical, because the changed data is about 0.1% per day; by using a differential backup I can keep 4 weeks of file history without using too much disk space.)
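
    As a stopgap while a better scheme is found, the archive attribute can be set by hand on everything under a folder that has just been moved, so the next differential picks those files up. A sketch from a command prompt (the path is obviously a placeholder):

      attrib +A "D:\data\MovedFolder\*" /S /D

    /S recurses into subdirectories and /D applies the change to the folder entries as well. Of course this only helps when you remember to run it after each move, which is exactly the kind of manual step a backup system shouldn't depend on.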

    Read the article

  • Directory tree in a Resource without extraction...

    - by Corelgott
    Hi all, I am looking for a way to store a complete directory, including subdirectories, in an application's resources and not have to extract it to use it. Details:
      - We would like to use GeckoFX (Gecko as a C# component) in one of our applications. GeckoFX needs the XULRunner and needs to find its folder structure.
      - We have some other data which I would prefer not to extract to the customer's PC; at least not onto something persistent like an HDD...
    Getting the complete directory into the resources is not that big of a deal: compress to one file and done. But using it without writing it to disk is something else. I have a strong dislike for temp folders and such things. Would anything like a RAM drive be possible - some part of RAM being mounted? Does something like this even exist as a library, or would this only be possible with a device driver? Any thoughts on this? Thanks in advance! Corelgott

    Read the article

  • Did chkdsk make it harder to restore files?

    - by neyl
    My friend asked me to try and fix his loaded Sansa Clip+ which wasn't playing. After opening it in MSC mode I discovered that the Music directory was empty and the total of all files was only a few MB. However, Disk properties showed me that it was 7 GB full. I then ran Tools - Error Checking, and Windows dutifully informed me that the disk was corrupt and that I should run it again allowing Windows to fix errors. I did that and it told me everything was fixed and that all files had been placed in the FOUND.000 directory. FOUND.000 was about 7.5 GB with FILE0000-1546.CHK. (I am aware of methods like ChkBack to scan and convert to mp3 etc., BUT the original filenames and structure are needed!)
    Now I started getting worried that I had made things worse! I have plenty of experience with data recovery programs - Recuva, Restore My Files etc. - and I was planning to use them to scan the drive anyhow. But NOW, after CHKDSK "fixed" the drive, maybe it modified critical FAT information vital for data recovery. So I ran these programs and got 0!!! No trace of files! I tried a ton of recovery programs with the same result, TILL EaseUS Data Recovery Wizard found all files and I purchased the program for $55!
    My question: In your opinion, did running CHKDSK with automatic fixing of errors make matters worse (i.e. many data recovery programs didn't find a trace, and they would have done if not for chkdsk), or was the filesystem too corrupt anyhow for regular file recovery programs? If I were a professional, would I be responsible for having run CHKDSK with automatic fixing? Do you know of a better data recovery program than EaseUS Data Recovery Wizard? In my experience I haven't found one!

    Read the article

  • Move the uploads folder in Wordpress

    - by Victor Hurdugaci
    Currently, my Wordpress uploads folder is located in \wp-content\uploads. Initially there was no structure, so all files were put there. After a while it was changed to upload the files into \wp-content\uploads\YEAR\MONTH. Now that folder contains a mix of files (those starting with + are folders) like:
      +wp-content
      | +2010
      | | +02
      | | | File-1
      | | | File-2
      | | | ..
      | | | File-n
      | | +01
      | | | File-1
      | | | File-2
      | | | ..
      | | | File-n
      | +2009
      | | +12
      | | | File-1
      | | | File-2
      | | | ..
      | | | File-n
      | | +11
      | | | File-1
      | | | File-2
      | | | ..
      | | | File-n
      | +..
      | | | ..
      | Unstructured-file-1
      | Unstructured-file-2
      | ...
      | Unstructured-file-n
    Based on the dates of the unstructured files, I would like to move them into a structured hierarchy (based on date, move each one to a folder \wp-content\uploads\YEAR\MONTH). Now, my questions are:
      1. Where do I write and execute a script to do the moving (I don't have full access to the server, just to a cPanel and to the Wordpress admin page)?
      2. What must be updated so that the links in posts that reference the unstructured files point to the new location of those files?
      3. Not fully related to the previous description: is it alright to move the whole uploads folder to another location, like \uploads?
    PS: Moving the files/updating the database manually is not an option :)
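
    For the first two questions, one low-tech route that fits the "cPanel only" constraint is to generate both the moves and the SQL from a short shell script (cPanel installs usually offer a cron entry or a terminal) and then run the generated SQL from phpMyAdmin after reviewing it. A rough sketch - the account path and the default wp_posts table prefix are assumptions, and a full file + database backup should come first:

      cd /home/ACCOUNT/public_html/wp-content/uploads
      for f in *.*; do
          [ -f "$f" ] || continue                 # skip the YEAR directories
          d=$(date -r "$f" +%Y/%m)                # year/month from the file's modification time
          mkdir -p "$d" && mv "$f" "$d/"
          echo "UPDATE wp_posts SET post_content = REPLACE(post_content, 'wp-content/uploads/$f', 'wp-content/uploads/$d/$f');"
      done > /tmp/fix_links.sql

    Attachment metadata is stored separately (the guid column in wp_posts and the _wp_attached_file key in wp_postmeta reference the same paths), so those would need the same REPLACE treatment before the old links can be considered fully migrated.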

    Read the article

  • KVM and libvirt: How to configure a new disc device to an existing VM?

    - by initall
    I've got an Ubuntu 9.04 server running two VMs. In /etc/libvirt/qemu/machine1.xml two disk devices are defined like this:
      <devices>
        <emulator>/usr/bin/kvm</emulator>
        <disk type='file' device='disk'>
          <source file='/vserver/machine1/disk0.qcow2'/>
          <target dev='hda' bus='ide'/>
        </disk>
        <disk type='file' device='disk'>
          <source file='/vserver/machine1/disk1.qcow2'/>
          <target dev='hdb' bus='ide'/>
        </disk>
    I need more storage space in at least one of the devices and thought about adding a third hdc device by simply adding one in the same style as above and reorganising my mount structure (the virtual sizes of the current qcow2 files are unfortunately limited). My problem is that reloading libvirtd and restarting the VM do not result in a new visible device (checked with fdisk). I'm aware of extending an existing qcow2 file (converting to raw format, cat-ing/adding the new one, using something like gparted), but only as a last resort. Hopefully it's something very simple I'm missing?
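
    Two things commonly trip this up, so a sketch of the full sequence may help (the new file name and size below are placeholders): the qcow2 image has to exist before the guest starts, and an edit made directly to the XML under /etc/libvirt/qemu is only picked up after the domain is redefined and then completely stopped and started again - a reboot from inside the guest is not enough.

      qemu-img create -f qcow2 /vserver/machine1/disk2.qcow2 100G
      # add the matching <disk> stanza with <target dev='hdc' bus='ide'/> to machine1.xml, then:
      virsh define /etc/libvirt/qemu/machine1.xml
      virsh shutdown machine1      # wait for it to actually power off
      virsh start machine1

    After that, fdisk -l inside the guest should show the new, empty disk (as /dev/hdc or /dev/sdc, depending on how the guest names IDE devices) ready to be partitioned.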

    Read the article
