Search Results

Search found 71429 results on 2858 pages for 'file attributes'.

  • Windows Server 2003 R2 Standard: Locks MS Office files, but not Adobe .AI and .PSD files?

    - by Bruce Garlock
    We have some shares set up on a Windows 2003 R2 server, and the MS Office files people save behave properly: the first person to open a file gets read/write, and the second person to open it while the first still has it open gets a read-only version. This is not true for graphics files such as Adobe Illustrator .AI files and Photoshop .PSD files: anyone who opens these files gets full read/write, even if someone else is already working on the file! This has led to numerous file corruption issues, as well as lost work, since whoever saves last overwrites everyone else's changes. How do we get Windows to properly lock these files, so that when someone is working on a file and someone else opens it, the second person gets read-only access? Many thanks, Bruce

  • File locked / read-only

    - by oshirowanen
    On a networked computer, I have a file which is coming up as read-only because someone else supposedly has it open. This is not true: the file is stored locally on the computer and is not being used by anyone else. I can log in to the same computer as a different user and the file opens up fine; I only get the issue with one particular user account. Other than deleting this account/profile and creating it again, how can I unlock the file? Double clicking the file gives me a message saying "The file is locked for editing by another user, or the file (or the folder in which it is located) is marked as read-only, or you specified that you wanted to open this file read-only." I don't think the folder is locked, because I can use other files in that folder fine; it's just this one file causing the issue. I know that only one user is using the file, as it is on his C: drive and the same file works fine if he logs off and another user logs in.

  • Fragmented Log files could be slowing down your database

    - by Fatherjack
    Something that is sometimes forgotten by a lot of DBAs is the fact that database log files get fragmented in the same way that you get fragmentation in a data file. The cause is very different but the effect is the same – too much effort reading and writing data. Data files get fragmented as data is changed through normal system activity; INSERTs, UPDATEs and DELETEs cause fragmentation, and most experienced DBAs are monitoring their indexes for fragmentation and dealing with it accordingly. However, you don't hear about so many working on their log files.

    How can a log file get fragmented? I'm glad you asked. When you create a database, at least two files are created on the disk storage: an mdf for the data and an ldf for the log file (you can also have ndf files for extra data storage, but that's off topic for now). It is wholly possible to have more than one log file, but in most cases there is little point in creating more than one, as the log file is written to in a 'wrap-around' method (more on that later). When a log file is created at the time that a database is created, the file is actually subdivided into a number of virtual log files (VLFs). The number and size of these VLFs depend on the size chosen for the log file. VLFs are also created in the space added to a log file when a log file growth event takes place. Do you have your log files set to auto grow? Then you have potentially been introducing many VLFs into your log file.

    Let's see how many VLFs we have in a brand new database:

        USE master
        GO
        CREATE DATABASE VLF_Test ON
        ( NAME = VLF_Test,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test.mdf',
          SIZE = 100, MAXSIZE = 500, FILEGROWTH = 50 )
        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5MB, MAXSIZE = 250MB, FILEGROWTH = 5MB );
        GO
        USE VLF_Test
        GO
        DBCC LOGINFO;

    This firstly creates a new database with the specified file sizes, and then the DBCC LOGINFO results are returned to the script editor. The DBCC LOGINFO results have plenty of interesting information in them, but let's first note that there are 4 rows; this relates to the fact that 4 VLFs have been created in the log file. The values in the FileSize column are the sizes of each VLF in bytes; you will see that the last one to be created is slightly larger than the others. So, a 5MB log file has 4 VLFs of roughly 1.25MB each.

    Let's alter the CREATE DATABASE script to create a log file that's a bit bigger and see what happens. Alter the code above so that the log file details are replaced by:

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 1GB, MAXSIZE = 25GB, FILEGROWTH = 1GB );

    With a bigger log file specified we get more VLFs. What if we make it bigger again?

        LOG ON
        ( NAME = VLF_Test_Log,
          FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL10.ROCK_2008\MSSQL\DATA\VLF_Test_log.ldf',
          SIZE = 5GB, MAXSIZE = 250GB, FILEGROWTH = 5GB );

    This time even more VLFs are created within our log file: we now have our 5GB log file comprised of 16 files of 320MB each. In fact these sizes fall into all the ranges that control the VLF creation criteria – what a coincidence! The rules that are followed when a log file is created or has its size increased are pretty basic:

    - If the file growth is lower than 64MB, then 4 VLFs are created.
    - If the growth is between 64MB and 1GB, then 8 VLFs are created.
    - If the growth is greater than 1GB, then 16 VLFs are created.

    Now the potential for chaos comes if the default values and settings for log file growth are used. By default a database gets a 1MB log file with unlimited growth in steps of 10%. The database we just created is 6MB; let's add some data and see what happens.

        USE vlf_test
        GO

        -- we need somewhere to put the data, so a table is in order
        IF OBJECT_ID('A_Table') IS NOT NULL
            DROP TABLE A_Table
        GO
        CREATE TABLE A_Table
        ( Col_A int IDENTITY,
          Col_B CHAR(8000) )
        GO

        -- let's check the state of the log file
        -- 4 VLFs found
        EXECUTE ('DBCC LOGINFO');
        GO

        -- we can go ahead and insert some data and then check the state of the log file again
        INSERT A_Table (col_b)
        SELECT TOP 500 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO
        -- insert 500 rows and we get 22 VLFs
        EXECUTE ('DBCC LOGINFO');
        GO

        -- let's insert more rows
        INSERT A_Table (col_b)
        SELECT TOP 2000 REPLICATE('a',2000)
        FROM sys.columns AS sc, sys.columns AS sc2
        GO 10
        -- insert 2000 rows, in 10 batches, and we suddenly have 107 VLFs
        EXECUTE ('DBCC LOGINFO');

    Well, that escalated quickly! Our log file is split, internally, into 107 fragments after a few thousand inserts. The same happens with any logged transactions; I just chose to illustrate this with INSERTs. Having too many VLFs can cause performance degradation at times of database start up, log backup and log restore operations, so it's well worth keeping a check on this property.

    How do we prevent excessive VLF creation? Create the database with larger files and larger growth steps, and actively choose to grow your databases rather than leaving it to the Auto Grow event, to make sure that growths are made at an optimal size.

    How do we resolve a situation of a database with too many VLFs? This process needs to be done when the database is under little or no stress so that you don't affect system users. The steps are:

    1. Back up the log: BACKUP LOG YourDBName TO YourBackupDestinationOfChoice
    2. Shrink the log file to its smallest possible size: DBCC SHRINKFILE(FileNameOfTLogHere, TRUNCATEONLY) *
    3. Re-size the log file to the size you want, taking into account your expected needs for the coming months or year: ALTER DATABASE YourDBName MODIFY FILE ( NAME = FileNameOfTLogHere, SIZE = TheSizeYouWantItToBeIn_MB ) *

    * If you don't know the file name of your log file, run sp_helpfile while connected to the database you want to work on and you will get the details you need. The resize step can take quite a while.

    This is already detailed far better than I can explain it by Kimberly Tripp in her blog post 8-Steps-to-better-Transaction-Log-throughput.aspx. The result of this will be a log file with a VLF count according to the bullet list above.

    Knowing when VLFs are being created: by complete coincidence, while I have been writing this blog (it's been quite some time from its inception to going live), Jonathan Kehayias from SQLskills.com has written a great article on how to track database file growth using Event Notifications and Service Broker. I strongly recommend taking a look at it, as this will catch any sneaky auto grows that take place and let you know about them right away.
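
    If you just want a quick VLF count for the database you are connected to, something along these lines does the job; a minimal sketch, assuming the SQL Server 2008-era column layout of DBCC LOGINFO (SQL Server 2012 adds a RecoveryUnitId column that would need adding to the temp table):

        -- capture DBCC LOGINFO into a temp table and count the rows (one per VLF)
        CREATE TABLE #LogInfo
        ( FileId int, FileSize bigint, StartOffset bigint,
          FSeqNo int, [Status] int, Parity tinyint, CreateLSN numeric(25,0) );
        INSERT #LogInfo EXEC ('DBCC LOGINFO');
        SELECT COUNT(*) AS VLFCount FROM #LogInfo;
        DROP TABLE #LogInfo;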

    Hassle-free monitoring of VLFs: if you are lucky or wise enough to be using SQL Monitor or another monitoring tool that lets you write your own custom metrics, then you can keep an eye on this very easily. There is a custom metric for VLFs (written by Stuart Ainsworth) already on the site, and there are some others there that are very useful, so take a moment or two to look around while you are there.

    Resources:
    MSDN – http://msdn.microsoft.com/en-us/library/ms179355(v=sql.105).aspx
    Kimberly Tripp at SQLskills.com – http://www.sqlskills.com/BLOGS/KIMBERLY/post/8-Steps-to-better-Transaction-Log-throughput.aspx
    Thomas LaRock at Simple-Talk.com – http://www.simple-talk.com/sql/database-administration/monitoring-sql-server-virtual-log-file-fragmentation/

    Disclosure: I am a Friend of Red Gate. This means that I am more than likely to say good things about Red Gate DBA and Developer tools. No matter how awesome I make them sound, take the time to compare them with other products before you contact the Red Gate sales team to place your order.

  • Intermittent bug - IE6 showing file as text in browser, rather than as file download

    - by Richard Ev
    In an ASP.NET WebForms 2.0 site we are encountering an intermittent bug in IE6 whereby a file download attempt results in the contents of the file being shown directly in the browser as text, rather than the file save dialog being displayed. Our application allows the user to download both PDF and CSV files. The code we're using is:

        HttpResponse response = HttpContext.Current.Response;
        response.Clear();
        response.AddHeader("Content-Disposition", "attachment;filename=\"theFilename.pdf\"");
        response.ContentType = "application/pdf";
        response.BinaryWrite(MethodThatReturnsFileContents());
        response.End();

    This is called from the code-behind click event handler of a button server control. Where are we going wrong with this approach?

    Edit: Following James' answer to this posting, the code I'm using now looks like this:

        HttpResponse response = HttpContext.Current.Response;
        response.ClearHeaders();
        // Setting cache to NoCache was recommended, but doing so results in a security
        // warning in IE6
        //response.Cache.SetCacheability(HttpCacheability.NoCache);
        response.AppendHeader("Content-Disposition", "attachment; filename=\"theFilename.pdf\"");
        response.ContentType = "application/pdf";
        response.BinaryWrite(MethodThatReturnsFileContents());
        response.Flush();
        response.End();

    However, I don't believe that any of the changes made will fix the issue.
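
    One hedged variation that may be worth testing, a sketch rather than a confirmed fix: IE6 is known to refuse attachment downloads (especially over HTTPS) when caching is disabled outright, and allowing private caching avoids both that behaviour and the security warning mentioned in the comment above:

        HttpResponse response = HttpContext.Current.Response;
        response.ClearHeaders();
        // Private lets the browser cache the response (proxies may not), which
        // sidesteps the IE6 "cannot download" behaviour that no-cache can trigger
        response.Cache.SetCacheability(HttpCacheability.Private);
        response.AppendHeader("Content-Disposition", "attachment; filename=\"theFilename.pdf\"");
        response.ContentType = "application/pdf";
        response.BinaryWrite(MethodThatReturnsFileContents());
        response.End();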

  • Improving File Read Performance (single file, C++, Windows)

    - by david
    I have large (hundreds of MB or more) files that I need to read blocks from using C++ on Windows. Currently the relevant functions are:

        errorType LargeFile::read( void* data_out, __int64 start_position,
                                   __int64 size_bytes ) const
        {
            if( !m_open ) {
                // return error
            }
            else {
                seekPosition( start_position );
                DWORD bytes_read;
                BOOL result = ReadFile( m_file, data_out, DWORD( size_bytes ),
                                        &bytes_read, NULL );
                if( size_bytes != bytes_read || result != TRUE ) {
                    // return error
                }
            }
            // return no error
        }

        void LargeFile::seekPosition( __int64 position ) const
        {
            LARGE_INTEGER target;
            target.QuadPart = LONGLONG( position );
            SetFilePointerEx( m_file, target, NULL, FILE_BEGIN );
        }

    The performance of the above does not seem to be very good. Reads are on 4K blocks of the file. Some reads are coherent, most are not. A couple questions: Is there a good way to profile the reads? What things might improve the performance? For example, would sector-aligning the data be useful? I'm relatively new to file I/O optimization, so suggestions or pointers to articles/tutorials would be helpful.
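
    Two hedged starting points, assuming you control how m_file is opened (neither is a guaranteed win; the right flag depends on the real access pattern):

        #include <windows.h>

        // 1) Hint the cache manager about the expected access pattern at open time;
        //    use FILE_FLAG_RANDOM_ACCESS instead if the 4K reads are mostly scattered.
        HANDLE openLargeFile( const wchar_t* path )
        {
            return CreateFileW( path, GENERIC_READ, FILE_SHARE_READ, NULL,
                                OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL );
        }

        // 2) Crude profiling: wrap individual reads with the high-resolution counter
        //    to see which offsets are slow (cold cache, seeks) and which are cheap.
        double timeOneRead( const LargeFile& f, void* buf,
                            __int64 pos, __int64 bytes )
        {
            LARGE_INTEGER freq, t0, t1;
            QueryPerformanceFrequency( &freq );
            QueryPerformanceCounter( &t0 );
            f.read( buf, pos, bytes );
            QueryPerformanceCounter( &t1 );
            return 1000.0 * double( t1.QuadPart - t0.QuadPart ) / double( freq.QuadPart );
        }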

  • partially downloaded a torrent file and renamed it

    - by user2613789
    I partially downloaded a torrent file in Ubuntu and unfortunately renamed it. Some time later I resumed the torrent, but it downloaded under its default name, so the complete file's data is now split across two different files. Please help me out: is there any way to combine the information into one whole file? I can't open either file, which means both are corrupted or incomplete. The file is in mp4 format.

  • Which language is more suitable for heavy file tasks?

    - by All
    I need to write a script (based on basic functions) to process image/audio/video files. The processing consists mainly of filesystem tasks and conversions. The database of files is stored in MySQL. The script is simple but causes heavy load on the system; for example, renaming/converting/copying thousands of files in a run. The script does not read the contents of files into memory; it just manages the commands for sub-processes, so the main weight is on communication with the filesystem. The script will be used regularly for new files. My concern is performance. I am thinking of either a shell script or a compiled language like C. Please advise which programming language is more suitable for this purpose, and why?

    UPDATE: An example is to scan a folder for images, convert them with ImageMagick, move the files to a destination folder, get file info, then update the database. As you can see, the process has little room for optimization, and most languages have similar APIs for popular programs like ImageMagick, MySQL, etc. Thus, it could be written in any language; I just wish to reduce resource usage by speeding up the long loop.

    NOTE: I know that questions comparing languages are not favored, but I really had a problem choosing, because the problems only appear in action.
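
    To make the comparison concrete, here is a rough shell sketch of the loop described in the update (the paths, table and column names are made up). Nearly every step is an external process (convert, mysql, mv), so the language driving the loop contributes very little to the total run time; the bottleneck will be the filesystem and the converters themselves:

        #!/bin/bash
        SRC=/data/incoming     # hypothetical source folder
        DST=/data/processed    # hypothetical destination folder

        for f in "$SRC"/*.jpg; do
            name=$(basename "$f")
            convert "$f" -resize 1024x1024 "$DST/$name"    # ImageMagick does the heavy work
            size=$(stat -c%s "$DST/$name")                 # gather file info
            mysql -e "UPDATE files SET processed=1, size=$size WHERE name='$name';" mydb
            mv "$f" /data/archive/                         # move the original out of the way
        done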

  • Associating File Types with AutoVue Desktop Deployment

    - by [email protected]
    Windows users take for granted that when they double click on a document or design, it will open in its application automatically. One of the questions I'm commonly asked is "How can I get the same behavior with AutoVue Desktop Deployment?". It's pretty easy, but there are a few tricks to doing it.

    Step 1: Download the new jvue_direct.bat and icon. The first thing you'll need to do is download a slightly modified version of jvue_direct.bat. You can find it here (Document 1075784.1) on Oracle's Support Portal. You also want to download the AV.ico file; this is the icon that will be used for all file types associated with AutoVue. Place both of these files in your <AutoVueInstallDirectory>\bin directory.

    Step 2: Associate file types with AutoVue. There are two ways to do this: through the Windows user interface, or with a batch file.

    Associating file types through Windows: the way most people associate file types with an application is through the Windows user interface. You've probably tried to open a file type that Windows doesn't recognize and seen the "Open with" dialog pop up. Although you can use this dialog to associate a file type with AutoVue, I don't recommend it; I much prefer using a batch file.

    Associating file types using a batch file: there are a few good reasons to associate file types with a batch file instead of the pop-up dialog:

    - If you have several file types to associate with AutoVue, it's much easier to do them all at once with a batch file. Doing it through the Windows user interface requires having files of each type available; a batch file doesn't require having the files you're associating.
    - Associating file types through the dialog may work well for one person, but what if you're an administrator doing an enterprise-wide deployment of AutoVue Desktop Deployment for several hundred users? You don't want to do this manually for each user. One simple batch file can be run on each user's PC to set up all the file types.
    - You can easily associate an icon with the file types you're opening with AutoVue.

    To use the batch file method, follow these steps:

    1. Create a file called filetype.bat using a text editor, and copy and paste the following into it:

        @assoc .dwg=AVFile
        @assoc .jpg=AVFile
        @assoc .doc=AVFile
        @ftype AVFile="%~dp0jvue_direct.bat" "%%1"
        @reg add HKEY_CLASSES_ROOT\AVFile\DefaultIcon /v "" /f /d "%~dp0AV.ico"

    2. Change the lines starting with @assoc. Each of these lines associates a file extension with AutoVue. You can have as many @assoc lines as you want.
    3. Save this file in your <AutoVueInstallDirectory>\bin directory.
    4. Double click this file, or run it from a command prompt.
    5. Restart Windows to get the icons to show up.

    How does this work? The first three lines create a file type called AVFile and associate the extensions .dwg, .jpg, and .doc with it. You will want to change these lines when creating your own batch file. For example, to associate Microstation designs, which have the extension .dgn, you should delete the @assoc lines above and add the line:

        @assoc .dgn=AVFile

    The line beginning with @ftype tells Windows that all AVFile type files should be opened using AutoVue Desktop Deployment. The final line associates the AutoVue icon with these file types. You may need to restart Windows to see the new icons.

    Warning: one size doesn't fit all. When deciding which file types should be associated with AutoVue, remember that there are different types of users using it. Your engineers may be pretty surprised to find that after installing AutoVue, double clicking their .dwg file opens AutoVue instead of AutoCAD. If you have more than one type of AutoVue user, make sure you've considered which file types each user group will and will not want associated with AutoVue. If necessary, create a separate file-association batch file for each user type.

    So that's it. In two simple steps you can double click your favorite designs and have them open automatically in AutoVue Desktop Deployment. I'd love to hear how you are using AutoVue Desktop Deployment. What other deployment tips would you be interested in learning about?

  • SQL SERVER – Attach mdf file without ldf file in Database

    - by pinaldave
    Background story: One of my friends recently called up and asked if I had spare time to look at his database and give him performance tuning advice. Because I had some free time to help him out, I said yes. I asked him to send me the details of his database structure and sample data. He said that since his database is in a very early stage and still small, he would like me to have the complete database. My response was, "Sure! In that case, take a backup of the database and send it to me. I will restore it on my computer and play with it." He did send me his database; however, his method made me write this quick note. Instead of taking a full backup of the database and sending it to me, he sent me only the .mdf (primary database file). In fact, I had asked for a complete backup (I wanted to review file groups, files, and a few other details). Upon calling my friend, I found that he was not available. So he left me with only a .mdf file. As I had some extra time, I decided to check out his database structure and get back to him regarding the full backup whenever I could reach him again.

    Technical talk: If the database was shut down gracefully and there was no abrupt shutdown (power outages, pulled plugs, machine crashes or any other reason), it is possible (though there is no guarantee) to attach the .mdf file alone to the server. Please note that there can be many more reasons for a database failing to attach or restore. In my case, the database had a clean shutdown and there were no complex issues. I was able to recreate a transaction log file and attach the received .mdf file. There are multiple ways of doing this; I am listing all of them here. Before using any of them, please consult the domain expert in your company or industry. Also, never attempt this on a live/production server without the presence of a disaster recovery expert.

        USE [master]
        GO

        -- Method 1: I use this method
        EXEC sp_attach_single_file_db @dbname='TestDb',
            @physname=N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf'
        GO

        -- Method 2:
        CREATE DATABASE TestDb ON
        (FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
        FOR ATTACH_REBUILD_LOG
        GO

    With Method 2, if one or more log files are missing, they are recreated. There is one more method, which I am demonstrating here but have not used myself. According to Books Online, it will work only if exactly one log file is missing; if more than one log file is involved, all of them must undergo the same procedure.

        -- Method 3:
        CREATE DATABASE TestDb ON
        ( FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\TestDb.mdf')
        FOR ATTACH
        GO

    Please read Books Online in depth and consult DR experts before working on a production server. In my case, the above syntax worked fine, as the database was clean when it was detached. Feel free to write about your opinions and experiences, as it will help the IT community learn more from your suggestions and skills.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Unable to delete file from read-only file system

    - by tech
    I cannot delete files. When I try to delete a file, this message appears: "Error while deleting. There was an error deleting file2.rar." Clicking on SHOW MORE DETAILS gives: "Error removing file: Read-only file system". What I've tried so far: deleting the file from the downloads directory as my user, and deleting it permanently, but neither works.

    EDIT: When trying to remount the filesystem as read/write I get this error:

        jianyue@jianyue:~$ sudo mount -o remount,rw /media/A88F-8788
        [sudo] password for jianyue:
        sudo: unable to open /var/lib/sudo/jianyue/1: Read-only file system
        mount: can't find /media/A88F-8788 in /etc/fstab or /etc/mtab

  • Clean file separators in Ruby without File.join

    - by kerry
    I love anything that can be done to clean up source code and make it more readable. So, when I came upon this post, I was pretty excited; this is precisely the kind of thing I love. I have never felt good about 'file separator' strings because of their ugliness and verbosity. In Java we have:

        String path = "lib" + File.separator + "etc";

    And in Ruby a popular method is:

        path = File.join("lib", "etc")

    Now, by overloading the '/' operator on String in Ruby:

        class String
          def /(str_to_join)
            File.join(self, str_to_join)
          end
        end

    we can now write:

        path = 'lib'/'src'/'main'

    Brilliant!
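
    Worth noting (my addition, not from the original post): Ruby's standard library already ships this idea in Pathname, which overloads + for joining (newer Ruby versions also alias it as /), so no monkey-patching of String is needed:

        require 'pathname'

        path = Pathname.new('lib') + 'src' + 'main'
        path.to_s  # => "lib/src/main"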

  • File property information (last write time and file size) in Explorer out of date by hours over network

    - by David L Morris
    An application is running on a Windows XP Professional machine, picking up a file from a network share on another Windows machine. It detects that the file has been updated (by date and time, or optionally file size) and reads it for any new data. Most of the time the last write time and file size seem to be up to date. Occasionally, this information stops being updated even though the file is growing (intermittently during the day) with appended content, so the last write time and file size remain fixed at some arbitrary moment. This is visible in Explorer, which shows a stale last write time on the reading machine. Simply opening the file in Notepad immediately refreshes the file properties, and the application picks up where it left off. The file location can't be changed, nor can the location of the relevant applications. Any solutions to this problem?

  • Grant account write access to specific attributes on Active Directory User object

    - by Patricker
    I am trying to allow an account to update very specific attributes on all user objects, and I am setting this security on the "User" object. When I add the account on the security tab, go to Advanced, edit the account's permissions, and start going through the list of attributes, I am only able to find a few, like First Name; most of the attributes I want to grant write access to are missing. How can I grant the account write access to these attributes? The attributes I need to grant permission for are: First Name (givenName), Last Name (sn), Initials (initials), Department (department), Company (company), Title (title), Manager (manager), Location Info (physicalDeliveryOfficeName, streetAddress, postOfficeBox), Work Phone (telephoneNumber), Pager (pager), IP Phone (ipPhone), IP Phone Other (otherIpPhone), Thumbnail Logo (thumbnailLogo), jpegPhoto (jpegPhoto), Description (displayName). Thanks
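
    One possible route, offered as a hedged sketch: the advanced security editor hides many attributes from its list by default, but the same grants can be made from the command line with dsacls, where WP means "write property". The OU path and account name below are placeholders; /I:S applies the grant to descendant objects of the named type:

        rem grant write access to individual attributes on user objects under the OU
        dsacls "OU=Staff,DC=example,DC=com" /I:S /G "EXAMPLE\svc-account:WP;department;user"
        dsacls "OU=Staff,DC=example,DC=com" /I:S /G "EXAMPLE\svc-account:WP;title;user"
        dsacls "OU=Staff,DC=example,DC=com" /I:S /G "EXAMPLE\svc-account:WP;manager;user"
        rem ...repeat for sn, initials, company, telephoneNumber, and the rest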

  • In Eclipse, how to open a file browser in the directory of the currently edited file

    - by JC
    Hi, I know it's possible in Eclipse to open a file browser from your project's resource browser, but is it possible for files that aren't part of your project? Typically, external includes are not found in your resource browser. If there were an equivalent of $(resource_loc) for the editor, it would work, but I wasn't able to find one. Can anyone help me with this? Thanks! JC

    EDIT: I found StartExplorer, but it's a joke of a plug-in: it is hardcoded to use Windows Explorer or cmd.exe, and it still requires you to use the resource browser. Other than that, it can open paths selected in the editor, but they must be full paths.

  • Proper setup of shared folders for users

    - by user221486
    First I would like to say thanks for helping. I have a huge problem with setting up permissions properly for shared folders. I have:

    - Windows 7 x64 Ent. (name: backupfb), added to the domain, with a shared folder on drive E: (E:\backup)
    - 50 clients/laptops with TSM Tivoli FastBack for Workstations, which save files to the shared folder

    I need to configure permissions for my shared folders so that only the owner of a folder can access it. The folder structure is:

    - E:\backup <- shared as the "backup" folder (\\backupfb\backup\)
    - E:\backup\BackupAdmin <- this directory is used by the Tivoli Storage Manager FastBack for Workstations client to download revisions and configurations; nodes require read-only access to these directories
    - E:\backup\RealTimeBackup <- enables client accounts to create directories that are only accessible by the account that created them; as a result, the directory that contains data for a node is not created until that node connects to the server

    So the permissions should look like this (taken from the instructions), with inheritable permissions from the object's parents DISABLED.

    Permission entries for \\backupfb\backup\BackupAdmin:

    - Allow Users (this folder, subfolders, and files): Traverse Folder / Execute File, List Folder / Read Data, Read Attributes, Read Extended Attributes, Delete Subfolders and Files, Delete, Read Permissions
    - Allow Administrators: Full Control (this folder, subfolders, and files)

    Both folders have the option "apply these permissions to objects and/or containers within this container only" enabled. Here everything works fine.

    Permission entries for \\backupfb\backup\RealTimeBackup:

    - Allow Administrators: Full Control (this folder, subfolders, and files)
    - Allow CREATOR OWNER: Full Control (this folder, subfolders, and files)
    - Allow Users (from domain), Special (this folder only): Traverse Folder / Execute File, List Folder / Read Data, Read Attributes, Read Extended Attributes, Create Files / Write Data, Create Folders / Append Data, Delete Subfolders and Files, Read Permissions
    - Allow OWNER RIGHTS: Full Control (this folder, subfolders, and files)

    Here I have a huge problem with CREATOR OWNER: I am able to set Full Control, but I can only apply it to "Subfolders and files only". When I change the scope to "This folder, subfolders and files" and save, it changes back to "Subfolders and files only". So I tried using icacls to set up the permissions:

        @echo off
        takeown /F E:\backup\ /R /A
        for /D %%i IN (E:\backup\RealTimeBackup\*) DO icacls E:\backup\RealTimeBackup\%%~nxi /grant:r cloud\%%~nxi:F /T /C
        pause

    But after that, users are able to create just one folder in \\backupfb\backup\RealTimeBackup\userfolder; the problem is with subfolders. In the log I have:

        FBW5022E Unable to access the specified file
        Explanation: The file specified is unable to be accessed. Possibly spelled
        incorrectly, or bad path, or permissions.
        User response: Ensure the user has the proper permissions for the file and
        directories involved and that the file and directory exist.

    Any ideas? Please help :-) Thanks.
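
    A hedged guess at the subfolder failures, not a verified fix: the plain :F grant in the loop above is not an inheritable ACE, so folders and files created later underneath a user's folder receive nothing from it (/T only stamps what already exists). Adding the (OI)(CI) inheritance flags makes the grant flow down to new subfolders and files:

        @echo off
        rem same loop, but the ACE now inherits to files (OI) and subfolders (CI)
        for /D %%i IN (E:\backup\RealTimeBackup\*) DO icacls "E:\backup\RealTimeBackup\%%~nxi" /grant:r cloud\%%~nxi:(OI)(CI)F /T /C
        pause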

  • Photoshop CS6 Corrupted File recovery

    - by Ben Franchuk
    Last night I was working on a client application mock-up in Photoshop, but was going to take a break from my work, so I saved the .PSD file on my internal HDD and put my computer into stand-by mode once the file had finished saving. Unfortunately my computer crashed while it was entering stand-by and shut itself down (Photoshop was still open). I did not boot it again to make sure all my files were OK, because they had already been saved, but today, once I opened the file again, it was extremely corrupted and completely un-editable (screenshot below). So what I'm asking is: is there any way to recover my work, or at least some of it? I have put a good few days' work into this project and would hate to have to restart it. The size of the file is 3070 KB, even though it reads as 712 KB in Photoshop. I don't know whether these file sizes are larger or smaller than the original non-corrupted file's size, but considering all the layers in the file, I suspect it was larger before it was corrupted. I'm using Windows XP Professional 32-bit SP3. Both my OS and the .PSD file are located on the same internal HDD (74.4 GB). I do have an external HDD (1.5 TB), but I primarily use it only for movies, music and TV shows; I don't know if it was plugged in when I last edited the document, though, if it means anything. I have tried many image and PSD recovery tools, but none have returned any results that might help recover my work.

    Edit: I tried a photo recovery program (Odboso PhotoRecovery) that actually seems to recover the corrupted file in question, judging by the size of the file, but I cannot recover it because of the licence fee. Knowing that the file is likely still on my HDD, where might it be located?

  • While using an NTFS SMB share for Mac users, do symbolic links and extended attributes work?

    - by scape
    We have a majority of Mac users, but we'd rather support their file sharing using a Windows server with an NTFS drive, or at least a Linux server with ext3. We've had much trouble with the OS X Server software over the years and are now looking to abandon it. What's mostly holding us back is that the Mac users very often rely on symbolic links and other special features of an HFS+ partition. The shared locations are mostly primary storage, not just archive storage. While there is an option to create symbolic links on NTFS, I'm curious whether there is anything I need to look out for if I move the files from the HFS+ partition to a new partition hosted on a Windows server, and how well creating a symbolic link from a Mac would work. I am also worried about whether Windows backup software will ruin these special symlinks, and how placing permissions on sub-folders will work. Alternatively, I could back up the files remotely using a Mac and Bru; nonetheless, I still want to get away from using a Mac server to host the shares.

  • Problem opening SFX archive file(.exe) using the archive manager

    - by Cody
    I have installed both rar and unrar using apt-get install, but I am still not able to use Archive Manager to open the archive file. I have also tried installing p7zip (p7zip-full and p7zip), but saw no improvement. However, when I use the command line to extract the files from the archive using unrar or rar, the command executes successfully. Is there any other open source software I should install for viewing the contents of the SFX archive, or what else should I install to view it in Archive Manager? Thanks in advance.

  • Is it safe to convert Windows file paths to Unix file paths with a simple replace?

    - by MxyL
    So, for example, say all of my files will be transferred from a Windows machine to a Unix machine, such that C:\test\myFile.txt becomes {somewhere}/test/myFile.txt (the drive letter is irrelevant at this point). Currently, our utility library that we wrote ourselves provides a method that does a simple replacement of all backslashes with forward slashes:

        public String normalizePath(String path) {
            // note: replaceAll() treats its first argument as a regex, where a lone
            // backslash is an invalid pattern; replace() swaps literal characters
            return path.replace('\\', '/');
        }

    Slashes are reserved and cannot be part of a file name, so the directory structure should be preserved. However, I'm not sure whether there are other complications between Windows and Unix paths that I may need to worry about (e.g. non-ASCII names, etc.).

  • Online File Library

    - by janvdl
    I'm looking for a system, preferably PHP-based and web-based, to run on Ubuntu Server. Basically it should be a "file forum", in the sense that users can register and be approved to post files to categories. Users with "read" privileges can then go through the categories and download files. Essentially I want an FTP-like system, but one where managing users, categories, etc. is as easy as in a forum system like vB or phpBB. If it could have a forum look and feel, that would also be great, but I don't want any discussions taking place.

  • Java - Copy a file to either a new file or an existing file.

    - by jared
    Hi, I would like to write a function copy(File f1, File f2), where f1 is always a file and f2 is either a file or a directory. If f2 is a directory, I would like to copy f1 into that directory (the file name should stay the same). If f2 is a file, I would like to append the contents of f1 to the end of f2. So, for example, if f2 has the contents 2222222222222 and f1 has the contents 1111111111111, then after copy(f1, f2), f2 should contain 2222222222222 followed by 1111111111111. Thanks!
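
    A sketch of one way to write this using java.nio.file (Java 7+, newer than the original question; error handling kept minimal):

        import java.io.File;
        import java.io.IOException;
        import java.nio.file.Files;
        import java.nio.file.StandardCopyOption;
        import java.nio.file.StandardOpenOption;

        public class CopyUtil {
            public static void copy(File f1, File f2) throws IOException {
                if (f2.isDirectory()) {
                    // copy f1 into the directory, keeping its file name
                    Files.copy(f1.toPath(), f2.toPath().resolve(f1.getName()),
                               StandardCopyOption.REPLACE_EXISTING);
                } else {
                    // append f1's bytes to the end of f2, creating f2 if it is missing
                    Files.write(f2.toPath(), Files.readAllBytes(f1.toPath()),
                                StandardOpenOption.CREATE, StandardOpenOption.APPEND);
                }
            }
        }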

  • jQuery: Create hidden attributes? I need to reduce tag bulkiness.

    - by Dan
    I'm sure we've all done this before:

        <a id="232" rel="link_to_user" name="user_details"
           class="list hoverable clickable selectable"> USER #232 </a>

    But then we say: oh my, I need more ways to store tracking info about this element!

        <a id="232-343-22" rel="link_to_user:fotomeshed" name="user_details"
           class="groupcolor-45 elements-698 list hoverable clickable selectable"> User: John Doe </a>

    And the sickness keeps growing. We just keep packing it into that poor little element and its attributes, all so we can keep track of who it is. So, with my limited knowledge of JS, someone please tell me how to do something like this:

        <a id="33">USER #33</a>

        $('#33').attr({title: 'User Record', username: 'john', group_color: 'green', element_num: 78});

    Here we just added what I would call invisible attributes, because we just played God and made those attributes up on the fly like it was no problem. The cool part is that these would be kept in their own little object somewhere in variable land, NOT in the tag itself. Then later on, in code nested far far away, be able to say: I wonder what group_color john is...

        user_group_color = $(table).find(a['username':'john']).attr('group_color');

    THEN BAM!!!! POW!!!!

        alert(user_group_color + " is a bitchin color!");

    You get to know his group color... all without adding a bunch of bloated element-tracking nonsense into our tags. So does this sort of thing exist? If not, how do I make it?
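
    This does exist in jQuery as .data(): values live in an internal store keyed by the element, not in the markup. A small sketch (the id 'user-33' is a stand-in, since HTML4 ids should not start with a digit):

        // store arbitrary values without touching the tag's attributes
        $('#user-33').data({ title: 'User Record', username: 'john',
                             group_color: 'green', element_num: 78 });

        // read one back later, in code nested far far away
        var user_group_color = $('#user-33').data('group_color');
        alert(user_group_color + " is a bitchin color!");

        // .data() values are not attributes, so CSS selectors cannot match them;
        // finding "john" means filtering the candidates
        var john = $('table').find('a').filter(function () {
            return $(this).data('username') === 'john';
        });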

  • How do you parse attributes and values for a URL path with SAX using the iPhone SDK?

    - by Jim
    I'm trying to get my head around parsing with SAX and thought a good place to start was the TopSongs example found at the iPhone Dev Center. I get most of it, but when it comes to reaching attributes and values within a node I can't find a good example anywhere. The XML has a path to a URL for the cover art, and the node looks like this:

        <itms:coverArt height="60" width="60">http://a1.phobos.apple.com/us/r1000/026/Music/aa/aa/27/mzi.pbxnbfvw.60x60-50.jpg</itms:coverArt>

    What I've tried is this for the startElement condition:

        ((prefix != NULL && !strncmp((const char *)prefix, kName_Itms, kLength_Itms)) &&
         (!strncmp((const char *)localname, kName_CoverArt, kLength_Item) &&
          !strncmp((const char *)attributes, kAttributeName_CoverArt, kAttributeLength_CoverArt) &&
          !strncmp((const char *)attributes, kValueName_CoverArt, kValueLength_CoverArt) ||
          !strncmp((const char *)localname, kName_Artist, kLength_Artist) ||

    and picking it up again with just the localname at the end, like this:

        if (!strncmp((const char *)localname, kName_CoverArt, kLength_CoverArt)) {
            importer.currentSong.coverArt = [NSURL URLWithString:importer.currentString];

    The trace is: -[Song setCoverArt:]: unrecognized selector sent to instance.
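
    For reference, a reconstruction (mine, not code from the sample) of how xmlSAX2StartElementNs delivers attributes: the attributes array packs five pointers per attribute (localname, prefix, URI, value start, value end), so values are not null-terminated and must be copied by length. Note that the URL itself is element text and arrives through the characters callback, not here; only height and width are attributes:

        static void startElementSAX(void *ctx, const xmlChar *localname,
                                    const xmlChar *prefix, const xmlChar *URI,
                                    int nb_namespaces, const xmlChar **namespaces,
                                    int nb_attributes, int nb_defaulted,
                                    const xmlChar **attributes) {
            for (int i = 0; i < nb_attributes; i++) {
                const xmlChar *attrName = attributes[i * 5];      // e.g. "height"
                const xmlChar *valBegin = attributes[i * 5 + 3];  // start of the value
                const xmlChar *valEnd   = attributes[i * 5 + 4];  // one past its end
                if (!strncmp((const char *)attrName, "height", 6)) {
                    NSString *height = [[NSString alloc]
                        initWithBytes:valBegin
                               length:(NSUInteger)(valEnd - valBegin)
                             encoding:NSUTF8StringEncoding];
                    // ...store the value somewhere, then release it (pre-ARC code)...
                    [height release];
                }
            }
        }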
