Search Results

Search found 34489 results on 1380 pages for 'folder view'.


  • Default permission for newly-created files/folders using ACLs not respected by commands like "unzip"

    - by Ngoc Pham
    I am having trouble setting up a system for multiple users accessing the same set of files. I've read tutorials and docs and played with ACLs, but haven't succeeded yet.

    MY SCENARIO: I have multiple users, for example user1 and user2, who belong to a group called sharedusers. They must all have WRITE permission to the same set of files and directories under /userdata/sharing/. I have the folder's group set to sharedusers and the SGID bit set so that all newly created files/dirs inside get the same group.

        ubuntu@home:/userdata$ ll
        drwxr-sr-x 2 ubuntu sharedusers 4096 Nov 24 03:51 sharing/

    I set ACLs on this directory so that sub dirs/files inherit permissions from their parent:

        ubuntu@home:/userdata$ setfacl -m group:sharedusers:rwx sharing/
        ubuntu@home:/userdata$ setfacl -d -m group:sharedusers:rwx sharing/

    Here's what I've got:

        ubuntu@home:/userdata$ getfacl sharing/
        # file: sharing/
        # owner: ubuntu
        # group: sharedusers
        # flags: -s-
        user::rwx
        group::r-x
        group:sharedusers:rwx
        mask::rwx
        other::r-x
        default:user::rwx
        default:group::r-x
        default:group:sharedusers:rwx
        default:mask::rwx
        default:other::r-x

    This seems okay: when I create a new folder with new files inside, the permissions are correct.

        ubuntu@home:/userdata/sharing$ mkdir a && cd a
        ubuntu@home:/userdata/sharing/a$ touch a_test
        ubuntu@home:/userdata/sharing/a$ getfacl a_test
        # file: a_test
        # owner: ubuntu
        # group: sharedusers
        user::rw-
        group::r-x              #effective:r--
        group:sharedusers:rwx   #effective:rw-
        mask::rw-
        other::r--

    As you can see, the sharedusers group has effective permission rw-. HOWEVER, if I have a zip file and use unzip -q to extract it inside the sharing folder, the extracted folders don't have group write permission, so the users in group sharedusers cannot modify files under those extracted folders.

        ubuntu@home:/userdata/sharing$ unzip -q Joomla_3.0.2-Stable-Full_Package.zip
        ubuntu@home:/userdata/sharing$ ll
        drwxrwsr-x+  2 ubuntu sharedusers 4096 Nov 24 04:00 a/
        drwxr-xr-x+ 10 ubuntu sharedusers 4096 Nov  7 01:52 administrator/
        drwxr-xr-x+ 13 ubuntu sharedusers 4096 Nov  7 01:52 components/

    You can spot the difference in permissions between folder a (created before) and folder administrator extracted by unzip. And the ACLs of a file inside administrator:

        ubuntu@home:/userdata/sharing$ getfacl administrator/index.php
        # file: administrator/index.php
        # owner: ubuntu
        # group: ubuntu
        user::rw-
        group::r-x              #effective:r--
        group:sharedusers:rwx   #effective:r--
        mask::r--
        other::r--

    It also has the ubuntu group, not the sharedusers group as expected. Could someone please explain the problem and give me advice? Thank you in advance!
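    A hedged sketch of a common workaround, using the paths and group from the question: unzip restores the permissions recorded in the archive with chmod, which overrides the inherited default ACL mask, so one option is to re-apply the group ownership, group-write bits and ACLs after extraction.

        cd /userdata/sharing
        unzip -q Joomla_3.0.2-Stable-Full_Package.zip
        # re-assert group ownership and group rw (X = execute only on dirs / already-executable files)
        chgrp -R sharedusers .
        chmod -R g+rwX .
        # keep the SGID bit on directories so future files inherit the group
        find . -type d -exec chmod g+s {} +
        # re-apply the access ACL everywhere and the default ACL on directories
        setfacl -R -m g:sharedusers:rwX .
        find . -type d -exec setfacl -m d:g:sharedusers:rwX {} +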

    Read the article

  • Firefox can't establish a connection to the server at www.google.com

    - by Tom
    My home page in Firefox [v4.0] and Internet Explorer [v9.0.8112.16421, Update Versions: RTM (KB982861)] is currently set to Google, but when I press the quick-start icon to start either browser, I get the following immediate results:

    Unable to connect (in Firefox): "Firefox can't establish a connection to the server at www.google.com. The site could be temporarily unavailable or too busy. Try again in a few moments. If you are unable to load any pages, check your computer's network connection. If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web."

    Internet Explorer cannot display the webpage. What you can try: Diagnose Connection Problems. More information: This problem can be caused by a variety of issues, including: Internet connectivity has been lost. The website is temporarily unavailable. The Domain Name Server (DNS) is not reachable. The Domain Name Server (DNS) does not have a listing for the website's domain. There might be a typing error in the address. If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section. For offline users: You can still view subscribed feeds and some recently viewed webpages. To view subscribed feeds: click the Favorites button, click Feeds, and then click the feed you want to view. To view recently visited webpages (might not work on all pages): press Alt, click File, and then click Work Offline; click the Favorites button, click History, and then click the page you want to view.

    Thankfully, I am able to use another browser installed on my computer (Maxthon v3.0.20.5000) to search online for technical assistance in this matter. I have seen several WinSock error issues mentioned, but they point to Windows XP; I am using Windows 7 Pro and remain uncertain whether anything identified as a fix for one OS will work on another.

    Things I've tried: HijackThis; a complete scan with Avira AntiVirus Premium.

    What am I overlooking? What should I do to address this problem?
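    A hedged first-pass sketch, assuming the failure is local DNS or Winsock corruption rather than the sites themselves (stock Windows 7 commands, run from an elevated command prompt; netsh winsock reset needs a reboot afterwards):

        :: clear the local DNS resolver cache
        ipconfig /flushdns
        :: rebuild the Winsock catalog (reboot afterwards)
        netsh winsock reset
        :: reset the TCP/IP stack
        netsh int ip reset
        :: check that the machine can still resolve the name at all
        nslookup www.google.com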

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7, 32-bit) to help me with this process:

    I have my documents, music, video clips, movies and pictures on my hard disk. These will not be scattered around the system, but will all be inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk. Files deleted from C:\Senthil\ should be deleted there, new files should be copied, etc. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk.

    A couple of important requirements and points:

    - I do NOT need multiple or historic versions. I don't need previous versions of my files; I only want the latest copy to be present in my "backup".
    - Incremental backup makes sense. If files were not touched since the last backup, they need not be copied.
    - The size of my folder will run into GBs and in a year or two will go into TBs, but I will make sure the external HDD is at least as big as my source folder.
    - I do not want it to run automatically, because if I accidentally delete a file in my source it would delete the one in the backup (I know this is why we have versioning facilities). I just want to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in that situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D

    The main reason I am asking is that I am a software developer and could write this myself, but I am a little constrained by time at the moment and want to know if there is an existing program that can do this. Kindly don't worry about earthquakes, fires or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories; I don't think I will digitally store any life-ruining documents; and this backup is only to avoid the inconvenience of obtaining a new copy of stuff that I already have, not to protect it against the end of the world. I am more worried about power surges frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift + Delete, or myself getting a little careless at times!

    In short: is there a file/folder syncing program that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
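    For what it's worth, a hedged sketch of this with the built-in Robocopy (the drive letter E: and the log file name are assumptions): /MIR mirrors the tree including deletions, skipping unchanged files is its default behaviour, and it only runs when you run it.

        robocopy C:\Senthil E:\Senthil /MIR /R:2 /W:5 /LOG:E:\weekly-backup.log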

    Read the article

  • Creating a really public Windows network share

    - by Timur Aydin
    I want to create a shared folder under Windows (actually Windows XP, Vista, and Windows 7) which can be mounted from a Linux system without prompting for a username/password. But before attempting this, I first wanted to establish that it works between two Windows 7 machines.

    So, on machine A (the server that will hold the public share), I created a folder and set its permissions so that Everyone has read/write access. Then I visited Control Panel - Network and Sharing Center - Advanced Sharing Settings and selected "Turn off password protected sharing". Then, on machine B (the client that wants to access the public share with no username/password prompt), I tried to "map network drive" and was immediately shown a password prompt.

    Some searching on Google suggested changing "Accounts: Limit local account use of blank passwords to console logon only" to "Disabled". Tried that, no luck, still getting a username/password prompt. If I enter the username/password, I am not prompted for it again and can use the share as long as the session is active. But I really need to access the share without any username/password transaction whatsoever, and this is not just a convenience thing. Here is the actual reason: the device that will access this Windows network share is an embedded system running uClinux. It will mount this share locally and then play media files. Its only user interface is a JavaScript-based web page. So, if there is going to be any username/password transaction, I would have to ask the user to enter them over the web page, which would be ridiculously insecure and completely exposed to packet sniffing.

    After hours of experiments, I have found one way to make this happen, but I am not really fond of it... I first create a new user (shareuser) and give it a password (sharepass). Then I open Group Policy Editor and set "Deny log on locally" to "A\shareuser". Then, I create a folder on A and share it so that shareuser has Read access to it. This way, shareuser cannot log in to A, but can access the shared folder. And if someone discovers shareuser/sharepass through network sniffing, they can just access the shared folder, but can't log on to A. The same thing can be achieved by enabling the Guest user and then going to Group Policy Editor and removing "Guest" from the "Deny access to this computer from the network" setting. Again, Guest can mount the public share, but logging in to A as Guest won't be possible, because Guest is already not allowed to log in by default.

    So my question would be: how can I create a network share that is truly public, so that it can be mounted from a Linux machine without requiring a password? Sorry for the long question, but I wanted to explain the reason for really needing this...
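    For the Linux side, a hedged sketch of what the eventual mount could look like once the Windows share genuinely allows unauthenticated (guest) access; the share name "Public", the mount point and the uid are assumptions, not taken from the question:

        # one-off mount with no credentials sent
        mount -t cifs //A/Public /mnt/public -o guest,uid=1000,gid=1000

        # or the equivalent /etc/fstab entry
        //A/Public  /mnt/public  cifs  guest,uid=1000,gid=1000  0  0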

    Read the article

  • git, egit, submodules, and symlinks -- how should shared sub-projects be handled in eclipse?

    - by Autophil
    Here's the situation. I have a few git projects with a directory structure laid out more or less like this:

        simpleproj
          app
          www
            admin
            demo
          lib
            model
            orm
            view
          model
            user
            blah
            ...

        storeproj
          app
          www
            about
            mobile
            fbapp
          lib
            model
            orm
            view
          model
            user
            message
            cart
            product
            merchant

    Each directory in "lib" contains a separate project, either created in-house or forked, all of which use git for source control. So I figured I should make them submodules of my projects, right? Well, we've been moving toward Eclipse + egit, because some of our Windows guys who aren't used to a CLI need something they can use without being scared of screwing things up. Anyway, the problem is, egit doesn't support submodules.

    So, my solution has been a rather crude one involving symlinks... Let's say my directory structure on my dev box is generally laid out like this:

        ~/projects/
          bigproj
            .git
            app
            lib
              model (-> ~/lib/model/src/)
              orm   (-> ~/lib/orm/src/)
          neatproj
            .git
            app
            lib
              view (-> ~/lib/view/src/)
          oldproj
            .git
            app
            lib
              orm (-> ~/lib/orm/src/)
        ~/lib/
          model
            .git
            src
            README.md
          orm
            .git
            src
            COPYING
          view
            .git
            src

    ...the symlinks point inside the directory with the git repo, so Eclipse doesn't get confused, and everything sort of works. On my machine, I can update the libs from anywhere and all projects will be updated (needing to be committed again, of course). Each project stores a separate copy of the contents of the symlinked directories within "lib" -- but only when staged from within Eclipse. After committing from Eclipse and moving back to the CLI, git sees that a bunch of files have been removed and a few symlinks have been created. Of course this is acceptable too, probably more so than keeping a separate history of the libs for each project... but Eclipse and CLI git obviously need to be on the same page so tons of files aren't vanishing and reappearing.

    So this brings me to my question. I'd like to know how to either: get Eclipse+egit to see the symlinks as symlinks, if git will somehow handle them properly*, or get the CLI git to treat them as non-symlinks. Or, if there's a better way to do this, I'm all ears. Hope this all made sense! :D

    Note: tried to tag this as git-submodules, but was not allowed :(

    * Should I make them relative or absolute? Either way it's a mess. Also, will symlinks work on Windows? I know there's something similar, but you need a 3rd-party tool to manage them AFAIK; I doubt these would translate well.
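    If the plain-git submodule route is ever revisited, a minimal sketch of the CLI side (repository URLs are placeholders; this does nothing about the egit limitation itself):

        # track the shared libraries as submodules instead of symlinks
        git submodule add ssh://yourhost/repos/lib-model.git lib/model
        git submodule add ssh://yourhost/repos/lib-orm.git   lib/orm
        git commit -m "Track shared libs as submodules"

        # what a collaborator runs after cloning
        git clone ssh://yourhost/repos/bigproj.git
        cd bigproj
        git submodule update --init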

    Read the article

  • web services access not being reached thru the web browser [closed]

    - by Tony
    I am trying to reference my .asmx web services in .NET, but my server is not exposed to the internet. When I browse to the address below I get the message shown underneath it. What's the reason for not being able to see the directory? Am I missing something in my IIS configuration? Am I missing anything in my permissions? Just as a reference, I have other folders with web services and I have the same issue with them. When I log in to the server I do it with my Windows user and password (I am using Windows authentication). It's worth mentioning that when I enter the URL I get a popup asking for my user id and password, but it seems unable to validate them, since it keeps asking a couple of times. Let me know if you need more information to address this issue.

    http://appsvr02/Inetpub/wwwroot/DevWebApi/

    Internet Explorer cannot display the webpage. What you can try: It appears you are connected to the Internet, but you might want to try to reconnect to the Internet. Retype the address. Go back to the previous page. Most likely causes: • You are not connected to the Internet. • The website is encountering problems. • There might be a typing error in the address. More information: This problem can be caused by a variety of issues, including: • Internet connectivity has been lost. • The website is temporarily unavailable. • The Domain Name Server (DNS) is not reachable. • The Domain Name Server (DNS) does not have a listing for the website's domain. • If this is an HTTPS (secure) address, click Tools, click Internet Options, click Advanced, and check to be sure the SSL and TLS protocols are enabled under the security section. For offline users: You can still view subscribed feeds and some recently viewed webpages. To view subscribed feeds: 1. Click the Favorites Center button, click Feeds, and then click the feed you want to view. To view recently visited webpages (might not work on all pages): 1. Click Tools, and then click Work Offline. 2. Click the Favorites Center button, click History, and then click the page you want to view.

    Read the article

  • Apache will not stop/start gracefully

    - by ddjammin
    CentOS 6 64-bit running Apache 2.2.15-29.el6.centos. When I try to stop/start or restart httpd I get an error that says it has failed. A tail of the error log is below. I also noticed that an httpd.pid file is not created even though it is configured in the main conf file.

    If I set SELinux to permissive, it works just fine. I do not want to run it with SELinux disabled. If I delete the SSL_Mutex file it will start. httpd was running fine until I tried to add the SSL configuration: I copied the ssl.conf file from a working server into the conf.d folder, and I also copied an sslcert folder into the conf folder (it contains the certs, key, CSR and password file). I think the problem has to do with the SELinux context for the copied sslcert folder, but I am not certain and not sure how to fix it. Below is the security context for the sslcert folder after executing restorecon -R sslcert:

        ls -Z
        -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 httpd.conf
        -rw-r--r--. root root system_u:object_r:httpd_config_t:s0 magic
        drwxr-xr-x. root root system_u:object_r:httpd_config_t:s0 sslcert

        tail -f /var/log/httpd/error_log
        [Thu Oct 17 13:33:19 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Thu Oct 17 13:33:20 2013] [notice] Digest: generating secret for digest authentication ...
        [Thu Oct 17 13:33:20 2013] [notice] Digest: done
        [Thu Oct 17 13:33:20 2013] [warn] pid file /etc/httpd/logs/ssl.pid overwritten -- Unclean shutdown of previous Apache run?
        [Thu Oct 17 13:33:20 2013] [notice] Apache/2.2.15 (Unix) DAV/2 mod_ssl/2.2.15 OpenSSL/1.0.0-fips configured -- resuming normal operations
        [Thu Oct 17 21:04:48 2013] [notice] caught SIGTERM, shutting down
        [Thu Oct 17 21:06:42 2013] [notice] SELinux policy enabled; httpd running as context system_u:system_r:httpd_t:s0
        [Thu Oct 17 21:06:42 2013] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec)
        [Thu Oct 17 21:06:42 2013] [error] (17)File exists: Cannot create SSLMutex with file `/etc/httpd/logs/ssl_mutex'

    I also saw mention of possible issues with semaphores. Below is the output of the current semaphores; Apache is currently not running:

        ipcs -s
        ------ Semaphore Arrays --------
        key        semid      owner      perms      nsems
        0x00000000 0          root       600        1
        0x00000000 65537      root       600        1

    Finally, SELinux reports the following error:

        sealert -a /var/log/audit/audit.log
        type=AVC msg=audit(1382034755.118:420400): avc: denied { write } for pid=3393 comm="httpd" name="ssl_mutex" dev=dm-0 ino=9513484
        scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:httpd_log_t:s0 tclass=file
        **** Invalid AVC allowed in current policy ***
        ERROR: failed to read complete file, 1044649 bytes read out of total 1043317 bytes (/var/log/audit/audit.log)
        found 1 alerts in /var/log/audit/audit.log
        --------------------------------------------------------------------------------
        SELinux is preventing /usr/sbin/httpd from remove_name access on the directory ssl_mutex.
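    A hedged sketch of the usual checks for this kind of denial, using the paths from the question: the AVC shows the stale ssl_mutex file carrying the httpd_log_t label, so clearing the leftover mutex and relabelling is a reasonable first attempt before writing any custom policy.

        # remove the stale mutex left by the unclean shutdown, then relabel the log dirs
        rm -f /etc/httpd/logs/ssl_mutex
        restorecon -Rv /etc/httpd/logs /var/log/httpd

        # give the copied certificate folder a persistent context matching the rest of conf/
        semanage fcontext -a -t httpd_config_t "/etc/httpd/conf/sslcert(/.*)?"
        restorecon -Rv /etc/httpd/conf/sslcert

        # if a denial remains, generate a local policy module from the audit log
        grep httpd /var/log/audit/audit.log | audit2allow -M httpd_ssl_local
        semodule -i httpd_ssl_local.pp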

    Read the article

  • Microsoft SSIS Service: Registry setting specifying configuration file does not exist.

    - by mbrc
    Microsoft SSIS Service: "Registry setting specifying configuration file does not exist. Attempting to load default config file. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp."

    This is my MsDtsSrvr.ini.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
          <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
          <TopLevelFolders>
            <Folder xsi:type="SqlServerFolder">
              <Name>MSDB</Name>
              <ServerName>.\SQL2008</ServerName>
            </Folder>
            <Folder xsi:type="FileSystemFolder">
              <Name>File System</Name>
              <StorePath>..\Packages</StorePath>
            </Folder>
          </TopLevelFolders>
        </DtsServiceConfiguration>

    I found at http://msdn.microsoft.com/en-us/library/ms137789.aspx that I need to update my registry. The only entry at HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS\ServiceConfigFile is (Default), with no value. What must I add to the registry so that I no longer get this error?
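    A hedged sketch of setting that (Default) value from an elevated command prompt, assuming the MsDtsSrvr.ini.xml sits in the standard SQL Server 2008 location (adjust the path to wherever the file actually lives):

        reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\100\SSIS\ServiceConfigFile" ^
            /ve /t REG_SZ ^
            /d "C:\Program Files\Microsoft SQL Server\100\DTS\Binn\MsDtsSrvr.ini.xml" /f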

    Read the article

  • Cannot create a project TFS 2012 - TF218027

    - by GrandMasterFlush
    I've just installed TFS 2012 and am trying to create a new project in the default collection via Visual Studio 2012, but I keep getting this error message:

    TF218027: The following reporting folder could not be created on the server that is running SQL Server Reporting Services: /TfsReports/DefaultCollection. The report server is located at: http://<servername>/Reports. The error is: The permissions granted to user '<domain>/grandmasterflush' are insufficient for performing this operation. Verify that the path is correct and that you have sufficient permissions to create the folder on that server and then try again.

    I've checked the permissions: my user is a member of the Project Collection Administrators group, and that group has the 'Create new project' permission set to Allow. The only thing I can think of is that the user I created during installation for SharePoint access and report viewing does not have permission to write to the reports folder; however, if I select "Do not configure a SharePoint site at this time" I still get the error message. I can't find the reports folder to check the permissions either. TFS is using an instance of SQL 2012 that was already on the machine when TFS was installed. Can anyone see what I'm doing wrong, please?

    Read the article

  • How do you guys handle custom yum repository?

    - by luckytaxi
    I have a bunch of tools (nagios, munin, puppet, etc...) that get installed on all my servers. I'm in the process of building a local yum repository. I know most folks just dump all the rpms into a single folder (broken down into the correct path) and then run createrepo inside the directory. However, what happens when you have to update the rpms? I ask because I was going to give each piece of software its own folder.

    Example one, put all packages inside one folder (custom_software):

        /admin/software/custom_software/5.4/i386
        /admin/software/custom_software/5.4/x86_64
        /admin/software/custom_software/4.6/i386
        /admin/software/custom_software/4.6/x86_64

    What I'm thinking of:

        /admin/software/custom_software/nagios/5.4/i386
        /admin/software/custom_software/nagios/5.4/x86_64
        /admin/software/custom_software/nagios/4.6/i386
        /admin/software/custom_software/nagios/4.6/x86_64
        /admin/software/custom_software/puppet/5.4/i386
        /admin/software/custom_software/puppet/5.4/x86_64
        /admin/software/custom_software/puppet/4.6/i386
        /admin/software/custom_software/puppet/4.6/x86_64

    This way, if I had to update to the latest version of puppet, I could manage the files accordingly. I wouldn't know which rpms belong to which software if I threw them all into one big folder. Makes sense?
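    A hedged sketch of how an update looks with the single-folder layout, using the paths from the question (the puppet rpm filename is an invented example): createrepo doesn't care how packages are grouped underneath the repo directory, and --update only rescans what changed, so refreshing after dropping in a newer rpm stays cheap.

        # drop the new rpm in place, then refresh just that repo's metadata
        cp puppet-2.7.19-1.el5.x86_64.rpm /admin/software/custom_software/5.4/x86_64/
        createrepo --update /admin/software/custom_software/5.4/x86_64
        # clients point a .repo file at baseurl=file:///admin/software/custom_software/5.4/x86_64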

    Read the article

  • SQL Server Management Studio not scripting all objects

    - by Ian Boyd
    I've been attempting to script a database using SQL Server 2005 Management Studio. I cannot get it to script some objects. It scripts others, but skips some. I can provide detailed screen shots of:

    - the options being selected, including all tables
    - the folder where the script files will go
    - the folder being empty before scripting
    - the scripting process saying Success when scripting a table
    - the destination folder no longer empty, with a hundred or so script files
    - the script of some tables not being in the folder

    And earlier SSMS would not script some views. Is it a known thing that the Generate Scripts task does not generate scripts?

    Update: Known issue on Microsoft Connect, but Microsoft couldn't repro the steps, so they closed the ticket. Fails on SQL Server 2005, also fails on SQL Server 2008.

    Update Two: Some basic questions:

    1. What version of SQL Server?
       Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
       Microsoft SQL Server 2005 - 9.00.3042.00 (Intel X86)
       Microsoft SQL Server 2008 - 10.0.2531.0 (Intel X86)
       Microsoft SQL Server 2005 Management Studio: 9.00.4035.00
       Microsoft SQL Server 2008 Management Studio: 10.0.1600.22
    2. What O/S are you running on?
       Windows Server 2000, Windows Server 2003, Windows Server 2008
    3. How are you logging in to SQL Server?
       sa/password, Trusted authentication
    4. Have you verified your account has full access to all objects?
       Yes, I have access to all objects.
    5. Can you use the objects that fail to script? (e.g. SELECT TOP(10) * FROM nonScriptingTable)
       Yes, all objects work fine. SQL Server Enterprise Manager can script the objects fine.

    Update Three: They fail no matter what version of SQL Server you script against. It wasn't a problem in Enterprise Manager:

        Client Tools   SQL Server 2000   SQL Server 2005   SQL Server 2008
        ============   ===============   ===============   ===============
        2000           Yes               n/a               n/a
        2005           No                No                No
        2008           No                No                No

    Update Four: No errors found in the database using:

        DBCC CHECKDB
        go
        DBCC CHECKCONSTRAINTS
        go
        DBCC CHECKFILEGROUP
        go
        DBCC CHECKIDENT
        go
        DBCC CHECKCATALOG
        go
        EXECUTE sp_msforeachtable 'DBCC CHECKTABLE (''?'')'

    Honk if you hate SSMS.

    Read the article

  • Outlook 2007 / 2010 Calendar: hide meetings in specific category

    - by Jeroen
    Question: Is there any easy way in Outlook 2007/2010 to show/hide meetings in a specific category? Preferably only for a specific view (the Month view, in this case).

    Note: I was almost done writing this question, adding just one more "What I've tried" option, when I found an acceptable (though imperfect) solution. Remembering this SE blog post, I figured I might as well post it anyway and answer it myself. And who knows, perhaps someone else has a more elegant solution.

    The reason, for me personally, is that I'd like to hide the "small, recurring meetings" like our daily stand-up meeting in the Month view. I'd prefer an Outlook feature that is meant for this (there must be one, right?), but I'm open to workarounds or plugin suggestions as well. What I expected to find somewhere was a list of categories (with an added option "No category") where you could select/deselect the categories whose meetings you'd see. Something like this mock-up:

    What I've tried:

    1. Edit "View Settings" and use a "Filter..." on categories. This has several disadvantages; the major one is that the filter only lets me choose what I want to show, not what I want to hide. Even if I tick all categories but one, it would still hide any uncategorized meeting.
    2. Similar to 1, but using Advanced filters (an example filter string is sketched below). Still a bit clumsy, as changing views can take up to three clicks, but this is the best solution so far (see the corresponding answer below).
    3. Creating a sub-calendar for these "small" meetings that I wish to hide. This felt a bit clumsy and like overkill, but did provide an easy select/deselect option to show/hide these meetings.
    4. Searching for plug-ins that do this. Couldn't find one (yet).
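    For option 2, a hedged example of the kind of DASL filter string that can go on the SQL tab of the view's Advanced filter dialog (the category name "Standup" is an assumption, and the exact syntax may need adjusting per Outlook version); it hides one category while leaving uncategorized items visible:

        NOT ("urn:schemas-microsoft-com:office:office#Keywords" LIKE '%Standup%')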

    Read the article

  • JAWStats statspath error on windows

    - by crosenblum
    I have AWStats, which works fine, and JAWStats, which I am trying to get working. I have tried backslashes and forward slashes to get the program to read the DirData files. I even moved the DirData folder outside of the Program Files folder, in case it had problems with folder names containing spaces. Here is my config file:

        // core config parameters
        $sConfigDefaultView = "thismonth.all";
        $bConfigChangeSites = true;
        $bConfigUpdateSites = true;
        $sUpdateSiteFilename = "xml_update.php";

        // individual site configuration
        // awstats092012.noname.jumpingcrab.com.txt
        $aConfig["site1"] = array(
          "statspath"  => "C:\\Program Files\\AWStats\\DirData\\",
          "statsname"  => "awstats[MM][YYYY].yourexample.com.txt",
          "updatepath" => "C:\\Program Files\\AWStats\\wwwroot\\cgi-bin\\awstats.pl\\",
          "siteurl"    => "http://yourexample.com",
          "theme"      => "default",
          "fadespeed"  => 250,
          "password"   => "",
          "includes"   => ""
        );

    Domain names changed to protect the innocent... :P

    Here is the error message:

        An error has occured: No AWStats Log Files Found
        JAWStats cannot find any AWStats log files in the specified directory: C:\Program Files\AWStats\DirData\
        Is this the correct folder? Is your config name, site1, correct?
        Please refer to the installation instructions for more information.

    Read the article

  • ghettoVCB issue

    - by romgo75
    I have set up a ghettoVCB script to back up three VMs. I put it in a crontab, but I have an issue. In my backup folder I have three different folders, one for each VM. In each folder I have the following files:

        -rw-r--r-- 1 root root 1263 Mar 17 01:51 vm1-2010-03-16--2.gz
        -rw-r--r-- 1 root root 1263 Mar 17 00:41 vm1-2010-03-16--3.gz
        -rw-r--r-- 1 root root 1261 Mar 18 01:22 vm1-2010-03-17--1.gz
        drwxr-xr-x 1 root root  980 Mar 19 23:39 vm1-2010-03-19

    The problem is the last folder. It seems that a backup didn't finish the process. When I read the logs concerning this folder I get:

        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_VOLUME = /vmfs/volumes/datastore1/backup/
        2010-03-19 23:00:01 -- info: CONFIG - VM_BACKUP_ROTATION_COUNT = 3
        2010-03-19 23:00:01 -- info: CONFIG - DISK_BACKUP_FORMAT = zeroedthick
        2010-03-19 23:00:01 -- info: CONFIG - ADAPTER_FORMAT = buslogic
        2010-03-19 23:00:01 -- info: CONFIG - POWER_VM_DOWN_BEFORE_BACKUP = 0
        2010-03-19 23:00:01 -- info: CONFIG - ENABLE_HARD_POWER_OFF = 0
        2010-03-19 23:00:01 -- info: CONFIG - ITER_TO_WAIT_SHUTDOWN = 3
        2010-03-19 23:00:01 -- info: CONFIG - POWER_DOWN_TIMEOUT = 5
        2010-03-19 23:00:01 -- info: CONFIG - SNAPSHOT_TIMEOUT = 15
        2010-03-19 23:00:01 -- info: CONFIG - LOG_LEVEL = info
        2010-03-19 23:00:01 -- info: CONFIG - BACKUP_LOG_OUTPUT = stdout
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_MEMORY = 0
        2010-03-19 23:00:01 -- info: CONFIG - VM_SNAPSHOT_QUIESCE = 0
        2010-03-19 23:00:01 -- info: CONFIG - VMDK_FILES_TO_BACKUP = all
        http://...
        2010-03-19 23:39:35 -- info: Initiate backup for vm1
        2010-03-19 23:39:35 -- info: Creating Snapshot "ghettoVCB-snapshot-2010-03-19" for vm1
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1_1.vmdk'...
        Clone: 0% done. ... Clone: 9% done. Clone Failed to clone disk : The file already exists (39).
        Destination disk format: VMFS zeroedthick
        Cloning disk '/vmfs/volumes/datastore1/vm1/vm1.vmdk'...
        2010-03-20 00:46:20 -- info: Removing snapshot from vm1 ...
        Clone: 7% done. ... Clone: 16% done.
        2010-03-19 23:51:19 -- info: Removing snapshot from vm1 ...

    I can't run ghettoVCB anymore because the VM has a snapshot which has not been deleted. I know how to delete the snapshot, but I don't know why the VCB script is not able to handle rotation of the VM backups. Any ideas? Thanks!
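    A hedged sketch of how the leftover snapshot and the half-cloned backup folder are usually cleared from the ESXi console before re-running the script (on classic ESX the binary is vmware-vim-cmd; the VM id 16 is an invented example, take the real one from the getallvms output):

        # find the VM's id, then drop all of its snapshots
        vim-cmd vmsvc/getallvms | grep vm1
        vim-cmd vmsvc/snapshot.removeall 16

        # remove the partial clone in the dated folder so the next run can recreate it
        rm -rf /vmfs/volumes/datastore1/backup/vm1/vm1-2010-03-19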

    Read the article

  • Access Denied / Server 2008 / Home Directories

    - by Shaun Murphy
    Domain Controller: BDC01 (192.168.9.2)
    Storage Server: BrightonSAN1 (192.168.9.3)
    Domain: brighton.local

    Last night I moved our users' home directories off our domain controller onto a storage server using the MS FSMT. I'm getting a mixed bag of errors. The first is that some users cannot log on properly: they can't access the logon.vbs in the sysvol folder on the DC and consequently cannot map their drives. I've narrowed that down to a DNS issue, as there was a remnant of our previous DNS server in the DHCP server options and scope options. I'm able to get their drives remapped by browsing to the sysvol folder by IP address instead of computer name and manually running the logon.vbs script.

    The other error I'm getting is Access Denied on a few of the users' home directories. The top-level folder (Home) is shared as normal, and I've removed and re-added the NTFS security a number of times now, including making the user the owner with full control. I've checked each and every individual file and folder in said user's home directory and they are indeed the owner, but I'm unable to write, although I can read the contents. I'm stumped. This isn't happening for all clients. I'm considering removing their AD accounts, backing up their folders and re-adding them as a last resort, but obviously I'd like to know why the above errors are happening.
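    A hedged sketch of the DNS-side checks that usually go with the first symptom, run on an affected client (names and addresses are the ones from the question):

        rem confirm the client lists only 192.168.9.2 (or another live DC) as its DNS server
        ipconfig /all
        rem the domain should resolve against that server
        nslookup brighton.local 192.168.9.2
        ipconfig /flushdns
        rem re-run policy and logon script processing once DNS is sane
        gpupdate /force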

    Read the article

  • "The directory name is invalid" when trying to install drivers in Windows 7 via Device Manager

    - by Luke
    First off, this computer is not mine, it's a customer's system. Having said that...

    The hard drive was moved to a new motherboard, CPU and RAM combo, and booted up fine. The customer put in the driver CD; the drivers wouldn't load. He brings it in to me. Under Device Manager on Windows 7 x64, I see lots of "PCI to PCI bridge", one "SMBus Controller", and about 20 "Unknown device" entries. Greeeeeat...

    So I start with the SMBus driver, directly from the Asus website for the motherboard (P8H77-M Pro). If I install from the setup program, it tells me to reboot, then it starts the install. It gets halfway through the setup, then fails ("An unknown error occurred. Setup will exit"). When I try to point to the folder from Device Manager, it starts copying files for the driver, even presents me with the proper name of the device, but then says an error has occurred there as well: "The directory name is invalid."

    Doing some Googling, I saw that many people had this issue with Vista. OK, Vista and 7 are similar, maybe the solutions are the same... but they aren't. I tried:

    - Copying the entire driver folder and setup utility to the Program Files folder and running it / selecting it in Device Manager
    - Downloading another set of drivers in case this one is corrupt
    - Disabling UAC
    - Deleting and recreating the %WINDIR%\TEMP folder
    - Removing all references to previous hardware that I could find, even in Device Manager's hidden mode

    So far, nothing has worked. A wipe and reload is out of the question.
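    One hedged thing to try that sidesteps both the vendor setup program and the Device Manager browse dialog: stage the INF straight into the driver store with the built-in pnputil, from an elevated prompt (the extracted-driver path is an assumption).

        pnputil -a "C:\Drivers\Asus\SMBus\*.inf"
        rem list what is now staged in the driver store
        pnputil -e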

    Read the article

  • Help transferring Mac OS X fonts to Windows

    - by Addsy
    I have been sent a QuarkXPress document which contains a number of unusual fonts, which were also sent to me. The fonts were in a folder named "fonts", and everything was zipped up into a single archive containing the document and the fonts folder. I believe this was done on a machine running Mac OS X. I have unzipped the archive, but all of the files in the fonts folder have no size - i.e. they are 0-byte files. However, there is also a _MACOSX folder which has a fonts subfolder. In there I can see all the font files (prefixed with .) and they all have a size, i.e. are not 0 bytes. Presumably these are the actual font files I need, but I cannot figure out how to install them on Windows. I have tried renaming the files to have .ttf and .otf extensions and then installing them, but that doesn't work - I just get an "Invalid font file" message. Any other ideas?

    Read the article

  • Win7 Domain User Profile- Desktop Icon management best practices request

    - by Doltknuckle
    Here's the situation: we have a large (5,000+ user) organization that is currently using folder redirection to manage the Windows desktop icons. This folder is redirected to a network share where we can centrally manage the different sites and such. When a user tries to use a computer while the network is not available, they are unable to use any shortcuts in the Public folder. We only redirect the C:\Users\%username%\Desktop folder.

    Does anyone have any suggestions on how to go about managing desktop icons? We still want a central location to manage these items, but need a way to keep the system working when the network is unavailable.

    As a point of clarification, the network rarely goes down. We do have instances where a few computers do not have a network connection; usually, something is simply unplugged. Since we have multiple sites, the line from a branch to the central office has also gone down a few times. This is more of an attempt to maintain a positive end-user experience when disconnected from the network.

    Read the article

  • What is the collaborative screen shot/diagramming application recently featured on Hacker News and p

    - by wonsungi
    A few days ago, I saw this video for a screen capture application. I'm pretty sure I followed a link from Hacker News, possibly to a Life Hacker article. The video was very short, but demonstrated how the application could be used: The application was basically a movable/resize-able view port with a button. When the button is pressed, the contents of the view port are saved to an image (basically a screen capture.) The interesting thing is what you could do after that point. One of the specific examples from the video browsed to Google maps street view, grabbed a photo of an intersection, then scribbled notes about where to meet and where the restaurant was in colored "marker." Another example shown was grabbing a house layout from from CAD tool, then scribbling notes on it. The last part of the video showed several possible uses being scrolled through the application's view port. Now, it seemed it was very easy to share these images with other people because there was some type of integration, either with their own site and/or common social websites/chat services. The application was shown running on both Windows and Mac. edit: I think there was an iPhone app, as well. Anyone know what this application is? I tried searching Google, Hacker News, and Life Hacker already. It is not Jing.

    Read the article

  • 403 Forbidden when Deploying asp.net 4.0 site to IIS 7

    - by Jordan
    So I have an EC2 instance running, with the URL NoWeatherSurprises.com. I have the DNS pointing there, and I set up a new site in IIS 7 and pointed it to a folder. I used Visual Studio Web Developer 2010 Express to publish to this folder; it now has the binaries and such. However, if I go to NoWeatherSurprises.com I get the "Welcome to IIS 7" screen. I'd expect to go to my application. If I navigate to http://noweathersurprises.com/weather/ [weather was the folder I published to under wwwroot], I get a 403 Forbidden. I have no idea why; I am guessing that it is trying to do a directory listing or something instead of launching my MVC application.

    So, two problems in summary:

    1. It is not pointing the domain to the folder directly, and I need to add /weather.
    2. I am getting a 403 Forbidden instead of the results of my home controller with the index action.

    I am new to IIS 7. I had been using IIS 6 and had a lot less trouble setting it up, but I suspect that's my own fault and I am just missing something. Thanks in advance for any help.
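    A hedged sketch of repointing the site's root at the published folder with the stock appcmd tool; the site name "NoWeatherSurprises" and the paths are assumptions. With the MVC app at the site root, the /weather prefix and the directory-listing 403 should both go away.

        %windir%\system32\inetsrv\appcmd set vdir "NoWeatherSurprises/" -physicalPath:"C:\inetpub\wwwroot\weather"
        rem MVC also needs a .NET 4.0 integrated-pipeline application pool
        %windir%\system32\inetsrv\appcmd set apppool "NoWeatherSurprises" -managedRuntimeVersion:v4.0 -managedPipelineMode:Integrated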

    Read the article

  • Config Apache HTTP Server for Eclipse

    - by hqt
    Maybe this question is silly, but I really don't know how to solve it.

    First, as with any other server, I want to define a new server. So, in Eclipse, I go to Window > Preferences > Server:

    1) When I add a new server, there is no category in the list for an Apache HTTP server, only Apache Tomcat. So I click "Download additional server adapters" - there is still nothing for it in the list.

    2) So I searched. I pointed Eclipse at the location where I have Apache installed. Good - Eclipse sees that it is an HTTP server, and it finds the folder to put projects into for me (because I use LAMPP, that folder isn't inside the Apache folder). But here is my problem: when I want to run a new PHP project, I right-click and choose Run on Server. A dialog appears asking which server to run on, and in that list of servers there is no HTTP server, so I don't know how to choose the Apache HTTP Server. (Eclipse doesn't show the server I defined; it only lists the adapters it found first.) So, if I want to run this project, I must copy everything and paste it into the Apache folder. Far too clumsy!

    Please help me. Thanks :)

    Read the article

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The Problem

    We would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it. It needs to finish overnight. Furthermore, we want to be able to preserve multiple versions of each file so we can back out of our users' mistakes more easily.

    Background Information

    I work in a large Windows-based enterprise with a centralized IT section which is responsible for all backups. Their backups are geared towards disaster recovery rather than user error, and require upper-level management approval for any non-disaster recoveries. Several times we have noticed that our backups failed and we weren't notified. I do not have administrative rights to the server or my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.

    My Proposed Solution

    What I would like to do is write a batch file using Robocopy with the /MIR option, along with Mercurial SCM to store all versions of the files. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structures. The problem is that /MIR will delete every folder not present in the source, and Mercurial stores the repository in a .hg folder in the destination folder. Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
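    A hedged sketch of the second option (the drive letter, share path and commit message are assumptions): Robocopy's /XD switch excludes a directory name from the copy, and with /MIR an excluded directory already in the destination is left alone rather than purged, so the .hg folder survives the mirror.

        rem snapshot the current state of the destination, then mirror everything except the repository folder
        cd /d E:\backup
        hg addremove
        hg commit -m "weekly snapshot"
        robocopy \\fileserver\critical E:\backup /MIR /XD .hg /R:2 /W:5 /LOG:E:\robocopy.log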

    Read the article

  • Powershell script to delete sub folders and files if creation date is >7 days but maintain parent folders of sub folders and files <7 days old

    - by Mark
    I'm currently using the PowerShell script below to delete all files, directories and sub-directories of "$dump_path" that are seven days or older, based upon the creation date rather than the modified date. The problem with this script is this: if folder "A" is seven (or more) days old it will be deleted, even if its sub-folders and files are less than seven days old.

    What I would like this script to do is this: delete all files from the root and from all sub-folders of "$dump_path" that are seven or more days old, but keep the parent folder(s) of files and folders that are less than seven days old, even if that means those parent folders are more than seven days old. If all of a parent folder's sub-folders and files are seven days or older, then the parent can be deleted.

    Slightly obscure problem, I know, but the intention is to have a 7-day retention period for all data in a 'sandbox' location of our shared areas. Also, as an added bonus, it would be nice if it could generate a log of what it deletes and e-mail it out after deletion. Thank you for reading and I hope that all makes sense! Mark

        # set folder path
        $dump_path = "c:\temp"

        # set minimum age of files and folders
        $max_days = "-7"

        # get the current date
        $curr_date = Get-Date

        # determine how far back we go based on current date
        $del_date = $curr_date.AddDays($max_days)

        # delete the files and folders
        Get-ChildItem $dump_path | Where-Object { $_.CreationTime -lt $del_date } | Remove-Item -Recurse
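    A hedged sketch of one way the per-file behaviour could look (not a tested solution, and the logging/e-mail bonus is left out): delete only files older than seven days wherever they sit, then prune any directories left completely empty, deepest first.

        $dump_path = "C:\temp"
        $del_date  = (Get-Date).AddDays(-7)

        # 1. remove only *files* older than seven days, anywhere in the tree
        Get-ChildItem $dump_path -Recurse |
            Where-Object { -not $_.PSIsContainer -and $_.CreationTime -lt $del_date } |
            Remove-Item -Force

        # 2. remove directories that are now completely empty (deepest paths first)
        Get-ChildItem $dump_path -Recurse |
            Where-Object { $_.PSIsContainer } |
            Sort-Object FullName -Descending |
            Where-Object { -not (Get-ChildItem $_.FullName -Recurse -Force) } |
            Remove-Item -Force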

    Read the article
