Search Results

Search found 24316 results on 973 pages for 'source removal'.

  • How to safely remove a USB device from a 2008 Core server?

    - by Qwerty
    I have a Hyper-V Server 2008 Core machine that I administer via the command line and remote tools. We now have a new backup system in place, and it involves me connecting an external USB drive (G:) to back up system state files. My question is: how should I safely remove the drive for its weekly offsite swap? I've tried using the devcon tool, but it just says "removal failed with no devices removed" and gives no other explanation. I have noticed that there isn't a readily available x64 version of devcon, and that might be the cause of the problem. (I have read of people downloading an amd64 version, but I have not located it myself; if someone knows where it is, please let me know.) The devcon command worked on my old 2003 x86 server:

        devcon remove *3200AVJ_EXTERNAL*

    I have also looked at using:

        fsutil volume dismount g:

    but it doesn't seem to work, as G: is still listed as a connected volume. I have checked that the volume is not in use via remote tools and the net file command; both show no open files on the G:\ volume. This could be a decent substitute, as it might flush any remaining I/O to the volume. Can anyone clarify? Thanks in advance.
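
    A minimal sketch of one possible approach, offered as an untested assumption rather than a confirmed fix: take the whole disk offline with diskpart before unplugging it, which dismounts its volumes. The disk number below is hypothetical; check it with "list disk" first.

        rem hedged sketch: offline the external disk before unplugging (disk 1 is an assumption)
        diskpart
        DISKPART> list disk
        DISKPART> select disk 1
        DISKPART> offline disk
        DISKPART> exit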

  • Windows Backup failed as another backup or recovery is in progress.

    - by remunda
    Hi, I have set up a backup schedule on our server (Windows Server 2008 R2): SQL Server 2008 backs up at 1:00 AM, and the Windows system backup runs at 4:00 AM. The SQL backup runs well, but the system backup sometimes ends with an error, the one described here: http://technet.microsoft.com/en-us/library/dd364768%28WS.10%29.aspx I believe it is caused by SQL Server, because SQL Server unexpectedly runs a backup at the same time. This is from the SQL Server log (it appears for every database):

        Date     4.2.2010 4:00:21
        Log      SQL Server (Current - 29.1.2010 23:25:00)
        Source   spid72
        Message  I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

    and

        Date     4.2.2010 4:00:24
        Log      SQL Server (Current - 29.1.2010 23:25:00)
        Source   Backup
        Message  Database backed up. Database: master, creation date(time): 2010/01/29(23:25:32), pages dumped: 370, first LSN: 859:56:37, last LSN: 859:80:1, number of dump devices: 1, device information: (FILE=1, TYPE=VIRTUAL_DEVICE: {'{17B91D5C-9968-4D11-A7F1-1A31523D32F0}25'}). This is an informational message only. No user action is required.

    My questions are: why does a SQL backup run together with the Windows backup, and how can I resolve these errors? Thank you a lot.
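
    For context: those 4:00 AM log entries are what SQL Server prints when a VSS requester (here, Windows Backup) takes a snapshot and the SQL Server VSS writer freezes I/O; it is snapshot bookkeeping rather than a second scheduled backup. A hedged way to confirm the writer's involvement, using only read-only commands (SQLWriter is the standard service name):

        rem hedged sketch: list VSS writers and check the SQL VSS writer service
        vssadmin list writers
        sc query SQLWriter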

  • Gnome-Applets Package Causing Gnome to Segfault

    - by FranticPedantic
    When I boot Ubuntu I get a bunch of windows saying "The panel encountered a problem while loading: OAFIID:GNOME_ShowDesktopApplet", plus about five others besides ShowDesktopApplet. I am able to launch a terminal, and I do have the taskbar at the top, although the bar at the bottom that holds the windows is blank, so when I minimize a window it disappears. Furthermore, when I try to launch most applications (Firefox, for example), they segfault. I have googled the error, but it seems to be a common yet general issue, and a lot of the solutions that worked for others didn't work for me. I tried deleting the gconf, gnome, and gnome2 folders in my home directory. I tried deleting a bunch of the gnome folders (although I might have done this wrong; I was confused as to which ones). I have also tried to use apt-get and dpkg to re-install ubuntu-desktop and gnome-applets. However, when I try apt-get install ubuntu-desktop I get errors regarding /var/cache/apt/archives/gnome-applets_2.28.0-0ubunt2_amd64.deb. I tried dpkg --configure -a and, interestingly, I get "Package gnome-applets is not installed." When I try to use apt-get to install it, that same pesky problem pops up:

        error processing /var/cache/apt/archives/gnome-applets_2.28.0-0ubunt2_amd64.deb (--unpack)
        subprocess new post-removal script returned error exit status 245
        subprocess /usr/bin/dpkg returned an error code (1)

    This file keeps popping up with regard to a lot of the suggested solutions I see. Any ideas?
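
    A hedged sketch of one workaround often suggested for a failing post-removal script: move the package's postrm script aside so dpkg can finish, then reinstall. The path follows dpkg's standard layout, but treat the exact file name as an assumption and keep the backup.

        # hedged sketch: sideline the failing post-removal script, then retry the install
        sudo mv /var/lib/dpkg/info/gnome-applets.postrm /root/gnome-applets.postrm.bak
        sudo apt-get -f install
        sudo apt-get install --reinstall gnome-applets ubuntu-desktop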

  • Windows 7 Backup not backing up custom library?

    - by James McMahon
    I have created a custom library under Windows 7 64-bit Professional to handle my source code. When I tried Windows Backup and Restore for the first time, I got the following error:

        Backup encountered a problem while backing up file C:\Windows\System32\config\systemprofile\Source. Error: (The system cannot find the file specified. (0x80070002))

    I've found a thread on the error on the Microsoft Answers site, but it appears to be 404 (there is a version in Google's cache), and the thread starter never gets a working answer to his issue. The official Microsoft answer on this is:

        This problem is due to one or more profiles under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList with a missing ProfileImagePath. To check whether you have missing profiles: open regedit and navigate to the above registry key. Expand the list and click on each of the profiles listed. The first 3 profiles should have ProfileImagePath values of %SystemRoot%\System32\Config\SystemProfile, %SystemRoot%\ServiceProfiles\LocalService, and %SystemRoot%\ServiceProfiles\NetworkService respectively. Starting from the 4th profile, the ProfileImagePath should contain the path to a user profile on your machine, such as C:\Users\Christine. If one or more of the profiles has no profile image path, then you have missing profiles. To work around this, delete the profile in question. (Caution: the registry contains critical settings that are necessary for your system to function properly. Take extra caution while making changes.) First, export the ProfileList key for safekeeping (right-click on the key, choose "Export", and save it to the desktop). Then right-click on the profile in question and choose Delete. Try the backup again.

    This does not work for me. Anyone have any idea what is going on here?
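
    A minimal command-line sketch of the same inspection, assuming only the built-in reg tool; the export provides the safety copy the answer asks for, and the query lists every ProfileImagePath in one pass:

        rem hedged sketch: back up and list the ProfileList entries from a command prompt
        reg export "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" "%USERPROFILE%\Desktop\ProfileList-backup.reg"
        reg query "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProfileList" /s /v ProfileImagePath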

  • Squid external_acl_type Cannot run process

    - by Alex Rezistorman
    I want to restrict uploading for a group of users via Squid, so I've chosen to use external_acl_type. But after a reload, Squid returns this error:

        WARNING: Cannot run '/usr/local/etc/squid/lists/newupload.sh' process.

    The permissions of newupload.sh and squid are the same, and newupload.sh is executable. How can I solve this problem? Thanks in advance.

    newupload.sh:

        #!/bin/sh
        while read line; do
            set -- $line
            length=$1
            limit=$2
            if [ -z "$length" ] || [ "$length" -le "$2" ]; then
                echo OK
            else
                echo ERR
            fi
        done

    The relevant lines from squid.conf:

        external_acl_type request_body protocol=2.5 %{Content-Lenght} /usr/local/etc/squid/lists/newupload.sh
        acl request_max_size external request_body 5000
        http_access allow users request_max_size

    Squid version:

        squid -v
        Squid Cache: Version 3.2.13
        configure options: '--with-default-user=squid' '--bindir=/usr/local/sbin' '--sbindir=/usr/local/sbin' '--datadir=/usr/local/etc/squid' '--libexecdir=/usr/local/libexec/squid' '--localstatedir=/var' '--sysconfdir=/usr/local/etc/squid' '--with-logdir=/var/log/squid' '--with-pidfile=/var/run/squid/squid.pid' '--with-swapdir=/var/squid/cache/squid' '--enable-auth' '--enable-build-info' '--enable-loadable-modules' '--enable-removal-policies=lru heap' '--disable-epoll' '--disable-linux-netfilter' '--disable-linux-tproxy' '--disable-translation' '--enable-auth-basic=PAM' '--disable-auth-digest' '--enable-external-acl-helpers= kerberos_ldap_group' '--enable-auth-negotiate=kerberos' '--disable-auth-ntlm' '--without-pthreads' '--enable-storeio=diskd ufs' '--enable-disk-io=AIO Blocking DiskDaemon IpcIo Mmapped' '--enable-log-daemon-helpers=file' '--disable-url-rewrite-helpers' '--disable-ipv6' '--disable-snmp' '--disable-htcp' '--disable-forw-via-db' '--disable-cache-digests' '--disable-wccp' '--disable-wccpv2' '--disable-ident-lookups' '--disable-eui' '--disable-ipfw-transparent' '--disable-pf-transparent' '--disable-ipf-transparent' '--disable-follow-x-forwarded-for' '--disable-ecap' '--disable-icap-client' '--disable-esi' '--enable-kqueue' '--with-large-files' '--enable-cachemgr-hostname=proxy.adir.vbr.ua' '--with-filedescriptors=131072' '--disable-auto-locale' '--prefix=/usr/local' '--mandir=/usr/local/man' '--infodir=/usr/local/info/' '--build=amd64-portbld-freebsd8.3' 'build_alias=amd64-portbld-freebsd8.3' 'CC=cc' 'CFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'LDFLAGS= -L/usr/local/lib' 'CPPFLAGS=-I/usr/local/include' 'CXX=c++' 'CXXFLAGS=-O2 -fno-strict-aliasing -frename-registers -fweb -fforce-addr -fmerge-all-constants -maccumulate-outgoing-args -pipe -march=core2 -I/usr/local/include -DLDAP_DEPRECATED' 'CPP=cpp' --enable-ltdl-convenience

    Related post: Restrict uploading for groups in squid
    http://squid-web-proxy-cache.1019090.n4.nabble.com/flexible-managing-of-request-body-max-size-with-squid-2-5-STABLE12-td1022653.html
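
    A hedged way to reproduce the failure outside Squid, assuming the runtime user is the squid account named in the build options: run the helper by hand as that user and feed it one "length limit" line; it should answer OK or ERR. If this fails, the problem is execute permission on the script or its parent directories rather than the squid.conf syntax.

        # hedged sketch: exercise the helper as the squid user (FreeBSD su -m keeps the environment)
        echo '4000 5000' | su -m squid -c /usr/local/etc/squid/lists/newupload.sh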

  • IIS 6.0 Application pool crash

    - by David
    One application pool on one of our web servers crashed, and we found the entry below in the event log. Where can we find more information about it?

        Event Type:     Error
        Event Source:   W3SVC
        Event Category: None
        Event ID:       1101
        Date:           11/23/2009
        Time:           10:57:55 AM
        User:           N/A
        Computer:       ID-WEB
        Description:    The World Wide Web Publishing Service failed to create app pool 'Global'. The data field contains the error number. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:           0000: b7 00 07 80

    Attempting to manually start the application pool gives the following in the event log:

        Event Type:     Error
        Event Source:   W3SVC
        Event Category: None
        Event ID:       1107
        Date:           11/23/2009
        Time:           3:53:13 PM
        User:           N/A
        Computer:       ID-WEB
        Description:    The World Wide Web Publishing Service failed to modify app pool 'Global'. The data field contains the error number. For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
        Data:           0000: 05 40 00 80

    We are running IIS 6.0 on Windows Server 2003 R2, 32-bit.
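
    One reading of those data fields, offered as an interpretation to verify: the bytes are a little-endian HRESULT, so "b7 00 07 80" is 0x800700B7 and "05 40 00 80" is 0x80004005 (the generic E_FAIL). The 0x8007xxxx form wraps a Win32 error code, here 0xB7 = 183, which the built-in net tool can translate:

        rem hedged sketch: decode the Win32 code wrapped in 0x800700B7
        net helpmsg 183

    That prints "Cannot create a file when that file already exists.", which would suggest a duplicate or stuck app pool object.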

  • ADSIEdit Cleanup After Exchange 2003 Crash During Transition To Exchange 2010

    - by ThaKidd
    Hello all. I would value some input from a few Exchange 2010 experts. I have almost completed the transition from Exchange 2003 Standard to Exchange 2010 Standard. Everything went smoothly until I tried to uninstall Exchange 2003; at that point the server bit the dust and died completely. I now have NO access to the old Exchange System Management MMC, as I am running Windows Server 2008 R2 and Windows 7 only. I can only fix this with ADSIEdit, the EM Shell, and the EM Console. I have used the 2010 shell to move/remove/verify that all mailboxes, public folders, and the OAB are hosted on Exchange 2010. I also verified that the routing connector has been deleted. The only two things that were not done were removing the Recipient Update Service and actually performing the removal of the 2003 software. I have spent a lot of time going through ADSIEdit and have located the old Administrative Group and the Exchange 2003 server listed under it. I also located the Recipient Update Service, which includes two entries: Enterprise and my domain name. I have read that it is an unwise idea to remove the old administrative group, so I won't bother messing with that. I am repeatedly getting warnings in the Application Log, both from MSExchangeTransport: EventID 5006 (Cannot find route to Mailbox Server OLDSERVER) and EventID 5020 (The topology doesn't contain a route to Exchange 2000 Server or Exchange Server 2003). So my questions are: to clean AD of the old Exchange 2003 info, can I safely delete the server name folder (Configuration - Services - Microsoft Exchange - ExchOrg - Administrative Groups - First Administrative Group - Servers - Old Server) and also delete the Recipient Update Service (Enterprise) and Recipient Update Service (DOMAIN) containers? And are there any additional items I need to address to ensure the AD is clean? Thanks in advance for your help!
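
    Before deleting anything in ADSIEdit, a hedged precaution worth considering: export the Exchange configuration subtree with ldifde so the objects could be re-imported if something goes wrong. The DC components below are placeholders for your forest's distinguished name.

        rem hedged sketch: back up the Exchange configuration container first (DC parts are placeholders)
        ldifde -f exchange-config-backup.ldf -d "CN=Microsoft Exchange,CN=Services,CN=Configuration,DC=example,DC=local"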

  • IIS7 bulk bindings in VBScript: how to remove a binding

    - by minus4
    I have a script to manage adding over a thousand domains to a single site's bindings. That has gone fine, but now the client wants about 20 of them removed. Microsoft's programmers didn't think it would be nice to sort the bindings alphabetically, so does anyone know the code to remove the domains (an array list) in bulk, please? This is what I am using:

        Set adminManager = createObject("Microsoft.ApplicationHost.WritableAdminManager")
        adminManager.CommitPath = "MACHINE/WEBROOT/APPHOST"
        Set sitesSection = adminManager.GetAdminSection("system.applicationHost/sites", "MACHINE/WEBROOT/APPHOST")
        Set sitesCollection = sitesSection.Collection

        siteElementPos = FindElement(sitesCollection, "site", Array("name", "microsites"))
        If siteElementPos = -1 Then
            WScript.Echo "Element not found!"
            WScript.Quit
        End If

        On Error Resume Next
        Set siteElement = sitesCollection.Item(siteElementPos)
        Set bindingsCollection = siteElement.ChildElements.Item("bindings").Collection

        Dim arrFileNames : arrFileNames = Array("list of domains")
        Dim objDict : Set objDict = CreateObject("Scripting.Dictionary")
        Dim strFileName, strTemp

        For Each strFileName In arrFileNames
            Set bindingElement = bindingsCollection.CreateNewElement("binding")
            bindingElement.Properties.Item("protocol").Value = "http"
            bindingElement.Properties.Item("bindingInformation").Value = "192.168.100.19:80:" & strFileName
            bindingsCollection.AddElement(bindingElement)
        Next

        adminManager.CommitChanges()
        WScript.Echo "Job Completed"
        WScript.Quit

        Function FindElement(collection, elementTagName, valuesToMatch)
            For i = 0 To CInt(collection.Count) - 1
                Set element = collection.Item(i)
                If element.Name = elementTagName Then
                    matches = True
                    For iVal = 0 To UBound(valuesToMatch) Step 2
                        Set property = element.GetPropertyByName(valuesToMatch(iVal))
                        value = property.Value
                        If Not IsNull(value) Then
                            value = CStr(value)
                        End If
                        If Not value = CStr(valuesToMatch(iVal + 1)) Then
                            matches = False
                            Exit For
                        End If
                    Next
                    If matches Then Exit For
                End If
            Next
            If matches Then
                FindElement = i
            Else
                FindElement = -1
            End If
        End Function

    So, as you can see, it is easy to add bindings, but I can find no code, manual, or instructions for the removal. I can't seem to run appcmd either: at first I tried creating a batch file using appcmd, but this never worked, saying appcmd cannot be found. Thanks.
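
    Two hedged pointers toward the removal, since that is the actual question: the same collection interface exposes a DeleteElement(index) method, so a FindElement-style search over bindingsCollection followed by DeleteElement should undo what AddElement did; and appcmd can remove a single binding directly if you call it by its full path, which also sidesteps the "appcmd cannot be found" error (the inetsrv folder is not on PATH by default). The site name and domain below come from the question; treat the exact quoting as an assumption to test.

        rem hedged sketch: remove one binding with appcmd, invoked by full path
        %windir%\system32\inetsrv\appcmd.exe set site /site.name:"microsites" /-bindings.[protocol='http',bindingInformation='192.168.100.19:80:example.com']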

  • ICMP Redirect Theory VS. Application

    - by joeqwerty
    I'm trying to watch ICMP redirects in a lab using Cisco Packet Tracer (version 5.3.2) and I'm not seeing them, which leads me to believe that either my lab configuration isn't correct, my understanding of ICMP redirects isn't correct, or Packet Tracer doesn't support/use ICMP redirects. Here's what I believe to be true regarding ICMP redirects. Routers send ICMP redirects when all of these conditions are met:

        1. The interface on which the packet comes into the router is the same interface on which the packet gets routed out.
        2. The subnet or network of the source IP address is on the same subnet or network as the next-hop IP address of the routed packet.
        3. The datagram is not source-routed.
        4. The router kernel is configured to send redirects.

    I have the lab set up in Cisco Packet Tracer as displayed in the image and would expect to see an ICMP redirect from Router1 when pinging from PC1 to PC3. I'm not seeing the ICMP redirect, and it looks like Router1 is actually routing all of the packets via Router2. I have IP ICMP debugging enabled on Router1 (and Router2) and I'm not seeing any ICMP redirect activity in either console. I'm also not seeing a route to the PC3 network in the routing table on PC1, which I think confirms that the ICMP redirect is not occurring. I'm using only static routing on Routers 1 and 2. Is my understanding of ICMP redirects incorrect, is there a problem with my lab configuration, or does Packet Tracer not support/use ICMP redirects?
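
    A hedged checklist in IOS terms for condition 4, assuming real hardware (the simulator may simply not implement redirects, which would explain a correct lab showing nothing). The interface name is an assumption for Router1's LAN-facing port; redirects are on by default but are silently disabled by features such as HSRP or policy routing.

        ! hedged sketch: confirm redirects are enabled on the ingress interface, then watch for them
        Router1# configure terminal
        Router1(config)# interface FastEthernet0/0
        Router1(config-if)# ip redirects
        Router1(config-if)# end
        Router1# debug ip icmp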


  • Issue configuring Oracle database for SSL

    - by Santhosha Kaldambe
    Hello, I want to set up Oracle for SSL communication. I am not using SSL authentication for the database user. As the first requirement, I generated a self-signed certificate using OpenSSL and added the certificate to a wallet. The wallet location is specified in the server configuration. I created the listener and it starts; however, it does not provide any services. The default (non-SSL) listener is working fine. When I execute LSNRCTL.EXE status SSLLISTENER it gives the output below:

        STATUS of the LISTENER
        Alias                     SSLLISTENER
        Version                   TNSLSNR for 32-bit Windows: Version 11.1.0.6.0 - Production
        Start Date                14-NOV-2009 01:47:08
        Uptime                    16 days 22 hr. 14 min. 3 sec
        Trace Level               off
        Security                  ON: Local OS Authentication
        SNMP                      OFF
        Listener Parameter File   C:\app\Administrator\product\11.1.0\db_1\network\admin\listener.ora
        Listener Log File         c:\app\administrator\diag\tnslsnr\\ssllistener\alert\log.xml
        Listening Endpoints Summary...
          (DESCRIPTION=(ADDRESS=(PROTOCOL=tcps)(HOST=)(PORT=2484)))
        The listener supports no services
        The command completed successfully

    Here is the exact content of the various files after configuration.

    1) tnsnames.ora:

        ORCL =
          (DESCRIPTION =
            (ADDRESS_LIST =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
            )
            (CONNECT_DATA =
              (SERVER = DEDICATED)
              (SERVICE_NAME = orcl)
            )
          )

    2) sqlnet.ora:

        SSL_VERSION = 0
        NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
        sqlnet.authentication_services = (NONE)
        tcp.validnode_checking = no
        tcp.invited_nodes = (PS0803.oraebs.com, PS2948, PS5098)
        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )

    3) listener.ora:

        SSL_CLIENT_AUTHENTICATION = FALSE
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = C:\app\Administrator\admin\orcl\Server_Wallet)
            )
          )
        LISTENER =
          (DESCRIPTION_LIST =
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
            )
            (DESCRIPTION =
              (ADDRESS = (PROTOCOL = TCP)(HOST = )(PORT = 1521))
            )
          )
        SSLLISTENER =
          (DESCRIPTION =
            (ADDRESS = (PROTOCOL = TCPS)(HOST = )(PORT = 2484))
          )

    Thanks, Santhosh
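
    For what it's worth, "The listener supports no services" usually means no database has registered with that particular listener. A hedged sketch of one conventional fix is a static registration in listener.ora; the ORACLE_HOME below is inferred from the parameter-file path above and must match the actual install:

        SID_LIST_SSLLISTENER =
          (SID_LIST =
            (SID_DESC =
              (GLOBAL_DBNAME = orcl)
              (ORACLE_HOME = C:\app\Administrator\product\11.1.0\db_1)
              (SID_NAME = orcl)
            )
          )

    The dynamic alternative, also an assumption to verify, is pointing the database at the TCPS endpoint via the LOCAL_LISTENER parameter so PMON registers with it.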

  • rpm build from src file

    - by danielrutledge
    Hi all, I'm trying to build from a *.src.rpm file on FC 12 in such a way that the files are distributed across my system as they would be with a normal binary install (in this case, *.h files end up in /usr/include). When I ran rpmbuild, the headers weren't present. Here's my command:

        [root@localhost sphirewalld]# rpm -ivv /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        ============== /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        Expected size: 489395 = lead(96)+sigs(180)+pad(4)+data(489115)
        Actual size: 489395
        loading keyring from pubkeys in /var/lib/rpm/pubkeys/*.key
        couldn't find any keys in /var/lib/rpm/pubkeys/*.key
        loading keyring from rpmdb
        opening db environment /var/lib/rpm/Packages cdb:mpool:joinenv
        opening db index /var/lib/rpm/Packages rdonly mode=0x0
        locked db index /var/lib/rpm/Packages
        opening db index /var/lib/rpm/Name rdonly mode=0x0
        read h# 931 Header sanity check: OK
        added key gpg-pubkey-57bbccba-4a6f97af to keyring
        read h# 1327 Header sanity check: OK
        added key gpg-pubkey-7fac5991-4615767f to keyring
        read h# 1420 Header sanity check: OK
        added key gpg-pubkey-16ca1a56-4a100959 to keyring
        read h# 1896 Header sanity check: OK
        added key gpg-pubkey-a3a882c1-4a1009ef to keyring
        Using legacy gpg-pubkey(s) from rpmdb
        /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm: Header SHA1 digest: OK (3e98ed9b1631395d417e00f35c83ebe588ea9d3b)
        added source package [0]
        found 1 source and 0 binary packages
        Expected size: 489395 = lead(96)+sigs(180)+pad(4)+data(489115)
        Actual size: 489395
        InstallSourcePackage at: psm.c:232: Header SHA1 digest: OK (3e98ed9b1631395d417e00f35c83ebe588ea9d3b)
        gtest-1.3.0-2.20090601svn257.fc12
        ========== Directories not explicitly included in package:
        0 /root/rpmbuild/SOURCES/
        1 /root/rpmbuild/SPECS/
        ==========
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100664 1 ( 0, 0) 478034 /root/rpmbuild/SOURCES/gtest-1.3.0.tar.bz2;4ba93ce1 unknown
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100644 1 ( 0, 0) 30505 /root/rpmbuild/SOURCES/gtest-svnr257.patch;4ba93ce1 unknown
        warning: user mockbuild does not exist - using root
        warning: group mockbuild does not exist - using root
        fini 100644 1 ( 0, 0) 2732 /root/rpmbuild/SPECS/gtest.spec;4ba93ce1 unknown
        GZDIO: 63 reads, 511788 total bytes in 0.005930 secs
        closed db index /var/lib/rpm/Name
        closed db index /var/lib/rpm/Packages
        closed db environment /var/lib/rpm/Packages

    Thanks for your help.
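
    One hedged observation plus a sketch: "rpm -i" on a .src.rpm only unpacks the sources and spec into /root/rpmbuild (which is exactly what the verbose output shows); it never compiles or installs headers. The usual route is to rebuild a binary RPM and install that. The resulting package names below, including a -devel subpackage for the headers, are assumptions; check the rpmbuild output for the real paths.

        # hedged sketch: rebuild binary packages from the source RPM, then install them
        rpmbuild --rebuild /home/dan/Downloads/gtest-1.3.0-2.20090601svn257.fc12.src.rpm
        rpm -ivh /root/rpmbuild/RPMS/x86_64/gtest-1.3.0-2.20090601svn257.fc12.x86_64.rpm \
                 /root/rpmbuild/RPMS/x86_64/gtest-devel-1.3.0-2.20090601svn257.fc12.x86_64.rpm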

  • Remote Desktop to Server 2008R2 fails from one particular Win7 client

    - by Jesse McGrew
    I have a VPS running Windows Web Server 2008 R2. I'm able to connect using Remote Desktop from my home PC (Windows 7), personal laptop (Windows 7), and work laptop (Windows XP). However, I cannot connect from my work PC (Windows 7). I receive the error "The logon attempt failed" in the RDP client, and the server event log shows "An account failed to log on" with this explanation:

        Subject:
            Security ID:    NULL SID
            Account Name:   -
            Account Domain: -
            Logon ID:       0x0
        Logon Type: 3
        Account For Which Logon Failed:
            Security ID:    NULL SID
            Account Name:   username
            Account Domain: hostname
        Failure Information:
            Failure Reason: Unknown user name or bad password.
            Status:         0xc000006d
            Sub Status:     0xc0000064
        Process Information:
            Caller Process ID:   0x0
            Caller Process Name: -
        Network Information:
            Workstation Name:       JESSE-PC
            Source Network Address: -
            Source Port:            -
        Detailed Authentication Information:
            Logon Process:            NtLmSsp
            Authentication Package:   NTLM
            Transited Services:       -
            Package Name (NTLM only): -
            Key Length:               0

    I can connect from the offending work PC if I start up Windows XP Mode and use the RDP client inside that. The server is part of a domain, but my account is local, so I'm logging in using a username of the form hostname\username. None of the clients are part of a domain. The server uses a self-signed certificate; connecting from home I get a warning about that, but connecting from work I just get the logon error.
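
    A hedged place to start comparing, given that the failure is NTLM-specific and XP Mode on the same machine works: the client's LAN Manager authentication level, which a workplace policy may have pinned to a value the server rejects for this logon. The command below only reads the setting; an absent value means the OS default.

        rem hedged sketch: compare this between a working Windows 7 client and the failing one
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel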

  • Use icacls to make a directory read-only on Windows 7

    - by Dave G
    I'm attempting to test some filesystem exceptions in a Java-based application. I need to find a way to create a directory under %TMP% that is set to read-only. Essentially, on UNIX/POSIX platforms I can do a chmod -w and get this effect; under Windows 7/NTFS this is of course a different story. I'm running into multiple issues on this. My user has "administrative" rights (although this may not always be the case), and as such the directory is created with an ACL including:

        NT AUTHORITY\SYSTEM
        BUILTIN\Administrators
        <my current user>

    Is there a way, using icacls, to essentially get this directory into a state where it is read-only PERIOD, do my test, then restore the ACL for removal?

    EDIT: With the information provided by @Ansgar Wiechers I was able to come up with a solution. I used the following:

        icacls dirname /deny %username%:(WD)

    On the page located here, I found this in the remarks section: icacls preserves the canonical order of ACE entries as explicit denials, explicit grants, inherited denials, inherited grants. By performing the above icacls command, I was able to set the current user's ability to write or append files (WD) in the directory to deny. Then it was a question of returning it to a state post-test:

        icacls dirname /reset /t /c

    Done.
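
    A minimal end-to-end sketch of that same sequence, with a hypothetical directory name, for anyone wanting to replay it from a command prompt:

        rem hedged sketch: create, lock, verify, then restore ("rotest" is a made-up name)
        mkdir "%TMP%\rotest"
        icacls "%TMP%\rotest" /deny %USERNAME%:(WD)
        echo test > "%TMP%\rotest\file.txt"
        rem the echo above should fail with "Access is denied."
        icacls "%TMP%\rotest" /reset /t /c
        rmdir /s /q "%TMP%\rotest"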

  • MS SQL - Problem running SQL Server Agent Job via service account credentials

    - by molecule
    There are 5 steps in this job. The first step is an SSIS package (from the package store); the second to fifth are file system steps. We configured all steps to use Windows Authentication. Under Run As, we specified a user account that was created under Security > Credentials and under SQL Server Agent > Proxies > SSIS Package Execution. The job runs without any problems under this user account. We then proceeded to configure the job to use a service account instead. The service account was specified under Security > Credentials and SQL Server Agent > Proxies > SSIS Package Execution. The job fails with this error:

        Executed as user: domain\serviceaccount. ....00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 3:37:57 PM
        Error: 2010-03-09 15:37:57.95 Code: 0xC0016016 Source: Description: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available. End Error
        Error: 2010-03-09 15:38:01.19 Code: 0xC0047062 Source: Get CONT_VIEW_LADDER in latest 45days OracleFMDatabase [1] Description: System.Data.OracleClient.OracleException: ORA-01005: null password given; logon denied at System.Data.OracleClient.OracleException.Check(OciErrorHandle errorHandle, Int32 rc) at System.Data.OracleClient.OracleInternalConnection.OpenOnLocalTransaction(String userName, String password, String serverName, Boo... The package execution fa... The step failed.

    Based on some research, I then opened the project in MS Visual Studio and changed the package's protection level from "EncryptSensitiveWithUserKey" to "DontSaveSensitive", but I still get the above error. I am new to this, so any help will be very much appreciated. Thanks in advance.
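
    A hedged reading of the 0x8009000B decrypt error: with EncryptSensitiveWithUserKey, the Oracle password inside the package is encrypted with the profile key of whoever saved it, so any other account (here the proxy's service account) decrypts nothing and Oracle sees a null password. One conventional fix, sketched with placeholder names, is to re-save the deployed copy of the package with password-based protection and supply that password in the job step:

        rem hedged sketch: 2 = EncryptSensitiveWithPassword (dtutil ships with SQL Server)
        dtutil /file MyPackage.dtsx /encrypt file;MyPackage.dtsx;2;StrongPasswordHere

    Note that switching to DontSaveSensitive in Visual Studio only helps once the server copy of the package is replaced and the connection password is supplied another way, for example via a package configuration.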

  • Grant access for users on a separate domain to SharePoint

    - by Geo Ego
    Hello. I just completed development of a SharePoint site on a virtual server and am currently in the process of granting users from a different domain access to the site. The SharePoint domain is SHAREPOINT, and the domain with the users I want to give access to is COMPANY. I have provided them with a link to the site and added them as users via SharePoint, which is all I thought I would need to do. However, when they go to the link, the site shows them a SharePoint error page. In the security event log, I am seeing the following:

        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Object Access
        Event ID:       560
        Date:           3/18/2010
        Time:           11:11:49 AM
        User:           COMPANY\ThisUser
        Computer:       SHAREPOINT
        Description:
        Object Open:
            Object Server:        Security Account Manager
            Object Type:          SAM_ALIAS
            Object Name:          DOMAINS\Account\Aliases\00000404
            Handle ID:            -
            Operation ID:         {0,1719489}
            Process ID:           416
            Image File Name:      C:\WINDOWS\system32\lsass.exe
            Primary User Name:    SHAREPOINT$
            Primary Domain:       COMPANY
            Primary Logon ID:     (0x0,0x3E7)
            Client User Name:     ThisUser
            Client Domain:        PRINTRON
            Client Logon ID:      (0x0,0x1A3BC2)
            Accesses:             AddMember RemoveMember ListMembers ReadInformation
            Privileges:           -
            Restricted Sid Count: 0
            Access Mask:          0xF

    Then, four of these in a row:

        Event Type:     Failure Audit
        Event Source:   Security
        Event Category: Object Access
        Event ID:       560
        Date:           3/18/2010
        Time:           11:12:08 AM
        User:           NT AUTHORITY\NETWORK SERVICE
        Computer:       SHAREPOINT
        Description:
        Object Open:
            Object Server:        SC Manager
            Object Type:          SERVICE OBJECT
            Object Name:          WinHttpAutoProxySvc
            Handle ID:            -
            Operation ID:         {0,1727132}
            Process ID:           404
            Image File Name:      C:\WINDOWS\system32\services.exe
            Primary User Name:    SHAREPOINT$
            Primary Domain:       COMPANY
            Primary Logon ID:     (0x0,0x3E7)
            Client User Name:     NETWORK SERVICE
            Client Domain:        NT AUTHORITY
            Client Logon ID:      (0x0,0x3E4)
            Accesses:             Query status of service, Start the service, Query information from service
            Privileges:           -
            Restricted Sid Count: 0
            Access Mask:          0x94

    Any ideas what permissions I need to grant to the user to get them access to SharePoint?
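
    Since the failing audit is the SHAREPOINT box looking up a COMPANY account, one hedged first check is whether the SharePoint server can actually reach and trust the users' domain; both commands below are read-only queries run on the SharePoint server:

        rem hedged sketch: verify the trust path from the SharePoint server
        nltest /trusted_domains
        nltest /sc_query:COMPANY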

  • get-eventlog issue

    - by Jim B
    I wanted to get a quick report of some log entries I saw on a server, so I ran:

        Get-EventLog -logname system -newest 10 -computer fs1 | fl

    I got events back, but the descriptions were all wrong. Here's an example:

        Index              : 1260055
        EntryType          : Warning
        InstanceId         : 2186936367
        Message            : The description for Event ID '-2108030929' in Source 'W32Time' cannot be found. The local computer may not have the necessary registry information or message DLL files to display the message, or you may not have permission to access them. The following information is part of the event: 'time.windows.com,0x1'
        Category           : (0)
        CategoryNumber     : 0
        ReplacementStrings : {time.windows.com,0x1}
        Source             : W32Time
        TimeGenerated      : 1/25/2010 10:43:31 AM
        TimeWritten        : 1/25/2010 10:43:31 AM
        UserName           :

    Note that if I pull the EventID property, it's correct (in this case 38). Is this a known issue, or is something wrong? The messages resolve fine via Event Viewer, both locally and remotely. Here is the PowerShell version info:

        Name             : ConsoleHost
        Version          : 2.0
        InstanceId       : bc58fcf8-bba3-4ca8-8972-17dbd5d9ff08
        UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
        CurrentCulture   : en-US
        CurrentUICulture : en-US
        PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy
        IsRunspacePushed : False
        Runspace         : System.Management.Automation.Runspaces.LocalRunspace

    Here is the revised version info:

        Name                      Value
        ----                      -----
        CLRVersion                2.0.50727.3603
        BuildVersion              6.0.6002.18111
        PSVersion                 2.0
        WSManStackVersion         2.0
        PSCompatibleVersions      {1.0, 2.0}
        SerializationVersion      1.1.0.1
        PSRemotingProtocolVersion 2.1
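
    A hedged comparison to try: Get-EventLog renders descriptions from message DLLs registered on the machine doing the formatting, which can go wrong for remote or mismatched sources, while Get-WinEvent (available in PowerShell 2.0 against Vista/2008+ logs) asks the event system itself:

        # hedged sketch: same query via Get-WinEvent for comparison
        Get-WinEvent -ComputerName fs1 -FilterHashtable @{ LogName = 'System' } -MaxEvents 10 |
            Format-List TimeCreated, Id, ProviderName, Message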

  • Inverter, inverter cable, or display cable?

    - by krebshack
    I was recently hired at a small repair shop. I work indoors while my boss does on-site calls for small businesses. I have to troubleshoot and fix laptop screens a few times a week, and this is why I'm posting this question: I'm having trouble figuring out how to streamline the troubleshooting process. For example, how do I determine that the inverter is broken while also determining that the inverter cable is not? How can I quickly decide that the inverter cable is broken while knowing that the inverter most likely is not? And how do I know when it's just the display cable? It seems like this is a good way to approach things: "It could be the inverter, backlight, or the LCD panel itself. Backlight failure is usually hinted at by a pink hue to everything. Inverter failure usually results in the dimming of images on the screen to the point where the backlight is not even on (the inverter provides power to the backlight)" (source), while remembering that symptoms "like flickering, screen freeze in dark image and [a] corner starts to get brighter" point to a "failure in the LCD panel itself, though it could just as easily be a loose data cable connected to the back of the LCD" (source). In short, I'm soliciting any advice on how to quickly make the best decision about what's causing problems with a laptop's display. Thank you.

  • socket problem with MySQL

    - by Hristo
    This is a recent problem... MySQL was working, and a couple of days ago I must have done something. I deleted MySQL and tried reinstalling using the .dmg file. The mysql.sock file never gets created, and I get the following error messages:

        Hristo$ mysql
        Enter password:
        ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/mysql/mysql.sock' (2)

    I also tried stopping Apache and installing, but Apache gave me an error... I don't know if this is good or bad:

        Hristo$ sudo apachectl stop
        launchctl: Error unloading: org.apache.httpd

    I tried the MacPorts installation as well, but the socket file still didn't get created. I don't really know what to do, and I don't want to reinstall Snow Leopard and start from scratch :/ I also tried installing the 32-bit version: same deal, no luck. Finally, I tried doing the source installation, but when I get to the configuration step, I get the following error:

        -bash: ./configure: No such file or directory

    The file is "mysql-5.1.47-osx10.6-x86_64.tar.gz", so I think it is the proper file for a source installation, and yes, I have a 64-bit system. I don't know what to do anymore. Any ideas? Thanks, Hristo
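
    Two hedged checks, assuming the server process itself starts: find out whether mysqld is running at all and where it really puts its socket, since the official OS X packages often use /tmp/mysql.sock while a client built elsewhere may look in /var/mysql/. (Incidentally, an "osx10.6-x86_64" tarball is a binary distribution, which would explain the missing ./configure script; source tarballs are the plain mysql-5.1.47.tar.gz.)

        # hedged sketch: locate the daemon and its real socket, or bypass the socket via TCP
        ps aux | grep '[m]ysqld'
        mysql --socket=/tmp/mysql.sock -u root -p
        mysql -h 127.0.0.1 -P 3306 -u root -p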

  • Cannot open files in Visual Studio but in Delphi and Notepad

    - by Andrew J. Brehm
    About an hour ago, Visual Studio 2008 decided that it cannot find files any more. This is on 64-bit Windows Vista. When I right-click on a text file (source code or otherwise) and select "Open with" and "Visual Studio 2008", I get the following error (example):

        Windows cannot find 'C:\Users\ajbrehm\Documents\Visual Studio 2008\Projects\Hello Prism\Hello Prism\Main.pas'. Make sure you typed the name correctly, and then try again.

    When I right-click the same file and select "Open with" and "Delphi 2010" or "Notepad" (the two other options available for text files on my system), the file opens correctly. Oddly enough, when the file is part of a Visual Studio project and I open the project itself with Visual Studio (this works), I can open the file from within Visual Studio. Any ideas what might be going on? This started about an hour after I made a complete backup of my Vista VM and after I installed IIS 7, SQL Express, and SourceGear Vault. The first files I noticed couldn't be opened in Visual Studio any more were Pascal source files in checked-out folders from Vault. Vault also seems to be unable to see one of the source files and claims it doesn't exist. I found out that Visual Studio would not open ANY files any more when I tried to recreate the file Vault refused to see.

    Update: I just checked. Another user, "administrator", can still open text files with Visual Studio 2008. Both users have administrator rights.

    Update: I just restored the hours-old backup. Same problem. Apparently whatever triggered this happened before the install of IIS 7 and SQL Express. I never noticed it before.
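
    One hedged place to look, since "Open with" fails for exactly one program and only for one user: the per-application open verb that Explorer runs, which for Visual Studio typically goes through DDE. The key name below is an assumption to verify against a machine where it still works; the command only reads it.

        rem hedged sketch: inspect the "Open with" verb Explorer uses for VS 2008
        reg query "HKCR\Applications\devenv.exe\shell\Open\Command"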

  • puppet cert mismatch in ec2

    - by Stick
    I'm setting up a puppetmaster (2.7.6) in EC2 via gems (on RHEL 6) and I'm running into problems with the cert names and getting the master able to talk to itself. My puppet.conf looks like this:

        [main]
            logdir = /var/log/puppet
            rundir = /var/run/puppet
            vardir = /var/lib/puppet
            ssldir = $vardir/ssl
            pluginsync = true
            environment = production
            report = true
            certname = master

    When I start the puppetmaster process, the ssl directory looks like:

        ssl/private_keys/master.pem
        ssl/crl.pem
        ssl/public_keys/master.pem
        ssl/ca/ca_crl.pem
        ssl/ca/signed/master.pem
        ssl/ca/ca_crt.pem
        ssl/ca/ca_pub.pem
        ssl/ca/ca_key.pem
        ssl/certs/ca.pem
        ssl/certs/master.pem

    I have an /etc/hosts entry on the box to point the 'puppet' hostname to localhost so that I don't have to change the 'server' option. When I run the agent, I get the following:

        # puppet agent --test
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Server hostname 'puppet' did not match server certificate; expected master
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Server hostname 'puppet' did not match server certificate; expected master Could not retrieve file metadata for puppet://puppet/plugins: Server hostname 'puppet' did not match server certificate; expected master
        err: Could not retrieve catalog from remote server: Server hostname 'puppet' did not match server certificate; expected master
        warning: Not using cache on failed catalog
        err: Could not retrieve catalog; skipping run
        err: Could not send report: Server hostname 'puppet' did not match server certificate; expected master

    If I specify the certname as the server (with a corresponding hosts entry), I get:

        # puppet agent --test --server master
        info: Retrieving plugin
        err: /File[/var/lib/puppet/lib]: Could not evaluate: Could not retrieve information from environment production source(s) puppet://master/plugins
        info: Caching catalog for master
        info: Applying configuration version '1321805956'
        notice: Finished catalog run in 0.05 seconds

    Which is success of a sort; that source error will bite me later when I'm applying manifests. I've tried a couple of other variations using the EC2 private hostname and gotten mixed results. I'd like to avoid setting server = 'x' and instead use DNS/hosts to control what 'puppet' resolves to in order to decide which server (this plays easier with availability zones, etc.).
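
    A hedged sketch of the usual way out of this mismatch on Puppet 2.7: re-issue the master's certificate with DNS alt names, so the cert is valid both as 'master' and as whatever 'puppet' resolves to. This wipes the master's SSL state, so treat it as safe only before real agents are enrolled.

        # hedged sketch: run on the master with the puppetmaster process stopped
        # first add to the [master] section of puppet.conf:
        #     dns_alt_names = master, puppet
        rm -rf /var/lib/puppet/ssl
        puppet master --no-daemonize --verbose    # Ctrl-C once the new cert has been generated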

  • QT Creator 64-bit Snow Leopard

    - by quadelirus
    I have a bunch of libraries that I need to link against that I installed via MacPorts. They are 64-bit libraries. I'm working on an application written with Qt Creator, and the .pro file is set up. I downloaded the Qt SDK for Mac OS X, but it is 32-bit, so the compiled code won't link against the 64-bit binaries that I got from MacPorts. OK. So I downloaded the Qt SDK source and built it from source using -arch x86_64. Now I have a 64-bit version of the SDK (I think), but it didn't build a Qt Creator app. So I need to know one of four things (see the sketch after this list):

        1. I'm guessing that a simple make command will convince the Qt SDK to build the Creator for me. If this is true, then what is the command (make creator?). Barring that, I need to know:
        2. The easiest way to get MacPorts to re-download the libraries that I installed, but in a 32-bit-capable version. (I keep seeing "+universal" mentioned, but I haven't seen it on an actual command line, and simply calling "port +universal install XYZ" doesn't seem to work; perhaps I need to uninstall and reinstall the package?) Also, is this a stupid idea? Or:
        3. Someone who actually has a prebuilt 64-bit Qt SDK installer, so I don't have to mess with this. It is ridiculous that Qt doesn't already have this available, in my opinion; Snow Leopard has been out since, what, last August? Or:
        4. (And this would take the cake.) I don't understand why I can't simply put a "compile for 64-bit, stupid" command directly into the Qt .pro file and have it build. There isn't really a reason why a compiler compiled in 32 bits couldn't compile to 64 bits, is there?

    Thanks.
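
    On point 4, qmake does have roughly that switch on the Mac; a hedged sketch of the .pro addition, assuming the installed Qt actually ships 64-bit (x86_64) libraries to link against:

        # hedged sketch: request a 64-bit build in the .pro file (point 4 above)
        CONFIG += x86_64
        CONFIG -= x86

    And on point 2, the MacPorts variant syntax puts the plus sign after the port name, e.g. "sudo port install XYZ +universal" (or "sudo port upgrade --enforce-variants XYZ +universal" for an already-installed port); both forms are offered here as assumptions to verify against your MacPorts version.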

  • Explorer and open file dialog not responding (Vista)

    - by rohancragg
    Any Explorer window opened for the first time on my machine displays the folder tree and the folder path in the address bar immediately, but the file/folder list pane is blank and the window shows "Not Responding" in the title bar; this hangs for up to a minute or more. Any file dialog likewise shows "Not Responding" in the title bar, with the file list eventually displayed after a few seconds or more.

    Steps to reproduce: close all open instances of Explorer, then press Windows Key + R and enter a folder path such as 'c:\temp'; or, within any app, use a file open/save dialog. Once there is at least one open instance of Explorer, performance is still fairly poor, but not nearly so bad, and file lists are displayed in a timely fashion.

    What I've tried: cleaned up the registry with the CCleaner tool and uninstalled all other unused software; checked with Autoruns that nothing unwanted is running at startup; removed any ISO burner/recorder/mount software. Still to try: getting the latest version of everything, especially software with shell-extension behaviour such as TortoiseSVN. Anyone have any other suggestions? Thanks a lot.

    Update: I'm wondering if this is related; I'll try the hotfix when I get home and report back: KB972685 - FIX: Explorer.exe hangs when using a shell extension written using MFC.

    Update 2: Before I got a chance to try the hotfix, it seems one of the above actions fixed this for me: either the removal of ISO Recorder or of TortoiseHg (which I was no longer using anyway).

    Update 3: A similar issue with Explorer.exe has come back since installing TortoiseHg 1.01 :-(

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is processed properly and login works fine, but the CSS files are not loaded and Firefox throws the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0
        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related?

    Edit: It seems that it's nginx-related. The content type isn't set for any type other than text/html. I had to manually include the following declarations to force the CSS and JS content types. That's ugly, and I never had the problem before... any idea?

        location ~ \.css {
            add_header Content-Type text/css;
        }
        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }
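
    A hedged guess at the root cause, for reference: nginx chooses Content-Type from the file extension via its mime.types table, and a query string like ?s=12345 does not affect that lookup; serving everything as text/html therefore suggests the table isn't being included and default_type is set to text/html. A minimal sketch of the conventional setup, with the Debian path as an assumption:

        # hedged sketch: let nginx's own extension table set Content-Type
        http {
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;
            # ... the rest of the existing configuration ...
        }

    With that in place, the per-location add_header workarounds should become unnecessary.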

  • switchless Infiniband between two servers on RHEL 6.3

    - by exfizik
    I have two servers running RHEL 6.3 which have two-port InfiniBand cards:

        >lspci | grep -i infini
        07:00.0 InfiniBand: QLogic Corp. IBA7322 QDR InfiniBand HCA (rev 02)

    I'm interested in connecting them directly to each other, bypassing an InfiniBand switch (which I don't have). Quick googling showed that at least in some configurations it's possible. I installed all the Red Hat InfiniBand packages with yum groupinstall "Infiniband Support". However, ibv_devinfo shows that both ports in each card are down, which indicates that cables are not connected. But the cable is connected, although the LEDs are off on the cards (not a good sign). Another source of confusion for me is that, according to this, Red Hat doesn't come with the OFED packages, and I'm slightly hesitant to install them from source due to the lack of Red Hat support for them... So where am I going with this? The questions I have are:

        1. Is it possible to have a switchless/direct InfiniBand connection between two servers the way I described above?
        2. If it's possible, do I have to use the OFED packages, or can I configure everything with just the packages coming with RHEL?
        3. Why are the LEDs off on my servers even though the cable is connected?

    Any additional input/advice/pointers would be appreciated.

    P.S. I followed this guide for installation instructions. The InfiniBand cards are clearly recognized by my OS and the rdma service is running.

    Update: I have opensm installed. When I run it, it says:

        OpenSM 3.3.13
        Command Line Arguments:
          Log File: /var/log/opensm.log
        -------------------------------------------------
        OpenSM 3.3.13
        Entering DISCOVERING state
        Using default GUID 0x1175000076e4c8
        SM port is down

    and stays at that point.
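
    A hedged set of checks with the standard infiniband-diags tools (part of the RHEL "Infiniband Support" group): back-to-back InfiniBand is possible, but the physical state must reach LinkUp before opensm, running on one of the two hosts, can bring the ports to Active. "SM port is down" plus dark LEDs points at the physical layer, so compare what ibstat reports on both ends; common culprits are the cable type or a port configured for the wrong link protocol.

        # hedged sketch: inspect port and link state on both hosts
        service rdma status
        ibstat                   # check "State:" and "Physical state:" for each port
        service opensm start     # exactly one subnet manager is needed for the link pair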
