Search Results

Search found 48887 results on 1956 pages for 'access control'.

  • How to access previous VHD versions of system backup?

    - by feklee
    Quote from the 31 Oct 2009 TechNet article "Learn more about system image backup":

        During the first backup, the backup engine scans the source drive and copies only blocks that contain data into a .vhd file stored on the target, creating a compact view of the source drive. The next time a system image is created, only new and changed data is written to the .vhd file, and old data on the same block is moved out of the VHD and into the shadow copy storage area. Volume Shadow Copy Service is used to compute the changed data between backups, as well as to handle the process of moving the old data out to the shadow copy area on the target. This approach makes the backup fast (since only changed blocks are backed up) and efficient (since data is stored in a compact manner). When restoring the image, blocks will be restored to their original locations on the source disk. If you want to restore from an older backup, the engine reads from the shadow copy area and restores the appropriate blocks.

    For the past few days, a daily system backup of drive C: to drive E: has been scheduled and run by Windows 7 Backup and Restore. Drive C: currently holds 233 GB of data, which fits comfortably on drive E:, a 1 TB drive, with 727 GB of free space remaining.

    How do I access the previous version of a VHD? I right-clicked on files and folders in E:\WindowsImageBackup and looked for Previous Versions, but the answer is always: There are no previous versions available.
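    The quoted article says the older backup states live in the shadow copy storage area on E: rather than as separate files, so one hedged avenue (a sketch; the snapshot number is illustrative and should be taken from the vssadmin output) is to inspect the shadow copies directly from an elevated command prompt:

        rem list the shadow copies kept on the backup target
        vssadmin list shadows /for=E:

        rem mount one snapshot as a read-only directory; the trailing
        rem backslash in the device path is required
        mklink /d C:\shadow \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy12\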

    Read the article

  • Getting access to physical drives in ESXi v5.5 installation on Dell PowerEdge R710 with PERC 6/i

    - by Big-Blue
    I acquired a Dell PowerEdge R710 server a few days ago, which includes a PERC 6/i RAID controller. The server is now fitted with a SATA SSD, one SAS drive, and four SATA HDDs, all of which I would like to pass through to ESXi in an "as-is" state, without creating any logical drives in the RAID controller. Now, the ESXi v5.5 installation image I grabbed from the Dell homepage starts just fine but only lists the logical drives and connected flash drives as possible installation targets, not any of the physical drives. If I create a small logical drive on my SSD (which the PERC 6/i detects as SATA-SSD type), the ESXi install wizard lists the SSD flag on that drive as false, which is far from optimal. I have also tried disabling the RAID controller entirely in the setup, but that did not help either. Everything that should enable passthrough is enabled in the BIOS, but that shouldn't be a concern at this early stage of the ESXi installation. How would I be able to install ESXi v5.5 to a part of my SSD that is connected to the storage controller, while giving it full physical access to the disk (to allow SMART values to be read, etc.)?
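    This doesn't solve the installer question, but once ESXi is running, a hedged sketch of one way to hand a VM raw access to an individual disk behind the controller is a physical-mode raw device mapping (the device identifier below is a placeholder, not a real ID):

        # list the physical disks ESXi can see
        ls /vmfs/devices/disks/

        # create a physical-mode (pass-through) RDM pointer file for one of them
        vmkfstools -z /vmfs/devices/disks/naa.600508b1001c0a1b \
            /vmfs/volumes/datastore1/rdm/disk1-rdm.vmdk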

    Read the article

  • Windows 7: access denied to executables... by what?

    - by stijn
    Ever since I started using Windows 7 this problem has been bothering me. From time to time I see similar questions popping up on misc forums, but never did I see an answer. Here are two scenarios that nearly always reproduce it.

    The Explorer way:
    1. With Explorer, navigate to a directory containing at least one exe file.
    2. Go one directory up.
    3. Immediately delete the directory just navigated to.
    This yields a "Folder Access Denied" dialog stating "You need permission to perform this action / You require permission from Administrators to make changes to this folder", with the buttons Try Again and Cancel. Hitting Try Again never works immediately; waiting a minute or so and then clicking it again does work. Note: if in step 2 I wait a minute or more before going up one directory, the problem does not occur and the folder can be deleted.

    The Visual Studio way:
    1. Build a project producing an exe file.
    2. Run the executable, then close it.
    3. Immediately build the project again (by changing a single character in a source file, for example).
    This yields: fatal error LNK1168: cannot open /path/to/the.exe for writing. Note: if in step 2 I wait a minute or more before building again, the problem does not occur.

    Some specs: it happens both on Windows 7 32 and 64 bit, with VS2008/2010/2011; it happens on 3 different machines; I do not have a virus scanner of any kind; I do have a bunch of services disabled, but nothing that prevents Windows from running normally, and UAC is disabled as well; it happens on any type of disk; I always use a user account that is in the Administrators group.

    Obviously both scenarios are very similar and extremely reproducible. So I figured some process must have the file open for some reason and release it again later. However, using Sysinternals' handle -a, the exe file in question never shows up. (That is the correct way to use handle, right?) So while Explorer/VS are reporting they cannot access the file, handle.exe says it's not in use anywhere. This leaves me rather clueless, so I'm wondering if someone can come up with a solution: why does this happen, and how do I solve it?
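    A hedged diagnostic sketch, assuming the Sysinternals tools are on the PATH: an executable can also be held by a mapped image section rather than an open file handle, which a plain handle search can miss, so it may be worth checking both:

        rem search open handles whose name matches the exe
        handle.exe -a the.exe

        rem search processes that have the exe mapped as an image/DLL
        listdlls.exe -d the.exe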

    Read the article

  • 'The RPC server is unavailable' or 'Access is denied' error when using Remote Desktop Services Manager on Windows 7 (but mstsc.exe works!)

    - by tbone
    I am trying to connect to a Windows XP workstation from a Windows 7 Ultimate workstation using Remote Desktop Services Manager. I am able to open a Remote Desktop (mstsc.exe) session from the Win7 machine to the WinXP machine with no problem at all. When running the Remote Desktops Admin (tsmmc.msc) tool on a Windows XP box, I can also connect with no problem. However, when I use the new Remote Desktop Services Manager on Windows 7 and try to connect, I get the error: "The RPC server is unavailable". What could cause this? Has there been some fundamental change in Remote Desktop Services Manager; does it connect in a different way somehow?

    Update #1: I turned off the firewall on the Windows XP box and the "The RPC server is unavailable" error went away, so RDSM seems to be using an entirely different port/connection/service compared to mstsc.exe or the old Remote Desktops Admin tool. Now, after disabling the firewall, I get a new error: Access is Denied. After doing some Googling, I found some articles discussing this; basically, the error is very misleading. The actual problem is: if either side of the connection has dual monitors, and they are not both Win7 Ultimate, then you cannot connect using Remote Desktop Services Manager. The reason is that by default it uses the /multimon switch, and this switch requires a certain level of Windows license; there seems to be no way of changing this default (if anyone knows of a way to change it, please post an answer or comment!). Nice going, Microsoft.

    http://social.technet.microsoft.com/Forums/en-US/windowsserver2008r2rds/thread/4d06278f-e0f4-4f8e-a8e1-3697ee967ef4
    http://www.experts-exchange.com/OS/Microsoft_Operating_Systems/Windows/Windows_7/Q_26225743.html
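    For the RPC half of the problem, a hedged alternative to disabling the XP firewall outright is to open only the Remote Administration exception, which covers RPC (TCP 135) and the related named-pipe traffic; a sketch, run on the XP box:

        rem XP SP2-era netsh firewall syntax (deprecated on later Windows)
        netsh firewall set service type=remoteadmin mode=enable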

    Read the article

  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet; it is a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by mmap() accesses over and over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options:

        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52

    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
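    A hedged avenue to try (not verified on 10.04): FS-Cache can keep NFS data in a persistent local cache via cachefilesd, which may help if pages are being dropped between accesses; the autofs map entry below is illustrative:

        # install and enable the local cache daemon
        sudo apt-get install cachefilesd
        # set RUN=yes in /etc/default/cachefilesd, then add the fsc mount
        # option to the autofs map, e.g.:
        #   home  -fstype=nfs,rw,hard,intr,fsc  192.168.11.52:/export/home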

    Read the article

  • Setting up a server that routes local traffic through a VPN while still being able to access the internet directly

    - by Kazuo
    The goal is to set up a local server that routes local traffic through an uncontrolled remote VPN service, while still being able to access the internet directly (not tunneled via VPN) and provide services through that direct connection. It is supposed to look like this: http://i.stack.imgur.com/74dGC.png (note: there is another router with a modem between the local server and the internet).

    What is the easiest (best?) way to get this network setup working? I'm planning to set up the connection between the local router and the local server with simple IP forwarding. The problem now is that all of the server's traffic is routed through the VPN tunnel as soon as I connect the server's OpenVPN client to the remote service, so there is no direct internet connection available.

    My first idea was to set up a virtual machine (an LXC container or something) and run the VPN client and local networking stuff in the VM, so that the VM receives all the incoming traffic from the local router and tunnels it through the VPN. This, as far as I understand, should not affect the physical server's network connection and should allow it to provide services to the internet. Before I start trying to set this up (I don't have much experience in networking), is there any easier or better way to do this? I would be thankful for every suggestion.

    Edit: Let's say the interface connected to the internet is eth0 and the interface connected to the local router is eth1. Another idea would be to create a virtual interface eth0:0, specify it as OpenVPN's local endpoint, and then force any traffic coming from eth1 through eth0:0. I'm not sure how I would force the traffic through eth0:0, though (possibly by adding routes).
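    Using the interface names from the edit above, a hedged policy-routing sketch that avoids the VM entirely: keep the main routing table untouched and send only LAN-originated traffic into the tunnel.

        # start the client so it does not replace the main default route
        openvpn --config client.conf --route-nopull

        # secondary routing table whose default route is the tunnel
        ip route add default dev tun0 table 100

        # traffic arriving on eth1 (from the local router) uses that table;
        # the server's own traffic keeps the normal default route via eth0
        ip rule add iif eth1 lookup 100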

    Read the article

  • Replacing text in NSTextFieldCell inside NSTableView

    - by earl.ct
    Whenever a user would type a number, my app would automatically prepend a currency sign before that number. For example, when the user types "1" in a text field, the text inside it becomes "$1.00". All is good when I use an NSNumberFormatter, an NSTextField, and its delegate method control:didFailToFormatString:errorDescription::

        - (BOOL)control:(NSControl *)control
            didFailToFormatString:(NSString *)string
                 errorDescription:(NSString *)error
        {
            if ([[control formatter] isKindOfClass:[NSNumberFormatter class]]) {
                NSNumberFormatter *formatter = [control formatter];
                if ([formatter numberStyle] == NSNumberFormatterCurrencyStyle &&
                    ![string hasPrefix:[formatter currencySymbol]]) {
                    NSDecimalNumber *new = [NSDecimalNumber decimalNumberWithString:string];
                    if (new == [NSDecimalNumber notANumber]) {
                        new = [NSDecimalNumber zero];
                    }
                    [control setObjectValue:new];
                }
            }
            return YES;
        }

    Now I would like to have this functionality when a user types a number in a cell inside an NSTableView. I tried using control:didFailToFormatString:errorDescription:, but the cell would erase the text instead.

    Read the article

  • AssociatedControlID of an inner naming container

    - by Eric
    Hi, I have a custom control that contains a Label control. I want to set the AssociatedControlID of this Label to another control's ID on the page, but as soon as I implement INamingContainer in my custom control, it runs into an error saying "Unable to find control with id 'abc' that is associated with the Label 'xyz'." This would be due to the fact that the Label is in a nested naming container, and it tries to find the control within the same container but can't (the target control is on the page, outside of the Label's own naming container). Does anyone know of a way to set this property? Thanks, Eric
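    A hedged workaround sketch, assuming the target really does live outside the naming container: skip AssociatedControlID and emit a plain <label> element whose for attribute is wired to the target's resolved ClientID (all names here are illustrative):

        // C#, inside the custom control
        private readonly HtmlGenericControl _label = new HtmlGenericControl("label");

        protected override void CreateChildControls()
        {
            _label.InnerText = "My label";
            Controls.Add(_label);
        }

        protected override void OnPreRender(EventArgs e)
        {
            base.OnPreRender(e);
            // resolve against the page, not against this naming container
            Control target = Page.FindControl("abc");
            if (target != null)
                _label.Attributes["for"] = target.ClientID;
        }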

    Read the article

  • Wrapping with Dependency Properties

    - by Chris
    I've got a Windows Forms control that I'm wrapping with a WindowsFormsHost-derived class to access WPF's data binding functionality. The Forms control exposes properties that indicate its state, along with the standard property-changed event notifiers; for example, a Zoom property on the Forms control is accompanied by a ZoomChanged event. In the WindowsFormsHost wrapper, I'm using a DependencyProperty to represent the underlying Windows Forms control property. Binding works as expected going to the control; however, I'm not sure how to correctly propagate property changes from the wrapped control back out to binding subscribers (i.e., the Windows Forms control changes its Zoom property and raises the ZoomChanged event). Any ideas on how to accomplish this? Should I be using a different approach?
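    A hedged sketch of one way to close the loop (the wrapped control type and its members are assumed, not real API): subscribe to the control's changed event and push the value into the dependency property with SetCurrentValue, which updates bindings without discarding them the way SetValue can.

        // C#; ZoomableControl, Zoom and ZoomChanged are hypothetical members
        public class ZoomHost : WindowsFormsHost
        {
            public static readonly DependencyProperty ZoomProperty =
                DependencyProperty.Register("Zoom", typeof(double), typeof(ZoomHost),
                    new FrameworkPropertyMetadata(1.0,
                        FrameworkPropertyMetadataOptions.BindsTwoWayByDefault,
                        (d, e) => ((ZoomHost)d).PushZoomToControl((double)e.NewValue)));

            public double Zoom
            {
                get { return (double)GetValue(ZoomProperty); }
                set { SetValue(ZoomProperty, value); }
            }

            public ZoomHost()
            {
                var inner = new ZoomableControl();
                Child = inner;
                // control -> WPF: update the DP so two-way bindings see the change
                inner.ZoomChanged += (s, e) => SetCurrentValue(ZoomProperty, inner.Zoom);
            }

            // WPF -> control: apply binding-driven changes to the wrapped control
            private void PushZoomToControl(double value)
            {
                ((ZoomableControl)Child).Zoom = value;
            }
        }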

    Read the article

  • System.Timers.Timer leaking due to "direct delegate roots"

    - by alimbada
    Apologies for the rather verbose and long-winded post, but this problem's been perplexing me for a few weeks now, so I'm posting as much information as I can in order to get this resolved quickly.

    We have a WPF UserControl which is being loaded by a 3rd-party app. The 3rd-party app is a presentation application which loads and unloads controls on a schedule defined by an XML file downloaded from a server. Our control, when it is loaded into the application, makes a web request to a web service and uses the data from the response to display some information. We're using an MVVM architecture for the control. The entry point of the control is a method that implements an interface exposed by the main app, and this is where the control's configuration is set up. This is also where I set the DataContext of our control to our MainViewModel. The MainViewModel has two other view models as properties, and the main UserControl has two child controls. Depending on the data received from the web service, the main UserControl decides which child control to display; e.g., if there is an HTTP error or the data received is not valid, then it displays child control A, otherwise child control B. As you'd expect, these two child controls bind to two separate view models, each of which is a property of MainViewModel.

    Now child control B (which is displayed when the data is valid) has a RefreshService property/field. RefreshService is an object that is responsible for updating the model in a number of ways and contains 4 System.Timers.Timers:

    - a _modelRefreshTimer
    - a _viewRefreshTimer
    - a _pageSwitchTimer
    - a _retryFeedRetrievalOnErrorTimer (this one is only enabled when something goes wrong with retrieving data)

    I should mention at this point that there are two types of data: the first changes every minute, the second changes every few hours. The control's configuration decides which type we are using/displaying. If the data is of the first type, then we update the model quite frequently (every 30 seconds) using the _modelRefreshTimer's events. If the data is of the second type, then we update the model after a longer interval. However, the view still needs to be refreshed every 30 seconds, as stale data needs to be removed from it (hence the _viewRefreshTimer). The control also paginates the data so we can see more than we can fit in the display area. This works by breaking the data up into Lists and switching the CurrentPage property (which is a List) of the view model to the right List; this is done by handling the _pageSwitchTimer's Elapsed event.

    Now the problem: the control, when removed from the visual tree, doesn't dispose of its timers. This was first noticed when we started getting an unusually high number of requests on the web server end very soon after deploying this control, and found that requests were being made at least once a second! We found that the timers were living on, not stopping, hours after the control had been removed from view, and that the more timers there were, the more requests piled up at the web server.

    My first solution was to implement IDisposable for the RefreshService and do some cleanup when the control's Unloaded event was fired. Within RefreshService's Dispose method I set Enabled to false for all the timers, then used the Stop() method on all of them; I then called Dispose() too and set them to null. None of this worked.

    After some reading around I found that event handlers may hold references to timers and prevent them from being disposed and collected. After some more reading and researching I found that the best way around this was to use the Weak Event Pattern. Using this blog and this blog I've managed to work around the shortcomings in the Weak Event Pattern. However, none of this solves the problem. Timers are still not being disabled or stopped (let alone disposed), and web requests continue to build up. The memory profiler tells me that "This type has N instances that are directly rooted by a delegate. This can indicate the delegate has not been properly removed" (where N is the number of instances). As far as I can tell, though, all listeners of the Elapsed event for the timers are being removed during the cleanup, so I can't understand why the timers continue to run.

    Thanks for reading. Eagerly awaiting your suggestions/comments/solutions (if you got this far :-p)
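    For reference, a minimal sketch of the cleanup pattern being described, assuming the handlers were attached as named methods (anonymous lambdas cannot be unhooked afterwards, which is one classic way to end up delegate-rooted):

        // C#; field names follow the post, handler names are illustrative
        public void Dispose()
        {
            // unhook first, so no Elapsed delegate keeps the timer rooted
            _modelRefreshTimer.Elapsed -= OnModelRefreshElapsed;
            _modelRefreshTimer.Enabled = false;
            _modelRefreshTimer.Stop();
            _modelRefreshTimer.Dispose();
            _modelRefreshTimer = null;
            // ...repeat for _viewRefreshTimer, _pageSwitchTimer,
            // _retryFeedRetrievalOnErrorTimer
        }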

    Read the article

  • How to deny web access to some files?

    - by Strae
    I need to do a slightly strange operation. First: I run on Debian, with apache2 (which runs as user www-data). I have simple text files with a .txt or .ini or whatever extension; it doesn't matter. These files are located in subfolders with a structure like this:

        www.example.com/folder1/car/foobar.txt
        www.example.com/folder1/cycle/foobar.txt
        www.example.com/folder1/fish/foobar.txt
        www.example.com/folder1/fruit/foobar.txt

    The file name is always the same, ditto for the hierarchy; only the name of the folder changes:

        /folder-name-static/folder-name-dynamic/file-name-static.txt

    What I need to do is (I think) relatively simple: I must be able to read those files from programs on the server (Python, PHP for example), but if I try to retrieve the file contents from a browser (typing the URL www.example.com/folder1/car/foobar.txt, via cURL, etc.) I must get a forbidden error, or whatever, but not access to the file. It would also be nice if those files were 'hidden' even when accessed via FTP, or at least couldn't be downloaded (apart from access with the FTP root user's credentials). How can I do this? I found this online, to be put in a .htaccess file:

        <Files File.txt>
            Order allow,deny
            Deny from all
        </Files>

    It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), not in subfolders. Moreover, the folders at the second level (www.example.com/folder1/fruit/foobar.txt) will be created dynamically, and I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a single rule, something like that, that applies to all files with a given name under www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and just the one part changes?

    EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too (images and stuff used by my users), so I'm simply trying not to end up with a duplicated folder system, like:

        /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here]

    and then, for the 'secret' files:

        /folder1/data/car/foobar.txt
        /folder1/data/cycle/foobar.txt
        /folder1/data/fish/foobar.txt
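    A hedged note on scope: directives in an .htaccess file normally apply to the directory containing it and to everything below it, so a single <Files> section in folder1's .htaccess (or in the vhost config) should also cover the dynamically created subfolders; something like:

        # in www.example.com/folder1/.htaccess: matches every file with
        # this name at any depth below folder1
        <Files "foobar.txt">
            Order allow,deny
            Deny from all
        </Files>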

    Read the article

  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing in a new MBP Unibody (running Snow Leopard, 10.6.2).

    The symptoms (drilling down to the specific behaviour that seems to be causing the issue):

    - mutt is unable to switch to mailboxes which have recently received new mail
    - mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime)
    - however, both the mtime and atime of the mbox file are getting updated

    Through testing, it does not appear that atimes can be set separately in the filesystem:

        : [ether@tequila ~]$; touch test
        : [ether@tequila ~]$; touch -m -t 200801010000 test2
        : [ether@tequila ~]$; touch -a -t 200801010000 test3
        : [ether@tequila ~]$; ls -l test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Jan  1  2008 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3
        : [ether@tequila ~]$; ls -lu test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Dec 30 11:43 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3

    The test2 file is created with an old mtime, and the atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime, but it is not set properly on the file.

    To be sure this is not just behaviour seen with new files, let's modify an old file:

        : [ether@tequila ~]$; touch -a -t 200801010000 test
        : [ether@tequila ~]$; ls -l test
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        : [ether@tequila ~]$; ls -lu test
        -rw------- 1 ether staff 0 Dec 30 11:45 test

    So it would seem that atimes cannot be set explicitly (the atime is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place?

    PS. The output of mount is:

        : [ether@tequila ~]$; mount
        /dev/disk0s2 on / (hfs, local, journaled)
        devfs on /dev (devfs, local, nobrowse)
        map -hosts on /net (autofs, nosuid, automounted, nobrowse)
        map auto_home on /home (autofs, automounted, nobrowse)

    ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".

    Read the article

  • How can I force all internet traffic over a PPTP VPN but still allow local lan access?

    - by user126715
    I have a server running Linux Mint 12 that I want to keep connected to a PPTP VPN all the time. The VPN server is pretty reliable, but it drops on occasion, so I just want to make it so all internet activity is disabled if the VPN connection is broken. I'd also like to figure out a way to restart it automatically, but that's not as big of an issue since this happens pretty rarely. I also want to always be able to connect to the box from my LAN, regardless of whether the VPN is up or not.

    Here's what my ifconfig looks like with the VPN connected properly:

        eth0      Link encap:Ethernet  HWaddr 00:22:15:21:59:9a
                  inet addr:192.168.0.171  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::222:15ff:fe21:599a/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:37389 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:29028 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:37781384 (37.7 MB)  TX bytes:19281394 (19.2 MB)
                  Interrupt:41 Base address:0x8000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1446 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1446 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:472178 (472.1 KB)  TX bytes:472178 (472.1 KB)

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.10.11.10  P-t-P:10.10.11.9  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:14 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:1368 (1.3 KB)  TX bytes:1812 (1.8 KB)

    Here's an iptables script I found elsewhere that seemed to be for the problem I'm trying to solve, but it wound up blocking all access, and I'm not sure what I need to change:

        #!/bin/bash
        # Set variables
        IPT=/sbin/iptables
        VPN=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.`
        LAN=192.168.0.0/24

        # Flush rules
        $IPT -F
        $IPT -X

        # Default policies and define chains
        $IPT -P OUTPUT DROP
        $IPT -P INPUT DROP
        $IPT -P FORWARD DROP

        # Allow input from LAN and tun0 ONLY
        $IPT -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        $IPT -A INPUT -i lo -j ACCEPT
        $IPT -A INPUT -i tun0 -m conntrack --ctstate NEW -j ACCEPT
        $IPT -A INPUT -s $LAN -m conntrack --ctstate NEW -j ACCEPT
        $IPT -A INPUT -j DROP

        # Allow output from lo and tun0 ONLY
        $IPT -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        $IPT -A OUTPUT -o lo -j ACCEPT
        $IPT -A OUTPUT -o tun0 -m conntrack --ctstate NEW -j ACCEPT
        $IPT -A OUTPUT -d $VPN -m conntrack --ctstate NEW -j ACCEPT
        $IPT -A OUTPUT -j DROP

        exit 0

    Thanks for your help.
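    A hedged guess at one gap in that script: nothing allows the PPTP tunnel itself (TCP 1723 plus GRE to the VPN server) out through eth0, so the VPN can never (re)connect under these rules, and new connections from the box to the LAN are also dropped. A sketch of additional rules (the VPN server address is illustrative):

        # let the box open new connections to the LAN
        $IPT -A OUTPUT -d $LAN -m conntrack --ctstate NEW -j ACCEPT

        # allow the PPTP control channel and GRE to the VPN server via eth0
        $IPT -A OUTPUT -o eth0 -p tcp --dport 1723 -d 203.0.113.1 -j ACCEPT
        $IPT -A OUTPUT -o eth0 -p gre -d 203.0.113.1 -j ACCEPT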

    Read the article

  • HELP: MS Virtual Disk Service to access volumes and disks on the local machine

    - by Jibran Ahmed
    Hi, here is my code, through which I successfully initialize the VDS service and get the packs. But when I call QueryVolumes on the IVdsPack object, I am able to get an IEnumVdsObject but unable to get the IUnknown* through the IEnumVdsObject::Next method; it returns S_FALSE with IUnknown* = NULL, so this IUnknown* can't be used to QueryInterface for IVdsVolume. Below is my code:

        HRESULT hResult;
        IVdsService* pService = NULL;
        IVdsServiceLoader* pLoader = NULL;

        // Launch the VDS service
        hResult = CoInitialize(NULL);
        if (SUCCEEDED(hResult))
        {
            hResult = CoCreateInstance(CLSID_VdsLoader, NULL, CLSCTX_LOCAL_SERVER,
                                       IID_IVdsServiceLoader, (void**)&pLoader);
            // If that succeeded, load VDS on the local machine
            if (SUCCEEDED(hResult))
                pLoader->LoadService(NULL, &pService);

            // Done with the loader; release the VDS loader interface
            _SafeRelease(pLoader);

            if (SUCCEEDED(hResult))
            {
                hResult = pService->WaitForServiceReady();
                if (SUCCEEDED(hResult))
                {
                    AfxMessageBox(L"VDS Service Loaded");

                    IEnumVdsObject* pEnumVdsObject = NULL;
                    hResult = pService->QueryProviders(VDS_QUERY_SOFTWARE_PROVIDERS, &pEnumVdsObject);

                    IUnknown* ppObjUnk;
                    IVdsSwProvider* pVdsSwProvider = NULL;
                    IVdsPack* pVdsPack = NULL;
                    IVdsVolume* pVdsVolume = NULL;
                    ULONG ulFetched = 0;

                    hResult = E_INVALIDARG;
                    while (!SUCCEEDED(hResult))
                    {
                        hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched);
                        hResult = ppObjUnk->QueryInterface(IID_IVdsSwProvider, (void**)&pVdsSwProvider);
                        if (!SUCCEEDED(hResult))
                            _SafeRelease(ppObjUnk);
                    }
                    _SafeRelease(pEnumVdsObject);
                    _SafeRelease(ppObjUnk);

                    hResult = pVdsSwProvider->QueryPacks(&pEnumVdsObject);
                    hResult = E_INVALIDARG;
                    while (!SUCCEEDED(hResult))
                    {
                        hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched);
                        hResult = ppObjUnk->QueryInterface(IID_IVdsPack, (void**)&pVdsPack);
                        if (!SUCCEEDED(hResult))
                            _SafeRelease(ppObjUnk);
                    }
                    _SafeRelease(pEnumVdsObject);
                    _SafeRelease(ppObjUnk);

                    hResult = pVdsPack->QueryVolumes(&pEnumVdsObject);
                    pEnumVdsObject->Reset();
                    hResult = E_INVALIDARG;
                    ulFetched = 0;
                    BOOL bDone = FALSE;
                    while (!SUCCEEDED(hResult))
                    {
                        hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched);
                        //hResult = ppObjUnk->QueryInterface(IID_IVdsVolume, (void**)&pVdsVolume);
                        if (!SUCCEEDED(hResult))
                            _SafeRelease(ppObjUnk);
                    }
                    _SafeRelease(pEnumVdsObject);
                    _SafeRelease(ppObjUnk);
                    _SafeRelease(pVdsPack);
                    _SafeRelease(pVdsSwProvider);

                    // hResult = pVdsVolume->AddAccessPath(TEXT("G:\\"));
                    if (SUCCEEDED(hResult))
                        AfxMessageBox(L"Add Access Path Successfully");
                    else
                        AfxMessageBox(L"Unable to Add access path");

                    // UUID of IVdsVolumeMF {EE2D5DED-6236-4169-931D-B9778CE03DC6}
                    static const GUID GUID_IVdsVolumeMF =
                        {0xEE2D5DED, 0x6236, 0x4169, {0x93, 0x1D, 0xB9, 0x77, 0x8C, 0xE0, 0x3D, 0xC6}};

                    hResult = pService->GetObject(GUID_IVdsVolumeMF, VDS_OT_VOLUME, &ppObjUnk);
                    if (hResult == VDS_E_OBJECT_NOT_FOUND)
                        AfxMessageBox(L"Object Not found");
                    if (hResult == VDS_E_INITIALIZED_FAILED)
                        AfxMessageBox(L"Initialization failed");

                    // pVdsVolume = reinterpret_cast<IVdsVolume*>(ppObjUnk);
                    if (SUCCEEDED(hResult))
                    {
                        // hResult = pVdsVolume->AddAccessPath(TEXT("G:\\"));
                        if (SUCCEEDED(hResult))
                        {
                            IVdsAsync* ppVdsSync;
                            AfxMessageBox(L"Formatting is about to Start......");
                            // hResult = pVdsVolume->Format(VDS_FST_UDF, TEXT("UDF_FORMAT_TEST"),
                            //                              2048, TRUE, FALSE, FALSE, &ppVdsSync);
                            if (SUCCEEDED(hResult))
                                AfxMessageBox(L"Formatting Started.......");
                            else
                                AfxMessageBox(L"Formatting Failed");
                        }
                        else
                            AfxMessageBox(L"Unable to Add Access Path");
                    }
                    _SafeRelease(pVdsVolume);
                }
                else
                {
                    AfxMessageBox(L"VDS Service Cannot be Loaded");
                }
            }
        }
        _SafeRelease(pService);

    Read the article

  • Can't the ASP FileSystemObject access shared server paths?

    - by sushant
    I am using this code to access files and folders:

        <%@ Language=VBScript %>
        <% option explicit
        dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize %>
        <META content="Microsoft Visual Studio 6.0" name=GENERATOR><!-- Author: Adrian Forbes -->
        <%
        sRoot = "D:\Raghu"
        sDir = Request("Dir")
        sDir = sDir & "\"
        Response.Write "<h1>" & sDir & "</h1>" & vbCRLF
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        on error resume next
        Set objFolder = objFSO.GetFolder(sRoot & sDir)
        if err.number <> 0 then
            Response.Write "Could not open folder"
            Response.End
        end if
        on error goto 0
        sParent = objFSO.GetParentFolderName(objFolder.Path)
        ' Remove the contents of sRoot from the front. This gives us the parent
        ' path relative to the root folder, e.g. if the parent folder is
        ' "c:\webfiles\subfolder1\subfolder2" then we just want "subfolder1\subfolder2"
        sParent = mid(sParent, len(sRoot) + 1)
        Response.Write "<table border=""1"">"
        ' Give a link to the parent folder. This is just a link to this page,
        ' only passing in the new folder as a parameter
        Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sParent) & """>Parent folder</a></td></tr>" & vbCRLF
        ' Now loop through the subfolders in this folder
        For Each objSubFolder In objFolder.SubFolders
            ' And provide a link to them
            Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sDir & objSubFolder.Name) & """>" & objSubFolder.Name & "</a></td></tr>" & vbCRLF
        Next
        ' Now loop through the files in this folder
        For Each objFile In objFolder.Files
            if Clng(objFile.Size) < 1024 then
                sSize = objFile.Size & " bytes"
            else
                sSize = Clng(objFile.Size / 1024) & " KB"
            end if
            ' And provide a link to view them. This is a link to show.asp, passing
            ' in the directory and the file as parameters
            Response.Write "<tr><td><a href=""show.asp?file=" & server.URLEncode(objFile.Name) & "&dir=" & server.URLEncode(sDir) & """>" & objFile.Name & "</a></td><td>" & sSize & "</td><td>" & objFile.Type & "</td></tr>" & vbCRLF
        Next
        Response.Write "</table>"
        %>

    It works fine, but when I try to access something on a shared path like "\\cvrdd0110:share" it gives an error. How do I access these files? And sorry for the formatting issues.
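    Two hedged things to check: UNC paths take the form \\server\share (a colon is not valid there), and the FileSystemObject runs as the web user (e.g. the anonymous IUSR account or the configured application identity), which needs permissions on the remote share. A sketch:

        ' VBScript: note the backslash, not a colon, between server and share
        Set objFolder = objFSO.GetFolder("\\cvrdd0110\share")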

    Read the article

  • Best Practices - which domain types should be used to run applications

    - by jsavit
    This post is one of a series of "best practices" notes for Oracle VM Server for SPARC (formerly named Logical Domains).

    One question that frequently comes up is "which types of domain should I use to run applications?" There used to be a simple answer in most cases: "only run applications in guest domains", but enhancements to T-series servers and Oracle VM Server for SPARC, and the advent of SPARC SuperCluster, have made this question more interesting and worth qualifying differently. This article reviews the relevant concepts and provides suggestions on where to deploy applications in a logical domains environment.

    Review: division of labor and types of domain

    Oracle VM Server for SPARC offloads many functions from the hypervisor to domains (also called virtual machines). This is a modern alternative to using a "thick" hypervisor that provides all virtualization functions, as in traditional VM designs. It permits a simpler hypervisor design, which enhances reliability and security, and it reduces single points of failure by assigning responsibilities to multiple system components, which further improves reliability and security. In this architecture, management and I/O functionality are provided within domains. Oracle VM Server for SPARC does this by defining the following types of domain, each with its own role:

    - Control domain: the management control point for the server, used to configure domains and manage resources. It is the first domain to boot on a power-up, is an I/O domain, and is usually a service domain as well.
    - I/O domain: has been assigned physical I/O devices (a PCIe root complex, a PCI device, or an SR-IOV (Single Root I/O Virtualization) function). It has native performance and functionality for the devices it owns, unmediated by any virtualization layer.
    - Service domain: provides virtual network and disk devices to guest domains.
    - Guest domain: a domain whose devices are all virtual rather than physical, i.e. virtual network and disk devices provided by one or more service domains. In common practice, this is where applications are run.

    Typical deployment

    A service domain is generally also an I/O domain; otherwise it wouldn't have access to physical device "backends" to offer to its clients. Similarly, an I/O domain is typically also a service domain in order to leverage the available PCI busses. Control domains must be I/O domains, because they boot up first on the server and require physical I/O. It's typical for the control domain to also be a service domain, so it doesn't "waste" the I/O resources it uses.

    A simple configuration consists of a control domain, which is also the one I/O and service domain, and some number of guest domains using virtual I/O. In production, customers typically use multiple domains with I/O and service roles to eliminate single points of failure: guest domains have virtual disk and virtual network devices provisioned from more than one service domain, so failure of a service domain or I/O path or device doesn't result in an application outage. This is also used for "rolling upgrades", in which service domains are upgraded one at a time while their guests continue to operate without disruption. (It should be noted that resiliency to I/O device failures can also be provided by the single control domain, using multipath I/O.) In this type of deployment, control, I/O, and service domains are used for virtualization infrastructure, while applications run in guest domains.
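    Expressed as commands, a hedged sketch of the redundant deployment just described (domain, volume, and switch names are illustrative, and the secondary vds/vsw are assumed to already exist in an alternate service domain):

        # create the guest and assign CPU and memory
        ldm add-domain guest1
        ldm add-vcpu 8 guest1
        ldm add-memory 16G guest1

        # virtual disk and network served by the primary (control/service) domain
        ldm add-vdsdev /dev/dsk/c0t1d0s2 vol1@primary-vds0
        ldm add-vdisk disk0 vol1@primary-vds0 guest1
        ldm add-vnet vnet0 primary-vsw0 guest1

        # second paths from the alternate service domain, so the guest
        # survives a single service-domain outage
        ldm add-vdsdev /dev/dsk/c1t1d0s2 vol2@secondary-vds0
        ldm add-vdisk disk1 vol2@secondary-vds0 guest1
        ldm add-vnet vnet1 secondary-vsw0 guest1

        ldm bind-domain guest1
        ldm start-domain guest1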
    Changing application deployment patterns

    The above model has been widely and successfully used, but more configuration options are available now. Servers got bigger than the original T2000-class machines with 2 I/O busses, so there is more I/O capacity that can be used for applications. Increased T-series server capacity made it attractive to run more vertical applications, such as databases, with higher resource requirements than the "light" applications originally seen. This made it attractive to run applications in I/O domains so they could get bare-metal native I/O performance. This is leveraged by the SPARC SuperCluster engineered system, announced a year ago at Oracle OpenWorld. In SPARC SuperCluster, I/O domains are used for high-performance applications, with native I/O performance for disk and network and optimized access to the InfiniBand fabric.

    Another technical enhancement is the introduction of Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV), which make it possible to give domains direct connections and native I/O performance for selected I/O devices. A domain with either a DIO or SR-IOV device is an I/O domain. In summary: not all I/O domains own PCI complexes, and there are increasingly more I/O domains that are not service domains; they use their I/O connectivity for the performance of their own applications.

    However, there are some limitations and considerations. At this time, a domain using physical I/O cannot be live-migrated to another server. There is also a need to plan for security and to avoid introducing unneeded dependencies: if an I/O domain is also a service domain providing virtual I/O to guests, it has the ability to affect the correct operation of its client guest domains. This is even more relevant for the control domain, where the ldm command has to be protected from unauthorized (or even mistaken) use that would affect other domains. As a general rule, running applications in the service domain or the control domain should be avoided. To recap:

    - Guest domains with virtual I/O still provide the greatest operational flexibility, including features like live migration.
    - I/O domains can be used for applications with high performance requirements. This is used to great effect in SPARC SuperCluster and in general T4 deployments. Direct I/O (DIO) and Single Root I/O Virtualization (SR-IOV) make this more attractive by giving direct I/O access to more domains.
    - Service domains should in general not be used for applications, because compromised security in the domain, or an outage, can affect other domains that depend on it. This concern can be mitigated by providing guests their virtual I/O from more than one service domain, so an interruption of service in one service domain does not cause an application outage.
    - The control domain should in general not be used to run applications, for the same reason. SPARC SuperCluster uses the control domain for applications, but it is an exception: it's not a general-purpose environment; it's an engineered system with specifically configured applications and optimizations for optimal performance.

    These are recommended "best practices" based on conversations with a number of Oracle architects. Keep in mind that "one size does not fit all", so you should evaluate these practices in the context of your own requirements.

    Summary

    Higher-capacity T-series servers have made it more attractive to use them for applications with high resource requirements. New deployment models permit native I/O performance for demanding applications by running them in I/O domains with direct access to their devices. This is leveraged in SPARC SuperCluster, and can be leveraged in T-series servers to provision high-performance applications running in domains. Carefully planned, this can be used to provide higher performance for critical applications.

    Read the article

  • How do I suppress script errors when using the WPF WebBrowser control?

    - by willem
    I have a WPF application that uses the WPF WebBrowser control to display interesting web pages to our developers on a flatscreen display (like a news feed). The trouble is that I occasionally get an HTML script error that pops up a nasty IE error message asking if I would like to "stop running scripts on this page". Is there a way to suppress this error checking? NOTE: I have already disabled script debugging in the IE settings.
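    A commonly cited workaround (a hedged sketch; it reaches into a private field of the WPF WebBrowser, so it may break across framework versions) is to set the underlying ActiveX control's Silent property once a document has loaded:

        // C#: call this from the browser's LoadCompleted/Navigated handler
        using System.Reflection;
        using System.Windows.Controls;

        public static class WebBrowserExtensions
        {
            public static void SuppressScriptErrors(this WebBrowser browser)
            {
                FieldInfo field = typeof(WebBrowser).GetField("_axIWebBrowser2",
                    BindingFlags.Instance | BindingFlags.NonPublic);
                if (field == null) return;
                object ax = field.GetValue(browser);
                if (ax == null) return; // not available until after a navigation
                ax.GetType().InvokeMember("Silent", BindingFlags.SetProperty,
                    null, ax, new object[] { true });
            }
        }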

    Read the article

  • C#: nested categories (sub-categories) in the “PropertyGrid” form control

    - by Cathering
    I'm new to C# and I've been trying to design my own program for a while now. I came across a control named PropertyGrid; it suits me perfectly, and after some Googling I managed to find out how to split up the various properties into categories using attributes. But I cannot find any information on adding sub-categories to another category. Can anyone shed light on this subject? Thank you.
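    As far as I know the PropertyGrid has no built-in notion of nested categories; a hedged workaround sketch is to group the related properties into a child object and mark it expandable, which renders as a collapsible node under its parent category (the type and property names here are made up):

        // C#
        using System.ComponentModel;

        [TypeConverter(typeof(ExpandableObjectConverter))]
        public class MarginSettings
        {
            public int Left { get; set; }
            public int Right { get; set; }
            // shown as the collapsed summary text in the grid
            public override string ToString()
            {
                return Left + ", " + Right;
            }
        }

        public class PageSettings
        {
            [Category("Layout")]
            public MarginSettings Margins { get; set; }

            public PageSettings() { Margins = new MarginSettings(); }
        }

    Assign an instance to the grid's SelectedObject property and the Margins entry appears under Layout with Left/Right nested beneath it.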

    Read the article
