Search Results

Search found 38458 results on 1539 pages for 'remote access'.


  • Unix Permissions: Enable access to files no matter the user?

    - by TK Kocheran
    I've been using Linux for a long time and I'm still completely in the dark about how file permissions really work. With that in mind, does anyone have any books or thorough guides I could read to really understand things completely? I've done my fair share of sysadminning, so I know the easy stuff like making directories readable and writable, making files executable, and changing the owner of a file, but on sharing files across users, I'm lost. Here's my main problem. I have a number of machines across which I intend to synchronize my music library. I've been using Unison for a while now and it's a great choice, as I can easily run it over SSH on my local network, which I just set up. Win-win. Up until this point, I've been synchronizing computers using a 2TB external hard drive (computer 1 unisons to the HD, computer 2 unisons to the HD, etc.). This is tedious at best, especially since I encrypted the drive, making it a huge hassle to hook it up to all of my machines and sync it. Anyway, the drive is running ext4 (in TrueCrypt), so it maintains all Unix filesystem info like owners and groups. I just set up a new machine and Unison'd it to get the music onto it, and I realized that all of my permissions are now fubar. I had to run Unison as root, since that was the only way I could get the files to come off of the external drive. Apparently, since I'm using a different user name on this machine than my usual "rfkrocktk" across all machines, this essentially throws a huge wrench in the gears. Here's my use case. This laptop has two effective users, "leandra" and "rfkrocktk". I want to share music between these two users, so I symlinked /home/rfkrocktk/Music to point to /home/leandra/Music. How do I (a) allow both users to read/write/delete files in this folder, and (b) keep everything nicely in sync without messing up file ownership?
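
    A common pattern for the shared-folder half of this (a minimal sketch, assuming a hypothetical group named "music" and the paths from the question) is a shared group plus setgid directories, so new files inherit the group:

        # create a shared group and put both users in it
        sudo groupadd music
        sudo usermod -aG music leandra
        sudo usermod -aG music rfkrocktk

        # hand the tree to the group and make it group-writable
        sudo chgrp -R music /home/leandra/Music
        sudo chmod -R g+rwX /home/leandra/Music

        # set the setgid bit on every directory so new files inherit the group
        sudo find /home/leandra/Music -type d -exec chmod g+s {} +

    With the setgid bit on the directories, files created anywhere under the tree get the "music" group automatically; a umask of 002 (and, for the sync side, Unison's owner/group/perms preferences) keeps them group-writable.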


  • Homebrew large data cluster access for 2 user levels?

    - by Yegor
    The title probably makes little sense, so here is an example. I have a file hosting site that serves a large amount of semi-randomly accessed files. The setup is as follows:

    - A high-horsepower front-end + DB server that also does encoding for files that need encoding.
    - A fresh-file server, which stores newly uploaded content that is probably (and usually) rapidly accessed; it has 500GB of RAIDed SSD storage and can push over 3GBit of traffic.
    - 3 cheap node servers, each containing 2 x 750GB SATA drives in RAID 1, to which files older than 2 weeks are archived from the SSD server mentioned above.

    Files on each server are accessed via subdomains (via modsec) in a straightforward fashion (server1.domain.com, server2.domain.com, etc.). Where I have the problem is this: I introduced a "premium" service where people pay a small fee every month and get ad-free, quick access to stuff on the site. Once they are logged in, they access the same files via premium.server1.domain.com through a different modsec script with a different passphrase. That all works fine and dandy... except the cheap node servers are all IO-bound, so accessing the files on them via a different, unsaturated network makes no difference, since the drives cannot be read fast enough. What would be a good way to make files on the site accessible via 2 different network routes, 1 of which will be saturated (the "free" network), while all other files are on an unsaturated "premium" network?


  • How to access previous VHD versions of system backup?

    - by feklee
    Quote from the 31 Oct 2009 TechNet article "Learn more about system image backup":

        During the first backup, the backup engine scans the source drive and copies only blocks that contain data into a .vhd file stored on the target, creating a compact view of the source drive. The next time a system image is created, only new and changed data is written to the .vhd file, and old data on the same block is moved out of the VHD and into the shadow copy storage area. Volume Shadow Copy Service is used to compute the changed data between backups, as well as to handle the process of moving the old data out to the shadow copy area on the target. This approach makes the backup fast (since only changed blocks are backed up) and efficient (since data is stored in a compact manner). When restoring the image, blocks will be restored to their original locations on the source disk. If you want to restore from an older backup, the engine reads from the shadow copy area and restores the appropriate blocks.

    For the past few days, a daily system backup of drive C: to drive E: has been scheduled and run by Windows 7 Backup and Restore. Drive C: currently holds 233 GB of data, which fits comfortably on drive E:, a 1 TB drive with 727 GB of free space remaining. How do I access a previous version of the VHD? I right-clicked on files and folders in E:\WindowsImageBackup and looked for Previous Versions, but always got: "There are no previous versions available".
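
    One way to check whether the older backup states are actually still there (a sketch; run from an elevated command prompt on the machine that holds drive E:) is to ask VSS directly for the shadow copies on the target drive:

        vssadmin list shadows /for=E:

        rem mount one of them for browsing; the copy number 27 is hypothetical,
        rem use a device name from the listing and keep the trailing backslash
        mklink /d C:\shadow \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy27\

    If shadow copies show up in the listing, the older block data exists even though Explorer's Previous Versions tab shows nothing for the image files.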


  • Getting access to physical drives in ESXi v5.5 installation on Dell PowerEdge R710 with PERC 6/i

    - by Big-Blue
    I acquired a Dell PowerEdge R710 server a few days ago, which includes a PERC 6/i RAID controller. The server is now fitted with a SATA SSD, one SAS drive and four SATA HDDs, all of which I would like to be passed through to ESXi in an "as-is" state, without creating any logical drives in the RAID controller. Now, the ESXi v5.5 installation image I grabbed from the Dell homepage starts just fine but only lists the logical drives and connected flash drives as possible installation targets, not any of the physical drives. If I create a small logical drive on my SSD (which the PERC 6/i detects as SATA-SSD type), the ESXi install wizard lists the SSD value of that drive as false, which is far from optimal. I have also tried disabling the RAID controller entirely in the setup, but that did not help either. Everything that should enable passthrough is enabled in the BIOS, but that shouldn't be a concern at this early stage of the ESXi installation. How would I be able to install ESXi v5.5 to a part of my SSD that is connected to the storage controller, while giving it full physical access to the disk (to allow for SMART values to be read, etc.)?
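
    If the aim is mostly to give VMs whole disks rather than to boot ESXi itself from a raw disk, one hedged workaround is a physical-mode raw device mapping (RDM) per disk from the ESXi shell; note that a PERC 6/i normally only presents logical volumes to the host, so each drive may still need to be wrapped in a single-disk RAID 0. The device and datastore names below are placeholders:

        # list the block devices the host can see
        ls /vmfs/devices/disks/

        # create a physical-mode RDM pointer file on an existing datastore
        vmkfstools -z /vmfs/devices/disks/naa.6005... /vmfs/volumes/datastore1/rdms/disk1.vmdk

    Physical-mode RDMs pass SCSI commands through, which is what lets a guest read SMART data.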


  • Windows 7 access denied to executables... by what?

    - by stijn
    Ever since I started using Windows 7 this problem has been bothering me. From time to time I see similar questions popping up on misc forums, but I never saw an answer. Here are two scenarios that nearly always reproduce it.

    The Explorer way:
    1. with Explorer, navigate to a directory containing at least one exe file
    2. go one directory up
    3. immediately delete the directory just navigated to

    This yields a "Folder Access Denied" dialog stating "You need permission to perform this action. You require permission from Administrators to make changes to this folder", with the buttons Try Again and Cancel. Hitting Try Again never works immediately; waiting a minute or so and then clicking it again does work. Note: if in step 2 I wait a minute or more before going up one directory, the problem does not occur and the folder can be deleted.

    The Visual Studio way:
    1. build a project producing an exe file
    2. run the executable, then close it
    3. immediately build the project again (by changing a single character in a source file, for example)

    This yields "fatal error LNK1168: cannot open /path/to/the.exe for writing". Note: if in step 2 I wait a minute or more before building again, the problem does not occur.

    Some specs:
    - happens on both Windows 7 32 and 64 bit, with VS2008/2010/2011
    - happens on 3 different machines
    - I do not have a virus scanner of any kind
    - I do have a bunch of services disabled, but nothing that prevents Windows from running normally; UAC is disabled as well
    - happens on any type of disk
    - I always use a user account that is in the Administrators group

    Obviously both scenarios are very similar and extremely reproducible. So I figured some process must have the file open for some reason and release it again later. However, using Sysinternals' handle -a, the exe file in question never shows up. (That is the correct way to use handle, right?) So while Explorer/VS are reporting they cannot access the file, handle.exe says it's not in use anywhere. This leaves me rather clueless, so I'm wondering if someone can come up with a solution: why does this happen, and how to solve it?
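
    For what it's worth, two quick checks that sometimes catch this kind of delayed-release behaviour (a sketch; the file name is a placeholder): search all handle types rather than just named file handles, and list the file-system minifilter drivers, since backup/indexing/AV filters can hold an executable open briefly after it runs:

        rem search all handle types for the name, showing the owning user
        handle.exe -a -u the.exe

        rem list installed file-system minifilters
        fltmc filters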


  • Why did I lose access to the mailboxes on my old web/mail host after changing to a new one but keeping the old MX values?

    - by LaserBeak
    So I changed the NS records with the registrar to point at the new web host's DNS servers and edited the SOA record there, deleting the new host's default MX records and instead putting in the ones for the old web/mail host. The website A record is, however, pointing at the new web host's servers, and the site comes up fine. But none of this should cause me to lose access to mailboxes on my old host's mail server, right? I log into the control panel on the old host, all the mailboxes are there, all the passwords are fine, but I can't log in using either webmail or POP3; it says incorrect login/password. I even created a new mailbox and password for it, but it would not let me log in. For what it's worth, I did not change or delete the A records in the old web host's zone file, since I am not hosting the site with them anymore and the NS records are pointing to the other host's DNS servers/zone file, so that shouldn't matter, right? The old host's mail server is also not simply down. I can tell because, through the control panel, I set up a mail forward for one of the existing inboxes, and when sending mail to it, it receives it and forwards it fine. So from this I can deduce that I have correctly entered the old host's MX records into the zone file hosted on the new host's DNS, and the mail is being sent to the old host's mail server(s) and successfully forwarded by it. But why can't I log into those accounts/inboxes anymore?
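
    To rule DNS in or out from the outside (a sketch; example.com stands in for the real domain and ns1.newhost.example for the new host's nameserver), it may help to check what actually resolves:

        dig +short MX example.com
        dig +short A mail.example.com      # whatever host the MX records point at
        dig +short MX example.com @ns1.newhost.example

    If the MX target still resolves to the old host's IP (and forwarding works, as described), the login failures are more likely inside the old host's POP3/webmail authentication than anything in the zone file.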


  • Connecting via ShrewSoft VPN client means no LAN internet access (Windows 7 64 bit) - any advice please?

    - by iwishiknewmoreaboutnetworking
    I have a Windows 7 64 bit desktop machine which is connected to a LAN. I recently installed ShrewSoft VPN client v2.1.7 on my machine so that I can connect to a license server hosted by my customer. They are running a Cisco VPN server, and I originally tried (unsuccessfully!) to use the Cisco VPN client for Windows 64 bit, but the default gateway wasn't being configured correctly after loading in my pcf file. Using ShrewSoft I am able to import the same pcf file and successfully connect to the machine I need using the VPN client software. The client machine I need to connect to has IP address 1.52.90.33. The problem is that when I am connected to the customer network using the VPN client application, after a few minutes I lose my LAN internet connection. I can only presume that this is because, by default, the ShrewSoft VPN client tunnels all traffic through the VPN connection. I know there is an option to switch off the "Tunnel All" option on the Policy tab of the application and enter a Remote Network Resource (to "Include" or "Exclude") as "Address" and "Netmask" IP addresses, but I am not sure what I need to enter here.

    Here is my ipconfig output before connecting to the VPN (with suffixes blanked out):

        Windows IP Configuration

        Ethernet adapter Local Area Connection:
           Connection-specific DNS Suffix . : ***.***
           Link-local IPv6 Address . . . . . : fe80::8de3:9dbe:393a:33ba%11
           IPv4 Address. . . . . . . . . . . : 150.237.13.17
           Subnet Mask . . . . . . . . . . . : 255.255.255.0
           Default Gateway . . . . . . . . . : 150.237.13.1

        Tunnel adapter 6TO4 Adapter:
           Connection-specific DNS Suffix . : ***.***
           IPv6 Address. . . . . . . . . . . : 2002:96ed:d11::96ed:d11
           Default Gateway . . . . . . . . . : 2002:c058:6301::c058:6301

        Tunnel adapter Local Area Connection* 9:
           Connection-specific DNS Suffix . :
           IPv6 Address. . . . . . . . . . . : 2001:0:4137:9e76:2cf9:38c4:6912:f2ee
           Link-local IPv6 Address . . . . . : fe80::2cf9:38c4:6912:f2ee%12
           Default Gateway . . . . . . . . . :

        Tunnel adapter isatap.***.***:
           Media State . . . . . . . . . . . : Media disconnected
           Connection-specific DNS Suffix . : ***.***

    Here is my route print output before connecting to the VPN:

        ===========================================================================
        Interface List
         11...20 cf 30 9d ec 2a ......Realtek RTL8168D/8111D Family PCI-E Gigabit Ethernet NIC (NDIS 6.20)
          1...........................Software Loopback Interface 1
         14...00 00 00 00 00 00 00 e0 Microsoft 6to4 Adapter
         12...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface
         13...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2
        ===========================================================================

        IPv4 Route Table
        ===========================================================================
        Active Routes:
        Network Destination        Netmask          Gateway       Interface  Metric
                  0.0.0.0          0.0.0.0     150.237.13.1    150.237.13.17       2
                127.0.0.0        255.0.0.0          On-link         127.0.0.1     306
                127.0.0.1  255.255.255.255          On-link         127.0.0.1     306
          127.255.255.255  255.255.255.255          On-link         127.0.0.1     306
             150.237.13.0    255.255.255.0          On-link     150.237.13.17     257
            150.237.13.17  255.255.255.255          On-link     150.237.13.17     257
           150.237.13.255  255.255.255.255          On-link     150.237.13.17     257
                224.0.0.0        240.0.0.0          On-link         127.0.0.1     306
                224.0.0.0        240.0.0.0          On-link     150.237.13.17     257
          255.255.255.255  255.255.255.255          On-link     12
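
    Based on the routes above, a plausible policy (hedged: the /32 below only covers the one license server; the customer may need to supply the full remote subnet) is to turn off "Tunnel All" and include only what must ride the VPN:

        Policy tab:
          - disable the "Tunnel All" / automatic topology option
          - Remote Network Resource -> Include:
                Address:  1.52.90.33
                Netmask:  255.255.255.255

    Everything else then keeps using the LAN default gateway (150.237.13.1), so local internet access should survive VPN drops.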


  • How do I speed up and cache mmap file access over NFS on Linux?

    - by Zan Lynx
    The server and client are both 64-bit Ubuntu 10.04 LTS. The application in question is a custom app that uses mmap() for fast random file access. Its ideal state is when the entire file is cached in RAM. The network connections are really fast 10Gb Ethernet; it is a virtual server blade setup. It isn't the network connections slowing things down, because everything performs superbly when using a virtual disk (iSCSI to the SAN). But when we run the application on an NFS home directory mount, performance goes to the dogs. It appears that the Linux kernel isn't caching anything, so it is reading every single disk block needed by mmap() accesses over and over again. The NFS mount is done through autofs, which has only default settings. /proc/mounts shows the NFS mount is done with the following options:

        rw,relatime,vers=3,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=192.168.11.52,mountvers=3,mountproto=tcp,addr=192.168.11.52

    How can I make Ubuntu 10.04 cache the file instead of reloading it all the time?
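
    A mount-side experiment that may be worth trying (a sketch; the server and export names are placeholders, and both options trade cache coherence for fewer revalidations, so they only suit data that changes rarely):

        # relax close-to-open consistency and stretch the attribute cache,
        # since attribute revalidation is what usually invalidates cached pages on NFS
        sudo mount -t nfs -o rw,hard,intr,nocto,actimeo=600 server:/export/home /mnt/home-test

    If pages survive in the cache on the test mount (watch the "cached" column of free -m while the app runs), the same options can be moved into the autofs map.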


  • Convert remote object result to ArrayCollection in Flex

    - by user364199
    Hi guys, I'm using Zend_AMF and Flex. My problem is that I have to populate my AdvancedDataGrid using an ArrayCollection, and this ArrayCollection has children. Example:

        [Bindable]
        private var dpHierarchy:ArrayCollection = new ArrayCollection([
            {trucks:"Truck", children: [
                {trucks:"AMC841", total_trip:1, start_time:'3:46:40 AM'},
                {trucks:"AMC841", total_trip:1, start_time:'3:46:40 AM'}]}
        ]);

    But the data source of my datagrid should come from a database. How can I convert the result from the remote object into an ArrayCollection that has the same format as in my example, or is there another way? Here is my AdvancedDataGrid:

        <mx:AdvancedDataGrid id="datagrid" width="500" height="200"
                lockedColumnCount="1" lockedRowCount="0"
                horizontalScrollPolicy="on" includeIn="loggedIn" x="67" y="131">
            <mx:dataProvider>
                <mx:HierarchicalData id="dpHierarchytest" source="{dp}"/>
            </mx:dataProvider>
            <mx:groupedColumns>
                <mx:AdvancedDataGridColumn dataField="trucks" headerText="Trucks"/>
                <mx:AdvancedDataGridColumn dataField="total_trip" headerText="Total Trip"/>
                <mx:AdvancedDataGridColumnGroup headerText="PRECOOLING">
                    <mx:AdvancedDataGridColumnGroup headerText="Before Loading">
                        <mx:AdvancedDataGridColumn dataField="start_time" headerText="Start Time"/>
                        <mx:AdvancedDataGridColumn dataField="end_time" headerText="End Time"/>
                        <mx:AdvancedDataGridColumn dataField="precooling_time" headerText="Precooling Time"/>
                        <mx:AdvancedDataGridColumn dataField="precooling_temp" headerText="Precooling Temp"/>
                    </mx:AdvancedDataGridColumnGroup>
                    <mx:AdvancedDataGridColumnGroup headerText="Before Dispatch">
                        <mx:AdvancedDataGridColumn dataField="bd_start_time" headerText="Start Time"/>
                        <mx:AdvancedDataGridColumn dataField="bd_end_time" headerText="End Time"/>
                        <mx:AdvancedDataGridColumn dataField="bd_precooling_time" headerText="Precooling Time"/>
                        <mx:AdvancedDataGridColumn dataField="bd_precooling_temp" headerText="Precooling Temp"/>
                    </mx:AdvancedDataGridColumnGroup>
                    <mx:AdvancedDataGridColumn dataField="remarks" headerText="Remarks"/>
                </mx:AdvancedDataGridColumnGroup>
                <mx:AdvancedDataGridColumnGroup headerText="Temperature Compliance">
                    <mx:AdvancedDataGridColumn dataField="total_hit" headerText="Total Hit"/>
                    <mx:AdvancedDataGridColumn dataField="total_miss" headerText="Total Miss"/>
                    <mx:AdvancedDataGridColumn dataField="cold_chain_compliance" headerText="Cold Chain Compliance"/>
                    <mx:AdvancedDataGridColumn dataField="average_temp" headerText="Average Temp"/>
                </mx:AdvancedDataGridColumnGroup>
                <mx:AdvancedDataGridColumnGroup headerText="Productivity">
                    <mx:AdvancedDataGridColumn dataField="total_drop_points" headerText="Total Drop Points"/>
                    <mx:AdvancedDataGridColumn dataField="total_delivery_time" headerText="Total Delivery Time"/>
                    <mx:AdvancedDataGridColumn dataField="total_distance" headerText="Total Distance"/>
                </mx:AdvancedDataGridColumnGroup>
                <mx:AdvancedDataGridColumnGroup headerText="Trip Exceptions">
                    <mx:AdvancedDataGridColumn dataField="total_doc" headerText="Total DOC"/>
                    <mx:AdvancedDataGridColumn dataField="total_eng" headerText="Total ENG"/>
                    <mx:AdvancedDataGridColumn dataField="total_fenv" headerText="Total FENV"/>
                    <mx:AdvancedDataGridColumn dataField="average_speed" headerText="Average Speed"/>
                </mx:AdvancedDataGridColumnGroup>
            </mx:groupedColumns>
        </mx:AdvancedDataGrid>

    Thanks, I really need some help.


  • Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host

    - by xnoor
    I have an update server that sends client updates through TCP port 12000. Sending a single file succeeds only the first time; after that I get an error message on the server, "Unable to write data to the transport connection: An existing connection was forcibly closed by the remote host". If I restart the update service on the server, it works again, but again only once. It is a normal multithreaded Windows service.

    SERVER CODE

        namespace WSTSAU
        {
            public partial class ApplicationUpdater : ServiceBase
            {
                private Logger logger = LogManager.GetCurrentClassLogger();
                private int _listeningPort;
                private int _ApplicationReceivingPort;
                private string _setupFilename;
                private string _startupPath;

                public ApplicationUpdater()
                {
                    InitializeComponent();
                }

                protected override void OnStart(string[] args)
                {
                    init();
                    logger.Info("after init");
                    Thread ListnerThread = new Thread(new ThreadStart(StartListener));
                    ListnerThread.IsBackground = true;
                    ListnerThread.Start();
                    logger.Info("after thread start");
                }

                private void init()
                {
                    _listeningPort = Convert.ToInt16(ConfigurationSettings.AppSettings["ListeningPort"]);
                    _setupFilename = ConfigurationSettings.AppSettings["SetupFilename"];
                    _startupPath = System.IO.Path.GetDirectoryName(
                        System.Reflection.Assembly.GetExecutingAssembly().GetName().CodeBase).Substring(6);
                }

                private void StartListener()
                {
                    try
                    {
                        logger.Info("Listening Started");
                        ThreadPool.SetMinThreads(50, 50);
                        TcpListener listener = new TcpListener(_listeningPort);
                        listener.Start();
                        while (true)
                        {
                            // hand each accepted connection off to the thread pool
                            TcpClient c = listener.AcceptTcpClient();
                            ThreadPool.QueueUserWorkItem(ProcessReceivedMessage, c);
                        }
                    }
                    catch (Exception ex)
                    {
                        logger.Error(ex.Message);
                    }
                }

                void ProcessReceivedMessage(object c)
                {
                    try
                    {
                        TcpClient tcpClient = c as TcpClient;
                        NetworkStream Networkstream = tcpClient.GetStream();
                        // NOTE: a single Read of at most 1024 bytes; a message larger
                        // than this (or split across packets) would be truncated
                        byte[] _data = new byte[1024];
                        int _bytesRead = 0;
                        _bytesRead = Networkstream.Read(_data, 0, _data.Length);
                        MessageContainer messageContainer = new MessageContainer();
                        messageContainer = SerializationManager.XmlFormatterByteArrayToObject(_data, messageContainer) as MessageContainer;
                        switch (messageContainer.messageType)
                        {
                            case MessageType.ApplicationUpdateMessage:
                                ApplicationUpdateMessage appUpdateMessage = new ApplicationUpdateMessage();
                                appUpdateMessage = SerializationManager.XmlFormatterByteArrayToObject(
                                    messageContainer.messageContnet, appUpdateMessage) as ApplicationUpdateMessage;
                                Func<ApplicationUpdateMessage, bool> HandleUpdateRequestMethod = HandleUpdateRequest;
                                IAsyncResult cookie = HandleUpdateRequestMethod.BeginInvoke(appUpdateMessage, null, null);
                                bool WorkerThread = HandleUpdateRequestMethod.EndInvoke(cookie);
                                break;
                        }
                    }
                    catch (Exception ex)
                    {
                        logger.Error(ex.Message);
                    }
                }

                private bool HandleUpdateRequest(ApplicationUpdateMessage appUpdateMessage)
                {
                    try
                    {
                        // connect back to the client and push the setup file to it
                        TcpClient tcpClient = new TcpClient();
                        NetworkStream networkStream;
                        FileStream fileStream = null;
                        tcpClient.Connect(appUpdateMessage.receiverIpAddress, appUpdateMessage.receiverPortNumber);
                        networkStream = tcpClient.GetStream();
                        fileStream = new FileStream(_startupPath + "\\" + _setupFilename, FileMode.Open, FileAccess.Read);
                        FileInfo fi = new FileInfo(_startupPath + "\\" + _setupFilename);
                        BinaryReader binFile = new BinaryReader(fileStream);

                        // announce the file name and size first
                        FileUpdateMessage fileUpdateMessage = new FileUpdateMessage();
                        fileUpdateMessage.fileName = fi.Name;
                        fileUpdateMessage.fileSize = fi.Length;
                        MessageContainer messageContainer = new MessageContainer();
                        messageContainer.messageType = MessageType.FileProperties;
                        messageContainer.messageContnet = SerializationManager.XmlFormatterObjectToByteArray(fileUpdateMessage);
                        byte[] messageByte = SerializationManager.XmlFormatterObjectToByteArray(messageContainer);
                        networkStream.Write(messageByte, 0, messageByte.Length);

                        // then stream the file contents
                        int bytesSize = 0;
                        byte[] downBuffer = new byte[2048];
                        while ((bytesSize = fileStream.Read(downBuffer, 0, downBuffer.Length)) > 0)
                        {
                            networkStream.Write(downBuffer, 0, bytesSize);
                        }

                        // close the stream before the client that owns it
                        fileStream.Close();
                        networkStream.Close();
                        tcpClient.Close();
                        return true;
                    }
                    catch (Exception ex)
                    {
                        logger.Info(ex.Message);
                        return false;
                    }
                    finally
                    {
                    }
                }

                protected override void OnStop()
                {
                }
            }
        }

    I have to note that my Windows service (the server) is multithreaded. I hope someone can help with this.


  • How to deny web access to some files?

    - by Strae
    I need to do a slightly strange operation. First, I run Debian with apache2 (which runs as user www-data). I have simple text files with a .txt or .ini or whatever extension, it doesn't matter. These files are located in subfolders with a structure like this:

        www.example.com/folder1/car/foobar.txt
        www.example.com/folder1/cycle/foobar.txt
        www.example.com/folder1/fish/foobar.txt
        www.example.com/folder1/fruit/foobar.txt

    The file name is always the same, ditto for the hierarchy; only the name of the middle folder changes:

        /folder-name-static/folder-name-dynamic/file-name-static.txt

    What I need to do is (I think) relatively simple: I must be able to read those files with programs on the server (Python, PHP for example), but if I try to retrieve the file contents with a browser (entering the URL www.example.com/folder1/car/foobar.txt, via cURL, etc.) I must get a forbidden error, or whatever, but not access to the file. It would also be nice if those files were hidden from FTP access too, or at least could not be downloaded (except with the FTP root user's credentials). How can I do this? I found this online, to be put in the .htaccess file:

        <Files File.txt>
            Order allow,deny
            Deny from all
        </Files>

    It seems to work, but only if the file is in the web root (www.example.com/myfile.txt), not in subfolders. Moreover, the second-level folders (www.example.com/folder1/fruit/foobar.txt) will be created dynamically, and I would like to avoid having to change the .htaccess file from time to time. Is it possible to create a rule, something like that, that applies to all files with a given name at www.example.com/folder-name-static/folder-name-dynamic/file-name-static.txt, where those parts are always the same and just one changes?

    EDIT: As Dave Drager said, I could simplify this by keeping those files outside the web-accessible directory. But those directories will contain other files too, images and stuff used by my users, so I am simply trying not to have a duplicate folder system like:

        /var/www/vhosts/example.com/httpdocs/folder1/car/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/cycle/[other folders and files here]
        /var/www/vhosts/example.com/httpdocs/folder1/fish/[other folders and files here]

    and then, for the 'secret' files:

        /folder1/data/car/foobar.txt
        /folder1/data/cycle/foobar.txt
        /folder1/data/fish/foobar.txt
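
    A sketch of one way to express this without touching .htaccess per folder (assuming access to the main Apache config, and using the vhost path from the question): a <Files> block scoped to a <Directory>, since <Files> matches by name at any depth inside that directory tree, dynamic subfolders included:

        # in the vhost or server config
        <Directory /var/www/vhosts/example.com/httpdocs/folder1>
            <Files "foobar.txt">
                Order allow,deny
                Deny from all
            </Files>
        </Directory>

    For a pattern rather than an exact name, <FilesMatch> takes a regular expression instead. Scripts running on the server (PHP, Python) read the files through the filesystem, not HTTP, so they are unaffected.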


  • Are file access times not properly maintained in Mac OS X?

    - by Ether
    I'm trying to determine how file access times are maintained by default in Mac OS X, as I'm trying to diagnose some odd behaviour I'm seeing in a new MBP Unibody (running Snow Leopard, 10.6.2). The symptoms (drilling down to the specific behaviour that seems to be causing the issue):

    - mutt is unable to switch to mailboxes which have recently received new mail
    - mail is delivered by procmail, which updates the mtime of the mbox folder it is updating, but does not alter the atime (this is how new mail detection works: by comparing atime to mtime)
    - however, both the mtime and atime of the mbox file are getting updated

    Through testing, it does not appear that atimes can be set separately in the filesystem:

        : [ether@tequila ~]$; touch test
        : [ether@tequila ~]$; touch -m -t 200801010000 test2
        : [ether@tequila ~]$; touch -a -t 200801010000 test3
        : [ether@tequila ~]$; ls -l test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Jan  1  2008 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3
        : [ether@tequila ~]$; ls -lu test*
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        -rw------- 1 ether staff 0 Dec 30 11:43 test2
        -rw------- 1 ether staff 0 Dec 30 11:43 test3

    The test2 file is created with an old mtime, and its atime is set to now (as it is a new file), which is correct. However, test3 is created with an old atime that is not kept on the file. To be sure this is not just behaviour seen with new files, let's modify an old file:

        : [ether@tequila ~]$; touch -a -t 200801010000 test
        : [ether@tequila ~]$; ls -l test
        -rw------- 1 ether staff 0 Dec 30 11:42 test
        : [ether@tequila ~]$; ls -lu test
        -rw------- 1 ether staff 0 Dec 30 11:45 test

    So it would seem that atimes cannot be set explicitly (the atime is always reset to "now" when either mtime or atime modifications are submitted). Is this something inherent to the filesystem itself, is it something that can be changed, or am I totally crazy and looking in the wrong place?

    PS. The output of mount is:

        : [ether@tequila ~]$; mount
        /dev/disk0s2 on / (hfs, local, journaled)
        devfs on /dev (devfs, local, nobrowse)
        map -hosts on /net (autofs, nosuid, automounted, nobrowse)
        map auto_home on /home (autofs, automounted, nobrowse)

    ...and Disk Utility says that the drive is of type "Mac OS Extended (Journaled)".
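
    For what it's worth, BSD stat on the Mac can show both timestamps in one shot, which makes this experiment a bit easier to read (a sketch using the file names above):

        # access time, then modification time, for each test file
        stat -f '%N  atime=%Sa  mtime=%Sm' test*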


  • How can I force all internet traffic over a PPTP VPN but still allow local lan access?

    - by user126715
    I have a server running Linux Mint 12 that I want to keep connected to a PPTP VPN all the time. The VPN server is pretty reliable, but it drops on occasion, so I just want to make it so that all internet activity is disabled if the VPN connection is broken. I'd also like to figure out a way to restart it automatically, but that's not as big of an issue since this happens pretty rarely. I also want to always be able to connect to the box from my LAN, regardless of whether the VPN is up or not. Here's what my ifconfig looks like with the VPN connected properly:

        eth0      Link encap:Ethernet  HWaddr 00:22:15:21:59:9a
                  inet addr:192.168.0.171  Bcast:192.168.0.255  Mask:255.255.255.0
                  inet6 addr: fe80::222:15ff:fe21:599a/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:37389 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:29028 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:37781384 (37.7 MB)  TX bytes:19281394 (19.2 MB)
                  Interrupt:41 Base address:0x8000

        lo        Link encap:Local Loopback
                  inet addr:127.0.0.1  Mask:255.0.0.0
                  inet6 addr: ::1/128 Scope:Host
                  UP LOOPBACK RUNNING  MTU:16436  Metric:1
                  RX packets:1446 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:1446 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:472178 (472.1 KB)  TX bytes:472178 (472.1 KB)

        tun0      Link encap:UNSPEC  HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                  inet addr:10.10.11.10  P-t-P:10.10.11.9  Mask:255.255.255.255
                  UP POINTOPOINT RUNNING NOARP MULTICAST  MTU:1500  Metric:1
                  RX packets:14 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:100
                  RX bytes:1368 (1.3 KB)  TX bytes:1812 (1.8 KB)

    Here's an iptables script I found elsewhere that seemed to be for the problem I'm trying to solve, but it wound up blocking all access, and I'm not sure what I need to change:

        #!/bin/bash
        #Set variables
        IPT=/sbin/iptables
        VPN=`ifconfig|perl -nE'/dr:(\S+)/&&say$1'|grep 10.`
        LAN=192.168.0.0/24

        #Flush rules
        $IPT -F
        $IPT -X

        #Default policies and define chains
        $IPT -P OUTPUT DROP
        $IPT -P INPUT DROP
        $IPT -P FORWARD DROP

        #Allow input from LAN and tun0 ONLY
        $IPT -A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        $IPT -A INPUT -i lo -j ACCEPT
        $IPT -A INPUT -i tun0 -m conntrack --ctstate NEW -j ACCEPT
        $IPT -A INPUT -s $LAN -m conntrack --ctstate NEW -j ACCEPT
        $IPT -A INPUT -j DROP

        #Allow output from lo and tun0 ONLY
        $IPT -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
        $IPT -A OUTPUT -o lo -j ACCEPT
        $IPT -A OUTPUT -o tun0 -m conntrack --ctstate NEW -j ACCEPT
        $IPT -A OUTPUT -d $VPN -m conntrack --ctstate NEW -j ACCEPT
        $IPT -A OUTPUT -j DROP

        exit 0

    Thanks for your help.
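
    One possible reason the script blocks everything: with the OUTPUT policy at DROP, nothing is allowed out via eth0, neither LAN traffic nor the encrypted PPTP packets that carry the tunnel itself, so tun0 can never come back up. A hedged sketch of extra rules, to be placed before the final OUTPUT DROP (203.0.113.1 is a placeholder for the VPN server's public address):

        # allow traffic to the local LAN
        $IPT -A OUTPUT -o eth0 -d 192.168.0.0/24 -j ACCEPT

        # allow the PPTP control channel and GRE to the VPN server only
        $IPT -A OUTPUT -o eth0 -d 203.0.113.1 -p tcp --dport 1723 -j ACCEPT
        $IPT -A OUTPUT -o eth0 -d 203.0.113.1 -p gre -j ACCEPT

    That keeps general internet traffic blocked unless it goes through tun0, while the LAN and the tunnel endpoint stay reachable.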


  • HELP: MS Virtual Disk Service to access volumes and disks on the local machine

    - by Jibran Ahmed
    Hi, here it is my code through which I am successfully initialize the VDS service and get the Packs but When I call QueryVolumes on IVdsPack Object, I am able to get IEnumVdsObjects but unable to get IUnknown* array through IEnumVdsObject::Next method, it reutrns S_FALSE with IUnkown* = NULL. So this IUnknown* cant be used to QueryInterface for IVdsVolume Below is my code HRESULT hResult; IVdsService* pService = NULL; IVdsServiceLoader *pLoader = NULL; //Launch the VDS Service hResult = CoInitialize(NULL); if( SUCCEEDED(hResult) ) { hResult = CoCreateInstance( CLSID_VdsLoader, NULL, CLSCTX_LOCAL_SERVER, IID_IVdsServiceLoader, (void**) &pLoader ); //if succeeded load VDS on local machine if( SUCCEEDED(hResult) ) pLoader->LoadService(NULL, &pService); //Done with Loader now release VDS Loader interface _SafeRelease(pLoader); if( SUCCEEDED(hResult) ) { hResult = pService->WaitForServiceReady(); if ( SUCCEEDED(hResult) ) { AfxMessageBox(L"VDS Service Loaded"); IEnumVdsObject* pEnumVdsObject = NULL; hResult = pService->QueryProviders(VDS_QUERY_SOFTWARE_PROVIDERS, &pEnumVdsObject); IUnknown* ppObjUnk ; IVdsSwProvider* pVdsSwProvider = NULL; IVdsPack* pVdsPack = NULL; IVdsVolume* pVdsVolume = NULL; ULONG ulFetched = 0; hResult = E_INVALIDARG; while(!SUCCEEDED(hResult)) { hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched); hResult = ppObjUnk->QueryInterface(IID_IVdsSwProvider, (void**)&pVdsSwProvider); if(!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk); } _SafeRelease(pEnumVdsObject); _SafeRelease(ppObjUnk); hResult = pVdsSwProvider->QueryPacks(&pEnumVdsObject); hResult = E_INVALIDARG; while(!SUCCEEDED(hResult)) { hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched); hResult = ppObjUnk->QueryInterface(IID_IVdsPack, (void**)&pVdsPack); if(!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk); } _SafeRelease(pEnumVdsObject); _SafeRelease(ppObjUnk); hResult = pVdsPack->QueryVolumes(&pEnumVdsObject); pEnumVdsObject->Reset(); hResult = E_INVALIDARG; ulFetched = 0; BOOL bDone = FALSE; while(!SUCCEEDED(hResult)) { hResult = pEnumVdsObject->Next(1, &ppObjUnk, &ulFetched); //hResult = ppObjUnk->QueryInterface(IID_IVdsVolume, (void**)&pVdsVolume); if(!SUCCEEDED(hResult)) _SafeRelease(ppObjUnk); } _SafeRelease(pEnumVdsObject); _SafeRelease(ppObjUnk); _SafeRelease(pVdsPack); _SafeRelease(pVdsSwProvider); // hResult = pVdsVolume-AddAccessPath(TEXT("G:\")); if(SUCCEEDED(hResult)) AfxMessageBox(L"Add Access Path Successfully"); else AfxMessageBox(L"Unable to Add access path"); //UUID of IVdsVolumeMF {EE2D5DED-6236-4169-931D-B9778CE03DC6} static const GUID GUID_IVdsVolumeMF = {0xEE2D5DED, 0x6236, 4169,{0x93, 0x1D, 0xB9, 0x77, 0x8C, 0xE0, 0x3D, 0XC6} }; hResult = pService->GetObject(GUID_IVdsVolumeMF, VDS_OT_VOLUME, &ppObjUnk); if(hResult == VDS_E_OBJECT_NOT_FOUND) AfxMessageBox(L"Object Not found"); if(hResult == VDS_E_INITIALIZED_FAILED) AfxMessageBox(L"Initialization failed"); // pVdsVolume = reinterpret_cast(ppObjUnk); if(SUCCEEDED(hResult)) { // hResult = pVdsVolume-AddAccessPath(TEXT("G:\")); if(SUCCEEDED(hResult)) { IVdsAsync* ppVdsSync; AfxMessageBox(L"Formatting is about to Start......"); // hResult = pVdsVolume-Format(VDS_FST_UDF, TEXT("UDF_FORMAT_TEST"), 2048, TRUE, FALSE, FALSE, &ppVdsSync); if(SUCCEEDED(hResult)) AfxMessageBox(L"Formatting Started......."); else AfxMessageBox(L"Formatting Failed"); } else AfxMessageBox(L"Unable to Add Access Path"); } _SafeRelease(pVdsVolume); } else { AfxMessageBox(L"VDS Service Cannot be Loaded"); } } } _SafeRelease(pService);



  • Can't the ASP FileSystemObject access shared server paths?

    - by sushant
    I am using this code to access files and folders:

        <%@ Language=VBScript %>
        <% option explicit
        dim sRoot, sDir, sParent, objFSO, objFolder, objFile, objSubFolder, sSize %>
        <META content="Microsoft Visual Studio 6.0" name=GENERATOR>
        <!-- Author: Adrian Forbes -->
        <%
        sRoot = "D:\Raghu"
        sDir = Request("Dir")
        sDir = sDir & "\"
        Response.Write "<h1>" & sDir & "</h1>" & vbCRLF
        Set objFSO = CreateObject("Scripting.FileSystemObject")
        on error resume next
        Set objFolder = objFSO.GetFolder(sRoot & sDir)
        if err.number <> 0 then
            Response.Write "Could not open folder"
            Response.End
        end if
        on error goto 0
        sParent = objFSO.GetParentFolderName(objFolder.Path)
        ' Remove the contents of sRoot from the front. This gives us the parent
        ' path relative to the root folder
        ' eg. if parent folder is "c:\webfiles\subfolder1\subfolder2" then we just want "\subfolder1\subfolder2"
        sParent = mid(sParent, len(sRoot) + 1)
        Response.Write "<table border=""1"">"
        ' Give a link to the parent folder. This is just a link to this page, only passing in
        ' the new folder as a parameter
        Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sParent) & """>Parent folder</a></td></tr>" & vbCRLF
        ' Now we want to loop through the subfolders in this folder
        For Each objSubFolder In objFolder.SubFolders
            ' And provide a link to them
            Response.Write "<tr><td colspan=3><a href=""browse.asp?dir=" & Server.URLEncode(sDir & objSubFolder.Name) & """>" & objSubFolder.Name & "</a></td></tr>" & vbCRLF
        Next
        ' Now we want to loop through the files in this folder
        For Each objFile In objFolder.Files
            if Clng(objFile.Size) < 1024 then
                sSize = objFile.Size & " bytes"
            else
                sSize = Clng(objFile.Size / 1024) & " KB"
            end if
            ' And provide a link to view them. This is a link to show.asp, passing in the directory
            ' and the file as parameters
            Response.Write "<tr><td><a href=""show.asp?file=" & server.URLEncode(objFile.Name) & "&dir=" & server.URLEncode(sDir) & """>" & objFile.Name & "</a></td><td>" & sSize & "</td><td>" & objFile.Type & "</td></tr>" & vbCRLF
        Next
        Response.Write "</table>"
        %>

    It works fine, but when I try to access something on a shared path like "\\cvrdd0110:share" it gives an error. How do I access these files? (Sorry for the formatting issues.)


  • Identity Management Monday at Oracle OpenWorld

    - by Tanu Sood
    What a great start to Oracle OpenWorld! Did you catch Larry Ellison's keynote last evening? As expected, it was a packed house, and the keynote received a tremendous response both from the live audience and from the online community, as evidenced by the frequent spontaneous applause in house and the Twitter buzz. Here's but a sampling of some of the tweets that flowed in:

        @paulvallee: I freaking love that #oracle has been born again in it's interest in core tech #oow (so good for #pythian)
        @rwang0: MyPOV: #oracle just leapfrogged the competition on the tech front across the board. All they need is the content delivery network #oow12
        @roh1: LJE more astute & engaging this year. Nice announcements this year with 12c the MTDB sounding real good. #oow12
        @brooke: Cool to see @larryellison interrupted multiple times by applause from the audience. Great speaker. #OOW

    And there's lots more to come this week. Identity Management sessions kick off today. Here's a quick preview of what's in store for you today for Identity Management:

    CON9405: Trends in Identity Management
    10:45 a.m. – 11:45 a.m., Moscone West 3003
    Hear directly from subject matter experts from Kaiser Permanente and SuperValu, who will share the stage with Amit Jasuja, Senior Vice President, Oracle Identity Management and Security, to discuss how the latest advances in Identity Management that made it into Oracle Identity Management 11g Release 2 are helping customers address emerging requirements for securely enabling cloud, social and mobile environments.

    CON9492: Simplifying your Identity Management Implementation
    3:15 p.m. – 4:15 p.m., Moscone West 3008
    Implementation experts from British Telecom, Kaiser Permanente and UPMC participate in a panel to discuss best practices, key strategies and lessons learned based on their own experiences. Attendees will hear first-hand what they can do to streamline and simplify their identity management implementation framework for a quick return on investment and maximum efficiency. This session will also explore the architectural simplifications of Oracle Identity Governance 11gR2, focusing on how these enhancements simplify deployments.

    CON9444: Modernized and Complete Access Management
    4:45 p.m. – 5:45 p.m., Moscone West 3008
    We have come a long way from the days when web single sign-on addressed the core business requirements. Today, as technology and business evolve, organizations are seeking new capabilities like federation, token services, fine-grained authorization, web fraud prevention and strong authentication. This session will explore the emerging requirements for access management and what a complete solution looks like, complemented with real-world customer case studies from ETS, Kaiser Permanente and TURKCELL, and product demonstrations.

    HOL10478: Complete Access Management
    Monday, October 1, 1:45 p.m. – 2:45 p.m., Marriott Marquis - Salon 1/2
    And get your hands on the technology today. Register for and attend this Hands-On Lab session, which demonstrates Oracle's complete and scalable access management solution, including single sign-on, authorization, federation, and integration with social identity providers. The session also shows how to securely extend identity services to mobile applications and devices, all while leveraging a common set of policies and a single instance.

    Product Demonstrations: the latest technology in Identity Management is also being showcased in the Exhibition Hall, so do find some time to visit our product demonstrations there.
    Experts will be at hand to answer any questions.

    Demos (all in the Exhibition Hall):
    - Access Management: Complete and Scalable Access Management (Moscone South, Right - S-218)
    - Access Management: Federating and Leveraging Social Identities (Moscone South, Right - S-220)
    - Access Management: Mobile Access Management (Moscone South, Right - S-219)
    - Access Management: Real-Time Authorizations (Moscone South, Right - S-217)
    - Access Management: Secure SOA and Web Services Security (Moscone South, Right - S-223)
    - Identity Governance: Modern Administration and Tooling (Moscone South, Right - S-210)
    - Identity Management Monitoring with Oracle Enterprise Manager (Moscone South, Right - S-212)
    - Oracle Directory Services Plus: Performant, Cloud-Ready (Moscone South, Right - S-222)
    - Oracle Identity Management: Closed-Loop Access Certification (Moscone South, Right - S-221)

    Exhibition Hall hours:
    - Monday, October 1: 9:30 a.m.–6:00 p.m. (dedicated hours: 9:30 a.m.–10:45 a.m.)
    - Tuesday, October 2: 9:45 a.m.–6:00 p.m. (dedicated hours: 2:15 p.m.–2:45 p.m.)
    - Wednesday, October 3: 9:45 a.m.–4:00 p.m. (dedicated hours: 2:15 p.m.–3:30 p.m.)

    We recommend you keep the Focus on Identity Management document handy. And don't forget, if you are not on site, you can catch all the keynotes LIVE from the comfort of your desk on YouTube.com/Oracle. Keep the conversation going on @oracleidm. Use #OOW and #IDM and get engaged today.

    Photo Courtesy: @OracleOpenWorld


  • USB Hub and Ubuntu

    - by aserwin
    I have a powered 7 port hub connected to my Ubuntu box and it does nothing. The devices (zip drive and web cam) work direct, but aren't recognized through the hub. This worked fine in Windows 7. I can't prove it is the OS because this is a new motherboard and processor. Any advice? EDIT : Output from lsusb -v Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:12.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0503 highspeed power enable connect Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:13.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0002 2.0 root hub bcdDevice 3.02 
iManufacturer 3 Linux 3.2.0-32-generic ehci_hcd iProduct 2 EHCI Host Controller iSerial 1 0000:00:16.2 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x000a No power switching (usb 1.0) Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Device Status: 0x0001 Self Powered Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:12.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:13.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data 
wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 5 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Port 5: 0000.0100 power Device Status: 0x0001 Self Powered Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:14.5 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 2 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Device Status: 0x0001 Self Powered Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 0 Full speed (or root) hub bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 0x0001 1.1 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic ohci_hcd iProduct 2 OHCI Host Controller iSerial 1 0000:00:16.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0002 1x 2 bytes bInterval 255 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 4 wHubCharacteristic 0x0002 No power switching (usb 1.0) Ganged overcurrent protection bPwrOn2PwrGood 2 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0303 lowspeed power enable connect Port 2: 0000.0100 power Port 3: 0000.0100 power Port 4: 0000.0100 power Device Status: 0x0001 Self Powered Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 1 Single TT bMaxPacketSize0 64 idVendor 0x1d6b Linux Foundation idProduct 
0x0002 2.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:02:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 25 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 Hub Descriptor: bLength 9 bDescriptorType 41 nNbrPorts 2 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection TT think time 8 FS bits bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere DeviceRemovable 0x00 PortPwrCtrlMask 0xff Hub Port Status: Port 1: 0000.0100 power Port 2: 0000.0100 power Device Status: 0x0001 Self Powered Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 3.00 bDeviceClass 9 Hub bDeviceSubClass 0 Unused bDeviceProtocol 3 bMaxPacketSize0 9 idVendor 0x1d6b Linux Foundation idProduct 0x0003 3.0 root hub bcdDevice 3.02 iManufacturer 3 Linux 3.2.0-32-generic xhci_hcd iProduct 2 xHCI Host Controller iSerial 1 0000:02:00.0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 31 bNumInterfaces 1 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xe0 Self Powered Remote Wakeup MaxPower 0mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 9 Hub bInterfaceSubClass 0 Unused bInterfaceProtocol 0 Full speed (or root) hub iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0004 1x 4 bytes bInterval 12 bMaxBurst 0 Hub Descriptor: bLength 12 bDescriptorType 42 nNbrPorts 2 wHubCharacteristic 0x0009 Per-port power switching Per-port overcurrent protection bPwrOn2PwrGood 10 * 2 milli seconds bHubContrCurrent 0 milli Ampere bHubDecLat 0.0 micro seconds wHubDelay 0 nano seconds DeviceRemovable 0x00 Hub Port Status: Port 1: 0000.02a0 5Gbps power Rx.Detect Port 2: 0000.02a0 5Gbps power Rx.Detect Binary Object Store Descriptor: bLength 5 bDescriptorType 15 wTotalLength 15 bNumDeviceCaps 1 SuperSpeed USB Device Capability: bLength 10 bDescriptorType 16 bDevCapabilityType 3 bmAttributes 0x00 Latency Tolerance Messages (LTM) Supported wSpeedsSupported 0x0008 Device can operate at SuperSpeed (5Gbps) bFunctionalitySupport 3 Lowest fully-functional device speed is SuperSpeed (5Gbps) bU1DevExitLat 3 micro seconds bU2DevExitLat 2047 micro seconds Device Status: 0x0001 Self Powered Bus 001 Device 002: ID 04a9:1709 Canon, Inc. PIXMA MP150 Scanner Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 idVendor 0x04a9 Canon, Inc. 
idProduct 0x1709 PIXMA MP150 Scanner bcdDevice 1.08 iManufacturer 1 Canon iProduct 2 MP150 iSerial 3 20BC24 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 62 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xc0 Self Powered MaxPower 2mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 3 bInterfaceClass 255 Vendor Specific Class bInterfaceSubClass 0 bInterfaceProtocol 255 iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x07 EP 7 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x88 EP 8 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x89 EP 9 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0040 1x 64 bytes bInterval 11 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 2 bInterfaceClass 7 Printer bInterfaceSubClass 1 Printer bInterfaceProtocol 2 Bidirectional iInterface 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x01 EP 1 OUT bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 2 Transfer Type Bulk Synch Type None Usage Type Data wMaxPacketSize 0x0200 1x 512 bytes bInterval 0 Device Qualifier (for other device speed): bLength 10 bDescriptorType 6 bcdUSB 2.00 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 64 bNumConfigurations 1 Device Status: 0x0001 Self Powered Bus 007 Device 002: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser Device Descriptor: bLength 18 bDescriptorType 1 bcdUSB 1.10 bDeviceClass 0 (Defined at Interface level) bDeviceSubClass 0 bDeviceProtocol 0 bMaxPacketSize0 8 idVendor 0x046d Logitech, Inc. 
idProduct 0xc517 LX710 Cordless Desktop Laser bcdDevice 38.10 iManufacturer 1 Logitech iProduct 2 USB Receiver iSerial 0 bNumConfigurations 1 Configuration Descriptor: bLength 9 bDescriptorType 2 wTotalLength 59 bNumInterfaces 2 bConfigurationValue 1 iConfiguration 0 bmAttributes 0xa0 (Bus Powered) Remote Wakeup MaxPower 98mA Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 0 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 1 Keyboard iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 59 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x81 EP 1 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Interface Descriptor: bLength 9 bDescriptorType 4 bInterfaceNumber 1 bAlternateSetting 0 bNumEndpoints 1 bInterfaceClass 3 Human Interface Device bInterfaceSubClass 1 Boot Interface Subclass bInterfaceProtocol 2 Mouse iInterface 0 HID Device Descriptor: bLength 9 bDescriptorType 33 bcdHID 1.10 bCountryCode 0 Not supported bNumDescriptors 1 bDescriptorType 34 Report wDescriptorLength 177 Report Descriptors: ** UNAVAILABLE ** Endpoint Descriptor: bLength 7 bDescriptorType 5 bEndpointAddress 0x82 EP 2 IN bmAttributes 3 Transfer Type Interrupt Synch Type None Usage Type Data wMaxPacketSize 0x0008 1x 8 bytes bInterval 10 Device Status: 0x0000 (Bus Powered) This is with the powered hub plugged in.
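    (Editor's note: the listing above is ordinary verbose lsusb output. To produce the same view on your own machine, the stock usbutils commands below suffice; nothing here is specific to this poster's setup.)

        lsusb            # one line per device: bus, device number, VID:PID, name
        lsusb -t         # tree view showing which hub each device hangs off
        sudo lsusb -v    # full descriptors, including the Hub Port Status blocks above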

    Read the article

  • Virtualbox: host only networking - proxy internet connection

    - by Russell
    I'll ask my question first, then give details about where I'm coming from: is it possible to use host-only networking and then have Ubuntu act as a proxy to provide internet access to Windows? If so, how? I am trying to get the right combination of networking for my VirtualBox Windows guest VM (win7). My host is Ubuntu 10.10 (Maverick). I believe I understand the basic network options (please correct me if I'm wrong): NAT - the host can't communicate with the guest, but the guest reaches the internet through the host's adapters. Host-only - a separate adapter shared by host and guest, but the guest has no internet access. Bridged - bridges a host adapter with the virtual adapter, giving the guest direct access to the host's network. I am trying to give my Windows guest internet access, but also to reach the host over a separate network. Bridged only works when the host is connected to a network (this is a laptop), so when it's not connected the network is down. Thanks, I appreciate your help.
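    (Editor's note: one workable answer to the question as asked - keep the guest on a host-only adapter and run a small HTTP proxy on the Ubuntu host - is sketched below. This is a sketch under assumptions, not from the thread itself: the VM name "win7", the vboxnet0 address 192.168.56.1 and tinyproxy's default port 8888 would all need adapting.)

        # On the Ubuntu host: create and address a host-only interface
        VBoxManage hostonlyif create                               # typically creates vboxnet0
        VBoxManage hostonlyif ipconfig vboxnet0 --ip 192.168.56.1
        # Attach the guest's first NIC to it ("win7" is an assumed VM name)
        VBoxManage modifyvm "win7" --nic1 hostonly --hostonlyadapter1 vboxnet0
        # Install a lightweight HTTP proxy and allow the host-only subnet
        sudo apt-get install tinyproxy
        # In /etc/tinyproxy.conf set:  Listen 192.168.56.1  and  Allow 192.168.56.0/24
        sudo /etc/init.d/tinyproxy restart

    In the Windows guest, point the proxy settings at 192.168.56.1:8888. Only proxy-aware applications get connectivity this way; for transparent internet access you would set up NAT with iptables on the host instead.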

    Read the article

  • solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most catalog, file and certificate requests I get a 403 response. ### Authenticated paths - these apply only when the client ### has a valid certificate and is thus authenticated # allow nodes to retrieve their own catalog path ~ ^/catalog/([^/]+)$ method find allow $1 # allow nodes to retrieve their own node definition path ~ ^/node/([^/]+)$ method find allow $1 # allow all nodes to access the certificates services path ~ ^/certificate_revocation_list/ca method find allow * # allow all nodes to store their reports path /report method save allow * # unconditionally allow access to all file services # which means in practice that fileserver.conf will # still be used path /file allow * ### Unauthenticated ACL, for clients for which the current master doesn't ### have a valid certificate; we allow authenticated users, too, because ### there isn't a great harm in letting that request through. # allow access to the master CA path /certificate/ca auth any method find allow * path /certificate/ auth any method find allow * path /certificate_request auth any method find, save allow * path /facts auth any method find, search allow * # this one is not strictly necessary, but it has the merit # of showing the default policy, which is deny everything else path / auth any The puppet master, however, does not seem to be following this, as I get this error on the client: [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com [sudo] password for amisr1: Starting Puppet client version 3.0.1 Warning: Unable to fetch my node definition, but the agent run will continue: Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110 Info: Retrieving plugin Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110 Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110 Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110 Using cached catalog Error: Could not retrieve catalog; skipping run Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110 and the server logs show XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby" XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"
    The fileserver conf file is as follows (and going by what they say on the puppet site, it is better to regulate access in auth.conf and then let the file server serve everything): [files] path /apps/puppet/files allow * [private] path /apps/puppet/private/%H allow * [modules] allow * I am using server and client version 3. Nginx has been compiled using the following options: nginx version: nginx/1.3.9 built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/ and the standard nginx puppet master conf: server { ssl on; listen 8140 ssl; server_name _; passenger_enabled on; passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn; passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify; passenger_min_instances 5; access_log logs/puppet_access.log; error_log logs/puppet_error.log; root /apps/nginx/html/rack/public; ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem; ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem; ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem; ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem; ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA; ssl_prefer_server_ciphers on; ssl_verify_client optional; ssl_verify_depth 1; ssl_session_cache shared:SSL:128m; ssl_session_timeout 5m; } Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet: [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf async_storeconfigs = false authconfig = /etc/puppet/namespaceauth.conf autosign = /etc/puppet/autosign.conf catalog_cache_terminus = store_configs confdir = /etc/puppet config = /etc/puppet/puppet.conf config_file_name = puppet.conf config_version = "" configprint = all configtimeout = 120 dblocation = /var/lib/puppet/state/clientconfigs.sqlite3 deviceconfig = /etc/puppet/device.conf fileserverconfig = /etc/puppet/fileserver.conf genconfig = false hiera_config = /etc/puppet/hiera.yaml localconfig = /var/lib/puppet/state/localconfig name = config rest_authconfig = /etc/puppet/auth.conf storeconfigs = true storeconfigs_backend = puppetdb tagmap = /etc/puppet/tagmail.conf thin_storeconfigs = false I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak anything specific in auth.conf to get this to work?
    Update: I added verbose logging to the puppet master and restarted Nginx; here's the additional info I see in the logs: Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31 Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111 10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby" On the agent machine, facter fqdn and hostname both return a fully qualified host name: [amisr1@blramisr195602 ~]$ sudo facter fqdn blramisr195602.XXXXXXX.com I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, regenerated the certificates and signed them on the master using the option --allow-dns-alt-names: [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request. [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com Signed certificate request for blramisr195602.XXXXXX.com Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem' However, that doesn't help either; I get the same errors as before. I am not sure why the logs show access rules being compared against the IP rather than the hostname. Is there any Nginx configuration to change this behavior?
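    (Editor's note on the fix, since the title marks this solved: when the master runs under Nginx/Passenger, Puppet only learns the client's certname from the headers the web server passes; if those don't line up with what Puppet reads, it falls back to the IP address, which is exactly what the "defaulting to no access for 10.209.47.31" line shows. A hedged sketch of the puppet.conf change, with the header names taken from the passenger_set_cgi_param lines in the Nginx config quoted above - verify against your own setup:)

        [master]
        ssl_client_header = HTTP_X_CLIENT_DN
        ssl_client_verify_header = HTTP_X_CLIENT_VERIFY

    A coarser workaround, if name-based matching still fails, is Puppet 3's allow_ip directive in auth.conf (e.g. allow_ip 10.209.47.0/24 under the failing paths).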

    Read the article

  • Useful SVN and Git commands – Cheatsheet

    - by Madhan ayyasamy
    The following commands will be helpful to anyone who uses version control systems like Git and SVN.

    SVN:
    svn checkout/co checkout-url – used to pull an SVN tree from the server.
    svn update/up – used to update the local copy with the changes made in the repository.
    svn commit/ci -m "message" filename – used to commit the changes in a file to the repository with a message.
    svn diff filename – shows the differences between your current file and what is in the repository.
    svn revert filename – overwrites the local file with the one in the repository.
    svn add filename – adds a file to the repository; you must commit your changes before they are reflected in the repository.
    svn delete filename – deletes a file from the repository; you must commit your changes before they are reflected in the repository.
    svn move source destination – moves a file from one directory to another or renames a file. It affects your local copy immediately, and the repository after committing.

    Git:
    git config – sets configuration values for your user name, email, file formats and more.
    git init – initializes a git repository – creates the initial '.git' directory in a new or an existing project.
    git clone – makes a copy of a Git repository from a remote source. Also adds the original location as a remote so you can fetch from it again and push to it if you have permissions.
    git add – adds file changes in your working directory to your index.
    git rm – removes files from your index and your working directory so they will not be tracked.
    git commit – takes all of the changes written in the index, creates a new commit object pointing to it and sets the branch to point to that new commit.
    git status – shows the status of files in the index versus the working directory.
    git branch – lists existing branches, including remote branches if '-a' is provided. Creates a new branch if a branch name is provided.
    git checkout – checks out a different branch – switches branches by updating the index, working tree, and HEAD to reflect the chosen branch.
    git merge – merges one or more branches into your current branch and automatically creates a new commit if there are no conflicts.
    git reset – resets your index and working directory to the state of your last commit.
    git tag – tags a specific commit with a simple, human-readable handle that never moves.
    git pull – fetches the files from the remote repository and merges them with your local one.
    git push – pushes all the modified local objects to the remote repository and advances its branches.
    git remote – shows all the remote versions of your repository.
    git log – shows a listing of commits on a branch including the corresponding details.
    git show – shows information about a git object.
    git diff – generates patch files or statistics of differences between paths or files in your git repository, or your index or your working directory.
    gitk – graphical Tcl/Tk based interface to a local Git repository.

    A sketch of a typical workflow built from a few of these Git commands follows.
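    As a quick illustration of how several of the Git commands above fit together, here is a minimal day-to-day flow; the repository URL and the branch name feature-x are placeholders, not anything from the original cheatsheet:

        git clone https://example.com/project.git   # copy the remote repository locally
        cd project
        git checkout -b feature-x                   # create and switch to a working branch
        # ...edit files...
        git status                                  # review what changed
        git add .                                   # stage the edits into the index
        git commit -m "Describe the change"         # record them as a local commit
        git pull origin master                      # fetch and merge upstream changes
        git push origin feature-x                   # publish the branch to the remote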

    Read the article

  • Introduction to LinqPad Driver for StreamInsight 2.1

    - by Roman Schindlauer
    We are announcing the availability of the LinqPad driver for StreamInsight 2.1. The purpose of this blog post is to offer a quick introduction to the new features that we added to the StreamInsight LinqPad driver. We’ll show you how to connect to a remote server, how to inspect the entities present on that server, how to compose on top of them and how to manage their lifetime. Installing the driver Info on how to install the driver can be found in an earlier blog post here. Establishing connections As you click on the “Add Connection” link in the left pane you will notice that now it’s possible to build the data context automatically. The new driver appears as an option in the upper list, and if you pick it you will open a connection dialog that lets you connect to a remote StreamInsight server. The connection dialog lets you specify the address of the remote server. You will notice that it’s possible to pick up the binding information from the configuration file of the LinqPad application (which is normally in the same folder as LinqPad.exe and is called LinqPad.exe.config). In order for the context to be generated you need to pick an application from the server. The control is editable, hence you can create a new application if you don’t want to make changes to an existing application. If you choose a new application name you will be prompted for confirmation before this gets created. Once you click OK the connection is created and you can start issuing queries against the remote server. If there’s any connectivity error the connection is marked with a red X and you can see the error message informing you what went wrong (i.e., the remote server could not be reached etc.). The context for remote servers Let’s take a look at what happens after we are connected successfully. Every LinqPad query runs inside a context – think of it as a class that wraps all the code that you’re writing. If you’re connecting to a live server the context will contain the following: The application object itself. All entities present in this application (sources, sinks, subjects and processes). The picture below shows a snapshot of the left pane of LinqPad after a successful connection. Every entity on the server has a different icon which will allow users to figure out its purpose. You will also notice that some entities have a string in parentheses following the name. It should be interpreted as such: the first name is the name of the property of the context class and the second name is the name of the entity as it exists on the server. Not all valid entity names are valid identifier names, so in cases where we had to make a transformation you see both. Note also that as you hover over the entities you get IntelliSense with their types – more on that later. Remoting is not supported As you play with the entities exposed by the context you will notice that you can’t read and write directly to/from them. If for instance you’re trying to dump the content of an entity you will get an error message telling you that in the current version remoting is not supported. This is because the entity lives on the remote server and dumping its content means reading the events produced by this entity into the local process. ObservableSource.Dump(); Will yield the following error: Reading from a remote 'System.Reactive.Linq.IQbservable`1[System.Int32]' is not supported. Use the 'Microsoft.ComplexEventProcessing.Linq.RemoteProvider.Bind' method to read from the source using a remote observer. 
    This basically tells you that you can call the Bind() method to direct the output of this source to a sink that has to be defined on the remote machine as well. You can’t bring the results to the LinqPad window unless you write code specifically for that. Compose queries You may ask – what's the purpose of all that? After all the same information is present in the EventFlowDebugger, why bother with showing it in LinqPad? First of all, what gets exposed in LinqPad is not what you see in the debugger. In LinqPad we have a property on the context class for every entity that lives on the server. Because LinqPad offers IntelliSense we in fact have much more information about the entity, and more importantly we can compose with that entity very easily. For example, let’s say that this code creates an entity: using (var server = Server.Connect(...)) {     var a = server.CreateApplication("WhiteFish");     var src = a         .DefineObservable<int>(() => Observable.Range(0, 3))         .Deploy("ObservableSource"); If later we want to compose with the source we have to fetch it and then we can bind something to     a.GetObservable<int>("ObservableSource").Bind(... This means that we had to know a bunch of things about this: that it’s a source, that it’s an observable, it produces a result with payload Int32 and it’s named “ObservableSource”. Only the second and last bits of information are present in the debugger, by the way. As you type in the query window you see that all the entities are present, you get IntelliSense support for them and it’s much easier to make sense of what’s available. Let’s look at a scenario where composition is plausible. With the new programming model it’s possible to create “cold” sources that are parameterized. There was a way to accomplish that even in the previous version by passing parameters to the adapters, but this time it’s much more elegant because the expression declares what parameters are required. Say that we hover the mouse over the ThrottledSource source – we will see that its type is Func<int, int, IQbservable<int>> - this in effect means that we need to pass two int parameters before we can get a source that produces events, and the type for those events is int – in the particular case of my example I had the source produce a range of integers and the two parameters were the start and end of the range. So we see how a developer can create a source that is not running yet. Then someone else (e.g. an administrator) can pass whatever parameters are appropriate and run the process. Proxy Types Here’s an interesting scenario – what if someone created a source on a server but they forgot to tell you what type they used. Worse yet, they might have used an anonymous type, and even though they can refer to it by name you can’t figure out how to use that type. Let’s walk through an example that shows how you can compose against types you don’t need to have the definition of. This is how we can create a source that returns an anonymous type: Application.DefineObservable(() => Observable.Range(1, 10).Select(i => new { I = i })).Deploy("O1"); Now if we refresh the connection we can see the new source named O1 appear in the list. But what’s more important is that we now have a type to work with. So we can compose a query that refers to the anonymous type. 
    var threshold = new StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0<int>(5); var filter = from i in O1              where i > threshold              select i; filter.Deploy("O2"); You will notice that the anonymous type defined with this statement: new { I = i } can now be manipulated by a client that does not have access to it, because the LinqPad driver has generated another type in its stead, named StreamInsightDynamicDriver.TypeProxies.AnonymousType1_0. This type has all the properties and fields of the type defined on the server, except in this case we can instantiate values and use it to compose more queries. It is worth noting that the same thing works for types that are not anonymous – the test is whether the LinqPad driver can resolve the type or not. If it’s not possible then a new type will be generated that approximates the type that exists on the server. Control metadata In addition to composing processes on top of the existing entities we can do other useful things. We can delete them – nothing new here, as we simply access the entities through the Entities collection of the application class. Here is where having their real name in parentheses comes in handy. There’s another way to find out what’s behind a property – dump its expression. The first line in the output tells us the name of the entity used to build this property in the context. Runtime information So let’s create a process to see what happens. We can bind a source to a sink and run the resulting process. If you right click on the connection you can refresh it and see the process present in the list of entities. Then you can drag the process to the query window and see that you have access to the process object in the Processes collection of the application. You can then manipulate the process (delete it, read its diagnostic view etc.). Regards, The StreamInsight Team

    Read the article

  • Problem: libc.so.6 version GLIBC_2.14 not found

    - by alessio
    I have installed Nagios Core 4 on Ubuntu Server 12.04 LTS. Everything is working fine, but I have a problem with remote commands to a remote Linux (Ubuntu Server 12.04) machine. When I try to check a service, for example check_swap, check_disk etc., I get this error every time: Remote command execution failed: /home/nagios/plugins/check_disk: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.14' not found (required by /home/nagios/plugins/check_disk). The remote PC is not mine and I don't want to cause a disaster! :) So... how can I fix this problem? Any help is appreciated! Thanks in advance!
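    (Editor's note: the message means the check_disk binary was compiled against a newer glibc than the one installed on the remote machine. The least invasive fix is usually to rebuild the plugins on the remote host itself, so they link against its own libc. A rough sketch - the plugin version, download URL and install prefix are assumptions to adapt, not details from the question:)

        # On the remote host, in a scratch directory:
        wget https://nagios-plugins.org/download/nagios-plugins-1.5.tar.gz   # pick a current release
        tar xzf nagios-plugins-1.5.tar.gz && cd nagios-plugins-1.5
        ./configure --prefix=/home/nagios    # build against the remote host's own glibc
        make
        make install                         # installs the checks under /home/nagios/libexec
        # Point the remote command definitions at the freshly built check_disk
        # (e.g. /home/nagios/libexec/check_disk) instead of the copied binary.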

    Read the article

  • ResponseStatusLine protocol violation

    - by Tom Hines
    I parse/scrape a few web pages every now and then and recently ran across an error that stated: "The server committed a protocol violation. Section=ResponseStatusLine". After a few web searches, I found a couple of suggestions – one of which said the problem could be fixed by changing the HttpWebRequest ProtocolVersion to 1.0 with the command:

        HttpWebRequest req = (HttpWebRequest)HttpWebRequest.Create(strURI);
        req.ProtocolVersion = HttpVersion.Version10;

    …but that did not work in my particular case. What DID work was the next suggestion I found, which was to use the setting "useUnsafeHeaderParsing", either in the app.config file or programmatically. If added to the app.config, it would be:

        <!-- after the applicationSettings -->
        <system.net>
          <settings>
            <httpWebRequest useUnsafeHeaderParsing="true"/>
          </settings>
        </system.net>

    If done programmatically, it would look like this:

    C++:

        // UUHP_CPP.h
        #pragma once
        using namespace System;
        using namespace System::Reflection;

        namespace UUHP_CPP
        {
            public ref class CUUHP_CPP
            {
            public:
                static bool UseUnsafeHeaderParsing(String^% strError)
                {
                    Assembly^ assembly = Assembly::GetAssembly(System::Net::Configuration::SettingsSection::typeid); //__typeof
                    if (nullptr == assembly)
                    {
                        strError = "Could not access Assembly";
                        return false;
                    }

                    Type^ type = assembly->GetType("System.Net.Configuration.SettingsSectionInternal");
                    if (nullptr == type)
                    {
                        strError = "Could not access internal settings";
                        return false;
                    }

                    Object^ obj = type->InvokeMember("Section",
                        BindingFlags::Static | BindingFlags::GetProperty | BindingFlags::NonPublic,
                        nullptr, nullptr, gcnew array<Object^,1>(0));

                    if (nullptr == obj)
                    {
                        strError = "Could not invoke Section member";
                        return false;
                    }

                    FieldInfo^ fi = type->GetField("useUnsafeHeaderParsing", BindingFlags::NonPublic | BindingFlags::Instance);
                    if (nullptr == fi)
                    {
                        strError = "Could not access useUnsafeHeaderParsing field";
                        return false;
                    }

                    if (!(bool)fi->GetValue(obj))
                    {
                        fi->SetValue(obj, true);
                    }

                    return true;
                }
            };
        }

    C# (CSharp):

        using System;
        using System.Reflection;

        namespace UUHP_CS
        {
            public class CUUHP_CS
            {
                public static bool UseUnsafeHeaderParsing(ref string strError)
                {
                    Assembly assembly = Assembly.GetAssembly(typeof(System.Net.Configuration.SettingsSection));
                    if (null == assembly)
                    {
                        strError = "Could not access Assembly";
                        return false;
                    }

                    Type type = assembly.GetType("System.Net.Configuration.SettingsSectionInternal");
                    if (null == type)
                    {
                        strError = "Could not access internal settings";
                        return false;
                    }

                    object obj = type.InvokeMember("Section",
                        BindingFlags.Static | BindingFlags.GetProperty | BindingFlags.NonPublic,
                        null, null, new object[] { });

                    if (null == obj)
                    {
                        strError = "Could not invoke Section member";
                        return false;
                    }

                    // If it's not already set, set it.
                    FieldInfo fi = type.GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic | BindingFlags.Instance);
                    if (null == fi)
                    {
                        strError = "Could not access useUnsafeHeaderParsing field";
                        return false;
                    }

                    if (!Convert.ToBoolean(fi.GetValue(obj)))
                    {
                        fi.SetValue(obj, true);
                    }

                    return true;
                }
            }
        }

    F# (FSharp):

        namespace UUHP_FS
        open System
        open System.Reflection
        module CUUHP_FS =
            let UseUnsafeHeaderParsing(strError : byref<string>) : bool =
                let assembly : Assembly = Assembly.GetAssembly(typeof<System.Net.Configuration.SettingsSection>)
                if (null = assembly) then
                    strError <- "Could not access Assembly"
                    false
                else

                let myType : Type = assembly.GetType("System.Net.Configuration.SettingsSectionInternal")
                if (null = myType) then
                    strError <- "Could not access internal settings"
                    false
                else

                let obj : Object = myType.InvokeMember("Section", BindingFlags.Static ||| BindingFlags.GetProperty ||| BindingFlags.NonPublic, null, null, Array.zeroCreate 0)
                if (null = obj) then
                    strError <- "Could not invoke Section member"
                    false
                else

                // If it's not already set, set it.
                let fi : FieldInfo = myType.GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic ||| BindingFlags.Instance)
                if (null = fi) then
                    strError <- "Could not access useUnsafeHeaderParsing field"
                    false
                else

                if (not(Convert.ToBoolean(fi.GetValue(obj)))) then
                    fi.SetValue(obj, true)
                // Now return true
                true

    VB (Visual Basic):

        Option Explicit On
        Option Strict On
        Imports System
        Imports System.Reflection

        Public Class CUUHP_VB
            Public Shared Function UseUnsafeHeaderParsing(ByRef strError As String) As Boolean

                Dim assembly As [Assembly]
                assembly = [assembly].GetAssembly(GetType(System.Net.Configuration.SettingsSection))

                If (assembly Is Nothing) Then
                    strError = "Could not access Assembly"
                    Return False
                End If

                Dim type As Type
                type = [assembly].GetType("System.Net.Configuration.SettingsSectionInternal")
                If (type Is Nothing) Then
                    strError = "Could not access internal settings"
                    Return False
                End If

                Dim obj As Object
                obj = [type].InvokeMember("Section", _
                    BindingFlags.Static Or BindingFlags.GetProperty Or BindingFlags.NonPublic, _
                    Nothing, Nothing, New [Object]() {})

                If (obj Is Nothing) Then
                    strError = "Could not invoke Section member"
                    Return False
                End If

                ' If it's not already set, set it.
                Dim fi As FieldInfo
                fi = [type].GetField("useUnsafeHeaderParsing", BindingFlags.NonPublic Or BindingFlags.Instance)
                If (fi Is Nothing) Then
                    strError = "Could not access useUnsafeHeaderParsing field"
                    Return False
                End If

                If (Not Convert.ToBoolean(fi.GetValue(obj))) Then
                    fi.SetValue(obj, True)
                End If

                Return True
            End Function
        End Class

    Technorati Tags: C++,CPP,VB,Visual Basic,F#,FSharp,C#,CSharp,ResponseStatusLine,protocol violation

    Read the article

< Previous Page | 324 325 326 327 328 329 330 331 332 333 334 335  | Next Page >