Search Results

Search found 19970 results on 799 pages for 'external tools'.

Page 64/799 | < Previous Page | 60 61 62 63 64 65 66 67 68 69 70 71  | Next Page >

  • client flips between internal and external IP addresses??

    - by jmiller-miramontes
    I have what seems like a not-particularly-complicated home network, all things considered: a DSL line comes in to a modem/router, which goes off to a switch, which supports a bunch of machines. My machines live in a 192.168.0.x address space; however, I'm running some public servers on the network, so I have a block of 8 (5, really) static IP addresses that are mapped to the servers by the router. The non-servers get 192.168.0.x addresses via NAT; some machines have static addresses and some get addresses from DHCP. Locally, I'm running a DNS server (named) to map between the domain names and the 192.168 address space. Somewhat messy, but everything basically works. Except: One of my local non-server clients occasionally switches from its internal address to its external address. That is, if I check the logs of a website I'm running internally, the hits coming from this client sometimes show up with the internal 192.168 address, and sometimes with the external (216.103...) address. It will flip back and forth for no apparent reason, without my doing anything. This can be a problem in terms of how the clients interact with the way I have some of the clients' SSH systems configured (e.g., allowing access from the internal network but not the external network), but it also Just Seems Wrong. I will confess that I'm kinda skating on the very edge of my networking competence here, but I can't for the life of me figure out what's going on. If it helps, the client in question is running Mac OS X / 10.6; its address is statically assigned, is not one of the five externally-accessible addresses, and gets its DNS from (first) the internal DNS server and (second) my ISP's DNS servers. I can't swear that none of the other NAT clients are also showing this problem; the one I'm dealing with is my everyday machine, so this is where I run into it. Does anybody out there have any advice? This is driving me crazy...
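    One way to catch the flip in the act is to log, every minute or so, what the internal site's hostname resolves to and which local source address the client would use to reach it. Below is a minimal sketch of that idea in Python; the hostname and server address are placeholders (assumptions), not the actual names in the setup described above. Watching whether the resolved address ever changes to the external one would at least narrow the problem down to name resolution.

        #!/usr/bin/env python
        """Log what a hostname resolves to and which local address would be
        used to reach the server. Hostname and IP below are placeholders,
        not taken from the original post."""
        import socket
        import time

        HOSTNAME = "www.home.lan"      # hypothetical internal site name
        SERVER_IP = "192.168.0.10"     # hypothetical internal server address

        while True:
            try:
                resolved = socket.gethostbyname(HOSTNAME)
            except socket.gaierror as exc:
                resolved = "lookup failed: %s" % exc
            # Connecting a UDP socket sends no traffic; it just lets the OS
            # pick the local address it would use to reach the server.
            s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            s.connect((SERVER_IP, 80))
            local_addr = s.getsockname()[0]
            s.close()
            print("%s  %s -> %s  (local source %s)"
                  % (time.strftime("%H:%M:%S"), HOSTNAME, resolved, local_addr))
            time.sleep(60)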

    Read the article

  • Juniper router dropping pings to external interface

    - by Alexander Garden
    My organization has a Juniper SSG20-WLAN that routes our traffic to the outside world. We've been having intermittent problems with our internet connection, so I wrote up a Python script to ping the internal interface of the router, the external interface, a couple of our internal servers, the ISP router our router talks to, their upstream provider, and Google and Yahoo for good measure. It does that about every minute. What I have found is that when our internet goes out, our Juniper router ceases responding to pings on the external interface. Everything past that is, of course, unreachable. The internal interface and our internal servers continue to echo back without interruption. None of the counters indicate dropped packets of any type; they all look normal. The logs complain about VIP servers being unavailable but otherwise show nothing indicative of network issues. My questions are these: Does this exonerate our ISP? Or, contrariwise, might a problem with the connection be causing the external interface to go down? Is there somewhere else in the SSG20, besides the system log and counters, that might help me track down info on the problem?

    UPDATE: It turned out that one of the switches between my monitoring box and the router was a router itself, and was occasionally diverting traffic from the gateway to itself. Kudos to those who made suggestions along those lines. Not really sure which answer to mark as accepted, as it was really stuff in the comments that turned out to be right. Thanks for the suggestions.
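    For reference, here is a minimal sketch of the kind of once-a-minute ping monitor described above, assuming Python 3 on a Linux host (the ping flags differ on other platforms); the target addresses are illustrative placeholders, not the poster's actual hosts:

        #!/usr/bin/env python
        """Ping a list of targets once a minute and log any failures.
        Targets are placeholders, not the original poster's addresses."""
        import subprocess
        import time

        TARGETS = [
            "192.168.1.1",      # router, internal interface (example)
            "203.0.113.1",      # router, external interface (example)
            "8.8.8.8",          # an outside host for good measure
        ]

        def ping(host):
            """Return True if a single ICMP echo succeeds (Linux ping syntax)."""
            return subprocess.call(
                ["ping", "-c", "1", "-W", "2", host],
                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL) == 0

        while True:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            for host in TARGETS:
                if not ping(host):
                    print("%s  no reply from %s" % (stamp, host))
            time.sleep(60)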

    Read the article

  • Nvidia GTS 360M in Asus laptop shuts off when gaming on external display

    - by mr odus
    I have had this problem on my Asus G51J (Intel i5, Nvidia GTS 360M graphics card). I just updated the drivers to the latest from Nvidia's site. With most games I play, if I hook the laptop up to an external monitor (this happens whether it's HDMI or VGA), the laptop hard shuts down after about 20 minutes, give or take. This tends to happen with more graphically intense games like the Call of Duty titles and BioShock. I'm running Windows 7 with the latest Nvidia drivers. Games work fine on the laptop's own screen, and movies and general computing work fine on the external displays. The laptop always sits on a cooling pad, and the latest time it was in front of my AC unit; I ran heatfan or whatever the heat-tracking software was, and my temperatures were normal right through a shutdown. This has been happening for the life of the laptop. I don't play very many games, and even fewer on an external monitor, so this issue doesn't come up much. Is it possible I have a faulty graphics card? Is there anything else I can try?

    Read the article

  • Using native resolution on external display results in stretched, out of bounds image

    - by Roni Yaniv
    I have an HP Mini 311 netbook with Windows XP, which I've connected to a Samsung SyncMaster 2043BW display via the supplied analog cable. The external display's native res is 1680x1050, which the netbook's ION GPU supports. I've configured the external display as the single display (no cloning or any such fancy stuff). However, once I set the native res, the image just stretches out. It looks squashed, and it goes outside the monitor's edges. In contrast, lower resolutions manage to stay within the monitor's display edges, though obviously they are skewed in some way (vertically or horizontally). BTW, the only res which seems to be displayed relatively clearly (it's the least blurry) is 1280x720. I tried looking all over the web for an explanation or advice but could not find any. I already played with the settings on the external display itself several times, so either it's not that, or I missed something. Has someone run into this issue? I need help.

    Read the article

  • Sharing a folder with Nautilus and NTFS external drive gets errors

    - by TheLQ
    I am trying to share a folder in Lubuntu over the network; the folder lives on an external NTFS drive. Due to the system I have (rotating backup disks), this is probably only the second time the drive has been mounted. It's mounted manually with a simple (for example) mount /dev/sdb1 /media/BACKUP. On an internal NTFS disk I have successfully set up a network share and can access it. On the external disk, however, I can't reach the share from any other Windows computer. When setting up the share, Nautilus said that it needed to change the folder's permissions to allow other users to write. Afterwards, however, the setting is still blank, and changing it to Read and Write just changes back to blank. Chowning the entire /media folder recursively didn't work. Running PCManFM as root and changing the permissions didn't work. Adding "public=yes" to smb.conf and restarting didn't work. I'm out of ideas on what to do. What's weird is that it worked just fine on an internal NTFS disk, so why not the external one? Any solution needs to be manageable inside a GUI (preferably Nautilus), as the person managing the machine isn't as tech savvy. Thanks

    Read the article

  • Explorer.exe hangs up when moving large files to an external drive

    - by PiotrK
    While moving large files (700 MB+) to an NTFS-formatted external drive over USB 3.0, I've noticed strange behaviour from explorer.exe (I am using Windows 7, fully up to date). Sometimes after moving a file, Explorer gets stuck (it can happen after a few files when moving several large ones): the move-progress window freezes and I am unable to kill Explorer (via Task Manager or TASKKILL on the command line). On the command line I get something like this (Task Manager shows that explorer.exe is still running; I get the same PID every time I try to kill it, and no diagnostic message):

        C:\Windows\system32>TASKKILL /F /IM explorer.exe
        SUKCES: proces "explorer.exe" o identyfikatorze PID 6296 zostal zakonczony.
        C:\Windows\system32>TASKKILL /F /IM explorer.exe
        SUKCES: proces "explorer.exe" o identyfikatorze PID 6296 zostal zakonczony.

    (The Polish output reads "SUCCESS: the process "explorer.exe" with PID 6296 has been terminated.") If I try to start another explorer.exe process at this point, I get the desktop icons and taskbar back, but I cannot open any Explorer window. After a few minutes explorer.exe finally dies and I am able to rerun it without rebooting. The file that I moved has two copies, one local and one on the external drive (the original file wasn't deleted after the move); both copies seem to contain the same data (same length and CRC info). If this happens while moving multiple files, only some of them are moved and one of them ends up with two copies (both locally and on the external drive). What can I do to fix these Explorer hangs?

    Added: The same problem exists when copying files; it hangs up between large files. A similar problem occurs with Total Commander (x64): copying paused at 80% of one of the files, and TC didn't hang (but clicking Cancel in the copy dialog box had no effect). During this pause I can't kill TotalCmd.exe, just as with explorer.exe.

    Added (2): The problem seems to disappear when I use 32-bit applications (like Total Commander (x86)), but I need to do more testing to be sure of this.

    Added (3): There are several errors in the event log - source: disk, id: 11, qualifiers: 49156, task: 0, level: 2, keywords: 0x80000000000000. (This may be important, and I forgot to mention it): the main disk is encrypted via TrueCrypt (pre-boot password).
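    Incidentally, the length-plus-CRC comparison the poster describes can be scripted. Here is a minimal sketch using only Python's standard library; the file paths are placeholders, not the actual files involved:

        #!/usr/bin/env python
        """Compare two files by size and CRC32. Paths are placeholders."""
        import os
        import zlib

        def crc32_of(path, chunk_size=1024 * 1024):
            """Stream the file in chunks so large files don't fill memory."""
            crc = 0
            with open(path, "rb") as fh:
                while True:
                    chunk = fh.read(chunk_size)
                    if not chunk:
                        break
                    crc = zlib.crc32(chunk, crc)
            return crc & 0xFFFFFFFF

        local = r"C:\data\bigfile.iso"         # hypothetical local copy
        remote = r"F:\backup\bigfile.iso"      # hypothetical copy on the external drive

        print("sizes match:", os.path.getsize(local) == os.path.getsize(remote))
        print("CRC32 match:", crc32_of(local) == crc32_of(remote))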

    Read the article

  • Set document root for external subdomain (A Record) via htaccess

    - by 1nsane
    I have a managed server (unable to control Apache settings) with the default document root of /var/www. I have a web app running in /var/www/subdomains/app/webroot. I have a dedicated domain managed by the host that uses the aforementioned webroot, which works perfectly. I would like to allow externally provisioned domains to point to the server/web app via A-record config. If I access the site via IP, it takes me to the index located in /var/www. I would like to configure the .htaccess in my /var/www directory to rewrite requests from the external subdomain to the /var/www/subdomains/app/webroot directory. I've done so using the following rules:

        RewriteCond %{HTTP_HOST} external\.domain\.com$ [NC]
        RewriteRule ^(.*)$ /var/www/subdomains/app/webroot/index.php?url=$1 [L,QSA]

    When accessing external.domain.com, the app loads properly, but the paths to things like CSS files, images, etc. are prefixed with "/subdomains/app/", causing broken links. I've tried changing the RewriteBase (both in /var/www and /var/www/subdomains/app/webroot), as I believe that's what it's designed for - but no luck. Any ideas? FYI, the app is built on CakePHP. Thanks

    Read the article

  • T4 Template error - Assembly Directive cannot locate referenced assembly in Visual Studio 2010 project

    - by CodeSniper
    I ran into the following error recently in Visual Studio 2010 while trying to port Phil Haack's excellent T4CSS template, which was originally built for Visual Studio 2008.

    The Problem

        Error Compiling transformation: Metadata file 'dotless.Core' could not be found

    In "T4 speak", this simply means that you have an Assembly directive in your T4 template but the T4 engine was not able to locate or load the referenced assembly. In the case of the T4CSS template, this was a showstopper for making it work in Visual Studio 2010.

    On a side note: the T4CSS template is a sweet little wrapper that lets you use DotLessCss to generate static .css files from .less files rather than using their default HttpHandler or command-line tool. If you haven't tried DotLessCss yet, go check it out now! In short, it is a tool that allows you to templatize and program your CSS files, so that you can use variables, expressions, and mixins within your CSS, which enables rapid changes and a lot of developer flexibility as you evolve your CSS and UI.

    Back to our regularly scheduled program... Anyhow, this post isn't about DotLessCss, it's about T4 templates and the errors I ran into when converting them from Visual Studio 2008 to Visual Studio 2010. In VS2010 there were quite a few changes to the T4 template engine; most were excellent changes, but this one bit me with T4CSS: "Project assemblies are no longer used to resolve template assembly directives."

    In VS2008, if you wanted to reference a custom assembly in your T4 template (.tt file) you would simply right-click on your project, choose Add Reference, and select that assembly. Afterwards you were allowed to use the following syntax in your T4 template to tell it to look at the local references:

        <#@ assembly name="dotless.Core.dll" #>

    This told the engine to look in the "usual place" for the assembly, which is your project references. However, this is exactly what they changed in VS2010. They now basically sandbox the T4 engine to keep your T4 assemblies separate from your project assemblies. This can come in handy if you want to support different versions of an assembly referenced both by your T4 templates and your project.

    Who broke the build? Oh, Microsoft did!

    In our case, this change causes a problem since the templates are no longer compatible when upgrading to VS 2010 - thus it's a breaking change. So, how do we make this work in VS 2010? Luckily, Microsoft now offers several options for referencing assemblies from T4 templates:

    1. GAC your assemblies and use a Namespace Reference or Fully Qualified Type Name.
    2. Use a hard-coded Fully Qualified UNC path.
    3. Copy the assembly to the Visual Studio "Public Assemblies Folder" and use a Namespace Reference or Fully Qualified Type Name.
    4. Use or define a Windows environment variable to build a Fully Qualified UNC path.
    5. Use a Visual Studio macro to build a Fully Qualified UNC path.

    Options #1 and #2 were already supported in Visual Studio 2008, so if you want to keep your templates compatible with both Visual Studio versions, you would have to adopt one of those approaches.

    Yakkety Yak, use the GAC!

    Option #1 requires an additional pre-build step to GAC the referenced assembly, which could be a pain. But if you go that route, then after you GAC, all you need is a simple type name or namespace reference such as:

        <#@ assembly name="dotless.Core" #>

    Hard Coding ain't that hard!

    The other option of using hard-coded paths in Option #2 is pretty impractical in most situations, since each developer would have to use the same local project folder paths, or modify this setting each time for their local machines as well as for production deployment. However, if you want to go that route, simply use the following assembly directive style:

        <#@ assembly name="C:\Code\Lib\dotless.Core.dll" #>

    Let's go Public!

    Option #3, the Visual Studio Public Assemblies Folder, is the recommended place to put commonly used tools and libraries that are only needed for Visual Studio. Think of it like a VS-only GAC. This is likely the best place for something like dotLessCSS and is my preferred solution. However, you will need to either use an installer or a pre-build action to copy the assembly to the right folder location. Normally this is located at:

        C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\PublicAssemblies

    Once you have copied your assembly there, you use the type name or namespace syntax again:

        <#@ assembly name="dotless.Core" #>

    Save the Environment!

    Option #4, using a Windows environment variable, is interesting for enterprise use where you may have standard locations for files, but less useful for demo code, frameworks, and products where you don't have control over the local system. The syntax for including an environment variable in your assembly directive looks like the following, just as you would expect:

        <#@ assembly name="%mypath%\dotless.Core.dll" #>

    "mypath" is a Windows environment variable you set up that points to some fully qualified UNC path on your system. In the right situation this can be a great solution, such as one where you use an MSI installer for deployment, or where you have a pre-existing environment variable you can re-use.

    OMG Macros!

    Finally, Option #5 is a very nice option if you want to keep your T4 template's assembly reference local and relative to the project or solution without muddying up your dev environment or GAC with extra deployments. An example looks like this:

        <#@ assembly name="$(SolutionDir)lib\dotless.Core.dll" #>

    In this example, I'm using the "SolutionDir" VS macro so I can reference an assembly in a "/lib" folder at the root of the solution. This is just one of the many macros you can use. If you are familiar with creating Pre/Post-build Event scripts, you can use that dialog to look at all of the different VS macros available. This option gives the best solution for local assemblies without the hassle of extra installers or other setup before the build. However, it's still not compatible with Visual Studio 2008, so if you have a T4 template you want to use with both, then you may have to create multiple .tt files, one for each IDE version, or require the developer to set a value in the .tt file manually. I'm not sure if T4 templates support any form of compiler switches like "#if (VS2010)" statements, but it would definitely be nice in this case to switch between this option and one of the ones more compatible with VS 2008.

    Conclusion

    As you can see, we went from 3 options with Visual Studio 2008 to 5 options (plus one problem) with Visual Studio 2010. As a whole, I think the changes are great, but the short-term growing pains during the migration may be annoying until we get used to our new-found power. Hopefully this all made sense and was helpful to you. If nothing else, I'll just use it as a reference the next time I need to port a T4 template to Visual Studio 2010.
Happy T4 templating, and “May the fourth be with you!”

    Read the article

  • Web Site Performance and Assembly Versioning – Part 3 Versioning Combined Files Using Mercurial

    - by capgpilk
    This is the third post in a series:

    1. Minification and Concatenation of JavaScript and CSS Files
    2. Versioning Combined Files Using Subversion
    3. Versioning Combined Files Using Mercurial - this post

    I have worked on a project recently where there was a need to version the system (library DLL, CSS and JavaScript files) by date and Mercurial revision number. This was in the format:

        0.12.524.407
        {major}.{year}.{month}{date}.{mercurial revision}

    Each time there is an internal build using the CI server, it would label the files using this format. When it came time to do a major release, it became v1.{year}.{month}{date}.{mercurial revision}, with each public release having a major version increment. Also as a requirement, each assembly had to have a new GUID on each build. So, as in previous posts, we need to edit the csproj file and add a couple of Default targets.

        <?xml version="1.0" encoding="utf-8"?>
        <Project ToolsVersion="4.0" DefaultTargets="Hg-Revision;AssemblyInfo;Build"
                 xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
          <PropertyGroup>

    Right below the closing tag of the entire project we add our two targets; the first is to get the Mercurial revision number. We first need to import the tasks for MSBuild, which can be downloaded from http://msbuildhg.codeplex.com/

        <Import Project="..\Tools\MSBuild.Mercurial\MSBuild.Mercurial.Tasks" />

        <Target Name="Hg-Revision">
          <HgVersion LocalPath="$(MSBuildProjectDirectory)" Timeout="5000"
                     LibraryLocation="C:\TortoiseHg\">
            <Output TaskParameter="Revision" PropertyName="Revision" />
          </HgVersion>
          <Message Text="Last revision from HG: $(Revision)" />
        </Target>

    with the main Mercurial files being located at C:\TortoiseHg. To get a valid GUID we need to escape from the csproj markup and call some C# code, which we put in a property group for later reference.

        <PropertyGroup>
          <GuidGenFunction>
            <![CDATA[
            public static string ScriptMain() {
                return System.Guid.NewGuid().ToString().ToUpper();
            }
            ]]>
          </GuidGenFunction>
        </PropertyGroup>

    Now we add in our target for generating the GUID.
        <Target Name="AssemblyInfo">
          <Script Language="C#" Code="$(GuidGenFunction)">
            <Output TaskParameter="ReturnValue" PropertyName="NewGuid" />
          </Script>
          <Time Format="yy">
            <Output TaskParameter="FormattedTime" PropertyName="year" />
          </Time>
          <Time Format="Mdd">
            <Output TaskParameter="FormattedTime" PropertyName="daymonth" />
          </Time>
          <AssemblyInfo CodeLanguage="CS" OutputFile="Properties\AssemblyInfo.cs"
                        AssemblyTitle="name" AssemblyDescription="description"
                        AssemblyCompany="none" AssemblyProduct="product"
                        AssemblyCopyright="Copyright ©"
                        ComVisible="false" CLSCompliant="true" Guid="$(NewGuid)"
                        AssemblyVersion="$(Major).$(year).$(daymonth).$(Revision)"
                        AssemblyFileVersion="$(Major).$(year).$(daymonth).$(Revision)" />
        </Target>

    So this will give us an AssemblyInfo.cs file like this just prior to calling the Build task:

        using System;
        using System.Reflection;
        using System.Runtime.CompilerServices;
        using System.Runtime.InteropServices;

        [assembly: AssemblyTitle("name")]
        [assembly: AssemblyDescription("description")]
        [assembly: AssemblyCompany("none")]
        [assembly: AssemblyProduct("product")]
        [assembly: AssemblyCopyright("Copyright ©")]
        [assembly: ComVisible(false)]
        [assembly: CLSCompliant(true)]
        [assembly: Guid("9C2C130E-40EF-4A20-B7AC-A23BA4B5F2B7")]
        [assembly: AssemblyVersion("0.12.524.407")]
        [assembly: AssemblyFileVersion("0.12.524.407")]

    This gives us the correct version for the assembly. It can be referenced within your project, whether web or Windows based, like this:

        public static string AppVersion()
        {
            return Assembly.GetExecutingAssembly().GetName().Version.ToString();
        }

    As mentioned in previous posts in this series, you can label CSS and JavaScript files using this version number and the GetAssemblyIdentity task from the main MSBuild task library built into the .NET Framework.

        <GetAssemblyIdentity AssemblyFiles="bin\TheAssemblyFile.dll">
          <Output TaskParameter="Assemblies" ItemName="MyAssemblyIdentities" />
        </GetAssemblyIdentity>

    Then use this to write out the files:

        <WriteLinesToFile
          File="Client\site-style-%(MyAssemblyIdentities.Version).combined.min.css"
          Lines="@(CSSLinesSite)" Overwrite="true" />

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I'm going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task - sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these, sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don't already know how it works; when it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I'm relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        # connection and query stuff
        $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True"
        $Query = "EXEC sp_who2"
        $Connection = new-object system.Data.SQLClient.SQLConnection
        $Table = new-object "System.Data.DataTable"
        $Connection.connectionstring = $ConnectionStr
        try {
            $Connection.open()
            $Command = $Connection.CreateCommand()
            $Command.commandtext = $Query
            $result = $Command.ExecuteReader()
            $Table.Load($result)
        }
        catch {
            # Show error
            $error[0] | format-list -Force
        }
        $Title = "Data access processes (" + $Table.Rows.Count + ")"
        $Table | Out-GridView -Title $Title
        $Connection.close()

    So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results, execute our query over the connection, load the results into our table object and then, if everything is error free, display the results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me, and I have added a few parameters to control the output and give me more specific details. But then I saw a script that uses the $SMOServer object itself to provide the process information and saves having to define the connection object and query specification.

        $Server = 'SERVERNAME'
        $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server
        $Processes = $SMOServer.EnumProcesses()
        $Title = "SMO processes (" + $Processes.Rows.Count + ")"
        $Processes | Out-GridView -Title $Title

    Create the SMO object for our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different, though. Some columns are the same and we can see the same basic information, so my first thought was to check which runs faster - so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this - the Measure-Command function.
    All you have to do is wrap your piece of code in Measure-Command { [your code here] } and it will spit out the time taken to execute the code. So I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-reference the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses method is the second. I have run these on a variety of servers, and while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the cost overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ (sp_who2 column / EnumProcesses property - description):

        -           / Urn                - What looks like an XML or JSON representation of the server name and the process ID
        SPID        / Spid               - The process ID
        Status      / Status             - The status of the process
        Login       / Login              - The login name of the user executing the command
        HostName    / Host               - The name of the computer where the process originated
        BlkBy       / BlockingSpid       - The SPID of a process that is blocking this one
        DBName      / Database           - The database that this process is connected to
        Command     / Command            - The type of command that is executing
        CPUTime     / Cpu                - The CPU activity related to this process
        DiskIO      / -                  - The disk IO activity related to this process
        LastBatch   / -                  - The time the last batch was executed from this process
        ProgramName / Program            - The application that is facilitating the process connection to the SQL Server
        SPID1       / -                  - In my experience this is always the same value as SPID
        REQUESTID   / -                  - In my experience this is always 0
        -           / Name               - In my experience this is always the same value as SPID, so it could be seen as analogous to SPID1 from sp_who2
        -           / MemUsage           - An indication of the memory used by this process, but I don't know what it is measured in (bytes, Kb, Mb...)
        -           / IsSystem           - True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
        -           / ExecutionContextID - In my experience this is always 0, so it could be analogous to REQUESTID from sp_who2

    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I have used that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault-diagnosis methods too. So which is better?
    On reflection, I think I prefer to use the sp_who2 method primarily, but knowing that the SMO EnumProcesses method is there when I need it is really useful and I'm sure I'll use it regularly. I'm OK with the fact that it is the slower method, because Measure-Command has shown me how close it is to the other option and that it really isn't a large enough margin to matter.

    Read the article

  • Nvidia System Tools compatible with GTX 560 Ti?

    - by Paula Ferreira
    I want to download and use Nvidia System Tools with ESA support, but on the "Supported Products" tab my GTX 560 Ti isn't listed. The product was last released in April of last year, yet it shows support for both the GTX 570 and 580 as well as the GTX 4xx family, all sister cards of the GTX 560. Has anyone successfully run this Nvidia product with the GTX 560 Ti? Why wasn't this card included on the list of supported products?

    Read the article

  • Where to start? Server profiling tools

    - by Dave Chenell
    So I have a website: users upload images, people view images, etc. Simple enough. Lots of writes and reads everywhere. It's built with PHP and MySQL on Apache, and we have one dedicated server. Right now we are getting hammered with unexpected press and users, and our CPU load is out of control, slowing down the entire site. Offhand, I am guessing it's either the saving of images to disk or the MySQL queries. What tools can I use to educate myself on what is happening and pinpoint the problem? How can I find out what is happening, in detail, in this time of crisis?
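    On the MySQL side, one quick way to see what is actually running during a spike is to sample the server's process list every few seconds. A minimal sketch of that idea, assuming the third-party pymysql package and placeholder credentials (not the site's real ones):

        #!/usr/bin/env python
        """Sample MySQL's process list periodically to spot long-running
        queries. Connection details are placeholders."""
        import time
        import pymysql  # assumption: third-party library, pip install pymysql

        conn = pymysql.connect(host="localhost", user="monitor",
                               password="secret", database="mysql")
        try:
            while True:
                with conn.cursor() as cur:
                    cur.execute("SHOW FULL PROCESSLIST")
                    for row in cur.fetchall():
                        pid, user, host, db, command, seconds, state, info = row[:8]
                        # Only print statements that have been running a while.
                        if info and seconds and seconds >= 2:
                            print("%ss  %s  %s" % (seconds, state, info))
                print("-" * 40)
                time.sleep(5)
        finally:
            conn.close()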

    Read the article

  • How do I lock a hard drive to a certain drive letter?

    - by Memor-X
    I have an external hard drive that I have set to drive F:, which contains some programs I have desktop shortcuts to. Now I have a second external hard drive, which I store my music on, that gets auto-assigned to drive E:. Because someone decided that Australians love to have f** wings on the power plugs for hard drives while power boards have each socket close together, I can't have both of these hard drives plugged in at the same time. I normally only plug the music drive in when I sync music to my iPod, but occasionally, when I unplug the music drive and plug my old one back in, or when I turn on my computer with the music drive attached, turn it off, and turn it back on with my old drive, the old drive's letter gets switched to E:. I get annoyed having to go into Disk Management and change the drive letter back to F: every time this happens, so I am wondering if I can lock my hard drive to always be F: (if anything else tries to be F:, it can fail for all I care). Alternatively, is there a batch file I can use that goes through all the steps Disk Management does to change a drive's letter? That way I could put it in the Startup folder.
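    On the batch-file idea: diskpart can read its commands from a script file, so the letter change can be scripted. Below is a minimal sketch that drives diskpart from Python (the same two commands could equally sit in a plain text file run by a one-line batch file); the letters are examples, and the script must be run from an elevated (administrator) prompt. Note that it reassigns whichever volume currently has the letter E:, so it should only be run when the right drive is the one occupying E:.

        #!/usr/bin/env python
        """Reassign a volume's drive letter via a diskpart script.
        Letters below are examples; requires an elevated prompt."""
        import os
        import subprocess
        import tempfile

        CURRENT_LETTER = "E"   # letter the drive came up as (example)
        WANTED_LETTER = "F"    # letter it should have (example)

        script = "select volume %s\nassign letter=%s\n" % (CURRENT_LETTER, WANTED_LETTER)

        # diskpart reads its commands from a script file passed with /s.
        with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as fh:
            fh.write(script)
            path = fh.name

        try:
            subprocess.check_call(["diskpart", "/s", path])
        finally:
            os.remove(path)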

    Read the article

  • OVF Tools error

    - by ToreTrygg
    Hello all, I am trying to use OVF Tools 1.0 to create an .ovf appliance split into 4 GB chunks. When I run the following command: ovftool --chunkSize=4gb vi://:@/?ds= e:\test\test.ovf everything starts off fine. It will go for about 15-16%, then I get this message: "Error: unable to get NFC ticket for target disk". I have looked online but cannot find anything that matches this problem. I am using Windows 2k3 as the OVF creation server, and I'm in an ESX 3.5 environment. The OVF must be split into 4 GB chunks (or less) so they can be put onto DVDs. Also, when it is done, it deletes the files it started to create. Any help would be greatly appreciated.

    Read the article

  • Amazon EC2 EBS volume scheduled backup/snapshots using puppet / similar tools

    - by Ehrann Mehdan
    I am not a Linux admin, although I wish I was, and I have seen these questions: "Amazon EC2 Backup Strategy", "Amazon EC2 + EBS: Regular backup plan?", "Simple Backup Strategy for Amazon EC2 instances / volumes?", and this suggestion: http://alestic.com/2009/09/ec2-consistent-snapshot. I tried using the command line + crontab (the command line works, but crontab, for some reason, doesn't). But I'm still pretty lost. All I want is an automated, rolling backup of my Amazon EC2 (EBS) data (by rolling I mean keep 3-4 weeks back, but delete old snapshots as new ones come, for cost control). And as things usually go, if there is something that is hard and painful, someone creates a solution for it. My question is simple: is there a way, using a tool like Puppet, to do this without a painful learning curve? (Or via other tools like http://ylastic.com?) If yes, how?
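    For comparison, the core of a rolling-snapshot job is fairly small when scripted directly against the EC2 API. Here is a minimal sketch using the boto3 library (an assumption; it is not mentioned in the post), with a placeholder volume ID and a roughly four-week retention window. Run once a day from cron, it keeps about a month of snapshots per volume and prunes the rest.

        #!/usr/bin/env python
        """Create a snapshot of one EBS volume and prune snapshots older than
        RETENTION_DAYS. Volume ID is a placeholder; requires boto3 and AWS
        credentials configured in the environment."""
        import datetime
        import boto3

        VOLUME_ID = "vol-0123456789abcdef0"   # placeholder
        RETENTION_DAYS = 28                   # keep roughly 4 weeks

        ec2 = boto3.client("ec2")

        # Take today's snapshot, tagged so we only ever prune our own.
        ec2.create_snapshot(
            VolumeId=VOLUME_ID,
            Description="rolling backup",
            TagSpecifications=[{
                "ResourceType": "snapshot",
                "Tags": [{"Key": "rolling-backup", "Value": "true"}],
            }],
        )

        # Delete snapshots of this volume that have aged out.
        cutoff = (datetime.datetime.now(datetime.timezone.utc)
                  - datetime.timedelta(days=RETENTION_DAYS))
        snaps = ec2.describe_snapshots(
            OwnerIds=["self"],
            Filters=[
                {"Name": "volume-id", "Values": [VOLUME_ID]},
                {"Name": "tag:rolling-backup", "Values": ["true"]},
            ],
        )["Snapshots"]

        for snap in snaps:
            if snap["StartTime"] < cutoff:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])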

    Read the article

  • Open source command line tools for indexing a large number of text files

    - by ergosys
    I'm looking for any open source command line tool or tools which will allow me to index and search a large number of plain text files. Approximate search would be a plus. The tool only needs to print the files that match, although some match context would be useful. A GUI tool isn't useful for my application, nor is anything that searches files one by one (grep for example). I'm basically targeting unix platforms (osx, linux, bsd). EDIT: I'm not interested in any sort of tool that is system-wide, or needs to run in the background. Basically, I want to build an index for a directory tree full of text files and then later be able to search against it. Preferably the index is one or a few files that I can specify the location of. Any ideas?

    Read the article

  • Lost HDD partition to clean command, testdisk unresponsive

    - by Sujay Anjankar
    I accidentally cleaned my external HDD with the clean command in Diskpart and got a full-sized heart attack after that. I did some research and have already tried a number of tools. To name a few: Testdisk, Recuva, Partition Find and Mount, Eassos Recovery. I have even tried some other dumb-sounding tools, but none of them could find the lost partition; they just show "no partition found". Testdisk shows "Partition sector doesn't have the endmark 0xAA55". The file-recovery programs list the files, but none of them seem able to restore the lost partition. I need to recover the disk as it was. Any help would be much appreciated!

    Read the article

  • Installing XP through USB-to-IDE/SATA connector?

    - by overtherainbow
    Hello. To help someone reinstall a corrupt XP installation on the IDE or SATA drive in an older desktop box (hence no Vista/W7), I was thinking of asking him to bring his hard drive and meet at Starbucks, where I would install a brand new XP by connecting the drive to my laptop and booting from the XP install CD. For anyone who has already used a USB-to-IDE/SATA connector such as this one by Sabrent: Does the XP CD support installing Windows to an external IDE/SATA drive through USB? Does USB 2.0 provide enough power to run an external IDE/SATA drive, or do I need to give the drive its own power (in which case, how?)? Thank you.

    Read the article

  • How can I quickly change display settings from dual monitor to single monitor on laptop?

    - by Daren Thomas
    I have my laptop (running Windows XP SP3) at work hooked up to an external monitor. Whenever I unplug the external monitor (time to go home!) I have to manually change the display settings. This takes time and involves a lot of clicks. Is there a way to automate changing these settings? I'm thinking of a hotkey solution or a little application that I can start with Launchy to toggle between two profiles. I use the MultiMon tool for "extending" the taskbar to the second monitor - will I have to give that up?

    Read the article

  • Graphite running under daemontools keeps becoming defunct

    - by pradeepchhetri
    I am running carbon-cache.py and carbon-aggregator.py under daemontools. When I made some changes in storage-schemas.conf and tried to restart carbon-cache.py, I found that it becomes a zombie very frequently:

        root 3367 3366  0 03:23 pts/1 00:00:00 supervise carbon-aggregator
        root 3371 3366  0 03:23 pts/1 00:00:00 supervise carbon-cache
        root 3373 3367  3 03:23 pts/1 00:00:02 /usr/bin/python /usr/bin/carbon-aggregator.py --debug start
        root 3379 3372  0 03:23 pts/1 00:00:00 multilog t /var/log/multilog/carbon-cache
        root 3382 3368  0 03:23 pts/1 00:00:00 multilog t /var/log/multilog/carbon-aggregator
        root 3638 3371 21 03:24 pts/1 00:00:00 [carbon-cache.py] <defunct>

    Can someone tell me what the reason may be?

    Read the article

  • HDD Carrier, like a soda carrier available at McDonalds?

    - by Jason Taylor
    We use external USB drives for backups, and they have to be stored offsite at the end of the week. Right now we have your standard external USB drive inside an enclosure. We were thinking about moving to a USB dock, and dock a bare HDD for backups, rather than having various sized and types of enclosures. If we were to do this, the drives need protection while being transported to/from the safety deposit box. Is there any kind of hard drive carrier that would let us slide two drives into it, and it would provide protection while the drives are carried around by non-technical people? I'm afraid such a product doesn't exist, but perhaps someone knows of something?

    Read the article

  • How to set up Dell OMSA tools on Debian 6 (Squeeze) (PE2950)

    - by javano
    I assume I have set them up incorrectly or am missing a library or dependency. When I log into the OMSA web interface I can't see anything. Also, omreport tells me nothing:

        root@box:~# omreport storage controller
        No controllers found

    I assume these two use the same source of information, so fixing whatever is wrong should fix them both. I set up OMSA as per these instructions. I have also compiled MegaCLI (as this is a PowerEdge 2950 with a PERC 6/i controller) and have used it to update the RAID firmware, so that works, but the Dell tools don't. What have I missed during setup?

        root@kvm1:~# cat /etc/issue
        Debian GNU/Linux 6.0 \n \l

        root@kvm1:~# uname -a
        Linux kvm1.mivoice.cust.vostron.net 3.4.9 #1 SMP Wed Aug 22 19:08:46 BST 2012 x86_64 GNU/Linux

    Read the article

  • Author Tools for Classroom Material?

    - by user1413
    I'm interested in putting a whole bunch of classroom material online. This material ranges from accounting classes to yoga. I would like to find a simple to use authoring tool that the people who teach the classes can use to put the material online. In other words, I do not want a tool that requires a developer. A person who knows the subject matter and is willing to read the manual should be able to put their material online. At minimum, this tool should allow for text and multi-media to be chained together in a logical form and it should allow quizzes to be created and graded. Even better would be for the tool to have some "smarts" so that subject areas which the student does not understand can be drilled. Even better would be for the tool to have ecommerce built in so that the instructors can charge for the classes. Are there any such tools?

    Read the article
