Search Results

Search found 22569 results on 903 pages for 'win32 process'.


  • CLR Version issues with CorBindRuntimeEx

    - by Rick Strahl
    I'm working on an older FoxPro application that's using .NET Interop and this app loads its own copy of the .NET runtime through some of our own tools (wwDotNetBridge). This all works fine and it's fairly straightforward to load and host the runtime and then make calls against it. I'm writing this up for myself mostly because I've been bitten by these issues repeatedly and spend 15 minutes each time rediscovering the details. However, things get tricky when calling specific versions of the .NET runtime now that .NET 4.0 has shipped. Basically we need to be able to support both .NET 2.0 and 4.0 and we're currently doing it with the same assembly – a .NET 2.0 assembly that is the AppDomain entry point. This works because .NET 4.0 can easily host .NET 2.0 assemblies and the functionality in the 2.0 assembly provides all the features we need to call .NET 4.0 assemblies via Reflection.

    In wwDotnetBridge we provide a load flag that allows specification of the runtime version to use. Something like this:

        do wwDotNetBridge
        LOCAL loBridge as wwDotNetBridge
        loBridge = CreateObject("wwDotNetBridge", "v4.0.30319")

    and this works just fine in most cases. If I specify "V4", internally that gets fixed up to a whole version number like "v4.0.30319", which is then actually used to host the .NET runtime. Specifically, the ClrVersion setting is handled in this Win32 DLL code that handles loading the runtime for me:

        /// Starts up the CLR and creates a Default AppDomain
        DWORD WINAPI ClrLoad(char *ErrorMessage, DWORD *dwErrorSize)
        {
            if (spDefAppDomain)
                return 1;

            // Retrieve a pointer to the ICorRuntimeHost interface
            HRESULT hr = CorBindToRuntimeEx(
                            ClrVersion,  // Retrieve latest version by default
                            L"wks",      // Request a WorkStation build of the CLR
                            STARTUP_LOADER_OPTIMIZATION_MULTI_DOMAIN | STARTUP_CONCURRENT_GC,
                            CLSID_CorRuntimeHost,
                            IID_ICorRuntimeHost,
                            (void**)&spRuntimeHost);

            if (FAILED(hr))
            {
                *dwErrorSize = SetError(hr, ErrorMessage);
                return hr;
            }

            // Start the CLR
            hr = spRuntimeHost->Start();
            if (FAILED(hr))
                return hr;

            CComPtr<IUnknown> pUnk;

            WCHAR domainId[50];
            swprintf(domainId, L"%s_%i", L"wwDotNetBridge", GetTickCount());

            hr = spRuntimeHost->CreateDomain(domainId, NULL, &pUnk);

            hr = pUnk->QueryInterface(&spDefAppDomain.p);
            if (FAILED(hr))
                return hr;

            return 1;
        }

    CorBindToRuntimeEx allows a specific .NET version string to be supplied, which is what I'm doing via an API call from the FoxPro code. The behavior of CorBindToRuntimeEx is a bit finicky, however. The documentation states that NULL should load the latest version of the .NET runtime available on the machine – but it actually doesn't. As far as I can see – regardless of runtime overrides, even in the .config file – NULL will always load .NET 2.0, even if 4.0 is installed.

    <supportedRuntime> .config File Settings

    Things get even more unpredictable once you start adding runtime overrides to the application's .config file. In my scenario, working inside of Visual FoxPro, this would be VFP9.exe.config in the FoxPro installation folder (not the current folder). If I have a specific runtime override in the .config file like this:

        <?xml version="1.0"?>
        <configuration>
          <startup>
            <supportedRuntime version="v2.0.50727" />
          </startup>
        </configuration>

    then, not surprisingly, I can load the .NET 2.0 runtime, but I will not be able to load version 4.0 of the .NET runtime even if I explicitly specify it in my call to ClrLoad. Worse, I don't get an error – it will just go ahead and hand me a V2 version of the runtime and assume that's what I wanted. Yuck!
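    Since that silent fallback produces no error, it's worth verifying which runtime actually ended up in the process right after ClrLoad() returns. A minimal sketch of such a check - VerifyLoadedClrVersion() is a hypothetical helper, and it leans on the legacy GetCORVersion() export from mscoree.dll, which reports the version of the CLR loaded into the current process (deprecated as of .NET 4.0, but fine for a sanity check):

        // Hedged sketch: compare the CLR that actually loaded against the
        // version the host asked for, so a silent V2 fallback is caught
        // immediately instead of surfacing later as assembly load failures.
        #include <mscoree.h>
        #include <wchar.h>
        #pragma comment(lib, "mscoree.lib")

        BOOL VerifyLoadedClrVersion(LPCWSTR pwszExpected)
        {
            WCHAR wszVersion[64];
            DWORD dwLength = 0;
            HRESULT hr = GetCORVersion(wszVersion, 64, &dwLength);
            if (FAILED(hr))
                return FALSE;

            // e.g. expected L"v4.0.30319" but got L"v2.0.50727"
            return wcscmp(wszVersion, pwszExpected) == 0;
        }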
    However, if I set the supported runtime to V4 in the .config file:

        <?xml version="1.0"?>
        <configuration>
          <startup>
            <supportedRuntime version="v4.0.30319" />
          </startup>
        </configuration>

    then I can load both V4 and V2 of the runtime. Specifying NULL, however, will STILL only give me V2 of the runtime. Again, this seems pretty inconsistent.

    If you're hosting runtimes, make sure you check which version of the runtime actually loads, to ensure you get the one you're looking for. If the wrong version loads – say 2.0 when you want 4.0 – and you then proceed to load 4.0 assemblies, they will all fail to load due to version mismatches. This is how all of this started: I had a bunch of assemblies that weren't loading, and it took a while to figure out that the host was running the wrong version of the CLR, which caused the assembly loads to fail. Arrggh!

    <supportedRuntime> and Debugger Version

    <supportedRuntime> also affects the use of the .NET debugger when attached to the target application. Whichever runtime is specified in the key is the version of the debugger that fires up. This can have some interesting side effects. If you load a .NET 2.0 assembly but <supportedRuntime> points at V4.0 (or vice versa), the debugger will never fire, because it can only debug the matching runtime version. This has bitten me on several occasions where code runs just fine but the debugger breezes past breakpoints without notice. If <supportedRuntime> is not set, the debugger defaults to the latest version installed on the system.

    Summary

    Despite all the hassles, I'm thankful I can build a .NET 2.0 assembly and have it host .NET 4.0 and call .NET 4.0 code. This way we're able to ship a single assembly that supports both .NET 2 and 4, without separate DLLs for each, which would be a deployment and update nightmare. The MSDN documentation does point at newer hosting APIs specifically for .NET 4.0, which are considerably more complex and even less well documented, but that doesn't help here because the host needs to be able to load both .NET 4.0 and 2.0. Not pleased about that – and of course the new APIs aren't available when only older versions of the runtime are installed, which makes them useless in this scenario, where I have to support .NET 2.0 hosting (to provide greater 'built-in' platform support). Once you know the behavior above, it's manageable. However, it's quite easy to get tripped up here, because there are multiple combinations that can really screw up behaviors.

    © Rick Strahl, West Wind Technologies, 2005-2011
    Posted in .NET, FoxPro

    Read the article

  • Remote desktop sessions - Unwanted automatic log off after period of time

    - by alex
    I'm having an issue whenever I connect to any of our servers via RDP - after a certain period of time, it seems to close these sessions, closing all the applications I had open, etc. This is particularly annoying if I am running a long process - for example, copying a file - as it cuts it off. I then re-connect via RDP, and it effectively loads a new session. Is this set somewhere in Group Policy? Or somewhere else? This is happening on Windows 2008 (it may also be on our 2003 servers, although I haven't noticed).

    Read the article

  • How can I recreate root dnsNode objects and their RootDNSServers folder in AD after they are deleted?

    - by TonyD
    A few days ago I was trying to permanently remove root hints from my DNS server. After much ado, I decided to go a different route and am now trying to put everything back as it was. During the original process, I opened ADUC, clicked View > Advanced Features, and then browsed to System > MicrosoftDNS and deleted the folder RootDNSServers. Now in ADUC, I cannot create a folder here to replace the one I deleted. I can run adsiedit and load DomainDNSZones for my domain. Under there, I see MicrosoftDNS > RootDNSServers, with all of the objects still inside of it. Is there a way for me to undo what I did? Can I recreate these objects in ADUC? Can I do something else to cause them to show back up there? Thanks!

    Read the article

  • Focus On SOA & BPM for Oracle OpenWorld Now Available

    - by Lionel Dubreuil
    To help our valued customers & partners make the most of time spent at Oracle OpenWorld, please check out the Focus On Oracle Fusion Middleware documents: SOA and BPM; SOA for Developers; BPM. Over the years, we've learned that these provide a great roadmap to must-attend sessions, demos, partner exhibits, and networking events during Oracle OpenWorld. In addition to those "Focus On..." documents, session details (speakers, abstracts) can be found in the Content Catalog at: https://oracleus.activeevents.com/connect/search.ww?event=openworld We strongly recommend that our customers attend the following sessions: Service Integration (SOA) & BPM: "Using the Right Tools, Techniques, and Technologies for Integration Projects" - Monday, 10/1/2012; 3:15 PM; Moscone South - 308. BPM Suite: "Oracle Unified Business Process Management Suite 11g Overview and Roadmap" - Monday, 10/1/2012; 12:15 PM; Moscone South - 308. SOA Suite: "Oracle SOA Suite, the Most Capable Tool for Every Possible Integration Challenge" - Monday, 10/1/2012; 10:45 AM; Moscone South - 102. Foundation Pack: "Jump-starting Integration Projects with Oracle AIA Foundation Pack" - Tuesday, 10/2/2012; 1:15 PM; Marriott Marquis - Salon 7. Oracle Enterprise Repository: "Gaining Victory over SOA and Application Integration Complexity" - Tuesday, 10/2/2012; 1:15 PM; Moscone South - 310. See you in San Francisco! Not attending the show? Some of the general and key sessions will be available online, so please stay tuned for those announcements as Oracle OpenWorld gets closer.

    Read the article

  • pxelinux hanging when booting client machine

    - by Blasphemophagher
    I'm kind of new to all of this, so please forgive any vagueness/misunderstandings on my part. I'm using pxelinux and VMs to create CentOS 6.0 machines that have the same install every time. I have a new VM set to boot from network, but in the process of booting up it gets stuck at "Loading 10.1.1.20:/pxelinux.0" (10.1.1.20 is the address of the server it's getting info from). pxelinux conf: http://pastebin.com/4XfZZPY1 I'm pretty sure all my config files are correct, could it be VirtualBox related? I have both the building server and the new client set to Host-only adapter and PCNET-FAST.
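    One thing worth ruling out before blaming the config files: a hang at "Loading .../pxelinux.0" usually means the TFTP transfer itself is stalling rather than pxelinux misreading its configuration. A hedged pair of checks, assuming a tftp-hpa style client is available on another machine on the host-only network:

        # Fetch pxelinux.0 by hand to confirm the TFTP side works at all
        tftp 10.1.1.20 -c get pxelinux.0

        # On the boot server, confirm a TFTP daemon is listening on udp/69
        # and not blocked by a firewall
        sudo netstat -ulnp | grep :69

    If the manual fetch succeeds, suspicion shifts to the VirtualBox side; the PCNET PXE ROM is a frequent suspect, and switching the adapter type is a cheap experiment.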

    Read the article

  • Planning for the Recovery

    - by john.orourke(at)oracle.com
    As we plan for 2011, there are many positive signs in the global economy, but also some lingering issues. Planning no longer is about extrapolating past performance and adjusting for growth. It is now about constantly testing the temperature of the water, formulating scenarios, assessing risk and assigning probabilities.  So how does one plan for recovery and improve forecast accuracy in such a volatile environment?  Here are some suggestions from a recent article I wrote, which was published in the December Financial Planning & Analysis (FP&A) newsletter from the AFP (Association of Financial Professionals): Increase the frequency of forecasting Get more line managers involved in the planning and forecasting process Re-consider what's being measured - i.e. key financial and operational metrics Incorporate risk and probability into forecasts Reduce reliance on spreadsheets - leverage packaged EPM applications To learn more about these best practices, check out the FP&A section of the AFP website and register to receive the FP&A newsletter.  AFP recently launched a new topic area focused on the FP&A function and items of interest to this group of finance professionals.  In addition to the FP&A quarterly newsletter, AFP will be publishing articles, running webinars and will have an FP&A track in their annual conference, which is in Boston next November.  Brian Kalish, AFP's Finance Lead, is hoping this initiative creates a valuable networking and information-sharing resource for FP&A professionals. Here's a link to the FP&A page on the AFP web site:  http://www.afponline.org/pub/res/topics/topics_fpa.html If you register on the site you can access and subscribe to the FP&A newsletter and other resources. Best of luck in your planning for 2011 and beyond!   

    Read the article

  • Trouble installing php memcache extension

    - by user35346
    I'm trying to install memcache on MAMP but I get the warning below, and when I continue it seems to complete properly. I add the line extension=memcache.so to the php.ini and restart MAMP but phpinfo() doesn't list the memcache extension. $ ./pecl install memcache downloading memcache-2.2.5.tgz ... Starting to download memcache-2.2.5.tgz (35,981 bytes) ..........done: 35,981 bytes 11 source files, building WARNING: php_bin /Applications/MAMP/bin/php5/bin/php appears to have a suffix 5/bin/php, but config variable php_suffix does not match running: phpize Configuring for: PHP Api Version: 20041225 Zend Module Api No: 20060613 Zend Extension Api No: 220060519 Enable memcache session handler support? [yes] : yes ... Build process completed successfully Installing '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/memcache.so' install ok: channel://pecl.php.net/memcache-2.2.5 configuration option "php_ini" is not set to php.ini location You should add "extension=memcache.so" to php.ini
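    The last two lines of that output are the real hint: pecl never wrote to a php.ini because its php_ini config variable was not set. A hedged fix, assuming MAMP's usual layout (the exact path varies by MAMP version, so check /Applications/MAMP/conf/ first):

        # Tell pecl which php.ini to update - the one MAMP's Apache module reads
        ./pecl config-set php_ini /Applications/MAMP/conf/php5/php.ini

        # Verify with MAMP's own PHP binary, not the system PHP
        /Applications/MAMP/bin/php5/bin/php -m | grep -i memcache

    Note that MAMP ships separate php.ini files for the command line and the Apache module, so a phpinfo() page in the browser only reflects the one Apache reads; editing the wrong copy is a classic cause of exactly this symptom.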

    Read the article

  • Cancel table design change in SQL Server 2000

    - by Bryce Wagner
    If you open a table in SQL Server Enterprise Manager, change one of the columns, and save it, Enterprise Manager will create a table with the new definition, copy all the data to that new table, and then delete the old table when it's done. But if your table is large (let's say on the order of 100GB), this can take a long time. Even worse, if you don't have sufficient disk space, it doesn't notice ahead of time: it will spend a long time trying to copy the table, run out of space, and then decide to abort the process. We have other ways to copy the data in smaller chunks, but those require significantly more manual intervention, so it's usually easier to just let Enterprise Manager figure it out, as long as there's enough disk space. So for a long-running "Design Table" save like this, is there any way to cancel once it's started? Or do you just have to wait for it to fail?
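    For reference, the "smaller chunks" approach the question alludes to usually looks something like the sketch below: a hypothetical batched copy into a manually created replacement table, keyed on an ID column (table and column names are invented for illustration):

        -- Copy rows in 10,000-row batches so each transaction stays small
        DECLARE @LastID int
        SET @LastID = 0

        WHILE 1 = 1
        BEGIN
            INSERT INTO dbo.BigTable_New (ID, Payload)
            SELECT TOP 10000 ID, Payload
            FROM dbo.BigTable
            WHERE ID > @LastID
            ORDER BY ID

            IF @@ROWCOUNT = 0 BREAK

            SELECT @LastID = MAX(ID) FROM dbo.BigTable_New
        END

    Because each batch commits on its own, the loop can be stopped at any point and resumed later, which is exactly the property the single giant Enterprise Manager transaction lacks.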

    Read the article

  • SBS 2003 crashes often due to limited memory

    - by Sanoj
    I have a Windows SBS 2003 Std that regularly crashes, about every 20 days. The only thing I can see in the logs is that used memory increases by about 30MB/day. The process that uses more and more memory is sqlservr. We don't have much installed on the server; a Point-Of-Sale system that uses Pervasive SQL as its database, and an accounting application. We just have 2GB of RAM, and I could upgrade to 4GB, but I think that would just delay the problem. Is there any solution to this problem? Could I limit sqlservr to some amount of memory?
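    If the growing sqlservr.exe is a Microsoft SQL Server instance (SBS 2003 typically runs one or two, e.g. for monitoring; that it is the culprit here is an assumption), its memory can be capped rather than left to grow - caching aggressively and only releasing memory under pressure is SQL Server's normal behaviour. A sketch, run against the instance from Query Analyzer or osql:

        -- Allow the advanced option to be changed
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        GO
        -- Cap the buffer pool at 512 MB; pick a value that fits the workload
        EXEC sp_configure 'max server memory', 512
        RECONFIGURE
        GO

    With a cap in place, adding RAM becomes an optimisation rather than a crash fix.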

    Read the article

  • How can I use Windows Firewall to only permit the Windows Update service to make an outbound connection?

    - by microsmash
    I'm trying to tailor my Windows Firewall settings (using the Windows Firewall with Advanced Security console) to only permit programs that need to access the Internet with an outbound connection to do so. This works fine for normal applications as I can just allow the program, but services that load in the svchost.exe process are a problem. The only services I actually need to give access to are Windows Update and the Background Intelligent Transfer Service (and even that, I would only like Windows Update to be able to submit jobs to, but that's another issue.) Is there a method to only allow these to be permitted an outbound connection, and not any of the other services loaded in svchost?
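    For what it's worth, Windows Firewall with Advanced Security can scope a rule to a single service inside svchost.exe rather than to the whole process: in the rule's properties this lives on the "Programs and Services" tab, and the same thing can be scripted with netsh. A sketch from an elevated prompt (rule names and ports are assumptions):

        rem Allow only the Windows Update service (wuauserv) out of svchost.exe
        netsh advfirewall firewall add rule name="Allow WU out" dir=out action=allow program="%SystemRoot%\system32\svchost.exe" service=wuauserv protocol=TCP remoteport=80,443

        rem Same idea for the Background Intelligent Transfer Service
        netsh advfirewall firewall add rule name="Allow BITS out" dir=out action=allow program="%SystemRoot%\system32\svchost.exe" service=bits protocol=TCP remoteport=80,443

    With outbound connections set to block by default, these two rules leave the other services hosted in svchost unable to connect.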

    Read the article

  • Linux: find thin server running on port 80 and kill it

    - by Andrew
    On my Linux server I ran: sudo thin start -p 80 -d Now I'd like to restart the server. The trouble is, I can't seem to find the old process to kill it. I tried: netstat -anp But what I see on port 80 is this:

        Proto Recv-Q Send-Q Local Address   Foreign Address   State    PID/Program name
        tcp        0      0 0.0.0.0:80      0.0.0.0:*         LISTEN   -

    So, it didn't give me a PID to kill... I tried pgrep -l thin but that gave me nothing. Meanwhile pgrep -l ruby gives me like 6 processes running. I don't really understand why multiple ruby processes would be running, or which one I need to kill... How do I kill / restart the thin daemon?
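    The missing PID is most likely a permissions artifact: netstat only fills in the PID/Program name column for processes you own, and this thin was started with sudo. A hedged set of commands that should recover the situation:

        # Run netstat as root so the PID column is populated
        sudo netstat -tlnp | grep :80

        # Or ask lsof directly who owns port 80
        sudo lsof -i :80

        # pgrep -l matches the process *name* (which is ruby, since thin runs
        # inside a ruby interpreter); -f matches the full command line instead
        pgrep -fl thin

        # From the directory where thin was started, it can also be stopped
        # via the pid file it wrote when daemonized
        sudo thin stop

    The several ruby processes are expected if anything else Ruby-based runs on the box; matching on the full command line (or on the port) picks out the right one.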

    Read the article

  • How do I create a .deb file?

    - by JamesTheAwesomeDude
    Yes, I know that this question has been asked many times before, but none of the answers really helped. I'd like to package the Minecraft launcher (which has no proprietary code, AFAIK) into a .deb file so that I can put it on a flash drive and share it with my friends. I have managed to install Minecraft manually (put some files into /opt/minecraft, download an icon, and create a .desktop file in /usr/share/applications), and I have made a shell script that completely automates the process, but it relies on wget to retrieve a few files, including the .desktop file. (It isn't a self-extracting archive, after all.) I'd like to be able to do this offline, as a lot of my friends have slow or no internet. (One of their internet lines was buried so shallowly that it actually got knocked out by the lawnmower.) I won't be loading it into a PPA or anything like that; I just want it to be a "formal" package that can be easily installed and uninstalled. (One thing that I would like is for sudo apt-get purge minecraft to also remove the .minecraft folder. It would also be nice to define the dependencies as accepting either OpenJDK or Sun's JVM.) Oh, just so you know, the Minecraft launcher is a .jar file, but I can very, very easily launch it via shell scripts. The exact command is right on the download page.
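    For the packaging itself, a minimal hand-rolled sketch with dpkg-deb follows. Every name, version and path in it is invented for illustration, and the postrm step (deleting every user's .minecraft on purge) is deliberately aggressive, so treat it as a starting point rather than a finished package:

        # Staging tree: DEBIAN/ holds control files, everything else is
        # installed verbatim relative to /
        mkdir -p minecraft/DEBIAN
        mkdir -p minecraft/opt/minecraft
        mkdir -p minecraft/usr/share/applications
        cp minecraft.jar minecraft/opt/minecraft/
        cp minecraft.desktop minecraft/usr/share/applications/

        cat > minecraft/DEBIAN/control <<'EOF'
        Package: minecraft
        Version: 1.0
        Architecture: all
        Maintainer: James <james@example.com>
        Depends: openjdk-6-jre | sun-java6-jre
        Description: Offline installer for the Minecraft launcher
        EOF

        # Runs on "apt-get purge minecraft": also drop per-user data
        cat > minecraft/DEBIAN/postrm <<'EOF'
        #!/bin/sh
        if [ "$1" = "purge" ]; then rm -rf /home/*/.minecraft; fi
        EOF
        chmod 755 minecraft/DEBIAN/postrm

        dpkg-deb --build minecraft minecraft_1.0_all.deb

    The Depends line with the | alternative is what lets either OpenJDK or Sun's JVM satisfy the dependency; sudo dpkg -i minecraft_1.0_all.deb then installs the package entirely offline.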

    Read the article

  • htaccess Redirect / RedirectMatch with URLs that contain Special / Encoded Characters

    - by dSquared
    I'm currently in the process of applying a variety of 301 redirects in an .htaccess file for a website that recently changed its structure. Everything is working as expected, except for URLs that contain special characters; for these I am getting 404 errors. For example, the following directives, which contain a registered trademark symbol (®), bring up 404 pages:

        RedirectMatch 301 ^/directory/link-with®-special-character(/)?$ somelink.com
        RedirectMatch 301 ^/directory/link-with%c2%ae-special-character(/)?$ somelink.com

    I've also tried using Redirect and RewriteRule, and surrounding the URLs with double quotes, and nothing seems to work. Does anyone know what might be happening, or the proper way to handle these types of directives? Any help is greatly appreciated.
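    One hedged suggestion: Apache matches Redirect/RedirectMatch patterns against the already-decoded request path, so the %c2%ae form can never match, and the literal ® form only matches if the .htaccess file itself is saved in UTF-8. Spelling the two UTF-8 bytes out as PCRE escapes sidesteps the file-encoding question entirely - a sketch, with somelink.com standing in for the real target as in the question:

        # \xc2\xae is the UTF-8 byte sequence for the registered trademark sign
        RedirectMatch 301 ^/directory/link-with\xc2\xae-special-character(/)?$ http://somelink.com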

    Read the article

  • MOM 2005 SP1 and MBSA 2.1 : Compatible?

    - by Mitch
    We're currently using MOM 2005 SP1 for our system monitoring duties. It was installed and "configured" by someone who is no longer on our team. In the process of correcting a persistent error in the applications log of all of the monitored servers, "Default Global Virtual Directory Not Configured", I'm seeing the MBSA management pack is for MBSA v1.2. The current version of the MBSA appears to be 2.1. Does anyone have any experience in configuring MOM to use MBSA 2.1? Does MOM get along with MBSA 2.1? Any input would be appreciated. Point me in the direction of more reading if there's something out there I should be referring to. Thanks!

    Read the article

  • auto copy newest folders and images in MyPictures folder to USB drive

    - by TVersmet
    I drop lots of photos in a day into the My Documents/My Pictures folder. I archive at the end of each day to USB drives, to be stored offsite. What I would like is some way to automate the process. For example, a small app or script I can simply double-click, and it will scan the My Pictures folder for the newest folders and images and copy them to the USB drive until the drive is full. It doesn't matter if I get redundancy from the previous days' saved images, so long as the newest images and folders are always the first to get copied. Well, that's my request. Thanks for reading.
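    A hedged sketch of such a double-clickable script, as a batch file built around robocopy (bundled with Windows from Vista onward, available for XP in the Resource Kit); the source folder and drive letter are assumptions to adjust:

        @echo off
        rem Copy new photos to the USB stick: /E includes subfolders,
        rem /XO skips files the stick already holds in the same or newer version
        robocopy "%USERPROFILE%\My Documents\My Pictures" "E:\PhotoArchive" /E /XO
        pause

    This doesn't literally stop "when the drive is full" (robocopy just errors out at that point), but since redundancy from previous days is acceptable, /XO delivers the practical effect: each run only transfers what is new since the last archive.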

    Read the article

  • links for 2011-02-22

    - by Bob Rhubart
    Eleven BI trends for 2011 | ITWeb Business Intelligence (tags: ping.fm) The Buttso Blathers: WebLogic Schema Files Buttso shares a link. (tags: orale weblogic) Cloud Computing & Enterprise Architecture | Open Group Blog "On the first look, it may seem like Enterprise Architecture is irrelevant in a company if your complete IT is running on Cloud Computing, SaaS and outsourcing/offshoring. I was of the same opinion last year. However, it is not the case. In fact, the complexity is going to get multiplied." (tags: opengroup cloud enterprisearchitecture) James Taylor: Change Logging Level for SOA 11g James says: "I’m sure there are many blogs out there that have this solution. But I seem to get asked this question a lot so I thought I would post it here for my convenience. (tags: oracle middleware soa) David Linthicum: The Truth behind Standards, SOA, and Cloud Computing "Most of the standards we've worked on in the world of SOA over the past several years are applicable to the world of cloud computing. Cloud computing is simply a change in platform, and the existing architectural standards we leverage should transfer nicely to the cloud computing space." - David Linthicum (tags: enterprisearchitecture soa cloud) C. Martin Harris, MD: HIMSS11 Update from the Chairman "We cannot allow ourselves to focus exclusively on near term goals. Our real goal is a technology-driven transformation of healthcare that will never stop. A true transformation is a process of lessons learned and applied, that continually open broad new horizons of opportunity." - C. Martin Harris, MD (tags: enterprisearchitecture modernization)

    Read the article

  • Increment numbers in page headers in Microsoft Word

    - by Imray
    In Microsoft Word, I am laying out a process in steps. Each page pretty much is a new step that begins with a header like: 3. Drive the body to a secure location I would like the numbers to automatically increment, particularly if later on I decide to add a new step somewhere in the middle. Does anyone know how I can achieve that in the simplest way? I already have a working Table of Contents and I'd prefer not doing something that would mess with that, if possible to avoid.
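    One low-tech approach that leaves a style-based Table of Contents alone is to replace the typed number in each header with a SEQ field, which renumbers itself in document order. A sketch: press Ctrl+F9 in the header to insert field braces, type SEQ step inside them (the sequence name "step" is arbitrary), and the header becomes:

        { SEQ step }. Drive the body to a secure location

    Copy that field into each step's header; whenever a new step is inserted in the middle, select all (Ctrl+A) and press F9, and every number recalculates. Since SEQ fields are independent of the heading styles a TOC is built from, the TOC should be unaffected.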

    Read the article

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to mount a formerly readonly-mounted filesystem read-writeable:

        mount -o remount,rw /mountpoint

    Unfortunately it did not work:

        mount: /mountpoint not mounted already, or bad option

    dmesg reports:

        [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list. Please umount/remount instead

    A umount does not work either:

        umount /mountpoint
        umount: /mountpoint: device is busy.
                (In some cases useful info about processes that use
                 the device is found by lsof(8) or fuser(1))

    Unfortunately, neither lsof nor fuser shows any process accessing anything located under the mount point. So - how can I clean up this unprocessed orphan list to be able to mount the filesystem again without rebooting the computer?
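    For context: the orphan inode list is normally processed the next time the filesystem is mounted read-write, and a read-only mount skips that step, so the usual way out without a reboot is a forced fsck once the filesystem is genuinely unmounted. A hedged sketch, using the dm-0 device that dmesg reported (adjust to the actual device):

        # More verbose than plain lsof/fuser: show everything holding the mount
        sudo fuser -vm /mountpoint

        # Once the holder is gone and umount succeeds, a forced check
        # replays the journal and clears the orphan inode list
        sudo umount /mountpoint
        sudo e2fsck -f /dev/dm-0

    If nothing shows up even with fuser -vm, the holder may be kernel-side (an NFS export or a loop device, for example), which lsof will not list.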

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005, there was much hype about how they could help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are used or unused, complete or missing some columns is irrelevant: this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL)

    The results from executing this contain a record for every index in every database, but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs - or, more accurately, in order to choose when to have the NULLs - you need to specify a value for the last parameter. It takes one of four values: DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME

        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()

        SELECT DATEDIFF(ms, @Start, @First) AS [DEFAULT]
             , DATEDIFF(ms, @First, @Second) AS [SAMPLED]
             , DATEDIFF(ms, @Second, @Third) AS [LIMITED]
             , DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you four result sets: DEFAULT will have 12 columns full of data and then NULLs in the remainder; SAMPLED will have 21 columns full of data; LIMITED will have 12 columns of data and NULLs in the remainder; DETAILED will have 21 columns full of data. So, from this we can deduce that the DEFAULT value (the same one that is applied when you query the view using a NULL parameter) is the same as using LIMITED. The final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there, or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    Joining to sys.indexes adds the index name as well:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName]
             , OBJECT_NAME([ddips].[object_id]) AS [TableName]
             , [i].[name] AS [IndexName]
             , ...
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these, then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.:

        SELECT *
        FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function, as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;

        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');

        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), an invalid value will simply be noticed when the results are not consistent with what was expected, and in the case of this blog this is the method I have used. So, now we can relate the values in these columns to something that we recognise in the database; let's see what those other values in the dmv are all about. We'll skip partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level, as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are: avg_fragmentation_in_percent, the amount that the index is logically fragmented (NULL when the dmv is queried in SAMPLED mode); fragment_count, the number of pieces that the index is broken into (NULL in SAMPLED mode); avg_fragment_size_in_pages, the average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit (NULL in SAMPLED mode); and page_count, the total number of index or data pages in use. OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access in order to gather the data when it is requested by a DML query. If this index was bigger than 72KB, then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data and the speed of access would not be too poor. If the number of fragments increases, then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do to retrieve the data to satisfy the query increases, and performance starts to suffer. This information is useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, the type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA investigating performance issues. It is possible that tables will have indexes that suffer rapid increases in fragmentation as part of normal daily business and so need regular defragmentation work to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need an 'intelligent' process that assesses the key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren, which I would wholly recommend you review for much further detail on how to care for your SQL Server indexes.

    Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. There is a clear correlation between some of these columns: page_count is the number of 8KB pages used by the index, avg_page_space_used_in_percent is how full each page is (how much of the 8KB has actual data written on it), record_count is how many records are held in the index, and avg_record_size_in_bytes is the average size of each record. This approximates to: (page_count * 8 * 1024 * (avg_page_space_used_in_percent / 100)) / record_count = avg_record_size_in_bytes*. avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the FILL_FACTOR parameter given when creating an index. avg_record_size_in_bytes is important because you can use it to get an idea of how many records are in each page, and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control. min_record_size_in_bytes and max_record_size_in_bytes are exactly what their names suggest: details of the smallest and largest records in the index.
    They are purely offered as a guide to help the DBA better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should consider: the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly; and the columns used in the index, to avoid new records needing to be inserted into the middle of the index rather than always being added to the end.

    * - it's approximate because there are many factors, such as the type of data and other database settings, that affect this slightly.

    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford - a free ebook or paperback from Simple Talk.

    Disclaimer - Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.

    Read the article

  • Additional new material WebLogic Community

    - by JuergenKress
    Update: Commercially Supported GlassFish VersionsAquarium blogger David Delabassee shares background information and links to where you can download the recently released GlassFish Server Bundle Patch 3.1.2.8. Read the article. Announcing WebLogic on Oracle Database Appliance 2.7Oracle WebLogic Server on Oracle Database Appliance 2.7 offers a complete solution for building and deploying enterprise Java EE applications in a fully integrated system of software, servers, storage, and networking that delivers highly available database and WebLogic services. Learn more. APAC Partner iDay: What's New in Oracle WebLogic, 8-Apr 12 noon SG/2pm AEDT/9:30 IST - Invite your Partners - Register Virtual Developer Conference:  Creating a Foundation for Cloud Applications using Oracle WebLogic and Oracle Coherence - OnDemand Webcast: WebLogic Configuration using Chef and Puppet - On-Demand Podcast Series: Part 3 - Oracle WebLogic Server and Oracle Database Integration - Podcast Coherence*Web: Sharing an httpSession Among Applications in Different Oracle WebLogic Clusters SOA solution architect Jordi Villena shows how easy it is to extend Coherence*Web to enable session sharing. Read the article. Multi-Factor Authentication in Oracle WebLogic Using multi-factor authentication to protect web applications deployed on Oracle WebLogic. Read the article. Video: Coherence Community on Java.net - 4 Projects available under CDDL-1.0 Brian Oliver (Senior Principal Solutions Architect, Oracle Coherence) and Randy Stafford (Architect At-Large, Oracle Coherence Product Development) discuss the evolution of the Oracle Coherence Community on Java.net and how you can actively participate in open source Coherence Community projects. Watch the video. Working with Oracle Security Token Service in an Architecture Involving Oracle WebLogic Server and Oracle Service Bus Oracle Fusion Middleware specialist Ronaldo Fernandes takes you step by step through the process of creating a single sign-on between Oracle WebLogic and Oracle Service Bus using Oracle Security Token Service (OSTS) to generate SAML tokens. Read the article. WebLogic Partner Community For regular information become a member in the WebLogic Partner Community please visit: http://www.oracle.com/partners/goto/wls-emea ( OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress,

    Read the article

  • Linux DD command partition -to- partition

    - by Ben Jackson
    I just used the dd command to copy the contents of one partition over to another partition on another drive, like this:

        dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=noerror

    The sda2 partition was 66GB and sdb2 was 250GB. I read that by doing this, the extra space on the drive I am copying to will be wasted - is this true? I wasn't worried about losing the extra space for the time being. However, I just ran:

        sudo kill -USR1 (PID)

    to view the current status of dd, and it has written over 66GB of data. Will it continue to write data until it gets to 250GB? If so, is there a way to stop the process without corrupting it, as waiting for it to write blank space seems like a waste of time?

    Read the article

  • Windows Server 2012 and Ubuntu 12.04.1 under Hyper-V

    - by Technicolour
    I've set up an instance of Ubuntu 12.04.1 LTS under Hyper-V 2012. However it seems to be nondeterministic as to whether or not it completes the boot process. I get a Kernel Panic, "IO-APIC + timer doesn't work!", which from my research is caused by not having integration services correctly installed? It was my understanding that the integration services were all now baked into the kernel? It should then be fine to update the OS (including any kernel updates, as I'm guessing that's what has happened) Being able to rely on this successfully booting would be great as I intend on using ssh for crisis situations.

    Read the article
