Search Results

Search found 24915 results on 997 pages for 'ordered test'.


  • DON'T MISS: Live Webcast - Nimble SmartStack for Oracle with Cisco UCS (Nov 12)

    - by Zeynep Koch
    You are invited to the live webcast with Nimble Storage, Oracle and Cisco where we will talk about the new SmartStack solution from Nimble Storage that features Oracle Linux, Oracle VM and Cisco UCS products. In this webinar, you will learn how Nimble Storage SmartStack with Oracle and Cisco provides a converged infrastructure for Oracle Database environments with Oracle Linux and Oracle VM. SmartStack, built on best-of-breed components, delivers the performance and reliability needed for deploying Oracle on a single symmetric multiprocessing (SMP) server or Oracle Real Application Clusters (RAC) on multiple nodes.

    When: Tuesday, November 12, 2013, 11:00 AM Pacific Time

    Panelists:
      - Michele Resta, Director of Linux and Virtualization Alliances, Oracle
      - John McAbel, Senior Product Manager, Cisco
      - Ibby Rahmani, Solutions Marketing, Nimble Storage

    SmartStack™ solutions provide pre-validated reference architectures that speed deployments and minimize risk.
      - The pre-validated converged infrastructure is based on an Oracle Validated Configuration that includes Oracle Database and Oracle Linux with the Unbreakable Enterprise Kernel.
      - The solution components include a Nimble Storage CS-Series array, two Cisco UCS B200 M3 blade servers, Oracle Linux 6 Update 4 with the Unbreakable Enterprise Kernel, and Oracle Database 11g Release 2 or Oracle Database 12c Release 1.
      - The Nimble Storage CS-Series is certified with Oracle VM 3.2, providing an even more flexible solution leveraging virtualization for functions such as test and development by delivering excellent random I/O performance in Oracle VM environments.

    Register today

    Read the article

  • Deploy binary hex registry via GPO or PowerShell

    - by Prashanth Sundaram
    I am trying to deploy a custom registry entry which I exported from a test machine. It looks like this:

        "TextFontSimple"=hex:3c,00,00,00,1f,00,00,f8,00,00,00,40,dc,00,00,00,00,00,00,\
        00,00,00,00,ff,00,31,43,6f,75,72,69,65,72,20,4e,65,77,00,00,00,00,00,00,00,\
        00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00

    I came across THIS similar request on another site, but I couldn't make it work. As per the other solution, my PowerShell command below throws the error "A parameter cannot be found that matches parameter name":

        Set-ItemProperty -Path "HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common\MailSettings" -Name "TextFontSimple" -PropertyType Binary -Value ([byte[]] (0x3c,0x00,0x00,0x00,0x1f....0x00))

    Any ideas?
    ====EDIT=====
    The key & value already exist. When I use Get-ItemProperty:

        PSPath         : Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common\MailSettings
        PSParentPath   : Microsoft.PowerShell.Core\Registry::HKEY_CURRENT_USER\Software\Microsoft\Office\14.0\Common
        PSChildName    : MailSettings
        PSProvider     : Microsoft.PowerShell.Core\Registry
        TextFontSimple : {60, 0, 0, 0...}
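
    A plausible explanation for the error (offered as a sketch to verify, not a confirmed fix): Set-ItemProperty has no -PropertyType parameter - that parameter belongs to New-ItemProperty - and a registry path needs a provider prefix such as HKCU: (or Registry::HKEY_CURRENT_USER\...) before PowerShell treats it as a registry location. Something along these lines, with the byte list completed from the .reg export above:

        # Hypothetical sketch - path and value name taken from the question, byte array abbreviated
        $path  = "HKCU:\Software\Microsoft\Office\14.0\Common\MailSettings"
        $bytes = [byte[]](0x3c,0x00,0x00,0x00,0x1f)   # ...append the remaining bytes from the exported .reg

        if (Get-ItemProperty -Path $path -Name "TextFontSimple" -ErrorAction SilentlyContinue) {
            # Value already exists, so Set-ItemProperty can update it without specifying a type
            Set-ItemProperty -Path $path -Name "TextFontSimple" -Value $bytes
        } else {
            # New-ItemProperty is the cmdlet that accepts -PropertyType Binary
            New-ItemProperty -Path $path -Name "TextFontSimple" -PropertyType Binary -Value $bytes | Out-Null
        }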

    Read the article

  • "Reboot and select proper boot device"?

    - by overtherainbow
    Hello, I didn't find the answer on Clonezilla's site or in the mailing list archives. Maybe someone has already seen this issue and knows how to recover from it: on a test host, using www.partedmagic.com, I created two partitions: one to hold an OS I wish to use for testing (/sda1), and a second partition to hold images (/sda2). After trying out Windows 7, I used Clonezilla to restore an XP SP3 image, but I get the following error message when rebooting: "Reboot and select proper boot device". Could it be that Clonezilla didn't save/restore the MBR? GParted didn't let me set a partition as "active", so it could also be this, but I have no idea. Thank you for any help.
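
    If Clonezilla only restored the partition contents (a "saveparts" image) rather than the whole disk, the MBR boot code and the active flag may indeed be missing. A rough outline of what could be tried from the Parted Magic shell, assuming the disk is /dev/sda and XP sits on partition 1 (an untested sketch - double-check device names before writing anything):

        # Mark partition 1 as bootable (the same thing as GParted's "boot" flag)
        parted /dev/sda set 1 boot on

        # If the Clonezilla image directory contains a saved MBR file (often named something like sda-mbr),
        # only the 446-byte boot code portion needs to go back, leaving the partition table untouched:
        dd if=/path/to/image/sda-mbr of=/dev/sda bs=446 count=1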

    Read the article

  • Why does my audio keep reverting to stereo?

    - by GraemeF
    I have an ASUS P7P55D LE motherboard with onboard sound and I am running Windows 7 64-bit, using the SPDIF output from my motherboard to my receiver. On my receiver I can see which audio channels are in use (i.e. stereo or 5.1). In the audio interface properties I can test DTS Audio and Dolby Digital outputs and both work fine, but when I try to play a game with 5.1 sound (I've tried Left 4 Dead and Dragon Age Origins) it reverts to stereo. I was getting this behaviour with the default Microsoft drivers, so I installed the latest from the ASUS website, but there is no difference. I notice that in the Advanced tab the Default Format only allows me to choose from various 2 channel formats, so maybe that has something to do with it? How can I get 5.1 output all of the time?

    Read the article

  • Spoofing domains - using one domain to look at another without frame redirect

    - by hfidgen
    Hiya, in Plesk 9.2.2 does anyone know how the following can be achieved? I've got domain1.co.uk registered in Plesk, but the domain has not been set up with any nameservers or A records, so it is unreachable from the web. However, I need to test it while we get the domain1.co.uk nameservers etc. sorted over the next week or so. So, I've got sparedomain.co.uk registered, with the nameservers and A records pointing to the server, and sure enough it displays the default Plesk "there's no website here yet" page. Bingo. Now, how can I set up sparedomain.co.uk on my Plesk server so it displays all the data held on the Plesk account for domain1.co.uk? Frame forwarding doesn't work - because you get errors saying "domain1.co.uk cannot be found" in your browser - I need a server solution to spoof it all. Anyone got any ideas? Thanks!
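
    The usual server-side answer here is a domain alias rather than any kind of forwarding: if your Plesk version exposes "domain aliases", adding sparedomain.co.uk as an alias of domain1.co.uk serves the same content under both names with no redirect and no frames. In raw Apache terms, an illustrative sketch of what such an alias amounts to (paths and hostnames here are just placeholders):

        <VirtualHost *:80>
            ServerName  domain1.co.uk
            # Extra hostnames answered by the same vhost - effectively what a Plesk domain alias generates
            ServerAlias www.domain1.co.uk sparedomain.co.uk www.sparedomain.co.uk
            DocumentRoot /var/www/vhosts/domain1.co.uk/httpdocs
        </VirtualHost>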

    Read the article

  • Using ASP.NET C# and Javascript

    - by ctck
    I'm looking for the most efficient / standardized way of passing data between client JavaScript code and C# code-behind in an ASP.NET application. Currently I've been using the following methods to achieve this, but they all feel a bit like a fudge. The way I pass data from JavaScript to the C# code-behind is by setting hidden ASP variables and triggering a postback:

        <asp:HiddenField ID="RandomList" runat="server" />

        function SetDataField(data) {
            document.getElementById('<%=RandomList.ClientID%>').value = data;
        }

    Then in the C# code I collect the list:

        protected void GetData(object sender, EventArgs e)
        {
            var _list = RandomList.Value;
        }

    Going back the other way, I often use either ScriptManager to register a function and pass it data during Page_Load:

        ScriptManager.RegisterStartupScript(this.GetType(), "Set", "Test();", true);

    or I add attributes to controls before a postback or during the Initialization / PreRender stages:

        Btn.Attributes.Add("onclick", "DisplayMessage('Hello');");

    These methods have served me well and do the job. However they just don't feel complete. Is there a more standardized way of passing data between client-side markup / JavaScript and backend code? I've seen some posts like this one: Injecting JavaScript : StackOverflow that describe the HtmlElement class. Is this something I should look into? Thanks everyone for your time.
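
    For a more structured round trip than hidden fields and Attributes.Add, ASP.NET AJAX page methods are one common option: a static method on the page is exposed to client script and called through a generated proxy. A small sketch (the method and function names are made up for illustration):

        // Code-behind: a static page method callable from client script
        [System.Web.Services.WebMethod]
        public static string SaveList(string data)
        {
            // process the posted data and hand a result back to the caller
            return "Received " + data.Length + " characters";
        }

        <%-- Markup: enable the client-side PageMethods proxy --%>
        <asp:ScriptManager ID="sm" runat="server" EnablePageMethods="true" />
        <script type="text/javascript">
            function sendData(data) {
                // Calls the static method above without a full postback
                PageMethods.SaveList(data, function (result) { alert(result); });
            }
        </script>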

    Read the article

  • Hex Dump using LINQ (in 7 lines of code)

    - by Fabrice Marguerie
    Eric White has posted an interesting LINQ query on his blog that shows how to create a Hex Dump in something like 7 lines of code. Of course, this is not production grade code, but it's another good example that demonstrates the expressiveness of LINQ. Here is the code:

        byte[] ba = File.ReadAllBytes("test.xml");
        int bytesPerLine = 16;
        string hexDump = ba.Select((c, i) => new { Char = c, Chunk = i / bytesPerLine })
            .GroupBy(c => c.Chunk)
            .Select(g => g.Select(c => String.Format("{0:X2} ", c.Char))
                .Aggregate((s, i) => s + i))
            .Select((s, i) => String.Format("{0:d6}: {1}", i * bytesPerLine, s))
            .Aggregate("", (s, i) => s + i + Environment.NewLine);
        Console.WriteLine(hexDump);

    Here is a sample output:

        000000: FF FE 3C 00 3F 00 78 00 6D 00 6C 00 20 00 76 00
        000016: 65 00 72 00 73 00 69 00 6F 00 6E 00 3D 00 22 00
        000032: 31 00 2E 00 30 00 22 00 20 00 65 00 6E 00 63 00
        000048: 6F 00 64 00 69 00 6E 00 67 00 3D 00 22 00 75 00
        000064: 3E 00

    Eric White reports that he typically notices that declarative code is only 20% as long as imperative code. Cross-posted from http://linqinaction.net

    Read the article

  • Software Developer Interview Question - Fair or Unfair

    - by user607018
    I just phone interviewed with a company for a graduate software developer position and was asked the following questions. I should add that the company concerned are not a database vendor.
      1. How does a query optimiser work?
      2. If a database was performing badly, how would you use the performance logs to find out the problem?
    I have asked whether they ask such questions of all candidate software developers (graduate or experienced) in a first phone interview. They replied that they like to test their candidates' knowledge of database development. I want to write to the company to say that these questions are unreasonable to ask at a software developer interview and to request that my interview be done over. I would like to check the reasonableness of the following assumptions:
      a) Those questions cannot be fairly classified as database development questions.
      b) I think the questions are appropriate for a DBA interview but wholly unreasonable for a software developer interview (experienced or not).
      c) The first question is only relevant to a database vendor.
      d) The second question is not fair because software developers typically don't deal with database performance logs, as that is the job of the DBA.
    Perhaps some of you will be kind enough to comment on my assumptions or may have other suggestions, before I write to the company.

    Read the article

  • My first encounter with SmartAssembly

    - by Peter Larsson
    Let me start by writing that I am a supreme VB6 programmer, but I have very little experience with VB.Net, so I think I still need some more time learning SmartAssembly. SmartAssembly makes obfuscating and merging dll files a piece of cake! With its simple, straightforward and clean GUI I did make my tests work. With other obfuscators like Xenocode, Salamander etc., which let you (and in some cases force you) control more advanced settings, you really have to know what you are doing, especially when it comes to protecting code that uses external dependencies. My most annoying experience is that if you start checking radio buttons and activating different obfuscating features in SmartAssembly, you will end up breaking your working code as well, if you, like me, are not that experienced and don't know what you're doing. SmartAssembly has some troubleshooting information on their website which explains why the application will fail in some scenarios. So why not extend these checks into some deeper analyzing stage on the dlls? By doing that I think more people could get fully functional dlls out of the box instead of trying different settings and then testing the protected dll to see if it's working or not. //Peter

    Read the article

  • Moving my bcd from HDD to SSD - Windows 7

    - by lelouch
    I have Windows 7 installed on my SSD, but the /boot/ directory and bootmgr are on my hard drive. I want to move them to my SSD for faster booting times. So I figured that I can fix the problem using the Windows startup repair tool. I made a bootable Windows 7 flash drive and ran Windows startup repair. However, it exits with an error. I also can't see my OS in the list of installed OSs. I then tried fixing via the command prompt with bootrec /fixmbr, bootrec /fixboot, bootrec /rebuildbcd. Bootrec /rebuildbcd finds the OS, but gives me the error "The requested system device cannot be found" when I try fixing it. Does anyone know why this is failing? I read somewhere that the Windows Repair environment doesn't support a flash drive, which is why I'm getting that error. Is this true? Unfortunately my DVD drive is playing up so I can't use it to test this.
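
    One commonly suggested route for this (an outline to adapt rather than a verified recipe - drive letters inside the recovery environment often differ from the booted system, so confirm them with diskpart first) is to mark the SSD's Windows partition active and rebuild the boot files directly on it with the tools that ship with Windows 7:

        rem From the recovery command prompt, assuming the SSD's Windows partition appears as C:
        diskpart
          select disk 0
          select partition 1
          active
          exit

        rem Write Windows 7 boot sector code, then copy bootmgr and a fresh BCD onto the same partition as Windows
        bootsect /nt60 C: /mbr
        bcdboot C:\Windows /s C:

    After that, the BIOS boot order needs to point at the SSD rather than the hard drive.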

    Read the article

  • Best practice with branching source code and application lifecycle

    - by Toni Frankola
    We are a small ISV shop and we usually ship a new version of our products every month. We use Subversion as our code repository and Visual Studio 2010 as our IDE. I am aware a lot of people are advocating Mercurial and other distributed source control systems, but at this point I do not see how we could benefit from these, though I might be wrong. Our main problem is how to keep branches and the main trunk in sync. Here is how we do things today:
      1. Release new version (automatically create a tag in Subversion)
      2. Continue working on the main trunk that will be released next month
    The cycle repeats every month and works perfectly. The problem arises when an urgent service release needs to be released. We cannot release it from the main trunk (2) as it is under heavy development and it is not stable enough to be released urgently. In such a case we do the following:
      1. Create a branch from the tag we created in step (1)
      2. Bug fix
      3. Test and release
      4. Push the change back to the main trunk (if applicable)
    Our biggest problem is merging these two (branch with main). In most cases we cannot rely on automatic merging because, for example:
      - a lot of changes have been made to the main trunk
      - merging complex files (like Visual Studio XML files etc.) does not work very well
      - another developer / team made changes you do not understand and you cannot just merge them
    So what do you think is the best practice to keep these two different versions (branch and main) in sync? What do you do?
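
    For what it's worth, the Subversion commands behind the usual "branch from the release tag, fix, then merge back" flow look roughly like this (a sketch with made-up tag, branch and revision names; Subversion 1.5+ merge tracking records the mergeinfo for the cherry-pick step):

        # 1. Create the hotfix branch from the release tag
        #    (run inside a checkout so the ^/ repository-root syntax resolves)
        svn copy ^/tags/2013.10 ^/branches/2013.10-hotfix -m "Branch for urgent service release"

        # 2. Fix, test and release from a working copy of that branch, then
        # 3. cherry-pick the fix revision(s) back into trunk instead of merging the whole branch
        cd trunk-working-copy
        svn merge -c 4567 ^/branches/2013.10-hotfix .
        svn commit -m "Merge hotfix r4567 from 2013.10-hotfix"

    Cherry-picking only the fix revisions, rather than reintegrating the whole branch, sidesteps most of the "trunk has moved on" conflicts, though files that both sides touched will still need a manual merge.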

    Read the article

  • Announcing: Oracle Enterprise Manager 12c Delivers Advanced Self-Service Automation for Oracle Database 12c Multitenant

    - by Scott McNeil
    New Self-Service Driven Provisioning of Pluggable Databases
    Today Oracle announced new capabilities that support managing the full lifecycle of pluggable database as a service in Oracle Enterprise Manager 12c Release 3 (12.1.0.3). This latest release builds on the existing capabilities to provide advanced automation for deploying database as a service using the Oracle Database 12c Multitenant option. It takes it one step further by offering pluggable database as a service through the Oracle Enterprise Manager 12c self-service portal, providing customers with fast provisioning of database cloud services with minimal time and effort. This is a significant addition to Oracle Enterprise Manager 12c's existing portfolio of cloud services that includes infrastructure as a service, database as a service, testing as a service, and Java platform as a service. The solution provides a self-service mechanism to provision pluggable databases, allowing users to request and access database(s) on demand. The self-service operations are also enabled through REST APIs, allowing customers to integrate with third-party automation systems or their custom enterprise portals.
    Benefits:
      - Self-service provisioning allows rapid access to pluggable database as a service for hosting or certifying applications on Oracle Database 12c
      - Self-service driven migration to pluggable database as a service in order to migrate a pre-Oracle Database 12c database to a pluggable database as a service model and test the consolidation strategy
      - Single service catalog for all approved pluggable database as a service configurations, which helps customers achieve standardization while catering to all applications and users in the enterprise
      - Resource guarantee via database resource manager (and IORM on Oracle Exadata) that enables deployment of mixed workloads in a shared environment
      - Quota, role based access, and policy based management that enforces governance and reduces administrative overhead
      - Chargeback or showback, which improves metering and accountability for services consumed by each pluggable database
      - Comprehensive REST APIs that support integration with ticketing or change management systems, and/or with other self-service portals
      - Minimal administrative and maintenance overhead through self-managing automation that allows for intelligent placement of pluggable databases
    To understand how pluggable database as a service works, watch this quick demo.
    Stay Connected: Twitter | Facebook | YouTube | Linkedin | Newsletter
    Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • Trackpad and battery stop working at the same time

    - by Alex Krycek
    Gateway E-475M G
    Windows 7 Professional SP1
    Synaptics TouchPad 15.2.20.0
    Microsoft ACPI-Compliant Control Method Battery 6.1.7600.16385

    It seems that when my battery stops working correctly, my touchpad will also act erratically. Specifically, the computer will charge for a few seconds, then stop charging for another few seconds. This will continue for some time. But if I close the lid, the charging LED will stay lit. As for the touchpad, it'll click or highlight things at random when I can get it to move. As a test, I stopped charging my computer. Now it seems my touchpad is working fine. I've tried to reinstall both the battery and touchpad drivers. I've also started the computer in safe mode, but that didn't help either. Edit: I've now tried booting my computer with an Ubuntu live cd, and the charging problem persists. So, it now seems my computer can only charge continuously as long as it's turned off or sleeping.

    Read the article

  • Sending mail via php in EC2

    - by william007
    I have used the following code for sending mail from PHP on Amazon EC2, but I only see 'aatest' as the output and no incoming email ever arrives. By the way, I have already included ses.php, verified the email [email protected], and double-checked that the access key and secret key are correct. Can anyone suggest a way to debug this?

        require_once('ses.php');
        $con = new SimpleEmailService('accesskey', 'accesskey');
        print_r('aa' . $con->listVerifiedEmailAddresses());
        $m = new SimpleEmailServiceMessage();
        $m->addTo('[email protected]');
        $m->setFrom('[email protected]');
        $m->setSubject('Hello, world!');
        $m->setMessageFromString('This is the message body.');
        print_r($con->sendEmail($m));
        echo 'test';
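
    Given that the output is just "aatest", both API calls appear to be returning nothing, which usually means the HTTPS requests to SES are failing silently. A debugging pass along these lines (class and method names as used in the question; how much error detail surfaces depends on the ses.php version) may show what is going on:

        <?php
        // Surface errors instead of hiding them behind string concatenation
        error_reporting(E_ALL);
        ini_set('display_errors', '1');

        require_once('ses.php');
        $con = new SimpleEmailService('accesskey', 'secretkey');

        // var_dump shows the false/empty return values that print_r('aa' . ...) silently turns into "aa"
        var_dump($con->listVerifiedEmailAddresses());

        $m = new SimpleEmailServiceMessage();
        $m->addTo('recipient@example.com');   // while the SES account is in sandbox mode, recipients must be verified too
        $m->setFrom('sender@example.com');
        $m->setSubject('Hello, world!');
        $m->setMessageFromString('This is the message body.');
        var_dump($con->sendEmail($m));

    Two things worth ruling out: the EC2 instance must be able to reach the SES HTTPS endpoint, and a sandboxed SES account can only send to verified addresses.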

    Read the article

  • Unix Server Partitioning & Filesystem Layout

    - by user1717735
    There's a lot of contradictory information about Unix server partitioning out on the internet, so I need some advice on how to proceed. So far, on the servers in our test environment I didn't really care about partitioning and I configured a single monolithic / plus a swap partition. This partitioning scheme doesn't seem like a good idea for our production servers. I have found a good starting point here, but it seems very vague on the details. Basically I have a server on which I will be running a basic LAMP stack (Apache, PHP, and MySQL). It will have to handle file uploads (up to 2GB). The system has a 2TB RAID 1 array. I plan to set:
      /     100GB
      /var  1000GB (apache files and mysql files will be here)
      /tmp  800GB  (handles the php tmp file)
      /home 96GB
      swap  4GB
    Does this sound sane, or am I over-complicating things?

    Read the article

  • LLVM-3.1 libLLVMSupport.a undefined reference to `dladdr'

    - by user91387
    I'm trying to compile using the llvm-3.1 package. I'm running 12.04 x64 (3.2.0-26 kernel) and 12.10 (3.5.0-4) x64 with llvm-3.1 backported from quantal, then from debian experimental. Next I tried 12.10 with the native Ubuntu llvm-3.1 package; this failed as well.

        user@system:/tmp/llvm-test# make
        compiling cpp yacc file: decaf-llvm.y
        output file: decaf-llvm
        bison -b decaf-llvm -d decaf-llvm.y
        /bin/mv -f decaf-llvm.tab.c decaf-llvm.tab.cc
        flex -odecaf-llvm.lex.cc decaf-llvm.lex
        g++ -o ./decaf-llvm decaf-llvm.tab.cc decaf-llvm.lex.cc decaf-stdlib.c `llvm-config --cppflags --ldflags --libs core jit native` -ly -ll
        /usr/lib/llvm-3.1/lib/libLLVMSupport.a(Signals.o): In function `PrintStackTrace(void*)':
        (.text+0x6c): undefined reference to `dladdr'
        /usr/lib/llvm-3.1/lib/libLLVMSupport.a(Signals.o): In function `PrintStackTrace(void*)':
        (.text+0x18f): undefined reference to `dladdr'
        collect2: error: ld returned 1 exit status
        make: *** [decaf-llvm] Error 1

    I know the code works, as I've run it fine in CentOS using llvm-3.1-6.fc18 (rpm). Google was a bit helpful with this: "On some systems, including Ubuntu 11.10, linking may fail with message that libLLVMSupport.a in function PrintStackTrace(void*) has undefined reference to dladdr." "Workaround is to compile LLVM with cmake specifying the following variable: -DCMAKE_EXE_LINKER_FLAGS=-ldl" http://svn.dsource.org/projects/bindings/trunk/llvm-3.0/Readme
    I double-checked my ldflags and everything seems OK.

        user@system:~# llvm-config --ldflags
        -L/usr/lib/llvm-3.1/lib -lpthread -lffi -ldl -lm

    I'm unclear what to do next; any suggestions?
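
    The usual explanation on Ubuntu (offered as the likely fix rather than a certainty) is that the toolchain passes --as-needed to the linker by default, so the -ldl that llvm-config emits before the LLVM archives is discarded before libLLVMSupport.a ever asks for dladdr. Repeating -ldl at the end of the link line normally resolves it:

        g++ -o ./decaf-llvm decaf-llvm.tab.cc decaf-llvm.lex.cc decaf-stdlib.c \
            `llvm-config --cppflags --ldflags --libs core jit native` -ly -ll -ldl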

    Read the article

  • Changing memory allocator to Jemalloc Centos 6

    - by Brian Lovett
    After reading this blog post about the impact of memory allocators like jemalloc on highly threaded applications, I wanted to test things on a larger scale on some of our cluster of servers. We run Sphinx and Apache using threads, on 24-core machines. Installing jemalloc was simple enough. We are running CentOS 6, so yum install jemalloc jemalloc-devel did the trick. My question is, how do we change everything on the system over to using jemalloc instead of the default malloc built into CentOS? Research pointed me at this as a potential option:

        LD_PRELOAD=$LD_PRELOAD:/usr/lib64/libjemalloc.so.1

    Would this be sufficient to get everything using jemalloc?
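
    LD_PRELOAD only affects processes started from the environment that exports it, so the variable has to reach each daemon you care about. Two common ways to do that (file paths are typical for CentOS 6, but worth verifying on your boxes):

        # Per-service (safer): the httpd init script on CentOS 6 sources this file before starting Apache
        echo 'export LD_PRELOAD=/usr/lib64/libjemalloc.so.1' >> /etc/sysconfig/httpd
        service httpd restart

        # System-wide (blunter): every dynamically linked program on the box picks the library up
        echo '/usr/lib64/libjemalloc.so.1' >> /etc/ld.so.preload

    A quick way to confirm a running process actually got it is to look for the library in its memory maps, e.g. grep jemalloc /proc/$(pidof searchd)/maps.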

    Read the article

  • HP Printer scrolls "printing" forever - but nothing happens

    - by brett80
    I got a used HP Officejet 7130 all-in-one printer. I tried printing a text document, and it shows up in the queue (and says "printing"), and the printer display says "Printing" "Press Cancel to cancel job." - and it cycles that text forever, but nothing ever happens (after a couple of initial quiet noises). I tried to print a test page from the printer, and I get the same "printing" message. I checked the HP website, and this problem is not on their list. (They are expecting a queue problem, error message, etc.) I did initially get a message that it needs new ink cartridges, but I pressed to get past that; it's supposed to try to print anyway. I don't want to buy ink cartridges for a printer that is broken. When I press Cancel it cancels correctly.

    Read the article

  • quick look at: dm_db_index_physical_stats

    - by fatherjack
    A quick look at the key data from this dmv that can help a DBA keep databases performing well and systems online as the users need them. When the dynamic management views relating to index statistics became available in SQL Server 2005 there was much hype about how they can help a DBA keep their servers running in better health than ever before. This particular view gives an insight into the physical health of the indexes present in a database. Whether they are in use or unused, complete or missing some columns is irrelevant; this is simply the physical stats of all indexes (disabled indexes are ignored, however). In its simplest form this dmv can be executed as:

        SELECT * FROM [sys].[dm_db_index_physical_stats](NULL, NULL, NULL, NULL, NULL)

    The results from executing this contain a record for every index in every database but some of the columns will be NULL. The first parameter is there so that you can specify which database you want to gather index details on, rather than scan every database. Simply specifying DB_ID() in place of the first NULL achieves this. In order to avoid the NULLs, or more accurately, in order to choose when to have the NULLs, you need to specify a value for the last parameter. It takes one of 4 values – DEFAULT, 'SAMPLED', 'LIMITED' or 'DETAILED'. If you execute the dmv with each of these values you can see some interesting details in the times taken to complete each step.

        DECLARE @Start DATETIME
        DECLARE @First DATETIME
        DECLARE @Second DATETIME
        DECLARE @Third DATETIME
        DECLARE @Finish DATETIME
        SET @Start = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, DEFAULT) AS ddips
        SET @First = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ddips
        SET @Second = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        SET @Third = GETDATE()
        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, 'DETAILED') AS ddips
        SET @Finish = GETDATE()
        SELECT DATEDIFF(ms, @Start, @First)  AS [DEFAULT] ,
               DATEDIFF(ms, @First, @Second) AS [SAMPLED] ,
               DATEDIFF(ms, @Second, @Third) AS [LIMITED] ,
               DATEDIFF(ms, @Third, @Finish) AS [DETAILED]

    Running this code will give you 4 result sets:
      - DEFAULT will have 12 columns full of data and then NULLs in the remainder.
      - SAMPLED will have 21 columns full of data.
      - LIMITED will have 12 columns of data and then NULLs in the remainder.
      - DETAILED will have 21 columns full of data.
    So, from this we can deduce that the DEFAULT value (the same one that is also applied when you query the view using a NULL parameter) is the same as using LIMITED. Viewing the final result set has some details that are worth noting: running queries against this view takes significantly longer when using the SAMPLED and DETAILED values in the last parameter. The duration of the query is directly related to the size of the database you are working in, so be careful running this on big databases unless you have tried it on a test server first. Let's look at the data we get back with the DEFAULT value first of all and then progress to the extra information later. We know that the first parameter that we supply has to be a database id, and for the purposes of this blog we will be providing that value with the DB_ID function. We could just as easily put a fixed value in there or a function such as DB_ID('AnyDatabaseName'). The first columns we get back are database_id and object_id.
    These are pretty self-explanatory, and we can wrap those in some code to make things a little easier to read:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] ,
               OBJECT_NAME([ddips].[object_id]) AS [TableName]
               …
        FROM   [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips

    which gives us readable database and table names, and joining to sys.indexes brings in the index name as well:

        SELECT DB_NAME([ddips].[database_id]) AS [DatabaseName] ,
               OBJECT_NAME([ddips].[object_id]) AS [TableName] ,
               [i].[name] AS [IndexName] ,
               …
        FROM   [sys].[dm_db_index_physical_stats](DB_ID(), NULL, NULL, NULL, NULL) AS ddips
        INNER JOIN [sys].[indexes] AS i ON [ddips].[index_id] = [i].[index_id]
                                       AND [ddips].[object_id] = [i].[object_id]

    These handily tie in with the next parameters in the query on the dmv. If you specify an object_id and an index_id in these then you get results limited to either the table or the specific index. Once again we can place a function in here to make it easier to work with a specific table, e.g.

        SELECT * FROM [sys].[dm_db_index_physical_stats](DB_ID(), OBJECT_ID('AdventureWorks2008.Person.Address'), 1, NULL, NULL) AS ddips

    Note: Despite me showing that functions can be placed directly in the parameters for this dmv, best practice recommends that functions are not used directly in the function as it is possible that they will fail to return a valid object ID. To be certain of not passing invalid values to this function, and therefore setting an automated process off on the wrong path, declare variables for the OBJECT_IDs and, once they have been validated, use them in the function:

        DECLARE @db_id SMALLINT;
        DECLARE @object_id INT;
        SET @db_id = DB_ID(N'AdventureWorks_2008');
        SET @object_id = OBJECT_ID(N'AdventureWorks_2008.Person.Address');
        IF @db_id IS NULL
        BEGIN
            PRINT N'Invalid database';
        END
        ELSE IF @object_id IS NULL
        BEGIN
            PRINT N'Invalid object';
        END
        ELSE
        BEGIN
            SELECT * FROM sys.dm_db_index_physical_stats(@db_id, @object_id, NULL, NULL, 'LIMITED');
        END;
        GO

    In cases where the results of querying this dmv don't have any effect on other processes (i.e. simply viewing the results in the SSMS results area), it will be noticed when the results are not consistent with the expected results, and in the case of this blog this is the method I have used. So, now that we can relate the values in these columns to something that we recognise in the database, let's see what those other values in the dmv are all about. The next columns are partition_number, index_type_desc, alloc_unit_type_desc, index_depth and index_level – we'll skip these as this is a quick look at the dmv and they are pretty self-explanatory. The final columns revealed by querying this view in the DEFAULT mode are:
      - avg_fragmentation_in_percent. This is the amount that the index is logically fragmented. It will show NULL when the dmv is queried in SAMPLED mode.
      - fragment_count. The number of pieces that the index is broken into. It will show NULL when the dmv is queried in SAMPLED mode.
      - avg_fragment_size_in_pages. The average size, in pages, of a single fragment in the leaf level of the IN_ROW_DATA allocation unit. It will show NULL when the dmv is queried in SAMPLED mode.
      - page_count. Total number of index or data pages in use.
    OK, so what does this give us? Well, there is an obvious correlation between fragment_count, page_count and avg_fragment_size_in_pages. We see that an index that takes up 27 pages and is in 3 fragments has an average fragment size of 9 pages (27/3=9).
    This means that for this index there are 3 separate places on the hard disk that SQL Server needs to locate and access to gather the data when it is requested by a DML query. If this index was bigger than 72KB then having its data in 3 pieces might not be too big an issue, as each piece would hold a significant amount of data to read and the speed of access would not be too poor. If the number of fragments increases then obviously the amount of data in each piece decreases, which means the amount of work the disks have to do in order to retrieve the data to satisfy the query increases, and this would start to decrease performance. This information can be useful to keep in mind when considering the value in the avg_fragmentation_in_percent column. This is arrived at by an internal algorithm that gives a value to the logical fragmentation of the index, taking into account the multiple files, type of allocation unit and the previously mentioned characteristics of index size (page_count) and fragment_count. Seeing an index with a high avg_fragmentation_in_percent value will be a call to action for a DBA that is investigating performance issues. It is possible that tables will have indexes that suffer from rapid increases in fragmentation as part of normal daily business and that regular defragmentation work will be needed to keep them in good order. In other cases indexes will rarely become fragmented and therefore not need rebuilding from one end of the year to the other. Keeping this in mind, DBAs need to use an 'intelligent' process that assesses key characteristics of an index and decides on the best defragmentation method, if any, to apply. There is a simple example of this in the sample code found in the Books Online content for this dmv, in example D. There are also a couple of very popular solutions created by SQL Server MVPs Michelle Ufford and Ola Hallengren which I would wholly recommend that you review for much further detail on how to care for your SQL Server indexes. Right, let's get back on track then. Querying the dmv with the fifth parameter value as 'DETAILED' takes longer because it goes through the index and refreshes all data from every level of the index. As this blog is only a quick look, we are going to skate right past ghost_record_count and version_ghost_record_count and discuss avg_page_space_used_in_percent, record_count, min_record_size_in_bytes, max_record_size_in_bytes and avg_record_size_in_bytes. We can see from the details below that there is a correlation between the columns marked. Column 1 (page_count) is the number of 8KB pages used by the index, column 2 is how full each page is (how much of the 8KB has actual data written on it), column 3 is how many records are recorded in the index and column 4 is the average size of each record. This approximates to: ((Col1*8) * 1024 * (Col2/100)) / Col3 = Col4*.
      - avg_page_space_used_in_percent is an important column to review, as it indicates how much of the disk that has been given over to the storage of the index actually has data on it. This value is affected by the value given for the FILL_FACTOR parameter when creating an index.
      - avg_record_size_in_bytes is important as you can use it to get an idea of how many records are in each page and therefore in each fragment, thus reinforcing how important it is to keep fragmentation under control.
      - min_record_size_in_bytes and max_record_size_in_bytes are exactly as their names set them out to be: a detail of the smallest and largest records in the index.
    These are purely offered as a guide to the DBA to better understand the storage practices taking place. So, keeping an eye on avg_fragmentation_in_percent will ensure that your indexes are helping data access processes take place as efficiently as possible. Where fragmentation recurs frequently, the DBA should potentially consider:
      - the fill_factor of the index, in order to leave space at the leaf level so that new records can be inserted without causing fragmentation so rapidly;
      - whether the columns used in the index should be analysed to avoid new records needing to be inserted in the middle of the index, but rather always be added to the end.
    * – it's approximate as there are many factors associated with things like the type of data and other database settings that affect this slightly.
    Another great resource for working with SQL Server DMVs is Performance Tuning with SQL Server Dynamic Management Views by Louis Davidson and Tim Ford – a free ebook or paperback from Simple Talk.
    Disclaimer – Jonathan is a Friend of Red Gate and as such, whenever they are discussed, will have a generally positive disposition towards Red Gate tools. Other tools are often available and you should always try others before you come back and buy the Red Gate ones. All code in this blog is provided "as is" and no guarantee, warranty or accuracy is applicable or inferred; run the code on a test server and be sure to understand it before you run it on a server that means a lot to you or your manager.
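
    As a concrete illustration of the kind of 'intelligent' check described above (a simplified sketch using the commonly quoted 5% / 30% thresholds - not a replacement for the Ufford or Hallengren solutions mentioned earlier), the dmv can drive a simple "what would I do to each index" report:

        SELECT  DB_NAME(ddips.database_id) AS DatabaseName ,
                OBJECT_NAME(ddips.object_id) AS TableName ,
                i.name AS IndexName ,
                ddips.avg_fragmentation_in_percent ,
                ddips.page_count ,
                CASE
                    WHEN ddips.page_count < 1000 THEN 'Leave alone (too small to matter)'
                    WHEN ddips.avg_fragmentation_in_percent > 30 THEN 'ALTER INDEX ... REBUILD'
                    WHEN ddips.avg_fragmentation_in_percent > 5  THEN 'ALTER INDEX ... REORGANIZE'
                    ELSE 'No action'
                END AS SuggestedAction
        FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ddips
        INNER JOIN sys.indexes AS i
                ON i.object_id = ddips.object_id
               AND i.index_id = ddips.index_id
        WHERE   ddips.index_id > 0;   -- ignore heaps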

    Read the article

  • Upgraded from VS 2008 -> VS 2010. Can't Connect to SQL Server in Staging Environment

    - by Bob Kaufman
    I have a test application written in C#/ASP.NET that I've developed using Visual Studio 2008 Professional/.NET 3.5 which connects to a local SQL Server 2008 Express instance. I upgraded the development machine to Visual Studio 2010 Professional maintaining .NET 3.5 and everything in the development environment continues to work correctly. Upon deployment of the new app to an internal staging machine, that app cannot connect to its local SQL Server 2008 Express database. I get the customary "server not found" error: A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible... Does something need to be upgraded on the staging machine to be able to host a Visual Studio 2010/.NET 3.5 application?

    Read the article

  • LDAP over SSL/TLS working for everything but login on Ubuntu

    - by Oliver Nelson
    I have gotten OpenLDAP with SSL working on a test box with a signed certificate. I can use an LDAP tool on a Windows box to view the LDAP over SSL (port 636). But when I run dpkg-reconfigure ldap-auth-config to set up my local login to use ldaps, logging in with a username from the directory doesn't work. If I change the config to use just plain ldap (port 389) it works fine (I can log in with a username from the directory). When it's set up for ldaps, auth.log shows:

        Sep 5 13:48:27 boromir sshd[13453]: pam_ldap: ldap_simple_bind Can't contact LDAP server
        Sep 5 13:48:27 boromir sshd[13453]: pam_ldap: reconnecting to LDAP server...
        Sep 5 13:48:27 boromir sshd[13453]: pam_ldap: ldap_simple_bind Can't contact LDAP server

    I will provide whatever details are needed. I'm not sure what else to include. Thanx for any insights... OLIVER
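
    One thing worth checking (a hedged guess based on "Can't contact LDAP server" coming from pam_ldap while other clients work): pam_ldap and nss_ldap read /etc/ldap.conf, not /etc/ldap/ldap.conf, and they verify the server certificate themselves, so the URI hostname must match the certificate and the CA certificate must be configured in that file too. An illustrative fragment (values are placeholders):

        # /etc/ldap.conf - read by pam_ldap/nss_ldap
        base dc=example,dc=com
        uri ldaps://ldap.example.com:636/        # must be the hostname in the certificate, not an IP
        ssl on
        tls_cacertfile /etc/ssl/certs/my-ca.crt  # CA that signed the LDAP server certificate
        # tls_checkpeer no                       # only as a temporary test, to rule out certificate verification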

    Read the article

  • Linux: Combine two partitions?

    - by Jakobud
    This workstation is running Fedora 11. It has 4 HDDs raided into 4 partitions:
      /     (31 gig)
      /boot (134 meg)
      /data (140 gig)
      /FC12 (31 gig)
    The previous employee that used my current workstation set it up this way. He apparently created the FC12 partition to test a Fedora 12 installation. I don't need Fedora 12, so I wiped that partition and now I'm wondering if it's possible for me to combine the /FC12 partition into the / partition, so that the / partition will be 62 gigs. Is this possible? If so, how? Can it be done w/o reinstalling the OS? I've toyed with Fedora's LVM admin interface but it seems very basic and there doesn't seem to be anything about combining partitions. I've also messed with other HDD utilities in Fedora (Palimpsest Disk Utility) but all they seem to be able to do is mount and umount partitions.
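
    If / and /FC12 are in fact logical volumes in the same volume group (which the mention of the LVM admin interface suggests), the space can be folded back into / without a reinstall. A sketch of the idea - the volume group and LV names here are guesses, so check yours with lvs and vgs first:

        umount /FC12                          # and remove its entry from /etc/fstab
        lvremove /dev/VolGroup00/FC12         # returns the 31 GB to the volume group
        lvextend -l +100%FREE /dev/VolGroup00/root
        resize2fs /dev/VolGroup00/root        # ext3/ext4 filesystems can be grown while mounted

    If they are plain disk partitions rather than logical volumes, it becomes a GParted resize job instead, and that only works if the two partitions are adjacent on the same disk.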

    Read the article

  • Running CRON job on Ubuntu server for SugarCRM

    - by Logik
    I am pretty inexperienced in Linux, so be descriptive in your answer. My environment: local Ubuntu 12.04 server hosting SugarCRM 6.5.2. There is an area in SugarCRM called Scheduler where I can configure some predefined jobs; in my case I am trying to run email reminders (every min/hour/day/month). For this scheduler to be effective, I read somewhere that I need to set up a cron job. So I did some research and finally put the following line in the crontab for the root user, as per the instructions given in SugarCRM:

        * * * * * cd /var/www/crm; php -f cron.php > /dev/null 2>&1

    Well, I am creating contracts in my SugarCRM (AOS module) and I want email reminders for these contracts to be sent to the person concerned. Now my SugarCRM email is configured correctly and I can send test emails using it. But the cron + scheduler combination is not giving any result; I can't receive any emails. Then I tried to read /var/log/syslog and it shows an entry for the following line each minute:

        Oct 27 15:03:01 unicomm CRON[28182]: (root) CMD (cd /var/www/crm; php -f cron.php > /dev/null 2>&1)

    I've a few questions:
      1. What does the cron job line I've added in crontab mean? cd /var/www/crm; php -f cron.php > /dev/null 2>&1 is not making any sense to me.
      2. How am I supposed to get this thing to work? I've searched a lot (including the SugarCRM forum), but no luck.
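
    Taking the first question directly: the five asterisks mean "every minute of every hour, every day"; cd /var/www/crm; php -f cron.php changes into the SugarCRM directory and runs its scheduler entry point; and > /dev/null 2>&1 throws away everything the script prints, errors included. That last part is also why failures are invisible, so a useful first debugging step is to keep the output somewhere (the log path below is just an example):

        # Same job, but append output to a log so scheduler errors become visible
        * * * * *  cd /var/www/crm && php -f cron.php >> /var/log/sugarcrm-cron.log 2>&1

    With the log in place, the Schedulers admin page's last-run times usually make it clear whether cron is firing at all or whether the email reminder job itself is failing.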

    Read the article

  • No input file specified on nginx and php-cgi

    - by Sandeep Bansal
    I'm trying to run nginx and php-cgi on my Windows PC. I've got to the point where I want to move the html directory back two directories so I can create a bit of structure. The only problem I have now is that PHP doesn't pick up any .php file. I have tried loading a static html file (localhost/test.html) and it works fine, but localhost/info.php doesn't work at all. Can anyone give me some guidance on this? The relevant part of the server block can be found below.

        server {
            listen 80;
            server_name localhost;
            root ../../www;
            index index.html index.htm index.php;

            #charset koi8-r;
            #access_log logs/host.access.log main;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9123;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }

    Thanks
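
    A likely culprit (a guess consistent with the symptom, not a confirmed diagnosis): "No input file specified" is php-cgi saying it could not open the script path it was handed, and with root ../../www the $document_root passed in SCRIPT_FILENAME is a relative path that php-cgi resolves from its own working directory on Windows. Using an absolute document root is the usual cure - an illustrative variant with a made-up path:

        server {
            listen 80;
            server_name localhost;
            root C:/projects/www;              # absolute path instead of ../../www
            index index.html index.htm index.php;

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9123;
                fastcgi_index index.php;
                # php-cgi now receives an absolute path it can actually open
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }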

    Read the article

  • Optimizing PHP<>MySQL performance

    - by BarsMonster
    I am trying to optimize my PHP<>MySQL performance with this test script:

        <?
        for($i=0;$i<100;$i++) // iterations count
            $res .= var_dump(loadRow("select body_ru from articles where id>$i*50 limit 100"));
        print_r($res);
        ?>

    I have APC, and the articles table has an index on id. Also, all these queries are hitting the query cache, so MySQL performance on its own is great. But when I use ab -c 10 -t 10 to bench this script, I am getting:
      100 iterations: ~100 req/sec (~10,000 MySQL queries per second)
      5 iterations: ~200 req/sec
      1 iteration: ~380 req/sec
      0 iterations: ~580 req/sec
    I've tried to disable persistent connections in PHP - it made it a bit slower. So, how can I make it work faster, provided that MySQL is not limiting performance here?
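
    Before tuning anything it may be worth splitting the measurement, since the gap between 0 and 100 iterations bundles together connection overhead, the 100 round trips and the PHP-side fetch/var_dump work. A small profiling sketch (loadRow is the question's own helper, assumed to return the result set):

        <?php
        $queryTime = 0.0;
        $start = microtime(true);
        for ($i = 0; $i < 100; $i++) {
            $t = microtime(true);
            $rows = loadRow("select body_ru from articles where id>" . ($i * 50) . " limit 100");
            $queryTime += microtime(true) - $t;
        }
        $total = microtime(true) - $start;
        // Shows how much of the request is spent inside MySQL round trips versus the rest of PHP
        printf("total %.4fs, inside loadRow %.4fs, PHP-side %.4fs\n", $total, $queryTime, $total - $queryTime);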

    Read the article
