Search Results

Search found 1714 results on 69 pages for 'optimizer hints'.

Page 9/69

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator.

    That behavior means that if you choose a degree of parallelism of 4 to run such a query, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see 2 times the processes of the DOP.

    In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (the sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX processes and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot...

    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here: ODTUG in Washington DC, June 27 - July 1; on the Optimizer blog; at OpenWorld in San Francisco, September 19 - 23. Happy joining and hope to see you all at ODTUG and OOW...
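
    A rough sketch of how you might ask for this plan shape yourself - assuming SALES and CUSTOMERS are equipartitioned (e.g. both hash partitioned on CUST_ID; the table and column names are illustrative, in the style of the sample SH schema, not taken from the post) - is the PQ_DISTRIBUTE hint with NONE for both distributions:

        -- Request a full partition-wise join: no redistribution of either
        -- row source, so a single PX process set joins matching partition pairs.
        select /*+ parallel(4) pq_distribute(c none none) */
               s.amount_sold, c.cust_name
        from   sales s, customers c
        where  s.cust_id = c.cust_id;

    If the hint is honoured, the plan should show a PX PARTITION iterator above a single hash join, with no PX SEND / PX RECEIVE pair between the two table scans - in other words, no separate consumer set.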

    Read the article

  • ... I just avoid GUID

    - by Tomaz.tsql
    Our partner was explaining to me that they are using GUIDs as the primary key on all the tables. My immediate reaction was: why? A couple of basic doubts were:
    - even when I can read a uniqueidentifier, it tells me absolutely nothing
    - when I query my relational table, I will surely use other columns to get the information out
    - SQL Server performs terribly with a clustered index on GUID columns (and hence performance problems)
    - why not use INT? It will save you space on disk, and the optimizer will be able...(read more)
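
    A small T-SQL sketch of the INT-versus-GUID trade-off the post hints at (the table names are invented for illustration):

        -- Random GUIDs as the clustered key: 16 bytes wide, and every insert
        -- lands at a random page, causing page splits and fragmentation.
        CREATE TABLE dbo.OrdersGuid (
            id uniqueidentifier NOT NULL DEFAULT NEWID() PRIMARY KEY CLUSTERED,
            payload varchar(100) NOT NULL
        );

        -- An INT IDENTITY key is 4 bytes and always inserts at the end.
        CREATE TABLE dbo.OrdersInt (
            id int IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED,
            payload varchar(100) NOT NULL
        );

        -- If a GUID is genuinely required, NEWSEQUENTIALID() (usable only as a
        -- column default) at least keeps the inserts in increasing order:
        -- id uniqueidentifier NOT NULL DEFAULT NEWSEQUENTIALID()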

    Read the article

  • The blocking nature of aggregates

    - by Rob Farley
    I wrote a post recently about how query tuning isn’t just about how quickly the query runs – that if you have something (such as SSIS) that is consuming your data (and probably introducing a bottleneck), then it might be more important to have a query which focuses on getting the first bit of data out. You can read that post here. In particular, we looked at two operators that could be used to ensure that a query returns only Distinct rows: the Sort operator and the Hash Match operator. The Sort operator pulls in all the data, sorts it (discarding duplicates), and then pushes out the remaining rows. The Hash Match operator performs a Hashing function on each row as it comes in, and then looks to see if it’s created a Hash it’s seen before. If not, it pushes the row out. The Sort method is quicker, but has to wait until it’s gathered all the data before it can do the sort, and therefore blocks the data flow.

    But that was my last post. This one’s a bit different. This post is going to look at how Aggregate functions work, which ties nicely into this month’s T-SQL Tuesday. I’ve frequently explained that DISTINCT and GROUP BY are essentially the same function, although DISTINCT is the poorer cousin because you have less control over it and you can’t apply aggregate functions. Just like the operators used for Distinct, there are different flavours of Aggregate operators – coming in blocking and non-blocking varieties.

    The example I like to use to explain this is a pile of playing cards. If I’m handed a pile of cards and asked to count how many cards there are in each suit, it’s going to help if the cards are already ordered. Suppose I’m playing a game of Bridge; I can easily glance at my hand and count how many there are in each suit, because I keep the pile of cards in order. Moving from left to right, I could tell you I have four Hearts in my hand, even before I’ve got to the end. By telling you that I have four Hearts as soon as I know, I demonstrate the principle of a non-blocking operation. This is known as a Stream Aggregate operation. It requires input which is sorted by whichever columns the grouping is on, and it will release a row as soon as the group changes – when I encounter a Spade, I know I don’t have any more Hearts in my hand. Alternatively, if the pile of cards is not sorted, I won’t know how many Hearts I have until I’ve looked through all the cards. In fact, to count them, I basically need to put them into little piles, and when I’ve finished making all those piles, I can count how many there are in each. Because I don’t know any of the final numbers until I’ve seen all the cards, this is blocking. This performs the aggregate function using a Hash Match.

    Observant readers will remember this from my Distinct example. You might remember that my earlier Hash Match operation – used for Distinct Flow – wasn’t blocking. But this one is. They’re essentially doing a similar operation, applying a Hash function to some data and seeing if the set of values has been seen before, but this one needs more information than the mere existence of a new set of values; it needs to consider how many of them there are. A lot depends here on whether the data coming out of the source is sorted or not, and this is largely determined by the indexes that are being used. If you look in the Properties of an Index Scan, you’ll be able to see whether the order of the data is required by the plan. A property called Ordered will demonstrate this.

    In this particular example, the second plan is significantly faster, but is dependent on having ordered data. In fact, if I force a Stream Aggregate on unordered data (which I’m doing by telling it to use a different index), a Sort operation is needed, which makes my plan a lot slower. This is all very straightforward stuff, and information that most people are fully aware of. I’m sure you’ve all read my good friend Paul White (@sql_kiwi)’s post on how the Query Optimizer chooses which type of aggregate function to apply.

    But let’s take a look at SQL Server Integration Services. SSIS gives us an Aggregate transformation for use in Data Flow Tasks, but it’s described as Blocking. The definitive article on Performance Tuning SSIS uses Sort and Aggregate as examples of Blocking Transformations. I’ve just shown you that Aggregate operations used by the Query Optimizer are not always blocking, but that the SSIS Aggregate component is an example of a blocking transformation. But is that always the case? After all, there are plenty of SSIS Performance Tuning talks out there that describe the value of sorted data in Data Flow Tasks, describing the IsSorted property that can be set through the Advanced Editor of your Source component.

    And so I set about testing the Aggregate transformation in SSIS, to prove for sure whether providing Sorted data would let the Aggregate transform behave like a Stream Aggregate. (Of course, I knew the answer already, but it helps to be able to demonstrate these things.) A query that will produce a million rows in order was in order. Let me rephrase. I used a query which produced the numbers from 1 to 1000000, in a single field, ordered. The IsSorted flag was set on the source output, with the only column as SortKey 1. Performing an Aggregate function over this (counting the number of rows per distinct number) should produce an additional column with 1 in it. If this were being done in T-SQL, the ordered data would allow a Stream Aggregate to be used. In fact, if the Query Optimizer saw that the field had a Unique Index on it, it would be able to skip the Aggregate function completely and just insert the value 1. This is a shortcut I wouldn’t be expecting from SSIS, but certainly the Stream behaviour would be nice.

    Unfortunately, it’s not the case. As you can see from the screenshots above, the data is pouring into the Aggregate function, and not being released until all million rows have been seen. It’s not doing a Stream Aggregate at all. This is expected behaviour. (I put that in bold, because I want you to realise this.) An SSIS transformation is a piece of code that runs. It’s a physical operation. When you write T-SQL and ask for an aggregation to be done, it’s a logical operation. The physical operation is either a Stream Aggregate or a Hash Match. In SSIS, you’re telling the system that you want a generic Aggregation that will have to work with whatever data is passed in.

    I’m not saying that it wouldn’t be possible to make a sometimes-blocking aggregation component in SSIS. A Custom Component could be created which could detect whether the SortKeys columns of the input matched the Grouping columns of the Aggregation, and either call the blocking code or the non-blocking code as appropriate. One day I’ll make one of those and publish it on my blog. I’ve done it before with a Script Component, but as Script components are single-use, I was able to handle the data knowing everything about my data flow already.

    As per my previous post – there are a lot of aspects in which tuning SSIS and tuning execution plans use similar concepts. In both situations, it really helps to have a feel for what’s going on behind the scenes. Considering whether an operation is blocking or not is extremely relevant to performance, and it’s not always obvious from the surface. In a future post, I’ll show the impact of blocking v non-blocking and synchronous v asynchronous components in SSIS, using some of LobsterPot’s Script Components and Custom Components as examples. When I get that sorted, I’ll make a Stream Aggregate component available for download.
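
    A small T-SQL sketch of that logical-versus-physical distinction (dbo.Numbers and its clustered index are invented for illustration): the documented ORDER GROUP and HASH GROUP query hints let you steer the optimizer toward one physical aggregate or the other.

        -- With input ordered by n (say, a clustered index on n), ORDER GROUP
        -- asks for the non-blocking Stream Aggregate:
        SELECT n, COUNT(*) AS cnt
        FROM dbo.Numbers
        GROUP BY n
        OPTION (ORDER GROUP);

        -- HASH GROUP asks for the blocking Hash Match Aggregate, which builds
        -- all of its groups before releasing any rows:
        SELECT n, COUNT(*) AS cnt
        FROM dbo.Numbers
        GROUP BY n
        OPTION (HASH GROUP);

    Comparing the two actual plans - and how soon the first rows appear - demonstrates exactly the blocking difference the playing-card analogy describes.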

    Read the article

  • How does the Oracle optimizer decide on an execution plan? ~ A look inside query optimization

    - by Yusuke.Yamamoto
    In an RDBMS, the optimizer decides, for every SQL statement, how that statement should be executed, and the quality of that decision has an enormous impact on query performance. So how does the Oracle Database optimizer actually arrive at an execution plan? Broadly speaking, optimization of a SQL statement happens in two phases.

    1. Query Transformation
    Query Transformation rewrites the submitted SQL statement into a semantically equivalent statement that is easier to optimize. Query Transformation covers Predicate Transformation as well as Common Sub-expression Elimination (CSE), Order-By Elimination (OBYE), Outer Join Elimination (OJE), Simple View Merging (SVM), Predicate Move Around (PM), Complex View Merging (CVM), Sub-query Unnesting (SU), Join Predicate Push Down (JPPD), OR Expansion, Star Transformation (ST), and more. A representative example of Predicate Transformation is Transitive Predicate Generation. Consider the following SQL statement, which retrieves the employees belonging to department 10:

        select e.ename, d.loc from emp e, dept d where e.deptno=d.deptno and e.deptno=10;

    As written, only the emp table carries the predicate deptno=10; there is no predicate d.deptno=10 on the dept table. If emp contains 10 rows with deptno=10 and dept contains 20 rows, the join has to consider 10 x 20 = 200 row combinations (leaving access paths and join methods aside for the moment). With Transitive Predicate Generation, the optimizer internally rewrites the statement as:

        select e.ename, d.loc from emp e, dept d where e.deptno=d.deptno and e.deptno=10 and d.deptno=10;
                                                                                           ^^^^^^^^^^^

    Because dept can now be filtered on deptno=10 before the join, and dept.deptno is unique, only 10 x 1 = 10 combinations remain - 1/20th of the work. Note also that once dept has been reduced to a 1-row table, the optimizer can choose dept as the driving table (Outer Table) and emp as the probe table (Inner Table), again evaluating only 1 x 10 = 10 combinations. Either way, Query Transformation has turned the statement into an equivalent one that is considerably cheaper to execute, and this is just one of many such transformations.

    2. Access Path Analysis
    Access Path Analysis takes the transformed SQL statement and determines the access path for each table (full table scan (FTS), access by ROWID, the various index scans, and so on), the join method (Nested Loop Join, Hash Join, Sort Merge Join), and the join order. Within Oracle Database, the component that performs Query Transformation is called the Logical Optimizer, and the component that performs Access Path Analysis is called the Physical Optimizer. Together they decide the execution plan, which is why understanding these two phases goes a long way toward understanding why the optimizer picked a particular plan. This material is based on the Oracle internals seminar series delivered by Oracle's Sustaining Engineering team.
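
    A quick way to see Transitive Predicate Generation for yourself - assuming the classic SCOTT emp/dept demo tables - is to explain the original statement and read the predicate section of the plan output; a minimal sketch:

        explain plan for
          select e.ename, d.loc
          from   emp e, dept d
          where  e.deptno = d.deptno
          and    e.deptno = 10;

        select * from table(dbms_xplan.display);
        -- The "Predicate Information" section should show a filter/access
        -- predicate D.DEPTNO = 10 even though the query never wrote it.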

    Read the article

  • Automatic Maintenance Jobs in every PDB? New SPM Evolve Advisor Task in Oracle 12.1.0.2

    - by Mike Dietrich
    A customer checking out our slides from the OTN Tour in August 2014 asked me a finicky question the other day: "According to the documentation the Automatic SQL Tuning Advisor maintenance task gets executed only within the CDB$ROOT, but not within each PDB - but the slides are not clear here. So what is the truth?" Ok, that's a good question. In my understanding all tasks will get executed within each PDB - that's why we recommend (based on experience) breaking up the default maintenance windows when using Oracle Multitenant. Otherwise all PDBs will have the same maintenance windows, and guess what will happen when 25 PDBs start gathering object statistics at the same time ...

    The documentation indeed says: Automatic SQL Tuning Advisor data is stored in the root. It might have results about SQL statements executed in a PDB that were analyzed by the advisor, but these results are not included if the PDB is unplugged. A common user whose current container is the root can run SQL Tuning Advisor manually for SQL statements from any PDB. When a statement is tuned, it is tuned in any container that runs the statement. This sounds reasonable. But when we have a look into our PDBs, or into the CDB_AUTOTASK_CLIENT view, the result is different from what the doc says. In my environment I created just two fresh empty PDBs (CON_ID 3 and 4):

        SQL> select client_name, status, con_id from cdb_autotask_client;

        CLIENT_NAME                           STATUS      CON_ID
        ------------------------------------- ---------- -------
        auto optimizer stats collection       ENABLED          1
        sql tuning advisor                    ENABLED          1
        auto space advisor                    ENABLED          1
        auto optimizer stats collection       ENABLED          4
        sql tuning advisor                    ENABLED          4
        auto space advisor                    ENABLED          4
        auto optimizer stats collection       ENABLED          3
        sql tuning advisor                    ENABLED          3
        auto space advisor                    ENABLED          3

        9 rows selected.

    I haven't verified the reason why this is different from the docs, but it may be related to one change in Oracle Database 12.1.0.2: the new SPM Evolve Advisor Task (SYS_AUTO_SPM_EVOLVE_TASK) for automatic plan evolution with SQL Plan Management. This new task doesn't appear as a stand-alone job (client) in the maintenance window but runs as a sub-entity of the Automatic SQL Tuning Advisor task. And (I'm just guessing) this may be one of the reasons why every PDB has to have its own Automatic SQL Tuning Advisor task. Here you'll find more information about how to enable, disable and configure the new Oracle 12.1.0.2 SPM Evolve Advisor Task: Oracle Database 12.1.0.2 SQL Tuning Guide: Managing the SPM Evolve Advisor Task. -Mike
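
    For reference, the task is configured through the DBMS_SPM package; a minimal sketch, assuming suitable privileges (the ACCEPT_PLANS value shown is an example, not a recommendation):

        -- Inspect the current configuration of the automatic evolve task:
        select parameter_name, parameter_value
        from   dba_advisor_parameters
        where  task_name = 'SYS_AUTO_SPM_EVOLVE_TASK';

        -- Example: let the task automatically accept verified plans:
        begin
          dbms_spm.set_evolve_task_parameter(
            task_name => 'SYS_AUTO_SPM_EVOLVE_TASK',
            parameter => 'ACCEPT_PLANS',
            value     => 'TRUE');
        end;
        /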

    Read the article

  • CLR JIT Bugs Found During IKVM.NET Development

    "It is actually fairly common that people notice that things fail under retail but not debug and tend to blame code generation. While a code generation bug is possible, as a matter of statistics, it is not likely." -- Vance MorrisonDateCLRArchTypeDescription2010-06-12 v4 x64 Incorrect code Optimizer incorrectly propagates invariants.2010-06-04 v2, v4 x86 Crash ...Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • The Seven Sins against T-SQL Performance

    There are seven common antipatterns in T-SQL coding that make code perform badly, and three good habits which will generally ensure that your code runs fast. If you learn nothing else from this list of great advice from Grant, just keep in mind that you should 'write for the optimizer'.

    Read the article

  • WSS - Server Error in "/" Application. Compilation Error Message: CS1006: Could not write to output

    - by ptahiliani
    I got the above error when I tried to run the WSS default site after installing and running Advance System Optimizer 3.0. I resolved this by going to the following locations and adding permissions for the admin user accounts (ASP.NET & IIS_WPG) I had set up for SharePoint:
    C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\Temporary ASP.NET Files
    C:\WINDOWS\System 32\Log Files
    C:\WINDOWS\Temp
    After the correct permissions have been added, SharePoint works as normal.

    Read the article

  • Why the SQL Server FORCESCAN hint exists

    It is often generalized that seeks are better than scans in terms of retrieving data from SQL Server. The index hint FORCESCAN was recently introduced so that you could coerce the optimizer to perform a scan instead of a seek. Which might lead you to wonder: why would I ever want a scan instead of a seek?
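
    For reference, FORCESCAN is a table hint (available from SQL Server 2008 R2 SP1 onwards), so a minimal usage sketch looks like this - dbo.Orders and the index name are invented for illustration:

        -- Coerce a scan where the optimizer might otherwise pick a seek:
        SELECT OrderID, OrderDate, CustomerID
        FROM dbo.Orders WITH (FORCESCAN)
        WHERE OrderDate >= '20130101';

        -- It can be combined with an INDEX hint to scan a specific index:
        SELECT OrderID, OrderDate, CustomerID
        FROM dbo.Orders WITH (FORCESCAN, INDEX (IX_Orders_OrderDate))
        WHERE OrderDate >= '20130101';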

    Read the article

  • SQL Server Prefetch and Query Performance

    Prefetching can make a surprising difference to SQL Server query execution times where there is a high incidence of waiting for disk i/o operations, but the benefits come at a cost. Mostly, the Query Optimizer gets it right, but occasionally there are queries that would benefit from tuning.

    Read the article

  • SEO For Lawyers

    There are a number of lawyers who have good websites too, and if you want your own website to do well against theirs, you had better get a good SEO professional to help you. It is not difficult to hire a search engine optimizer, and once you do so, you will notice the positive difference.

    Read the article

  • My Oracle Support: recommended documents for performance issues

    - by Dongwei Wang
    Based on the performance-related service requests we handle, here are some of the My Oracle Support (MOS) knowledge documents (in English) that we refer customers to most often:
    Note 62143.1 - Troubleshooting: Tuning the Shared Pool and Tuning Library Cache Latch Contention
    Note 376442.1 - * How To Collect 10046 Trace (SQL_TRACE) Diagnostics for Performance Issues
    Note 749227.1 - * How to Gather Optimizer Statistics on 11g
    Note 1359094.1 - FAQ: How to Use AWR Reports to Diagnose Database Performance Issues
    Note 1320966.1 - Things to Consider Before Upgrading to 11.2.0.2 to Avoid Poor Performance or Wrong Results
    Note 1392633.1 - Things to Consider Before Upgrading to 11.2.0.3 to Avoid Poor Performance or Wrong Results
    In MOS you can add these documents to your favorites, download them (as PDF), and use the "Rate this document" link to leave feedback.
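
    As a companion to Note 749227.1, a minimal sketch of gathering optimizer statistics on 11g with DBMS_STATS (schema and table here are the demo SCOTT.EMP; adjust to your own objects):

        begin
          dbms_stats.gather_table_stats(
            ownname          => 'SCOTT',
            tabname          => 'EMP',
            estimate_percent => dbms_stats.auto_sample_size,  -- recommended default on 11g
            cascade          => true);                        -- also gather index statistics
        end;
        /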

    Read the article

  • Effective SEO Strategies For Better Search Rankings

    The development of a successful SEO campaign depends entirely on having well-researched, effective SEO strategies for the website. As a search engine optimizer, you need to figure out how to progress with search engine optimization at various stages to gain optimal results.

    Read the article

  • PostgreSQL timezone does not match system timezone

    - by Martin C.
    I have several PostgreSQL 9.2 installations where the timezone used by PostgreSQL is GMT, despite the entire system being "Europe/Vienna". I double-checked that postgresql.conf does not contain a timezone setting, so according to the documentation it should fall back to the system's timezone. However:

        # su -s /bin/bash postgres -c "psql mydb"
        mydb=# show timezone;
         TimeZone
        ----------
         GMT
        (1 row)

        mydb=# select now();
                      now
        -------------------------------
         2013-11-12 08:14:21.697622+00
        (1 row)

    Any hints where the GMT timezone could come from? The system user does not have TZ set, and /etc/timezone and /etc/timeinfo seem to be configured correctly.

        # cat /etc/timezone
        Europe/Vienna
        # date
        Tue Nov 12 09:15:42 CET 2013

    Any hints are appreciated, thanks in advance!
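
    Not an answer from the thread, but a sketch of how the effective setting can be pinned down and overridden with standard PostgreSQL commands ('mydb' is the database from the question):

        -- Where is the current value coming from (configuration file, default, ...)?
        SELECT name, setting, source
        FROM pg_settings
        WHERE name = 'TimeZone';

        -- Override explicitly instead of relying on the fallback:
        ALTER DATABASE mydb SET timezone TO 'Europe/Vienna';
        -- (On 9.2 the cluster-wide value is set via timezone = 'Europe/Vienna'
        -- in postgresql.conf followed by a reload; ALTER SYSTEM is 9.4+.)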

    Read the article

  • Windows Server 2008 R2 DNS Server Intermittently Unresponsive

    - by Ablue
    Throughout the day our DNS servers (2x Win 2k8 R2 servers) are unable to respond to requests. The requests that fail are all for the . (root) zone, and are either cached or obtained from one of the 5 DNS servers we forward to before going to root hints. At first I thought the DNS servers we were forwarding to were flaky, so I added some more in. Currently the forwarding list looks like:
    ISP DNS 1
    OPEN DNS 1
    ISP DNS 2
    OPEN DNS 2
    ISP DNS 3
    I have tried:
    - Turning off root hints.
    - Setting record scavenging to 7 days.
    - Using dnscmd /config /EnableEDNSProbes 0 as per this.
    Packet capture at the DNS server shows a lot of query responses with "server failure" between LAN clients and the local DNS server; it does not appear to be forwarding those requests. So maybe a problem with caching? Anyhow, does anyone have anything I can try to get this working?

    Read the article

  • Notes for a NetBeans IDE 7.4 HTML5 Screencast

    - by Geertjan
    I'm making a screencast that intends to thoroughly introduce NetBeans IDE 7.4 as a tool for HTML, JavaScript, and CSS developers. Here's the current outline, additions and other suggestions are welcome. Getting Started Downloading NetBeans IDE for HTML5 and PHP Examining the NetBeans installation directory, especially netbeans.conf Examining the NetBeans user directory Command line options for starting NetBeans IDE Exploring NetBeans IDE Menus and toolbars Versioning tools Options Window Go through whole Options window Change look and feels Adding themes Syntax coloring Code templates Plugin Manager and Plugin Portal Dark Look and Feel Themes Toggle line wrap Emmet HTML Tidy NetBeans Cheat Sheets Creating HTML5 projects From scratch From online template, e.g., Twitter Bootstrap From ZIP file From folder on disk From sample Editing Useful shortcuts Alt-Enter: see the current hints Alt-Shift-DOT/COMMA: expand selection (CTRL instead of Alt on Mac) Ctrl-Shift-Up/Down: copy up/down Alt-Shift-Up/Down: move up/down Alt-Insert: generate code (Lorum Ipsum) View menu | Show Non-printable Characters Source menu Show keyboard shortcut card Useful hints Surround with Tag Remove Surrounding Tag Useful code completion Link tag for CSS, show completion Script tag for JavaScript, show completion Create code templates in Options window Useful HTML Palette items Unordered List Link Useful code navigation Navigator Navigate menu Useful project settings Project-level deployment settings CSS Preprocessors (SASS/LESS) Cordova support Useful window management Dragging, minimizing, undocking Ctrl-Shift-Enter: distraction-free mode Alt-Shift Enter: maximization Debugging JavaScript debugger Deploying Embedded browser Responsive design Inspect in NetBeans mode Chrome browser with NetBeans plugin Android and iOS browsers Cordova makes native packages On device debugging On device styling Documentation PHP and HTML5 Learning Trail: https://netbeans.org/kb/trails/php.html Contributing Social Media: Twitter, Facebook, blogs Plugin Portal Planning to complete the above screencast this week, will continue editing this page as more useful features arise in my mind or hopefully in the comments in this blog entry!

    Read the article

  • White box testing with Google Test

    - by Daemin
    I've been trying out GoogleTest for my C++ hobby project, and I need to test the internals of a component (hence white box testing). At my previous work we just made the test classes friends of the class being tested. But with Google Test that doesn't work, as each test is given its own unique class, derived from the fixture class if specified, and friendship doesn't transfer to derived classes. Initially I created a test proxy class that is friends with the tested class. It contains a pointer to an instance of the tested class and provides methods for the required, but hidden, members. This worked for a simple class, but now I'm up to testing a tree class with an internal private node class, which I need to access and mess with. I'm just wondering if anyone using the GoogleTest library has done any white box testing, and if they have any hints or helpful constructs that would make this easier.
    Ok, I've found the FRIEND_TEST macro defined in the documentation, as well as some hints on how to test private code in the advanced guide. But apart from having a huge number of friend declarations (i.e. one FRIEND_TEST for each test), is there an easier idiom to use, or should I abandon GoogleTest and move to a different test framework?

    Read the article

  • SQL – Crossword Puzzle Based on Course Building Successful High Traffic Profitable Blog

    - by Pinal Dave
    Do you like crossword puzzles? I personally love them. Every time I open the newspaper, I try to solve at least one crossword or sudoku. It is just fun to tease the brain a little and stretch its limits. Regular readers of the blog are aware that I have recently published two courses on how to build a successful, high-traffic, profitable blog. Here are the links to watch both courses: Course 1, Course 2. Do watch them in order, as both courses have unique content which can help you build a better blog. On my birthday, July 30th, an interesting blog post was published on the Pluralsight blog: a crossword built from my two courses. I encourage you to try to solve it. Giveaway: There is a cool gift for the winner - a melting clock. Do not confuse this with a dummy or non-working clock. It looks like it is melting, but it always shows accurate time and is perfectly balanced to hang off any flat surface. How to Participate: Well, it is very simple; you just have to complete the crossword and send it to me at pinal at sqlauthority.com with all valid answers. The deadline is that you must send it before Monday, August 5, 2013, or before the valid answer keys are posted on the Pluralsight blog. Hints: Though the crossword is very easy and intuitive, if you ever get stuck anywhere, here are two hints: Hint 1, Hint 2. Log in to the Pluralsight courses and watch both of them. Watching the courses will not only help you easily complete the crossword, but there are also hidden gems and secrets to building a high-traffic, profitable blog. Here is the link to download the crossword: Download Crossword. Alternatively, you can download the image displayed below and print it. Reference: Pinal Dave (http://blog.sqlauthority.com)

    Read the article

  • HTML Tidy in NetBeans IDE (Part 2)

    - by Geertjan
    This is what I was aiming for in the previous blog entry: What you can see above (especially if you click to enlarge it) is that I have HTML Tidy integrated into the NetBeans analyzer functionality, which is pluggable from 7.2 onwards. Well, pluggable as long as you set an implementation dependency on "Static Analysis Core", since it's not an official API yet. Also, the scopes of the analyzer functionality are not pluggable. That means you can 'only' set the analyzer's scope to one or more projects, one or more packages, or one or more files. Not one or more folders, which means you can't have a bunch of HTML files in a folder that you access via the Favorites window and then run the analyzer on that folder (or on multiple folders). Thus, to try out my new code, I had to put some HTML files into a package inside a Java application. Then I chose that package as the scope of the analyzer. Then I ran all the analyzers (i.e., standard NetBeans Java hints, FindBugs, as well as my HTML Tidy extension) on that package. The screenshot above is the result. Here's all the code for the above, which is a port of the Action code from the previous blog entry into a new Analyzer implementation:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.io.StringWriter;
    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;
    import javax.swing.JComponent;
    import javax.swing.text.Document;
    import org.netbeans.api.fileinfo.NonRecursiveFolder;
    import org.netbeans.modules.analysis.spi.Analyzer;
    import org.netbeans.modules.analysis.spi.Analyzer.AnalyzerFactory;
    import org.netbeans.modules.analysis.spi.Analyzer.Context;
    import org.netbeans.modules.analysis.spi.Analyzer.CustomizerProvider;
    import org.netbeans.modules.analysis.spi.Analyzer.WarningDescription;
    import org.netbeans.spi.editor.hints.ErrorDescription;
    import org.netbeans.spi.editor.hints.ErrorDescriptionFactory;
    import org.netbeans.spi.editor.hints.Severity;
    import org.openide.cookies.EditorCookie;
    import org.openide.filesystems.FileObject;
    import org.openide.loaders.DataObject;
    import org.openide.util.Exceptions;
    import org.openide.util.lookup.ServiceProvider;
    import org.w3c.tidy.Tidy;

    public class TidyAnalyzer implements Analyzer {

        private final Context ctx;

        private TidyAnalyzer(Context cntxt) {
            this.ctx = cntxt;
        }

        @Override
        public Iterable<? extends ErrorDescription> analyze() {
            List<ErrorDescription> result = new ArrayList<ErrorDescription>();
            for (NonRecursiveFolder sr : ctx.getScope().getFolders()) {
                FileObject folder = sr.getFolder();
                for (FileObject fo : folder.getChildren()) {
                    for (ErrorDescription ed : doRunHTMLTidy(fo)) {
                        if (fo.getMIMEType().equals("text/html")) {
                            result.add(ed);
                        }
                    }
                }
            }
            return result;
        }

        private List<ErrorDescription> doRunHTMLTidy(FileObject sr) {
            final List<ErrorDescription> result = new ArrayList<ErrorDescription>();
            Tidy tidy = new Tidy();
            StringWriter stringWriter = new StringWriter();
            PrintWriter errorWriter = new PrintWriter(stringWriter);
            tidy.setErrout(errorWriter);
            try {
                Document doc = DataObject.find(sr).getLookup().lookup(EditorCookie.class).openDocument();
                tidy.parse(sr.getInputStream(), System.out);
                String[] split = stringWriter.toString().split("\n");
                for (String string : split) {
                    //Bit of ugly string parsing coming up:
                    if (string.startsWith("line")) {
                        final int end = string.indexOf(" c");
                        int lineNumber = Integer.parseInt(string.substring(0, end).replace("line ", ""));
                        string = string.substring(string.indexOf(": ")).replace(":", "");
                        result.add(ErrorDescriptionFactory.createErrorDescription(
                                Severity.WARNING,
                                string,
                                doc,
                                lineNumber));
                    }
                }
            } catch (IOException ex) {
                Exceptions.printStackTrace(ex);
            }
            return result;
        }

        @Override
        public boolean cancel() {
            return true;
        }

        @ServiceProvider(service = AnalyzerFactory.class)
        public static final class MyAnalyzerFactory extends AnalyzerFactory {

            public MyAnalyzerFactory() {
                super("htmltidy", "HTML Tidy", "org/jtidy/format_misc.gif");
            }

            public Iterable<? extends WarningDescription> getWarnings() {
                return Collections.EMPTY_LIST;
            }

            @Override
            public <D, C extends JComponent> CustomizerProvider<D, C> getCustomizerProvider() {
                return null;
            }

            @Override
            public Analyzer createAnalyzer(Context cntxt) {
                return new TidyAnalyzer(cntxt);
            }
        }
    }

    The above only works on packages, not on projects and not on individual files.

    Read the article
