Search Results


  • Oracle Spatial User Conference, Directions, and the US Census

    - by stephen.garth
    This year's Oracle Spatial User Conference should be a winner, featuring new workshops and case studies presented by Oracle Spatial customers on applications as diverse as natural resource management, gold mining, the growing of wine grapes, and the United States Census. This podcast by Directions Media, official media sponsor of the Oracle Spatial User Conference, provides a glimpse of what's in store at the conference. In the podcast, Directions interviewed senior cartographers from the US Census Bureau to explore the enormous challenges of database management, mapping and spatial analysis associated with the 2010 US Census. The Oracle Spatial User Conference is in Phoenix, AZ on April 29, held in conjunction with the GITA Geospatial Infrastructure Solutions Conference.
    - Register for the Oracle Spatial User Conference
    - Listen to the Directions podcast on the 2010 US Census
    - Find out more about Oracle Spatial

    Read the article

  • Visual Studio ALM MVP of the Year 2011

    - by Martin Hinshelwood
    For some reason this year some of my peers decided to vote for me as a contender for Visual Studio ALM MVP of the year. I am not sure what I did to deserve this, but a number of people have commented that I have a rather useful blog. I feel wholly unworthy to join the ranks of previous winners: Ed Blankenship (2010) and Martin Woodward (2009). Thank you to everyone who voted, regardless of who you voted for. If there were a prize for the best group of MVPs then the Visual Studio ALM MVPs would be a clear winner, as would the product group of product groups that is the Visual Studio ALM Group. To use a phrase that I have learned since moving to Seattle and probably use too much: you guys are all just awesome. I have tried my best in the last year to document not only every problem that I have had with Team Foundation Server (TFS), but also to document as many of the things I am doing as possible. I have taken some of Adam Cogan's rules to heart and when a customer asks me a question I always blog the answer and send them a link. This allows both my blog and my understanding of TFS to grow while creating a useful bank of content. The idea is that if one customer asks, all benefit. I try, when writing for my blog, to capture both the essence and the context for a problem being solved. This allows more people to benefit as they do not need to understand the specifics of an environment to gain value. I have a number of goals for this year that I think will help increase value in the community:
    - Persuade my new colleagues at Northwest Cadence to do more blogging (Steve, Jeff, Shad and Rennie)
    - Rangers Project – TFS Iteration Automation with Willy-Peter Schaub, Bill Essary, Martin Hinshelwood, Mike Fourie, Jeff Bramwell and Brian Blackman
    - Write a book on the Team Foundation Server API with Willy-Peter Schaub, Mike Fourie and Jeff Bramwell
    - Write more useful blog posts
    I do not think that these things are beyond the realms of do-ability, but we will see…

    Read the article

  • FREE eBook: .NET Performance Testing and Optimization (Part 1)

    In this first part of a complete guide to performance profiling, Paul Glavich and Chris Farrell explain why performance testing is a good idea and walk you through everything you need to know to set up a test environment. This comprehensive guide to getting started is an essential handbook for any programmer looking to set up a .NET testing environment and get the best results out of it. Download your free copy now.

    Read the article

  • VoteCounts: bookmarklet to display up/down votes even for rep<1k

    - by SztupY
    About: This small bookmarklet lets anyone use the "vulnerability" of the API that allows you to check the up/down vote counts - something you could normally only do as a 1k+ rep user. It is mainly useful on sites where you don't have that much rep but want to check the stats of the more controversial questions (usually on meta). No API key is actually used here, but it's trivial to add one.
    License: I don't think code like this deserves anything other than the WTFPL.
    Download: It's the following line (JavaScript - 375 bytes):
        javascript:(function(){a='jsonp';c=' .vote-count-post';d='up_vote_count';e='down_vote_count';$.ajax({url:document.location.href.replace(/(http:\/\/)(.*)(\/questions\/.*)\/.*/,'$1api.$2/1.0$3'),dataType:a,jsonp:a,success:function(x){b=x.questions[0];$('#question'+c).html(b[d]+"-"+b[e]);$.each(b.answers,function(z,y){$('#answer-'+y.answer_id+c).html(y[d]+"-"+y[e])})}})})()
    EDIT: This version is longer, but it makes the result look exactly like it does on SO. It took a while to make it exactly 508 chars, so it works with IE too:
        javascript:(function(){w=function(t,q){l='_vote_count';h='up'+l;j='down'+l;k='</div>';s='<div style="color:';$(t).html(s+'green">'+(q[h]?'+':'')+q[h]+k+'<div class="vote-count-separator">'+k+s+'maroon">'+(q[j]==0?'':'-')+q[j]+k)};a='jsonp';c=' .vote-count-post';$.ajax({url:document.location.href.replace(/(http:\/\/)(.*)(\/questions\/.*)\/.*/,'$1api.$2/1.0$3'),dataType:a,jsonp:a,success:function(x){b=x.questions[0];w('#question'+c,b);$.each(b.answers,function(z,y){w('#answer-'+y.answer_id+c,y)})}})})()
    Platform: Works in any jQuery/bookmarklet-compatible browser. Tested with Chrome, FF3.6 and IE8 on SU, SO and MSO.
    Contact: sztupy.hu
    Code: It was written in Notepad, already in minified form; Firebug was used for debugging. The code is above. Contribute (= decrease the code size or make the output nicer) any way you want. It would be great if you could get the second version under 508 bytes.
    Known bugs: If a question has more than 30 answers then some of the answers won't be resolved. This could be solved easily for <=100 answers, but for questions with more than 100 answers it is more difficult. EDIT: updated to API version 1.0. Answers don't work yet.

    Read the article

  • CUPS basic auth error through web interface

    - by Inaimathi
    I'm trying to configure CUPS to allow remote administration through the web interface. There's enough documentation out there that I could figure out what to change in my cupsd.conf (changing Listen localhost:631 to Port 631, and adding Allow @LOCAL to the /, /admin and /admin/conf sections). I'm now at the point where I can see the CUPS interface from another machine on the same network. The trouble is, when I try to Add Printer, I'm asked for a username and password, but my response is rejected even when I know I've gotten it right (I assume it's asking for the username and password of someone in the lpadmin group on the server machine; I've sshed in with the credentials it's rejecting, and the user I'm using has been added to the lpadmin group). If I disable auth outright, by changing DefaultAuthType Basic to DefaultAuthType None, I get an "Unauthorized" error instead of a password request when I try to Add Printer. What am I doing wrong? Is there a way to let users from the local network administer the print server through the CUPS web interface? EDIT: By request, my complete cupsd.conf (spoiler: a minimally edited version of the default config file that ships with CUPS in the Debian wheezy repos):

        LogLevel warn
        MaxLogSize 0
        SystemGroup lpadmin
        Port 631
        # Listen localhost:631
        Listen /var/run/cups/cups.sock
        Browsing On
        BrowseOrder allow,deny
        BrowseAllow all
        BrowseLocalProtocols CUPS dnssd
        # DefaultAuthType Basic
        DefaultAuthType None
        WebInterface Yes
        <Location />
          Order allow,deny
          Allow @LOCAL
        </Location>
        <Location /admin>
          Order allow,deny
          Allow @LOCAL
        </Location>
        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Order allow,deny
          Allow @LOCAL
        </Location>
        # Set the default printer/job policies...
        <Policy default>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default
          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default CUPS-Get-Devices>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>
        # Set the authenticated printer/job policies...
        <Policy authenticated>
          # Job/subscription privacy...
          JobPrivateAccess default
          JobPrivateValues default
          SubscriptionPrivateAccess default
          SubscriptionPrivateValues default
          # Job-related operations must be done by the owner or an administrator...
          <Limit Create-Job Print-Job Print-URI Validate-Job>
            AuthType Default
            Order deny,allow
          </Limit>
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job Cancel-My-Jobs Close-Job CUPS-Move-Job CUPS-Get-Document>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After Cancel-Jobs CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>
          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            AuthType Default
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>
          <Limit All>
            Order deny,allow
          </Limit>
        </Policy>

    Read the article

  • Logrotate not doing any rotation

    - by blizz
    I just set up logrotate on my RHEL 6 server so that it rotates my custom Apache log files. However, it doesn't do anything when I try running it manually. I expect it to rotate the log files "access.log" and "err.log"; they have been there for a few days and need to be rotated. Here is the output:

        [root@pc1 httpd]# logrotate -d -f /etc/logrotate.d/apache
        reading config file /etc/logrotate.d/apache
        reading config info for /var/log/httpd/*log /var/www/html/NSLogs/access.log /var/www/html/NSErrorLogs/err.log
        Handling 1 logs
        rotating pattern: /var/log/httpd/*log /var/www/html/NSLogs/access.log /var/www/html/NSErrorLogs/err.log forced from command line (no old logs will be kept)
        empty log files are rotated, old logs are removed
        considering log /var/log/httpd/access_log
          log needs rotating
        considering log /var/log/httpd/error_log
          log needs rotating
        considering log /var/www/html/NSLogs/access.log
          log needs rotating
        considering log /var/www/html/NSErrorLogs/err.log
          log needs rotating
        rotating log /var/log/httpd/access_log, log->rotateCount is 0
        dateext suffix '-20131023'
        glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
        glob finding old rotated logs failed
        fscreate context set to unconfined_u:object_r:httpd_log_t:s0
        renaming /var/log/httpd/access_log to /var/log/httpd/access_log-20131023
        disposeName will be /var/log/httpd/access_log-20131023.gz
        running postrotate script
        running script with arg /var/log/httpd/access_log: "
                /usr/bin/killall -HUP httpd
        "
        compressing log with: /bin/gzip
        removing old log /var/log/httpd/access_log-20131023.gz
        error: error opening /var/log/httpd/access_log-20131023.gz: No such file or directory
        rotating log /var/log/httpd/error_log, log->rotateCount is 0
        dateext suffix '-20131023'
        glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
        glob finding old rotated logs failed
        fscreate context set to unconfined_u:object_r:httpd_log_t:s0
        renaming /var/log/httpd/error_log to /var/log/httpd/error_log-20131023
        disposeName will be /var/log/httpd/error_log-20131023.gz
        running postrotate script
        running script with arg /var/log/httpd/error_log: "
                /usr/bin/killall -HUP httpd
        "
        compressing log with: /bin/gzip
        removing old log /var/log/httpd/error_log-20131023.gz
        error: error opening /var/log/httpd/error_log-20131023.gz: No such file or directory
        rotating log /var/www/html/NSLogs/access.log, log->rotateCount is 0
        dateext suffix '-20131023'
        glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
        glob finding old rotated logs failed
        fscreate context set to unconfined_u:object_r:httpd_sys_rw_content_t:s0
        renaming /var/www/html/NSLogs/access.log to /var/www/html/NSLogs/access.log-20131023
        disposeName will be /var/www/html/NSLogs/access.log-20131023.gz
        running postrotate script
        running script with arg /var/www/html/NSLogs/access.log: "
                /usr/bin/killall -HUP httpd
        "
        compressing log with: /bin/gzip
        removing old log /var/www/html/NSLogs/access.log-20131023.gz
        error: error opening /var/www/html/NSLogs/access.log-20131023.gz: No such file or directory
        rotating log /var/www/html/NSErrorLogs/err.log, log->rotateCount is 0
        dateext suffix '-20131023'
        glob pattern '-[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]'
        glob finding old rotated logs failed
        fscreate context set to unconfined_u:object_r:httpd_sys_rw_content_t:s0
        renaming /var/www/html/NSErrorLogs/err.log to /var/www/html/NSErrorLogs/err.log-20131023
        disposeName will be /var/www/html/NSErrorLogs/err.log-20131023.gz
        running postrotate script
        running script with arg /var/www/html/NSErrorLogs/err.log: "
                /usr/bin/killall -HUP httpd
        "
        compressing log with: /bin/gzip
        removing old log /var/www/html/NSErrorLogs/err.log-20131023.gz
        error: error opening /var/www/html/NSErrorLogs/err.log-20131023.gz: No such file or directory

    Read the article

  • Flash Technology Can Revolutionize your IT Infrastructure

    - by kimberly.billings
    A recent article in the Data Center Journal written by Mark Teter outlines how flash is becoming a disruptive technology in the data center and how it will soon replace HDDs in the storage hierarchy. As Teter explains, the drivers behind this trend are lower cost/performance and power savings; flash is over 100x faster for reads than the fastest HDD, and while it is expensive, it can produce dramatic reductions in the cost of performance as measured in Input/Outputs per second (IOPS). What's more, flash consumes 1/5th the power of HDD, so it's faster AND greener. Teter writes, "when appropriately used, flash turns the current economics of IT performance on its head. That's disruptive." Exadata Smart Flash Cache in the Sun Oracle Database Machine makes intelligent use of flash storage to deliver extreme performance for OLTP and mixed workloads. It intelligently caches data from the Oracle Database, replacing slow mechanical I/O operations to disk with very rapid flash memory operations. Exadata Smart Flash Cache is the fundamental technology of the Sun Oracle Database Machine that enables the processing of up to 1 million random I/O operations per second (IOPS), and the scanning of data within Exadata storage at up to 50 GB/second. Are you incorporating flash into your storage strategy? Let us know! Read more:
    - "Flash technology can revolutionize your IT infrastructure", The Data Center Journal, March 30, 2010
    - Exadata Smart Flash Cache and the Sun Oracle Database Machine white paper

    Read the article

  • Protect Section in Word without limiting formatting in unprotected sections

    - by grom
    Steps to create protected section (in Word 2003):
    1. Insert - Break... Choose Section break, Continuous
    2. Tools - Protect Document...
    3. Enable 'Allow only this type of editing in the document' in editing restrictions
    4. In the drop down select 'Filling in forms'
    5. Click on 'Select sections...' and uncheck the unprotected sections (eg. Section 2)
    6. Click 'Yes, Start Enforcing Protection' and optionally set a password.
    Now go to the unprotected section, and in the Format menu, options like 'Bullets and Numbering...' and 'Borders and Shading...' are greyed out. How can you protect a section without limiting the features that can be used in the unprotected section?

    Read the article

  • Gawker Passwords

    - by Nick Harrison
    There has been much news about the hack of the Gawker web sites. There has even been an analysis of the common passwords found. This list is embarrassing in many ways. The most common password was "123456". The second most common password was "password". Much has also been written providing advice on how to create good passwords. This article provides some interesting advice, none of which should be taken. Anyone reading my blog probably already knows the importance of strong passwords, so I am not going to reiterate the reasons here. My target audience is more the folks defining password complexity requirements. A user cannot come up with a strong password if we have complexity requirements that don't make sense. With that in mind, here are a few guidelines:

    Long Passwords. Insist on long passwords. In some cases, you may need to make changes to allow long passwords. I have seen many places that cap passwords at 8 characters. Passwords need to be at least 8 characters at a minimum. Consider how much stronger the passwords would be if you doubled the length. Passwords that are 15-20 characters will be that much harder to crack. There is no need to limit passwords to 8 characters.

    Don't Require Special Characters. Many complexity rules will require that your password include a capital letter, a lower case letter, a number, and one of the "special" characters, the shifted symbols above the number keys. The problem with such rules is that the resulting passwords are harder to remember. It also means that you will have a smaller set of characters in the resulting passwords. If you must include one of the 10 digits and one of the 10 "special" characters, then you have dramatically reduced the character set that will make up the final password. Two characters will be one of 10 possible values instead of one of 70. Two additional characters will be one of 26 possible characters instead of a 70-character potential character set. If you limit passwords to 8 characters, you are left with only 4 characters having the full set of 70 potential values. With these character restrictions in place, there are 1.6×10^12 possible passwords. Without these special character restrictions, but still allowing numbers and special characters, you get a total of 5.76×10^14 possible passwords. Even if you only allowed upper and lower case letters and digits, you would still have 2.18×10^14 passwords. You can do the math any number of ways; requiring special characters will always weaken passwords. Now imagine the number of passwords when you require more than 8 characters. (A quick sketch of this arithmetic follows.) If you are responsible for defining complexity rules, I urge you to take these guidelines into account. What other guidelines do you follow?
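    To check those numbers yourself, here is a quick C# sketch of the back-of-the-envelope arithmetic above (the 70-character set and the one-of-each composition rule are this post's assumptions, not a standard):

        using System;
        using System.Numerics; // requires a reference to System.Numerics

        class PasswordSpace
        {
            static BigInteger Pow(int b, int e) => BigInteger.Pow(b, e);

            static void Main()
            {
                // Unrestricted: 8 characters drawn from a ~70-character set
                Console.WriteLine($"70^8        = {Pow(70, 8):E2}");   // ~5.76E+14

                // Composition rules: one digit (10), one special (10),
                // one upper (26), one lower (26), remaining 4 chars free (70^4)
                BigInteger constrained = 10 * 10 * 26 * 26 * Pow(70, 4);
                Console.WriteLine($"constrained = {constrained:E2}");  // ~1.62E+12

                // Letters and digits only (62 characters), no composition rules
                Console.WriteLine($"62^8        = {Pow(62, 8):E2}");   // ~2.18E+14

                // Doubling the length dwarfs any character-set tweak
                Console.WriteLine($"70^16       = {Pow(70, 16):E2}");  // ~3.32E+29
            }
        }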

    Read the article

  • Complex type support in process flow &ndash; XMLTYPE

    - by shawn
    Before the OWB 11.2 release, only 5 simple data types were supported in process flows: DATE, BOOLEAN, INTEGER, FLOAT and STRING. A new complex data type, XMLTYPE, was added in 11.2 in order to support complex data being passed between process flow activities. In this article we will give a simple example to illustrate the usage of the new type and some related editors.

    Suppose there is a bookstore that uses XML format orders as shown below (we use the simplest form for illustration purposes). We can then create a process flow to handle the order: take the order as the input, extract the necessary information, and generate a confirmation email to the customer automatically.

        <order id='0001'>
            <customer>
                <name>Tom</name>
                <email>[email protected]</email>
            </customer>
            <book id='Java_001'>
                <quantity>3</quantity>
            </book>
        </order>

    Consider a simple use case here: we use an input parameter/variable with XMLTYPE to hold the XML content of the order; then we can use an Assign activity to retrieve the email info from the order; after that, we can create an Email activity to send the email (other activities might be added in a practical case, but will not be described here).

    1) Set the XML content value

    For testing purposes, we will create a variable to hold the sample order, which will then be used among the process flow activities. When the variable is of XMLTYPE and the "Literal" value is set to true, the advanced editor is enabled. Click the "Advanced Editor" button and a simple XML editor will pop up. The editor has basic features like syntax highlighting and checking. We can also do basic validation, or validation against a schema, by selecting the normalized schema. With this, it is easier to provide values for XMLTYPE variables.

    2) Extract information from the XML content

    After setting the value, we need to extract the email information with the Assign activity. In process flow, an enhanced expression builder is used to help users construct the XPath for extracting values from XML content. When the variable's literal value is set to false, the advanced editor is enabled. Click the button and the advanced editor will pop up. The editor is based on the expression builder (which is often used in mappings etc.); an XPath lib panel is appended which provides some help on how to write the XPath. The expression used here is "XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/email/text()').getStringVal()", which uses '/order/customer/email/text()' as the XPath to extract the email info from the XML document. A variable called "EMAIL_ADDR" is created with the String data type to hold the extracted value. Then we bind the "VARIABLE" parameter of the Assign activity to the "EMAIL_ADDR" variable, which means the "EMAIL_ADDR" variable will be set to the result of the "VALUE" parameter of the Assign activity.

    3) Use the extracted information in the Email activity

    We bind the "TO_ADDRESS" parameter of the Email activity to the "EMAIL_ADDR" variable created in the step above. We can also extract other information from the XML order directly through the expression; for example, we can set the "MESSAGE_BODY" with the value "'Dear '||XMLTYPE.EXTRACT(XML_ORDER,'/order/customer/name/text()').getStringVal()||chr(13)||chr(10)||'   You have ordered '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/quantity/text()').getStringVal()||' '||XMLTYPE.EXTRACT(XML_ORDER,'/order/book/@id').getStringVal()". This expression extracts the customer name, the quantity and the book id from the order to compose the message body. To make the Email activity work, we need to provide some other necessary information, such as "SMTP_SERVER" (the SMTP server used to send the emails, like "mail.bookstore.com"; the default PORT number is set to 25 - change the value accordingly), "FROM_ADDRESS" and "SUBJECT". Then the process flow is ready to go. After deploying the process flow package, we can simply run the process flow to check whether the result is as expected (an email should be sent to the specified email address with the proper subject and message body).

    Note: In Oracle 11g there is an enhanced security feature, ACLs (Access Control Lists), which restrict network access within the database, so we need to edit the list to allow UTL_SMTP to work if you are using Oracle 11g. Refer to the chapters "Access Control Lists for UTL_TCP/HTTP/SMTP" and "Managing Fine-Grained Access to External Network Services" for more details.

    In previous releases, XMLTYPE already existed in other OWB objects, like mappings/transformations etc. When a mapping/transformation was dragged into a process flow, parameters with XMLTYPE were mapped to STRING. Now, with XMLTYPE support in process flow, XMLTYPE maps to XMLTYPE in a more natural way, and we can leverage the new data type for the design. The XPath expressions used above can also be sanity-checked outside OWB; see the sketch below.
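    As a cross-check, the same XPath expressions can be evaluated against the sample order with any XPath engine. A small C# sketch (outside OWB; the placeholder email address and the console output are illustrative only):

        using System;
        using System.Xml;

        class OrderXPathDemo
        {
            static void Main()
            {
                const string orderXml =
                    "<order id='0001'>" +
                    "  <customer><name>Tom</name><email>tom@example.com</email></customer>" + // placeholder address
                    "  <book id='Java_001'><quantity>3</quantity></book>" +
                    "</order>";

                var doc = new XmlDocument();
                doc.LoadXml(orderXml);

                // The same XPath expressions the process flow uses in the Assign activity
                string email    = doc.SelectSingleNode("/order/customer/email/text()").Value;
                string name     = doc.SelectSingleNode("/order/customer/name/text()").Value;
                string quantity = doc.SelectSingleNode("/order/book/quantity/text()").Value;
                string bookId   = doc.SelectSingleNode("/order/book/@id").Value;

                Console.WriteLine($"To: {email}");
                Console.WriteLine($"Dear {name},");
                Console.WriteLine($"  You have ordered {quantity} {bookId}");
            }
        }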

    Read the article

  • Insert SVG directly into Microsoft Publisher without converting file format

    - by nhinkle
    How can I insert an SVG image into a Microsoft Publisher 2010 document as a vector image, without having to first convert it to a bitmap format like PNG? Copying and pasting an SVG file into a Publisher document does not work. I am aware that one can convert an SVG to EPS and insert that, since Publisher accepts EPS files. The problem is that converting is time consuming, and the colors often come out wrong. If this is the only way to get vector graphics into Publisher, is there a one-step method to convert an SVG to EPS and paste it into Publisher in one fell swoop?

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - e.g. not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an Explain Plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator.

    That behavior means that if you choose a degree of parallelism of 4 to run such a query, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4], the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see 2 times the number of processes of the DOP.

    In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (sort stuff). The important piece here is that the PWJ plan will typically be faster and, in PX process count / resource terms, typically cheaper. You may want to look out for those plans and try to get those to appear a lot...

    CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff:
    - At ODTUG in Washington DC, June 27 - July 1
    - On the Optimizer blog
    - At OpenWorld in San Francisco, September 19 - 23
    Happy joining and hope to see you all at ODTUG and OOW...

    Read the article

  • Using WSUS to update machines not on the domain

    - by Arcath
    I have a WSUS server providing updates for the computers on my domain. We also bring a lot of machines back to our office and run Windows Update on them as we build images, which means that we end up downloading the same updates over and over again. Is there any way to get one of these machines to download its updates from our WSUS server? I found that there's something running on port 8530, but it's just an empty document; in fact, every folder listed in the IIS config returns a blank document. Does anyone know if this is possible, and how I would do it?
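    For what it's worth, non-domain machines are usually pointed at a WSUS server through the local Windows Update policy registry settings (the same values local Group Policy writes). A hedged C# sketch - the server URL is a placeholder and must match your WSUS port, and you would normally set these via gpedit.msc instead:

        using Microsoft.Win32;

        class PointAtWsus
        {
            static void Main()
            {
                // Standard Windows Update policy keys (writing them requires elevation)
                const string wu = @"HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate";
                const string au = wu + @"\AU";
                const string server = "http://wsus.example.local:8530"; // placeholder WSUS URL

                Registry.SetValue(wu, "WUServer", server);
                Registry.SetValue(wu, "WUStatusServer", server);
                Registry.SetValue(au, "UseWUServer", 1, RegistryValueKind.DWord);
            }
        }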

    Read the article

  • Get the Information You Need. Delivered.

    - by Get Proactive Customer Adoption Team
    Don't Take Chances with Alerts - Get Hot Topics

    When Oracle Support publishes an alert, how do you find out about it? I can see any number of ways you might stumble onto an alert that you need. For example, if you are visiting My Oracle Support in search of answers under the Knowledge tab and happen to notice, and click on, the Alert tab under the Knowledge Article region, you might see an alert listed for one of the products you use. There are other ways… like subscribing to one of the Oracle Blogs and finding the alert in your RSS feed because the blogger decided to write up that topic for the latest post. I'm sure your colleagues sometimes pass on critical alerts for your products, I hope, giving you the information before you needed it. Well, no matter how you learn about an alert, the important point is that you get the correct information in a timely way. Right? I must admit, the 'magic' required to find out via these methods makes me nervous. Rather than leave it to chance, I think you need a more reliable way to stay informed and receive alerts for your products when Oracle publishes them. You may not be aware of it, but there is a better way. Oracle Premier Support customers can leverage the "Hot Topics E-Mail." You select the products and topics that interest you. Based on your choices, the system sends you the support-related information when Oracle Support publishes it. This way you and I can both relax, knowing you'll have ready access to the alerts you need, and enjoy the breadth of support-related information you choose to subscribe to. This can include recently updated knowledge base articles, new bugs, and product news. If I've convinced you, you will want to know how to set up and subscribe to the Hot Topics E-Mail. The complete guide, Doc ID 793436.1, is waiting for you. Follow the instructions in the document, and you will always stay on top of the latest information from Oracle Support.

    Read the article

  • Feasibility to take over a JavaMe Project by Coders who have no experience in JavaMe

    - by Stephenmjm
    The original JavaMe team will be leaving to work on other items, so the JavaMe project will be taken over by some guys who know nothing about JavaMe.

    Transition period: one month

    About this JavaMe project:
    - about 3.5 million lines of code (more than 180 Java files; source code is 8.5KB in total)
    - uses Polish and Proguard
    - documentation: the JavaMe project itself has no documents and no UML diagrams

    Difficulties I guess:
    - getting familiar with JavaMe - this should be okay
    - in order to do further development we need to read the source code, and it's not easy to read 3.5 million lines of code that don't have enough comments
    - adaptation work for more than 100 phones

    These are the questions, thank you!
    1. Given that our guys have no experience in JavaMe, is one month too hasty?
    2. In order to take over the job in time, what should we ask the original JavaMe team to do, considering we have no experience in JavaMe?
    3. What complications might we face taking on the adaptation work without the original JavaMe team?
    4. Any other suggestions?

    Read the article

  • Search for and print only matched pattern

    - by Ayman
    I have some huge XML text files. I need to write a script to find and print a specific tag only. I tried sed and grep, but they both return the whole line. I am using SunOS 5.x, so not all Linux commands may work; grep -o is not available. The 'xml' files are not actually one huge XML document each - every line is a separate XML document with just a few tags, not even nested. The structure is fairly simple, so a full XML parser is not needed and probably would not work. I was looking for sed, awk, or some other one-liners, but could not get them to work, and they are both relatively new to me.

    Read the article

  • How can you invert the colors of a PDF?

    - by legr3c
    I need to invert all the colors of a PDF document (background, text, graphics, and images). I want the change to be persistent in the file, so the inverted viewing options that some viewers offer won't help. Rasterizing the document and using image manipulation software is also not an option. I read somewhere that this can be done with the Enfocus PitStop plugin for Acrobat; however, I didn't see a corresponding command anywhere. Am I missing something? Then I read that the ARTS PDF Crackerjack plugin for Acrobat offers negative printing, so I tried that too. The option is there, but it simply doesn't work. I have been searching for a very long time for a way to do this. It seems like a common enough task, but I just can't find out how to do it. Are there maybe any virtual printer drivers or something of the sort that support negative printing? Can anyone help?

    Read the article

  • Process.Start() and ShellExecute() fails with URLs on Windows 8

    - by Rick Strahl
    Since I installed Windows 8 I've noticed that a number of my applications appear to have problems opening URLs. That is, when I click on a link inside of a Windows application, either nothing happens or an error occurs. It's happening both to my own applications and to a host of Windows applications I'm running. At first I thought this was an issue with my default browser (Chrome), but after switching the default browser to a few others and experimenting a bit, I noticed that the errors occur - oddly enough - only when I run an application as an Administrator. I also tried switching to FireFox and Opera as my default browser and saw exactly the same behavior. The scenario for this is a bit bizarre:

    - Running on Windows 8
    - Call Process.Start() (or ShellExecute() in the Win32 API) with a URL or an HTML file
    - Run 'As Administrator' (works fine under a non-elevated user account!) or with UAC off
    - A browser other than Internet Explorer is set as your default web browser

    Talk about a weird scenario: something that doesn't work when you run as an Administrator, which is supposed to have rights to everything on the system! Instead, running under an Admin account - either elevated with a User Account Control prompt or even when running as a full Administrator - fails. It appears that this problem does not occur for everyone, but when I looked for a solution, I saw quite a few posts in relation to this with no clear resolutions. I have three Windows 8 machines running here in the office and all three of them showed this behavior. Lest you think this is just a programmer's problem - this can affect any software running on your system that needs to run under administrative rights.

    Try it out

    Now, in order for this next example to fail, any browser but Internet Explorer has to be your default browser, and even then it may not fail depending on how you installed your browser. To see if this is a problem, create a small console application and call Process.Start() with a URL in it:

        using System;
        using System.Diagnostics;
        using System.IO;

        namespace Win8ShellBugConsole
        {
            class Program
            {
                static void Main(string[] args)
                {
                    // Opening a URL fails when elevated (with a non-IE default browser)
                    Console.WriteLine("Launching Url...");
                    Process.Start("http://microsoft.com");

                    Console.Write("Press any key to continue...");
                    Console.ReadKey();

                    // Opening a local file works fine in any user mode
                    Console.WriteLine("\r\n\r\nLaunching image...");
                    Process.Start(Path.GetFullPath(@"..\..\sailbig.jpg"));

                    Console.Write("Press any key to continue...");
                    Console.ReadKey();
                }
            }
        }

    Compile this code, then execute it from Explorer (not from Visual Studio, because that may change the permissions). If you simply run the EXE and you're not running as an administrator, you'll see the Web page pop up in the browser as well as the image loading. Now run the same thing with Run As Administrator: you get a nice error when Process.Start() is fired with the URL. The same happens if you are running with User Account Control off altogether - i.e. you are running as a full admin account. Now if you comment out the URL in the code above and just fire the image display - that works just fine in any user mode. As does opening any other local file type, or even starting a new EXE locally (i.e. Process.Start(@"c:\windows\notepad.exe")). All that works, EXCEPT for URLs. The code above uses Process.Start() in .NET, but the same happens in Win32 applications that use the ShellExecute API. In some of my older Fox apps, ShellExecute returns an error code of 31 - which is No Shell Association found.

    What's the Deal?

    It turns out the problem has to do with the way browsers register themselves on Windows.
    Internet Explorer - being a built-in application in Windows 8 - apparently does this correctly, but other browsers possibly don't, or at least didn't at the time I installed them. Even though Chrome, which continually updates itself, has a recent version that apparently has this registration issue fixed, I was unable to simply set IE as my default browser and then use Chrome's 'Set as Default Browser'. It still didn't work. Neither did using the Set Program Associations dialog, which lets you assign which extensions are mapped to a given application. Each application provides a set of extension/moniker mappings that it supports, and this dialog lets you associate them on a system-wide basis. This also did not work for Chrome or any of the other browsers at first. However, after repeated retries I eventually did manage to get FireFox to work, but not any of the others.

    What Works? Reinstall the Browser

    In the end I decided on the hard-core pull-the-plug solution: totally uninstall and re-install Chrome in this case. And lo and behold, after the reinstall everything was working fine. Now even removing the association for Chrome, switching to IE as the default browser and then back to Chrome works. Even though the version of Chrome I was running before uninstalling and reinstalling is the same as I'm running now after the reinstall, now it works. Of course I had to find out the hard way, before Richard commented with a note regarding what the issue is with Chrome at least: http://code.google.com/p/chromium/issues/detail?id=156400 As expected, the issue is a registration issue - with keys not being registered at the machine level. Reading this I'm still not sure why this should be a problem - an elevated account still runs under the same user account (i.e. I'm still rickstrahl even if I Run As Administrator), so why shouldn't an app be able to read my Current User registry hive? And that also doesn't quite explain why it works if I register the extensions using Run As Administrator in Chrome (when using Set as Default Browser). But in the end it works…

    Not so fast

    It's now a couple of days later and there are still some oddball problems, although this time they appear to be purely Chrome issues. After the reinstall, Chrome seems to pop up properly with ShellExecute() calls in both regular user and Admin mode. However, it now looks like Chrome is actually running two completely separate user profiles for each. For example, when I run Visual Studio in Admin mode and go to View in Browser, Chrome complains that it was installed in Admin mode and can't launch (WTF?). Then you retry a few times later and it ends up working. When launched that way, some of the installed plug-ins don't show up, with the effect that sometimes they're visible and sometimes they're not. Also, Chrome seems to lose my configuration and Google sign-in between sessions now, presumably when switching user modes. Add-ins installed in admin mode don't show up in user mode and vice versa. Ah, this is lovely. Did I mention that I freaking hate UAC precisely because of this kind of bullshit? You can never tell exactly what account your app is running under, and apparently apps also have a hard time trying to put data into the right place that works for both scenarios. And as my recent post on using Windows Live accounts shows, it's yet another level of abstraction on top of the underlying system identity that can cause all sorts of small side-effect headaches like this.
    Hopefully, most of you are skirting this issue altogether, having installed more recent versions of your favorite browsers. If not, hopefully this post will take you straight to reinstallation to fix this annoying issue.

    © Rick Strahl, West Wind Technologies, 2005-2012
    Posted in Windows, .NET

    Read the article

  • How should clients handle HTTP 401 with unknown authentication schemes?

    - by user113215
    What is the proper behavior for an HTTP client receiving a 401 Unauthorized response that specifies only unrecognized authentication schemes? My server supports Kerberos authentication using WWW-Authenticate: Negotiate. On the first request, the server sends a 401 Unauthorized response with a body containing an HTML document. The behavior that I expect is for clients that support Kerberos to perform that authentication, and for other clients to simply display the HTML document (a login form). It seems that most of the "other clients" I've encountered do work this way, but a few do not. I haven't found anything that mandates any particular behavior in this situation. There is a brief mention in RFC 2617 (HTTP Authentication: Basic and Digest Access Authentication), but is there anything more concrete?

        It is possible that a server may want to require Digest as its
        authentication method, even if the server does not know that the
        client supports it. A client is encouraged to fail gracefully if the
        server specifies only authentication schemes it cannot handle.
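    For illustration, a minimal C# sketch of the "fail gracefully" client behavior the RFC encourages - inspect the advertised schemes and fall back to the response body when none are recognized (the URL and the supported-scheme list are placeholders):

        using System;
        using System.Linq;
        using System.Net;
        using System.Net.Http;
        using System.Threading.Tasks;

        class AuthFallbackDemo
        {
            // Placeholder endpoint and client capabilities
            const string Url = "https://server.example.com/";
            static readonly string[] SupportedSchemes = { "Basic" };

            static async Task Main()
            {
                using (var client = new HttpClient())
                {
                    HttpResponseMessage response = await client.GetAsync(Url);

                    if (response.StatusCode == HttpStatusCode.Unauthorized)
                    {
                        // Schemes the server advertised in its WWW-Authenticate header(s)
                        bool anySupported = response.Headers.WwwAuthenticate
                            .Any(h => SupportedSchemes.Contains(h.Scheme, StringComparer.OrdinalIgnoreCase));

                        if (!anySupported)
                        {
                            // Fail gracefully: show the body (e.g. the HTML login form)
                            Console.WriteLine(await response.Content.ReadAsStringAsync());
                            return;
                        }

                        // ...otherwise perform the matching authentication scheme...
                    }
                }
            }
        }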

    Read the article

  • Issues getting a Cisco WLC 5508 to find AIR-LAP1142N

    - by user95917
    Hoping someone can help me with a problem here. I'm attempting to set up a test wireless network (on loan from Cisco). Here's what I've got/done:

    5508 Controller - Service port IP set to 10.74.5.2 /24. Management IP set to 10.74.6.2 /24 with a default gateway of 10.74.6.1. Virtual IP set to 1.1.1.1. Copper SFP in slot 7, CAT5 (known good) going from there to port 1/0/47 on the switch. Green lights on both ends.

    2960-S Switch - Vlan1 - 10.74.6.1 /24. DHCP pool 10.74.6.0 /24, default router 10.74.6.1, excluded addresses 10.74.6.1 and 10.74.6.2. Port 1/0/4 on the switch is set to switchport mode access and no shut. Port 1/0/47 on the switch is set up as switchport mode trunk and no shut. Port 1/0/4 has a CAT5 (known good) cable going from there to the AP.

    When I do a sh cdp nei from the switch, I can see the AP and controller listed. When I configure my PC's NIC to 10.74.5.5 and plug a cable from my NIC to the SP port on the controller, I can get on the device via the GUI. In there, the only errors/info that show up in the trap are: "Link Up: Slot: 0 Port: 7" and "Controller time base status - Controller is out of sync with the central timebase." I've manually set the time, but apparently that's not quite the problem (or at least not the entire problem). When I plug the AP in, I see on the switch console that it grants it power and sees it connect... but the controller won't see it for some reason. From what I've read you shouldn't have to do anything to the AP, as it's managed by the controller... but I'm not sure what setting I'm missing for it to work. The light on top of the AP continually cycles green, red, yellow. When I first start it up, it blinks green for 20 or so seconds, then goes solid green for another 20 seconds or so, then flashes blue, green, red for a while... but it always ends up going back to the standard green, red, yellow. Does anyone see any obvious issues with my setup, or have any suggestions as to why I might be having this problem? Thanks for your help!

    Read the article

  • Merging many documents into one in Word 2007: How to make each one start on a new page?

    - by Javier Badia
    I have 31 documents I need to merge into one, using Word 2007 on Windows 7. I read that you can go to Insert - Object - Text from File and select the documents you need. I did that and it worked fine. The thing is, each document starts right against the last one. Is there any way to make each document start on a new page, other than manually inserting page breaks? (The original post included before/after screenshots of two example documents, "document1" and "document2", being merged.)
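    If the merge needs to be repeated, scripting it is an option. A hedged C# sketch using the Word interop assembly (Microsoft.Office.Interop.Word); the file paths are hypothetical, and Selection.InsertFile mirrors Insert - Object - Text from File:

        using Microsoft.Office.Interop.Word;

        class MergeWithPageBreaks
        {
            static void Main()
            {
                var app = new Application { Visible = true };
                Document doc = app.Documents.Add();

                // Hypothetical paths; in practice, enumerate all 31 files
                string[] files = { @"C:\docs\document1.docx", @"C:\docs\document2.docx" };

                for (int i = 0; i < files.Length; i++)
                {
                    // Jump to the end of the document
                    app.Selection.EndKey(WdUnits.wdStory);

                    // Start every document after the first on a fresh page
                    if (i > 0)
                        app.Selection.InsertBreak(WdBreakType.wdPageBreak);

                    // Programmatic equivalent of Insert -> Object -> Text from File
                    app.Selection.InsertFile(files[i]);
                }
            }
        }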

    Read the article

  • ANTS Memory Profiler 7.0 Review

    - by Michael B. McLaughlin
    (This is my first review as a part of the GeeksWithBlogs.net Influencers program. It's a program in which I (and the others who have been selected for it) get the opportunity to check out new products and services and write reviews about them. We don't get paid for this, but we do generally get to keep a copy of the software or retain an account for some period of time on the service that we review. In this case I received a copy of Red Gate Software's ANTS Memory Profiler 7.0, which was released in January. I don't have any upgrade rights, nor is my review guided, restrained, influenced, or otherwise controlled by Red Gate or anyone else. But I do get to keep the software license. I will always be clear about what I received whenever I do a review - I leave it up to you to decide whether you believe I can be objective. I believe I can be. If I used something and really didn't like it, keeping a copy of it wouldn't be worth anything to me. In that case, though, I would simply uninstall/deactivate/whatever the software or service and tell the company what I didn't like about it so they could (hopefully) make it better in the future. I don't think it'd be polite to write up a terrible review, nor do I think it would be a particularly good use of my time. There are people who get paid for a living to review things, so I leave it to them to tell you what they think is bad and why. I'll only spend my time telling you about things I think are good.)

    Overview of Common .NET Memory Problems

    When coming to the land of managed memory from the wilds of unmanaged code, it's easy to say to one's self, "Wow! Now I never have to worry about memory problems again!" But this simply isn't true. Managed code environments, such as .NET, make many, many things easier. You will never have to worry about memory corruption due to a bad pointer, for example (unless you're working with unsafe code, of course). But managed code has its own set of memory concerns. For example, failing to unsubscribe from events when you are done with them leaves the publisher of an event with a reference to the subscriber. If you eliminate all your own references to the subscriber, then that memory is effectively lost, since the GC won't delete it because of the publishing object's reference. When the publishing object itself becomes subject to garbage collection, you'll finally get that memory back, but that could take a very long time depending on the life of the publisher.

    Another common source of resource leaks is failing to properly release unmanaged resources. When writing a class that contains members holding unmanaged resources (e.g. any of the Stream-derived classes, IsolatedStorageFile, most classes ending in "Reader" or "Writer"), you should always implement IDisposable, making sure to use a properly written Dispose method. And when you are using an instance of a class that implements IDisposable, you should always make sure to use a 'using' statement in order to ensure that the object's unmanaged resources are disposed of properly. (A 'using' statement is a nicer, cleaner-looking, and easier-to-use version of a try-finally block; the compiler actually translates it as though it were a try-finally block. Note that Code Analysis warning 2202 (CA2202) will often be triggered by nested using blocks. A properly written Dispose method ensures that it only runs once, so calling Dispose multiple times should not be a problem.) A minimal sketch of this pattern follows.
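    The sketch uses a hypothetical LogWriter class that wraps a FileStream (the names are illustrative only, not from ANTS MP or any particular codebase) to show both halves: an idempotent Dispose and a 'using' consumer:

        using System;
        using System.IO;

        // Hypothetical example: a class owning a resource backed by an OS handle
        public sealed class LogWriter : IDisposable
        {
            private FileStream _stream; // holds an unmanaged file handle

            public LogWriter(string path)
            {
                _stream = new FileStream(path, FileMode.Append);
            }

            public void Dispose()
            {
                // Safe to call multiple times: only disposes once
                if (_stream != null)
                {
                    _stream.Dispose();
                    _stream = null;
                }
            }
        }

        class Demo
        {
            static void Main()
            {
                // 'using' guarantees Dispose runs even if an exception is thrown;
                // the compiler expands this to a try-finally block
                using (var writer = new LogWriter("app.log"))
                {
                    // ... write entries ...
                }
            }
        }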
    Nonetheless, CA2202 exists, and if you want to avoid triggering it you should write your code such that only the innermost IDisposable object uses a 'using' statement, with any outer code making use of appropriate try-finally blocks instead.

    Then, of course, there are situations where you are operating in a memory-constrained environment, or you want to limit or even eliminate allocations within a certain part of your program (e.g. within the main game loop of an XNA game) in order to avoid having the GC run. On the Xbox 360 and Windows Phone 7, for example, for every 1 MB of heap allocations you make, the GC runs; the added time of a GC collection can cause a game to drop frames or run slowly, thereby making it look bad. Eliminating allocations (or else minimizing them and calling an explicit Collect at an appropriate time) is a common way of avoiding this (the other way is to simplify your heap so that the GC's latency is low enough not to cause performance issues).

    ANTS Memory Profiler 7.0

    When the opportunity to review Red Gate's recently released ANTS Memory Profiler 7.0 arose, I jumped at it. In order to review it, I was given a free copy (which does not include upgrade rights for future versions) which I am allowed to keep. For those of you who are familiar with ANTS Memory Profiler, you can find a list of new features and enhancements here. If you are an experienced .NET developer who is familiar with .NET memory management issues, ANTS Memory Profiler is great. More importantly still, if you are new to .NET development or you have no or limited experience with memory profiling, ANTS Memory Profiler is awesome. From the very beginning, it guides you through the process of memory profiling. If you're experienced and just want to dive in, however, it doesn't get in your way. The help items are well designed and located right next to the UI controls, so that they are easy to find without being intrusive.

    When you first launch it, it presents you with a "Getting Started" screen that contains links to "Memory profiling video tutorials", "Strategies for memory profiling", and the "ANTS Memory Profiler forum". I'm normally the kind of person who looks at a screen like that only to find the "Don't show this again" checkbox. Since I was doing a review, though, I decided I should examine them. I was pleasantly surprised. The overview video clocks in at three minutes and fifty seconds. It begins by showing you how to get started profiling an application. It explains that profiling is done by taking memory snapshots periodically while your program is running and then comparing them. ANTS Memory Profiler (I'm just going to call it "ANTS MP" from here) analyzes these snapshots in the background while your application is running. It briefly mentions a new feature in Version 7, a new API that gives you the ability to trigger snapshots from within your application's source code (more about this below). You can also, and this is the more common way you would do it, take a memory snapshot at any time from within the ANTS MP window by clicking the "Take Memory Snapshot" button in the upper right corner. The overview video goes on to demonstrate a basic profiling session on an application that pulls information from a database and displays it.
    It shows how to switch which snapshots you are comparing, explains the different sections of the Summary view and what they are showing, and proceeds to show you how to investigate memory problems using the "Instance Categorizer" to track the path from an object (or set of objects) to the GC's root, in order to find which things along the path are holding a reference to it/them. For a set of objects, you can then click on it and get the "Instance List" view. This displays all of the individual objects (including their individual sizes, values, etc.) of that type which share the same path to the GC root. You can then click on one of the objects to generate an "Instance Retention Graph" view. This lets you track directly up the reference chain for that individual object. In the overview video, it turned out that there was an event handler holding on to a reference, thereby keeping in memory a large number of strings that should have been freed. Lastly, the video shows the "Class List" view, which lets you dig in deeply to find problems that might not have been clear when following the previous workflow.

    Once you have at least one memory snapshot, you can begin analyzing. The main interface is in the "Analysis" tab. You can also switch to the "Session Overview" tab, which gives you several bar charts highlighting basic memory data about the snapshots you've taken. If you hover over the individual bars (and the individual colors in bars that have more than one), you will see a detailed text description of what the bar is representing visually. The Session Overview is good for a quick summary of memory usage and information about the different heaps. You are going to spend most of your time in the Analysis tab, but it's good to remember that the Session Overview is there to give you some quick feedback on basic memory usage stats.

    As described above in the summary of the overview video, there is a certain natural workflow to the Analysis tab. You'll spin up your application and take some snapshots at various times, such as before and after clicking a button to open a window, or before and after closing a window. Taking these snapshots lets you examine what is happening with memory. You would normally expect that a lot of memory would be freed up when closing a window or exiting a document. By taking snapshots before and after performing an action like that, you can see whether or not the memory is really being freed. If you already know an area that's giving you trouble, you can run your application just like normal until just before getting to that part, and then take a few strategic snapshots that should help you pin down the problem.

    Something the overview didn't go into is how to use the "Filters" section at the bottom of ANTS MP together with the Class List view in order to narrow things down. The video tutorials page has a nice 3-minute intro video called "How to use the filters". It's a nice introduction and covers some of the basics. I'm going to cover a bit more, because I think they're a really neat, really helpful feature. Large programs can bring up thousands of classes. Even simple programs can instantiate far more classes than you might realize. In a basic .NET 4 WPF application, for example (and when I say basic, I mean just MainWindow.xaml with a button added to it), the unfiltered Class List view will show in excess of 1000 classes (my simple test app had anywhere from 1066 to 1148 classes depending on which snapshot I was using as the "Current" snapshot).
This is amazing in some ways, as it shows you in stark detail just how immensely powerful the WPF framework is. But hunting through 1100 classes isn't productive, no matter how cool it is that there are that many classes instantiated and doing all sorts of awesome things.

Let's say you wanted to examine just the classes your application contains source code for (in my simple example, that would be MainWindow and App). Under "Basic Filters", click on "Classes with source" under "Show only…". Voilà: down from 1070 classes in the snapshot I was using as "Current" to 2 classes. If you then click on a class's name, it will show you (to the right of the class name) two little icon buttons. Hover over them and you will see that you can click one to view the Instance Categorizer for the class and the other to view the Instance List for the class. You can also show classes based on which heap they live on.

If you choose both a Baseline snapshot and a Current snapshot, then you can use the "Comparing snapshots" filters to show only: "New objects"; "Surviving objects"; "Survivors in growing classes"; or "Zombie objects" (if you aren't sure what one of these means, you can click the helpful "?" in a green circle icon to bring up a popup that explains them and provides context). Remember that your selection(s) under the "Show only…" heading will still apply, so you should update those selections to make sure you are seeing the view you want. There are also links under the "What is my memory problem?" heading that can help you diagnose the problems you are seeing, including one for "I don't know which kind I have" for situations where you know your application has some problem but aren't sure which kind, based on the behavior you have been seeing (OutOfMemoryExceptions, continually growing memory usage, larger memory use than expected at certain points in the program).

The Basic Filters are not the only filters there are. "Filter by Object Type" gives you the ability to filter by: "Objects that are disposable"; "Objects that are/are not disposed"; "Objects that are/are not GC roots" (GC roots are things like static variables); and "Objects that implement _______". "Objects that implement" is particularly neat: once you check the box, you can add one or more classes and interfaces that an object must implement in order to survive the filtering. Lastly there is "Filter by Reference", which gives you the option to pare down the list based on whether an object is "Kept in memory exclusively by" a particular item, a class/interface, or a namespace; whether an object is "Referenced by" one or more of those choices; and whether an object is "Never referenced by" one or more of those choices.

Remember that filtering is cumulative, so anything you set in one of the filter sections remains in effect unless and until you go back and change it.

There's quite a bit more to ANTS MP – it's a very full-featured product – but I think I touched on all of the most significant pieces. You can use it to debug: a .NET executable; an ASP.NET web application (running on IIS); an ASP.NET web application (running on Visual Studio's built-in web development server); a Silverlight 4 browser application; a Windows service; a COM+ server; and even something called an XBAP (local XAML browser application). You can also attach to a .NET 4 process to profile an application that's already running.
The startup screen also has a large number of "Charting Options" that let you adjust which statistics ANTS MP should collect. The default selection is a good, minimal set. It's worth your time to browse through the charting options to see which other statistics might help you diagnose a particular problem. The more statistics ANTS MP collects, the more work profiling has to do, so simply turning everything on is probably a bad idea. But the option to selectively add performance counters from the extensive list can be very helpful, since it lets you see additional data that might provide clues about a particular problem that has been bothering you.

ANTS MP integrates very nicely with all versions of Visual Studio that support plugins (i.e. all of the non-Express versions). Just note that if you choose "Profile Memory" from the "ANTS" menu, it will launch profiling for whichever project you have set as the Startup project.

One quick tip from my experience so far using ANTS MP: if you want to properly understand memory usage in an application you've written, first create an "empty" version of the type of project you are going to profile (a WPF application, an XNA game, etc.) and do a quick profiling session on that, so that you know the baseline memory usage of the framework itself. By "empty" I mean just create a new project of that type in Visual Studio, then compile it and run it with profiling – don't do anything special or add anything (except perhaps any external libraries you're planning to use).

The first thing I tried ANTS MP out on was a demo XNA project of an editor that I've been working on for quite some time, which involves a custom extension to XNA's content pipeline. The first time I ran it and saw the unmanaged memory usage, I was convinced I had some horrible bug that was creating extra copies of texture data (the demo project didn't have a lot of texture data, so when I saw a lot of unmanaged memory I instantly figured I was doing something wrong). Then I thought to run an empty project through, and when I saw that the amount of unmanaged memory was virtually identical, it dawned on me that the CLR itself sits in unmanaged memory and that (thankfully) there was nothing wrong with my code! Quite a relief.

Earlier, when discussing the overview video, I mentioned the API that lets you take snapshots from within your application. I gave it a quick trial; it's very easy to integrate and make use of, and it is a really nice addition (especially for projects where you want to know what allocations, if any, happen in a specific, complicated section of code). The only concern I had was that if I hadn't watched the overview video, I might never have known it existed. Even then it took me five minutes of hunting around Red Gate's website before I found the "Taking snapshots from your code" article that explains which DLL you need to add as a reference and which method of which class you should call in order to take an automatic snapshot (including the helpful warning to wrap the call in a try-catch block since, under certain circumstances, it can raise an exception, such as when you call it more than 5 times in 30 seconds).
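To give a feel for it, here is a rough sketch of what calling the snapshot API looks like. Note that the namespace, class, and method names below are placeholders written from memory, not authoritative: check Red Gate's "Taking snapshots from your code" article for the exact assembly to reference and the exact members to call.

    using System;
    using RedGate.MemoryProfiler.Snapshot; // hypothetical namespace; see the article

    static class ProfilerHooks
    {
        public static void SnapshotAfterExpensiveOperation()
        {
            try
            {
                // Triggers a snapshot in the attached ANTS MP session.
                MemoryProfiler.TakeSnapshot(); // hypothetical member name
            }
            catch (Exception)
            {
                // Per the article, this can throw (e.g. if called more
                // than 5 times in 30 seconds), so fail quietly rather
                // than crash the application being profiled.
            }
        }
    }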
The difficulty in discovering and then finding information about the automatic snapshot API was one thing I thought could use improvement. Another thing that would make the product even better would be local copies of the web pages it links to. Although I'm generally always connected to the internet, I imagine there are more than a few developers who aren't, or who are behind very restrictive firewalls. For them (and for me, too, if my internet connection happens to be down), it would be nice to have those documents installed locally, or to have the option to download an additional "documentation" package that adds local copies.

Another thing I wish were easier to manage is the Filters area. Finding and setting individual filters is very easy, as is understanding what those filters do, and breaking the area up into three sections (basic, by object type, and by reference) makes sense. But I could easily see myself running a long profiling session, forgetting that I had set some filter a long while earlier in a different filter section, and then spending quite a bit of time trying to figure out why a problem that was clearly visible in the data wasn't showing up in, e.g., the Instance List, before remembering to check all the filters for that one setting that was culling a few things from view. Some sort of indicator icon next to the filter section names, appearing whenever you have at least one filter set in that area, would be a nice visual clue to remind me that "oh yeah, I told it to only show objects on the Gen 2 heap! That's why I'm not seeing those instances of the SuperMagic class!"

Something else that would be nice (but that Red Gate cannot really do anything about) would be if this could be used in Windows Phone 7 development. If Microsoft and Red Gate could work together to make this happen (even if just on the WP7 emulator), that would be amazing. Especially given the memory constraints that apps and games running on mobile devices need to work within, a good memory profiler would be a phenomenally helpful tool. If anyone at Microsoft reads this, it'd be really great if you could make something like that happen – perhaps even a (subsidized) custom version just for WP7 development. (For XNA games, of course, you can create a Windows version of the game and use ANTS MP on that version in order to get a better picture of your memory situation. For Silverlight on WP7, though, there's quite a bit of educated guesswork, plus WeakReference creation followed by forced collections, in order to find the source of a memory problem.)
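For anyone who hasn't used that last technique, here is a hedged sketch of the WeakReference trick (my own illustration, not from any vendor documentation): if a weak reference to an object you expected to be freed is still alive after a full, forced collection, something is still holding on to that object. Run it in a release build, since debug builds can extend the lifetime of locals and skew the result.

    using System;

    static class LeakCheck
    {
        // Pass a delegate that creates the object under suspicion and
        // returns it without keeping any strong reference of its own.
        public static void ReportIfStillAlive(Func<object> create)
        {
            var weak = new WeakReference(create());

            // Force a full collection cycle, including finalizers.
            GC.Collect();
            GC.WaitForPendingFinalizers();
            GC.Collect();

            Console.WriteLine(weak.IsAlive
                ? "Leak suspected: the object is still reachable."
                : "OK: the object was collected.");
        }
    }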
The only other thing I found myself wanting was a "Back" button. Between my Windows Phone 7, my Zune, and other things, I've grown very used to having a "back stack" that lets me just navigate back to where I came from. The ANTS MP interface is surprisingly easy to use given how much it lets you do, and once you start using it for any amount of time, you learn all of the different areas such that you know where to go. And it does remember the state of the areas you were previously in, of course. So if you go to, e.g., the Instance Retention Graph from the Class List and then return to the Class List, it will remember which class you had selected and all that other state information. Still, a "Back" button would be a welcome addition to a future release.

Bottom Line

ANTS Memory Profiler is not an inexpensive tool, but my time is valuable. I can easily see ANTS MP saving me enough time tracking down memory problems to justify it on a cost basis. More importantly to me, knowing what is happening memory-wise in my programs, and having the confidence that my code doesn't have any hidden time bombs that will cause it to OOM if I leave it running longer than the quick spin-ups I do for debugging or to see how a new feature looks and feels, is a good feeling. It's a feeling that I like having and want to continue to have. I got the current version for free in order to review it; having done so, I've now added it to my must-have tools and will gladly lay out the money for the next version when it comes out. It has a 14-day free trial, so if you aren't sure whether it's right for you, or if it seems interesting but you aren't really sure it's worth shelling out the money for, give it a try.

    Read the article

  • Oracle Introduces Oracle Communications Data Model to Provide Actionable Insight for Communications

    - by kimberly.billings
To help communications service providers (CSPs) manage and analyze rapidly growing data volumes cost effectively, Oracle recently introduced the Oracle Communications Data Model (OCDM). With the OCDM, CSPs can achieve rapid time to value by quickly implementing a standards-based enterprise data warehouse that features communications industry-specific reporting, analytics and data mining. The combination of the OCDM, Oracle Exadata and the Oracle Business Intelligence (BI) Foundation represents the most comprehensive data warehouse and BI solution for the communications industry. Hong Kong Broadband Network, the fastest growing and second largest broadband service provider in Hong Kong, enhanced its data warehouse using Oracle Communications Data Model. It went live with OCDM within three months, and has increased its subscriber base by 37 percent in six months and reduced customer churn to less than one percent. Read more about HKBN's use of OCDM. Read more about OCDM.

    Read the article

  • Page Hierarchy

    - by raghu.yadav
Great example given by Frank on Page Hierarchy here: http://www.oracle.com/technology/products/jdev/tips/fnimphius/sitemenuprotection/index.html?_template=/ocom/print. A few things we need to concentrate on while implementing this example: 1) create a template and embed it in all the JSPX pages; 2) set defaultFocusPath="true" for the first itemNode in the groupNode, and set its idref to point to the correct node. A rough sketch of that menu-model XML follows.
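This sketch assumes the Trinidad/ADF XMLMenuModel schema; the ids, labels, and view ids are made up purely for illustration, so check Frank's article for the real structure and attribute names.

    <?xml version="1.0" encoding="UTF-8"?>
    <menu xmlns="http://myfaces.apache.org/trinidad/menu">
      <groupNode id="gn_home" idref="in_overview" label="Home">
        <!-- defaultFocusPath marks this node's path as the default focus path -->
        <itemNode id="in_overview" label="Overview"
                  defaultFocusPath="true" focusViewId="/overview"/>
        <itemNode id="in_reports" label="Reports" focusViewId="/reports"/>
      </groupNode>
    </menu>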

    Read the article

  • How To: Automatically Remove www from a Domain in IIS7

I recently moved the DevMavens.com site from one server to another and needed to ensure that the www.devmavens.com domain correctly redirected to simply devmavens.com.  This is important for SEO reasons (you don't want multiple domains to refer to the same content) and it's generally better to use the shorter URL (www is so 20th century) rather than wasting 4 characters for zero gain. My friend and IIS guru Scott Forsyth pointed me to his blog post on how to set up IIS URL Rewriting.  To get started, you simply install IIS Rewrite from this link using the super awesome Web Platform Installer.  You should get something like this when you're done with the install: If you already have IIS Manager open, you may need to close it and re-open it before you see the URL Rewrite module.  Once you do, you should see it listed for any given Site under the IIS section: Double click on the URL Rewrite icon, and then choose the Add Rule(s) action.  You can simply create a blank rule, and name it "Redirect from www to domain.com".  Essentially we're following the instructions from Scott Forsyth's post, but in reverse, since he's showing how to add 4 useless characters to the URL and I'm interested in removing them. After adding the name, we'll set the Match URL section's "Using" dropdown to "Wildcards" and specify a pattern of simply * to match anything. In the Conditions section we need to add a new condition with an Input of {HTTP_HOST} such that it should match the pattern www.devmavens.com (replace this with your domain). Ignore the Server Variables section. Set the action to Redirect and the Redirect URL to http://devmavens.com/{R:0} (replace with your domain).  The {R:0} will be replaced with whatever the user had entered.  So if they were going to http://www.devmavens.com/default.aspx they'll now be going to http://devmavens.com/default.aspx. The complete Inbound Rule should look like this: That's it!  Test it out and make sure you haven't accidentally used my exact URLs and started sending all of your users to devmavens.com! :)  Be sure to read Scott's post for more information on how to use regular expressions for your rules, and how to set them up via web.config rather than IIS Manager (a sketch of the web.config form follows below).
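For reference, here is roughly what the rule above looks like when expressed in web.config. This is a hedged sketch based on the settings described in the walkthrough, not copied from Scott's post; the attribute details may differ slightly from what IIS Manager generates, and you should substitute your own domain for devmavens.com.

    <system.webServer>
      <rewrite>
        <rules>
          <rule name="Redirect from www to domain.com"
                patternSyntax="Wildcard" stopProcessing="true">
            <match url="*" />
            <conditions>
              <add input="{HTTP_HOST}" pattern="www.devmavens.com" />
            </conditions>
            <!-- {R:0} carries over whatever path the user requested -->
            <action type="Redirect" url="http://devmavens.com/{R:0}" />
          </rule>
        </rules>
      </rewrite>
    </system.webServer>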

    Read the article

< Previous Page | 438 439 440 441 442 443 444 445 446 447 448 449  | Next Page >