Search Results

Search found 33509 results on 1341 pages for 'good practices'.


  • OBIEE 11.1.1 - (Updated) Best Practices Guide for Tuning Oracle® Business Intelligence Enterprise Edition (Whitepaper)

    - by Ahmed Awan
    Applies To: This whitepaper applies to OBIEE releases 11.1.1.3, 11.1.1.5, and 11.1.1.6. Introduction: One of the most challenging aspects of performance tuning is knowing where to begin. To maximize Oracle® Business Intelligence Enterprise Edition performance, you need to monitor, analyze, and tune all the Fusion Middleware / BI components. This guide describes the tools that you can use to monitor performance and the techniques for optimizing the performance of Oracle® Business Intelligence Enterprise Edition components. Click to Download the OBIEE Infrastructure Tuning Whitepaper (Right click or option-click the link and choose "Save As..." to download this file) Disclaimer: All tuning information stated in this guide is for orientation only; every modification has to be tested and its impact monitored and analyzed. Before implementing any of the tuning settings, it is recommended to carry out end-to-end performance testing that also includes obtaining baseline performance data for the default configurations, making incremental changes to the tuning settings, and then collecting performance data. Otherwise, the changes may worsen system performance.

    Read the article

  • Are `break` and `continue` bad programming practices?

    - by Mikhail
    My boss keeps mentioning nonchalantly that bad programmers use break and continue in loops. I use them all the time because they make sense; let me show you the inspiration:

        function verify(object) {
            if (object->value < 0) return false;
            if (object->value > object->max_value) return false;
            if (object->name == "") return false;
            ...
        }

    The point here is that the function first checks that the conditions are correct, then executes the actual functionality. IMO the same applies to loops:

        while (primary_condition) {
            if (loop_count > 1000) break;
            if (time_exec > 3600) break;
            if (this->data == "undefined") continue;
            if (this->skip == true) continue;
            ...
        }

    I think this makes it easier to read and debug, but I also don't see a downside. Please comment.
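    For contrast, a minimal sketch (Python, hypothetical names) of the same loop with and without the guard clauses; the flag-based rewrite that avoids break/continue is what typically adds nesting rather than clarity:

        def process_with_guards(items, max_items=1000):
            # Guard-clause style: exit and skip conditions stated up front.
            for count, item in enumerate(items):
                if count > max_items:
                    break                # hard stop: budget exhausted
                if item is None:
                    continue             # skip unusable entries
                handle(item)

        def process_without_guards(items, max_items=1000):
            # Same logic with no break/continue: a flag plus extra nesting,
            # and the loop keeps iterating even after the stop condition.
            stopped = False
            for count, item in enumerate(items):
                if not stopped:
                    if count > max_items:
                        stopped = True
                    elif item is not None:
                        handle(item)

        def handle(item):
            print(item)                  # stand-in for the real work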

    Read the article

  • SEO Practices - Comments on Blogs

    I have been creating web pages for some time and have spent considerable time researching SEO (Search Engine Optimisation) techniques, and, as the owner of several blogs, I have always wondered about one of those techniques. Quoted from the results of a recent Google search on "SEO Blog Comments":

    Read the article

  • Good practices - database programming, unit testing

    - by Piotr Rodak
    Jason Brimhal wrote today on his blog that a new book, Defensive Database Programming, written by Alex Kuznetsov (blog), is coming to bookstores. Alex writes about various techniques that make your code safer to run. SQL injection is not the only vulnerability the code may be exposed to. Others include inconsistent search patterns, unsupported character sets, locale settings, issues that may occur under high-concurrency conditions, and logic that breaks when certain conditions are not met. The...(read more)
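    The book itself is the authoritative reference; purely as a minimal illustration of the best-known of those vulnerabilities, here is a sketch of a parameterized query using Python's standard sqlite3 module (the schema and hostile input are hypothetical):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
        conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

        user_input = "alice' OR '1'='1"  # hostile input

        # Unsafe: string concatenation lets the input rewrite the query.
        # conn.execute("SELECT * FROM users WHERE name = '" + user_input + "'")

        # Safe: the driver treats the parameter strictly as data, never as SQL.
        rows = conn.execute("SELECT * FROM users WHERE name = ?",
                            (user_input,)).fetchall()
        print(rows)  # [] - the hostile string matches no user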

    Read the article

  • Best practices when creating/modeling databases?

    - by Oscar Mederos
    I learned at university some steps to model a database:

    1. Model the problem using the Extended Entity-Relationship Model.
    2. Extract the functional dependencies.
    3. Apply algorithms to normalize the database (3NF or Boyce-Codd).
    4. Create the database.

    I'm studying Computer Science, and since I took that course I've wondered whether I always need to follow those steps when creating a complex database for a specific problem. For example, do PHP / .NET / .. programmers always do that? Or are there tools to simplify the process, perhaps using another way of representing the problem instead of the EERM?
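    As a tiny worked illustration of steps 2-3 (a hypothetical schema, not from any course material): take Orders(order_id, customer_id, customer_name) with the functional dependencies order_id -> {customer_id, customer_name} and customer_id -> customer_name. Here customer_name depends on the key order_id only transitively, through customer_id, which violates 3NF; the normalization step therefore decomposes the relation into Orders(order_id, customer_id) and Customers(customer_id, customer_name).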

    Read the article

  • Windows Azure guidance from the Patterns and Practices team

    - by Eric Nelson
    The P&P team have started to share guidance on the Windows Azure Platform. They plan to group their efforts around:

    1. Moving to the Cloud
    2. Integrating with the Cloud
    3. Leveraging the Cloud

    First up is a document which explains the capabilities and limitations of Enterprise Library 5.0 Beta 2 in terms of use within .NET applications designed to run on the Windows Azure platform. You can download it here. Related Links: UK Azure Online Community – join today. UK Windows Azure Site. Start working with Windows Azure.

    Read the article

  • Mobile Development- Obtaining development hardware - best practices?

    - by Zoot
    I'm looking to get into smartphone development, but there are quite a few platform options out there right now. (iOS/Android/WebOS/Bada/Symbian/MeeGo/WindowsMobile/JavaME) I'd like to have development hardware to test my code and the overall functionality of the devices. What is the best way to obtain and/or borrow hardware for development and testing? Are there rules of thumb to follow which apply to all companies and platforms? In this situation, I'm a single developer. Does this process change for a startup? A hackerspace? A small business? A large business?

    Read the article

  • Best Practices and Etiquette for Setting up Email Notifications

    - by George Stocker
    If you were going to set up email alerts for the customers of your website to subscribe to, what rules of etiquette ought to be followed? I can think of a few off the top of my head:

    - Users can opt out
    - Text only (or tasteful remote images)
    - Not sent out more than once a week
    - Clients have fine-grained control over what they receive emails about (only receive what they are interested in)

    What other points should I consider? From a programming standpoint, what is the best method for setting up and running email notifications? Should I use an ASP.NET service? A Windows service? What are the pitfalls of either? How should I log emails that are sent? I don't care if they're received, but I do need to be able to prove that I did or did not send an email.
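    On the logging question, one common pattern (a minimal Python sketch with hypothetical config; the same idea carries over to an ASP.NET or Windows service) is to write an audit record both before and after each send, so the log itself is the evidence that a send was or was not attempted:

        import logging
        import smtplib
        from email.message import EmailMessage

        logging.basicConfig(filename="mail_audit.log",
                            format="%(asctime)s %(levelname)s %(message)s",
                            level=logging.INFO)

        def send_notification(smtp_host, sender, recipient, subject, body):
            msg = EmailMessage()
            msg["From"], msg["To"], msg["Subject"] = sender, recipient, subject
            msg.set_content(body)

            logging.info("SEND ATTEMPT to=%s subject=%r", recipient, subject)
            try:
                with smtplib.SMTP(smtp_host) as smtp:
                    smtp.send_message(msg)
                # proof we handed the message to the mail server
                logging.info("SEND OK to=%s", recipient)
            except Exception:
                logging.exception("SEND FAILED to=%s", recipient)
                raise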

    Read the article

  • Best Practices for SOA 11g Multi Data Center Active – Active Deployment – White Paper

    - by JuergenKress
    Best practice for high availability: This paper describes the recommended Active-Active solutions that can be used for protecting an Oracle Fusion Middleware 11g SOA system against downtime across multiple locations (referred to as the SOA Active-Active Disaster Recovery Solution or SOA Multi Data Center Active-Active Deployment). It provides the required configuration steps for setting up the recommended topologies and guidance about the performance and failover implications of such a configuration. Get the white paper here. For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article

  • Recommend good shared hosting [closed]

    - by Django Reinhardt
    It seems that everyone has something bad to say about the "big" shared hosting sites like 1and1, HostGator, GoDaddy, etc., but which are the ones you've had GOOD experiences with? I'm going to focus this question on LAMP stacks, given that they're the most popular option for shared hosting, but feel free to answer if you've had an especially good experience with a different stack. Good shared hosting should be:

    - Competitively priced - but not at the expense of...
    - Fully featured - email, PHP, MySQL, but what else?
    - Highly customizable - do you have access to advanced features like being able to deliver static content?
    - Up to date - do they run PHP4 as standard, or do they run the latest version?
    - Customer service - when you have a problem, are they rude and unhelpful? Do they take ages to reply?

    So how about it? Who have YOU had a good experience with?

    Read the article

  • Referential Integrity: Best Practices for IBM DB2

    Of the various constraints possible on relational tables, referential constraints are perhaps the most common ... and most misused. Learn about the advantages and disadvantages of different methods to implement and enforce RI, and issues that must be addressed when implementing DBMS-enforced Referential Integrity.
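    As a minimal illustration of DBMS-enforced referential integrity (sketched here with SQLite through Python's standard library because it is self-contained; DB2's DDL syntax differs, but the principle is identical, and the schema is hypothetical):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled
        conn.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY)")
        conn.execute("""CREATE TABLE emp (
                            id INTEGER PRIMARY KEY,
                            dept_id INTEGER REFERENCES dept(id))""")

        conn.execute("INSERT INTO dept VALUES (1)")
        conn.execute("INSERT INTO emp VALUES (100, 1)")      # fine: dept 1 exists
        try:
            conn.execute("INSERT INTO emp VALUES (101, 9)")  # dept 9 does not exist
        except sqlite3.IntegrityError as e:
            print("rejected by the DBMS:", e)  # FOREIGN KEY constraint failed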

    Read the article

  • displaying multi-section html documents - best practices

    - by ecpepper
    I work at a research organization and we publish a lot of large-ish documents, usually organized in sections. What I want to know is how best to present these multi-section documents on our website. Presently, I load the entire document as a single page, with each section as its own div, and then show and hide divs as needed via a table of contents and "next" and "prev" buttons. The advantages of this are mainly: 1) you can move between sections very quickly, and 2) it produces consistent analytics (when a page is loaded, I know a report is being read). The disadvantages, however, are real:

    - Readers can't take advantage of browser back/forward buttons to move between sections.
    - It's complicated to create direct links to individual sections (I can do it with JavaScript, but it's not easy for other people to grab and share).
    - For long reports, you have to wait for the full report to load before you can move around (and that can include hordes of images and charts).

    Do other people have thoughts on better ways to organize this? Here's an example of the current system: http://massbudget.org/825

    Read the article

  • Robots.txt practices with .htaccess redirections (inherits)

    - by Jayhal
    I have a question regarding how to write robots.txt files for many domains and subdomains with redirects in place. We have a hosting account with primary and add-on domains. All of our domains and subdomains, including the primary domain, are redirected via htaccess 301s to their own subdirectories in the primary domain's root directory. I'm confused about how I would write the robots.txt for certain directories.

    First, I want to confirm that I'm right in understanding that, for domains and subdomains, crawlers will look to the directory that acts as that URL's root directory for the crawling rules (robots.txt). Also, that a directory will not be affected by a robots.txt present in its parent directory if the directory has its own domain/subdomain and that URL is the one being accessed by crawlers. (I'm pretty sure, but I wanted to confirm I didn't have a fundamentally flawed understanding of robots.txt.)

    In the original root directory on the account (where the primary domain was directed before htaccess was put in place), what should the robots.txt contain? When crawlers look to crawl our primary domain, will they look to the original root directory for the robots.txt, or will they reference the file contained in the new subdirectory where all the primary domain's site files are located? If so, what should the root's robots.txt include, if anything at all? Would I be right to include a simple 'Disallow: /' for all agents, and then include more specific robots.txt files in each subdirectory with more specific instructions? Would that affect the crawling of the directory where the primary domain is now redirected?

    Any help is greatly appreciated. Thanks!
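    One protocol detail that bears on this (a general fact about robots.txt, not a ruling on this particular htaccess setup): crawlers only ever request robots.txt from the root URL of a host, e.g. http://example.com/robots.txt, and never consult copies deeper in the URL path. Since each add-on domain's subdirectory is served as that domain's document root, a robots.txt placed there is the root file for that host. A minimal per-host file, with hypothetical paths, might look like:

        User-agent: *
        Disallow: /private/

    and a file that blocks all compliant crawlers entirely:

        User-agent: *
        Disallow: /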

    Read the article

  • Good Practices in Software Outsourcing

    When an organization has decided to outsource a job to another company outside its organizational structure, it may have to deal with risks pertaining to software outsourcing. Hiring a stranger to ge... [Author: Chirag Vyas - Web Design and Development - April 09, 2010]

    Read the article

  • Github Organization Repositories, Issues, Multiple Developers, and Forking - Best Workflow Practices

    - by Jim Rubenstein
    A weird title, yes, but I've got a bit of ground to cover, I think. We have an organization account on GitHub with private repositories. We want to use GitHub's native issues/pull-requests features (pull requests are basically exactly what we want as far as code reviews and feature discussions go). We found the tool hub by defunkt, which has a cool little feature of being able to convert an existing issue to a pull request and automatically associate your current branch with it.

    I'm wondering if it is best practice to have each developer in the organization fork the organization's repository to do their feature work/bug fixes/etc. This seems like a pretty solid workflow (it's basically what every open source project on GitHub does), but we want to be sure that we can track issues and pull requests from ONE source, the organization's repository. So I have a few questions:

    1. Is a fork-per-developer approach appropriate in this case? It seems like it could be a little overkill. I'm not sure that we need a fork for every developer, unless we introduce developers who don't have direct push access and need all their code reviewed. In that case, we would want to institute a policy like that for those developers only. So, which is better: all developers in a single repository, or a fork for everyone?
    2. Does anyone have experience with the hub tool, specifically the pull-request feature? If we do a fork-per-developer (or even a fork for less-privileged devs only), will the pull-request feature of hub operate on pull requests from the upstream master repository (the organization's repository), or does it have different behavior?

    EDIT: I did some testing with issues, forks, and pull requests and found that if you create an issue on your organization's repository, then fork the repository from your organization to your own GitHub account, make some changes, and merge them to your fork's master branch, when you try to run hub -i <issue #> you get an error: "User is not authorized to modify the issue". So, apparently, that workflow won't work.
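    For reference, a minimal sketch of the fork-per-developer flow being discussed (plain git commands, independent of hub; the repository URLs and branch names are hypothetical):

        # one-time setup on each developer's machine
        git clone git@github.com:dev/project.git           # the developer's fork
        cd project
        git remote add upstream git@github.com:org/project.git

        # per feature
        git checkout -b my-feature
        # ...commit work, then publish the branch on the fork...
        git push origin my-feature
        # open a pull request from dev:my-feature against org's repository

        # keep the fork current with the organization's repository
        git fetch upstream
        git merge upstream/master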

    Read the article

  • Good Video Game User Interface Design Books/Websites?

    - by Tucker Morgan
    I have been programming games for some time, but while my teachers say that my code is good and advanced, my friends say that the interface is hard to understand and not the easiest to navigate. I want to learn how to design good user interfaces so that I can program better games and people will have an easier time getting around. Does anyone know of any good books or websites about designing video game interfaces?

    Read the article

  • Live Security Talk Webcast: Security Best Practices for Design and Deployment on Windows Azure (Leve

    Developing secure applications and services in the cloud requires knowledge of the threat landscape specific to the cloud provider. The key is understanding threat mitigations implemented by the cloud architecture versus those that are the responsibility of the developer. Register for this exciting live webcast to learn about the threats that are specific to the cloud and how the Windows Azure architecture deals with these threats. We also cover how to use built-in Windows Azure security features...

    Read the article

  • Buzzword for "performance-aware" software development

    - by errantlinguist
    There seems to be an overabundance of buzzwords for software development styles and methodologies: Agile development, extreme programming, test-driven development, etc... well, is there any sort of buzzword for "performance-aware" development? By "performance awareness", I don't necessarily mean low-latency or low-level programming, although the former would logically fall under the blanket term I'm looking for. I mean development in which resources are recognised to be finite and so there is a general emphasis on low computational complexity, good resource management, etc. If I was to be snarky, I would say "good programming", but that doesn't seem to get the message across so well...

    Read the article

  • Best practices for caching search queries

    - by David Esteves
    I am trying to improve the performance of my ASP.NET Web API by adding a data cache, but I am not sure exactly how to go about it, as it seems more complex than most caching scenarios. For example, I have a table of Locations and an API to retrieve locations via search, for an autocomplete: /api/location/Londo, and the query would be something like

        SELECT * FROM Locations WHERE Name LIKE 'Londo%'

    These locations change very infrequently, so I would like to cache them to prevent trips to the database for no real reason and to improve the response time. Looking at caching options, I am using the Windows Azure AppFabric system; the problem is that it's just a key/value cache. Since I can only retrieve items based on keys, I couldn't actually use it for this scenario as far as I'm aware. Is what I am trying to do a bad use of a caching system? Should I look into a NoSQL DB which could possibly run as a cache for something like this to improve performance? Should I just cache the entire table/collection in a single key with a specific data structure which could assist with the searching, and then do the search upon retrieval of the data?
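    As one minimal sketch of the "entire table under a single key with a structure that supports searching" idea (Python, with hypothetical names; in practice the serialized structure would live in AppFabric rather than process memory): keep the names sorted so every prefix query becomes a binary search for a contiguous slice.

        import bisect

        class LocationPrefixCache:
            def __init__(self, names):
                # Sort case-insensitively so all prefix matches are contiguous.
                self._names = sorted(names, key=str.lower)
                self._keys = [n.lower() for n in self._names]

            def search(self, prefix, limit=10):
                p = prefix.lower()
                # Matches for the prefix form one contiguous slice of the sorted list.
                lo = bisect.bisect_left(self._keys, p)
                hi = bisect.bisect_right(self._keys, p + "\uffff")
                return self._names[lo:hi][:limit]

        # Hypothetical usage: rebuild whenever the Locations table changes.
        cache = LocationPrefixCache(["London", "Londonderry", "Paris", "Longford"])
        print(cache.search("Londo"))  # ['London', 'Londonderry']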

    Read the article
