Search Results

Search found 9423 results on 377 pages for 'greatest common divisor'.


  • Are webhosts that require NS instead of a CNAME common?

    - by billpg
    I've just signed up with a webhost (which I prefer not to name) and I'm reasonably happy with it. The only nit was when I was ready to put a site online and I asked the support line what name I should point my 'www' CNAME to. They responded that they don't do that and I need to set my domain's NS records for the hosting to work. "Why would you ever want to do it that way? Our service to you includes DNS and our servers are probably much better than the one your registrar provides." This was a bit of a surprise, as all of the other webhosts I've worked with happily support this. I've set up (eg) gallery.myfriend.example for friends by having them configure their DNS to CNAME 'gallery' to the name of a shared server at a webhost and the webhost does name-based hosting for 'gallery.myfriend.example'. (Of course, if the webhost ever tells me I'm being moved from A.webhost.example to B.webhost.example, it would be my responsibility to change where the CNAME points. Really good webhosts would instead create myname.webhost.example for the IP of whichever server my stuff happens to be on, so I'd never have to worry about keeping my CNAME up to date.) Is my impression correct, that most webhosts will happily support a service that begins with a CNAME hosted elsewhere, or is it really more common that webhosts will only provide a service if they control the DNS service too?

    Read the article

  • Is it common to only pay developers for the time they said a project would take?

    - by BAM
    I work at a small startup (<10 people), and I was recently assigned (along with one other developer) to a relatively small project. The project involved moving an existing iOS app to Android. The client told us they had built the app for iOS in 300 man-hours. Not knowing at the time that this figure was completely false, we naively and optimistically assumed that if they could build the app from scratch in that amount of time, we could easily "port" it in a similar amount of time. Therefore, we drafted up a fixed-price contract based on 350 man-hours, with a 5 week deadline. (We are well aware now of how big of a mistake this was... Never let the client tell you how long it's going to take!) Anyway, by week 4 we had already surpassed our 350 hours, and we estimated that there were at least 2 more weeks left on the project. We were told to continue working, but that the company could not afford to pay out on overdue projects anymore. I thought this just meant "be more careful about estimates in the future". However a few weeks later, the company president informed us that we would not be getting paid for any time past 350 man-hours. We argued over the issue for almost an hour. He claimed, however, that this is standard practice for many organizations, and that I was unreasonable for making a big deal out of it. So is this really a common thing, or am I justified in being upset about it? Thanks in advance for any advice!

    Read the article

  • How can I be certain that my code is flawless? [duplicate]

    - by David
    This question already has an answer here: Theoretically bug-free programs 5 answers I have just completed an exercise from my textbook which wanted me to write a program to check if a number is prime or not. I have tested it and it seems to work fine, but how can I be certain that it will work for every prime number? public boolean isPrime(int n) { int divisor = 2; int limit = n-1; if (n == 2) { return true; } else { int mod = 0; while (divisor <= limit) { mod = n % divisor; if (mod == 0) { return false; } divisor++; } if (mod > 0) { return true; } } return false; } Note that this question is not a duplicate of Theoretically Bug Free Programs because that question asks about whether one can write bug free programs in the face of the limitative results such as Turing's proof of the incomputability of halting, Rice's theorem and Gödel's incompleteness theorems. This question asks how a program can be shown to be bug free.
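
    One practical way to gain confidence, short of a formal proof, is to cross-check the method against an independent reference over an exhaustive range of inputs. Below is a minimal sketch of that idea in Java; PrimeChecker is a hypothetical name for whatever class holds the isPrime method above, and BigInteger.isProbablePrime(50) serves as the reference because it is effectively exact for values this small.

        import java.math.BigInteger;

        public class IsPrimeCrossCheck {
            public static void main(String[] args) {
                PrimeChecker checker = new PrimeChecker(); // hypothetical class containing the isPrime method above
                for (int n = 2; n <= 100_000; n++) {
                    // Reference answer; a certainty of 50 leaves a negligible error probability for int-sized values
                    boolean expected = BigInteger.valueOf(n).isProbablePrime(50);
                    boolean actual = checker.isPrime(n);
                    if (expected != actual) {
                        System.out.println("Mismatch at n = " + n + ": expected " + expected + ", got " + actual);
                        return;
                    }
                }
                System.out.println("isPrime agrees with the reference for all n in [2, 100000]");
            }
        }

    Exhaustive checking over a bounded range is still not a proof for every int, but combined with a short argument about the loop (every composite n has a divisor between 2 and n-1, so the loop always finds it) it gives strong evidence that the method is correct.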

    Read the article

  • Plan Caching and Query Memory Part II (Hash Match) – When not to use stored procedure - Most common performance mistake SQL Server developers make.

    - by sqlworkshops
    SQL Server estimates Memory requirement at compile time, when stored procedure or other plan caching mechanisms like sp_executesql or prepared statement are used, the memory requirement is estimated based on first set of execution parameters. This is a common reason for spill over tempdb and hence poor performance. Common memory allocating queries are that perform Sort and do Hash Match operations like Hash Join or Hash Aggregation or Hash Union. This article covers Hash Match operations with examples. It is recommended to read Plan Caching and Query Memory Part I before this article which covers an introduction and Query memory for Sort. In most cases it is cheaper to pay for the compilation cost of dynamic queries than huge cost for spill over tempdb, unless memory requirement for a query does not change significantly based on predicates.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   To read additional articles I wrote click here.   The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script. Most of these concepts are also covered in our webcasts: www.sqlworkshops.com/webcasts  Let’s create a Customer’s State table that has 99% of customers in NY and the rest 1% in WA.Customers table used in Part I of this article is also used here.To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'. --Example provided by www.sqlworkshops.com drop table CustomersState go create table CustomersState (CustomerID int primary key, Address char(200), State char(2)) go insert into CustomersState (CustomerID, Address) select CustomerID, 'Address' from Customers update CustomersState set State = 'NY' where CustomerID % 100 != 1 update CustomersState set State = 'WA' where CustomerID % 100 = 1 go update statistics CustomersState with fullscan go   Let’s create a stored procedure that joins customers with CustomersState table with a predicate on State. --Example provided by www.sqlworkshops.com create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1) end go  Let’s execute the stored procedure first with parameter value ‘WA’ – which will select 1% of data. set statistics time on go --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' goThe stored procedure took 294 ms to complete.  The stored procedure was granted 6704 KB based on 8000 rows being estimated.  The estimated number of rows, 8000 is similar to actual number of rows 8000 and hence the memory estimation should be ok.  There was no Hash Warning in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Now let’s execute the stored procedure with parameter value ‘NY’ – which will select 99% of data. 
-Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  The stored procedure took 2922 ms to complete.   The stored procedure was granted 6704 KB based on 8000 rows being estimated.    The estimated number of rows, 8000 is way different from the actual number of rows 792000 because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘WA’ in our case. This underestimation will lead to spill over tempdb, resulting in poor performance.   There was Hash Warning (Recursion) in SQL Profiler. To observe Hash Warning, enable 'Hash Warning' in SQL Profiler under Events 'Errors and Warnings'.   Let’s recompile the stored procedure and then let’s first execute the stored procedure with parameter value ‘NY’.  In a production instance it is not advisable to use sp_recompile instead one should use DBCC FREEPROCCACHE (plan_handle). This is due to locking issues involved with sp_recompile, refer to our webcasts, www.sqlworkshops.com/webcasts for further details.   exec sp_recompile CustomersByState go --Example provided by www.sqlworkshops.com exec CustomersByState 'NY' go  Now the stored procedure took only 1046 ms instead of 2922 ms.   The stored procedure was granted 146752 KB of memory. The estimated number of rows, 792000 is similar to actual number of rows of 792000. Better performance of this stored procedure execution is due to better estimation of memory and avoiding spill over tempdb.   There was no Hash Warning in SQL Profiler.   Now let’s execute the stored procedure with parameter value ‘WA’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go  The stored procedure took 351 ms to complete, higher than the previous execution time of 294 ms.    This stored procedure was granted more memory (146752 KB) than necessary (6704 KB) based on parameter value ‘NY’ for estimation (792000 rows) instead of parameter value ‘WA’ for estimation (8000 rows). This is because the estimation is based on the first set of parameter value supplied to the stored procedure which is ‘NY’ in this case. This overestimation leads to poor performance of this Hash Match operation, it might also affect the performance of other concurrently executing queries requiring memory and hence overestimation is not recommended.     The estimated number of rows, 792000 is much more than the actual number of rows of 8000.  Intermediate Summary: This issue can be avoided by not caching the plan for memory allocating queries. Other possibility is to use recompile hint or optimize for hint to allocate memory for predefined data range.Let’s recreate the stored procedure with recompile hint. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, recompile) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 297 ms and 1102 ms in line with previous optimal execution times.   The stored procedure with parameter value ‘WA’ has good estimation like before.   Estimated number of rows of 8000 is similar to actual number of rows of 8000.   
The stored procedure with parameter value ‘NY’ also has good estimation and memory grant like before because the stored procedure was recompiled with current set of parameter values.  Estimated number of rows of 792000 is similar to actual number of rows of 792000.    The compilation time and compilation CPU of 1 ms is not expensive in this case compared to the performance benefit.   There was no Hash Warning in SQL Profiler.   Let’s recreate the stored procedure with optimize for hint of ‘NY’. --Example provided by www.sqlworkshops.com drop proc CustomersByState go create proc CustomersByState @State char(2) as begin declare @CustomerID int select @CustomerID = e.CustomerID from Customers e inner join CustomersState es on (e.CustomerID = es.CustomerID) where es.State = @State option (maxdop 1, optimize for (@State = 'NY')) end go  Let’s execute the stored procedure initially with parameter value ‘WA’ and then with parameter value ‘NY’. --Example provided by www.sqlworkshops.com exec CustomersByState 'WA' go exec CustomersByState 'NY' go  The stored procedure took 353 ms with parameter value ‘WA’, this is much slower than the optimal execution time of 294 ms we observed previously. This is because of overestimation of memory. The stored procedure with parameter value ‘NY’ has optimal execution time like before.   The stored procedure with parameter value ‘WA’ has overestimation of rows because of optimize for hint value of ‘NY’.   Unlike before, more memory was estimated to this stored procedure based on optimize for hint value ‘NY’.    The stored procedure with parameter value ‘NY’ has good estimation because of optimize for hint value of ‘NY’. Estimated number of rows of 792000 is similar to actual number of rows of 792000.   Optimal amount memory was estimated to this stored procedure based on optimize for hint value ‘NY’.   There was no Hash Warning in SQL Profiler.   This article covers underestimation / overestimation of memory for Hash Match operation. Plan Caching and Query Memory Part I covers underestimation / overestimation for Sort. It is important to note that underestimation of memory for Sort and Hash Match operations lead to spill over tempdb and hence negatively impact performance. Overestimation of memory affects the memory needs of other concurrently executing queries. In addition, it is important to note, with Hash Match operations, overestimation of memory can actually lead to poor performance.   Summary: Cached plan might lead to underestimation or overestimation of memory because the memory is estimated based on first set of execution parameters. It is recommended not to cache the plan if the amount of memory required to execute the stored procedure has a wide range of possibilities. One can mitigate this by using recompile hint, but that will lead to compilation overhead. However, in most cases it might be ok to pay for compilation rather than spilling sort over tempdb which could be very expensive compared to compilation cost. The other possibility is to use optimize for hint, but in case one sorts more data than hinted by optimize for hint, this will still lead to spill. On the other side there is also the possibility of overestimation leading to unnecessary memory issues for other concurrently executing queries. In case of Hash Match operations, this overestimation of memory might lead to poor performance. 
When the values used in optimize for hint are archived from the database, the estimation will be wrong leading to worst performance, so one has to exercise caution before using optimize for hint, recompile hint is better in this case.   I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts), I recommend you to watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL Scripts.  Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet.These are hands-on workshops with a maximum of 12 participants and not lectures. For consulting engagements click here.   Disclaimer and copyright information:This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.   R Meyyappan [email protected] LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Visual Studio &amp; TFS &ndash; List of addins, extensions, patches and hotfixes &ndash; Latest and Greatest

    - by terje
    This post is a list of the addins and extensions we (I ) recommend for use in Inmeta.  It’s coming up all the time – what to install, where are the download sites, etc etc, and thus I thought it better to post it here and keep it updated. The basics are Visual Studio 2010 connected to a Team Foundation Server 2010.  The edition of Visual Studio I use is the Ultimate Edition, but as many stay with the Premium Edition I’ve marked the extensions which only works with the Ultimate with a . I’ve also split the group into Recommended (which means Required) and Optional (which means Recommended) and Nice to Have (which means Optional) .   The focus is to get a setup which can be used for a complete coding experience for the whole ALM process.  The Code Gallery is found either through the Tools/Extension Manager menu in Visual Studio or through this link. The ones to really download is the Recommended category.  Then consider the Optional based on your needs.  The list of course reflects what I use for my work , so it is by no means complete, and for some of the tools there are equally useful alternatives.  The components directly associated with Visual Studio from Microsoft should be common, see the Microsoft column.     Product Available on Code Gallery Latest Version License Rec/Opt/N2H Applicable to Microsoft TFS Power Tools Sept 2010 Complete setup msi on link, split into parts on CG Sept 2010 Free Recommended TFS integration Yes Productivity Power Tools Yes 10.0.11019.3 Free Recommended Coding Yes Code Contracts No 1.4.30903 Free Recommended Coding & Quality Yes Code Contracts Editor Extensions Yes 1.4.30903 Free Recommended Coding & Quality Yes VSCommands Yes 3.6.4.1 Lite version Free (Good enough) Nice to have Coding No Power Commands Yes 1.0.2.3 Free Recommended Coding Yes FeaturePack 2   No.  MSDN Subscriber download under Visual Studio 2010 FP2 Part of MSDN Subscription Recommended Modeling & Testing Yes ReSharper No (Trial only) 5.1.1 Licensed Recommended Coding & Quality No dotTrace No 4.0.1 Licensed Optional Quality No NDepends No (Trial only) Licensed Optional Quality No tangible T4 editor Yes 1.950 Lite version Free (Good enough) Optional Coding (T4 templates) No Reflector No (Trial of Pro version only) 6.5 Lite version Free (Good enough) Recommended Coding/Investigation No LinqPad No 4.26.2 Licensed Nice to have Coding No Beyond Compare No 3.1.11 Licensed Recommended Coding/Investigation No Pex and Moles No (Moles available alone on CG) . Complete on MSDN Subscriber download under Visual Studio 2010 0.94.51023 Part of MSDN Subscription Optional Coding & Unit Testing Yes ApexSQL No Licensed Nice to have SQL No                 Some important Patches, upgrades and fixes Product Date Information Rec/Opt Applicable to Scrolling context menu KB2345133 and KB2413613 October 2010 Here Recommended Visual Studio MTM Patch October 2010 Here and here  KB2387011 Recommended (if you use MTM) MTM Data warehouse fix June 2010 Iteration dates fails with SQL 2008 R2.  KB2222312. Affects Burndown chart in Agile workbook Only for SQL 2008 R2 Server Upgrade 2008 to 2010 issue and hotfix August 2010 Fixes problems with labels and branches which are lost during upgrade. Apply before upgrade. Note: This has been fixed in the latest re-release of the TFS Server dated Aug 5th 2010. See here. Recommends downloading the latest bits. Only if upgrade from 2008 from earlier bits Server

    Read the article

  • Lost in Translation – Common Mistakes Interpreting Patterns – Mark Simpson, Griffiths-Waite @ SOA, Cloud & Service Technology Symposium 2012

    - by JuergenKress
    ORACLE PROMOTIONAL DISCOUNT FOR EXCLUSIVE ORACLE DISCOUNT, ENTER PROMO CODE: DJMXZ370 For details please visit the registration page International SOA, Cloud + Service Technology Symposium is a yearly event that features the top experts and authors from around the world, providing a series of keynotes, talks, demonstrations, and panels, as well as training and certification workshops - all dedicated to empowering IT professionals to realize modern service technologies and practices in the real world. Click here for a two-page printable conference overview (PDF). Speaker: Mark Simpson, Griffiths-Waite Mark has been specialising in Oracle technology for 13 years, the last 10 of these with Griffiths Waite. Mark leads our SOA technology practice (covering SOA, Business Process Management and Enterprise Architecture). He is a much sought after presenter on the Oracle and SOA conference circuits, and a respected authority on these technologies. Mark has advised a host of UK leading organisations on the deployment of BPM / SOA solutions. Working closely with Oracle US Product Development Mark has contributed to Oracle's SOA Methodology and Oracle's SOA Maturity Model. Lost in Translation – Common Mistakes Interpreting Patterns Learn how small misinterpretations of high-level design patterns can have large and costly project ramifications. Good SOA design benefits from the use of a reference architecture and standardised design patterns. However both of these concepts give an abstracted view of the intended solution, which needs to be interpreted to become realised. A reference implementation is important to demonstrate how key design guidelines can be implemented in the toolset of choice, but the main success factor is how these are used through the build and post live phases of the project. This session will introduce practical design patterns with supporting implementation examples that, if used correctly, will give long term benefit. We will highlight implementations where misinterpretations or misalignment from pattern aims have led to issues post implementation. The session will add depth to the pattern discussions you are already having enabling confidence in proceeding to the next level of realisation whilst considering how they may be implemented within your solution and chosen toolset. September 25, 2012 - 13:55 KEYNOTES & SPEAKERS More than 80 international subject matter experts will be speaking at the Symposium. Below are confirmed keynotes and speakers so far. Over 50% of the agenda has not yet been finalized. Many more speakers to come. View the partial program calendars on the Conference Agenda page. CONFERENCE THEMES & TRACKS Cloud Computing Architecture & Patterns New SOA & Service-Orientation Practices & Models Emerging Service Technology Innovation Service Modeling & Analysis Techniques Service Infrastructure & Virtualization Cloud-based Enterprise Architecture Business Planning for Cloud Computing Projects Real World Case Studies Semantic Web Technologies (with & without the Cloud) Governance Frameworks for SOA and/or Cloud Computing Projects Service Engineering & Service Programming Techniques Interactive Services & the Human Factor New REST & Web Services Tools & Techniques Oracle Specialized SOA & BPM Partners Oracle Specialized partners have proven their skills by certifications and customer references. 
To find a local Specialized partner please visit http://solutions.oracle.com SOA & BPM Partner Community For regular information on Oracle SOA Suite become a member in the SOA & BPM Partner Community for registration please visit  www.oracle.com/goto/emea/soa (OPN account required) If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: Mark Simpson,Griffiths Waite,SOA Patterns,SOA Symposium,Thomas Erl,SOA Community,Oracle SOA,Oracle BPM,BPM,Community,OPN,Jürgen Kress

    Read the article

  • Whitepaper list for the application framework

    - by Rick Finley
    We're reposting the list of technical whitepapers for the Oracle ETPM framework (called OUAF, Oracle Utilities Application Framework).  These are are available from My Oracle Support at the Doc Id's mentioned below. Some have been updated in the last few months to reflect new advice and new features.  This is reposted from the OUAF blog:  http://blogs.oracle.com/theshortenspot/entry/whitepaper_list_as_at_november Doc Id Document Title Contents 559880.1 ConfigLab Design Guidelines This whitepaper outlines how to design and implement a data management solution using the ConfigLab facility. This whitepaper currently only applies to the following products: Oracle Utilities Customer Care And Billing Oracle Enterprise Taxation Management Oracle Enterprise Taxation and Policy Management           560367.1 Technical Best Practices for Oracle Utilities Application Framework Based Products Whitepaper summarizing common technical best practices used by partners, implementation teams and customers. 560382.1 Performance Troubleshooting Guideline Series A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows: Concepts - General Concepts and Performance Troublehooting processes Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions. Network Troubleshooting - General troubleshooting of the network with common issues and resolutions. Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions. Server Troubleshooting - General troubleshooting of the Operating system with common issues and resolutions. Database Troubleshooting - General troubleshooting of the database with common issues and resolutions. Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions. 560401.1 Software Configuration Management Series  A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. The individual whitepapers are as follows: Concepts - General concepts and introduction. Environment Management - Principles and techniques for creating and managing environments. Version Management - Integration of Version control and version management of configuration items. Release Management - Packaging configuration items into a release. Distribution - Distribution and installation of releases across environments Change Management - Generic change management processes for product implementations. Status Accounting - Status reporting techniques using product facilities. Defect Management - Generic defect management processes for product implementations. Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation. Implementing Service Packs - Discussion on the service packs and how to use them in an implementation. Implementing Upgrades - Discussion on the the upgrade process and common techniques for minimizing the impact of upgrades. 773473.1 Oracle Utilities Application Framework Security Overview A whitepaper summarizing the security facilities in the framework. Now includes references to other Oracle security products supported. 774783.1 LDAP Integration for Oracle Utilities Application Framework based products Updated! A generic whitepaper summarizing how to integrate an external LDAP based security repository with the framework. 
789060.1 Oracle Utilities Application Framework Integration Overview A whitepaper summarizing all the various common integration techniques used with the product (with case studies). 799912.1 Single Sign On Integration for Oracle Utilities Application Framework based products A whitepaper outlining a generic process for integrating an SSO product with the framework. 807068.1 Oracle Utilities Application Framework Architecture Guidelines This whitepaper outlines the different variations of architecture that can be considered. Each variation will include advice on configuration and other considerations. 836362.1 Batch Best Practices for Oracle Utilities Application Framework based products This whitepaper outlines the common and best practices implemented by sites all over the world. 856854.1 Technical Best Practices V1 Addendum Addendum to Technical Best Practices for Oracle Utilities Customer Care And Billing V1.x only. 942074.1 XAI Best Practices This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework. 970785.1 Oracle Identity Manager Integration Overview This whitepaper outlines the principals of the prebuilt intergration between Oracle Utilities Application Framework Based Products and Oracle Identity Manager used to provision user and user group security information. For Fw4.x customers use whitepaper 1375600.1 instead. 1068958.1 Production Environment Configuration Guidelines A whitepaper outlining common production level settings for the products based upon benchmarks and customer feedback. 1177265.1 What's New In Oracle Utilities Application Framework V4? Whitepaper outlining the major changes to the framework since Oracle Utilities Application Framework V2.2. 1290700.1 Database Vault Integration Whitepaper outlining the Database Vault Integration solution provided with Oracle Utilities Application Framework V4.1.0 and above. 1299732.1 BI Publisher Guidelines for Oracle Utilities Application Framework Whitepaper outlining the interface between BI Publisher and the Oracle Utilities Application Framework 1308161.1 Oracle SOA Suite Integration with Oracle Utilities Application Framework based products This whitepaper outlines common design patterns and guidelines for using Oracle SOA Suite with Oracle Utilities Application Framework based products. 1308165.1 MPL Best Practices Oracle Utilities Application Framework This is a guidelines whitepaper for products shipping with the Multi-Purpose Listener. This whitepaper currently only applies to the following products: Oracle Utilities Customer Care And Billing Oracle Enterprise Taxation Management Oracle Enterprise Taxation and Policy Management 1308181.1 Oracle WebLogic JMS Integration with the Oracle Utilities Application Framework This whitepaper covers the native integration between Oracle WebLogic JMS with Oracle Utilities Application Framework using the new Message Driven Bean functionality and real time JMS adapters. 1334558.1 Oracle WebLogic Clustering for Oracle Utilities Application Framework New! This whitepaper covers process for implementing clustering using Oracle WebLogic for Oracle Utilities Application Framework based products. 1359369.1 IBM WebSphere Clustering for Oracle Utilities Application Framework New! 
This whitepaper covers process for implementing clustering using IBM WebSphere for Oracle Utilities Application Framework based products 1375600.1 Oracle Identity Management Suite Integration with the Oracle Utilities Application Framework New! This whitepaper covers the integration between Oracle Utilities Application Framework and Oracle Identity Management Suite components such as Oracle Identity Manager, Oracle Access Manager, Oracle Adaptive Access Manager, Oracle Internet Directory and Oracle Virtual Directory. 1375615.1 Advanced Security for the Oracle Utilities Application Framework New! This whitepaper covers common security requirements and how to meet those requirements using Oracle Utilities Application Framework native security facilities, security provided with the J2EE Web Application and/or facilities available in Oracle Identity Management Suite.

    Read the article

  • List of available whitepapers as at 04 May 2010

    - by Anthony Shorten
    The following table lists the whitepapers available, from My Oracle Support, for any Oracle Utilities Application Framework based product: KB Id Document Title Contents 559880.1 ConfigLab Design Guidelines Whitepaper outlining how to design and implement a ConfigLab solution. 560367.1 Technical Best Practices for Oracle Utilities Application Framework Based Products Whitepaper summarizing common technical best practices used by partners, implementation teams and customers.  560382.1 Performance Troubleshooting Guideline Series A set of whitepapers on tracking performance at each tier in the framework. The individual whitepapers are as follows: Concepts - General Concepts and Performance Troublehooting processes Client Troubleshooting - General troubleshooting of the browser client with common issues and resolutions. Network Troubleshooting - General troubleshooting of the network with common issues and resolutions. Web Application Server Troubleshooting - General troubleshooting of the Web Application Server with common issues and resolutions. Server Troubleshooting - General troubleshooting of the Operating system with common issues and resolutions. Database Troubleshooting - General troubleshooting of the database with common issues and resolutions. Batch Troubleshooting - General troubleshooting of the background processing component of the product with common issues and resolutions. 560401.1 Software Configuration Management Series  A set of whitepapers on how to manage customization (code and data) using the tools provided with the framework. The individual whitepapers are as follows: Concepts - General concepts and introduction. Environment Management - Principles and techniques for creating and managing environments. Version Management - Integration of Version control and version management of configuration items.  Release Management - Packaging configuration items into a release.  Distribution - Distribution and installation of  releases across environments  Change Management - Generic change management processes for product implementations. Status Accounting -Status reporting techniques using product facilities.  Defect Management -Generic defect management processes for product implementations. Implementing Single Fixes - Discussion on the single fix architecture and how to use it in an implementation. Implementing Service Packs - Discussion on the service packs and how to use them in an implementation. Implementing Upgrades - Discussion on the the upgrade process and common techniques for minimizing the impact of upgrades. 773473.1 Oracle Utilities Application Framework Security Overview Whitepaper summarizing the security facilities in the framework. Updated for OUAF 4.0.1 774783.1 LDAP Integration for Oracle Utilities Application Framework based products Whitepaper summarizing how to integrate an external LDAP based security repository with the framework.  789060.1 Oracle Utilities Application Framework Integration Overview Whitepaper summarizing all the various common integration techniques used with the product (with case studies). 799912.1 Single Sign On Integration for Oracle Utilities Application Framework based products Whitepaper outlining a generic process for integrating an SSO product with the framework. 807068.1 Oracle Utilities Application Framework Architecture Guidelines This whitepaper outlines the different variations of architecture that can be considered. Each variation will include advice on configuration and other considerations. 
836362.1 Batch Best Practices for Oracle Utilities Application Framework based products This whitepaper outlines the common and best practices implemented by sites all over the world. Updated for OUAF 4.0.1 856854.1 Technical Best Practices V1 Addendum Addendum to Technical Best Practices for Oracle Utilities Application Framework Based Products containing only V1.x specific advice. 942074.1 XAI Best Practices This whitepaper outlines the common integration tasks and best practices for the Web Services Integration provided by the Oracle Utilities Application Framework. Updated for OUAF 4.0.1 970785.1 Oracle Identity Manager Integration Overview This whitepaper outlines the principles of the prebuilt integration between Oracle Utilities Application Framework Based Products and Oracle Identity Manager used to provision user and user group security information. 1068958.1 Production Environment Configuration Guidelines (New!) Whitepaper outlining common production level settings for the products

    Read the article

  • Execute a SQlite command with Entity Framework

    - by Filimindji
    Hi everybody, I use a SQLite database and Entity Framework (with .net framework 3.5). I'm trying to execute a simple SQL non query command to create a new table in this database. My Entity Framework already contains the object model for this table: I just want to generate the corresponding table using a command. (By the way, there is maybe a better way to do this. Any ideas someone :) My problem is that I'm not able to execute any command, even the simple commands. Here is my code: EntityConnection entityConnection = new EntityConnection(entitiesConnectionString); Entities db = new Entities(entityConnection); DbCommand command = db.Connection.CreateCommand(); command.CommandText ="CREATE TABLE MyTable (Id int NOT NULL, OtherTable_Id nchar(40) REFERENCES OtherTable (Id) On Delete CASCADE On Update NO ACTION, SomeData nvarchar(1024) NOT NULL, Primary Key(Id) );"; command.ExecuteNonQuery(); I got this error: System.Data.EntitySqlException: The query syntax is not valid., near identifier 'TABLE', line 1, column 8. at System.Data.Common.EntitySql.CqlParser.yyerror(String s) at System.Data.Common.EntitySql.CqlParser.yyparse() at System.Data.Common.EntitySql.CqlParser.Parse(String query) at System.Data.Common.EntitySql.CqlQuery.Parse(String query, ParserOptions parserOptions) at System.Data.Common.EntitySql.CqlQuery.Compile(String query, Perspective perspective, ParserOptions parserOptions, Dictionary`2 parameters, Dictionary`2 variables, Boolean validateTree) at System.Data.EntityClient.EntityCommand.MakeCommandTree() at System.Data.EntityClient.EntityCommand.CreateCommandDefinition() at System.Data.EntityClient.EntityCommand.TryGetEntityCommandDefinitionFromQueryCache(EntityCommandDefinition& entityCommandDefinition) at System.Data.EntityClient.EntityCommand.GetCommandDefinition() at System.Data.EntityClient.EntityCommand.InnerPrepare() at System.Data.EntityClient.EntityCommand.ExecuteReader(CommandBehavior behavior) at System.Data.EntityClient.EntityCommand.ExecuteScalar[T_Result](Func`2 resultSelector) It seems to be a syntax error, but I can't figure out where the problem is and how to resolve it. The entityConnection is ok because I can use any entities generated with EF. I tried with another simple command, but it throws another exception: DbCommand command = db.Connection.CreateCommand(); command.CommandText = "SELECT COUNT(Id) From OtherTable;"; int result = (int)command.ExecuteScalar(); And I got this error, which is not the same, but may help: System.Data.EntitySqlException: 'Groupe' could not be resolved in the current scope or context. Make sure that all referenced variables are in scope, that required schemas are loaded, and that namespaces are referenced correctly., near simple identifier, line 1, column 23.
at System.Data.Common.EntitySql.CqlErrorHelper.ReportIdentifierError(Expr expr, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.ConvertIdentifier(Expr expr, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.Convert(Expr astExpr, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.ProcessAliasedFromClauseItem(AliasExpr aliasedExpr, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.ProcessFromClauseItem(FromClauseItem fromClauseItem, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.ProcessFromClause(FromClause fromClause, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.ConvertQuery(Expr expr, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.Convert(Expr astExpr, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.ConvertRootExpression(Expr astExpr, SemanticResolver sr) at System.Data.Common.EntitySql.SemanticAnalyzer.ConvertGeneralExpression(Expr astExpr, SemanticResolver sr) at System.Data.Common.EntitySql.CqlQuery.AnalyzeSemantics(Expr astExpr, Perspective perspective, ParserOptions parserOptions, Dictionary`2 parameters, Dictionary`2 variables) at System.Data.Common.EntitySql.CqlQuery.Compile(String query, Perspective perspective, ParserOptions parserOptions, Dictionary`2 parameters, Dictionary`2 variables, Boolean validateTree) at System.Data.EntityClient.EntityCommand.MakeCommandTree() at System.Data.EntityClient.EntityCommand.CreateCommandDefinition() at System.Data.EntityClient.EntityCommand.TryGetEntityCommandDefinitionFromQueryCache(EntityCommandDefinition& entityCommandDefinition) at System.Data.EntityClient.EntityCommand.GetCommandDefinition() at System.Data.EntityClient.EntityCommand.InnerPrepare() at System.Data.EntityClient.EntityCommand.ExecuteReader(CommandBehavior behavior) at System.Data.EntityClient.EntityCommand.ExecuteScalar[T_Result](Func`2 resultSelector)

    Read the article

  • How to speed up calculation of length of longest common substring?

    - by eSKay
    I have two very large strings and I am trying to find out their Longest Common Substring. One way is using suffix trees (supposed to have a very good complexity, though a complex implementation), and another is the dynamic programming method (both are mentioned on the Wikipedia page linked above). Using dynamic programming: the problem is that the dynamic programming method has a huge running time (complexity is O(n*m), where n and m are the lengths of the two strings). What I want to know (before jumping to implement suffix trees): Is it possible to speed up the algorithm if I only want to know the length of the common substring (and not the common substring itself)?
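
    A note on the length-only question: computing just the length does not improve the O(n*m) time bound of the dynamic programming method, but it does let you keep only two rows of the table instead of the whole n-by-m matrix, which can matter a lot for very large strings; for a better asymptotic running time you would still need the suffix tree (or a suffix automaton). Here is a minimal Java sketch of the rolling-row idea; the method and variable names are illustrative, not taken from any particular library.

        public class LcsLength {
            // Length of the longest common substring of a and b: O(n*m) time, O(min(n, m)) extra space.
            static int longestCommonSubstringLength(String a, String b) {
                if (a.length() < b.length()) { String t = a; a = b; b = t; } // keep the shorter string for the rows
                int[] prev = new int[b.length() + 1];
                int[] curr = new int[b.length() + 1];
                int best = 0;
                for (int i = 1; i <= a.length(); i++) {
                    for (int j = 1; j <= b.length(); j++) {
                        if (a.charAt(i - 1) == b.charAt(j - 1)) {
                            curr[j] = prev[j - 1] + 1;   // extend the common substring ending at a[i-1], b[j-1]
                            if (curr[j] > best) best = curr[j];
                        } else {
                            curr[j] = 0;                 // substrings must be contiguous, so reset
                        }
                    }
                    int[] t = prev; prev = curr; curr = t; // reuse the two rows
                }
                return best;
            }

            public static void main(String[] args) {
                System.out.println(longestCommonSubstringLength("ABABC", "BABCA")); // prints 4 ("BABC")
            }
        }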

    Read the article

  • I Can't Install or Remove Any Application

    - by berkay gürsoy
    when i try to install or remove an application via either software center or apt-get install they both fail and give some debconf errors below is the log please help.Sorry some of the text is not english. sudo apt-get install aptitude Paket listeleri okunuyor... Bitti Bagimlilik agaci insa ediliyor. Durum bilgisi okunuyor... Bitti Asagidaki ek paketler de yüklenecek: aptitude-common libboost-iostreams1.49.0 libcwidget3 Önerilen paketler: aptitude-doc-en aptitude-doc tasksel debtags libcwidget-dev Asagidaki YENI paketler kurulacak: aptitude aptitude-common libboost-iostreams1.49.0 libcwidget3 Yükseltilen: 0, Yeni Kurulan: 4, Kaldirilacak: 0 ve Yükseltilmeyecek: 48. 8 tam olarak kurulmadi veya kaldirilmadi. Indirilmesi gereken dosya boyutu 0 B/2.498 kB Bu islemden sonra 10,4 MB ek disk alani kullanilacak. Devam etmek istiyor musunuz [E/h]? e Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3. debconf: DbDriver "config": mkdir :Böyle bir dosya ya da dizin yok Selecting previously unselected package aptitude-common. dpkg: uyari: files list file for package 'aspell' missing; assuming package has no files currently installed dpkg: uyari: files list file for package 'ubuntu-desktop' missing; assuming package has no files currently installed dpkg: uyari: files list file for package 'vuze' missing; assuming package has no files currently installed dpkg: uyari: files list file for package 'java-wrappers' missing; assuming package has no files currently installed (Veritabani okunuyor... 198988 files and directories currently installed.) Unpacking aptitude-common (from .../aptitude-common_0.6.8.1-2ubuntu1_all.deb) ... Selecting previously unselected package libboost-iostreams1.49.0. Unpacking libboost-iostreams1.49.0 (from .../libboost-iostreams1.49.0_1.49.0-3.1ubuntu1_amd64.deb) ... Selecting previously unselected package libcwidget3. Unpacking libcwidget3 (from .../libcwidget3_0.5.16-3.4ubuntu1_amd64.deb) ... Selecting previously unselected package aptitude. Unpacking aptitude (from .../aptitude_0.6.8.1-2ubuntu1_amd64.deb) ... wicd-daemon (1.7.2.4-2ubuntu1) kuruluyor... Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3. 
debconf: DbDriver "config": mkdir :Böyle bir dosya ya da dizin yok dpkg: error processing wicd-daemon (--configure): installed post-installation script alt islemi çikis durumunda hata döndürdü : 1 man-db (2.6.3-1) kuruluyor... Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3. debconf: DbDriver "config": mkdir :Böyle bir dosya ya da dizin yok dpkg: error processing man-db (--configure): installed post-installation script alt islemi çikis durumunda hata döndürdü : 1 dictionaries-common (1.12.10) kuruluyor... Use of uninitialized value in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 44, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in -e at /usr/share/perl5/Debconf/DbDriver/File.pm line 46, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value in pattern match (m//) at /usr/share/perl5/Debconf/DbDriver/File.pm line 47, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in -d at /usr/share/perl5/Debconf/DbDriver/File.pm line 48, <DEBCONF_CONFIG> chunk 3. Use of uninitialized value $directory in concatenation (.) or string at /usr/share/perl5/Debconf/DbDriver/File.pm line 49, <DEBCONF_CONFIG> chunk 3. debconf: DbDriver "config": mkdir :Böyle bir dosya ya da dizin yok dpkg: error processing dictionaries-common (--configure): installed post-installation script alt islemi çikis durumunda hata döndürdü : 1 dpkg: dependency problems prevent configuration of aspell: aspell depends on dictionaries-common (>> 0.40); bununla beraber: Package dictionaries-common is not configured yet. dpkg: error processing aspell (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor dpkg: dependency problems prevent configuration of aspell-en: aspell-en depends on aspell (>= 0.60.3-2); bununla beraber: Package aspell is not configured yet. aspell-en depends on dictionaries-common (>= 0.49.2); bununla beraber: Package dictionaries-common is not configured yet. dpkg: error processing aspell-en (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor dpkg: dependency problems prevent configuration of hyphen-en-us: hyphen-en-us depends on dictionaries-common (>= 0.10) | openoffice.org-updatedicts; bununla beraber: Package dictionaries-common is not configured yet. openoffice.org-updatedicts paketi yüklenmedi. Package dictionaries-common which provides openoffice.org-updatedicts is not configured yet. dpkg: error processing hyphen-en-us (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor dpkg: dependency problems prevent configuration of wicd-gtk: wicd-gtk depends on wicd-daemon (= 1.7.2.4-2ubuntu1); bununla beraber: Package wicd-daemon is not configured yet. dpkg: error processing wicd-gtk (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor dpkg: dependency problems prevent configuration of wicd: wicd depends on wicd-daemon (= 1.7.2.4-2ubuntu1); bununla beraber: Package wicd-daemon is not configured yet. 
wicd depends on wicd-gtk (= 1.7.2.4-2ubuntu1) | wicd-curses (= 1.7.2.4-2ubuntu1) | wicd-cli (= 1.7.2.4-2ubuntu1) | wicd-client; bununla beraber: Package wicd-gtk is not configured yet. wicd-curses paketi yüklenmedi. wicd-cli paketi yüklenmedi. wicd-client paketi yüklenmedi. Package wicd-gtk which provides wicd-client is not configured yet. dpkg: error processing wicd (--configure): bagimlilik sorunlari - yapilandirilmadan birakiliyor aptitude-common (0.6.8.1-2ubuntu1) kuruluyor... libboost-iostreams1.49.0 (1.49.0-3.1ubuntu1) kuruluyor... libcwidget3 (0.5.16-3.4ubuntu1) kuruluyor... aptitude (0.6.8.1-2ubuntu1) kuruluyor... update-alternatives: using /usr/bin/aptitude-curses to provide /usr/bin/aptitude (aptitude) in Otomatik Mod Processing triggers for libc-bin ... ldconfig deferred processing now taking place Islem sirasinda hatalar bulundu: wicd-daemon man-db dictionaries-common aspell aspell-en hyphen-en-us wicd-gtk wicd E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • What are the most common schools of knowledge prevalent in 'great' programmers?

    - by DaveDev
    I asked this question on StackOverflow but it got shot down fairly quickly. It was suggested that I ask it here, so I've copied it from there. Hope that's ok: The question: I think that the 'great' programmers become so mostly from being exposed to and interested in programming from early ages, as well as huge amounts of dedication. Unfortunately I only discovered programming at a later age, and I sometimes feel frustrated with the difficulties I experience in trying to grok some of the more fundamental concepts the 'greats' seem to take for granted. So my question is in relation to that: if a 'great' programmer (i.e. top 10%) had to distill his or her knowledge into a few recommendations / books / concepts / suggestions / lessons, what would they be? What does a programmer who's willing to learn need to do to get on the right track towards becoming great? And to be more specific, I don't mean 'what does that person need to do', because the answer is almost invariably, 'practice!'. What I mean is, what does the programmer need to know?

    Read the article

  • High CPU usage compared to WinXP. Common apps and actions use 100% CPU cycles

    - by Jopower
    I'm running Lubuntu 14.04.1 LTS. The PC is a 2004 HP ze4200 laptop: 1.8 GHz Celeron M with 1 GB RAM and an 80 GB drive. It was running fine on WinXP SP3 and I cleaned the drive off to test Lubuntu 14 LTS. No anti-virus is installed yet. I enabled the CPU resource monitor to see how various programs drag the OS. Using Firefox 31 online right now, I see that basic functions like opening a new tab and scrolling down a page use 100% CPU time, occasionally for 10-30 seconds. In fact some pretty basic apps like Leafnote hit 100% for a second. Wordpad never did that. Lubuntu Software Center locks things up at 100% for 10 seconds. Just typing here shows a 60-80% spike every character. Running the mouse around the screen for 10 seconds results in a sustained 100% load during that time. Right now, if I let the PC rest just idling Firefox and not doing anything with it, CPU use bounces from 20-40% all by itself. WinXP idles at 2-10% and it's considered not good for it to be above 20%... something odd must be happening. Sure, XP will give similarly higher CPU cycles with program use, but it's not locking and slogging like this. Lubuntu is supposed to be a light OS, and by memory usage it is, and I'm happy since this is an old PC maxed for memory upgrades. However, being used to doing some tuning and wary of abnormalities going on in the background, the CPU use indicates things going on that I want to know about and perhaps apply a tweak or two. Recommendations are appreciated. And this 300 point "new tags" restriction bites!

    Read the article

  • What are the basic features of an email module in a common web application?

    - by Coral Doe
    When developing an email module, what are the features to have in mind, besides actual email sending? I am talking about an email module that notifies users of events and periodically sends reports. The only other feature I have in mind is maintaining grey/black lists for users that do illegal operations in the system or any other things that may lead to email/domain/IP banning. Is there an etiquette for developing email modules? Are there some references of requirements for such modules?
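
    For concreteness, one way to think about those features is to keep the transport, the notification/report triggers, and the suppression (grey/black) list as separate pieces, so the banning rules are enforced in exactly one place. The Java sketch below is purely illustrative; every name in it is hypothetical rather than taken from an existing framework.

        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;

        public class NotificationMailer {
            public interface MailTransport {                 // the actual SMTP sending hides behind this
                void send(String to, String subject, String body);
            }

            private final MailTransport transport;
            private final Set<String> suppressed = ConcurrentHashMap.newKeySet(); // grey/black-listed addresses

            public NotificationMailer(MailTransport transport) {
                this.transport = transport;
            }

            public void suppress(String address) {           // e.g. after abuse, bounces, or unsubscribes
                suppressed.add(address);
            }

            public void notifyEvent(String to, String event) {
                if (suppressed.contains(to)) return;         // never mail suppressed users; protects domain/IP reputation
                transport.send(to, "Notification: " + event, "Event details: " + event);
            }

            public void sendPeriodicReport(String to, String reportBody) {
                if (suppressed.contains(to)) return;
                transport.send(to, "Your periodic report", reportBody);
            }
        }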

    Read the article

  • Is sexist humor more common in the Ruby community than other language communities? [closed]

    - by Andrew Grimm
    I've heard of more cases of sexist humor in the Ruby community, such as Sqoot's "women as perk" and toplessness in advertising, than in all the other programming language communities combined. Is this merely because I'm in the Ruby community, and am therefore more likely to hear about incidents in the Ruby community, or is it because there's a higher rate of sexist humor in the Ruby community compared to, say, the C community?

    Read the article

  • How common is prototyping as the first stage of development?

    - by EpsilonVector
    I've been taking some software design courses in the past few semesters, and while I see the benefit in a lot of the formalism, I feel like it doesn't tell me anything about the program itself: You can't tell how the program is going to operate from the Use Case spec, even though it discusses what the program can do. You can't tell anything about the user experience from the requirements document, even though it can include quality requirements. Sequence diagrams are a good description of how the software works as the call stack, but are very limited, and give a highly partial view of the overall system. Class diagrams are great for describing how the system is built, but are utterly useless in helping you figure out what the software needs to be. Where in all this formalism is the bottom line: how the program looks, operates, and what experience it gives? Doesn't it make more sense to design off of that? Isn't it better to figure out how the program should work via a prototype and strive to implement it for real? I know that I'm probably suffering from being taught engineering by theoreticians, but I need to ask, do they do this in the industry? How do people figure out what the program actually is, not what it should conform to? Do people prototype a lot, or do they mostly use the formal tools like UML and I just didn't get the hang of using them yet?

    Read the article

  • Is there an easy way to type in common math symbols?

    - by srcspider
    Disclaimer: I'm sure someone is going to moan about ease of use; for the purpose of this question, consider readability to be the only factor that matters. So I found this site that converts to easting/northing (it's not really important what that even means), but here's how the piece of JavaScript looks:

        /**
         * Convert Ordnance Survey grid reference easting/northing coordinate to (OSGB36) latitude/longitude
         *
         * @param {OsGridRef} gridref - easting/northing to be converted to latitude/longitude
         * @returns {LatLonE} latitude/longitude (in OSGB36) of supplied grid reference
         */
        OsGridRef.osGridToLatLong = function(gridref) {
          var E = gridref.easting;
          var N = gridref.northing;
          var a = 6377563.396, b = 6356256.909;          // Airy 1830 major & minor semi-axes
          var F0 = 0.9996012717;                         // NatGrid scale factor on central meridian
          var f0 = 49*Math.PI/180, λ0 = -2*Math.PI/180;  // NatGrid true origin
          var N0 = -100000, E0 = 400000;                 // northing & easting of true origin, metres
          var e2 = 1 - (b*b)/(a*a);                      // eccentricity squared
          var n = (a-b)/(a+b), n2 = n*n, n3 = n*n*n;     // n, n², n³
          var f=f0, M=0;
          do {
            f = (N-N0-M)/(a*F0) + f;
            var Ma = (1 + n + (5/4)*n2 + (5/4)*n3) * (f-f0);
            var Mb = (3*n + 3*n*n + (21/8)*n3) * Math.sin(f-f0) * Math.cos(f+f0);
            var Mc = ((15/8)*n2 + (15/8)*n3) * Math.sin(2*(f-f0)) * Math.cos(2*(f+f0));
            var Md = (35/24)*n3 * Math.sin(3*(f-f0)) * Math.cos(3*(f+f0));
            M = b * F0 * (Ma - Mb + Mc - Md);            // meridional arc
          } while (N-N0-M >= 0.00001);                   // ie until < 0.01mm
          var cosf = Math.cos(f), sinf = Math.sin(f);
          var ν = a*F0/Math.sqrt(1-e2*sinf*sinf);              // nu = transverse radius of curvature
          var ρ = a*F0*(1-e2)/Math.pow(1-e2*sinf*sinf, 1.5);   // rho = meridional radius of curvature
          var η2 = ν/ρ-1;                                      // eta squared
          var tanf = Math.tan(f);
          var tan2f = tanf*tanf, tan4f = tan2f*tan2f, tan6f = tan4f*tan2f;
          var secf = 1/cosf;
          var ν3 = ν*ν*ν, ν5 = ν3*ν*ν, ν7 = ν5*ν*ν;
          var VII = tanf/(2*ρ*ν);
          var VIII = tanf/(24*ρ*ν3)*(5+3*tan2f+η2-9*tan2f*η2);
          var IX = tanf/(720*ρ*ν5)*(61+90*tan2f+45*tan4f);
          var X = secf/ν;
          var XI = secf/(6*ν3)*(ν/ρ+2*tan2f);
          var XII = secf/(120*ν5)*(5+28*tan2f+24*tan4f);
          var XIIA = secf/(5040*ν7)*(61+662*tan2f+1320*tan4f+720*tan6f);
          var dE = (E-E0), dE2 = dE*dE, dE3 = dE2*dE, dE4 = dE2*dE2, dE5 = dE3*dE2, dE6 = dE4*dE2, dE7 = dE5*dE2;
          f = f - VII*dE2 + VIII*dE4 - IX*dE6;
          var λ = λ0 + X*dE - XI*dE3 + XII*dE5 - XIIA*dE7;
          return new LatLonE(f.toDegrees(), λ.toDegrees(), GeoParams.datum.OSGB36);
        }

    I found that to be a really nice way of writing an algorithm, at least as far as readability is concerned. Is there any way to easily write the special symbols? And by "easily write" I mean NOT copy/paste them.

    Read the article

  • Is it good practice to use functions just to centralize common code?

    - by EpsilonVector
    I run across this problem a lot. For example, I'm currently writing a read function and a write function, and both check whether buf is a NULL pointer and whether the mode variable is within certain boundaries. This is code duplication. It can be solved by moving the checks into their own function, but should I? That function will be pretty anemic (it doesn't do much), rather localized (so not general purpose), and won't stand well on its own (you can't figure out what it's for unless you see where it is used). Another option is to use a macro, but I want to talk about functions in this post. So, should you use a function for something like this? What are the pros and cons? (A minimal sketch of the extraction follows below.)

    Read the article
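
    For what it's worth, here is a minimal sketch of the extraction being weighed up in the excerpt above, written in C++; every name (validate_args, MODE_MIN, MODE_MAX, my_read, my_write) is illustrative rather than taken from the question:

        #include <cstddef>
        #include <cerrno>

        // Illustrative bounds for the hypothetical 'mode' argument.
        constexpr int MODE_MIN = 0;
        constexpr int MODE_MAX = 3;

        // The "anemic" helper: it exists only so that read and write
        // share one copy of their precondition checks.
        static int validate_args(const void* buf, int mode)
        {
            if (buf == nullptr)
                return EINVAL;
            if (mode < MODE_MIN || mode > MODE_MAX)
                return ERANGE;
            return 0;
        }

        int my_read(void* buf, int mode)
        {
            if (int err = validate_args(buf, mode))
                return -err;
            // ... actual read logic ...
            return 0;
        }

        int my_write(const void* buf, int mode)
        {
            if (int err = validate_args(buf, mode))
                return -err;
            // ... actual write logic ...
            return 0;
        }

    Even though validate_args does little on its own, a change to the shared preconditions (say, widening the valid mode range) now happens in one place instead of two, which is usually the strongest argument for pulling it out.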

  • What are the common mistakes in 'tailored Scrum approaches'?

    - by Clark Gable
    I have seen this before: management wants to be agile and scrummified, but does not want to step out of the status quo. My latest observation is no different; here, Scrum is 'tailored' to the organization, specifically into a weird many-people process (the original post includes a diagram of the different participants). I am putting together a document listing why this will not work. Here are the obvious ones: (1) there are product owner agents (an obvious WTF) who report to the product owner, diluting decision-making capability; (2) there is a role that looks similar to a manager in the traditional approach, a development manager: an obvious attempt at a command-and-control model; (3) the ScrumMaster's role includes collecting timesheets, which are used to track progress instead of burndown charts, undermining agile's efforts to build teams of motivated individuals. Leaving aside the question "how would you convince management?", my question is: what else do you see as failures in this or similar 'tailored Scrum' approaches? EDIT: The diagram might need a few more details: (1) the development manager is not part of the development team and has not very clearly defined responsibilities, except developer performance assessment, recruitment, etc.; (2) there are more than two teams (each with ScrumMaster + development manager + dev team) sharing the same product owner across all teams!

    Read the article

  • Is it common for a development position to be extremely mundane and not challenging at all? [closed]

    - by Kim Jong Woo
    Hi guys, so I am working at this company as a web developer, but after 1 week of working here I realize the stuff I am doing seems very easy compared to what my peers who have been around longer are doing. I am way ahead of my schedule and finish my projects early, but it's because the work is not at all hard and involves no real problem solving. So I am puzzled why I would be thanked for doing such menial tasks. Is this normal? This is driving me nuts; I ask to be given more work, I do get it, and I still finish it quickly and accurately. Now I am having this paranoia that they are just conspiring to use me for a short period of time and then terminate me. Am I going too far with this? I keep losing sleep over it. On days when I have a full load of work to complete, this uneasiness goes away, but so far I feel like I am not being allowed to pursue what I thought I would be doing: solving problems and designing solutions. A lot of it doesn't require any thinking, just cleaning up other people's code and closing bug tickets.

    Read the article

  • Is it common to prototype in a higher level language?

    - by Mark Canlas
    I'm currently toying with the idea of embarking on a project that far exceeds my current programming ability, in a language I have very little real-world experience in (C). Would it be valuable to prototype in a higher-level language that I'm more familiar with (like Perl/Python/Ruby/C#), just so I can get the overall design going? Ultimately the final product is performance-sensitive, hence the choice of C, but I'm afraid that not knowing C well will make me lose the forest for the trees. While searching for similar questions, I noticed one fellow mention that programmers used to prototype in Prolog, then crank it out in assembler.

    Read the article

  • How do I architect 2 plugins that share a common component?

    - by James
    I have an object that takes in data and spits out a transformed output, called IBaseItem. I also have two parsers, IParserA and IParserB. These parsers transform external data (in formats dataA and dataB respectively) into a format usable by my IBaseItem (baseData). I want to create 2 systems, one that works with dataA and one that works with dataB. They will allow the user to enter data, match it to the right plugins/implementations, and transform the data to outData. I want to write these traffic cops myself but have other people provide the parser and base item logic, and as such am implementing these items as plugins (hence the use of interfaces). Other programmers can choose to implement one or both parsers. Q: How should I structure the way base items and parsers are associated, stored, and loaded into each of my programs? (The original post includes a class-relations diagram; a rough sketch of one id-based association follows below.) What I've tried: Initially I thought there should be a different dll for each of my 2 traffic cops, each containing a parser and a base item. However, the duplication of base item logic doesn't seem right (especially if the base item logic changes). I then thought the base items could all have their own dll, and then somehow associate parsers and base items (GUIDs?), but I don't know whether the overhead of implementing the id/association adds too much complexity.

    Read the article
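
    The question's context is .NET plugin assemblies, but the shape of one possible answer is language-neutral. Below is a rough C++-flavoured sketch of the id-based association the post wonders about, keeping the base-item logic in a single shared module; all names here (BaseData, OutData, IBaseItem, IParser, BaseItemRegistry) are hypothetical, not taken from the question:

        #include <map>
        #include <memory>
        #include <string>
        #include <utility>

        // Hypothetical data types flowing through the pipeline.
        struct BaseData { std::string payload; };
        struct OutData  { std::string payload; };

        // Base-item logic lives in exactly one shared module, so a change to it
        // is made once rather than once per traffic cop.
        struct IBaseItem {
            virtual ~IBaseItem() = default;
            virtual OutData transform(const BaseData& in) = 0;
        };

        // A parser plugin converts one external format (dataA or dataB) into BaseData
        // and names, by id, the base item its output should be fed to.
        struct IParser {
            virtual ~IParser() = default;
            virtual std::string baseItemId() const = 0;   // e.g. a GUID string
            virtual BaseData parse(const std::string& raw) = 0;
        };

        // Each traffic cop owns a registry of base items keyed by id; parsers are
        // loaded from plugins and routed to their associated base item at run time.
        class BaseItemRegistry {
        public:
            void add(std::string id, std::unique_ptr<IBaseItem> item) {
                items_.emplace(std::move(id), std::move(item));
            }
            OutData process(IParser& parser, const std::string& raw) const {
                const BaseData base = parser.parse(raw);
                return items_.at(parser.baseItemId())->transform(base);
            }
        private:
            std::map<std::string, std::unique_ptr<IBaseItem>> items_;
        };

    Under this split, each traffic cop links against the one shared base-item module and loads only the parser plugins it needs; the id string (or GUID) is the only coupling a parser author has to know about.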

  • What are the common pitfalls that would stop Authorised Key SSH access, and how do I find and correct for them?

    - by Ashimema
    EDIT: This question was reworked to make it more useful to the community and less specific to me. Questions about ssh and problems with authorised key access seem to come up reasonably often, but very few seem to have a clear answer anywhere: "Server keeps asking for password after I've copied my SSH public key to authorized_keys", "ssh not accepting public key", "how do I use ssh with key access in 11.10", "passwordless ssh not working". So, in the community's opinion, what is the tried and tested method for getting to the bottom of such problems?

    Read the article

< Previous Page | 20 21 22 23 24 25 26 27 28 29 30 31  | Next Page >