Search Results

Search found 24403 results on 977 pages for 'matt case'.

Page 176/977

  • Transient VO: A Powerful J2EE Design Pattern

    - by Vijay Mohan
    We had a use case wherein communication has to happen between regions residing under different task flows. Essentially, they had a common set of parameters to be used. Initially we resorted to pageFlowScope variables, but those are tightly coupled to the individual task flows. So how should the communication happen? Some of the alternatives we brainstormed are:

    1. Use an ADF contextual event - This is a powerful feature for such use cases, but it comes at a considerable cost, so before resorting to it make sure you have a good enough reason to use it. It does a server round trip, and issuing the event and wiring up its listener are also things that require your attention.

    2. Use a transient VO with shared data control scope - With shared data control scope, the transient VO rows persist across the task flows in your application. All you have to do is create the attributes in the transient VO (preferably with the same names, for ease of conversion) and create some utility methods in the VOImpl for creating, updating and deleting a row (a minimal sketch of such methods follows below). You also have to make sure the VO row is initialized once per HTTP request (you can do this in a bookmark method of your index.jspx, residing in adfc-config.xml); otherwise the UI fields bound to the transient VO attributes won't render in the UI.

    Hope this helps - this should be a common use case across apps.
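
    A minimal sketch of the kind of utility methods mentioned in option 2, assuming an ADF Business Components transient view object exposed through a shared data control. The class name and attribute handling below are illustrative, not taken from the original post:

        // Illustrative sketch only - the class name is made up for this example.
        import oracle.jbo.Row;
        import oracle.jbo.server.ViewObjectImpl;

        public class SharedParamsVOImpl extends ViewObjectImpl {

            // Returns the single transient row holding the shared parameters,
            // creating it if the view object is still empty.
            public Row getOrCreateParamRow() {
                Row row = first();
                if (row == null) {
                    row = createRow();
                    insertRow(row);
                }
                return row;
            }

            // Generic accessors so all task flows read and write the same attributes.
            public void setParam(String attributeName, Object value) {
                getOrCreateParamRow().setAttribute(attributeName, value);
            }

            public Object getParam(String attributeName) {
                return getOrCreateParamRow().getAttribute(attributeName);
            }

            // Can be called once per HTTP request (for example from the bookmark method)
            // to reset the shared row.
            public void clearParams() {
                Row row = first();
                if (row != null) {
                    row.remove();
                }
            }
        }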

    Read the article

  • SQL SERVER – ORDER BY ColumnName vs ORDER BY ColumnNumber

    - by pinaldave
    I strongly favor ORDER BY ColumnName. I read a blog post where the blogger compared the performance of the two SELECT statements and came to the conclusion that there is no harm in using ColumnNumber. Let us first look at the claim that there is no performance difference. Run the following two scripts together:

        USE AdventureWorks
        GO
        -- ColumnName (Recommended)
        SELECT * FROM HumanResources.Department
        ORDER BY GroupName, Name
        GO
        -- ColumnNumber (Strongly Not Recommended)
        SELECT * FROM HumanResources.Department
        ORDER BY 3,2
        GO

    If you look at the results and the execution plans, you will see that both queries take the same amount of time. However, that was not the point of this blog post; it is not good enough to stop here. We need to understand the advantages and disadvantages of both methods.

    Case 1: When not using * and the columns are re-ordered

        USE AdventureWorks
        GO
        -- ColumnName (Recommended)
        SELECT GroupName, Name, ModifiedDate, DepartmentID
        FROM HumanResources.Department
        ORDER BY GroupName, Name
        GO
        -- ColumnNumber (Strongly Not Recommended)
        SELECT GroupName, Name, ModifiedDate, DepartmentID
        FROM HumanResources.Department
        ORDER BY 3,2
        GO

    Case 2: When someone changes the schema of the table, affecting column order

    I will let you recreate the example for this one. If the schema on your development server is different from the schema on your production server and you use ColumnNumber, you will get different results on the production server.

    Summary: When you first develop the query it may not be an issue, but as time passes, new columns are added to the SELECT statement or the original table is re-ordered, and if you have used ColumnNumber your query may start giving you unexpected results and an incorrect ORDER BY. The choice between ORDER BY ColumnName and ORDER BY ColumnNumber should not be made based on performance but on usability and scalability. It is always recommended to use a proper ORDER BY clause with ColumnName to avoid any confusion.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
    The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz.  Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year.  The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as on the ability to leverage agile development methodologies and true business/IT collaboration during implementation.

    The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers.  As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function.  Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company."

    Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities.  The ability to leverage the policy administration system to better serve customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustain loyalty and further fuel growth.

    Insurers can benefit from a modern, adaptive policy administration system, like Oracle Insurance Policy Administration for Life and Annuity.  You can learn more about this highly advanced system, which is unmatched for its flexible, rules-based configurability, performance and extensibility, as well as about global market and industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration.

    Data conversions can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform.  Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities.  We'll discuss more about this approach in a future Oracle Insurance blog.

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Content farms, scraper sites, aggregators: real-world examples? [closed]

    - by Marco Demaio
    Content farms, scrapers, aggregators: real-world examples? Could you please clarify for me: efreedom.com is a scraper site, not a content farm, because it simply copies and pastes content from Stack Overflow? ehow.com and squidoo.com are content farms? They don't copy and paste content, they just generate fresh new user-generated content, but too much and too quickly. expert-exchange.com is NOT a content farm or a scraper site, right?! It's simply that many people (me too) hate it (they also wrote to Matt Cutts) because it shows up high in Google while providing a useless question with no answer. There are also many sites that act as 'content aggregators in the form of specialized directories' (let's call them CASDs); I don't know how else to define them. Do they have a specific definition? Anyway, are these CASDs content farms, scraper sites, or something else? Basically, these CASDs search for all sites of the same type, i.e. "restaurant websites"; they copy and paste the content found on "Restaurant A" and create on their aggregator site a new page called "Restaurant A", then do the same for all websites of the same type, thus creating a sort of directory of restaurants. Later on, these CASDs also send an email to the owner of "Restaurant A" (usually the email address is on the website) with a username and password to let him modify/update his own page on the CASD site. Later on, these CASDs might ask the owner of "Restaurant A" for money because they bring him traffic; otherwise they remove his page from the aggregator. Someone could call these simply directories, but I think a directory is different, because it is something you add your site to by filling in a form, not something that steals content from your existing site without specific acceptance from the site's owner. I also really wonder how Google will sort out this mess of sites packed with content that show up more and more, everywhere, in search results.

    Read the article

  • What's so difficult about SVN merges? [closed]

    - by Mason Wheeler
    Possible Duplicate: I’m a Subversion geek, why should I consider or not consider Mercurial or Git or any other DVCS? Every once in a while, you hear someone saying that distributed version control (Git, Hg) is inherently better than centralized version control (like SVN) because merging is difficult and painful in SVN. The thing is, I've never had any trouble with merging in SVN, and since you only ever hear that claim being made by DVCS advocates, and not by actual SVN users, it tends to remind me of those obnoxious commercials on TV where they try to sell you something you don't need by having bumbling actors pretend that the thing you already have, and which works just fine, is incredibly difficult to use. And the use case that's invariably brought up is re-merging a branch, which again reminds me of those strawman product advertisements; if you know what you're doing, you shouldn't (and shouldn't ever have to) re-merge a branch in the first place. (Of course it's difficult to do when you're doing something fundamentally wrong and silly!) So, discounting the ridiculous strawman use case, what is there in SVN merging that is inherently more difficult than merging in a DVCS?

    Read the article

  • Variable naming conventions?

    - by Ziv
    I've just started using ReSharper (for C#) and I kind of like its code smells finder; it shows me some things about my writing that I meant to fix a long time ago (mainly variable naming conventions). It caused me to reconsider some of my naming conventions for methods and instance variables. ReSharper suggests that instance variables be lower camel case and begin with an underscore. For a while I have meant to make all my local variables lower camel case, but is the underscore necessary? Do you find it comfortable? I don't like this convention, but I also haven't tried it yet; what is your opinion of it? The second thing it prompted me to re-evaluate is my naming convention for GUI event handlers. I usually use the VS standard of ControlName_Action, and my controls usually use Hungarian notation (as a suffix, to help clarify in code what is visible to the user and what isn't when dealing with similarly named variables), so I end up with OK_btn_Click(). What is your opinion of that? Should I succumb to the ReSharper convention, or are there other equally valid options?

    Read the article

  • Email notification and mail server

    - by Jerr Wu
    I am building a web application, with email notifications just like Facebook's, which will be hosted at http://www.linode.com/. When user A comments on a post, the poster will get an email notification from '[email protected]' with the comment message written by user A. (Not spam.) I really like Google Apps, but they have a sending limit of 2000 messages per day, which does not suit my case because I cannot have sending limits. There will be many email notifications. http://support.google.com/a/bin/answer.py?hl=en&answer=166852 I also need company email accounts for team members to use, for which I prefer Google Apps. My web application will be hosted on Linode, and I am considering "Amazon Simple Notification Service" for the email notifications. My questions are: Can you recommend any other email service provider that suits my case? Can I keep company email accounts (e.g. [email protected]) on Google Apps and bind [email protected] to a different email service provider?

    Read the article

  • Why does bash invocation differ on AIX when using telnet vs ssh

    - by Philbert
    I am using an AIX 5.3 server with a .bashrc file set up to echo "Executing bashrc." When I log in to the server using ssh and run:

        bash -c ls

    I get:

        Executing bashrc
        .
        ..
        etc....

    However, when I log in with telnet as the same user and run the same command I get:

        .
        ..
        etc....

    Clearly in the telnet case, the .bashrc was not invoked. As near as I can tell this is the correct behaviour given that the shell is non-interactive in both cases (it is invoked with -c). However, the ssh case seems to be invoking the shell as interactive. It does not appear to be invoking the .profile, so it is not creating a login shell. I cannot see anything obviously different between the environments in the two cases. What could be causing the difference in bash behaviour?

    Read the article

  • SQL SERVER – Finding Size of a Columnstore Index Using DMVs

    - by pinaldave
    The columnstore index is one of my favorite enhancements in SQL Server 2012. A columnstore index stores each column in a separate set of disk pages, rather than storing multiple rows per page as data traditionally has been stored. In the case of row store indexes, pages contain multiple rows with each row's columns stored together; in the case of columnstore indexes, pages contain data from a single column. Columnstore indexes are compressed by default and occupy much less space than a regular row store index. A very common question I see is how to get a list of columnstore indexes along with their sizes and the corresponding table names. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there are more advanced scripts that retrieve details about the components associated with a columnstore index; however, I believe the following script is sufficient to start getting an idea of columnstore index size.

        SELECT OBJECT_SCHEMA_NAME(i.OBJECT_ID) SchemaName,
               OBJECT_NAME(i.OBJECT_ID) TableName,
               i.name IndexName,
               SUM(s.used_page_count) / 128.0 IndexSizeinMB
        FROM sys.indexes AS i
        INNER JOIN sys.dm_db_partition_stats AS S
                ON i.OBJECT_ID = S.OBJECT_ID AND I.index_id = S.index_id
        WHERE i.type_desc = 'NONCLUSTERED COLUMNSTORE'
        GROUP BY i.OBJECT_ID, i.name

    Here is my introductory article on SQL Server Fundamentals of Columnstore Index. Create a sample columnstore index based on the script described in the earlier article and it will give the following results. Please feel free to suggest improvements to the script so I can further modify it to accommodate enhancements.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Index, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ColumnStore Index

    Read the article

  • Outlook attachment save prompt behavior

    - by kara-marfia
    It seems they've put some Clippy-like behavior into Outlook 2007. Assume you open an email message and open its attachment, and you make no changes to either the message or the attachment:

    - If you close the attachment, then close the email - it works as expected.
    - If you close the email first (with the attachment still open) - you are prompted to save changes to the attachment.

    I have some clerical users, and they tend to believe what the computer tells them. I'm having a hard time understanding why someone decided that Outlook should lie in this case and prompt someone to save a file that hasn't changed. Regardless, I've only been able to find examples of people failing to find a fix for this. Anyone have ideas?

    edit: I should have clarified - I suppose I'm looking for a workaround, as it's consistently reproducible on any machine, and I suspect it is therefore "working as intended".

    Read the article

  • How can I move along an angled collision at a constant speed?

    - by Raven Dreamer
    I have, for all intents and purposes, a Triangle class that objects in my scene can collide with (in actuality, the right side of a parallelogram). My collision detection and resolution code works fine for the purposes of preventing a game object from entering into the space of the Triangle, instead directing the movement along the edge. The trouble is, the maximum speeds along the x and y axes are not equivalent in my game, and moving along the Y axis (up or down) should take twice as long as covering an equivalent distance along the X axis (left or right). Unfortunately, these speeds apply to the collision resolution too, and movement along the blue path above progresses twice as fast. What can I do in my collision resolution to make sure that the speed limit for Y-axis movement is obeyed in the latter case? Collision resolution for this case is below (vecInput and velocity are the position and velocity vectors of the game object):

        // y = mx + c
        lowY = 2 * vecInput.x + parag.rightYIntercept;
        ...
        else
        {
            // y = mx + c
            // vecInput.y = 2(x) + RightYIntercept
            // (vecInput.y - RightYIntercept) / 2 = x;
            // if velocity.y (positive) is greater than velocity.x (negative),
            // we are pushing from the bottom, so push right.
            if (velocity.y > -1 * velocity.x)
            {
                vecInput = new Vector2((vecInput.y - parag.rightYIntercept) / 2, vecInput.y);
                Debug.Log("adjusted rightwards");
            }
            else
            {
                vecInput = new Vector2(vecInput.x, lowY);
                Debug.Log("adjusted downwards");
            }
        }

    Read the article

  • Permission based Authorization vs. Role based Authorization - Best Practices - 11g

    - by Prakash Yamuna
    In previous blog posts here and here I have alluded to OWSM's support for permission based authorization and role based authorization. Recently I was having a conversation with an internal team at Oracle looking to use OWSM for their Web Services security needs, and one of the topics was: when should you use permission based authorization vs. role based authorization? As in most scenarios, the answer is: it depends! There are trade-offs involved in the two approaches; you need to understand them and decide which trade-offs are better for your scenario.

    Role based authorization:
    - Simple to use. Just create a new custom OWSM policy and specify the role in the policy (using EM Fusion Middleware Control).
    - Inconsistent if you have multiple types of resources in an application (e.g. EJBs, Web Apps, Web Services) - the model for securing EJBs with roles and the model for securing Web App roles are inconsistent. Since the model is inconsistent, tooling is also fairly inconsistent.
    - Achieving this use case using JDeveloper is slightly complex, since JDeveloper does not directly support creating OWSM custom policies.

    Permission based authorization:
    - More complex. You need to both attach an OWSM policy and create OPSS permission authorization policies. (Note: OWSM leverages the OPSS permission based authorization support.)
    - More appropriate if you have multiple types of resources in an application (e.g. EJBs, Web Apps, Web Services) and want a consistent authorization model.
    - Consistent tooling for managing authorization across different resources (e.g. EM Fusion Middleware Control).
    - Better lifecycle support in terms of T2P, etc.
    - Achieving this use case using JDeveloper is slightly complex, since JDeveloper does not directly support creating/editing OPSS permission based authorization policies.

    Read the article

  • Why is Quicksort called "Quicksort"?

    - by Darrel Hoffman
    The point of this question is not to debate the merits of this over any other sorting algorithm - certainly there are many other questions that do this. This question is about the name. Why is Quicksort called "Quicksort"? Sure, it's "quick", most of the time, but not always. The possibility of degenerating to O(N^2) is well known. There are various modifications to Quicksort that mitigate this problem, but the ones which bring the worst case down to a guaranteed O(n log n) aren't generally called Quicksort anymore. (e.g. Introsort). I just wonder why of all the well-known sorting algorithms, this is the only one deserving of the name "quick", which describes not how the algorithm works, but how fast it (usually) is. Mergesort is called that because it merges the data. Heapsort is called that because it uses a heap. Introsort gets its name from "Introspective", since it monitors its own performance to decide when to switch from Quicksort to Heapsort. Similarly for all the slower ones - Bubblesort, Insertion sort, Selection sort, etc. They're all named for how they work. The only other exception I can think of is "Bogosort", which is really just a joke that nobody ever actually uses in practice. Why isn't Quicksort called something more descriptive, like "Partition sort" or "Pivot sort", which describe what it actually does? It's not even a case of "got here first". Mergesort was developed 15 years before Quicksort. (1945 and 1960 respectively according to Wikipedia) I guess this is really more of a history question than a programming one. I'm just curious how it got the name - was it just good marketing?

    Read the article

  • If-Modified-Since vs If-None-Match

    - by Roger
    This question is based on this article.

    Response header:

        HTTP/1.1 200 OK
        Last-Modified: Tue, 12 Dec 2006 03:03:59 GMT
        ETag: "10c24bc-4ab-457e1c1f"
        Content-Length: 12195

    Request header:

        GET /i/yahoo.gif HTTP/1.1
        Host: us.yimg.com
        If-Modified-Since: Tue, 12 Dec 2006 03:03:59 GMT
        If-None-Match: "10c24bc-4ab-457e1c1f"

        HTTP/1.1 304 Not Modified

    In this case the browser is sending both If-None-Match and If-Modified-Since. My question is: on the server side, do I need to match BOTH the ETag and If-Modified-Since before I send a 304? Or should I just look at the ETag and send a 304 if the ETag matches, ignoring If-Modified-Since in that case?
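
    For what it's worth, a minimal servlet-style sketch of one common approach - this is only an illustration, not something from the article above. Per RFC 7232, when If-None-Match is present the ETag comparison takes precedence and If-Modified-Since is ignored; the date check is used only when the client sent no ETag:

        // Sketch only: the helper name and the way the current ETag / last-modified
        // values are obtained are illustrative assumptions.
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;

        public class ConditionalGetHelper {

            // Returns true if a 304 Not Modified should be sent for this request.
            public static boolean notModified(HttpServletRequest req,
                                              String currentEtag,
                                              long lastModifiedMillis) {
                String ifNoneMatch = req.getHeader("If-None-Match");
                if (ifNoneMatch != null) {
                    // The ETag check wins; If-Modified-Since is ignored in this case.
                    return ifNoneMatch.equals(currentEtag);
                }
                long ifModifiedSince = req.getDateHeader("If-Modified-Since"); // -1 if absent
                // HTTP dates have one-second resolution, so compare whole seconds.
                return ifModifiedSince != -1
                        && lastModifiedMillis / 1000 <= ifModifiedSince / 1000;
            }

            public static void send304(HttpServletResponse resp) {
                resp.setStatus(HttpServletResponse.SC_NOT_MODIFIED);
            }
        }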

    Read the article

  • How to deal with colleagues who refuse to follow good practices?

    - by Adrian Shum
    I was discussing with a colleague what we should use when one DB entity refers to another. I don't think there is any good reason to break the practice of putting the primary key in the referring entity. However, my colleague says: "You should use a surrogate key in the entity, but it is better to put the human-readable natural key in the referring entity. As long as it is unique, it is fine, and it is easier when you are doing support or maintenance work." I know it will work, but it is obviously not good practice to use a non-PK unique column as a "foreign key" just to gain a bit of ease in writing SQL during support because you need fewer table joins. Though I pointed out that his approach is conceptually incorrect and causes practical problems too, he seems willing to trade off correctness in the data model in exchange for ease of maintenance. And he said: "I know it is not good practice, but good practice is not a golden rule." Honestly, I feel frustrated when dealing with something like this. I know there are always cases where we should break some rule or practice, but this is surely not such a case. What would you do when facing a situation like this? Please assume you are a senior developer who is expected to contribute to the overall development direction and conventions.

    Read the article

  • Installing Windows 8 over Windows 7 with Ubuntu installed using wubi (both on `C:\`)

    - by peat-ar
    Current state: I'm using both Ubuntu (installed via Wubi on the same drive as Windows) and Windows 7 quite frequently. I just bought the upgrade to Windows 8 and am curious to try it out; however, I'm quite unsure whether Windows 8's "secure boot" will exclude my current Ubuntu installation and whether it's even possible to keep it. So... is there any way to upgrade to Windows 8 without overwriting Ubuntu? (I really don't want to reinstall it, as a lot of customization has been done here, and taking backups and all would get pretty wearing. The same goes for Windows 7 - if possible, I'd like to keep my files.) This is not a duplicate of "Installing Windows 8 over Windows 7 with Ubuntu installed using wubi?" because that question only deals with the case where Ubuntu has been installed on (e.g.) D:\ (while Windows is installed on C:\).

    Read the article

  • Windows 2003 shuts down instead of rebooting

    - by The_cobra666
    Hi all, I've got a very strange problem. On the net I can only find problems with Server 2003 PCs that reboot instead of shutting down, but in my case... (go figure) it does the opposite. When I choose to reboot, the system shuts down. Yes, I am sure I used the reboot button. It happens via Start -> Restart, and it happens when clicking the reboot button after the updates have been installed. Nothing has changed in the system's hardware; only updates have been installed on the system. I did notice something odd in the logs when rebooting:

        Description: Timed out sending notification of target device change to window of "C:\WINDOWS\Explorer.EXE"

    KB: http://support.microsoft.com/kb/924390 - they talk about a removable drive, but in my case it's explorer.exe :s It appears 4 times. OS: Windows Server 2003 R2. Services: AD, DNS, DHCP, WSUS. Thanks in advance! Greetings from Belgium, Shane

    Read the article

  • Are More Comments Better in High-Turnover Environments?

    - by joshin4colours
    I was talking with a colleague today. We work on code for two different projects. In my case, I'm the only person working on my code; in her case, multiple people work on the same codebase, including co-op students who come and go fairly regularly (every 8-12 months). She said that she is liberal with her comments, putting them all over the place. Her reasoning is that it helps her remember where things are and what things do, since much of the code wasn't written by her and could be changed by someone other than her. Meanwhile, I try to minimize the comments in my code, putting them in only in places with an unobvious workaround or bug. However, I have a better understanding of my code overall, and have more direct control over it. My opinion is that comments should be minimal and the code should tell most of the story, but her reasoning makes sense too. Are there any flaws in her reasoning? Comments may clutter the code, but they ultimately could be quite helpful if there are many people working on it in the short to medium run.

    Read the article

  • How do I handle having too many links on a webpage because of my menu

    - by RandomBen
    I am developing a website that has a drop-down menu at the top of it. The menu has around 100 links in it that are repeated on every page. Every page also has some number of links below the menu that may or may not be in the menu itself. My issue is that Google says they generally don't like pages with more than 100 links on them. Is there any way to change the links in the menu so that they no longer "count" towards my max of 100 links? It seems like there should be an easy way to do this, but there really doesn't seem to be. rel=nofollow links still count towards the number of links on the page, at least according to Google, so what other options do I have? I looked into where the 100 comes from and I found that it used to be here: http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=35769#2 but that is no longer the case. I found a more definitive and frankly muddier answer here: http://www.seomoz.org/blog/questions-answers-with-googles-spam-guru from Matt Cutts from 2007. Long story short, in 2007 they still felt 100 links was a good number, but they stated you could go far beyond that. In fact, they said that pages with high PageRank could have 200-300. It did sound like having many links could reduce the PageRank of the page with all of the links, or possibly of all of the items linked to. Also, I know IIS7's SEO Toolkit 1.0 suggests that pages should have no more than 250 links.

    Read the article

  • Oracle at Work video: Banca Transilvania, Exadata and Exalogic, applications and data warehouse

    - by user645740
    In the Oracle at Work video series, senior executives of the Romanian bank Banca Transilvania share their hands-on experience of running the Exadata Database Machine and Exalogic solutions in a bank. Video link: Video Case Study with Banca Transilvania.

    Banca Transilvania, headquartered in Cluj-Napoca, is the third largest bank in Romania, handles 1.5 million active customers, and is growing. When the second-generation Exadata V2 series appeared in 2009, Banca Transilvania was the first banking customer for it in the world. They first moved their data warehouse to Exadata and saw a huge performance jump, which the business was able to exploit: more reports, extremely good response times, faster batch runs. The goal is to consolidate the bank's architecture onto Exadata and Exalogic.

    In the video, Banca Transilvania's executives - Robert C. Rekkers, CEO; Leontin Toderici, COO; Marius Ursuti, IT Director - talk about the benefits of their Exadata + Exalogic solution:

    - they are consolidating their applications on Oracle, including the card system
    - the new core banking system, FLEXCUBE, will also run on Exadata, which helps the bank grow and handle the increasing number of customers and transactions
    - Siebel CRM; they are also testing the Oracle E-Business Suite applications on the Exa architecture
    - best performance for Oracle databases, lower response times; data warehouse and Oracle OLAP: cost reduction, 30 times faster operation
    - Exalogic: consolidation of application servers on WebLogic Server; the bank's own tests also show that the advantages of Exalogic stand out even more when used together with Exadata
    - within 24 hours of setting up the machines they could start working, thanks to the pre-installed software and the thoroughly factory-tested and tuned machines
    - a simpler architecture, which is also cheaper to manage
    - in general, the implementation time of every system is shorter on the Exa architecture
    - it enables faster introduction of new products and services
    - cost reduction: the total cost has decreased
    - they highlighted the flexibility of Oracle Corporation's well-prepared team
    - Oracle's continuous development, innovation and support

    Video link: Video Case Study with Banca Transilvania.

    Read the article

  • Need an explanation of amortization in algorithm analysis

    - by Pradeep
    I am learning algorithm analysis and came across an analysis tool for understanding the running time of an algorithm with widely varying performance, which is called amortization. The author quotes: "An array with an upper bound of n elements, with a fixed bound N on its size. Operation clear takes O(n) time, since we should dereference all the elements in the array in order to really empty it." The above statement is clear and valid. Now consider the next passage: "Now consider a series of n operations on an initially empty array. If we take the worst-case viewpoint, the running time is O(n^2), since the worst case of a single clear operation in the series is O(n) and there may be as many as O(n) clear operations in the series." From the above statement, how is the time complexity O(n^2)? I did not understand the logic behind it. If 'n' operations are performed, how is it O(n^2)? Please explain what the author is trying to convey.
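
    (As an aside, here is a minimal sketch - not from the quoted text - that makes the counting argument concrete: the pessimistic viewpoint bounds every one of the n operations by the O(n) cost of the most expensive possible clear, giving n * O(n) = O(n^2) for the whole series, even though the steps actually executed add up to far less.)

        // Sketch: compare the pessimistic per-operation bound with the steps
        // actually performed over a series of n add/clear operations.
        import java.util.ArrayList;
        import java.util.List;

        public class ClearCostDemo {
            public static void main(String[] args) {
                int n = 1000;                         // length of the operation series
                List<Integer> array = new ArrayList<>();
                long actualSteps = 0;

                for (int op = 0; op < n; op++) {
                    if (op % 100 == 99) {
                        // clear() dereferences every element: one step per element
                        actualSteps += array.size();
                        array.clear();
                    } else {
                        array.add(op);                // add is a single step
                        actualSteps += 1;
                    }
                }

                long pessimisticBound = (long) n * n; // every operation bounded by O(n)
                System.out.println("actual steps      = " + actualSteps);      // about 2n
                System.out.println("pessimistic bound = " + pessimisticBound); // n^2
            }
        }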

    Read the article

  • How can I move a polygon edge 1 unit away from the center?

    - by Stephen
    Let's say I have a polygon class that is represented by a list of vector classes as vertices, like so:

        var Vector = function(x, y) {
                this.x = x;
                this.y = y;
            },
            Polygon = function(vectors) {
                this.vertices = vectors;
            };

    Now I make a polygon (in this case, a square) like so:

        var poly = new Polygon([
            new Vector(2, 2),
            new Vector(5, 2),
            new Vector(5, 5),
            new Vector(2, 5)
        ]);

    So, the top edge would be [poly.vertices[0], poly.vertices[1]]. I need to stretch this polygon by moving each edge away from the center of the polygon by one unit, along that edge's normal. The following example shows the first edge, the top, moved one unit up. The final polygon should look like this new one:

        var finalPoly = new Polygon([
            new Vector(1, 1),
            new Vector(6, 1),
            new Vector(6, 6),
            new Vector(1, 6)
        ]);

    It is important that I iterate, moving one edge at a time, because I will be doing some collision tests after moving each edge. Here is what I tried so far (simplified for clarity), which fails triumphantly:

        for(var i = 0; i < vertices.length; i++) {
            var a = vertices[i],
                b = vertices[i + 1] || vertices[0]; // in case of final vertex
            var ax = a.x, ay = a.y, bx = b.x, by = b.y;

            // get some new perpendicular vectors
            var a2 = new Vector(-ay, ax),
                b2 = new Vector(-by, bx);

            // make into unit vectors
            a2.convertToUnitVector();
            b2.convertToUnitVector();

            // add the new vectors to the original ones
            a.add(a2);
            b.add(b2);

            // the rest of the code, collision tests, etc.
        }

    This makes my polygon start slowly rotating and sliding to the left, instead of what I need. Finally, the example shows a square, but the polygons in question could be anything. They will always be convex, and always with vertices in clockwise order.

    Read the article

  • Big project layout : adding new feature on multiple sub-projects

    - by Shiplu
    I want to know how to manage a big project with many components with a version control system. In my current project there are 4 major parts: Web, Server, Admin console, and Platform. The Web and Server parts use 2 libraries that I wrote. In total there are 5 git repositories and 1 mercurial repository. The project build script is in the Platform repository; it automates the whole building process. The problem is that when I add a new feature that affects multiple components, I have to create a branch in each of the affected repos, implement the feature, and merge it back. My gut feeling is "something is wrong". So should I create a single repo and put all the components there? I think branching will be easier in that case. Or should I just keep doing what I am doing right now? In that case, how do I solve this problem of creating a branch in each repository?

    Read the article

  • Is using something other than XML advisable for my configuration file?

    - by Earlz
    I have a small tool I'm designing which would require a configuration file of some sort. The configuration file in my case is really more of a database, but it needs to be lightweight, and, if needed, the end user should be able to edit it easily. However, it will also contain a lot of data (depending on certain factors, it could be 1 MB or more). I've decided I'd rather use plain old text rather than trying to use SQLite or some such. However, with text I also have to deal with the variety of formats. So far, my options are: XML, JSON, or a custom format. The data in my file is quite simple, consisting for the most part of key-value type things, so a custom format wouldn't be that difficult... but I'd rather not have to worry about writing the support for it. I've never seen JSON used for configuration files, and XML would bloat the file size substantially, I think. (I also just have a dislike of XML in general.) What should I do in this case? Factors to consider:

    - The configuration file can be uploaded to a web service (so size matters).
    - Users must be able to edit it by hand if necessary (ease of editing and reading matters).
    - It must be possible to generate and process it automatically (speed doesn't matter a lot, but it shouldn't be excessively slow).
    - The "keys" and "values" are plain strings, but they must be escaped because they can contain anything (Unicode and escaping have to work easily).
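
    For what it's worth, a minimal sketch of the JSON option - purely an illustration, not something from the question, and it assumes the Jackson library is available. A flat map of string keys and values round-trips cleanly, and the library takes care of escaping and Unicode:

        // Sketch only: assumes com.fasterxml.jackson (Jackson databind) on the classpath.
        import com.fasterxml.jackson.core.type.TypeReference;
        import com.fasterxml.jackson.databind.ObjectMapper;

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class ConfigSketch {
            public static void main(String[] args) throws Exception {
                ObjectMapper mapper = new ObjectMapper();

                Map<String, String> config = new LinkedHashMap<>();
                config.put("output.path", "C:\\data\\out");   // backslashes are escaped for us
                config.put("greeting", "héllo \"world\"");    // quotes and Unicode handled too

                // Write the config as human-editable, pretty-printed JSON.
                String json = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(config);
                System.out.println(json);

                // Read it back into a map.
                Map<String, String> loaded =
                        mapper.readValue(json, new TypeReference<Map<String, String>>() {});
                System.out.println(loaded.get("greeting"));
            }
        }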

    Read the article
