Search Results

Search found 55092 results on 2204 pages for 'oracle add category'.


  • jquery add remove objects

    - by Scarface
Hey guys, quick question: I have an add/remove script that adds a unique element and should remove it on a click, but it doesn't. I think it is because the added object is not in the DOM when the page is loaded, but I am not sure how to fix this. If anyone has any advice I would greatly appreciate it.

        $(document).ready(function() {
            if (action == 'content-change') {
                $('#droppable2-inner').empty().append('<div id="content-image"><img id="visual-background2" src=' + src + '></div><div id="drop-content" action="drop-image">x</div>');
            }
            $("#drop-content").click(function() {
                $('#content-image').remove();
            });
        });
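
    A likely fix (my sketch, not from the original post): since #drop-content is injected into the page after the handlers are bound, bind a delegated handler on an ancestor that is always present. jQuery's .delegate() (or .on() from jQuery 1.7 onwards) catches clicks from elements added later:

        // Bind on the stable ancestor; this fires for #drop-content
        // elements appended to #droppable2-inner at any later time.
        $(document).ready(function() {
            $('#droppable2-inner').delegate('#drop-content', 'click', function() {
                $('#content-image').remove();
            });
        });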

    Read the article

  • Add a hidden view above a UITableView that is only displayed when TableView is scrolled up

    - by noncogitas
Goal/Situation: I currently have a UIView in the TableView header. I am trying to add another UIView (which contains two Buttons and a few TextFields) that will sit above the TableView header. I would like the view to be displayed when the user scrolls up past the header (a la "pull to refresh"), and to go away when the user presses a "done" button on the view. My questions: 1) How do I add a view above the tableview header? 2) How do I display said view when a user has scrolled up past the header? 3) How do I dismiss said view when the user has pressed a button on said view?
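
    One possible approach (a sketch under assumptions, not from the original post; pullDownView and the 80-point threshold are mine, and a UITableViewController is assumed): give the extra view a negative frame origin so it sits above the table's content, then watch the content offset via UIScrollViewDelegate, which the table's delegate already conforms to:

        - (void)viewDidLoad {
            [super viewDidLoad];
            // Sit the view above the table's content area (height 80 is an assumption)
            self.pullDownView.frame = CGRectMake(0, -80, self.tableView.bounds.size.width, 80);
            [self.tableView addSubview:self.pullDownView];
        }

        // UITableViewDelegate inherits from UIScrollViewDelegate, so the
        // table's delegate receives scroll callbacks as the user drags.
        - (void)scrollViewDidScroll:(UIScrollView *)scrollView {
            if (scrollView.contentOffset.y < -80.0) {   // pulled past the header
                self.pullDownView.hidden = NO;
            }
        }

        - (IBAction)donePressed:(id)sender {
            self.pullDownView.hidden = YES;             // dismiss on "done"
        }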

    Read the article

  • add objects with different name through for loop

    - by Gandalf StormCrow
What is the best way to do the following:

        List<MyObject> list = new LinkedList<MyObject>();
        for (int i = 0; i < 30; i++) {
            MyObject o1 = new MyObject();
            list.add(o1);
        }

    But the thing is, I don't want to create objects with the same name; I want to create them with different names, like o1, o2, o3, o4, o5, o6, o7, o8, o9, o10, and I want to add each to the list. What is the best way to do this?
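
    The usual answer (my sketch, not part of the original question): the variable name only exists at compile time, so there is nothing to gain from o1, o2, o3, and so on. Each loop iteration already creates a distinct object, and the list index serves as the "name":

        import java.util.LinkedList;
        import java.util.List;

        List<MyObject> list = new LinkedList<MyObject>();
        for (int i = 0; i < 30; i++) {
            list.add(new MyObject());   // a fresh, distinct MyObject each iteration
        }
        MyObject third = list.get(2);   // what you would have called "o3"

    If the objects genuinely need string names, a Map<String, MyObject> keyed by "o" + (i + 1) is the common alternative.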

    Read the article

  • "Method name expected" error when trying add a handler method to a delegate - C#

    - by zakplayyy
I keep getting the error "Method name expected" when trying to add a method to a delegate. I have a delegate which is invoked whenever my game ends. The method I'm trying to add to the delegate stops a countdown from flashing (the method is in a static class). I've searched around and I'm still unsure why it's not working. Here is the line causing the error:

        LivesManager.gameEnded += new LivesManager.EndGame(CountdownManager.DisableFlashTimer(this));

    The this passes the current form to the method so it can disable the timer flash on the form. I have added methods from static classes to the same delegate before and it works fine; the only difference is that I'm passing the form as a parameter, and then it doesn't like it. Is there any way to pass the form to the method without the error? Thanks in advance :)
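
    The usual fix (a sketch, not from the original post): the += subscription needs a method reference, but CountdownManager.DisableFlashTimer(this) is a method call, hence "Method name expected". Wrapping the call in a lambda (or an anonymous delegate on C# 2.0) gives the delegate something to invoke while still capturing the form; this assumes EndGame is declared with no parameters:

        // Lambda conforming to the (assumed parameterless) EndGame signature;
        // "this" (the current form) is captured by the closure.
        LivesManager.gameEnded += () => CountdownManager.DisableFlashTimer(this);

        // C# 2.0 equivalent using an anonymous delegate:
        LivesManager.gameEnded += delegate { CountdownManager.DisableFlashTimer(this); };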

    Read the article

  • Add/Remove script for local printer

    - by GxFlint
I have a Windows XP machine that runs two applications, both of which print on a thermal printer connected to a serial port. For one application the "Generic / Text Only" printer must be present; for the other to work, I need to remove it. I've found a few .vbs scripts, but they are for network printers. How do I make them work with my local printer? Is there a better solution? The user would have to run the script every time he needs to switch from one application to the other.
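
    One possible approach (a sketch under assumptions, not from the original post): on Windows XP the PrintUIEntry helper in printui.dll can add and delete local printers, so a pair of one-line batch files could toggle the printer. The COM1: port is an assumption; adjust it to the real serial port:

        rem add_generic.cmd -- install the local "Generic / Text Only" printer
        rundll32 printui.dll,PrintUIEntry /if /b "Generic / Text Only" /f %windir%\inf\ntprint.inf /r "COM1:" /m "Generic / Text Only"

        rem del_generic.cmd -- remove it again
        rundll32 printui.dll,PrintUIEntry /dl /n "Generic / Text Only"

    These could be wrapped into the shortcuts that launch each application, so the user never has to run a script by hand.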

    Read the article

  • SSRS 2008 Grouping With Header VS2008

    - by Kevin
I have a table with the following columns and data:

        Category  Item  Price  Tax
        A         I1    1.00   .01
        A         I2    2.22   .02
        B         I3    3.33   .03

    I want to group on Category and have the details below it, with the category on its own row and the detail rows underneath, the item sitting in the same column as the category. The desired output is:

        Category - Item | Price | Tax
        A               |       |
          I1            | 1.00  | .01
          I2            | 2.22  | .02
        B               |       |
          I3            | 3.33  | .03

    Read the article

  • Investigation: Can different combinations of components affect Dataflow performance?

    - by jamiet
Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising: it's an incredibly complicated beast, and we're abstracted away from that complexity via some boxes that go yellow, red or green and have some lines drawn between them.

    [Screenshot: example dataflow]

    In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input and all produce the same output, but which operate slightly differently by way of having different transformation components.

    I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this (it is intuitive to assume that more components means more work), but it is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and that the number of components is actually one of the less important ones; having said that, I have never proven that assertion, and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB datafile which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose because it is the quickest way of getting data into a dataflow; given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible.

    In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource-intensive means of terminating a datapath.

    The test is being carried out on a Dell XPS Studio laptop with a quad-core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that uses expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs. [Screenshot: the expression logic in use]

    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which contains one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get. A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split. [Screenshots: the expression logic in use in DER Income Category and CSPL Split by Month of Birth and Income Category]

    3. Three Conditional Splits - A Conditional Split component that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs goes to a further Conditional Split that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components. [Screenshots: the expression logic in use in CSPL Split by Income Category and CSPL Split by Month of Birth 1 & 2]

    Each of these combinations provides an input to one of the 24 rowcount components, just the same as before. For illustration, here is a screenshot of the dataflow containing three Conditional Split components: [Screenshot]

    As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    1. The dataflow containing just one Conditional Split (i.e. #1) will be quicker
    2. There is no significant difference between any of them
    3. One of the two dataflows containing multiple transformation components will be quicker

    Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below shows all of the executions, 10 for each dataflow, along with the average for each and a standard deviation. All durations are in seconds. I'm pasting a screenshot because I frankly can't be bothered with the faffing about needed to make a presentable HTML table. [Screenshot: results table]

    It is plain to see from the averages that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can also be predicted with much greater confidence.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category, 24 in total. These expressions get evaluated in the order that they appear, hence if we assume that Month of Birth and Income Category are uniformly distributed in the dataset we can deduce that the expected number of expression evaluations for each row is (1 (the minimum) + 24 (the maximum)) / 2 = 12.5.

    Now take a look at the screenshots for the second dataflow. We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before; only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row, which would account for the slightly longer average execution time for this dataflow.

    Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5.

    14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate, one for Income Category and one for Month of Birth; the expressions in the Conditional Splits in the third dataflow only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

        MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

        MONTH(BirthDate) == 1

    In the first two dataflows YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant number of extra CPU cycles; that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here.

    Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results - let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package and remember: execute using dtexec, not within BIDS.

    If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out:

    - Inequality joins, Asynchronous transformations and Lookups
    - Destination Adapter Comparison
    - Don't turn the dataflow into a cursor
    - SSIS Dataflow - Designing for performance (webinar)

    Any comments? Let me know! @Jamiet
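
    As an aside, the expected-evaluation arithmetic above is easy to sanity-check. Here is a minimal sketch (mine, not from the post) that reproduces the per-dataflow numbers under the same uniform-distribution assumption, totalling the components the way the post does:

        # Expected number of cases a Conditional Split tests before a match,
        # assuming its n outputs are uniformly likely: average of 1 and n.
        def expected_evals(n_cases):
            return (1 + n_cases) / 2.0

        one_split     = expected_evals(24)                          # 12.5
        der_and_split = 1 + expected_evals(24)                      # 13.5 (Derived Column adds 1)
        three_splits  = expected_evals(2) + 2 * expected_evals(12)  # 14.5, as totalled in the post

        print(one_split, der_and_split, three_splits)               # 12.5 13.5 14.5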

    Read the article

  • Ruby on Rails - Belongs_to and Active Admin not creating foreign ID

    - by Milo
I have the following setup:

        class Category < ActiveRecord::Base
          has_many :products
        end

        class Product < ActiveRecord::Base
          belongs_to :category
          has_attached_file :photo, :styles => { :medium => "300x300>", :thumb => "200x200>" }
          validates_attachment_content_type :photo, :content_type => /\Aimage\/.*\Z/
        end

        ActiveAdmin.register Product do
          permit_params :title, :price, :slideshow, :photo, :category

          form do |f|
            f.inputs "Product Details" do
              f.input :title
              f.input :category
              f.input :price
              f.input :slideshow
              f.input :photo, :required => false, :as => :file
            end
            f.actions
          end

          show do |ad|
            attributes_table do
              row :title
              row :category
              row :photo do
                image_tag(ad.photo.url(:medium))
              end
            end
          end

          index do
            column :id
            column :title
            column :category
            column :price do |product|
              number_to_currency product.price
            end
            actions
          end
        end

        class ProductController < ApplicationController
          def create
            @product = Product.create(params[:id])
          end
        end

    Every time I create a product item in ActiveAdmin the category comes up empty. It won't populate the category_id column for the product; it just leaves it empty. What am I doing wrong?
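
    The usual diagnosis (a sketch, not from the original post): f.input :category on a belongs_to association renders a select that submits category_id, but permit_params allows :category, so strong parameters silently drops the id. Permitting the foreign key is the common fix:

        ActiveAdmin.register Product do
          # Permit the foreign key that the association select actually posts
          permit_params :title, :price, :slideshow, :photo, :category_id
        end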

    Read the article

  • WebCenter Customer Advisory Board meetings kick off Oracle Open World 2012!

    - by Lance711
Welcome to OpenWorld! OpenWorld 2012 got underway today with a series of meetings with the members of the WebCenter Customer Advisory Board. Led by the WebCenter Product Management team, these meetings are a great way for the product team and customers to directly interact, discuss real-life business challenges and product details, and talk through upcoming features and functionality. This year, board members participated in discussions and live demos of product enhancements that will be featured throughout the coming week. Highlights included a variety of new mobile and social solutions, a great new user interface for WebCenter Content, plus new Portal and Sites functionality that makes the experience for the everyday user a lot more pleasant. The day kicked off with Roel Stalman, VP of Product Management, giving a detailed overview of what's new in WebCenter. Given all the improvements to discuss, this session went over 2 hours! Roel showcased the brand new UI for Content, Portal and Sites. He also gave live demos of the new mobile apps for WebCenter Content, Portal and the Oracle Social Network. The attendees then broke into sub-groups to deep-dive with Product Management on specific functionality and application integrations for the Portal, Sites and Content product areas. If you are here in San Francisco this week for OpenWorld, I definitely recommend stopping by the WebCenter area in the Moscone West Exhibition Hall to see some of this new functionality for yourself. And be sure to check out the WebCenter sessions throughout the week, as those give us a chance to discuss direction and strategy, answer your questions and get your feedback and ideas. For those of you who could not make it to OpenWorld this year, we miss you! You can stay in touch with what is happening via this blog and by following #oow and #webcenter on Twitter. Additionally, we will be rolling out details on upcoming products and release info over the coming months via this blog and web seminars. Stay tuned!

    Read the article

  • Do you want to know more about Oracle Learning Management 12.1?

    - by anders.northeved
Many of you have upgraded to OLM 12.1 or are in the process of doing so. We have been asked if it was possible to arrange a couple of webcasts describing the new functions and features in OLM 12.1 - and of course it is. We will do two webcasts: one on the new features and functions in OLM 12.1.1, and another on the new features and functions in OLM 12.1.2 + 12.1.3. Each webcast will last for approx. 45 min, and afterwards there will be a Q&A session for as long as you have questions! Everybody interested in participating is very welcome to join. Just send an e-mail with the following information to [email protected]:

    - The list of participants from your organization
    - Your organization's current status: which OLM version you are on, and whether you have current upgrade plans

    We'll then send you a mail with information on how to join.

    Webcast on OLM 12.1.1 new features: Monday 28th March, 5pm CET (8.30pm IST; 4pm UK; 11am EST; 8am PST)
    Webcast on OLM 12.1.2 + OLM 12.1.3 new features: Tuesday 29th March, 5pm CET (8.30pm IST; 4pm UK; 11am EST; 8am PST)

    We are looking forward to your participation!

    Read the article

  • Database platform migration from Windows-32bit to Linux-64bit

    - by [email protected]
We have a customer that runs all of their core business databases on RAC over Windows. Last year they were hit by a virus that destroyed the registry, and all their RAC environments were "OUT OF ORDER"; the result: a thousand people on vacation for a day. They were distrustful of Linux, but we later came to an agreement to migrate their Enterprise Manager (OMS and Repository) from Windows to Linux. Here is how we demonstrated how powerful and easy RMAN is for migrating databases across platforms.

    First, check that the target platform is available from the source:

        SQL> select platform_name from v$db_transportable_platform;

        PLATFORM_NAME
        -----------------------------------------------------------
        Microsoft Windows IA (32-bit)
        Linux IA (32-bit)
        HP Tru64 UNIX
        Linux IA (64-bit)
        HP Open VMS
        Microsoft Windows IA (64-bit)
        Linux 64-bit for AMD
        Microsoft Windows 64-bit for AMD
        Solaris Operating System (x86)

    Check database objects that can change across platforms, such as directories; also check external tables.

    Start up the source database in read-only mode and run the following RMAN script:

        RMAN> connect target /
        RMAN> convert database on target platform
                convert script 'c:/temp/convert_grid.rman'
                transport script 'c:/TEMP/transporta_grid.sql'
                new database 'gridbd'
                format 'c:/temp/gridmydb%U'
                db_file_name_convert 'C:\oracle\oradata\grid','/oracle/gridbd/data2/data';

    (Notice the path change in db_file_name_convert.)

    Move from source to target:

    - Pfile
    - New scripts
    - External table files
    - Bfiles
    - Data files

    Check the pfile and ensure that the paths are OK. Create a temporary control file to connect RMAN, then execute the RMAN script:

        RMAN> connect target /
        RMAN> @/home/oracle/pboixeda/win2lnx.rman

    Shut down the instance, remove the temporary control files, and recreate the controlfile(s), taking care over the paths used. Execute the transport script, transporta_grid.sql. Because we were moving from a 32-bit architecture to a 64-bit architecture, there is a bug reported in note 386990.1 and we had to recreate OLAP; check the note for more details. Alter or recreate all necessary objects, then run utlrp.

    After this experience with Linux they are on the way to migrating all their RAC environments from 10gR2 on Windows to 11gR2 on Linux 64-bit. Hope it helps.

    Read the article

  • Policy Administration is the Top 2011 IT Priority for Insurers

    - by helen.pitts(at)oracle.com
The current issue of Insurance Networking News includes an interesting column by Novarica's Matt Josefowicz. Recent research by the firm revealed that policy administration replacement or extension is the most common strategic IT project for insurers this year. The article goes on to note that insurers are keenly focused on the business capabilities that can be delivered once the system is in production, as well as the ability to leverage agile development methodologies and true business/IT collaboration during implementation.

    The results are not too surprising given that policy administration is a mission-critical system for life and annuity insurers. As Josefowicz notes, "Core systems are called core for a reason--they are at the heart of the insurer's ability to function. Replacing them is not to be done lightly, but failing to replace them can mean diminishing the ability to compete or function effectively as a company."

    Insurers can no longer rely on inflexible policy administration systems that impede their ability to rapidly configure and bring to market innovative new products, add riders, support changing business processes and take advantage of market opportunities. The ability to leverage the policy administration system to better serve customers and distribution channels by providing real-time access to policy information throughout the policy lifecycle is also critical to sustaining loyalty and further fueling growth.

    Insurers can benefit from a modern, adaptive policy administration system like Oracle Insurance Policy Administration for Life and Annuity. You can learn more about the industry's most highly advanced, rules-based system, which is unmatched for its highly flexible, rules-based configurability, performance and extensibility, as well as global market industry trends, by viewing a complimentary, on-demand Webcast, Adapt, Transform and Grow: Accelerate Speed to Market with Adaptive Insurance Policy Administration.

    Data conversion can be a daunting process for many insurers when deciding to modernize, in particular when consolidating from multiple, disparate legacy policy administration systems to a single new platform. Migrating from a legacy system requires a well-thought-out approach that builds on the industry's best thinking from previous modernization efforts and takes data migration off the critical path by leveraging proven methodology and tools to capitalize on the new system's capabilities. We'll discuss more about this approach in a future Oracle Insurance blog.

    Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article
