Search Results

Search found 35607 results on 1425 pages for 'oracle information'.

  • 600 tables in DDBB

    - by Michael
    Hi all, I'm a very young software architect. I'm now working at a very large bank, and I have to lead a group of developers to rewrite its entire mortgage system. Looking at the database tables, I realize there is no data model and no documentation. The worst part is that there are about 1000 tables in the dev environment, and around 600 in production. I trust the production environment more, but either way, what can I do? Short of despairing, is there any good reverse-engineering tool, so that at least I could get the schema definition with the relations between tables and the comments extracted from the fields? Can you advise me on something? Thanks in advance.
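
    Absent a dedicated reverse-engineering tool, Oracle's data dictionary can recover a lot of this on its own. A minimal sketch, assuming the account you log in with owns the tables, that pulls the foreign-key relations and the column comments:

        -- Foreign-key relationships between tables in the current schema
        SELECT c.table_name      AS child_table,
               p.table_name      AS parent_table,
               c.constraint_name
        FROM   user_constraints c
        JOIN   user_constraints p ON p.constraint_name = c.r_constraint_name
        WHERE  c.constraint_type = 'R';

        -- Column comments, where anyone bothered to write them
        SELECT table_name, column_name, comments
        FROM   user_col_comments
        WHERE  comments IS NOT NULL;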

  • How can I store large amount of data from a database to XML (speed problem, part three)?

    - by Andrija
    After getting some responses, the current situation is that I'm using this tip: http://www.ibm.com/developerworks/xml/library/x-tipbigdoc5.html (Listing 1. Turning ResultSets into XML), and XMLWriter for Java from http://www.megginson.com/downloads/ . Basically, it reads data from the database and writes it to a file as characters, using column names to create opening and closing tags. While doing so, I need to make two changes to the input stream, namely to the dates and numbers.

        // Iterate over the result set
        while (rs.next()) {
            w.startElement("row");
            for (int i = 0; i < count; i++) {
                Object ob = rs.getObject(i + 1);
                if (rs.wasNull()) {
                    ob = null;
                }
                String colName = meta.getColumnLabel(i + 1);
                if (ob != null) {
                    if (ob instanceof Timestamp) {
                        w.dataElement(colName, Util.formatDate((Timestamp) ob, dateFormat));
                    } else if (ob instanceof BigDecimal) {
                        w.dataElement(colName, Util.transformToHTML(new Integer(((BigDecimal) ob).intValue())));
                    } else {
                        w.dataElement(colName, ob.toString());
                    }
                } else {
                    w.emptyElement(colName);
                }
            }
            w.endElement("row");
        }

    The SQL that gets the results uses the to_number function (e.g. to_number(sif.ID) ID) and the to_date function (e.g. TO_DATE(sif.datum_do, 'DD.MM.RRRR') datum_do). The problems are that the returned date is a timestamp, meaning I don't get 14.02.2010 but rather 14.02.2010 00:00:000, so I have to format it to the dd.mm.yyyy format. The second problem is the numbers; for some reason they are stored as varchar2 and can have leading zeroes that need to be stripped. I'm guessing I could do that in my SQL with the trim function, so Util.transformToHTML would be unnecessary (for clarification, here's the method):

        public static String transformToHTML(Integer number) {
            String result = "";
            try {
                result = number.toString();
            } catch (Exception e) {}
            return result;
        }

    What I'd like to know is: a) Can I get the date in the format I want and skip the additional processing, thus shortening the processing time? b) Is there a better way to do this? We're talking about XML files in the 50 MB - 250 MB size range.
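
    On point a), doing the formatting in the SQL itself would hand the JDBC driver plain strings, so the Timestamp and BigDecimal branches above would never fire. A hedged sketch; the alias sif and the column names come from the post, while the table name is a stand-in:

        SELECT TO_CHAR(TO_DATE(sif.datum_do, 'DD.MM.RRRR'), 'DD.MM.YYYY') AS datum_do,
               -- strips the leading zeroes from the varchar2 number column
               TRIM(LEADING '0' FROM sif.id) AS id
        FROM   some_table sif;  -- hypothetical table name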

  • Refactoring PL/SQL triggers - extract procedures

    - by Juraj
    Hello, we have an application where the database contains large parts of the business logic in triggers, with an update subsequently firing triggers on several other tables. I want to refactor the mess and wanted to start by extracting procedures from the triggers, but I can't find any reliable tool to do this. Using "Extract procedure" in both SQL Developer and Toad failed to properly handle the :new and :old trigger variables. If you had a similar problem with triggers, did you find a way around it? EDIT: Ideally, only columns that are referenced by the extracted code would be passed as IN/OUT parameters. For example, original code to be extracted from the trigger:

        ...
        if :new.col1 = some_var then
            :new.col1 := :old.col1;
        end if;
        ...

    would become:

        procedure proc(old_col1 in varchar2, new_col1 in out varchar2, some_var in varchar2) is
        begin
            if new_col1 = some_var then
                new_col1 := old_col1;
            end if;
        end;
        ...
        proc(:old.col1, :new.col1, some_var);

  • How do I post back information from a webpage that has post-generated content?

    - by meanbunny
    Ok so I am very new to HTML and the web. Right now I am trying to generate list items on the fly and then have them post back to the web server so I know what the person is trying to obtain. Here is the code for generating the list items:

        foreach (var item in dataList)
        {
            MyDataList.InnerHtml += "<li><a runat='server' onclick='li_Click' id='" + item.Name + "-button'></a></li>";
        }

    Further down I have my click event:

        protected void li_Click(object sender, EventArgs e)
        {
            // How do I determine here which item was actually clicked?
        }

    My question is: how do I determine which list item was clicked? Btw, the code-behind is C#. EDIT 1:

        LinkButton link = new LinkButton();
        link.ID = "all-button";
        link.Text = "All";
        link.Click += new EventHandler(link_Click);
        MyDataList.Controls.Add(link);

    Then below I have my link_Click event, which never seems to hit a breakpoint:

        void link_Click(object sender, EventArgs e)
        {
            if (sender != null)
            {
                if (sender.GetType() == typeof(LinkButton))
                {
                    LinkButton button = (LinkButton)sender;
                    if (button.ID == "all-button")
                    {
                    }
                }
            }
        }

    I know this has to be possible, I just can't figure out what I am missing. EDIT 2: Ok, I think I know what the problem is. I was trying to add a second list inside of another list. This was causing the whole thing to have problems. It is working now.

  • Where can I view roundtrip information in my ASP.NET application?

    - by ajax81
    Hi All, I'm playing around with storing application settings in my database, but I think I may have created a situation where superfluous roundtrips are being made. Is there an easy way to view roundtrips made to an MS Access (I know, I know) backend? I guess while I'm here, I should ask for advice on the best way to handle this project. I'm building an app that generates links based on file names (files are numbered ints, 0-5000). The files are stored on network shares, arranged by name, and the paths change frequently as files are bulk-transferred to create space, etc. Example:

        Files 1000 - 2000 go to /path/1000s
        Files 2001 - 3000 go to /path/2000s
        Files 3001 - 4000 go to /path/3000s
        etc.

    I'm sure by now you can see where I'm going with this. Ultimately, I'm trying to avoid making a roundtrip to get the path for every single file as it is displayed in a gridview. I'm open to the notion that I've gone about this all wrong and that my idea might be rubbish. I've toyed around with the notion of just creating a flat file, but if I do that, do I still run into the problem of having that file opened and closed for every file displayed in the gridview?

  • How should I migrate DDL changes from one environment to the next?

    - by Rl
    I make DDL changes using SQL Developer's GUI. Problem is, I need to apply those same changes to the test environment. I'm wondering how others handle this issue. Currently I'm having to manually write ALTER statements to bring the test environment into alignment with the development environment, but this is prone to error (doing the same thing twice). In cases where there's no important data in the test environment I usually just blow everything away, export the DDL scripts from dev and run them from scratch in test. I know there are triggers that can store each DDL change, but this is a heavily shared environment and I would like to avoid that if possible. Maybe I should just write the DDL stuff manually rather than using the GUI?
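
    One low-tech way to compare the two environments, sketched here as a possibility: export the DDL from each schema with DBMS_METADATA and diff the resulting text, writing ALTER statements only for what actually differs.

        -- Run in dev and again in test, spool each output, then diff the files
        SELECT DBMS_METADATA.GET_DDL('TABLE', table_name)
        FROM   user_tables
        ORDER  BY table_name;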

  • Fill in missing values in a SELECT statement

    - by benjamin button
    If I have a table with two fields, customer ID and order ID, and let's say I have order IDs 1, 2, 3, 4 in total, every customer can have all four orders, like below:

        1234 1
        1234 2
        1234 3
        1234 4
        3245 3
        3245 4
        5436 2
        5436 4

    You can see above that customer 3245 doesn't have order IDs 1 and 2. How could I print the missing combinations in the query output, like:

        3245 1
        3245 2
        5436 1
        5436 3

    EDIT: I don't have an orders table, but I have the list of order IDs, so we can hard-code it in the query (1, 2, 3, 4).
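
    A minimal sketch of one common approach: generate the four order IDs on the fly, cross join them with the distinct customers, and keep only the combinations that are absent. The table and column names (t, customer_id, order_id) are hypothetical:

        SELECT c.customer_id, o.order_id
        FROM   (SELECT DISTINCT customer_id FROM t) c
        CROSS JOIN (SELECT LEVEL AS order_id FROM dual CONNECT BY LEVEL <= 4) o
        WHERE  NOT EXISTS (SELECT 1
                           FROM   t
                           WHERE  t.customer_id = c.customer_id
                           AND    t.order_id    = o.order_id)
        ORDER  BY c.customer_id, o.order_id;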

  • Using a trigger to record audit information vs. stored procedure

    - by Germ
    Suppose you have the following: an ASP.NET web application that calls a stored procedure to delete a record. The table has a trigger on it that will insert an audit entry each time a record is deleted. I want to be able to record in the audit entry the username of whoever deleted the record. What would be the best way to go about achieving this? I know I could remove the trigger and have the delete stored procedure insert the audit entry prior to deleting, but are there any other recommended alternatives? If a username was passed as a parameter to the delete stored procedure, is there any way to get this value in the trigger that's executed when the record is deleted? I'm just throwing this out there...
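
    One standard Oracle pattern, sketched here as a possibility rather than a prescription: have the application stamp the session with the username before the delete runs, and let the trigger read it back from the session context. The username literal and the audit table below are hypothetical:

        -- Called by the application, or at the top of the delete procedure:
        BEGIN
          DBMS_SESSION.SET_IDENTIFIER('jsmith');  -- hypothetical username value
        END;

        -- Inside the audit trigger body, read the identifier back:
        INSERT INTO audit_log (deleted_by, deleted_at)  -- hypothetical audit table
        VALUES (SYS_CONTEXT('USERENV', 'CLIENT_IDENTIFIER'), SYSDATE);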

  • how to optimize this query?

    - by jest
    hi! The query is:

        select employee_id,
               last_name,
               salary,
               round((salary + (salary * 0.15)), 0) as NewSalary,
               (round((salary + (salary * 0.15)), 0) - salary) as "IncreaseAmount"
        from employees;

    Can I optimize the round((salary + (salary * 0.15)), 0) part in any way, so that it doesn't appear twice? I tried giving it an alias, but that didn't work :(
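
    A column alias can't be referenced elsewhere in the same SELECT list, which is why the alias attempt failed; pushing the computation into an inline view lets it be written once. A minimal sketch:

        select employee_id,
               last_name,
               salary,
               new_salary,
               new_salary - salary as increase_amount
        from  (select employee_id,
                      last_name,
                      salary,
                      round(salary + (salary * 0.15), 0) as new_salary
               from employees);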

  • Cost of logic in a query

    - by FrustratedWithFormsDesigner
    I have a query that looks something like this:

        select xmlelement("rootNode",
                 (case when XH.ID is not null
                       then xmlelement("xhID", XH.ID)
                       else xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID)
                  end),
                 (case when XH.SER_NUM is not null
                       then xmlelement("serialNumber", XH.SER_NUM)
                       else xmlelement("serialNumber", xmlattributes('true' AS "xsi:nil"), XH.SER_NUM)
                  end),
                 /* repeat this pattern for many more columns from the same table... */
        FROM XH
        WHERE XH.ID = 'SOMETHINGOROTHER'

    It's ugly and I don't like it, and it is also the slowest-executing query (there are others of similar form, but much smaller, and they aren't causing any major problems - yet). Maintenance is relatively easy, as this is mostly a generated query, but my concern now is performance. I am wondering how much overhead there is in all of these case expressions. To see if there was any difference, I wrote another version of this query as:

        select xmlelement("rootNode",
                 xmlforest(XH.ID, XH.SER_NUM, ...

    (I know that this query does not produce exactly the same thing; my plan was to move the logic to PL/SQL or XSL.) I tried to get execution plans for both versions, but they are the same. I'm guessing that the logic does not get factored into the execution plan. My gut tells me the second version should execute faster, but I'd like some way to prove that (other than writing a PL/SQL test function with timing statements before and after the query and running that code over and over again to get a test sample). Is it possible to get a good idea of how much the case-when will cost? Also, I could write the case-when using the decode function instead. Would that perform better (than case expressions)?
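
    For a crude but honest measurement, SQL*Plus (or SQL Developer's script runner) can time each variant while discarding the output, so rendering doesn't pollute the numbers. A sketch, with each query cut down to one column purely for the harness:

        SET TIMING ON
        SET AUTOTRACE TRACEONLY STATISTICS

        -- case-when variant
        select xmlelement("rootNode",
                 case when XH.ID is not null
                      then xmlelement("xhID", XH.ID)
                      else xmlelement("xhID", xmlattributes('true' AS "xsi:nil"), XH.ID)
                 end)
        from XH
        where XH.ID = 'SOMETHINGOROTHER';

        -- xmlforest variant
        select xmlelement("rootNode", xmlforest(XH.ID))
        from XH
        where XH.ID = 'SOMETHINGOROTHER';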

  • Order of declaration in an anonymous pl/sql block

    - by RenderIn
    I have an anonymous PL/SQL block with a procedure declared inside of it, as well as a cursor. If I declare the procedure before the cursor, it fails. Is there a requirement that cursors be declared prior to procedures? What other rules are there for the order of declaration in a PL/SQL block? This works:

        DECLARE
            cursor cur is select 1 from dual;
            procedure foo as
            begin
                null;
            end foo;
        BEGIN
            null;
        END;

    This fails with error PLS-00103: Encountered the symbol "CURSOR" when expecting one of the following: begin function package pragma procedure form

        DECLARE
            procedure foo as
            begin
                null;
            end foo;
            cursor cur is select 1 from dual;
        BEGIN
            null;
        END;

  • Begin Viewing Query Results Before Query Ends

    - by Frank Developer
    OK, so say I have a table with 500K rows, and I run an ad-hoc query with no supporting index, which requires a full table scan. I would like to immediately view the first rows returned while the full table scan continues, then scroll through the next results. In the meantime, I would like to display the progress of the table scan, for example: "SEARCHING.. FOUND 23 OF 500,000 ROWS SO FAR". If I scroll too far ahead, I want to display a message like: "REACHED LAST ROW IN LOOK-AHEAD BUFFER.. QUERY HAS NOT COMPLETED". Can this be done? Maybe like: spawn/exec, declare scroll cursor, open, fetch, etc.?

  • FETCH INTO doesn't raise an exception if the set is empty, does it?

    - by Cade Roux
    Here is some actual code I'm trying to debug:

        BEGIN
            OPEN bservice (coservice.prod_id);
            FETCH bservice INTO v_billing_alias_id,
                                v_billing_service_uom_id,
                                v_summary_remarks;
            CLOSE bservice;
            v_service_found := 1;
            -- An empty fetch is expected for some services.
        EXCEPTION
            WHEN OTHERS THEN
                v_service_found := 0;
        END;

    When the parametrized cursor bservice(prod_id) is empty, it fetches NULL into the three variables and does not throw an exception. So whoever wrote this code expecting it to throw an exception was wrong, right? The comment seems to imply that an empty fetch is expected, and then it sets a flag for later handling, but I think this code cannot possibly have been tested with empty sets either. Obviously, it should use bservice%NOTFOUND or bservice%FOUND or similar.
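
    A sketch of the %NOTFOUND rewrite the post alludes to, keeping the original cursor and variable names:

        OPEN bservice (coservice.prod_id);
        FETCH bservice INTO v_billing_alias_id,
                            v_billing_service_uom_id,
                            v_summary_remarks;
        -- %NOTFOUND is TRUE when the fetch returned no row
        IF bservice%NOTFOUND THEN
            v_service_found := 0;
        ELSE
            v_service_found := 1;
        END IF;
        CLOSE bservice;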

  • remove a varchar2 string from the middle of table data values

    - by Michelle Daniel
    Data in the file_name field of the generation table should be an assigned number, then _01, _02, or _03, etc., and then .ldf (example: 82617_01.pdf). Somewhere, the program is putting a state name, and sometimes a date/time stamp, between the assigned number and the 01, 02, etc. (82617_ALABAMA_01.pdf, 19998_MAINE_07-31-2010_11-05-59_AM.pdf, or 5485325_OREGON_01.pdf, for example). We would like to develop a SQL statement to find the bad file names and fix them. In theory it seems rather simple to find file names that contain an extra string and remove it, but putting the statement together is beyond me. Any help or suggestions appreciated. Something like:

        UPDATE GENERATION SET FILE_NAME = (?) WHERE FILE_NAME LIKE '%STRING%';
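
    A hedged sketch using Oracle's regular-expression functions (10g and later). It keeps the leading assigned number and the trailing _NN.extension, dropping whatever was wedged in between; rows like the date/time-stamped MAINE example, which have no trailing _NN at all, won't match the pattern and are left unchanged for separate handling:

        UPDATE generation
        SET    file_name = REGEXP_REPLACE(file_name,
                                          '^([0-9]+)_.*_([0-9]{2}\.[A-Za-z]+)$',
                                          '\1_\2')
        -- skip names that are already in the assigned-number_NN.ext form
        WHERE  NOT REGEXP_LIKE(file_name, '^[0-9]+_[0-9]{2}\.[A-Za-z]+$');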

  • drop-down combo box-like functionality for queries.

    - by Frank Developer
    Is it possible to provide the following type of functionality with the Informix client tools? As the user types the first two characters of a name, the drop-down list is empty. At the third character, the list fills with just the names beginning with those three characters. At the fourth character, MS Access completes the first matching name (assuming the combo's AutoExpand is on). Once enough characters are typed to identify the customer, the user tabs to the next field. The time taken to load the combo between keystrokes is minimal. This occurs once only for each entry, unless the user backspaces through the first three characters again. If your list still contains too many records, you can reduce them by another order of magnitude by changing the value of the constant conSuburbMin from 3 to 4.
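
    The query behind such a combo is just a prefix match, re-run once the minimum character count is reached. A minimal sketch; the table and column names are invented for illustration, and :prefix stands for the characters typed so far:

        SELECT customer_name
        FROM   customers
        WHERE  customer_name LIKE :prefix || '%'
        ORDER  BY customer_name;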

  • In SQL How do I copy values from one table to another based on another field's value?

    - by Joshua1729
    Okay, I have two tables. VOUCHERT, with the following fields: ACTIVATIONCODE, SERIALNUMBER, VOUCHERDATADBID, UNAVAILABLEAT, UNAVAILABLEOPERATORDBID, AVAILABLEAT, AVAILABLEOPERATORDBID, ACTIVATIONCODENEW, EXT1, EXT2, EXT3, DENOMINATION (I added this last column into the table). And the second table, VOUCHERDATAT, with the following fields: VOUCHERDATADBID, BATCHID, VALUE, CURRENCY, VOUCHERGROUP, EXPIRYDATE, AGENT, EXT1, EXT2, EXT3. What I want to do is copy the corresponding VALUE from VOUCHERDATAT and put it into DENOMINATION of VOUCHERT. The link between the two is VOUCHERDATADBID. How do I go about it? It is not a 1:1 mapping. What I mean is, there may be 1000 SERIALNUMBERs with the same VOUCHERDATADBID, and that VOUCHERDATADBID has only one entry in VOUCHERDATAT, hence one value. Therefore, all serial numbers belonging to a certain VOUCHERDATADBID will have the same value. Will JOINs work? What type of JOIN should I use? Or is an UPDATE statement the way to go? Thanks for the help!!
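
    A sketch of the correlated-subquery form of this update, with the table and column names taken from the post. Because each VOUCHERDATADBID has exactly one row in VOUCHERDATAT, the subquery returns a single value per row; the EXISTS clause leaves rows with no match untouched:

        UPDATE vouchert v
        SET    v.denomination = (SELECT vd.value
                                 FROM   voucherdatat vd
                                 WHERE  vd.voucherdatadbid = v.voucherdatadbid)
        WHERE  EXISTS (SELECT 1
                       FROM   voucherdatat vd
                       WHERE  vd.voucherdatadbid = v.voucherdatadbid);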

  • Cannot disable index during PL/SQL procedure

    - by nw
    I've written a PL/SQL procedure that would benefit if indexes were first disabled, then rebuilt upon completion. An existing thread suggests this approach:

        alter session set skip_unusable_indexes = true;
        alter index your_index unusable;
        [do import]
        alter index your_index rebuild;

    However, I get the following error on the first alter index statement:

        SQL Error: ORA-14048: a partition maintenance operation may not be combined with other operations
        ORA-06512: [...]
        14048. 00000 - "a partition maintenance operation may not be combined with other operations"
        *Cause:  ALTER TABLE or ALTER INDEX statement attempted to combine a partition maintenance operation (e.g. MOVE PARTITION) with some other operation (e.g. ADD PARTITION or PCTFREE) which is illegal
        *Action: Ensure that a partition maintenance operation is the sole operation specified in ALTER TABLE or ALTER INDEX statement; operations other than those dealing with partitions, default attributes of partitioned tables/indices or specifying that a table be renamed (ALTER TABLE RENAME) may be combined at will

    The problem index is defined so:

        CREATE INDEX A11_IX1 ON STREETS ("SHAPE")
            INDEXTYPE IS "SDE"."ST_SPATIAL_INDEX"
            PARAMETERS ('ST_GRIDS=890,8010,72090 ST_SRID=2');

    This is a custom index type from a 3rd-party vendor, and it causes chronic performance degradation during high-volume update/insert/delete operations. Any suggestions on how to work around this error? By the way, this error only occurs within a PL/SQL block.

  • Persist data when the table was not mapped (JPA EclipseLink)

    - by enrique
    Hi everybody, I need some help persisting data into a table that has not been mapped. The issue is that our database has a table whose columns are all foreign keys; when the whole database is mapped, all of the other tables come out correctly, but that table, called "category", is not mapped. The way we can browse its data is by passing through the table via the @JoinTable annotation, which the system set on the tables "category" has relations with, so we can navigate the collections and perform queries. But the issue comes when I want to persist data into that table, because there's no entity for it. We tried to persist through the collections, but no luck. Then I tried creating the entity, with its PK and facade, all by hand. However, when I try to persist using the merge method, the system tries to perform an INSERT when it is supposed to perform an UPDATE, so obviously it returns an error. Does anybody have an idea on this situation? Thanks.

  • Indexing XMLType columns

    - by Chris
    Hello, I am working with an XMLType column and am currently experiencing significant performance issues, so I would like to add indexing to the column. Currently I am taking the approach of using the XMLTable() and XQuery functions to create a virtual table. I would like to use this virtual table to create a function-based index on the table containing the XMLType, but I am receiving this error:

        Error report:
        SQL Error: ORA-00907: missing right parenthesis
        00907. 00000 - "missing right parenthesis"
        *Cause:
        *Action:

    This is the index; any assistance would be greatly appreciated.

        CREATE INDEX indx_medicinalproduct ON d.ProductName
            XMLTable('for $i at $a in /safetyreport/patient//drug
                      for $j in $i/medicinalproduct
                      return element r { $i/medicinalproduct }'
                     PASSING s.safetyreport
                     COLUMNS ProductName varchar2(70) PATH 'medicinalproduct') d;
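
    CREATE INDEX cannot take an XMLTable clause, which is likely what the parser is complaining about; an index is built over a column or an expression. When each row yields at most one value, Oracle (11g onward) accepts a function-based index over XMLCast of an XMLQuery. A hedged sketch, assuming a table named safetyreports with an XMLType column safetyreport, and with the XPath narrowed to a single drug so the expression is single-valued:

        CREATE INDEX indx_medicinalproduct
            ON safetyreports
               (XMLCAST(XMLQUERY('/safetyreport/patient/drug[1]/medicinalproduct'
                                 PASSING safetyreport RETURNING CONTENT)
                        AS VARCHAR2(70)));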

  • performance issue: difference between select s.* vs select *

    - by kamil
    Recently I had a problem with the performance of my query. The thing is described here: poor Hibernate select performance comparing to running directly - how debug? After a long time of struggling, I finally discovered that a query with a select prefix like:

        select sth.* from Something as sth ...

    is 300x slower than a query started this way:

        select * from Something as sth ...

    Could somebody help me and answer why that is so? Some external documents on this would be really useful. The tables used for testing were: SALES_UNIT contains some basic info about a sales unit node, such as its name. The only association is to the table SALES_UNIT_TYPE, as ManyToOne. The primary key is ID plus the field VALID_FROM_DTTM, which is a date. SALES_UNIT_RELATION contains the PARENT-CHILD relations between sales unit nodes. It consists of SALES_UNIT_PARENT_ID, SALES_UNIT_CHILD_ID and VALID_TO_DTTM/VALID_FROM_DTTM. No associations with any tables. The PK here is ..PARENT_ID, ..CHILD_ID and VALID_FROM_DTTM. The actual queries I ran were:

        select s.* from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

        select * from sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where r.sales_unit_child_id is null

    Same query, both use a left join, and the only difference is the select.
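
    When the estimated plans look identical, comparing actual row-source statistics can still show where the time goes. A sketch, assuming Oracle 10g or later and using the queries above:

        -- run each variant once with runtime statistics collection
        select /*+ gather_plan_statistics */ s.*
        from   sales_unit s
        left join sales_unit_relation r on (s.sales_unit_id = r.sales_unit_child_id)
        where  r.sales_unit_child_id is null;

        -- then dump actual rows, buffer gets and elapsed time per plan step
        select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));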
