Search Results

Search found 71449 results on 2858 pages for 'oracle data integration'.


  • Oracle (Old?) Joins - A tool/script for conversion?

    - by Grasper
    I have been porting Oracle selects, and I have been running across a lot of queries like so:

      SELECT e.last_name, d.department_name
      FROM employees e, departments d
      WHERE e.department_id(+) = d.department_id;

    ...and:

      SELECT last_name, d.department_id
      FROM employees e, departments d
      WHERE e.department_id = d.department_id(+);

    Are there any guides/tutorials for converting all of the variants of the (+) syntax? What is that syntax even called (so I can scour Google)? Even better: is there a tool/script that will do this conversion for me (preferably free)? An optimizer of some sort? I have around 500 of these queries to port. When was this standard phased out? Any info is appreciated.
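
    For reference, the (+) marker is Oracle's legacy, pre-ANSI outer-join notation (commonly called the "Oracle outer-join operator"): placing (+) on a column marks that table as the optional, nullable side of the join. A hand-converted sketch of the two queries above in ANSI JOIN syntax, not the output of any automated tool:

      -- e.department_id(+) = d.department_id: employees is optional,
      -- so every department appears even without matching employees.
      SELECT e.last_name, d.department_name
      FROM departments d
      LEFT OUTER JOIN employees e ON e.department_id = d.department_id;

      -- e.department_id = d.department_id(+): departments is optional,
      -- so every employee appears even without a matching department.
      SELECT e.last_name, d.department_id
      FROM employees e
      LEFT OUTER JOIN departments d ON e.department_id = d.department_id;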

    Read the article

  • Is there a way to give a subquery an alias in Oracle 10g SQL?

    - by Matt Pascoe
    Is there a way to give a subquery in Oracle 11g an alias, like:

      select *
      from (select client_ref_id, request from some_table where message_type = 1) abc,
           (select client_ref_id, response from some_table where message_type = 2) defg
      where abc.client_ref_id = def.client_ref_id;

    Otherwise, is there a way to join the two subqueries based on the client_ref_id? I realize there is a self join, but on the database I am running on, a self join can take up to 5 minutes to complete (there is some extra logic in the actual query I am running, but I have determined the self join is what is causing the issue). The individual subqueries only take a few seconds to complete by themselves. The self join query looks something like:

      select st.request, st1.request
      from some_table st, some_table st1
      where st.client_ref_id = st1.client_ref_id;
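
    Oracle does accept inline-view aliases written exactly this way; note, though, that the query above names the second subquery defg but references it as def, which alone would make it fail. A minimal sketch with consistent aliases (select list invented for illustration):

      select abc.client_ref_id, abc.request, defg.response
      from (select client_ref_id, request  from some_table where message_type = 1) abc,
           (select client_ref_id, response from some_table where message_type = 2) defg
      where abc.client_ref_id = defg.client_ref_id;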

    Read the article

  • Planning and Budgeting Cloud Service - Partner Webcast

    - by Mike.Hallett(at)Oracle-BI&EPM
    Please join us for a 90-minute live Partner Webcast overviewing the upcoming Oracle Planning and Budgeting Cloud Service (PBCS) offering, on Tuesday, 26th November 2013 at 5:00 pm CET / 4:00 pm UK. Look out for the joining URL and instructions in my November Newsletter, coming soon. As a reminder, there was also a Partner Webcast about PBCS recorded in August 2013 which included a demo; replay link here. Topics include: latest news from Product Management; live demo; overview of assets and collateral; Q&A session. Oracle Planning and Budgeting Cloud Service (PBCS) offers organizations the market-leading Oracle Hyperion Planning and Budgeting solution delivered via Oracle's public cloud service.

    Read the article

  • Creating a binary file from an IntelHex in C#

    - by Allek
    I'm trying to create a binary file from an intelHex file. Inside the intelHex file I have data and the address to which I should write the data inside the binary file. The intelHex file looks like this:

      :10010000214601360121470136007EFE09D2190140
      :100110002146017EB7C20001FF5F16002148011988
      :10012000194E79234623965778239EDA3F01B2CAA7
      :100130003F0156702B5E712B722B732146013421C7
      :00000001FF

    So I have 4 lines here with data, since the last one tells us that's the end of file. Here is what I'm doing to create the file:

      while (!streamReader.EndOfStream)
      {
          string temp = String.Empty;
          int address = 0;
          line = streamReader.ReadLine();
          // Get address for each data
          address = Convert.ToInt32(line.Substring(3, 4), 16);
          // Get data from each line
          temp = line.Substring(7, 2);
          if (temp == "01")
              break;
          else
          {
              temp = line.Substring(9, line.Length - 11);
              string[] array = new string[(temp.Length / 2)];
              int j = 0;
              for (int i = 0; i < array.Length; ++i)
              {
                  array[i] = temp[j].ToString() + temp[j + 1].ToString();
                  j = j + 2;
              }
              temp = String.Empty;
              for (int i = 0; i < array.Length; ++i)
              {
                  temp = temp + Convert.ToChar(Convert.ToInt32(array[i], 16));
              }
          }
          binaryWriter.Seek(address, SeekOrigin.Begin);
          binaryWriter.Write(temp);
          binaryWriter.Flush();
      }
      Console.WriteLine("Done...\nPress any key to exit...");

    The problem is that the data in the binary file in some places is not equal to the data from the intelHex file. It looks like there is some random data added to the file, and I do not know where it comes from. The first time I noticed it, there was additional data before the data from the intelHex file: for instance, the first data line starts with 21, but in the binary file I have the number 12 before the 21. I do not know what is wrong here. I hope someone can help me or point me to some useful information about creating binary files in C#.

    Read the article

  • How do I do proximity search in Oracle right?

    - by hko19
    Oracle's NEAR operator for full-text search returns a score based on the proximity of two or more query terms. For example:

      near((dog, bite), 6)

    matches if 'dog' and 'bite' occur within 6 words. What if I'd like it to match if either 'dog' or 'cat' or any other type of animal occurs within 6 words of the word 'bite'? I tried:

      near(((dog OR cat OR animal), bite), 6)

    but I got:

      NEAR operand not a phrase, equivalence or another NEAR expression

    Rather than expanding every possible combination into multiple NEARs and OR-ing them together, what is the proper way to write such a query?
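
    The error message itself hints at one avenue: NEAR accepts an equivalence expression as an operand, and Oracle Text's equivalence operator is = (EQUIV). A hedged sketch along those lines, with the table and column names invented, and worth verifying against your Oracle Text version:

      -- dog, cat and animal are declared interchangeable via EQUIV,
      -- so any of them within 6 words of 'bite' should match.
      SELECT SCORE(1), doc_id
      FROM documents
      WHERE CONTAINS(doc_text, 'near(((dog = cat = animal), bite), 6)', 1) > 0;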

    Read the article

  • Why is selecting specified columns, and all, wrong in Oracle SQL?

    - by TomatoSandwich
    Say I have a select statement that goes:

      select * from animals

    That gives a query result of all the columns in the table. Now, say the 42nd column of the table animals is is_parent, and I want to return that in my results just after gender, so I can see it more easily. But I also want all the other columns.

      select is_parent, * from animals

    This returns ORA-00936: missing expression. The same statement works fine in Sybase, and I know that you need to add a table alias to the animals table to get it to work (select is_parent, a.* from animals a), but why does Oracle need a table alias to be able to work out the select?
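
    For what it's worth, Oracle's rule appears to be about qualification rather than aliases as such: a bare * must stand alone in the select list, while a star qualified with a table name or alias is an ordinary select-list item. A small sketch, worth double-checking on your version:

      -- Works: the star is qualified with the table name, no alias required.
      select is_parent, animals.* from animals;

      -- Equivalent, using an alias.
      select a.is_parent, a.* from animals a;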

    Read the article

  • Geo Fence: how to find if a point or a shape is inside a polygon using oracle spatial

    - by Abdul
    Hi folks, how do I find if a point or a polygon is inside another polygon using an Oracle Spatial SQL query? Here is the scenario: I have a table (STATE_TABLE) which contains a spatial type (SDO_GEOMETRY) that is a polygon (of, say, a state). I have another table (UNIVERSITY_TABLE) which contains spatial data (SDO_GEOMETRY) (point/polygon) representing universities. Now how do I find whether selected universities are in a given state using a SQL select statement? Primarily I want to detect the existence of given object(s) in a geofence. Thanks.
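
    A hedged sketch using Oracle Spatial's SDO_RELATE operator; the geometry and name columns are invented for illustration, and the operator assumes a spatial index exists on the geometry columns:

      -- Returns universities whose geometry lies inside (or on the
      -- boundary of) the chosen state's polygon.
      SELECT u.university_name
      FROM university_table u, state_table s
      WHERE s.state_name = 'CALIFORNIA'
        AND SDO_RELATE(u.geometry, s.geometry, 'mask=INSIDE+COVEREDBY') = 'TRUE';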

    Read the article

  • Oracle SQL: Multiple Subqueries Unioned Without Running Original Query Multiple Times.

    - by Bob
    So I've got a very large database, and I need to work on a subset, ~1% of the data, to dump into an Excel spreadsheet to make a graph. Ideally, I could select out the subset of data and then run multiple select queries on that, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

      SELECT (
          SELECT (
              SELECT ( long list of requirements )
              UNION
              SELECT ( slightly different long list of requirements )
          )
      )

    and it would be nice if I could group the commonalities of the two long requirement lists and have only the simple differences between the two select statements being unioned.
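
    This is the problem Oracle's subquery factoring (WITH) clause addresses: the shared subset is defined once, and each branch of the UNION reads from it. A minimal sketch with invented table and column names:

      WITH subset AS (
          SELECT id, col_a, col_b
          FROM   big_table
          WHERE  common_long_list_of_requirements = 'met'   -- the shared ~1% filter, stated once
      )
      SELECT col_a, col_b FROM subset WHERE extra_condition_1 = 'x'
      UNION
      SELECT col_a, col_b FROM subset WHERE extra_condition_2 = 'y';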

    Read the article

  • d:DesignData issue, Visual Studio 2010 can't build after adding sample design data with Expression Blend

    - by Valko
    Hi. My VS 2010 solution and Silverlight project build fine. Then I open the MyView.xaml view in Expression Blend 4 and add sample data from a class (I use my own class, defined in the same project). After I add the new sample design data with Expression Blend 4, everything looks fine: you see the added sample data in EB 4, and you also see the data in the VS 2010 designer. Close EB 4, and the next VS 2010 build gives me these errors:

      Error 7: XAML Namespace http://schemas.microsoft.com/expression/blend/2008 is not resolved. C:\Code\source\...myview.xaml

    and:

      Error 12: Object reference not set to an instance of an object. ... TestSampleData.xaml

    When I open TestSampleData.xaml, I see that the namespace for my class used to define the sample data is not recognized. However, this namespace and the class itself exist in the same project! If I remove the design data from MyView.xaml:

      d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}"

    it builds fine, and the namespace in TestSampleData.xaml is recognized this time?? And then if I add back:

      d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}"

    I again see the sample data in the VS 2010 designer, but the next build fails, and again the studio can't find the namespace in my TestSampleData.xaml containing the sample data. That cycle is driving me crazy. Am I missing something here? Is it not possible to have the class defining sample design data in the same project as the MyView.xaml view?? cheers Valko

    Read the article

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things), but I am still unsure whether I understand it correctly. The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states:

      The pages in the data chain and the rows in them are ordered on the value of the clustered index key.

    And Using Clustered Indexes, for example, states:

      For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted.

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that; it would take ages, no? Or is it rather (as I suspect) that there are two scenarios, depending on how "full" the relevant data page is:

    A) If the page has enough free space to accommodate the record, it is placed into the existing data page, and data might be (physically) reordered within that page.

    B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-tree.

    This would then mean the "physical order" of the data is restricted to the page level (i.e. within a data page) but not to the pages residing on consecutive blocks of the physical hard drive. The data pages are then just linked together in the correct order. Or, formulated another way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read the data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly"). How close am I? :)
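
    As an aside, scenario B is observable: page splits from out-of-order inserts surface as logical fragmentation, which can be inspected with sys.dm_db_index_physical_stats. A hedged T-SQL sketch, with the database and table names invented:

      SELECT index_type_desc,
             avg_fragmentation_in_percent,   -- grows as new pages are linked out of physical order
             fragment_count,
             page_count
      FROM sys.dm_db_index_physical_stats(
               DB_ID(N'MyDatabase'),          -- hypothetical database
               OBJECT_ID(N'dbo.MyTable'),     -- hypothetical table
               1,                             -- index_id 1 = the clustered index
               NULL,                          -- all partitions
               'LIMITED');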

    Read the article

  • storing/retrieving data for graph with long continuous stretches

    - by james
    I have a large 2-dimensional data set which I would like to graph. The graph is displayed in a browser and the data is retrieved via AJAX. Long stretches of this graph will be continuous: e.g., for x=0 through x=1000, y=9; then for x=1001 through x=1100, y=80; etc. The approach I'm considering is to send (from the server) and store (in the browser) only the points where the data changes. So for the example above, I would store data[0] = 9, then data[1001] = 80. Then, given x=999 for example, retrieving data[999] would actually look up data[0]. The problem that arises is finding a dictionary-like data structure which behaves like this. The approach I'm considering is to store the data in a traditional dictionary object and also maintain a sorted array of the keys of that object. When given x=999, it would look at the midpoint of this array, determine whether the nearest lower key is left or right of that midpoint, then repeat with the correct subsection, etc. Does anyone have thoughts on this problem/approach?

    Read the article

  • Dojox Datagrid contains data, but shows up as empty

    - by Vivek
    I'd really appreciate any help on this. There is a Dojox DataGrid that I'm creating programmatically and supplying with JSON data. As of now, I'm creating this data within JavaScript itself. Please refer to the code sample below.

      var upgradeStageStructure = [{
          cells: [
              { field: "stage",  name: "Stage",  width: "50%", styles: 'text-align: left;' },
              { field: "status", name: "Status", width: "50%", styles: 'text-align: left;' }
          ]
      }];

      var upgradeStageData = [
          {id: 1, stage: "Preparation",                   status: "Complete"},
          {id: 2, stage: "Formatting",                    status: "Complete"},
          {id: 3, stage: "OS Installation",               status: "Complete"},
          {id: 4, stage: "OS Post-Installation",          status: "In Progress"},
          {id: 5, stage: "Application Installation",      status: "Not Started"},
          {id: 6, stage: "Application Post-Installation", status: "Not Started"}
      ];

      var stagestore = new dojo.data.ItemFileReadStore({
          data: {identifier: "id", items: upgradeStageData}
      });

      var upgradeStatusGrid = new dojox.grid.DataGrid({
          autoHeight: true,
          style: "width:400px;padding:0em;margin:0em;",
          store: stagestore,
          clientSort: false,
          rowSelector: '20px',
          structure: upgradeStageStructure,
          columnReordering: false,
          selectable: false,
          singleClickEdit: false,
          selectionMode: 'none',
          loadingMessage: 'Loading Upgrade Stages',
          noDataMessage: 'There is no data',
          errorMessage: 'Failed to load Upgrade Status'
      });

      dojo.byId('progressIndicator').innerHTML = '';
      dojo.byId('progressIndicator').appendChild(upgradeStatusGrid.domNode);
      upgradeStatusGrid.startup();

    The problem is that I am not seeing anything within the grid upon display (no headers, no data). But I know for sure that the data in the grid does exist and the grid is properly initialized, because I called alert(grid.domNode.innerHTML); and the resulting HTML does show a table containing header rows and the above data. A linked image illustrated what I'm seeing when I display the page (I can't post images since my account is new here). As you may notice, there are 6 rows for the 6 pieces of data I have created, but the grid is a mess. Please help out if you think you know what could be going wrong. Thanks in advance, Viv

    Read the article

  • rpy2: Converting a data.frame to a numpy array

    - by Mike Dewar
    I have a data.frame in R. It contains a lot of data: gene expression levels from many (125) arrays. I'd like the data in Python, due mostly to my incompetence in R and the fact that this was supposed to be a 30-minute job. I would like the following code to work. To understand this code, know that the variable path contains the full path to my data set which, when loaded, gives me a variable called immgen. Know that immgen is an object (a Bioconductor ExpressionSet object) and that exprs(immgen) returns a data frame with 125 columns (experiments) and tens of thousands of rows (named genes).

      robjects.r("load('%s')" % path)  # loads immgen
      e = robjects.r['data.frame']("exprs(immgen)")
      expression_data = np.array(e)

    This code runs, but expression_data is simply array([[1]]). I'm pretty sure that e doesn't represent the data frame generated by exprs(), due to things like:

      In [40]: e._get_ncol()
      Out[40]: 1

      In [41]: e._get_nrow()
      Out[41]: 1

    But then again, who knows? Even if e did represent my data.frame, the fact that it doesn't convert straight to an array would be fair enough: a data frame has more in it than an array (rownames and colnames), so maybe life shouldn't be this easy. However, I still can't work out how to perform the conversion. The documentation is a bit too terse for me, though my limited understanding of the headings in the docs implies that this should be possible. Anyone have any thoughts?

    Read the article

  • How to find the latest row for each group of data

    - by Jason
    Hi all, I have a tricky problem that I'm trying to find the most effective method to solve. Here's a simplified version of my view structure.

      Table: Audits
      AuditID | PublicationID | AuditEndDate | AuditStartDate
      1       | 3             | 13/05/2010   | 01/01/2010
      2       | 1             | 31/12/2009   | 01/10/2009
      3       | 3             | 31/03/2010   | 01/01/2010
      4       | 3             | 31/12/2009   | 01/10/2009
      5       | 2             | 31/03/2010   | 01/01/2010
      6       | 2             | 31/12/2009   | 01/10/2009
      7       | 1             | 30/09/2009   | 01/01/2009

    There are 3 queries that I need from this. One to get all the data; the next to get only the history data (that is, everything but the latest data item, by AuditEndDate); and the last to obtain only the latest data item (by AuditEndDate). There's an added layer of complexity in that I have a date restriction (this is on a per user/group basis) where certain user groups can only see between certain dates. You'll notice this in the where clauses as AuditEndDate <= blah and AuditStartDate = blah.

    For each publication, select all the data available:

      select * from Audits
      where auditEndDate <= '31/03/10' and AuditStartDate = '06/06/2009';

    For each publication, select all the data but exclude the latest data available (by AuditEndDate):

      select * from Audits
      left join (select AuditId as aid, publicationID as pid, max(auditEndDate) as pend
                 from Audit
                 where auditenddate <= '31/03/2009' /* user restrict */
                 group by pid) Ax on Ax.pid = Audit.pubid
      where pend != Audits.auditenddate
        and auditEndDate <= '31/03/10' and AuditStartDate = '06/06/2009'; /* user restrict */

    For each publication, select only the latest data available (by AuditEndDate):

      select * from Audits
      left join (select AuditId as aid, publicationID as pid, max(auditEndDate) as pend
                 from Audit
                 where auditenddate <= '31/03/2009' /* user restrict */
                 group by pid) Ax on Ax.pid = Audit.pubid
      where pend = Audits.auditenddate
        and auditEndDate <= '31/03/10' and AuditStartDate = '06/06/2009'; /* user restrict */

    So at the moment, queries 1 and 3 work fine, but query 2 just returns all the data instead of applying the restriction. Can anyone help me? Thanks, Jason
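
    A common alternative for "latest row per group" and "history per group" is an analytic ranking function (available in SQL Server and Oracle, among others). A hedged sketch against the same Audits columns:

      select *
      from (select a.*,
                   row_number() over (partition by PublicationID
                                      order by AuditEndDate desc) as rn
            from Audits a
            where AuditEndDate <= '31/03/10'        /* user restrict */
              and AuditStartDate = '06/06/2009') ranked
      where rn > 1;   -- rn > 1 gives the history; rn = 1 gives the latest row per publication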

    Read the article

  • Multiple calls to data service from SL3?

    - by Chris
    I have an SL3 app that makes asynchronous calls to a data service. Basically, there is a treeview that is bound to a collection of objects. The idea is that as a user selects a specific treeviewitem, a call is made to the data service, with a parameter specific to the selected treeviewitem being passed to the corresponding web method in the data service. The data service returns data to the SL3 client, and the client presents it to the user. This works well. The problem is that when users navigate through the treeview using the arrow keys on their keyboard, they could press the down arrow key, say, 10 times, and 10 calls will be made to the data service; each of the 10 items will then be displayed to the user momentarily, finishing with the data for the most recently selected treeviewitem. So, onto the question: how can I put in some form of delay, so that someone can navigate quickly through the treeview, and only once they stop at a certain treeviewitem is a call made to the data service? Thanks for any suggestions. Chris

    Read the article

  • Unable to set up ODBC after installing ODAC (Xcopy)

    - by rwilson513
    We are trying to use the ODAC Xcopy deployment to minimize the footprint of installing the Oracle 11g client. Currently, we use the Oracle 11g Admin install (~700 MB). I've tried using the ODAC Xcopy, and that works. However, the only issue I now have is that I cannot set up an ODBC data source on the target system by just installing the ODAC Xcopy. After installing ODAC (on Windows XP, fyi), I go to Control Panel > Admin Tools > Data Sources (ODBC) > System DSN > Add > Microsoft ODBC for Oracle, and I get the following error:

      The Oracle(tm) client and networking components were not found. These components are supplied by Oracle and are part of the Oracle Version 7.3 (or greater) client software installation. You will be unable to use this driver until these components have been installed.

    I've tried editing the registry and creating the same keys that the Oracle Admin install creates, but still no luck. I'm not sure how to get past this. Any suggestions? Thanks in advance.

    Read the article

  • memcache is not storing data across requests

    - by morpheous
    I am new to using memcache, so I may be doing something wrong. I have written a wrapper class around memcache. The wrapper class has only static methods, so it is a quasi-singleton. The class looks something like this:

      class myCache
      {
          private static $memcache = null;
          private static $initialized = false;

          public static function init()
          {
              if (self::$initialized) return;
              self::$memcache = new Memcache();
              if (self::configure())  // connects to daemon
              {
                  self::store('foo', 'bar');
              }
              else
                  throw new ConnectionError('I barfed');
          }

          public static function store($key, $data, $flag = MEMCACHE_COMPRESSED, $timeout = 86400)
          {
              if (self::$memcache->get($key) !== false)
                  return self::$memcache->replace($key, $data, $flag, $timeout);
              return self::$memcache->set($key, $data, $flag, $timeout);
          }

          public static function fetch($key)
          {
              return self::$memcache->get($key);
          }
      }

      // in my index.php file, I use the class like this
      require_once('myCache.php');
      myCache::init();
      echo 'Stored value is: ' . myCache::fetch('foo');

    The problem is that the myCache::init() method is being executed in full every time a page is requested. I then remembered that static variables do not maintain state across page requests. So I decided instead to store the flag that indicates whether the server contains the startup data (for our purposes, the variable 'foo' with value 'bar') in memcache itself. Once the status flag is stored in memcache itself, it solves the problem of the initialisation data being loaded for every page request (which, quite frankly, defeats the purpose of memcache). However, having solved that problem, when I come to fetch the data from memcache, it is empty. I don't understand what's going on. Can anyone clarify how I can store my data once and retrieve it across page requests? BTW (just to clarify), the get/set is working correctly, and if I allow memcache to load the initialisation data for each page request (which is silly), then the data is available in memcache.

    Read the article

  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers using a C program. A hexdump of the binary data shows that after the first few data points it starts again at a location 20000 hex addresses away. The hexdump output is as shown below:

      0000000 0000 0000 0000 0000 0000 0000 0000 0000
      *
      0020000 0000 0000 0053 0000 0064 0000 006b 0000
      0020010 0066 0000 0068 0000 0066 0000 005d 0000
      0020020 0087 0000 0059 0000 0062 0000 0066 0000
      ........ and so on...

    But when I read it into an array 'data' of long integers with the typical fread call:

      fread(data, sizeof(*data), filelength / sizeof(*data), fd);

    the data array is filled with all zeros until it reaches the 20000 location. After that it reads the data correctly. Why is it reading regions where my file's data is not? Or how can I make it read only my file, and not anything in between which is not in the file? I know it looks like a trivial problem, but I cannot figure it out even after googling for a night. Can anyone suggest where I am going wrong? Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific). My C compiler is gcc.

    Read the article

  • AJAX Post Not Sending Data?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost, so forgive me, but I have new data. I am running a JavaScript log-out function called logOut() that makes a jQuery AJAX call to a PHP script:

      function logOut() {
          var data = new Object;
          data.log_out = true;
          $.ajax({
              type: 'POST',
              url: 'http://www.mydomain.com/functions.php',
              data: data,
              success: function() {
                  alert('done');
              }
          });
      }

    The PHP function it calls is here:

      if (isset($_POST['log_out'])) {
          $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
          $connection->runQuery($query); // <-- my own database class...
          // omitted code that clears session etc...
          die();
      }

    Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data will not trigger my query (this will last about an hour or so). I figured out the POST data is not being set by adding this at the end of my script:

      $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
      $connection->runQuery($query);

    So now I know for certain my log-out function is being skipped, because my database contains the data shown in the first screenshot of the original post; if it were NOT being skipped, my data would show up as in the second screenshot. I know it is being skipped for two reasons: one, the die() at the end of my first function, and two, if it were a success, a 'logOutSuccess' would be registered in the table. Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But if that's the case??? Thanks in advance. -J

    Read the article

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.NET application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV formatted. The problem is that they do not want to expose their server to our network and are requesting that the import be done on the client side (all clients are on the same network as their server). So, what needs to be done is:

    1. They request an import page from our server.
    2. Client script on the page issues a request to their server to get the CSV-formatted data.
    3. The data is sent back to our application.

    This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick. But here what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON- or XML-formatted data. What I have tried and learned so far:

    - Hidden iframe: "access denied".
    - XMLHttpRequest: behaviour depends on the browser security settings; it may work, may work while nagging the user with security warnings, or may not work at all.
    - Dynamic script tags: would have worked if they could have returned data in JSON format.
    - IE client data binding: the same "access denied" error.

    Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format, or changing their browser security settings? (The DNS trick is not an option, by the way.)

    Read the article

  • How can I store a large amount of data from a database to XML (memory problem)?

    - by Andrija
    First, I had a problem with getting the data from the database; it took too much memory and failed. I've set -Xmx1500M and I'm using a scrolling ResultSet, so that was taken care of. Now I need to make an XML file from the data, but I can't put it all in one file. At the moment, I'm doing it like this:

      while (rs.next()) {
          i++;
          xmlStringBuilder.append("\n\t<row>");
          xmlStringBuilder.append("\n\t\t<ID>" + Util.transformToHTML(rs.getInt("id")) + "</ID>");
          xmlStringBuilder.append("\n\t\t<JED_ID>" + Util.transformToHTML(rs.getInt("jed_id")) + "</JED_ID>");
          xmlStringBuilder.append("\n\t\t<IME_PJ>" + Util.transformToHTML(rs.getString("ime_pj")) + "</IME_PJ>");
          // etc.
          xmlStringBuilder.append("\n\t</row>");
          if (i % 100000 == 0) {
              // stores the data to a file with the name i.xml
              storeKBR(xmlStringBuilder.toString(), i);
              xmlStringBuilder = null;
              xmlStringBuilder = new StringBuilder();
          }
      }

    and it works; I get 12 files of 100 MB each. Now what I'd like to do is have all that data in one file (which I then compress), but if I just remove the if part, I run out of memory. I thought about writing to a file, closing it, then opening it again, but that wouldn't gain me much since I'd have to load the file into memory when I open it. P.S. If there's a better way to release the builder, do let me know :)

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL Server 2008. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for each QuestionId and demographics for each RespondentId. Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates an export; it's working. The problem is that it seems slow, and we don't have that much data (fewer than 50k respondents). Right now I'm thinking: why am I 'paying' to de-aggregate the data for each query? Why don't I cache that? The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level: find out which respondents are being exported and then select their combined, cached, de-aggregated data. To me, the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is that we have multiple clients, so that doesn't scale, AND I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of

      SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO

    With that in place, I would then be able to get my export with filtering and XML calls into the XML column. What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think SELECT RespondentId, XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?], XmlCol.RinseAndRepeat). Is there a better technology/approach for this? Thanks!
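
    For reference, a static version of the kind of pivot the dynamic SQL presumably generates, in T-SQL; the table name and the hard-coded QuestionId list are invented here, which is exactly the part the dynamic SQL would build at run time:

      -- One row per respondent, one column per question.
      SELECT RespondentId, [1] AS Question1, [2] AS Question2, [3] AS Question3
      FROM (SELECT RespondentId, QuestionId, Answer
            FROM   Answers) AS src
      PIVOT (MAX(Answer) FOR QuestionId IN ([1], [2], [3])) AS p;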

    Read the article

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, so I've come across something which is perhaps a flaw in the extension method .CopyToDataTable. This method is used by importing (in VB.NET) System.Data.DataTableExtensions and then calling the method against an IEnumerable. You would do this if you want to filter a DataTable using LINQ and then restore the DataTable at the end, i.e.:

      Imports System.Data.DataRowExtensions
      Imports System.Data.DataTableExtensions

      Public Class SomeClass
          Private Shared Function GetData() As DataTable
              Dim Data As DataTable
              Data = LegacyADO.NETDBCall
              Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable()
              Return Data
          End Function
      End Class

    In the example above, the Where filtering might return no results. If this happens, CopyToDataTable throws an exception because there are no DataRows. Why? The correct behavior would be to return a DataTable with Rows.Count = 0. Can anyone think of a clean workaround, in such a way that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a static class, so I can't override the behavior... any ideas? Have I missed something? cheers

    UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above.

    Read the article
