Search Results

Search found 4685 results on 188 pages for 'queries'.

Page 163/188 | < Previous Page | 159 160 161 162 163 164 165 166 167 168 169 170  | Next Page >

  • NHibernate + Fluent long startup time

    - by PaRa
    Hi all, I am new to NHibernate. The test below took 11.2 seconds in debug mode, and I am seeing this large startup time in all my tests; basically, creating the first session takes a ton of time.

    Setup: Windows 2003 SP2 / Oracle 10gR2 latest CPU / ODP.NET 2.111.7.20 / FNH 1.0.0.636 / NHibernate 2.1.2.4000 / NUnit 2.5.2.9222 / VS2008 SP1

        using System;
        using System.Collections;
        using System.Collections.Generic;
        using System.Configuration;
        using System.Data;
        using System.Data.Common;
        using System.Globalization;
        using System.IO;
        using System.Text;
        using FluentNHibernate;
        using log4net.Config;
        using NHibernate;
        using NUnit.Framework;

        [Test()]
        public void GetEmailById()
        {
            Email result;
            using (EmailRepository repository = new EmailRepository())
            {
                result = repository.GetById(1111);
            }
            Assert.IsTrue(result != null);
        }

        public class EmailRepository : RepositoryBase<Email>
        {
            public EmailRepository() : base() { }
        }

    In my RepositoryBase<T>:

        public T GetById(object id)
        {
            using (var session = sessionFactory.OpenSession())
            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    T returnVal = session.Get<T>(id);
                    transaction.Commit();
                    return returnVal;
                }
                catch (HibernateException ex)
                {
                    // Logging here
                    transaction.Rollback();
                    return null;
                }
            }
        }

    The query time itself is very small and the resulting entity is really small. Subsequent queries are fine. It seems to be getting the first session started. Has anyone else seen something similar?
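    The usual culprit for a slow first session is that each repository instance builds its own ISessionFactory; configuring NHibernate and compiling the mappings dominates that 11-second cost, while OpenSession() itself is cheap. A minimal sketch of building the factory once per process (the holder class, the connection-string key, and the OracleClientConfiguration choice are illustrative assumptions, not taken from the post):

        // Build the expensive ISessionFactory exactly once per AppDomain and
        // let every repository share it; only the first test pays the cost.
        public static class SessionFactoryHolder
        {
            public static readonly ISessionFactory Factory =
                Fluently.Configure()
                    .Database(OracleClientConfiguration.Oracle10
                        .ConnectionString(c => c.FromConnectionStringWithKey("Main")))
                    .Mappings(m => m.FluentMappings.AddFromAssemblyOf<Email>())
                    .BuildSessionFactory();
        }

    With that in place, RepositoryBase<T> would call SessionFactoryHolder.Factory.OpenSession() instead of holding its own factory.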


  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload and watch videos. I would like to add a "most watched videos this week" list to my page, but the list should not contain just the videos that were uploaded in the previous week; it should consider all videos. I currently record views in a single counter column, so I have no information on when a video was watched, and I am now searching for a way to record this data.

    The first solution is the most obvious one (and the correct one, as far as I know): a separate table into which you insert a new row every time you want to record a view, storing the ID of the video and a timestamp. My worry is that this table would quickly grow huge and queries against it would be extremely slow (we get about 3 million views a month).

    The second solution is less flexible but easier on the database: add 7 columns to the "videos" table, one for each day of the week (views_monday, views_tuesday, views_wednesday, ...), increment the value in the column for the current day, and reset the current day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns.

    What do you think: should I bother with the first solution, or will the second one suffice for my case? If you have a better solution, please share! Oh, I'm using MySQL.
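    For scale, 3 million rows a month is well within what MySQL handles if the log table is narrow and indexed for the one query it serves. A sketch of the per-view log approach (table and column names are hypothetical):

        -- One row per view; the composite index covers the weekly ranking query.
        CREATE TABLE video_views (
            video_id  INT UNSIGNED NOT NULL,
            viewed_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP,
            KEY idx_viewed_video (viewed_at, video_id)
        ) ENGINE=InnoDB;

        -- Most watched videos over the trailing 7 days:
        SELECT video_id, COUNT(*) AS views
        FROM video_views
        WHERE viewed_at >= NOW() - INTERVAL 7 DAY
        GROUP BY video_id
        ORDER BY views DESC
        LIMIT 10;

    Old rows can be rolled up into per-day counts and purged on a schedule, which keeps the table bounded without losing the flexibility the 7-column design gives up.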


  • How should I name my SQL query files? Should I use some methodology?

    - by Mehper C. Palavuzlar
    We have a huge Oracle 10g database in our company, and I provide employees with data upon request. My problem is that I save almost every SQL query I write, and now my list has grown too large. I want to organize and rename these .sql files so that I can easily find the one I want.

    At the moment I use folders named Sales Dept, Field Team, Planning Dept, Special, etc., and under those folders there are .sql files like Delivery_sales_1, Delivery_sales_2, ... Sent_sold_lostsales_endpoints, ... Sales_provinces_period, Returnrates_regions_bymonths, ... Jack_1, Steve_1, Steve_2, ...

    I try to name the files after their content, but this makes the names long and still does not completely meet my needs. Sometimes someone demands a special report and I name the file after that person, which is also not so good. I know duplicates and very similar files are accumulating over time, and I don't have control over them. Can you show me the right direction for renaming all these files and folders and organizing my queries for easy and better control? TIA.


  • Hundreds of custom UserControls create thousands of USER Objects

    - by Andy Blackman
    I'm creating a dashboard application that shows hundreds of "items" on a FlowLayoutPanel. Each "item" is a UserControl made up of 12 or so labels. My app queries a database and then creates an "item" instance for each record, populating the labels and textboxes with data before adding it to the FlowLayoutPanel.

    After adding about 560 items to the panel, I noticed that the USER Objects count in Task Manager had gone up to about 7300, much larger than any other app on my machine. A quick spot of mental arithmetic (OK, I might have used calc.exe) shows that 560 * 13 (12 labels plus the UserControl itself) is 7280, which suddenly gave away where all the objects were coming from...

    Knowing that there is a 10,000 USER object limit before Windows throws in the towel, I'm trying to figure out better ways of drawing these items onto the FlowLayoutPanel. My ideas so far:

    1) Owner-draw the "item", using Graphics.DrawString and DrawImage in place of many of the labels. I'm hoping this will mean 1 item = 1 USER object, not 13.

    2) Keep 1 instance of the "item", then for each record populate that instance and use the Control.DrawToBitmap() method to grab an image, and use that in the FlowLayoutPanel (or similar).

    So... does anyone have any other suggestions?

    P.S. It's a zoomable interface, so I have already ruled out paging, as there is a requirement to see all items at once :( Thanks everyone.
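    Idea 1 is the standard fix: a single owner-drawn control costs one USER object regardless of how many text fields it paints. A rough sketch (class name, layout, and fields are made up for illustration):

        // One control paints everything the 12 labels used to show.
        public class DashboardItem : Control
        {
            public string Title { get; set; }
            public string[] Fields { get; set; }

            protected override void OnPaint(PaintEventArgs e)
            {
                base.OnPaint(e);
                // TextRenderer matches Label's default GDI text rendering.
                TextRenderer.DrawText(e.Graphics, Title, Font,
                                      new Point(4, 4), ForeColor);
                int y = 24;
                foreach (string field in Fields ?? new string[0])
                {
                    TextRenderer.DrawText(e.Graphics, field, Font,
                                          new Point(8, y), ForeColor);
                    y += Font.Height + 2;
                }
            }
        }

    Enabling double buffering on the control (SetStyle(ControlStyles.OptimizedDoubleBuffer, true) in the constructor) keeps repaints flicker-free when the panel scrolls or zooms.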


  • Codeigniter Inserting Multidimensional Array as rows in MySQL

    - by RisingSun
    Please refer to this question I asked: Codeigniter Insert Multiple Rows in SQL. To restate:

        <tr>
            <td><input type="text" name="user[0][name]" value=""></td>
            <td><input type="text" name="user[0][address]" value=""></td>
            <td><input type="text" name="user[0][age]" value=""></td>
            <td><input type="text" name="user[0][email]" value=""></td>
        </tr>
        <tr>
            <td><input type="text" name="user[1][name]" value=""></td>
            <td><input type="text" name="user[1][address]" value=""></td>
            <td><input type="text" name="user[1][age]" value=""></td>
            <td><input type="text" name="user[1][email]" value=""></td>
        </tr>
        ..........

    This can be inserted into MySQL like so:

        foreach ($_POST['user'] as $user) {
            $this->db->insert('mytable', $user);
        }

    This results in multiple MySQL queries. Is it possible to optimise it further, so that the insert occurs in one query? Something like "insert multiple rows via a php array into mysql", but taking advantage of CodeIgniter's simpler syntax. Thanks
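    CodeIgniter's Active Record grew a batch helper for exactly this; if your version ships it (it is present in 2.x, so verify against your install), the loop collapses to one statement:

        // Compiles a single multi-row INSERT from the array of rows.
        $this->db->insert_batch('mytable', $_POST['user']);

        // Roughly the SQL it produces:
        // INSERT INTO mytable (name, address, age, email)
        // VALUES ('..','..','..','..'), ('..','..','..','..'), ...

    On an older CodeIgniter, the fallback is building that VALUES list yourself with $this->db->escape() on each value and running it through $this->db->query().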


  • Using Memcached in Python/Django - questions.

    - by Thomas
    I am starting to use Memcached to make my website faster. For constant data in my database I use this:

        from django.core.cache import cache

        def get_regions():
            cache_key = 'regions'
            regions = cache.get(cache_key)
            if regions is None:
                # Not found in cache
                regions = Regions.objects.all()
                cache.set(cache_key, regions, 2592000)  # 2592000 seconds = 30 days
            return regions

    For seldom-changed data I use signals:

        from django.core.cache import cache
        from django.db.models import signals

        def nuke_social_network_cache(sender, instance, **kwargs):
            cache_key = 'networks_for_%s' % (instance.user_id,)
            cache.delete(cache_key)

        signals.post_save.connect(nuke_social_network_cache,
                                  sender=SocialNetworkProfile)
        signals.post_delete.connect(nuke_social_network_cache,
                                    sender=SocialNetworkProfile)

    Is this the correct way? I installed django-memcached-0.1.2, which shows me:

        Memcached Server Stats
        Server     Keys  Hits  Gets  Hit_Rate  Traffic_In  Traffic_Out  Usage    Uptime
        127.0.0.1  15    220   276   79%       83.1 KB     364.1 KB     18.4 KB  22:21:25

    Can somebody explain what the columns mean?

    And a last question: I have templates that pull a lot of records from a few related tables. In my view I fetch records from one table, and in the template I show them along with related info from the others. Generating the page takes a few seconds even for very small tables (<100 records). Is there an easy way to cache the queries issued from templates? Or do I have to build some big structure in my view (with all the related tables), cache that, and send it to the template?
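    The setup itself is the standard pattern (read-through cache plus signal-based invalidation). For the template slowdown, the usual cause is one extra query per row for each related table; select_related() fetches the relations in a single JOIN, and the result can be cached like any other value. A sketch (model and relation names are placeholders):

        from django.core.cache import cache

        def get_records_for_page():
            cache_key = 'records_with_relations'
            records = cache.get(cache_key)
            if records is None:
                # One JOINed query instead of 1 + N-per-relation queries;
                # list() forces evaluation so the cache stores real rows.
                records = list(MyModel.objects.select_related('profile', 'region'))
                cache.set(cache_key, records, 600)
            return records

    Django's template fragment caching ({% load cache %} then {% cache 600 sidebar %} ... {% endcache %}) is the other easy win: it caches the rendered HTML, so the queries behind it never run on a cache hit.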


  • Mysql query in drupal database - groupwise maximum with duplicate data

    - by nselikoff
    I'm working on a MySQL query in a Drupal database that pulls together users and two different CCK content types. I know people ask for help with groupwise maximum queries all the time... I've done my best, but I need help. This is what I have so far:

        # the artists
        SELECT users.uid, users.name AS username, n1.title AS artist_name
        FROM users
        LEFT JOIN users_roles ur ON users.uid = ur.uid
        INNER JOIN role r ON ur.rid = r.rid AND r.name = 'artist'
        LEFT JOIN node n1 ON n1.uid = users.uid AND n1.type = 'submission'
        WHERE users.status = 1
        ORDER BY users.name;

    This gives me data that looks like:

        uid  username  artist_name
        1    foo       Joe the Plumber
        2    bar       Jane Doe
        3    baz       The Tooth Fairy

    Also, I've got this query:

        # artwork
        SELECT n.nid, n.uid, a.field_order_value
        FROM node n
        LEFT JOIN content_type_artwork a ON n.nid = a.nid
        WHERE n.type = 'artwork'
        ORDER BY n.uid, a.field_order_value;

    Which gives me data like this:

        nid  uid  field_order_value
        1    1    1
        2    1    3
        3    1    2
        4    2    NULL
        5    3    1
        6    3    1

    Additional relevant info:

    - nid is the primary key for an Artwork
    - every Artist has one or more Artworks
    - valid data for field_order_value is NULL, 1, 2, 3, or 4
    - field_order_value is not necessarily unique per Artist: an Artist could have 4 Artworks all with field_order_value = 1

    What I want is the row with the minimum field_order_value from my second query, joined with the artist information from the first query. In cases where field_order_value is not valuable information (either because the Artist used duplicate values among their Artworks or left that field NULL), I would like the row with the minimum nid from the second query.
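    One way to express "minimum field_order_value, ties broken by minimum nid" without window functions is a self-join that keeps a row only when no better row exists for the same artist (a sketch using the post's column names; NULL order values are pushed last with COALESCE):

        SELECT aw.uid, aw.nid
        FROM (
            SELECT n.uid, n.nid, COALESCE(a.field_order_value, 999) AS ord
            FROM node n
            LEFT JOIN content_type_artwork a ON n.nid = a.nid
            WHERE n.type = 'artwork'
        ) aw
        LEFT JOIN (
            SELECT n.uid, n.nid, COALESCE(a.field_order_value, 999) AS ord
            FROM node n
            LEFT JOIN content_type_artwork a ON n.nid = a.nid
            WHERE n.type = 'artwork'
        ) better
            ON better.uid = aw.uid
           AND (better.ord < aw.ord
                OR (better.ord = aw.ord AND better.nid < aw.nid))
        WHERE better.nid IS NULL;

    Joining that result to the artist query on uid then attaches the username and artist_name columns.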


  • Access VBA question: Change the query being referenced by a function, depending on context

    - by Tara Amatista
    I have a custom function in Access 2007 that hinges on grabbing data out of a specific query. It opens Outlook, creates a new email, and populates the fields with specific addresses and data taken from the query ("DecisionEmail"). Now I want to make a different query ("RequestEmail") and have it populate the email with that data. So all I have to do is change this line:

        Set MailList = db.OpenRecordset("DecisionEmail")

    and that's where I get stumped. This is my desired result: if the user is on Form_Decision and clicks the "Send email" button, "DecisionEmail" gets plugged into the function and that data appears in the email; if the user is on Form_SendRequest and clicks "Send email", "RequestEmail" gets plugged in instead.

    The reason these are different queries is that they contain very different information that is smudged about in different ways. However, since it's just one little thing that needs to change in the function code, I don't think a brand new function is a good idea. My last resort would be to make a brand new function and use the Conditions field in the Macro interface to choose between them, but I have a feeling there's a more elegant solution possible. I have a vague notion of setting the query names as variables and using an If statement, but I just don't have the mental vocabulary for thinking through this.
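    The vague notion is on the right track: make the query name a parameter of the function, and let each button pass its own. A sketch in VBA (procedure and variable names are illustrative):

        ' Each form's button supplies the query that feeds the email.
        Public Sub SendEmailFromQuery(ByVal queryName As String)
            Dim db As DAO.Database
            Dim MailList As DAO.Recordset

            Set db = CurrentDb
            Set MailList = db.OpenRecordset(queryName)
            ' ... open Outlook and populate the mail from MailList as before ...
        End Sub

        ' Behind Form_Decision's button:
        '     SendEmailFromQuery "DecisionEmail"
        ' Behind Form_SendRequest's button:
        '     SendEmailFromQuery "RequestEmail"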


  • What's the best way to "shuffle" a table of database records?

    - by Darth
    Say that I have a table with a bunch of records which I want to present to users in random order. I also want users to be able to paginate back and forth, so I have to preserve some sort of order, at least for a while. The application is basically all AJAX and it caches already-visited pages, so even if I always served random results, when the user goes back he gets the previous page, because it loads from the local cache.

    The problem is that if I return only random results, there might be duplicates. Each page contains 6 results, so to prevent this I'd have to do something like WHERE id NOT IN (1,2,3,4 ...), listing all the previously loaded IDs. The huge downside of that solution is that nothing can be cached on the server side, as every user requests different data.

    An alternate solution might be to create another column for ordering the records, and shuffle it every <insert time unit here>. The problem there is that I'd need to assign a random number out of a sequence to every record in the table, which would take as many queries as there are records.

    I'm using Rails and MySQL, if that's of any relevance.
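    A third option avoids both problems: seed the randomness per user. MySQL's RAND() accepts a seed, and a seeded shuffle reproduces the same order on every call, so plain LIMIT/OFFSET pagination works with no duplicates and no NOT IN list. A sketch (table name and session handling are hypothetical):

        -- seed = random integer stored in the user's session on first visit
        SELECT id, title
        FROM records
        ORDER BY RAND(42)          -- same seed reproduces the same order
        LIMIT 6 OFFSET 12;         -- page 3 at 6 results per page

    The trade-off is that ORDER BY RAND(seed) sorts the whole table on every request, so it suits small-to-medium tables; for large ones, the extra shuffle column re-shuffled by a cron job (one UPDATE using a user variable, not one query per record) is the cheaper long-term answer.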


  • MySQL LIMIT 1 but query 15 rows?

    - by Ian
    Basically what I'm trying to do is compare the IDs of rows against 15 results in MySQL, eliminate all but 1 (using NOT IN), and then pull that result. Normally this would be fine by itself; however, the order of the 15 rows I'm querying is constantly changing based on a ranking, so there is a possibility that between the time the ranking updates and the AJAX request (which submits the IDs for NOT IN), more than one ID has changed, which would of course bring back more than one row, which I do not want. So in short: is there a way I can query 15 rows but only return one, without having to run two separate queries? Any help is appreciated, thank you.

    EXAMPLE: Say I have 7 items in my database, and I'm displaying 5 on the page to the user. These are being displayed:

        Apple
        Orange
        Kiwi
        Banana
        Grape

    But in the database I also have:

        Peach
        Blackberry

    Now, if the user deletes an item from their list, I want to add another item (based on a ranking the items have). The issue is that in order to know what is on the list at the moment, I send the remaining items to the server (say they deleted Kiwi: I would send Apple, Orange, Banana, and Grape). I then select the highest-ranked 5 items from the remaining 6, make sure they are not the ones already displayed on the page, and add the new one to the list (either Peach or Blackberry).

    All good and well, except that if both Peach and Blackberry now outrank Grape, the query would have searched:

        Apple
        Orange
        Banana
        Peach
        Blackberry

    and excluded:

        Apple
        Orange
        Banana
        Grape

    which leaves us with both Peach and Blackberry, instead of just Peach or Blackberry.
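    Since only one replacement is ever wanted, the cap can live in the query itself: exclude what is already on the page, order the survivors by rank, and LIMIT 1. A sketch with made-up table and column names:

        -- "Give me the single best item the user is not already seeing."
        SELECT id, name
        FROM items
        WHERE id NOT IN (1, 2, 4, 5)   -- Apple, Orange, Banana, Grape
        ORDER BY ranking DESC
        LIMIT 1;

    Even if several rankings shifted between the update and the AJAX call, LIMIT 1 guarantees exactly one row comes back (the better-ranked of Peach and Blackberry in the example).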


  • build SQL query string using user input

    - by user175084
    I have to build a query string using the values the user selects on the web page. Suppose I need to display files for multiple machines with different search criteria. I currently use this code:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager
            .ConnectionStrings["DBConnectionString"].ConnectionString;
        connection.Open();

        SqlCommand sqlCmd = new SqlCommand(
            "SELECT FileID FROM Files WHERE MachineID=@machineID AND date=@date",
            connection);
        SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
        sqlCmd.Parameters.AddWithValue("@machineID", machineID);
        sqlCmd.Parameters.AddWithValue("@date", date);
        sqlDa.Fill(dt);

    This is a fixed query where the user has just one machine and selects just one date. I want to make a query where the user has multiple optional search criteria, like type or size, and can also select multiple machines:

        SELECT FileID FROM Files
        WHERE (MachineID=@machineID1 OR MachineID=@machineID2 ...)
          AND (date=@date AND size=@size AND type=@type ...)

    All of this happens at runtime; otherwise I have to write a for loop to handle multiple machines one by one, and keep multiple queries around for each case the user might select. This is quite interesting and I could use some help... thanks.
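    The usual pattern is to grow the SQL text and the parameter list together, adding a clause only when the user supplied a value. A sketch (collection names like machineIDs are assumptions for illustration):

        // Build "MachineID IN (@m0, @m1, ...)" with one parameter per machine,
        // then append each optional filter only if it was selected.
        var sql = new StringBuilder("SELECT FileID FROM Files WHERE MachineID IN (");
        for (int i = 0; i < machineIDs.Count; i++)
            sql.Append(i == 0 ? "@m" + i : ",@m" + i);
        sql.Append(") AND date=@date");
        if (size != null) sql.Append(" AND size=@size");
        if (type != null) sql.Append(" AND type=@type");

        using (SqlCommand cmd = new SqlCommand(sql.ToString(), connection))
        {
            for (int i = 0; i < machineIDs.Count; i++)
                cmd.Parameters.AddWithValue("@m" + i, machineIDs[i]);
            cmd.Parameters.AddWithValue("@date", date);
            if (size != null) cmd.Parameters.AddWithValue("@size", size);
            if (type != null) cmd.Parameters.AddWithValue("@type", type);
            new SqlDataAdapter(cmd).Fill(dt);
        }

    Only the SQL text is assembled dynamically; every user value still travels as a parameter, so the query stays safe from injection.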


  • SQL Server problems reading columns with a foreign key

    - by illdev
    I have a weird situation where simple queries seem to never finish. For instance:

        SELECT TOP 100 ArticleID FROM Article WHERE ProductGroupID=379114   -- returns immediately
        SELECT TOP 1000 ArticleID FROM Article WHERE ProductGroupID=379114  -- never returns
        SELECT ArticleID FROM Article WHERE ProductGroupID=379114           -- never returns
        SELECT TOP 1000 ArticleID FROM Article                              -- returns immediately

    By "returning" I mean that in Query Analyzer the green check mark appears and it says "Query executed successfully". I sometimes get the rows painted to the grid in QA, but the query still waits until my client times out (server and browser are on different machines, so I am not sure how this happens!). This query sometimes shows the same behavior:

        SELECT ProductGroupID AS Product23_1_, ArticleID AS ArticleID1_,
               ArticleID AS ArticleID18_0_, Inventory_Name AS Inventory3_18_0_,
               Inventory_UnitOfMeasure AS Inventory4_18_0_, BusinessKey AS Business5_18_0_,
               Name AS Name18_0_, ServesPeople AS ServesPe7_18_0_, InStock AS InStock18_0_,
               Description AS Descript9_18_0_, Description2 AS Descrip10_18_0_,
               TechnicalData AS Technic11_18_0_, IsDiscontinued AS IsDisco12_18_0_,
               Release AS Release18_0_, Classifications AS Classif14_18_0_,
               DistributorName AS Distrib15_18_0_, DistributorProductCode AS Distrib16_18_0_,
               Options AS Options18_0_, IsPromoted AS IsPromoted18_0_,
               IsBulkyFreight AS IsBulky19_18_0_, IsBackOrderOnly AS IsBackO20_18_0_,
               Price AS Price18_0_, Weight AS Weight18_0_, ProductGroupID AS Product23_18_0_,
               ConversationID AS Convers24_18_0_, DistributorID AS Distrib25_18_0_,
               type AS Type18_0_
        FROM Article AS articles0_
        WHERE (IsDiscontinued = '0') AND (ProductGroupID = 379121)

    I have no idea what is going on. Probably SELECT is broken ;) Can anyone tell me how to handle such a situation? More info, anyone?
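    A fast TOP 100 but a hanging TOP 1000 on the same predicate is the classic signature of lock blocking: the larger scan eventually reaches rows an uncommitted transaction holds locks on, and the session simply waits. Two quick diagnostics to run on SQL Server 2000 while a query is hanging (the database name is a placeholder):

        EXEC sp_who2;            -- a number in the BlkBy column names the blocking SPID
        DBCC OPENTRAN ('MyDb');  -- shows the oldest active transaction in the database

    If a blocker shows up, the fix lies in whatever application left that transaction open, not in the SELECT.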


  • Performance Comparison of Shell Scripts vs high level interpreted langs (C#/Java/etc.)

    - by dferraro
    Hi all. First: this is not meant to be an ignorant "which is better" war thread, but rather help in making an architecture decision and the argument to put forward to my boss. Skipping the details, I would simply love to find the results of anyone who has done performance comparisons of shell scripts vs. a general-purpose interpreted language such as C# or Java. Surprisingly, time spent on Google and searching here turned up none of this data. Has anyone done these comparisons for different use cases? For example: hitting a database in a loop doing different types of SQL (Oracle preferred, but MSSQL would do) across the CRUD operations; and also, without a database, a regular 50k-iteration loop doing different types of calculations.

    In particular, right now I need a comparison of hitting an Oracle DB from a shell script vs., let's say, C# (again, any interpreted general-purpose language would be fine, even higher-level ones like Python). But I also need to know about standard programming calculations and instructions.

    Before you ask "why not just write a quick test yourself?": I've been a Windows developer my whole life and career and have very limited knowledge of shell scripting, not to mention *nix as a whole. So asking the question here of the more experienced guys would be greatly beneficial, not to mention time-saving, as we are in near-perpetual deadline crunch as it is ;). Thanks so much in advance.


  • How do I Order on common attribute of two models in the DB?

    - by Will
    I have two tables, Books and CDs, with corresponding models. I want to display a combined list of books and CDs to the user, sorted on common attributes (release date, genre, price, etc.). I also have basic filtering on the common attributes. The list will be large, so I will be using pagination to manage the load.

        items = []
        items << CD.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        items << Book.all(:limit => 20, :page => params[:page], :order => "genre ASC")
        re_sort(items, "genre ASC")

    Right now I am doing two queries, concatenating them, and then sorting. This is very inefficient, and it breaks down when I use paging and filtering: if I am on page 2 of the combined list, how do I know what page of each individual table I am really on? There is no way to determine that without getting all items from each table.

    I have thought about creating a new class, Item, with a one-to-one relationship to either a Book or a CD, and doing something like:

        Item.all(:limit => 20, :page => params[:page],
                 :include => [:books, :cds], :order => "genre ASC")

    However, this gives an ambiguous-column error, so it can only be written as:

        Item.all(:limit => 20, :page => params[:page],
                 :include => [:books, :cds], :order => "books.genre ASC")

    and that does not interleave the books and CDs the way I want. Any suggestions?
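    The Item idea works once the common, sortable attributes live on items itself rather than on the children; then one query sorts and paginates both types together. A sketch (schema and names are hypothetical, Rails 2-style syntax to match the post):

        # items: genre, price, release_date, itemable_type, itemable_id
        class Item < ActiveRecord::Base
          belongs_to :itemable, :polymorphic => true   # points at a Book or a CD
        end

        class Book < ActiveRecord::Base
          has_one :item, :as => :itemable
        end

        class CD < ActiveRecord::Base
          has_one :item, :as => :itemable
        end

        # One query, correctly interleaved and pageable:
        @items = Item.all(:order => "genre ASC",
                          :limit => 20,
                          :offset => (params[:page].to_i - 1) * 20)

    The cost is denormalization: the shared columns must be copied into items whenever a Book or CD is saved (an after_save callback is the usual place for that).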


  • An Erroneous SQL Query makes browser hang until script timeout exceeded

    - by Jimbo
    I have an admin page in a Classic ASP web application that allows the admin user to run queries against the database (SQL Server 2000). What's really strange is that if the query you send has an error in it (an invalid table join, a column you've forgotten to group by, etc.), the BROWSER hangs (CPU usage goes to maximum) until the SERVER script timeout is exceeded, and it then spits out a timeout-exceeded error. The server and browser are on different machines, so I am not sure how this happens! I have tried this in IE 8 and FF 3 with the same result.

    If you run that same query (with errors) directly from SQL Enterprise Manager, it returns the real error immediately. Is this a security feature? Does anyone know how to turn it off? It even happens when the connection to the database uses 'sa' credentials, so I don't think it's a security setting :(

        Dim oRS
        Set oRS = Server.CreateObject("ADODB.Recordset")
        oRS.ActiveConnection = sConnectionString

        ' run the query - this is for the admin only, so it doesn't check for safe SQL commands etc.
        oRS.Open Request.Form("txtSQL")

        If Not oRS.EOF Then
            ' list the field names from the recordset
            For i = 0 To oRS.Fields.Count - 1
                Response.Write oRS.Fields(i).Name & "&nbsp;"
            Next

            ' show the data for each record in the recordset
            While Not oRS.EOF
                For i = 0 To oRS.Fields.Count - 1
                    Response.Write oRS.Fields(i).Value & "&nbsp;"
                Next
                Response.Write "<br />"
                oRS.MoveNext
            Wend
        End If
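    One thing worth trying before hunting for a setting: trap the ADO error explicitly so the page reports it instead of spinning until the script timeout. A sketch in the same VBScript style (not from the original post):

        On Error Resume Next
        oRS.Open Request.Form("txtSQL")
        If Err.Number <> 0 Then
            Response.Write "SQL error: " & Server.HTMLEncode(Err.Description)
            Response.End
        End If
        On Error GoTo 0

    If the error text comes straight back, the hang lives in the rendering loop below the Open rather than in SQL Server, which narrows the search considerably.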


  • How can I get a iterable resultset from the database using pdo, instead of a large array?

    - by Tchalvak
    I'm using PDO inside a database abstraction library function, query. I'm using fetchAll(), which can get memory-intensive when there are a lot of results, so I want to provide an argument to toggle between a fetchAll associative array and a PDO result set that can be iterated over with foreach and requires less memory (somehow). I remember hearing about this, and I searched the PDO docs, but I couldn't find any useful way to do it. Does anyone know how to get an iterable result set back from PDO instead of just a flat array? And am I right that an iterable result set will be easier on memory? I'm using PostgreSQL, if it matters in this case.

    The current query function is as follows, just for clarity:

        /**
         * Running bound queries on the database.
         *
         * Use: query('select all from players limit :count', array('count'=>10));
         * Or:  query('select all from players limit :count', array('count'=>array(10, PDO::PARAM_INT)));
         **/
        function query($sql_query, $bindings = array()) {
            DatabaseConnection::getInstance();
            $statement = DatabaseConnection::$pdo->prepare($sql_query);
            foreach ($bindings as $binding => $value) {
                if (is_array($value)) {
                    $statement->bindParam($binding, $value[0], $value[1]);
                } else {
                    $statement->bindValue($binding, $value);
                }
            }
            $statement->execute();
            // TODO: Return an iterable resultset here, and allow switching
            // between array and iterable resultset.
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        }
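    PDOStatement itself implements Traversable, so the executed statement is already the iterable result set: return it, and foreach pulls one row at a time. A sketch of the toggle (the $iterable flag is an invented parameter):

        function query($sql_query, $bindings = array(), $iterable = false) {
            // ... prepare and bind exactly as above ...
            $statement->execute();
            if ($iterable) {
                $statement->setFetchMode(PDO::FETCH_ASSOC);
                return $statement;   // foreach (query($sql, $b, true) as $row) { ... }
            }
            return $statement->fetchAll(PDO::FETCH_ASSOC);
        }

    Whether this actually saves memory depends on the driver: it always avoids building the big PHP array, but the pgsql driver still buffers the full result client-side unless you page through it with a server-side cursor (DECLARE ... FETCH).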


  • MySQL move data from one table to another, matching ID's

    - by Reveller
    I have (among others) two MySQL tables with (among others) the following columns:

        tweets:
        -------------------------------------------
        id  text         from_user_id  from_user
        -------------------------------------------
        1   Cool tweet!  13295354      tradeny
        2   Tweeeeeeeet  43232544      bolleke
        3   Yet another  13295354      tradeny
        4   Something..  53546443      janusz4

        users:
        -------------------------------------------
        id  from_user  num_tweets  from_user_id
        -------------------------------------------
        1   tradeny    2235
        2   bolleke    432
        3   janusz4    5354

    I now want to normalize the tweets table, replacing tweets.from_user with an integer that matches users.id. Secondly, I want to fill in the corresponding users.from_user_id. Finally, I want to delete tweets.from_user_id, so that the end result looks like:

        tweets:
        ------------------------
        id  text         from_user
        ------------------------
        1   Cool tweet!  1
        2   Tweeeeeeeet  2
        3   Yet another  1
        4   Something..  3

        users:
        -------------------------------------------
        id  from_user  num_tweets  from_user_id
        -------------------------------------------
        1   tradeny    2235        13295354
        2   bolleke    432         43232544
        3   janusz4    5354        53546443

    My question is whether you could help me form the proper queries for this. I have only come this far:

        UPDATE tweets SET from_user =
            (SELECT id FROM users WHERE from_user = tweets.from_user) WHERE...
        UPDATE users SET from_user_id =
            (SELECT from_user_id FROM tweets WHERE from_user = tweets.from_user) WHERE...
        ALTER TABLE tweets DROP from_user_id

    Any help would be greatly appreciated :-)
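    MySQL's multi-table UPDATE handles both steps without correlated subqueries; the order matters, since users.from_user_id must be captured before tweets.from_user stops being a username. A sketch:

        -- 1. Copy the numeric id over to users while the join key still exists.
        UPDATE users u
        JOIN tweets t ON t.from_user = u.from_user
        SET u.from_user_id = t.from_user_id;

        -- 2. Repoint tweets.from_user at users.id.
        UPDATE tweets t
        JOIN users u ON u.from_user = t.from_user
        SET t.from_user = u.id;

        -- 3. Drop the now-redundant column.
        ALTER TABLE tweets DROP COLUMN from_user_id;

    Since from_user now holds integers, a final ALTER TABLE to change its type to INT (and add a foreign key to users.id) would finish the normalization.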


  • PyGTK: Manually render an existing widget at a given Rectangle? (TextView in a custom CellRenderer)

    - by NicDumZ
    Hello! I am trying to draw a TextView into the cell of a TreeView. (Why? I would like to have custom tags on the text, so pieces of it can be clickable and trigger different actions or popup menus depending on where the user clicks.) I have been trying to write a customized CellRenderer for this purpose, but so far I have failed, because I find it extremely difficult to find generic documentation on rendering design in GTK.

    More than an answer to the specific question (that might be hard or not doable, and I'm not expecting you to do everything for me), I am first looking for documentation on how a widget is rendered, to understand how one is supposed to implement a CellRenderer. Can you share any links that explain the rendering mechanism, either for GTK or for PyGTK? More specifically:

    - The size allocation mechanism (should I say protocol?). I understand that a window has a defined size and then queries its children, saying "my size is w x h, what would be your ideal size, buddy?", and sometimes shrinks children when they can't all fit at their ideal sizes. Any specific documentation on that, and in particular on when this happens during rendering?
    - How are "builtin" widgets rendered? What kind of methods do they call on the Widget base class? On the parent Window? When?
    - Do they use pango.Layout? Can you manually draw a TextView onto a pango.Layout object? This link gives an interesting example showing how you can draw content in a pango.Layout object and use it in a CellRenderer. I guess I could adapt it if only I understood how TextView widgets are rendered.

    Or perhaps, to put it more simply: given an existing widget instance, how does one render it at a specific gdk.Rectangle? Thanks a lot.
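    On the PyGTK side specifically, gtk.GenericCellRenderer is the intended extension point: you implement on_get_size and on_render, and inside on_render you are free to build a pango.Layout and paint it into the supplied cell_area rectangle. A stripped-down sketch (property plumbing abbreviated; consult the GenericCellRenderer docs for the full set of hooks):

        import gtk, gobject

        class TaggedTextRenderer(gtk.GenericCellRenderer):
            __gproperties__ = {
                'text': (gobject.TYPE_STRING, 'text', 'cell text', '',
                         gobject.PARAM_READWRITE),
            }

            def __init__(self):
                gtk.GenericCellRenderer.__init__(self)
                self.text = ''

            def do_set_property(self, prop, value):
                setattr(self, prop.name, value)

            def do_get_property(self, prop):
                return getattr(self, prop.name)

            def on_get_size(self, widget, cell_area):
                layout = widget.create_pango_layout(self.text)
                w, h = layout.get_pixel_size()
                return (0, 0, w + 4, h + 4)

            def on_render(self, window, widget, bg_area, cell_area, expose_area, flags):
                layout = widget.create_pango_layout(self.text)
                widget.style.paint_layout(window, gtk.STATE_NORMAL, True,
                                          expose_area, widget, 'cell',
                                          cell_area.x + 2, cell_area.y + 2, layout)

        gobject.type_register(TaggedTextRenderer)

    Clickability then comes from the renderer's activatable mode plus on_activate (which receives the click event relative to cell_area), rather than from embedding a live TextView: a real widget renders into its own gdk window and cannot simply be painted at a foreign gdk.Rectangle.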


  • LINQ query to find if items in a list are contained in another list

    - by cjohns
    I have the following code:

        List<string> test1 = new List<string> { "@bob.com", "@tom.com" };
        List<string> test2 = new List<string> { "[email protected]", "[email protected]" };

    I need to remove anyone in test2 whose address has @bob.com or @tom.com. What I have tried is this:

        bool bContained1 = test1.Contains(test2);
        bool bContained2 = test2.Contains(test1);

    bContained1 = false but bContained2 = true. I would prefer not to loop through each list but instead use a LINQ query to retrieve the data. bContained1 is the same condition as the LINQ query I have created below:

        List<string> test3 = test1.Where(w => !test2.Contains(w)).ToList();

    The query above works on an exact match but not partial matches. I have looked at other queries, but I can't find a close comparison to this with LINQ. Any ideas or anywhere you can point me to would be a great help.
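    Partial matching needs a predicate on each element of test2 against every entry of test1, which is what Any with EndsWith expresses (a sketch; case sensitivity follows string.EndsWith's default ordinal comparison):

        // Keep only the addresses whose domain is not on the block list.
        List<string> test3 = test2
            .Where(addr => !test1.Any(domain => addr.EndsWith(domain)))
            .ToList();

    This still iterates internally, as any solution must, but the looping stays inside the LINQ operators rather than in hand-written foreach blocks.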


  • Oracle User definied aggregate function for varray of varchar

    - by baju
    I am trying to write an aggregate function for a varray, and I get this error when I try to use it with data from the DB:

        ORA-00600: internal error code, arguments: [kodpunp1], [], [], [], [], [], [], [], [], [], [], []
        [koxsihread1], [0], [3989], [45778], [], [], [], [], [], [], [], []

    The code of the function is really simple (in fact it does nothing):

        create or replace TYPE "TEST_VECTOR" as varray(10) of varchar(20);

        ALTER TYPE "TEST_VECTOR" MODIFY LIMIT 4000 CASCADE;

        create or replace type Test as object(
            lastVector TEST_VECTOR,
            STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number,
            MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number,
            MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number,
            MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number
        );

        create or replace type body Test is
            STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number is
            begin
                sctx := Test(TEST_VECTOR());
                return ODCIConst.Success;
            end;

            MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number is
            begin
                self.lastVector := value;
                return ODCIConst.Success;
            end;

            MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number is
            begin
                return ODCIConst.Success;
            end;

            MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number is
            begin
                returnValue := self.lastVector;
                return ODCIConst.Success;
            end;
        end;

        create or replace FUNCTION test_fn (input TEST_VECTOR) RETURN TEST_VECTOR
            PARALLEL_ENABLE AGGREGATE USING Test;

    Next I create some test data:

        create table t1_test_table(
            t1_id number not null,
            t1_value TEST_VECTOR not null,
            Constraint PRIMARY_KEY_1 PRIMARY KEY (t1_id)
        );

        insert into t1_test_table (t1_id, t1_value) values (1, TEST_VECTOR('x','y','z'));

    Now everything is prepared to perform queries:

        Select test_fn(TEST_VECTOR('y','x')) from dual;                -- works well
        Select test_fn(t1_value) from t1_test_table where t1_id = 1;  -- fails

    Version of Oracle DBMS I use: 11.2.0.3.0. Has anyone tried such a thing? What could be the reason it does not work, and how can I solve it? Thanks in advance for your help.
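    One pattern stands out: the failing case is the one that reads an evolved type from a table. ORA-00600 errors around type access are sometimes tied to type evolution (the ALTER TYPE ... MODIFY LIMIT ... CASCADE step), so a cheap experiment (an untested hypothesis, not a documented fix) is to create the varray at its final size and skip the evolution entirely:

        -- Recreate everything with the target limit declared up front.
        drop function test_fn;
        drop table t1_test_table;
        drop type Test;
        drop type TEST_VECTOR force;

        create or replace TYPE "TEST_VECTOR" as varray(4000) of varchar(20);
        -- ... then recreate Test, test_fn and t1_test_table exactly as above ...

    If the query then succeeds, the evolved type metadata was the trigger, and the case is worth raising with Oracle Support.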


  • "Most popular" GROUP BY in LINQ?

    - by tags2k
    Assuming a table of tags like the Stack Overflow question tags:

        TagID (bigint), QuestionID (bigint), Tag (varchar)

    what is the most efficient way to get the 25 most-used tags using LINQ? In SQL, a simple GROUP BY will do:

        SELECT Tag, COUNT(Tag) FROM Tags GROUP BY Tag

    I've written some LINQ that works:

        var groups = from t in DataContext.Tags
                     group t by t.Tag into g
                     select new { Tag = g.Key, Frequency = g.Count() };

        return groups.OrderByDescending(g => g.Frequency).Take(25);

    Like, really? Isn't this mega-verbose? The sad thing is that I'm doing this to save a massive number of queries, as my Tag objects already contain a Frequency property that would otherwise need to check back with the database for every Tag if I actually used the property. So I then parse these anonymous types back into Tag objects:

        groups.OrderByDescending(g => g.Frequency).Take(25).ToList()
              .ForEach(t => tags.Add(new Tag() { Tag = t.Tag, Frequency = t.Frequency }));

    I'm a LINQ newbie, and this doesn't seem right. Please show me how it's really done.
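    The shape is about as small as LINQ allows, but the copy step can fold into the query. One caveat: if Tag is a LINQ to SQL mapped entity, constructing it inside the translated query throws ("explicit construction of entity type is not allowed"), so the construction has to happen after an AsEnumerable() boundary. A sketch:

        List<Tag> tags = DataContext.Tags
            .GroupBy(t => t.Tag)
            .OrderByDescending(g => g.Count())
            .Take(25)
            .Select(g => new { g.Key, Count = g.Count() })  // runs in SQL
            .AsEnumerable()                                 // switch to objects
            .Select(x => new Tag { Tag = x.Key, Frequency = x.Count })
            .ToList();

    The grouping, ordering, and Take(25) still execute server-side as one query; only 25 tiny rows cross the boundary to become Tag objects.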


  • When to use @Singleton in a Jersey resource

    - by dexter
    I have a Jersey resource that accesses the database. Basically it opens a database connection when the resource is initialized and performs queries in the resource's methods. I have observed that when I do not use @Singleton, the database is opened on each request, and we know opening a connection is really expensive, right?

    So my question is: should I make the resource a singleton, or is it really better to keep it per-request, especially when the resource connects to the database? My resource code looks like this:

        // Use @Singleton here or not?
        @Path("/myservice/")
        public class MyResource {

            private ResponseGenerator responser;
            private Log logger = LogFactory.getLog(MyResource.class);

            public MyResource() {
                responser = new ResponseGenerator();
            }

            @GET
            @Path("/clients")
            public String getClients() {
                logger.info("GETTING LIST OF CLIENTS");
                return responser.returnClients();
            }

            // ... some more methods ...
        }

    And I connect to the database using code similar to this:

        public class ResponseGenerator {

            private Log logger = LogFactory.getLog(ResponseGenerator.class);
            private Connection conn;
            private PreparedStatement prepStmt;
            private ResultSet rs;

            public ResponseGenerator() {
                try {
                    Class.forName("org.h2.Driver");
                    conn = DriverManager.getConnection("jdbc:h2:testdb");
                } catch (ClassNotFoundException | SQLException e) {
                    throw new RuntimeException("could not open connection", e);
                }
            }

            public String returnClients() {
                String result = null;
                try {
                    prepStmt = conn.prepareStatement("SELECT * FROM hosts");
                    rs = prepStmt.executeQuery();
                    // ... do some processing here ...
                } catch (SQLException se) {
                    logger.warn("Some message");
                } finally {
                    try { if (rs != null) rs.close(); } catch (SQLException ignored) {}
                    try { if (prepStmt != null) prepStmt.close(); } catch (SQLException ignored) {}
                    // should I also close the connection here (in every method) if I
                    // stick to per-request, and get a connection at the start of
                    // every method?
                    // conn.close();
                }
                return result;
            }

            // ... some more methods ...
        }

    Some comments on best practices for the code would also be helpful.
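    Neither option is great as written: a @Singleton resource pins one Connection for the whole application (and JDBC connections are not safe for concurrent use), while per-request construction reopens it constantly. The conventional middle ground is a shared connection pool; the resource stays per-request and cheap, and each method borrows a pooled connection. A sketch (the DataSource wiring and buildResponse helper are assumptions; any pool such as H2's JdbcConnectionPool or DBCP provides the DataSource):

        public String returnClients() {
            // Borrow from the pool; try-with-resources returns/closes everything.
            try (Connection conn = dataSource.getConnection();
                 PreparedStatement ps = conn.prepareStatement("SELECT * FROM hosts");
                 ResultSet rs = ps.executeQuery()) {
                return buildResponse(rs);  // placeholder for the post's processing
            } catch (SQLException se) {
                logger.warn("query failed", se);
                return "";
            }
        }

    Here conn.close() is correct in every method: for a pooled DataSource it just returns the connection to the pool instead of tearing it down.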


  • How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions?

    - by Olfan
    Long speech short: how can I make hundreds of simultaneously running processes communicate with a database through one or a few permanent sessions?

    The whole story: I once built a number-crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring, and result propagation happen in an Oracle database, which all (sub-)processes access at various times using an application-specific module that encapsulates DBI. This worked well at first, but now, with higher volumes of input data, the number of database sessions constantly being opened and closed (one per child, and they can be very short-lived) is becoming an issue.

    I now want to centralise database access so that only one or a few fixed database sessions handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the change easy, because the function calls in the worker instances can stay the same.

    My problem is that I cannot think of a suitable way to enhance said module to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or a few database connectors such that bidirectional communication is possible (for collecting query results). An asynchronous approach could help, in that all requests are written to the same queue and the database connector servicing a request "calls back" to submit the result, but my mind fails to generate an image clear enough to paint into code. Threading instead of forking might have given me an easier start, but it would now require massive changes to a code base that I'm not prepared to make on a live system.

    The more I think of it, the more the base idea looks to me like a pre-forked web server, only one that serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?
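    On the "ready solutions on CPAN" front, DBI ships with exactly this pre-forked-proxy shape: DBI::ProxyServer runs a standalone daemon that owns the real database handles, and DBD::Proxy makes the workers talk to it over TCP while their DBI code stays untouched. A sketch (host, port, and DSN are placeholders):

        # On the machine that should hold the few permanent sessions:
        #   dbiproxy --localport 3334

        # In each worker, only the connect string changes:
        use DBI;
        my $dbh = DBI->connect(
            "dbi:Proxy:hostname=dbhost;port=3334;dsn=dbi:Oracle:orcl",
            $user, $password,
        ) or die $DBI::errstr;
        # prepare/execute/fetch calls are forwarded unchanged.

    Since the abstraction module already centralises the connect call, switching the DSN there would let the whole herd be tried against a proxy without touching the workers.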


  • Statistical analysis on large data set to be published on the web

    - by dassouki
    I have a non-computer-related data logger that collects data from the field. This data is stored as text files, and I manually lump the files together and organize them. The current format is one CSV file per year per logger. Each file is around 4,000,000 lines x 7 loggers x 5 years = a lot of data. Some of the data is organized as bins (item_type, item_class, item_dimension_class), and other data is more unique, such as item_weight, item_color, date_collected, and so on.

    Currently, I do statistical analysis on the data using a Python/NumPy/matplotlib program I wrote. It works fine, but the problem is that I'm the only one who can use it, since it and the data live on my computer.

    I'd like to publish the data on the web using a Postgres DB; however, I need to find or implement a statistical tool that'll take a large Postgres table and return statistical results within an adequate time frame. I'm not familiar with Python for the web; however, I'm proficient with PHP on the web side and Python on the offline side. Users should be allowed to create their own histograms and data analyses: for example, one user can search for all blue items shipped between week x and week y, while another sorts the weight distribution of all items by hour for the whole year.

    I was thinking of creating and indexing my own statistical tools, or automating the process somehow to emulate most queries. This seemed inefficient. I'm looking forward to hearing your ideas. Thanks
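    Most of the described analyses reduce to filtered aggregates, which Postgres can serve directly if the filter columns are indexed; the PHP layer then only renders the returned buckets. A sketch of the kind of query behind a user-built histogram (column names follow the post where possible, the table name is invented):

        -- Weight distribution by hour for blue items in a date window.
        SELECT date_trunc('hour', date_collected) AS bucket,
               count(*)                           AS items,
               avg(item_weight)                   AS avg_weight
        FROM logged_items
        WHERE item_color = 'blue'
          AND date_collected BETWEEN '2009-01-05' AND '2009-01-19'
        GROUP BY bucket
        ORDER BY bucket;

    With roughly 140M rows total (4M lines x 7 loggers x 5 years), a composite index on (item_color, date_collected), or partitioning by year to mirror the current one-file-per-year layout, is what keeps such queries inside an adequate time frame.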


  • Incrementing value by one over a lot of rows

    - by Andy Gee
    Edit: I think the answer to my question lies in the ability to set user-defined variables in MySQL through PHP; the answer by Multifarious has pointed me in this direction.

    Currently I have a script to cycle over 10M records, and it's very slow. It goes like this: I first get a block of 1000 results in an array similar to this:

        $matches[] = array('quality_rank' => 46732, 'db_id' => 5532);
        $matches[] = array('quality_rank' => 12324, 'db_id' => 1234);
        $matches[] = array('quality_rank' => 45235, 'db_id' => 8345);
        $matches[] = array('quality_rank' => 75543, 'db_id' => 2562);

    I then cycle through them one by one and update each record:

        $mult = count($matches) * 2;
        foreach ($matches as $m) {
            $rank++;
            $score = (($m['quality_rank'] + $rank) / $mult) * 100;
            $s = "UPDATE `packages_sorted`
                  SET `price_rank` = '" . $rank . "', `deal_score` = '" . $score . "'
                  WHERE `db_id` = '" . $m['db_id'] . "' LIMIT 1";
        }

    This seems like a very slow way of doing it, but I can't find another way to increment the price_rank field by one each time. Can anyone suggest a better method?

    Note: although I wouldn't usually store this kind of value in a database, I really do need it on this occasion, for comparison search queries later in the project. Any help would be kindly appreciated :)
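    User variables do let MySQL make the whole pass in one statement: initialise a counter, then let the UPDATE increment it per row in a defined order. A sketch (the ORDER BY column is a guess at how the 1000-row blocks were sorted):

        SET @rank := 0;
        SET @mult := (SELECT COUNT(*) FROM packages_sorted) * 2;

        UPDATE packages_sorted
        SET price_rank = (@rank := @rank + 1),
            deal_score = ((quality_rank + @rank) / @mult) * 100
        ORDER BY quality_rank;

    One statement replaces 10M round trips, and no PHP-side batching is needed. The one care point is that incrementing @rank inside a statement relies on MySQL evaluating the SET clauses in order, which holds in practice but is worth verifying on the target version.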

