Search Results

Search found 17593 results on 704 pages for 'wmi query'.

Page 652/704

  • Append data to same text file

    - by Manu
    ```java
    SimpleDateFormat formatter = new SimpleDateFormat("ddMMyyyy_HHmmSS");
    String strCurrDate = formatter.format(new java.util.Date());
    String strfileNm = "Customer_" + strCurrDate + ".txt";
    String strFileGenLoc = strFileLocation + "/" + strfileNm;
    String Query1 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
    String Query2 = "select '0'||to_char(sysdate,'YYYYMMDD')||'123456789' class_code from dual";
    try {
        Statement stmt = null;
        ResultSet rs = null;
        Statement stmt1 = null;
        ResultSet rs1 = null;
        stmt = conn.createStatement();
        stmt1 = conn.createStatement();
        rs = stmt.executeQuery(Query1);
        rs1 = stmt1.executeQuery(Query2);
        File f = new File(strFileGenLoc);
        OutputStream os = (OutputStream) new FileOutputStream(f, true);
        String encoding = "UTF8";
        OutputStreamWriter osw = new OutputStreamWriter(os, encoding);
        BufferedWriter bw = new BufferedWriter(osw);
        while (rs.next()) {
            bw.write(rs.getString(1) == null ? "" : rs.getString(1));
            bw.write(" ");
        }
        bw.flush();
        bw.close();
    } catch (Exception e) {
        System.out.println("Exception occured while getting resultset by the query");
        e.printStackTrace();
    } finally {
        try {
            if (conn != null) {
                System.out.println("Closing the connection" + conn);
                conn.close();
            }
        } catch (SQLException e) {
            System.out.println("Exception occured while closing the connection");
            e.printStackTrace();
        }
    }
    return objArrayListValue;
    ```

    The above code works fine: it writes the content of the `rs` result set to a text file. What I want now is to append the content of the second result set (`rs1` in the code above) to the same text file, i.e. append the `rs1` rows after the `rs` rows in one file.
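
    A minimal sketch of one way to do this, reusing the variables from the code above: write the second result set through the same `BufferedWriter` before flushing and closing it.

    ```java
    // Sketch: write both result sets through the same BufferedWriter,
    // then flush and close once at the end.
    while (rs.next()) {
        bw.write(rs.getString(1) == null ? "" : rs.getString(1));
        bw.write(" ");
    }
    // Append the second result set's rows after the first.
    while (rs1.next()) {
        bw.write(rs1.getString(1) == null ? "" : rs1.getString(1));
        bw.write(" ");
    }
    bw.flush();
    bw.close();
    ```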

  • How to view my Android app created database, via my android app?

    - by suufang
    I'm creating an application that collects the user's location (cell broadcast location code) and saves that location under a name of the user's choosing. For example, say my CB location code is 546034; my app lets me store that code under a name of my choice, say 'Home'. So, essentially, my app has the following modules:

    1. Collect the user's CB location.
    2. Collect a custom name from the user for that location.
    3. Store these values in a database.

    I've succeeded in doing all of the above. The third module has a sub-module that gives the user the option of showing and deleting the database values (screenshot omitted here). Now, users should be able to view and delete entries when they choose the option 'View my database of locations'. I've learned how to query my database values, but I'm stuck with creating a list view and providing the delete option. My code for storing and querying the values goes as follows:

    ```java
    submit.setOnClickListener(new View.OnClickListener() {
        public void onClick(View arg0) {
            String locname = name.getText().toString();
            if (locname.length() == 0) {
                Toast.makeText(getBaseContext(),
                        "Please enter the location name, for example 'Home'.",
                        Toast.LENGTH_LONG).show();
            } else {
                SQLiteDatabase cd = openOrCreateDatabase("mydata", MODE_WORLD_READABLE, null);
                cd.execSQL("CREATE TABLE IF NOT EXISTS MLITable (CblocationCode INT(10), CblocationName VARCHAR);");
                cd.execSQL("INSERT INTO MLITable VALUES ('" + str + "','" + locname + "');");
                cd.close();
                Toast.makeText(getBaseContext(), "Value successfully entered.", Toast.LENGTH_LONG).show();
            }
        }
    });

    viewdb.setOnClickListener(new View.OnClickListener() {
        public void onClick(View arg0) {
            // Here comes the code for viewing the database and deleting entries.
            SQLiteDatabase db = openOrCreateDatabase("mydata", MODE_WORLD_READABLE, null);
            Cursor c = db.rawQuery("SELECT * FROM MLITable", null); // "MLITble" in the original was a typo
            db.close(); // note: the cursor must be read before the database is closed
        }
    });
    ```

    I can see that I've read the table values, but I'm not able to create a display for those values. Help required. Thank you.
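
    A minimal sketch of one way to display and delete the rows (not the asker's code; `listView` and the long-press gesture are assumptions): read the cursor into a list, show it with an `ArrayAdapter`, and delete the underlying row on long-press.

    ```java
    // Sketch: display MLITable rows in a ListView and delete on long-press.
    // Assumes this runs inside the Activity, with a ListView named listView.
    SQLiteDatabase db = openOrCreateDatabase("mydata", MODE_WORLD_READABLE, null);
    Cursor c = db.rawQuery("SELECT CblocationCode, CblocationName FROM MLITable", null);
    final ArrayList<String> rows = new ArrayList<String>();
    while (c.moveToNext()) {
        rows.add(c.getString(0) + " - " + c.getString(1));
    }
    c.close();
    db.close();

    final ArrayAdapter<String> adapter = new ArrayAdapter<String>(
            this, android.R.layout.simple_list_item_1, rows);
    listView.setAdapter(adapter);

    listView.setOnItemLongClickListener(new AdapterView.OnItemLongClickListener() {
        public boolean onItemLongClick(AdapterView<?> parent, View v, int pos, long id) {
            // Recover the location code from the displayed row text.
            String code = rows.get(pos).split(" - ")[0];
            SQLiteDatabase d = openOrCreateDatabase("mydata", MODE_WORLD_READABLE, null);
            d.delete("MLITable", "CblocationCode = ?", new String[] { code });
            d.close();
            rows.remove(pos);
            adapter.notifyDataSetChanged();
            return true;
        }
    });
    ```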

  • Use a foreign key mapping to get data from the other table using Python and SQLAlchemy.

    - by Az
    Hmm, the title was harder to formulate than I thought. Basically, I've got these simple classes mapped to tables using SQLAlchemy. I know they're missing a few items, but those aren't essential for highlighting the problem.

    ```python
    class Customer(object):
        def __init__(self, uid, name, email):
            self.uid = uid
            self.name = name
            self.email = email

        def __repr__(self):
            return str(self)

        def __str__(self):
            return "Cust: %s, Name: %s (Email: %s)" % (self.uid, self.name, self.email)
    ```

    The above is basically a simple customer with an id, a name and an email address.

    ```python
    class Order(object):
        def __init__(self, item_id, item_name, customer):
            self.item_id = item_id
            self.item_name = item_name
            self.customer = None

        def __repr__(self):
            return str(self)

        def __str__(self):
            return "Item ID %s: %s, has been ordered by customer no. %s" % (self.item_id, self.item_name, self.customer)
    ```

    This is the Order class that just holds the order information: an id, a name and a reference to a customer. The reference is initialised to None to indicate that the item doesn't have a customer yet; the code's job will be to assign each item a customer. The following code maps these classes to the respective database tables.

    ```python
    # SQLAlchemy database transmutation
    engine = create_engine('sqlite:///:memory:', echo=False)
    metadata = MetaData()

    customers_table = Table('customers', metadata,
        Column('uid', Integer, primary_key=True),
        Column('name', String),
        Column('email', String)
    )

    orders_table = Table('orders', metadata,
        Column('item_id', Integer, primary_key=True),
        Column('item_name', String),
        Column('customer', Integer, ForeignKey('customers.uid'))
    )

    metadata.create_all(engine)
    mapper(Customer, customers_table)
    mapper(Order, orders_table)
    ```

    Now if I do something like:

    ```python
    for order in session.query(Order):
        print order
    ```

    I can get a list of orders in this form:

        Item ID 1001: MX4000 Laser Mouse, has been ordered by customer no. 12

    What I want to do is find out customer 12's name and email address (which is why I used the ForeignKey into the Customer table). How would I go about it?
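
    One way to do this with the classic mappers the question already uses (a sketch; note that in SQLAlchemy 0.6+ `relation` is spelled `relationship`): map a `relation` to `Customer` on the `Order` mapper, keeping the raw foreign-key column under a different attribute name so the two don't collide.

    ```python
    from sqlalchemy.orm import mapper, relation, sessionmaker

    mapper(Customer, customers_table)
    mapper(Order, orders_table, properties={
        # Keep the plain FK integer reachable under another name...
        'customer_uid': orders_table.c.customer,
        # ...and let 'customer' resolve to the related Customer object.
        'customer': relation(Customer),
    })

    Session = sessionmaker(bind=engine)
    session = Session()

    for order in session.query(Order):
        if order.customer is not None:
            print order.customer.name, order.customer.email
    ```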

  • SELECT statement without duplicate rows on multiple joined tables

    - by theBo
    I have 4 tables built with JOINs, and I would like to SELECT DISTINCT rows on setsTbl.s_id so the sets always show, regardless of whether there's relational data against them or not. This is what I have at present; it displays the data, but not the entire distinct row set:

    ```sql
    SELECT setsTbl.s_id, setsTbl.setName,
           userProfilesTbl.no + ' ' + userProfilesTbl.surname AS Name,
           trainingTbl.t_date, userAssessmentTbl.o_id
    FROM userProfilesTbl
    LEFT OUTER JOIN userAssessmentTbl ON userProfilesTbl.UserId = userAssessmentTbl.UserId
    FULL OUTER JOIN trainingTbl ON userAssessmentTbl.tt_id = trainingTbl.tt_id
    RIGHT OUTER JOIN setsTbl ON trainingTbl.s_id = setsTbl.s_id
    WHERE (userProfilesTbl.st_id = @st_id AND userProfilesTbl.sh_id = @sh_id)
      AND (DATEPART(yyyy, t_date) = @y_date)
       OR (userAssessmentTbl.o_id IS NULL)
    ORDER BY setName ASC, t_date ASC
    ```

    With this statement I get some of the rows (the ones with data against them), but as stated, the s_id field does not come back distinct. The following inner-select statement works in part when used in SQL Query Analyzer and returns pretty much the data I require:

        s_id  setName  Name       o_id
        ----- -------- ---------- -----
        1     100      Barnes     2
        2     100      Beardsley  3
        3     101      Aldridge   1
        4     102      Molby      2
        5     102      Whelan     3

    but not when used outside of that environment.

    ```sql
    SELECT *
    FROM (
        SELECT userProfilesTbl.serviceNo + ' ' + userProfilesTbl.surname AS Name,
               userProfilesTbl.st_id, userProfilesTbl.sh_id,
               userAssessmentTbl.o_id, setsTbl.s_id, setsTbl.setName,
               ROW_NUMBER() OVER (PARTITION BY setsTbl.s_id ORDER BY setsTbl.s_id) r
        FROM userProfilesTbl
        LEFT OUTER JOIN userAssessmentTbl ON userProfilesTbl.UserId = userAssessmentTbl.UserId
        FULL OUTER JOIN trainingTbl ON userAssessmentTbl.tt_id = trainingTbl.tt_id
        RIGHT OUTER JOIN setsTbl ON trainingTbl.s_id = setsTbl.s_id
    ) x
    WHERE x.r = 1
    ```

    I'm not receiving any errors; it's just not displaying the data.

  • (Rails) Creating multi-dimensional hashes/arrays from a data set...?

    - by humble_coder
    Hi All, I'm having a bit of an issue wrapping my head around something. I'm currently using a hacked version of Gruff in order to accommodate scatter plots. That said, the data is entered in the form of:

    ```ruby
    g.data("Person1", [12, 32, 34, 55, 23], [323, 43, 23, 43, 22])
    ```

    ...where the first item is the ENTITY, the second item is the X-COORDs, and the third item is the Y-COORDs. I currently have a recordset of items from a table with the columns POINT, VALUE and TIMESTAMP. Due to the "complex" calculations involved, I must grab everything using a single query or risk way too much DB activity. That said, I have a list of items for which I need to dynamically collect all data from the recordset into a hash (or array of arrays) for the creation of the data items. I was thinking something like the following:

    ```ruby
    @h = {}
    e = Events.find_by_sql(my_query)
    e.each do |event|
      @h["#{event.Point}"][x] = event.timestamp
      @h["#{event.Point}"][y] = event.value
    end
    ```

    Obviously that's not the correct syntax, but that's where my brain is going. Could someone clean this up for me or suggest a more appropriate mechanism by which to accomplish this? Basically, the main goal is to keep the data for each point name grouped (but remember, the recordset has them all). Much appreciated.

    EDIT 1

    ```ruby
    g = Gruff::Scatter.new("600x350")
    g.title = self.name

    e = Event.find_by_sql(@sql)
    h = {}

    e.each do |event|
      h[event.Point.to_s] ||= {}
      # {k => v}; the original's {k, v} literal only parses on Ruby 1.8
      h[event.Point.to_s].merge!({ event.Timestamp.to_i => event.Value })
    end

    h.each do |p|
      logger.info p[1].values.inspect
      g.data(p[0], p[1].keys, p[1].values)
    end

    g.write(@chart_file)
    ```

  • Silverlight 4 webclient authentication - anyone have this working yet?

    - by Toran Billups
    So one of the best parts about the new Silverlight 4 beta is that they finally implemented the big missing feature of the networking stack: network credentials! In the code below I have a working request set up, but for some reason I get a "security error" when the request comes back. Is this because twitter.com rejected my API call, or is it something I'm missing in code? It might be worth pointing out that when I watch this code execute via Fiddler, it shows that the cross-domain XML file is pulled down successfully, but that is the last request shown by Fiddler...

    ```csharp
    public void RequestTimelineFromTwitterAPI()
    {
        WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp);

        WebClient myService = new WebClient();
        myService.AllowReadStreamBuffering = true;
        myService.UseDefaultCredentials = false;
        myService.Credentials = new NetworkCredential("username", "password");
        myService.OpenReadCompleted += new OpenReadCompletedEventHandler(TimelineRequestCompleted);
        myService.OpenReadAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml"));
    }

    public void TimelineRequestCompleted(object sender, System.Net.OpenReadCompletedEventArgs e)
    {
        // anytime I query e.Result I get a security error
    }
    ```

  • logic before dispatcher + controller?

    - by Spoonface
    I believe that for a typical MVC web application, the router/dispatcher routine is used to decide which controller is loaded, based primarily on the area requested in the URL by the user. However, in addition to checking the URL query string, I also like to use the dispatcher to check whether the user is currently logged in, and to use that to decide which controller is loaded. For example, if they are logged in and request the login page, the dispatcher would load their account instead. But is this a fairly non-standard design? Would it violate MVC in any way? I only ask because the examples I've read through this weekend have had no major calculations performed before the dispatcher routine; they commonly check whether the user is logged in per controller, and then redirect where necessary. But to me it seems odd to redirect a logged-in user from the login area to the account area if you could just load the account controller in the first place. I hope I've explained my consternation well enough, but could anyone offer some details on how they handle logged-in users and similar session data?

  • Is encrypting session id (or other authenticate value) in cookie useful at all?

    - by Ji
    In web development, when session state is enabled, a session id is stored in a cookie (in cookieless mode, the query string is used instead). In ASP.NET, the session id is encrypted automatically. There are plenty of topics on the internet about how you should encrypt your cookies, including the session id. I can understand why you would want to encrypt private info such as a date of birth, but private info shouldn't be stored in a cookie in the first place. So for other cookie values, such as the session id, what is the purpose of encryption? Does it add security at all? No matter how you secure it, it will be sent back to the server for decryption. To be more specific, compare these two approaches to authentication:

    1. Turn off session state (I don't want to deal with session timeouts any more); store some sort of id value in the cookie; on the server side, check whether the id value exists and matches, and if it does, authenticate the user; let the cookie expire when the browser session ends.
    2. The ASP.NET forms authentication mechanism (which relies on the session or session id, I think).

    Does the latter offer better security?

  • Indexing/performance strategies for vast amounts of the same value

    - by DrColossos
    Base information: this is in the context of the indexing process for OpenStreetMap data. To simplify the question: the core information is divided into 3 main types with the values "W", "R" and "N" (VARCHAR(1)). The table has somewhere around ~75M rows, and the rows with "W" make up ~42M of them. Existing indexes are not relevant to this question.

    Now the question itself: the indexing of the data is done via a procedure. Inside this procedure, there are some loops that do the following:

    ```sql
    SELECT * FROM table WHERE the_key = 'W';
    ```

    The results get looped over again, and the above query itself is also in a loop. This takes a lot of time and slows down the process massively. An index on the_key is obviously useless, since all the values that the index might use are the same ("W"). The script itself runs at a speed that is OK; only the SELECTing takes very long. Do I:

    - need to create a "special" kind of index that takes this into account and makes the SELECT quicker? If so, which one?
    - need to tune some of the server parameters (they are already tuned, and the results they deliver seem to be good; if needed, I can post them)?
    - have to live with the speed and simply get more hardware to gain more power (Tim Taylor grunt grunt)?

    Any alternatives to the above points (except rewriting it or not using it)?
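
    If the database is PostgreSQL (an assumption; OpenStreetMap imports commonly target it), a partial index is the usual tool for this pattern. On its own it cannot make `WHERE the_key = 'W'` selective, since ~56% of the table matches, but it keeps composite lookups within the "W" subset small and cheap. A sketch, with invented table and column names:

    ```sql
    -- A partial index only stores rows where the predicate holds, so it is
    -- roughly 42M entries instead of 75M, and only over the column you need.
    CREATE INDEX idx_w_rows ON planet_data (some_filter_column)
    WHERE the_key = 'W';

    -- Queries must repeat the predicate for the planner to consider the index:
    SELECT *
    FROM planet_data
    WHERE the_key = 'W'
      AND some_filter_column = 42;
    ```

    If the inner loop really does fetch all ~42M "W" rows every iteration, no index will rescue it; hoisting the SELECT out of the loop (or a cursor over one scan) is the bigger win.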

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;)

    Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in. They can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files?

    The next tool I tried was MS Access, and found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!). The import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with linked tables to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so, while Access would take anywhere from 2-5 minutes for the same data).

    (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

  • Why do I get null objects in a many-to-many bag?

    - by Jim Geurts
    I have a bag defined for a many-to-many list:

    ```xml
    <class name="Author" table="Authors">
      <id name="Id" column="AuthorId">
        <generator class="identity" />
      </id>
      <property name="Name" />
      <bag name="Books" table="Author_Book_Map" where="IsDeleted=0" fetch="join">
        <key column="AuthorId" />
        <many-to-many class="Book" column="BookId" where="IsDeleted=0" />
      </bag>
    </class>
    ```

    If I return all Author objects using something like the following, I get what initially appear to be duplicate Author records:

    ```csharp
    Session.Query<Author>().List<Author>()
    ```

    The extra Author objects are created when an author is mapped to Book objects that have IsDeleted = 1 as well as IsDeleted = 0. Rather than creating one Author object with an enumerable that contains only the books with IsDeleted = 0, it creates two Author objects. The first has a Books enumerable containing the books with IsDeleted = 0; the second contains an enumerable of null Book objects. Similarly, if an author has only one book map, and that map points to a book with IsDeleted = 1, then an Author object is returned whose Books collection holds one null object.

    I'm thinking part of the problem stems from the map-table rows satisfying the where condition on the bag element but not the where condition on the many-to-many element. This is happening with NHibernate version 3.0.0.4980. Is this a configuration issue or something else?

  • Determining Best Table Structure for MySQL Performance

    - by Joe Majewski
    I'm working on a browser-based RPG for one of my websites, and right now I'm trying to determine the best way to organize my SQL tables for performance and maintenance. Here's my question: does the number of columns in an SQL table affect the speed with which it can be queried?

    I am not a newbie when it comes to PHP or MySQL. I used to develop things with the common goal of getting them to work, but I've recently advanced to the stage where a functional program is not good enough unless it's fast and reliable. Anyway, right now I have a members table that has around 15 columns. It contains information such as the player's username, password, email, logins, page views, etcetera. It doesn't contain any information on the player's progress in the game, however. If I added columns for things such as army size, gold, turns and whatnot, then it could easily rise to around 40 or 50 total columns. Oh, and my database structure IS normalized.

    Will a table with 50 columns that gets constantly queried be a bad idea? Should I split it into two tables: one for the user's general information and one for the user's game statistics? I know I could check the query time myself, but I haven't actually created the tables yet, and I think I'd be better off with some professional advice on this important decision for my game. Thank you for your time! :)
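
    For what it's worth, here is a sketch of the split being considered (all table and column names invented): keep the hot, read-on-every-request account data in one table, and the volatile game state in a 1:1 companion keyed on the same id, so the wide, frequently updated columns stay out of the common query path.

    ```sql
    -- Core account data: read on every page view.
    CREATE TABLE members (
        member_id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        username  VARCHAR(30)  NOT NULL UNIQUE,
        password  CHAR(40)     NOT NULL,
        email     VARCHAR(100) NOT NULL
    ) ENGINE=InnoDB;

    -- Game state: 1:1 with members, touched only inside the game.
    CREATE TABLE member_stats (
        member_id INT UNSIGNED NOT NULL PRIMARY KEY,
        army_size INT UNSIGNED NOT NULL DEFAULT 0,
        gold      INT UNSIGNED NOT NULL DEFAULT 0,
        turns     INT UNSIGNED NOT NULL DEFAULT 0,
        FOREIGN KEY (member_id) REFERENCES members (member_id)
    ) ENGINE=InnoDB;

    -- Join only when both halves are actually needed:
    SELECT m.username, s.gold
    FROM members m
    JOIN member_stats s ON s.member_id = m.member_id
    WHERE m.member_id = 123;
    ```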

  • How do I implement Hibernate Pagination using a cursor (so the results stay consistent, despite new records being added)?

    - by hunterae
    Hey all, is there any way to maintain a database cursor using Hibernate between web requests? Basically, I'm trying to implement pagination, but the data being paged is constantly changing (i.e. new records are added into the database). We are trying to set it up such that when you do your initial search (returning a maximum of 5000 results) and you page through the results, the same records always appear on the same page (i.e. we're not re-running the query each time the next- and previous-page buttons are clicked).

    The way we're currently implementing this is by selecting (at most) 5000 primary keys from the table we're paging, storing those keys in memory, and then using 20 primary keys at a time to fetch their details from the database. However, we want to get away from having to store these keys in memory and would much prefer a database cursor that we just keep going back to, moving backwards and forwards over it to generate pages.

    I tried doing this with Hibernate's ScrollableResults, but found that calling methods like next() and previous() causes an exception if you are within a different web request / Hibernate session (no surprise there). Is there any way to reattach a ScrollableResults object to a Session, much the same way you would reattach a detached database object to make it persistent? Are there any other approaches for implementing this data paging with consistent paging results, without caching the primary keys?
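
    One alternative that avoids both a server-side cursor and caching 5000 keys: freeze the result set by remembering the highest primary key that existed at search time, then page normally inside that bound. A sketch using only the standard Hibernate Query API; the entity name `Record` and the assumption of a monotonically increasing `id` are invented for illustration.

    ```java
    // Sketch: snapshot the max id once, when the search is first run.
    Long maxId = (Long) session.createQuery(
            "select max(r.id) from Record r /* plus the original filters */")
            .uniqueResult();
    // Store just this one Long per search (e.g. in the HTTP session).

    // Each page request: an ordinary offset query, bounded by the snapshot,
    // so rows inserted after the search never shift the existing pages.
    List<?> page = session.createQuery(
            "from Record r where r.id <= :maxId order by r.id")
            .setParameter("maxId", maxId)
            .setFirstResult(pageNumber * 20)
            .setMaxResults(20)
            .list();
    ```

    Deleted or updated rows can still perturb the pages, so this only approximates a true cursor.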

  • How to get <td> value in textbox

    - by Shreeuday Kasat
    I've done some code in HTML and in JavaScript. My query: when I click on a <td>, whatever value is associated with it has to be displayed in the corresponding text box. In front of each <td> I've placed a textbox. As an example, I've taken 3 <td> elements and 3 textboxes.

    ```javascript
    function click3(x) {
        x = document.getElementById("name").innerHTML;
        var a = document.getElementById("txt");
        a.value = x;
    }

    function click1(y) {
        y = document.getElementById("addr").innerHTML;
        var b = document.getElementById("txt1");
        b.value = y;
    }

    function click2(z) {
        z = document.getElementById("email").innerHTML;
        var c = document.getElementById("txt2");
        c.value = z;
    }
    ```

    This is my JavaScript code. I know this is not an adequate way to deal with such a problem, since it handles each cell statically. Does anyone have a better solution, in JavaScript or jQuery?
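
    One generic alternative, as a sketch: it assumes each <td> carries a data-target attribute naming its textbox (which the original markup does not have), so a single handler can serve every cell.

    ```javascript
    // Sketch: one handler for every <td>, instead of one function per cell.
    // Assumes markup like: <td id="name" data-target="txt">...</td>
    window.onload = function () {
        var cells = document.getElementsByTagName("td");
        for (var i = 0; i < cells.length; i++) {
            cells[i].onclick = function () {
                var targetId = this.getAttribute("data-target");
                if (targetId) {
                    // Copy the cell's text into its associated textbox.
                    document.getElementById(targetId).value = this.innerHTML;
                }
            };
        }
    };
    ```

    With jQuery the same idea collapses to a single delegated click handler over `td[data-target]`.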

  • How to model in J2EE / JEE?

    - by Harry
    Let's say I have decided to go with the J(2)EE stack for my enterprise application. Now, for domain modelling (or: for designing the M of MVC), which APIs can I safely assume and use, and which should I stay away from... say, via a layer of abstraction?

    1. Should I go ahead and litter my Model with calls to the Hibernate/JPA API? Or should I build an abstraction (a persistence layer of my own) to avoid hard-coding against these two specific persistence APIs?

       Why I ask this: a few years ago there was the Kodo API, which got superseded by Hibernate. If one had designed a persistence layer and coded the rest of the model against this layer (instead of littering the Model with calls to a specific vendor API), it would have allowed one to (relatively) easily switch from Kodo to Hibernate to xyz.

    2. Is it recommended to make aggressive use of the *QL provided by your persistence vendor in your domain model? I'm not aware of any real-world issues (like performance, scalability, portability, etc.) arising from heavy use of an HQL-like language.

       Why I ask this: I would like to avoid, as much as possible, writing custom code when the same could be accomplished via a query language that is more portable than SQL.

    Sorry, but I'm a complete newbie to this area. Where could I find more info on this topic? Many thanks in advance. /HS
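
    As a sketch of the abstraction being weighed in point 1 (all names invented for illustration): the domain model depends on a narrow repository interface, and each vendor API gets its own implementation behind it.

    ```java
    import java.util.List;
    import javax.persistence.EntityManager;

    // Domain code depends only on this interface, never on Hibernate/JPA directly.
    public interface OrderRepository {
        Order findById(long id);
        List<Order> findByCustomer(Customer customer);
        void save(Order order);
    }

    // One implementation per persistence technology; swapping vendors means
    // writing a new implementation, not touching the domain model.
    public class JpaOrderRepository implements OrderRepository {
        private final EntityManager em;

        public JpaOrderRepository(EntityManager em) { this.em = em; }

        public Order findById(long id) {
            return em.find(Order.class, id);
        }

        public List<Order> findByCustomer(Customer customer) {
            return em.createQuery(
                    "select o from Order o where o.customer = :c", Order.class)
                    .setParameter("c", customer)
                    .getResultList();
        }

        public void save(Order order) {
            em.persist(order);
        }
    }
    ```

    The trade-off is that the *QL strings in point 2 then live only inside the implementations, so vendor-specific query dialects never leak into the model.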

  • NHibernate - Retrieving Lots of Data Becomes Exponentially Slow

    - by nfplee
    Hi, I have an issue where retrieving lots of data in NHibernate (such as when producing a report) makes the page exponentially slower the more data it has to retrieve. I found the following article: http://nhforge.org/blogs/nhibernate/archive/2008/10/30/bulk-data-operations-with-nhibernate-s-stateless-sessions.aspx

    It explains that bulk data operations in NHibernate are slow because the first-level cache grows too large, and that you should use the IStatelessSession instead. The trouble I have is that I don't wish to tie my application to NHibernate, so I've added a wrapper around ISession. I then use Linq as my query mechanism, but IStatelessSession does not support Linq (it may do in NHibernate 3, but the Linq provider is not stable as it stands at the moment).

    I then read that you could clear the session after so many iterations to flush the first-level cache. The problem now is that you can't use lazy loading: the Linq provider doesn't allow you to override the mapping (or eagerly fetch the additional data), so whenever I grab data that is lazy-loaded after I have cleared the session, an exception is thrown.

    I'm completely lost on what to do now. I like the ease of producing reports with Linq, but the limitations of the built-in Linq provider in NHibernate seem to be holding me back. I'd really appreciate it if someone could show me an alternative approach. Thanks

  • How to programmatically compare permissions of a login/user in SQL Server 2005

    - by titanium
    There's a login/user in SQL Server who is having a problem importing accounts on the production server. I have no idea what method he is using to do this. According to the one doing the import, it works fine on the development server, but when he did the same import in production it gave him errors. Below are the errors he gets for each account:

        2009-06-05 18:01:05.8254 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow has failed
        2009-06-05 18:01:05.9035 ERROR [engine-1038] Task [1038:00001 - Members]: Step 1.0 [<Insert step description>]: Task.RunStep(): StoreRow exception: Exception caught while storing Data. [Microsoft][ODBC SQL Server Driver][SQL Server]'ACCOUNT1' is not a valid login or you do not have permission.

    Please note that 'ACCOUNT1' is not the real account name; I changed it for security reasons. Using SQL Server Management Studio (SSMS), I viewed the permissions of the login/user performing the import on the development server and on production for comparison, and found no difference. My question is: is there a way to programmatically query the server-level and database-level permissions of a particular login/user, so I can compare and contrast for any differences?
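
    On SQL Server 2005 the permission catalogs can be queried directly, so the output from the two servers can be diffed. A sketch using the standard catalog views (the comparison you actually need may also involve role memberships):

    ```sql
    -- Server-level permissions for a login
    SELECT pr.name AS login_name, pe.permission_name, pe.state_desc
    FROM sys.server_principals pr
    JOIN sys.server_permissions pe
      ON pe.grantee_principal_id = pr.principal_id
    WHERE pr.name = 'ACCOUNT1'
    ORDER BY pe.permission_name;

    -- Database-level permissions for a user (run inside each database)
    SELECT pr.name AS user_name, pe.permission_name, pe.state_desc,
           OBJECT_NAME(pe.major_id) AS object_name
    FROM sys.database_principals pr
    JOIN sys.database_permissions pe
      ON pe.grantee_principal_id = pr.principal_id
    WHERE pr.name = 'ACCOUNT1'
    ORDER BY pe.permission_name;
    ```

    Running both against development and production and comparing the result sets should surface any grant that SSMS's permission pages don't make obvious.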

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs needs to be acquired: data for at least a few days. The hardware interface is a UART-to-USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days or weeks.

    What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation, the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if it is ever requested again. The approach I have been considering is to use a light database, like SQL Server CE, that can store the data logs as they are received. I am then hoping to search the cache first before querying the device for logs, and to update the cache with any logs obtained by the request that were not already cached.

    Finally, my question: would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback!

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes.

    ```sql
    WITH test_data AS (
        SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
    )
    -- #end of test-data
    SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
    FROM test_data
    ```

    And here is the result:

        01.01.2010 10:00:00    01.01.2010 10:00:00
        01.01.2010 10:05:00    01.01.2010 10:00:00
        01.01.2010 10:09:59    01.01.2010 10:00:00
        01.01.2010 10:10:00    01.01.2010 10:10:00
        01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way?

    EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

    ```sql
    DECLARE
        t TIMESTAMP := SYSTIMESTAMP;
    BEGIN
        FOR i IN (
            WITH test_data AS (
                SELECT SYSDATE + ROWNUM / 5000 d
                FROM dual
                CONNECT BY ROWNUM <= 500000
            )
            SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
            FROM test_data
        ) LOOP
            NULL;
        END LOOP;
        dbms_output.put_line( SYSTIMESTAMP - t );
    END;
    ```

    This approach took 03.24 s.
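
    One common alternative, as a sketch: pure date arithmetic, which avoids the character conversion entirely. In Oracle, `d - TRUNC(d)` is the fraction of the day elapsed, and 144 is the number of 10-minute intervals in a day.

    ```sql
    SELECT d,
           TRUNC(d) + FLOOR((d - TRUNC(d)) * 144) / 144 AS rounded
    FROM   test_data;
    ```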

  • Linq - reuse expression on child property

    - by user175528
    Not sure if what I am trying is possible or not, but I'd like to reuse a Linq expression on an object's parent property. Given the classes:

    ```csharp
    class Parent
    {
        int Id { get; set; }
        IList<Child> Children { get; set; }
        string Name { get; set; }
    }

    class Child
    {
        int Id { get; set; }
        Parent Dad { get; set; }
        string Name { get; set; }
    }
    ```

    If I then have a helper:

    ```csharp
    Expression<Func<Parent, bool>> ParentQuery()
    {
        Expression<Func<Parent, bool>> q = p => p.Name == "foo";
        return q;
    }
    ```

    I then want to use this when querying data out for a child, along the lines of:

    ```csharp
    using (var context = new Entities.Context())
    {
        // this is what I'd like, but Dad is a single object, not a collection
        var data = context.Child.Where(c => c.Name == "bar" && c.Dad.Where(ParentQuery));
    }
    ```

    I know I can do that on child collections:

    ```csharp
    using (var context = new Entities.Context())
    {
        var data = context.Parent.Where(p => p.Name == "foo" && p.Children.Where(childQuery));
    }
    ```

    but I can't see any way to do this on a property that isn't a collection. This is just a simplified example: in reality the ParentQuery will be more complex, and I want to avoid having it repeated in multiple places, as rather than just having 2 layers I'll have closer to 5 or 6, and all of them will need to reference the parent query to ensure security.

  • Avoiding repeated subqueries when 'WITH' is unavailable

    - by EloquentGeek
    MySQL v5.0.58. Tables, with foreign key constraints etc. and other non-relevant details omitted for brevity:

    ```sql
    CREATE TABLE `fixture` (
      `id` int(11) NOT NULL auto_increment,
      `competition_id` int(11) NOT NULL,
      `name` varchar(50) NOT NULL,
      `scheduled` datetime default NULL,
      `played` datetime default NULL,
      PRIMARY KEY (`id`)
    );

    CREATE TABLE `result` (
      `id` int(11) NOT NULL auto_increment,
      `fixture_id` int(11) NOT NULL,
      `team_id` int(11) NOT NULL,
      `score` int(11) NOT NULL,
      `place` int(11) NOT NULL,
      PRIMARY KEY (`id`)
    );

    CREATE TABLE `team` (
      `id` int(11) NOT NULL auto_increment,
      `name` varchar(50) NOT NULL,
      PRIMARY KEY (`id`)
    );
    ```

    Where:

    - a draw will set result.place to 0;
    - result.place will otherwise contain an integer representing first place, second place, and so on.

    The task is to return a string describing the most recently played result in a given competition for a given team. The format should be "def Team X,Team Y" if the given team was victorious, "lost to Team X" if the given team lost, and "drew with Team X" if there was a draw. And yes, in theory there could be more than two teams per fixture (though 1 v 1 will be the most common case). This works, but feels really inefficient:

    ```sql
    SELECT CONCAT(
      (SELECT CASE `result`.`place` WHEN 0 THEN "drew with" WHEN 1 THEN "def" ELSE "lost to" END
       FROM `result`
       WHERE `result`.`fixture_id` =
         (SELECT `fixture`.`id`
          FROM `fixture`
          LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
          WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1
          ORDER BY `fixture`.`played` DESC
          LIMIT 1)
       AND `result`.`team_id` = 1),
      ' ',
      (SELECT GROUP_CONCAT(`team`.`name`)
       FROM `fixture`
       LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
       LEFT JOIN `team` ON `result`.`team_id` = `team`.`id`
       WHERE `fixture`.`id` =
         (SELECT `fixture`.`id`
          FROM `fixture`
          LEFT JOIN `result` ON `result`.`fixture_id` = `fixture`.`id`
          WHERE `fixture`.`competition_id` = 2 AND `result`.`team_id` = 1
          ORDER BY `fixture`.`played` DESC
          LIMIT 1)
       AND `team`.`id` != 1)
    )
    ```

    Have I missed something really obvious, or should I simply not try to do this in one query? Or does the current difficulty reflect a poor table design?
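
    A sketch of one way to avoid repeating the latest-fixture subquery on MySQL 5.0 (which has no WITH): compute it once in a derived table and join everything else to it. The aliases and exact select list are illustrative.

    ```sql
    SELECT CASE r.place WHEN 0 THEN 'drew with'
                        WHEN 1 THEN 'def'
                        ELSE 'lost to' END AS verb,
           GROUP_CONCAT(opp_team.name)     AS opponents
    FROM (
        -- Most recently played fixture for team 1 in competition 2, computed once.
        SELECT f.id
        FROM fixture f
        JOIN result res ON res.fixture_id = f.id
        WHERE f.competition_id = 2
          AND res.team_id = 1
        ORDER BY f.played DESC
        LIMIT 1
    ) last_fixture
    JOIN result r      ON r.fixture_id = last_fixture.id AND r.team_id = 1
    JOIN result opp    ON opp.fixture_id = last_fixture.id AND opp.team_id <> 1
    JOIN team opp_team ON opp_team.id = opp.team_id
    GROUP BY r.place;
    ```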

  • Hosting an Access DB

    - by Mitciv
    Hey, so I'm inexperienced in hosting DBs, and I've always had the luxury of someone else getting the DB set up. I'm going to help a friend with getting a webpage set up; I've got experience in ASP.NET MVC, so I'm going with that. They want to set up a search page that queries a DB and displays the results. My question is about getting the DB set up and hosted. They currently just have the Access DB on a local computer, and there is basically only one table that would need to be queried for the search.

    What is the best approach to making this table/DB accessible? They would like to keep the main copy of the DB on the local machine, so copying the entire DB over to the hosted site would be time-consuming. Could the lone table that's needed be copied to the host by itself? Should I try to convince them to make changes on the hosted DB and just make copies of that for their local machines? Any suggestions are welcome. Again, I'm a total noob when it comes to hosting databases. Thanks

  • Having a problem inserting into database

    - by neo skosana
    I have a stored procedure:

    ```sql
    CREATE PROCEDURE busi_reg(IN instruc VARCHAR(10), IN tble VARCHAR(20),
        IN busName VARCHAR(50), IN busCateg VARCHAR(100), IN busType VARCHAR(50),
        IN phne VARCHAR(20), IN addrs VARCHAR(200), IN cty VARCHAR(50),
        IN prvnce VARCHAR(50), IN pstCde VARCHAR(10), IN nm VARCHAR(20),
        IN surname VARCHAR(20), IN eml VARCHAR(50), IN pswd VARCHAR(20),
        IN srce VARCHAR(50), IN refr VARCHAR(50), IN sess VARCHAR(50))
    BEGIN
        INSERT INTO b_register
        SET business_name = busName, business_category = busCateg,
            business_type = busType, phone = phne, address = addrs,
            city = cty, province = prvnce, postal_code = pstCde,
            firstname = nm, lastname = surname, email = eml,
            password = pswd, source = srce, ref_no = refr;
    END;
    ```

    This is my PHP script:

    ```php
    $busName = $_POST['bus_name'];
    $busCateg = $_POST['bus_cat'];
    $busType = $_POST['bus_type'];
    $phne = $_POST['phone'];
    $addrs = $_POST['address'];
    $cty = $_POST['city'];
    $prvnce = $_POST['province'];
    $pstCde = $_POST['postCode'];
    $nm = $_POST['name'];
    $surname = $_POST['lname'];
    $eml = $_POST['email'];
    $srce = $_POST['source'];
    $ref = $_POST['ref_no'];

    // note: the original assigned this to $result2 but then tested $result
    $result = $db->query("CALL busi_reg('$instruc', '$tble', '$busName', '$busCateg',
        '$busType', '$phne', '$addrs', '$cty', '$prvnce', '$pstCde', '$nm',
        '$surname', '$eml', '$pswd', '$srce', '$refr', '')");

    if ($result) {
        echo "Data has been saved";
    } else {
        printf("Error: %s\n", $db->error);
    }
    ```

    Now the error that I get:

        Commands out of sync; you can't run this command now
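
    For what it's worth, "Commands out of sync" after a CALL usually means an extra result set returned by the stored procedure was never consumed, so the connection refuses further queries. A hedged sketch of the usual mysqli fix (assuming `$db` is a `mysqli` instance, as the question's `$db->error` suggests):

    ```php
    <?php
    // CALL produces the procedure's result set(s) plus a final status packet;
    // all of them must be consumed before the connection can run another query.
    $result = $db->query("CALL busi_reg(/* ... arguments as in the question ... */)");

    if ($result === false) {
        printf("Error: %s\n", $db->error);
    } else {
        if ($result instanceof mysqli_result) {
            $result->free();
        }
        // Drain any remaining result sets left over from the CALL.
        while ($db->more_results() && $db->next_result()) {
            if ($extra = $db->store_result()) {
                $extra->free();
            }
        }
        echo "Data has been saved";
    }
    ```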

  • Average over a timeframe with missing data

    - by BHare
    Assume a table such as:

        UID  Name    Datetime             Users
        ---  ------  -------------------  -----
        4    Room 4  2012-08-03 14:00:00  3
        2    Room 2  2012-08-03 14:00:00  3
        3    Room 3  2012-08-03 14:00:00  1
        1    Room 1  2012-08-03 14:00:00  2
        3    Room 3  2012-08-03 14:15:00  1
        2    Room 2  2012-08-03 14:15:00  4
        1    Room 1  2012-08-03 14:15:00  3
        1    Room 1  2012-08-03 14:30:00  6
        1    Room 1  2012-08-03 14:45:00  3
        2    Room 2  2012-08-03 14:45:00  7
        3    Room 3  2012-08-03 14:45:00  8
        4    Room 4  2012-08-03 14:45:00  4

    I want to get the average user count of each room (1, 2, 3, 4) from 2PM to 3PM. The problem is that sometimes a room may not "check in" at a 15-minute interval, so the assumption has to be made that the previous last-known user count is still valid. For example, room 4 never checked in at 2012-08-03 14:15:00, so it must be assumed that room 4 still had 3 users at 14:15:00, because that is what it had at 14:00:00.

    This carries through, so that the average user counts I am looking for are as follows (the carried-forward values are the ones assumed from the previous known check-in):

        Room 1: (2 + 3 + 6 + 3) / 4 = 3.5
        Room 2: (3 + 4 + 4 + 7) / 4 = 4.5
        Room 3: (1 + 1 + 1 + 8) / 4 = 2.75
        Room 4: (3 + 3 + 3 + 4) / 4 = 3.25

    I am wondering if it's possible to do this with SQL alone. If not, I am curious about an ingenious PHP solution that isn't just bruteforce math, such as my quick, inaccurate pseudocode:

    ```php
    foreach ($rooms_id_array as $room_id) {
        $SQL = "SELECT * FROM `table`
                WHERE (`UID` = $room_id
                   AND `Datetime` >= '2012-08-03 14:00:00'
                   AND `Datetime` <= '2012-08-03 15:00:00')";
        $result = query($SQL);
        if (count($result) < 4) {
            // go through each date, find what is missing, and then go to the
            // previous date and use that instead
        } else {
            foreach ($result as $r) $sum += $r;
            $avg = $sum / 4;
        }
    }
    ```
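
    A sketch of a non-bruteforce PHP approach: one query for the whole window, then a single pass that carries the last known count forward. `$pdo`, the table name `readings` and the fixed slot list are assumptions, and it presumes every room checks in at the start of the window (true in the sample data).

    ```php
    <?php
    // One query for all rooms in the window, ordered by time.
    $sql = "SELECT UID, Datetime, Users FROM readings
            WHERE Datetime >= '2012-08-03 14:00:00'
              AND Datetime <  '2012-08-03 15:00:00'
            ORDER BY Datetime";
    $rows = $pdo->query($sql)->fetchAll(PDO::FETCH_ASSOC);

    // Index the check-ins by room and timestamp.
    $byRoom = array();
    foreach ($rows as $r) {
        $byRoom[$r['UID']][$r['Datetime']] = (int) $r['Users'];
    }

    $slots = array('2012-08-03 14:00:00', '2012-08-03 14:15:00',
                   '2012-08-03 14:30:00', '2012-08-03 14:45:00');

    $averages = array();
    foreach ($byRoom as $uid => $checkins) {
        $last = 0;
        $sum = 0;
        foreach ($slots as $slot) {
            if (isset($checkins[$slot])) {
                $last = $checkins[$slot]; // fresh check-in
            }
            $sum += $last;                // otherwise carry the last value forward
        }
        $averages[$uid] = $sum / count($slots);
    }
    ```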

  • Need help/suggestions for creating fantasy sports scoring databases and queries

    - by MGumbel
    I'm trying to create a website for my friends and me to keep track of fantasy sports scoring. So far, I've been doing the calculations and storage in Excel, which is very tedious. I'm trying to make it more simplified and automated through a SQL database that I can then wrap a web app around to enter daily stat updates.

    It's premised on our participation in another commercial site where we trade virtual shares of athletes, and thus acquire an "ownership percentage" in each athlete. For instance, if there are 100 shares of AROD and I own 10 shares, then I own 10%. This is then applied to traditional baseball rotisserie scoring. For instance, if AROD has 1 HR today, his adjusted HR stat would be 1.10. If he also has 2 RBIs, his adjusted RBI stat today would be 2.20, based on 2 x 1.10 (the 1 normalizes the stat, and the .10 represents the ownership percentage). All the stats for my team are then summed each day and added to my stat history to arrive at an aggregated total.

    After that, points are allocated based on the ranking of each participant in each category at the end of the day. E.g., if there are 10 participants and I have the highest total aggregate number of adjusted HRs, then I get 10 points. The points are then summed across the different stat categories to come up with a total point ranking for that day. An added difficulty is that ownership percentages can change on a daily basis.

    So far, in playing around with different schemas, I don't think that having a separate table for each athlete's stats and each player's ownership percentages is the wisest choice. It seems to me that simply having two tables would do: one that contains the daily stat information for each athlete, and another that shows the ownership percentage of each player. My friend suggested using a start and end date for each ownership percentage to represent the potential daily changes in this category. I'm admittedly new to database development, so any suggestions on query code would be appreciated.
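
    As a sketch of the friend's suggestion (all names invented): daily stat lines per athlete, plus an effective-dated ownership table, joined on the date range. The adjusted stat follows the question's formula, stat x (1 + ownership).

    ```sql
    CREATE TABLE athlete_stats (
        athlete_id INT  NOT NULL,
        stat_date  DATE NOT NULL,
        hr         INT  NOT NULL DEFAULT 0,
        rbi        INT  NOT NULL DEFAULT 0,
        PRIMARY KEY (athlete_id, stat_date)
    );

    -- One row per ownership interval; end_date NULL means "still current".
    CREATE TABLE ownership (
        participant_id INT          NOT NULL,
        athlete_id     INT          NOT NULL,
        pct            DECIMAL(5,4) NOT NULL,  -- e.g. 0.1000 for 10%
        start_date     DATE         NOT NULL,
        end_date       DATE         NULL,
        PRIMARY KEY (participant_id, athlete_id, start_date)
    );

    -- Adjusted daily HR total for one participant:
    SELECT s.stat_date,
           SUM(s.hr * (1 + o.pct)) AS adjusted_hr
    FROM athlete_stats s
    JOIN ownership o
      ON o.athlete_id = s.athlete_id
     AND s.stat_date >= o.start_date
     AND (o.end_date IS NULL OR s.stat_date <= o.end_date)
    WHERE o.participant_id = 1
    GROUP BY s.stat_date;
    ```

    When an ownership percentage changes, the current row's end_date is closed off and a new row is inserted, so historical scoring stays reproducible.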
