Search Results

Search found 25547 results on 1022 pages for 'table locking'.


  • Need help with auto-scaffolding template in ASP.NET MVC

    - by DanM
    I'm trying to write an auto-scaffolder for Index views. I'd like to be able to pass in a collection of models or view-models (e.g., IQueryable<MyViewModel>) and get back an HTML table that uses the DisplayName attribute for the headings (th elements) and Html.Display(propertyName) for the cells (td elements). Each row should correspond to one item in the collection. Here's what I have so far:

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%
            var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; // Should be generic!
            var properties = items.First().GetMetadata().Properties
                .Where(pm => pm.ShowForDisplay && !ViewData.TemplateInfo.Visited(pm));
        %>
        <table>
            <tr>
                <% foreach (var property in properties) { %>
                    <th><%= property.DisplayName %></th>
                <% } %>
            </tr>
            <% foreach (var item in items) { %>
                <tr>
                    <% foreach (var property in properties) { %>
                        <td><%= Html.Display(property.DisplayName) %></td> <%-- This doesn't work! --%>
                    <% } %>
                </tr>
            <% } %>
        </table>

    Two problems with this:

    1. I'd like it to be generic. So, I'd like to replace var items = (IQueryable<TestProj.ViewModels.TestViewModel>)Model; with var items = (IQueryable<T>)Model; or something to that effect.
    2. The <td> elements are not working, because the Html in <%= Html.Display(property.DisplayName) %> contains the model for the view, which is a collection of items, not the item itself. Somehow, I need to obtain an HtmlHelper object whose Model property is the current item, but I'm not sure how to do that.

    How do I solve these two problems?
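    A minimal sketch of one way to attack both problems at once, using plain reflection plus [DisplayName] instead of the template/metadata machinery (illustrative code, not the poster's template; the extension name is made up):

        using System;
        using System.Collections.Generic;
        using System.ComponentModel;
        using System.Text;
        using System.Web;
        using System.Web.Mvc;

        public static class ScaffoldExtensions
        {
            // Generic over T, so no hard-coded view-model type; each cell is
            // read from the item itself, not from the view's collection Model.
            public static MvcHtmlString IndexTable<T>(this HtmlHelper helper, IEnumerable<T> items)
            {
                var props = typeof(T).GetProperties();
                var sb = new StringBuilder("<table><tr>");
                foreach (var p in props)
                {
                    // Use [DisplayName("...")] when present, else the property name.
                    var dn = (DisplayNameAttribute)Attribute.GetCustomAttribute(p, typeof(DisplayNameAttribute));
                    sb.AppendFormat("<th>{0}</th>", dn != null ? dn.DisplayName : p.Name);
                }
                sb.Append("</tr>");
                foreach (var item in items)
                {
                    sb.Append("<tr>");
                    foreach (var p in props)
                        sb.AppendFormat("<td>{0}</td>",
                            HttpUtility.HtmlEncode(Convert.ToString(p.GetValue(item, null))));
                    sb.Append("</tr>");
                }
                sb.Append("</table>");
                return MvcHtmlString.Create(sb.ToString());
            }
        }

    The view then shrinks to <%= Html.IndexTable(Model) %>. This trades the ModelMetadata features (ShowForDisplay, templated Html.Display) for simplicity; the richer route is to build a per-item HtmlHelper and call its display templates, which takes more plumbing.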


  • PHP multiuser login class or script

    - by FFish
    I am looking for a simple but secure PHP/MySQL login script that I can use with my existing database: sessions, MD5-hashed passwords, cookies to store the password, password recovery by email, and the ability to change the login/password. I do not need registration; I register each user myself with a temporary login/password.

        table agents:  agent1, agent2
        table albums:  album1 (owner: agent1), album2 (owner: agent1), album3 (owner: agent2), ...

    After login.php, agent1 logs in and has access to his albums (album1 and album2). agent1 can edit his albums via edit.php?ref=album1, but must NOT be able to reach edit.php?ref=album3 by changing the ?ref variable.
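    A minimal sketch of the ownership check edit.php needs (table and column names are assumptions, and in a real script the password handling should be stronger than plain MD5):

        <?php
        // edit.php -- refuse to edit an album the logged-in agent does not own.
        session_start();
        if (!isset($_SESSION['agent_id'])) {
            header('Location: login.php');
            exit;
        }
        $pdo = new PDO('mysql:host=localhost;dbname=cms', 'user', 'pass');
        $stmt = $pdo->prepare('SELECT owner FROM albums WHERE ref = ?');
        $stmt->execute(array($_GET['ref']));
        $owner = $stmt->fetchColumn();
        if ($owner === false || $owner !== $_SESSION['agent_id']) {
            header('HTTP/1.1 403 Forbidden');   // not his album: deny
            exit('Access denied.');
        }
        // ...album editing code continues here...

    The key point is that the ?ref value is never trusted: every edit request re-checks ownership against the albums table using the session identity.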


  • Need alternative field names for these reserved words

    - by MattSlay
    “type” and “class” are likely reserved or problematic words in C# and/or Ruby, two languages I may use to program against my new database schema in the future. So, in order to avoid potential conflicts with those languages, I'm looking for alternative names for these fields in my tables. In this case, it is my Machines table, where I have a “class” field (values would be something like “manual” or “computerized”) and a “type” field (values would be “lathe” or “mill”). I could call the fields “machineclass” and “machinetype”, but that is inconsistent with the naming scheme in the rest of my schema (meaning, I do not re-use the table name in the field; for instance, I use Machine.name, not Machine.machinename). Any thoughts on this madness?


  • PlaceHolderMain controlling td width of hard-coded values

    - by Linda
    In my custom .master page I have the following code:

        <asp:ContentPlaceHolder id="PlaceHolderMain" runat="server" Visible="true" />

    This prints out the main content of my page. It contains this structure:

        <table ID="OuterZoneTable" width="100%">
            <tr>...</tr>
            <tr id="OuterRow">
                <td width="80%" id="OuterLeftCell">...</td>
                <td width="180" id="OuterRightCell">...</td>
            </tr>
            ...
        </table>

    I want to control the width of #OuterLeftCell and #OuterRightCell, but it is hard-coded in the HTML that is returned. How would I change these values?
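    One hedged option, since both cells carry ids: override the hard-coded width attributes from a stylesheet referenced in the .master page; CSS declarations take precedence over presentational HTML attributes (the values below are illustrative):

        /* widths are examples, not the required values */
        #OuterLeftCell  { width: 70%; }
        #OuterRightCell { width: 250px; }

    This avoids touching the code that generates the table at all.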


  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration, let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred.

    Let's look at the problem without any database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

    1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
    2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
    3. Do neither.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account, and an entry in the ledger stating an addition of said amount to B's account. For the sake of space-saving, that can happen as one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}.

    The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:

    1. Insert an entry in the ledger.
    2. Run validation of recent entries.
    3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" on them with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions; you simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. A competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
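    Here is a minimal sketch of the ledger idea in Python/pymongo (collection and field names, and the exact validation flow, are illustrative, not code from the post):

        from datetime import datetime
        from pymongo import MongoClient

        db = MongoClient().bank

        def balance(account_id):
            # All deposits into the account minus all withdrawals from it.
            credits = sum(e["amount"] for e in db.ledger.find({"to": account_id}))
            debits = sum(e["amount"] for e in db.ledger.find({"from": account_id}))
            return credits - debits

        def transfer(from_id, to_id, amount):
            # 1. Insert the ledger entry: a single-document, atomic write.
            db.ledger.insert_one({"ts": datetime.utcnow(), "from": from_id,
                                  "to": to_id, "amount": amount})
            # 2. Validate against the writable (primary) instance.
            if balance(from_id) < 0:
                # 3. A compensating entry rolls the transfer back.
                db.ledger.insert_one({"ts": datetime.utcnow(), "from": to_id,
                                      "to": from_id, "amount": amount})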
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be a replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the ledger entry written. No funky states. Again, the ledger writing *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but not actually written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes on a poll regarding how cute a furry animal is, but not so good for business. The other write concerns vary from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required beyond the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we've ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this.

    Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?", we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to. So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They were trained by many online businesses already that placing an order does not mean that the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never returns a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1 ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would both be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds, even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account-number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions?

    "No joins": Huh? What are those for?
    "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either.
    "No hope for real applications": Well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
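    For concreteness, here is how those two knobs look in Python/pymongo (a hedged illustration; the syntax is version-specific):

        from pymongo import MongoClient, ReadPreference, WriteConcern

        client = MongoClient()
        ledger = client.bank.get_collection(
            "ledger",
            # validation reads go to the writable (primary) member only
            read_preference=ReadPreference.PRIMARY,
            # don't report "done" until a majority of the set has the write
            write_concern=WriteConcern(w="majority"),
        )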


  • Does Hibernate's GenericGenerator cause update and saveOrUpdate to always insert instead of update?

    - by Derek Mahar
    When using GenericGenerator to generate unique identifiers, do the Hibernate session methods update() and saveOrUpdate() always insert instead of update table rows, even when the given object has an existing identifier (where the identifier is also the table primary key)? Is this the correct behaviour?

        public class User {
            private String id;
            private String userName;

            public User(String id, String userName) {
                this.id = id;
                this.userName = userName;
            }

            @GenericGenerator(name = "generator", strategy = "guid")
            @Id
            @GeneratedValue(generator = "generator")
            @Column(name = "USER_ID", unique = true, nullable = false)
            public String getId() { return this.id; }
            public void setId(String id) { this.id = id; }

            @Column(name = "USER_NAME", nullable = false, length = 20)
            public String getUserName() { return this.userName; }
            public void setUserName(String userName) { this.userName = userName; }
        }

        class UserDao extends AbstractDaoHibernate {
            public void updateUser(final User user) {
                HibernateTemplate ht = getHibernateTemplate();
                ht.saveOrUpdate(user);
            }
        }


  • Design Help! How can design Extended properties for Entity with simple and complex data in extended

    - by mmtemporary
    I have a design question. I have an entity such as "Person". Person has properties such as FirstName, LastName, Gender, BirthDate, .... When creating a person in the application, the end user may need to define another property that is not defined in the database table schema (or the Person class). For example, the end user needs to define "property1", which is a string property; or "property2", which is an image; or "property3", which is a complex type. Please separate your design solution into two levels: 1) database table design, 2) class design. Thank you.
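    One hedged way to do the database half is a property-bag ("entity-attribute-value") table next to Person, with one typed column per supported value kind (all names and types below are assumptions; T-SQL syntax):

        CREATE TABLE PersonExtendedProperty (
            PersonId      INT            NOT NULL REFERENCES Person(Id),
            PropertyName  VARCHAR(100)   NOT NULL,
            PropertyType  VARCHAR(20)    NOT NULL,  -- 'string' | 'image' | 'complex'
            StringValue   NVARCHAR(MAX)  NULL,      -- used when PropertyType = 'string'
            BinaryValue   VARBINARY(MAX) NULL,      -- used when PropertyType = 'image'
            ComplexValue  XML            NULL,      -- serialized complex types
            PRIMARY KEY (PersonId, PropertyName)
        );

    On the class side, the usual counterpart is an ExtendedProperties dictionary on Person, keyed by property name, with a small value class per supported kind.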


  • Optimising speeds in HDF5 using Pytables

    - by Sree Aurovindh
    The problem concerns the writing speed of the computers (10 × 32-bit machines) and PostgreSQL query performance. I will explain the scenario in detail. I have about 80 GB of data (along with appropriate database indexes in place). I am trying to read it from the PostgreSQL database and write it into HDF5 using PyTables. I have 1 table and 5 variable arrays in one HDF5 file. The implementation of HDF5 is not multithreaded or enabled for symmetric multiprocessing. I have rented about 10 computers for a day and am trying to write with them in order to speed up my data handling.

    As for the PostgreSQL table, the overall record count is 140 million, and I have 5 tables referring to it by primary/foreign keys. I am not using joins, as they are not scalable here; so for a single lookup I do 6 lookups without joins and write the results into HDF5 format. For each lookup I do 6 inserts, into the table and into each of its corresponding arrays. The queries are really simple:

        select * from x.train where tr_id=1    (primary key & indexed)
        select q_t from x.qt where q_id=2      (non-primary key but indexed)
        (similarly, five more queries)

    Each computer writes two HDF5 files, so the total count comes to around 20 files.

    Some calculations and statistics:

        Total number of records:        143,700,000
        Records per file:               143,700,000 / 20 = 7,185,000
        Total entries per file (× 5):   7,185,000 × 5 = 35,925,000

    Current PostgreSQL database config: my current machine has 8 GB RAM with an i7 2nd-generation processor. I made the following changes to the PostgreSQL configuration file: shared_buffers: 2 GB, effective_cache_size: 4 GB.

    Note on current performance: I have run it for about ten hours, and the total number of records written for each file is about 621,000 × 5 = 3,105,000. The bottleneck is that I can only rent the machines for 10 hours per day (overnight), and at this speed it will take about 11 days, which is too long for my experiments. Please suggest how I can improve this. Questions:

    1. Should I use symmetric multiprocessing on those desktops (each has 2 cores with about 2 GB of RAM)? In that case, what is suggested or preferable?
    2. If I change my PostgreSQL configuration file and increase the RAM, will it enhance my process?
    3. Should I use multithreading? In that case, any links or pointers would be of great help.

    Thanks,
    Sree Aurovindh V
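    A hedged sketch of the single biggest PyTables-side win: stream rows with a server-side cursor and append them in large blocks, rather than one insert per lookup (table, column, and batch-size choices below are illustrative, and the PyTables call names follow the 3.x API):

        import psycopg2
        import tables

        conn = psycopg2.connect("dbname=x")
        cur = conn.cursor(name="train_stream")   # named => server-side cursor
        cur.execute("SELECT tr_id, q_id FROM x.train ORDER BY tr_id")

        h5 = tables.open_file("train.h5", mode="w")
        tbl = h5.create_table("/", "train",
                              {"tr_id": tables.Int64Col(), "q_id": tables.Int64Col()})
        while True:
            rows = cur.fetchmany(50000)
            if not rows:
                break
            tbl.append(rows)                     # one block write per batch
        tbl.flush()
        h5.close()

    Batched appends amortize HDF5's per-write overhead, and the named cursor keeps the 140M-row result set from being materialized in the client's 2 GB of RAM.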


  • How to manipulate data in View using Asp.Net Mvc RC 2?

    - by Picflight
    I have a table [Users] with the following columns:

        [UserId]    INT
        [BirthDate] SmallDateTime
        [Gender]    Bit
        [Active]    Bit

    Gender and Active are bits that hold either 0 or 1. I am displaying this data in a table on my View. For Gender I want to display 'Male' or 'Female'; how and where do I manipulate the 1's and 0's? Is it done in the repository where I fetch the data, or in the View? For the Active column I want to show a checkbox that will post back on selection change and update the Active field in the database. How is this done without Ajax or jQuery?
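    On the Gender question, one hedged approach is to do the translation in a view model, so the View only renders strings (type and property names here are assumptions):

        public class UserRowViewModel
        {
            public int UserId { get; set; }
            public DateTime? BirthDate { get; set; }
            public string Gender { get; set; }   // "Male" / "Female"
            public bool Active { get; set; }
        }

        // In the repository or controller, assuming bit 1 = male:
        var rows = db.Users.Select(u => new UserRowViewModel {
            UserId = u.UserId,
            BirthDate = u.BirthDate,
            Gender = u.Gender ? "Male" : "Female",
            Active = u.Active,
        }).ToList();

    For the Active column, posting back without any script means each checkbox needs its own small form with a submit button: a bare checkbox cannot submit a form on change without JavaScript.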


  • How can I build my SQL query from these tables?

    - by vee
    Hi All, I'm thinking of building a query from these 2 tables (on SQL Server 2008). I have 2 tables as shown below:

        Table 1
        MemberId . MemberName . Percentage . Amount1
        00000001   AAA          1.0          100
        00000002   BBB          1.2          800
        00000003   ZZZ          1.0          700

        Table 2
        MemberId . MemberName . Percentage . Amount2
        00000002   BBB          1.5          500
        00000002   BBB          1.6          100
        00000002   BBB          1.6          150

    The result I want is:

        MemberId . MemberName . Percentage . Amount . NettAmount
        00000001   AAA          1.0          100      100
        00000002   BBB          1.2          800      50   <-- 800 - (500 + 100 + 150)
        00000002   BBB          1.5          500      500
        00000002   BBB          1.6          650      650
        00000003   ZZZ          1.0          700      700

    The 50 comes from the 800 in Table 1 minus the sum of Amount2 in Table 2 for MemberId = 00000002. Please help me build the query to reach this result. Thank you in advance.
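    A hedged sketch of one way to get there (table names assumed to be Table1/Table2; it implements the stated rule of Amount1 minus the member's total Amount2, and unions in the Table2 rows grouped by percentage):

        SELECT t1.MemberId, t1.MemberName, t1.Percentage,
               t1.Amount1 AS Amount,
               t1.Amount1 - ISNULL((SELECT SUM(t2.Amount2)
                                    FROM Table2 t2
                                    WHERE t2.MemberId = t1.MemberId), 0) AS NettAmount
        FROM Table1 t1
        UNION ALL
        SELECT MemberId, MemberName, Percentage,
               SUM(Amount2) AS Amount, SUM(Amount2) AS NettAmount
        FROM Table2
        GROUP BY MemberId, MemberName, Percentage
        ORDER BY MemberId, Percentage;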


  • ANSI SQL question - how to insert or update a record if it already exists?

    - by morpheous
    Although I am using MySQL (for now), I don't want any DB-specific SQL. I am trying to insert a record if it doesn't exist, and update a field if it does exist. I want to use ANSI SQL. The table looks something like this:

        create table test_table (id int, name varchar(16), weight double);

        -- test data
        insert into test_table (id, name, weight) values (1, 'homer', 900);
        insert into test_table (id, name, weight) values (2, 'marge', 85);
        insert into test_table (id, name, weight) values (3, 'bart', 25);
        insert into test_table (id, name, weight) values (4, 'lisa', 15);

    If the record exists, I want to update the weight (increase it by, say, 10).
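    The ANSI answer is MERGE (added in SQL:2003), though note that MySQL itself does not implement it, so portability only extends across engines that do. A hedged sketch against the table above:

        MERGE INTO test_table t
        USING (VALUES (1, 'homer', 900)) AS s (id, name, weight)
           ON t.id = s.id
        WHEN MATCHED THEN
            UPDATE SET weight = t.weight + 10
        WHEN NOT MATCHED THEN
            INSERT (id, name, weight) VALUES (s.id, s.name, s.weight);

    On MySQL specifically, the non-standard equivalent is INSERT ... ON DUPLICATE KEY UPDATE.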


  • How to correctly calculate address spaces?

    - by user337308
    Below is an example of a question given on my last test in a Computer Engineering course. Would anyone mind explaining to me how to get the start/end addresses of each? I have listed the correct answers at the bottom.

    The MSP430F2410 device has an address space of 64 KB (the basic MSP430 architecture). Fill in the table below if we know the following. The first 16 bytes of the address space (starting at the address 0x0000) are reserved for special function registers (IE1, IE2, IFG1, IFG2, etc.), the next 240 bytes are reserved for 8-bit peripheral devices, and the next 256 bytes are reserved for 16-bit peripheral devices. The RAM memory capacity is 2 Kbytes and it starts at the address 0x1100. At the top of the address space is 56 KB of flash memory reserved for code and the interrupt vector table.

        What                                     Start Address   End Address
        Special Function Registers (16 bytes)    0x0000          0x000F
        8-bit peripheral devices (240 bytes)     0x0010          0x00FF
        16-bit peripheral devices (256 bytes)    0x0100          0x01FF
        RAM memory (2 Kbytes)                    0x1100          0x18FF
        Flash Memory (56 Kbytes)                 0x2000          0xFFFF
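    The arithmetic behind each row is end = start + size - 1, with the flash block anchored to the top of the 64 KB space:

        SFR:    0x0000 + 0x10  - 1 = 0x000F   (16 bytes  = 0x10)
        8-bit:  0x0010 + 0xF0  - 1 = 0x00FF   (240 bytes = 0xF0)
        16-bit: 0x0100 + 0x100 - 1 = 0x01FF   (256 bytes = 0x100)
        RAM:    0x1100 + 0x800 - 1 = 0x18FF   (2 KB      = 0x800)
        Flash:  0x10000 - 0xE000   = 0x2000   (56 KB = 0xE000), so 0x2000..0xFFFF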


  • Future proof Primary Key design in postgresql

    - by John P
    I've always used either auto_generated or Sequences in the past for my primary keys. With the current system I'm working on there is the possibility of having to eventually partition the data which has never been a requirement in the past. Knowing that I may need to partition the data in the future, is there any advantage of using UUIDs for PKs instead of the database's built-in sequences? If so, is there a design pattern that can safely generate relatively short keys (say 6 characters instead of the usual long one e6709870-5cbc-11df-a08a-0800200c9a66)? 36^6 keys per-table is more than sufficient for any table I could imagine. I will be using the keys in URLs so conciseness is important.
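    A hedged sketch of one common pattern for short keys: random base-36 strings with a retry on collision (36^6 is about 2.2 billion, so collisions stay rare until a table grows large; the exists() callback below stands in for a SELECT against the table):

        import random
        import string

        ALPHABET = string.ascii_lowercase + string.digits   # 36 symbols

        def new_key(exists, length=6):
            """Generate a short id, retrying on the (rare) collision."""
            while True:
                key = "".join(random.choice(ALPHABET) for _ in range(length))
                if not exists(key):   # e.g. SELECT 1 FROM t WHERE id = %s
                    return key

    Because such keys carry no sequence information, they also spread evenly across future partitions, which is the usual argument for them over sequences.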


  • Multiple Context menus in PyQt based on mouse location

    - by Nader
    I have a window with multiple tables using QTableWidget (PyQt). I created a popup menu using the right mouse click and it works fine. However, I need to create a different popup menu based on which table the mouse is hovering over at the time the right mouse button is clicked. How can I get the mouse to tell me which table it is hovering over? Or, put another way, how do I implement a method so as to have a specific context menu based on mouse location? I am using Python and PyQt. My popup menu is developed similar to this code (from PedroMorgan's answer to "Qt and context menu"):

        class Foo(QtGui.QWidget):
            def __init__(self):
                QtGui.QWidget.__init__(self, None)

                # Toolbar
                toolbar = QtGui.QToolBar()

                # Actions
                self.actionAdd = toolbar.addAction("New", self.on_action_add)
                self.actionEdit = toolbar.addAction("Edit", self.on_action_edit)
                self.actionDelete = toolbar.addAction("Delete", self.on_action_delete)

                # Tree
                self.tree = QtGui.QTreeView()
                self.tree.setContextMenuPolicy(Qt.CustomContextMenu)
                self.connect(self.tree,
                             QtCore.SIGNAL('customContextMenuRequested(const QPoint&)'),
                             self.on_context_menu)

                # Popup Menu
                self.popMenu = QtGui.QMenu(self)
                self.popMenu.addAction(self.actionEdit)
                self.popMenu.addAction(self.actionDelete)
                self.popMenu.addSeparator()
                self.popMenu.addAction(self.actionAdd)

            def on_context_menu(self, point):
                self.popMenu.exec_(self.tree.mapToGlobal(point))
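    A hedged sketch of one way to do this: connect each table's signal through functools.partial, so the handler is told which table fired it (the widget and menu names are illustrative):

        from functools import partial

        # inside the widget's __init__, one hookup per table:
        for table in (self.table_a, self.table_b):
            table.setContextMenuPolicy(Qt.CustomContextMenu)
            table.customContextMenuRequested.connect(
                partial(self.on_table_menu, table))

        # the shared handler picks the menu for the table that was clicked:
        def on_table_menu(self, table, point):
            menu = self.menu_a if table is self.table_a else self.menu_b
            # item views report the point in viewport coordinates
            menu.exec_(table.viewport().mapToGlobal(point))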


  • Can anyone tell me what the authors mean on this line?

    - by Anirudh Goel
    I was going through this link: FAT16 Basics to Assemble Clusters. I have read the structures involved in defining a directory entry in FAT. Now, in the example for a FAT16 file, it says the data cluster is 0x03 for the example file MyFile.txt, which means that if we logically compute the data cluster we will be able to reach the first node, which happens to be cluster no. 3. But what I fail to understand is what the author is trying to say in the next line, where it asks "What can we see in the File Allocation Table at this moment?" How did we suddenly reach the File Allocation Table? Weren't we already there when we were going through the information for MyFile.txt? I couldn't find any reason why the author suddenly jumped to offset location 00000200 and is identifying the emptiness of the clusters. It would be great if someone could help me understand. Thanks


  • Updating Linking Tables

    - by Sasha
    I'm currently adding a bit of functionality that manages holiday lettings on top of a CMS that runs on PHP and MySQL. The CMS stores the property details in a couple of tables, and I'm adding a third table (letting_times) that will contain information about when people are staying at the property. Basic functionality would allow the user to add new times when a guest is staying, edit the times that the guest is staying, and remove the booking if the guest no longer wants to stay at the property. Right now the best way I can think of to update the times that the property is occupied is to delete all the times contained in the letting_times table and reinsert them again. The only other way I can think of doing this would be to include the table's primary key and do an update if it is present and has a value, otherwise do an insert; but this would not delete rows that have been removed. Is there a better way of doing this?
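    A hedged sketch of the usual three-step sync that keeps primary keys stable (the ids and columns below are made up for illustration):

        -- 1) delete the rows the user removed (ids 7 and 9 are still in the form)
        DELETE FROM letting_times
         WHERE property_id = 42 AND id NOT IN (7, 9);

        -- 2) update the rows that came back with a primary key
        UPDATE letting_times
           SET start_date = '2010-06-01', end_date = '2010-06-14'
         WHERE id = 7;

        -- 3) insert the rows that arrived without one
        INSERT INTO letting_times (property_id, start_date, end_date)
        VALUES (42, '2010-07-01', '2010-07-10');

    Keeping stable ids matters if other tables will ever reference a booking; otherwise, delete-and-reinsert is perfectly workable for small row counts.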


  • can't insert xml dml expression as a string

    - by 81967
    Here is code that illustrates the problem. I create a table with an xml column, declare a variable, initialize it, and insert the value into the xml column:

        create table CustomerInfo (XmlConfigInfo xml)

        declare @StrTemp nvarchar(2000)
        set @StrTemp = '<Test></Test>'
        insert into [CustomerInfo] (XmlConfigInfo) values (@StrTemp)

    Then comes the question. If I write this...

        update [CustomerInfo]
        set XmlConfigInfo.modify('insert <Info></Info> into (//Test)[1]')
        -- works fine!

    ...but when I try this...

        set @StrTemp = 'insert <Info></Info> into (//Test)[1]'
        update [CustomerInfo]
        set XmlConfigInfo.modify(@StrTemp)
        -- doesn't work!

    ...it throws an error:

        The argument 1 of the xml data type method "modify" must be a string literal.

    Is there a way around this one? I tried this, but it is not working :(
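    One hedged workaround on SQL Server 2008: keep the XQuery itself a literal and pass only the variable part in through sql:variable(), which modify() does accept:

        declare @frag xml
        set @frag = '<Info></Info>'

        update [CustomerInfo]
        set XmlConfigInfo.modify('insert sql:variable("@frag") into (//Test)[1]')

    If the whole expression really must be dynamic, the remaining option is dynamic SQL (sp_executesql) with the modify() call built into the string as a literal.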


  • Rails. Putting update logic in your migrations

    - by Daniel Abrahamsson
    A couple of times I've been in the situation where I've wanted to refactor the design of some model and have ended up putting update logic in migrations. However, as far as I understand, this is not good practice (especially since you are encouraged to use your schema file for deployment, and not your migrations). How do you deal with these kinds of problems?

    To clarify what I mean, say I have a User model. Since I thought there would only be two kinds of users, namely a "normal" user and an administrator, I chose to use a simple boolean field telling whether the user was an administrator or not. However, after a while I figured I needed some third kind of user, perhaps a moderator or something similar. In this case I add a UserType model (and the corresponding migration), and a second migration for removing the "admin" flag from the users table. And here comes the problem: in the "add_user_type_to_users" migration I have to map the admin flag value to a user type. Additionally, in order to do this, the user types have to exist, meaning I cannot use the seeds file, but rather have to create the user types in the migration (also considered bad practice). Here is some fictional code representing the situation:

        class CreateUserTypes < ActiveRecord::Migration
          def self.up
            create_table :user_types do |t|
              t.string :name, :null => false, :unique => true
            end

            # Create basic types (cannot put these in seeds,
            # because of the future migration dependency)
            UserType.create!(:name => "BASIC")
            UserType.create!(:name => "MODERATOR")
            UserType.create!(:name => "ADMINISTRATOR")
          end

          def self.down
            drop_table :user_types
          end
        end

        class AddTypeIdToUsers < ActiveRecord::Migration
          def self.up
            add_column :users, :type_id, :integer

            # Determine type via the admin flag
            basic = UserType.find_by_name("BASIC")
            admin = UserType.find_by_name("ADMINISTRATOR")
            User.all.each { |u| u.update_attribute(:type_id, u.admin? ? admin.id : basic.id) }

            # Remove the admin flag
            remove_column :users, :admin

            # Add foreign key
            execute "alter table users add constraint fk_user_type_id foreign key (type_id) references user_types (id)"
          end

          def self.down
            # Re-add the admin flag
            add_column :users, :admin, :boolean, :default => false

            # Reset the admin flag (this is the problematic update code)
            admin = UserType.find_by_name("ADMINISTRATOR")
            execute "update users set admin=true where type_id=#{admin.id}"

            # Remove foreign key constraint
            execute "alter table users drop foreign key fk_user_type_id"

            # Drop the type_id column
            remove_column :users, :type_id
          end
        end

    As you can see, there are two problematic parts. First, the row creation in the first migration, which is necessary if I want to run all migrations in a row. Then the "update" part in the second migration that maps the "admin" column to the "type_id" column. Any advice?
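    A hedged sketch of the usual mitigation: define throwaway model classes inside the migration itself, so the data mapping depends neither on app code nor on seed data (this keeps the migration self-contained, though many teams prefer one-off rake tasks for data changes; the dynamic-finder syntax below is Rails 2/3 era):

        class AddTypeIdToUsers < ActiveRecord::Migration
          # Migration-local stand-ins; immune to later renames of the real models
          class UserType < ActiveRecord::Base; end
          class User < ActiveRecord::Base; end

          def self.up
            add_column :users, :type_id, :integer
            User.reset_column_information

            basic = UserType.find_or_create_by_name("BASIC")
            admin = UserType.find_or_create_by_name("ADMINISTRATOR")
            User.update_all(["type_id = ?", basic.id], ["admin = ?", false])
            User.update_all(["type_id = ?", admin.id], ["admin = ?", true])

            remove_column :users, :admin
          end
        end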


  • Grouping with operands question

    - by Filip
    I have a table:

        mysql> desc kursy_bid;
        +-----------+-------------+------+-----+---------+-------+
        | Field     | Type        | Null | Key | Default | Extra |
        +-----------+-------------+------+-----+---------+-------+
        | datetime  | datetime    | NO   | PRI | NULL    |       |
        | currency  | varchar(6)  | NO   | PRI | NULL    |       |
        | value     | varchar(10) | YES  |     | NULL    |       |
        +-----------+-------------+------+-----+---------+-------+
        3 rows in set (0.01 sec)

    I would like to select some rows from the table, grouped by some time interval (can be one day), where I will have the first row and the last row of the group, plus max(value) and min(value). I tried:

        select datetime,
               (select value order by datetime asc limit 1) open,
               (select value order by datetime desc limit 1) close,
               max(value), min(value)
        from kursy_bid_test
        where datetime > '2009-09-14 00:00:00' and currency = 'eurpln'
        group by month(datetime), day(datetime), hour(datetime);

    but the output is:

        | open   | close  | datetime            | max(value) | min(value) |
        +--------+--------+---------------------+------------+------------+
        | 1.4581 | 1.4581 | 2009-09-14 00:00:05 | 4.1712     | 1.4581     |
        | 1.4581 | 1.4581 | 2009-09-14 01:00:01 | 1.4581     | 1.4581     |

    As you see, open and close are the same (but they shouldn't be). What should the query be to do what I want?
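    A hedged rewrite using the GROUP_CONCAT/SUBSTRING_INDEX idiom for "first and last value per group" (note that value is stored as varchar, so you may also want CAST(... AS DECIMAL) for clean max/min):

        select min(datetime) as period_start,
               substring_index(group_concat(value order by datetime asc), ',', 1)  as open,
               substring_index(group_concat(value order by datetime desc), ',', 1) as close,
               max(value) as high,
               min(value) as low
        from kursy_bid
        where datetime > '2009-09-14 00:00:00' and currency = 'eurpln'
        group by month(datetime), day(datetime), hour(datetime);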


  • Returning partial address matches and mismatch position using L2S or SQL

    - by peter3
    I need to implement a method that takes an address split up into individual parts and returns any matching items from an address table. If no matches are found, I want to be able to return a value indicating where it failed. Each input param has a corresponding field in the table. The signature would look something like this:

        List<Address> MatchAddress(string zipCode, string streetName,
                                   string houseNumber, string houseLetter,
                                   string floor, string appartmentNo,
                                   out int mismatchPosition)
        {
            // return matching addresses
            // if none found, return the position where it stopped matching
            // zipCode is position 0, appartmentNo is position 5
            //
            // an empty param value indicates "don't check"
        }

    I know I can construct the method such that I start with all the parameters, execute the query, and then remove param by param (from the right side) until either a match is found or I run out of parameters, but can I construct a query that is more effective than that, i.e., minimizing the number of calls to the db, maybe even as a single call?
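    A hedged sketch of a single-round-trip variant: pull the zip-code slice once, then relax the remaining filters in memory from the right (assumes a LINQ to SQL data context named db and modest row counts per zip code):

        var candidates = db.Addresses.Where(a => a.ZipCode == zipCode).ToList();
        var filters = new Func<Address, bool>[] {
            a => streetName == ""   || a.StreetName == streetName,
            a => houseNumber == ""  || a.HouseNumber == houseNumber,
            a => houseLetter == ""  || a.HouseLetter == houseLetter,
            a => floor == ""        || a.Floor == floor,
            a => appartmentNo == "" || a.AppartmentNo == appartmentNo,
        };

        mismatchPosition = -1;   // -1 = no mismatch
        if (candidates.Count == 0) { mismatchPosition = 0; return candidates; }

        var matches = candidates;
        for (int i = 0; i < filters.Length; i++) {
            var narrowed = matches.Where(filters[i]).ToList();
            if (narrowed.Count == 0) { mismatchPosition = i + 1; break; }
            matches = narrowed;
        }
        return matches;

    Only the zip-code query touches the database; everything else is a list scan, which stays cheap as long as one zip code never maps to an enormous number of rows.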


  • Handling Datetime with decimal '2010-02-14 20:18:58.313000000'

    - by AaronLS
    In SQL Server I have some textual data in varchar fields that I am trying to convert to datetimes. The funny thing is that this data was at some point in a datetime field, was exported to a flat file, and now I am reimporting it. The problem is that it is in the format 2010-02-14 20:18:58.313000000, and the conversion to datetime fails. I have no idea how it ended up like this when it was originally extracted from a datetime column. Basically, a table was exported to a flat file by someone else; the original table was lost, and I am reimporting from the flat file. I could just drop the decimal, but that would be like throwing out some of the data. I'd like to maintain as much precision as possible. How can I import this data from the varchar column back into a datetime column and preserve as much accuracy as possible?
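    A hedged sketch of two options (col and dbo.Staging are placeholder names): datetime itself only resolves to roughly 3 ms, so trimming the string to three fractional digits loses nothing that type could hold; on SQL Server 2008 you can keep all seven digits datetime2 supports by trimming to that width instead:

        -- keep what datetime can store (yyyy-mm-dd hh:mm:ss.mmm = 23 chars)
        SELECT CAST(LEFT(col, 23) AS datetime)     FROM dbo.Staging;

        -- keep more precision on 2008+ (7 fractional digits = 27 chars)
        SELECT CAST(LEFT(col, 27) AS datetime2(7)) FROM dbo.Staging;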


  • mySQL Left Join on multiple tables

    - by Jarrod
    Hi, I'm really struggling with this query. I have 4 tables (http://oberto.co.nz/db-sql.png): Invoice_Payments, Invoice, Client and Calendar. I'm trying to create a report by summing up the paid_amount column in Invoice_Payments, by month/year. The query needs to include all months, even those with no data, and it needs the condition (Invoice table): registered_id = [id]. I have tried the query below, which works but falls short when paid_date has no records for a month: the outcome is that the month does not show in the results. I added a Calendar table to resolve this, but I'm not sure how to left join to it.

        SELECT MONTHNAME(Invoice_Payments.date_paid) AS month,
               SUM(Invoice_Payments.paid_amount) AS total
        FROM Invoice, Client, Invoice_Payments
        WHERE Client.registered_id = 1
          AND Client.id = Invoice.client_id
          AND Invoice.id = Invoice_Payments.invoice_id
          AND date_paid IS NOT NULL
        GROUP BY YEAR(Invoice_Payments.date_paid), MONTH(Invoice_Payments.date_paid)

    Please see the above link for a basic ERD diagram of my scenario. Thanks for reading. I've posted this question before, but I think I worded it badly.
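    A hedged sketch of the calendar-driven version (it assumes Calendar has one row per day in a date column; months with no payments then survive as zero rows):

        SELECT MONTHNAME(c.date) AS month,
               COALESCE(SUM(CASE WHEN cl.registered_id = 1
                                 THEN ip.paid_amount END), 0) AS total
        FROM Calendar c
        LEFT JOIN Invoice_Payments ip ON DATE(ip.date_paid) = c.date
        LEFT JOIN Invoice i ON i.id = ip.invoice_id
        LEFT JOIN Client cl ON cl.id = i.client_id
        GROUP BY YEAR(c.date), MONTH(c.date)

    The registered_id test lives inside the SUM rather than in a WHERE clause so that it cannot turn the LEFT JOINs back into inner joins and drop the empty months again.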


  • Need to work out database structure

    - by jim smith
    Hi, I just need a little kickstart with this. I have MySQL/PHP and 5,000 products. For each product I need to store some data for 30 companies, as follows: a) prices, b) stock qty. I also need to store this data historically on a daily basis. So, the table... It makes sense for the records to be the products, because there are 5,000 of them, and if I put the companies as the columns I can store the prices; but what about the stock quantities? I could create two columns for each company, one for price and one for qty, then make the table name the date for that day... so there would be a new table for every day, each with 5,000 products in it? Is this the correct way? Some idea of how I'll be retrieving the data: the top 5 lowest prices (and the company) by product for a certain date; the price and stock changes in the past 7 days by product.
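    A hedged alternative that avoids both per-company columns and per-day tables: one narrow fact table keyed by product, company and date (all names below are assumptions):

        CREATE TABLE product_company_day (
            product_id    INT NOT NULL,
            company_id    INT NOT NULL,
            snapshot_date DATE NOT NULL,
            price         DECIMAL(10,2),
            stock_qty     INT,
            PRIMARY KEY (product_id, company_id, snapshot_date)
        );

        -- top 5 lowest prices (and the company) for a product on a date
        SELECT company_id, price
        FROM product_company_day
        WHERE product_id = 123 AND snapshot_date = '2010-06-01'
        ORDER BY price ASC
        LIMIT 5;

    At 5,000 products × 30 companies this is 150,000 rows per day, which MySQL handles comfortably with the composite key above, and the 7-day-history query becomes a simple range condition on snapshot_date.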


  • Cannot loop through Excel 2003 files in SSIS 2008

    - by Techspirit
    Hi, I am trying to execute an SSIS 2008 package on a 64-bit OS and import Excel 2003 files into SQL Server 2008. I have created an OLEDB connection to the Excel file with a connection string that retrieves the Excel file name from a variable, inside the ForEach Loop container. Run64BitRuntime is set to false. I am not able to edit the SQL command on the OLEDB source in the Data Flow task. It returns this error:

        Error 2 Validation error. Load List Staged Table: Load List Staged Table:
        SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER.
        The AcquireConnection method call to the connection manager
        "List OLEDB to Excel" failed with error code 0xC0202009. There may be
        error messages posted before this with more information on why the
        AcquireConnection method call failed. 0 0

    Appreciate any help.


  • Scraping paginated items from a website using scrapy

    - by Mridang Agarwalla
    I'm using Scrapy to scrape items from a site, but I haven't been able to implement the following scraping pattern. The site I'm trying to scrape is a forum, and I scrape it once a day. Each page has a table containing posts. New posts are added to the top of the table, and as more and more posts are posted to the site, the older posts move further down through the pages due to pagination. This is a very simple scenario, and we can assume that the order of the posts never changes. I would like to scrape this site and collect all the "new" records until the last scraped post from yesterday is encountered. I have configured my spider to paginate endlessly, and when it encounters yesterday's last scraped post, it should stop. How can I implement this? (My Scrapy installation works with my Django installation using django-dynamic-scraper.)
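    A hedged sketch of the stop condition in a plain Scrapy spider (the selectors, the forum URL, and how yesterday's last post id is persisted are all assumptions; django-dynamic-scraper would wire this in differently):

        import scrapy

        class ForumSpider(scrapy.Spider):
            name = "forum"
            start_urls = ["http://example.com/forum?page=1"]
            last_seen_id = "post-12345"   # persisted from yesterday's run

            def parse(self, response):
                reached_old = False
                for row in response.css("table.posts tr"):
                    post_id = row.css("::attr(id)").get()
                    if post_id == self.last_seen_id:
                        reached_old = True      # everything below is old
                        break
                    yield {"id": post_id, "title": row.css("a::text").get()}
                if not reached_old:             # only paginate while still on new posts
                    next_page = response.css("a.next::attr(href)").get()
                    if next_page:
                        yield response.follow(next_page, callback=self.parse)

    The key is that pagination is conditional: the request for the next page is only yielded while the sentinel post has not yet been seen, so the crawl halts by itself instead of paginating endlessly.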

