Search Results

Search found 24675 results on 987 pages for 'table'.


  • Composite keys as Foreign Key?

    - by paulio
    I have the following table: Accounts, with columns ID (int, PK, Identity), AccountType (int, PK), Username (varchar) and Password (varchar). I have created a composite key out of the ID and AccountType columns so that people can have the same username/password but different AccountTypes. Does this mean that for each foreign table I try to link to I'll have to create two columns? I'm using SQL Server 2008.
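
    For illustration, here is a minimal T-SQL sketch (the child table and its columns are assumptions, not part of the question) showing that a table referencing the composite key does indeed have to carry both columns:

    ```sql
    -- Parent table with the composite primary key (ID, AccountType).
    CREATE TABLE Accounts (
        ID          INT IDENTITY(1,1) NOT NULL,
        AccountType INT NOT NULL,
        Username    VARCHAR(50) NOT NULL,
        Password    VARCHAR(50) NOT NULL,
        CONSTRAINT PK_Accounts PRIMARY KEY (ID, AccountType)
    );

    -- Hypothetical child table: the foreign key must list both key columns.
    CREATE TABLE AccountNotes (
        NoteID      INT IDENTITY(1,1) PRIMARY KEY,
        AccountID   INT NOT NULL,
        AccountType INT NOT NULL,
        Note        NVARCHAR(200) NOT NULL,
        CONSTRAINT FK_AccountNotes_Accounts
            FOREIGN KEY (AccountID, AccountType)
            REFERENCES Accounts (ID, AccountType)
    );
    ```

    If carrying both columns into every child table is undesirable, a common alternative is a single surrogate key on Accounts plus a UNIQUE constraint on (Username, AccountType).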

    Read the article

  • JSF actionListener is called multiple times from within HtmlTable

    - by Rose
    I have a mix of columns in my HtmlTable: one column is an action listener, two columns are actions and the other columns are simple output. <h:dataTable styleClass="table" id="orderTable" value="#{table.dataModel}" var="anOrder" binding="#{table.dataTable}" rows="#{table.rows}"> <an:listenerColumn backingBean="${orderEntry}" entity="${anOrder}" actionListener="closeOrder"/> <an:column label="#{msg.hdr_orderStatus}" entity="#{anOrder}" propertyName="orderStatus" /> <an:actionColumn backingBean="${orderEntry}" entity="${anOrder}" action="editOrder" /> <an:actionColumn backingBean="${orderEntry}" entity="${anOrder}" action="viewOrder"/> .... I'm using custom tags, but the behavior is the same if I use the default column tags. I've noticed a very strange effect: when clicking the listener column, the actionEvent is handled 3 times. If I remove the two action columns then the actionEvent is handled only once. The managed bean has session scope; the bean method: public void closeOrder(ActionEvent event) { OrdersDto order; if ((order = orderRow()) == null) { return; } System.out.println("closeOrder() 1 "); orderManager.closeOrder(); System.out.println("closeOrder() 2 "); } The console prints the 'debug' text 3 times.

    Read the article

  • OpenVPN Configuration - Windows 7 client & debian server

    - by Guillaume
    I recently formatted my Windows 7 computer and lost my client's config files for OpenVPN. I recovered the certificates and default config that were left on the server but I haven't managed to make the whole thing work again. I assume the server's config and routing table are OK because it was working before (although quite some time ago). Would any of you experts be able to help? server.conf # Serveur TCP/666 mode server proto udp port 666 dev tun # Cles et certificats ca ca.crt cert server.crt key server.key dh dh1024.pem tls-auth ta.key 0 cipher AES-256-CBC # Reseau server 10.8.0.0 255.255.255.0 #push "redirect-gateway def1 bypass-dhcp" push "dhcp-option DNS 208.67.222.222" push "dhcp-option DNS 208.67.220.220" push "redirect-gateway def1" keepalive 10 120 # Securite user nobody group nogroup chroot /etc/openvpn/jail persist-key persist-tun comp-lzo # Log verb 3 mute 20 status openvpn-status.log log-append /var/log/openvpn.log client.conf # Client client dev tun proto udp remote *my server's ip address*:666 cipher AES-256-CBC # Cles ca ca.crt cert client1.crt key client1.key tls-auth ta.key 1 # Securite nobind persist-key persist-tun comp-lzo verb 3 Routing table on debian server when OpenVPN server is running: Destination Gateway Genmask Indic Metric Ref Use Iface 10.8.0.2 * 255.255.255.255 UH 0 0 0 tun0 10.8.0.0 10.8.0.2 255.255.255.0 UG 0 0 0 tun0 my server's ip * 255.255.255.0 U 0 0 0 eth0 default 72815.trg.dedic 0.0.0.0 UG 0 0 0 eth0 Routing table on Windows 7 client (OpenVPN not working) =========================================================================== Interface List 19...00 f0 8a 1b 6e 5c ......TAP-Win32 Adapter V9 12...90 2e 34 33 84 7b ......Atheros AR8151 PCI-E Gigabit Ethernet Controller ( NDIS 6.20) 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.11 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 192.168.1.0 255.255.255.0 On-link 192.168.1.11 276 192.168.1.11 255.255.255.255 On-link 192.168.1.11 276 192.168.1.255 255.255.255.255 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.11 276 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.11 276 =========================================================================== Persistent Routes: None IPv6 Route Table =========================================================================== Active Routes: [...] =========================================================================== Persistent Routes: None And when the link is established between my client and the server: The server's routing table stays the same. 
    The client's becomes: =========================================================================== Interface List 19...00 f0 8a 1b 6e 5c ......TAP-Win32 Adapter V9 12...90 2e 34 33 84 7b ......Atheros AR8151 PCI-E Gigabit Ethernet Controller ( NDIS 6.20) 1...........................Software Loopback Interface 1 12...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter 13...00 00 00 00 00 00 00 e0 Teredo Tunneling Pseudo-Interface 16...00 00 00 00 00 00 00 e0 Microsoft ISATAP Adapter #2 =========================================================================== IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.11 20 0.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 10.8.0.1 255.255.255.255 10.8.0.5 10.8.0.6 30 10.8.0.4 255.255.255.252 On-link 10.8.0.6 286 10.8.0.6 255.255.255.255 On-link 10.8.0.6 286 10.8.0.7 255.255.255.255 On-link 10.8.0.6 286 my server's ip 255.255.255.255 192.168.1.1 192.168.1.11 20 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 10.8.0.5 10.8.0.6 30 192.168.1.0 255.255.255.0 On-link 192.168.1.11 276 192.168.1.11 255.255.255.255 On-link 192.168.1.11 276 192.168.1.255 255.255.255.255 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.11 276 224.0.0.0 240.0.0.0 On-link 10.8.0.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.11 276 255.255.255.255 255.255.255.255 On-link 10.8.0.6 286 =========================================================================== Persistent Routes: None What's working: Server and client do connect to each other, and the SSL certificates are OK. The client gets an IP (10.8.0.6) from the server. The OpenVPN client is started as an administrator. But: neither side can ping the other. The 'Gateway' value is empty on the client's side (in the adapter's "status" window). The client has no Internet access when the link is up. Ideal configuration: I only want the client to be able to use the server's Internet access and access its resources (MySQL server in particular). I do not need or want the server to access the client's local network. The client needs to be able to access its local network, although all Internet traffic should be redirected to the VPN link. I've spent a considerable amount of time on this but it's still not working; any help would be much appreciated. Thanks :)
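
    Since the tunnel comes up but no traffic flows and redirect-gateway is pushed, one common missing piece on the Debian side is IP forwarding plus NAT for the tunnel subnet. A hedged sketch only, assuming eth0 is the server's Internet-facing interface and 10.8.0.0/24 is the tunnel network from the config above:

    ```sh
    # Enable IPv4 forwarding (to persist it, set net.ipv4.ip_forward=1 in /etc/sysctl.conf)
    echo 1 > /proc/sys/net/ipv4/ip_forward

    # NAT the VPN clients' traffic out through the server's public interface
    iptables -t nat -A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE

    # Allow forwarding between the tunnel and the public interface
    iptables -A FORWARD -i tun0 -o eth0 -j ACCEPT
    iptables -A FORWARD -i eth0 -o tun0 -m state --state RELATED,ESTABLISHED -j ACCEPT
    ```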

    Read the article

  • How to model a mutually exclusive relationship in sql server

    - by littlechris
    Hi, I have to add functionality to an existing application and I've run into a data situation that I'm not sure how to model. I am restricted to creating new tables and code. If I need to alter the existing structure I think my client may reject the proposal... although if it's the only way to get it right, that is what I will have to do. I have an Item table that can be linked to any number of tables, and these tables may increase over time. An Item can only be linked to one other table, but the record in the other table may have many items linked to it. Examples of the tables/entities being linked to are "Person", "Vehicle", "Building" and "Office". These are all separate tables. Examples of Items are "Pen", "Stapler", "Cushion", "Tyre", "A4 Paper", "Plastic Bag", "Poster" and "Decoration". For instance, a "Poster" may be allocated to a "Person", "Office" or "Building". In the future, if they add a "Conference Room" table, it may also be linked to that. My initial thoughts are: Item { ID, Name } LinkedItem { ItemID, LinkedToTableName, LinkedToID } The LinkedToTableName field will then allow me to identify the correct table to link to in my code. I'm not overly happy with this solution, but I can't quite think of anything else. Please help! :) Thanks!
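
    As a point of comparison, here is a hedged sketch of the proposed LinkedItem design next to the other common option (an "exclusive arc" of nullable foreign keys, one per target table, guarded by a CHECK constraint). Table and column names other than those in the question are assumptions:

    ```sql
    -- Option A: the question's generic-link design (no FK enforcement on LinkedToID).
    CREATE TABLE LinkedItem (
        ItemID            INT NOT NULL REFERENCES Item(ID),
        LinkedToTableName VARCHAR(50) NOT NULL,   -- e.g. 'Person', 'Office', 'Building'
        LinkedToID        INT NOT NULL,
        PRIMARY KEY (ItemID)                      -- each item links to exactly one record
    );

    -- Option B: exclusive arc; real foreign keys, but one column per linkable table.
    CREATE TABLE ItemLink (
        ItemID     INT NOT NULL PRIMARY KEY REFERENCES Item(ID),
        PersonID   INT NULL REFERENCES Person(ID),
        OfficeID   INT NULL REFERENCES Office(ID),
        BuildingID INT NULL REFERENCES Building(ID),
        CONSTRAINT CK_ItemLink_OneTarget CHECK (   -- exactly one target must be set
            (CASE WHEN PersonID   IS NOT NULL THEN 1 ELSE 0 END) +
            (CASE WHEN OfficeID   IS NOT NULL THEN 1 ELSE 0 END) +
            (CASE WHEN BuildingID IS NOT NULL THEN 1 ELSE 0 END) = 1
        )
    );
    ```

    The trade-off: Option A keeps the schema stable when tables such as "Conference Room" are added later, while Option B buys referential integrity at the cost of an ALTER TABLE per new target.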

    Read the article

  • c# parameters question

    - by n00b
    I am new to C# and need help understanding what's going on in the following function: public bool parse(String s) { table.Clear(); return parse(s, table, null); } where table is a Dictionary. I can see that it is recursive, but how is parse being passed three params when it is defined to take just a string?
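
    For context, the one-argument method is calling a second method of the same name that takes three parameters, i.e. an overload rather than the method calling itself. A minimal C# sketch (the Dictionary's type arguments and the third parameter are assumptions):

    ```csharp
    using System.Collections.Generic;

    public class Parser
    {
        private readonly Dictionary<string, object> table = new Dictionary<string, object>();

        // Public entry point: one parameter, as in the question.
        public bool parse(string s)
        {
            table.Clear();
            return parse(s, table, null);   // forwards to the three-parameter overload below
        }

        // Same name, different parameter list: an overload, defined elsewhere in the real class.
        private bool parse(string s, Dictionary<string, object> table, object context)
        {
            // ... the actual parsing work would live here ...
            return !string.IsNullOrEmpty(s);
        }
    }
    ```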

    Read the article

  • GAE transaction exceptions

    - by bach
    Hi, In this example, IS the exception thrown if ANY of the Table elements are changed by another client, OR only if the element that we changed has been changed by another client? Just to verify - the exception is thrown from the commit(), isn't it? PersistenceManager pm = PMF.get().getPersistenceManager(); try { pm.currentTransaction().begin(); List<Row> Table = (List<Row>) pm.newQuery(query).execute(); Table.get(0).setReserved(true); // <----- we change only this element pm.currentTransaction().commit(); } catch (JDOCanRetryException ex) { pm.currentTransaction().rollback(); // <----- if Table.get(1) was changed by another client, do we get to this point??? }

    Read the article

  • Mysql - Operand should contain 1 column(s) : What's wrong with the query...?

    - by SpikETidE
    Hi everybody.... I am trying to query a database to find the following: if a customer searches for a hotel in a city between dates A and B, find and return the hotels in which rooms are free between the two dates. There will be more than one room in each room type (i.e. 5 rooms in type A, 10 rooms in type B, etc.) and we have to query the db to find only those hotels in which there is at least one room free in at least one type. This is my table structure.... **Structure for table 'reservations'** reservation_id hotel_id room_id customer_id payment_id no_of_rooms check_in_date check_out_date reservation_date **Structure for table 'hotels'** hotel_id hotel_name hotel_description hotel_address hotel_location hotel_country hotel_city hotel_type hotel_stars hotel_image hotel_deleted **Structure for table 'rooms'** room_id hotel_id room_name max_persons total_rooms room_price room_image agent_commision room_facilities service_tax vat city_tax room_description room_deleted And this is my query: $city_search = '15'; $check_in_date = '29-03-2010'; $check_out_date = '31-03-2010'; $dateFormat_check_in = "DATE_FORMAT('$reservations.check_in_date','%d-%m-%Y')"; $dateFormat_check_out = "DATE_FORMAT('$reservations.check_out_date','%d-%m-%Y')"; $dateCheck = "$dateFormat_check_in >= '$check_in_date' AND $dateFormat_check_out <= '$check_out_date'"; $query = "SELECT $rooms.room_id, $rooms.room_name, $rooms.max_persons, $rooms.room_price, $hotels.hotel_id, $hotels.hotel_name, $hotels.hotel_stars, $hotels.hotel_type FROM $hotels,$rooms,$reservations WHERE $hotels.hotel_city = '$city_search' AND $hotels.hotel_id = $rooms.hotel_id AND $hotels.hotel_deleted = '0' AND $rooms.room_deleted = '0' AND $rooms.total_rooms - (SELECT SUM($reservations.no_of_rooms) as tot, $rooms.room_id as id FROM $reservations,$rooms WHERE $dateCheck GROUP BY $reservations.room_id) > '0'"; The number of rooms already reserved in each room type in each hotel is stored in the reservations table... The query returns the error: Operand should contain 1 column(s). I tried running the sub-query alone but I don't get any result, and I have lost quite some hair trying to debug this query since yesterday... What's wrong with this...? Or is there a better way to do what I mentioned above...? Thanks for your time...
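
    For what it's worth, the error itself comes from the subquery returning two columns (SUM(...) AS tot and room_id AS id) where the comparison expects a single value. A hedged, single-column, correlated rewrite might look like the sketch below; it assumes the check-in/check-out columns are real DATE values and that a simple overlap test is what is wanted:

    ```sql
    -- Sketch only: rooms with at least one unit free for a stay from 2010-03-29 to 2010-03-31.
    SELECT r.room_id, r.room_name, r.max_persons, r.room_price,
           h.hotel_id, h.hotel_name, h.hotel_stars, h.hotel_type
    FROM   hotels h
    JOIN   rooms  r ON r.hotel_id = h.hotel_id
    WHERE  h.hotel_city    = '15'
      AND  h.hotel_deleted = '0'
      AND  r.room_deleted  = '0'
      AND  r.total_rooms - (
             SELECT COALESCE(SUM(res.no_of_rooms), 0)   -- single column: rooms already booked
             FROM   reservations res
             WHERE  res.room_id = r.room_id
               AND  res.check_in_date  <= '2010-03-31'  -- requested check-out
               AND  res.check_out_date >= '2010-03-29'  -- requested check-in
           ) > 0;
    ```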

    Read the article

  • SQL Server 2008 - Search Query

    - by user208662
    Hello, I am not a SQL expert. I'm trying to elegantly solve a query problem that others must have had. Surprisingly, Google is not returning anything that helps. Basically, my application has a "search" box. This search field will allow a user to search for customers in the system. I have a table called "Customer" in my SQL Server 2008 database. This table is defined as follows: Customer: UserName (nvarchar), FirstName (nvarchar), LastName (nvarchar). As you can imagine, my users will enter queries of varying cases and probably misspell the customers' names regularly. How do I query my Customer table and return the 25 results that are closest to their query? I have no idea how to do this ranking while considering the three fields listed in my table. Thank you!
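
    One hedged sketch of a simple ranking using the built-in SOUNDEX/DIFFERENCE functions (full-text search or an edit-distance function would be the heavier-duty alternatives); @Query stands in for the user's search input:

    ```sql
    DECLARE @Query NVARCHAR(100) = N'jonson';   -- assumed search input

    SELECT TOP (25)
           UserName,
           FirstName,
           LastName,
           -- DIFFERENCE returns 0..4 (4 = closest phonetic match); keep the best of the three fields
           (SELECT MAX(d)
            FROM (VALUES (DIFFERENCE(UserName,  @Query)),
                         (DIFFERENCE(FirstName, @Query)),
                         (DIFFERENCE(LastName,  @Query))) AS v(d)) AS MatchScore
    FROM   Customer
    ORDER BY MatchScore DESC, LastName, FirstName;
    ```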

    Read the article

  • Joining a one-to-many association with a many-to-many association in Rails 3

    - by Maz
    Hi all, I have a many-to-many association between a User class and a Table class. Additionally, I have a one-to-many association between the User and the Table (one User ultimately owns the table). I am trying to access all of the tables which the user may access (essentially joining both associations). Additionally, it would be nice to do this with named_scope (now scope). Here's what I have so far: class User < ActiveRecord::Base acts_as_authentic attr_accessible :email, :password, :password_confirmation has_many :feedbacks has_many :tables has_many :user_table_permissions has_many :editableTables, :class_name => "Table", :through => :user_table_permissions def allTables editableTables.merge(tables) end end Thanks.

    Read the article

  • When using Query Syntax in C# "Enumeration yielded no results". How to retrieve output

    - by Shantanu Gupta
    I have created this query to fetch some results from the database. Here is my table structure. What exactly is happening: DtMapGuestDepartment is used as Table 1 and DtDepartment as Table 2. var dept_list= from map in DtMapGuestDepartment.AsEnumerable() where map.Field<Nullable<long>>("GUEST_ID") == DRowGuestPI.Field<Nullable<long>>("PK_GUEST_ID") join dept in DtDepartment.AsEnumerable() on map.Field<Nullable<long>>("DEPARTMENT_ID") equals dept.Field<Nullable<long>>("DEPARTMENT_ID") select dept.Field<string>("DEPARTMENT_ID"); I am performing this query on DataTables and expect it to return a DataTable. I also want to select the distinct departments from Table 1, which will be my next question. Please answer that as well if possible.
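
    A hedged sketch of one way to get a DataTable back (it builds on the question's DtMapGuestDepartment, DtDepartment and DRowGuestPI and needs a reference to System.Data.DataSetExtensions): select the DataRow itself instead of a single field, then call CopyToDataTable; Distinct() covers the follow-up about distinct departments.

    ```csharp
    using System.Data;
    using System.Linq;   // plus a project reference to System.Data.DataSetExtensions

    var deptRows =
        from map in DtMapGuestDepartment.AsEnumerable()
        where map.Field<long?>("GUEST_ID") == DRowGuestPI.Field<long?>("PK_GUEST_ID")
        join dept in DtDepartment.AsEnumerable()
            on map.Field<long?>("DEPARTMENT_ID") equals dept.Field<long?>("DEPARTMENT_ID")
        select dept;                                  // whole DataRow, not a single column

    // CopyToDataTable() throws on an empty sequence, hence the guard.
    DataTable result = deptRows.Any()
        ? deptRows.CopyToDataTable()                  // DataTable of the matching department rows
        : DtDepartment.Clone();                       // empty table with the same schema

    // Distinct department IDs, for the follow-up question:
    var distinctDeptIds = deptRows
        .Select(r => r.Field<long?>("DEPARTMENT_ID"))
        .Distinct();
    ```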

    Read the article

  • NSFetchedResultsController not processing certain section driven moves

    - by JK
    I utilize an NSFetchedResultsController (frc) with a Core Data store. I implement all the frc delegate methods. The table is sporadically updated by background threads. All the inserts, deletes and updates work fine, with the exception that updates to the frc's index key for rows toward the bottom of the table (50 rows) do not result in a section move. E.g. if "name" is the index key and the name "Victor" is changed to "Alex", the Victor row now shows the name Alex, but is not moved to the top of the table alongside all other names starting with A. As I noted, this happens only for rows towards the bottom of the table. If a row like "Andy" is changed to "Ben", the move is indeed processed correctly by the frc. Any suggestions to fix this would be appreciated. I do not use an frc cache. Thanks

    Read the article

  • show horizontal scroll bar in jqgrid

    - by Hunt
    Please see the link below: http://www.logicatrix.com/example/records.html In this table there are many columns, so what I want is to fit the whole table into the drawn gray border, i.e. the div element with class name bms-dashboard-body, with a horizontal scroll bar just like the small one an Excel sheet has in the bottom-right corner. Is it possible to create a liquid layout for this jqGrid table? If someone has another approach to fit this table, I don't mind.
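
    A hedged sketch of the usual jqGrid approach (the grid id and the 900px width are assumptions): give the grid a width matching the container and set shrinkToFit to false, so columns keep their declared widths and the overflow produces a horizontal scrollbar.

    ```javascript
    // Sketch only: #recordsGrid and 900 are placeholders for the real id and container width.
    $("#recordsGrid").jqGrid({
        url: "records.json",
        datatype: "json",
        colModel: [ /* existing column definitions, each with an explicit width */ ],
        width: 900,          // match the gray-bordered div (.bms-dashboard-body)
        shrinkToFit: false,  // keep column widths; overflow creates a horizontal scrollbar
        height: "auto"
    });
    ```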

    Read the article

  • Event feed implementation - will it scale?

    - by SlappyTheFish
    Situation: I am currently designing a feed system for a social website whereby each user has a feed of their friends' activities. I have two possible methods of generating the feeds and I would like to ask which is best in terms of ability to scale. Events from all users are collected in one central database table, event_log. Users are paired as friends in the table friends. The RDBMS we are using is MySQL. Standard method: When a user requests their feed page, the system generates the feed by inner joining event_log with friends. The result is then cached and set to timeout after 5 minutes. Scaling is achieved by varying this timeout. Hypothesised method: A task runs in the background and for each new, unprocessed item in event_log, it creates entries in the database table user_feed pairing that event with all of the users who are friends with the user who initiated the event. One table row pairs one event with one user. The problems with the standard method are well known – what if a lot of people's caches expire at the same time? The solution also does not scale well – the brief is for feeds to update as close to real-time as possible. The hypothesised solution in my eyes seems much better; all processing is done offline so no user waits for a page to generate and there are no joins so database tables can be sharded across physical machines. However, if a user has 100,000 friends and creates 20 events in one session, then that results in inserting 2,000,000 rows into the database. Question: The question boils down to two points: Is this worst-case scenario mentioned above problematic, i.e. does table size have an impact on MySQL performance and are there any issues with this mass inserting of data for each event? Is there anything else I have missed?
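
    For illustration, a minimal MySQL sketch of the fan-out table the hypothesised method describes (column names are assumptions): one row per (recipient, event) pair, written by the background task, with an index shaped for the per-user feed read.

    ```sql
    CREATE TABLE user_feed (
        user_id    INT UNSIGNED NOT NULL,           -- the friend who should see the event
        event_id   INT UNSIGNED NOT NULL,           -- references event_log
        created_at DATETIME     NOT NULL,
        PRIMARY KEY (user_id, event_id),
        KEY idx_user_created (user_id, created_at)  -- feed page: WHERE user_id = ? ORDER BY created_at DESC
    ) ENGINE=InnoDB;
    ```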

    Read the article

  • How to map a Dictionary<string, string> spanning several tables

    - by Kim Johansson
    I have four tables: CREATE TABLE [Languages] ( [Id] INTEGER IDENTITY(1,1) NOT NULL, [Code] NVARCHAR(10) NOT NULL, PRIMARY KEY ([Id]), UNIQUE INDEX ([Code]) ); CREATE TABLE [Words] ( [Id] INTEGER IDENTITY(1,1) NOT NULL, PRIMARY KEY ([Id]) ); CREATE TABLE [WordTranslations] ( [Id] INTEGER IDENTITY(1,1) NOT NULL, [Value] NVARCHAR(100) NOT NULL, [Word] INTEGER NOT NULL, [Language] INTEGER NOT NULL, PRIMARY KEY ([Id]), FOREIGN KEY ([Word]) REFERENCES [Words] ([Id]), FOREIGN KEY ([Language]) REFERENCES [Languages] ([Id]) ); CREATE TABLE [Categories] ( [Id] INTEGER IDENTITY(1,1) NOT NULL, [Word] INTEGER NOT NULL, PRIMARY KEY ([Id]), FOREIGN KEY ([Word]) REFERENCES [Words] ([Id]) ); So you get the name of a Category via the Word - WordTranslation - Language relations. Like this: SELECT TOP 1 wt.Value FROM [Categories] AS c LEFT JOIN [WordTranslations] AS wt ON c.Word = wt.Word WHERE wt.Language = ( SELECT TOP 1 l.Id FROM [Languages] AS l WHERE l.[Code] = N'en-US' ) AND c.Id = 1; That would return the en-US translation of the Category with Id = 1. My question is how to map this using the following class: public class Category { public virtual int Id { get; set; } public virtual IDictionary<string, string> Translations { get; set; } } Getting the same as the SQL query above would be: Category category = session.Get<Category>(1); string name = category.Translations["en-US"]; And "name" would now contain the Category's name in en-US. Category is mapped against the Categories table. How would you do this and is it even possible?

    Read the article

  • foreign key constraints on primary key columns - issues ?

    - by zzzeek
    What are the pros/cons, from a performance/indexing/data management perspective, of creating a one-to-one relationship between tables using the primary key on the child as the foreign key, versus a pure surrogate primary key on the child? The first approach seems to reduce redundancy and nicely constrains the one-to-one implicitly, while the second approach seems to be favored by DBAs, even though it creates a second index. Shared primary key: create table parent ( id integer primary key, data varchar(50) ) create table child ( id integer primary key references parent(id), data varchar(50) ) Pure surrogate key: create table parent ( id integer primary key, data varchar(50) ) create table child ( id integer primary key, parent_id integer unique references parent(id), data varchar(50) ) The platforms of interest here are PostgreSQL and Microsoft SQL Server.

    Read the article

  • SQL Update help

    - by Cyborgo
    I have a really simple question: is it possible to update a table with new values using just one UPDATE statement? Say, for example, I have a table with author, title, date and popularity. Now I have some new data which has author name, title and the corresponding new popularity. How do I update the table in one statement? Note that author and title are not unique.
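
    Assuming the new values arrive in a staging table (called new_popularity here, purely an assumption), a single UPDATE with a join can apply them; because author and title are not unique, every matching row receives the new value. MySQL-style syntax shown; SQL Server would phrase the same thing as UPDATE ... FROM with a JOIN.

    ```sql
    -- Sketch only: table and column names are assumptions.
    UPDATE books AS b
    JOIN   new_popularity AS n
           ON  n.author = b.author
           AND n.title  = b.title
    SET    b.popularity = n.popularity;
    ```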

    Read the article

  • Using ASP.NET SQL Membership Provider, how do I store my own per-user data?

    - by Gary McGill
    I'm using the ASP.NET SQL Membership Provider. So, there's an aspnet_Users table that has details of each of my users. (Actually, the aspnet_Membership table seems to contain most of the actual data). I now want to store some per-user information in my database, so I thought I'd just create a new table with a UserId (GUID) column and an FK relationship to aspnet_Users. However, I then discovered that I can't easily get access to the UserId since it's not exposed via the membership API. (I know I can access it via the ProviderUserKey, but it seems like the API is abstracting away the internal UserID in favor of the UserName, and I don't want to go too far against the grain). So, I thought I should instead put a LoweredUserName column in my table, and create an FK relationship to aspnet_Users using that. Bzzzt. Wrong again, because while there is a unique index in aspnet_Users that includes the LoweredUserName, it also includes the ApplicationId - so in order to create my FK relationship, I'd need to have an ApplicationId column in my table too. At first I thought: fine, I'm only dealing with a single application, so I'll just add such a column and give it a default value. Then I realised that the ApplicationId is a GUID, so it'd be a pain to do this. Not hard exactly, but until I roll out my DB I can't predict what the GUID is going to be. I feel like I'm missing something, or going about things the wrong way. What am I supposed to do?
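
    A hedged sketch of the straightforward route: key the profile table on the provider's UserId (a uniqueidentifier) with a foreign key to aspnet_Users, and fetch the value in code via Membership.GetUser().ProviderUserKey, which the question already mentions. Table and column names below are assumptions:

    ```sql
    -- Per-user profile data keyed on the membership provider's own UserId.
    CREATE TABLE dbo.UserProfile (
        UserId      UNIQUEIDENTIFIER NOT NULL
                    CONSTRAINT PK_UserProfile PRIMARY KEY
                    CONSTRAINT FK_UserProfile_aspnet_Users REFERENCES dbo.aspnet_Users (UserId),
        DisplayName NVARCHAR(100) NULL,
        Bio         NVARCHAR(MAX)  NULL
    );
    ```

    This keeps the foreign key on the provider's real key and sidesteps LoweredUserName and ApplicationId entirely.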

    Read the article

  • How to change SRID of geometry column?

    - by Z77
    I have a table where one of the columns is a geometry column, the_geom, holding polygons with an SRID. I added a new column in the same table with exactly the same geometry data as in the_geom. This other column is named the_geom4258 because I want to give it another SRID, 4258. So what is the procedure to change the geometry to another SRID (another coordinate system)? Is it enough to just apply the following query: UPDATE table SET the_geom4258=ST_SetSRID(the_geom4258,4258);
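
    For reference, a hedged sketch of the distinction in PostGIS: ST_SetSRID only relabels the metadata (the coordinates are left untouched), while ST_Transform actually reprojects the geometry into the target coordinate system. The table name below is a placeholder:

    ```sql
    -- Relabel only (coordinates unchanged): appropriate only if they are already in 4258.
    UPDATE my_table SET the_geom4258 = ST_SetSRID(the_geom, 4258);

    -- Reproject (coordinates converted from the_geom's current SRID into 4258).
    UPDATE my_table SET the_geom4258 = ST_Transform(the_geom, 4258);
    ```

    If the_geom4258 was added via AddGeometryColumn with an enforce_srid constraint, that constraint has to agree with 4258 as well.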

    Read the article

  • How to check value of stored procedure output parameter

    - by Anna T
    I have a stored procedure that: A. inserts some rows into a "table variable" based on some joins; B. selects all values from column x of that table into a comma-separated string; C. selects everything from the "table variable". If I execute the procedure like this: EXEC CatalogGetFilmDetails2 2,111111; a table is returned as instructed per step C above. How can I execute it so that the output parameter value is also displayed (see point B above)? I need to check whether it's calculated properly. And since the second parameter is of output type, meaning it's calculated inside the procedure, why is it mandatory to specify a value for it when executing the procedure? I normally use a random value for it; it doesn't impact the result anyway. (On the other hand, if I try to execute it without the output parameter, it returns an error.) Thank you very much! This is how the procedure starts: CREATE PROCEDURE CatalogGetFilmDetails2 (@FilmID int, @CommaSepString VARCHAR(50) OUTPUT) AS And this is how @CommaSepString is calculated: SELECT @CommaSepString = STUFF((SELECT ', ' + Categ FROM @Filme1 FOR XML PATH('')), 1,1,'')
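
    A hedged sketch of the usual calling pattern: declare a local variable, pass it with the OUTPUT keyword, then SELECT it after the call. Passing a literal without OUTPUT is accepted, which is why a "random value" works, but the calculated value is simply discarded on the way out.

    ```sql
    DECLARE @CommaSepString VARCHAR(50);

    EXEC CatalogGetFilmDetails2
         @FilmID = 2,
         @CommaSepString = @CommaSepString OUTPUT;  -- OUTPUT is required to get the value back

    SELECT @CommaSepString AS CommaSepString;       -- the calculated value, alongside the proc's result set
    ```

    Making the parameter optional is a separate change: give it a default in the procedure header, e.g. @CommaSepString VARCHAR(50) = NULL OUTPUT.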

    Read the article

  • GWT: Populating a page from datastore using RPC is too slow

    - by Ilya Boyandin
    Is there a way to speed up the population of a page with GWT's UI elements which are generated from data loaded from the datastore? Can I avoid making the unnecessary RPC call when the page is loaded? More details about the problem I am experiencing: There is a page on which I generate a table with names and buttons for a list of entities loaded from the datastore. There is an EntryPoint for the page and in its onModuleLoad() I do something like this: final FlexTable table = new FlexTable(); rpcAsyncService.getAllCandidates(new AsyncCallback<List<Candidate>>() { public void onSuccess(List<Candidate> candidates) { int row = 0; for (Candidate person : candidates) { table.setText(row, 0, person.getName()); table.setWidget(row, 1, new ToggleButton("Yes")); table.setWidget(row, 2, new ToggleButton("No")); row++; } } ... }); This works, but takes more than 30 seconds to load the page with buttons for 300 candidates. This is unacceptable. The app is running on Google App Engine and using the app engine's datastore.

    Read the article

  • MySQL and INT auto_increment fields

    - by PHPguy
    Hello folks, I've been developing in LAMP (Linux+Apache+MySQL+PHP) for as long as I can remember. But one question has been bugging me for years now. I hope you can help me find an answer and point me in the right direction. Here is my challenge: Say we are creating a community website where we allow our users to register. The MySQL table where we store all users would then look like this: CREATE TABLE `users` ( `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID', `name` varchar(20) NOT NULL, `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text', `email` varchar(64) NOT NULL, `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration', `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email', PRIMARY KEY (`uid`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; So, from this snippet you can see that we have a 'uid' field that is unique and automatically incremented for every new user. As on every good and loyal community website, we need to provide users with the possibility to completely delete their profile if they want to cancel their participation in our community. Here comes my problem. Let's say we have 3 registered users: Alice (uid = 1), Bob (uid = 2) and Chris (uid = 3). Now Bob wants to delete his profile and stop using our community. If we delete Bob's profile from the 'users' table then his missing 'uid' will create a gap which will never be filled again. In my opinion it's a huge waste of uid's. I see 3 possible solutions here: 1) Increase the capacity of the 'uid' field in our table from SMALLINT (int(2)) to, for example, BIGINT (int(8)) and ignore the fact that some of the uid's will be wasted. 2) Introduce a new field 'is_deleted', which will be used to mark deleted profiles (but keep them in the table, instead of deleting them) to re-utilize their uid's for newly registered users. The table would then look like this: CREATE TABLE `users` ( `uid` int(2) unsigned NOT NULL auto_increment COMMENT 'User ID', `name` varchar(20) NOT NULL, `password` varchar(32) NOT NULL COMMENT 'Password is saved as a 32-bytes hash, never in plain text', `email` varchar(64) NOT NULL, `is_deleted` int(1) unsigned NOT NULL default '0' COMMENT 'If equal to "1" then the profile has been deleted and will be re-used for new registrations', `created` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of registration', `updated` int(11) unsigned NOT NULL default '0' COMMENT 'Timestamp of profile update, e.g. change of email', PRIMARY KEY (`uid`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; 3) Write a script to shift all following user records once a previous record has been deleted. E.g. in our case, when Bob (uid = 2) decides to remove his profile, we would replace his record with the record of Chris (uid = 3), so that Chris's uid becomes equal to 2, and mark (is_deleted = '1') the old record of Chris as vacant for new users. In this case we keep the chronological order of uid's according to registration time, so that the older users have lower uid's. Please advise me which is the right way to handle the gaps in the auto_increment fields. This is just one example with users, but such cases occur very often in my programming experience. Thanks in advance!

    Read the article

  • A good approach to db planing for reporting service

    - by Itay Moav
    The scenario: a big system (~200 tables), 60,000 users, and complex reports that will require me to do multiple queries for each report, and even those will be complex queries with inner queries all over the place plus some processing in PHP. I have seen an approach which I am not sure about: having one centralized, de-normalized table that registers any reportable activity in the system. This table will hold mostly foreign keys, so it should be fairly compact and fast. So, for example (my system is a virtual learning management system), a user enrolls in a course, and the table stores the user id, date, course id, organization id and activity type (enrollment). Of course I also store this data in a normalized DB, which the actual application uses. Pros I see: easy, maintainable queries and code to process the data, and fast retrieval. Cons: there is a danger of the de-normalized table being out of sync with the real DB. Is this approach worth considering, or (preferably from experience) is it total $#%#%t?

    Read the article

  • SQL Query to return maximums over decades

    - by Abraham Lincoln
    My question is the following. I have a baseball database, and in that baseball database there is a master table which lists every player that has ever played. There is also a batting table, which tracks every player's batting statistics. I created a view to join those two together; hence the masterplusbatting table. CREATE TABLE `Master` ( `lahmanID` int(9) NOT NULL auto_increment, `playerID` varchar(10) NOT NULL default '', `nameFirst` varchar(50) default NULL, `nameLast` varchar(50) NOT NULL default '', PRIMARY KEY (`lahmanID`), KEY `playerID` (`playerID`), ) ENGINE=MyISAM AUTO_INCREMENT=18968 DEFAULT CHARSET=latin1; CREATE TABLE `Batting` ( `playerID` varchar(9) NOT NULL default '', `yearID` smallint(4) unsigned NOT NULL default '0', `teamID` char(3) NOT NULL default '', `lgID` char(2) NOT NULL default '', `HR` smallint(3) unsigned default NULL, PRIMARY KEY (`playerID`,`yearID`,`stint`), KEY `playerID` (`playerID`), KEY `team` (`teamID`,`yearID`,`lgID`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1; Anyway, my first query involved finding the most home runs hit every year since baseball began, including ties. The query to do that is the following.... select f.yearID, f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS This worked great. However, I now want to find the highest HR hitter in each decade since baseball began. Here is what I tried. select f.yearID, truncate(f.yearid/10,0) as decade,f.nameFirst, f.nameLast, f.HR from ( select yearID, max(HR) as HOMERS from masterplusbatting group by yearID )as x inner join masterplusbatting as f on f.yearID = x.yearId and f.HR = x.HOMERS group by decade You can see that I truncated the yearID in order to get 187, 188, 189, etc. instead of 1897, 1885, and so on. I then grouped by the decade, thinking that it would give me the highest per decade, but it is not returning the correct values. For example, it's giving me Adrian Beltre with 48 HR's in 2004 but everyone knows that Barry Bonds hit 73 HR in 2001. Can anyone give me some pointers?
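
    One hedged way to fix it: MySQL's loose GROUP BY simply collapses each decade onto an arbitrary joined row (hence the Adrian Beltre result), so the maximum has to be computed per decade first and then joined back, mirroring the yearly query. Sketch only, reusing the masterplusbatting view:

    ```sql
    SELECT f.yearID,
           TRUNCATE(f.yearID / 10, 0) AS decade,
           f.nameFirst,
           f.nameLast,
           f.HR
    FROM (
           -- highest HR total within each decade
           SELECT TRUNCATE(yearID / 10, 0) AS decade,
                  MAX(HR) AS HOMERS
           FROM   masterplusbatting
           GROUP  BY TRUNCATE(yearID / 10, 0)
         ) AS x
    INNER JOIN masterplusbatting AS f
            ON TRUNCATE(f.yearID / 10, 0) = x.decade
           AND f.HR = x.HOMERS
    ORDER BY decade;
    ```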

    Read the article

  • Will Apple reject my app if I do not do this?

    - by mystify
    From the documentation of UITableView / UITableViewController: If you decide to use a UIViewController subclass rather than a subclass of UITableViewController to manage a table view, you should perform a couple of the tasks mentioned above to conform to the human-interface guidelines. To clear any selection in the table view before it’s displayed, implement the viewWillAppear: method to clear the selected row (if any) by calling deselectRowAtIndexPath:animated:. After the table view has been displayed, you should flash the scroll view’s scroll indicators by sending a flashScrollIndicators message to the table view; you can do this in an override of the viewDidAppear: method of UIViewController. So let's say I do my custom stuff here and I do not flash the scroll indicator, and I do not reset the selection (which I think is wrong anyway; the user wants to know where he came from). Will they reject it?

    Read the article
