Search Results

Search found 32492 results on 1300 pages for 'reporting database'.

  • Question about OOP and objects.

    - by loddn
    I have a school assignment, a Dog Show. My task is to create a website where visitors can view results and where the judges and the secretary can administer the data (CRUD). I have a small problem with one part of the assignment: a result should be based on two protocols from different judges, and then checked by the secretary before it is displayed to the user. I'm fairly new to programming, so I need some smart suggestions on how to design and implement this. The assignment should cover both a database and C# (.NET MVC). Q1: How do I create an object (result) that depends on two other objects (protocols)? Is that even needed? Q2: How do I solve this in a relational database?
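    One way to think about Q1, sketched below in Python purely for brevity (the assignment itself is C#, and all class and field names here are hypothetical): a Result is constructed from exactly two judge Protocols and carries an approved flag that only the secretary flips, so visitors never see an unchecked result.

        # Minimal sketch, not the assignment's actual model: a Result depends on
        # two Protocol objects and stays hidden until the secretary approves it.
        class Protocol:
            def __init__(self, judge, dog, score):
                self.judge = judge
                self.dog = dog
                self.score = score

        class Result:
            def __init__(self, protocol_a, protocol_b):
                if protocol_a.dog != protocol_b.dog:
                    raise ValueError("both protocols must be for the same dog")
                if protocol_a.judge == protocol_b.judge:
                    raise ValueError("the two protocols must come from different judges")
                self.protocols = (protocol_a, protocol_b)
                self.approved = False  # set to True by the secretary after checking

            @property
            def total_score(self):
                return sum(p.score for p in self.protocols)

    For Q2, one possible relational shape (only a suggestion) is a Protocol table with foreign keys to the dog and the judge, and a Result table that either holds two foreign keys to Protocol rows or is referenced by a result_id column on Protocol, plus an approved flag for the secretary.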

    Read the article

  • Growing MS Access File Size problem

    - by user55886
    I have a large MS Access application with a lot of computations in VBA code. When I run it, it eventually crashes due to excessive file size. A lot of intermediate tables and queries are created and subsequently deleted, but Access does not reclaim the space. I have diligently closed all intermediate recordsets and set all temporary objects to Nothing, but nothing helps. The only way I can get my code to run is to run part of it, stop, compact and repair the file, then restart the code. Isn't there a better way? Thanks

    Read the article

  • MongoDB or CouchDB - fit for production?

    - by Alan
    I was wondering if anyone can tell me whether MongoDB or CouchDB is ready for a production environment. I'm now looking at these storage solutions (I'm favouring MongoDB at the moment), however these projects are quite young, so I foresee that I'm going to have to work quite hard to convince my manager that we should adopt this new technology. What I'd like to know is:
    1) Who is using MongoDB or CouchDB today in a production environment?
    2) How are you using MongoDB/CouchDB?
    3) What problems (if any) did you come across when you adopted this new storage mechanism (and how did you overcome them)?
    4) How did you deal with any migration issues?
    5) Do you have any good/bad experiences with either of these solutions that you'd like to share?
    Thanks.

    Read the article

  • capturing user IP address information for audit

    - by Samuel
    We have a requirement to log the IP address of every user of a certain web application based on JEE 5. What would be an appropriate SQL data type for storing IPv4 or IPv6 addresses in the supported databases (H2, MySQL, Oracle)? There is also a need to filter activity from certain IP addresses. Should I just treat the representation as a string field (say, varchar(32)) to hold both IPv4 and IPv6 addresses?
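    As a rough illustration (a Python sketch rather than Java, and only one possible approach): normalizing every address to a canonical text form before storing it keeps string comparison and filtering consistent for both IPv4 and IPv6. The longest common textual form (an IPv4-mapped IPv6 address) is 45 characters, which is why VARCHAR(45) is a frequently used size.

        import ipaddress

        def canonical_ip(raw: str) -> str:
            """Normalize an IPv4 or IPv6 address to one canonical string form."""
            addr = ipaddress.ip_address(raw.strip())
            return str(addr)  # e.g. '192.168.0.1' or '2001:db8::1' (compressed form)

        # Filtering against a known address becomes a plain equality test,
        # or a membership check for whole networks:
        def in_network(raw: str, network: str) -> bool:
            return ipaddress.ip_address(raw) in ipaddress.ip_network(network)

        print(canonical_ip("2001:0db8:0000:0000:0000:0000:0000:0001"))  # 2001:db8::1
        print(in_network("192.168.0.17", "192.168.0.0/24"))             # True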

    Read the article

  • e-commerce product data/metadata schemas

    - by Shreko
    Trying to figure out how a product data/metadata schema is designed. For example, how does an e-commerce site enter a product spec? Does it copy and paste from the manufacturer's spec sheet, enter it into its own fields, or something else? Here is an example, looking at the Nikon D3000 DSLR.
    Manufacturer: http://nikon.ca/en/Product.aspx?m=17300&disp=Specs
    futureshop.ca: www.futureshop.ca/en-CA/product/nikon-nikon-d3000-10-2mp-dslr-camera-with-18-55mm-lens-kit-d3000/10128435.aspx?path=865c2348a1542e848982c9dbd9253483en02
    memoryexpress.com: www.memoryexpress.com/Products/PID-MX25539%28ME%29.aspx
    They are all slightly different in ordering and in parent/child fields. What storage is used for this type of info, an RDBMS or XML?

    Read the article

  • Challenge: merging CSV files intelligently!

    - by Evenz495
    We are in the middle of changing web store platform and we need to import product data from different sources. We currently have several different CSV files from different IT systems/databases, because each system is missing some information. Fortunately the product IDs are the same, so it's possible to relate the data using the IDs. We need to merge this data into one big CSV file so we can import it into our new e-commerce site. My question: is there a general approach when you need to merge CSV files with related data into one CSV file? Are there any applications or tools that help you out?
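    A minimal sketch of the general approach in Python (the file names and the product_id column are made up): read each file into a dictionary keyed by the shared product ID, merge the dictionaries, then write the union of the columns back out.

        import csv

        def load(path, key="product_id"):
            """Read one CSV file into {product_id: row_dict}."""
            with open(path, newline="", encoding="utf-8") as f:
                return {row[key]: row for row in csv.DictReader(f)}

        sources = ["stock.csv", "prices.csv", "descriptions.csv"]  # hypothetical files
        merged = {}
        for path in sources:
            for pid, row in load(path).items():
                merged.setdefault(pid, {}).update(row)  # later files fill in missing fields

        # Write one wide CSV whose header is the union of all columns seen.
        fieldnames = sorted({col for row in merged.values() for col in row})
        with open("merged.csv", "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames, restval="")
            writer.writeheader()
            writer.writerows(merged.values())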

    Read the article

  • Query two tables from different schema

    - by Guru
    Hi, I have two different schemas in Oracle (say S1 and S2), and two tables in those schemas (say S1.Table1 and S2.Table2). I want to query these two tables from schema S1; S1 and S2 are in different databases. From DB1 - schema S1, I want to do something like this:

        select T1.Id
        from S1.Table1 T1, S2.Table2 T2
        where T1.Id = T2.refId

    I know one way of doing this would be to create a DB link for the second schema and use it in the query, but sadly I don't have the privilege to create a DB link. Is there some way to do this without a DB link? In TOAD, for example, you can compare two schemas' objects, but that is a general comparison, not querying them. Any ideas or suggestions are greatly appreciated. Thanks in advance.

    Read the article

  • Setting 0/1-to-1 relationship in SQLAlchemy?

    - by Timmy
    Is there a proper way of setting up a 0/1-to-1 relationship? I want to be able to check whether the related item exists without creating it:

        if item.relationship is None:
            item.relationship = item2()

    but it creates the insert statements on the if statement.
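    For reference, a minimal sketch (table and class names are invented) of how a 0/1-to-1 relationship is usually declared in SQLAlchemy, with uselist=False so the attribute is a scalar that reads as None when no related row exists; simply reading it should only emit a SELECT, not an INSERT.

        from sqlalchemy import Column, ForeignKey, Integer
        # SQLAlchemy 1.4+ import path; older versions use sqlalchemy.ext.declarative
        from sqlalchemy.orm import declarative_base, relationship

        Base = declarative_base()

        class Item(Base):
            __tablename__ = "item"
            id = Column(Integer, primary_key=True)
            # uselist=False -> scalar attribute: Item.detail is None or an ItemDetail
            detail = relationship("ItemDetail", uselist=False, back_populates="item")

        class ItemDetail(Base):
            __tablename__ = "item_detail"
            id = Column(Integer, primary_key=True)
            item_id = Column(Integer, ForeignKey("item.id"), unique=True, nullable=False)
            item = relationship("Item", back_populates="detail")

        # usage: the check itself does not insert anything
        # if some_item.detail is None:
        #     some_item.detail = ItemDetail()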

    Read the article

  • sql combine two subqueries

    - by Claudiu
    I have two tables. Table A has an id column. Table B has an Aid column and a type column. Example data:

        A:
        id
        --
        1
        2

        B:
        Aid | type
        ----+-----
          1 |  1
          1 |  1
          1 |  3
          1 |  1
          1 |  4
          1 |  5
          1 |  4
          2 |  2
          2 |  4
          2 |  3

    I want to get all the IDs from table A where there is a certain number of type 1 and type 3 actions. My query looks like this:

        SELECT id FROM A
        WHERE (SELECT COUNT(type) FROM B WHERE B.Aid = A.id AND B.type = 1) = 3
          AND (SELECT COUNT(type) FROM B WHERE B.Aid = A.id AND B.type = 3) = 1

    So on the data above, just id 1 should be returned. Can I combine the 2 subqueries somehow?

    Read the article

  • MySQL on Windows 7 multiple connections issue.

    - by Hussain
    I just installed MySQL on Windows 7. The issue is that I'm unable to get multiple connections to MySQL: if I connect with the command line, then I'm unable to connect through any other client, and vice versa. Previously I was using the same version of MySQL, i.e. 5.1, on Vista and it was working fine.

    Read the article

  • How would you structure your entity model for storing arbitrary key/value data with different data types?

    - by Nathan Ridley
    I keep coming across scenarios where it will be useful to store a set of arbitrary data in a table using a per-row key/value model, rather than a rigid column/field model. The problem is, I want to store the values with their correct data type rather than converting everything to a string. This means I have to choose either a single table with multiple nullable columns, one for each data type, or a set of value tables, one for each data type. I'm also unsure whether I should use full third normal form and separate the keys into their own table, referencing them via a foreign key from the value table(s), or whether it would be better to keep things simple, store the string keys in the value table(s), and accept the duplication of strings.

    Old/bad: This solution makes adding additional values a pain in a fluid environment because the table needs to be modified regularly.

        MyTable
        ============================
        ID     Key1     Key2     Key3
        int    int      string   date
        ----------------------------
        1      Value1   Value2   Value3
        2      Value4   Value5   Value6

    Single Table Solution: This solution allows simplicity via a single table. The querying code still needs to check for nulls to determine which data type the field is storing. A check constraint is probably also required to ensure only one of the value fields contains non-null data.

        DataValues
        =============================================================
        ID    RecordID   Key      IntValue   StringValue   DateValue
        int   int        string   int        string        date
        -------------------------------------------------------------
        1     1          Key1     Value1     NULL          NULL
        2     1          Key2     NULL       Value2        NULL
        3     1          Key3     NULL       NULL          Value3
        4     2          Key1     Value4     NULL          NULL
        5     2          Key2     NULL       Value5        NULL
        6     2          Key3     NULL       NULL          Value6

    Multiple-Table Solution: This solution allows for more concise purposing of each table, though the code needs to know the data type in advance because it has to query a different table for each type. Indexing is probably simpler and more efficient because there are fewer columns that need indexing.

        IntegerValues
        ===============================
        ID    RecordID   Key      Value
        int   int        string   int
        -------------------------------
        1     1          Key1     Value1
        2     2          Key1     Value4

        StringValues
        ===============================
        ID    RecordID   Key      Value
        int   int        string   string
        -------------------------------
        1     1          Key2     Value2
        2     2          Key2     Value5

        DateValues
        ===============================
        ID    RecordID   Key      Value
        int   int        string   date
        -------------------------------
        1     1          Key3     Value3
        2     2          Key3     Value6

    How do you approach this problem? Which solution is better? Also, should the key column be separated into its own table and referenced via a foreign key, or should it be kept in the value table and bulk updated if for some reason the key name changes?
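    As a small illustration of the read-side cost of the single-table layout (a Python sketch; the row shape and names are hypothetical, mirroring the DataValues example above): the code that reads a value has to look at whichever typed column is non-null, and every consumer of the table ends up repeating this branching.

        # Each row from DataValues as a dict; exactly one of the typed slots is set.
        TYPED_COLUMNS = ("IntValue", "StringValue", "DateValue")

        def read_value(row):
            """Return (key, value) from a single-table EAV row, whatever its type."""
            for column in TYPED_COLUMNS:
                if row[column] is not None:
                    return row["Key"], row[column]
            raise ValueError("row %r has no value in any typed column" % row["ID"])

        row = {"ID": 1, "RecordID": 1, "Key": "Key1",
               "IntValue": 42, "StringValue": None, "DateValue": None}
        print(read_value(row))  # ('Key1', 42)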

    Read the article

  • PHP - Drilling down Data and Looping with Loops

    - by stogdilla
    I'm currently having difficulty finding a way of making my loops work. I have a table of data with 15-minute values, and I need to pull the data up in a few different increments: $filters = Array('Yrs','Qtr','Day','60','30','15'); I think I know how to work out which level I need to drill down to, but the issue comes after the first loop that cycles through all the outermost values (e.g. the user says they want to display by hours; each hour should have a "+" that adds a new div displaying the half-hour data, and each half-hour row should have a "+" that displays the 15-minute data on request). I could just hard-code the output for each level (6 different outputs) just in case, but isn't there a way I can make it do the drill-down for each level in a loop, so I only have to code one output and have it check whether there are any finer intervals after it? I'm sure I'm overlooking something very simple, my brain just isn't being clever today. Sorry in advance if this is a simple solution. The best analogy I can think of is a reply on a forum: how you would check whether it's a reply to a reply, and then whether that reply has any replies, etc., for output. Can anyone help or at least point me in the right direction, or am I stuck coding each possible check? Thanks in advance!
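    The "reply to a reply" intuition is essentially recursion. A minimal sketch in Python rather than PHP (the level list mirrors the $filters array; the data shape and function names are invented) of rendering one level and recursing into the next finer one:

        LEVELS = ["Yrs", "Qtr", "Day", "60", "30", "15"]

        def render(rows, level_index, indent=0):
            """Render the values at one level, then recurse into the next finer level."""
            level = LEVELS[level_index]
            for label, children in rows.items():
                print(" " * indent + "[+] %s (%s)" % (label, level))
                # Only recurse if a finer level exists; in the real page this would
                # happen when the user clicks "+", not eagerly like here.
                if level_index + 1 < len(LEVELS) and children:
                    render(children, level_index + 1, indent + 2)

        # Toy nested data: hours -> half hours -> quarter hours
        data = {"13:00": {"13:00": {"13:00": {}, "13:15": {}},
                          "13:30": {"13:30": {}, "13:45": {}}}}
        render(data, LEVELS.index("60"))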

    Read the article

  • .NET Migrations Engine

    - by Karl Seguin
    I was once under the belief that Microsoft was working on an official, Ruby-like migration framework. However, I haven't been able to find any additional information (or even the original source of my belief). Does anyone know anything about an official migration framework, or possibly an open source one?

    Read the article

  • Tablesorter - filter inside input fields and values

    - by Zeracoke
    I have a small quest to accomplish, and I've reached a point where nothing works... So, the problem: I have a paged table with a lot of input fields inside the rows, and I would like to search inside the values of those inputs. Let me show what I have; I hope somebody will get the idea of what I should do.

        <script type="text/javascript">
        // add parser through the tablesorter addParser method
        $.tablesorter.addParser({
            id: 'inputs',
            is: function(s) {
                return false;
            },
            format: function(s, table, cell, cellIndex) {
                var $c = $(cell);
                // return 1 for true, 2 for false, so true sorts before false
                if (!$c.hasClass('updateInput')) {
                    $c
                    .addClass('updateInput')
                    .bind('keyup', function() {
                        $(table).trigger('updateCell', [cell, false]); // false to prevent resort
                    });
                }
                return $c.find('input').val();
            },
            type: 'text'
        });

        $(function() {
            $('table').tablesorter({
                widgets: ['zebra', 'stickyHeaders', 'resizable', 'filter'],
                widgetOptions: {
                    stickyHeaders : '',
                    // number or jquery selector targeting the position:fixed element
                    stickyHeaders_offset : 110,
                    // added to table ID, if it exists
                    stickyHeaders_cloneId : '-sticky',
                    // trigger "resize" event on headers
                    stickyHeaders_addResizeEvent : true,
                    // if false and a caption exist, it won't be included in the sticky header
                    stickyHeaders_includeCaption : true,
                    // The zIndex of the stickyHeaders, allows the user to adjust this to their needs
                    stickyHeaders_zIndex : 2,
                    // jQuery selector or object to attach sticky header to
                    stickyHeaders_attachTo : null,
                    // scroll table top into view after filtering
                    stickyHeaders_filteredToTop: true,
                    resizable: true,
                    filter_onlyAvail : 'filter-onlyAvail',
                    filter_childRows : true,
                    filter_startsWith : true,
                    filter_useParsedData : true,
                    filter_defaultAttrib : 'data-value'
                },
                headers: {
                    1: {sorter: 'inputs', width: '50px'},
                    2: {sorter: 'inputs'},
                    3: {sorter: 'inputs'},
                    4: {sorter: 'inputs'},
                    5: {sorter: 'inputs'},
                    6: {sorter: 'inputs'},
                    7: {sorter: 'inputs', width: '100px'},
                    8: {sorter: 'inputs', width: '140px'},
                    9: {sorter: 'inputs'},
                    10: {sorter: 'inputs'},
                    11: {sorter: 'inputs'}
                }
            });

            $('table').tablesorterPager({
                container: $(".pager"),
                positionFixed: false,
                size: 50,
                pageDisplay : $(".pagedisplay"),
                pageSize : $(".pagesize")
            });

            $("#table1").tablesorter(options);
            /* make second table scroll within its wrapper */
            options.widgetOptions.stickyHeaders_attachTo = '.wrapper'; // or $('.wrapper')
            $("#table2").tablesorter(options);
        });
        </script>

    The structure of the table rows:

        <tr class="odd" style="display: table-row;">
            <form action="/self.php" method="POST">
            </form><input type="hidden" name="f" value="data">
            <td><input type="hidden" name="mod_id" value="741">741</td>
            <td class="updateInput"><input type="text" name="name" value="Test User Name"></td>
            <td class="updateInput"><input type="text" name="address" value="2548451 Random address"></td>
            <td class="updateInput"><input type="email" name="email" value=""></td>
            <td class="updateInput"><input type="text" name="entitlement" value="none"></td>
            <td class="updateInput"><input type="text" name="card_number" value="6846416548644352"></td>
            <td class="updateInput"><input type="checkbox" name="verify" value="1" checked=""></td>
            <td class="updateInput"><input type="checkbox" name="card_sended" value="1" checked=""></td>
            <td class="updateInput"><input type="text" name="create_date" value="2014-02-12 21:09:16"></td>
            <td class="updateInput"><a href="self.php?f=data&amp;del=741">X</a></td>
            <td class="updateInput"><input type="submit" value="SAVE"></td><td class="updateInput"></td>
        </tr>

    So the thing is, I don't know how to configure the filter to search these values... I already added some options, but none of them are working... Any help would be great!

    Read the article

  • backing up user data

    - by shani
    In my app the user saves data in an archive using Core Data and SQLite. 1. Is there a way to let him back up his data and restore it in the future? 2. Is the user's data backed up with the regular iPhone backup? Thanks, shani

    Read the article

  • SQL Server: how to set job priority

    - by Buzz
    Is there any way to set one job's priority higher than another's? In my case there are two jobs working on the same set of tables: JOB-A runs every 12 hours and JOB-B runs every 10 minutes. I think that at times when they run simultaneously, JOB-B gets into a deadlock and fails. I googled the topic and found that the SQL governor is supposed to help in such cases. Does anyone know how to resolve this?

    Read the article

  • Facebook style messaging system schema design

    - by Jamie
    Hi all, I'm looking to implement a Facebook-style messaging system (threaded messages) on a site of mine. Do you think this schema looks okay? Doctrine schema.yml:

        UserMessage:
          tableName: user_message
          actAs: [Timestampable]
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            sender_id: { type: integer(10), notnull: true }
            sender_read: { type: boolean, default: 1 }
            subject: { type: string(255), notnull: true }
            message: { type: string(1000), notnull: true }
            hash: { type: string(32), notnull: true }
          relations:
            UserMessageRecipient as Recipient:
              type: many
              local: id
              foreign: message_id
            UserMessageReply as Reply:
              type: many
              local: id
              foreign: message_id

        UserMessageReply:
          tableName: user_message_reply
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            user_message_id as message_id: { type: integer(10), notnull: true }
            message: { type: string(1000), notnull: true }
            sender_id: { type: integer(10), notnull: true }
          relations:
            UserMessage as Message:
              local: message_id
              foreign: id
              type: one

        UserMessageRecipient:
          tableName: user_message_recipient
          actAs: [Timestampable]
          columns:
            id: { type: integer(10), primary: true, autoincrement: true }
            user_message_id as message_id: { type: integer(10), notnull: true }
            recipient_id: { type: integer(10), notnull: true }
            recipient_read: { type: boolean, default: 0 }

    When a new reply is made, I'll make sure the "recipient_read" boolean for each recipient is set to false, and of course I'll make sure sender_read is set to false too. I'm using a hash for the URL: http://example.com/user/messages/aadeb18f8bdaea49882ec4d2a8a3c062 (As the id will be starting from 1, I don't wish to have http://example.com/user/messages/1. Yeah, I could start incrementing from a bigger number, but I'd prefer to start at 1.) Is this a good way to go about it? Your thoughts and suggestions would be hugely appreciated. Thanks guys!

    Read the article

  • Table index design

    - by Swoosh
    I would like to add indexes to my table and am looking for general ideas on what to add beyond the clustered PK, and what to look for when doing this. So, my example: this table (let's call it the TASK table) is going to be the biggest table of the whole application, expecting millions of records. IMPORTANT: a massive bulk insert is adding data to this table. The table has 27 columns (so far, and counting :D):
    int x 9 columns (IDs)
    varchar x 10 columns
    bit x 2 columns
    datetime x 5 columns
    INT COLUMNS: all of these are INT IDs, but from tables that are usually much smaller than the TASK table (10-50 records max), for example a Status table (with values like "open", "closed") or a Priority table (with values like "important", "not so important", "normal"). There is also a column like parent-ID (a self join on TASK's own ID). All the "small" tables have a PK the usual way... clustered.
    STRING COLUMNS: there is a Company column (a string!) that is always about 5 characters long, and every user will be restricted by it. If TASK contains 15 different companies, the logged-in user would only see one, so there is always a filter on this column. Might it be a good idea to add an index to this column?
    DATE COLUMNS: I think these aren't usually indexed... right? Or can/should they be?

    Read the article

  • Multimap Space Issue: Guava

    - by Arpssss
    In my Java code, I am using Guava's Multimap (com.google.common.collect.Multimap) like this:

        Multimap<Integer, Integer> Index = HashMultimap.create();

    Here, the Multimap key is some portion of a URL and the value is another portion of the URL (converted into an integer). Now, I give my JVM 2560 MB (2.5 GB) of heap space (using Xmx and Xms). However, it can only store about 9 million such (key, value) pairs of integers (approx 10 million). But theoretically (according to the memory occupied by an int) it should store more. Can anybody help me?
    1) Why is this happening, i.e. why is the Multimap taking so much space? I checked my code without inserting pairs into the Multimap, and it takes only 1/2 MB of space.
    2) Is there any other way or home-baked solution to solve this space issue? More clearly, how do I solve this memory issue? Thanks in advance, and any idea is perfectly OK for me.
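    A rough back-of-the-envelope sketch (in Python, used purely as a calculator; the per-object sizes are typical 64-bit JVM ballpark figures, not measured values) of why boxed collections dwarf the raw int payload: every pair pays for a boxed Integer value plus hash-table entry overhead, and each distinct key also pays for its own backing HashSet.

        # Assumed (approximate) 64-bit JVM costs, in bytes; real numbers vary with
        # JVM version and compressed-oops settings.
        BOXED_INTEGER = 16      # object header + int field + padding
        HASH_ENTRY    = 48      # hash map/set entry plus its share of the table array
        RAW_INT       = 4

        pairs = 9_000_000
        print("raw ints:        %6.1f MB" % (pairs * 2 * RAW_INT / 1e6))
        print("boxed multimap: ~%6.1f MB (plus keys and per-key sets)"
              % (pairs * (BOXED_INTEGER + HASH_ENTRY) / 1e6))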

    Read the article
