Search Results

Search found 9926 results on 398 pages for 'lookup tables'.

Page 2/398 | < Previous Page | 1 2 3 4 5 6 7 8 9 10 11 12  | Next Page >

  • DATE lookup table (1990/01/01:2041/12/31)

    - by Frank Developer
    I use a DATE master table for looking up dates and other values in order to control several events, intervals and calculations within my app. It has a row for every single day, beginning 01/01/1990 and ending 12/31/2041. One example of how I use this lookup table: a customer pawned an item on JAN-31-2010 and returns on MAY-03-2010 to make an interest payment to avoid forfeiting the item. If he pays one month's interest, the employee enters a "1" and the app looks up the pawn date (JAN-31-2010) in the date master table and puts FEB-28-2010 in the applicable interest payment date. FEB-28 is returned because FEB-31 doesn't exist! If 2010 were a leap year, it would have returned FEB-29. If the customer pays two months, MAR-31-2010 is returned; three months, APR-30. If the customer pays more than three months, or another period not covered by the date lookup table, the employee manually enters the applicable date. Here's what the date lookup table looks like:

        { Copyright 1990:2010, Frank Computer, Inc. }
        { DBDATE=YMD4- (correctly sorted for faster lookup) }

        CREATE TABLE datemast
        (
          dm_lookup      DATE,     {lookup col used for obtaining values below}
          dm_workday     CHAR(2),  {NULL=Normal Working Date,}
                                   {NW=National Holiday(Working Date),}
                                   {NN=National Holiday(Non-Working Date),}
                                   {NH=National Holiday(Half-Day Working Date),}
                                   {CN=Company Proclaimed(Non-Working Date),}
                                   {CH=Company Proclaimed(Half-Day Working Date)}
          {several other columns omitted}
          dm_description CHAR(30), {NULL, holiday description or any comments}
          dm_day_num     SMALLINT, {number of elapsed days since beginning of year}
          dm_days_left   SMALLINT, {number of remaining days until end of year}
          dm_plus1_mth   DATE,     {plus 1 month from lookup date}
          dm_plus2_mth   DATE,     {plus 2 months from lookup date}
          dm_plus3_mth   DATE,     {plus 3 months from lookup date}
          dm_fy_begins   DATE,     {fiscal year begins on for lookup date}
          dm_fy_ends     DATE,     {fiscal year ends on for lookup date}
          dm_qtr_begins  DATE,     {quarter begins on for lookup date}
          dm_qtr_ends    DATE,     {quarter ends on for lookup date}
          dm_mth_begins  DATE,     {month begins on for lookup date}
          dm_mth_ends    DATE,     {month ends on for lookup date}
          dm_wk_begins   DATE,     {week begins on for lookup date}
          dm_wk_ends     DATE,     {week ends on for lookup date}
          {several other columns omitted}
        ) IN "S:\PAWNSHOP.DBS\DATEMAST";

    Is there a better way of doing this, or is it a cool method?
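
    A minimal sketch of the lookup described above, assuming the datemast schema as shown; the date literal format is illustrative and depends on the DBDATE setting:

        -- Sketch: fetch the clamped "plus one month" date for an item
        -- pawned on JAN-31-2010; FEB-28-2010 comes back because the
        -- table pre-computed the end-of-month clamping.
        SELECT dm_plus1_mth
        FROM   datemast
        WHERE  dm_lookup = '2010-01-31';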

    Read the article

  • Compile-time lookup array creation for ANSI-C?

    - by multiproximus
    A previous programmer preferred to generate large lookup tables (arrays of constants) to save runtime CPU cycles, rather than calculating values on the fly. He did this by creating custom Visual C++ projects, unique for each individual lookup table, which generate array files that are then #included into a completely separate ANSI-C microcontroller (Renesas) project. This approach was fine for his original calculation assumptions, but it has become tedious now that the input parameters need to be modified, requiring me to recompile all of the Visual C++ projects and re-import those files into the ANSI-C project. What I would like to do is port the Visual C++ source directly into the ANSI-C microcontroller project and let the compiler create the array tables. So, my question is: can ANSI-C compilers compute and generate lookup arrays at compile time? And if so, how should I go about it? Thanks in advance for your help!
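
    A minimal sketch of what the compiler will do on its own, assuming each table entry can be written as a constant expression (the SQUARE macro and table length are illustrative, not from the original project). ANSI C folds constant expressions in static initializers at compile time, though it cannot run arbitrary loops at compile time the way a generator program can:

        #include <stdio.h>

        /* Constant expressions in a static initializer are evaluated at
           compile time, so the array lands in the object file as data. */
        #define SQUARE(n) ((n) * (n))

        static const int square_lut[] = {
            SQUARE(0), SQUARE(1), SQUARE(2), SQUARE(3),
            SQUARE(4), SQUARE(5), SQUARE(6), SQUARE(7)
        };

        int main(void)
        {
            printf("%d\n", square_lut[5]);  /* prints 25 */
            return 0;
        }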

    Read the article

  • SharePoint Lookup Field 20 Item JS error

    - by Chops
    Hi everyone, I've got an issue with SharePoint Server 2007 SP1 which seems to be documented in various forms, but I haven't been able to find an answer to my seemingly simpler question. http://social.msdn.microsoft.com/Forums/en-US/sharepointcustomization/thread/040533d2-c738-4ac2-b2d6-65a1602fa2d1 Essentially, we have a form with a lookup field to another SharePoint list. The lookup field has more than 20 items, so SharePoint changes the type and behaviour of the field, as per its standard behaviour. However, with my custom skin applied, clicking on this drop-down causes a JS error from within the Core.js file, as described here: http://splitnut.blogspot.com/2009/06/lookup-fields-in-moss-javascript-error.html What I haven't been able to figure out is why this is only an issue when my custom master page is applied, and not with the OOTB master pages. I've been through it and tried to see what might cause this, but haven't been able to track down the cause of the behaviour. Any help would be much appreciated. Many thanks.

    Read the article

  • Display SharePoint lookup field on publishing website

    - by Slace
    A page within our MOSS publishing website has a property which is a lookup field. When the page is viewed (not in edit mode), I only want the selected text to be displayed; but when I use the Microsoft.SharePoint.WebControls.LookupField control, it generates a hyperlink to the SharePoint list item (obviously bad). Is there a way around this, short of creating my own lookup field control?

    Read the article

  • PHP - How can I look up the Zip Code using the City & State

    - by John Himmelman
    I need to look up the ZIP code for a list of addresses (which include the city/state). Is there a master ZIP code list for download (that is free), or are there any web services that will return the full postage info for an address? I.e., lookup query: 386 Bread & Cheese Hollow Rd, Northport, NY ==== 386 Bread And Cheese Hollow Rd, Northport, NY 11768 Thanks!
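
    A minimal sketch of the web-service route, assuming a hypothetical JSON endpoint; the URL and response field names here are placeholders, not a real API (real options include the USPS and various geocoding provider APIs):

        <?php
        // Hypothetical ZIP-lookup endpoint; the URL and the 'zip' field
        // are placeholders for whichever service is actually used.
        $city  = urlencode('Northport');
        $state = urlencode('NY');
        $json  = file_get_contents("https://example.com/zip?city=$city&state=$state");
        $data  = json_decode($json, true);
        echo isset($data['zip']) ? $data['zip'] : 'not found';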

    Read the article

  • New article available in "SOA Suite Essentials for WLI Users" series: Dynamic Data Lookup in a Business Process

    - by simone.geib
    It is my pleasure to announce the publishing of another article in our "SOA Suite Essentials for WLI Users" series: "Dynamic Data Lookup in a Business Process: Meta Data Cache Control in Oracle WebLogic Integration and Domain Value Maps in SOA Suite". This article explains how dynamic data can be retrieved in a business process using Domain Value Maps in SOA Suite and shows the similarities to the WLI XML MetaData Cache Control. Lots of customers have asked about this comparison and I hope they will find it useful. The article follows "Setting Web Service and JCA Adapter Endpoints Dynamically in Oracle SOA Suite" which describes how web services and JCA adapter endpoints in SOA Suite can be changed at run-time, and so completes the use case where a BPEL process writes to a file (via file adapter) and the output directory and the file name are set dynamically. Please let me know what you think about the series and this specific article.

    Read the article

  • Suggest a good method with least lookup time complexity

    - by Amrish
    I have a structure which has 3 identifier fields and one value field, and I have a list of these objects. To give an analogy, the identifier fields are like the primary keys of the object: these 3 fields uniquely identify an object.

        class Item
        {
            int a1;
            int a2;
            int a3;
            int value;
        };

    I would have a list of, say, 1000 objects of this datatype. I need to check for specific values of these identity fields by passing values of a1, a2 and a3 to a lookup function, which would check whether any object with those specific values of a1, a2 and a3 is present and return its value. What is the most effective way to implement this to achieve the best lookup time? One solution I could think of is to have a 3-dimensional matrix of length, say, 1000 and populate the values in it. This has a lookup time of O(1), but the disadvantages are: 1. I need to know the length of the array. 2. For more identity fields (say 20), I would need a 20-dimensional matrix, which would be an overkill on memory. For my actual implementation, I have 23 identity fields. Can you suggest a good way to store this data which would give me the best lookup time?
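
    A minimal sketch of one common approach, assuming a C++11 compiler: key a map on the identifier triple. std::map gives O(log n) lookup, and std::unordered_map with a suitable hash gives expected O(1); either extends to many more key fields via a wider tuple without the memory blow-up of a dense matrix:

        #include <iostream>
        #include <map>
        #include <tuple>

        int main()
        {
            // Key on the (a1, a2, a3) triple; the value is the payload.
            std::map<std::tuple<int, int, int>, int> table;

            table[std::make_tuple(1, 2, 3)] = 42;            // insert

            auto it = table.find(std::make_tuple(1, 2, 3));  // O(log n) lookup
            if (it != table.end())
                std::cout << it->second << '\n';             // prints 42
        }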

    Read the article

  • Delphi Clientdataset Lookup/Aggregate

    - by TheRoadrunner
    Hi, I need a little help with ClientDataSets in Delphi. What I want to achieve is a grid showing customers, where one of the columns shows the number of orders for each customer. I put a ClientDataSet on a form and load Customers.xml from the Delphi demo data; another ClientDataSet is loaded with orders.xml. Relatively simple so far. I can define an aggregate on the orders CDS showing the total amount (or the count) per customer; see Cary Jensen's article on this: http://edn.embarcadero.com/article/29272 The problem is getting this aggregate result from the orders dataset into the customer dataset. It is a kind of reverse lookup, since there is a 1-n relationship between customers and orders, not the n-1 found in normal lookup scenarios. Any ideas? Søren

    Read the article

  • Find value within a range in lookup table

    - by francis
    I have the simplest problem to implement, but so far I have not been able to get my head around a solution in Python. I have built a table that looks similar to this one:

        501  - ASIA
        1262 - EUROPE
        3389 - LATAM
        5409 - US

    I will test a certain value to see which range it falls within: 389 -> ASIA, 1300 -> LATAM, 5400 -> US. A value greater than 5409 should not return a lookup value. I normally have a one-to-one match and would implement a dictionary for the lookup, but in this case I have to consider these ranges, and I am not seeing my way out of the problem. Maybe without providing the whole solution, could you provide some comments that would help me look in the right direction? It is very similar to a VLOOKUP in a spreadsheet. I would describe my Python knowledge as somewhere between basic and intermediate. Many thanks in advance.
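
    A minimal sketch using the standard library's bisect module, assuming the numbers in the table are the inclusive upper bounds of each range:

        import bisect

        # Sorted upper bounds and their labels, from the table above.
        bounds = [501, 1262, 3389, 5409]
        labels = ['ASIA', 'EUROPE', 'LATAM', 'US']

        def region(n):
            """Return the label whose range contains n, or None."""
            i = bisect.bisect_left(bounds, n)
            return labels[i] if i < len(bounds) else None

        print(region(389))   # ASIA
        print(region(1300))  # LATAM
        print(region(5400))  # US
        print(region(6000))  # None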

    Read the article

  • Entity Framework - Using a lookup (picklist) table with a lookup key

    - by Dave
    I'm working on a WPF application that is working well using the Entity Framework (3.5 SP1) for complicated table structures. The problem now is that I want to get a list from the EF that includes lookups into a picklist table holding multiple picklists. In SQL I would write a sub-select such as:

        SELECT Name,
               (SELECT typeName
                FROM   PickLists
                WHERE  type_id = Items.type_id
                AND    picklist_key = 333) AS Type_desc
        FROM   Items

    There are no foreign keys for this, and the picklists table is never updated using the EF, so it is read-only as far as the EF is concerned. I'm not sure of the best method to put this into the model, if at all. I'm displaying it in a read-only datagrid on a dashboard. Thanks!

    Read the article

  • .NET values lookup

    - by Maciej
    Hi, I have a feeling I'm missing something obvious. This is a UDP receiver application. It holds a collection of valid UDP sender IPs; only senders with an IP on that list will be considered. Since that list must be checked on every packet, and UDP is so volatile, the operation must be as fast as possible. A good choice is Dictionary, but it is a key-value structure, and what I actually need here is a dictionary-like (hash lookup) key-only structure. Is there something like that? This is a small annoyance rather than a bug, since I can still use Dictionary. Thanks, M.
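
    A minimal sketch of what sounds like the missing piece: HashSet<T>, available since .NET 3.5, is exactly the key-only, hash-based counterpart of Dictionary<TKey, TValue> (the addresses here are illustrative):

        using System;
        using System.Collections.Generic;
        using System.Net;

        class Program
        {
            static void Main()
            {
                // Key-only hash structure: membership tests are O(1) on average.
                var allowed = new HashSet<IPAddress>
                {
                    IPAddress.Parse("192.168.0.10"),
                    IPAddress.Parse("192.168.0.11")
                };

                bool ok = allowed.Contains(IPAddress.Parse("192.168.0.10"));
                Console.WriteLine(ok);  // True
            }
        }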

    Read the article

  • Database warehouse design: fact tables and dimension tables

    - by morpheous
    I am building a poor man's data warehouse using an RDBMS. I have identified the key 'attributes' to be recorded as:

    - sex (true/false)
    - demographic classification (A, B, C etc)
    - place of birth
    - date of birth
    - weight (recorded daily): the fact that is being recorded

    My requirements are to be able to run 'OLAP' queries that allow me to 'slice and dice', 'drill up/down' the data and, generally, view the data from different perspectives. After reading up on this topic area, the general consensus seems to be that this is best implemented using dimension tables rather than normalized tables. Assuming that this assertion is true (i.e. the solution is best implemented using fact and dimension tables), I would like to seek some help in the design of these tables. 'Natural' (or obvious) dimensions are the date dimension and geographical location, both of which have hierarchical attributes. However, I am struggling with how to model sex (true/false) and demographic classification (A, B, C etc). The reason I am struggling with these fields is that:

    1. They have no obvious hierarchical attributes which would aid aggregation (AFAIK), which suggests they should be in a fact table.
    2. They are mostly static, or change very rarely, which suggests they should be in a dimension table.

    Maybe the heuristic I am using above is too crude? I will give some examples of the type of analysis I would like to carry out on the data warehouse; hopefully that will clarify things further. I would like to aggregate and analyze the data by sex and demographic classification, e.g. answer questions like: How do male and female weights compare across different demographic classifications? Which demographic classification (male AND female) shows the most increase in weight this quarter? Can anyone clarify whether sex and demographic classification are part of the fact table, or whether they are (as I suspect) dimension tables? Also, assuming they are dimension tables, could someone elaborate on the table structures (i.e. the fields)? The 'obvious' schema:

        CREATE TABLE sex_type (is_male int);
        CREATE TABLE demographic_category (id int, name varchar(4));

    may not be the correct one.
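
    A minimal sketch of the dimension-table option, with illustrative names and types; low-cardinality attributes like sex and demographic classification are usually modelled as small dimension tables (or folded together into a single "junk" dimension) referenced from the fact table:

        -- Illustrative star schema; the names and types are assumptions.
        CREATE TABLE dim_sex (
            sex_id      INT PRIMARY KEY,
            description VARCHAR(10)    -- 'Male' / 'Female'
        );

        CREATE TABLE dim_demographic (
            demographic_id INT PRIMARY KEY,
            classification VARCHAR(4)  -- 'A', 'B', 'C', ...
        );

        CREATE TABLE fact_weight (
            date_id        INT,        -- references the date dimension
            location_id    INT,        -- references the location dimension
            sex_id         INT REFERENCES dim_sex (sex_id),
            demographic_id INT REFERENCES dim_demographic (demographic_id),
            weight_kg      DECIMAL(6,2)
        );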

    Read the article

  • haskell: a data structure for storing ascending integers with a very fast lookup

    - by valya
    Hello! (This question is related to my previous question, or rather to my answer to it.) I want to store all cubes of natural numbers in a structure and look up specific integers to see if they are perfect cubes. For example:

        cubes = map (\x -> x*x*x) [1..]
        is_cube n = n == (head $ dropWhile (<n) cubes)

    It is much faster than calculating the cube root, but it has a complexity of O(n^(1/3)) (am I right?). I think using a more complex data structure would be better. For example, in C I could store the length of an already-generated array (not a list, for faster indexing) and do a binary search. It would be O(log n) with a lower coefficient than in another answer to that question. The problem is, I can't express it in Haskell (and I don't think I should). Or I could use a hash function (like mod). But I think it would be much more memory-consuming to have several lists (or a list of lists), and it wouldn't lower the complexity of the lookup (still O(n^(1/3))), only the coefficient. I thought about some kind of tree, but I have no clever ideas (sadly, I've never studied CS). I think the fact that all integers are ascending will make my tree ill-balanced for lookups. And I'm pretty sure this fact about ascending integers can be a great advantage for lookups, but I don't know how to use it properly (see my first solution, which I can't express in Haskell).
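
    A minimal sketch of the O(log n) direction, assuming non-negative inputs: skip the stored list entirely and binary-search for the integer cube root (the function names are illustrative):

        -- The binary search keeps the invariant lo^3 <= n < hi^3.
        icbrt :: Integer -> Integer
        icbrt n = go 0 (n + 1)
          where
            go lo hi
              | lo + 1 >= hi   = lo
              | m * m * m <= n = go m hi
              | otherwise      = go lo m
              where m = (lo + hi) `div` 2

        isCube :: Integer -> Bool
        isCube n = let r = icbrt n in r * r * r == n

        main :: IO ()
        main = print (map isCube [27, 28, 1000000])  -- [True,False,True]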

    Read the article

  • Two-phase lookup: can I avoid "code bloat"?

    - by Pietro
    Two-phase lookup question: is there a more synthetic way to write this code, i.e. avoiding all those "using" directives? I tried with "using CBase<T>;", but it is not accepted.

        #include <iostream>

        template <typename T>
        class CBase
        {
        protected:
            int a, b, c, d;  // many more...
        public:
            CBase() { a = 123; }
        };

        template <typename T>
        class CDer : public CBase<T>
        {
            // using CBase<T>;  // error, but this is what I would like
            using CBase<T>::a;
            using CBase<T>::b;
            using CBase<T>::c;
            //...
        public:
            CDer() { std::cout << a; }
        };

        int main()
        {
            CDer<int> cd;
        }

    In my real code there are many more member variables/functions, and I was wondering if it is possible to write shorter code in some way. Of course, using the CBase::a syntax does not solve the problem... Thanks! (gcc 4.1, Mac OS X 10.6)
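
    One common way to avoid the per-member using-declarations, sketched here against the same CBase<T> from the question, is to qualify each access with this->, which makes the name dependent and defers lookup to the second phase:

        template <typename T>
        class CDer : public CBase<T>
        {
        public:
            // this-> postpones name lookup until instantiation, so no
            // using-declarations are needed for a, b, c, ...
            CDer() { std::cout << this->a; }
        };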

    Read the article

  • SQL Server 2000 tables

    - by user40766
    We currently have a SQL Server 2000 database with one table containing data for multiple users. The data is keyed by memberid, which is an integer field, and the table has a clustered index on memberid. The table is now about 200 million rows, and indexing and maintenance are becoming issues. We are debating splitting the table into one table per user. This would imply that we would end up with a very large number of tables, potentially up to 2,147,483,647 considering just positive values. My questions: Does anyone have any experience with a SQL Server (2000/2005) installation with millions of tables? What are the implications of this architecture with regard to maintenance and access using Query Analyzer, Enterprise Manager, etc.? What are the implications of having such a large number of indexes in a database instance? All comments are appreciated. Thanks

    Read the article

  • INDIA Legislation: New State 'Telangana' Added in IN_STATES System Lookup

    - by LieveDC
    With effect from June 02, 2014, the new state of Telangana will be operational in the Indian Union. Details of the new state are explained in the official gazette released on 1 March, 2014 by the Ministry of Home Affairs: http://mha.nic.in/sites/upload_files/mha/files/APRegACT2014_0.pdf This new state has been added to the IN_STATES System Lookup: a new lookup code 'TG' with meaning 'Telangana' has been added. For available patches on different R12 patch levels, check out Doc ID 1676224.1, "New State Telangana Be Added In IN_STATES System Lookup".

    Read the article

  • Lookup table size reduction

    - by Ryan
    Hello: I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot store that amount of data in memory, and my requirements are very limited: I have to store the data in an embedded system, so I am very limited in space. I would like to ask about recommended methods for reducing the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers, I mean 32-bit values. Basically the idea is to use some compression method to reduce the amount of memory, but without losing much precision. This thing needs to run in hardware, so the computation overhead cannot be very high. In my algorithm I have to access one value of the table, do some operations with it, and afterwards update the value. In the end, what I should have is a function to which I pass an index and get a value back, and another function to write a value into the table. I found one method called tile coding (http://www.cs.ualberta.ca/~sutton/book/8/node6.html), which is based on several lookup tables. Does anyone know of any other method? Thanks.
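
    A minimal sketch of the mod-based bucketing the question mentions, in C with illustrative sizes; colliding indices share a slot, trading precision for memory, so the table size is the knob to tune:

        #include <stdint.h>
        #include <stdio.h>

        #define TABLE_SIZE 4096u   /* illustrative; tune to the memory budget */

        static int32_t table[TABLE_SIZE];

        /* Read the value stored for a 32-bit index. */
        static int32_t read_value(uint32_t index)
        {
            return table[index % TABLE_SIZE];
        }

        /* Write back an updated value for the same index. */
        static void write_value(uint32_t index, int32_t v)
        {
            table[index % TABLE_SIZE] = v;
        }

        int main(void)
        {
            write_value(123456789u, 42);
            printf("%d\n", read_value(123456789u));  /* prints 42 */
            return 0;
        }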

    Read the article

  • If tables are so evil, why does joomla use tables for its layout?

    - by DMin
    When you create a template you don't use tables, but when Joomla renders the page there are at least four places where tables are used to structure the content. Does this tell you something about tables, or about the fact that the Joomla community should invest more time in learning how to code CSS? Or just that tables are needed when you don't know what CSS is going to be thrown in later (which probably means they are of some use)?

    Read the article

  • retrieve data from multiple tables referencing some tables in mysql

    - by I Like PHP
    I have 10 tables, all using the InnoDB engine:

    1. One is state_table, whose attributes are state_id and state_name.
    2. Another is city_table, whose attributes are city_id and city_name.
    3. One more is permit_table, whose attribute is p_id.

    The city_id, state_id and permit_id above are referenced by the remaining 7 tables: each of those tables has state_id, city_id and permit_id columns referencing the tables above. Now I want to extract all the tables' data with their respective city name and state name (each table may have a different city id and state id). I am using the MySQL query below (I know it's a very lengthy way...). Please tell me how to do it with an optimized method.

        SELECT p.*,  cp.city_name,  sp.state_name,
               o.*,  co.city_name,  so.state_name,
               t.*,  ct.city_name,  st.state_name,
               th.*, cth.city_name, sth.state_name,
               f.*,  cf.city_name,  sf.state_name
               ... and so on ...
        FROM permit_table p
        JOIN table_city  cp  ON cp.city_id   = p.city_id
        JOIN table_state sp  ON sp.state_id  = p.state_id
        JOIN table_one   o   ON o.permit_id  = p.permit_id
        JOIN table_city  co  ON co.city_id   = o.city_id
        JOIN table_state so  ON so.state_id  = o.state_id
        JOIN table_two   t   ON t.permit_id  = p.permit_id
        JOIN table_city  ct  ON ct.city_id   = t.city_id
        JOIN table_state st  ON st.state_id  = t.state_id
        JOIN table_three th  ON th.permit_id = p.permit_id
        JOIN table_city  cth ON cth.city_id  = th.city_id
        JOIN table_state sth ON sth.state_id = th.state_id
        JOIN table_four  f   ON f.permit_id  = p.permit_id
        JOIN table_city  cf  ON cf.city_id   = f.city_id
        JOIN table_state sf  ON sf.state_id  = f.state_id
        ... and so on ...
        WHERE p.permit_id = base64_encode(mysql_real_escape_string($_GET[pid]));

    Thanks for helping me, always.
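
    A sketch of one simplification, assuming the seven child tables can be queried separately: one small indexed query per child table is usually cheaper, and far easier to maintain, than a single wide multi-join, and it avoids multiplying rows when the joins fan out.

        -- Sketch: run once per child table (table_two, table_three, ...),
        -- combining the results in application code or with UNION ALL
        -- where the column lists match. The ? is the permit id parameter.
        SELECT o.*, c.city_name, s.state_name
        FROM   table_one   o
        JOIN   table_city  c ON c.city_id  = o.city_id
        JOIN   table_state s ON s.state_id = o.state_id
        WHERE  o.permit_id = ?;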

    Read the article
