Search Results

Search found 53054 results on 2123 pages for 'sql sample database'.


  • Conversion failed when converting the varchar value to data type int

    - by desi
    I have a varchar(1000) value that contains a comma-separated list of numbers, declared as shown below, and I want to execute the following script. I need this to work, please:

        Declare @PostalCode varchar(1000)=0
        set @PostalCode ='7005036,7004168,7002314,7001188,6998955'

        Select hl.*
        From CountryLocation cl
        INNER JOIN refPostalCodes pc ON pc.PostalCode = hl.PostalCode
        where pc.Postalcode in (@PostalCode)
          and pc.notDeleted = 1
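    A hedged sketch of one way around the error, assuming SQL Server 2016 or later, the table names from the question, and that the stray hl alias was meant to be cl: split the comma-separated list into rows with STRING_SPLIT instead of comparing the whole varchar list inside IN (the error in the title typically means PostalCode is an int column being compared against the entire varchar string).

        -- sketch; STRING_SPLIT needs SQL Server 2016+, TRY_CAST needs 2012+
        DECLARE @PostalCode varchar(1000) = '7005036,7004168,7002314,7001188,6998955';

        SELECT cl.*
        FROM CountryLocation cl
        INNER JOIN refPostalCodes pc ON pc.PostalCode = cl.PostalCode
        WHERE pc.PostalCode IN (SELECT TRY_CAST(value AS int) FROM STRING_SPLIT(@PostalCode, ','))
          AND pc.notDeleted = 1;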

    Read the article

  • Next Identity Key LINQ + SQL Server

    - by user569347
    To represent our course tree structure in our LINQ data classes, we have two columns that could potentially be the same as the PK. My problem is that if I want to insert a new record and populate two other columns with the PK that was generated, there is no way I can get the next identity without conflicting with other administrators who might be doing the same insert at the same time.

    Case: a leaf node has right_id and left_id equal to itself (prereq_id).

    **dbo.pre_req:** prereq_id, left_id, right_id, op_id, course_id, is_head, is_coreq, is_enforced, parent_course_id

    I basically want to do this:

        pre_req rec = new pre_req
        {
            left_id = prereq_id,
            right_id = prereq_id,
            op_id = 3,
            course_id = query.course_id,
            is_head = true,
            is_coreq = false,
            parent_course_id = curCourse.course_id
        };
        db.courses.InsertOnSubmit(rec);
        try { db.SubmitChanges(); }

    Any way to solve my dilemma? Thanks!
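    A hedged sketch of one server-side approach, using the dbo.pre_req columns from the question with hypothetical @course_id/@parent_course_id parameters, and assuming left_id/right_id allow NULL or have defaults: insert the row first, read the generated key with SCOPE_IDENTITY(), then point left_id and right_id back at it inside the same transaction, so concurrent inserts never need to guess the next identity.

        -- sketch only; nullability of left_id/right_id is an assumption
        BEGIN TRANSACTION;

        INSERT INTO dbo.pre_req (op_id, course_id, is_head, is_coreq, parent_course_id)
        VALUES (3, @course_id, 1, 0, @parent_course_id);

        DECLARE @new_id int;
        SET @new_id = SCOPE_IDENTITY();   -- identity generated in this scope, safe under concurrency

        UPDATE dbo.pre_req
        SET left_id = @new_id, right_id = @new_id
        WHERE prereq_id = @new_id;

        COMMIT TRANSACTION;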

    Read the article

  • One on One table relation - is it harmful to keep relation in both tables?

    - by EBAGHAKI
    I have two tables whose rows have a one-to-one relation. To understand the situation, suppose one table holds user information and another table holds a very specific kind of information, and each user can link to only one of these records (think of the second table as characters). That character can only be assigned to the user who grabs it. Is it against the rules of clean database design to hold the relation key in both tables?

        User Table:      user_id, name, age, character_id
        Character Table: character_id, shape, user_id

    I have to do it for performance. What do you think about it?
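    For comparison, a minimal sketch of the usual way a one-to-one relation is modelled with the key held in only one table, using the table and column names from the question (column types are placeholders); the UNIQUE constraint is what makes the relation one-to-one rather than one-to-many.

        CREATE TABLE [User] (
            user_id int PRIMARY KEY,
            name    varchar(100),
            age     int
        );

        CREATE TABLE [Character] (
            character_id int PRIMARY KEY,
            shape        varchar(100),
            user_id      int NOT NULL UNIQUE REFERENCES [User](user_id)  -- relation lives here only
        );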

    Read the article

  • Select a record with highest amount by joining two tables

    - by user2516394
    I have two tables, Sales and Purchase. The Sales table has the fields SaleId, Rate, Quantity, Date, CompanyId, UserID; the Purchase table has the fields PurchaseId, Rate, Quantity, Date, CompanyId, UserID. I want to select the record from either table that has the highest Rate * Quantity. This is what I have so far:

        SELECT SalesId Or PurchaseId
        FROM Sales, Purchase
        WHERE Sales.UserId = Purchase.UserId
          AND Sales.CompanyId = Purchase.CompanyId
          AND Sales.Date = Current date
          AND Purchase.Date = Current date
          AND Sales.UserId = 1 AND Purchase.UserId = 1
          AND Sales.CompanyId = 1 AND Purchase.ComoanyId = 1
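    A hedged sketch of one way to get this, assuming SQL Server 2008 or later and the column names above: take today's rows from both tables with UNION ALL, compute Rate * Quantity for each, and keep the single highest row.

        -- sketch; the GETDATE() comparison assumes the Date column holds dates without a time part
        SELECT TOP (1) Id, Source, Amount
        FROM (
            SELECT SaleId AS Id, 'Sale' AS Source, Rate * Quantity AS Amount
            FROM Sales
            WHERE UserID = 1 AND CompanyId = 1 AND [Date] = CAST(GETDATE() AS date)
            UNION ALL
            SELECT PurchaseId, 'Purchase', Rate * Quantity
            FROM Purchase
            WHERE UserID = 1 AND CompanyId = 1 AND [Date] = CAST(GETDATE() AS date)
        ) AS t
        ORDER BY Amount DESC;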

    Read the article

  • What db fits me?

    - by afvasd
    Dear everyone, I am currently using MySQL, and I am finding that my schema is getting incredibly complicated. I am looking for a new DB that will suit my needs.

    Let's assume I am building a news aggregator (which collects news from multiple websites). I then run algorithms to determine whether two news items from different sites are actually referring to the same topic, and use this to cluster news together. The relationship is depicted below:

        cluster
          \--news1
               \--word1
               \--word2
          \--news2
               \--word3
          \--news3
               \--word1
               \--word3

    I then apply some magic and determine the importance of each word. Summing the importance of each word gives me the importance of a news article; summing the importance of each news article gives me the importance of a cluster. Note that above clusters there are also subgroups (like splits by region, etc.) and categories (like sports, etc.) whose importance I have to determine for a particular day.

    I have used views in the past to do this, but I realized that views are very slow, so I will normally insert into an actual table and index it for better performance. As you can see, this leads to multiple derived tables like (cluster, importance), (news, importance), (words, importance), etc., which can get pretty messy. Also, the "importance" metric will change. It has become increasingly difficult to alter tables, clear out data (I am using TRUNCATE TABLE), and then re-insert from scratch.

    I am currently looking into something schemaless like MongoDB. I do not need distribution. I would very much want something that is reasonably fast (and can be indexed) and a lot more flexible than a traditional RDBMS. Also, I need something that has some kind of ORM, because I personally like ORMs a lot; I am currently using SQLAlchemy. Please help!

    Read the article

  • SQL Server Query with "ELSE:"

    - by Mike D
    I'm maintaining various VB6 projects, and some of the queries being passed to the server contain "ELSE:", with a colon, inside CASE statements. Can someone tell me what the colon is used for? It causes errors in SQL 2005 and greater, but SQL 2000 accepts it with no complaints. I'd like to just remove it from the code and re-compile, but I'm afraid it'll break ten other things in the application. Thanks in advance.
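    For reference, a minimal sketch of standard T-SQL CASE syntax, which uses ELSE with no colon; the table and column names here are made up purely for illustration.

        SELECT OrderID,
               CASE WHEN Status = 1 THEN 'Open'
                    WHEN Status = 2 THEN 'Closed'
                    ELSE 'Unknown'            -- plain ELSE, no colon, in T-SQL
               END AS StatusText
        FROM Orders;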

    Read the article

  • replicating master tables mapping in transaction tables

    - by NoDisplay
    I have three master tables for location information:

        Country {ID, Name}
        State   {ID, Name, CountryID}
        City    {ID, Name, StateID}

    Now I have one transaction table called Person which holds the person's name and location information. My question is: should I have only CityID in the Person table, like this:

        Person {ID, Name, CityID}

    together with a view over a join query which gives me detail like Person {ID, Name, City, State, Country}, or should I replicate the mapping:

        Person {ID, Name, CityID, StateID, CountryID}

    Please suggest which you feel should be selected and why. If there is any other option available, please suggest it. Thanks in advance.
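    A minimal sketch of the first option, using the table names from the question (column names assumed): store only CityID on Person and derive state and country through a view, so the hierarchy is never duplicated in the transaction table.

        CREATE VIEW vPersonLocation AS
        SELECT p.ID, p.Name,
               ci.Name AS City,
               st.Name AS State,
               co.Name AS Country
        FROM Person  p
        JOIN City    ci ON ci.ID = p.CityID
        JOIN State   st ON st.ID = ci.StateID
        JOIN Country co ON co.ID = st.CountryID;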

    Read the article

  • how do I insert into two tables all at once in a stored procedure?

    - by user996502
    Doing a project for school, so any help would be great, thank you! I have two tables that are linked - how do I insert into both of them? The first table, Customer, has a primary key CID that auto-increments:

        CREATE TABLE [dbo].[Customer](
            [CID] [int] IDENTITY(1,1) NOT NULL,
            [LastName] [varchar](255) NOT NULL,
            [FirstName] [varchar](255) NOT NULL,
            [MiddleName] [varchar](255) NULL,
            [EmailAddress] [varchar](255) NOT NULL,
            [PhoneNumber] [varchar](12) NOT NULL
            CONSTRAINT [PK__CInforma__C1F8DC5968DD69DC] PRIMARY KEY CLUSTERED (

    The second table, Employment, has a foreign key linked to the parent table:

        CREATE TABLE [dbo].[Employment](
            [EID] [int] IDENTITY(1,1) NOT NULL,
            [CID] [int] NOT NULL,
            [Employer] [varchar](255) NOT NULL,
            [Occupation] [varchar](255) NOT NULL,
            [Income] [varchar](25) NOT NULL,
            [WPhone] [varchar](12) NOT NULL,
            CONSTRAINT [PK__Employme__C190170BC7827524] PRIMARY KEY CLUSTERED (
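    A hedged sketch of one way to do this in a single stored procedure, using the columns shown above (the procedure name and parameter list are invented for illustration): insert the customer first, capture the generated CID with SCOPE_IDENTITY(), and use it for the employment row, all inside one transaction.

        CREATE PROCEDURE dbo.AddCustomerWithEmployment
            @LastName varchar(255), @FirstName varchar(255), @MiddleName varchar(255),
            @EmailAddress varchar(255), @PhoneNumber varchar(12),
            @Employer varchar(255), @Occupation varchar(255),
            @Income varchar(25), @WPhone varchar(12)
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRANSACTION;

            INSERT INTO dbo.Customer (LastName, FirstName, MiddleName, EmailAddress, PhoneNumber)
            VALUES (@LastName, @FirstName, @MiddleName, @EmailAddress, @PhoneNumber);

            DECLARE @CID int;
            SET @CID = SCOPE_IDENTITY();   -- CID generated by the insert above

            INSERT INTO dbo.Employment (CID, Employer, Occupation, Income, WPhone)
            VALUES (@CID, @Employer, @Occupation, @Income, @WPhone);

            COMMIT TRANSACTION;
        END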

    Read the article

  • SQL Server dilemma, performance

    - by Woland
    Hello, I am creating an app where users can save options. Which one is better?

    1. Save the options into a varchar field in the user table, something like '1,23,4354,34,3', and query it with:

        select * from data where CHARINDEX('L', Providers, 0) > 0

    2. Create another table holding the user options and just add rows, then query with:

        select * from data where Providers in (select Providers from userdata where userid=100)

    Thanks for the help.
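    A hedged sketch of the second (normalized) option, with table and column names invented for illustration: one row per selected provider, which keeps the filter indexable and avoids string parsing at query time.

        CREATE TABLE UserProvider (
            UserId     int NOT NULL,
            ProviderId int NOT NULL,
            PRIMARY KEY (UserId, ProviderId)   -- also serves as the lookup index
        );

        SELECT d.*
        FROM data d
        WHERE d.ProviderId IN (SELECT ProviderId FROM UserProvider WHERE UserId = 100);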

    Read the article

  • Field to display Previous 30 Day Total

    - by whytheq
    I've got this table:

        CREATE TABLE #Data1
        (
            [Market]   VARCHAR(100) NOT NULL,
            [Operator] VARCHAR(100) NOT NULL,
            [Date]     DATETIME NOT NULL,
            [Measure]  VARCHAR(100) NOT NULL,
            [Amount]   NUMERIC(36,10) NOT NULL,
            --new calculated fields
            [DailyAvg_30days] NUMERIC(38,6) NULL DEFAULT 0
        )

    I've populated all the fields apart from DailyAvg_30days. This field needs to show the total for the preceding 30 days, e.g.:

    1. if Date for a particular record is 2nd Dec, then it will be the total for the period 3rd Nov - 2nd Dec inclusive;
    2. if Date for a particular record is 1st Dec, then it will be the total for the period 2nd Nov - 1st Dec inclusive.

    My attempt to find these totals before updating the table is as follows:

        SELECT
            a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount],
            [DailyAvg_30days] = SUM(b.[Amount])
        FROM #Data1 a
        INNER JOIN #Data1 b
            ON  a.[Market]   = b.[Market]
            AND a.[Operator] = b.[Operator]
            AND a.[Measure]  = b.[Measure]
            AND a.[Date] >= b.[Date]-30
            AND a.[Date] <= b.[Date]
        GROUP BY a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount]
        ORDER BY 1,2,4,3

    Is this a valid approach, or do I need to approach this from a different angle?
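    A hedged alternative sketch (SQL Server 2005 or later), using the column names above: compute each row's rolling 30-day total with OUTER APPLY, which makes the date window explicit (the Total_30days alias is just an illustrative name).

        SELECT a.[Market], a.[Operator], a.[Date], a.[Measure], a.[Amount],
               t.Total_30days
        FROM #Data1 a
        OUTER APPLY (
            SELECT SUM(b.[Amount]) AS Total_30days
            FROM #Data1 b
            WHERE b.[Market]   = a.[Market]
              AND b.[Operator] = a.[Operator]
              AND b.[Measure]  = a.[Measure]
              AND b.[Date] >  DATEADD(DAY, -30, a.[Date])   -- previous 30 days...
              AND b.[Date] <= a.[Date]                      -- ...up to and including this row's date
        ) AS t;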

    Read the article

  • UPDATE statement wrapped in an IF EXISTS block

    - by formica
    I'm trying to write a DML script that updates a column, but I wanted to make sure the column existed first, so I wrapped it in an IF EXISTS block:

        IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                  WHERE TABLE_NAME='Client' AND COLUMN_NAME='IsClarityEnabled')
        BEGIN
            UPDATE Client SET IsClarityEnabled = 1 WHERE ClientID = 21
        END

    So the weirdness is that it tries to execute the update even if it fails the condition. The column doesn't exist, yet the UPDATE statement runs and I get an error. Why? Even stranger is that this does work:

        IF EXISTS(SELECT * FROM INFORMATION_SCHEMA.COLUMNS
                  WHERE TABLE_NAME='Client' AND COLUMN_NAME='IsClarityEnabled')
        BEGIN
            EXEC('UPDATE Client SET IsClarityEnabled = 1 WHERE ClientID = 21')
        END

    Is there something special about an UPDATE command that causes it to behave this way?

    Read the article

  • MS SQL: How to get the newest date in a table with several equal keys

    - by Qohelet
    Unfortunately my knowledge of statements like GROUP BY and HAVING is quite limited, so hopefully you can help me. I have a view - here's an excerpt (if we have some Europeans here: it's v021 of Winline/Mesonic):

        ID       | Artikelbezeichnung1                 | Bez2      | mesoyear
        ---------|-------------------------------------|-----------|---------
        1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi   | 1344
        1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi   | 1356
        1401MA70 | Marga ,Saracena grigio,1S,33,3/33,3 | Marazzi   | 1356
        1401MA71 | Marga ,Saracena beige,1S,33,3/33,3  | Marazzi   | 1344
        1401MA71 | Marga ,Saracena beige,1S,33,3/33,3  | Marazzi   | 1356
        1401MA71 | Marga ,Saracena beige,1S,33,3/33,3  | Marazzi   | 1356
        2401CR13 | Crista,Mahon rojo,1S,33,3/33,3      | Cristacer | 1332
        2401CR13 | Crista,Mahon rojo,1S,33,3/33,3      | Cristacer | 1344

    The ID is not unique, and I just need the row with the highest value in mesoyear. My first solution was:

        Select c015 as ID, c003 as Artikelbezeichnung1, c074 as Bez2, mesoyear
        from CWLDATEN_91.dbo.v021
        group by c015
        having mesoyear = max(mesoyear)

    But this doesn't work at all:

        Msg 8121, Level 16, State 1, Line 8
        Column 'CWLDATEN_91.dbo.v021.mesoyear' is invalid in the HAVING clause because it is not
        contained in either an aggregate function or the GROUP BY clause.

    So I just removed the HAVING statement and it went "better":

        Msg 8120, Level 16, State 1, Line 2
        Column 'CWLDATEN_91.dbo.v021.c003' is invalid in the select list because it is not contained
        in either an aggregate function or the GROUP BY clause.

    So I tried to remove the error just by adding columns to the GROUP BY, and it worked:

        Select c015 as ID, c003 as Artikelbezeichnung1, c074 as Bez2, max(mesoyear)
        from CWLDATEN_91.dbo.v021
        group by c015, c003, c074

    This gives me exactly what I want. But the correct SELECT contains about 24 columns and some calculations as well, and the problem can't be solved just by adding all the columns to the GROUP BY. Can someone please help me find a proper command? Thank you!
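    A hedged sketch of one common pattern (SQL Server 2005 or later) that avoids listing every column in the GROUP BY: rank the rows per ID by mesoyear with ROW_NUMBER() and keep only the newest one.

        SELECT ID, Artikelbezeichnung1, Bez2, mesoyear   -- add the remaining ~24 columns here
        FROM (
            SELECT c015 AS ID, c003 AS Artikelbezeichnung1, c074 AS Bez2, mesoyear,
                   ROW_NUMBER() OVER (PARTITION BY c015 ORDER BY mesoyear DESC) AS rn
            FROM CWLDATEN_91.dbo.v021
        ) AS x
        WHERE rn = 1;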

    Read the article

  • strange behavior of <> to filter null values

    - by Kerezo
    Hi experts. I have a table called tblAlarms, and it has some records (sample data was shown in the original post). I have another table that determines which user has seen which message. Now I want to write a query to show the messages a user has not seen, provided the message has not expired (for example, the current year is between BeginYear and EndYear, and so on). I wrote this query:

        SELECT *
        FROM tblAlarms
        LEFT OUTER JOIN tblUsersAlarms tua ON tblAlarms.Id = tua.MessageID
        WHERE @CurrentYear BETWEEN tblAlarms.BeginYear AND tblAlarms.EndYear
          AND @CurrentMonth BETWEEN tblAlarms.BeginMonth AND tblAlarms.EndMonth
          AND @CurrentDay BETWEEN tblAlarms.BeginDay AND tblAlarms.EndDay
          AND (@CurrentHour * 60 + @CurrentMinute)
              BETWEEN tblAlarms.BeginHour*60 + tblAlarms.BeginMinute
                  AND tblAlarms.EndHour*60 + tblAlarms.EndMinute
          --AND (tua.UserID <> 128 AND tua.UserID IS NULL)

    It returns records, but if I uncomment the last line it does not return any records. How can I determine which messages a user has not seen? Thanks.
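    A hedged sketch of one way to get "not yet seen" rows with a LEFT JOIN, assuming the table and column names in the question: put the per-user condition in the ON clause and keep only the alarms with no matching row. (A <> comparison against a NULL value evaluates to UNKNOWN, which is why the commented-out line filters out every row.)

        SELECT a.*
        FROM tblAlarms a
        LEFT OUTER JOIN tblUsersAlarms tua
               ON tua.MessageID = a.Id
              AND tua.UserID = 128            -- the user we are checking for
        WHERE tua.MessageID IS NULL           -- no "seen" row exists for this user
          -- ... plus the existing expiry conditions on a.BeginYear/EndYear etc.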

    Read the article

  • DB Strategy for inserting into a high read table (Sql Server)

    - by Tom
    Looking for strategies for a very large table whose data is maintained for reporting and historical purposes, while a very small subset of that data is used in daily operations.

    Background: we have Visitor and Visits tables which are continuously updated by our consumer-facing site. These tables contain information on every visit and visitor, including bots and crawlers, direct traffic that does not result in a conversion, etc. Our back-end site allows management of the visitors (leads) from the front-end site. Most of the management occurs on a small subset of our visitors (visitors that become leads). The vast majority of the data in our Visitor and Visit tables is maintained only for a much smaller subset of user activity (basically reporting-type functionality). This is NOT an indexing problem; we have done all we can with indexing and keeping our indexes clean, small, and not fragmented. (P.S.: we do not currently have the budget or expertise for a data warehouse.)

    The problem: we would like the system to be more responsive to our end users when they are querying, for instance, the list of their assigned leads. Currently the query runs against a huge data set of mostly irrelevant data.

    I am pondering a few ideas. One involves new tables and a fairly major re-architecture; I'm not asking for help on that. The other involves creating redundant data (for instance a Visitor_Archive and a Visitor_Small table), where the larger visitor and visit tables exist for inserts and history/reporting, and the smaller Visitor_Small table exists for managing leads: sending a lead an email, looking up a lead's phone number, listing my leads, etc.

    The reason I am reaching out is that I would love opinions on the best way to keep the Visitor_Archive and Visitor_Small tables in sync. Replication? Can I use replication to replicate only data with a certain column value (FooID = x)? Any other strategies?

    Read the article

  • SQL Server 05, which is optimal, LIKE %<term>% or CONTAINS() for searching large column

    - by Spud1
    I've got a function written by another developer which I am trying to modify for a slightly different use. It is used by a stored procedure to check whether a certain phrase exists in a text document stored in the DB, and returns 1 if the value is found or 0 if it's not. This is the query:

        SELECT @mres = 1 FROM documents WHERE id = @DocumentID AND CONTAINS(text, @search_term)

    The document contains mostly XML, and the search_term is a GUID formatted as an nvarchar(40). This seems to run quite slowly to me (taking 5-6 seconds to execute this part of the process), but in the same script file there is also this version of the above, commented out:

        SELECT @mres = 1 FROM documents WHERE id = @DocumentID AND text LIKE '%' + @search_term + '%'

    This version runs MUCH quicker, taking 4 ms compared to 15 ms for the first example. So, my question is: why use the first over the second? I assume this developer (who is no longer working with me) had a good reason, but at the moment I am struggling to find it. Is it possibly something to do with the full-text indexing? (This is a dev DB I am working with, so the production version may have better indexing.) I am not that clued up on FTI, so I'm not quite sure at the moment. Thoughts/ideas?

    Read the article

  • EM12c Release 4: Database as a Service Enhancements

    - by Adeesh Fulay
    Oracle Enterprise Manager 12.1.0.4 (or simply put, EM12c R4) is the latest update to the product. As with previous versions, this release provides tons of enhancements and bug fixes, contributing to improved stability and quality. One of the areas that is most exciting and has seen tremendous growth in the last few years is that of Database as a Service. EM12c R4 provides a significant update to Database as a Service. The key themes are:

    1. Comprehensive Database Service Catalog (includes single instance, RAC, and Data Guard)
    2. Additional Storage Options for Snap Clone (includes support for the database feature CloneDB)
    3. Improved Rapid Start Kits
    4. Extensible Metering and Chargeback
    5. Miscellaneous Enhancements

    1. Comprehensive Database Service Catalog

    Before we get deep into the implementation of a service catalog, let's first understand what it is and what benefits it provides. Per ITIL, a service catalog is an exhaustive list of IT services that an organization provides or offers to its employees or customers. Service catalogs have been widely popular in the space of cloud computing, primarily as the medium to provide standardized and pre-approved service definitions. There is already some good collateral out there that talks about Oracle database service catalogs. The two whitepapers I recommend reading are:

    - Service Catalogs: Defining Standardized Database Service
    - High Availability Best Practices for Database Consolidation: The Foundation for Database as a Service [Oracle MAA]

    EM12c has come with an out-of-the-box service catalog and self service portal since release 1. For customers, it provides the following benefits:

    - Present a collection of standardized database service definitions
    - Define standardized pools of hardware and software for provisioning
    - Role based access to cater to different classes of users
    - Automated procedures to provision the predefined database definitions
    - Setup of chargeback plans based on service tiers and database configuration sizes, etc.

    Starting with Release 4, the scope of services offered via the service catalog has been expanded to include databases with varying levels of availability - Single Instance (SI) or Real Application Clusters (RAC) databases with multiple Data Guard based standby databases. Some salient points of the Data Guard integration:

    - Standby pools can now be defined across different datacenters or within the same datacenter as the primary (this helps in modelling the concept of near and far DR sites)
    - The standby databases can be single instance, RAC, or RAC One Node databases
    - Multiple standby databases can be provisioned, where the maximum limit is determined by the version of the database software
    - The standby databases can be in either mount or read only (requires Active Data Guard option) mode
    - All database versions 10g to 12c are supported (as certified with EM 12c)
    - All 3 protection modes can be used - maximum availability, performance, security
    - Log apply can be set to sync or async along with the required apply lag

    The different service levels or service tiers are popularly represented using metals - Platinum, Gold, Silver, Bronze, and so on. The Oracle MAA whitepaper (referenced above) calls out the various service tiers as defined by Oracle's best practices, but customers can choose any logical combination from the table below, all supported in EM 12cR4:

        Primary   Standby [1 or more]
        SI        -
        SI        SI
        RAC       -
        RAC       SI
        RAC       RAC
        RON       -
        RON       RON

    where RON = RAC One Node, which is supported via custom post-scripts in the service template.

    A sample service catalog would look like the image below.
    Here we have defined 4 service levels, which have been deployed across 2 data centers, and have 3 standardized sizes. Again, it is important to note that this is just an example to get the creative juices flowing. I imagine each customer would come up with their own catalog based on the application requirements, their RTO/RPO goals, and the product licenses they own. In the screenwatch titled 'Build Service Catalog using EM12c DBaaS', I walk through the complete steps required to set up this sample service catalog in EM12c.

    2. Additional Storage Options for Snap Clone

    In my previous blog posts, I have described the snap clone feature in detail. Essentially, it provides a storage agnostic, self service, rapid, and space efficient approach to solving your data cloning problems. The net benefit is that you get incredible amounts of storage savings (on average 90%) all while cloning databases in a matter of minutes. Space and time: two things enterprises would love to save on.

    This feature has been designed with the goal of providing data cloning capabilities while protecting your existing investments in server, storage, and software. With this in mind, we have pursued a dual solution approach of hardware and software. In the hardware approach, we connect directly to your storage appliances and perform all low level actions required to rapidly clone your databases. In the software approach, we use an intermediate software layer to talk to any storage vendor or any storage configuration to perform the same low level actions. Thus we deliver the benefits of database thin cloning without requiring you to drastically change the infrastructure or IT's operating style.

    In release 4, we expand the scope of options supported by snap clone with the addition of database CloneDB. While CloneDB is not a new feature - it was first introduced in the 11.2.0.2 patchset - it has over the years become more stable and mature. CloneDB leverages a combination of the Direct NFS (or dNFS) feature of the database, RMAN image copies, sparse files, and copy-on-write technology to create thin clones of databases from existing backups in a matter of minutes. It essentially has all the traits that we want to present to our customers via the snap clone feature. For more information on CloneDB, I highly recommend reading the following sources:

    - Blog by Tim Hall: Direct NFS (DNFS) CloneDB in Oracle Database 11g Release 2
    - Oracle OpenWorld presentation by CERN: Efficient Database Cloning using Direct NFS and CloneDB

    The advantages of the new CloneDB integration with EM12c Snap Clone are:

    - Space and time savings
    - Ease of setup - no additional software is required other than the Oracle database binary
    - Works on all platforms
    - Reduces the dependence on storage administrators
    - Cloning process fully orchestrated by EM12c, and delivered to developers/DBAs/QA testers via the self service portal
    - Uses dNFS to deliver better performance, availability, and scalability over kernel NFS
    - Complete lifecycle of the clones managed by EM12c - performance, configuration, etc.

    3. Improved Rapid Start Kits

    DBaaS deployments tend to be complex, and their setup requires a series of steps. These steps are typically performed across different users and different UIs. The Rapid Start Kit provides a single command solution to set up Database as a Service (DBaaS) and Pluggable Database as a Service (PDBaaS).
    One command creates all the cloud artifacts like roles, administrators, credentials, database profiles, PaaS Infrastructure Zones, database pools, and service templates. Once the Rapid Start Kit has been successfully executed, requests can be made to provision databases and PDBs from the self service portal. The Rapid Start Kit can create complex topologies involving multiple zones, pools, and service templates. It also supports standby databases and the use of RMAN image backups.

    The Rapid Start Kit is in reality a simple emcli script which takes a bunch of XML files as input and executes the complete automation in a matter of seconds. On a full rack Exadata, it took only 40 seconds to set up PDBaaS end-to-end. The kit works both for Oracle's engineered systems like Exadata, SuperCluster, etc. and on commodity hardware. One can draw a parallel to the Exadata One Command script, which again takes a bunch of inputs from the administrators and then runs a simple script that configures everything from the network to provisioning the DB software.

    Steps to use the kit:

    - The kit can be found under the SSA plug-in directory on the OMS: EM_BASE/oracle/MW/plugins/oracle.sysman.ssa.oms.plugin_12.1.0.8.0/dbaas/setup
    - It can be run from this default location or from any server which has the emcli client installed
    - For most scenarios, you would use the script dbaas/setup/database_cloud_setup.py
    - For Exadata, special integration is provided to reduce the number of inputs even further. The script to use for this scenario would be dbaas/setup/exadata_cloud_setup.py

    The database_cloud_setup.py script takes two inputs:

    - Cloud boundary xml: this file defines the cloud topology in terms of the zones and pools, along with host names, Oracle home locations, or container database names that would be used as infrastructure for provisioning database services. This file is optional in the case of Exadata, as the boundary is well known via the Exadata system target available in EM.
    - Input xml: this file captures inputs for users, roles, profiles, service templates, etc. - essentially, all inputs required to define the DB services and other settings of the self service portal.

    Once all the xml files have been prepared, invoke the script as follows for PDBaaS:

        emcli @database_cloud_setup.py -pdbaas -cloud_boundary=/tmp/my_boundary.xml -cloud_input=/tmp/pdb_inputs.xml

    The script will prompt for passwords a few times for key users like sysman, cloud admin, SSA admin, etc. Once complete, you can simply log into EM as the self service user and request databases from the portal. More information is available in the Rapid Start Kit chapter of the Cloud Administration Guide.

    4. Extensible Metering and Chargeback

    Last but not least, Metering and Chargeback in release 4 has been made extensible in all possible regards. The new extensibility features allow customers, partners, system integrators, etc. to:

    - Extend chargeback to any target type managed in EM
    - Promote any metric in EM as a chargeback entity
    - Extend the list of charge items via metric or configuration extensions
    - Model abstract entities like number of backup requests, job executions, support requests, etc.

    A slew of emcli verbs have also been added that allow administrators to create, edit, delete, and import/export charge plans, and assign cost centers, all via the command line. More information is available in the Chargeback API chapter of the Cloud Administration Guide.

    5. Miscellaneous Enhancements

    There are other miscellaneous, yet important, enhancements that are worth a mention.
    These have mostly been asked for by customers like you. They are:

    - Custom naming of DB services: self service users can provide custom names for DB SID, DB service, schemas, and tablespaces; every custom name is validated for uniqueness in EM
    - 'Create like' of service templates: creating variants of a service template is now only a click away. This is vital when you publish service templates to represent different database sizes or service levels.
    - Profile viewer: view the details of a profile - datafiles, control files, snapshot ids, export/import files, etc. - prior to its selection in the service template
    - Cleanup automation for failed and successful requests: a single emcli command cleans up all remnant artifacts of a failed request; cleanup can be performed on a per-request basis or for the entire pool; as an extension, you can also delete successful requests
    - Improved delete user workflow: allows administrators to reassign cloud resources to another user or delete all of them
    - Support for multiple tablespaces for Schema as a Service: in addition to multiple schemas, users can also specify multiple tablespaces per request

    I hope this was a good introduction to the new Database as a Service enhancements in EM12c R4. I encourage you to explore many of these new and existing features and give us feedback. Good luck!

    References:

    - Cloud Management page on OTN
    - Cloud Administration Guide [Documentation]

    -- Adeesh Fulay (@adeeshf)

    Read the article

  • Is this a ridiculous way to structure a DB schema, or am I completely missing something?

    - by Jim
    I have done a fair bit of work with relational databases, and I think I understand the basic concepts of good schema design pretty well. I was recently tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut instinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm?

    The DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users and information on the request being made. I would design this like so:

        User table:
            UserID (primary key, indexed, no dupes)
            FirstName
            LastName
            Department

        Request table:
            RequestID (primary key, indexed, no dupes)
            <...> various data fields containing request details
            UserID -- foreign key associated with User table

    Simple, right? The consultant designed it like this (with sample data):

        UsersTable
        UserID  FirstName  LastName
        234     John       Doe
        516     Jane       Doe
        123     Foo        Bar

        DepartmentsTable
        DepartmentID  Name
        1             Sales
        2             HR
        3             IT

        UserDepartmentTable
        UserDepartmentID  UserID  Department
        1                 234     2
        2                 516     2
        3                 123     1

        RequestTable
        RequestID  UserID  <...>
        1          516     blah
        2          516     blah
        3          234     blah

    The entire database is constructed like this, with every piece of data encapsulated in its own table and numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this a valid design for a small to mid-sized SQL DB? Thanks for comments/answers.
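    For concreteness, a minimal DDL sketch of the simpler two-table design described above; the types, lengths, and IDENTITY choice are placeholders rather than anything from the original project.

        CREATE TABLE [User] (
            UserID     int IDENTITY(1,1) PRIMARY KEY,
            FirstName  varchar(100) NOT NULL,
            LastName   varchar(100) NOT NULL,
            Department varchar(100) NOT NULL
        );

        CREATE TABLE Request (
            RequestID int IDENTITY(1,1) PRIMARY KEY,
            -- <...> various data fields containing request details
            UserID    int NOT NULL REFERENCES [User](UserID)   -- foreign key to the User table
        );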

    Read the article

  • using remote MS Access database which connects to remote SQL server

    - by Manjot
    Hi. We have a Microsoft Access database + application (on Server A) which connects to a remote SQL Server (Server B) using a System DSN ODBC connection (on Server A). The users open this Access database remotely, as it sits in a shared location on Server A, but they still have to create a local ODBC connection on their own computers to connect to Server B. Is there any way they can use the Access database without having to create a local ODBC connection? Thanks in advance.

    Read the article

  • creating tables on remote database

    - by raj
    I created a database link:

        create public database link REMOTEDB connect to REMOTEUSER identified by REMOTEPWD using 'REMOTEDB';

    Then I tried to create a table in the remote DB like this:

        create table MYTABLE@REMOTEDB (name varchar2(20));

    It says:

        ORA-02021: DDL operations are not allowed on a remote database

    Will this not work at any cost, or am I just missing some permission needed to create it?
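    A hedged sketch of the usual workaround, assuming the link name from the question and that REMOTEUSER has the CREATE TABLE privilege on the remote side: ORA-02021 blocks direct DDL over a link, but the statement can be executed on the remote database itself via DBMS_UTILITY.EXEC_DDL_STATEMENT invoked through the link.

        -- sketch; the DDL string is parsed and run on the remote database
        BEGIN
          DBMS_UTILITY.EXEC_DDL_STATEMENT@REMOTEDB(
            'CREATE TABLE MYTABLE (name VARCHAR2(20))'
          );
        END;
        /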

    Read the article
