Is there a way to use the SQL Model (dbml) builder in VS2010 with SQL Server 2000?
It works fine in VS Express 2008 and VS2008, but it throws an "Upgrade SQL to 2005" error in VS2010, which seems a tad unreasonable.
Hi,
I guess I can extend the base providers to manipulate the base layout, etc. Is there any place where I can get the Java source code for the default providers? Where are the classes for these default providers copied to? What is the list of things I can manipulate by extending the base containers? Is there comprehensive documentation on the methods I can override?
Thanks a lot for your time!!
Regards,
Vivek
I'm embarking on a project in which users will author, save, and share their own maps over the web. We will provide them with a large number of feature classes, but users will effectively author their own maps, map symbologies, etc. Furthermore, they will create and edit their own feature classes, which they can both map and share with other users.
The model for AGS map services seems to be: author a map in ArcMap, save an MXD/MSD, publish. I'm struggling to understand how this can help us build a dynamic web mapping platform as described above. Can anyone offer some tips on how to go about it?
I have just created a Dynamic Data Entities Web Site, and I have a problem with the image type. Field templates generate the HTML elements for all the SQL data types. So how can I create field templates for image viewing and image upload?
Hi all,
I am not able to install GP 10.0 on my system; please could somebody help me with the installation steps? I have Windows XP Service Pack 2, and I am going to work with GP 10.0 against a SQL Server 2000 database. I would really appreciate it if somebody could help me with the installation steps for GP 10.0 and SQL 2000; I am really fed up with trying to install them. Please help...
Thanks,
Rahul
I have a stored procedure that takes an XML parameter and inserts the "Entity" nodes as records into a table. This works fine unless one of the numeric fields has an empty-string value in the XML; then it throws an "error converting data type nvarchar to numeric" error.
Is there a way for me to tell SQL to convert empty strings to NULL for those numeric fields in the code below?
-- @importData XML <- stored procedure param
DECLARE @l_index INT

EXECUTE sp_xml_preparedocument @l_index OUTPUT, @importData

INSERT INTO dbo.myTable
(
    [field1]
    ,[field2]
    ,[field3]
)
SELECT
    [field1]
    ,[field2]
    ,[field3]
FROM OPENXML(@l_index, 'Entities/Entity', 1)
WITH
(
    field1 int 'field1'
    ,field2 varchar(40) 'field2'
    ,field3 decimal(15, 2) 'field3'
)

EXECUTE sp_xml_removedocument @l_index
EDIT: And if it helps, sample XML. Error is thrown unless I comment out field3 in the code above or provide a value in field3 below.
<?xml version="1.0" encoding="utf-16"?>
<Entities>
<Entity>
<field1>2435</field1>
<field2>843257-3242</field2>
<field3 />
</Entity>
</Entities>
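One approach that might work is to read the problematic field as text in the WITH clause and convert it afterwards, turning empty strings into NULL with NULLIF. A sketch of the changed SELECT (the intermediate column name field3_raw is my own):

SELECT
    [field1]
    ,[field2]
    ,CAST(NULLIF([field3_raw], '') AS decimal(15, 2)) AS [field3]
FROM OPENXML(@l_index, 'Entities/Entity', 1)
WITH
(
    field1 int 'field1'
    ,field2 varchar(40) 'field2'
    ,field3_raw varchar(40) 'field3'  -- read as text, convert above
)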
Note
I have completely re-written my original post to better explain the issue I am trying to understand. I have tried to generalise the problem as much as possible.
Also, my thanks to the original people who responded. Hopefully this post makes things a little clearer.
Context
In short, I am struggling to understand the best way to design a small scale database to handle (what I perceive to be) multiple many-to-many relationships.
Imagine the following scenario for a company organisational structure:
Textile Division
    HR Dept
        Payroll (Emps)
        Hiring (Emps)
    Finance Dept
        Audit (Emps)
        Tax (Emps)

Marketing Division
    HR Dept
        Payroll (Emps)
        Hiring (Emps)
    Finance Dept
        Audit (Emps)
        Accounts (Emps)
NB: Emps denotes a list of employees who work in that area
When I first started with this issue I made four separate tables:
Divisions - Textile, Marketing (PK = DivisionID)
Departments - HR, Finance (PK = DeptID)
Functions - Payroll, Hiring, Audit, Tax, Accounts (PK = FunctionID)
Employees - List of all Employees (PK = EmployeeID)
The problem as I see it is that there are multiple many-to-many relationships, i.e. departments and divisions are many-to-many, and functions and departments are many-to-many.
Question
Giving the database structure above, suppose I wanted to do the following:
Get all employees who work in the Payroll function of the Marketing Division
To do this I need to be able to differentiate between the two Payroll functions, but I am not sure how this can be done.
I understand that I could build a 'Link / Junction' table between Departments and Functions so that I can retrieve which Functions are in which Departments. However, I would still need to differentiate the Division they belong to.
Research Effort
As you can see, I am an abecedarian when it comes to database design. I have spent the last two days researching this issue, traversing nested set models and adjacency models, and reading that this issue is known not to be NP-complete, etc. I am sure there is a simple solution?
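For what it's worth, one common way to model this is to carry the higher-level keys down into each link table, so the same Function in the same Department can still be told apart by Division. A sketch against the four tables above (column names like DivisionName/FunctionName are assumed):

CREATE TABLE Division_Department (
    DivisionID int NOT NULL REFERENCES Divisions(DivisionID),
    DeptID     int NOT NULL REFERENCES Departments(DeptID),
    PRIMARY KEY (DivisionID, DeptID)
);

CREATE TABLE Department_Function (
    DivisionID int NOT NULL,
    DeptID     int NOT NULL,
    FunctionID int NOT NULL REFERENCES Functions(FunctionID),
    PRIMARY KEY (DivisionID, DeptID, FunctionID),
    FOREIGN KEY (DivisionID, DeptID)
        REFERENCES Division_Department(DivisionID, DeptID)
);

CREATE TABLE Employee_Assignment (
    EmployeeID int NOT NULL REFERENCES Employees(EmployeeID),
    DivisionID int NOT NULL,
    DeptID     int NOT NULL,
    FunctionID int NOT NULL,
    PRIMARY KEY (EmployeeID, DivisionID, DeptID, FunctionID),
    FOREIGN KEY (DivisionID, DeptID, FunctionID)
        REFERENCES Department_Function(DivisionID, DeptID, FunctionID)
);

-- all employees in the Payroll function of the Marketing division
SELECT e.*
FROM Employees e
JOIN Employee_Assignment ea ON ea.EmployeeID = e.EmployeeID
JOIN Divisions d ON d.DivisionID = ea.DivisionID
JOIN Functions f ON f.FunctionID = ea.FunctionID
WHERE d.DivisionName = 'Marketing'
  AND f.FunctionName = 'Payroll';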
Context
I am fairly new to database design (I know the basics) and am grappling with how best to design my database for a project I am currently working on.
In short, my database will keep a log of which employees have attended certain health and safety courses throughout the year. There are multiple types of course e.g. moving objects, fire safety, hygiene etc.
In terms of my database design I need to accommodate the following:
Each location can have multiple divisions
Each division can have multiple departments
Each department can have multiple functions
Each function can have multiple job roles
Each job role can have different course requirements
Also note that the structure at each location may not be the same e.g. the departments within divisions are not the same across locations and the functions within departments may also differ.
Edit - updated to better articulate problem
Let's assume I am just looking at Location, Division and Department and I have my database as follows:
LocationTable DivisionTable DepartmentTable
LocationID(PK) DivisionID(PK) DepartmentID(PK)
LocationName DivisionName DepartmentName
There is a many-to-many relationship between Locations and Divisions and also between Departments and Divisions.
Suppose I set up a 'Junction Table' as follows:
Location_Division
LocationID(FK)
DivisionID(FK)
Using Location_Division I could easily pull back the Divisions for any Location.
However, suppose I want to pull back all departments for a given Division in a given Location.
If I set up another 'Junction Table' for Division and Department, then I can't see how I would differentiate Division by Location:
Division_Department
DivisionID(FK)
DepartmentID(FK)
Location_Division Division_Department
LocationID DivisionID DivisionID DepartmentID
1 1 1 1
1 2 1 2
2 1 2 1
2 2 2 2
Do I need to expand the number of columns in my 'Junction Table' e.g.
Location_Division_Department
LocationID(FK)
DivisionID(FK)
DepartmentID(FK)
Location_Division_Department
LocationID DivisionID DepartmentID
1 1 1
1 1 2
1 1 3
2 1 1
2 1 2
2 1 3
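With the three-column junction table sketched above, the query "all departments for a given Division in a given Location" becomes straightforward, e.g.:

SELECT d.DepartmentName
FROM Location_Division_Department ldd
JOIN DepartmentTable d ON d.DepartmentID = ldd.DepartmentID
WHERE ldd.LocationID = 1   -- the given Location
  AND ldd.DivisionID = 1;  -- the given Division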
I have created a database and some dbo tables. Now I want to create a user that can read and write to these tables, but not modify or drop them. However, I want this user to be able to create his own tables and do whatever he wants with those.
Is this possible? Could someone explain how this can be done?
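A sketch of one way this might be done, assuming SQL Server 2005 or later (the login, password, and schema names are placeholders): grant data access, but not ALTER/DROP, on dbo, and give the user a schema of his own in which to create tables.

CREATE LOGIN AppUser WITH PASSWORD = 'S0me$trongPassword';
CREATE USER AppUser FOR LOGIN AppUser;

-- read/write on the existing dbo tables, but no ALTER or DROP rights
GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO AppUser;

-- permission to create tables, plus a sandbox schema the user owns
-- (as owner he can create, alter and drop his own tables there)
GRANT CREATE TABLE TO AppUser;
GO
CREATE SCHEMA Sandbox AUTHORIZATION AppUser;
GO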
Hello
I have child / parent tables as below.
MasterTable:
MasterID, Description
ChildTable:
ChildID, MasterID, Description.
Using PIVOT / UNPIVOT, how can I get a result like the one below in a single row?
If MasterID 1 has x child records:
MasterID, ChildID1, Description1, ChildID2, Description2, ... ChildIDx, Descriptionx
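Since the number of children varies, a plain PIVOT needs a fixed column list; one sketch that might work uses ROW_NUMBER plus conditional aggregation (shown here for up to three children; a dynamic-SQL version would generalise it to x):

SELECT m.MasterID,
       MAX(CASE WHEN c.rn = 1 THEN c.ChildID END)     AS ChildID1,
       MAX(CASE WHEN c.rn = 1 THEN c.Description END) AS Description1,
       MAX(CASE WHEN c.rn = 2 THEN c.ChildID END)     AS ChildID2,
       MAX(CASE WHEN c.rn = 2 THEN c.Description END) AS Description2,
       MAX(CASE WHEN c.rn = 3 THEN c.ChildID END)     AS ChildID3,
       MAX(CASE WHEN c.rn = 3 THEN c.Description END) AS Description3
FROM MasterTable m
JOIN (
    SELECT MasterID, ChildID, Description,
           ROW_NUMBER() OVER (PARTITION BY MasterID ORDER BY ChildID) AS rn
    FROM ChildTable
) c ON c.MasterID = m.MasterID
GROUP BY m.MasterID;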
Thanks
I have a database nearly 1.9 GB in size.
MSDE 2000 does not allow DBs that exceed 2.0 GB.
I need to shrink this DB (and many like it at various client locations).
I have found and deleted many hundreds of thousands of records which are considered unneeded.
These records account for a large percentage of some of the main (largest) tables in the database, so it's reasonable to assume much space should now be reclaimable.
So now I need to shrink the DB to account for the missing records.
I execute "DBCC ShrinkDatabase('MyDB')"... no effect.
I have tried the various shrink facilities provided in SSMS... still no effect.
I have backed up the database and restored it... still no effect. Still 1.9 GB.
Why?
Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than OSql or similar.
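For reference, the sequence I would expect to end up scripting through OSql looks something like the sketch below (the logical file names are placeholders; sp_helpfile reveals the real ones). On SQL 2000 / MSDE, space freed by deleted rows sometimes cannot be reclaimed until the indexes are rebuilt, so a DBCC DBREINDEX pass over the big tables may be needed before the shrink has any effect:

USE MyDB
GO
EXEC sp_helpfile  -- find the logical data and log file names
GO
DBCC DBREINDEX ('dbo.MyBigTable')  -- repeat for each large table
GO
DBCC SHRINKFILE (MyDB_Data, 1024)  -- target size in MB
DBCC SHRINKFILE (MyDB_Log, 100)
GO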
We are starting a new project where we need to store product and many product attributes in a database. The technology stack is MS SQL 2008 and Entity Framework 4.0 / LINQ for data access.
The products (and Products Table) are pretty straightforward (a SKU, manufacturer, price, etc..). However there are also many attributes to store with each product (think industrial widgets). These may range from color to certification(s) to pipe size. Every product may have different attributes, and some may have multiples of the same attribute (Ex: Certifications).
The current proposal is that we will basically have a name/value pair table with a FK back to the product ID in each row.
An example of the attributes Table may look like this:
ProdID AttributeName AttributeValue
123 Color Blue
123 FittingSize 1.25
123 Certification AS1111
123 Certification EE2212
123 Certification FM.3
456 Pipe 11
678 Color Red
999 Certification AE1111
...
Note: Attribute name would likely come from a lookup table or enum.
So the main question here is: is this the best pattern for doing something like this? How will the performance be? Queries will be based on a JOIN of the product and attributes tables and will generally need many WHERE clauses to filter on specific attributes; the most common search will be to find a product based on a set of known/desired attributes.
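For concreteness, the "set of known attributes" search against such a name/value table usually ends up as a relational-division query rather than a pile of JOINs; a sketch (the attributes table name is assumed):

-- products that are Blue AND carry certification AS1111
SELECT pa.ProdID
FROM ProductAttributes pa
WHERE (pa.AttributeName = 'Color'         AND pa.AttributeValue = 'Blue')
   OR (pa.AttributeName = 'Certification' AND pa.AttributeValue = 'AS1111')
GROUP BY pa.ProdID
HAVING COUNT(DISTINCT pa.AttributeName) = 2;  -- both conditions matched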
If anyone has any suggestions or a better pattern for this type of data, please let me know.
Thanks!
-Ed
I have a table in which there are two columns: 1. import type, 2. Excel import template.
The second column, "Excel import template", should store the whole Excel file.
How would I save an Excel file in the database? Can I use a binary datatype column, convert the Excel file to bytes, and save that?
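A sketch of what I have in mind, assuming SQL Server 2005+ (a varbinary(max) column plus a server-side OPENROWSET ... SINGLE_BLOB load; the table, names, and path are placeholders):

CREATE TABLE dbo.ImportTemplates (
    ImportType   varchar(50)    NOT NULL PRIMARY KEY,
    TemplateFile varbinary(max) NOT NULL  -- the whole Excel file as bytes
);

INSERT INTO dbo.ImportTemplates (ImportType, TemplateFile)
SELECT 'MonthlyImport', BulkColumn
FROM OPENROWSET(BULK N'C:\templates\monthly.xls', SINGLE_BLOB) AS f;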
Thanks in advance !
Error: ('IM002', '[IM002] [unixODBC][Driver Manager]Data source name not found, and no default driver specified (0) (SQLDriverConnectW)')
I'm migrating from a Windows development machine to a Linux machine in production, and I'm having issues with the FreeTDS driver. As far as I can tell, that error message means it can't find the driver. I can connect via the CLI with sqsh and tsql. I've set up my settings.py as follows:
'bc2db': {
    'ENGINE': 'sql_server.pyodbc',
    'NAME': 'DataTEST',
    'USER': 'appuser',
    'PASSWORD': 'PASS',
    'HOST': 'bc2.domain.com',
    'options': {
        'driver': 'FreeTDS',
    },
},
Does anyone have any MSSQL experience with Django? Do I have to use a DSN? (How would I format that?)
Thanks
Hi Folks,
I have a serious performance problem.
I have a database with 2 tables (related to this problem).
One table contains strings with some global information. The second table contains each string stripped down to its individual words, so the string is, in effect, indexed word by word in the second table.
The validity of the data in the second table is less important than the validity of the data in the first table.
Since the first table can grow towards 1*10^6 records, and the second table, averaging about 10 words per string, can grow towards 1*10^7 records, I use NOLOCK when reading the second table; this leaves me free to insert new records without locking it (expect many reads on both tables).
I have a script which keeps adding and updating rows in the first table via a MERGE statement. On average, about 20 strings are merged at a time, and the script runs about once every 5 seconds.
On the first table I have a trigger which is invoked on INSERT or UPDATE; it takes the newly inserted or updated data and calls a stored procedure on it which makes sure the data is indexed in the second table. (This takes some significant time.)
The problem is that with the trigger disabled, reading the first table takes a few ms. However, with the trigger enabled, if you are unlucky enough to read the first table while it is being updated, our webserver gives you a timeout after 10 seconds (which is way too long anyway).
I can guess from this that while the trigger runs, the first table is kept (partially) locked until the trigger completes.
What do you think? If I'm right, is there an easy way around this?
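If that is what is happening, one workaround might be to decouple the heavy work from the trigger: let the trigger only record which rows changed in a small queue table, and have a background job do the word indexing asynchronously. A sketch, with made-up table and column names since the real schema isn't shown:

CREATE TABLE dbo.IndexQueue (
    StringID int      NOT NULL,
    QueuedAt datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Strings_QueueIndexing
ON dbo.Strings
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- cheap and fast: just remember what changed; a background job
    -- drains the queue and maintains the word table outside the
    -- original transaction
    INSERT INTO dbo.IndexQueue (StringID)
    SELECT StringID FROM inserted;
END;

That keeps the INSERT/UPDATE transaction (and its locks) short, at the cost of the second table lagging slightly behind, which seems acceptable given that its validity is less important.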
Thanks in advance!
Cheers, Koen
I have a couple of projects in Django and alternate between one and another every now and then. All of them have a /media/ path, which is served by django.views.static.serve, and they all have a /media/css/base.css file.
The problem is, whenever I run one project, the requests for base.css return an HTTP 304 (Not Modified), probably because the timestamp hasn't changed. But when I run the other project, the same 304 is returned, making the browser use the file cached from the previous project (and therefore the wrong stylesheet).
Just for the record, here are the middleware classes:
MIDDLEWARE_CLASSES = (
    'django.middleware.common.CommonMiddleware',
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'django.middleware.transaction.TransactionMiddleware',
)
I always use the default address http://localhost:8000.
Is there another solution (other than using different ports - 8001, 8002, etc.)?
I am trying to create some script variables in T-SQL as follows:
/*
Deployment script for MesProduction_Preloaded_KLM_MesSap
*/
GO
SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;
GO
:setvar DatabaseName "MesProduction_Preloaded_KLM_MesSap"
However, when I run this, I get an error stating 'Incorrect syntax near ':'. What am I doing wrong?
Hi,
I am very much confused.
I have a transaction in ReadCommitted Isolation level. Among other things I am also updating a counter value in it, something similar to below:
Update tblCount set counter = counter + 1
My application is a desktop application, and this transaction occurs quite frequently and concurrently. We recently noticed an error: sometimes the counter value doesn't get updated, i.e. an increment is missed. We also insert one record on each counter update, so we are sure the records have been inserted, but somehow the counter fails to update. This happens about once in 2000 simultaneous transactions.
I seriously suspect it is a lost-update anomaly I am facing, but if you look at the command above, it just updates the counter from its own value: if I have started a transaction and the transaction has reached this statement, it should have locked the row. This should not cause a lost update, but it's happening somehow.
Does this UPDATE command work in two parts? That is, does it first read the counter value (without taking an exclusive lock) and only then write the newly calculated value (taking an exclusive lock at that point)?
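For comparison, a single UPDATE that computes and captures the new value in one atomic statement can be written with T-SQL's compound assignment; if the application ever reads the counter first and writes it back in a separate statement, that pattern, rather than the UPDATE above, would explain the lost updates:

DECLARE @newValue int;

UPDATE tblCount
SET @newValue = counter = counter + 1;  -- read, increment and capture in one atomic statement

SELECT @newValue AS NewCounter;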
Please help, I have got really confused.
I am trying to implement hierarchyID in a table (dbo.[Message]) containing roughly 50,000 rows (it will grow substantially in the future). However, it takes 30-40 seconds to retrieve about 25 results.
The root node is a filler to provide uniqueness; therefore every subsequent row is a child of that dummy row.
I need to be able to traverse the table depth-first and have made the hierarchyID column (dbo.[Message].MessageID) the clustered primary key; I have also added a computed smallint (dbo.[Message].Hierarchy) which stores the level of the node.
Usage: a .NET application passes a hierarchyID value into the database, and I want to be able to retrieve all (if any) children AND parents of that node (besides the root, as it is filler).
A simplified version of the query I am using:
@MessageID hierarchyID /* passed in from application */

SELECT
    m.MessageID, m.MessageComment
FROM
    dbo.[Message] AS m
WHERE
    m.MessageID.IsDescendantOf(@MessageID.GetAncestor(@MessageID.GetLevel() - 1)) = 1
ORDER BY
    m.MessageID
From what I understand, the index should be detected automatically without a hint.
From searching forums I have seen people utilizing index hints, at least in the case of breadth-first indexes, as apparently CLR calls may be opaque to the query optimizer.
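One thing that might be worth trying, given the opaque CLR calls: compute the ancestor once into a variable, so the predicate compares against a constant and the optimizer has a better chance of seeking on the depth-first clustered key. A sketch:

DECLARE @ancestor hierarchyid;
SET @ancestor = @MessageID.GetAncestor(@MessageID.GetLevel() - 1);

SELECT m.MessageID, m.MessageComment
FROM dbo.[Message] AS m
WHERE m.MessageID.IsDescendantOf(@ancestor) = 1
ORDER BY m.MessageID;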
I have spent the past few days trying to find a solution for this issue, but to no avail.
I would greatly appreciate any assistance, and as this is my first post, I apologize in advance if this is considered a 'noobish' question. I have read the MS documentation and searched countless forums, but have not come across a succinct description of this specific issue.
Hello guys,
I have a MapServer application with SDE layers.
I'd like to know how I can edit my SDE spatial data (add/edit a point or line layer) in .NET.
Thanks
Hey everyone,
I'm trying to work on a query that will return the top 3 selling products, with the three having distinct artists. I'm getting stuck on getting the unique artists.
Simplified Table schema
Product
    ProductID
    Product Name
    Artist Name

OrderItem
    ProductID
    Qty
So results would look like this...
PID artist qty
34432, 'Jimi Hendrix', 6543
54833, 'stevie ray vaughan', 2344
12344, 'carrie underwood', 1
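A sketch of one approach, using ROW_NUMBER to keep each artist's best seller before taking the overall top 3 (I've written the columns as ArtistName/Qty; adjust to the real names):

WITH sales AS (
    SELECT p.ProductID, p.ArtistName, SUM(oi.Qty) AS TotalQty
    FROM Product p
    JOIN OrderItem oi ON oi.ProductID = p.ProductID
    GROUP BY p.ProductID, p.ArtistName
),
ranked AS (
    SELECT ProductID, ArtistName, TotalQty,
           ROW_NUMBER() OVER (PARTITION BY ArtistName ORDER BY TotalQty DESC) AS rn
    FROM sales
)
SELECT TOP 3 ProductID, ArtistName, TotalQty
FROM ranked
WHERE rn = 1            -- at most one product per artist
ORDER BY TotalQty DESC;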
When using Ant to build my Java application I keep getting this error. I have tried multiple times to use sqljdbc.jar and sqljdbc4.jar, but I continually receive this error message. I am completely stumped as to why this error is received even after upgrading to sqljdbc4.jar.
[javadoc] java.lang.UnsupportedOperationException:
Java Runtime Environment (JRE) version 1.6 is not supported by this driver.
Use the sqljdbc4.jar class library, which provides support for JDBC 4.0.
I have a fairly simple requirement - I have a table with the following (relevant) structure.
with cte as(
    select 1 id, 'AA,AB,AC,AD' names union all
    select 2, 'BA,BB' union all
    select 3, 'CA,CB,CC,CD,CE' union all
    select 4, 'DA,DB,DC'
)
I would like to create a SELECT statement which will split each "names" value into multiple rows.
For example the first row should produce
1,'AA'
1,'AB'
1,'AC'
1,'AD'
Can we do it using only SQL? This is fairly easy to do in Oracle.
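In SQL Server 2005+, one pure-SQL option is a recursive CTE that peels off the first name on each pass; a sketch building on the sample data above (the CASTs keep the anchor and recursive member types aligned):

with cte as(
    select 1 id, 'AA,AB,AC,AD' names union all
    select 2, 'BA,BB' union all
    select 3, 'CA,CB,CC,CD,CE' union all
    select 4, 'DA,DB,DC'
),
split as (
    select id,
           cast(left(names, charindex(',', names + ',') - 1) as varchar(100)) as name,
           cast(stuff(names, 1, charindex(',', names + ','), '') as varchar(100)) as rest
    from cte
    union all
    select id,
           cast(left(rest, charindex(',', rest + ',') - 1) as varchar(100)),
           cast(stuff(rest, 1, charindex(',', rest + ','), '') as varchar(100))
    from split
    where rest <> ''
)
select id, name
from split
order by id, name;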