Search Results

Search found 9929 results on 398 pages for 'azure tables'.

Page 275/398 | < Previous Page | 271 272 273 274 275 276 277 278 279 280 281 282  | Next Page >

  • Script or automate feature class creation in ESRI/ArcSDE

    - by Keith G
    I'm looking for info on how to write SQL scripts to automate the creation of a versioned feature class in ArcSDE. I want to be able to automate the process itself as well as put the scripts under version control. Can anyone point me to a resource that explains how to do this? Is this even possible? It seems like there are lots of interrelationships between tables and data when a feature class is added. P.S. It doesn't have to be pure SQL, but it should be some kind of scripting so we can keep it under version control and run it outside of the ESRI desktop tools.

    Read the article

  • How to check if a Statistics is auto-created in a SQL Server 2000 DB using T-SQL?

    - by The Shaper
    Hi all. A while back I had to come up with a way to clean up all indexes and user-created statistics from some tables in a SQL Server 2005 database. After a few attempts it worked, but now I need it to work against SQL Server 2000 databases as well. For SQL Server 2005 I used

        SELECT Name FROM sys.stats WHERE object_id = object_id(@tableName) AND auto_created = 0

    to fetch the statistics that were user-created. However, SQL 2000 doesn't have sys.stats. I managed to fetch the indexes and statistics in a distinguishable way from the sysindexes table, but I just couldn't figure out what the SQL 2000 equivalent of sys.stats.auto_created is. Any pointers? BTW: T-SQL please.
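
    A sketch of one possible SQL 2000 equivalent, untested and based on the documented INDEXPROPERTY function (statistics show up in sysindexes alongside real indexes; using 'IsStatistics' and 'IsAutoStatistics' to tell them apart is the assumption here):

        SELECT name
        FROM sysindexes
        WHERE id = OBJECT_ID(@tableName)
          AND INDEXPROPERTY(id, name, 'IsStatistics') = 1      -- a statistics object, not a regular index
          AND INDEXPROPERTY(id, name, 'IsAutoStatistics') = 0  -- not auto-created by the optimizer

    Auto-created statistics in SQL 2000 also tend to have names starting with _WA_Sys_, which can serve as a sanity check on the results.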

    Read the article

  • MS SQL Full-text search vs. LIKE expression

    - by Marks
    Hi. I'm currently looking for a way to search a big database (500MB - 10GB or more, across 10 tables) with a lot of different fields (nvarchars and bigints). Many of the fields that should be searched are not in the same table. An example: a search for '5124 Peter' should return all items that have an ID containing 5124, have 'Peter' in the title or description, have an item type id containing 5124, were created by a user named 'peter' or a user whose id contains 5124, or were created by a user with '5124' or 'peter' in his street address. How should I do the search? I read that the full-text search of MS SQL is a lot more performant than a query with the LIKE keyword, and I think the syntax is clearer, but I think it can't search on bigint (id) values, and I read it has performance problems with indexing and therefore slows down inserts to the DB. In my project there will be more inserting than reading, so this could matter. Thanks in advance, Marks
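
    A rough illustration of the two approaches on a hypothetical Items table (table and column names are made up for the example). Full-text search needs a full-text index and only covers character columns, so the bigint IDs still have to be matched with LIKE after a cast, or with an exact comparison:

        -- Full-text search over the text columns (requires a full-text index on Items):
        SELECT * FROM Items WHERE CONTAINS((Title, Description), 'Peter');

        -- LIKE on a bigint column only works after converting it to text:
        SELECT * FROM Items WHERE CAST(ItemID AS varchar(20)) LIKE '%5124%';

    A common compromise is to use full-text search for the text fields and plain predicates for the numeric ones, combined with OR or a UNION.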

    Read the article

  • Modifying records in my migration throws an authlogic error

    - by nfm
    I'm adding some columns to one of my database tables, and then populating those columns:

        def self.up
          add_column :contacts, :business_id, :integer
          add_column :contacts, :business_type, :string

          Contact.reset_column_information
          Contact.all.each do |contact|
            contact.update_attributes(:business_id => contact.client_id, :business_type => 'Client')
          end

          remove_column :contacts, :client_id
        end

    The contact.update_attributes line is causing the following Authlogic error:

        You must activate the Authlogic::Session::Base.controller with a controller object before creating objects

    I have no idea what is going on here - I'm not using a controller method to modify each row in the table, nor am I creating new objects. The error doesn't occur if the contacts table is empty. I've had a google and it seems like this error can occur when you run your controller tests, and is fixed by adding before_filter :activate_authlogic to them, but that doesn't seem relevant in my case. Any ideas? I'm stumped.
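
    One way to sidestep the model layer entirely (and with it whatever Authlogic hooks into ActiveRecord callbacks) is to backfill the new columns with plain SQL, which a migration can run via execute. A minimal sketch, assuming the table and column names from the migration above:

        UPDATE contacts SET business_id = client_id, business_type = 'Client';

    Run before the remove_column :contacts, :client_id step, this copies the data without instantiating any Contact objects.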

    Read the article

  • Weighted Average with LINQ

    - by jsmith
    My goal is to get a weighted average from one table, based on another table's primary key. Example data:

        Table1
        Key    WEIGHTED_AVERAGE
        0200   0

        Table2
        ForeignKey   LENGTH   PCR
        0200         105      52
        0200         105      60
        0200         105      54
        0200         105      -1
        0200         47       55

    I need to get a weighted average based on the length of a segment, and I need to ignore values of -1. I know how to do this in SQL, but my goal is to do this in LINQ. It looks something like this in SQL:

        SELECT Sum(t2.PCR * t2.LENGTH) / Sum(t2.LENGTH) AS WEIGHTED_AVERAGE
        FROM Table1 t1, Table2 t2
        WHERE t2.PCR <> -1 AND t2.ForeignKey = t1.Key;

    I am still pretty new to LINQ and am having a hard time figuring out how I would translate this. The resulting weighted average should come out to roughly 55.3. Thank you.

    Read the article

  • Can an Excel VBA UDF called from the worksheet ever be passed an instance of any Excel VBA object model class?

    - by jtolle
    I'm 99% sure that the answer is "no", but I'm wondering if someone who is 100% sure can say so. Consider a VBA UDF:

        Public Function f(x)
        End Function

    When you call this from the worksheet, 'x' will be a number, string, boolean, error, array, or object of type 'Range'. Can it ever be, say, an instance of 'Chart', 'ListObject', or any other Excel VBA object model class? (The question arose from me moving to Excel 2007 and playing with Tables, and wondering if I could write UDFs that accept them as parameters instead of Ranges. The answer to that seems to be no, but then I realized I didn't know for sure in general.)

    Read the article

  • Core Data Relationships in pre-populated SQLite database

    - by Cardinal
    Hi all, I'm new to Core Data. Currently I have the following tables on hand:

        tbl_teacher    tbl_student    tbl_course    tbl_student_course_map
        -----------    -----------    ----------    ----------------------
        teacher_id     student_id     course_id     student_id
        name           name           name          course_id
                                                    teacher_id

    And I'm going to model the xcdatamodel so that Course (name) has a to-one teacher relationship whose inverse is the to-many Teacher.courses, plus a to-many students relationship whose inverse is the to-many Student.courses. My questions are as follows: As I'd like to create a TableView for the Course entity, is it a must to create the inverse relationships from Teacher to Course and from Student to Course? What is the benefit of having the inverse relationship? I have some pre-defined data on hand, and I'd like to create a SQLite store pre-populated with it. How can I set up the relationships (both directions) in SQLite? Thank you for your help! Regards, Cardinal

    Read the article

  • SQL Table Setup Advice

    - by Ozzy
    Hi all. Basically I have an XML feed from an offsite server. The feed has one parameter, ?value=n, where n can only be between 1 and 30. Whatever value I pick, there will always be 4000 rows returned from the XML file. Once a day my script will call this feed 30 times, once for each value, so that's 120,000 rows. I will be doing quite complicated queries on these rows, but the main thing is that I will always filter by value first, so SELECT * WHERE value = 'N' etc. will ALWAYS be used. Now, is it better to have one table where all 120k rows are stored, or 30 tables where 4k rows are stored? EDIT: the SQL database in question will be MySQL.
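
    For what it's worth, the usual answer to this kind of question is a single table with an index on the filter column, since MySQL handles 120k rows easily and 30 identical tables complicate every query. A hypothetical sketch (table and column names are invented; only the indexed feed_value column reflects the question):

        CREATE TABLE feed_rows (
            id         INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            feed_value TINYINT NOT NULL,   -- the ?value=n parameter, 1-30
            -- ... columns for the rest of the XML fields ...
            INDEX idx_feed_value (feed_value)
        ) ENGINE=InnoDB;

    With that index, a WHERE feed_value = n filter only touches the ~4000 matching rows.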

    Read the article

  • NHibernate: one-to-many based on multiple keys?

    - by e36M3
    Let's assume I have two tables:

        Table tA: ID, ID2, SomeColumns
        Table tB: ID, ID2, SomeOtherColumns

    I am looking to create an object, let's call it ObjectA (based on tA), that will have a one-to-many relationship to ObjectB (based on tB). In my example, however, I need to use the combination of ID and ID2 as the foreign key. If I was writing SQL it would look like this:

        select tB.* from tA, tB where tA.ID = tB.ID and tA.ID2 = tB.ID2;

    I know that for each ID/ID2 combination in tA I should have many rows in tB, therefore I know it's a one-to-many relationship. Clearly the set below is not sufficient for such a mapping, as it only takes one key into account:

        <set name="A2" table="A2" generic="true" inverse="true">
          <key column="ID" />
          <one-to-many class="A2" />
        </set>

    Thanks!

    Read the article

  • C# SqlDataAdapter not populating DataSet

    - by Wesley
    I have searched the net over and over, only to not quite find the problem I am running into. I am currently having an issue getting a SqlDataAdapter to populate a DataSet. I am running Visual Studio 2008 and the query is being sent to a local instance of SQL Server 2008. If I run the query itself in SQL Server, I do get information. Code is as follows:

        string theQuery = "select Password from Employees where employee_ID = '@EmplID'";
        SqlDataAdapter theDataAdapter = new SqlDataAdapter();
        theDataAdapter.SelectCommand = new SqlCommand(theQuery, conn);
        theDataAdapter.SelectCommand.Parameters.Add("@EmplID", SqlDbType.VarChar).Value = "EmployeeName";
        theDataAdapter.Fill(theSet);

    The code to read the DataSet:

        foreach (DataRow theRow in theSet.Tables[0].Rows)
        {
            //process row info
        }

    If there is any more info I can supply please let me know.
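
    A guess at the cause, illustrated in plain SQL: because @EmplID is wrapped in single quotes inside the query string, SQL Server treats it as a literal string rather than a parameter, so no rows ever match. Compare:

        -- What the query above actually asks for (a row whose employee_ID is the literal text '@EmplID'):
        select Password from Employees where employee_ID = '@EmplID';

        -- What a parameterized query should send (no quotes around the parameter marker):
        select Password from Employees where employee_ID = @EmplID;

    Dropping the quotes in theQuery should let the added parameter value take effect.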

    Read the article

  • Using memtables in SQL: when is it reasonable and is it safe?

    - by Spiros
    I was just reading an update from a friend's project, mentioning the use of memtables to store data temporarily and then flush it to a table on disk. Up to now, I have never faced a situation where I would use a memtable, or a situation where I would think the use of one would be beneficial. So I wonder: when would someone use memtables? What makes a memtable (apart from access speed) a reasonable choice? And how safe is it, even for temp data? There is always the limitation of available physical memory.
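
    For concreteness, here is a minimal sketch of the pattern in MySQL terms, assuming "memtable" here means something like the MEMORY storage engine (the table names are hypothetical). The in-memory table is fast, but its contents disappear on a crash or restart, which is why it is usually limited to data you can afford to lose or rebuild:

        -- In-memory staging table (contents are lost if the server restarts):
        CREATE TABLE hits_staging (
            user_id  INT NOT NULL,
            hit_time DATETIME NOT NULL
        ) ENGINE=MEMORY;

        -- Periodic flush to the persistent table, then clear the staging area:
        INSERT INTO hits SELECT * FROM hits_staging;
        TRUNCATE TABLE hits_staging;

    The max_heap_table_size setting caps how big such a table may grow.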

    Read the article

  • Any experience with Castle ActiveRecord?

    - by afsharm
    Hi all, I was searching for a light data access framework based on NHibernate. I need simple CRUD and some simple HQL or LINQ-to-NHibernate queries. Performance is not an important issue, and the applications I'm working on have a simple table structure but many tables. This data access framework is going to be used in an ASP.NET WebForms application. A while ago I found S#arp Architecture, but it was developed for ASP.NET MVC. Just today I found Castle ActiveRecord, but I'm wondering: does anyone have any experience with it? Is it suitable for me? Is there anything specific I should watch out for? What about its future - will Castle ActiveRecord still be developed and actively maintained in the coming years? Thanks in advance.

    Read the article

  • Freebase Query with "JOINS"

    - by codemonkey
    ... Yeah, yeah, I know traditional joins don't exist. I actually like the Freebase query methodology in theory, I'm just having a little trouble getting it to actually work for me : ) Does anyone have a dumb-simple example of getting Freebase data via MQL that pulls from two different "tables"? In particular, I'm trying to get automotive data... so for example, pulling fields from both /automotive/model_year and /automotive/trim_year. I've read the documentation (for hours, actually). There's a distinct possibility that I'm looking right at such an example somewhere and just not seeing it because my OLTP brain doesn't comprehend what it's seeing. Note that the two "types" I'm working with above are siblings, not parent/child. Does Freebase even allow joining data between sibling nodes? I see examples of queries pulling from parent/child, but I don't think I've seen any pulling from siblings (or I've overlooked them).

    Read the article

  • NoSql Crash Course/Tutorial

    - by Chris Thompson
    Hi all, I've seen NoSQL pop up quite a bit on SO and I have a solid understanding of why you would use it (from here, Wikipedia, etc). This could be due to the lack of a concrete and uniform definition of what it is (more of a paradigm than a concrete implementation), but I'm struggling to wrap my head around how I would go about designing a system that would use it, or how I would implement it in my system. I'm really stuck in a relational-db mindset, thinking of things in terms of tables and joins... At any rate, does anybody know of a crash course/tutorial on a system that would use it (kind of a "hello world" for a NoSQL-based system), or a tutorial that takes an existing "Hello World" app based on SQL and converts it to NoSQL (not necessarily in code, just a high-level explanation)? I see this having one solid answer, but if you guys feel like it should be community wiki, I'll be happy to change it. Thanks! Chris

    Read the article

  • Synchronising local and remote DB

    - by nico
    Hi everyone, I have a general question about DB synchronisation. I'm developing a website locally (PHP + MySQL) and I would like to be able to synchronise at least the structure (and maybe the contents) of the two DBs when one of them changes (normally I would change the local copy). Right now what I'm doing is using mysqldump to dump the modified tables and then importing them into the remote DB, or doing it by hand if the changes are minimal. However, I find this tedious and error-prone. For the PHP I'm currently using Quanta+, which has the handy feature of finding files that have changed and uploading just those. Is there something similar for MySQL? Otherwise, how do you keep your DBs synchronised? Thanks, nico. PS: I'm sorry if this was already asked; I saw other questions that deal with similar topics, but couldn't really find an answer.
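
    One low-tech way to at least diff the structure, sketched here under the assumption that you can import the remote dump as a second schema next to the local one on a single MySQL server (the schema names local_copy and remote_copy are hypothetical), is to compare information_schema:

        -- Columns whose definition differs between the two schemas (or that exist only locally):
        SELECT table_name, column_name, column_type
        FROM information_schema.columns
        WHERE table_schema = 'local_copy'
          AND (table_name, column_name, column_type) NOT IN (
              SELECT table_name, column_name, column_type
              FROM information_schema.columns
              WHERE table_schema = 'remote_copy'
          );

    Dedicated schema-diff tools do the same thing more conveniently, but this shows where the information lives.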

    Read the article

  • Getting a MySQL row that doesn't conflict with another row

    - by user939951
    I have two tables that link together through an id: one is "submit_moderate" and one is "submit_post". The "submit_moderate" table looks like this:

        id   moderated_by   post
        1    James          60
        2    Alice          32
        3    Tim            18
        4    Michael        60

    I'm using a simple query to get data from the "submit_post" table according to the "submit_moderate" table:

        $get_posts = mysql_query("SELECT * FROM submit_moderate WHERE moderated_by!='$user'");

    $user is the person who is signed in. Now my problem is that when I run this query with the user 'Michael' it will retrieve this:

        1    James          60
        2    Alice          32
        3    Tim            18

    Technically this is correct, however I don't want to retrieve the first row, because 60 is associated with Michael as well as James. Basically I don't want to retrieve that value '60'. I know why this is happening, but I can't figure out how to exclude it. I appreciate any hints or advice I can get.
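
    A sketch of one way to express that exclusion in plain SQL, using the column names from the table above (posts the signed-in user has already moderated are filtered out with a subquery):

        SELECT *
        FROM submit_moderate
        WHERE moderated_by != 'Michael'
          AND post NOT IN (
              SELECT post FROM submit_moderate WHERE moderated_by = 'Michael'
          );

    With 'Michael' replaced by the escaped $user value, this drops the James/60 row because post 60 also appears against Michael.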

    Read the article

  • Linq query with subquery as comma-separated values

    - by Keith
    In my application, a company can have many employees and each employee may have multiple email addresses. The database schema relates the tables like this:

        Company - CompanyEmployeeXref - Employee - EmployeeAddressXref - Email

    I am using Entity Framework and I want to create a LINQ query that returns the name of the company and a comma-separated list of its employees' email addresses. Here is the query I am attempting:

        from c in Company
        join ex in CompanyEmployeeXref on c.Id equals ex.CompanyId
        join e in Employee on ex.EmployeeId equals e.Id
        join ax in EmployeeAddressXref on e.Id equals ax.EmployeeId
        join a in Address on ax.AddressId equals a.Id
        select new { c.Name, a.Email.Aggregate(x=x + ",") }

    Desired output:

        "Company1", "[email protected],[email protected],[email protected]"
        "Company2", "[email protected],[email protected],[email protected]"
        ...

    I know this code is wrong - I think I'm missing a group by, but it illustrates the point. I'm not sure of the syntax. Is this even possible? Thanks for any help.
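
    For comparison, here is roughly what that aggregation looks like in raw SQL Server (the classic FOR XML PATH trick for building a comma-separated list). This is not the requested LINQ, just a sketch of the shape of the result the LINQ query has to produce, using the table and column names from the query above:

        SELECT c.Name,
               STUFF((SELECT ',' + a.Email
                      FROM CompanyEmployeeXref ex
                      JOIN EmployeeAddressXref ax ON ax.EmployeeId = ex.EmployeeId
                      JOIN Address a ON a.Id = ax.AddressId
                      WHERE ex.CompanyId = c.Id
                      FOR XML PATH('')), 1, 1, '') AS Emails
        FROM Company c;

    In LINQ the same grouping is usually done in memory, joining the grouped email addresses with string.Join.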

    Read the article

  • Importing/Exporting Relationships in MS Access

    - by lamcro
    I have a couple of .mdb files with the exact same table structure. I have to change the primary key of the main table from AutoNumber to Number in all of them, which means I have to:

        1. Drop all the relationships the main table has
        2. Change the main table
        3. Create the relationships again

    ... for all the tables. Is there any way to export the relationships from one file and import them into all the rest? I am sure this can be done with some macro/VB code. Does anyone have an example I could use? Thanks.

    Read the article

  • What 'best practices' exist for handling enum hierarchies?

    - by FerretallicA
    I'm curious as to any solutions out there for addressing enum hierarchies. I'm working through some docs on Entity Framework 4 and trying to apply it to a simple inventory tracking program. The possible types for inventory to fall into are as follows:

        INVENTORY ITEM TYPES:
          Hardware
            PC
              Desktop
              Server
              Laptop
            Accessory
              Input (keyboards, scanners, etc.)
              Output (monitors, printers, etc.)
              Storage (USB sticks, tape drives, etc.)
              Communication (network cards, routers, etc.)
          Software

    What recommendations are there for handling enums in a situation like this? Are enums even the solution? I don't really want a ridiculously normalised database for such a relatively simple experiment (e.g. tables for InventoryType, InventorySubtype, InventoryTypeToSubtype, etc.). I don't really want to over-complicate my data model with each subtype being inherited, even though no additional properties or methods are included (except PC types, which would ideally have associated accessories and software, but that's probably out of scope here). It feels like there should be a really simple, elegant solution to this, but I can't put my finger on it. Any assistance or input appreciated!
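
    One middle ground that often comes up for this is a single self-referencing lookup table instead of either enums or a table per level; a hypothetical sketch (names invented for the example):

        CREATE TABLE InventoryType (
            TypeId   INT IDENTITY(1,1) PRIMARY KEY,
            Name     NVARCHAR(50) NOT NULL,
            ParentId INT NULL REFERENCES InventoryType(TypeId)  -- NULL for top-level types such as Hardware and Software
        );

    The whole hierarchy lives in one table, new categories are data rather than schema changes, and an inventory item just carries a TypeId.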

    Read the article

  • Sphinx non-fulltext, integer only search

    - by James
    Hello guys, I've got a few tables that literally only hold integers, no "words", and for some reason Sphinx is unable to hold this data in its library - it just returns "0 bytes" errors for these indexes. Is it possible to do this? If so, how? Below is an example from my Sphinx.conf for one that fails:

        source track
        {
            type           = mysql
            sql_host       = host
            sql_user       = user
            sql_pass       = pass
            sql_db         = db
            sql_port       = port
            sql_query      = SELECT id, user, time FROM track;
            sql_attr_uint  = user
            sql_attr_uint  = time
            sql_query_info = SELECT * FROM track WHERE id=$id
        }

        index track
        {
            source         = track
            path           = /var/lib/sphinx/track
            docinfo        = extern
            charset_type   = utf-8
            min_prefix_len = 1
            enable_star    = 1
        }
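
    A guess at what is going on, with a possible workaround in the SQL that feeds the indexer: Sphinx wants at least one column in sql_query that is not declared as an attribute, to serve as a full-text field, and here every non-id column (user, time) is an sql_attr_uint, so there is nothing left to index. One sketch of a workaround (the id_text alias is invented for the example) is to expose an integer as a string and leave it undeclared so it becomes a field:

        SELECT id, CAST(id AS CHAR) AS id_text, user, time FROM track

    Whether that is acceptable depends on whether you actually need full-text matching on those numbers or just attribute filtering, which Sphinx can do without searching any text - but as far as I know the indexer still refuses to build an index that has no full-text fields at all.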

    Read the article

  • Query to return internal details about stored function in SQL Server database

    - by Anthony
    I have been given access to a SQL Server database that is currently used by a 3rd-party app. As such, I don't have any documentation on how that application stores or retrieves its data. I can figure a few things out based on the names of the various tables and the parameters that the user-defined functions take and return, but I'm still getting errors at every other turn. It would be really helpful if I could see what the stored functions do with the parameters they are given in order to produce the output. Right now all I've been able to figure out is how to query for the input parameters and the output columns. Is there any built-in information_schema view that will expose what a function is doing between input and output?
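
    The function body itself is exposed by the catalog, as long as it was not created WITH ENCRYPTION. A sketch (the function name dbo.fn_Example is a placeholder):

        -- Definition text via the ANSI view (note: truncated to 4000 characters):
        SELECT ROUTINE_DEFINITION
        FROM INFORMATION_SCHEMA.ROUTINES
        WHERE ROUTINE_NAME = 'fn_Example';

        -- Full definition on SQL Server 2005 and later:
        SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.fn_Example'));

    sp_helptext 'dbo.fn_Example' prints the same text from Management Studio or any query window.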

    Read the article

  • Hash Table: Should I increase the element count on collisions?

    - by Nazgulled
    Hi. Right now my hash tables count every element inserted into the table. I use this count, together with the total hash table size, to calculate the load factor, and when it reaches about 70% I rehash. I was thinking that maybe I should only count the inserted elements that fill an empty slot, instead of all of them, because the collision method I'm using is separate chaining: the load factor keeps increasing, but with many collisions there can still be lots of empty slots in the hash table. You are probably thinking that if I have that many collisions, maybe I'm not using the best hashing method. But that's not the point - I'm using one of the known hashing algorithms out there; I tested 3 of them on my sample data and selected the one that produced the fewest collisions. My question still remains: should I keep counting every element inserted, or just the ones that fill an empty slot in the hash table?

    Read the article

  • Should I specify both INDEX and UNIQUE INDEX?

    - by Matt Huggins
    On one of my PostgreSQL tables, I have a set of two fields that will be defined as being unique in the table, but will also both be used together when selecting data. Given this, do I only need to define a UNIQUE INDEX, or should I specify an INDEX in addition to the UNIQUE INDEX? This?

        CREATE UNIQUE INDEX mytable_col1_col2_idx ON mytable (col1, col2);

    Or this?

        CREATE UNIQUE INDEX mytable_col1_col2_uidx ON mytable (col1, col2);
        CREATE INDEX mytable_col1_col2_idx ON mytable (col1, col2);

    Read the article

  • Comparing COUNT values within a query?

    - by outsyncof
    I have the following tables (assume ssn is a key):

        person(ssn, sex)
        employment(ssn, workweeksperyear)

    My assignment was to do this: given as input the number of weeks per year a person has worked, determine whether there are more males than females who work more weeks than the input value.

        SELECT COUNT(sex) AS NumMales
        FROM person
        WHERE sex = 'Male'
          AND ssn IN (SELECT ssn FROM employment WHERE workweeksperyear > 48);

    The above query gets me the number of males for an input value, and I could do the same for the number of females, but how do I compare the 2 results? Any help will be greatly appreciated!
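
    A sketch of one way to do the comparison in a single query, keeping the hard-coded 48 from the example above and counting both sexes with conditional sums:

        SELECT CASE
                 WHEN SUM(CASE WHEN p.sex = 'Male'   THEN 1 ELSE 0 END) >
                      SUM(CASE WHEN p.sex = 'Female' THEN 1 ELSE 0 END)
                 THEN 'more males' ELSE 'not more males'
               END AS Answer
        FROM person p
        JOIN employment e ON e.ssn = p.ssn
        WHERE e.workweeksperyear > 48;

    Alternatively, the two original COUNT queries can each be written as a scalar subquery and compared directly in a CASE expression.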

    Read the article

  • Using a table-alias in Kohana queries?

    - by Aristotle
    I'm trying to run a simple query with $this->db in Kohana, but am running into some syntax issues when I try to use an alias for a table within my query:

        $result = $this->db
            ->select("ci.chapter_id, ci.book_id, ci.chapter_heading, ci.chapter_number")
            ->from("chapter_info ci")
            ->where(array("ci.chapter_number" => $chapter, "ci.book_id" => $book))
            ->get();

    It seems to me that this should work just fine: I'm stating that "chapter_info" ought to be known as "ci", yet this isn't taking effect for some reason. The error is pretty straightforward:

        There was an SQL error: Table 'gb_data.chapter_info ci' doesn't exist -
        SELECT `ci`.`chapter_id`, `ci`.`book_id`, `ci`.`chapter_heading`, `ci`.`chapter_number`
        FROM (`chapter_info ci`)
        WHERE `ci`.`chapter_number` = 1 AND `ci`.`book_id` = 1

    If I use the full table name rather than an alias, I get the expected results without error. This requires me to write much more verbose queries, which isn't ideal. Is there some way to use shorter names for tables within Kohana's query builder?

    Read the article
