Search Results

Search found 30605 results on 1225 pages for 'database abstraction'.

Page 292 of 1225

  • How can I speed up queries against tables I cannot add indexes to?

    - by RenderIn
    I access several tables remotely via DB Link. They are very normalized and the data in each is effective-dated. Of the millions of records in each table, only a subset of ~50k are current records. The tables are internally managed by a commercial product that will throw a huge fit if I add indexes or make alterations to its tables in any way. What are my options for speeding up access to these tables?
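
    One possible approach (not from the asker): since you control your local schema even if you can't touch the remote one, snapshot the ~50k current rows into a local object and index the copy. A hedged Oracle-style sketch, with table, column, and link names hypothetical:

        -- Keep a locally indexed copy of just the current records.
        -- All names below are hypothetical; tune the refresh interval to taste.
        CREATE MATERIALIZED VIEW current_records_mv
          REFRESH COMPLETE START WITH SYSDATE NEXT SYSDATE + 1/24  -- hourly
        AS
        SELECT *
          FROM managed_table@remote_link t
         WHERE t.effective_date <= SYSDATE
           AND (t.expiry_date IS NULL OR t.expiry_date > SYSDATE);

        -- Index the local copy; the commercial product never sees this.
        CREATE INDEX current_records_mv_ix ON current_records_mv (record_id);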

    Read the article

  • Move a million records from a MEMORY table to a MyISAM table

    - by Prashant
    Hi, I am looking for a fast way to move records from a MEMORY table to a MyISAM table. The MEMORY table has around 0.5 million records. Both tables have exactly the same structure (same number of columns, data types, etc.), but the MyISAM table is indexed (B-tree) on a few columns. There are around 25 columns, most of which are unsigned integers. I have already tried an "INSERT INTO ... SELECT * FROM ..." query, but is there any faster way to do this? Appreciate your help. Prashant
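
    A hedged sketch of one common speed-up (table names hypothetical): on a MyISAM target, disabling the non-unique indexes for the duration of the bulk insert and rebuilding them in one pass afterwards is usually faster than maintaining them row by row:

        -- MyISAM only: DISABLE KEYS defers non-unique index maintenance.
        ALTER TABLE myisam_target DISABLE KEYS;
        INSERT INTO myisam_target SELECT * FROM memory_source;
        ALTER TABLE myisam_target ENABLE KEYS;  -- rebuilds the indexes in bulk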

    Read the article

  • Performance problem: data warehouse with lots of indexes

    - by Lieven Cardoen
    Our product administers tests to some 350 candidates at the same time. At the end of a test, results for each candidate are moved to a data warehouse that has many indexes on it. Each test produces some 400 records to be entered into the warehouse, so 400 x 350 is a lot of records. If there are not many records in the data warehouse, all goes well. But once the warehouse already holds lots of records, many of the inserts fail... Is there a way to have indexes that are only rebuilt at the end of the day, or isn't that the real problem? How would you solve this?
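
    One hedged option (SQL Server syntax shown; the index and table names are hypothetical): disable the nonclustered indexes before the bulk load and rebuild them once afterwards, so the inserts don't pay per-row index maintenance:

        -- Disable before the load...
        ALTER INDEX ix_results_candidate ON dbo.TestResults DISABLE;
        -- ...bulk insert the 400 x 350 result rows here...
        -- ...then rebuild once at the end of the day.
        ALTER INDEX ix_results_candidate ON dbo.TestResults REBUILD;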

    Read the article

  • DataMapper: defining your own object methods, how?

    - by Dublinclontarf
    So let's say I have a class like below:

        class List
          include DataMapper::Resource

          property :id,       Serial
          property :username, String

          def self.my_username
            return self[:username]
          end
        end

        list = List.create(:username => 'jim')
        list.my_username

    When I run this it tells me that the method cannot be found, and on more investigation that you can only define class methods (not object methods), and that class methods don't have access to object data. Is there any way to have these methods included as object methods and get access to object data? I'm using Ruby 1.8.6 and the latest version of DataMapper.

    Read the article

  • How to effectively modify an index

    - by daedlus
    Hi everyone. Problem: I am looking for the right way to convert an index from clustered to non-clustered.

    Description: I have a table like the below in a Sybase DB:

        dbo.UserLog
        Id | UserId | time | ....

    It is hash-partitioned on UserId. It currently has two indexes: UserId (non-clustered) and time (clustered). The table has about 20 million records. I now want to make UserId the clustered index and time the non-clustered index. Is it correct to use ALTER INDEX to change from clustered to non-clustered, or do I drop the indexes and recreate them? Does the fact that UserId is used in hash partitioning have any implications for this? To me, ALTER seems the way to go, but I have not yet tried it.
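
    A hedged sketch of the drop-and-recreate route (index names hypothetical; check your ASE version, since not every release can flip an existing index's clustering in place, and building a clustered index on a 20-million-row table needs free space and a maintenance window):

        -- Drop both indexes, then recreate them with the clustering swapped.
        DROP INDEX UserLog.idx_userlog_time
        DROP INDEX UserLog.idx_userlog_userid

        CREATE CLUSTERED INDEX idx_userlog_userid ON dbo.UserLog (UserId)
        CREATE NONCLUSTERED INDEX idx_userlog_time ON dbo.UserLog (time)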

    Read the article

  • Versioned RDF store

    - by Mat
    Let me try rephrasing this: I am looking for a robust RDF store or library with the following features:

        - Named graphs, or some other form of reification.
        - Version tracking (probably at the named-graph level).
        - Privacy between groups of users, at either the named-graph or triple level.
        - Human-readable data input and output, e.g. a TriG parser and serialiser.

    I've played with Jena, Sesame, Boca, RDFLib, Redland and one or two others some time ago, but each had its problems. Have any improved in the above areas recently? Can anything else do what I want, or is RDF not yet ready for prime time? Reading around the subject a bit more, I've found that:

        - Jena: nothing further.
        - Sesame: nothing further.
        - Boca: does not appear to be maintained any more and seems only really designed for DB2. OpenAnzo, an open-source fork, appears more promising.
        - RDFLib: nothing further.
        - Redland: nothing further.
        - Talis Platform: appears to support changesets (wiki page and reference in Kniblet Tutorial Part 5), but it's a hosted-only service. Still may look into it, though.
        - SemVersion: sounded promising, but appears to be stale.

    Read the article

  • Why use NoSQL over Materialized Views?

    - by JustinT
    There has been a lot of talk recently about NoSQL. The #1 reason I hear people give for using NoSQL is that they de-normalize their DBMS data so much, to increase performance, that they end up with just one table holding all of their data. With materialized views, however, you can keep your data normalized yet have it stored as a single-table view, for the same reasons you'd use NoSQL. So why would someone use NoSQL over materialized views?
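
    For reference, a hedged sketch of the materialized-view side of the comparison (SQL Server indexed-view syntax; table and column names hypothetical). The base tables stay normalized; reads hit one precomputed, indexed rowset:

        CREATE VIEW dbo.OrderSummary
        WITH SCHEMABINDING
        AS
        SELECT c.CustomerId, c.Name, COUNT_BIG(*) AS OrderCount
          FROM dbo.Orders o
          JOIN dbo.Customers c ON c.CustomerId = o.CustomerId
         GROUP BY c.CustomerId, c.Name;
        GO
        -- The unique clustered index is what actually materializes the view.
        CREATE UNIQUE CLUSTERED INDEX ix_OrderSummary
          ON dbo.OrderSummary (CustomerId);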

    Read the article

  • Berkeley DB Java Edition: any LGPL or BSD alternatives in Java?

    - by Ali
    Hi all, I am dealing with a huge dataset consisting of key-value pairs. The queries are always range queries on the key space (keys are numbers), hence any persistent B-tree-like structure will handle the situation. I would like to use BDB Java Edition, but the product is closed source and my company doesn't want to buy a BDB-JE license. I am wondering, would you please share your experience with any non-GPL, Java-based key-value storage system? Thanks, -A

    Read the article

  • Does it make sense to use int instead of char or nvarchar for a discriminator column if I'm using i

    - by Omu
    I have something like this:

        create table account (
          id int identity(1,1) primary key,
          usertype char(1) check(usertype in ('a', 'b')) not null,
          unique(id, usertype)
        )

        create table auser (
          id int primary key,
          usertype char(1) check(usertype = 'a') not null,
          foreign key (id, usertype) references account(id, usertype)
        )

        create table buser ( ... same, just with 'b' )

    The question is: if I use int instead of char(1), is it going to work faster/better?
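
    A hedged note in code form (SQL Server; this is an alternative sketch, not the asker's schema): char(1) already stores in a single byte, while int takes four, so tinyint is the closest like-for-like numeric swap, and any performance difference is likely negligible at this scale:

        create table account (
          id int identity(1,1) primary key,
          usertype tinyint check(usertype in (1, 2)) not null,  -- 1 byte, same as char(1)
          unique(id, usertype)
        )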

    Read the article

  • Teradata equivalent of persisted computed column (in SQL Server)

    - by Cade Roux
    We have a few tables with persisted computed columns in SQL Server. Is there an equivalent of this in Teradata? And if so, what is the syntax, and are there any limitations? The particular computed columns I am looking at conform account numbers by removing leading zeros; an index is also created on the conformed account number:

        ACCT_NUM_std AS ISNULL(
            CONVERT(varchar(39),
                SUBSTRING(LTRIM(RTRIM([ACCT_NUM])),
                          PATINDEX('%[^0]%', LTRIM(RTRIM([ACCT_NUM])) + '.'),
                          LEN(LTRIM(RTRIM([ACCT_NUM])))
                )
            ), ''
        ) PERSISTED

    With the Teradata TRIM function, the trimming part would be a little simpler:

        ACCT_NUM_std AS COALESCE(
            CAST(TRIM(LEADING '0' FROM TRIM(BOTH FROM ACCT_NUM)) AS varchar(39)),
            ''
        )

    I guess I could just make this a normal column and put the code that standardizes the account numbers in all the processes which insert into the table. We used a computed column to keep the standardization code in one place.

    Read the article

  • Best ORM for simple data structures with strong query analysis?

    - by sayth
    What is the best ORM/db combination for simple data structures? That is, data that contains names as identifiers and locations, but whose main interaction will be numerical data for times (sports durations) and currency-related data. I initially want to create a sports database that will take names and statistics. Secondarily, I plan to start an investment and stock-analysis db. Which ORM suits storing many numerical types and has strong query functions? I really am not biased toward any db engine (most likely sqlite or mongo), so any suggestions for the best networkless (embedded) db server to suit said ORM are appreciated.

    Read the article

  • Firebird Data Access Designer (DDEX) installation

    - by persian Dev
    Hi, I want to use the Firebird library, and I followed its instructions as below, but I get the error "The referenced component 'FirebirdSql.Data.Firebird' could not be found."

    The instructions:

    Prerequisites: Make sure that you have Visual Studio .NET 2005 Standard or higher edition. Express editions are not supported.

    Registry update: Remember to update the path in FirebirdDDEXProviderPackageLess32.reg or FirebirdDDEXProviderPackageLess64.reg; the places to update it are marked %Path%. Install the .reg file into the registry.

    Machine.config update: Add the following two sections to machine.config (usually located at C:\WINDOWS\Microsoft.NET\Framework\v2.0.50727\CONFIG\machine.config, and C:\WINDOWS\Microsoft.NET\Framework64\v2.0.50727\CONFIG\machine.config on a 64-bit system).

        <configuration>
          ...
          <configSections>
            ...
            <section name="firebirdsql.data.firebirdclient"
                     type="System.Data.Common.DbProviderConfigurationHandler, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" />
            ...
          </configSections>
          ...
          <system.data>
            <DbProviderFactories>
              ...
              <add name="FirebirdClient Data Provider"
                   invariant="FirebirdSql.Data.FirebirdClient"
                   description=".Net Framework Data Provider for Firebird"
                   type="FirebirdSql.Data.FirebirdClient.FirebirdClientFactory, FirebirdSql.Data.FirebirdClient, Version=%Version%, Culture=%Culture%, PublicKeyToken=%PublicKeyToken%" />
              ...
            </DbProviderFactories>
          </system.data>
          ...
        </configuration>

    And substitute: %Version% with the version of the provider assembly that you have in the GAC, %Culture% with the culture of the provider assembly that you have in the GAC, and %PublicKeyToken% with the PublicKeyToken of the provider assembly that you have in the GAC.

    Read the article

  • SimpleDB to ActiveResource in Rails

    - by Victor P
    I'm looking for a way to map an ActiveResource to SimpleDB. I want to avoid plugins/gems, as all the ones I have used are outdated/buggy/unmaintained. It doesn't seem hard; I wonder if any of you have successfully implemented a Rails app with SimpleDB as an ActiveResource. How did you do it? Thanks.

    Read the article

  • Django models: Use multiple values as a key?

    - by Rosarch
    Here is a simple model:

        class TakingCourse(models.Model):
            course = models.ForeignKey(Course)
            term = models.ForeignKey(Term)

    Instead of Django creating a default primary key, I would like to use both course and term as the primary key; taken together, they uniquely identify a tuple. Is this allowed by Django? On a related note: I am trying to represent users taking courses in certain terms. Is there a better way to do this?

        class Course(models.Model):
            name = models.CharField(max_length=200)
            requiredFor = models.ManyToManyField(RequirementSubSet, blank=True)
            offeringSchool = models.ForeignKey(School)

            def __unicode__(self):
                return "%s at %s" % (self.name, self.offeringSchool)

        class MyUser(models.Model):
            user = models.ForeignKey(User, unique=True)
            takingReqSets = models.ManyToManyField(RequirementSet, blank=True)
            takingTerms = models.ManyToManyField(Term, blank=True)
            takingCourses = models.ManyToManyField(TakingCourse, blank=True)
            school = models.ForeignKey(School)

        class TakingCourse(models.Model):
            course = models.ForeignKey(Course)
            term = models.ForeignKey(Term)

        class Term(models.Model):
            school = models.ForeignKey(School)
            isPrimaryTerm = models.BooleanField()
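
    As an aside (a hedged note, not from the asker): Django itself does not support composite primary keys; the usual workaround is a unique-together constraint on the pair plus the default surrogate key. A true composite key would look like this at the SQL level (table and column names hypothetical):

        CREATE TABLE taking_course (
          course_id integer NOT NULL REFERENCES course (id),
          term_id   integer NOT NULL REFERENCES term (id),
          PRIMARY KEY (course_id, term_id)  -- the pair is the identity
        );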

    Read the article

  • Use the repository pattern when using PLINQO generated data?

    - by Chad
    I'm "upgrading" an MVC app. Previously, the DAL was a part of the Model, as a series of repositories (based on the entity name) using standard LINQ to SQL queries. Now, it's a separate project and is generated using PLINQO. Since PLINQO generates query extensions based on the properties of the entity, I started using them directly in my controller... and eliminated the repositories all together. It's working fine, this is more a question to draw upon your experience, should I continue down this path or should I rebuild the repositories (using PLINQO as the DAL within the repository files)? One benefit of just using the PLINQO generated data context is that when I need DB access, I just make one reference to the the data context. Under the repository pattern, I had to reference each repository when I needed data access, sometimes needing to reference multiple repositories on a single controller. The big benefit I saw on the repositories, were aptly named query methods (i.e. FindAllProductsByCategoryId(int id), etc...). With the PLINQO code, it's _db.Product.ByCatId(int id) - which isn't too bad either. I like both, but where it gets "harrier" is when the query uses predicates. I can roll that up into the repository query method. But on the PLINQO code, it would be something like _db.Product.Where(x = x.CatId == 1 && x.OrderId == 1); I'm not so sure I like having code like that in my controllers. Whats your take on this?

    Read the article

  • MySQL: selecting a specific date range to get "current" popular screensavers

    - by Joe
    Let's say I have a screensaver website. I want to display the CURRENT top 100 screensavers on the front page of the website, meaning the "RECENT" top 100. What would be an example query to do this? My current one is:

        SELECT *
        FROM tbl_screensavers
        WHERE WEEK(tbl_screensavers.DateAdded) = WEEK('".date("Y-m-d H:i:s",strtotime("-1 week"))."')
        ORDER BY tbl_screensavers.ViewsCount, tbl_screensavers.DateAdded

    This selects the most-viewed (tbl_screensavers.ViewsCount) screensavers that were added (tbl_screensavers.DateAdded) in the last week. However, in some cases there are no screensavers, or fewer than 100, submitted in that week. So how can I write a query which selects the "RECENT" top 100 screensavers? Hopefully you get what I'm trying to accomplish when I say "RECENT" or "CURRENT" top screensavers: the most viewed recently, not the most viewed of all time.
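
    A hedged sketch of one alternative (MySQL; the age-discount formula is an arbitrary assumption): instead of a hard one-week cutoff, rank by views discounted by age, so the list still fills up when a given week has few submissions:

        -- Views divided by age in days: newer items need fewer views to rank.
        SELECT s.*,
               s.ViewsCount / (1 + DATEDIFF(NOW(), s.DateAdded)) AS recency_score
          FROM tbl_screensavers s
         ORDER BY recency_score DESC
         LIMIT 100;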

    Read the article

  • How do you find the disk size of a Postgres table and its indexes?

    - by mmrobins
    I'm coming to Postgres from Oracle and am looking for a way to find the table and index sizes in terms of bytes/MB/GB/etc., or even better, the sizes of all tables. In Oracle I had a nasty long query that looked at user_lobs and user_segments to get an answer. I assume in Postgres there's something I can use in the information_schema tables, but I'm not seeing where. Thanks in advance.
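
    A hedged sketch using standard PostgreSQL built-ins (the table name is hypothetical); the sizes actually come from the pg_class catalog rather than information_schema:

        -- One table: heap only vs. heap + indexes + TOAST.
        SELECT pg_size_pretty(pg_relation_size('my_table'))       AS table_only,
               pg_size_pretty(pg_total_relation_size('my_table')) AS with_indexes_and_toast;

        -- All ordinary tables in the current database, largest first.
        SELECT relname,
               pg_size_pretty(pg_total_relation_size(oid)) AS total_size
          FROM pg_class
         WHERE relkind = 'r'
         ORDER BY pg_total_relation_size(oid) DESC;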

    Read the article

  • MS Query Analyzer / Management Studio replacement?

    - by kprobst
    I've been using SQL Server since version 6.5, and I've always been a bit amazed that the tools seem to be targeted at DBAs rather than developers. I liked the simplicity and speed of the Query Analyzer, for example, but hated the built-in editor, which was really no better than a syntax-coloring-capable Notepad. Now that we have Management Studio, the management part seems a bit better, but from a developer standpoint the tool is even worse. Visual Studio's excellent text editor... without a way to customize keyboard bindings!? Don't get me started on how unusable the tree-based management hierarchy is. Why can't I re-root the tree on a list of stored procs, for example, the way Enterprise Manager used to allow? Now I have a treeview that needs to be scrolled horizontally, which makes it eminently useless. The SQL Server support in Visual Studio is fantastic for working with stored procedures and functions, but it's terrible as a general ad hoc data query tool. I've tried various tools over the years, but invariably they seem to focus on the management side and shortchange the developer in me. I just want something with basic admin capabilities, good keyboard support, and the requisite DDL functionality (ideally something like the Query Analyzer). At this point I'm seriously thinking of using vim + sqlcmd and a console... I'm that desperate :) Those of you who work day in and day out with SQL Server and Visual Studio: do you find the tools adequate? Have you ever wished they were better, and if you have found something better, could you share? Thanks!

    Read the article

  • Data sync solution?

    - by user321088
    For security reasons I'm in an environment where third-party apps can't access my DB. Because of this I need some service/tool/script (dunno what yet... I'm open to the best option, still reading to see what I'm going to do...) which enables me to generate, on a regular basis (daily, weekly, monthly), a CSV file with all new/modified records for a certain application. I should be able to automate this process and also export a new file at any time. So it should keep track, for each application, of which records that application still needs. Each application will need the data in some format (csv/xls/sql), and some fields will be needed by some applications and not by others... It should be fairly flexible... What is the best option for me? Creating some custom tables for each application and extracting modified data based on those?
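
    A hedged sketch of one design (generic SQL; every name here is hypothetical): keep a per-application watermark and export only the rows modified since that application's last run:

        CREATE TABLE export_watermark (
          app_name      varchar(50) PRIMARY KEY,
          last_exported timestamp   NOT NULL
        );

        -- Rows 'app1' hasn't seen yet (assumes records has a last_modified column).
        SELECT r.*
          FROM records r
          JOIN export_watermark w ON w.app_name = 'app1'
         WHERE r.last_modified > w.last_exported;

        -- After a successful export, advance the watermark.
        UPDATE export_watermark
           SET last_exported = CURRENT_TIMESTAMP
         WHERE app_name = 'app1';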

    Read the article

  • Vehicle 2 Vehicle Communication Questions

    - by pinnacler
    I have a rare opportunity to meet, in a few hours, the man in charge of implementing vehicle-to-vehicle communication for the US Department of Transportation, along with two others. Do YOU have any questions for him? I know this is a little outside the normal, but this is a 'reverse' thread, and I felt he has some great knowledge on the subject that I want to share with this community. I'll post his answers to your questions later today. Ask about V2V implementation, privacy issues, or use cases; or, if you've thought of a great way to use V2V and want me to share it with him, he can at least think about it. He is in charge of the panel that creates the standard. Or anything else...

    Read the article

  • What is the best way to handle validity dates in applications?

    - by user214626
    Hello, how do we model these objects?

    Scenario 1: the price changes within a time period.

        EffectiveDate  ExpiryDate  Price
        2009-01-01     2009-01-31  $800
        2009-02-01     NULL        $900

    If the price changes to $910 on 2009-02-15, then the system should automatically update the expiry date on the previously effective price to 2009-02-14, to keep it consistent.

    Scenario 2: no price is specified between 2009-02-01 and 2009-02-28.

        EffectiveDate  ExpiryDate  Price
        2009-01-01     2009-01-31  $800
        2009-03-01     NULL        $900

    If a new price is specified effective from 2009-02-15 onwards, then the system should automatically set the expiry date on the record to be inserted to 2009-02-28, because a record effective from 2009-03-01 already exists. Please suggest an effective way to handle these scenarios in my framework, or are there any frameworks around that can do this? Thanks
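
    A hedged sketch of the Scenario 1 bookkeeping (generic SQL; table and column names hypothetical): when a new price becomes effective, close the overlapping open-ended row in the same transaction as the insert:

        -- Close the currently open row as of the day before the new price.
        UPDATE price_history
           SET ExpiryDate = DATE '2009-02-14'
         WHERE ExpiryDate IS NULL
           AND EffectiveDate <= DATE '2009-02-14';

        -- Insert the new open-ended price.
        INSERT INTO price_history (EffectiveDate, ExpiryDate, Price)
        VALUES (DATE '2009-02-15', NULL, 910);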

    Read the article
