Search Results

Search found 6110 results on 245 pages for 'graph databases'.


  • What's the "correct way" to organize this project?

    - by user571747
    I'm working on a project that allows multiple users to submit large data files and perform operations on them. The "backend" which performs these operations is written in Perl, while the "frontend" uses PHP to load HTML template files and determine which content to deliver. Data is stored in a database (MySQL, SQLite, Oracle), and while there is data which has not yet been acted upon, Perl adds it to a running queue which delivers data to other threads based on system load. In addition, there may be pre- and post-processing of the data before and after the main Perl script operates (the specifications are unclear), so I may want to allow these processors to be user-selectable plugins. I had been writing this project in a more procedural fashion, but I am quickly realizing the benefit of separating concerns so as to limit the scope one change has on the rest of the project. I'm quite inexperienced with design patterns and am curious what the best way to proceed is. I've heard MVC thrown around quite a bit, but I am unsure of how to apply it. Specifically, what are some good options for structuring this code (in terms of design patterns and folder hierarchy)? How can I achieve this with both PHP and Perl while minimizing duplicated code between the languages? Should I keep my PHP files in the top level so I don't have ugly paths in the URL? Also, if I want to provide interchangeable databases, does each table need its own DAO implementation?
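
    For what it's worth, a minimal sketch of one common DAO arrangement on the PHP side, assuming PDO and made-up table/column names: one interface per table (or aggregate), one PDO-backed implementation, so swapping databases becomes a DSN change rather than a new class per table:

        <?php
        // Hypothetical sketch: "submissions" table and column names are invented for illustration.
        interface SubmissionDao {
            public function find($id);
            public function queueForProcessing($id);
        }

        class PdoSubmissionDao implements SubmissionDao {
            private $pdo;
            public function __construct(PDO $pdo) { $this->pdo = $pdo; }

            public function find($id) {
                $stmt = $this->pdo->prepare('SELECT * FROM submissions WHERE id = ?');
                $stmt->execute(array($id));
                return $stmt->fetch(PDO::FETCH_ASSOC);
            }

            public function queueForProcessing($id) {
                $stmt = $this->pdo->prepare('UPDATE submissions SET status = ? WHERE id = ?');
                $stmt->execute(array('queued', $id));
            }
        }

        // Switching databases is then just a different DSN:
        // $dao = new PdoSubmissionDao(new PDO('mysql:host=localhost;dbname=app', $user, $pass));
        // $dao = new PdoSubmissionDao(new PDO('sqlite:/path/to/app.db'));
        ?>

    Since PDO already abstracts the driver, you generally don't need one DAO implementation per database, only per table or aggregate, unless you rely on vendor-specific SQL.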

    Read the article

  • Why would using a Temp table be faster than a nested query?

    - by Mongus Pong
    We are trying to optimise some of our queries. One query is doing the following:

        SELECT  t.TaskID,
                t.Name AS Task,
                '' AS Tracker,
                t.ClientID,
                (<complex subquery>) Date
        INTO    [#Gadget]
        FROM    task t

        SELECT TOP 500
                TaskID, Task, Tracker, ClientID,
                dbo.GetClientDisplayName(ClientID) AS Client
        FROM    [#Gadget]
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

        DROP TABLE [#Gadget]

    (I have removed the complex subquery, because I don't think it's relevant other than to explain why this query has been done as a two-stage process.) Now I would have thought it would be far more efficient to merge this down into a single query using a subquery:

        SELECT TOP 500
                TaskID, Task, Tracker, ClientID,
                dbo.GetClientDisplayName(ClientID)
        FROM    (
                SELECT  t.TaskID,
                        t.Name AS Task,
                        '' AS Tracker,
                        t.ClientID,
                        (<complex subquery>) Date
                FROM    task t
                ) AS sub
        ORDER BY CASE WHEN Date IS NULL THEN 1 ELSE 0 END, Date ASC

    This would give the optimiser better information to work out what is going on and avoid any temporary tables. It should be faster. But it turns out it is a lot slower: 8 seconds vs under 5 seconds. I can't work out why this would be the case, as all my knowledge of databases implies that subqueries should always be faster than using temporary tables. Can anyone explain what could be going on?

    Read the article

  • Creating multiple instances of a generic database

    - by sagekilla
    Hi all, currently I'm trying to set things up so that a generic database is distributed to students. They develop an application using this database (say, a shopping cart application), submit their project to our server, and then it is graded automatically. These databases are run in Microsoft SQL Server 2005. We're using user instances to instantiate each database, and multiple requests can be serviced at once. But the problem is that when more than one student submits a project to be graded, the first database to be instantiated is the only one, and it overwrites all other copies that are currently open. So if stu1 modified his database while stu2 and stu3 had their projects being graded concurrently, stu1, stu2, and stu3 would all have identical DBs at the end of the grading. Is there any way I can have multiple independent copies of a generic database, each of which I can load concurrently and modify without changes to any one affecting the others? I did a little reading and thought it might be possible to do something along these lines:

        1. Student submits project
        2. Attach the database with a unique DB name (specified by the student)
        3. Do all necessary operations
        4. Detach the database

    I'm unsure if this would fix our problem or even be possible, so any help would be much appreciated!
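
    A hedged T-SQL sketch of that attach-with-unique-name idea (file paths and names here are hypothetical): each student's files are attached under a database name derived from the student, graded, then detached:

        -- Attach stu1's submitted files under a per-student database name
        CREATE DATABASE [grading_stu1]
            ON (FILENAME = 'C:\Submissions\stu1\shopcart.mdf'),
               (FILENAME = 'C:\Submissions\stu1\shopcart_log.ldf')
            FOR ATTACH;

        -- ... run the grading queries against [grading_stu1] ...

        EXEC sp_detach_db 'grading_stu1';

    Because each submission gets its own database name and its own files, concurrent grading runs should no longer stomp on one another, assuming the grader copies each student's mdf/ldf to a distinct path first.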

    Read the article

  • Guidance required: First time gonna work with a real high-end database (size = 50GB).

    - by claws
    I've got a project designing a database. This is going to be my first big-scale project. The good thing about it is that the information is mostly organized and currently stored in text files. The size of this information is 50GB. There are going to be a few million records in each table, and around 50 tables. I need to provide a web interface for searching and browsing. I'm going to use the MySQL DBMS. I've never worked with a database of more than 200MB before, so speed and performance were never a concern, but I followed things like normalization and indexes. I never used any kind of testing/benchmarking/query optimization/whatever because I never had to care about them. But here the purpose of creating the database is to make it quickly searchable, so I need to consider all possible aspects of the design. I was browsing the archives and found:

        http://stackoverflow.com/questions/1981526/what-should-every-developer-know-about-databases
        http://stackoverflow.com/questions/621884/database-development-mistakes-made-by-app-developers

    I'm going to keep the points mentioned in the above answers in mind. What else should I know? What else should I keep in mind?
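
    Since quick searchability is the stated goal, one habit worth forming at this scale, sketched here with made-up table and column names, is checking every search query's plan with EXPLAIN and indexing to match the predicates:

        -- Hypothetical example: a search the web interface might run
        EXPLAIN SELECT * FROM records WHERE species = 'felis' AND year > 1990;

        -- If the plan shows a full table scan ("type: ALL"), add an index matching the predicate:
        CREATE INDEX idx_records_species_year ON records (species, year);

    With a few million rows per table, the difference between an indexed lookup and a scan is the difference between milliseconds and minutes, so benchmarking with EXPLAIN early will pay off.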

    Read the article

  • Managing My Database in Source Control

    - by Jason
    As I am working on a new database project (within VS2008), and as I have never developed a database from scratch, I immediately began looking into how to manage a database within source control (in this case, Subversion). I found some information on SO, including this post: Keeping development databases in multiple environments in sync. One of the answers in particular pointed to a number of links, all of which had good, useful information. I was reading a series of posts by K. Scott Allen which describe how he manages database change. From my reading (and please pardon the noobishness of my question), it seems as though the database itself is never checked into the repository. Rather, scripts that can build the database, along with test data (which is also populated from scripts), are checked into the repository. Ultimately, this means that when a developer tests his or her app, these scripts, which are part of the build process, are run. This ensures that the database is up to date, but it is also run locally on every developer's machine. This makes sense to me (if I am indeed reading that correctly). However, if I am missing something, I would appreciate correction or additional guidance. In addition, another question I wanted to ask: does this also mean that I should NOT check in the mdf or ldf files that are created from Visual Studio? Thanks for any help and additional insight. Always appreciated.
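
    A minimal sketch of the scripts-in-the-repository idea described above (the naming is illustrative, not K. Scott Allen's exact scheme): numbered change scripts checked into Subversion, plus a version table so the build knows which scripts have already been applied:

        -- 001_create_schema.sql, 002_add_customers_table.sql, ... all live in source control
        IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'SchemaVersion')
            CREATE TABLE SchemaVersion (Version int NOT NULL);

        -- each numbered script ends by recording itself, so a build tool can skip what's applied
        INSERT INTO SchemaVersion (Version) VALUES (2);

    Under this approach the mdf/ldf files are build outputs, like compiled binaries, which is the usual argument for keeping them out of the repository.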

    Read the article

  • How do you make use of the provider independency in the Entity Framework?

    - by Anders Svensson
    I'm trying to learn more about data access and the Entity Framework. My goal is to have a "provider-independent" data access layer (to be able to switch easily, e.g. from SQL Server to MySQL or vice versa), and since EF is supposed to be provider-independent it seems like a good way to go. But how do you use this provider independence? I mean, I was expecting to be able to sort of "program to an interface" and then just switch database providers. But as far as I can tell, I'm only getting a concrete type to program against, an "Entities" class, which in my case is:

        UserDBEntities _context = new UserDBEntities();

    What I would have expected, in order to switch providers easily, was an interface, e.g. something like:

        IEntities _context = new UserDBEntities();

    sort of like I can do with datasets... But maybe that isn't how it works at all with EF? Or do you just switch the provider in the connection string, and the model stays the same? Please remember that I'm a complete newbie at this EF, and rather inexperienced with databases in general, so I would really appreciate if you could be as clear as possible :-)
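
    For illustration, a hedged sketch of how the switch actually happens with the EDMX-based EF: the provider is named in the connection string, not in code (the names below are made up, and note the caveat that the SSDL part of the model is itself provider-specific, so a second provider generally needs its own storage model):

        using System.Data.EntityClient;
        using System.Data.SqlClient;

        var store = new SqlConnectionStringBuilder
        {
            DataSource = @".\SQLEXPRESS",        // hypothetical server
            InitialCatalog = "UserDB",
            IntegratedSecurity = true
        };

        var entity = new EntityConnectionStringBuilder
        {
            Provider = "System.Data.SqlClient",  // or e.g. "MySql.Data.MySqlClient", given its EF provider
            ProviderConnectionString = store.ToString(),
            Metadata = "res://*/UserModel.csdl|res://*/UserModel.ssdl|res://*/UserModel.msl"
        };

        using (var context = new UserDBEntities(entity.ToString()))
        {
            // the LINQ queries stay the same whichever provider is plugged in
        }

    So the LINQ code is the "interface" you program to; what changes per database is configuration, plus a provider-specific SSDL.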

    Read the article

  • Just 2 free months to learn or improve my skills

    - by microspino
    On the 30th of June I will leave my everyday job to start as a freelance developer. I'd like to set a period of 2 months apart to improve my dev skills. At work I code in C#, and during my spare time I've enjoyed building Ruby on Rails web applications and creating some Arduino prototypes. I'm something more than a junior, but I don't really feel like a senior developer because I've never had a big corporate project built and designed by me with the help of other juniors (although I don't think this is really a good definition of a "senior", it helps describe my feelings). Using a scale from 0 (ignorant) to 10 (proficient like a "samurai"), the list below describes the skills I would like to improve with just 2 months. I've already bought some nice and updated books on all the subjects hereunder. The order doesn't matter:

        C = 1
        C# & .NET = 6
        Arduino & Processing = 2
        Ruby = 5
        Rails = 5
        HTML/XHTML/CSS = 9
        Javascript = 6
        Objective-C/iPhone dev = 2
        Python = 4
        Django = 4
        Design Patterns = 3
        Algorithms = 3
        Git = 5

    I haven't included SQL or databases in general, nor networking, because I spent the past 10 years working with them and I feel pretty solid there for now. As an aside, I've picked up some interest in Redis, Node.js and HTML5 from reading about them on the web. After the two months, since I have to pay my bills, I'll go searching for some new work. If the learning and developing go really well, maybe I could also invest in something I created during them. Can you give me some advice on what you think is better to improve, or a learning project to develop (something like a "summer of code" thing)? The whole point is to see my weaknesses and work on them.

    Read the article

  • How to insert several thousand columns into sqlite3?

    - by user291071
    Similar to my last question, but I ran into a problem. Let's say I have a simple dictionary like the one below, but big. When I try inserting a big dictionary using the method below, I get an operational error on c.execute(schema) for having too many columns. So what should my alternative method be for populating the columns of an SQL database? Using the ALTER TABLE command and adding each one individually?

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        dic = {
            'x1': {'y1': 1.0, 'y2': 0.0},
            'x2': {'y1': 0.0, 'y2': 2.0, 'joe bla': 1.5},
            'x3': {'y2': 2.0, 'y3 45 etc': 1.5},
        }

        # 1. Find the unique column names.
        columns = set()
        for _, cols in dic.items():
            for key, _ in cols.items():
                columns.add(key)

        # 2. Create the schema.
        col_defs = [
            # Start with the column for our key name
            '"row_name" VARCHAR(2) NOT NULL PRIMARY KEY'
        ]
        for column in columns:
            col_defs.append('"%s" REAL NULL' % column)
        schema = "CREATE TABLE simple (%s);" % ",".join(col_defs)
        c.execute(schema)

        # 3. Loop through each row
        for row_name, cols in dic.items():
            # Compile the data we have for this row.
            col_names = cols.keys()
            col_values = [str(val) for val in cols.values()]
            # Insert it.
            sql = 'INSERT INTO simple ("row_name", "%s") VALUES ("%s", "%s");' % (
                '","'.join(col_names),
                row_name,
                '","'.join(col_values)
            )
            c.execute(sql)  # this execute (and the commit) appear to have been cut off in the post
        con.commit()
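
    For comparison, a minimal sketch of the usual alternative when column counts explode: store one (row, column, value) record per cell instead of one physical column per key, which sidesteps SQLite's column limit entirely (reusing dic from the snippet above):

        import sqlite3

        con = sqlite3.connect('simple.db')
        c = con.cursor()

        # One row per cell: the "columns" become data, so their number is unbounded.
        c.execute('CREATE TABLE IF NOT EXISTS simple_kv ('
                  ' row_name TEXT NOT NULL,'
                  ' col_name TEXT NOT NULL,'
                  ' value REAL,'
                  ' PRIMARY KEY (row_name, col_name))')

        for row_name, cols in dic.items():
            for col_name, value in cols.items():
                c.execute('INSERT INTO simple_kv VALUES (?, ?, ?)',
                          (row_name, col_name, value))
        con.commit()

    Queries then filter on col_name rather than selecting a named column, and the parameterized inserts also avoid the quoting problems of building SQL with %.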

    Read the article

  • Database source control with Oracle

    - by borjab
    I have been looking for hours for a way to check a database into source control. My first idea was a program for calculating database diffs and asking all the developers to implement their changes as new diff scripts. Now I find that if I can dump a database into a file, I could check it in and use it as just another type of file. The main conditions are:

        - Works for Oracle 9iR2
        - Human-readable, so we can use diff to see the differences (.dmp files don't seem readable)
        - All tables in a batch (we have more than 200 tables)
        - It stores BOTH STRUCTURE AND DATA
        - It supports CLOB and RAW types
        - It stores procedures, packages and their bodies, functions, tables, views, indexes, constraints, sequences and synonyms
        - It can be turned into an executable script to rebuild the database on a clean machine
        - Not limited to really small databases (supports at least 200,000 rows)

    It is not easy. I have downloaded a lot of demos that fail in one way or another. EDIT: I wouldn't mind alternative approaches, provided they allow us to check a working system against our release DATABASE STRUCTURE AND OBJECTS + DATA in a batch mode. By the way, our project has been developed for years, so some approaches that can be easily implemented on a fresh start seem hard at this point. EDIT: To understand the problem better, let's say that some users can sometimes make changes to the config data in the production environment, or developers might create a new field or alter a view without notice in the release branch. I need to be aware of these changes or it will be complicated to merge them into production.
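
    As a hedged starting point for the structure half of this (DBMS_METADATA exists in Oracle 9iR2; the data half would still need spooled INSERT statements or a separate tool), a SQL*Plus session along these lines dumps human-readable DDL that can be checked in and diffed:

        SET LONG 1000000 PAGESIZE 0 LINESIZE 32767
        SPOOL schema_ddl.sql
        SELECT dbms_metadata.get_ddl('TABLE', table_name) FROM user_tables;
        SELECT dbms_metadata.get_ddl('VIEW', view_name) FROM user_views;
        SELECT dbms_metadata.get_ddl('SEQUENCE', sequence_name) FROM user_sequences;
        SPOOL OFF

    The same package covers procedures, packages, indexes and synonyms via the matching object-type arguments, though a few type names need mapping when looping over user_objects, so treat this as a sketch rather than a finished exporter.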

    Read the article

  • SQL Database Schema Design for a Large 3-Billion-Relationship Database.

    - by K-Bell
    Get your geek on. Can you solve this? I am designing a products database for SQL Server 2008 R2 Ed. (not Enterprise Ed.) that will be used to store custom product configurations for over 30,000 distinct products. The database will have up to 500 users at a time. Here is the design problem: each Product has a collection of Parts (up to 50 parts per product). So if I have 30,000 Products and each of them can have up to 50 Parts, that's 1.5 million distinct Product-to-Part relationships, or as an equation:

        30,000 (Products) x 50 (Parts) = 1.5 million Product-to-Part records

    And if each Part can have up to 2,000 finish options (a finish is a paint color)... NOTE: only one finish will be selected by a user at run-time. The 2,000 finish options I need to store are the allowed options for a specific part on a specific product. So if I have 1.5 million distinct product-to-part relationships/records and each of those parts can have up to 2,000 finishes, that is 3 billion allowable product-to-part-to-finish relationships/records, or as an equation:

        1.5 million (Product-to-Parts) x 2,000 (Finishes) = 3 billion Product-to-Part-to-Finish records

    How can I design this database so that I can execute fast and efficient queries for a specific product and return its list of Parts and all the allowable Finishes for each part, without 3 billion Product-to-Part-to-Finish records? Read time is more important than write time. Please post your thoughts/suggestions if you have experience with large databases. Thanks!
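
    One hypothetical direction, under the assumption (worth measuring first) that many part-on-product rows share the same allowed-finish list: store each distinct list once as a "finish set" and reference it, so the 3 billion rows never materialize:

        -- Sketch only; names are invented
        CREATE TABLE FinishSet       (FinishSetID int PRIMARY KEY);

        CREATE TABLE FinishSetFinish (FinishSetID int NOT NULL REFERENCES FinishSet,
                                      FinishID    int NOT NULL,
                                      PRIMARY KEY (FinishSetID, FinishID));

        CREATE TABLE ProductPart     (ProductID   int NOT NULL,
                                      PartID      int NOT NULL,
                                      FinishSetID int NOT NULL REFERENCES FinishSet,
                                      PRIMARY KEY (ProductID, PartID));

        -- If, say, only 10,000 distinct finish lists actually exist, the data collapses to
        -- 1.5M ProductPart rows plus at most 20M FinishSetFinish rows.

    A product query then joins ProductPart to FinishSetFinish through FinishSetID, which stays fast because both joins run on clustered primary keys.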

    Read the article

  • Android and fairly large SQLite datafiles

    - by SK9
    I'm starting an Android project, a port of an existing iPhone project I've completed. I have a fairly large read-only SQLite database, about 100MB in all, called "mydata.sqlite". Where do I place this in my Eclipse workspace? It's too big for "assets". Next, how do I best get at the file? I would think to try (handling exceptions later) something like:

        SQLiteDatabase myDatabase = null;
        myDatabase = SQLiteDatabase.openDatabase(myPath, null, SQLiteDatabase.OPEN_READONLY);

    But I would then need the path string myPath, and since I don't know where to put the resource, I don't know what this needs to be. Can I put "mydata.sqlite" into "res/raw" (once I create "raw" in Eclipse) and then reference it as a resource with "R.raw.mydata"? I would very much appreciate some direct help here, rather than a reference to a tutorial. I have checked tons of these, including those already cited here on Stack Overflow. I've also gone through the "Notepad" project in the Android developer documents. However, these and the documentation typically consider only new, empty or small databases. This should be a simple thing, and given the time I've spent already, it is perhaps easier to ask. Thanking you kindly in advance for your assistance.
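
    A minimal sketch of the usual pattern, assuming the file has first been brought onto the device somehow (downloaded on first run, or shipped as split chunks, since a 100MB asset won't fit as-is): copy it once into the app's private database directory, then open it by absolute path:

        // Inside a Context-aware class; exception handling omitted for brevity.
        File dbFile = context.getDatabasePath("mydata.sqlite");
        if (!dbFile.exists()) {
            dbFile.getParentFile().mkdirs();
            // Works only if the source fits in assets (or is reassembled from split chunks).
            InputStream in = context.getAssets().open("mydata.sqlite");
            OutputStream out = new FileOutputStream(dbFile);
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) > 0) out.write(buffer, 0, n);
            out.close();
            in.close();
        }
        SQLiteDatabase db = SQLiteDatabase.openDatabase(
                dbFile.getAbsolutePath(), null, SQLiteDatabase.OPEN_READONLY);

    getDatabasePath() supplies the myPath string the question asks about; res/raw has the same size constraints as assets, so it doesn't help here.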

    Read the article

  • Sync data between a windows desktop app and windows mobile client app

    - by Chris W
    I need to knock up a very quick prototype/proof-of-concept application to demo to someone within the next couple of days, so I've minimal time to research this as fully as I normally would. The set-up is a very simple database application running on a laptop: it will only ever be a single user updating a couple of tables, so I was thinking of knocking up a basic WinForms app against SQL Compact. Visual Studio's auto-generated data grid edit screens will be fine with a little customisation. The second aspect is to then add a Windows Mobile client application that can pull data from both tables stored on the laptop, edit some data and insert some extra rows before sending the changes back to the laptop copy of the database. I've not done any WinMo development, so what's the best approach for me to look at? Is it easy enough to sync data between the two databases when the WinMo device is connected to the laptop with USB? Most of the samples I've looked at so far sync SQL Compact with SQL Standard using IIS, which seems a bit overkill. The volumes of data to be synced are so small that I can easily write some manual sync code, if it's easy for me to query/update the Compact DB from the laptop application when the device is connected.

    Read the article

  • MySql ODBC connection in VB6 on WinXP VERY slow. Other machines on same network are fast.

    - by Matthew
    Hi all, I have a VB6 application that has been performing very well. Recently, we upgraded our server to Windows Server 2003. Migration of the databases and shares went well and we experienced no problems, except one, and it has happened at multiple sites. I use the MySQL ODBC 5.1 connector to point to my MySQL database. On identical machines (as far as I can tell; they are client machines, not ours), access to the DB is lightning fast on all but one computer. They use the same software and have the same connection strings, and I'm sure it's not the program but the ODBC connection. When I press the 'Test Connection' button in the ODBC connection string window, it can take up to 10 seconds on the poorly performing machine to respond with a success; all the other computers are instantaneous. I have tried using the IP address instead of the machine name in the UDL, with no change. I enabled option 256, which sped it up initially, but it's slow again. Most of the time after a restart the program will be fast for an hour or so, then go slow again with option 256 enabled. Frankly, I am out of ideas and willing to entertain any and all ideas or suggestions. This is getting pretty frustrating. Has anyone ever experienced anything like this?

    Read the article

  • At what point is it worth using a database?

    - by radix07
    I have a question relating to databases and at what point it is worth diving into one. I am primarily an embedded engineer, but I am writing an application using Qt to interface with our controller. We are at an odd point where we have enough data (around 700+ items and growing) that it would be feasible to implement a database to manage everything, but I am not sure it is worth the time right now. I have no problems implementing the GUI with files generated from Excel and parsed in, but it gets tedious and hard to track even with VBA scripts. I have been playing around with converting our data into something more manageable for the application side with Microsoft Access, and that seems to be working well. If that works out, I am only a step (or several) away from using an SQL database and using the Qt library to access and modify it. I don't have much experience managing data at this level and am curious what the best way to approach this may be. So what are some of the real benefits of using a database, if any, in this case? I realize much of this can be very application-specific, but some general ideas and suggestions on how to straddle the embedded/application programming line would be helpful. Thanks
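
    For a sense of how small the step actually is, a minimal sketch of the Qt side using the QtSql module with its bundled SQLite driver (the table and column names here are invented):

        #include <QtSql/QSqlDatabase>
        #include <QtSql/QSqlQuery>
        #include <QtCore/QString>

        QSqlDatabase db = QSqlDatabase::addDatabase("QSQLITE");
        db.setDatabaseName("controller_data.db");
        if (db.open()) {
            QSqlQuery query;
            query.exec("SELECT name, value FROM parameters WHERE group_id = 3");
            while (query.next()) {
                QString name = query.value(0).toString();
                double value = query.value(1).toDouble();
                // feed into the GUI model here
            }
        }

    An embedded file-based database like SQLite gives you indexed queries, typed columns and a single data file to version, without the server administration that usually makes the jump feel heavyweight.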

    Read the article

  • trying to back up a mysql database using php

    - by user225269
    I got this code from this site: http://www.php-mysql-tutorial.com/wikis/mysql-tutorials/using-php-to-backup-mysql-databases.aspx But I'm just a beginner, so I don't know what config.php and opendb.php are supposed to be. Do I have to create those two files in order for this code to work? If yes, then how do I create them? That isn't covered on the site.

        <?php
        include 'config.php';
        include 'opendb.php';

        $tableName  = 'mypet';
        $backupFile = 'backup/mypet.sql';
        $query      = "SELECT * INTO OUTFILE '$backupFile' FROM $tableName";
        $result     = mysql_query($query);

        include 'closedb.php';
        ?>

    Can I just include these lines at the top of the code, so that I won't be putting in the include 'opendb.php' anymore?

        $con = mysql_connect("localhost", "root", "");
        if (!$con) {
            die('Could not connect: ' . mysql_error());
        }
        mysql_select_db("Hospital", $con);
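
    Those two includes are typically just small helper files that tutorials of this vintage assume you already have. A hedged sketch of what they usually contain (values are placeholders to adapt to your setup):

        <?php
        // config.php: connection settings in one place (placeholder values)
        $dbhost = 'localhost';
        $dbuser = 'root';
        $dbpass = '';
        $dbname = 'Hospital';
        ?>

        <?php
        // opendb.php: open the connection using the settings from config.php
        $con = mysql_connect($dbhost, $dbuser, $dbpass)
            or die('Could not connect: ' . mysql_error());
        mysql_select_db($dbname, $con);
        ?>

        <?php
        // closedb.php: close it again
        mysql_close($con);
        ?>

    So yes, inlining the mysql_connect block from the question instead of including opendb.php works the same way; the includes just keep the credentials in one place across scripts.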

    Read the article

  • Get info from multiple files, match it and then display to end user, what is fastest?

    - by Patrick
    Hi, I need to build a website where we display data that is refreshed every 5 minutes into a text file with a | separator. I currently use Java to do this. What I do now: I read the text file on every request to the website, process it, and then display the data to the end user. This works fine, since Java can go through some 5,000 lines of data fast, and when I filter it, it is still extremely fast. However, now management wants the following: they have added 3 more text files with the | separator, and want me to also read those files and match the information on certain fields; if there is a match, that information should also be displayed to the end user. I think that soon enough, although Java is fast, I will run into trouble when 10 people want that information and I have to run through 4 files in total matching the information. What can I do to make this process super fast? My creative solutions so far:

        - Leave it this way, since Java is fast and end users can wait (probably less than 1 second)
        - Have a background process that dumps new data into a MySQL database every 5 minutes, since databases are extremely good at getting the same data from multiple tables

    Thank you!
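
    Before reaching for a database, a minimal in-memory join may be enough. A sketch, assuming (hypothetically) that the first |-separated field of each extra file is the shared key:

        // Load one of the extra files into a hash map keyed on the match field.
        Map<String, String[]> lookup = new HashMap<String, String[]>();
        BufferedReader reader = new BufferedReader(new FileReader("extra1.txt"));
        String line;
        while ((line = reader.readLine()) != null) {
            String[] fields = line.split("\\|");   // '|' must be escaped: it is a regex metacharacter
            lookup.put(fields[0], fields);
        }
        reader.close();

        // While rendering the main file, lookup.get(key) is an O(1) probe
        // instead of rescanning the other three files per row.

    Since the files only change every 5 minutes, the parsed maps can also be cached and rebuilt on a timer, so concurrent requests never touch the disk at all.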

    Read the article

  • Why use Entity Framework over Linq2SQL if...

    - by Refracted Paladin
    To be clear, I am not asking for a side-by-side comparison, which has already been asked ad nauseam here on SO. I am also not asking if Linq2Sql is dead, as I don't care. What I am asking is this: I am building internal apps only, for a non-profit organization. I am the only developer on staff. We ALWAYS use SQL Server as our database backend. I design and build the databases as well. I have used L2S successfully a couple of times already. Taking all this into consideration, can someone offer me a compelling reason to use EF instead of L2S? I was at Code Camp this weekend, and after an hour-long demonstration of EF, all of which I could have done in L2S, I asked this same question. The speaker's answer was, "L2S is dead..." Very well then! NOT! (see here) I understand EF is what MS WANTS us to use in the future (see here) and that it offers many more customization options. What I can't figure out is whether any of that should, or does, matter for me in this environment. One particular issue we have here is that I inherited the core app, which was built on 4 different SQL databases. L2S has great difficulty with this, but when I asked the aforementioned speaker if EF would help me in this regard, he said "No!"

    Read the article

  • Do I need a spatial index in my database?

    - by Sanoj
    I am designing an application that needs to save geometric shapes in a database. I haven't chosen the database management system yet. In my application, all database queries will have a bounding box as input, and as output I want all shapes within that box. I know that databases with a spatial index are used for this kind of application. But in my application there will not be any queries of the type "give me objects near x/y" or other more complex queries that are useful in a GIS application. I am planning to have a database without a spatial index and queries looking like:

        SELECT *
        FROM shapes
        WHERE x < max_x AND x > min_x
          AND y < max_y AND y > min_y

    with an index on the columns x (double) and y (double). As far as I can see, I don't really need a database with a spatial index, even though my application is close to that kind of application. And even if I did want nearby queries, I could create a big enough bounding box around that point. Or will this lead to poor performance? Do I really need a spatial database? When is a spatial index needed?
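
    For reference, a minimal sketch of the non-spatial approach described above, with a note on the trade-off that usually decides the question:

        -- A composite B-tree index covering both predicates
        CREATE INDEX idx_shapes_xy ON shapes (x, y);

        -- Caveat: a B-tree can only range-scan on the leading column (x); the y
        -- predicate is then applied as a filter over everything in that x range.
        -- An R-tree/spatial index prunes on both dimensions at once, which is why
        -- it scales better as the data grows or the boxes get wide.

    So for modest row counts and narrow x ranges the plain index may be perfectly fine, and the spatial index becomes worthwhile once the x-range filter alone stops being selective.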

    Read the article

  • How do I make a lock that allows only ONE thread to read from the resource?

    - by mare
    I have a file that holds an integer ID value. Currently, reading the file is protected with a ReaderWriterLockSlim, as such:

        public int GetId()
        {
            _fileLock.EnterUpgradeableReadLock();
            int id = 0;
            try
            {
                if (!File.Exists(_filePath))
                    CreateIdentityFile();

                FileStream readStream = new FileStream(_filePath, FileMode.Open, FileAccess.Read);
                StreamReader sr = new StreamReader(readStream);
                string line = sr.ReadLine();
                sr.Close();
                readStream.Close();

                id = int.Parse(line);
                return int.Parse(line);
            }
            finally
            {
                SaveNextId(id); // increment the id
                _fileLock.ExitUpgradeableReadLock();
            }
        }

    The problem is that subsequent actions after GetId() might fail. As you can see, the GetId() method increments the ID every single time, disregarding what happens after it has issued an ID. The issued ID might be left hanging (as said, exceptions might occur), and as the ID is incremented, some IDs might be left unused. So I was thinking of moving the SaveNextId(id) out, removing it (SaveNextId() actually uses the lock too, except that it's EnterWriteLock), and calling it manually from outside after all the required methods have executed. That brings up another problem: multiple threads might enter GetId() before SaveNextId() gets executed, and they might all receive the same ID. I don't want any solutions where I have to alter the IDs after the operation, correcting them in any way, because that's not nice and might lead to more problems. I need a solution where I can somehow call back into the FileIdentityManager (that's the class that handles these IDs) to let the manager know that it can save the next ID and then release the read lock on the file containing the ID. Essentially, I want to replicate the relational databases' autoincrement behaviour: if anything goes wrong during row insertion, the ID is not used, yet it also never happens that the same ID is issued twice. Hopefully the question is understandable enough for you to provide some solutions.

    Read the article

  • Entity Framework: Detect DBSchema for licensing

    - by Program.X
    We're working on a product that may or may not have differing licensing schemes for different databases. In particular, a lower-tier product would run on SQL Express, but we don't want the end user to be able to use a "full-fat" SQL install while benefiting from the price cut. Clearly this must also be the case for other DBs, so Oracle may command a higher price than SQL Server, for instance (hypothetically). We're using Entity Framework. Obviously this hides all the neatness of accessing the core schema and using sp_version or whatever it is. We'd rather not pre-load the condition by running a series of SQL commands (one for each platform) and seeing what comes back, as this would limit our DB options. But if necessary, we're prepared to do it. So, is it possible to get this using EF itself? DbContext.Connection.ServerVersion only returns something like "9.00.1234" (for SQL Server 2005). I would assume (though haven't yet checked; we need to install an instance) SQL Express would return something similar, "pretending" it is full-fat. Obviously, we have no Oracle/MySQL/etc. instance, so we can't establish whether that returns the text "Oracle" or whatever.
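
    A hedged sketch of one probe that works through EF's own connection, under the assumption that the store behind the model is SQL Server (other providers would need their own equivalent): SERVERPROPERTY('EngineEdition') distinguishes editions where ServerVersion cannot, returning 4 for Express (2 = Standard, 3 = Enterprise):

        using System;
        using System.Data;
        using System.Data.EntityClient;

        // context is the ObjectContext; its Connection is an EntityConnection
        var entityConnection = (EntityConnection)context.Connection;
        using (var cmd = entityConnection.StoreConnection.CreateCommand())
        {
            cmd.CommandText = "SELECT SERVERPROPERTY('EngineEdition')";
            if (cmd.Connection.State != ConnectionState.Open)
                cmd.Connection.Open();
            bool isExpress = Convert.ToInt32(cmd.ExecuteScalar()) == 4;
        }

    StoreConnection hands back the underlying provider connection, so the same pattern can dispatch on provider type and run a per-platform edition query only for the database that is actually configured.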

    Read the article

  • Weird Rails database errors

    - by Jason Swett
    I've had some trouble getting my Rails app to connect to PostgreSQL, so I decided to just say screw it and use SQLite for now. (I'm using the tutorial here: http://guides.rubyonrails.org/getting_started.html) I started a BRAND NEW, fresh Rails app from this tutorial. When I visit my app in the browser after deleting public/index.html, I get this the first time:

        Please install the pg adapter: `gem install activerecord-pg-adapter` (no such file to load -- active_record/connection_adapters/pg_adapter)

    That's odd to me, because I'm not mentioning PostgreSQL anywhere. Here's my database.yml:

        # SQLite version 3.x
        #   gem install sqlite3-ruby (not necessary on OS X Leopard)
        development:
          adapter: sqlite3
          database: db/development.sqlite3
          pool: 5
          timeout: 5000

        # Warning: The database defined as "test" will be erased and
        # re-generated from your development database when you run "rake".
        # Do not set this db to the same as development or production.
        test:
          adapter: sqlite3
          database: db/test.sqlite3
          pool: 5
          timeout: 5000

        production:
          adapter: sqlite3
          database: db/production.sqlite3
          pool: 5
          timeout: 5000

    To make things more confusing, I only get that "pg adapter" error on the first load. For every subsequent page request, I get this error:

        ActiveRecord::ConnectionNotEstablished

    So even though I removed all mention of PostgreSQL, I'm still getting errors. What could be going on?

    Read the article

  • Would you allow this type of query?

    - by user564577
    I'm exploring using an ORM tool in our development shop, in particular Entity Framework 4.0. Since we work with VERY large databases, I'm a bit concerned about the queries it generates. Doing something simple like getting clients with an address in a given city looks like the query below. As a database developer or admin, would you allow this? Is it as bad as it looks? Assume every join is on a clustered index.

        SELECT
            [Project2].[ClientKey] AS [ClientKey],
            [Project2].[FirstName] AS [FirstName],
            [Project2].[LastName] AS [LastName],
            [Project2].[IsEnabled] AS [IsEnabled],
            [Project2].[ChangeUser] AS [ChangeUser],
            [Project2].[ChangeDate] AS [ChangeDate],
            [Project2].[C1] AS [C1],
            [Project2].[AddressKey] AS [AddressKey],
            [Project2].[ClientKey1] AS [ClientKey1],
            [Project2].[AddressTypeCode] AS [AddressTypeCode],
            [Project2].[PrimaryAddress] AS [PrimaryAddress],
            [Project2].[AddressLine1] AS [AddressLine1],
            [Project2].[AddressLine2] AS [AddressLine2],
            [Project2].[City] AS [City],
            [Project2].[State] AS [State],
            [Project2].[ZIP] AS [ZIP]
        FROM (
            SELECT
                [Distinct1].[ClientKey] AS [ClientKey],
                [Distinct1].[FirstName] AS [FirstName],
                [Distinct1].[LastName] AS [LastName],
                [Distinct1].[IsEnabled] AS [IsEnabled],
                [Distinct1].[ChangeUser] AS [ChangeUser],
                [Distinct1].[ChangeDate] AS [ChangeDate],
                [Extent3].[AddressKey] AS [AddressKey],
                [Extent3].[ClientKey] AS [ClientKey1],
                [Extent3].[AddressTypeCode] AS [AddressTypeCode],
                [Extent3].[PrimaryAddress] AS [PrimaryAddress],
                [Extent3].[AddressLine1] AS [AddressLine1],
                [Extent3].[AddressLine2] AS [AddressLine2],
                [Extent3].[City] AS [City],
                [Extent3].[State] AS [State],
                [Extent3].[ZIP] AS [ZIP],
                CASE WHEN ([Extent3].[AddressKey] IS NULL)
                     THEN CAST(NULL AS int) ELSE 1 END AS [C1]
            FROM (
                SELECT DISTINCT
                    [Extent1].[ClientKey] AS [ClientKey],
                    [Extent1].[FirstName] AS [FirstName],
                    [Extent1].[LastName] AS [LastName],
                    [Extent1].[IsEnabled] AS [IsEnabled],
                    [Extent1].[ChangeUser] AS [ChangeUser],
                    [Extent1].[ChangeDate] AS [ChangeDate]
                FROM [Common].[Clients] AS [Extent1]
                INNER JOIN [Common].[ClientAddresses] AS [Extent2]
                    ON [Extent1].[ClientKey] = [Extent2].[ClientKey]
                WHERE ((CAST(CHARINDEX(UPPER('D'), UPPER([Extent1].[LastName])) AS int)) > 0)
                  AND ([Extent1].[IsEnabled] = 1)
                  AND ([Extent2].[City] IS NOT NULL)
                  AND ((UPPER([Extent2].[City])) = (UPPER('Colorado Springs')))
            ) AS [Distinct1]
            LEFT OUTER JOIN [Common].[ClientAddresses] AS [Extent3]
                ON [Distinct1].[ClientKey] = [Extent3].[ClientKey]
        ) AS [Project2]
        ORDER BY
            [Project2].[ClientKey] ASC,
            [Project2].[FirstName] ASC,
            [Project2].[LastName] ASC,
            [Project2].[IsEnabled] ASC,
            [Project2].[ChangeUser] ASC,
            [Project2].[ChangeDate] ASC,
            [Project2].[C1] ASC

    Read the article

  • How do I execute a SQL statement through a variable (dynamic SQL) that tries to do an insert into a table variable?

    - by Testifier
    If I do what I wanna do with a TEMPORARY TABLE, it works fine:

        DECLARE @CTRFR VARCHAR(MAX)
        SET @CTRFR = 'select blah blah blah'
        -- ^ a very long SELECT statement; this returns a 0 or some greater number.
        --   Please note! --> I NEED THIS NUMBER.

        IF EXISTS (SELECT * FROM sys.objects
                   WHERE object_id = OBJECT_ID(N'[dbo].[#CTRFRResult]') AND type IN (N'U'))
            DROP TABLE [dbo].[#CTRFRResult]

        CREATE TABLE #CTRFRResult
        (
            CTRFRResult VARCHAR(MAX)
        )

        SET @CTRFR = 'insert into #CTRFRResult ' + @CTRFR
        EXEC(@CTRFR)

    The above works fine. The problem is that several databases are using the same temp table, therefore I need to use a table VARIABLE instead of a temporary table. What I have below is not working; it says that the table must be declared:

        DECLARE @CTRFRResult TABLE
        (
            CTRFRResult VARCHAR(MAX)
        )

        SET @CTRFR = 'insert into @CTRFRResult ' + @CTRFR -- I think the issue is here.
        EXEC(@CTRFR)

    Setting @CTRFR to 'insert into...' is not working because, I'm assuming, the table name is out of scope. How would I go about mimicking the temporary-table code using a table variable?
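
    For reference, a hedged sketch of one common workaround: a table variable is indeed invisible inside EXEC(), because the dynamic batch runs in its own scope, but INSERT ... EXEC in the outer scope can capture whatever result set the dynamic SQL returns:

        DECLARE @CTRFRResult TABLE
        (
            CTRFRResult VARCHAR(MAX)
        )

        -- @CTRFR holds just the SELECT here, with no 'insert into ...' prefixed onto it
        INSERT INTO @CTRFRResult
        EXEC (@CTRFR)

    The insert happens in the caller's scope, where the table variable is declared, so each database (and each session) gets its own independent result holder.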

    Read the article

  • Database solution for 200million writes/day, monthly summarization queries

    - by sb
    Hello. I'm looking for help deciding on which database system to use. (I've been googling and reading for the past few hours; it now seems worthwhile to ask for help from someone with firsthand knowledge.) I need to log around 200 million rows (or more) per 8-hour workday to a database, then perform weekly/monthly/yearly summary queries on that data. The summary queries would be for collecting data for things like billing statements, e.g. "How many transactions of type A did each user run this month?" (they could be more complex, but that's the general idea). I can spread the database among several machines, as necessary, but I don't think I can take old data offline. I'll definitely need to be able to query a month's worth of data, maybe a year. These queries would be for my own use and wouldn't need to be generated in real time for an end user (they could run overnight, if needed). Does anyone have any suggestions as to which databases would be a good fit? P.S. Cassandra looks like it would have no problem handling the writes, but what about the huge monthly table scans? Is anyone familiar with Cassandra/Hadoop MapReduce performance?

    Read the article

  • PHP include taking too long

    - by wxiiir
    I have a PHP file of around 100MB which is full of arrays (only arrays). I've made a script that includes this file (for processing). First it exhausted XAMPP's default 128MB memory limit; I raised it to 1024MB, but it just takes forever and doesn't do anything. I'm sure the problem is caused by the sheer size of the file, because I've tried removing all lines of code and leaving just the include and an echo (so I know when it finishes executing), and it does the same thing (takes forever). I've also tried running the 100MB file on its own, with the same result. A 10MB file takes forever as well, but a similar 1MB file is read and executed almost instantly, so the problem must be more than just file size. I was avoiding using C++ for a simple project like this, and would rather not, as PHP is easier for me and the task doesn't need the added speed it would gain from being done in C++. But if I have no luck solving this problem, I guess I'll have to. EDIT: Reasons for not using a database:

        1. Whoever made this didn't use a database, and it will be pretty hard to store it in an organized database unless I can first do something with it: just read it, copy parts from it, or get it into memory somehow.
        2. I don't have experience working with databases, since pretty much nothing I've ever done in PHP needed large amounts of stored data (50KB at best). If I were planning a big project or huge chunks of data like this one, I definitely would use one, but I didn't make this mess to start with and now I have to undo it.
        3. The logic of storing a small portion of data like 10MB on the hard drive, when every computer now has enough RAM to fit the whole OS in it, is pretty much incomprehensible to me unless someone gives a good explanation. If I had to access a lot of such files simultaneously I would understand, but as I said, this is a simple project: this is the only file that will be accessed at a given time. It isn't even for some kind of website; it's to run a few times and be done with.
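
    One workaround worth trying, sketched under the assumption that the file defines a single array (the variable name here is hypothetical): pay the 100MB parse once, write the data out with serialize(), and have later runs load the serialized blob instead of making the PHP parser compile 100MB of source:

        <?php
        // One-time conversion: parse the giant PHP file, then store its array as plain data.
        include 'huge_arrays.php';                       // slow: compiles 100MB of PHP source
        file_put_contents('huge_arrays.ser', serialize($data));

        // Every later run: skip the parser entirely.
        $data = unserialize(file_get_contents('huge_arrays.ser'));
        ?>

    Deserializing a data file is typically far cheaper than include()-ing the equivalent source, because the slow part here is tokenizing and compiling millions of array literals, not reading 100MB off disk.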

    Read the article
