Search Results

Search found 22526 results on 902 pages for 'multiple databases'.

  • How to implement an offline reader-writer lock

    - by Peter Morris
    Some context for the question: all objects in this question are persistent, and all requests will come from a Silverlight client talking to an app server via a binary protocol (Hessian), not WCF. Each user will have a session key (not an ASP.NET session) which will be a string, integer, or GUID (undecided so far). Some objects might take a long time to edit (30 or more minutes), so we have decided to use pessimistic offline locking: pessimistic because having to reconcile conflicts would be far too annoying for users, offline because the client is not permanently connected to the server.

    Rather than storing session/object locking information in the object itself, I have decided that any aggregate root that may have its instances locked should implement an interface ILockable:

        public interface ILockable
        {
            Guid LockID { get; }
        }

    This LockID will be the identity of a "Lock" object which holds the information about which session is locking it. Now, if this were simple pessimistic locking I'd be able to achieve it very simply (using an incrementing version number on Lock to identify update conflicts), but what I actually need is reader-writer pessimistic offline locking. The reason is that some parts of the application will perform actions that read these complex structures, including:

    - Reading a single structure to clone it.
    - Reading multiple structures in order to create a binary file to "publish" the data to an external source.

    Read locks will be held for a very short period of time, typically less than a second, although in some circumstances they could be held for about 5 seconds at a guess. Write locks will mostly be held for a long time, as they are mostly held by humans. There is a high probability of two users trying to edit the same aggregate at the same time, and a high probability of many users needing to temporarily read-lock at the same time too.

    I'm looking for suggestions as to how I might implement this. One additional point: if I want to place a write lock and there are some read locks, I would like to "queue" the write lock so that no new read locks are placed. If the read locks are removed within X seconds then the write lock is obtained; if not, the write lock backs off. No new read locks would be placed while a write lock is queued.

    So far I have this idea: the Lock object will have a version number (int) so I can detect multi-update conflicts, reload, and try again. It will have a string[] for read locks, a string to hold the session ID that has a write lock, a string to hold the queued write lock, and possibly a recursion counter to allow the same session to lock multiple times (for both read and write locks), but I'm not sure about that yet. The rules (see the sketch after this list):

    - Can't place a read lock if there is a write lock or a queued write lock.
    - Can't place a write lock if there is a write lock or a queued write lock.
    - If there are no locks at all, then a write lock may be placed.
    - If there are read locks, then a write lock will be queued instead of a full write lock placed (if after X time the read locks are not gone, the lock backs off; otherwise it is upgraded).
    - Can't queue a write lock for a session that has a read lock.

    Can anyone see any problems? Suggest alternatives? Anything? I'd appreciate feedback before deciding on what approach to take.
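
    A minimal in-memory sketch of those rules in C# (the class shape, member names, and the single queued-writer slot are assumptions; persistence and the version-conflict retry loop are omitted):

        public class Lock
        {
            public Guid Id { get; set; }
            public int Version { get; set; }  // bump on every change; reload and retry on conflict
            public List<string> ReadSessions { get; } = new List<string>();
            public string WriteSession { get; set; }
            public string QueuedWriteSession { get; set; }

            public bool TryAcquireRead(string sessionId)
            {
                if (WriteSession != null || QueuedWriteSession != null) return false;
                if (!ReadSessions.Contains(sessionId)) ReadSessions.Add(sessionId);
                return true;
            }

            public bool TryAcquireWrite(string sessionId)
            {
                if (WriteSession != null || QueuedWriteSession != null) return false;
                if (ReadSessions.Count == 0) { WriteSession = sessionId; return true; }
                if (ReadSessions.Contains(sessionId)) return false; // rule: no queueing behind your own read lock
                QueuedWriteSession = sessionId; // new readers now blocked; caller re-polls for up to X seconds
                return false;
            }
        }

    One hole worth checking against these rules: a crashed client leaves its locks behind, so some expiry or heartbeat on session IDs is probably needed on top of them.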

    Read the article

  • How Do I Bind a "selected Item" in a Listbox to a ItemsControl in WPF?

    - by Scott
    LowDown: I am trying to create a document viewer in WPF. It will allow the user to preview selected documents and, if they want, compare the documents side by side. The layout is this: on the left side is a full list box; on the right side is an ItemsControl. Inside the items control will be a collection of the "selected documents" in the list box. So a user can select multiple items in the list box and, for each new item they select, add the item to the collection on the right. I want the collection to look like an image gallery of the kind that shows up in Google/Bing image searches. Make sense?

    The problem I am having is that I can't get the WPFPreviewer to bind correctly to the selected item in the list box under the ItemsControl. Side note: the WPFPreviewer is something Microsoft puts out that allows us to preview documents. Other previewers can be built for all types of documents, but I'm going basic here until I get this working right. I have been successful in binding to the list box WITHOUT the items control:

        <Window.Resources>
            <DataTemplate x:Key="listBoxTemplate">
                <StackPanel Margin="3">
                    <DockPanel>
                        <Image Source="{Binding IconURL}" Height="30"></Image>
                        <TextBlock Text=" " />
                        <TextBlock x:Name="Title" Text="{Binding Title}" FontWeight="Bold" />
                        <TextBlock x:Name="URL" Visibility="Collapsed" Text="{Binding Url}"/>
                    </DockPanel>
                </StackPanel>
            </DataTemplate>
        </Window.Resources>
        <Grid Background="Cyan">
            <ListBox HorizontalAlignment="Left" ItemTemplate="{StaticResource listBoxTemplate}"
                     Width="200" AllowDrop="True" x:Name="lbDocuments"
                     ItemsSource="{Binding Path=DocumentElements,ElementName=winDocument}"
                     DragEnter="documentListBox_DragEnter" />
            <l:WPFPreviewHandler Content="{Binding ElementName=lbDocuments, Path=SelectedItem.Url}"/>
        </Grid>

    Though, once I add in the ItemsControl, I can't get it to work anymore:

        <Window.Resources>
            <!-- same listBoxTemplate as above -->
        </Window.Resources>
        <Grid>
            <ListBox HorizontalAlignment="Left" ItemTemplate="{StaticResource listBoxTemplate}"
                     Width="200" AllowDrop="True" x:Name="lbDocuments"
                     ItemsSource="{Binding Path=DocumentElements,ElementName=winDocument}"
                     DragEnter="documentListBox_DragEnter" />
            <ItemsControl x:Name="DocumentViewer"
                          ItemsSource="{Binding ElementName=lbDocuments, Path=SelectedItem.Url}">
                <ItemsControl.ItemTemplate>
                    <DataTemplate>
                        <Grid Background="Cyan">
                            <l:WPFPreviewHandler Content="{Binding Url}"/>
                        </Grid>
                    </DataTemplate>
                </ItemsControl.ItemTemplate>
            </ItemsControl>
        </Grid>

    Can someone please help me out with binding to the ItemsControl when I select one or more items in the list box?
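
    A sketch of one direction that may help (untested): SelectedItem.Url is a single string, so using it as an ItemsSource hands the ItemsControl a sequence of characters, none of which has a Url property for the inner {Binding Url} to resolve. Binding the ItemsControl to the list box's SelectedItems instead gives it the selected document objects themselves; the collection ListBox exposes raises change notifications, so the gallery should update as the selection changes. A multi-select SelectionMode is also needed, since the default only allows one selected item:

        <ListBox x:Name="lbDocuments" SelectionMode="Extended" ... />
        <ItemsControl x:Name="DocumentViewer"
                      ItemsSource="{Binding ElementName=lbDocuments, Path=SelectedItems}">
            <ItemsControl.ItemTemplate>
                <DataTemplate>
                    <Grid Background="Cyan">
                        <!-- each item is now a document object, so Url resolves -->
                        <l:WPFPreviewHandler Content="{Binding Url}"/>
                    </Grid>
                </DataTemplate>
            </ItemsControl.ItemTemplate>
        </ItemsControl>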

    Read the article

  • Reporting System architecture for better performance

    - by pauloya
    Hi, we have a product that runs SQL Server Express 2005 and uses mainly ASP.NET. The database has around 200 tables, with a few (4 or 5) that can grow from 300 to 5000 rows per day and keep a history of 5 years, so they can grow to 10 million rows. We have built a reporting platform that allows customers to build reports based on templates, fields and filters.

    We have faced performance problems almost since the beginning. We try to keep report display under 10 seconds, but some reports take up to 25 seconds (especially for customers with a long history). We keep checking indexes and trying to improve the queries, but we get the feeling that there's only so much we can do. Of course, the fact that the queries are generated dynamically doesn't help with the optimization. We also added a few tables that keep redundant data, but then we have the added problem of keeping this data up to date, and SQL Express has a limit on database size.

    We are now at the point where we have to decide whether to give up real-time reports, or maybe cut the history, in order to get better performance. I would like to ask what the recommended approach is for this kind of system. Also, should we start looking at third-party tools/platforms? I know OLAP can be an option, but can we make it work on SQL Server Express, or at least with a license that is cheap enough to distribute to thousands of deployments? Thanks
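
    In case it helps frame the trade-off: the usual middle ground between real-time reports and cut-down history is a pre-aggregated summary table refreshed on a schedule, so report queries scan thousands of rows instead of millions. A sketch follows (table and column names are invented; note that Express has no SQL Server Agent, so the refresh would have to run from the application or from Windows Task Scheduler):

        CREATE TABLE ReportDailySummary (
            SummaryDay  datetime      NOT NULL,  -- midnight of the day being summarised
            CustomerId  int           NOT NULL,
            EventCount  int           NOT NULL,
            TotalAmount decimal(18,2) NOT NULL,
            PRIMARY KEY (SummaryDay, CustomerId)
        );

        -- incremental refresh: fold in rows added since the last run
        -- (a real job would upsert or delete/re-insert the affected days)
        INSERT INTO ReportDailySummary (SummaryDay, CustomerId, EventCount, TotalAmount)
        SELECT DATEADD(DAY, DATEDIFF(DAY, 0, EventDate), 0),  -- strip time (SQL 2005 idiom)
               CustomerId, COUNT(*), SUM(Amount)
        FROM BigHistoryTable
        WHERE EventDate >= @LastRefreshTime
        GROUP BY DATEADD(DAY, DATEDIFF(DAY, 0, EventDate), 0), CustomerId;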

    Read the article

  • SQL server deadlock between INSERT and SELECT statement

    - by dtroy
    Hi! I've got a problem with multiple deadlocks on SQL Server 2005. This one is between an INSERT and a SELECT statement. There are two tables, Table1 and Table2. Table2 has Table1's PK (table1_id) as a foreign key, and the index on table1_id is clustered. The INSERT inserts a single row into Table2 at a time. The SELECT joins the two tables (it's a long query which might take up to 12 seconds to run). According to my understanding (and experiments), the INSERT should acquire an IS lock on Table1 to check referential integrity, which should not cause a deadlock. But in this case it acquired an IX page lock. The deadlock report:

        <deadlock-list>
         <deadlock victim="process968898">
          <process-list>
           <process id="process8db1f8" taskpriority="0" logused="2424" waitresource="OBJECT: 5:789577851:0 " waittime="12390" ownerId="61831512" transactionname="user_transaction" lasttranstarted="2010-04-16T07:10:13.347" XDES="0x222a8250" lockMode="IX" schedulerid="1" kpid="3764" status="suspended" spid="52" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:13.350" lastbatchcompleted="2010-04-16T07:10:13.347" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read uncommitted (1)" xactid="61831512" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
            <executionStack>
             <frame procname="DatabaseName.dbo.prcTable2_Insert" line="18" stmtstart="576" stmtend="1148" sqlhandle="0x0300050079e62d06e9307f000b9d00000100000000000000">
              INSERT INTO dbo.Table2 ( f1, table1_id, f2 ) VALUES ( @p1, @p_DocumentVersionID, @p1 )
             </frame>
            </executionStack>
            <inputbuf>Proc [Database Id = 5 Object Id = 103671417]</inputbuf>
           </process>
           <process id="process968898" taskpriority="0" logused="0" waitresource="PAGE: 5:1:46510" waittime="7625" ownerId="61831406" transactionname="INSERT" lasttranstarted="2010-04-16T07:10:12.717" XDES="0x418ec00" lockMode="S" schedulerid="2" kpid="1724" status="suspended" spid="53" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2010-04-16T07:10:12.713" lastbatchcompleted="2010-04-16T07:10:12.713" clientapp=".Net SqlClient Data Provider" hostname="VIDEV01-B-ME" hostpid="3040" loginname="DatabaseName" isolationlevel="read committed (2)" xactid="61831406" currentdb="5" lockTimeout="4294967295" clientoption1="671088672" clientoption2="128056">
            <executionStack>
             <frame procname="DatabaseName.dbo.prcGetList" line="64" stmtstart="3548" stmtend="11570" sqlhandle="0x03000500dbcec17e8d267f000b9d00000100000000000000">
              <!-- SELECT statement with multiple joins, including both Table2 and Table1 (redacted) -->
             </frame>
            </executionStack>
            <inputbuf>Proc [Database Id = 5 Object Id = 2126630619]</inputbuf>
           </process>
          </process-list>
          <resource-list>
           <pagelock fileid="1" pageid="46510" dbid="5" objectname="DatabaseName.dbo.table1" id="lock6236bc0" mode="IX" associatedObjectId="72057594042908672">
            <owner-list><owner id="process8db1f8" mode="IX"/></owner-list>
            <waiter-list><waiter id="process968898" mode="S" requestType="wait"/></waiter-list>
           </pagelock>
           <objectlock lockPartition="0" objid="789577851" subresource="FULL" dbid="5" objectname="DatabaseName.dbo.Table2" id="lock970a240" mode="S" associatedObjectId="789577851">
            <owner-list><owner id="process968898" mode="S"/></owner-list>
            <waiter-list><waiter id="process8db1f8" mode="IX" requestType="wait"/></waiter-list>
           </objectlock>
          </resource-list>
         </deadlock>
        </deadlock-list>

    Can anyone explain why the INSERT gets the IX page lock? Am I not reading the deadlock report properly? BTW, I have not managed to reproduce this issue. Thanks!
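
    Not an explanation of the IX lock itself, but a sketch of the mitigation that is usually tried first for INSERT-versus-long-SELECT deadlocks on SQL Server 2005: turn on row versioning so the read-committed SELECT stops taking shared locks at all (it reads the last committed version instead of blocking behind writers). Note the switch needs a moment with no other active connections, and it does change read semantics:

        ALTER DATABASE DatabaseName SET READ_COMMITTED_SNAPSHOT ON;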

    Read the article

  • Where are tables in Mnesia located?

    - by Sanoj
    I am trying to compare Mnesia with more traditional databases. As I understand it, tables in Mnesia can be located as:

    - ram_copies - tables are stored in RAM only, so there is no durability as in ACID.
    - disc_copies - tables are located on disc and a copy is located in RAM, so a table cannot be bigger than the available memory?
    - disc_only_copies - tables are located on disc only, so there is no caching in memory and worse performance? And the size of the table is limited to the size of a dets file, or the table has to be fragmented.

    So if I want the performance of doing reads from RAM and the durability of writes to disc, then the size of the tables is very limited compared to a traditional RDBMS like MySQL or PostgreSQL. I know that Mnesia isn't meant to replace traditional RDBMSs, but can it be used as a big RDBMS, or do I have to look for another database? The server I will use is a VPS with a limited amount of memory, around 512MB, but I want good database performance. Are disc_copies and the other types of tables in Mnesia as limited as I have understood?
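
    Since fragmentation came up: a sketch (untested) of creating a fragmented table via mnesia_frag, which is the usual workaround for the per-file size limit of the dets files backing disc_only_copies. The record name and fragment count are invented:

        -record(big_table, {key, value}).

        create() ->
            mnesia:create_table(big_table,
                [{attributes, record_info(fields, big_table)},
                 {frag_properties, [{n_fragments, 8},
                                    {n_disc_only_copies, 1}]}]).

        %% reads and writes then go through the mnesia_frag access module,
        %% which hashes each key to the right fragment:
        read(Key) ->
            mnesia:activity(transaction,
                            fun() -> mnesia:read({big_table, Key}) end,
                            [], mnesia_frag).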

    Read the article

  • .NET Membership with Repository Pattern

    - by Zac
    My team is in the process of designing a domain model which will hide various different data sources behind a unified repository abstraction. One of the main drivers for this approach is the very high probability that these data sources will undergo significant change in the near future, and we don't want to rewrite business logic when that happens. One data source will be our membership database, which was originally implemented using the default ASP.NET Membership Provider.

    The membership provider is tied to the System.Web.Security namespace, but we have a design guideline requiring that our domain model layer not depend on System.Web (or any other implementation/environment dependency), as it will be consumed in different environments - nor do we want our websites communicating directly with databases.

    I am considering what would be a good approach to reconciling the MembershipProvider approach with our abstracted n-tier architecture. My initial feeling is that we could create a "DomainMembershipProvider" which interacts with the domain model, and then implement objects in the model which deal with the repository and handle validation/business logic. The repository would then implement data access using our (as-yet undecided) ORM/data access tool.

    Are there any glaring holes in this approach? I haven't worked closely with the MembershipProvider class, so I may well be missing something. Alternatively, is there an approach that you think will better serve the requirements I described above? Thanks in advance for your thoughts and advice. Regards, Zac
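
    Purely as a sketch of the shape described (all type and member names are guesses): the domain layer owns a persistence-ignorant interface, and the provider in the web layer is a thin adapter over it.

        // domain layer: no System.Web anywhere
        public interface IMembershipRepository
        {
            bool ValidateCredentials(string userName, string password);
            void Save(DomainUser user);
        }

        // web layer: adapts ASP.NET's provider contract to the domain
        public class DomainMembershipProvider : MembershipProvider
        {
            private IMembershipRepository _repository;

            public override void Initialize(string name, NameValueCollection config)
            {
                base.Initialize(name, config);
                // ASP.NET creates providers via a parameterless constructor, so the
                // repository has to be resolved here rather than constructor-injected.
                _repository = ServiceLocator.Resolve<IMembershipRepository>(); // hypothetical locator
            }

            public override bool ValidateUser(string username, string password)
            {
                return _repository.ValidateCredentials(username, password);
            }

            // the remaining abstract MembershipProvider members are omitted here
        }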

    Read the article

  • Django Models / SQLAlchemy are bloated! Any truly Pythonic DB models out there?

    - by Luke Stanley
    "Make things as simple as possible, but no simpler." Can we find the solution/s that fix the Python database world? from someAmazingDB import * class Task (model): title = '' isDone = False db.taskList = [] #or db.taskList = expandableTypeCollection(Task) #not sure what this syntax would be db['taskList'].append(Task(title='Beat old sql interfaces',done=False)) db.taskList.append(Task('Illustrate different syntax modes',True)) #at this point it should autosave #we should be able to reload the console and access like: >> from someAmazingDB import * >> print 'Done tasks:' >> for task in db.taskList: >> if task.done: >> print task 'Illustrate different syntax modes' I'm a fan of Python, webPy and Cherry Py, and KISS in general. We're talking automatic Python to SQL type translation or NoSQL. We don't have to totally be SQL compatible! Just a scalable subset or ignore it! Re:model changes, it's ok to ask the developer when they try to change it or have a set of sensible defaults. Here is the challenge: The above code should work with very little modification or thinking required. Why must we put up with compromise when we know better? It's 2010, we should be able to code scalable, simple databases in our sleep. If you think this is important, please upvote!

    Read the article

  • Ideal Multi-Developer LAMP Stack?

    - by devians
    I would like to build an 'ideal' LAMP development stack:

    - Dual server (virtualised, ESX): Apache/PHP on one, databases (MySQL, PgSQL, etc.) on the other.
    - User (developer) manageable mini environments, or instances.
    - Each developer instance shares the top-level config (available modules, default config, etc.).
    - A developer should have control over their Apache and PHP version for each project.
    - A developer might be able to change minor settings, e.g. magic quotes on for legacy code.
    - Each project would determine its database provider in its code.

    The idea is that it is one administrable server that I can control, providing globally configured things like APC, Memcached, XDebug, etc. Then, by moving into subsets for each project, I can allow my users to quickly control their environments for various projects. Essentially I'm proposing the typical setup of a developer running their own stack on their own machine, but centralised. In this way I'd hope to avoid problems like cross-OS code problems, database inconsistencies, slightly different installs producing bugs, and so on. I'm happy to manage this with custom builds from source, but if at all possible it would be great to have a large portion of it managed with some sort of package management. We typically use CentOS, so yum? Has anyone ever built anything like this before? Is there something turnkey that is similar to what I have described? Are there any useful guides I should be reading in order to build something like this?
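
    A sketch of what one per-developer, per-project instance could look like at the Apache level (hostnames and paths invented). One caveat: mod_php loads a single PHP version per Apache instance, so truly per-project PHP versions usually mean running PHP through FastCGI pools instead:

        <VirtualHost *:80>
            ServerName alice-legacyapp.dev.example.com
            DocumentRoot /home/alice/projects/legacyapp/public

            # per-project tweaks layered over the shared global PHP config
            php_admin_flag  magic_quotes_gpc on
            php_admin_value error_log        /home/alice/logs/legacyapp.php.log
        </VirtualHost>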

    Read the article

  • How can I build something like Amazon S3 in Perl?

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, is old, and isn't built for high loads. I am not really sure which modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

    - Easy API implementation for client-side apps (maybe REST?).
    - Centralized database server for the user DB (maybe PostgreSQL?).
    - Logging of all connections, bandwidth used - well, pretty much everything - to a centralized server (maybe PostgreSQL again?).
    - Easy server-side configuration (config file(s) stored on the servers).
    - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases).
    - Fast.
    - High uptime.
    - Low memory usage.
    - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?).
    - Maybe a cache of some sort (memcached, or Perlbal, or something else?).

    Thanks in advance
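
    On the REST point, a minimal sketch of what the entry layer could look like under PSGI/Plack (storage and authentication are stubbed out entirely; fetch_object is a hypothetical function):

        use strict;
        use warnings;
        use Plack::Request;

        my $app = sub {
            my $req = Plack::Request->new(shift);
            # S3-style paths: /<bucket>/<key>
            my ($bucket, $key) = $req->path_info =~ m{^/([^/]+)/(.+)$}
                or return [404, ['Content-Type' => 'text/plain'], ["not found\n"]];
            if ($req->method eq 'GET') {
                my $data = fetch_object($bucket, $key);  # hypothetical storage lookup
                return [200, ['Content-Type' => 'application/octet-stream'], [$data]];
            }
            return [405, ['Content-Type' => 'text/plain'], ["method not allowed\n"]];
        };

    Any PSGI server (Starman, for instance) can run this, which keeps the load-balancer choice independent of the application code.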

    Read the article

  • invalid context 0x0 under iOS 7.0 and system degradation

    - by Alex
    I've read as many search results as I could find on this dreaded problem; unfortunately, each one seems to focus on a specific function call. My problem is that I get the same error from multiple functions, which I am guessing are being called back from functions that I use. To make matters worse, the actual code is within a custom private framework which is imported into another project, so debugging isn't as simple. Can anyone point me in the right direction? I have a feeling I'm calling certain methods wrongly or with a bad context, but the output from Xcode is not very helpful at this point:

        CGContextSetFillColorWithColor: invalid context 0x0. This is a serious error.
        This application, or a library it uses, is using an invalid context and is
        thereby contributing to an overall degradation of system stability and
        reliability. This notice is a courtesy: please fix this problem. It will
        become a fatal error in an upcoming update.

    The identical message is logged for CGContextSetStrokeColorWithColor, CGContextSaveGState, CGContextSetFlatness, CGContextAddPath, CGContextDrawPath, CGContextRestoreGState and CGContextGetBlendMode. These errors may occur when a custom view, or one of its inherited classes, is presented, at which point they spawn multiple times until the keyboard won't provide any input. Touch events are still registered, but the system slows down, and eventually this may lead to unallocated-object errors.
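
    A sketch of the usual first check, since every call in that list operates on the current context: make sure the drawing only happens where a context actually exists. Calling these outside drawRect: (or outside a UIGraphicsBeginImageContext/UIGraphicsEndImageContext pair) is exactly what produces this NULL-context warning:

        // inside the custom UIView subclass
        - (void)drawRect:(CGRect)rect {
            CGContextRef ctx = UIGraphicsGetCurrentContext();
            if (ctx == NULL) {
                return; // bail out instead of drawing into a nil context
            }
            CGContextSaveGState(ctx);
            CGContextSetFillColorWithColor(ctx, [UIColor redColor].CGColor);
            CGContextFillRect(ctx, rect);
            CGContextRestoreGState(ctx);
        }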

    Read the article

  • Java errors on Lotus Domino Designer Client 8.5.1

    - by ajcooper
    I have a clean install of Lotus Notes 8.5.1 (now with FP3) and I'm getting the following errors in Designer. This is with a new database containing a couple of forms and views, and I'm finding it is typical across all databases. Is there something I need to install/configure etc.? I'm not new to Notes, but I'm new to 8.5. Thanks, Aidan

    Every error is of type "Plug-in Problem" against plugin.xml in TestAgent.nsf, of the form "Cannot resolve plug-in: <id>", for the following plug-ins (plugin.xml line numbers in parentheses):

        org.eclipse.core.runtime (9), org.eclipse.ui (8), com.ibm.commons (10),
        com.ibm.commons.vfs (12), com.ibm.commons.xml (11),
        com.ibm.designer.runtime (15), com.ibm.designer.runtime.directory (14),
        com.ibm.jscript (13), com.ibm.notes.java.api (20),
        com.ibm.xsp.core (16, 21), com.ibm.xsp.designer (18, 22),
        com.ibm.xsp.domino (19, 23), com.ibm.xsp.extsn (17, 24),
        com.ibm.xsp.rcp (25)

    Read the article

  • Database migration pattern for Java?

    - by Eno
    I'm working on some database migration code in Java. I'm also using a factory pattern so I can use different kinds of databases, and each kind of database I use implements a common interface. What I would like is a migration check that is internal to the class and runs some database schema update code automatically. The actual update is pretty straightforward (I check the schema version in a table and compare it against a constant in my app to decide whether to migrate, and between which versions of the schema).

    To make this automatic, I was thinking the test should live inside (or be called from) the constructor. OK, fair enough, that's simple. My problem is that I don't want the test to run every single time I instantiate a database object (it runs a query, so having it run on every construction is not efficient). So maybe it should be a static class method? I guess my question is: what is a good design pattern for this type of problem? There ought to be a clean way to ensure the migration test runs only once OR is super-efficient.
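
    One shape this often takes - a sketch, with invented names, of a once-per-process guard that each concrete database class calls from its constructor:

        import java.util.concurrent.atomic.AtomicBoolean;

        public final class MigrationGuard {
            private static final AtomicBoolean CHECKED = new AtomicBoolean(false);

            // cheap after the first call: the atomic flag means at most one
            // schema-version query per JVM, even with concurrent constructors
            public static void ensureMigrated(Database db) {
                if (CHECKED.compareAndSet(false, true)) {
                    int current = db.readSchemaVersion();          // hypothetical accessor
                    if (current < Database.EXPECTED_VERSION) {     // the app's constant
                        db.migrate(current, Database.EXPECTED_VERSION);
                    }
                }
            }
        }

    The single static flag assumes one logical database per process; with several, a map keyed by connection URL would replace the boolean.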

    Read the article

  • Linq to SQL Problem System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager

    - by luckyluke
    I have a really tricky thing going on here. My project has around 100 tables and they are all mapped by LINQ. Everything works fine in the dev and test environments, which are MS Win 2008 R2 servers with SQL 2008 SP1 databases; IIS and SQL are on different machines. On the production environment, which is an MS Win 2003 x64 web farm plus geo-clustered SQL 2008, it does NOT work. All I get is this exception:

        System.IndexOutOfRangeException: Index was outside the bounds of the array.
        at System.Data.Linq.IdentityManager.StandardIdentityManager.MultiKeyManager`3.TryCreateKeyFromValues(Object[] values, MultiKey& k)
        at System.Data.Linq.IdentityManager.StandardIdentityManager.IdentityCache`2.Find(Object[] keyValues)
        at System.Data.Linq.ChangeProcessor.GetOtherItem(MetaAssociation assoc, Object instance)
        at System.Data.Linq.ChangeProcessor.BuildEdgeMaps()
        at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode)
        at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode)
        at ERS.IIMP.Services.ExposuresSrv.Update(Int32 ExpID, Int32 AssID) Services\ExposuresSrv.cs

    My question is: what the hell? They have precisely the same DBML, the database has exactly the same structure (when I take the DB from prod to test and mount it, everything works just great), and the binaries on the web server are the same. I seriously do not know what to do. Has anyone found that LINQ works in one environment and not in another? I am really lost here. I really hope you can help me :)

    Read the article

  • h2 (embedded mode ) database files problem

    - by aeter
    There is an H2 database file in my src directory (Java, Eclipse): h2test.db. The problem: starting h2.jar from the command line (and thus the H2 browser interface on port 8082), I created two tables, 'test1' and 'test2', in h2test.db and put some data in them. When trying to access them from Java code (JDBC), it throws a "table not found" exception, and a "show tables" from the Java code shows a result set with 0 rows. Also, when I create a new table ('newtest') from the Java code (CREATE TABLE ... etc.), I cannot see it when starting the h2.jar browser interface afterwards; just the other two tables ('test1' and 'test2') are shown (but the newly created table 'newtest' remains accessible from the Java code). I'm inexperienced with embedded databases; I believe I'm doing something fundamentally wrong here. My assumption is that I'm accessing the same file - once from the Java app, and once from the H2 console/browser interface. I cannot seem to understand it; what am I doing wrong here?
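
    A sketch of the usual suspect (untested, path invented): a relative JDBC URL like jdbc:h2:h2test is resolved against the current working directory, which differs between the H2 console launch and an Eclipse launch, so the two processes can silently create and open two different database files. Pinning an absolute path on both sides - and optionally adding AUTO_SERVER=TRUE so console and app may attach at the same time - makes them meet in one file:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;

        public class H2Check {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:h2:/home/me/workspace/project/src/h2test;AUTO_SERVER=TRUE",
                        "sa", "");
                ResultSet rs = conn.createStatement().executeQuery("SHOW TABLES");
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
                conn.close();
            }
        }

    (Depending on the H2 version, the file on disk for that URL is named h2test.h2.db or h2test.data.db, which is worth checking against the h2test.db file sitting in src.)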

    Read the article

  • Several Small, Specific, MySQL Query Cache Questions

    - by Robbie
    I've looked all over the web and through the questions asked here about MySQL caching, and most of them seem very non-specific about a couple of questions that I have regarding performance and the MySQL query cache. Specifically, I want answers to these questions; assume for all of them that I have the query cache enabled and that it is of type 2, or "DEMAND":

    1. Is the query cache per table, per database, or per server? Meaning, if I have the cache size set to X and have T tables and D databases, will I be caching TX, DX, or X amount of data?
    2. If I have table T1, which I regularly use the SQL_CACHE hint on for SELECT queries, and table T2, which I never do, when I query T2 with a SELECT query will it check the cache first before performing the query? (Note: I don't want to use SQL_NO_CACHE for all T2 queries.)
    3. Assume the same situation as in question 2. If I alter (INSERT, DELETE) table T2, will any processing be done on the cache?
    4. For answers to 2 and 3: is this processing time negligible if T2 is constantly being altered and is the target of the majority of my SELECT queries?
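
    While waiting for specifics, the server itself answers part of question 1 directly - query_cache_size is a single global, server-wide buffer, not per table or per database - and the status counters make the rest observable. A few statements worth running (table names invented):

        SHOW VARIABLES LIKE 'query_cache%';  -- confirms query_cache_type=DEMAND and the one global size
        SHOW STATUS LIKE 'Qcache%';          -- hits, inserts, lowmem_prunes, free memory

        SELECT SQL_CACHE * FROM t1 WHERE id = 1;  -- candidate for the cache
        SELECT * FROM t2 WHERE id = 1;            -- with DEMAND, skipped without needing SQL_NO_CACHE

    Comparing Qcache_hits and Qcache_inserts before and after a test run shows whether the T2 queries touch the cache at all.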

    Read the article

  • SQL Server 2008 alternative for SQL-DMO

    - by alexdelpiero
    Hi! I previously used SQL-DMO to automatically generate scripts from the database. Now I have upgraded to SQL Server 2008 and I don't want to use this feature anymore, since Microsoft will be dropping it. Is there any other alternative I can use to connect to a server and generate scripts automatically from a database? Any answer is welcome. Thanks in advance. This is the procedure I was previously using:

        CREATE PROC GenerateSP (
            @server varchar(30) = null,
            @uname varchar(30) = null,
            @pwd varchar(30) = null,
            @dbname varchar(30) = null,
            @filename varchar(200) = 'c:\script.sql'
        )
        AS
        DECLARE @object int
        DECLARE @hr int
        DECLARE @return varchar(200)
        DECLARE @exec_str varchar(2000)
        DECLARE @spname sysname
        SET NOCOUNT ON

        -- Sets the server to the local server
        IF @server is NULL SELECT @server = @@servername
        -- Sets the database to the current database
        IF @dbname is NULL SELECT @dbname = db_name()
        -- Sets the username to the current user name
        IF @uname is NULL SELECT @uname = SYSTEM_USER

        -- Create an object that points to the SQL Server
        EXEC @hr = sp_OACreate 'SQLDMO.SQLServer', @object OUT
        IF @hr <> 0 BEGIN PRINT 'error create SQLOLE.SQLServer' RETURN END

        -- Connect to the SQL Server
        IF @pwd is NULL
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname
            IF @hr <> 0 BEGIN PRINT 'error Connect' RETURN END
        END
        ELSE
        BEGIN
            EXEC @hr = sp_OAMethod @object, 'Connect', NULL, @server, @uname, @pwd
            IF @hr <> 0 BEGIN PRINT 'error Connect' RETURN END
        END

        -- Verify the connection
        EXEC @hr = sp_OAMethod @object, 'VerifyConnection', @return OUT
        IF @hr <> 0 BEGIN PRINT 'error VerifyConnection' RETURN END

        SET @exec_str = 'DECLARE script_cursor CURSOR FOR SELECT name FROM ' + @dbname
                      + '..sysobjects WHERE type = ''P'' ORDER BY Name'
        EXEC (@exec_str)
        OPEN script_cursor
        FETCH NEXT FROM script_cursor INTO @spname
        WHILE (@@fetch_status <> -1)
        BEGIN
            SET @exec_str = 'Databases("' + @dbname + '").StoredProcedures("'
                          + RTRIM(UPPER(@spname)) + '").Script(74077,"' + @filename + '")'
            EXEC @hr = sp_OAMethod @object, @exec_str, @return OUT
            IF @hr <> 0 BEGIN PRINT 'error Script' RETURN END
            FETCH NEXT FROM script_cursor INTO @spname
        END
        CLOSE script_cursor
        DEALLOCATE script_cursor

        -- Destroy the object
        EXEC @hr = sp_OADestroy @object
        IF @hr <> 0 BEGIN PRINT 'error destroy object' RETURN END
        GO
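
    The replacement Microsoft ships for DMO is SMO (SQL Server Management Objects). A sketch of the same "script every stored procedure to a file" job in C# against SMO (server, database and output path are examples; the project needs references to the Microsoft.SqlServer.Smo and Microsoft.SqlServer.ConnectionInfo assemblies):

        using System.IO;
        using Microsoft.SqlServer.Management.Smo;

        class ScriptAllProcs
        {
            static void Main()
            {
                Server server = new Server("MYSERVER");
                Database db = server.Databases["MyDatabase"];
                using (StreamWriter writer = new StreamWriter(@"c:\script.sql"))
                {
                    foreach (StoredProcedure sp in db.StoredProcedures)
                    {
                        if (sp.IsSystemObject) continue;       // skip built-ins
                        foreach (string line in sp.Script())   // returns the CREATE PROC text
                            writer.WriteLine(line);
                        writer.WriteLine("GO");
                    }
                }
            }
        }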

    Read the article

  • What type of webapp is the sweet spot for Scala's Lift framework?

    - by ajay
    What kinds of applications are the sweet spot for Scala's Lift web framework? My requirements:

    - Ease of development and maintainability.
    - Ready for production purposes, i.e. a good, active online community, regular patches and updates for security and performance fixes, etc.
    - The framework should survive a few years. I don't want to write an app in a framework for which no updates/patches are available after a year.
    - Good UI templating engines.
    - Interoperation with Java (Scala satisfies this already; just mentioning it here for completeness).
    - Good component-oriented development; the time required to develop should be proportional to the complexity of the web application.
    - Should not be totally configuration based. I hate it when code gets automatically generated for me and does all sorts of magic under the hood - that is a debugging nightmare.
    - The amount of Lift knowledge required to develop a webapp should be proportional to the complexity of the web application, i.e. I shouldn't have to spend 10+ hours learning Lift just to develop a simple TODO application. (I have knowledge of databases and Scala.)

    Does Lift satisfy these requirements?

    Read the article

  • The "first past the post election" query problem

    - by MPelletier
    This problem may seem like school work, but it isn't; at best it is self-imposed school work. I encourage any teachers to take it as an example if they wish. "First past the post" elections are single-round, meaning that whoever gets the most votes wins - no second rounds. Suppose a table for an election:

        CREATE TABLE ElectionResults (
            DistrictHnd   INTEGER      NOT NULL,
            PartyHnd      INTEGER      NOT NULL,
            CandidateName VARCHAR(100) NOT NULL,
            TotalVotes    INTEGER      NOT NULL,
            PRIMARY KEY (DistrictHnd, PartyHnd));

    The table has two foreign keys: DistrictHnd points to a District table (listing all the electoral districts) and PartyHnd points to a Party table (listing all the political parties). I won't bother with the other tables here; joining them is trivial. This is just a wee bit of context.

    The question: what SQL query will return a table listing the DistrictHnd, PartyHnd, CandidateName and TotalVotes of the winner (max votes) in each district? This does not suppose any particular database system. If you wish to stick to a particular implementation of SQL, go the way of SQLite and MySQL. If you can devise a better schema (or an easier one), that is acceptable too. Criteria: simplicity, portability to other databases.
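
    A sketch of the classic "greatest per group" answer, portable across SQLite, MySQL and most other engines (one caveat: a district with a tie for first place returns one row per tied candidate):

        SELECT r.DistrictHnd, r.PartyHnd, r.CandidateName, r.TotalVotes
        FROM ElectionResults r
        JOIN (SELECT DistrictHnd, MAX(TotalVotes) AS MaxVotes
              FROM ElectionResults
              GROUP BY DistrictHnd) m
          ON m.DistrictHnd = r.DistrictHnd
         AND m.MaxVotes    = r.TotalVotes;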

    Read the article

  • Setting up a multi-site CMS, collecting thoughts about the DB schema

    - by Ben Fransen
    Hello all, I'm collecting some thoughts about creating a multi-site CMS. In my opinion there are two major approaches:

    1. All data is stored in one database, giving me the advantage of a single point of update.
    2. Separate databases, so each client has its own database, giving me the advantage of measuring bandwidth.

    Option 1 has the disadvantage of making bandwidth hard to measure per client, while option 2 has the disadvantage of losing the single-point-of-update structure. Are there any generic approaches to creating a sort of update system, so that my clients can download a small package (maybe a zip with a conf file to tell the update script where to put all the files and how to extend the database)? Do you have any thoughts about the best solution for a situation like this? I have my own webserver, full access to all resources, and I'm developing in PHP with MySQL as the DBMS. I hope to hear from you, and I surely appreciate any effort you make to help me further! Greets from Holland, Ben Fransen
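
    A sketch of how the per-client variant can keep a single point of update anyway (PHP/PDO; all table and path names invented): the updater walks a master list of client databases and applies whatever numbered .sql steps each one is missing, tracked in a small version table per client database:

        <?php
        $clients = $masterDb->query('SELECT db_name, db_user, db_pass FROM clients');
        foreach ($clients as $client) {
            $db = new PDO("mysql:host=localhost;dbname={$client['db_name']}",
                          $client['db_user'], $client['db_pass']);
            $applied = (int)$db->query('SELECT COALESCE(MAX(version), 0) FROM schema_version')
                               ->fetchColumn();
            $steps = glob(__DIR__ . '/updates/*.sql');  // e.g. updates/7.sql
            natsort($steps);                            // numeric order, not alphabetical
            foreach ($steps as $file) {
                $version = (int)basename($file, '.sql');
                if ($version > $applied) {
                    $db->exec(file_get_contents($file));
                    $db->exec("INSERT INTO schema_version (version) VALUES ($version)");
                }
            }
        }

    The downloadable client package then reduces to the updates directory plus this script and a conf file of connection details.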

    Read the article

  • Parsing a blackberry .ipd file

    - by galaxywatcher
    I recently lost my Blackberry. When I discovered it was gone, very shortly afterwards, and called it, the SIM card had already been removed. I ain't seeing that Blackberry again. OK. I am out $300, but at least my data is backed up. I had an older working Blackberry, fortunately, so I got a new SIM card and proceeded to restore my data using Blackberry Desktop Manager: 7000+ emails, hundreds of AutoText entries, SMS messages, calendar events, all backing up. Looking good.

    Lo and behold, my address book contacts refuse to back up. I try advanced, and it is greyed out as an option to restore. Far more frustrating than losing my Blackberry in the first place is wrangling with software that defies human logic. OK, now I guess I will have to enter all 327 names by hand - that is, if I can read the .ipd file. I have tried the free version of ABC Amber BlackBerry Editor, but when I open the .ipd file, the contacts just do not show up. I am beginning to feel like the gods are conspiring against me. Then I found this: http://jabide.com/2009/03/parse-blackberry-ipd-files/ - the author posted a Perl script that claims to extract the files. I copied and pasted the code, and it did list all the different databases in my .ipd file; I was elated that a cool solution like this had been published. I followed the instructions, but garbled data with some discernible ASCII was sent to standard output, rather than the .csv file he said it would produce. This is enough to make a grown man cry. Does anyone out there have a solution to extract my address book contacts from an .ipd file?

    Read the article

  • Managing lots of callback recursion in Nodejs

    - by Maciek
    In Node.js there are virtually no blocking I/O operations. This means that almost all Node.js I/O code involves many callbacks; this applies to reading and writing to/from databases, files, processes, etc. A typical example is the following:

        var useFile = function(filename, callback) {
            posix.stat(filename).addCallback(function (stats) {
                posix.open(filename, process.O_RDONLY, 0666).addCallback(function (fd) {
                    posix.read(fd, stats.size, 0).addCallback(function (contents) {
                        callback(contents);
                    });
                });
            });
        };
        ...
        useFile("test.data", function(data) {
            // use data..
        });

    I am anticipating writing code that will make many I/O operations, so I expect to be writing many callbacks. I'm quite comfortable with using callbacks, but I'm worried about all the recursion. Am I in danger of running into too much recursion and blowing through a stack somewhere? If I make thousands of individual writes to my key-value store with thousands of callbacks, will my program eventually crash? Am I misunderstanding or underestimating the impact? If not, is there a way to get around this while still using Node.js' callback coding style?
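
    One observation, plus a sketch: nested callbacks are not recursion in the stack sense. useFile returns almost immediately; each callback is later invoked from the event loop on a fresh, shallow stack, so thousands of sequential I/O callbacks do not accumulate stack frames (the cost is heap-held closures, not stack depth). What does grow is the source-level nesting, which named functions flatten; a rough equivalent using the modern fs API:

        var fs = require('fs');

        function useFile(filename, callback) {
            fs.stat(filename, onStat);

            function onStat(err, stats) {
                if (err) return callback(err);
                fs.open(filename, 'r', function (err, fd) {
                    if (err) return callback(err);
                    readAll(fd, stats.size);
                });
            }
            function readAll(fd, size) {
                var buf = Buffer.alloc(size);
                fs.read(fd, buf, 0, size, 0, function (err) {
                    fs.close(fd, function () {});
                    callback(err, buf);  // runs on its own stack, via the event loop
                });
            }
        }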

    Read the article

  • Handling dependencies with IoC that change within a single function call

    - by Jess
    We are trying to figure out how to set up dependency injection for situations where service classes can have different dependencies based on how they are used. In our specific case, we have a web app where 95% of the time the connection string is the same for the entire request (this is a web application), but sometimes it can change. For example, we might have two classes with the following dependencies (simplified version - the service actually has four dependencies):

        public LoginService(IUserRepository userRep) { }

        public UserRepository(IContext dbContext) { }

    In our IoC container, most of our dependencies are auto-wired, except the context, for which I have something like this (not actual code, it's from memory... this is StructureMap):

        x.ForRequestedType().Use()
            .WithCtorArg("connectionString").EqualTo(Session["ConnString"]);

    For 95% of our web application, this works perfectly. However, we have some admin-type functions that must operate across thousands of databases (one per client). Basically, we'd want to do this:

        public CreateUserList(IList<string> connStrings)
        {
            foreach (connString in connStrings)
            {
                //first create dependency graph using new connection string
                ????
                //then call service method on new database
                _loginService.GetReportDataForAllUsers();
            }
        }

    My question is: how do we create that new dependency graph each time through the loop, while maintaining something that can easily be tested?
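
    One commonly suggested shape (a sketch with invented names, not StructureMap-specific): inject an abstract factory that builds the graph for an explicit connection string, so the admin loop creates a service per database while ordinary requests keep resolving the session-bound graph. The factory interface is trivially fakeable in tests:

        public interface ILoginServiceFactory
        {
            ILoginService Create(string connectionString);
        }

        public class LoginServiceFactory : ILoginServiceFactory
        {
            public ILoginService Create(string connectionString)
            {
                IContext context = new Context(connectionString);   // invented concrete types
                IUserRepository users = new UserRepository(context);
                return new LoginService(users);
            }
        }

        public void CreateUserList(IList<string> connStrings)
        {
            foreach (string connString in connStrings)
            {
                ILoginService service = _loginServiceFactory.Create(connString);
                service.GetReportDataForAllUsers();
            }
        }

    Containers can also play the factory role (StructureMap, for instance, supports child/nested containers whose overrides, such as a different IContext, apply only within that scope), but the hand-rolled factory keeps the intent visible.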

    Read the article

  • A business Case for Enterprise Python

    - by Srirangan
    Hi all, this will not be a "programming" question but more of a technology/platform-related question. I'm trying to figure out whether Python can be a suitable Java alternative for enterprise/web applications.

    - Which are the ideal cases where you would prefer Python to Java?
    - How would a typical Python web application (databases/sessions/concurrency) perform compared to a typical Java application?
    - How do specific Python frameworks square up against Java-based frameworks (Spring, Seam, Grails, etc.)?
    - For businesses, is switching from a Java infrastructure to a Python infrastructure too hard/expensive/resource-intensive/not viable?
    - Also, shed some light on the business case for providing a Python + Google App Engine based solution to the end customer. Will it be cost-effective in a typical scenario?

    Sorry if I am asking too wide a question; I would have liked to keep it specific, but I need your help to evaluate Python as a whole from the perspectives of the programmers, the service-providing company, and the end business customer. For an SME, a Python/Google App Engine based technology stack is a clearly scalable and affordable platform, but what about a large MNC that already has a lot invested in Java? Thank you so much. I am researching this myself and will gladly share my conclusions here! Thank you, Srirangan

    Read the article

  • Beginner SQL question: querying gold and silver tag badges in Stack Exchange Data Explorer

    - by polygenelubricants
    I'm using the Stack Exchange Data Explorer to learn SQL, but I think the fundamentals of the question are applicable to other databases. I'm trying to query the Badges table, which according to Stexdex (that's what I'm going to call it from now on) has the following schema:

        Badges
            Id
            UserId
            Name
            Date

    This works well for badges like [Epic] and [Legendary], which have unique names, but the silver and gold tag-specific badges seem to be mixed in together under the same exact name. Here's an example query I wrote for the [mysql] tag:

        SELECT UserId as [User Link], Date
        FROM Badges
        WHERE Name = 'mysql'
        ORDER BY Date ASC

    The (slightly annotated) output, as seen on Stexdex (all rows are for silver except where noted):

        User Link        Date
        ---------------  -------------------
        Bill Karwin      2009-02-20 11:00:25
        Quassnoi         2009-06-01 10:00:16
        Greg             2009-10-22 10:00:25
        Quassnoi         2009-10-31 10:00:24  -- for gold
        Bill Karwin      2009-11-23 11:00:30  -- for gold
        cletus           2010-01-01 11:00:23
        OMG Ponies       2010-01-03 11:00:48
        Pascal MARTIN    2010-02-17 11:00:29
        Mark Byers       2010-04-07 10:00:35
        Daniel Vassallo  2010-05-14 10:00:38

    This is consistent with the current list of silver and gold earners at the moment of this writing, but to speak in more timeless terms: as of the end of May 2010, only 2 users had earned the gold [mysql] tag, Quassnoi and Bill Karwin, as evidenced in the above result by their names being the only ones that appear twice. So this is the way I understand it: the first time an Id appears (in chronological order) is for the silver badge; the second time is for the gold.

    Now, the above result mixes the silver and gold entries together. My questions are:

    1. Is this a typical design, or are there much friendlier schemas/normalizations/whatever you call it?
    2. In the current design, how would you query the silver and gold badges separately? GROUP BY Id and pick the min/max or first/second by Date somehow?
    3. How can you write a query that lists all the silver badges first, then all the gold badges? Imagine also that the "real" query may be more complicated, i.e. not just listing by date. How would you write it so that it doesn't have too much repetition between the silver and gold subqueries? Is it perhaps more typical to do two totally separate queries instead?
    4. What is this idiom called? A row "partitioning" query to put them into "buckets" or something?
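
    A sketch for questions 2-4 in SQL Server syntax (which is what Data Explorer runs on): ROW_NUMBER() OVER (PARTITION BY ...) - "partition" is indeed the word this idiom uses - numbers each user's rows by date, after which 1 means silver and 2 means gold, with no repetition between the two cases:

        WITH Ranked AS (
            SELECT UserId, Date,
                   ROW_NUMBER() OVER (PARTITION BY UserId ORDER BY Date) AS TimesEarned
            FROM Badges
            WHERE Name = 'mysql'
        )
        SELECT UserId AS [User Link],
               CASE TimesEarned WHEN 1 THEN 'silver' ELSE 'gold' END AS Badge,
               Date
        FROM Ranked
        WHERE TimesEarned <= 2
        ORDER BY TimesEarned, Date;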

    Read the article
