Search Results

Search found 33242 results on 1330 pages for 'database optimization'.

  • Use an external data source with NUnit's TestCaseAttribute

    - by Hamman359
    Is it possible to get the values for a TestCaseAttribute from an external data source such as an Excel spreadsheet, CSV file or database? I.e., have a .csv file with one row of data per test case and pass that data to NUnit one row at a time. Here's the specific situation I'd like to use this for. I'm currently merging some features from one system into another. This is pretty much just a copy-and-paste process from the old system into the new one. Unfortunately, the code being moved not only does not have any tests, but is not written in a testable manner (i.e., tightly coupled with the database and other code). Taking the time to make the code testable isn't really possible since it's a big mess, I'm on a tight schedule, and the entire feature is scheduled to be re-written from the ground up in the next 6-9 months. However, since I don't like the idea of not having any tests around the code, I'm going to create some simple Selenium tests using WebDriver to test the page through the UI. While this is not ideal, it's better than nothing. The page in question has about 10 input values and about 20 values that I need to assert against after the calculations are completed, with about 30 valid combinations of values that I'd like to test. I already have the data in a spreadsheet, so it'd be nice to simply be able to pull that out rather than having to re-type it all in Visual Studio.
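
    A hedged sketch of one way to do this with NUnit 2.5's [TestCaseSource], a sibling of [TestCase] that pulls cases from a named member at runtime; the CSV path and field layout below are placeholder assumptions, not anything from the question:

    ```csharp
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using NUnit.Framework;

    [TestFixture]
    public class CalculationPageTests
    {
        // Yields one TestCaseData per CSV row; the header row is skipped.
        public static IEnumerable<TestCaseData> CsvCases()
        {
            return File.ReadLines(@"TestData\cases.csv")
                       .Skip(1)
                       .Select(line => new TestCaseData((object)line.Split(',')));
        }

        [Test, TestCaseSource("CsvCases")]
        public void Calculations_MatchSpreadsheet(string[] fields)
        {
            // Hypothetical layout: fields[0..9] are the page inputs,
            // fields[10..29] the expected outputs; drive the page with
            // Selenium WebDriver here and assert each output value.
        }
    }
    ```

    Exporting the spreadsheet to CSV keeps the data where it already lives instead of re-typing it as attribute arguments.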

  • Out Of Memory error while executing mysqldump

    - by Nishaz Salam
    Hi, I am getting the following error when trying to back up a database using mysqldump from the command prompt:

    ```
    C:\Documents and Settings\bob>C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump --quick --add-locks --lock-tables -c --default-character-set=utf8 --skip-opt -pxxxx -u adobe -r C:\Adobe\LiveCycle8.2\configurationManager\working\upgrade\mysql\adobe.sql -B adobe --port=3306 --host=localhost
    mysqldump: Out of memory (Needed 10380928 bytes)
    mysqldump: Got error: 2008: MySQL client ran out of memory when retrieving data from server
    ```

    As you can see, I am using --quick and --skip-opt too; I cannot figure out what is causing the issue. The server log has the following messages:

    ```
    100420 15:16:39 InnoDB: Error: cannot allocate 4814100 bytes of memory for
    InnoDB: a BLOB with malloc! Total allocated memory
    InnoDB: by InnoDB 33427880 bytes. Operating system errno: 2
    InnoDB: Check if you should increase the swap file or
    InnoDB: ulimits of your operating system.
    InnoDB: On FreeBSD check you have compiled the OS with
    InnoDB: a big enough maximum process size.
    100420 15:16:40 InnoDB: Warning: could not allocate 3814100 + 1000000 bytes
    InnoDB: to retrieve a big column. Table name adobe/tb_form_data
    ```

    Any help in this regard is highly appreciated. P.S.: The backup works fine without any issues when I use MySQL Administrator, but since an external app (the Adobe LiveCycle installer) uses the above command to back up the database during install, I need to get this working. Thanks, Nishaz Salam
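
    One detail that may be worth checking, offered as a guess rather than a confirmed fix: mysqldump processes its options left to right, and --skip-opt switches the --opt group (which includes --quick) back off. Since --quick appears before --skip-opt in the command above, the client may be buffering whole result sets in memory after all, which would explain the 2008 error on a table full of BLOBs. Re-running with --quick after --skip-opt would rule that out:

    ```
    C:\Adobe\LiveCycle8.2\mysql\bin\mysqldump --add-locks --lock-tables -c ^
      --default-character-set=utf8 --skip-opt --quick -pxxxx -u adobe ^
      -r C:\Adobe\LiveCycle8.2\configurationManager\working\upgrade\mysql\adobe.sql ^
      -B adobe --port=3306 --host=localhost
    ```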

  • Approach for replacing forms authentication in .NET application

    - by Ash Machine
    My question is about an approach, and I am looking for tips or links to help me develop a solution. I have a .NET 4.0 Web Forms application that works with Forms authentication using the aspnetdb SQL database of users and passwords. A new feature for the application is a new authentication mechanism using single sign-on to allow access for thousands of new users. Essentially, when the user logs in through the new single-sign-on method, I will be able to identify them as a legitimate user with a role. So I will have something like HttpContext.Current.Session["email_of_authenticated_user"] (their identity) and HttpContext.Current.Session["role_of_authenticated_user"] (their role). Importantly, I don't necessarily want to maintain these users and roles redundantly in the aspnetdb database, which will be retired, but I do want to use the session objects above to allow the user to pass through the application as if they were passing through with Forms authentication. I don't think CustomRoleProviders or CustomMemberProviders are helpful, since they do not allow for creating session-level users. So my question is how to use the session-level user and role that I do have to "mimic" all the Forms authentication goodness, like enforcing:

    ```csharp
    [System.Security.Permissions.PrincipalPermission(System.Security.Permissions.SecurityAction.Demand, Role = "Student")]
    ```

    or

    ```xml
    <authorization>
      <allow users="wilma, barney" />
    </authorization>
    ```

    Thanks for any pointers.
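
    One possible direction, sketched under the assumption that the SSO handshake really does leave the identity and role in session as described: build an IPrincipal from those session values on each request, so the declarative checks above keep working without aspnetdb. A hypothetical Global.asax wiring; PostAcquireRequestState is used because session state is not yet available in the earlier authentication events:

    ```csharp
    using System;
    using System.Security.Principal;
    using System.Threading;
    using System.Web;

    // In Global.asax.cs -- a sketch, not a drop-in replacement.
    protected void Application_PostAcquireRequestState(object sender, EventArgs e)
    {
        var session = HttpContext.Current.Session;
        if (session == null) return; // e.g. static resources carry no session

        var email = session["email_of_authenticated_user"] as string;
        var role  = session["role_of_authenticated_user"] as string;
        if (email == null || role == null) return;

        var principal = new GenericPrincipal(new GenericIdentity(email), new[] { role });
        HttpContext.Current.User = principal;   // drives <authorization> checks
        Thread.CurrentPrincipal  = principal;   // drives PrincipalPermission demands
    }
    ```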

  • em.persist doesn't seem to persist data to a PostgreSQL db

    - by Mario
    I've got a simple Java main which must write bean data to a PostgreSQL database. I use the EntityManager to persist or update objects, with the Hibernate and TopLink driver connections specified in the persistence.xml file. When I call em.persist(obj), nothing is saved to the database, and I don't know why. Here is my simple code:

    ```java
    private static void importa(FileReader f) throws IOException {
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("orpt2");
        EntityManager em = emf.createEntityManager();
        dispositivoMedico = new DispositivoMedico();
        dispositivoMedico.setCategoria("prova");
        dispositivoMedico.setCodice("323");
        em.persist(dispositivoMedico);
    }
    ```

    And here is my persistence.xml:

    ```xml
    <persistence xmlns="http://java.sun.com/xml/ns/persistence"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://java.sun.com/xml/ns/persistence
                                     http://java.sun.com/xml/ns/persistence/persistence_1_0.xsd"
                 version="1.0">
      <persistence-unit name="orpt2">
        <class>it.ariadne.orpt2.entities.AccessoriScheda</class>
        <class>it.ariadne.orpt2.entities.CampiSchede</class>
        <class>it.ariadne.orpt2.entities.CampiSchedeSalvati</class>
        <class>it.ariadne.orpt2.entities.CampoAggiuntivo</class>
        <class>it.ariadne.orpt2.entities.Categorie</class>
        <class>it.ariadne.orpt2.entities.CategorieCampi</class>
        <class>it.ariadne.orpt2.entities.CategorieCampiPK</class>
        <class>it.ariadne.orpt2.entities.ClasseCivab</class>
        <class>it.ariadne.orpt2.entities.DecodificaStato</class>
        <class>it.ariadne.orpt2.entities.DispositivoMedico</class>
        <class>it.ariadne.orpt2.entities.Ente</class>
        <class>it.ariadne.orpt2.entities.FormaNegoziazione</class>
        <class>it.ariadne.orpt2.entities.Fornitore</class>
        <class>it.ariadne.orpt2.entities.LogSession</class>
        <class>it.ariadne.orpt2.entities.Modello</class>
        <class>it.ariadne.orpt2.entities.Periodicita</class>
        <class>it.ariadne.orpt2.entities.Produttore</class>
        <class>it.ariadne.orpt2.entities.Ruolo</class>
        <class>it.ariadne.orpt2.entities.RuoloPK</class>
        <class>it.ariadne.orpt2.entities.RuoloUtente</class>
        <class>it.ariadne.orpt2.entities.Scheda</class>
        <class>it.ariadne.orpt2.entities.SchedaSalvata</class>
        <class>it.ariadne.orpt2.entities.Tipologia</class>
        <class>it.ariadne.orpt2.entities.Utente</class>
      </persistence-unit>
    </persistence>
    ```

    Thank you for your help. Mario
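
    One frequent cause, offered as a guess since the transaction handling isn't shown: with a RESOURCE_LOCAL persistence unit, em.persist() only queues the entity; nothing reaches PostgreSQL until a transaction commits (or the context flushes). A minimal sketch of the same import with explicit transaction boundaries:

    ```java
    EntityManagerFactory emf = Persistence.createEntityManagerFactory("orpt2");
    EntityManager em = emf.createEntityManager();
    try {
        em.getTransaction().begin();   // without this, persist() stays in memory
        DispositivoMedico dm = new DispositivoMedico();
        dm.setCategoria("prova");
        dm.setCodice("323");
        em.persist(dm);
        em.getTransaction().commit();  // flushes the INSERT to the database
    } finally {
        em.close();
        emf.close();
    }
    ```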

  • Syncing data between devel/live databases in Django

    - by T. Stone
    With Django's new multi-db functionality in the development version, I've been trying to create a management command that lets me synchronize the data from the live site down to a developer machine for extended testing. (Having actual data, particularly user-entered data, allows me to test a broader range of inputs.) Right now I've got a "mostly" working command. It can sync "simple" model data, but the problem I'm having is that it ignores ManyToMany fields, and I don't see any reason why it should. Does anyone have ideas on how to fix that, or a better way to handle this? Should I be exporting that first query to a fixture and then re-importing it?

    ```python
    from django.core.management.base import LabelCommand
    from django.db.utils import IntegrityError
    from django.db import models
    from django.conf import settings

    LIVE_DATABASE_KEY = 'live'

    class Command(LabelCommand):
        help = ("Synchronizes the data between the local machine and the live server")
        args = "APP_NAME"
        label = 'application name'
        requires_model_validation = False
        can_import_settings = True

        def handle_label(self, label, **options):
            # Make sure we're running the command on a developer machine
            # and that we've got the right settings
            db_settings = getattr(settings, 'DATABASES', {})
            if not LIVE_DATABASE_KEY in db_settings:
                print 'Could not find "%s" in database settings.' % LIVE_DATABASE_KEY
                return
            if db_settings.get('default') == db_settings.get(LIVE_DATABASE_KEY):
                print 'Data cannot synchronize with self. This command must be run on a non-production server.'
                return

            # Fetch all models for the given app
            try:
                app = models.get_app(label)
                app_models = models.get_models(app)
            except:
                print "The app '%s' could not be found or models could not be loaded for it." % label

            for model in app_models:
                print 'Syncing %s.%s ...' % (model._meta.app_label, model._meta.object_name)
                # Query each model from the live site
                qs = model.objects.all().using(LIVE_DATABASE_KEY)
                # ...and save it to the local database
                for record in qs:
                    try:
                        record.save(using='default')
                    except IntegrityError:
                        # Skip as the record probably already exists
                        pass
    ```

  • Set the property hibernate.dialect error message

    - by user281180
    I am getting the following error when configuring MVC3 and NHibernate. Can anyone guide me on what I have missed, please? "The dialect was not set. Set the property hibernate.dialect."

    ```
    Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
    Exception Details: NHibernate.HibernateException: The dialect was not set. Set the property hibernate.dialect.
    Source Error:
    Line 16: {
    Line 17:     NHibernate.Cfg.Configuration configuration = new NHibernate.Cfg.Configuration();
    Line 18:     configuration.AddAssembly(System.Reflection.Assembly.GetExecutingAssembly());
    Line 19:     sessionFactory = configuration.BuildSessionFactory();
    Line 20: }
    ```

    My web.config is as follows:

    ```xml
    <configSections>
      <section name="cachingConfiguration" type="Microsoft.Practices.EnterpriseLibrary.Caching.Configuration.CacheManagerSettings, Microsoft.Practices.EnterpriseLibrary.Caching"/>
      <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>
      <section name="hibernate-configuration" type="NHibernate.Cfg.ConfigurationSectionHandler, NHibernate"/>
    </configSections>
    <appSettings>
      <add key="BusinessObjectAssemblies" value="Keeper.API"></add>
      <add key="ConnectionString" value="Server=localhost\SQLSERVER2005;Database=KeeperDev;User=test;Pwd=test;"></add>
      <add key="ClientValidationEnabled" value="true"/>
      <add key="UnobtrusiveJavaScriptEnabled" value="true"/>
    </appSettings>
    <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
      <session-factory>
        <property name="dialect">NHibernate.Dialect.MsSql2000Dialect</property>
        <property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
        <property name="connection.connection_string">Server=localhost\SQLServer2005;Database=KeeperDev;User=test;Pwd=test;</property>
        <property name="proxyfactory.factory_class">NHibernate.ByteCode.Castle.ProxyFactoryFactory, NHibernate.ByteCode.Castle</property>
      </session-factory>
    </hibernate-configuration>
    ```
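
    For what it's worth, one guess at the missing step: Configuration.AddAssembly() only loads mappings, while the <hibernate-configuration> section in web.config (including the dialect) is only read when Configure() is called. A sketch of the adjusted factory code:

    ```csharp
    NHibernate.Cfg.Configuration configuration = new NHibernate.Cfg.Configuration();
    configuration.Configure(); // reads <hibernate-configuration> from web.config
    configuration.AddAssembly(System.Reflection.Assembly.GetExecutingAssembly());
    sessionFactory = configuration.BuildSessionFactory();
    ```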

  • Castle Windsor, Fluent NHibernate, and automapping: ISession closed problem

    - by SImon
    I'm new to the whole Castle Windsor, NHibernate, Fluent and automapping stack, so excuse my ignorance here. I didn't want to post another question on this, as it seems there are already a huge number of questions that try to get a solution to the Windsor/NHibernate ISession management problem, but none of them have solved my problem so far. I am still getting an "ISession is closed" exception when I try to call the DB from my repositories. Here is my container setup code:

    ```csharp
    container.AddFacility<FactorySupportFacility>()
        .Register(
            Component.For<ISessionFactory>()
                .LifeStyle.Singleton
                .UsingFactoryMethod(() => Fluently.Configure()
                    .Database(MsSqlConfiguration.MsSql2005
                        .ConnectionString(c => c.Database("DbSchema").Server("Server")
                            .Username("UserName").Password("password")))
                    .Mappings(m => m.AutoMappings.Add(
                        AutoMap.AssemblyOf<Message>(cfg)
                            .Override<Client>(map =>
                            {
                                map.HasManyToMany(x => x.SICCodes).Table("SICRefDataToClient");
                            })
                            .IgnoreBase<BaseEntity>()
                            .Conventions.Add(DefaultCascade.SaveUpdate())
                            .Conventions.Add(new StringColumnLengthConvention(), new EnumConvention())
                            .Conventions.Add(new EnumConvention())
                            .Conventions.Add(DefaultLazy.Never())))
                    .ExposeConfiguration(ConfigureValidator)
                    .ExposeConfiguration(BuildDatabase)
                    .BuildSessionFactory() as SessionFactoryImpl),
            Component.For<ISession>()
                .LifeStyle.PerWebRequest
                .UsingFactoryMethod(kernel => kernel.Resolve<ISessionFactory>().OpenSession()));
    ```

    In my repositories I inject `private readonly ISession session;` and use it as follows:

    ```csharp
    public User GetUser(int id)
    {
        User u = session.Get<User>(id);
        if (u != null && u.Id > 0)
        {
            NHibernateUtil.Initialize(u.UserDocuments);
        }
        return u;
    }
    ```

    In my web.config, inside <httpModules>, I have also added this line:

    ```xml
    <add name="PerRequestLifestyle" type="Castle.MicroKernel.Lifestyle.PerWebRequestLifestyleModule, Castle.Windsor"/>
    ```

    Am I still missing part of the puzzle here? I can't believe that this is such a complex thing to configure for a basic need of any web application development with NHibernate and Castle Windsor. I have been trying to follow the code here: windsor-nhibernate-isession-mvc, and I posted my question there as they seemed to have the exact same issue, but mine is not resolved.

  • Java connectivity with MySQL error

    - by abson
    I just started with connectivity and tried this example. I have installed the necessary software and also copied the jar file into the /ext folder, yet the code below fails.

    ```java
    import java.sql.*;

    public class Jdbc00 {
        public static void main(String args[]) {
            try {
                Statement stmt;
                Class.forName("com.mysql.jdbc.Driver");
                String url = "jdbc:mysql://localhost:3306/mysql";
                Connection con = DriverManager.getConnection(url, "root", "root");
                // Display URL and connection information
                System.out.println("URL: " + url);
                System.out.println("Connection: " + con);
                // Get a Statement object
                stmt = con.createStatement();
                // Create the new database
                stmt.executeUpdate("CREATE DATABASE JunkDB");
                stmt.executeUpdate(
                    "GRANT SELECT,INSERT,UPDATE,DELETE," +
                    "CREATE,DROP " +
                    "ON JunkDB.* TO 'auser'@'localhost' " +
                    "IDENTIFIED BY 'drowssap';");
                con.close();
            } catch (Exception e) {
                e.printStackTrace();
            } // end catch
        } // end main
    } // end class Jdbc00
    ```

    But it gave the following error:

    ```
    D:\Java12\Explore>java Jdbc00
    java.lang.ClassNotFoundException: com.mysql.jdbc.Driver
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Unknown Source)
        at Jdbc11.main(Jdbc00.java:11)
    ```

    Could anyone please guide me in correcting this?
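
    The ClassNotFoundException means the MySQL Connector/J jar is not on the classpath at runtime; dropping it into an /ext directory only works if that directory is the JRE's own jre/lib/ext. A simpler check is to pass the jar explicitly; the jar's name and location below are placeholders for whichever version is actually installed:

    ```
    D:\Java12\Explore>java -cp .;C:\libs\mysql-connector-java-5.1.12-bin.jar Jdbc00
    ```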

  • Slow retrieval of data in SQLite takes a long time using ContentProvider

    - by Arlyn
    I have an application on Android (running 4.0.3) that stores a lot of data in Table A. Table A resides in an SQLite database. I am using a ContentProvider as an abstraction layer above the database. "Lots of data" here means almost 80,000 records per month. Table A is structured like this:

    ```java
    String SQL_CREATE_TABLE = "CREATE TABLE IF NOT EXISTS " + TABLE_A + " ( "
        + COLUMN_ID + " INTEGER PRIMARY KEY NOT NULL"
        + "," + COLUMN_GROUPNO + " INTEGER NOT NULL DEFAULT(0)"
        + "," + COLUMN_TIMESTAMP + " DATETIME UNIQUE NOT NULL"
        + "," + COLUMN_TAG + " TEXT"
        + "," + COLUMN_VALUE + " REAL NOT NULL"
        + "," + COLUMN_DEVICEID + " TEXT NOT NULL"
        + "," + COLUMN_NEW + " NUMERIC NOT NULL DEFAULT(1)"
        + " )";
    ```

    Here is the index statement:

    ```java
    String SQL_CREATE_INDEX_TIMESTAMP = "CREATE INDEX IF NOT EXISTS "
        + TABLE_A + "_" + COLUMN_TIMESTAMP
        + " ON " + TABLE_A + " (" + COLUMN_TIMESTAMP + ") ";
    ```

    I have defined the columns as well as the table name as String constants. I am already experiencing significant slowdown when retrieving this data from Table A. The problem is that when I retrieve data from this table, I first put it in an ArrayList and then I display it. Obviously, this is possibly the wrong way of doing things. I am trying to find a better way to approach this problem using a ContentProvider. But this is not the problem that bothers me. The problem is that, for some reason, it takes a lot longer to retrieve data from the other tables, which have only up to 12 records each, and I see this delay increase as the number of records in Table A increases. This does not make any sense. I could understand the delay if I were retrieving data from Table A, but why the delay in retrieving data from other tables? To clarify, I do not experience this delay if Table A is empty or has fewer than 3000 records. What could be the problem?

  • In what order does Dojo handle Ajax requests?

    - by mm2887
    Hi. Using a form in a dialog box, I am using Dojo in a JSP to save the form to my database. After that request is completed using dojo.xhrPost(), I send another request to update a dropdown box that should include the form object that was just saved, but for some reason the request to update the dropdown is executed before the form is saved in the database, even though the form save is called first. Using Firebug I can see that the getLocations() request completes before the sendForm() request. This is the code (the first three calls run when the "Add Team" dialog is submitted):

    ```javascript
    sendForm("ajaxResult1", "addTeamForm");
    dijit.byId("addTeamDialog").hide();
    getLocations("locationsTeam");

    function sendForm(target, form) {
        var targetNode = dojo.byId(target);
        var xhrArgs = {
            form: form,
            url: "ajax",
            handleAs: "text",
            load: function(data) {
                targetNode.innerHTML = data;
            },
            error: function(error) {
                targetNode.innerHTML = "An unexpected error occurred: " + error;
            }
        }
        // Call the asynchronous xhrPost
        var deferred = dojo.xhrPost(xhrArgs);
    }

    function getLocations(id) {
        var targetNode = dojo.byId(id);
        var xhrArgs = {
            url: "ajax",
            handleAs: "text",
            content: {
                location: "yes"
            },
            load: function(data) {
                targetNode.innerHTML = data;
            },
            error: function(error) {
                targetNode.innerHTML = "An unexpected error occurred: " + error;
            }
        }
        // Call the asynchronous xhrGet
        var deferred = dojo.xhrGet(xhrArgs);
    }
    ```

    Why is this happening? Is there a way to make the first request complete before the second executes? To reduce the possibilities of why this is happening I tried setting the cache property in xhrGet to false, but the result is still the same. Please help!
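
    Both xhr calls are asynchronous, so they race; nothing guarantees the POST finishes first. A minimal sketch of one fix, chaining the second request off the first one's Deferred (identifiers as in the question):

    ```javascript
    function sendForm(target, form) {
        var targetNode = dojo.byId(target);
        return dojo.xhrPost({          // return the Deferred so callers can chain
            form: form,
            url: "ajax",
            handleAs: "text",
            load: function(data) { targetNode.innerHTML = data; }
        });
    }

    // Only fetch locations once the save round-trip has finished.
    sendForm("ajaxResult1", "addTeamForm").addCallback(function() {
        getLocations("locationsTeam");
    });
    dijit.byId("addTeamDialog").hide();
    ```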

  • Dealing with Time Zones

    - by RavIncredible
    Hi all, I need some guidance with one of my project requirements. I am developing an application which has to deal with various time zones. Scenario: User 1 is from India, so his time zone would be GMT+05:30. User 2 is from the UK, so his time zone would be GMT+01:00. If User 1 sends a message to User 2, I want to show the message sent/received time as per each user's time zone. For example, User 1 sends a message at 6:30 Indian time; when User 2 views the message, it should show as 2:00 UK time. Here is my question: whenever I save the message, should I convert the timestamp to GMT+00:00, so all my base timestamps are the same, and then when I display the message, convert it back to the user-specific time zone? Would this be complex? Is this the best way of doing it? I'd like to get views on both saving and displaying, and also on when I should do the time conversion from an optimization point of view. I would need to deal with any/all time zones. I am developing this application with PHP and MySQL, and I am aware of the time zone conversion methods that come with both PHP and MySQL; I am just trying to figure out the best way of doing this. I look forward to all valuable suggestions. Note: as of now I am not much worried about daylight saving. Thanks, Ravi
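
    Storing everything in UTC and converting only at the display edge is the standard approach and keeps the arithmetic simple. A small sketch of what that can look like with PHP 5.2+'s DateTime; the zone identifiers and column handling are placeholder assumptions:

    ```php
    <?php
    // On save: normalize the sender's local time to UTC before it hits MySQL.
    $sent = new DateTime('2010-06-01 18:30:00', new DateTimeZone('Asia/Kolkata'));
    $sent->setTimezone(new DateTimeZone('UTC'));
    $utcForDb = $sent->format('Y-m-d H:i:s');   // store this in a DATETIME column

    // On display: convert the stored UTC value to the viewer's zone.
    $shown = new DateTime($utcForDb, new DateTimeZone('UTC'));
    $shown->setTimezone(new DateTimeZone('Europe/London'));
    echo $shown->format('Y-m-d H:i');           // what the UK user sees
    ```

    Doing the conversion only at render time also means a single stored value serves every viewer, which is the cheaper option from an optimization point of view.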

  • How do I create self-relationships in polymorphic inheritance in Elixir and Pylons?

    - by Turukawa
    I am new to programming and am following the example in the Pylons documentation on creating a wiki. The database I want to link to the wiki was created with Elixir, so I rewrote the wiki database schema and have continued from there. In the wiki there is a requirement for a Navigation table which is inherited by Pages and Sections. A section can have many pages, while a page can only have one section. In addition, sibling nodes can be chain-referenced to each other. So: Nav has "section" (OneToMany) and "before" (OneToOne, to reference the preceding node); Page has "section" (ManyToOne, many pages in one section) and inherits "before"; Section inherits everything from Nav. The code I've written looks like this:

    ```python
    class Nav(Entity):
        using_options(inheritance='multi')
        name = Field(Unicode(30), default=u'Untitled Node')
        path = Field(Unicode(255), default=u'')
        section = OneToMany('Page', inverse='section')
        after = OneToOne('Nav', inverse='before')
        before = OneToMany('Nav', inverse='after')

    class Page(Nav):
        using_options(inheritance='multi')
        content = Field(UnicodeText, nullable=False)
        posted = Field(DateTime, default=now())
        title = Field(Unicode(255), default=u'Untitled Page')
        heading = Field(Unicode(255))
        tags = ManyToMany('Tag')
        comments = OneToMany('Comment')
        section = ManyToOne('Nav', inverse='section')

    class Section(Nav):
        using_options(inheritance='multi')
    ```

    Errors received on this:

    ```
    sqlalchemy.exc.OperationalError: (OperationalError) table nav has no column named aftr_id
    u'INSERT INTO nav (name, path, aftr_id, row_type) VALUES (?, ?, ?, ?)'
    ```

    I've also tried `before = ManyToMany('Nav', inverse='before')` on Nav in the hope this might work around the problem, but it did not. The original SQLAlchemy code from the tutorial for these declarations is as follows:

    ```python
    nav_table = schema.Table('nav', meta.metadata,
        schema.Column('id', types.Integer(),
            schema.Sequence('nav_id_seq', optional=True), primary_key=True),
        schema.Column('name', types.Unicode(255), default=u'Untitled Node'),
        schema.Column('path', types.Unicode(255), default=u''),
        schema.Column('section', types.Integer(), schema.ForeignKey('nav.id')),
        schema.Column('before', types.Integer(), default=None),
        schema.Column('type', types.String(30), nullable=False)
    )
    page_table = schema.Table('page', meta.metadata,
        schema.Column('id', types.Integer, schema.ForeignKey('nav.id'), primary_key=True),
        schema.Column('content', types.Text(), nullable=False),
        schema.Column('posted', types.DateTime(), default=now),
        schema.Column('title', types.Unicode(255), default=u'Untitled Page'),
        schema.Column('heading', types.Unicode(255)),
    )
    section_table = sa.Table('section', meta.metadata,
        schema.Column('id', types.Integer, schema.ForeignKey('nav.id'), primary_key=True),
    )
    orm.mapper(Nav, nav_table, polymorphic_on=nav_table.c.type, polymorphic_identity='nav')
    orm.mapper(Section, section_table, inherits=Nav, polymorphic_identity='section')
    orm.mapper(Page, page_table, inherits=Nav, polymorphic_identity='page', properties={
        'comments': orm.relation(Comment, backref='page', cascade='all'),
        'tags': orm.relation(Tag, secondary=pagetag_table)
    })
    ```

    Any help is much appreciated.

  • PDO empty result from SELECT when using AND

    - by Jurgen
    Hello, I've come upon a rather interesting thing which I can't seem to figure out myself. Every time I execute a SQL statement which contains an '... AND ...', the result is empty. Example:

    ```php
    echo('I have a user: ' . $email . $wachtwoord . '<br>');
    $dbh = new PDO($dsn, $user, $password);
    $sql = 'SELECT * FROM user WHERE email = :email AND wachtwoord = :wachtwoord';
    $stmt = $dbh->prepare($sql);
    $stmt->bindParam(':email', $email, PDO::PARAM_STR);
    $stmt->bindParam(':wachtwoord', $wachtwoord, PDO::PARAM_STR);
    $stmt->execute();
    while ($row = $stmt->fetchObject()) {
        echo($row->email . ',' . $row->wachtwoord);
        $user[] = array(
            'email'      => $row->email,
            'wachtwoord' => $row->wachtwoord
        );
    }
    ```

    The first echo displays the correct values; however, the line with `echo($row->email . ',' . $row->wachtwoord);` is never reached. A few things I want to add: 1) I am connected to the database, since other queries work; only the ones where I add an 'AND' after my 'WHERE' fail. 2) Working with the while loop works perfectly with queries that do not contain '... AND ...'. 3) Error reporting is on; PDO gives no exceptions on my query (or anything else). 4) Executing the query directly on the database does give what I want: SELECT * FROM user WHERE email = '[email protected]' AND wachtwoord = 'jurgen'. I can stare at it all day long again (which I already did once, but I managed to work around the 'AND'), but maybe one of you can give me a helping hand. Thank you in advance. Jurgen

  • Pinax TemplateSyntaxError

    - by Spikie
    Hi, I ran into this error while trying to modify the Pinax database model. I am using Eclipse PyDev, and I get this error:

    ```
    Exception Type: TemplateSyntaxError at /
    Exception Value: Caught an exception while rendering: (1146, "Table 'test1.announcements_announcement' doesn't exist")
    ```

    Please, how do I correct this? UPDATE: I asked this question and left it unresolved some months back, and wouldn't you know it, I ran into the bug again this week. I typed the error message into Google, hit this page with the question still unanswered, so I think I have to answer it and hope it helps someone with the same problem in the future. The problem is that the SQLite path is out of place, so Django (or in this case Pinax) cannot find the database. To resolve it, change the path to SQLite to an absolute one, like this:

    ```python
    DATABASE_ENGINE = 'sqlite3'  # 'postgresql_psycopg2', 'postgresql', 'mysql', 'sqlite3' or 'ado_mssql'.
    DATABASE_NAME = os.path.join(PROJECT_ROOT, 'dev.db')  # Or path to database file if using sqlite3.
    DATABASE_USER = ''      # Not used with sqlite3.
    DATABASE_PASSWORD = ''  # Not used with sqlite3.
    DATABASE_HOST = ''      # Set to empty string for localhost. Not used with sqlite3.
    DATABASE_PORT = ''      # Set to empty string for default. Not used with sqlite3.
    ```

    I hope that helps.

  • Alter Table Filestream runs even if IF statement is false

    - by Allison
    I am writing a script to update a database to add Filestream capability. The script needs to be able to be run multiple times without erroring. This is what I currently have:

    ```sql
    IF ((SELECT COUNT(*)
         FROM sys.columns a
         INNER JOIN sys.objects b ON a.object_id = b.object_id
         INNER JOIN sys.default_constraints c ON c.parent_object_id = a.object_id
         WHERE a.name = 'evidence_data'
           AND b.name = 'evidence'
           AND c.name = 'DF__evidence_evidence_data') = 0)
    BEGIN
        ALTER TABLE evidence SET ( FILESTREAM_ON = AnalysisFSGroup )
        ALTER TABLE evidence ALTER COLUMN id ADD ROWGUIDCOL;
    END
    GO
    ```

    The first time I run this against the database it works fine. The second time, when the IF statement should be false, it throws an error saying "Cannot add FILESTREAM filegroup or partition scheme since table 'evidence' has a FILESTREAM filegroup or partition scheme already." If I put a simple SELECT into the IF statement and take out the ALTER TABLE ... FILESTREAM_ON line, it functions correctly and does not enter the IF branch. So essentially it is always running the ALTER TABLE ... FILESTREAM_ON statement, even if the IF statement is false. Any thoughts or suggestions would be great. Thanks.
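
    A guess at a workaround, based on the usual cause of "runs even when the IF is false" behavior: some ALTER TABLE work is resolved when the batch is compiled, not when the branch is evaluated. Wrapping the statements in dynamic SQL defers that work until the branch actually executes:

    ```sql
    IF ((SELECT COUNT(*)
         FROM sys.columns a
         INNER JOIN sys.objects b ON a.object_id = b.object_id
         INNER JOIN sys.default_constraints c ON c.parent_object_id = a.object_id
         WHERE a.name = 'evidence_data'
           AND b.name = 'evidence'
           AND c.name = 'DF__evidence_evidence_data') = 0)
    BEGIN
        -- EXEC compiles these statements only when this branch runs
        EXEC('ALTER TABLE evidence SET ( FILESTREAM_ON = AnalysisFSGroup )');
        EXEC('ALTER TABLE evidence ALTER COLUMN id ADD ROWGUIDCOL');
    END
    GO
    ```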

  • JPA @ManyToMany on only one side?

    - by Ethan Leroy
    I am trying to refresh the @ManyToMany relation, but it gets cleared instead. My Project class looks like this:

    ```java
    @Entity
    public class Project {
        ...
        @ManyToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        @JoinTable(name = "PROJECT_USER",
                   joinColumns = @JoinColumn(name = "PROJECT_ID", referencedColumnName = "ID"),
                   inverseJoinColumns = @JoinColumn(name = "USER_ID", referencedColumnName = "ID"))
        private Collection<User> users;
        ...
    }
    ```

    But I don't have - and I don't want - the collection of Projects in the User entity. When I look at the generated database tables, they look good: they contain all columns and constraints (primary/foreign keys). But when I persist a Project that has a list of Users (and the users are already in the database), the mapping table gets updated, yet when I refresh the project afterwards, the list of Users is cleared. For better understanding:

    ```java
    Project project = ...; // new project with users that are available in the db
    System.out.println(project.getUsers().size()); // prints 5
    em.persist(project);
    System.out.println(project.getUsers().size()); // prints 5
    em.refresh(project);
    System.out.println(project.getUsers().size()); // prints 0
    ```

    So, how can I refresh the relation between User and Project?
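
    One hedged observation about that sequence: persist() only queues the INSERTs, so if refresh() re-reads the row before the pending join-table writes have been flushed, it can come back with an empty collection. Forcing the flush first is a cheap experiment (transaction boundaries shown for completeness; only the flush() line is the point):

    ```java
    em.getTransaction().begin();
    em.persist(project);
    em.flush();            // push the PROJECT_USER rows to the database now
    em.refresh(project);   // re-read; the users collection should survive
    System.out.println(project.getUsers().size());
    em.getTransaction().commit();
    ```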

  • Publish failed using Ant publisher (Eclipse/datanucleus).

    - by aronp
    Dear all, I am being driven mad by the following (apparently hard) error from Eclipse:

    ```
    Publish failed using Ant publisher
    Resource is out of sync with the file system: '/MyServlet/build/classes/com/inver/hotzones/database/BaseNetworkData.class'.
    ```

    I have seen comments on similar errors where refreshing Eclipse's view of the project helps, but it is not helping me. I have tried cleaning the project, removing it from the web server, and deleting war files, but I can't seem to clear it. I have reset my TMPDIR variable so that it uses a directory on the same filesystem, as that appeared to be another possible cause. The error occurs on classes which have been enhanced by DataNucleus; I have auto-enhance on for the project. The other references to this problem indicate that it is due to Eclipse's view of the project being out of step with the filesystem, and I am guessing that this has something to do with the DataNucleus enhancement. Any ideas? Thanks. I am using Eclipse 3.5.2 with the latest DataNucleus plugins. Stack trace:

    ```
    org.eclipse.core.runtime.CoreException: Resource is out of sync with the file system: '/MyServlet/build/classes/com/inver/hotzones/database/BaseNetworkData.class'.
        at org.eclipse.jst.server.generic.core.internal.publishers.AbstractModuleAssembler.copyModule(AbstractModuleAssembler.java:172)
        at org.eclipse.jst.server.generic.core.internal.publishers.WarModuleAssembler.assemble(WarModuleAssembler.java:31)
        at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.assembleModule(AntPublisher.java:167)
        at org.eclipse.jst.server.generic.core.internal.publishers.AntPublisher.publish(AntPublisher.java:128)
        at org.eclipse.jst.server.generic.core.internal.GenericServerBehaviour.publishModule(GenericServerBehaviour.java:82)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publishModule(ServerBehaviourDelegate.java:949)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publishModules(ServerBehaviourDelegate.java:1039)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:872)
        at org.eclipse.wst.server.core.model.ServerBehaviourDelegate.publish(ServerBehaviourDelegate.java:708)
        at org.eclipse.wst.server.core.internal.Server.publishImpl(Server.java:2731)
        at org.eclipse.wst.server.core.internal.Server$PublishJob.run(Server.java:278)
        at org.eclipse.core.internal.jobs.Worker.run(Worker.java:55)
    ```

  • Design decision - scaling out a web-based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with and in a couple of months is expected to grow to 50M users (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort. To explain, I would like to use a trivial scenario. Let's say User entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once traffic increases, authenticating users (or other services related to User entities), for example, has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal was to use IPC initially, and when we need to scale out, to switch internally to TCP-based RPC calls so the system can scale out easily. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, moving to a TcpListener later on. If we have a proper design that encapsulates the above approach, it would be easy for us to scale services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely a second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.

  • Cross-platform and language (de)serialization

    - by fwgx
    I'm looking for a way to serialize a bunch of C++ structs in the most convenient way, so that the serialization is portable across C++ and Java (at a minimum) and across 32-bit/64-bit, big/little-endian platforms. The structures to be serialized just contain data, i.e. they're pure data objects with no state or behavior. The idea is that we serialize the structs into an octet blob that we can store in a database "generically" and read out later on. This avoids changing the database whenever a struct changes, and also avoids assigning each data member to a field - i.e. we only want one table to hold everything "generically" as a binary blob. This should mean less work for developers and fewer changes when structures change. I've looked at Boost.Serialization but don't think there's a way to enable compatibility with Java, and likewise for inheriting Serializable in Java. If there is a way to do it by starting with an IDL file, that would be best, as we already have IDL files that describe the structures. Cheers in advance!

  • ADO/SQL Server: What is the error code for "timeout expired"?

    - by Ian Boyd
    I'm trying to trap a "timeout expired" error from ADO. When a timeout happens, ADO returns:

    ```
    Number:      0x80040E31 (DB_E_ABORTLIMITREACHED in oledberr.h)
    SQLState:    HYT00
    NativeError: 0
    ```

    The NativeError of zero makes sense, since the timeout is not a function of the database engine (i.e. SQL Server) but of ADO's internal timeout mechanism. The Number (i.e. the COM HRESULT) looks useful, but the definition of DB_E_ABORTLIMITREACHED in oledberr.h says: "Execution stopped because a resource limit was reached. No results were returned." This error could apply to things besides "timeout expired" (some potentially server-side), such as a governor that limits CPU usage, I/O reads/writes, or network bandwidth and stops a query. The final useful piece is SQLState, which is a database-independent error code system. Unfortunately, the only reference for SQLState error codes I can find has no mention of HYT00. What to do? Note: I can't trust 0x80040E31 (DB_E_ABORTLIMITREACHED) to mean "timeout expired" any more than I could trust 0x80004005 (E_UNSPECIFIED_ERROR) to mean "Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim". My pseudo-question becomes: does anyone have documentation on what the SQLState "HYT00" means? And my real question still remains: how can I specifically trap a "timeout expired" exception thrown by ADO? Gotta love the questions where the developer is trying to "do the right thing", but nobody knows how to do the right thing. Also gotta love how googling for DB_E_ABORTLIMITREACHED puts this question at #9, with MSDN nowhere to be found. Update 3: from the OLE DB ICommand::Execute reference: "DB_E_ABORTLIMITREACHED - Execution has been aborted because a resource limit has been reached. For example, a query timed out. No results have been returned." "For example", meaning not an exhaustive list.

  • Rails preventing duplicates in polymorphic has_many :through associations

    - by seaneshbaugh
    Is there an easy, or at least elegant, way to prevent duplicate entries in polymorphic has_many :through associations? I've got two models, stories and links, that can be tagged. I'm making a conscious decision not to use a plugin here: I want to actually understand everything that's going on and not be dependent on someone else's code that I don't fully grasp. To see what my question is getting at, if I run the following in the console (assuming the story and tag objects already exist in the database):

    ```ruby
    s = Story.find_by_id(1)
    t = Tag.find_by_id(1)
    s.tags << t
    s.tags << t
    ```

    my taggings join table will have two entries added to it, each with the same exact data (tag_id = 1, taggable_id = 1, taggable_type = "Story"). That just doesn't seem very proper to me. So in an attempt to prevent this from happening I added the following to my Tagging model:

    ```ruby
    before_validation :validate_uniqueness

    def validate_uniqueness
      taggings = Tagging.find(:all, :conditions => {
        :tag_id        => self.tag_id,
        :taggable_id   => self.taggable_id,
        :taggable_type => self.taggable_type
      })
      if !taggings.empty?
        return false
      end
      return true
    end
    ```

    And it works almost as intended, but if I attempt to add a duplicate tag to a story or link I get an ActiveRecord::RecordInvalid: Validation failed exception. It seems that when you add an association to a list it calls the save! method (rather than save, sans !), which raises exceptions if something goes wrong rather than just returning false. That isn't quite what I want to happen. I suppose I could surround any attempts to add new tags with a try/catch, but that goes against the idea that you shouldn't expect your exceptions, and this is something I fully expect to happen. Is there a better way of doing this that won't raise exceptions when all I want to do is silently not save the object to the database because a duplicate exists?
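
    A couple of idiomatic alternatives, sketched against the models described above (treat this as a starting point rather than the canonical answer): declare the uniqueness rule instead of hand-rolling it, and guard at the call site so the exception never fires for the common case.

    ```ruby
    # In Tagging: the declarative equivalent of the hand-rolled check.
    class Tagging < ActiveRecord::Base
      belongs_to :tag
      belongs_to :taggable, :polymorphic => true

      # Unique per (tag, taggable) pair across both column halves.
      validates_uniqueness_of :tag_id, :scope => [:taggable_id, :taggable_type]
    end

    # At the call site: silently skip duplicates instead of rescuing RecordInvalid.
    s.tags << t unless s.tags.include?(t)
    ```

    A unique database index on (tag_id, taggable_id, taggable_type) is the usual backstop for races, since the validation alone can still be beaten by concurrent requests.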

  • SQL Server insert performance

    - by Jose
    I have an insert query that gets generated like this:

    ```sql
    INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
    VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);
    DECLARE @InvDetail1 INT;
    SET @InvDetail1 = (SELECT @@IDENTITY);
    ```

    This query is generated for only 110K rows, yet it takes 30 minutes for all of these queries to execute. I checked the query plan, and the largest-cost nodes are a Clustered Index Insert at 57% query cost, which has a long XML that I don't want to post, and a Table Spool at 38% query cost:

    ```xml
    <RelOp AvgRowSize="35" EstimateCPU="5.01038E-05" EstimateIO="0" EstimateRebinds="0" EstimateRewinds="0" EstimateRows="1" LogicalOp="Eager Spool" NodeId="80" Parallel="false" PhysicalOp="Table Spool" EstimatedTotalSubtreeCost="0.0466109">
      <OutputList>
        <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvoiceId" />
        <ColumnReference Database="[SkipPro]" Schema="[dbo]" Table="[InvoiceDetail]" Column="InvestigatorId" />
        <ColumnReference Column="Expr1054" />
        <ColumnReference Column="Expr1055" />
      </OutputList>
      <Spool PrimaryNodeId="3" />
    </RelOp>
    ```

    So my question is: what can I do to improve the speed of this thing? I already run ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL before the queries and ALTER TABLE TABLENAME NOCHECK CONSTRAINTS ALL after the queries, and that didn't shave off hardly anything off of the time. Now, I am running these queries in a .NET application that uses a SqlCommand object to send them. I then tried to output the SQL commands to a file and execute it using sqlcmd, but I wasn't getting any updates on how it was doing, so I gave up on that. Any ideas or hints or help?
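
    Two cheap things to try, offered as guesses since the generating code isn't shown: batch many statements into one round trip inside an explicit transaction, since per-statement autocommit log flushes tend to dominate workloads like this, and prefer SCOPE_IDENTITY() over @@IDENTITY, which can return the wrong value when triggers fire:

    ```sql
    BEGIN TRANSACTION;

    -- one of many inserts sent in the same batch
    INSERT INTO InvoiceDetail (LegacyId,InvoiceId,DetailTypeId,Fee,FeeTax,Investigatorid,SalespersonId,CreateDate,CreatedById,IsChargeBack,Expense,RepoAgentId,PayeeName,ExpensePaymentId,AdjustDetailId)
    VALUES(1,1,2,1500.0000,0.0000,163,1002,'11/30/2001 12:00:00 AM',1116,0,550.0000,850,NULL,@ExpensePay1,NULL);

    DECLARE @InvDetail1 INT;
    SET @InvDetail1 = SCOPE_IDENTITY();  -- unaffected by triggers, unlike @@IDENTITY

    -- ...repeat for the rest of the batch, then:
    COMMIT TRANSACTION;
    ```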

  • exec problem in SQL Server 2005

    - by IordanTanev
    Hi, I have a situation where I have two databases with the same structure. The first has some data in its tables. I need to create a script that will transfer the data from the first database to the second. I have created this script:

    ```sql
    DECLARE @table_name nvarchar(MAX), @query nvarchar(MAX)
    DECLARE @table_cursor CURSOR

    SET @table_cursor = CURSOR FAST_FORWARD
    FOR SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES

    OPEN @table_cursor
    FETCH NEXT FROM @table_cursor INTO @table_name

    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @query = 'INSERT INTO ' + @table_name + ' SELECT * FROM MyDataBase.dbo.' + @table_name
        print @query
        exec @query

        FETCH NEXT FROM @table_cursor INTO @table_name
    END

    CLOSE @table_cursor
    DEALLOCATE @table_cursor
    ```

    The problem is that when I run the script, the "print @query" statement prints a statement like this: INSERT INTO table SELECT * FROM MyDataBase.dbo.table. When I copy this and run it from Management Studio, it works fine. But when the script tries to run it with exec, I get this error:

    ```
    Msg 911, Level 16, State 1, Line 21
    Could not locate entry in sysdatabases for database 'INSERT INTO table SELECT * FROM MPDEV090314'.
    No entry found with that name. Make sure that the name is entered correctly.
    ```

    I hope someone can tell me what is wrong with this. Best Regards, Iordan Tanev
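
    For what it's worth, the error text points at the likely culprit: `EXEC @query` treats the variable's value as the *name* of a stored procedure to call, not as a batch to run. Executing the string itself needs parentheses, or sp_executesql:

    ```sql
    -- either form executes the string as a batch
    EXEC (@query);
    EXEC sp_executesql @query;
    ```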

  • Design practice: retrieving multiple users - PHP

    - by pws5068
    Greetings all, I'm looking for a more efficient way to grab multiple users without too much code redundancy. I have a Users class (here's a highly condensed representation):

    ```php
    class Users {

        function __construct() {
            ...
        }

        private static function getNew($id) {
            // This is all only example code
            $sql = "SELECT * FROM Users WHERE User_ID=?";
            $user = new User();
            $user->setID($id);
            ...
            return $user;
        }

        public static function getNewestUsers($count) {
            $sql = "SELECT * FROM Users ORDER BY Join_Date LIMIT ?";
            $users = array();
            // Prepared Statement Data Here
            while ($stmt->fetch()) {
                $users[] = Users::getNew($id);
                ...
            }
            return $users;
        }

        // These have more sanity/security checks, demonstration only.
        function setID($id) { $this->id = $id; }
        function setName($name) { $this->name = $name; }
        ...
    }
    ```

    So clearly calling getNew() in a loop is inefficient, because it makes $count calls to the database each time. I would much rather select all users in a single SQL statement. The problem is that I have probably a dozen or so methods for getting arrays of users, so all of these would now need to know the names of the database columns, which repeats a lot of structure that could always change later. What are some other ways I could approach this problem? My goals include efficiency, flexibility, scalability, and good design practice.
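
    One pattern that gets the single-query efficiency without spreading column names everywhere: keep one private hydration helper that owns the row-to-object mapping, and have every list method fetch all its rows in one statement and delegate to it. A hedged PDO sketch; the class and column names come from the question, everything else is assumed:

    ```php
    class Users {
        // The only place in the codebase that knows the column layout.
        private static function fromRow(array $row) {
            $user = new User();
            $user->setID($row['User_ID']);
            $user->setName($row['Name']);   // hypothetical column name
            return $user;
        }

        public static function getNewestUsers(PDO $db, $count) {
            $stmt = $db->prepare("SELECT * FROM Users ORDER BY Join_Date DESC LIMIT ?");
            $stmt->bindValue(1, (int)$count, PDO::PARAM_INT);
            $stmt->execute();

            $users = array();
            while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
                $users[] = self::fromRow($row);  // one query, N objects
            }
            return $users;
        }
    }
    ```

    If the column layout changes later, only fromRow() has to move, which addresses the "repeated structure" worry directly.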

  • Efficient splitting of elements in a field

    - by Gary
    I have a field in a text file exported from a database. The field contains addresses, but sometimes they are quite long and the database allows them to contain multiple lines. When exported, the newline character gets replaced with a dollar sign, like this:

    ```
    first part of very long address$second part of very long address$third part of very long address
    ```

    Not every address has multiple lines, and no address contains more than three lines. The length of each line is variable. I'm massaging the data for import into MS Access, which is used for a mail merge. I want to split the field on the $ sign if it's there, but if the field only contains one line, I want to set my two extra output fields to a zero-length string so that I don't wind up with blank lines in the address when it gets printed. I have an awk file that's working correctly on all the other data in the text file, but I need to get this last bit working. I tried the code below. Aside from the fact that I get a syntax error at the else, I'm not sure this is a good way to do what I want. This is being done with gawk on Windows.

    ```awk
    BEGIN { FS = "|" }

    $1 != "HEADER" {
        if ($6 ~ /\$/)
            split($6, arr, "$")
            address = arr[1]
            addresstwo = arr[2]
            addressthree = arr[3]
            addressLength = length(address)
            addressTwoLength = length(addresstwo)
            addressThreeLength = length(addressthree)
        else {
            address = $6
            addressLength = length($6)
            addresstwo = ""
            addressTwoLength = length(addresstwo)
            addressthree = ""
            addressThreeLength = length(addressthree)
        }
        printf("%*s\t%*s\t%*s\n", addressLength, address, addressTwoLength, addresstwo, addressThreeLength, addressthree)
    }
    ```
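
    The syntax error at `else` comes from the if branch's multi-statement body having no braces, so only the split() belongs to the if and the else dangles. A braced sketch of the same logic; the computed-width printf is dropped for plain %s, since tab-separated fields don't need padding for an Access import:

    ```awk
    BEGIN { FS = "|" }

    $1 != "HEADER" {
        if ($6 ~ /\$/) {                        # braces make the body one block
            n = split($6, arr, "$")
            address      = arr[1]
            addresstwo   = (n >= 2) ? arr[2] : ""
            addressthree = (n >= 3) ? arr[3] : ""
        } else {
            address      = $6
            addresstwo   = ""
            addressthree = ""
        }
        printf("%s\t%s\t%s\n", address, addresstwo, addressthree)
    }
    ```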
