Search Results

Search found 31483 results on 1260 pages for 'database migration'.

Page 617/1260 | < Previous Page | 613 614 615 616 617 618 619 620 621 622 623 624  | Next Page >

  • What can I do about a SQL Server ghost FK constraint?

    - by rcook8601
    I'm having some trouble with a SQL Server 2005 database that seems to be keeping a ghost constraint around. I've got a script that drops the constraint in question, does some work, and then re-adds the same constraint. Normally it works fine. Now, however, it can't re-add the constraint because the database says it already exists, even though the drop worked fine! Here are the statements I'm working with:

        alter table individual drop constraint INDIVIDUAL_EMP_FK

        ALTER TABLE INDIVIDUAL ADD CONSTRAINT INDIVIDUAL_EMP_FK
            FOREIGN KEY (EMPLOYEE_ID) REFERENCES EMPLOYEE

    After the constraint is dropped, I've made sure that the object really is gone by using the following queries:

        select object_id('INDIVIDUAL_EMP_FK')
        select * from sys.foreign_keys where name like 'individual%'

    Both return no results (or null), but when I try to re-add the constraint, I get: The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "INDIVIDUAL_EMP_FK". Trying to drop it again gets me a message that it doesn't exist. Any ideas?
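
    The error quoted above is typically what SQL Server raises when existing rows violate the foreign key being added, rather than when the constraint object still exists. A minimal diagnostic sketch; the EMPLOYEE key column is an assumption taken from the constraint definition:

        -- Check every schema for a leftover constraint object, not just the default one
        SELECT name, SCHEMA_NAME(schema_id) AS schema_name, is_disabled, is_not_trusted
        FROM sys.foreign_keys
        WHERE name = 'INDIVIDUAL_EMP_FK';

        -- Look for orphan rows that would make ALTER TABLE ... ADD CONSTRAINT fail
        -- (assumes EMPLOYEE_ID is the primary key of EMPLOYEE)
        SELECT i.EMPLOYEE_ID
        FROM individual AS i
        LEFT JOIN employee AS e ON e.EMPLOYEE_ID = i.EMPLOYEE_ID
        WHERE i.EMPLOYEE_ID IS NOT NULL
          AND e.EMPLOYEE_ID IS NULL;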

    Read the article

  • Want to build simple SQL admin interface to change a few values in a table.

    - by Adam McC
    I am currently building a system in MSSQL 2K5. I have a table that holds information about certain insurance schemes, such as overheads and other things. These values change occasionally, and at the moment I administer the database directly through Management Studio. I would like to build a simple interface that lets my colleagues change these values: they select the company in a dropdown, the current values populate, and they can then edit the values and submit them to the database. Is this possible with the Visual Studio edition supplied with MSSQL Server 2K5, or do I need to get another product? I am confident that with the help of Stack Overflow and Google I can build this myself, but I need pointing in the right direction as to which environment would be easiest and best to start building it in. Many thanks, Adam

    Read the article

  • nth smallest number among two databases of size n each using divide and conquer

    - by urfriend
    We have two databases of size n each, containing numbers without repeats, so in total we have 2n elements. They can be accessed through a query to one database at a time: you give it a k and it returns the kth smallest entry in that database. We need to find the nth smallest entry among all 2n elements in O(log n) queries. The idea is to use divide and conquer, but I need some help thinking through this. Thanks!
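
    A minimal sketch of the standard divide-and-conquer idea: compare the candidates at complementary ranks in the two remaining windows and discard roughly half of one database each round, so only O(log n) queries are issued. The query functions and the list-backed databases below are hypothetical stand-ins:

        def nth_smallest(query_a, query_b, n):
            # query_a(k) / query_b(k): kth smallest entry (1-indexed) of database A / B
            a_skip, b_skip, k = 0, 0, n   # elements already ruled out, rank still sought
            while k > 1:
                half = k // 2
                a_val = query_a(a_skip + half)
                b_val = query_b(b_skip + (k - half))
                if a_val < b_val:
                    # the first `half` remaining entries of A are all below b_val,
                    # so none of them can be the kth smallest overall: discard them
                    a_skip += half
                    k -= half
                else:
                    b_skip += k - half
                    k -= k - half
            return min(query_a(a_skip + 1), query_b(b_skip + 1))

        # Hypothetical usage with sorted lists standing in for the databases
        a, b = [1, 3, 5, 7], [2, 4, 6, 8]
        print(nth_smallest(lambda k: a[k - 1], lambda k: b[k - 1], len(a)))  # -> 4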

    Read the article

  • How can I get notifications from a server to an Android device?

    - by Vaibs
    I am creating an app in which a user shares some information. I store this data in a database through servlets, i.e. I call my own servlets, which take the data through the URL and store it in the database. I want other users of the same app to be notified that new information is available and, in turn, to receive the information that the other user has updated. This could work with polling or push, but polling drains a lot of battery. I have tried C2DM but it is not working for me, so I am looking for some other mechanism I can implement instead of C2DM. Please suggest a way to make this work, with an example if you have come across one.

    Read the article

  • Does anyone have a backup strategy for SQL Azure databases?

    - by Pete
    I'm using SQL Azure on a project and it works great. The problem is that the usual backup features do not exist. I have exported the database a couple of times using SQLAzureMW ( http://sqlazuremw.codeplex.com/ ) but this tool is now choking trying to download the database data with bcp. In any case, it's not as nice a solution as SQL Server backups. Is anyone aware of a commercial or open source tool, or other technique, for making reliable backups of SQL Azure databases? This is really a showstopper.
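
    One technique available on SQL Azure at the time, offered as a partial answer rather than a full backup story: the server can clone a database in place, which at least gives a consistent point-in-time copy to export from or fall back to. A minimal sketch; the database names are placeholders and the copy is billed as a normal database:

        -- Run against the master database of the same SQL Azure server
        CREATE DATABASE MyDb_copy_20120101 AS COPY OF MyDb;

        -- The copy is created asynchronously; check progress with:
        SELECT name, state_desc FROM sys.databases WHERE name = 'MyDb_copy_20120101';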

    Read the article

  • When should I open and close a connection to SQL Server

    - by Martin
    I have a simple static class with a few methods in it. Each of those methods opens a SqlConnection, queries the database and closes the connection. This way I am sure that I always close the connection to the database, but on the other hand I don't like having to open and close a connection every time. Below is an example of what my methods look like:

        public static void AddSomething(string something)
        {
            using (SqlConnection connection = new SqlConnection("..."))
            {
                connection.Open();
                // ...
                connection.Close();
            }
        }

    Considering that the methods are inside a static class, should I have a static member containing a single SqlConnection? How and when should I dispose of it? What are the best practices?
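
    The usual guidance is to keep doing what the snippet above does: open the connection as late as possible, close it as early as possible, and let ADO.NET connection pooling reuse the underlying physical connection between calls. A minimal sketch; the table, column and connection-string member are hypothetical:

        public static void AddSomething(string something)
        {
            // Pooling is on by default, so "new SqlConnection" is cheap as long as
            // the connection string stays identical between calls.
            using (var connection = new SqlConnection(ConnectionString))
            using (var command = new SqlCommand(
                "INSERT INTO Something (Value) VALUES (@value)", connection))
            {
                command.Parameters.AddWithValue("@value", something);
                connection.Open();
                command.ExecuteNonQuery();
            }   // Dispose() closes the connection and returns it to the pool
        }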

    Read the article

  • Should an event-sourced aggregate root have access to the event sourcing repository?

    - by JD Courtoy
    I'm working on an event-sourced CQRS implementation, using DDD in the application / domain layer. I have an object model that looks like this:

        public class Person : AggregateRootBase
        {
            private Guid? _bookingId;

            public Person(Identification identification)
            {
                Apply(new PersonCreatedEvent(identification));
            }

            public Booking CreateBooking()
            {
                // Enforce Person invariants
                var booking = new Booking();
                Apply(new PersonBookedEvent(booking.Id));
                return booking;
            }

            public void Release()
            {
                // Enforce Person invariants
                // Should we load the booking here from the aggregate repository?
                // We need to ensure that booking is released as well.
                var booking = BookingRepository.Load(_bookingId);
                booking.Release();
                Apply(new PersonReleasedEvent(_bookingId));
            }

            [EventHandler]
            public void Handle(PersonBookedEvent @event) { _bookingId = @event.BookingId; }

            [EventHandler]
            public void Handle(PersonReleasedEvent @event) { _bookingId = null; }
        }

        public class Booking : AggregateRootBase
        {
            private DateTime _bookingDate;
            private DateTime? _releaseDate;

            public Booking()
            {
                // Enforce invariants
                Apply(new BookingCreatedEvent());
            }

            public void Release()
            {
                // Enforce invariants
                Apply(new BookingReleasedEvent());
            }

            [EventHandler]
            public void Handle(BookingCreatedEvent @event) { _bookingDate = SystemTime.Now(); }

            [EventHandler]
            public void Handle(BookingReleasedEvent @event) { _releaseDate = SystemTime.Now(); }

            // Some other business activities unrelated to a person
        }

    With my understanding of DDD so far, both Person and Booking are separate aggregate roots, for two reasons: there are times when business components will pull Booking objects separately from the database (i.e. a person that has been released has a previous booking modified due to incorrect information), and there should not be locking contention between Person and Booking whenever a Booking needs to be updated. One other business requirement is that a Booking can never occur for a Person more than once at a time. Because of this, I'm concerned about querying the query database on the read side, as there could potentially be some inconsistency there (due to using CQRS and having an eventually consistent read database). Should the aggregate roots be allowed to query the event-sourced backing store by id for objects (lazy-loading them as needed)? Are there any other avenues of implementation that would make more sense?

    Read the article

  • Case insensitive string compare in LINQ-to-SQL

    - by BlueMonkMN
    I've read that it's unwise to use ToUpper and ToLower to perform case-insensitive string comparisons, but I see no alternative when it comes to LINQ-to-SQL. The ignoreCase and CompareOptions arguments of String.Compare are ignored by LINQ-to-SQL (if you're using a case-sensitive database, you get a case-sensitive comparison even if you ask for a case-insensitive one). Is ToLower or ToUpper the best option here? Is one better than the other? I thought I read somewhere that ToUpper was better, but I don't know if that applies here. (I'm doing a lot of code reviews and everyone is using ToLower.)

        Dim s = From row In context.Table
                Where String.Compare(row.Name, "test", StringComparison.InvariantCultureIgnoreCase) = 0

    This translates to an SQL query that simply compares row.Name with "test" and will not return "Test" or "TEST" on a case-sensitive database.
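
    A minimal sketch of the usual workaround: put the normalization in the query itself so LINQ-to-SQL translates it to UPPER() on the server. Culture-specific casing rules are lost in translation, so this is a pragmatic choice rather than a linguistically correct comparison:

        Dim s = From row In context.Table
                Where row.Name.ToUpper() = "test".ToUpper()

    String.Equals with a StringComparison argument is ignored for the same reason as String.Compare, so ToUpper or ToLower inside the Where clause is generally the practical option here.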

    Read the article

  • Sending BLOBs in a JSON service,... how?

    - by Marten Sytema
    Hello, I have a web service (i.e. a servlet) implemented in Java. It gets some data from a MySQL table, with one column being of type BLOB (an image) while the other columns are just plain text. Normally I would store the file outside the database with a pointer to it in the database, but due to circumstance I now have to use this BLOB column... What is the proper way to send this? How do I encode the image in a JSONObject, and how do I parse (and render!) it on the other side? I want to use JSONP to avoid having to proxy it through the consumer's webserver, so that the consumer can just put in a script tag pointing to the web service, calling a callback. Any thoughts on how to handle images in this situation? Thoughts on performance etc. are also interesting!
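
    The common approach is to Base64-encode the BLOB bytes and ship them as a string field, then render them on the client as a data URI. A minimal server-side sketch; the column names and callback name are hypothetical, and java.util.Base64 requires Java 8 (an equivalent encoder such as commons-codec works on older JVMs):

        byte[] imageBytes = resultSet.getBytes("image");          // the BLOB column
        String encoded = java.util.Base64.getEncoder().encodeToString(imageBytes);

        JSONObject json = new JSONObject();
        json.put("title", resultSet.getString("title"));
        json.put("image", encoded);

        response.setContentType("application/javascript");
        response.getWriter().write("handleData(" + json.toString() + ");");  // JSONP callback

    On the client, the payload can then be rendered without another request, e.g. img.src = "data:image/png;base64," + data.image (assuming PNG). Note that Base64 inflates the payload by roughly a third, so very large images may be better served from a separate image endpoint.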

    Read the article

  • How to implement a syndication receiver? (multi-client / single server)

    - by LeonixSolutions
    I have to come up with a system architecture. A few hundred remote devices will be communicating over the internet with a central server, which will receive data and store it in a database. I could:

        write my own TCP/IP-based protocol
        use SOAP
        use AJAX
        use RSS
        anything else?

    This is currently seen as one-way (telemetry, as opposed to SCADA). Would it make a difference if we made it bi-directional? There are no plans to do so, but Murphy's law makes me wary of a uni-directional solution (on the data plane; I imagine that the control plane is bi-directional in all solutions?). I hope that this is not too subjective. I would like a solution which is quick and easy to implement and for others to support, and where the general "communications pipeline" from remote devices to database server can be re-used as the core of future projects. I have a strong background in telecomms protocols, in C/C++ and PHP.

    Read the article

  • How to run a foreach loop with a ternary condition

    - by I Like PHP
    I have two foreach loops to display some data, but I want to use a single foreach based on the database result. That is, if the database returns any rows then foreach ($first as $fk => $fv) should execute, otherwise foreach ($other as $ok) should execute. I am using the ternary operator below, which gives a parse error:

        $n = $db->numRows($taskData); // database results
        <?php ($n) ? foreach ($first as $fk => $fv) : foreach ($other as $ok) { ?>
            <table><tr><td>......some data...</td></tr></table>
        <?php } ?>

    Please suggest how to handle such a condition via the ternary operator, or any other idea. Thanks
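
    The parse error is expected: PHP's ternary operator can only choose between expressions, and a foreach loop is a statement, not an expression. A minimal sketch of one way around it, letting the ternary pick the array and sharing a single loop body (variable names follow the question; the escaping call is illustrative):

        <?php
        $rows = ($n > 0) ? $first : $other;   // ternary is fine here: it selects a value
        foreach ($rows as $key => $value) {
        ?>
            <table><tr><td><?php echo htmlspecialchars($value); ?></td></tr></table>
        <?php } ?>

    If the two loops genuinely need different bodies, an ordinary if/else wrapping two separate foreach blocks is the idiomatic choice.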

    Read the article

  • Passing parameters to web service method in c#

    - by wafa.cs1
    Hello, I'm developing an Android application which will send location data to a web service to store in the server database. In Java I've used a REST-style call, so the request is:

        HttpPost request = new HttpPost("http://trafficmapsa.com/GService.asmx/GPSdata?lon=" + Lon + "&Lat=" + Lat + "&speed=" + speed);

    In the ASP.NET (C#) web service:

        [WebMethod]
        public CountryName GPSdata(Double Lon, Double Lat, Double speed) {

    After passing the data from Android, nothing is returned and there is no data in the database! Is there some library missing in the CS file? I cannot figure out what the problem is.
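
    One likely mismatch here: the parameters are appended to the URL as a query string, but an ASMX [WebMethod] invoked via POST expects them as form-encoded fields in the request body (GET access to /GService.asmx/Method also has to be explicitly enabled in web.config). A minimal Android-side sketch using the Apache HttpClient classes bundled with older Android SDKs; parameter names must match the WebMethod argument names:

        HttpPost request = new HttpPost("http://trafficmapsa.com/GService.asmx/GPSdata");

        List<NameValuePair> form = new ArrayList<NameValuePair>();
        form.add(new BasicNameValuePair("Lon", String.valueOf(Lon)));
        form.add(new BasicNameValuePair("Lat", String.valueOf(Lat)));
        form.add(new BasicNameValuePair("speed", String.valueOf(speed)));
        request.setEntity(new UrlEncodedFormEntity(form));

        HttpResponse response = new DefaultHttpClient().execute(request);
        // Inspect response.getStatusLine() and the body to see what the server actually answered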

    Read the article

  • How can I display characters in a map by mapfile?

    - by sirius
    Hello. I'm trying to show a map using PostGIS + MapServer, and I've displayed a PNG picture on my web page. However, I want to show some characters (labels) on the map, just like the example in the MapServer documentation. I'm using a database (PostgreSQL), not a shapefile. How can I add the characters then? Here is a part of my mapfile:

        LAYER
          CONNECTIONTYPE postgis
          NAME "state"
          # Connect to a remote spatial database
          CONNECTION "user=postgres dbname=*** host=*** password=***"
          PROCESSING "CLOSE_CONNECTION=DEFER"
          DATA "the_geom from province"
          STATUS ON
          TYPE POLYGON
          CLASS
            STYLE
              COLOR 122 122 122
              OUTLINECOLOR 0 0 0
            END
            LABEL
              COLOR 132 31 31
              SHADOWCOLOR 218 218 218
              SHADOWSIZE 2 2
              TYPE TRUETYPE
              FONT arial-bold
              SIZE 12
              ANTIALIAS TRUE
              POSITION CL
              PARTIALS FALSE
              MINDISTANCE 300
              BUFFER 4
            END
          END

    Some said to add a TEXT ([*]) in the LABEL, but I don't know how. Thanks for your help!
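
    A minimal sketch of the usual way to label a PostGIS-backed layer: expose the column that holds the label text in the DATA statement, then reference it from the layer. The name column, gid key and overall query are assumptions about the province table:

        LAYER
          CONNECTIONTYPE postgis
          CONNECTION "user=postgres dbname=*** host=*** password=***"
          # expose the label column alongside the geometry
          DATA "the_geom FROM (SELECT gid, name, the_geom FROM province) AS sub USING UNIQUE gid"
          LABELITEM "name"          # or put TEXT ([name]) inside the CLASS instead
          CLASS
            LABEL
              # existing LABEL settings from the question go here
            END
          END
        END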

    Read the article

  • Large ResultSet on postgresql query

    - by tuler
    I'm running a query against a table in a PostgreSQL database. The database is on a remote machine. The table has around 30 sub-tables using the PostgreSQL partitioning capability. The query will return a large result set, something around 1.8 million rows. In my code I use Spring JDBC support, method JdbcTemplate.query, but my RowCallbackHandler is not being called. My best guess is that the PostgreSQL JDBC driver (I use version 8.3-603.jdbc4) is accumulating the result in memory before calling my code. I thought the fetchSize configuration could control this, but I tried it and nothing changed. I did this as the PostgreSQL manual recommended. This query worked fine when I used Oracle XE, but I'm trying to migrate to PostgreSQL because of the partitioning feature, which is not available in Oracle XE. My environment:

        PostgreSQL 8.3
        Windows Server 2008 Enterprise 64-bit
        JRE 1.6 64-bit
        Spring 2.5.6
        PostgreSQL JDBC Driver 8.3-603
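
    The PostgreSQL JDBC driver only streams results when autocommit is off, the statement is forward-only, and a non-zero fetch size is set; otherwise it buffers the whole result in memory regardless of fetchSize. A minimal sketch of how that might look with JdbcTemplate; the dataSource and transactionManager wiring are assumptions:

        final JdbcTemplate template = new JdbcTemplate(dataSource);
        template.setFetchSize(1000);          // hand rows to the callback in batches

        new TransactionTemplate(transactionManager).execute(new TransactionCallbackWithoutResult() {
            protected void doInTransactionWithoutResult(TransactionStatus status) {
                // Running inside a transaction keeps autocommit off, which the
                // PostgreSQL driver requires before it will use a server-side cursor.
                template.query("select * from big_partitioned_table", new RowCallbackHandler() {
                    public void processRow(ResultSet rs) throws SQLException {
                        // handle one row at a time here
                    }
                });
            }
        });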

    Read the article

  • Working with hibernate/DAO problems

    - by Gandalf StormCrow
    Hello everyone, here is my DAO class:

        public class UsersDAO extends HibernateDaoSupport {

            private static final Log log = LogFactory.getLog(UsersDAO.class);

            protected void initDao() {
                // do nothing
            }

            public void save(User transientInstance) {
                log.debug("saving Users instance");
                try {
                    getHibernateTemplate().saveOrUpdate(transientInstance);
                    log.debug("save successful");
                } catch (RuntimeException re) {
                    log.error("save failed", re);
                    throw re;
                }
            }

            public void update(User transientInstance) {
                log.debug("updating User instance");
                try {
                    getHibernateTemplate().update(transientInstance);
                    log.debug("update successful");
                } catch (RuntimeException re) {
                    log.error("update failed", re);
                    throw re;
                }
            }

            public void delete(User persistentInstance) {
                log.debug("deleting Users instance");
                try {
                    getHibernateTemplate().delete(persistentInstance);
                    log.debug("delete successful");
                } catch (RuntimeException re) {
                    log.error("delete failed", re);
                    throw re;
                }
            }

            public User findById(java.lang.Integer id) {
                log.debug("getting Users instance with id: " + id);
                try {
                    User instance = (User) getHibernateTemplate().get("project.hibernate.Users", id);
                    return instance;
                } catch (RuntimeException re) {
                    log.error("get failed", re);
                    throw re;
                }
            }
        }

    Now I wrote a test class (not a JUnit test) to check that everything works. My user has these fields in the database: userID, which is a 5-character string and the unique/primary key, plus fields such as address, dob etc. (15 columns in the database table in total). In my test class I instantiated a User and added the values like this:

        User user = new User();
        user.setAddress("some address");

    and so on for all 15 fields; then, after assigning data to the User object, I called the DAO to save it to the database with UsersDao.save(user), and save works perfectly. My question is: how do I update/delete users using the same logic? For example, I tried this (to delete a user from the users table):

        User user = new User();
        user.setUserID("1s54f"); // the unique key for users; no two keys are the same
        UsersDao.delete(user);

    I wanted to delete the user with this key, but it's obviously different. Can someone please explain how to do these? Thank you.
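
    Hibernate's delete() and update() expect a persistent instance whose identifier is populated, ideally one loaded through the session, not a bare new object with one field set. A minimal sketch of DAO methods keyed by the String userID, assuming userID is the mapped identifier property of the User class:

        public User findByUserID(String userID) {
            return (User) getHibernateTemplate().get(User.class, userID);
        }

        public void deleteByUserID(String userID) {
            User persistent = findByUserID(userID);   // load the persistent instance first
            if (persistent != null) {
                getHibernateTemplate().delete(persistent);
            }
        }

        public void updateAddress(String userID, String newAddress) {
            User persistent = findByUserID(userID);
            if (persistent != null) {
                persistent.setAddress(newAddress);            // change what you need
                getHibernateTemplate().update(persistent);    // or saveOrUpdate(persistent)
            }
        }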

    Read the article

  • Shared Variable Among Ruby Processes

    - by Jesse J
    I have a Ruby program that loads two very large YAML files, so I could get some speed-up by taking advantage of multiple cores and forking off some processes. I've tried looking, but I'm having trouble figuring out how, or even whether, I can share variables between processes. The following code is what I currently have:

        @proteins = ""
        @decoyProteins = ""

        fork do
          @proteins = YAML.load_file(database)
          exit
        end

        fork do
          @decoyProteins = YAML.load_file(database)
          exit
        end

        p @proteins["LVDK"]

    p displays nil, though, because of the fork. So is it possible to have the forked processes share the variables? And if so, how?
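
    fork gives each child a copy-on-write copy of the parent's memory, so assignments made in a child never show up in the parent. The usual workaround is to send the parsed result back over a pipe (or use threads, or a shared store). A minimal sketch with IO.pipe and Marshal; variable names mirror the question and the two file paths are assumed to differ:

        require 'yaml'

        def load_in_child(path)
          reader, writer = IO.pipe
          pid = fork do
            reader.close
            writer.write(Marshal.dump(YAML.load_file(path)))   # serialise the parsed data
            writer.close
            exit!
          end
          writer.close
          [pid, reader]
        end

        pid1, r1 = load_in_child(database)
        pid2, r2 = load_in_child(decoy_database)

        @proteins      = Marshal.load(r1.read)   # blocks until that child finishes writing
        @decoyProteins = Marshal.load(r2.read)
        Process.wait(pid1)
        Process.wait(pid2)

        p @proteins["LVDK"]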

    Read the article

  • Evaluating Django Chained QuerySets Locally

    - by jnadro52
    Hello all: I am hoping someone can help me out with a quick question I have regarding chaining Django querysets. I am noticing a slowdown because I am evaluating many data points in the database to create data trends. I was wondering if there was a way to have the chained filters evaluated locally instead of hitting the database. Here is a (crude) example:

        pastries = Bakery.objects.filter(productType='pastry')  # <--- will obviously always hit the DB when evaluated
        cannoli = pastries.filter(specificType='cannoli')       # <--- can this be evaluated locally instead of hitting the DB, as long as pastries was already evaluated?

    I have checked the docs and I do not see anything specifying this, so I guess it's not possible, but I wanted to check with the 'braintrust' first ;-). BTW - I know that I can do this myself by implementing some methods to loop through these datapoints and evaluate the criteria, but there are so many datapoints that my deadline does not permit me to implement this manually. Thanks in advance.
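
    Chained filters always build a new SQL query, so the second filter hits the database again. A minimal sketch of the usual workaround: force one evaluation into a list and do the narrower filtering in Python (field names follow the example above):

        pastries = list(Bakery.objects.filter(productType='pastry'))   # single DB round-trip

        # Filter in memory; no further queries are issued
        cannoli = [p for p in pastries if p.specificType == 'cannoli']
        eclairs = [p for p in pastries if p.specificType == 'eclair']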

    Read the article

  • How to consolidate multiple LOG files into one .LDF file in SQL2000

    - by John Galt
    Here is what sp_helpfile says about my current database (recovery model is Simple) in SQL2000:

        name                     fileid  filename                                 size        maxsize    growth      usage
        MasterScratchPad_Data    1       C:\SQLDATA\MasterScratchPad_Data.MDF     6041600 KB  Unlimited  5120000 KB  data only
        MasterScratchPad_Log     2       C:\SQLDATA\MasterScratchPad_Log.LDF      2111304 KB  Unlimited  10%         log only
        MasterScratchPad_X1_Log  3       E:\SQLDATA\MasterScratchPad_X1_Log.LDF   191944 KB   Unlimited  10%         log only

    I'm trying to prepare this for a detach and then an attach to a SQL2008 instance, but I don't want to keep the 2nd .LDF file (I'd like to have just one file for the log). I have backed up the database. I have issued BACKUP LOG MasterScratchPad WITH TRUNCATE_ONLY. I have run multiple DBCC SHRINKFILE commands on both of the LOG files. How can I accomplish this goal of having just one .LDF? I cannot find anything on how to delete the file with fileid 3 and/or how to consolidate multiple files into one log file.
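
    SQL Server never merges log files; the route is to drop the extra one, which only succeeds once that file contains no active portion of the log. A minimal sketch of the sequence, to be repeated if the REMOVE FILE step complains that the file is still in use:

        USE MasterScratchPad;
        GO
        BACKUP LOG MasterScratchPad WITH TRUNCATE_ONLY;   -- already done above; repeat as needed
        DBCC SHRINKFILE ('MasterScratchPad_X1_Log');
        GO
        -- Drops the secondary log file once no active log remains in it
        ALTER DATABASE MasterScratchPad REMOVE FILE MasterScratchPad_X1_Log;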

    Read the article

  • MVC Entity Model not showing my table

    - by Jessica
    I have a database with multiple tables and some basic relationships. Here is an example of the problem I am having. My database:

        Org:         ID, Name, etc.
        Detail1:     ID, D1name
        Org_Detail1: Org_ID, Detail1_ID
        Detail2:     ID, D2Name
        Org_Detail2: Org_ID, Detial1_ID, BooleanField

    My problem is that the Org_Detail1 table is not showing up in the entity model, but the Org_Detail2 table does. I thought it may be because the Org_Detail1 table only contains two ID fields that are both primary keys, while the Org_Detail2 table contains 2 primary-key ID fields as well as a boolean field. If I add a dummy field to Org_Detail1 and update the model, it still won't show up and won't allow me to add a new entity relating to the Org_Detail1 table. The table won't even show up in the list, but it is listed under the tables. Is there any solution to get this table to appear in my model?

    Read the article

  • How to manipulate a gridview in asp.net when the columns in the gridview are generated at runtime

    - by Nandini
    Hi guys, in my application I have a GridView in which columns are created at runtime. These columns are created when I enter data into a database table. My GridView looks like this:

        Description | Column1 | Column2 | Column3 | ...
        Input volt  | 55      | 56      | 553     | ...
        Output volt | 656     | 45      | 67      | ...

    Column1, Column2, Column3 may vary at runtime, and I need to enter values into these columns at runtime. But I can't bind these columns because they are created at runtime. The first column, "Description", will not change; it remains constant, and for each description there is a value in each column. Finally, the data will be saved to the database. How can I add columns to the GridView at runtime?
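
    One common pattern, sketched minimally: build a DataTable whose columns come from the database at runtime and bind it, letting AutoGenerateColumns create the GridView columns. The runtimeColumnNames source and the grid control name are assumptions:

        // Column names discovered at runtime (e.g. read from the database)
        var table = new DataTable();
        table.Columns.Add("Description", typeof(string));
        foreach (string name in runtimeColumnNames)
        {
            table.Columns.Add(name, typeof(decimal));
        }

        DataRow row = table.NewRow();
        row["Description"] = "Input volt";
        // fill row[name] for each runtime column from user input here
        table.Rows.Add(row);

        MyGridView.AutoGenerateColumns = true;   // one grid column per DataTable column
        MyGridView.DataSource = table;
        MyGridView.DataBind();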

    Read the article

  • Problem with datagridview in vb.net

    - by user225269
    I'm trying to add a DataGridView in VB.NET, but it does not let me change the connection string or the database it should connect to. The only thing I'm seeing is the previous MS SQL database that I connected to the DataGridView, and every time I click "New Connection" the window closes and leaves me with the DataGridView still using the previous connection. That's no use, because now I want to connect it to MySQL, not MS SQL. It's some sort of cache-like feature in VB.NET; how do I get rid of it so that I can add a new connection for MySQL? Do I need to reinstall Visual Studio 2008?

    Read the article

  • Convert object to DateRange

    - by user655832
    I'm querying an underlying PostgreSQL database using pandas 0.8. Pandas is returning the DataFrame properly, but the underlying timestamp column in my database comes back as a generic "object" dtype in pandas. As I would eventually like to do seasonal normalization of my data, I am curious how to convert this generic "object" column to something that is appropriate for analysis. Here is my current code to retrieve the data:

        # get records from db example
        import pandas.io.sql as psql
        import psycopg2

        # define query to get all subs created this year
        QRY = """
        select i i,
               i * random() f,
               case when random() > 0.5 then true else false end t,
               (current_date - (i*random())::int)::timestamp with time zone tsz
        from generate_series(1,1000) as s(i)
        order by 4;
        """
        CONN_STRING = "host='localhost' port=5432 dbname='postgres' user='postgres'"

        # connect to db
        conn = psycopg2.connect(CONN_STRING)

        # get some data, set index on relid column
        df = psql.frame_query(QRY, con=conn)
        print "Row count retrieved: %i" % (len(df),)

    Thanks for any help you can render. M
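
    A minimal sketch of the usual conversion: parse the object column into datetime64 values and make it the index, after which resampling and other time-series operations work. The syntax below is for recent pandas versions; the exact resample arguments differ slightly in 0.8:

        import pandas as pd

        df['tsz'] = pd.to_datetime(df['tsz'])   # object -> datetime64 (tz-aware strings may need utc=True)
        df = df.set_index('tsz').sort_index()

        # Example: monthly means of the f column, as a starting point for seasonal work
        monthly = df['f'].resample('M').mean()
        print(monthly.head())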

    Read the article

  • Encoding issue with form and HTML Purifier / MySQL

    - by Andrew Heath
    Driving me nuts... The page with the form is encoded as Unicode (UTF-8) via:

        <meta http-equiv="content-type" content="text/html; charset=utf-8">

    The entry column in the database is text utf8_unicode_ci. Copying text from a Word document with quotation marks in it, like this: “1922.”, fails instantly and ends up in the database as â??1922.â?? (typing new data into the form, including " characters, works fine... it's the cut and paste from Word...). The PHP steps behind the scenes are:

        grab the value from POST
        run it through HTML Purifier with default settings
        run it through mysql_real_escape_string
        insert the query into the database

    Help?
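
    The usual culprit when curly Word quotes survive typed input but not pasted input is the MySQL client connection still talking latin1 while the page and column are UTF-8. A minimal sketch using the old mysql_* API from the question; connection details and table name are placeholders:

        <?php
        $link = mysql_connect('localhost', 'user', 'password');
        mysql_select_db('mydb', $link);

        // Make the client connection UTF-8 too; without this, “smart quotes” get
        // mangled into sequences like â?? on the way in.
        mysql_set_charset('utf8', $link);

        $clean = $purifier->purify($_POST['entry']);          // HTML Purifier as before
        $sql = sprintf("INSERT INTO entries (entry) VALUES ('%s')",
                       mysql_real_escape_string($clean, $link));
        mysql_query($sql, $link);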

    Read the article

  • How do I display a field's hidden characters in the result of a query in Oracle?

    - by Chris Williams
    I have two rows with a varchar column that are different according to a Java .equals(). I can't easily change or debug the Java code that's running against this particular database, but I do have access to query the database directly using SQL Developer. The fields look the same to me (they are street addresses with two lines separated by some newline or carriage-return/newline combination). Is there a way to see all of the hidden characters in the result of a query? I'd like to avoid having to use the ascii() function with substr() on each of the rows to figure out which hidden character is different. I'd also accept a query that shows me the first character difference between the two fields.
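
    A minimal sketch using Oracle's DUMP() function, which prints the code of every byte in the value and makes stray CR/LF or non-breaking-space characters visible; table and column names are hypothetical:

        -- 1016 = show the bytes in hex along with the character set name
        SELECT id, DUMP(address_line, 1016) AS raw_bytes
        FROM addresses
        WHERE id IN (101, 102);

        -- Or make the usual suspects visible inline
        SELECT id,
               REPLACE(REPLACE(address_line, CHR(13), '\r'), CHR(10), '\n') AS visible
        FROM addresses
        WHERE id IN (101, 102);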

    Read the article

  • Should I be concerned about performance with connections to multiple databases?

    - by Josh Ryan
    I have a PHP app that is running on an Apache server with MySQL databases. Based on the subdomain that users access, I connect them to a database (sub1.domain.com connects to database_sub1 and sub2.domain.com connects to database_sub2). Right now there are 10 subdomain-database combos, but that number could potentially grow to well over 100. So, is this a bad thing? Considering my situation, is mysql_pconnect a better way to go? Thanks, and please let me know if more info would be helpful. Josh

    Read the article
