Search Results

Search found 11052 results on 443 pages for 'linked tables'.

Page 307 of 443

  • When I run Django on Dreamhost using SQLite, why do I get an OperationalError telling me that a table doesn't exist?

    - by Paul D. Waite
    I had a Django site running on Dreamhost. Although I used SQLite when developing locally, I initially used MySQL on Dreamhost, because that’s what the wiki page said to do, and because if I’m using an ORM, I might as well take advantage of it by running against a different database. After a while, I switched the settings on the server to use SQLite, to make it easier to keep my development database in sync with the server one. python manage.py syncdb worked on the server, but when I tried to access the site, I got an OperationalError. The Django error page said that one of my tables didn’t exist. I checked the database using sqlite on the command line on the server, and using python manage.py shell, and both worked fine.
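
    A common cause of this exact symptom, though not confirmed in the question, is a relative SQLite path in settings.py: manage.py and the web server run from different working directories, so the site silently opens a different, empty database file. A minimal sketch of the usual fix, in the old single-database settings style; the file name site.db is illustrative:

      # settings.py -- a hedged sketch, not the poster's actual settings.
      # An absolute path means the web server process and manage.py open
      # the same SQLite file regardless of working directory.
      import os

      PROJECT_ROOT = os.path.dirname(os.path.abspath(__file__))

      DATABASE_ENGINE = 'sqlite3'
      DATABASE_NAME = os.path.join(PROJECT_ROOT, 'site.db')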

    Read the article

  • add schema to path in postgresql

    - by veilig
    I'm in the process of moving applications over from having everything in the public schema to each application having its own schema. For each application, I have a small script that will create the schema and then create the tables, functions, etc. in that schema. Is there any way to automatically add a newly created schema to the search_path? Currently, the only way I see is to find the user's current path with SHOW search_path; and then add the new schema to it with SET search_path TO xxx,yyy,zzz; I would like some way to just say "append schema zzz to the user's search_path". Is this possible?
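
    PostgreSQL has no built-in "append to search_path" command, but here is a short sketch of two common workarounds; the schema names are the question's own placeholders and app_user stands for the role concerned:

      -- For the current session: reuse the existing value instead of retyping it.
      SELECT set_config('search_path',
                        current_setting('search_path') || ', zzz',
                        false);

      -- Or persist it per role, so every new session picks it up.
      ALTER ROLE app_user SET search_path = xxx, yyy, zzz;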

    Read the article

  • CheckboxList not setting Selected with Viewstate disabled

    - by Earlz
    I have a CheckBoxList that seems to load and do everything right, except that when I do a postback, it will not have the Item.Selected property set. I have ViewState disabled for the entire page. I load it like so (inside Page_Load on every load): foreach (DataRow service in d.Tables[0].Rows) { cblServices.Items.Add(new ListItem((string)service["description"], service["id"].ToString())); } My markup is simple: <asp:CheckBoxList runat="server" ID="cblServices" Width="300px"></asp:CheckBoxList> and then I use basically something like this (in a _Click server-side event for a button): foreach(ListItem item in cblServices.Items){ if(item.Selected){ MyLabel.Text+="selected: "+item.Value+item.Text; } } and MyLabel never has any text added to it. I can verify with the debugger that it does reach the _Click's foreach loop, but no item is ever selected. What could be the cause of this?
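
    A likely cause, though not confirmed here, is the page lifecycle: with ViewState off, the posted checkbox values are mapped onto the list during LoadPostData, which runs before Page_Load, so items added in Page_Load arrive too late to receive their Selected flags. A hedged sketch of the usual fix is to populate the list in Init instead (d is the poster's own DataSet):

      // Repopulate in OnInit so the items already exist when ASP.NET
      // processes the posted selections in LoadPostData.
      protected override void OnInit(EventArgs e)
      {
          base.OnInit(e);
          foreach (DataRow service in d.Tables[0].Rows)
          {
              cblServices.Items.Add(
                  new ListItem((string)service["description"],
                               service["id"].ToString()));
          }
      }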

    Read the article

  • A way to avoid deriving from the provider classes in mvc authentication

    - by Shymep
    Looking for the best practice for authentication in MVC, I unfortunately didn't find a clear answer to my question. Thinking about the problem, I tried to come up with some principles that could be useful in my design. I would like to use a base AccountController class, and I want to place all the tables such as "Users", Roles, Rights etc. into my own database. But I wouldn't like to implement the standard aspnetdb design (which can easily be generated by using aspnet_regsql). So the main question is: can I do this without deriving from abstract classes like MembershipProvider, RoleProvider etc.? What I would prefer not to do is implement all the abstract methods from these classes. The second question is still about the best practice for authentication, e.g. for small projects versus large ones.

    Read the article

  • Can this SQL query be simplified?

    - by Bas
    I have the following tables: Person, {"Id", "Name", "LastName"} Sports, {"Id", "Name", "Type"} SportsPerPerson, {"Id", "PersonId", "SportsId"} For my query I want to get all the Persons that exercise a specific Sport, when I only have the Sports "Name" attribute at my disposal. To retrieve the correct rows I've figured out the following queries: SELECT * FROM Person WHERE Person.Id in ( SELECT SportsPerPerson.PersonId FROM SportsPerPerson INNER JOIN Sports on SportsPerPerson.SportsId = Sports.Id WHERE Sports.Name = 'Tennis' ) AND Person.Id in ( SELECT SportsPerPerson.PersonId FROM SportsPerPerson INNER JOIN Sports on SportsPerPerson.SportsId = Sports.Id WHERE Sports.Name = 'Soccer' ) OR SELECT * FROM Person WHERE Id IN (SELECT PersonId FROM SportsPerPerson WHERE SportsId IN (SELECT Id FROM Sports WHERE Name = 'Tennis')) AND Id IN (SELECT PersonId FROM SportsPerPerson WHERE SportsId IN (SELECT Id FROM Sports WHERE Name = 'Soccer')) Now my question is, isn't there an easier way to write this query? Using just OR won't work because I need the people who play 'Tennis' AND 'Soccer'. But using AND also doesn't work because the values aren't on the same row.
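
    One common simplification, sketched below, is a single join with a HAVING clause that demands both sports; it assumes a person is linked to a given sport at most once in SportsPerPerson:

      SELECT p.Id, p.Name, p.LastName
      FROM Person p
      INNER JOIN SportsPerPerson spp ON spp.PersonId = p.Id
      INNER JOIN Sports s            ON s.Id = spp.SportsId
      WHERE s.Name IN ('Tennis', 'Soccer')
      GROUP BY p.Id, p.Name, p.LastName
      HAVING COUNT(DISTINCT s.Name) = 2;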

    Read the article

  • MySQL event to update another database table

    - by Lee
    Hey all, I have just taken over a project for a client and the database schema is in a total mess. I would like to rename a load of fields and make it a relational database. But doing this will be a painstaking process, as they have an API running off it as well. So the idea would be to create a new database and start rewriting the code to use this instead. But I need a way to keep these tables in sync during this process. Would you agree that I should use MySQL EVENTs to keep updating the new tables on inserts, updates and deletes? Or can you suggest a better way? Hope you can advise; thanks for any input I get.
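
    One alternative worth weighing, sketched below: row-level triggers fire on every write immediately, whereas EVENTs only run on a schedule and would have to diff the tables. You would need one trigger each for INSERT, UPDATE and DELETE on every table being mirrored; the table and column names here are placeholders:

      CREATE TRIGGER customer_ai AFTER INSERT ON old_db.customer
      FOR EACH ROW
        INSERT INTO new_db.customer (id, name)
        VALUES (NEW.id, NEW.name);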

    Read the article

  • How to specify an association relation using declarative base

    - by sam
    I have been trying to create an association relation between two tables, intake and module. Each intake has a one-to-many relationship with the modules. However there is a coursework assigned to each module, and each coursework has a duedate which is unique to each intake. I tried this but it didn't work: intake_modules_table = Table('tg_intakemodules',metadata, Column('intake_id',Integer,ForeignKey('tg_intake.intake_id', onupdate="CASCADE",ondelete="CASCADE")), Column('module_id',Integer,ForeignKey('tg_module.module_id', onupdate ="CASCADE",ondelete="CASCADE")), Column('dueddate', Unicode(16)) ) class Intake(DeclarativeBase): __tablename__ = 'tg_intake' #{ Columns intake_id = Column(Integer, autoincrement=True, primary_key=True) code = Column(Unicode(16)) commencement = Column(DateTime) completion = Column(DateTime) #{ Special methods def __repr__(self): return '"%s"' %self.code def __unicode__(self): return self.code #} class Module(DeclarativeBase): __tablename__ ='tg_module' #{ Columns module_id = Column(Integer, autoincrement=True, primary_key=True) code = Column(Unicode(16)) title = Column(Unicode(30)) #{ relations intakes = relation('Intake', secondary=intake_modules_table, backref='modules') #{ Special methods def __repr__(self): return '"%s"'%self.title def __unicode__(self): return '"%s"'%self.title #} When I do this, the column duedate specified in the intake_modules_table is not created. Some help will be appreciated here. Thanks in advance.
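
    Because the link table carries its own data (the due date), the usual approach is the association-object pattern: map tg_intakemodules as a class of its own instead of passing it as a plain secondary table. A hedged sketch, reusing the question's DeclarativeBase and old-style relation():

      from sqlalchemy import Column, Integer, Unicode, ForeignKey
      from sqlalchemy.orm import relation

      class IntakeModule(DeclarativeBase):
          __tablename__ = 'tg_intakemodules'
          intake_id = Column(Integer, ForeignKey('tg_intake.intake_id'),
                             primary_key=True)
          module_id = Column(Integer, ForeignKey('tg_module.module_id'),
                             primary_key=True)
          duedate = Column(Unicode(16))

          # each association row knows its intake and its module
          intake = relation('Intake', backref='intake_modules')
          module = relation('Module', backref='module_intakes')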

    Read the article

  • Converting to Byte Array after reading a BLOB from SQL in C#

    - by Soham
    Hi all, I need to read a BLOB and store it in a byte[] before going forward with deserializing. Consider: //Reading the Database with DataAdapterInstance.Fill(DataSet); DataTable dt = DataSet.Tables[0]; foreach (DataRow row in dt.Rows) { byte[] BinDate = Byte.Parse(row["Date"].ToString()); // convert successfully to byte[] } I need help with this C# statement, as I am not able to convert an object type into a byte[]. Note: the "Date" field in the table is a BLOB and not of type Date. Help appreciated, Soham
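
    A minimal sketch of the usual conversion: ADO.NET already surfaces BLOB columns as byte[], so a cast is enough and no string round-trip is needed (Byte.Parse produces a single byte, not an array):

      foreach (DataRow row in dt.Rows)
      {
          byte[] binDate = row["Date"] == DBNull.Value
              ? null
              : (byte[])row["Date"];
          // ... deserialize from binDate here
      }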

    Read the article

  • Is there a difference between SMO ServerConnection transaction methods and using the SqlConnectionObject?

    - by YWE
    I am using SMO to create databases and tables on a SQL Server. I want to do so in a transaction. Are both of these methods of doing so valid and equivalent: First method: Server server; //... server.ConnectionContext.BeginTransaction(); //... server.ConnectionContext.CommitTransaction(); Second method: Server server; // ... SqlConnection conn = server.ConnectionContext.SqlConnectionObject; SqlTransaction trans = conn.BeginTransaction(); // ... trans.Commit();

    Read the article

  • C# ProgressBar design pattern

    - by MadSeb
    Hi, I'm working on a database upgrader application. The upgrader updates the schema of a database (adds new columns, renames columns, adds new tables and views to an existing database by executing SQL statements). When a user wants to upgrade from version 1.0 to 2.0, "Upgrader" objects are taken from an object factory and the "Execute" method of each "Upgrader" is called. The GUI of the application has a progress bar, and each time an upgrader object performs a SQL statement the progress bar gets incremented. while (!version.Equal(CurrentVersion)) { IUpgrader myUpgrader = UpgraderFactory.GetUpgrader(version); myUpgrader.Execute(UpgradedFile,progressbar); version.Increment(); } My question is very simple: how should the upgrader object communicate with the progress bar? In the code above, the upgrader object is given direct access to the progress bar, but I'm wondering if a better way of doing this, or a better design pattern, exists. Regards, Seb
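
    One commonly used alternative, sketched below: let the upgrader raise an event (or accept a callback) rather than take the ProgressBar itself, so the upgrade logic stays free of UI types and the form decides how to render progress. The event is an assumed addition to the question's IUpgrader:

      public interface IUpgrader
      {
          // raised after each SQL statement, with the count executed so far
          event Action<int> StatementExecuted;
          void Execute(string upgradedFile);
      }

      // wiring it up in the form:
      myUpgrader.StatementExecuted += count => progressbar.Increment(1);
      myUpgrader.Execute(UpgradedFile);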

    Read the article

  • Global find object references in NHibernate

    - by Miral
    Is it possible to perform a global reversed-find on NHibernate-managed objects? Specifically, I have a persistent class called "Io". There are a huge number of fields across multiple tables which can potentially contain an object of that type. Is there a way (given a specific instance of an Io object), to retrieve a list of objects (of any type) that actually do reference that specific object? (Bonus points if it can identify which specific fields actually contain the reference, but that's not critical.) Since the NHibernate mappings define all the links (and the underlying database has corresponding foreign key links), there ought to be some way to do it.

    Read the article

  • Change Large Number of Record Keys using Map Table

    - by Coyote
    I have a set of records indexed by id numbers, and I need to convert these records' ids to new id numbers. I have a two-column table mapping the old numbers to the new numbers. For example, given these two tables, what would the update statement look like? Given: OLD_TO_NEW oldid | newid ----------------- 1234 0987 7698 5645 ... ... and id | data ---------------- 1234 'yo' 7698 'hey' ... ... Need: id | data ---------------- 0987 'yo' 5645 'hey' ... ... This is Oracle, so I have access to PL/SQL; I'm just trying to avoid it.
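
    A hedged sketch of a plain-SQL approach (no PL/SQL): a correlated UPDATE driven by the mapping table. It assumes every old id has exactly one row in OLD_TO_NEW; the data table is called records here as a placeholder:

      UPDATE records r
         SET r.id = (SELECT m.newid
                       FROM old_to_new m
                      WHERE m.oldid = r.id)
       WHERE EXISTS (SELECT 1
                       FROM old_to_new m
                      WHERE m.oldid = r.id);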

    Read the article

  • Methods for ensuring security between users in multi-user applications

    - by Emilio
    I'm writing a multiuser application (.NET - C#) in which each user's data is separated from the others' and there is no data that's common between users. It's critical to ensure that no user has access to another user's data. What are some approaches for implementing security at the database level and/or in the application architecture to accomplish this? For example (and this is totally made up - I'm not suggesting it's a good or bad approach), including a userID column in all data tables might be an approach. I'm developing the app in C# (ASP.NET) and SQL Server 2008. I'm looking for options that are either native to the tools I'm using or general patterns.
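
    One application-architecture option, building on the userID-column idea from the question: funnel every query through a repository that is constructed with the current user's id and always applies it, so no data access path exists without the filter. A hedged sketch with illustrative table and class names:

      using System.Data;
      using System.Data.SqlClient;

      public class UserScopedRepository
      {
          private readonly int _userId;
          private readonly string _connectionString;

          public UserScopedRepository(int userId, string connectionString)
          {
              _userId = userId;
              _connectionString = connectionString;
          }

          public DataTable GetDocuments()
          {
              using (var conn = new SqlConnection(_connectionString))
              using (var cmd = new SqlCommand(
                  "SELECT Id, Title FROM Documents WHERE UserID = @UserID", conn))
              {
                  cmd.Parameters.AddWithValue("@UserID", _userId);
                  var table = new DataTable();
                  new SqlDataAdapter(cmd).Fill(table);   // Fill opens/closes conn
                  return table;
              }
          }
      }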

    Read the article

  • Mixing A4 and A3 in one LaTeX file

    - by jorgusch
    Hello, I am finishing my thesis and have a large appendix. Some tables only look good if first set on A3 and then printed on A4 paper. Anyhow, working on all the files separately is fine, but I struggle to compile them all in one. I use the geometry package and start the document with: \usepackage[a4paper,left=30mm,right=20mm,top=20mm, bottom=20mm]{geometry} For the appendix, I want to include the table and use \newgeometry{a3paper,left=25mm,right=15mm, top=15mm, bottom=15mm} However, the command is completely ignored and I can read "a3paper,left=25mm,right=15mm, top=15mm, bottom=15mm" above the table. What did I miss? Is it even possible? If not, how do I get the numbers right if I have to include it as a PDF (which works)?
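
    Two likely culprits, offered as guesses: an older geometry package that has no \newgeometry at all (which would leave the options typeset as text after the error), or the fact that \newgeometry changes margins only and never the paper size. A hedged sketch of one workaround under pdfLaTeX: switch the page dimensions yourself around the oversized table and restore them afterwards:

      \clearpage
      \pdfpagewidth=297mm \pdfpageheight=420mm   % A3 portrait
      \newgeometry{left=25mm,right=15mm,top=15mm,bottom=15mm}
      % ... the oversized appendix table ...
      \clearpage
      \pdfpagewidth=210mm \pdfpageheight=297mm   % back to A4
      \restoregeometry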

    Read the article

  • How to SET ARITHABORT ON for connections in Linq To SQL

    - by Laurence
    By default, the SQL connection option ARITHABORT is OFF for OLE DB connections, which I assume LINQ to SQL is using. However, I need it to be ON. The reason is that my DB contains some indexed views, and any insert/update/delete operations against tables that are part of an indexed view fail if the connection does not have ARITHABORT ON. Even selects against the indexed view itself fail if the WITH(NOEXPAND) hint is used (which you have to use in SQL Server Standard Edition to get the performance benefit of the indexed view). Is there somewhere in the data context where I can specify I want this option ON? Or somewhere in code I can do it? I have managed a clumsy workaround, but I don't like it: I have to create a stored procedure for every select/insert/update/delete operation, and in this proc first run SET ARITHABORT ON, then exec another proc which contains the actual select/insert/update/delete. In other words, the first proc is just a wrapper for the second. It doesn't work to just put SET ARITHABORT ON above the select/insert/update/delete code.
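
    One sketch of a lighter workaround: open the DataContext's connection yourself (LINQ to SQL then keeps using that one connection instead of opening and closing per query) and issue the SET on it once. MyDataContext stands in for the generated context class:

      using (var db = new MyDataContext(connectionString))
      {
          db.Connection.Open();                  // keep a single connection
          db.ExecuteCommand("SET ARITHABORT ON");

          // ... LINQ queries and SubmitChanges() against the
          // indexed-view tables now run with the option set ...
          db.SubmitChanges();
      }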

    Read the article

  • How to include associative table information and still retain strong typing

    - by mwright
    I am using LINQ to SQL to create strongly typed objects in my project. Let's say I have an object that is represented by a database table. This object has a "Current State" that is kept in an associative table. I would like to make a single DB call where I pull back the two tables joined, but I am unsure how I should populate that information into some sort of object that preserves strong typing within my model, so that the view can simply consume the information from the objects. I looked into creating a view model for this, but it doesn't seem to quite fit. Am I thinking about this in the wrong way? What information can I include to help clarify my problem? Other details that may or may not be important: it's an MVC project.
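
    A hedged sketch of one way to keep it strongly typed in a single round trip: project the join into a small class the view consumes. Entity and property names are illustrative, not taken from the question:

      public class ItemWithState
      {
          public Item Item { get; set; }
          public string CurrentState { get; set; }
      }

      var model = (from i in db.Items
                   join s in db.ItemStates on i.Id equals s.ItemId
                   select new ItemWithState
                   {
                       Item = i,
                       CurrentState = s.StateName
                   }).ToList();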

    Read the article

  • Internet Explorer visual element stacking issue

    - by Michael
    Gday All, I know this issue is well known, however I have searched high and low for a solution to no avail. I have created a menu system using nested ordered lists where the menu functionality is controlled by CSS and Jquery. The menu works perfectly in FF, Chrome, Opera and Epiphany. However in IE 6/7/8 my popup menu is being displayed underneath a table. See the image below. The very top box is a div element containing my menu system. I am working with legacy code that uses tables for display so the next box and the "ts found. Try a different subcate" text is in a "td" element of a table. I have tried to force the table to have a lower z-index but this does not work. Any insights into why this is only present in IE would be appreciated. Cheers, Michael
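
    A hedged sketch of the usual IE6/7 fix: those browsers give every positioned element its own stacking context, so the popup's z-index only competes inside its parent. Raising the z-index of the menu's positioned container (the top div) above the table is normally what resolves it; the selectors below are placeholders for the real markup:

      #menuContainer       { position: relative; z-index: 100; }
      #menuContainer ol ol { position: absolute; z-index: 101; }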

    Read the article

  • SQL Server - Query Short-Circuiting?

    - by Sam Schutte
    Do T-SQL queries in SQL Server support short-circuiting? For instance, I have a situation where I have two databases and I'm comparing data between the two tables to match and copy some info across. In one table, the "ID" field will always have leading zeros (such as "000000001234"), and in the other table, the ID field may or may not have leading zeros (might be "000000001234" or "1234"). So my query to match the two is something like: select * from table1 where table1.ID LIKE '%1234' To speed things up, I'm thinking of adding an OR before the like that just says: table1.ID = table2.ID to handle the case where both IDs have the padded zeros and are equal. Will doing so speed up the query by matching items on the "=" and not evaluating the LIKE for every single row (will it short-circuit and skip the LIKE)?
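
    SQL Server makes no guarantee about short-circuiting OR conditions (the optimizer is free to evaluate predicates in any order), so a sketch of a more predictable rewrite is to normalise the key and join on equality. It assumes the padded IDs are always 12 characters:

      SELECT t1.*
      FROM table1 AS t1
      INNER JOIN table2 AS t2
          ON t1.ID = RIGHT(REPLICATE('0', 12) + t2.ID, 12);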

    Read the article

  • Drawing star shapes with variable parameters

    - by owca
    Hi. I have a task to write a program allowing users to draw stars, which can differ in size and number of arms. When I was dealing with basic stars I was doing it with GeneralPath and arrays of points: int xPoints[] = { 55, 67, 109, 73, 83, 55, 27, 37, 1, 43 }; int yPoints[] = { 0, 36, 36, 54, 96, 72, 96, 54, 36, 36 }; Graphics2D g2d = ( Graphics2D ) g; GeneralPath star = new GeneralPath(); star.moveTo( xPoints[ 0 ], yPoints[ 0 ] ); for ( int k = 1; k < xPoints.length; k++ ) star.lineTo( xPoints[ k ], yPoints[ k ] ); star.closePath(); g2d.fill( star ); What method should I choose for drawing stars with variable inner and outer radius, as well as a different number of arms? This is what I should obtain:
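
    A hedged sketch of the usual approach: generate the vertices instead of hard-coding them. Points alternate between the outer and inner radius around a circle, so the number of arms, the size and the proportions all become parameters:

      private GeneralPath createStar(double cx, double cy,
                                     double outerR, double innerR, int arms) {
          GeneralPath star = new GeneralPath();
          double step = Math.PI / arms;              // half a point per step
          for (int i = 0; i < 2 * arms; i++) {
              double r = (i % 2 == 0) ? outerR : innerR;
              double x = cx + r * Math.cos(i * step - Math.PI / 2);
              double y = cy + r * Math.sin(i * step - Math.PI / 2);
              if (i == 0) star.moveTo((float) x, (float) y);
              else        star.lineTo((float) x, (float) y);
          }
          star.closePath();
          return star;
      }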

    Read the article

  • mod_wsgi | linux installation error

    - by MMRUser
    I'm getting the following error when I try to install mod_wsgi ./configure checking for apxs2... no checking for apxs... /usr/sbin/apxs checking Apache version... 2.2.3 configure: creating ./config.status config.status: creating Makefile make /usr/sbin/apxs -c -I/usr/local/include/python2.6 -DNDEBUG mod_wsgi.c -L/usr/local/lib -L/usr/local/lib/python2.6/config -lpython2.6 -lpthread -ldl -lutil -lm /apr-1/build/libtool --silent --mode=compile gcc -prefer-pic -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -fno-strict-aliasing -DLINUX=2 -D_REENTRANT -D_GNU_SOURCE -D_LARGEFILE64_SOURCE -pthread -I/usr/include/httpd -I/usr/include/apr-1 -I/usr/include/apr-1 -I/usr/local/include/python2.6 -DNDEBUG -c -o mod_wsgi.lo mod_wsgi.c && touch mod_wsgi.slo sh: /apr-1/build/libtool: No such file or directory apxs:Error: Command failed with rc=8323072 . make: *** [mod_wsgi.la] Error 1 libtool is installed on my system.. mod_wsgi 3.2 *Apache 2.2* *Python 2.6*

    Read the article

  • SQL query performance optimization (TimesTen)

    - by Sergey Mikhanov
    Hi community, I need some help with TimesTen DB query optimization. I made some measures with Java profiler and found the code section that takes most of the time (this code section executes the SQL query). What is strange that this query becomes expensive only for some specific input data. Here’s the example. We have two tables that we are querying, one represents the objects we want to fetch (T_PROFILEGROUP), another represents the many-to-many link from some other table (T_PROFILECONTEXT_PROFILEGROUPS). We are not querying linked table. These are the queries that I executed with DB profiler running (they are the same except for the ID): Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; < 1169655247309537280 > < 1169655249792565248 > < 1464837997699399681 > 3 rows found. Command> select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; < 1169655247309537280 > 1 row found. This is what I have in the profiler: 12:14:31.147 1 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272 12:14:31.147 2 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:47) cmdType:100, cmdNum:1146695. 12:14:31.147 3 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.147 4 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.148 5 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.148 6 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.228 7 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:31.228 8 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1464837998949302272; 12:14:35.243 9 SQL 2L 6C 10825P Preparing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928 12:14:35.243 10 SQL 4L 6C 10825P sbSqlCmdCompile ()(E): (Found already compiled version: refCount:01, bucket:44) cmdType:100, cmdNum:1146697. 
12:14:35.243 11 SQL 4L 6C 10825P Opening: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 12 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 13 SQL 4L 6C 10825P Fetching: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; 12:14:35.243 14 SQL 4L 6C 10825P Closing: select G.M_ID from T_PROFILECONTEXT_PROFILEGROUPS CG, T_PROFILEGROUP G where CG.M_ID_EID = G.M_ID and CG.M_ID_OID = 1466585677823868928; It’s clear that the first query took almost 100ms, while the second was executed instantly. It’s not about queries precompilation (the first one is precompiled too, as same queries happened earlier). We have DB indices for all columns used here: T_PROFILEGROUP.M_ID, T_PROFILECONTEXT_PROFILEGROUPS.M_ID_OID and T_PROFILECONTEXT_PROFILEGROUPS.M_ID_EID. My questions are: Why querying the same set of tables yields such a different performance for different parameters? Which indices are involved here? Is there any way to improve this simple query and/or the DB to make it faster? UPDATE: to give the feeling of size: Command> select count(*) from T_PROFILEGROUP; < 183840 > 1 row found. Command> select count(*) from T_PROFILECONTEXT_PROFILEGROUPS; < 2279104 > 1 row found.
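
    One thing worth trying, offered as a hedged sketch rather than a diagnosis: a composite index on the link table that leads with the filtered column and also covers the join column, so the lookup for a given M_ID_OID does not depend on how selective the single-column indexes happen to be for that particular value:

      CREATE INDEX IX_PCPG_OID_EID
          ON T_PROFILECONTEXT_PROFILEGROUPS (M_ID_OID, M_ID_EID);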

    Read the article

  • upload image during registration in asp.net

    - by Vikrant
    I am developing a student registration form in ASP.NET. There are two tables, std_registration and gallery (having pic_id and pic_url). During registration, before the click of the submit button, I want to retrieve pic_id from the gallery table and then pass it to the insert query for the registration table. Image upload is optional: if no image is uploaded, I want a default image to be used. Please help me with this. Thanks.

    Read the article

  • mysqldump generating an empty file

    - by chupinette
    Hello all! I'm trying to use mysqldump like below: mysqldump -hlocalhost -uadmin -padmin shop > D:\b2\shop3.sql When I execute it in the command prompt, the file shop3 is generated with all the tables from the shop database. But when I use it in my PHP file like below, it generates an empty file. $cmd = 'mysqldump -hlocalhost -uadmin -padmin shop > D:\b2\shop3.sql'; system($cmd); Can anyone help me find my error please? Thanks
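
    A hedged sketch of the usual fix: when PHP runs under the web server, mysqldump is often not on that process's PATH, so give the full executable path and capture the exit code and stderr to see the real error. The install path and error-file location are assumptions:

      <?php
      $cmd = '"C:\\Program Files\\MySQL\\MySQL Server 5.1\\bin\\mysqldump.exe"'
           . ' -hlocalhost -uadmin -padmin shop'
           . ' > D:\\b2\\shop3.sql 2> D:\\b2\\dump_err.txt';
      exec($cmd, $output, $exitCode);
      if ($exitCode !== 0) {
          echo "mysqldump failed with code $exitCode; see dump_err.txt";
      }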

    Read the article

  • SQL Server indexed view

    - by Jose
    OK, I'm confused about SQL Server indexed views (using 2008). I've got an indexed view called AssignmentDetail. When I look at the execution plan for select * from AssignmentDetail, it shows the execution plan of all the underlying indexes of all the other tables that the indexed view is supposed to abstract away. I would think that the execution plan would simply be a clustered index scan of PK_AssignmentDetail (the name of the clustered index for my view), but it isn't. There seems to be no performance gain with this indexed view; what am I supposed to do? Should I also create a non-clustered index with all of the columns so that it doesn't have to hit all the other indexes? Any insight would be greatly appreciated.
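
    One likely explanation (it depends on the edition, which the question doesn't state): outside Enterprise Edition the optimizer never matches an indexed view on its own, so the view is expanded to its base tables unless you request its index explicitly. A minimal sketch:

      SELECT *
      FROM dbo.AssignmentDetail WITH (NOEXPAND);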

    Read the article

  • How do I use SimpleRepository without the migration approach?

    - by Bill Sempf
    I am evaluating SubSonic for use in Phase 2 of a large project. This is an ASP.NET project, with 700 tables in a SQL Server database. We are planning for our domain model to consist of POCO classes to assist with an offline access requirements we have. I believe that the SimpleRepository pattern would be among my best options. Since I have a database already, however, the migration assistance doesn't help me. Are there T4 templates for SimpleRepository that I just overlooked? How do I 'turn off' migration? If I missed something in the Wiki, point me there, otherwise get me started and I'll write up a Wiki entry for y'all when we get there.
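
    A hedged sketch based on how I recall the SubSonic 3 SimpleRepository API (an assumption worth confirming against the version in use): migration is controlled by the options flag passed to the constructor, so passing None leaves an existing schema untouched:

      // "Shared" is an assumed connection string name from the config file
      var repo = new SubSonic.Repository.SimpleRepository(
          "Shared",
          SubSonic.Repository.SimpleRepositoryOptions.None);

      var people = repo.All<Person>();   // Person is one of your POCO classes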

    Read the article
