Search Results

Search found 12714 results on 509 pages for 'db schema'.


  • Practical rules for Django MiddleWare ordering?

    - by o_O Tync
    The official documentation is a bit messy: 'before' and 'after' are used for ordering MiddleWare in a tuple, but in some places 'before'/'after' refers to the request-response phases. Also, 'should be first/last' are mixed and it's not clear which one to use as 'first'. I do understand the difference... however it seems too complicated for a newbie in Django. Can you suggest some correct ordering for the builtin MiddleWare classes (assuming we enable all of them) and, most importantly, explain WHY one goes before/after the other ones? Here's the list, with the info from the docs I managed to find:

      - UpdateCacheMiddleware: before those that modify 'Vary:' (SessionMiddleware, GZipMiddleware, LocaleMiddleware)
      - GZipMiddleware: before any MW that may change or use the response body; after UpdateCacheMiddleware (modifies 'Vary:')
      - ConditionalGetMiddleware: before CommonMiddleware (uses its 'ETag:' header when USE_ETAGS=True)
      - SessionMiddleware: after UpdateCacheMiddleware (modifies 'Vary:'); before TransactionMiddleware (we don't need transactions here)
      - LocaleMiddleware: one of the topmost, after SessionMiddleware
      - CacheMiddleware: after UpdateCacheMiddleware (modifies 'Vary:'); after SessionMiddleware (uses session data)
      - CommonMiddleware: before any MW that may change the response (it calculates ETags); after GZipMiddleware so it won't calculate an ETag on gzipped contents; close to the top, since it redirects when APPEND_SLASH or PREPEND_WWW
      - CsrfViewMiddleware
      - AuthenticationMiddleware: after SessionMiddleware (uses session storage)
      - MessageMiddleware: after SessionMiddleware (can use session-based storage)
      - XViewMiddleware
      - TransactionMiddleware: after MWs that use the DB, e.g. SessionMiddleware (configurable to use the DB); the *CacheMiddleware family is not affected (as an exception, it uses its own DB cursor)
      - FetchFromCacheMiddleware: after those that modify 'Vary:', since it uses them to pick a value for the cache hash key; after AuthenticationMiddleware so it's possible to use CACHE_MIDDLEWARE_ANONYMOUS_ONLY
      - FlatpageFallbackMiddleware: bottom, last resort; uses the DB, which, however, is not a problem for TransactionMiddleware (yes?)
      - RedirectFallbackMiddleware: bottom, last resort; uses the DB, which, however, is not a problem for TransactionMiddleware (yes?)

    (I will add suggestions to this list to collect all of them in one place.)
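    A possible ordering that satisfies the constraints above, written as a settings tuple. This is a sketch only: the class paths are assumed from the stock contrib/middleware modules of that Django era, so verify them against your version.

        MIDDLEWARE_CLASSES = (
            'django.middleware.cache.UpdateCacheMiddleware',        # first: runs last on the response, caches the final result
            'django.middleware.gzip.GZipMiddleware',                # before anything that uses the body; modifies 'Vary:'
            'django.middleware.http.ConditionalGetMiddleware',      # before CommonMiddleware (ETags)
            'django.contrib.sessions.middleware.SessionMiddleware', # modifies 'Vary:'; needed by auth/messages below
            'django.middleware.locale.LocaleMiddleware',            # near the top, after sessions
            'django.middleware.common.CommonMiddleware',            # APPEND_SLASH/PREPEND_WWW redirects, ETags
            'django.middleware.csrf.CsrfViewMiddleware',
            'django.contrib.auth.middleware.AuthenticationMiddleware',  # needs SessionMiddleware
            'django.contrib.messages.middleware.MessageMiddleware',     # needs SessionMiddleware
            'django.middleware.doc.XViewMiddleware',
            'django.middleware.transaction.TransactionMiddleware',      # wraps the DB work below it
            'django.middleware.cache.FetchFromCacheMiddleware',         # sees the final 'Vary:' on the request path
            'django.contrib.flatpages.middleware.FlatpageFallbackMiddleware',  # last resort
            'django.contrib.redirects.middleware.RedirectFallbackMiddleware',  # last resort
        )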

    Read the article

  • How to create a SQL database from a strongly typed dataset

    - by Keith Vinson
    I'm looking for an easy way to transfer a database schema I have developed inside Visual Studio as a strongly typed dataset (xsd file) into a corresponding SQL Server database. Silly me, I assumed the process would be straightforward, but I can't find out how to do it. I assume I could duplicate the tables column by column, but that seems so error-prone. Does anyone know of a way to perform a schema transfer like this? Maybe a tool to translate the xsd file into a corresponding SQL Server DDL file? Final thought: once I have the schema transferred, moving data around between the two data stores will be straightforward; it's just getting the schemas synced that has me stumped... Thanks, Keith
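    For what it's worth, a minimal sketch of the brute-force route in C#: walk the typed DataSet's schema at runtime and emit crude CREATE TABLE statements. The type mapping is deliberately incomplete, keys/indexes/relations are left out, and "MyTypedDataSet" is a placeholder for the generated dataset class:

        using System;
        using System.Data;
        using System.Text;

        static class DdlSketch
        {
            // Emits one crude CREATE TABLE statement per table in the DataSet.
            public static string Build(DataSet ds)
            {
                var sb = new StringBuilder();
                foreach (DataTable table in ds.Tables)
                {
                    sb.AppendLine("CREATE TABLE [" + table.TableName + "] (");
                    for (int i = 0; i < table.Columns.Count; i++)
                    {
                        DataColumn col = table.Columns[i];
                        string sqlType =
                            col.DataType == typeof(int)      ? "INT" :
                            col.DataType == typeof(long)     ? "BIGINT" :
                            col.DataType == typeof(DateTime) ? "DATETIME" :
                            col.DataType == typeof(decimal)  ? "DECIMAL(18,2)" :
                            "NVARCHAR(" + (col.MaxLength > 0 ? col.MaxLength.ToString() : "MAX") + ")";
                        sb.Append("    [" + col.ColumnName + "] " + sqlType
                                  + (col.AllowDBNull ? " NULL" : " NOT NULL"));
                        sb.AppendLine(i < table.Columns.Count - 1 ? "," : "");
                    }
                    sb.AppendLine(");");
                }
                return sb.ToString();
            }
        }

        // Usage: string ddl = DdlSketch.Build(new MyTypedDataSet());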

    Read the article

  • How to close an oracle db connection from php on an apache server? I mean close instantly.

    - by Valentin Jacquemin
    Usually closing a connection is simply done by oci_close($connection); or, in the worst case, the connection goes away when the PHP script ends. In my case, however, I see a different behavior. If I access my application, which uses PHP 5.2.8, Apache 2.2.11 and oci8 1.2.5, the connection is kept open for several minutes. Actually, if I launch netstat -b I see that the httpd.exe process keeps a connection in the ESTABLISHED state to the database's address for a while (a few minutes). Could someone enlighten me on that behavior? P.S. I do not use persistent connections.
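    For reference, a minimal non-persistent connect/close sequence with oci8 (connection parameters are placeholders). Note that even after oci_close() the Apache worker may keep the TCP socket around until the process is recycled, which matches the behavior described above:

        <?php
        // Non-persistent connection: oci_connect, not oci_pconnect.
        $conn = oci_connect('user', 'password', '//dbhost:1521/ORCL');

        // ... run queries ...

        oci_close($conn);  // releases the handle once its refcount hits zero
        unset($conn);      // drop the last reference so the close takes effect
        ?>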

    Read the article

  • DataSetProvider - DataSet to ClientDataSet

    - by nomad311
    I am trying to take data from a TMSQuery which is connected to my DB and populate a ClientDataSet with some of that data using a DataSetProvider. My problem is that I will need to modify some of this data before it can go into my ClientDataSet. The ClientDataSet has persistent fields that will not match up with the raw DB data. I can't even get a string from the DB into a memo field in ClientDataSet. The ClientDataSet is a part of my data tier so I will need to conform the data from the DB to the ClientDataSet field by field (well most will be able to go right through, but many will require routing and/or conversion). Does anyone have experience with this?

    Read the article

  • Search and replace with sed

    - by Binoy Babu
    Last week I accidentally externalized all the strings of my Eclipse project. I need to revert this, and my only hope is sed. I tried to create scripts but failed pathetically, because I'm new to sed and this would be a very complicated operation. What I need to do is this. Strings in the .java files are currently in the following format (method): Messages.getString(<key>). Example:

        if (new File(DataSource.DEFAULT_VS_PATH).exists()) {
            for (int i = 1; i <= c; i++) {
                if (!new File(DataSource.DEFAULT_VS_PATH + Messages.getString("VSDataSource.89") + i).exists()) { //$NON-NLS-1$
                    getnewvfspath = DataSource.DEFAULT_VS_PATH + Messages.getString("VSDataSource.90") + i; //$NON-NLS-1$
                    break;
                }
            }
        }

    The keys and matching strings are in the messages.properties file in the following format:

        VSDataSource.92=No of rows in db =
        VSDataSource.93=Verifying db entry :
        VSDataSource.94=DB is open
        VSDataSource.95=DB is closed
        VSDataSource.96=Invalid db entry for
        VSDataSource.97=\ removed.

    So I need the java file back in this format:

        if (new File(DataSource.DEFAULT_VS_PATH).exists()) {
            for (int i = 1; i <= c; i++) {
                if (!new File(DataSource.DEFAULT_VS_PATH + "String 2" + i).exists()) { //$NON-NLS-1$
                    getnewvfspath = DataSource.DEFAULT_VS_PATH + "String 1" + i; //$NON-NLS-1$
                    break;
                }
            }
        }

    How can I accomplish this with sed? Or is there an easier way?
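    One possible approach, sketched as a small shell loop rather than a single sed command: read each key=value pair from messages.properties and substitute the literal back into every .java file. It assumes the keys and values contain no characters that are special to sed (no /, &, or backslashes), so test on a copy first:

        #!/bin/sh
        # For each "key=value" line, replace Messages.getString("key")
        # with "value" in all .java files under the current directory.
        while IFS='=' read -r key value; do
            [ -z "$key" ] && continue
            find . -name '*.java' -exec sed -i \
                "s/Messages\.getString(\"$key\")/\"$value\"/g" {} +
        done < messages.properties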

    Read the article

  • C# IQueryable<T>: does my code make sense?

    - by Pandiya Chendur
    I use this to get a list of materials from my database:

        public IQueryable<MaterialsObj> FindAllMaterials()
        {
            var materials = from m in db.Materials
                            join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                            select new MaterialsObj()
                            {
                                Id = Convert.ToInt64(m.Mat_id),
                                Mat_Name = m.Mat_Name,
                                Mes_Name = Mt.Name,
                            };
            return materials;
        }

    But I have seen an example that has this:

        public IQueryable<MaterialsObj> FindAllMaterials()
        {
            return from m in db.Materials
                   join Mt in db.MeasurementTypes on m.MeasurementTypeId equals Mt.Id
                   select new MaterialsObj()
                   {
                       Id = Convert.ToInt64(m.Mat_id),
                       Mat_Name = m.Mat_Name,
                       Mes_Name = Mt.Name,
                   };
        }

    Is there a real big difference between the two methods? Assigning my LINQ query to a variable and returning it: is that good or bad practice? Any suggestions on which I should use?

    Read the article

  • Clear Asp.Net cache from outside of application (not using source code)

    - by TheJudge
    Hi, I have an ASP.NET web application and I'm using the cache (HttpRuntime.Cache) to save some stuff from the db. I also update the db from time to time, so the data in the db no longer matches the data in my application's cache. Is there any way to clear my application's cache without modifying any source code or republishing the page? I tried restarting IIS and clearing the browser's cache, but nothing helps. Please help.
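    Short of recycling the application pool (restarting the IIS site alone doesn't always drop the app domain), there is no outside switch for HttpRuntime.Cache. A common workaround is to deploy, once, a small admin-only helper like the sketch below and call it whenever the db changes (the class name is hypothetical; protect whatever page invokes it):

        using System.Collections;
        using System.Collections.Generic;
        using System.Web;

        public static class CacheFlusher
        {
            // Snapshot the keys first, then remove them, so the cache is
            // never mutated while it is being enumerated.
            public static int FlushAll()
            {
                var keys = new List<string>();
                foreach (DictionaryEntry entry in HttpRuntime.Cache)
                    keys.Add((string)entry.Key);
                foreach (string key in keys)
                    HttpRuntime.Cache.Remove(key);
                return keys.Count;
            }
        }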

    Read the article

  • MongoDB query against geospatial index with maxDistance fails from node.js client

    - by user1735497
    I want to query against a geospatial index in MongoDB (designed after this tutorial: http://www.mongodb.org/display/DOCS/Geospatial+Indexing). When I execute this from the shell, everything works fine:

        db.sellingpoints.find({ location: { $near: [48.190120, 16.270895], $maxDistance: 7 / 111.2 } });

    But the same query from my Node.js application (using mongoskin or mongoose) won't return any results until I set the distance value to a very high number (5690):

        db.collection('sellingpoints')
          .find({ location: { $near: [lat, lng], $maxDistance: distance / 111.2 } })
          .limit(limit)
          .toArray(callback);

    Does someone have any idea how to fix that?
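    One thing worth double-checking before blaming the driver: the node query must use the same units (degrees, for a 2d index) and the same coordinate order as the shell query that works. A sketch of the mongoskin call with the conversion made explicit (variable names assumed):

        // 7 km expressed in degrees, matching the working shell query.
        var maxDegrees = 7 / 111.2;

        db.collection('sellingpoints').find({
            location: { $near: [48.190120, 16.270895], $maxDistance: maxDegrees }
        }).limit(limit).toArray(function (err, points) {
            if (err) throw err;
            console.log(points);
        });

    If `lat` and `lng` arrive swapped relative to how the points were stored, or `distance` is in the wrong units, only a huge radius will match anything, which might explain the 5690 figure.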

    Read the article

  • How to create a view of table that contains a timestamp column?

    - by Matt Faus
    This question is an extension of a previous one I have asked. I have a table (2014_05_31_transformed.Video); I have put up the JSON returned by the BigQuery API describing its schema in this gist. I am trying to create a view against this table with an API call that looks like this:

        {
            'view': {
                'query': u'SELECT deleted_mod_time FROM [2014_05_31_transformed.Video]'
            },
            'tableReference': {
                'datasetId': 'latest_transformed',
                'tableId': u'Video',
                'projectId': 'redacted'
            }
        }

    But the BigQuery API is returning this error:

        HttpError: https://www.googleapis.com/bigquery/v2/projects/124072386181/datasets/latest_transformed/tables?alt=json
        returned "Invalid field name "deleted_mod_time.usec". Fields must contain only letters, numbers,
        and underscores, start with a letter or underscore, and be at most 128 characters long."

    The schema returned by the BigQuery API does not make any distinction between a TIMESTAMP data type and a regular nullable INTEGER data type, so I can't think of a way to programmatically correct this problem. Is there anything I can do, or is this a bug in BigQuery's view implementation?

    Read the article

  • SSIS Runs Individual Tasks Okay, Not Together

    - by davemackey
    I have a simple SSIS project. In the control flow I have three steps:

        Step 1: Select data from Db1.Table1
        Step 2: Create Table2 in Db2
        Step 3: Copy data in Db1.Table1 to Db2.Table2

    If I "Execute Task" one by one in order, it executes fine... but if I try running the entire project I receive the following error:

        Error at Copy Data from Table1 to DB2 dbo Table2 Task [OLE DB Destination [40]]:
        SSIS Error Code DTS_E_OLEDBERROR. An OLE DB error has occurred. Error code: 0x80040E37.
        An OLE DB record is available. Source: "Microsoft OLE DB Provider for SQL Server"
        Hresult: 0x80040E37 Description: "Invalid object name 'DB2.dbo.Table2'."

    Read the article

  • Recommendations for an in memory database vs thread safe data structures

    - by yx
    TL;DR: What are the pros/cons of using an in-memory database vs locks and concurrent data structures? I am currently working on an application that has many (possibly remote) displays that collect live data from multiple data sources and render them on screen in real time. One of the other developers has suggested using an in-memory database instead of doing it the standard way our other systems behave, which is to use concurrent hashmaps, queues, arrays, and other objects to store the graphical objects, handling them safely with locks where necessary. His argument is that the DB will lessen the need to worry about concurrency, since it handles read/write locks automatically, and also that the DB offers an easier way to structure the data into as many tables as we need, instead of having to create hashmaps of hashmaps of lists, etc. and keeping track of it all. I do not have much DB experience myself, so I am asking fellow SO users what experiences they have had, and what the pros & cons of introducing the DB into the system are.

    Read the article

  • SecurityException when trying to export a java resource

    - by thecoop
    I'm trying to get the source of a Java resource stored in an Oracle database using this code (connecting as SYSTEM for testing):

        DECLARE
            javalob CLOB;
        BEGIN
            DBMS_LOB.CREATETEMPORARY(javalob, false);
            DBMS_JAVA.EXPORT_RESOURCE('RESOURCENAME', 'SCHEMA', javalob);
            DBMS_OUTPUT.PUT_LINE(javalob);
        END;

    But when I try to run it I get this:

        Java call terminated by uncaught Java exception:
        java.lang.SecurityException: cannot read <Resource Handle: RESOURCENAME|SCHEMA|301>
        because SYSTEM does not have execute privilege on it

    The thing is, I'm not sure how to grant permissions on <Resource Handle: RESOURCENAME|SCHEMA|301>, as this isn't a SQL or PL/SQL object. And why doesn't SYSTEM have access to it anyway?

    Read the article

  • Backing up an online database

    - by Veejay
    I have a 70MB db for my website, which is hosted with a provider. I am able to access the db remotely using SSMS 2008. With the website running, what is the best way to back up the db locally on my machine? Thanks
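    If the provider's login allows backups, a server-side backup is the simplest route. A sketch (MyWebDb and the path are placeholders; note the path is on the *server*, which shared hosts often don't expose, in which case SSMS's scripting or import/export tooling is the usual fallback):

        -- Run from SSMS against the remote server. COPY_ONLY keeps this
        -- backup from disturbing the provider's own backup chain.
        BACKUP DATABASE MyWebDb
        TO DISK = 'C:\backups\MyWebDb.bak'
        WITH COPY_ONLY, INIT;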

    Read the article

  • Insert query results into table in ms access 2010

    - by CodeMed
    I need to transform data from one schema into another in an MS Access database. This involves writing queries to select data from the old schema and then inserting the results of the queries into tables in the new schema. The below is an example of what I am trying to do. The SELECT component works fine, but the INSERT component does not. Can someone show me how to fix the below so that it inserts the results of the SELECT statement into the destination table?

        INSERT INTO CompaniesTable (CompanyName)
        VALUES (
            SELECT DISTINCT IIF(a.FIRM_NAME IS NULL, b.SUBACCOUNT_COMPANY_NAME, a.FIRM_NAME) AS CompanyName
            FROM (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS a
            LEFT JOIN (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS b
                ON a.ContactID = b.ContactID
        );

    The definition of the target table (CompaniesTable) is:

        CompanyID    Autonumber
        CompanyName  Text
        Description  Text
        WebSite      Text
        Email        Text
        TypeNumber   Number
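    The usual fix: Access doesn't accept a SELECT inside VALUES(...); an append query feeds the SELECT directly into the INSERT. A sketch of the statement above rewritten that way:

        INSERT INTO CompaniesTable (CompanyName)
        SELECT DISTINCT IIF(a.FIRM_NAME IS NULL, b.SUBACCOUNT_COMPANY_NAME, a.FIRM_NAME) AS CompanyName
        FROM (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS a
        LEFT JOIN (SELECT ContactID, FIRM_NAME, SUBACCOUNT_COMPANY_NAME FROM qrySummaryData) AS b
            ON a.ContactID = b.ContactID;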

    Read the article

  • Is there a way to only backup a SQL 2005 database structure fully, but only the data in a certain se

    - by TheSoftwareJedi
    I have several schemas in my database, and the largest one ("large" meaning disk space consumed) is my "web" schema, which is a denormalized copy of data in the operational schemas. This denormalized data can be reconstructed at any time and is merely there for extremely fast reads. Since the data is redundant, and VERY large, I'd like to exclude it from being backed up. I already have stored procedures that can regenerate all of the data in that schema in a couple of hours, for use in the event of a failure. I assume I can split the tables in this schema out to another data file or such (ideally even on another drive for faster reads), but is there a way to never have that data file backed up, yet still, in the event of a failure, have its structure restorable (and other DDL stuff like procs, views, etc.)? Somewhat related: can I also have these tables skip transaction logging, if I go to "Full" backup mode for the rest of the database?
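    The usual shape of this in SQL Server 2005 is filegroups: move the web schema's tables onto their own filegroup and back up only the filegroups you care about. A sketch, with placeholder names:

        -- One-time setup: a dedicated filegroup (and file) for the web schema.
        ALTER DATABASE MyDb ADD FILEGROUP WebData;
        ALTER DATABASE MyDb ADD FILE
            (NAME = WebData1, FILENAME = 'D:\data\MyDb_Web.ndf')
            TO FILEGROUP WebData;

        -- Rebuild/move the web tables onto WebData, then back up
        -- everything except that filegroup:
        BACKUP DATABASE MyDb
            FILEGROUP = 'PRIMARY'
            TO DISK = 'C:\backups\MyDb_primary.bak';

    The DDL (procs, views, and the web tables' structure) lives in the system catalogs on the primary filegroup, so a primary-filegroup backup keeps the structure restorable. Per-table logging can't be switched off under the full recovery model, though.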

    Read the article

  • How to camelcase decibels?

    - by Roddy
    The standard abbreviation for decibels is "dB" (note the case!). So, if I have a variable holding (for instance) a maximum dB value, how best to name it?

      1. maxDbValue
      2. maxdBValue
      3. maxDecibelValue
      4. something else?

    Each has disadvantages: #1 swaps the case of the unit, #2 doesn't clearly split max from dB, and #3 is verbose... I think #1 feels best, but...???

    Read the article

  • How does one pre-populate a Python Formish form?

    - by Jace
    How does one pre-populate a Formish form? The obvious method as per the documentation doesn't seem right. Using one of the provided examples:

        import formish, schemaish

        structure = schemaish.Structure()
        structure.add( 'a', schemaish.String() )
        structure.add( 'b', schemaish.Integer() )
        schema = schemaish.Structure()
        schema.add( 'myStruct', structure )
        form = formish.Form(schema, 'form')

    If we pass this a valid request object, form.validate(request), the output is a structure like this:

        {'myStruct': {'a': 'value', 'b': 0}}

    However, pre-populating the form using defaults requires this:

        form.defaults = {'myStruct.a': 'value', 'myStruct.b': 0}

    The dottedish package has a DottedDict object that can convert a nested dict to a dotted dict, but this asymmetry doesn't seem right. Is there a better way to do this?
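    A small self-contained alternative (not part of formish or dottedish) is to flatten the nested dict yourself before assigning it to form.defaults:

        def flatten(d, prefix=''):
            """Turn {'a': {'b': 1}} into {'a.b': 1}."""
            flat = {}
            for key, value in d.items():
                dotted = key if not prefix else prefix + '.' + key
                if isinstance(value, dict):
                    flat.update(flatten(value, dotted))
                else:
                    flat[dotted] = value
            return flat

        form.defaults = flatten({'myStruct': {'a': 'value', 'b': 0}})
        # -> {'myStruct.a': 'value', 'myStruct.b': 0}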

    Read the article

  • Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    - by Fotis
    I am a newbie programmer, so I will need your help! Locally the webapp works OK with the db on it! When I uploaded the application to cloudControl, it came up with the following error:

        CDbConnection failed to open the DB connection: SQLSTATE[HY000] [2002]
        Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)

    I suppose I have not uploaded the db. This is the very first time I have uploaded a webapp to a server, so I do not know the exact steps to follow in order to upload the db. cloudControl has documentation about MySQL! I did follow the steps, but the webapp comes up with the same error! Could you please tell me what steps I have to follow to make it work? I am sure this error is due to lack of knowledge!

    Read the article

  • How to sum up values of an array in assembly?

    - by Pablo Fallas
    I have been trying to create a program which can sum up all the values of an "array" in assembly. I have done the following:

        ORG 1000H
        TABLE DB DUP(2,4,6,8,10,12,14,16,18,20)
        FIN DB ?
        TOTAL DB ?
        MAX DB 13
        ORG 2000H
        MOV AL, 0
        MOV CL, OFFSET FIN-OFFSET TABLE
        MOV BX, OFFSET TABLE
        LOOP: ADD AL, [BX]
        INC BX
        DEC CL
        JNZ LOOP
        HLT
        END

    BTW, I am using msx88 to compile this code. But I get an error saying that the code 0 has not been recognized. Any advice?
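    For comparison, a corrected sketch (untested; standard 8086-style syntax assumed, which is what MSX88-like assemblers accept). DUP repeats a single value, as in DB 10 DUP(0), so an explicit list is written without it; LOOP is also avoided as a label name, since it collides with the LOOP instruction mnemonic:

                ORG 1000H
        TABLE   DB 2,4,6,8,10,12,14,16,18,20   ; the array itself
        FIN     DB ?
        TOTAL   DB ?

                ORG 2000H
                MOV AL, 0
                MOV CL, OFFSET FIN - OFFSET TABLE  ; element count
                MOV BX, OFFSET TABLE
        SUMLP:  ADD AL, [BX]
                INC BX
                DEC CL
                JNZ SUMLP
                MOV TOTAL, AL                      ; store the result
                HLT
                END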

    Read the article

  • How do I get this SQL to LINQ? Multiple groups

    - by Dwight T
    For a db person, LINQ can be frustrating. I need to convert the following SQL into LINQ:

        SELECT COUNT(o.objectiveid), COUNT(distinct r.ReviewId), l.Abbreviation
        FROM Objective o
        JOIN Review r on r.ReviewId = o.ReviewId and r.ReviewPeriodId = 3 and r.IsDeleted = 0
        JOIN Position p on p.PositionId = r.EmployeePositionId and p.DivisionId = 2
        JOIN Location l on l.LocationId = p.LocationId
        GROUP BY l.Abbreviation

    The nested group-by example might be the way I have to go, but I'm not sure. Doing one group by, I have used the following code:

        var query = from rev in db.Reviews
                        .Where(r => r.IsDeleted == false && r.ReviewPeriodId == reviewPeriodId)
                    from obj in db.Objectives
                        .Where(o => o.ReviewId == rev.ReviewId && o.IsDeleted == false)
                    from pos in db.Positions
                        .Where(p => rev.EmployeePositionId == p.PositionId && p.IsDeleted == false && p.DivisionId == divisionId)
                    from loc in db.Locations
                        .Where(l => pos.LocationId == l.LocationId)
                    group loc by loc.Abbreviation into locgroup
                    select new ReportResults
                    {
                        KeyId = 0,
                        Description = locgroup.Key,
                        Count = locgroup.Count()
                    };
        return query.ToList();

    What is the correct way? Thanks
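    For what it's worth, a sketch that mirrors the SQL more directly: join, group by the abbreviation, then take a plain Count() for objectives and a Distinct().Count() for reviews. Entity and property names are assumed from the SQL above:

        var query = from o in db.Objectives
                    join r in db.Reviews on o.ReviewId equals r.ReviewId
                    where r.ReviewPeriodId == 3 && r.IsDeleted == false
                    join p in db.Positions on r.EmployeePositionId equals p.PositionId
                    where p.DivisionId == 2
                    join l in db.Locations on p.LocationId equals l.LocationId
                    group new { o.ObjectiveId, r.ReviewId } by l.Abbreviation into g
                    select new
                    {
                        Abbreviation = g.Key,
                        ObjectiveCount = g.Count(),
                        ReviewCount = g.Select(x => x.ReviewId).Distinct().Count()
                    };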

    Read the article

  • Using find and tar with files with special characters in the name

    - by Costi
    I want to archive all .ctl files in a folder, recursively:

        tar -cf ctlfiles.tar `find /home/db -name "*.ctl" -print`

    The error message:

        tar: Removing leading `/' from member names
        tar: /home/db/dunn/j: Cannot stat: No such file or directory
        tar: 74.ctl: Cannot stat: No such file or directory

    I have these files: /home/db/dunn/j 74.ctl and j 75. Notice the extra space. What if the files have other special characters? How do I archive these files recursively?
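    The usual fix is to keep the file list out of the shell entirely: have find emit NUL-delimited names and let tar read them from stdin (GNU find and tar assumed):

        find /home/db -name '*.ctl' -print0 | tar --null -T - -cf ctlfiles.tar

    Backticks split find's output on whitespace, which is exactly why "j 74.ctl" arrived as two separate arguments above; NUL delimiters survive spaces, newlines, and any other odd characters in the names.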

    Read the article

  • How to compare 2 lists and merge them in Python/MySQL?

    - by NJTechGuy
    I want to merge data. Following are my MySQL table rows. I want to use Python to traverse through both lists of rows (one with dupe = 'x' and the other with null dupes). For instance:

        a b c d e f key dupe
        --------------------
        1 d c f k l 1 x
        2 g h j 1
        3 i h u u 2
        4 u r t 2 x

    From the above sample table, the desired output is:

        a b c d e f key dupe
        --------------------
        2 g c h k j 1
        3 i r h u u 2

    What I have so far:

        import string, os, sys
        import MySQLdb
        from EncryptedFile import EncryptedFile

        enc = EncryptedFile( os.getenv("HOME") + '/.py-encrypted-file')
        user = enc.getValue("user")
        pw = enc.getValue("pw")

        db = MySQLdb.connect(host="127.0.0.1", user=user, passwd=pw, db=user)
        cursor = db.cursor()
        cursor2 = db.cursor()
        cursor.execute("select * from delThisTable where dupe is null")
        cursor2.execute("select * from delThisTable where dupe is not null")
        result = cursor.fetchall()
        result2 = cursor2.fetchall()

        for cursorFieldname in cursor.description:
            for cursorFieldname2 in cursor2.description:
                if cursorFieldname[0] == cursorFieldname2[0]:
                    ### How do I compare the record with the same key value and update
                    ### the original row's null field value with the non-null value
                    ### from the duplicate? Please fill this void...

        cursor.close()
        cursor2.close()
        db.close()

    Thanks guys!
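    One way to fill that void, sketched under the assumption that the columns are (a, b, c, d, e, f, key, dupe) in that order: index the dupe rows by key, fill each NULL field of the original row from its dupe, and write the merge back:

        # Index the dupe='x' rows by their key column (index 6).
        dupes = {}
        for row in result2:
            dupes[row[6]] = row

        for row in result:              # rows where dupe is null
            dupe = dupes.get(row[6])
            if dupe is None:
                continue
            # Keep the original value where present, else take the dupe's.
            merged = [orig if orig is not None else extra
                      for orig, extra in zip(row[:6], dupe[:6])]
            cursor.execute(
                "UPDATE delThisTable SET a=%s, b=%s, c=%s, d=%s, e=%s, f=%s "
                "WHERE `key`=%s AND dupe IS NULL",
                merged + [row[6]])
        db.commit()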

    Read the article

  • Trouble with Code First DatabaseGenerated Composite Primary Key

    - by Nick Fleetwood
    This is a tad complicated, and please, I know all the arguments against natural PKs, so we don't need to have that discussion. Using VS2012/MVC4/C#/Code First. So, the PK is based on the date and a corresponding digit together. A few rows created today would be like this:

        20131019 1
        20131019 2

    And one created tomorrow:

        20131020 1

    This has to be generated automatically, using C# or a trigger or whatever. The user wouldn't input this. I did come up with a solution, but I'm having problems with it, and I'm a little stuck, hence the question. So, I have a model:

        public class MainOne
        {
            //[Key]
            //public int ID { get; set; }

            [Key][Column(Order=1)]
            [DatabaseGenerated(DatabaseGeneratedOption.None)]
            public string DocketDate { get; set; }

            [Key][Column(Order=2)]
            [DatabaseGenerated(DatabaseGeneratedOption.None)]
            public string DocketNumber { get; set; }

            [StringLength(3, ErrorMessage = "Corp Code must be three letters")]
            public string CorpCode { get; set; }

            [StringLength(4, ErrorMessage = "Corp Code must be four letters")]
            public string DocketStatus { get; set; }
        }

    After I finish the model, I create a new controller and views using VS2012 scaffolding. Then what I'm doing is debugging to create the database, and then adding the following INSTEAD OF trigger after Code First creates the DB [I don't know if this is the correct procedure]:

        CREATE TRIGGER AutoIncrement_Trigger ON [dbo].[MainOnes]
        INSTEAD OF INSERT AS
        BEGIN
            DECLARE @number INT
            SELECT @number = COUNT(*) FROM [dbo].[MainOnes]
                WHERE [DocketDate] = CONVERT(DATE, GETDATE())
            INSERT INTO [dbo].[MainOnes] (DocketDate, DocketNumber, CorpCode, DocketStatus)
                SELECT (CONVERT(DATE, GETDATE())), (@number + 1), inserted.CorpCode, inserted.DocketStatus
                FROM inserted
        END

    And when I try to create a record, this is the error I'm getting:

        The changes to the database were committed successfully, but an error occurred while updating
        the object context. The ObjectContext might be in an inconsistent state. Inner exception message:
        The object state cannot be changed. This exception may result from one or more of the primary key
        properties being set to null. Non-Added objects cannot have null primary key values.
        See inner exception for details.

    Now, what's interesting to me is that after I stop debugging and start again, everything is perfect. The trigger fired perfectly, so the composite PK is unique and perfect, and the data in the other columns is intact. My guess is that EF is confused by the fact that there is seemingly no value for the PK until AFTER an insert command is given. Also appearing to back this theory: when I try to edit one of the rows, in debug, I get the following error:

        The number of primary key values passed must match the number of primary key values
        defined on the entity.

    The same error occurs if I try to pull the 'Details' or 'Delete' function. Any solution or ideas on how to pull this off? I'm pretty open to anything, even creating a hidden int PK. But it would seem redundant.

    EDIT 21OCT13

        [HttpPost]
        public ActionResult Create(MainOne mainone)
        {
            if (ModelState.IsValid)
            {
                // assuming that the date field already has a value
                var countId = db.MainOnes.Count(d => d.DocketDate == mainone.DocketNumber);
                mainone.DocketNumber = countId + 1; // Cannot implicitly convert type int to string
                db.MainOnes.Add(mainone);
                db.SaveChanges();
                return RedirectToAction("Index");
            }
            return View(mainone);
        }

    EDIT 21OCT2013 FINAL CODE SOLUTION
    For anyone like me, who is constantly searching for clear and complete solutions:

        if (ModelState.IsValid)
        {
            String udate = DateTime.UtcNow.ToString("yyyy-MM-dd");
            mainone.DocketDate = udate;
            // count today's rows, then take the next number in sequence
            var ddate = db.MainOnes.Count(d => d.DocketDate == mainone.DocketDate);
            mainone.DocketNumber = (ddate + 1).ToString();
            db.MainOnes.Add(mainone);
            db.SaveChanges();
            return RedirectToAction("Index");
        }

    Read the article

  • How to group a database write and spreadsheet write in single "transaction"

    - by WhyGeeEx
    I have a Java program that writes results to both a DB (SQL Server) and a spreadsheet (POI), and it would be best if neither were written to when there's an error with either. It would be a lot worse if the spreadsheet were produced and then an error happened while saving to the DB, so I'm doing the DB write first. Even so, I'm wondering if someone knows of a way to guarantee that they both succeed or fail as a unit. Thanks!
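    A sketch of one way to narrow the window: build the POI workbook in memory first, commit the DB transaction, and only then write the file. A file-write failure after the commit is still possible, so this isn't a true two-phase commit, just an ordering that makes the worst case unlikely:

        import java.io.ByteArrayOutputStream;
        import java.io.FileOutputStream;
        import java.sql.Connection;
        import org.apache.poi.hssf.usermodel.HSSFWorkbook;

        void export(Connection conn, HSSFWorkbook workbook, String path) throws Exception {
            // Serialize the spreadsheet up front, so a POI failure aborts
            // everything before the DB is touched.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            workbook.write(buffer);

            conn.setAutoCommit(false);
            try {
                // ... INSERT the results here ...
                conn.commit();
            } catch (Exception e) {
                conn.rollback();        // nothing written anywhere
                throw e;
            }

            // The spreadsheet hits disk only after the DB commit succeeded.
            try (FileOutputStream out = new FileOutputStream(path)) {
                buffer.writeTo(out);
            }
        }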

    Read the article

  • How To Run Postgres locally

    - by Rohit Rayudu
    I read the Postgres docs for Flask, and they said that to run Postgres you should have the following code:

        app = Flask(__name__)
        app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://localhost/[YOUR_DB_NAME]'
        db = SQLAlchemy(app)

    How do I know my database name? I wrote db as the name, but I got an error:

        sqlalchemy.exc.OperationalError: (OperationalError) FATAL: database "[db]" does not exist

    I'm running Heroku with Flask, if that helps.
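    A sketch of the usual setup: the database has to exist before SQLAlchemy can connect, so create it yourself first (e.g. `createdb myapp_dev` from a shell with Postgres running), then point the URI at it, falling back to Heroku's DATABASE_URL config var in production. `myapp_dev` is a placeholder name:

        import os
        from flask import Flask
        from flask_sqlalchemy import SQLAlchemy

        app = Flask(__name__)
        # Heroku injects DATABASE_URL; locally, use the db created above.
        app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
            'DATABASE_URL', 'postgresql://localhost/myapp_dev')
        db = SQLAlchemy(app)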

    Read the article
