Search Results

Search found 31483 results on 1260 pages for 'database migration'.


  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    Previously I had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt whether my chosen solution was correct for the problem, so this time I will explain the problem and ask a) whether I am on the right track and b) whether there is a way around my current brick wall.

    I am currently building a web interface to enable an existing database to be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this:

        class Musician(models.Model):
            first_name = models.CharField(max_length=50)
            last_name = models.CharField(max_length=50)
            dob = models.DateField()

        class Album(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

        class Instrument(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

    I have one central table (Musician) and several tables of associated data, related by either ForeignKey or OneToOneFields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on the data in the main or related tables. Likewise, users can select which piece of data is used to rank the results presented to them. The results are viewed initially as a two-dimensional table with a single row per Musician and selected data fields (or aggregates) in each column. To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data.

    Up to here is fine and I have a working implementation. However, it is important that a given user can upload their own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I tried to do this was to add the models:

        class UserDataSets(models.Model):
            user = models.ForeignKey(User)
            name = models.CharField(max_length=100)
            description = models.CharField(max_length=64)
            results = models.ManyToManyField(Musician, through='UserData')

        class UserData(models.Model):
            artist = models.ForeignKey(Musician)
            dataset = models.ForeignKey(UserDataSets)
            score = models.IntegerField()

            class Meta:
                unique_together = (("artist", "dataset"),)

    I have a simple upload mechanism enabling users to upload a data set file that consists of a 1:1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent of each other and will often contain entries for the same musician. This worked fine for displaying the data; starting from a given artist I can do something like this:

        artist = Musician.objects.get(pk=1)
        dataset = UserDataSets.objects.get(pk=5)
        print artist.userdata_set.get(dataset=dataset.pk)

    However, this approach fell over when I came to implement the filtering and ordering of a queryset of musicians based on the data contained in a single user data set. For example, I can easily order the queryset based on all of the data in the UserData table like this:

        artists = Musician.objects.all().order_by('userdata__score')

    But that does not help me order by the results of one given user dataset. Likewise, I need to be able to filter the queryset based on the "scores" from different user data sets (e.g. find all musicians with a score > 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?
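
    One route that seems to fit here (a sketch against the models above, not a definitive answer): in the Django ORM, each chained filter() call that spans a multi-valued relation gets its own join, so each condition can be pinned to a single dataset, and an order_by on the same relation reuses the join introduced by the filter.

        # A sketch, assuming the models above; pk values are illustrative.

        # Order musicians by their score within dataset 5 only:
        ordered = (Musician.objects
                   .filter(userdata__dataset=5)
                   .order_by('userdata__score'))

        # Find musicians scoring > 5 in dataset 1 and < 2 in dataset 2
        # (two filter() calls, so each condition joins UserData separately):
        matches = (Musician.objects
                   .filter(userdata__dataset=1, userdata__score__gt=5)
                   .filter(userdata__dataset=2, userdata__score__lt=2))

    Because unique_together guarantees at most one UserData row per (artist, dataset), the dataset-scoped join cannot duplicate musicians in the result.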


  • Can we do a DNSSEC 101? [closed]

    - by PAStheLoD
    Please share your opinions, FAQs, HOWTOs, best practices (or links to the ones you think are best) and your fears and thoughts about the whole migration (or should I just call it a new piece of tech?).

    Is DNSSEC just for DNS providers (name server operators)? What ought John Doe do, who hosts johndoe.com at some random provider (GoDaddy, DreamHost and such)? Also, what if the provider's name server doesn't do automatic signing magic; can John do it manually, in a fire-and-forget way, without touching KSK and ZSK rollovers and updating and headaches? Does it bring any change regarding CERT records? Do browsers support it?

    How come it became so complex? Why didn't they just merge it with SSL? DKIM is pretty straightforward; IANA/IETF could've opted for something like that. (Yes, I know that creating a trust anchor would still be problematic, but browsers are already full of CA certs. So they could've just let anyone get a cert for a domain for shiny green padlocks, or just generate one for a poor blue lock, put it into a TXT record, encrypt the other records and let the parent zone sign the whole thing for you with its cert.)

    Thanks! And for disclosure (it seemed like the customary thing to do around here), I've asked the same on the netsec subreddit.


  • pyramid - How to handle complex URLs in an elegant way?

    - by Lingfeng Xiong
    I'm writing an admin website which controls several websites with the same program and database schema but different content. The URLs I designed look like this:

        http://example.com/site                  A list of all sites under control
        http://example.com/site/{id}             A brief overview of the selected site with ID id
        http://example.com/site/{id}/user        User list of the target site
        http://example.com/site/{id}/item        A list of items sold on the target site
        http://example.com/site/{id}/item/{iid}  Detailed item information
        # ...... something similar

    As you can see, nearly all URLs need the site_id. And in almost all views, I have to do some common jobs, like querying the Site model against the database with the site_id. Also, I have to pass site_id whenever I invoke request.route_path. So... is there any way for me to make my life easier? Thanks in advance.
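
    One way to factor out the lookup (a minimal sketch; request.dbsession and the Site model are assumptions standing in for whatever data access the app uses): register a reified request property so every view under /site/{id} can simply use request.site.

        # A sketch using config.add_request_method (Pyramid 1.4+).
        from pyramid.httpexceptions import HTTPNotFound

        def get_site(request):
            site_id = request.matchdict.get('id')
            site = request.dbsession.query(Site).get(site_id)
            if site is None:
                raise HTTPNotFound()
            return site

        def includeme(config):
            # reify=True: the query runs once per request, then is cached
            config.add_request_method(get_site, 'site', reify=True)

    For route_path, passing id=request.matchdict['id'] can likewise be wrapped in one small helper instead of being repeated in every view.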


  • .NET make a copy of an embedded file resource to the local drive

    - by Matt H.
    Hi, I'm new to the realm of working with files in .NET. I'm creating a WPF application in VB.NET with the 3.5 Framework (if you provide an example in C#, that's perfectly fine). In my project I have a template for an MS Access database. My desired behaviour is that when the user clicks File > New, they can create a new copy of this template, give it a filename, and save it to a local directory. The database already has the tables and some starting data needed to interface with my application (a user-friendly data editor). I'm thinking the approach is to include this "template.accdb" file as a resource in the project, and write it to a file somehow at runtime? Any guidance will be very, very appreciated. Thanks!
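
    That approach works: mark the file's Build Action as "Embedded Resource" and copy the stream out at runtime. A sketch in C# (the resource name "MyApp.template.accdb" is illustrative; it is the default namespace plus the file name, and Assembly.GetManifestResourceNames() lists the exact strings):

        using System.IO;
        using System.Reflection;

        public static class TemplateWriter
        {
            public static void SaveTemplate(string destinationPath)
            {
                Assembly assembly = Assembly.GetExecutingAssembly();

                using (Stream input = assembly.GetManifestResourceStream("MyApp.template.accdb"))
                using (FileStream output = File.Create(destinationPath))
                {
                    // Manual copy loop because Stream.CopyTo only arrived in .NET 4
                    byte[] buffer = new byte[32 * 1024];
                    int read;
                    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                    }
                }
            }
        }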


  • C# Linq: Can you merge DataContexts?

    - by Andreas Grech
    Say I have one database, and this database has a set of tables that are general to all clients and some tables that are specific to certain clients. What I have in mind is creating a primary DataContext that includes only the tables general to all clients, and then creating separate DataContexts that contain only the tables specific to each client. Is there a way to "merge" DataContexts so that they become one context? So for client A, I need one DataContext that includes both the general tables and the tables for that specific client (retrieved from two different DataContexts). [Update] What I think I can do is, from the partial class of the DataContext, instead of letting my DataContext inherit from DataContext, make it inherit from MyDataContext; that way, the tables from MyDataContext and the other DataContext will be available in one DataContext class. What do you think about this approach? Of course, with something like this you can only merge two DataContexts at once.


  • Reflection & Parameters in C#

    - by Jim
    Hello, I'm writing an application that runs "things" on a schedule. The idea is that the database contains assembly and method information, and also the parameter values. The timer will come along, reflect the method to be run, add the parameters and then execute the method. Everything is fine except for the parameters. So, let's say the method accepts an enum of CustomerType, where CustomerType has the two values CustomerType.Master and CustomerType.Associate. EDIT: I don't know the type of parameter that will be passed in; the enum is used as an example. END OF EDIT. We want to run method "X" and pass in parameter "CustomerType.Master". In the database, there will be a varchar entry of "CustomerType.Master". How do I convert the string "CustomerType.Master" into a CustomerType value of Master, generically? Thanks in advance, Jim
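
    A sketch of one generic way to do this (the type-resolution strategy is an assumption; it requires storing a type name the runtime can resolve, e.g. an assembly-qualified name, alongside the "CustomerType.Master" value):

        using System;

        public static class StoredValueParser
        {
            public static object Parse(string typeName, string storedValue)
            {
                // storedValue looks like "CustomerType.Master"; keep "Master"
                string memberName = storedValue.Substring(storedValue.LastIndexOf('.') + 1);

                Type type = Type.GetType(typeName, true);  // throws if unresolvable
                if (type.IsEnum)
                {
                    return Enum.Parse(type, memberName);
                }

                // Fallback for primitives stored as text (int, bool, ...)
                return Convert.ChangeType(storedValue, type);
            }
        }

    The resulting object can be placed directly into the object[] passed to MethodInfo.Invoke.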


  • Can I improve performance by refactoring SQL commands into classes?

    - by Matthew Jones
    Currently, my entire website does its updating through parameterized SQL queries. It works, and we've had no problems with it, but it can occasionally be very slow. I was wondering if it makes sense to refactor some of these SQL commands into classes so that we would not have to hit the database so often; I understand hitting the database is generally the slowest part of any web application. For example, say we have a class structure like this:

        Project (comprised of) Tasks (comprised of) Assignments

    where Project, Task, and Assignment are classes. At certain points in the site you are only working on one project at a time, so creating a Project class and passing it among pages (using Session, Profile, or something else) might make sense. I imagine this class would have a Save() method to persist value changes. Does it make sense to invest the time into doing this? Under what conditions might it be worth it?
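
    For the Session variant specifically, a sketch of the shape this might take (Project and ProjectRepository are hypothetical stand-ins for your own types; staleness when another user edits the same project is the trade-off to weigh):

        using System.Web.SessionState;

        public static class CurrentProject
        {
            public static Project Get(HttpSessionState session, int projectId)
            {
                // Reads within one flow skip the database entirely
                Project project = session["CurrentProject"] as Project;
                if (project == null || project.Id != projectId)
                {
                    project = ProjectRepository.Load(projectId);  // one parameterized SELECT
                    session["CurrentProject"] = project;
                }
                return project;
            }
        }

    Whether it pays off depends on how read-heavy the flow is; profiling the slow pages first will show whether database round-trips are actually the cost.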


  • ASP.NET Impersonation Fails on First Try but Succeeds on Second

    - by KevDog
    We are using RDLCs in an ASP.NET web application. For reasons beyond our understanding, the first call to the database server fails with the following error:

        An error has occurred during report processing.
        Cannot open database "TryParkingIt2" requested by the login. The login failed.
        Login failed for user 'EXTRANET\OurServerNameHere$'.

    Run the report again and it works. Huh?

    Update: click the button the first time, it fails; click the button again, it works. The account being impersonated is a domain account (note the failing login above is the machine account, not the impersonated user).


  • will_paginate without ActiveRecord

    - by truthSeekr
    I apologize if this is a trivial question or my understanding of Rails is weak. I have two actions in my controller, index and refine_data. index fetches and displays all the data from a database table; refine_data weeds out unwanted data using a regex and returns a subset of the data. The controller looks like:

        def index
          @results = Result.paginate :per_page => 5, :page => params[:page],
                                     :order => 'created_at DESC'
        end

        def refine_data
          results = Result.all
          new_results = get_subset(results)
          redirect_to :action => 'index'
        end

    I would like to send the refine_data action to the same view (index) with new_results. As new_results does not come from the database table (or model), how do I go about constructing my paginate?
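
    For what it's worth, will_paginate can page a plain Array, so the refined subset can feed the same template; a sketch (on will_paginate 3.x this needs the extra require, while older 2.x versions patch Array by default):

        require 'will_paginate/array'

        def refine_data
          new_results = get_subset(Result.all)
          @results = new_results.paginate(:page => params[:page], :per_page => 5)
          render :action => 'index'  # render, not redirect: a redirect discards new_results
        end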


  • EntityFramework: Accessing a database column using a user-defined type

    - by Nils
    I "inherited" an old Database where dates are stored as Int32 values (times too, but dates shall suffice for this example) i.e. 20090101 for Jan. 1st 2009. I can not change the schema of this database because it is still accessed by the old programs. However I want to write a new program using EF as the O/R-M. Now I'd love to have DateTime-Values in my Model. Is there a way to write a custom type and use this as a Type in the EF-Model. Something like a Type "OldDate" with properties: DatabaseValue : Int23 Value : DateTime (specifying the date-part whereas the time-part is always 0:00:00) Is something like this possible with the EF-Model ?


  • ASP.NET / MySQL: Translating content into several languages

    - by philwilks
    I have an ASP.NET website which uses a MySQL database for the back end. The website is an English e-commerce system, and we are looking at the possibility of translating it into about five other languages (French, Spanish, etc.). We will be getting human translators to perform the translation; we've looked at automated services but these aren't good enough. The static text on the site (e.g. headings, buttons, etc.) can easily be served up in multiple languages via .NET's built-in localization features (resx files, etc.). The thing that I'm not so sure about is how best to store and retrieve the multi-language content in the database. For example, there is a products table that includes these fields:

        productId (int)
        categoryId (int)
        title (varchar)
        summary (varchar)
        description (text)
        features (text)

    The title, summary, description and features text would need to be available in all the different languages. Here are the two options that I've come up with.

    Create an additional field for each language: for example, we could have titleEn, titleFr, titleEs, etc. for all the languages, and repeat this for all text columns. We would then adapt our code to use the appropriate field depending on the language selected. This feels a bit hacky, and would also lead to some very large tables. Also, if we wanted to add additional languages in the future, it would be time consuming to add even more columns.

    Use a lookup table: we could create a new table with the following format:

        textId | languageId | content
        -------------------------------
        10     | EN         | Car
        10     | FR         | Voiture
        10     | ES         | Coche
        11     | EN         | Bike
        11     | FR         | Vélo

    We'd then adapt our products table to reference the appropriate textId for the title, summary, description and features instead of having the text stored in the products table. This seems much more elegant, but I can't think of a simple way of getting this data out of the database and onto the page without using complex SQL statements. Of course, adding new languages in the future would be very simple compared to the previous option.

    I'd be very grateful for any suggestions about the best way to achieve this! Is there any "best practice" guidance out there? Has anyone done this before?
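
    The read for the lookup-table option doesn't have to be especially complex; one join per translated field does it. A sketch (the translations table and the titleId/summaryId columns are illustrative names for the scheme described above):

        SELECT p.productId,
               t_title.content   AS title,
               t_summary.content AS summary
        FROM   products p
        JOIN   translations t_title
               ON  t_title.textId     = p.titleId
               AND t_title.languageId = 'EN'
        JOIN   translations t_summary
               ON  t_summary.textId     = p.summaryId
               AND t_summary.languageId = 'EN';

    Swapping 'EN' for the visitor's language (with a LEFT JOIN fallback to English for untranslated rows) is the usual refinement.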


  • Column.DbType affecting runtime behavior

    - by leppie
    Hi. According to the MSDN docs, the DbType property/attribute of a Column type/element is only used for database creation. Yet today, when trying to submit data to an IMAGE column on a SQL CE database (not sure if this is CE-only), I got an exception of 'Data truncated to 8000 bytes'. This was due to the DbType still being defined as VARBINARY(MAX), which SQL CE does not support. Changing the DbType to IMAGE fixes the issue. So what other surprises do LINQ to SQL attributes hold in store? Is this a bug or intended? Should I report it to MS?

    UPDATE: After getting the answer from Guffa, I tested it, but it seems that for NVARCHAR(10), adding an 11-character string causes a SQL exception, not a LINQ to SQL one:

        The data was truncated while converting from one data type to another. [ Name of function(if known) = ]
        A first chance exception of type 'System.Data.SqlServerCe.SqlCeException' occurred in System.Data.SqlServerCe.dll
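
    For reference, a sketch of the kind of mapping that worked here (names illustrative; the point is the explicit DbType on the attribute, since SQL CE has no VARBINARY(MAX)):

        using System.Data.Linq;
        using System.Data.Linq.Mapping;

        [Table(Name = "Documents")]
        public partial class Document
        {
            private Binary _data;

            [Column(Storage = "_data", DbType = "IMAGE")]
            public Binary Data
            {
                get { return _data; }
                set { _data = value; }
            }
        }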


  • Longish list of allowed referrers

    - by Tim
    I want to allow hotlinking only from a list of referrers (paying customers, probably a few hundred). I am on Apache 1.3 and I do not have access to the configuration (only .htaccess). What is the fastest way to implement this? My thoughts so far:

        - PHP with a database and readfile()
        - (SSI with) Perl and a database
        - the list implemented as symlinks named after the allowed referrer, then a RewriteCond using HTTP_REFERER
        - everything in .htaccess, lots of RewriteConds
        - everything in .htaccess, lots of SetEnvIfs

    Any better (faster) ways to do this? Thanks!
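
    Of those, the SetEnvIf route stays entirely inside .htaccess and avoids per-request script or database overhead; a sketch (customer domains are placeholders):

        # One line per allowed referrer; "allowed" is set when any matches
        SetEnvIfNoCase Referer "^https?://(www\.)?customer1\.example\." allowed=1
        SetEnvIfNoCase Referer "^https?://(www\.)?customer2\.example\." allowed=1

        <FilesMatch "\.(gif|jpe?g|png)$">
            Order Deny,Allow
            Deny from all
            Allow from env=allowed
        </FilesMatch>

    A few hundred regex checks per request is usually cheap next to a PHP readfile() hit, but benchmarking on the actual host is the only way to be sure.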


  • A question about making a C# class persistent during a file load

    - by Adam
    Apologies for the undescriptive title; it's the best I could think of for the moment. Basically, I've written a singleton class that loads files into a database. These files are typically large and take hours to process. What I am looking for is a way to have this class running and be able to call methods on it, even after its calling class has shut down.

    The singleton class is simple. It starts a thread that loads the file into the database, while exposing methods to report on the current status. In a nutshell it's a little like this:

        public sealed class BulkFileLoader
        {
            static BulkFileLoader instance = null;
            int currentCount = 0;

            BulkFileLoader() { }

            public static BulkFileLoader Instance
            {
                // Instantiate the instance class if necessary, and return it
            }

            public void Go()
            {
                // kick off the 'ProcessFile' thread
            }

            public int GetCurrentCount()
            {
                return currentCount;
            }

            private void ProcessFile()
            {
                while (/* more rows in the import file */)
                {
                    // insert the row into the database
                    currentCount++;
                }
            }
        }

    The idea is that you can get an instance of BulkFileLoader to execute, which will process a file, while at any time you can get real-time updates on the number of rows it has processed so far using the GetCurrentCount() method. This works fine, except that the calling class needs to stay open the whole time for the processing to continue. As soon as I stop the calling class, the BulkFileLoader instance is removed and it stops processing the file. What I am after is a solution where it will continue to run independently, regardless of what happens to the calling class.

    I then tried another approach. I created a simple console application that kicks off the BulkFileLoader, and then wrapped it in a process. This fixes one problem: now when I kick off the process, the file will continue to load even if I close the class that called the process. However, the problem now is that I cannot get updates on the current count, since if I try to get the instance of BulkFileLoader (which, as mentioned before, is a singleton), it creates a new instance rather than returning the one in the executing process. It would appear that singletons don't extend into the scope of other processes running on the machine.

    In the end, I want to be able to kick off the BulkFileLoader and at any time find out how many rows it has processed, even if I close the application I used to start it. Can anyone see a solution to my problem?
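
    That conclusion is right: a singleton's scope ends at its process (more precisely its AppDomain), so the monitoring side needs some cross-process channel. A low-tech sketch (file path and cadence are illustrative; WCF named pipes or remoting would be the heavier alternatives):

        using System.IO;

        public static class ProgressChannel
        {
            static readonly string ProgressPath =
                Path.Combine(Path.GetTempPath(), "bulkloader.progress");

            // Called by the worker process every N rows
            public static void Publish(int currentCount)
            {
                File.WriteAllText(ProgressPath, currentCount.ToString());
            }

            // Called by any monitoring process, any time
            public static int Read()
            {
                return File.Exists(ProgressPath)
                    ? int.Parse(File.ReadAllText(ProgressPath))
                    : 0;
            }
        }

    ProcessFile() would call ProgressChannel.Publish(currentCount) periodically instead of relying on callers reaching the in-process singleton.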


  • System.Data.DataException thrown when launching application after install

    - by jwarzech
    I currently have a (Visual Studio 2008) installer that has a 'Primary output...' custom action under 'Install', with 'InstallerClass' set to false. During the install I get a System.Data.DataException. When I run the debugger, I see the exception being thrown from a bit of database access code that runs when the application starts. The database access uses Windows Authentication and works correctly when the application is started manually. Is this something to do with permission settings not allowing DB access when launched from the installer? (I'm out of ideas.)


  • SharePoint SLK and T-SQL xp_cmdshell safety

    - by Mitchell Skurnik
    I am looking into a T-SQL command called "xp_cmdshell", to use it to monitor a change to the SLK (SharePoint Learning Kit) database and then execute a batch or PowerShell script that will trigger some events that I need. (It is bad practice to modify SharePoint's database directly, so I will be using its API.) I have been reading on various blogs and MSDN that there are some security concerns with this approach. The sites suggest that you limit security so the command can be executed by only a specific user role. What other tips/suggestions would you recommend when using "xp_cmdshell"? Or should I go about this another way and create a script or console application that constantly checks whether a change has been made? I am running Server 2008 with SQL 2008.
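
    If xp_cmdshell stays in the picture, the usual hardening steps look something like this sketch (the role and proxy account names are illustrative; non-sysadmin callers run under the proxy account, so keep it low-privilege):

        -- Enable the feature explicitly (off by default since SQL 2005)
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'xp_cmdshell', 1;
        RECONFIGURE;

        -- Map non-sysadmin callers to a low-privilege Windows account
        EXEC sp_xp_cmdshell_proxy_account 'DOMAIN\LowPrivUser', 'password-here';

        -- Grant execute to one dedicated role only
        USE master;
        GRANT EXECUTE ON xp_cmdshell TO SlkMonitorRole;

    That said, a small console application or service polling for changes (or a SQL Agent job) keeps the command shell out of the database engine entirely, which is why many people prefer that route.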


  • How to handle exceptions/errors in PHP?

    - by fayer
    When using third-party libraries, they tend to throw exceptions to the browser and hence kill the script. E.g. if I'm using Doctrine and insert a duplicate record into the database, it will throw an exception. I wonder what best practice is for handling these exceptions. Should I always do a try...catch? But doesn't that mean I will have try...catch all over the script, for every single function/class I use? Or is it just for debugging? I don't quite get the picture, because if a record already exists in the database, I want to tell the user "Record already exists". And if I code a library or a function, should I always use "throw new Exception($message, $code)" when I want to raise an error? Please shed some light on how one should create and handle exceptions/errors. Thanks.
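
    The usual shape is not try...catch around every call, but a catch at the few boundaries where you can actually respond; a sketch (the Doctrine exception class name varies by version, so treat it as illustrative):

        <?php
        try {
            $record->save();
        } catch (Doctrine_Connection_Exception $e) {
            // The one case the user can act on
            echo 'Record already exists';
        } catch (Exception $e) {
            // Everything unexpected: log it, show a generic message
            error_log($e->getMessage());
            echo 'Something went wrong';
        }

    And yes, throw new Exception($message, $code) (or a subclass of Exception) is the idiomatic way for your own libraries to signal errors; exceptions are for callers to handle, not just for debugging.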


  • How do I submit really big amounts of data from a form?

    - by William Calleja
    I have an HTML form that posts a really large amount of data, which is eventually saved into SQL Server 2005. The form is as follows:

        <form name="frmForm" method="post" action="saveData.aspx">

    The target page takes the content of a control within the form and saves it to the database through a normal SQL INSERT statement. However, only a portion of the data is being saved. The field in the database is an ntext. Should I use a different field type? Or is something happening while I'm transferring from one page to another? Or is something happening when I'm sending the really big SQL statement through C# in saveData.aspx?
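
    An ntext column holds roughly a billion characters, so the column type itself is unlikely to be the limit. If the INSERT is being built by string concatenation, switching to a parameter sidesteps both statement-length and escaping problems; a sketch (table/column names illustrative):

        using System.Data;
        using System.Data.SqlClient;

        static void SaveBody(string connectionString, string bigText)
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(
                       "INSERT INTO Documents (Body) VALUES (@body)", conn))
            {
                // The parameter carries the full value; nothing is
                // truncated into the statement text itself
                cmd.Parameters.Add("@body", SqlDbType.NText).Value = bigText;
                conn.Open();
                cmd.ExecuteNonQuery();
            }
        }

    If the value is already short by the time it reaches saveData.aspx, check ASP.NET's maxRequestLength (4 MB by default), which caps the whole POST.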


  • Showing the current DB value in a drop-down box

    - by user195257
    Hello. This is my code which populates a drop-down menu, all working perfectly, but when editing a database record I want the first value in the drop-down to be what is currently in the database. How would I do this?

        <li class="odd"><label class="field-title">Background <em>*</em>:</label>
        <label><select class="txtbox-middle" name="background">
        <?php
        $bgResult = mysql_query("SELECT * FROM `backgrounds`");
        while ($bgRow = mysql_fetch_array($bgResult)) {
            echo '<option value="'.$bgRow['name'].'">'.$bgRow['name'].'</option>';
        }
        ?>
        </select></label></li>
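
    Rather than reordering the options, the usual fix is to mark the stored value as selected while looping; a sketch (assuming the record's current value is already in $currentBackground):

        <?php
        $bgResult = mysql_query("SELECT * FROM `backgrounds`");
        while ($bgRow = mysql_fetch_array($bgResult)) {
            // Flag the option that matches the stored value
            $selected = ($bgRow['name'] == $currentBackground) ? ' selected="selected"' : '';
            echo '<option value="'.$bgRow['name'].'"'.$selected.'>'.$bgRow['name'].'</option>';
        }
        ?>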

  • Is Amazon SQS the right choice here? Rails performance issue.

    - by ole_berlin
    I'm close to releasing a Rails app with the common networking features (messaging, wall, etc.). I want to use some kind of background processing (most likely Bj) for off-loading tasks from the request/response cycle. This would happen when users invite friends via email to join, and for email notifications. I'm not sure if I should just drop these invites and notifications into my database, using a model, and then process them with a worker every x minutes, or if I should go for Amazon SQS, storing the messages and invites there and letting my worker retrieve them from Amazon SQS for processing (sending the invites/notifications). The Amazon approach would take load off my database, but I guess it is slower to retrieve messages from there. What do you think?


  • Disabled Checkboxes

    - by tonsils
    Hi. Hoping someone can assist or point me in the right direction to another thread/URL with regards to disabled checkboxes. I basically have a couple of checkboxes in my form that are not database items, but I retrieve their source values based on a database column. As I do not want the user to be able to change these checkboxes, I have disabled them. When I query a record, the form at start-up retrieves the value for the checkbox with no issues. The problem is, when I hit a validation error on my form when the page is submitted, my checkbox values that were originally retrieved as 'Y' (i.e. checked) are now null. Can anyone please assist with what is going on, or suggest a workaround in order to maintain these values when a validation error occurs? Thanks.
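
    For what it's worth, this matches how HTML forms work: disabled controls are never submitted with the page, so on the failed-validation round trip the server sees nothing for them. A common workaround sketch is to mirror each value in a hidden input that does post back (names illustrative):

        <!-- Shown to the user, but never submitted -->
        <input type="checkbox" name="flag_display" checked="checked" disabled="disabled" />
        <!-- Travels with the form and survives validation errors -->
        <input type="hidden" name="flag" value="Y" />

    Repopulating the checkbox from the hidden field (or re-querying the source column) after validation fails restores the display.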


  • Security when writing a PHP webservice?

    - by chustar
    I am writing a web service in PHP for the first time and have run into some security problems.

    1) I am planning to hash passwords using md5() before I write them to the database (or to authenticate the user), but I realize that to do that, I would have to transmit the password in plaintext to the server and hash it there. Because of this I thought of md5()ing it with JavaScript client-side and then rehashing on the server, but then if JavaScript is disabled, the user can't log in, right?

    2) I have heard that when the action is read-only you should use GET, but if it modifies the database you should use POST. Isn't POST just as transparent as GET, just not in the address bar?
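
    On 1): hashing in JavaScript doesn't really help even when it runs, because the md5 value then becomes the thing an eavesdropper replays; transport security (HTTPS) is what protects the password in transit. Server-side, a salted hash beats bare md5(); a sketch (storage handling illustrative):

        <?php
        // At registration: derive a per-user salt and store salt + hash
        $salt = bin2hex(openssl_random_pseudo_bytes(16));  // PHP 5.3+
        $hash = sha1($salt . $_POST['password']);
        // persist $salt and $hash for the user...

        // At login: recompute and compare
        $candidate = sha1($storedSalt . $_POST['password']);
        $ok = ($candidate === $storedHash);

    On 2): correct, POST bodies are just as visible to anyone who can read the traffic; the GET/POST split is about semantics (and keeping mutations out of caches, logs and address bars), not secrecy.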


  • Attaching a catalog with SQL Authentication credentials attaches it as read-only

    - by Nissim
    Hello. As part of our product's installation process, a database is attached to the server. We use EXEC sp_attach_db in order to attach it to MSSQL. The problem occurs when we try to attach it with "SQL Authentication" connection-string credentials: the database is attached to the server as read-only, thus preventing any write access from being performed. This is driving us nuts... it works just fine with Windows Authentication, and the only difference is the connection string. I tried googling for it, but found no mention of such a scenario. Any ideas, anyone? It's important to mention that the MDF/LDF physical files are not set with the "ReadOnly" attribute, so this is not the problem.


  • SKU size in Magento

    - by latvian
    Hi. How can I allow SKUs longer than 34 characters (for simple products) for all products? When I add a new (simple) product and enter more than 34 characters, Magento cuts the SKU to 34 after saving. In the database, the 'sku' attributes (for quote item, order item, invoice item and shipment item) from the eav_attribute table hold varchar(255). For catalog_product_entity the attribute is varchar(64). In either case, that is more than 34 characters. Thus, I should be able to extend the SKU to 64 characters without making any change in the database, correct? How do I do that? I have a good understanding of the frontend side of the Magento code, but not the admin side. Can you suggest a good tutorial on making changes on the admin side that could help me figure this question out myself? Thank you, Margots

